FEX: A Knowledge-Based System For Planimetric Feature Extraction
NASA Astrophysics Data System (ADS)
Zelek, John S.
1988-10-01
Topographical planimetric features include natural surfaces (rivers, lakes) and man-made surfaces (roads, railways, bridges). In conventional planimetric feature extraction, a photointerpreter manually interprets and extracts features from imagery on a stereoplotter. Visual planimetric feature extraction is a very labour intensive operation. The advantages of automating feature extraction include: time and labour savings; accuracy improvements; and planimetric data consistency. FEX (Feature EXtraction) combines techniques from image processing, remote sensing and artificial intelligence for automatic feature extraction. The feature extraction process co-ordinates the information and knowledge in a hierarchical data structure. The system simulates the reasoning of a photointerpreter in determining the planimetric features. Present efforts have concentrated on the extraction of road-like features in SPOT imagery. Keywords: Remote Sensing, Artificial Intelligence (AI), SPOT, image understanding, knowledge base, apars.
2017-01-01
Evidence-based dietary information represented as unstructured text is crucial information that needs to be accessible in order to help dietitians keep up with the new knowledge that arrives daily in newly published scientific reports. Different named-entity recognition (NER) methods have been introduced previously to extract useful information from the biomedical literature. They have focused on, for example, extracting gene mentions, protein mentions, relationships between genes and proteins, chemical concepts, and relationships between drugs and diseases. In this paper, we present a novel NER method, called drNER, for knowledge extraction of evidence-based dietary information. To the best of our knowledge, this is the first attempt at extracting dietary concepts. DrNER is a rule-based NER that consists of two phases. The first involves the detection and determination of entity mentions, and the second involves the selection and extraction of the entities. We evaluate the method by using text corpora from heterogeneous sources, including text from several scientifically validated web sites and text from scientific publications. Evaluation of the method showed that drNER gives good results and can be used for knowledge extraction of evidence-based dietary recommendations. PMID:28644863
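A minimal sketch of the two-phase, rule-based idea described above, assuming a tiny hand-made lexicon and quantity pattern; NUTRIENT_TERMS, UNIT_PATTERN and the example sentence are illustrative placeholders, not drNER's actual vocabularies or rules:

```python
import re

# Illustrative mini-lexicon of dietary terms; the real system uses far larger vocabularies.
NUTRIENT_TERMS = {"vitamin d", "calcium", "dietary fibre", "sodium"}
UNIT_PATTERN = re.compile(r"\b(\d+(?:\.\d+)?)\s*(mg|g|mcg|iu)\b", re.IGNORECASE)

def detect_mentions(sentence):
    """Phase 1: detect candidate entity mentions (terms and quantity expressions)."""
    text = sentence.lower()
    mentions = [t for t in NUTRIENT_TERMS if t in text]
    mentions += [" ".join(m) for m in UNIT_PATTERN.findall(sentence)]
    return mentions

def extract_entities(sentence):
    """Phase 2: select mentions and pair them into simple (nutrient, amount) entities."""
    mentions = detect_mentions(sentence)
    nutrients = [m for m in mentions if not m[0].isdigit()]
    amounts = [m for m in mentions if m[0].isdigit()]
    return list(zip(nutrients, amounts)) or nutrients

print(extract_entities("Adults should aim for 1000 mg calcium per day."))
```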
KAM (Knowledge Acquisition Module): A tool to simplify the knowledge acquisition process
NASA Technical Reports Server (NTRS)
Gettig, Gary A.
1988-01-01
Analysts, knowledge engineers and information specialists are faced with increasing volumes of time-sensitive data in text form, either as free text or highly structured text records. Rapid access to the relevant data in these sources is essential. However, due to the volume and organization of the contents, and limitations of human memory and association, frequently: (1) important information is not located in time; (2) reams of irrelevant data are searched; and (3) interesting or critical associations are missed due to physical or temporal gaps involved in working with large files. The Knowledge Acquisition Module (KAM) is a microcomputer-based expert system designed to assist knowledge engineers, analysts, and other specialists in extracting useful knowledge from large volumes of digitized text and text-based files. KAM formulates non-explicit, ambiguous, or vague relations, rules, and facts into a manageable and consistent formal code. A library of system rules or heuristics is maintained to control the extraction of rules, relations, assertions, and other patterns from the text. These heuristics can be added, deleted or customized by the user. The user can further control the extraction process with optional topic specifications. This allows the user to cluster extracts based on specific topics. Because KAM formalizes diverse knowledge, it can be used by a variety of expert systems and automated reasoning applications. KAM can also perform important roles in computer-assisted training and skill development. Current research efforts include the applicability of neural networks to aid in the extraction process and the conversion of these extracts into standard formats.
Development of terminology for mammographic techniques for radiological technologists.
Yagahara, Ayako; Yokooka, Yuki; Tsuji, Shintaro; Nishimoto, Naoki; Uesugi, Masahito; Muto, Hiroshi; Ohba, Hisateru; Kurowarabi, Kunio; Ogasawara, Katsuhiko
2011-07-01
We are developing a mammographic ontology to share knowledge of the mammographic domain for radiologic technologists, with the aim of improving mammographic techniques. As a first step in constructing the ontology, we used mammography reference books to establish mammographic terminology for identifying currently available knowledge. This study proceeded in three steps: (1) determination of the domain and scope of the terminology, (2) lexical extraction, and (3) construction of hierarchical structures. We extracted terms mainly from three reference books and constructed the hierarchical structures manually. We compared features of the terms extracted from the three reference books. We constructed a terminology consisting of 440 subclasses grouped into 19 top-level classes: anatomic entity, image quality factor, findings, material, risk, breast, histological classification of breast tumors, role, foreign body, mammographic technique, physics, purpose of mammography examination, explanation of mammography examination, image development, abbreviation, quality control, equipment, interpretation, and evaluation of clinical imaging. The number of terms that occurred in the subclasses varied depending on which reference book was used. We developed a terminology of mammographic techniques for radiologic technologists consisting of 440 terms.
Secondary Teachers’ Mathematics-related Beliefs and Knowledge about Mathematical Problem-solving
NASA Astrophysics Data System (ADS)
E Siswono, T. Y.; Kohar, A. W.; Hartono, S.
2017-02-01
This study investigates secondary teachers' beliefs about three mathematics-related areas, i.e. the nature of mathematics, the teaching of mathematics, and the learning of mathematics, as well as their knowledge about mathematical problem solving. Data were gathered through a set of task-based semi-structured interviews of three selected teachers with different philosophical views of teaching mathematics, i.e. instrumental, platonist, and problem solving. Those teachers were selected, via an interview using a belief-related task, from purposively selected teachers in Surabaya and Sidoarjo. While the interviews about knowledge examined teachers' problem-solving content and pedagogical knowledge, the interviews about beliefs examined their views on several cases extracted from each of these mathematics-related beliefs. Analysis included the categorization and comparison of each belief and knowledge area as well as their interaction. Results indicate that none of the teachers showed high consistency in responding to views of their mathematics-related beliefs, while they showed weaknesses primarily in problem-solving content knowledge. Findings also point out that teachers' beliefs have a strong relationship with teachers' knowledge about problem solving. In particular, the instrumental teacher's beliefs were consistent with his insufficient knowledge about problem solving, while both the platonist and problem-solving teachers' beliefs were consistent with their sufficient knowledge of either content or pedagogical problem solving.
NASA Astrophysics Data System (ADS)
Roelofs, W. S. C.; Mathijssen, S. G. J.; Janssen, R. A. J.; de Leeuw, D. M.; Kemerink, M.
2012-02-01
The width and shape of the density of states (DOS) are key parameters to describe the charge transport of organic semiconductors. Here we extract the DOS using scanning Kelvin probe microscopy on a self-assembled monolayer field-effect transistor (SAMFET). The semiconductor is only a single monolayer, which has allowed extraction of the DOS over a wide energy range, pushing the methodology to its fundamental limit. The measured DOS consists of an exponential distribution of deep states with additional localized states on top. The charge transport has been calculated in a generic variable-range hopping model that allows any DOS as input. We show that with the experimentally extracted DOS an excellent agreement between measured and calculated transfer curves is obtained. This shows that detailed knowledge of the density of states is a prerequisite to consistently describe the transfer characteristics of organic field-effect transistors.
Xu, Rong; Wang, QuanQiu
2015-02-01
Anticancer drug-associated side effect knowledge often exists in multiple heterogeneous and complementary data sources. A comprehensive anticancer drug-side effect (drug-SE) relationship knowledge base is important for computation-based drug target discovery, drug toxicity prediction and drug repositioning. In this study, we present a two-step approach combining table classification and relationship extraction to extract drug-SE pairs from a large number of high-profile oncological full-text articles. The data consist of 31,255 tables downloaded from the Journal of Clinical Oncology (JCO). We first trained a statistical classifier to classify tables into SE-related and -unrelated categories. We then extracted drug-SE pairs from SE-related tables. We compared drug side effect knowledge extracted from JCO tables to that derived from FDA drug labels. Finally, we systematically analyzed relationships between anticancer drug-associated side effects and drug-associated gene targets, metabolism genes, and disease indications. The statistical table classifier is effective in classifying tables into SE-related and -unrelated categories (precision: 0.711; recall: 0.941; F1: 0.810). We extracted a total of 26,918 drug-SE pairs from SE-related tables with a precision of 0.605, a recall of 0.460, and an F1 of 0.520. Drug-SE pairs extracted from JCO tables are largely complementary to those derived from FDA drug labels; as many as 84.7% of the pairs extracted from JCO tables have not been included in a side effect database constructed from FDA drug labels. Side effects associated with anticancer drugs positively correlate with drug target genes, drug metabolism genes, and disease indications.
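A hedged sketch of the first step (table classification) using a generic TF-IDF plus logistic-regression text classifier; the table strings, labels and the choice of classifier are illustrative assumptions, not the statistical model actually used in the study:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import precision_recall_fscore_support

# Toy stand-ins for table text (caption plus cell contents flattened to a string).
train_tables = ["grade 3-4 neutropenia fatigue nausea", "median age sex performance status"]
train_labels = [1, 0]  # 1 = side-effect-related table, 0 = unrelated

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_tables, train_labels)

test_tables = ["grade 3 diarrhea and vomiting rates", "enrollment by study site"]
pred = clf.predict(test_tables)
print(precision_recall_fscore_support([1, 0], pred, average="binary"))
```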
Knowledge guided information fusion for segmentation of multiple sclerosis lesions in MRI images
NASA Astrophysics Data System (ADS)
Zhu, Chaozhe; Jiang, Tianzi
2003-05-01
In this work, T1-, T2- and PD-weighted MR images of multiple sclerosis (MS) patients, providing information on the properties of tissues from different aspects, are treated as three independent information sources for the detection and segmentation of MS lesions. Based on information fusion theory, a knowledge guided information fusion framework is proposed to accomplish 3-D segmentation of MS lesions. This framework consists of three parts: (1) information extraction, (2) information fusion, and (3) decision. Information provided by different spectral images is extracted and modeled separately in each spectrum using fuzzy sets, aiming at managing the uncertainty and ambiguity in the images due to noise and the partial volume effect. In the second part, the possible fuzzy map of MS lesions in each spectral image is constructed from the extracted information under the guidance of experts' knowledge, and then the final fuzzy map of MS lesions is constructed through the fusion of the fuzzy maps obtained from the different spectra. Finally, 3-D segmentation of MS lesions is derived from the final fuzzy map. Experimental results show that this method is fast and accurate.
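A toy illustration of the fusion and decision steps, assuming per-spectrum fuzzy membership maps are already available; the minimum operator and the 0.5 threshold are placeholder choices, not the knowledge-guided operators of the original framework:

```python
import numpy as np

# Toy 4x4 "images": fuzzy lesion membership maps computed separately for each MR spectrum.
rng = np.random.default_rng(0)
mu_t1, mu_t2, mu_pd = (rng.random((4, 4)) for _ in range(3))

# Conjunctive fusion (fuzzy AND via minimum): a voxel is lesion-like only if all
# spectra agree; other operators (t-norms, weighted means) encode other priors.
fused = np.minimum(np.minimum(mu_t1, mu_t2), mu_pd)

# Decision step: defuzzify the fused map into a binary lesion segmentation.
segmentation = fused > 0.5
print(segmentation.astype(int))
```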
Extracting genetic alteration information for personalized cancer therapy from ClinicalTrials.gov.
Xu, Jun; Lee, Hee-Jin; Zeng, Jia; Wu, Yonghui; Zhang, Yaoyun; Huang, Liang-Chin; Johnson, Amber; Holla, Vijaykumar; Bailey, Ann M; Cohen, Trevor; Meric-Bernstam, Funda; Bernstam, Elmer V; Xu, Hua
2016-07-01
Clinical trials investigating drugs that target specific genetic alterations in tumors are important for promoting personalized cancer therapy. The goal of this project is to create a knowledge base of cancer treatment trials with annotations about genetic alterations from ClinicalTrials.gov. We developed a semi-automatic framework that combines advanced text-processing techniques with manual review to curate genetic alteration information in cancer trials. The framework consists of a document classification system to identify cancer treatment trials from ClinicalTrials.gov and an information extraction system to extract gene and alteration pairs from the Title and Eligibility Criteria sections of clinical trials. By applying the framework to trials at ClinicalTrials.gov, we created a knowledge base of cancer treatment trials with genetic alteration annotations. We then evaluated each component of the framework against manually reviewed sets of clinical trials and generated descriptive statistics of the knowledge base. The automated cancer treatment trial identification system achieved a high precision of 0.9944. Together with the manual review process, it identified 20 193 cancer treatment trials from ClinicalTrials.gov. The automated gene-alteration extraction system achieved a precision of 0.8300 and a recall of 0.6803. After validation by manual review, we generated a knowledge base of 2024 cancer trials that are labeled with specific genetic alteration information. Analysis of the knowledge base revealed the trend of increased use of targeted therapy for cancer, as well as top frequent gene-alteration pairs of interest. We expect this knowledge base to be a valuable resource for physicians and patients who are seeking information about personalized cancer therapy.
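A minimal, hypothetical sketch of the gene-alteration pair extraction step using regular expressions; the gene list, alteration patterns and example criteria text are invented for illustration and are much simpler than the information extraction system described above:

```python
import re

# Tiny illustrative gene lexicon and alteration patterns; the real system relies on
# curated vocabularies and a more sophisticated extraction pipeline.
GENES = r"(EGFR|BRAF|KRAS|ALK|HER2)"
ALTERATIONS = r"(mutation|amplification|fusion|deletion|[A-Z]\d+[A-Z])"
PAIR = re.compile(GENES + r"\s+" + ALTERATIONS, re.IGNORECASE)

def extract_pairs(criteria_text):
    """Return (gene, alteration) pairs found in a trial's eligibility criteria text."""
    return [(g.upper(), a) for g, a in PAIR.findall(criteria_text)]

text = "Inclusion: documented BRAF V600E or EGFR mutation; prior ALK fusion allowed."
print(extract_pairs(text))
```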
Apparatus for Assisting Childbirth
NASA Technical Reports Server (NTRS)
Smeltzer, Stanley S., III (Inventor); Lawson, Seth W. (Inventor)
1997-01-01
The invention consists of novel, scissors-like forceps in combination with optical monitoring hardware for measuring the extraction forces on a fetal head. The novel features of the forceps together with knowledge of real time forces on the fetal head enable a user to make a much safer delivery for mother and baby.
Buildings classification from airborne LiDAR point clouds through OBIA and ontology driven approach
NASA Astrophysics Data System (ADS)
Tomljenovic, Ivan; Belgiu, Mariana; Lampoltshammer, Thomas J.
2013-04-01
In the last years, airborne Light Detection and Ranging (LiDAR) data proved to be a valuable information resource for a vast number of applications ranging from land cover mapping to individual surface feature extraction from complex urban environments. To extract information from LiDAR data, users apply prior knowledge. Unfortunately, there is no consistent initiative for structuring this knowledge into data models that can be shared and reused across different applications and domains. The absence of such models poses great challenges to data interpretation, data fusion and integration, as well as information transferability. The intention of this work is to describe the design, development and deployment of an ontology-based system to classify buildings from airborne LiDAR data. The novelty of this approach consists of the development of a domain ontology that explicitly specifies the knowledge used to extract features from airborne LiDAR data. The overall goal of this approach is to investigate the possibility of classifying features of interest from LiDAR data by means of a domain ontology. The proposed workflow is applied to the building extraction process for the region of "Biberach an der Riss" in southern Germany. Strip-adjusted and georeferenced airborne LiDAR data are processed based on geometrical and radiometric signatures stored within the point cloud. Region-growing segmentation algorithms are applied and segmented regions are exported to the GeoJSON format. Subsequently, the data are imported into the ontology-based reasoning process used to automatically classify the exported features of interest. Based on the ontology it becomes possible to define domain concepts, associated properties and relations. As a consequence, the resulting specific body of knowledge restricts possible interpretation variants. Moreover, ontologies are machine-interpretable, and thus it is possible to run reasoning on top of them. Available reasoners (FaCT++, JESS, Pellet) are used to check the consistency of the developed ontologies, and logical reasoning is performed to infer implicit relations between defined concepts. The ontology for the definition of buildings is specified using the Web Ontology Language (OWL), the most widely used ontology language, which is based on Description Logics (DL). DL allows the description of internal properties of modelled concepts (roof typology, shape, area, height, etc.) and relationships between objects (IS_A, MEMBER_OF/INSTANCE_OF). It captures terminological knowledge (TBox) as well as assertional knowledge (ABox), which represents facts about concept instances, i.e. the buildings in airborne LiDAR data. To assess the classification accuracy, ground truth data generated by visual interpretation are used, and classification results are calculated in terms of precision and recall. The advantages of this approach are: (i) flexibility, (ii) transferability, and (iii) extendibility, i.e. the ontology can be extended with further concepts, data properties and object properties.
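A rough sketch of how such a TBox/ABox could be expressed in Python with the owlready2 library (assumed here as a convenient OWL interface); the property names, the area/height thresholds and the ontology IRI are invented for illustration and do not reproduce the authors' domain ontology:

```python
from owlready2 import get_ontology, Thing, DataProperty, ConstrainedDatatype

onto = get_ontology("http://example.org/lidar-buildings.owl")  # invented IRI

with onto:
    class Region(Thing):                  # a segmented LiDAR region (ABox individuals)
        pass

    class has_area(DataProperty):         # planar area of the segment, m^2
        domain = [Region]
        range = [float]

    class has_height(DataProperty):       # height above ground, m
        domain = [Region]
        range = [float]

    class Building(Region):               # TBox: hypothetical defining conditions
        equivalent_to = [Region
                         & has_area.some(ConstrainedDatatype(float, min_inclusive=50.0))
                         & has_height.some(ConstrainedDatatype(float, min_inclusive=3.0))]

# Assertional knowledge (ABox): one exported segment from the point cloud.
seg = Region("segment_001")
seg.has_area = [120.0]
seg.has_height = [7.5]

# A DL reasoner (e.g., HermiT via owlready2's sync_reasoner) would reclassify
# individuals such as seg against the Building definition.
print(list(onto.classes()))
```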
Development of a knowledge acquisition tool for an expert system flight status monitor
NASA Technical Reports Server (NTRS)
Disbrow, J. D.; Duke, E. L.; Regenie, V. A.
1986-01-01
Two of the main issues in artificial intelligence today are knowledge acquisition and knowledge representation. The Dryden Flight Research Facility of NASA's Ames Research Center is presently involved in the design and implementation of an expert system flight status monitor that will provide expertise and knowledge to aid the flight systems engineer in monitoring today's advanced high-performance aircraft. The flight status monitor can be divided into two sections: the expert system itself and the knowledge acquisition tool. This paper discusses the knowledge acquisition tool, the means it uses to extract knowledge from the domain expert, and how that knowledge is represented for computer use. An actual aircraft system has been codified by this tool with great success. Future real-time use of the expert system has been facilitated by using the knowledge acquisition tool to easily generate a logically consistent and complete knowledge base.
A Bayesian framework for extracting human gait using strong prior knowledge.
Zhou, Ziheng; Prügel-Bennett, Adam; Damper, Robert I
2006-11-01
Extracting full-body motion of walking people from monocular video sequences in complex, real-world environments is an important and difficult problem, going beyond simple tracking, whose satisfactory solution demands an appropriate balance between use of prior knowledge and learning from data. We propose a consistent Bayesian framework for introducing strong prior knowledge into a system for extracting human gait. In this work, the strong prior is built from a simple articulated model having both time-invariant (static) and time-variant (dynamic) parameters. The model is easily modified to cater to situations such as walkers wearing clothing that obscures the limbs. The statistics of the parameters are learned from high-quality (indoor laboratory) data and the Bayesian framework then allows us to "bootstrap" to accurate gait extraction on the noisy images typical of cluttered, outdoor scenes. To achieve automatic fitting, we use a hidden Markov model to detect the phases of images in a walking cycle. We demonstrate our approach on silhouettes extracted from fronto-parallel ("sideways on") sequences of walkers under both high-quality indoor and noisy outdoor conditions. As well as high-quality data with synthetic noise and occlusions added, we also test walkers with rucksacks, skirts, and trench coats. Results are quantified in terms of chamfer distance and average pixel error between automatically extracted body points and corresponding hand-labeled points. No one part of the system is novel in itself, but the overall framework makes it feasible to extract gait from very much poorer quality image sequences than hitherto. This is confirmed by comparing person identification by gait using our method and a well-established baseline recognition algorithm.
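An illustrative stand-in for the phase-detection step, fitting a hidden Markov model to a toy per-frame feature with the hmmlearn package; the sinusoidal feature, the choice of four phases and all parameter values are assumptions, not the authors' articulated-model features:

```python
import numpy as np
from hmmlearn import hmm

# Toy 1-D feature per frame (e.g., silhouette width), standing in for real image features.
rng = np.random.default_rng(1)
frames = (np.sin(np.linspace(0, 6 * np.pi, 120))
          + 0.1 * rng.standard_normal(120)).reshape(-1, 1)

# Hidden states play the role of phases of the walking cycle; four phases assumed here.
model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=100, random_state=0)
model.fit(frames)
phases = model.predict(frames)
print(phases[:20])
```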
Support Vector Machine-Based Endmember Extraction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Filippi, Anthony M; Archibald, Richard K
Introduced in this paper is the utilization of Support Vector Machines (SVMs) to automatically perform endmember extraction from hyperspectral data. The strengths of SVM are exploited to provide a fast and accurate calculated representation of high-dimensional data sets that may consist of multiple distributions. Once this representation is computed, the number of distributions can be determined without prior knowledge. For each distribution, an optimal transform can be determined that preserves informational content while reducing the data dimensionality, and hence, the computational cost. Finally, endmember extraction for the whole data set is accomplished. Results indicate that this Support Vector Machine-Based Endmember Extraction (SVM-BEE) algorithm has the capability of autonomously determining endmembers from multiple clusters with computational speed and accuracy, while maintaining a robust tolerance to noise.
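A hedged sketch of the underlying idea using scikit-learn's one-class SVM to describe the support of a synthetic mixed-pixel cloud; the simulated endmembers and the nu/gamma settings are placeholders, and this is not the SVM-BEE algorithm itself:

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Toy "hyperspectral" pixels: mixtures of three hidden endmember spectra plus noise.
rng = np.random.default_rng(2)
endmembers = rng.random((3, 10))                       # 3 endmembers, 10 bands
abund = rng.dirichlet(np.ones(3), size=500)            # abundance fractions per pixel
pixels = abund @ endmembers + 0.01 * rng.standard_normal((500, 10))

# A one-class SVM describes the support of the pixel cloud; its support vectors lie
# on the boundary, where candidate endmembers (the purest pixels) are expected.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(pixels)
candidates = ocsvm.support_vectors_
print(candidates.shape)
```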
Investigating implicit knowledge in ontologies with application to the anatomical domain.
Zhang, S; Bodenreider, O
2004-01-01
Knowledge in biomedical ontologies can be explicitly represented (often by means of semantic relations), but may also be implicit, i.e., embedded in the concept names and inferable from various combinations of semantic relations. This paper investigates implicit knowledge in two ontologies of anatomy: the Foundational Model of Anatomy and GALEN. The methods consist of extracting the knowledge explicitly represented, acquiring the implicit knowledge through augmentation and inference techniques, and identifying the origin of each semantic relation. The number of relations (12 million in FMA and 4.6 million in GALEN), broken down by source, is presented. Major findings include: each technique provides specific relations; and many relations can be generated by more than one technique. The application of these findings to ontology auditing, validation, and maintenance is discussed, as well as the application to ontology integration.
Knowledge Acquisition and Management for the NASA Earth Exchange (NEX)
NASA Astrophysics Data System (ADS)
Votava, P.; Michaelis, A.; Nemani, R. R.
2013-12-01
NASA Earth Exchange (NEX) is a data, computing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform with access to large supercomputing resources. As more and more projects are executed on NEX, we are increasingly focusing on capturing the knowledge of the NEX users and providing mechanisms for sharing it with the community in order to facilitate reuse and accelerate research. There are many possible knowledge contributions to NEX: a wiki entry on the NEX portal contributed by a developer, information extracted from a publication in an automated way, or a workflow captured during code execution on the supercomputing platform. The goal of the NEX knowledge platform is to capture and organize this information and make it easily accessible to the NEX community and beyond. The knowledge acquisition process consists of three main facets - data and metadata, workflows and processes, and web-based information. Once the knowledge is acquired, it is processed in a number of ways ranging from custom metadata parsers to entity extraction using natural language processing techniques. The processed information is linked with existing taxonomies and aligned with an internal ontology (which heavily reuses a number of external ontologies). This forms a knowledge graph that can then be used to improve users' search query results as well as provide additional analytics capabilities to the NEX system. Such a knowledge graph will be an important building block in creating a dynamic knowledge base for the NEX community where knowledge is both generated and easily shared.
Knowledge extraction from evolving spiking neural networks with rank order population coding.
Soltic, Snjezana; Kasabov, Nikola
2010-12-01
This paper demonstrates how knowledge can be extracted from evolving spiking neural networks with rank order population coding. Knowledge discovery is a very important feature of intelligent systems. Yet, a disproportionately small amount of research is centered on the issue of knowledge extraction from spiking neural networks, which are considered to be the third generation of artificial neural networks. The lack of knowledge representation compatibility is becoming a major detriment to end users of these networks. We show that high-level knowledge can be obtained from evolving spiking neural networks. More specifically, we propose a method for fuzzy rule extraction from an evolving spiking network with rank order population coding. The proposed method was used for knowledge discovery on two benchmark taste recognition problems where the knowledge learnt by an evolving spiking neural network was extracted in the form of zero-order Takagi-Sugeno fuzzy IF-THEN rules.
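A toy illustration of expressing learned class prototypes as zero-order Takagi-Sugeno IF-THEN rules; the prototype vectors, channel names and linguistic cut points are invented, and the mapping is far simpler than the rule extraction method proposed in the paper:

```python
import numpy as np

# Hypothetical per-class prototypes (e.g., averaged weight vectors of evolved neurons)
# over three normalized input features (taste sensor channels in the benchmark tasks).
prototypes = {"sweet": np.array([0.9, 0.2, 0.1]),
              "bitter": np.array([0.1, 0.8, 0.7])}
feature_names = ["ch1", "ch2", "ch3"]

def to_linguistic(v):
    """Map a normalized weight to a coarse linguistic label."""
    return "LOW" if v < 0.33 else "MEDIUM" if v < 0.66 else "HIGH"

# Zero-order Takagi-Sugeno rules: fuzzy antecedents, constant (class label) consequent.
for label, proto in prototypes.items():
    antecedent = " AND ".join(f"{n} is {to_linguistic(v)}"
                              for n, v in zip(feature_names, proto))
    print(f"IF {antecedent} THEN class = {label}")
```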
Yildirim, Ilker; Jacobs, Robert A
2015-06-01
If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis which states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that we do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic "computer programs" and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects' experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects' and events' intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.
Knowledge Extraction from Atomically Resolved Images.
Vlcek, Lukas; Maksov, Artem; Pan, Minghu; Vasudevan, Rama K; Kalinin, Sergei V
2017-10-24
Tremendous strides in experimental capabilities of scanning transmission electron microscopy and scanning tunneling microscopy (STM) over the past 30 years made atomically resolved imaging routine. However, consistent integration and use of atomically resolved data with generative models is unavailable, so information on local thermodynamics and other microscopic driving forces encoded in the observed atomic configurations remains hidden. Here, we present a framework based on statistical distance minimization to consistently utilize the information available from atomic configurations obtained from an atomically resolved image and extract meaningful physical interaction parameters. We illustrate the applicability of the framework on an STM image of a FeSe(x)Te(1-x) superconductor, with the segregation of the chalcogen atoms investigated using a nonideal interacting solid solution model. This universal method makes full use of the microscopic degrees of freedom sampled in an atomically resolved image and can be extended via Bayesian inference toward unbiased model selection with uncertainty quantification.
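A generic, much-simplified illustration of fitting an interaction parameter by minimizing a statistical (Hellinger-type) distance between observed and model neighbour statistics; the observed frequencies, the composition x and the model form are assumptions, not the FeSe(x)Te(1-x) analysis itself:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import comb

# Observed frequencies of k = 0..4 like-species neighbours around an atom,
# as would be counted directly from an atomically resolved image (toy numbers).
observed = np.array([0.05, 0.15, 0.30, 0.30, 0.20])
x = 0.5  # overall composition (e.g., Se fraction), assumed known

def model_distribution(w):
    """Neighbour-count distribution for a non-ideal solution with interaction w (in kT)."""
    k = np.arange(5)
    p = comb(4, k) * x**k * (1 - x)**(4 - k) * np.exp(-w * k)
    return p / p.sum()

def statistical_distance(w):
    """Hellinger-type distance between observed and model statistics."""
    return np.sqrt(1.0 - np.sum(np.sqrt(observed * model_distribution(w))))

fit = minimize_scalar(statistical_distance, bounds=(-3, 3), method="bounded")
print("fitted interaction parameter (kT):", fit.x)
```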
Knowledge-Driven Event Extraction in Russian: Corpus-Based Linguistic Resources
Solovyev, Valery; Ivanov, Vladimir
2016-01-01
Automatic event extraction from text is an important step in knowledge acquisition and knowledge base population. Manual work in the development of an extraction system is indispensable, either in corpus annotation or in the creation of vocabularies and patterns for a knowledge-based system. Recent works have focused on the adaptation of existing systems (for extraction from English texts) to new domains. Event extraction in other languages has not been studied as extensively due to the lack of resources and algorithms necessary for natural language processing. In this paper we define a set of linguistic resources that are necessary for the development of a knowledge-based event extraction system in Russian: a vocabulary of subordination models, a vocabulary of event triggers, and a vocabulary of Frame Elements that are basic building blocks for semantic patterns. We propose a set of methods for the creation of such vocabularies in Russian and other languages using the Google Books Ngram Corpus. The methods are evaluated in the development of an event extraction system for Russian. PMID:26955386
Structuring and extracting knowledge for the support of hypothesis generation in molecular biology
Roos, Marco; Marshall, M Scott; Gibson, Andrew P; Schuemie, Martijn; Meij, Edgar; Katrenko, Sophia; van Hage, Willem Robert; Krommydas, Konstantinos; Adriaans, Pieter W
2009-01-01
Background Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The requirement of automated support is exemplified by the difficulty of considering all relevant facts that are contained in the millions of documents available from PubMed. Semantic Web provides tools for sharing prior knowledge, while information retrieval and information extraction techniques enable its extraction from literature. Their combination makes prior knowledge available for computational analysis and inference. While some tools provide complete solutions that limit the control over the modeling and extraction processes, we seek a methodology that supports control by the experimenter over these critical processes. Results We describe progress towards automated support for the generation of biomolecular hypotheses. Semantic Web technologies are used to structure and store knowledge, while a workflow extracts knowledge from text. We designed minimal proto-ontologies in OWL for capturing different aspects of a text mining experiment: the biological hypothesis, text and documents, text mining, and workflow provenance. The models fit a methodology that allows focus on the requirements of a single experiment while supporting reuse and posterior analysis of extracted knowledge from multiple experiments. Our workflow is composed of services from the 'Adaptive Information Disclosure Application' (AIDA) toolkit as well as a few others. The output is a semantic model with putative biological relations, with each relation linked to the corresponding evidence. Conclusion We demonstrated a 'do-it-yourself' approach for structuring and extracting knowledge in the context of experimental research on biomolecular mechanisms. The methodology can be used to bootstrap the construction of semantically rich biological models using the results of knowledge extraction processes. Models specific to particular experiments can be constructed that, in turn, link with other semantic models, creating a web of knowledge that spans experiments. Mapping mechanisms can link to other knowledge resources such as OBO ontologies or SKOS vocabularies. AIDA Web Services can be used to design personalized knowledge extraction procedures. In our example experiment, we found three proteins (NF-Kappa B, p21, and Bax) potentially playing a role in the interplay between nutrients and epigenetic gene regulation. PMID:19796406
Raboshchuk, Ganna; Nadeu, Climent; Jancovic, Peter; Lilja, Alex Peiro; Kokuer, Munevver; Munoz Mahamud, Blanca; Riverola De Veciana, Ana
2018-01-01
A large number of alarm sounds triggered by biomedical equipment occur frequently in the noisy environment of a neonatal intensive care unit (NICU) and play a key role in providing healthcare. In this paper, our work on the development of an automatic system for detection of acoustic alarms in that difficult environment is presented. Such an automatic detection system is needed for the investigation of how a preterm infant reacts to auditory stimuli of the NICU environment and for improved real-time patient monitoring. The approach presented in this paper consists of using the available knowledge about each alarm class in the design of the detection system. The information about the frequency structure is used in the feature extraction stage, and the time structure knowledge is incorporated at the post-processing stage. Several alternative methods are compared for feature extraction, modeling, and post-processing. The detection performance is evaluated with real data recorded in the NICU of the hospital, and by using both frame-level and period-level metrics. The experimental results show that including both spectral and temporal information improves the baseline detection performance by more than 60%.
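A simplified sketch of the two ingredients described above - band energy around a known alarm frequency for feature extraction, and a persistence check reflecting the alarm's time structure for post-processing; the synthetic beep, frequencies and thresholds are illustrative assumptions, not the system's actual design:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000
t = np.arange(0, 3.0, 1 / fs)
# Toy recording: a 2 kHz alarm that beeps 0.2 s on / 0.3 s off, buried in noise.
beep = ((t % 0.5) < 0.2).astype(float) * np.sin(2 * np.pi * 2000 * t)
audio = beep + 0.5 * np.random.default_rng(3).standard_normal(t.size)

# Feature extraction uses prior knowledge of the alarm's frequency structure:
# energy in a narrow band around the known alarm frequency.
f, frames, Sxx = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
band = (f > 1800) & (f < 2200)
band_energy = Sxx[band].sum(axis=0)
frame_detect = band_energy > band_energy.mean()

# Post-processing uses the alarm's time structure: keep only detections that
# persist for roughly the known beep duration (here >= 5 consecutive frames).
persistent = np.convolve(frame_detect.astype(int), np.ones(5, int), mode="same") >= 5
print(f"{persistent.sum()} of {persistent.size} frames flagged as alarm")
```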
Building a glaucoma interaction network using a text mining approach.
Soliman, Maha; Nasraoui, Olfa; Cooper, Nigel G F
2016-01-01
The volume of biomedical literature and its underlying knowledge base is rapidly expanding, making it beyond the ability of a single human being to read through all the literature. Several automated methods have been developed to help make sense of this dilemma. The present study reports on the results of a text mining approach to extract gene interactions from the data warehouse of published experimental results which are then used to benchmark an interaction network associated with glaucoma. To the best of our knowledge, there is, as yet, no glaucoma interaction network derived solely from text mining approaches. The presence of such a network could provide a useful summative knowledge base to complement other forms of clinical information related to this disease. A glaucoma corpus was constructed from PubMed Central and a text mining approach was applied to extract genes and their relations from this corpus. The extracted relations between genes were checked using reference interaction databases and classified generally as known or new relations. The extracted genes and relations were then used to construct a glaucoma interaction network. Analysis of the resulting network indicated that it bears the characteristics of a small world interaction network. Our analysis showed the presence of seven glaucoma linked genes that defined the network modularity. A web-based system for browsing and visualizing the extracted glaucoma related interaction networks is made available at http://neurogene.spd.louisville.edu/GlaucomaINViewer/Form1.aspx. This study has reported the first version of a glaucoma interaction network using a text mining approach. The power of such an approach is in its ability to cover a wide range of glaucoma related studies published over many years. Hence, a bigger picture of the disease can be established. To the best of our knowledge, this is the first glaucoma interaction network to summarize the known literature. The major findings were a set of relations that could not be found in existing interaction databases and that were found to be new, in addition to a smaller subnetwork consisting of interconnected clusters of seven glaucoma genes. Future improvements can be applied towards obtaining a better version of this network.
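A small sketch of turning extracted gene relations into an interaction network and computing basic network statistics with networkx; the listed gene pairs are hypothetical examples, not the relations actually mined from the glaucoma corpus:

```python
import networkx as nx

# Hypothetical gene-gene relations as they might be extracted from a glaucoma corpus.
extracted_relations = [("MYOC", "CYP1B1"), ("MYOC", "OPTN"), ("OPTN", "TBK1"),
                       ("CYP1B1", "LTBP2"), ("OPTN", "TNF"), ("TNF", "IL6")]

G = nx.Graph()
G.add_edges_from(extracted_relations)

# Simple network statistics of the kind used to characterize small-world structure.
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("average clustering:", nx.average_clustering(G))
print("hub genes by degree:", sorted(G.degree, key=lambda d: d[1], reverse=True)[:3])
```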
Model of experts for decision support in the diagnosis of leukemia patients.
Corchado, Juan M; De Paz, Juan F; Rodríguez, Sara; Bajo, Javier
2009-07-01
Recent advances in the field of biomedicine, specifically in the field of genomics, have led to an increase in the information available for conducting expression analysis. Expression analysis is a technique used in transcriptomics, a branch of genomics that deals with the study of messenger ribonucleic acid (mRNA) and the extraction of information contained in the genes. This increase in information is reflected in the exon arrays, which require the use of new techniques in order to extract the information. The purpose of this study is to provide a tool based on a mixture of experts model that allows the analysis of the information contained in the exon arrays, from which automatic classifications for decision support in diagnoses of leukemia patients can be made. The proposed model integrates several cooperative algorithms characterized by their efficiency for data processing, filtering, classification and knowledge extraction. The Cancer Institute of the University of Salamanca is making an effort to develop tools to automate the evaluation of data and to facilitate the analysis of information. This proposal is a step forward in this direction and the first step toward the development of a mixture of experts tool that integrates different cognitive and statistical approaches to deal with the analysis of exon arrays. The mixture of experts model presented within this work provides great capacities for learning and adaptation to the characteristics of the problem in consideration, using novel algorithms in each of the stages of the analysis process that can be easily configured and combined, and provides results that notably improve those provided by the existing methods for exon array analysis. The material used consists of data from exon arrays provided by the Cancer Institute that contain samples from leukemia patients. The methodology used consists of a system based on a mixture of experts. Each one of the experts incorporates novel artificial intelligence techniques that improve the process of carrying out various tasks such as pre-processing, filtering, classification and extraction of knowledge. This article will detail the manner in which individual experts are combined so that together they generate a system capable of extracting knowledge, thus permitting patients to be classified in an automatic and efficient manner that is also comprehensible for medical personnel. The system has been tested in a real setting and has been used for classifying patients who suffer from different forms of leukemia at various stages. Personnel from the Cancer Institute supervised and participated throughout the testing period. Preliminary results are promising, notably improving the results obtained with previously used tools. The medical staff from the Cancer Institute considers the tools that have been developed to be positive and very useful in a supporting capacity for carrying out their daily tasks. Additionally, the mixture of experts supplies a tool for the extraction of necessary information in order to explain the associations that have been made in simple terms. That is, it permits the extraction of knowledge for each classification made, which can be generalized for use in subsequent classifications. This allows for a large amount of learning and adaptation within the proposed system.
Knowledge representation and management: transforming textual information into useful knowledge.
Rassinoux, A-M
2010-01-01
To summarize current outstanding research in the field of knowledge representation and management. Synopsis of the articles selected for the IMIA Yearbook 2010. Four interesting papers, dealing with structured knowledge, have been selected for the section knowledge representation and management. Combining the newest techniques in computational linguistics and natural language processing with the latest methods in statistical data analysis, machine learning and text mining has proved to be efficient for turning unstructured textual information into meaningful knowledge. Three of the four selected papers for the section knowledge representation and management corroborate this approach and depict various experiments conducted to extract meaningful knowledge from unstructured free texts, such as extracting cancer disease characteristics from pathology reports, extracting protein-protein interactions from biomedical papers, and extracting knowledge for the support of hypothesis generation in molecular biology from the Medline literature. Finally, the last paper addresses the level of formally representing and structuring information within clinical terminologies in order to render such information easily available and shareable among the health informatics community. Delivering common powerful tools able to automatically extract meaningful information from the huge amount of electronically unstructured free texts is an essential step towards promoting sharing and reusability across applications, domains, and institutions, thus contributing to building capacities worldwide.
Background Knowledge in Learning-Based Relation Extraction
ERIC Educational Resources Information Center
Do, Quang Xuan
2012-01-01
In this thesis, we study the importance of background knowledge in relation extraction systems. We not only demonstrate the benefits of leveraging background knowledge to improve the systems' performance but also propose a principled framework that allows one to effectively incorporate knowledge into statistical machine learning models for…
Pole-Like Street Furniture Decomposition in Mobile Laser Scanning Data
NASA Astrophysics Data System (ADS)
Li, F.; Oude Elberink, S.; Vosselman, G.
2016-06-01
Automatic semantic interpretation of street furniture has become a popular topic in recent years. Current studies detect street furniture as connected components of points above the street level. Street furniture classification based on properties of such components suffers from large intra-class variability of shapes and cannot deal with mixed classes like traffic signs attached to light poles. In this paper, we focus on the decomposition of point clouds of pole-like street furniture. A novel street furniture decomposition method is proposed, which consists of three steps: (i) acquisition of prior knowledge, (ii) pole extraction, and (iii) component separation. For the pole extraction, a novel global pole extraction approach is proposed to handle three different cases of street furniture. In the evaluation of results, which involves the decomposition of 27 different instances of street furniture, we demonstrate that our method decomposes mixed-class street furniture into poles and different components with respect to different functionalities.
Chen, Yun; Yao, Fangke; Ming, Ke; Wang, Deyun; Hu, Yuanliang; Liu, Jiaguo
2016-12-13
Traditional Chinese Medicine (TCM) has been used to treat diseases in China for thousands of years. TCM compositions are complex, drawing on sources as varied as plants, animals, fungi, and minerals. Polysaccharides are one of the active and important ingredients of TCMs. Polysaccharides from TCMs exhibit a wide range of biological activities in terms of immunity-modifying, antiviral, anti-inflammatory, anti-oxidative, and anti-tumor properties. With their widespread biological activities, polysaccharides consistently attract scientists' interest, and studies often concentrate on the extraction, purification, and biological activity of TCM polysaccharides. Currently, numerous studies have shown that the modification of polysaccharides can heighten or change their biological activities, which is a new angle of polysaccharide research. This review highlights the current knowledge of TCM polysaccharides, including their extraction, purification, modification, and biological activity, which will hopefully provide profound insights facilitating further research and development.
Workplace nutrition knowledge questionnaire: psychometric validation and application.
Guadagnin, Simone C; Nakano, Eduardo Y; Dutra, Eliane S; de Carvalho, Kênia M B; Ito, Marina K
2016-11-01
Workplace dietary intervention studies in low- and middle-income countries using psychometrically sound measures are scarce. This study aimed to validate a nutrition knowledge questionnaire (NQ) and its utility in evaluating the changes in knowledge among participants of a Nutrition Education Program (NEP) conducted at the workplace. A NQ was tested for construct validity, internal consistency and discriminant validity. It was applied in a NEP conducted at six workplaces, in order to evaluate the effect of an interactive or a lecture-based education programme on nutrition knowledge. Four knowledge domains comprising twenty-three items were extracted in the final version of the NQ. Internal consistency of each domain was significant, with Kuder-Richardson formula values>0·60. These four domains presented a good fit in the confirmatory factor analysis. In the discriminant validity test, both the Expert and Lay groups scored>0·52, but the Expert group scores were significantly higher than those of the Lay group in all domains. When the NQ was applied in the NEP, the overall questionnaire scores increased significantly because of the NEP intervention, in both groups (P<0·001). However, the increase in NQ scores was significantly higher in the interactive group than in the lecture group, in the overall score (P=0·008) and in the healthy eating domain (P=0·009). The validated NQ is a short and useful tool to assess gain in nutrition knowledge among participants of NEP at the workplace. According to the NQ, an interactive nutrition education had a higher impact on nutrition knowledge than a lecture programme.
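For reference, a short function computing the Kuder-Richardson 20 coefficient mentioned above from a 0/1 item-response matrix; the toy answer matrix is invented, and the sample-variance convention used here is one common choice among several:

```python
import numpy as np

def kr20(responses):
    """Kuder-Richardson 20 internal consistency for a 0/1 item-response matrix
    (rows = respondents, columns = questionnaire items)."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                      # number of items in the domain
    p = responses.mean(axis=0)                  # proportion answering each item correctly
    q = 1.0 - p
    total_var = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Toy responses from 6 participants on a 5-item knowledge domain.
answers = [[1, 1, 0, 1, 1],
           [1, 0, 0, 1, 0],
           [1, 1, 1, 1, 1],
           [0, 0, 0, 1, 0],
           [1, 1, 1, 0, 1],
           [1, 1, 0, 1, 1]]
print(round(kr20(answers), 2))
```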
Rassinoux, A-M
2011-01-01
To summarize excellent current research in the field of knowledge representation and management (KRM). A synopsis of the articles selected for the IMIA Yearbook 2011 is provided and an attempt is made to highlight the current trends in the field. Over the last decade, with the extension of the text-based web towards a semantically structured web, NLP techniques have experienced renewed interest for knowledge extraction. This trend is corroborated by the five papers selected for the KRM section of the Yearbook 2011. They all depict outstanding studies that exploit NLP technologies whenever possible in order to accurately extract meaningful information from various biomedical textual sources. Bringing semantic structure to the meaningful content of textual web pages provides the user with cooperative sharing and intelligent finding of electronic data. As exemplified by the best paper selection, more and more advanced biomedical applications aim at exploiting the meaningful richness of free-text documents in order to generate semantic metadata and, recently, to learn and populate domain ontologies. These latter are becoming a key piece as they allow portraying the semantics of the Semantic Web content. Maintaining their consistency with the documents and semantic annotations that refer to them is a crucial challenge of the Semantic Web for the coming years.
Evaluation of wound healing property of Caesalpinia mimosoides Lam.
Bhat, Pradeep Bhaskar; Hegde, Shruti; Upadhya, Vinayak; Hegde, Ganesh R; Habbu, Prasanna V; Mulgund, Gangadhar S
2016-12-04
Caesalpinia mimosoides Lam. is one of the important traditional folk medicinal plants used in the treatment of skin diseases and wounds by healers of the Uttara Kannada district of Karnataka state (India). However, scientific validation of documented traditional knowledge related to medicinal plants is an important path in the current scenario to fulfill the increasing demand for herbal medicine. The study was carried out to evaluate the claimed uses of Caesalpinia mimosoides using antimicrobial, wound healing and antioxidant activities, followed by detection of possible active bio-constituents. Extracts prepared by the hot percolation method were subjected to preliminary phytochemical analysis followed by antimicrobial activity using the MIC assay. In vivo wound healing activity was evaluated by circular excision and linear incision wound models. The extract with significant antimicrobial and wound healing activity was investigated for antioxidant capacity using DPPH, nitric oxide, antilipid peroxidation and total antioxidant activity methods. Total phenolic and flavonoid contents were also determined by the Folin-Ciocalteu and Swain and Hillis methods. Possible bio-active constituents were identified by the GC-MS technique. RP-UFLC-DAD analysis was carried out to quantify ethyl gallate and gallic acid in the plant extract. Preliminary phytochemical analysis showed positive results for ethanol and aqueous extracts for all the chemical constituents. The ethanol extract showed potent antimicrobial activity against both bacterial and fungal skin pathogens compared to the other extracts. The efficacy of topical application of the potent ethanol extract and the traditionally used aqueous extract was evidenced by the complete re-epithelization of the epidermal layer with an increased percentage of wound contraction in a shorter period. However, the aqueous extract failed to show a consistent effect in the histopathological assessment. The ethanol extract showed effective scavenging activity against DPPH and nitric oxide free radicals with an expressive amount of phenolic and a moderate concentration of flavonoid contents. Ethyl gallate and gallic acid were found to be the probable bio-active compounds, as evidenced by GC-MS and RP-UFLC-DAD analysis. The study revealed the significant antimicrobial, wound healing and antioxidant activities of the tender parts of C. mimosoides and supported the traditional folklore knowledge.
Simulation to Improve Trainee Knowledge and Comfort About Twin Vaginal Birth.
Easter, Sarah Rae; Gardner, Roxane; Barrett, Jon; Robinson, Julian N; Carusi, Daniela
2016-10-01
To describe a simulation-based curriculum on twin vaginal delivery and evaluate its effects on trainee knowledge and comfort about twin vaginal birth. Trainees participated in a three-part simulation consisting of a patient counseling session, a twin delivery scenario, and a breech extraction skills station. Consenting trainees completed a 21-item presimulation survey and a 22-item postsimulation survey assessing knowledge, experience, attitudes, and comfort surrounding twin vaginal birth. Presimulation and postsimulation results were compared using univariate analysis. Our primary outcomes were change in knowledge and comfort before and after the simulation. Twenty-four obstetrics and gynecology residents consented to participation with 18 postsimulation surveys available for comparison (75%). Trainees estimated their participation in 445 twin deliveries (median 19, range 0-52) with only 20.4% of these as vaginal births. Participants reported a need for more didactic or simulated training on this topic (64% and 88%, respectively). Knowledge about twin delivery improved after the simulation (33.3% compared with 58.3% questions correct, P<.01). Before training, 33.3% of participants reported they would strongly counsel a patient to attempt vaginal birth instead of elective cesarean delivery for twins compared with 50% after training (P=.52). Personal comfort with performing a breech extraction of a nonvertex second twin improved from 5.5% to 66.7% after the simulation (P<.01). Resident exposure to twin vaginal birth is infrequent and variable with a demonstrable need for more training. Our contemporary obstetric climate is prioritizing vaginal birth despite less frequent operative obstetric interventions. We describe a reproducible twin delivery simulation associated with a favorable effect on resident knowledge and comfort levels.
Seahorses in focus: local ecological knowledge of seahorse-watching operators in a tropical estuary.
Ternes, Maria L F; Gerhardinger, Leopoldo C; Schiavetti, Alexandre
2016-11-08
Seahorses are endangered teleost fishes under increasing human pressure worldwide. In Brazil, marine conservationists and policy-makers are thus often skeptical about the viability of sustainable human-seahorse interactions. This study focuses on local ecological knowledge of seahorses and the implications of their non-lethal touristic use by a coastal community in northeastern Brazil. Community-based seahorse-watching activities have been carried out in Maracaípe village since 1999, but remained uninvestigated until the present study. Our goal is to provide ethnoecological understanding of this non-extractive use to support seahorse conservation and management. We interviewed 32 informants through semi-structured questionnaires to assess their socioeconomic profile and their knowledge of seahorse natural history traits, human uses, threats and abundance trends. Seahorse-watching has high socioeconomic relevance, being the primary income source for all respondents. Interviewees elicited a body of knowledge on seahorse biology largely consistent with up-to-date research literature. Most informants (65.5%) perceived no change in seahorse abundance. Their empirical knowledge often surpassed scientific reports, e.g. through remarks on trophic ecology; reproductive aspects such as behavior and breeding season; and spatial and temporal distribution, suggesting seahorse migration related to environmental parameters. Seahorse-watching operators were aware of seahorse biological and ecological aspects. Despite the gaps remaining in biological data about certain seahorse traits, the respondents provided reliable information on all questions, adding ethnoecological remarks not yet assessed by conventional scientific surveys. We provide novel ethnobiological insight into non-extractive modes of human-seahorse interaction, calling for environmental policies that integrate seahorse conservation with local ecological knowledge and innovative ideas for sustainable seahorse use. Our study resonates with calls for more active engagement with communities and their local ecologies if marine conservation and development are to be reconciled.
NASA Astrophysics Data System (ADS)
Tauscher, Keith; Rapetti, David; Burns, Jack O.; Switzer, Eric
2018-02-01
The sky-averaged (global) highly redshifted 21 cm spectrum from neutral hydrogen is expected to appear in the VHF range of ∼20–200 MHz and its spectral shape and strength are determined by the heating properties of the first stars and black holes, by the nature and duration of reionization, and by the presence or absence of exotic physics. Measurements of the global signal would therefore provide us with a wealth of astrophysical and cosmological knowledge. However, the signal has not yet been detected because it must be seen through strong foregrounds weighted by a large beam, instrumental calibration errors, and ionospheric, ground, and radio-frequency-interference effects, which we collectively refer to as “systematics.” Here, we present a signal extraction method for global signal experiments which uses Singular Value Decomposition of “training sets” to produce systematics basis functions specifically suited to each observation. Instead of requiring precise absolute knowledge of the systematics, our method effectively requires precise knowledge of how the systematics can vary. After calculating eigenmodes for the signal and systematics, we perform a weighted least square fit of the corresponding coefficients and select the number of modes to include by minimizing an information criterion. We compare the performance of the signal extraction when minimizing various information criteria and find that minimizing the Deviance Information Criterion most consistently yields unbiased fits. The methods used here are built into our widely applicable, publicly available Python package, pylinex, which analytically calculates constraints on signals and systematics from given data, errors, and training sets.
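The core of the fitting procedure described above can be conveyed with a short, self-contained numpy sketch. This is not the pylinex API: the array shapes, function names, the choice of five signal modes, and the AIC-like penalty used here in place of the Deviance Information Criterion are all assumptions for illustration. Training sets for the signal and the systematics each yield an SVD basis, the concatenated basis is fit to the data by weighted least squares, and the number of systematics modes is chosen by minimizing an information criterion.

```python
import numpy as np

def svd_basis(training_set, n_modes):
    """SVD of a training set (rows = realizations) -> leading n_modes basis vectors."""
    _, _, vt = np.linalg.svd(training_set - training_set.mean(axis=0), full_matrices=False)
    return vt[:n_modes]                                  # (n_modes, n_channels)

def weighted_least_squares(data, noise_std, *bases):
    """Fit concatenated basis modes to the data with inverse-variance weights.
    noise_std is a 1-D array of per-channel noise levels."""
    A = np.vstack(bases).T                               # design matrix (n_channels, n_modes)
    w = 1.0 / noise_std**2
    cov = np.linalg.inv(A.T @ (w[:, None] * A))          # parameter covariance
    coeff = cov @ A.T @ (w * data)
    return coeff, A @ coeff, cov

def select_n_systematics_modes(data, noise_std, signal_train, systematics_train, max_modes=30):
    """Pick the number of systematics modes by minimizing a simple information
    criterion (an AIC-like stand-in for the DIC used in the paper)."""
    sig_basis = svd_basis(signal_train, 5)
    best = None
    for n in range(1, max_modes + 1):
        sys_basis = svd_basis(systematics_train, n)
        _, model, _ = weighted_least_squares(data, noise_std, sig_basis, sys_basis)
        chi2 = np.sum(((data - model) / noise_std) ** 2)
        ic = chi2 + 2 * (n + 5)                          # penalty per fitted coefficient
        if best is None or ic < best[0]:
            best = (ic, n)
    return best[1]
```

The published pipeline additionally propagates full covariances and compares several information criteria; the sketch only conveys the structure of the fit.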
[Clinical reasoning in undergraduate nursing education: a scoping review].
Menezes, Sáskia Sampaio Cipriano de; Corrêa, Consuelo Garcia; Silva, Rita de Cássia Gengo E; Cruz, Diná de Almeida Monteiro Lopes da
2015-12-01
This study analyzed the current state of knowledge on clinical reasoning in undergraduate nursing education. A systematic scoping review was conducted using a search strategy applied to the MEDLINE database, with data extracted from the retrieved material by two independent reviewers. The extracted data were analyzed and synthesized in a narrative manner. Of the 1380 citations retrieved in the search, 23 were kept for review and their contents were summarized into five categories: 1) the experience of developing critical thinking/clinical reasoning/decision-making processes; 2) teaching strategies related to the development of critical thinking/clinical reasoning/decision-making processes; 3) measurement of variables related to the critical thinking/clinical reasoning/decision-making process; 4) relationships among variables involved in the critical thinking/clinical reasoning/decision-making process; and 5) theoretical models of the development of critical thinking/clinical reasoning/decision-making in students. The biggest challenge for developing knowledge on teaching clinical reasoning seems to be achieving consistency between theoretical perspectives on the development of clinical reasoning and the methodologies, methods, and procedures used in research initiatives in this field.
Yan, Binjun; Fang, Zhonghua; Shen, Lijuan; Qu, Haibin
2015-01-01
The batch-to-batch quality consistency of herbal drugs has always been an important issue. This work proposes a methodology for batch-to-batch quality control based on HPLC-MS fingerprints and a process knowledge base. The extraction process of Compound E-jiao Oral Liquid was taken as a case study. After establishing the HPLC-MS fingerprint analysis method, the fingerprints of the extract solutions produced under normal and abnormal operation conditions were obtained. Multivariate statistical models were built for fault detection, and a discriminant analysis model was built using the probabilistic discriminant partial-least-squares method for fault diagnosis. Based on multivariate statistical analysis, process knowledge was acquired and the cause-effect relationship between process deviations and quality defects was revealed. The quality defects were detected successfully by multivariate statistical control charts and the types of process deviations were diagnosed correctly by discriminant analysis. This work demonstrates the benefits of combining HPLC-MS fingerprints, process knowledge and multivariate analysis for the quality control of herbal drugs. Copyright © 2015 John Wiley & Sons, Ltd.
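The two statistical stages described, fault detection against fingerprints of normal batches and fault diagnosis by discriminant partial least squares, can be sketched roughly with scikit-learn. The fingerprint matrices, the 95th-percentile control limit, and the two deviation classes below are placeholders rather than the paper's data or exact models; the paper uses a probabilistic discriminant PLS variant and formal multivariate control limits.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Hypothetical fingerprint matrices: rows = batches, columns = peak intensities.
normal   = rng.random((20, 50))              # batches produced under normal operation
abnormal = rng.random((12, 50)) + 0.3        # batches with known process deviations
fault_type = np.repeat([0, 1], 6)            # e.g. 0 = low temperature, 1 = short extraction time
test = rng.random((5, 50))                   # new batches to be checked

# Fault detection: PCA model of the normal batches; flag batches whose squared
# prediction error (SPE) exceeds a simple empirical control limit.
pca = PCA(n_components=3).fit(normal)
def spe(X):
    recon = pca.inverse_transform(pca.transform(X))
    return np.sum((X - recon) ** 2, axis=1)
limit = np.percentile(spe(normal), 95)
print("faulty test batches:", spe(test) > limit)

# Fault diagnosis: discriminant PLS trained on abnormal batches with known
# deviation types (plain PLS as a stand-in for probabilistic discriminant PLS).
pls = PLSRegression(n_components=2).fit(abnormal, fault_type)
print("predicted deviation type:", (pls.predict(test).ravel() > 0.5).astype(int))
```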
French, Beverley; Thomas, Lois H; Baker, Paula; Burton, Christopher R; Pennington, Lindsay; Roddam, Hazel
2009-05-19
Given the current emphasis on networks as vehicles for innovation and change in health service delivery, the ability to conceptualize and measure organisational enablers for the social construction of knowledge merits attention. This study aimed to develop a composite tool to measure the organisational context for evidence-based practice (EBP) in healthcare. A structured search of the major healthcare and management databases for measurement tools from four domains: research utilisation (RU), research activity (RA), knowledge management (KM), and organisational learning (OL). Included studies were reports of the development or use of measurement tools that included organisational factors. Tools were appraised for face and content validity, plus development and testing methods. Measurement tool items were extracted, merged across the four domains, and categorised within a constructed framework describing the absorptive and receptive capacities of organisations. Thirty measurement tools were identified and appraised. Eighteen tools from the four domains were selected for item extraction and analysis. The constructed framework consists of seven categories relating to three core organisational attributes of vision, leadership, and a learning culture, and four stages of knowledge need, acquisition of new knowledge, knowledge sharing, and knowledge use. Measurement tools from RA or RU domains had more items relating to the categories of leadership, and acquisition of new knowledge; while tools from KM or learning organisation domains had more items relating to vision, learning culture, knowledge need, and knowledge sharing. There was equal emphasis on knowledge use in the different domains. If the translation of evidence into knowledge is viewed as socially mediated, tools to measure the organisational context of EBP in healthcare could be enhanced by consideration of related concepts from the organisational and management sciences. Comparison of measurement tools across domains suggests that there is scope within EBP for supplementing the current emphasis on human and technical resources to support information uptake and use by individuals. Consideration of measurement tools from the fields of KM and OL shows more content related to social mechanisms to facilitate knowledge recognition, translation, and transfer between individuals and groups.
French, Beverley; Thomas, Lois H; Baker, Paula; Burton, Christopher R; Pennington, Lindsay; Roddam, Hazel
2009-01-01
Background Given the current emphasis on networks as vehicles for innovation and change in health service delivery, the ability to conceptualise and measure organisational enablers for the social construction of knowledge merits attention. This study aimed to develop a composite tool to measure the organisational context for evidence-based practice (EBP) in healthcare. Methods A structured search of the major healthcare and management databases for measurement tools from four domains: research utilisation (RU), research activity (RA), knowledge management (KM), and organisational learning (OL). Included studies were reports of the development or use of measurement tools that included organisational factors. Tools were appraised for face and content validity, plus development and testing methods. Measurement tool items were extracted, merged across the four domains, and categorised within a constructed framework describing the absorptive and receptive capacities of organisations. Results Thirty measurement tools were identified and appraised. Eighteen tools from the four domains were selected for item extraction and analysis. The constructed framework consists of seven categories relating to three core organisational attributes of vision, leadership, and a learning culture, and four stages of knowledge need, acquisition of new knowledge, knowledge sharing, and knowledge use. Measurement tools from RA or RU domains had more items relating to the categories of leadership, and acquisition of new knowledge; while tools from KM or learning organisation domains had more items relating to vision, learning culture, knowledge need, and knowledge sharing. There was equal emphasis on knowledge use in the different domains. Conclusion If the translation of evidence into knowledge is viewed as socially mediated, tools to measure the organisational context of EBP in healthcare could be enhanced by consideration of related concepts from the organisational and management sciences. Comparison of measurement tools across domains suggests that there is scope within EBP for supplementing the current emphasis on human and technical resources to support information uptake and use by individuals. Consideration of measurement tools from the fields of KM and OL shows more content related to social mechanisms to facilitate knowledge recognition, translation, and transfer between individuals and groups. PMID:19454008
1989-08-01
Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis. Final Technical Report. Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area.
Computational Fact Checking from Knowledge Networks
Ciampaglia, Giovanni Luca; Shiralkar, Prashant; Rocha, Luis M.; Bollen, Johan; Menczer, Filippo; Flammini, Alessandro
2015-01-01
Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem this approach is feasible with efficient computational techniques. We evaluate this approach by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation. PMID:26083336
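The path-based scoring idea can be illustrated with a toy networkx sketch. The tiny graph, the log-degree penalty on intermediate nodes, and the score formula are illustrative assumptions and do not reproduce the paper's exact semantic proximity metric or its Wikipedia-derived knowledge graph.

```python
import math
import networkx as nx

# Toy knowledge graph: nodes are concepts, edges are statements from an
# encyclopedic source (illustrative data only).
G = nx.Graph()
G.add_edges_from([
    ("Barack Obama", "Honolulu"), ("Honolulu", "Hawaii"),
    ("Barack Obama", "United States"), ("Hawaii", "United States"),
    ("Canada", "United States"), ("California", "United States"),
    ("Texas", "United States"),
])

def truth_score(graph, subj, obj):
    """Support for the claim (subj, obj): paths through specific (low-degree)
    intermediate concepts score higher than paths through generic hubs."""
    if graph.has_edge(subj, obj):
        return 1.0                      # the statement is already in the graph
    if not nx.has_path(graph, subj, obj):
        return 0.0
    best = 0.0
    for path in nx.all_simple_paths(graph, subj, obj, cutoff=4):
        cost = sum(math.log(graph.degree(n)) for n in path[1:-1])
        best = max(best, 1.0 / (1.0 + cost))
    return best

print(truth_score(G, "Barack Obama", "Hawaii"))   # higher support (via Honolulu)
print(truth_score(G, "Canada", "Hawaii"))         # lower support (via a generic hub)
```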
Development and validation of a Database Forensic Metamodel (DBFM)
Al-dhaqm, Arafat; Razak, Shukor; Othman, Siti Hajar; Ngadi, Asri; Ahmed, Mohammed Nazir; Ali Mohammed, Abdulalem
2017-01-01
Database Forensics (DBF) is a widespread area of knowledge. It has many complex features and is well known amongst database investigators and practitioners. Several models and frameworks have been created specifically to allow knowledge-sharing and effective DBF activities. However, these are often narrow in focus and address only specified database incident types. We analysed 60 such models in an attempt to uncover how far numerous DBF activities coincide even when the individual actions vary. We then generated a unified abstract view of DBF in the form of a metamodel: common concepts were identified, extracted, and their definitions reconciled to propose the metamodel. A metamodelling process was applied to guarantee that this metamodel is comprehensive and consistent. PMID:28146585
Using decision-tree classifier systems to extract knowledge from databases
NASA Technical Reports Server (NTRS)
St.clair, D. C.; Sabharwal, C. L.; Hacke, Keith; Bond, W. E.
1990-01-01
One difficulty in applying artificial intelligence techniques to the solution of real world problems is that the development and maintenance of many AI systems, such as those used in diagnostics, require large amounts of human resources. At the same time, databases frequently exist which contain information about the process(es) of interest. Recently, efforts to reduce development and maintenance costs of AI systems have focused on using machine learning techniques to extract knowledge from existing databases. Research is described in the area of knowledge extraction using a class of machine learning techniques called decision-tree classifier systems. Results of this research suggest ways of performing knowledge extraction which may be applied in numerous situations. In addition, a measurement called the concept strength metric (CSM) is described which can be used to determine how well the resulting decision tree can differentiate between the concepts it has learned. The CSM can be used to determine whether or not additional knowledge needs to be extracted from the database. An experiment involving real world data is presented to illustrate the concepts described.
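The knowledge-extraction step itself is straightforward to sketch with a modern decision-tree library. The dataset, the tree depth, and the leaf-purity score used below as a stand-in for the paper's concept strength metric (CSM) are all illustrative assumptions; the actual CSM is defined in the cited work.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a decision tree on an existing database (iris used as a stand-in).
data = load_iris()
tree = DecisionTreeClassifier(max_depth=3).fit(data.data, data.target)

# Knowledge extraction: the fitted tree is already a set of human-readable rules.
print(export_text(tree, feature_names=list(data.feature_names)))

# A simple leaf-purity score, used here only as a stand-in for the CSM:
# it indicates how well each leaf separates the learned concepts.
leaves = tree.apply(data.data)
purity = []
for leaf in np.unique(leaves):
    counts = np.bincount(data.target[leaves == leaf])
    purity.append(counts.max() / counts.sum())
print("mean leaf purity:", np.mean(purity))
```

A low purity (or a low CSM in the original formulation) would signal that additional knowledge should be extracted from the database.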
Liu, Xiao; Chen, Hsinchun
2015-12-01
Social media offer insights into patients' medical problems such as drug side effects and treatment failures. Patient reports of adverse drug events from social media have great potential to improve the current practice of pharmacovigilance. However, extracting patient adverse drug event reports from social media remains an important challenge for health informatics research. In this study, we develop a research framework with advanced natural language processing techniques for integrated and high-performance extraction of patient-reported adverse drug events. The framework consists of medical entity extraction for recognizing patient discussions of drugs and events, adverse drug event extraction using a shortest-dependency-path kernel based statistical learning method with semantic filtering based on information from medical knowledge bases, and report source classification to tease out noise. To evaluate the proposed framework, a series of experiments were conducted on a test bed of postings from major diabetes and heart disease forums in the United States. The results reveal that each component of the framework significantly contributes to its overall effectiveness. Our framework significantly outperforms prior work. Published by Elsevier Inc.
Extracting fuzzy rules under uncertainty and measuring definability using rough sets
NASA Technical Reports Server (NTRS)
Culas, Donald E.
1991-01-01
Although computers have come a long way since their invention, at the hardware level they are basically able to handle only crisp values. Unfortunately, the world we live in consists of problems that fail to fall into this category: uncertainty is all too common. This work examines a problem involving uncertainty; specifically, the attributes dealt with are fuzzy sets. Under this condition, knowledge is acquired by looking at examples, each of which provides a condition and a decision. Based on the examples given, two sets of rules are extracted: certain and possible. Furthermore, measures are constructed of how strongly these rules are believed, and finally, the decisions are defined as a function of the terms used in the conditions.
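The split between certain and possible rules can be illustrated with a small rough-set style sketch. The toy decision table, the use of crisp (non-fuzzy) attribute values, and the frequency-based belief measure are simplifying assumptions and do not reproduce the fuzzy-set machinery of the original work.

```python
from collections import defaultdict

# Toy decision table: each example maps condition attributes to a decision.
examples = [
    ({"temp": "high", "humidity": "low"},  "water"),
    ({"temp": "high", "humidity": "low"},  "water"),
    ({"temp": "high", "humidity": "high"}, "wait"),
    ({"temp": "low",  "humidity": "high"}, "wait"),
    ({"temp": "high", "humidity": "high"}, "water"),   # conflicting example
]

# Group examples by identical condition values (indiscernibility classes).
classes = defaultdict(list)
for cond, decision in examples:
    classes[tuple(sorted(cond.items()))].append(decision)

certain_rules, possible_rules = [], []
for cond, decisions in classes.items():
    if len(set(decisions)) == 1:
        # lower approximation: all matching examples agree -> certain rule
        certain_rules.append((dict(cond), decisions[0]))
    else:
        # upper approximation: only some matching examples agree -> possible rules,
        # with a simple belief measure (fraction of supporting examples)
        for d in set(decisions):
            belief = decisions.count(d) / len(decisions)
            possible_rules.append((dict(cond), d, belief))

print("certain:", certain_rules)
print("possible:", possible_rules)
```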
NASA Astrophysics Data System (ADS)
Xu, Jin; Li, Zheng; Li, Shuliang; Zhang, Yanyan
2015-07-01
There is still a lack of effective paradigms and tools for analysing and discovering the contents and relationships of project knowledge contexts in the field of project management. In this paper, a new framework for extracting and representing project knowledge contexts using topic models and dynamic knowledge maps under big data environments is proposed and developed. The conceptual paradigm, theoretical underpinning, extended topic model, and illustration examples of the ontology model for project knowledge maps are presented, with further research work envisaged.
García-Remesal, Miguel; Maojo, Victor; Crespo, José
2010-01-01
In this paper we present a knowledge engineering approach to automatically recognize and extract genetic sequences from scientific articles. To carry out this task, we use a preliminary recognizer based on a finite state machine to extract all candidate DNA/RNA sequences. The latter are then fed into a knowledge-based system that automatically discards false positives and refines noisy and incorrectly merged sequences. We created the knowledge base by manually analyzing different manuscripts containing genetic sequences. Our approach was evaluated using a test set of 211 full-text articles in PDF format containing 3134 genetic sequences. For such set, we achieved 87.76% precision and 97.70% recall respectively. This method can facilitate different research tasks. These include text mining, information extraction, and information retrieval research dealing with large collections of documents containing genetic sequences.
Extracting Information from Narratives: An Application to Aviation Safety Reports
DOE Office of Scientific and Technical Information (OSTI.GOV)
Posse, Christian; Matzke, Brett D.; Anderson, Catherine M.
2005-05-12
Aviation safety reports are the best available source of information about why a flight incident happened. However, the stream-of-consciousness style of the narratives makes automation of the information extraction task difficult. We propose an approach and infrastructure based on a common pattern specification language to capture relevant information via normalized template expression matching in context. Template expression matching handles variants of multi-word expressions. Normalization improves the likelihood of correct hits by standardizing and cleaning the vocabulary used in narratives. Checking for the presence of negative modifiers in the proximity of a potential hit reduces the chance of false hits. We present the above approach in the context of a specific application: the extraction of human performance factors from NASA ASRS reports. While knowledge infusion from experts plays a critical role during the learning phase, early results show that in a production mode the automated process provides information that is consistent with analyses by human subjects.
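A minimal sketch of the matching idea follows: normalization, variant patterns for one factor, and a negation check in a small window before the hit. The pattern list, negator set, window size, and the "fatigue" factor are illustrative assumptions, not the project's actual pattern specification language or factor taxonomy.

```python
import re

# Illustrative patterns for one human-performance factor ("fatigue").
FATIGUE_VARIANTS = [r"fatigu\w*", r"tired(ness)?", r"sleep[- ]?deprived"]
NEGATORS = {"no", "not", "denied", "without"}

def normalize(text):
    """Lowercase and collapse whitespace so pattern variants match reliably."""
    return re.sub(r"\s+", " ", text.lower())

def mentions_fatigue(narrative, window=3):
    text = normalize(narrative)
    tokens = text.split()
    for pat in FATIGUE_VARIANTS:
        for m in re.finditer(pat, text):
            # token index of the hit, then check for a negator just before it
            idx = len(text[:m.start()].split())
            left = tokens[max(0, idx - window):idx]
            if not (set(left) & NEGATORS):
                return True
    return False

print(mentions_fatigue("Captain reported he was fatigued after the red-eye."))  # True
print(mentions_fatigue("Crew was not fatigued; the approach was rushed."))      # False
```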
Accurate airway centerline extraction based on topological thinning using graph-theoretic analysis.
Bian, Zijian; Tan, Wenjun; Yang, Jinzhu; Liu, Jiren; Zhao, Dazhe
2014-01-01
The quantitative analysis of the airway tree is of critical importance in the CT-based diagnosis and treatment of common pulmonary diseases. The extraction of the airway centerline is a precursor to identifying the airway hierarchical structure, measuring geometrical parameters, and guiding visualized detection. Traditional methods suffer from extra branches and circles due to incomplete segmentation results, which induce false analysis in applications. This paper proposes an automatic and robust centerline extraction method for the airway tree. First, the centerline is located based on the topological thinning method; border voxels are deleted symmetrically and iteratively to preserve topological and geometrical properties. Second, the structural information is generated using graph-theoretic analysis. Then inaccurate circles are removed with a distance-weighting strategy, and extra branches are pruned according to clinical anatomic knowledge. The centerline region without false appendices is eventually determined after the described phases. Experimental results show that the proposed method identifies more than 96% of branches, keeps consistency across different cases, and achieves a superior circle-free structure and centrality.
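A simplified sketch of the thinning-plus-graph stage is given below, assuming a binary airway mask and using scikit-image skeletonization with a networkx voxel graph. The neighbourhood construction, the fixed pruning threshold, and the omission of the distance-weighted circle removal and the anatomy-driven pruning are simplifications relative to the described method.

```python
import numpy as np
import networkx as nx
from skimage.morphology import skeletonize

def centerline_graph(mask, min_branch_len=10):
    """Thin a binary airway mask, build a voxel-adjacency graph, and prune
    short spurious leaf branches (a simplified stand-in for the full pipeline)."""
    # thin the mask (recent scikit-image versions dispatch to a 3-D method for volumes)
    skel = skeletonize(mask.astype(bool))
    coords = np.argwhere(skel)
    index = {tuple(c): i for i, c in enumerate(coords)}

    G = nx.Graph()
    G.add_nodes_from(range(len(coords)))
    for i, c in enumerate(coords):
        # connect neighbouring skeleton voxels (full neighbourhood)
        for off in np.ndindex(*(3,) * mask.ndim):
            n = tuple(c + np.array(off) - 1)
            if n != tuple(c) and n in index:
                G.add_edge(i, index[n])

    # remove tiny leaf branches that typically come from segmentation noise
    pruned = True
    while pruned:
        pruned = False
        for leaf in [n for n in G.nodes if G.degree(n) == 1]:
            if leaf not in G:
                continue
            path = [leaf]
            while G.degree(path[-1]) <= 2 and len(path) < min_branch_len:
                nbrs = [n for n in G.neighbors(path[-1]) if n not in path]
                if not nbrs:
                    break
                path.append(nbrs[0])
            if len(path) < min_branch_len and G.degree(path[-1]) > 2:
                G.remove_nodes_from(path[:-1])
                pruned = True
    return skel, G
```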
Toward Scalable Trustworthy Computing Using the Human-Physiology-Immunity Metaphor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hively, Lee M; Sheldon, Frederick T
The cybersecurity landscape consists of an ad hoc patchwork of solutions. Optimal cybersecurity is difficult for various reasons: complexity, immense data and processing requirements, resource-agnostic cloud computing, practical time-space-energy constraints, inherent flaws in 'Maginot Line' defenses, and the growing number and sophistication of cyberattacks. This article defines the high-priority problems and examines the potential solution space. In that space, achieving scalable trustworthy computing and communications is possible through real-time knowledge-based decisions about cyber trust. This vision is based on the human-physiology-immunity metaphor and the human brain's ability to extract knowledge from data and information. The article outlines future steps toward scalable trustworthy systems requiring a long-term commitment to solve the well-known challenges.
New Method for Knowledge Management Focused on Communication Pattern in Product Development
NASA Astrophysics Data System (ADS)
Noguchi, Takashi; Shiba, Hajime
In the field of manufacturing, the importance of utilizing knowledge and know-how has been growing. Against this background, there is a need for new methods to efficiently accumulate and extract effective knowledge and know-how. To facilitate the extraction of the knowledge and know-how needed by engineers, we first defined business process information, which includes schedule/progress information, document data, information about communication among the parties concerned, and the correspondences among these three types of information. Based on our definitions, we proposed an IT system (FlexPIM: Flexible and collaborative Process Information Management) to register and accumulate business process information with the least effort. In order to efficiently extract effective information from huge volumes of accumulated business process information, we propose a new extraction method that focuses attention on "actions" and communication patterns. The validity of this method has been verified for several communication patterns.
Grist : grid-based data mining for astronomy
NASA Technical Reports Server (NTRS)
Jacob, Joseph C.; Katz, Daniel S.; Miller, Craig D.; Walia, Harshpreet; Williams, Roy; Djorgovski, S. George; Graham, Matthew J.; Mahabal, Ashish; Babu, Jogesh; Berk, Daniel E. Vanden;
2004-01-01
The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the 'hyperatlas' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
Grist: Grid-based Data Mining for Astronomy
NASA Astrophysics Data System (ADS)
Jacob, J. C.; Katz, D. S.; Miller, C. D.; Walia, H.; Williams, R. D.; Djorgovski, S. G.; Graham, M. J.; Mahabal, A. A.; Babu, G. J.; vanden Berk, D. E.; Nichol, R.
2005-12-01
The Grist project is developing a grid-technology based system as a research environment for astronomy with massive and complex datasets. This knowledge extraction system will consist of a library of distributed grid services controlled by a workflow system, compliant with standards emerging from the grid computing, web services, and virtual observatory communities. This new technology is being used to find high redshift quasars, study peculiar variable objects, search for transients in real time, and fit SDSS QSO spectra to measure black hole masses. Grist services are also a component of the ``hyperatlas'' project to serve high-resolution multi-wavelength imagery over the Internet. In support of these science and outreach objectives, the Grist framework will provide the enabling fabric to tie together distributed grid services in the areas of data access, federation, mining, subsetting, source extraction, image mosaicking, statistics, and visualization.
Extraction and Classification of Human Gait Features
NASA Astrophysics Data System (ADS)
Ng, Hu; Tan, Wooi-Haw; Tong, Hau-Lee; Abdullah, Junaidi; Komiya, Ryoichi
In this paper, a new approach is proposed for extracting human gait features from a walking human based on silhouette images. The approach consists of six stages: clearing the background noise of the image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying a morphological skeleton to obtain the body skeleton; applying the Hough transform to obtain the joint angles from the body segment skeletons; and measuring the distance between the bottoms of the right and left legs from the body segment skeletons. The joint angles and step size, together with the height and width of the human silhouette, are collected and used for gait analysis. The experimental results demonstrate that the proposed system is feasible and achieves satisfactory results.
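Parts of this pipeline can be sketched with OpenCV and numpy as below. The kernel size, the 75% anatomical split for the leg region, and the step-size heuristic are illustrative assumptions, and the skeletonization and Hough-transform joint-angle stages are omitted for brevity.

```python
import cv2
import numpy as np

def gait_features(silhouette):
    """Partial sketch on one binary (uint8) silhouette frame, assumed non-empty:
    noise removal, body height/width, and a simple step-size estimate."""
    # 1) clear background noise with morphological opening
    kernel = np.ones((3, 3), np.uint8)
    clean = cv2.morphologyEx(silhouette, cv2.MORPH_OPEN, kernel)

    # 2) height and width of the silhouette from its bounding box
    ys, xs = np.nonzero(clean)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1

    # 3) rough anatomical split: the lowest ~25% of the body contains the legs
    leg_region = clean[int(ys.min() + 0.75 * height):, :]

    # 4) step size: horizontal spread of foreground pixels on the lowest
    #    occupied row of the leg region (distance between the two feet)
    rows = np.nonzero(leg_region)[0]
    cols = np.nonzero(leg_region[rows.max(), :])[0]
    step_size = int(cols.max() - cols.min()) if cols.size else 0

    return {"height": int(height), "width": int(width), "step_size": step_size}
```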
KnowLife: a versatile approach for constructing a large knowledge graph for biomedical sciences.
Ernst, Patrick; Siu, Amy; Weikum, Gerhard
2015-05-14
Biomedical knowledge bases (KB's) have become important assets in life sciences. Prior work on KB construction has three major limitations. First, most biomedical KBs are manually built and curated, and cannot keep up with the rate at which new findings are published. Second, for automatic information extraction (IE), the text genre of choice has been scientific publications, neglecting sources like health portals and online communities. Third, most prior work on IE has focused on the molecular level or chemogenomics only, like protein-protein interactions or gene-drug relationships, or solely address highly specific topics such as drug effects. We address these three limitations by a versatile and scalable approach to automatic KB construction. Using a small number of seed facts for distant supervision of pattern-based extraction, we harvest a huge number of facts in an automated manner without requiring any explicit training. We extend previous techniques for pattern-based IE with confidence statistics, and we combine this recall-oriented stage with logical reasoning for consistency constraint checking to achieve high precision. To our knowledge, this is the first method that uses consistency checking for biomedical relations. Our approach can be easily extended to incorporate additional relations and constraints. We ran extensive experiments not only for scientific publications, but also for encyclopedic health portals and online communities, creating different KB's based on different configurations. We assess the size and quality of each KB, in terms of number of facts and precision. The best configured KB, KnowLife, contains more than 500,000 facts at a precision of 93% for 13 relations covering genes, organs, diseases, symptoms, treatments, as well as environmental and lifestyle risk factors. KnowLife is a large knowledge base for health and life sciences, automatically constructed from different Web sources. As a unique feature, KnowLife is harvested from different text genres such as scientific publications, health portals, and online communities. Thus, it has the potential to serve as one-stop portal for a wide range of relations and use cases. To showcase the breadth and usefulness, we make the KnowLife KB accessible through the health portal (http://knowlife.mpi-inf.mpg.de).
An information extraction framework for cohort identification using electronic health records.
Liu, Hongfang; Bielinski, Suzette J; Sohn, Sunghwan; Murphy, Sean; Wagholikar, Kavishwar B; Jonnalagadda, Siddhartha R; Ravikumar, K E; Wu, Stephen T; Kullo, Iftikhar J; Chute, Christopher G
2013-01-01
Information extraction (IE), a natural language processing (NLP) task that automatically extracts structured or semi-structured information from free text, has become popular in the clinical domain for supporting automated systems at point-of-care and enabling secondary use of electronic health records (EHRs) for clinical and translational research. However, a high performance IE system can be very challenging to construct due to the complexity and dynamic nature of human language. In this paper, we report an IE framework for cohort identification using EHRs that is a knowledge-driven framework developed under the Unstructured Information Management Architecture (UIMA). A system to extract specific information can be developed by subject matter experts through expert knowledge engineering of the externalized knowledge resources used in the framework.
Mode separation in frequency-wavenumber domain through compressed sensing of far-field Lamb waves
NASA Astrophysics Data System (ADS)
Gao, Fei; Zeng, Liang; Lin, Jing; Luo, Zhi
2017-07-01
Inspection based on Lamb waves shows great potential for long-range damage detection. However, mode superposition resulting from multi-modal and dispersive characteristics makes signal interpretation and damage feature extraction difficult. Mode separation in the frequency-wavenumber (f-k) domain using a 1D sparse sensing array is a promising solution. However, due to the lack of prior knowledge about damage location, this 1D linear-measurement approach is restricted when extracting the modes of arbitrary reflections caused by defects that are not in line with the sensor array. In this paper, an improved compressed sensing method under the far-field assumption is established, which benefits the reconstruction of reflections in the f-k domain. Hence, multiple components consisting of structural and damage features can be recovered from a limited number of measurements. Subsequently, a mode sweeping process based on theoretical dispersion curves is designed for mode characterization and direction-of-arrival estimation. Moreover, 2D f-k filtering and inverse transforms are applied to the reconstructed f-k distribution in order to extract the purified mode of interest. As a result, overlapping waveforms can be separated and the direction of defects can be estimated. A uniform linear sensor array consisting of 16 laser excitations is finally employed for experimental investigations, and the results demonstrate the efficiency of the proposed method.
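The sparse-recovery idea behind the f-k reconstruction can be conveyed with a toy single-frequency example: a dictionary of candidate wavenumbers is fit to a handful of spatial samples with an l1 penalty so that only a few wavenumbers remain active. The geometry, wavenumbers, and Lasso settings below are illustrative assumptions and do not reproduce the paper's far-field model, mode sweeping, or 2D f-k filtering.

```python
import numpy as np
from sklearn.linear_model import Lasso

np.random.seed(0)

# Synthetic single-frequency example: two overlapping modes with different
# wavenumbers, sampled at a small number of sensor positions (illustrative values).
k_grid = np.linspace(0, 1500, 300)            # candidate wavenumbers (rad/m)
x = np.sort(np.random.uniform(0, 0.5, 16))    # 16 sparse sensor positions (m)
true_k = [300.0, 900.0]                       # two propagating wavenumbers
signal = sum(np.cos(k * x) for k in true_k)

# Dictionary of cosines: each column corresponds to one candidate wavenumber.
A = np.cos(np.outer(x, k_grid))

# Sparse recovery: the l1 penalty keeps only a few active wavenumbers, which is
# the essence of reconstructing the f-k content from few spatial samples.
fit = Lasso(alpha=0.01, max_iter=50000).fit(A, signal)
active = k_grid[np.abs(fit.coef_) > 0.1 * np.abs(fit.coef_).max()]
print(active)   # the large coefficients should cluster near the true wavenumbers
```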
Chapter 4: neurology in the Bible and the Talmud.
Feinsod, Moshe
2010-01-01
The Bible, a major pillar of Western civilization, consists of the Hebrew Scriptures, assembled over a millennium and accepted as of divine origin. The Talmud is a compendium of Jewish laws, covering every possible aspect of life, analyzed in depth from 200 BCE to 600 CE, and it became the foundation of Jewish existence. The all-encompassing character of these books provides numerous medical problems and observations that appear in various connotations. When needing to clarify various legal dilemmas, the Talmudic sages displayed astoundingly accurate anatomical knowledge and were pioneers in clinical-pathological correlations. The descriptions of "neurological" events in the Bible are very precise but show no evidence of neurological knowledge. Those reported in the various tractates of the Talmud are evidence of substantial medical knowledge, marked by Hellenistic influence. Subjects such as head and spinal injuries, epilepsy, handedness, neuralgias, aphasia, tinnitus and tremor were discussed in depth. This chapter is an updated collection of studies extracting observations and discussions of neurological manifestations from the ancient texts.
Verification and Validation of KBS with Neural Network Components
NASA Technical Reports Server (NTRS)
Wen, Wu; Callahan, John
1996-01-01
Artificial Neural Networks (ANNs) play an important role in developing robust Knowledge Based Systems (KBS). The ANN-based components used in these systems learn to give appropriate predictions through training with correct input-output data patterns. Unlike a traditional KBS that depends on a rule database and a production engine, an ANN-based system mimics the decisions of an expert without explicitly formulating if-then type rules. In fact, ANNs demonstrate their superiority when such if-then rules are hard for a human expert to generate. Verification of traditional knowledge based systems is based on proof of consistency and completeness of the rule knowledge base and correctness of the production engine. These techniques, however, cannot be directly applied to ANN-based components. In this position paper, we propose a verification and validation procedure for KBS with ANN-based components. The essence of the procedure is to obtain an accurate system specification through incremental modification of the specifications using an ANN rule extraction algorithm.
Xu, Rong; Li, Li; Wang, QuanQiu
2013-01-01
Motivation: Systems approaches to studying phenotypic relationships among diseases are emerging as an active area of research for both novel disease gene discovery and drug repurposing. Currently, systematic study of disease phenotypic relationships on a phenome-wide scale is limited because large-scale machine-understandable disease–phenotype relationship knowledge bases are often unavailable. Here, we present an automatic approach to extract disease–manifestation (D-M) pairs (one specific type of disease–phenotype relationship) from the wide body of published biomedical literature. Data and Methods: Our method leverages external knowledge and limits the amount of human effort required. For the text corpus, we used 119 085 682 MEDLINE sentences (21 354 075 citations). First, we used D-M pairs from existing biomedical ontologies as prior knowledge to automatically discover D-M–specific syntactic patterns. We then extracted additional pairs from MEDLINE using the learned patterns. Finally, we analysed correlations between disease manifestations and disease-associated genes and drugs to demonstrate the potential of this newly created knowledge base in disease gene discovery and drug repurposing. Results: In total, we extracted 121 359 unique D-M pairs with a high precision of 0.924. Among the extracted pairs, 120 419 (99.2%) have not been captured in existing structured knowledge sources. We have shown that disease manifestations correlate positively with both disease-associated genes and drug treatments. Conclusions: The main contribution of our study is the creation of a large-scale and accurate D-M phenotype relationship knowledge base. This unique knowledge base, when combined with existing phenotypic, genetic and proteomic datasets, can have profound implications in our deeper understanding of disease etiology and in rapid drug repurposing. Availability: http://nlp.case.edu/public/data/DMPatternUMLS/ Contact: rxx@case.edu PMID:23828786
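The distant-supervision step described above (seed pairs lead to lexical patterns, which in turn yield new candidate pairs) can be sketched as follows. The seed pairs, sentences, and purely lexical between-entity patterns are toy stand-ins for the MEDLINE-scale corpus and the syntactic patterns actually learned.

```python
import re

# Seed disease-manifestation pairs from an existing ontology (illustrative).
seeds = [("marfan syndrome", "aortic dilatation"),
         ("parkinson disease", "resting tremor")]

sentences = [
    "Patients with Marfan syndrome often present with aortic dilatation.",
    "Parkinson disease frequently presents with resting tremor and rigidity.",
    "Cushing syndrome frequently presents with central obesity.",
]

# 1) learn lexical patterns from sentences that contain a seed pair
patterns = set()
for disease, manifestation in seeds:
    for s in sentences:
        low = s.lower()
        if disease in low and manifestation in low:
            between = low.split(disease, 1)[1].split(manifestation, 1)[0].strip()
            patterns.add(between)                  # e.g. "frequently presents with"

# 2) apply the learned patterns to extract new candidate pairs
new_pairs = []
for s in sentences:
    low = s.lower()
    for p in sorted(patterns, key=len, reverse=True):
        m = re.search(r"(.+?)\s+" + re.escape(p) + r"\s+(.+?)[\.,]", low)
        if m:
            new_pairs.append((m.group(1).strip(), m.group(2).strip()))

print(new_pairs)   # includes the unseen pair ('cushing syndrome', 'central obesity')
```

The candidates produced this way are noisy, which is why the described approach additionally filters pairs and reports precision on the resulting knowledge base.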
Knowledge Discovery and Data Mining: An Overview
NASA Technical Reports Server (NTRS)
Fayyad, U.
1995-01-01
The process of knowledge discovery and data mining is the process of information extraction from very large databases. Its importance is described along with several techniques and considerations for selecting the most appropriate technique for extracting information from a particular data set.
Lynx: a database and knowledge extraction engine for integrative medicine.
Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T Conrad; Maltsev, Natalia
2014-01-01
We have developed Lynx (http://lynx.ci.uchicago.edu)--a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces.
A Semiautomated Framework for Integrating Expert Knowledge into Disease Marker Identification
Wang, Jing; Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; ...
2013-01-01
Background. The availability of large complex data sets generated by high throughput technologies has enabled the recent proliferation of disease biomarker studies. However, a recurring problem in deriving biological information from large data sets is how to best incorporate expert knowledge into the biomarker selection process. Objective. To develop a generalizable framework that can incorporate expert knowledge into data-driven processes in a semiautomated way while providing a metric for optimization in a biomarker selection scheme. Methods. The framework was implemented as a pipeline consisting of five components for the identification of signatures from integrated clustering (ISIC). Expert knowledge was integrated into the biomarker identification process using the combination of two distinct approaches; a distance-based clustering approach and an expert knowledge-driven functional selection. Results. The utility of the developed framework ISIC was demonstrated on proteomics data from a study of chronic obstructive pulmonary disease (COPD). Biomarker candidates were identified in a mouse model using ISIC and validated in a study of a human cohort. Conclusions. Expert knowledge can be introduced into a biomarker discovery process in different ways to enhance the robustness of selected marker candidates. Developing strategies for extracting orthogonal and robust features from large data sets increases the chances of success in biomarker identification.
A Semiautomated Framework for Integrating Expert Knowledge into Disease Marker Identification
Wang, Jing; Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Varnum, Susan M.; Brown, Joseph N.; Riensche, Roderick M.; Adkins, Joshua N.; Jacobs, Jon M.; Hoidal, John R.; Scholand, Mary Beth; Pounds, Joel G.; Blackburn, Michael R.; Rodland, Karin D.; McDermott, Jason E.
2013-01-01
Background. The availability of large complex data sets generated by high throughput technologies has enabled the recent proliferation of disease biomarker studies. However, a recurring problem in deriving biological information from large data sets is how to best incorporate expert knowledge into the biomarker selection process. Objective. To develop a generalizable framework that can incorporate expert knowledge into data-driven processes in a semiautomated way while providing a metric for optimization in a biomarker selection scheme. Methods. The framework was implemented as a pipeline consisting of five components for the identification of signatures from integrated clustering (ISIC). Expert knowledge was integrated into the biomarker identification process using the combination of two distinct approaches; a distance-based clustering approach and an expert knowledge-driven functional selection. Results. The utility of the developed framework ISIC was demonstrated on proteomics data from a study of chronic obstructive pulmonary disease (COPD). Biomarker candidates were identified in a mouse model using ISIC and validated in a study of a human cohort. Conclusions. Expert knowledge can be introduced into a biomarker discovery process in different ways to enhance the robustness of selected marker candidates. Developing strategies for extracting orthogonal and robust features from large data sets increases the chances of success in biomarker identification. PMID:24223463
A Semiautomated Framework for Integrating Expert Knowledge into Disease Marker Identification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jing; Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.
2013-10-01
Background. The availability of large complex data sets generated by high throughput technologies has enabled the recent proliferation of disease biomarker studies. However, a recurring problem in deriving biological information from large data sets is how to best incorporate expert knowledge into the biomarker selection process. Objective. To develop a generalizable framework that can incorporate expert knowledge into data-driven processes in a semiautomated way while providing a metric for optimization in a biomarker selection scheme. Methods. The framework was implemented as a pipeline consisting of five components for the identification of signatures from integrated clustering (ISIC). Expert knowledge was integrated into the biomarker identification process using the combination of two distinct approaches; a distance-based clustering approach and an expert knowledge-driven functional selection. Results. The utility of the developed framework ISIC was demonstrated on proteomics data from a study of chronic obstructive pulmonary disease (COPD). Biomarker candidates were identified in a mouse model using ISIC and validated in a study of a human cohort. Conclusions. Expert knowledge can be introduced into a biomarker discovery process in different ways to enhance the robustness of selected marker candidates. Developing strategies for extracting orthogonal and robust features from large data sets increases the chances of success in biomarker identification.
Nardi, Mariane; Lira-Guedes, Ana Cláudia; Albuquerque Cunha, Helenilza Ferreira; Guedes, Marcelino Carneiro; Mustin, Karen; Gomes, Suellen Cristina Pantoja
2016-01-01
Várzea forests of the Amazon estuary contain species of importance to riverine communities. For example, the oil extracted from the seeds of crabwood trees is traditionally used to combat various illnesses and as such artisanal extraction processes have been maintained. The objectives of this study were to (1) describe the process involved in artisanal extraction of crabwood oil in the Fazendinha Protected Area, in the state of Amapá; (2) characterise the processes of knowledge transfer associated with the extraction and use of crabwood oil within a peri-urban riverine community; and (3) discern medicinal uses of the oil. The data were obtained using semistructured interviews with 13 community members involved in crabwood oil extraction and via direct observation. The process of oil extraction is divided into four stages: seed collection; cooking and resting of the seeds; shelling of the seeds and dough preparation; and oil collection. Oil extraction is carried out within the home for personal use, with surplus marketed within the community. More than 90% of the members of the community involved in extraction of crabwood oil highlighted the use of the oil to combat inflammation of the throat. Knowledge transfer occurs via oral transmission and through direct observation.
Lira-Guedes, Ana Cláudia; Albuquerque Cunha, Helenilza Ferreira; Guedes, Marcelino Carneiro; Mustin, Karen; Gomes, Suellen Cristina Pantoja
2016-01-01
Várzea forests of the Amazon estuary contain species of importance to riverine communities. For example, the oil extracted from the seeds of crabwood trees is traditionally used to combat various illnesses and as such artisanal extraction processes have been maintained. The objectives of this study were to (1) describe the process involved in artisanal extraction of crabwood oil in the Fazendinha Protected Area, in the state of Amapá; (2) characterise the processes of knowledge transfer associated with the extraction and use of crabwood oil within a peri-urban riverine community; and (3) discern medicinal uses of the oil. The data were obtained using semistructured interviews with 13 community members involved in crabwood oil extraction and via direct observation. The process of oil extraction is divided into four stages: seed collection; cooking and resting of the seeds; shelling of the seeds and dough preparation; and oil collection. Oil extraction is carried out within the home for personal use, with surplus marketed within the community. More than 90% of the members of the community involved in extraction of crabwood oil highlighted the use of the oil to combat inflammation of the throat. Knowledge transfer occurs via oral transmission and through direct observation. PMID:27478479
A model for indexing medical documents combining statistical and symbolic knowledge.
Avillach, Paul; Joubert, Michel; Fieschi, Marius
2007-10-11
To develop and evaluate an information processing method based on terminologies, in order to index medical documents in any given documentary context. We designed a model using both symbolic general knowledge extracted from the Unified Medical Language System (UMLS) and statistical knowledge extracted from a domain of application. Using statistical knowledge allowed us to contextualize the general knowledge for every particular situation. For each document studied, the extracted terms are ranked to highlight the most significant ones. The model was tested on a set of 17,079 French standardized discharge summaries (SDSs). The most important ICD-10 term of each SDS was ranked 1st or 2nd by the method in nearly 90% of the cases. The use of several terminologies leads to more precise indexing. The improvement achieved in the model's implementation performance as a result of using semantic relationships is encouraging.
Design of Automatic Extraction Algorithm of Knowledge Points for MOOCs
Chen, Haijian; Han, Dongmei; Zhao, Lina
2015-01-01
In recent years, Massive Open Online Courses (MOOCs) have become very popular among college students and have a powerful impact on academic institutions. In the MOOCs environment, knowledge discovery and knowledge sharing are very important, and they are currently often achieved by ontology techniques. In building ontologies, automatic extraction technology is crucial. Because general text mining algorithms are not particularly effective on online course material, we designed the automatic extraction of course knowledge points (AECKP) algorithm for online courses. It includes document classification, Chinese word segmentation, and POS tagging for each document. The Vector Space Model (VSM) is used to calculate similarity, and weights are designed to optimize the TF-IDF output values; the terms with higher scores are selected as knowledge points. Course documents for "C programming language" were selected for the experiment in this study. The results show that the proposed approach achieves satisfactory accuracy and recall rates. PMID:26448738
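The TF-IDF-based selection of knowledge points can be sketched with scikit-learn. The English stand-in documents and the top-3 cut-off are assumptions; the described system additionally performs Chinese word segmentation, POS tagging, and VSM-based weight optimization.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Stand-in course documents (the paper uses Chinese "C programming" materials).
docs = [
    "A pointer stores the memory address of another variable.",
    "Arrays and pointers are closely related in C.",
    "A for loop repeats a block of statements a fixed number of times.",
]

# TF-IDF over the course documents; the highest-weighted terms per document
# are taken as candidate knowledge points.
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)
terms = np.array(vec.get_feature_names_out())

for i in range(len(docs)):
    row = X[i].toarray().ravel()
    top = terms[row.argsort()[::-1][:3]]
    print(f"doc {i}: candidate knowledge points = {list(top)}")
```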
A Model for Indexing Medical Documents Combining Statistical and Symbolic Knowledge.
Avillach, Paul; Joubert, Michel; Fieschi, Marius
2007-01-01
OBJECTIVES: To develop and evaluate an information processing method based on terminologies, in order to index medical documents in any given documentary context. METHODS: We designed a model using both symbolic general knowledge extracted from the Unified Medical Language System (UMLS) and statistical knowledge extracted from a domain of application. Using statistical knowledge allowed us to contextualize the general knowledge for every particular situation. For each document studied, the extracted terms are ranked to highlight the most significant ones. The model was tested on a set of 17,079 French standardized discharge summaries (SDSs). RESULTS: The most important ICD-10 term of each SDS was ranked 1st or 2nd by the method in nearly 90% of the cases. CONCLUSIONS: The use of several terminologies leads to more precise indexing. The improvement achieved in the model’s implementation performances as a result of using semantic relationships is encouraging. PMID:18693792
An Information Extraction Framework for Cohort Identification Using Electronic Health Records
Liu, Hongfang; Bielinski, Suzette J.; Sohn, Sunghwan; Murphy, Sean; Wagholikar, Kavishwar B.; Jonnalagadda, Siddhartha R.; Ravikumar, K.E.; Wu, Stephen T.; Kullo, Iftikhar J.; Chute, Christopher G
Information extraction (IE), a natural language processing (NLP) task that automatically extracts structured or semi-structured information from free text, has become popular in the clinical domain for supporting automated systems at point-of-care and enabling secondary use of electronic health records (EHRs) for clinical and translational research. However, a high performance IE system can be very challenging to construct due to the complexity and dynamic nature of human language. In this paper, we report an IE framework for cohort identification using EHRs that is a knowledge-driven framework developed under the Unstructured Information Management Architecture (UIMA). A system to extract specific information can be developed by subject matter experts through expert knowledge engineering of the externalized knowledge resources used in the framework. PMID:24303255
Identification of research hypotheses and new knowledge from scientific literature.
Shardlow, Matthew; Batista-Navarro, Riza; Thompson, Paul; Nawaz, Raheel; McNaught, John; Ananiadou, Sophia
2018-06-25
Text mining (TM) methods have been used extensively to extract relations and events from the literature. In addition, TM techniques have been used to extract various types or dimensions of interpretative information, known as Meta-Knowledge (MK), from the context of relations and events, e.g. negation, speculation, certainty and knowledge type. However, most existing methods have focussed on the extraction of individual dimensions of MK, without investigating how they can be combined to obtain even richer contextual information. In this paper, we describe a novel, supervised method to extract new MK dimensions that encode Research Hypotheses (an author's intended knowledge gain) and New Knowledge (an author's findings). The method incorporates various features, including a combination of simple MK dimensions. We identify previously explored dimensions and then use a random forest to combine these with linguistic features into a classification model. To facilitate evaluation of the model, we have enriched two existing corpora annotated with relations and events, i.e., a subset of the GENIA-MK corpus and the EU-ADR corpus, by adding attributes to encode whether each relation or event corresponds to Research Hypothesis or New Knowledge. In the GENIA-MK corpus, these new attributes complement simpler MK dimensions that had previously been annotated. We show that our approach is able to assign different types of MK dimensions to relations and events with a high degree of accuracy. Firstly, our method is able to improve upon the previously reported state of the art performance for an existing dimension, i.e., Knowledge Type. Secondly, we also demonstrate high F1-score in predicting the new dimensions of Research Hypothesis (GENIA: 0.914, EU-ADR 0.802) and New Knowledge (GENIA: 0.829, EU-ADR 0.836). We have presented a novel approach for predicting New Knowledge and Research Hypothesis, which combines simple MK dimensions to achieve high F1-scores. The extraction of such information is valuable for a number of practical TM applications.
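The classification step, combining simple meta-knowledge dimensions with linguistic cues in a random forest, can be sketched as follows. The feature columns, toy labels, and values are invented for illustration and are not the GENIA-MK or EU-ADR annotations or the authors' full feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical feature table for annotated events: simple meta-knowledge
# dimensions (already assigned) plus a couple of linguistic cues.
# Columns: [knowledge_type, certainty, is_negated, has_hypothesis_cue, future_tense]
X = np.array([
    [0, 2, 0, 1, 1],   # investigation-type, "we hypothesise", future tense
    [1, 2, 0, 0, 0],   # observation-type, asserted in past tense
    [0, 1, 0, 1, 1],
    [1, 2, 0, 0, 0],
    [0, 2, 0, 1, 0],
    [1, 1, 1, 0, 0],
])
# Labels: 1 = Research Hypothesis, 0 = New Knowledge (toy annotations)
y = np.array([1, 0, 1, 0, 1, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([[0, 2, 0, 1, 1]]))   # likely Research Hypothesis
print(clf.feature_importances_)          # which dimensions carry the signal
```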
Lynx: a database and knowledge extraction engine for integrative medicine
Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T. Conrad; Maltsev, Natalia
2014-01-01
We have developed Lynx (http://lynx.ci.uchicago.edu)—a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces. PMID:24270788
A logical model of cooperating rule-based systems
NASA Technical Reports Server (NTRS)
Bailin, Sidney C.; Moore, John M.; Hilberg, Robert H.; Murphy, Elizabeth D.; Bahder, Shari A.
1989-01-01
A model is developed to assist in the planning, specification, development, and verification of space information systems involving distributed rule-based systems. The model is based on an analysis of possible uses of rule-based systems in control centers. This analysis is summarized as a data-flow model for a hypothetical intelligent control center. From this data-flow model, the logical model of cooperating rule-based systems is extracted. This model consists of four layers of increasing capability: (1) communicating agents, (2) belief-sharing knowledge sources, (3) goal-sharing interest areas, and (4) task-sharing job roles.
Data Mining and Knowledge Discovery tools for exploiting big Earth-Observation data
NASA Astrophysics Data System (ADS)
Espinoza Molina, D.; Datcu, M.
2015-04-01
The continuous increase in the size of the archives and in the variety and complexity of Earth-Observation (EO) sensors requires new methodologies and tools that allow the end-user to access a large image repository, to extract and to infer knowledge about the patterns hidden in the images, to retrieve dynamically a collection of relevant images, and to support the creation of emerging applications (e.g.: change detection, global monitoring, disaster and risk management, image time series, etc.). In this context, we are concerned with providing a platform for data mining and knowledge discovery of content from EO archives. The platform's goal is to implement a communication channel between Payload Ground Segments and the end-user, who receives the content of the data coded in an understandable format associated with semantics that is ready for immediate exploitation. It will provide the user with automated tools to explore and understand the content of highly complex image archives. The challenge lies in the extraction of meaningful information and understanding observations of large extended areas, over long periods of time, with a broad variety of EO imaging sensors in synergy with other related measurements and data. The platform is composed of several components: 1) ingestion of EO images and related data providing basic features for image analysis, 2) a query engine based on metadata, semantics and image content, 3) data mining and knowledge discovery tools for supporting the interpretation and understanding of image content, and 4) semantic definition of the image content via machine learning methods. All these components are integrated and supported by a relational database management system, ensuring the integrity and consistency of terabytes of Earth Observation data.
Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif
2008-03-01
High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services. The first level provides tools for extracting spatiotemporal knowledge from image sets and the second level provides high-level knowledge management and reasoning services. We then present cellular imaging markup language, an extensible markup language-based language for modeling of biological images and representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.
Abraham, John; Zhang, Aijun; Angeli, Sergio; Abubeker, Sitra; Michel, Caryn; Feng, Yan; Rodriguez-Saona, Cesar
2015-04-01
Native to Southeast Asia, the spotted wing drosophila, Drosophila suzukii Matsumura (Diptera: Drosophilidae), has become a serious pest of soft-skinned fruit crops since its introduction into North America and Europe in 2008. Current monitoring strategies use baits based on fermentation products; however, to date, no fruit-based volatile blends attractive to this fly have been identified. This is particularly important because females are able to cut into the epicarp of ripening fruit for oviposition. Thus, we conducted studies to: 1) investigate the behavioral responses of adult D. suzukii to volatiles from blueberry, cherry, raspberry, and strawberry fruit extracts; 2) identify the antennally active compounds from the most attractive among the tested extracts (raspberry) using gas chromatography (GC)-mass spectrometry and coupled gas chromatography -electroantennographic detection (GC-EAD); and 3) test a synthetic blend containing the EAD-active compounds identified from raspberry extract on adult attraction. In olfactometer studies, both female and male D. suzukii were attracted to all four fruit extracts. The attractiveness of the fruit extracts ranks as: raspberry ≥ strawberry > blueberry ≥ cherry. GC analyses showed that the fruit extracts emit distinct volatile compounds. In GC-EAD experiments, 11 raspberry extract volatiles consistently elicited antennal responses in D. suzukii. In choice test bioassays, a synthetic EAD-active blend attracted more D. suzukii than a blank control, but was not as attractive as the raspberry extract. To our knowledge, this is the first report of a behaviorally and antennally active blend of host fruit volatiles attractive to D. suzukii, offering promising opportunities for the development of improved monitoring and behaviourally based management tools. © The Authors 2015. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi
2017-01-01
The workflow for remote sensing quantitative retrieval is the 'bridge' between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. The workflow hides low-level implementation details of the Grid and hence enables users to focus on higher levels of the application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale, complicated applications of remote sensing science. The validation of workflows is important in order to support large-scale, sophisticated scientific computation processes with enhanced performance and to minimize the potential waste of time and resources. To investigate the semantic correctness of user-defined workflows, in this paper we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with an ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameter matching error validation.
Bejan, Cosmin Adrian; Wei, Wei-Qi; Denny, Joshua C
2015-01-01
Objective To evaluate the contribution of the MEDication Indication (MEDI) resource and SemRep for identifying treatment relations in clinical text. Materials and methods We first processed clinical documents with SemRep to extract the Unified Medical Language System (UMLS) concepts and the treatment relations between them. Then, we incorporated MEDI into a simple algorithm that identifies treatment relations between two concepts if they match a medication-indication pair in this resource. For a better coverage, we expanded MEDI using ontology relationships from RxNorm and UMLS Metathesaurus. We also developed two ensemble methods, which combined the predictions of SemRep and the MEDI algorithm. We evaluated our selected methods on two datasets, a Vanderbilt corpus of 6864 discharge summaries and the 2010 Informatics for Integrating Biology and the Bedside (i2b2)/Veteran's Affairs (VA) challenge dataset. Results The Vanderbilt dataset included 958 manually annotated treatment relations. A double annotation was performed on 25% of relations with high agreement (Cohen's κ = 0.86). The evaluation consisted of comparing the manual annotated relations with the relations identified by SemRep, the MEDI algorithm, and the two ensemble methods. On the first dataset, the best F1-measure results achieved by the MEDI algorithm and the union of the two resources (78.7 and 80, respectively) were significantly higher than the SemRep results (72.3). On the second dataset, the MEDI algorithm achieved better precision and significantly lower recall values than the best system in the i2b2 challenge. The two systems obtained comparable F1-measure values on the subset of i2b2 relations with both arguments in MEDI. Conclusions Both SemRep and MEDI can be used to extract treatment relations from clinical text. Knowledge-based extraction with MEDI outperformed use of SemRep alone, but superior performance was achieved by integrating both systems. The integration of knowledge-based resources such as MEDI into information extraction systems such as SemRep and the i2b2 relation extractors may improve treatment relation extraction from clinical text. PMID:25336593
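The MEDI matching rule and the "union" ensemble described above can be pictured with a minimal sketch; the concept identifiers below are placeholders rather than real UMLS CUIs, and SemRep output is simply assumed to be available as a set of concept pairs.

```python
# Hedged sketch of the MEDI-based rule and the union ensemble: a
# (medication, problem) concept pair is labelled a treatment relation if it
# appears in a medication-indication dictionary, and the ensemble keeps any
# pair predicted by either MEDI matching or SemRep. All identifiers are
# placeholders, not real UMLS CUIs.
medi_pairs = {              # medication-indication pairs (placeholder IDs)
    ("C01", "C02"),
    ("C03", "C04"),
}

def medi_predict(candidate_pairs):
    return {p for p in candidate_pairs if p in medi_pairs}

def ensemble_union(candidate_pairs, semrep_predictions):
    return medi_predict(candidate_pairs) | set(semrep_predictions)

candidates = {("C01", "C02"), ("C05", "C06")}
semrep_out = {("C05", "C06")}
print(ensemble_union(candidates, semrep_out))
```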
MMKG: An approach to generate metallic materials knowledge graph based on DBpedia and Wikipedia
NASA Astrophysics Data System (ADS)
Zhang, Xiaoming; Liu, Xin; Li, Xin; Pan, Dongyu
2017-02-01
The research and development of metallic materials play an important role in today's society, and meanwhile a large amount of metallic materials knowledge is generated and made available on the Web (e.g., Wikipedia) for materials experts. However, due to the diversity and complexity of metallic materials knowledge, utilizing this knowledge can be inconvenient. The idea of a knowledge graph (e.g., DBpedia) provides a good way to organize the knowledge into a comprehensive entity network. Therefore, the motivation of our work is to generate a metallic materials knowledge graph (MMKG) using the knowledge available on the Web. In this paper, an approach is proposed to build the MMKG based on DBpedia and Wikipedia. First, we use an algorithm based on directly linked sub-graph semantic distance (DLSSD) to preliminarily extract metallic materials entities from DBpedia according to some predefined seed entities; then, based on the results of the preliminary extraction, we use an algorithm that considers both semantic distance and string similarity (SDSS) to perform the further extraction. Second, due to the absence of materials properties in DBpedia, we use an ontology-based method to extract properties knowledge from the HTML tables of the corresponding Wikipedia Web pages to enrich the MMKG. A materials ontology is used to locate materials properties tables as well as to identify the structure of the tables. The proposed approach is evaluated by precision, recall, F1 and time performance, and the appropriate thresholds for the algorithms in our approach are determined through experiments. The experimental results show that our approach achieves the expected performance. A tool prototype is also designed to facilitate the process of building the MMKG as well as to demonstrate the effectiveness of our approach.
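A minimal sketch of an SDSS-style score, assuming semantic similarity is approximated by category overlap and string similarity by a normalized edit ratio; the actual DLSSD and SDSS formulas are defined in the paper and are not reproduced here.

```python
# Illustrative combination of a semantic similarity with a string similarity,
# in the spirit of the SDSS step described above. Semantic similarity is
# approximated here by Jaccard overlap of DBpedia categories (an assumption)
# and string similarity by difflib's normalized ratio; weights are arbitrary.
from difflib import SequenceMatcher

def semantic_similarity(categories_a, categories_b):
    a, b = set(categories_a), set(categories_b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def string_similarity(label_a, label_b):
    return SequenceMatcher(None, label_a.lower(), label_b.lower()).ratio()

def sdss_score(candidate, seed, alpha=0.6):
    return (alpha * semantic_similarity(candidate["categories"], seed["categories"])
            + (1 - alpha) * string_similarity(candidate["label"], seed["label"]))

seed = {"label": "stainless steel", "categories": {"Steels", "Alloys"}}
candidate = {"label": "maraging steel", "categories": {"Steels"}}
print(round(sdss_score(candidate, seed), 3))
```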
Deep Logic Networks: Inserting and Extracting Knowledge From Deep Belief Networks.
Tran, Son N; d'Avila Garcez, Artur S
2018-02-01
Developments in deep learning have seen the use of layerwise unsupervised learning combined with supervised learning for fine-tuning. With this layerwise approach, a deep network can be seen as a more modular system that lends itself well to learning representations. In this paper, we investigate whether such modularity can be useful to the insertion of background knowledge into deep networks, whether it can improve learning performance when it is available, and to the extraction of knowledge from trained deep networks, and whether it can offer a better understanding of the representations learned by such networks. To this end, we use a simple symbolic language-a set of logical rules that we call confidence rules-and show that it is suitable for the representation of quantitative reasoning in deep networks. We show by knowledge extraction that confidence rules can offer a low-cost representation for layerwise networks (or restricted Boltzmann machines). We also show that layerwise extraction can produce an improvement in the accuracy of deep belief networks. Furthermore, the proposed symbolic characterization of deep networks provides a novel method for the insertion of prior knowledge and training of deep networks. With the use of this method, a deep neural-symbolic system is proposed and evaluated, with the experimental results indicating that modularity through the use of confidence rules and knowledge insertion can be beneficial to network performance.
Reactive extraction at liquid-liquid systems
NASA Astrophysics Data System (ADS)
Wieszczycka, Karolina
2018-01-01
The chapter summarizes the state of knowledge about metal transport in two-phase systems. The first part of this review focuses on the distribution law and the determination of the main factors in classical solvent extraction (solubility and polarity of the solute, as well as inter- and intramolecular interactions). The next part of the chapter is devoted to reactive solvent extraction and molecular modeling, which require knowledge of the type of extractant, complexation mechanisms, metal ion speciation and oxidation during complex formation, and other parameters needed to understand the extraction process. The kinetic data needed for proper modeling, simulation and design of processes required for critical separations are also discussed. Extraction in liquid-solid systems using solvent-impregnated resins is partially identical to the corresponding solvent extraction; therefore, this subject is also presented in all aspects of the separation process (equilibrium, mechanism, kinetics).
Opinion mining feature-level using Naive Bayes and feature extraction based analysis dependencies
NASA Astrophysics Data System (ADS)
Sanda, Regi; Baizal, Z. K. Abdurahman; Nhita, Fhira
2015-12-01
The development of the internet and technology has had a major impact and has given rise to a new kind of business called e-commerce. Many e-commerce sites provide convenience in transactions, and consumers can also provide reviews or opinions on the products they purchased. These opinions can be used by consumers and producers: consumers can learn the advantages and disadvantages of particular features of a product, and producers can analyse their own strengths and weaknesses as well as those of competitors' products. The large number of opinions calls for a method that lets the reader grasp the point of the opinions as a whole. The idea emerged from review summarization, which summarizes the overall opinion based on the sentiment and features contained. In this study, the domain that is the main focus is digital cameras. This research consisted of four steps: 1) giving the system the knowledge to recognize the semantic orientation of an opinion; 2) identifying the features of the product; 3) identifying whether an opinion is positive or negative; and 4) summarizing the results. This research discusses methods such as Naïve Bayes for sentiment classification and a feature extraction algorithm based on dependency analysis, which is one of the tools in Natural Language Processing (NLP), together with a knowledge-based dictionary that is useful for handling implicit features. The end result of the research is a summary that contains the consumers' reviews organized by feature and sentiment. With the proposed method, the accuracy of sentiment classification is 81.2% for positive test data and 80.2% for negative test data, and the accuracy of feature extraction reaches 90.3%.
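A minimal sketch of the Naïve Bayes sentiment step with bag-of-words features; the review texts and labels are invented examples, and the dependency-based feature extraction and implicit-feature dictionary are not reproduced here.

```python
# Minimal Naive Bayes sentiment classifier for camera reviews, as a sketch of
# the classification step described above. Training texts/labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = [
    "the battery life of this camera is excellent",
    "autofocus is fast and the lens is sharp",
    "the battery drains quickly and the menu is confusing",
    "picture quality is poor in low light",
]
labels = ["positive", "positive", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reviews, labels)

print(model.predict(["the lens is sharp but the menu is confusing"]))
```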
NASA Astrophysics Data System (ADS)
Schmidt, Frauke; Koch, Boris P.; Witt, Matthias; Hinrichs, Kai-Uwe
2014-09-01
Dissolved organic matter (DOM) in marine sediments is a complex mixture of thousands of individual constituents that participate in biogeochemical reactions and serve as substrates for benthic microbes. Knowledge of the molecular composition of DOM is a prerequisite for a comprehensive understanding of the biogeochemical processes in sediments. In this study, interstitial water DOM was extracted with Rhizon samplers from a sediment core from the Black Sea and compared to the corresponding water-extractable organic matter fraction (<0.4 μm) obtained by Soxhlet extraction, which mobilizes labile particulate organic matter and DOM. After solid phase extraction (SPE) of DOM, samples were analyzed for the molecular composition by Fourier Transform Ion-Cyclotron Resonance Mass Spectrometry (FT-ICR MS) with electrospray ionization in negative ion mode. The average SPE extraction yield of the dissolved organic carbon (DOC) in interstitial water was 63%, whereas less than 30% of the DOC in Soxhlet-extracted organic matter was recovered. Nevertheless, Soxhlet extraction yielded up to 4.35% of the total sedimentary organic carbon, which is more than 30-times the organic carbon content of the interstitial water. While interstitial water DOM consisted primarily of carbon-, hydrogen- and oxygen-bearing compounds, Soxhlet extracts yielded more complex FT-ICR mass spectra with more peaks and higher abundances of nitrogen- and sulfur-bearing compounds. The molecular composition of both sample types was affected by the geochemical conditions in the sediment; elevated concentrations of HS- promoted the early diagenetic sulfurization of organic matter. The Soxhlet extracts from shallow sediment contained specific three- and four-nitrogen-bearing molecular formulas that were also detected in bacterial cell extracts and presumably represent proteinaceous molecules. These compounds decreased with increasing sediment depth while one- and two-nitrogen-bearing molecules increased, resulting in a higher similarity of both sample types in the deep sediment. In summary, Soxhlet extraction of sediments accessed a larger and more complex pool of organic matter than present in interstitial water DOM.
Awosan, K J; Ibrahim, Mto; Saidu, S A; Ma'aji, S M; Danfulani, M; Yunusa, E U; Ikhuenbor, D B; Ige, T A
2016-08-01
Use of ionizing radiation in medical imaging for diagnostic and interventional purposes has risen dramatically in recent years with a concomitant increase in exposure of patients and health workers to radiation hazards. To assess the knowledge of radiation hazards, radiation protection practices and clinical profile of health workers in UDUTH, Sokoto, Nigeria. A cross-sectional study was conducted among 110 Radiology, Radiotherapy and Dentistry staff selected by universal sampling technique. The study comprised of administration of standardized semi-structured pre-tested questionnaire (to obtain information on socio-demographic characteristics, knowledge of radiation hazards, and radiation protection practices of participants), clinical assessment (comprising of chest X-ray, abdominal ultrasound and laboratory investigation on hematological parameters), and evaluation of radiation exposure of participants (extracted from existing hospital records on their radiation exposure status). The participants were aged 20 to 65 years (mean = 34.04 ± 8.83), most of them were males (67.3%) and married (65.7%). Sixty five (59.1%) had good knowledge of radiation hazards, 58 (52.7%) had good knowledge of Personal Protective Devices (PPDs), less than a third, 30 (27.3%) consistently wore dosimeter, and very few (10.9% and below) consistently wore the various PPDs at work. The average annual radiation exposure over a 4 year period ranged from 0.0475mSv to 1.8725mSv. Only 1 (1.2%) of 86 participants had abnormal chest X-ray findings, 8 (9.4%) of 85 participants had abnormal abdominal ultrasound findings; while 17 (15.5%) and 11 (10.0%) of 110 participants had anemia and leucopenia respectively. This study demonstrated poor radiation protection practices despite good knowledge of radiation hazards among the participants, but radiation exposure and prevalence of abnormal clinical conditions were found to be low. Periodic in-service training and monitoring on radiation safety was suggested.
Utilization of electrical impedance imaging for estimation of in-vivo tissue resistivities
NASA Astrophysics Data System (ADS)
Eyuboglu, B. Murat; Pilkington, Theo C.
1993-08-01
In order to determine the in vivo resistivity of tissues in the thorax, the possibility of combining electrical impedance imaging (EII) techniques with (1) anatomical data extracted from high resolution images, (2) a priori knowledge of tissue resistivities, and (3) a priori noise information was assessed in this study. A Least Square Error Estimator (LSEE) and a statistically constrained Minimum Mean Square Error Estimator (MiMSEE) were implemented to estimate regional electrical resistivities from potential measurements made on the body surface. A two-dimensional boundary element model of the human thorax, which consists of four different conductivity regions (the skeletal muscle, the heart, the right lung, and the left lung), was adopted to simulate the measured EII torso potentials. The calculated potentials were then perturbed by simulated instrumentation noise. The signal information used to form the statistical constraint for the MiMSEE was obtained from a priori knowledge of the physiological range of tissue resistivities. The noise constraint was determined from a priori knowledge of errors due to linearization of the forward problem and to the instrumentation noise.
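For a linearized forward model v = Jx + n, the two estimator forms mentioned above can be sketched as follows; the matrices and values are random placeholders, and the statistically constrained estimator is written in the standard linear MMSE form, which is a simplification of the paper's MiMSEE.

```python
# Sketch of the two estimators for a linearized forward model v = J x + n
# (v: surface potentials, x: regional resistivity parameters, n: noise).
# J, x_true, Cx and Cn below are arbitrary placeholders, not physiological data.
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(16, 4))          # sensitivity (Jacobian) matrix
x_true = np.array([2.0, 6.0, 12.0, 12.5])
v = J @ x_true + 0.05 * rng.normal(size=16)

# Least Square Error Estimator (LSEE)
x_ls, *_ = np.linalg.lstsq(J, v, rcond=None)

# Statistically constrained estimator (standard linear MMSE form):
# prior covariance Cx stands in for the physiological resistivity range,
# Cn for the instrumentation/linearization noise.
Cx = np.diag([1.0, 4.0, 9.0, 9.0])
Cn = 0.05**2 * np.eye(16)
x_mmse = Cx @ J.T @ np.linalg.solve(J @ Cx @ J.T + Cn, v)

print(np.round(x_ls, 2), np.round(x_mmse, 2))
```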
Overview of refinement procedures within REFMAC5: utilizing data from different sources.
Kovalevskiy, Oleg; Nicholls, Robert A; Long, Fei; Carlon, Azzurra; Murshudov, Garib N
2018-03-01
Refinement is a process that involves bringing into agreement the structural model, available prior knowledge and experimental data. To achieve this, the refinement procedure optimizes a posterior conditional probability distribution of model parameters, including atomic coordinates, atomic displacement parameters (B factors), scale factors, parameters of the solvent model and twin fractions in the case of twinned crystals, given observed data such as observed amplitudes or intensities of structure factors. A library of chemical restraints is typically used to ensure consistency between the model and the prior knowledge of stereochemistry. If the observation-to-parameter ratio is small, for example when diffraction data only extend to low resolution, the Bayesian framework implemented in REFMAC5 uses external restraints to inject additional information extracted from structures of homologous proteins, prior knowledge about secondary-structure formation and even data obtained using different experimental methods, for example NMR. The refinement procedure also generates the `best' weighted electron-density maps, which are useful for further model (re)building. Here, the refinement of macromolecular structures using REFMAC5 and related tools distributed as part of the CCP4 suite is discussed.
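The Bayesian objective described above can be stated generically as below; this is the standard textbook decomposition consistent with the abstract, not a quotation of the exact REFMAC5 target function.

```latex
% Generic restrained-refinement target: the posterior over model parameters m
% given observations o combines a data likelihood with a prior encoding
% chemical and external restraints; refinement minimizes the negative
% log-posterior (additive constant omitted).
P(m \mid o) \;\propto\; P(o \mid m)\, P(m),
\qquad
-\log P(m \mid o) \;=\;
\underbrace{-\log P(o \mid m)}_{\text{data term}}
\;+\;
\underbrace{-\log P(m)}_{\text{restraint terms}}
\;+\; \text{const}.
```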
Liu, Bo; Wu, Huayi; Wang, Yandong; Liu, Wenming
2015-01-01
Main road features extracted from remotely sensed imagery play an important role in many civilian and military applications, such as updating Geographic Information System (GIS) databases, urban structure analysis, spatial data matching and road navigation. Current methods for road feature extraction from high-resolution imagery are typically based on threshold value segmentation. It is difficult however, to completely separate road features from the background. We present a new method for extracting main roads from high-resolution grayscale imagery based on directional mathematical morphology and prior knowledge obtained from the Volunteered Geographic Information found in the OpenStreetMap. The two salient steps in this strategy are: (1) using directional mathematical morphology to enhance the contrast between roads and non-roads; (2) using OpenStreetMap roads as prior knowledge to segment the remotely sensed imagery. Experiments were conducted on two ZiYuan-3 images and one QuickBird high-resolution grayscale image to compare our proposed method to other commonly used techniques for road feature extraction. The results demonstrated the validity and better performance of the proposed method for urban main road feature extraction. PMID:26397832
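A hedged sketch of the two salient steps, assuming roads appear brighter than their surroundings: directional grey-scale openings enhance elongated structures, and a rasterized OpenStreetMap mask supplies the prior used to pick the segmentation level. The window length, margin and thresholding rule are illustrative choices, not the paper's parameters.

```python
# Step 1: enhance elongated bright structures with grey-scale openings over
# directional (linear) structuring elements, keeping the best response.
# Step 2: use a rasterized OSM road mask as prior knowledge to choose the
# intensity level at which the enhanced image is segmented.
import numpy as np
from scipy import ndimage

def directional_footprints(length=15):
    horiz = np.ones((1, length), dtype=bool)
    vert = np.ones((length, 1), dtype=bool)
    diag = np.eye(length, dtype=bool)
    anti = np.fliplr(np.eye(length, dtype=bool))
    return [horiz, vert, diag, anti]

def enhance_roads(gray):
    responses = [ndimage.grey_opening(gray, footprint=fp)
                 for fp in directional_footprints()]
    return np.max(responses, axis=0)

def segment_with_osm_prior(enhanced, osm_mask, margin=10.0):
    # accept pixels whose enhanced value reaches the mean intensity observed
    # under the OSM road mask (minus a margin) -- the OSM layer acts as prior
    road_level = enhanced[osm_mask].mean()
    return enhanced >= (road_level - margin)

gray = np.random.default_rng(1).integers(0, 255, size=(128, 128)).astype(float)
osm_mask = np.zeros_like(gray, dtype=bool)
osm_mask[60:64, :] = True                 # a rasterized OSM road, for example
print(segment_with_osm_prior(enhance_roads(gray), osm_mask).sum())
```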
Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.
Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming
2017-01-01
Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost.
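One way to picture the rank-based selection described above is the heuristic sketch below: candidate facts are scored by extractor uncertainty plus the number of semantic-constraint conflicts they participate in, and only the top-k are sent to the crowd. The scoring weights and example facts are assumptions made for illustration.

```python
# Illustrative ranking heuristic: prioritize uncertain facts that clash with
# other candidates under a semantic constraint (here, a functional constraint
# that a city has one country it is capital of). Weights are arbitrary.
def benefit(fact, confidence, conflicts):
    uncertainty = 1.0 - confidence            # least-confident facts first
    return uncertainty + 0.5 * len(conflicts.get(fact, ()))

def select_for_crowdsourcing(candidates, conflicts, k=2):
    ranked = sorted(candidates,
                    key=lambda f: benefit(f, candidates[f], conflicts),
                    reverse=True)
    return ranked[:k]

candidates = {                                   # fact -> extractor confidence
    ("Rome", "capitalOf", "Italy"): 0.95,
    ("Rome", "capitalOf", "France"): 0.55,
    ("Paris", "capitalOf", "France"): 0.90,
}
conflicts = {                                    # functional-constraint clashes
    ("Rome", "capitalOf", "France"): [("Paris", "capitalOf", "France")],
    ("Paris", "capitalOf", "France"): [("Rome", "capitalOf", "France")],
}
print(select_for_crowdsourcing(candidates, conflicts))
```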
Bérubé, Marie-Ève; Poitras, Stéphane; Bastien, Marc; Laliberté, Lydie-Anne; Lacharité, Anyck; Gross, Douglas P
2018-03-01
Many physiotherapists underuse evidence-based practice guidelines or recommendations when treating patients with musculoskeletal disorders, yet synthesis of knowledge translation interventions used within the field of physiotherapy fails to offer clear conclusions to guide the implementation of clinical practice guidelines. To evaluate the effectiveness of various knowledge translation interventions used to implement changes in the practice of current physiotherapists treating common musculoskeletal issues. A computerized literature search of MEDLINE, CINHAL and ProQuest of systematic reviews (from inception until May 2016) and primary research studies (from January 2010 until June 2016). Eligibility criteria specified articles evaluating interventions for translating knowledge into physiotherapy practice. Two reviewers independently screened the titles and abstracts, reviewed full-text articles, performed data extraction, and performed quality assessment. Of a total of 13014 articles located and titles and abstracts screened, 34 studies met the inclusion criteria, including three overlapping publications, resulting in 31 individual studies. Knowledge translation interventions appear to have resulted in a positive change in physiotherapist beliefs, attitudes, skills and guideline awareness. However, no consistent improvement in clinical practice, patient and economic outcomes were observed. The studies included had small sample sizes and low methodological quality. The heterogeneity of the studies was not conducive to pooling the data. The intensity and type of knowledge translation intervention seem to have an effect on practice change. More research targeting financial, organizational and regulatory knowledge translation interventions is needed. Copyright © 2017 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
Knowledge Acquisition of Generic Queries for Information Retrieval
Seol, Yoon-Ho; Johnson, Stephen B.; Cimino, James J.
2002-01-01
Several studies have identified clinical questions posed by health care professionals to understand the nature of information needs during clinical practice. To support access to digital information sources, it is necessary to integrate the information needs with a computer system. We have developed a conceptual guidance approach in information retrieval, based on a knowledge base that contains the patterns of information needs. The knowledge base uses a formal representation of clinical questions based on the UMLS knowledge sources, called the Generic Query model. To improve the coverage of the knowledge base, we investigated a method for extracting plausible clinical questions from the medical literature. This poster presents the Generic Query model, shows how it is used to represent the patterns of clinical questions, and describes the framework used to extract knowledge from the medical literature.
Hunter, Lawrence; Lu, Zhiyong; Firby, James; Baumgartner, William A; Johnson, Helen L; Ogren, Philip V; Cohen, K Bretonnel
2008-01-01
Background Information extraction (IE) efforts are widely acknowledged to be important in harnessing the rapid advance of biomedical knowledge, particularly in areas where important factual information is published in a diverse literature. Here we report on the design, implementation and several evaluations of OpenDMAP, an ontology-driven, integrated concept analysis system. It significantly advances the state of the art in information extraction by leveraging knowledge in ontological resources, integrating diverse text processing applications, and using an expanded pattern language that allows the mixing of syntactic and semantic elements and variable ordering. Results OpenDMAP information extraction systems were produced for extracting protein transport assertions (transport), protein-protein interaction assertions (interaction) and assertions that a gene is expressed in a cell type (expression). Evaluations were performed on each system, resulting in F-scores ranging from .26 – .72 (precision .39 – .85, recall .16 – .85). Additionally, each of these systems was run over all abstracts in MEDLINE, producing a total of 72,460 transport instances, 265,795 interaction instances and 176,153 expression instances. Conclusion OpenDMAP advances the performance standards for extracting protein-protein interaction predications from the full texts of biomedical research articles. Furthermore, this level of performance appears to generalize to other information extraction tasks, including extracting information about predicates of more than two arguments. The output of the information extraction system is always constructed from elements of an ontology, ensuring that the knowledge representation is grounded with respect to a carefully constructed model of reality. The results of these efforts can be used to increase the efficiency of manual curation efforts and to provide additional features in systems that integrate multiple sources for information extraction. The open source OpenDMAP code library is freely available at PMID:18237434
Vedio, A; Liu, E Z H; Lee, A C K; Salway, S
2017-07-01
Migrant Chinese populations in Western countries have a high prevalence of chronic hepatitis B but often experience poor access to health care and late diagnosis. This systematic review aimed to identify obstacles and supports to timely and appropriate health service use among these populations. Systematic searches resulted in 48 relevant studies published between 1996 and 2015. Data extraction and synthesis were informed by models of healthcare access that highlight the interplay of patient, provider and health system factors. There was strong consistent evidence of low levels of knowledge among patients and community members; but interventions that were primarily focused on increasing knowledge had only modest positive effects on testing and/or vaccination. There was strong consistent evidence that Chinese migrants tend to misunderstand the need for health care for hepatitis B and have low satisfaction with services. Stigma was consistently associated with hepatitis B, and there was weak but consistent evidence of stigma acting as a barrier to care. However, available evidence on the effects of providing culturally appropriate services for hepatitis B on increasing uptake is limited. There was strong consistent evidence that health professionals miss opportunities for testing and vaccination. Practitioner education interventions may be important, but evidence of effectiveness is limited. A simple prompt in patient records for primary care physicians improved the uptake of testing, and a dedicated service increased targeted vaccination coverage for newborns. Further development and more rigorous evaluation of more holistic approaches that address patient, provider and system obstacles are needed. © 2017 The Authors. Journal of Viral Hepatitis Published by John Wiley & Sons Ltd.
Yeo, Tiong Chia; Naming, Margarita; Manurung, Rita
2014-03-01
The Sarawak Biodiversity Centre (SBC) is a state government agency which regulates research and promotes the sustainable use of biodiversity. It has a program on documentation of traditional knowledge (TK) and is well-equipped with facilities for natural product research. SBC maintains a Natural Product Library (NPL) consisting of local plant and microbial extracts for bioprospecting. The NPL is a core discovery platform for screening of bioactive compounds by researchers through a formal agreement with clear benefit sharing obligations. SBC aims to develop partnerships with leading institutions and the industries to explore the benefits of biodiversity.
Reusing Design Knowledge Based on Design Cases and Knowledge Map
ERIC Educational Resources Information Center
Yang, Cheng; Liu, Zheng; Wang, Haobai; Shen, Jiaoqi
2013-01-01
Design knowledge was reused for innovative design work to support designers with product design knowledge and help designers who lack rich experiences to improve their design capacity and efficiency. First, based on the ontological model of product design knowledge constructed by taxonomy, implicit and explicit knowledge was extracted from some…
Lee, Keonsoo; Rho, Seungmin; Lee, Seok-Won
2014-01-01
In a mobile cloud computing environment, the cooperation of distributed computing objects is one of the most important requirements for providing successful cloud services. To satisfy this requirement, all members employed in the cooperation group need to share knowledge for mutual understanding. Although ontology can be the right tool for this goal, there are several issues in building the right ontology. As the cost and complexity of managing knowledge increase with the scale of the knowledge, reducing the size of the ontology is one of the critical issues. In this paper, we propose a method of extracting an ontology module to increase the utility of knowledge. For a given signature, this method extracts an ontology module that is semantically self-contained and fulfills the needs of the service, by considering the syntactic structure and semantic relations of concepts. By employing this module instead of the original ontology, the cooperation of computing objects can be performed with less computing load and complexity. In particular, when multiple external ontologies need to be combined for more complex services, this method can be used to optimize the size of the shared knowledge.
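A simplified sketch of signature-based module extraction in the spirit described above: starting from a seed signature, axioms mentioning any concept in the signature are pulled in and their concepts added, until a fixed point. This is a plain reachability approximation under assumed data structures, not the paper's exact procedure.

```python
# Toy signature-based module extraction: axioms are modelled as
# (axiom_id, set_of_concept_names); the module is grown until no axiom
# outside it mentions a concept already in the signature.
def extract_module(axioms, seed_signature):
    signature = set(seed_signature)
    module, changed = set(), True
    while changed:
        changed = False
        for axiom_id, names in axioms:
            if axiom_id not in module and names & signature:
                module.add(axiom_id)
                signature |= names            # pull in newly mentioned concepts
                changed = True
    return module, signature

ontology = [
    ("ax1", {"Smartphone", "Device"}),
    ("ax2", {"Device", "Sensor"}),
    ("ax3", {"Recipe", "Ingredient"}),        # unrelated branch is left out
]
print(extract_module(ontology, {"Smartphone"}))
```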
Knowledge Representation Of CT Scans Of The Head
NASA Astrophysics Data System (ADS)
Ackerman, Laurens V.; Burke, M. W.; Rada, Roy
1984-06-01
We have been investigating diagnostic knowledge models which assist in the automatic classification of medical images by combining information extracted from each image with knowledge specific to that class of images. In a more general sense we are trying to integrate verbal and pictorial descriptions of disease via representations of knowledge, study automatic hypothesis generation as related to clinical medicine, evolve new mathematical image measures while integrating them into the total diagnostic process, and investigate ways to augment the knowledge of the physician. Specifically, we have constructed an artificial intelligence knowledge model using the technique of a production system blending pictorial and verbal knowledge about the respective CT scan and patient history. It is an attempt to tie together different sources of knowledge representation, picture feature extraction and hypothesis generation. Our knowledge reasoning and representation system (KRRS) works with data at the conscious reasoning level of the practicing physician while at the visual perceptional level we are building another production system, the picture parameter extractor (PPE). This paper describes KRRS and its relationship to PPE.
Pliszka, Barbara
2017-01-01
The pharmaceutical and food industries expect detailed knowledge on the physicochemical properties of elderberry fruit extracts, their stability and microbiological quality, as well as the polyphenol content in elderberry cultivars. The characteristics of the extracts might be additionally modified by citric acid, which improves the stability of anthocyanins and protects processed fruits and syrups from pathogenic microorganisms. The choice of the method with citric acid was a consequence of the physicochemical characteristics of elderberry pigments, which are not stable under the effect of light in alcoholic solutions. The aim of the study was to analyze the properties of elderberry fruit extracts regarding polyphenol content and antiradical activity, as well as their stability and microbiological quality. The plant material consisted of fruit from four cultivars (Alleso, Korsor, Sampo, Samyl) of black elderberry (Sambucus nigra L.). The following were determined in fruit extracts: polyphenolic content (HPLC), antiradical activity (ABTS and DPPH), and stability and microbiological quality. The HPLC analysis of polyphenols demonstrated that the extracts from fruits collected from cv. Samyl had the highest 3-sambubioside cyanidin content and those from cv. Korsor contained the highest quantity of 3-glucoside cyanidin. The extracts from cv. Sampo fruit had a dominant 3-sambubioside-5-glucoside cyanidin and 3,5-diglucoside cyanidin content. The highest quercetin (5.92 mg 100 mg-1 of extract) and caffeic acid (1.21 mg 100 mg-1 of extract) content was found in fruit extracts from cv. Alleso. The cultivars Samyl and Korsor had a higher level of anthocyanins and higher antiradical activity (ABTS) in fruit extracts than cv. Alleso and Sampo. The antiradical activity (DPPH) of fruit extracts from the elderberry cultivars assessed in this research was similar. The degradation index for all fruit extracts was similar (DI = 1.035). The microbiological species detected in the extracts were classified as moulds (Penicillum sp., Aspergillus sp.) and yeasts (Rhodotorula sp., Torulopsis sp., Trichosporon sp., Saccharomyces sp.). The research findings may support the selection of certain cultivars for industrial applications. The high stability of anthocyanins and low level of microbiological impurities in elderberry extracts ensure the high quality of such a raw material in food and pharmaceutical processing.
DesAutels, Spencer J; Fox, Zachary E; Giuse, Dario A; Williams, Annette M; Kou, Qing-Hua; Weitkamp, Asli; Patel, Neal R; Bettinsoli Giuse, Nunzia
2016-01-01
Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. To have a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems.
Chapter 8. Tea and Cancer Prevention: Epidemiological Studies
Yuan, Jian-Min; Sun, Canlan; Butler, Lesley M.
2011-01-01
Experimental studies have consistently shown the inhibitory activities of tea extracts on tumorigenesis in multiple model systems. Epidemiologic studies, however, have produced inconclusive results in humans. A comprehensive review was conducted to assess the current knowledge on tea consumption and risk of cancers in humans. In general, consumption of black tea was not associated with lower risk of cancer. High intake of green tea was consistently associated with reduced risk of upper gastrointestinal tract cancers after sufficient control for confounders. Limited data support a protective effect of green tea on lung and hepatocellular carcinogenesis. Although observational studies do not support a beneficial role of tea intake on prostate cancer risk, phase II clinical trials have demonstrated an inhibitory effect of green tea extract against the progression of prostate pre-malignant lesions. Green tea may exert beneficial effects against mammary carcinogenesis in premenopausal women and recurrence of breast cancer. There is no sufficient evidence that supports a protective role of tea intake on the development of cancers of the colorectum, pancreas, urinary tract, glioma, lymphoma, and leukemia. Future prospective observational studies with biomarkers of exposure and phase III clinical trials are required to provide definitive evidence for the hypothesized beneficial effect of tea consumption on cancer formation in humans. PMID:21419224
First Extraction of Transversity from a Global Analysis of Electron-Proton and Proton-Proton Data
NASA Astrophysics Data System (ADS)
Radici, Marco; Bacchetta, Alessandro
2018-05-01
We present the first extraction of the transversity distribution in the framework of collinear factorization based on the global analysis of pion-pair production in deep-inelastic scattering and in proton-proton collisions with a transversely polarized proton. The extraction relies on the knowledge of dihadron fragmentation functions, which are taken from the analysis of electron-positron annihilation data. For the first time, the transversity is extracted from a global analysis similar to what is usually done for the spin-averaged and helicity distributions. The knowledge of transversity is important for, among other things, detecting possible signals of new physics in high-precision low-energy experiments.
Fonseca, Daniela F. S.; Salvador, Ângelo C.; Santos, Sónia A. O.; Vilela, Carla; Freire, Carmen S. R.; Silvestre, Armando J. D.; Rocha, Sílvia M.
2015-01-01
The lipophilic composition of wild Arbutus unedo L. berries, collected from six locations in Penacova (center of Portugal), as well as some general chemical parameters, namely total soluble solids, pH, titratable acidity, total phenolic content and antioxidant activity was studied in detail to better understand its potential as a source of bioactive compounds. The chemical composition of the lipophilic extracts, focused on the fatty acids, triterpenoids, sterols, long chain aliphatic alcohols and tocopherols, was investigated by gas chromatography–mass spectrometry (GC–MS) analysis of the dichloromethane extracts. The lipophilic extractives of the ripe A. unedo berries ranged from 0.72% to 1.66% (w/w of dry weight), and consisted mainly of triterpenoids, fatty acids and sterols. Minor amounts of long chain aliphatic alcohols and tocopherols were also identified. Forty-one compounds were identified and among these, ursolic acid, lupeol, α-amyrin, linoleic and α-linolenic acids, and β-sitosterol were highlighted as the major components. To the best of our knowledge the current research study provides the most detailed phytochemical repository for the lipophilic composition of A. unedo, and offers valuable information for future valuation and exploitation of these berries. PMID:26110390
Macro- and microscale investigation of selenium speciation in Blackfoot river, Idaho sediments.
Oram, Libbie L; Strawn, Daniel G; Marcus, Matthew A; Fakra, Sirine C; Möller, Gregory
2008-09-15
The transport and bioavailability of selenium in the environment is controlled by its chemical speciation. However, knowledge of the biogeochemistry and speciation of Se in streambed sediment is limited. We investigated the speciation of Se in sediment cores from the Blackfoot River (BFR), Idaho using sequential extractions and synchrotron-based micro-X-ray fluorescence (micro-SXRF). We collected micro-SXRF oxidation state maps of Se in sediments, which had not been done on natural sediment samples. Selective extractions showed that most Se in the sediments is present as either (1) nonextractable Se or (2) base extractable Se. Results from micro-SXRF showed three defined species of Se were present in all four samples: Se(-II,O), Se(IV), and Se(VI). Se(-II,O) was the predominant species in samples from one location, and Se(IV) was the predominant species in samples from a second location. Results from both techniques were consistent, and suggested that the predominant species were Se(-II) species associated with recalcitrant organic matter, and Se(IV) species tightly bound to organic materials. This information can be used to predict the biogeochemical cycling and bioavailability of Se in streambed sediment environments.
Coloristic and antimicrobial behaviour of polymeric substrates using bioactive substances
NASA Astrophysics Data System (ADS)
Coman, D.; Vrînceanu, N.; Oancea, S.; Rîmbu, C.
2016-08-01
A major concern with reducing microbial contamination of healthcare and hygiene products motivated us to seek viable alternatives for creating such barriers. The antimicrobial and antioxidant effects of natural extracts are well known, but their application onto polymeric supports is still challenging to investigate. To our knowledge, the method of natural dyeing of different polymeric substrates using bioactive substances derived from black currant and green walnut shells, in conjunction with biomordants, and its long-term effects have not been consistently reported. The main objective of the study is a comparative study of different polymeric fibrous substrates dyed by means of a laboratory-scale classic methodology with extracts from black currant fruits and green walnut shells, with the assistance of conventional mordants and biomordants (copper sulphate, citric and tannic acids). The use of a biomordant in the dyeing process appears to lead to improved synergistic colouring and antibacterial performance. The main results demonstrated that the extract of green walnut shells reinforced by the biomordant solutions showed the best antimicrobial behaviour. The present research is a milestone in the identification of potential technological alternatives for dyeing synthetic and natural textile supports, quantified and controlled by the antimicrobial response correlated with colorimetric features.
NASA Astrophysics Data System (ADS)
Wang, H.; Ning, X.; Zhang, H.; Liu, Y.; Yu, F.
2018-04-01
The urban boundary is an important indicator for urban sprawl analysis. However, methods of urban boundary extraction have been inconsistent, and construction land or urban impervious surfaces were usually used to represent urban areas in coarse-resolution images, resulting in lower precision and incomparable urban boundary products. To solve the above problems, a semi-automatic method of urban boundary extraction was proposed using high-resolution images and geographic information data. Urban landscape and form characteristics and geographical knowledge were combined to generate a series of standardized rules for urban boundary extraction. Urban boundaries of China's 31 provincial capitals in the years 2000, 2005, 2010 and 2015 were extracted with the above-mentioned method. Compared with two other open urban boundary products, the accuracy of the urban boundaries in this study was the highest. The urban boundaries, together with other thematic data, were integrated to measure and analyse urban sprawl. Results showed that China's provincial capitals underwent rapid urbanization from 2000 to 2015, with their area changing from 6520 square kilometres to 12398 square kilometres. Urban areas of the provincial capitals showed marked regional differences and a high degree of concentration. Urban land use became more intensive in general. The rate of urban sprawl was not in step with the rate of population growth. About sixty percent of the new urban areas came from cultivated land. The paper provides a consistent method of urban boundary extraction and urban sprawl measurement using high-resolution remote sensing images. The urban sprawl results for China's provincial capitals provide valuable urbanization information for government and the public.
EliXR-TIME: A Temporal Knowledge Representation for Clinical Research Eligibility Criteria.
Boland, Mary Regina; Tu, Samson W; Carini, Simona; Sim, Ida; Weng, Chunhua
2012-01-01
Effective clinical text processing requires accurate extraction and representation of temporal expressions. Multiple temporal information extraction models were developed but a similar need for extracting temporal expressions in eligibility criteria (e.g., for eligibility determination) remains. We identified the temporal knowledge representation requirements of eligibility criteria by reviewing 100 temporal criteria. We developed EliXR-TIME, a frame-based representation designed to support semantic annotation for temporal expressions in eligibility criteria by reusing applicable classes from well-known clinical temporal knowledge representations. We used EliXR-TIME to analyze a training set of 50 new temporal eligibility criteria. We evaluated EliXR-TIME using an additional random sample of 20 eligibility criteria with temporal expressions that have no overlap with the training data, yielding 92.7% (76 / 82) inter-coder agreement on sentence chunking and 72% (72 / 100) agreement on semantic annotation. We conclude that this knowledge representation can facilitate semantic annotation of the temporal expressions in eligibility criteria.
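Purely as an illustration of a frame-style representation for one temporal criterion (e.g., "myocardial infarction within 6 months prior to enrollment"), a hypothetical record might look like the sketch below; the slot names are invented and are not the EliXR-TIME schema itself.

```python
# Hypothetical frame for a single temporal eligibility criterion; slot names
# and values are illustrative, not the EliXR-TIME classes.
from dataclasses import dataclass

@dataclass
class TemporalCriterionFrame:
    clinical_event: str       # the clinical concept being constrained
    temporal_relation: str    # e.g. "within", "before", "after"
    duration_value: float
    duration_unit: str
    anchor_event: str         # reference point such as enrollment or surgery

frame = TemporalCriterionFrame(
    clinical_event="myocardial infarction",
    temporal_relation="within",
    duration_value=6,
    duration_unit="month",
    anchor_event="enrollment",
)
print(frame)
```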
Integrating Multiple On-line Knowledge Bases for Disease-Lab Test Relation Extraction.
Zhang, Yaoyun; Soysal, Ergin; Moon, Sungrim; Wang, Jingqi; Tao, Cui; Xu, Hua
2015-01-01
A computable knowledge base containing relations between diseases and lab tests would be a great resource for many biomedical informatics applications. This paper describes our initial step towards establishing a comprehensive knowledge base of disease and lab tests relations utilizing three public on-line resources. LabTestsOnline, MedlinePlus and Wikipedia are integrated to create a freely available, computable disease-lab test knowledgebase. Disease and lab test concepts are identified using MetaMap and relations between diseases and lab tests are determined based on source-specific rules. Experimental results demonstrate a high precision for relation extraction, with Wikipedia achieving the highest precision of 87%. Combining the three sources reached a recall of 51.40%, when compared with a subset of disease-lab test relations extracted from a reference book. Moreover, we found additional disease-lab test relations from on-line resources, indicating they are complementary to existing reference books for building a comprehensive disease and lab test relation knowledge base.
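The integration step can be pictured as merging source-specific extraction results into one table with provenance, so that downstream users can filter relations by how many sources support them; the disease-lab test pairs below are placeholders, not extracted content.

```python
# Toy merge of disease-lab test pairs from three sources into a single
# knowledge base with provenance. Example pairs are placeholders only.
from collections import defaultdict

labtests_online = {("hypothyroidism", "TSH"), ("anemia", "CBC")}
medlineplus     = {("hypothyroidism", "TSH"), ("diabetes mellitus", "HbA1c")}
wikipedia       = {("diabetes mellitus", "HbA1c"), ("anemia", "CBC")}

def merge(**sources):
    merged = defaultdict(set)
    for name, pairs in sources.items():
        for pair in pairs:
            merged[pair].add(name)
    return merged

knowledge_base = merge(labtests_online=labtests_online,
                       medlineplus=medlineplus,
                       wikipedia=wikipedia)
for (disease, test), provenance in sorted(knowledge_base.items()):
    print(f"{disease} -- {test}: supported by {len(provenance)} source(s)")
```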
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer aided design (CAD) design databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques as well as internal database of component descriptions to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
Development of the IMB Model and an Evidence-Based Diabetes Self-management Mobile Application.
Jeon, Eunjoo; Park, Hyeoun-Ae
2018-04-01
This study developed a diabetes self-management mobile application based on the information-motivation-behavioral skills (IMB) model, evidence extracted from clinical practice guidelines, and requirements identified through focus group interviews (FGIs) with diabetes patients. We developed a diabetes self-management (DSM) app in accordance with the following four stages of the system development life cycle. The functional and knowledge requirements of the users were extracted through FGIs with 19 diabetes patients. A system diagram, data models, a database, an algorithm, screens, and menus were designed. An Android app and server with an SSL protocol were developed. The DSM app algorithm and heuristics, as well as the usability of the DSM app were evaluated, and then the DSM app was modified based on heuristics and usability evaluation. A total of 11 requirement themes were identified through the FGIs. Sixteen functions and 49 knowledge rules were extracted. The system diagram consisted of a client part and server part, 78 data models, a database with 10 tables, an algorithm, and a menu structure with 6 main menus, and 40 user screens were developed. The DSM app was Android version 4.4 or higher for Bluetooth connectivity. The proficiency and efficiency scores of the algorithm were 90.96% and 92.39%, respectively. Fifteen issues were revealed through the heuristic evaluation, and the app was modified to address three of these issues. It was also modified to address five comments received by the researchers through the usability evaluation. The DSM app was developed based on behavioral change theory through IMB models. It was designed to be evidence-based, user-centered, and effective. It remains necessary to fully evaluate the effect of the DSM app on the DSM behavior changes of diabetes patients.
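As a purely hypothetical example of the kind of threshold-style knowledge rule that guideline-derived self-management apps encode, consider the sketch below; the cut-off values and messages are illustrative placeholders, not among the 49 rules extracted in this study.

```python
# Hypothetical self-management feedback rule; thresholds and wording are
# placeholders for illustration, not the study's extracted knowledge rules.
def blood_glucose_feedback(mg_dl: float, fasting: bool) -> str:
    if mg_dl < 70:
        return "Low blood glucose: take fast-acting carbohydrate and re-check."
    if fasting and mg_dl > 130:
        return "Fasting glucose above target: review meals and medication."
    if not fasting and mg_dl > 180:
        return "Post-meal glucose above target: consider contacting your care team."
    return "Within target range: keep up your current routine."

print(blood_glucose_feedback(65, fasting=True))
```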
Acquired prior knowledge modulates audiovisual integration.
Van Wanrooij, Marc M; Bremen, Peter; John Van Opstal, A
2010-05-01
Orienting responses to audiovisual events in the environment can benefit markedly from the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials repeatedly favours aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. The results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.
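One simple way to formalize "dynamic updating of the prior expectation of audiovisual congruency" is a Beta-Bernoulli update over the trial history. The sketch below is an assumption-laden toy model, not the model discussed by the authors.

```python
# Toy illustration only: a Beta-Bernoulli update of the probability that the next
# audiovisual event is spatially aligned, given the trial history.

def posterior_mean_aligned(history, a=1.0, b=1.0):
    """history: list of booleans (True = aligned trial). Beta(a, b) prior."""
    aligned = sum(history)
    return (a + aligned) / (a + b + len(history))

block_aligned_only = [True] * 20
block_mixed        = [True, False, True, False, True, True, False, True]

print(posterior_mean_aligned(block_aligned_only))  # expectation of alignment near 1
print(posterior_mean_aligned(block_mixed))         # weaker expectation of alignment
```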
What Do We Know about the Chemistry of Strawberry Aroma?
Ulrich, Detlef; Kecke, Steffen; Olbricht, Klaus
2018-04-04
The strawberry, with its unique aroma, is one of the most popular fruits worldwide. The demand for specific knowledge of metabolism in strawberries is increasing. This knowledge is applicable for genetic studies, plant breeding, resistance research, nutritional science, and the processing industry. The molecular basis of strawberry aroma has been studied for more than 80 years. Thus far, hundreds of volatile organic compounds (VOC) have been identified. The qualitative composition of the strawberry volatilome remains controversial though considerable progress has been made during the past several decades. Between 1997 and 2016, 25 significant analytical studies were published. Qualitative VOC data were harmonized and digitized. In total, 979 VOC were identified, 590 of which were found since 1997. However, 659 VOC (67%) were only listed once (single entries). Interestingly, none of the identified compounds were consistently reported in all of the studies analyzed. The present need of data exchange between "omic" technologies requires high quality and robust metabolic data. Such data are unavailable for the strawberry volatilome thus far. This review discusses the divergence of published data regarding both the biological material and the analytical methods. The VOC extraction method is an essential step that restricts interlaboratory comparability. Finally, standardization of sample preparation and data documentation are suggested to improve consistency for VOC quantification and measurement.
Knowledge Discovery from Databases: An Introductory Review.
ERIC Educational Resources Information Center
Vickery, Brian
1997-01-01
Introduces new procedures being used to extract knowledge from databases and discusses rationales for developing knowledge discovery methods. Methods are described for such techniques as classification, clustering, and the detection of deviations from pre-established norms. Examines potential uses of knowledge discovery in the information field.…
Artificial intelligence within the chemical laboratory.
Winkel, P
1994-01-01
Various techniques within the area of artificial intelligence, such as expert systems and neural networks, may play a role during the problem-solving processes within the clinical biochemical laboratory. Neural network analysis provides a non-algorithmic approach to information processing, which results in the ability of the computer to form associations and to recognize patterns or classes among data. It belongs to the machine learning techniques, which also include probabilistic techniques such as discriminant function analysis and logistic regression, as well as information-theoretical techniques. These techniques may be used to extract knowledge from example patients in order to optimize decision limits and identify clinically important laboratory quantities. An expert system may be defined as a computer program that can give advice in a well-defined area of expertise and is able to explain its reasoning. Declarative knowledge consists of statements about logical or empirical relationships between things. Expert systems typically separate declarative knowledge, residing in a knowledge base, from the inference engine: an algorithm that dynamically directs and controls the system when it searches its knowledge base. A tool is an expert system without a knowledge base. The developer of an expert system uses a tool by entering knowledge into the system. Many, if not the majority, of the problems encountered at the laboratory level are procedural. A problem is procedural if it is possible to write up a step-by-step description of the expert's work or if it can be represented by a decision tree. To solve problems of this type, only small expert system tools and/or conventional programming are required. (ABSTRACT TRUNCATED AT 250 WORDS)
Panax ginseng Leaf Extracts Exert Anti-Obesity Effects in High-Fat Diet-Induced Obese Rats.
Lee, Seul-Gi; Lee, Yoon-Jeong; Jang, Myeong-Hwan; Kwon, Tae-Ryong; Nam, Ju-Ock
2017-09-10
Recent studies have reported that the aerial parts of ginseng contain various saponins, which have anti-oxidative, anti-inflammatory, and anti-obesity properties similar to those of ginseng root. However, the leaf extracts of Korean ginseng have not yet been investigated. In this study, we demonstrate the anti-obesity effects of green leaf and dried leaf extracts (GL and DL, respectively) of ginseng in high-fat diet (HFD)-induced obese rats. The administration of GL and DL to HFD-induced obese rats significantly decreased body weight (by 96.5% and 96.7%, respectively), and epididymal and abdominal adipose tissue mass. Furthermore, DL inhibited the adipogenesis of 3T3-L1 adipocytes through regulation of the expression of key adipogenic regulators, such as peroxisome proliferator-activated receptor (PPAR)-γ and CCAAT/enhancer-binding protein (C/EBP)-α. In contrast, GL had little effect on the adipogenesis of 3T3-L1 adipocytes but greatly increased the protein expression of PPARγ compared with that in untreated cells. These results were not consistent with an anti-obesity effect in the animal model, which suggested that the anti-obesity effect of GL in vivo resulted from specific factors released by other organs, or from increased energy expenditure. To our knowledge, these findings are the first evidence for the anti-obesity effects of the leaf extracts of Korean ginseng in vivo.
Li, Na; Jiang, Weiwei; Rao, Kaifeng; Ma, Mei; Wang, Zijian; Kumaran, Satyanarayanan Senthik
2011-01-01
Environmental chemicals in drinking water can impact human health through nuclear receptors. Additionally, estrogen-related receptors (ERRs) are vulnerable to endocrine-disrupting effects. To date, however, the ERR-disrupting potency of drinking water has not been reported. We used an ERRgamma two-hybrid yeast assay to screen ERRgamma-disrupting activities in a drinking water treatment plant (DWTP) located in north China and in source water from a reservoir, focusing on agonistic, antagonistic, and inverse agonistic activity to 4-hydroxytamoxifen (4-OHT). Water treatment processes in the DWTP consisted of pre-chlorination, coagulation, coal and sand filtration, activated carbon filtration, and secondary chlorination processes. Samples were extracted by solid phase extraction. Results showed that ERRgamma antagonistic activities were found in all sample extracts, but agonistic and inverse agonistic activity to 4-OHT was not found. When calibrated with the toxic equivalent of 4-OHT, antagonistic effluent effects ranged from 3.4 to 33.1 microg/L. In the treatment processes, secondary chlorination was effective in removing ERRgamma antagonists, but the coagulation process led to significantly increased ERRgamma antagonistic activity. The drinking water treatment processes removed 73.5% of ERRgamma antagonists. To our knowledge, the occurrence of ERRgamma-disrupting activities in source and drinking water in vitro had not been reported previously. It is vital, therefore, to increase our understanding of ERRgamma-disrupting activities in drinking water.
Document Exploration and Automatic Knowledge Extraction for Unstructured Biomedical Text
NASA Astrophysics Data System (ADS)
Chu, S.; Totaro, G.; Doshi, N.; Thapar, S.; Mattmann, C. A.; Ramirez, P.
2015-12-01
We describe our work on building a web-browser-based document reader with a built-in exploration tool and automatic concept extraction of medical entities for biomedical text. Vast amounts of biomedical information are offered in unstructured text form through scientific publications and R&D reports. Utilizing text mining can help us to mine information and extract relevant knowledge from a plethora of biomedical text. The ability to employ such technologies to aid researchers in coping with information overload is greatly desirable. In recent years, there has been an increased interest in automatic biomedical concept extraction [1, 2] and intelligent PDF reader tools with the ability to search on content and find related articles [3]. Such reader tools are typically desktop applications and are limited to specific platforms. Our goal is to provide researchers with a simple tool to aid them in finding, reading, and exploring documents. Thus, we propose a web-based document explorer, called Shangri-Docs, which combines a document reader with automatic concept extraction and highlighting of relevant terms. Shangri-Docs also provides the ability to evaluate a wide variety of document formats (e.g., PDF, Word, PPT, text) and to exploit the linked nature of the Web and personal content by performing searches on content from public sites (e.g., Wikipedia, PubMed) and private cataloged databases simultaneously. Shangri-Docs utilizes Apache cTAKES (clinical Text Analysis and Knowledge Extraction System) [4] and the Unified Medical Language System (UMLS) to automatically identify and highlight terms and concepts, such as specific symptoms, diseases, drugs, and anatomical sites, mentioned in the text. cTAKES was originally designed specifically to extract information from clinical medical records. Our investigation led us to extend the automatic knowledge extraction process of cTAKES to the biomedical research domain by improving the ontology-guided information extraction process. We will describe our experience and implementation of our system and share lessons learned from our development. We will also discuss ways in which this could be adapted to other science fields. [1] Funk et al., 2014. [2] Kang et al., 2014. [3] Utopia Documents, http://utopiadocs.com [4] Apache cTAKES, http://ctakes.apache.org
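To make the concept-highlighting idea concrete, here is a minimal dictionary-based highlighter. It stands in for, and greatly simplifies, what cTAKES and the UMLS provide; the term dictionary and semantic types are invented for the example.

```python
# Illustrative sketch only: a dictionary-based concept highlighter. The real system
# relies on cTAKES and UMLS rather than a flat, hand-written term dictionary.

import re

concept_dictionary = {
    "headache": "symptom",
    "aspirin": "drug",
    "migraine": "disease",
}

def highlight(text, dictionary):
    """Wrap each dictionary term found in the text with its semantic type."""
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, dictionary)) + r")\b",
                         flags=re.IGNORECASE)
    return pattern.sub(lambda m: f"[{m.group(0)}|{dictionary[m.group(0).lower()]}]", text)

print(highlight("Aspirin is commonly used for migraine and headache.", concept_dictionary))
# -> [Aspirin|drug] is commonly used for [migraine|disease] and [headache|symptom].
```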
2013-01-01
Background A large-scale, highly accurate, machine-understandable drug-disease treatment relationship knowledge base is important for computational approaches to drug repurposing. The large body of published biomedical research articles and clinical case reports available on MEDLINE is a rich source of FDA-approved drug-disease indications as well as drug-repurposing knowledge that is crucial for applying FDA-approved drugs to new diseases. However, much of this information is buried in free text and not captured in any existing databases. The goal of this study is to extract a large number of accurate drug-disease treatment pairs from the published literature. Results In this study, we developed a simple but highly accurate pattern-learning approach to extract treatment-specific drug-disease pairs from 20 million biomedical abstracts available on MEDLINE. We extracted a total of 34,305 unique drug-disease treatment pairs, the majority of which are not included in existing structured databases. Our algorithm achieved a precision of 0.904 and a recall of 0.131 in extracting all pairs, and a precision of 0.904 and a recall of 0.842 in extracting frequent pairs. In addition, we have shown that the extracted pairs strongly correlate with both drug target genes and therapeutic classes, and therefore may have high potential in drug discovery. Conclusions We demonstrated that our simple pattern-learning relationship extraction algorithm is able to accurately extract many drug-disease pairs from the free text of biomedical literature that are not captured in structured databases. The large-scale, accurate, machine-understandable drug-disease treatment knowledge base resulting from our study, in combination with pairs from structured databases, will have high potential in computational drug repurposing tasks. PMID:23742147
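The pattern-based idea can be illustrated with a single hard-coded lexico-syntactic pattern. The sketch below is a toy stand-in for the published algorithm (which learns many patterns from MEDLINE); the pattern and the example sentences are invented.

```python
# Toy illustration of pattern-based extraction of drug-disease treatment pairs
# (not the published algorithm): one hard-coded pattern applied to made-up sentences.

import re

# Pattern of the form "<drug> is/was used/effective in the treatment of <disease>"
PATTERN = re.compile(r"(?P<drug>[A-Za-z][\w-]*) (?:is|was) (?:used|effective) "
                     r"in the treatment of (?P<disease>[a-z][\w\s-]+?)[\.,]")

abstracts = [
    "Metformin is used in the treatment of type 2 diabetes.",
    "Imatinib was effective in the treatment of chronic myeloid leukemia, with durable responses.",
]

pairs = set()
for text in abstracts:
    for m in PATTERN.finditer(text):
        pairs.add((m.group("drug").lower(), m.group("disease").strip().lower()))

print(pairs)
# {('metformin', 'type 2 diabetes'), ('imatinib', 'chronic myeloid leukemia')}
```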
Knowledge and Policy: Research and Knowledge Transfer
ERIC Educational Resources Information Center
Ozga, Jenny
2007-01-01
Knowledge transfer (KT) is the emergent "third sector" of higher education activity--alongside research and teaching. Its commercialization origins are evidenced in its concerns to extract maximum value from research, and in the policy push to make research-based knowledge trapped in disciplinary silos more responsive to the growing…
DesAutels, Spencer J.; Fox, Zachary E.; Giuse, Dario A.; Williams, Annette M.; Kou, Qing-hua; Weitkamp, Asli; Patel, Neal R.; Bettinsoli Giuse, Nunzia
2016-01-01
Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. To have a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems. PMID:28269846
Extracting Useful Semantic Information from Large Scale Corpora of Text
ERIC Educational Resources Information Center
Mendoza, Ray Padilla, Jr.
2012-01-01
Extracting and representing semantic information from large scale corpora is at the crux of computer-assisted knowledge generation. Semantic information depends on collocation extraction methods, mathematical models used to represent distributional information, and weighting functions which transform the space. This dissertation provides a…
Concurrence of rule- and similarity-based mechanisms in artificial grammar learning.
Opitz, Bertram; Hofmann, Juliane
2015-03-01
A current theoretical debate regards whether rule-based or similarity-based learning prevails during artificial grammar learning (AGL). Although the majority of findings are consistent with a similarity-based account of AGL it has been argued that these results were obtained only after limited exposure to study exemplars, and performance on subsequent grammaticality judgment tests has often been barely above chance level. In three experiments the conditions were investigated under which rule- and similarity-based learning could be applied. Participants were exposed to exemplars of an artificial grammar under different (implicit and explicit) learning instructions. The analysis of receiver operating characteristics (ROC) during a final grammaticality judgment test revealed that explicit but not implicit learning led to rule knowledge. It also demonstrated that this knowledge base is built up gradually while similarity knowledge governed the initial state of learning. Together these results indicate that rule- and similarity-based mechanisms concur during AGL. Moreover, it could be speculated that two different rule processes might operate in parallel; bottom-up learning via gradual rule extraction and top-down learning via rule testing. Crucially, the latter is facilitated by performance feedback that encourages explicit hypothesis testing. Copyright © 2015 Elsevier Inc. All rights reserved.
Predictors affecting personal health information management skills.
Kim, Sujin; Abner, Erin
2016-01-01
This study investigated major factors affecting personal health record (PHR) management skills, as reflected in survey respondents' health information management activities. A self-report survey was used to assess individuals' personal characteristics, health knowledge, PHR skills, and activities. Factors underlying respondents' current PHR-related activities were derived using principal component analysis (PCA). Scale scores were calculated based on the results of the PCA, and hierarchical linear regression analyses were used to identify respondent characteristics associated with the scale scores. Internal consistency of the derived scale scores was assessed with Cronbach's α. Among the personal health information activities surveyed (N = 578 respondents), the four extracted factors were subsequently grouped and labeled as: collecting skills (Cronbach's α = 0.906), searching skills (Cronbach's α = 0.837), sharing skills (Cronbach's α = 0.763), and implementing skills (Cronbach's α = 0.908). In the hierarchical regression analyses, education and computer knowledge significantly increased the explanatory power of the models. Health knowledge (β = 0.25, p < 0.001) emerged as a positive predictor of PHR collecting skills. This study confirmed that PHR training and learning should consider the full spectrum of information management skills, including collection, utilization and distribution, to support patients' care and prevention continua.
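For readers unfamiliar with the reliability statistic quoted above, the following sketch computes Cronbach's alpha from raw item scores. It is a generic illustration with invented responses, not the study's analysis code.

```python
# Minimal sketch: Cronbach's alpha for a set of items scored by the same respondents.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))

def cronbach_alpha(item_scores):
    """item_scores: list of per-item score lists, all of equal length (respondents)."""
    k = len(item_scores)
    n = len(item_scores[0])

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Hypothetical responses from five people to three "collecting skills" items
items = [[4, 5, 3, 4, 2],
         [5, 5, 3, 4, 1],
         [4, 4, 2, 5, 2]]
print(round(cronbach_alpha(items), 3))
```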
ERIC Educational Resources Information Center
Woloshyn, Vera E.; And Others
1994-01-01
Thirty-two factual statements, half consistent and half not consistent with subjects' prior knowledge, were processed by 140 sixth and seventh graders. Half were directed to use elaborative interrogation (using prior knowledge) to answer why each statement was true. Across all memory measures, elaborative interrogation subjects performed better…
PKDE4J: Entity and relation extraction for public knowledge discovery.
Song, Min; Kim, Won Chul; Lee, Dahee; Heo, Go Eun; Kang, Keun Young
2015-10-01
Due to an enormous number of scientific publications that cannot be handled manually, there is a rising interest in text-mining techniques for automated information extraction, especially in the biomedical field. Such techniques provide effective means of information search, knowledge discovery, and hypothesis generation. Most previous studies have primarily focused on the design and performance improvement of either named entity recognition or relation extraction. In this paper, we present PKDE4J, a comprehensive text-mining system that integrates dictionary-based entity extraction and rule-based relation extraction in a highly flexible and extensible framework. Starting with the Stanford CoreNLP, we developed the system to cope with multiple types of entities and relations. The system also has fairly good performance in terms of accuracy as well as the ability to configure text-processing components. We demonstrate its competitive performance by evaluating it on many corpora and found that it surpasses existing systems with average F-measures of 85% for entity extraction and 81% for relation extraction. Copyright © 2015 Elsevier Inc. All rights reserved.
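The two-stage design described above (dictionary-based entity extraction followed by rule-based relation extraction over co-occurring entities) can be sketched very compactly. The toy below is not PKDE4J, which is a Java system built on Stanford CoreNLP; the dictionaries and the single co-occurrence rule are invented for illustration.

```python
# Sketch of the two-stage idea only: dictionary lookup for entities, then a naive
# trigger-word rule over entities co-occurring in the same sentence.

entity_dict = {"tp53": "GENE", "apoptosis": "PROCESS", "aspirin": "DRUG"}
relation_triggers = {"induces": "INDUCES", "inhibits": "INHIBITS"}

def extract(sentence):
    tokens = [t.strip(".,").lower() for t in sentence.split()]
    entities = [(t, entity_dict[t]) for t in tokens if t in entity_dict]
    relations = []
    for trigger, rel in relation_triggers.items():
        if trigger in tokens and len(entities) >= 2:
            # naive rule: connect the first two entities found in the sentence
            relations.append((entities[0][0], rel, entities[1][0]))
    return entities, relations

print(extract("TP53 induces apoptosis in stressed cells."))
# ([('tp53', 'GENE'), ('apoptosis', 'PROCESS')], [('tp53', 'INDUCES', 'apoptosis')])
```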
Wang, Yaping; Nie, Jingxin; Yap, Pew-Thian; Li, Gang; Shi, Feng; Geng, Xiujuan; Guo, Lei; Shen, Dinggang
2014-01-01
Accurate and robust brain extraction is a critical step in most neuroimaging analysis pipelines. In particular, for the large-scale multi-site neuroimaging studies involving a significant number of subjects with diverse age and diagnostic groups, accurate and robust extraction of the brain automatically and consistently is highly desirable. In this paper, we introduce population-specific probability maps to guide the brain extraction of diverse subject groups, including both healthy and diseased adult human populations, both developing and aging human populations, as well as non-human primates. Specifically, the proposed method combines an atlas-based approach, for coarse skull-stripping, with a deformable-surface-based approach that is guided by local intensity information and population-specific prior information learned from a set of real brain images for more localized refinement. Comprehensive quantitative evaluations were performed on the diverse large-scale populations of ADNI dataset with over 800 subjects (55∼90 years of age, multi-site, various diagnosis groups), OASIS dataset with over 400 subjects (18∼96 years of age, wide age range, various diagnosis groups), and NIH pediatrics dataset with 150 subjects (5∼18 years of age, multi-site, wide age range as a complementary age group to the adult dataset). The results demonstrate that our method consistently yields the best overall results across almost the entire human life span, with only a single set of parameters. To demonstrate its capability to work on non-human primates, the proposed method is further evaluated using a rhesus macaque dataset with 20 subjects. Quantitative comparisons with popularly used state-of-the-art methods, including BET, Two-pass BET, BET-B, BSE, HWA, ROBEX and AFNI, demonstrate that the proposed method performs favorably with superior performance on all testing datasets, indicating its robustness and effectiveness. PMID:24489639
Extracting knowledge from the World Wide Web
Henzinger, Monika; Lawrence, Steve
2004-01-01
The World Wide Web provides an unprecedented opportunity to automatically analyze a large sample of interests and activity in the world. We discuss methods for extracting knowledge from the web by randomly sampling and analyzing hosts and pages, and by analyzing the link structure of the web and how links accumulate over time. A variety of interesting and valuable information can be extracted, such as the distribution of web pages over domains, the distribution of interest in different areas, communities related to different topics, the nature of competition in different categories of sites, and the degree of communication between different communities or countries. PMID:14745041
Two frameworks for integrating knowledge in induction
NASA Technical Reports Server (NTRS)
Rosenbloom, Paul S.; Hirsh, Haym; Cohen, William W.; Smith, Benjamin D.
1994-01-01
The use of knowledge in inductive learning is critical for improving the quality of the concept definitions generated, reducing the number of examples required in order to learn effective concept definitions, and reducing the computation needed to find good concept definitions. Relevant knowledge may come in many forms (such as examples, descriptions, advice, and constraints) and from many sources (such as books, teachers, databases, and scientific instruments). How to extract the relevant knowledge from this plethora of possibilities, and then to integrate it so as to appropriately affect the induction process, is perhaps the key issue at this point in inductive learning. Here the focus is on the integration part of this problem; that is, how induction algorithms can, and do, utilize a range of extracted knowledge. Preliminary work on a transformational framework for defining knowledge-intensive inductive algorithms out of relatively knowledge-free algorithms is described, as is a more tentative problem-space framework that attempts to cover all induction algorithms within a single general approach. These frameworks help to organize what is known about current knowledge-intensive induction algorithms, and to point towards new algorithms.
Separation of crack extension modes in orthotropic delamination models
NASA Technical Reports Server (NTRS)
Beuth, Jack L.
1995-01-01
In the analysis of an interface crack between dissimilar elastic materials, the mode of crack extension is typically not unique, due to oscillatory behavior of near-tip stresses and displacements. This behavior currently limits the applicability of interfacial fracture mechanics as a means to predict composite delamination. The Virtual Crack Closure Technique (VCCT) is a method used to extract mode 1 and mode 2 energy release rates from numerical fracture solutions. The mode of crack extension extracted from an oscillatory solution using the VCCT is not unique due to the dependence of mode on the virtual crack extension length, Delta. In this work, a method is presented for using the VCCT to extract Delta-independent crack extension modes for the case of an interface crack between two in-plane orthotropic materials. The method does not involve altering the analysis to eliminate its oscillatory behavior. Instead, it is argued that physically reasonable, Delta-independent modes of crack extension can be extracted from oscillatory solutions. Knowledge of near-tip fields is used to determine the explicit Delta dependence of energy release rate parameters. Energy release rates are then defined that are separated from the oscillatory dependence on Delta. A modified VCCT using these energy release rate definitions is applied to results from finite element analyses, showing that Delta-independent modes of crack extension result. The modified technique has potential as a consistent method for extracting crack extension modes from numerical solutions. The Delta-independent modes extracted using this technique can also serve as guides for testing the convergence of finite element models. Direct applications of this work include the analysis of planar composite delamination problems, where plies or debonded laminates are modeled as in-plane orthotropic materials.
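For context, the conventional VCCT estimates of the mode 1 and mode 2 energy release rates are computed from nodal forces at the crack tip and relative displacements one element behind it. These are the standard, Delta-dependent definitions that the paper modifies, not its Delta-independent ones.

```latex
% Standard two-dimensional VCCT expressions (not the paper's modified definitions):
% F_x, F_y are nodal forces at the crack tip; \Delta u_x, \Delta u_y are the relative
% sliding and opening displacements one element behind the tip; \Delta a is the
% element (virtual crack extension) length; b is the specimen thickness.
\begin{align}
  G_1 &= \frac{F_y \,\Delta u_y}{2\, b\, \Delta a}, &
  G_2 &= \frac{F_x \,\Delta u_x}{2\, b\, \Delta a}.
\end{align}
```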
NASA Astrophysics Data System (ADS)
Sakakibara, Kai; Hagiwara, Masafumi
In this paper, we propose a 3-dimensional self-organizing memory and describe its application to knowledge extraction from natural language. First, the proposed system extracts relations between words using JUMAN (a morpheme analysis system) and KNP (a syntax analysis system), and stores them in a short-term memory. In the short-term memory, the relations are attenuated as processing proceeds. However, relations with a high frequency of appearance are stored in the long-term memory without attenuation. The relations in the long-term memory are placed in the proposed 3-dimensional self-organizing memory. We used a new learning algorithm called "Potential Firing" in the learning phase. In the recall phase, the proposed system recalls relational knowledge from the learned knowledge based on the input sentence, using a new recall algorithm called "Waterfall Recall". We added a function to respond to questions in natural language with "yes/no" in order to confirm the validity of the proposed system by evaluating the number of correct answers.
Designing easy DNA extraction: Teaching creativity through laboratory practice.
Susantini, Endang; Lisdiana, Lisa; Isnawati; Tanzih Al Haq, Aushia; Trimulyono, Guntur
2017-05-01
Subject material concerning deoxyribonucleic acid (DNA) structure, presented in the format of creativity-driven laboratory practice, offers a meaningful learning experience to students. Therefore, a laboratory practice that utilizes simple procedures and easy, safe, and affordable household materials should be promoted to students to develop their creativity. This study aimed to examine whether designing and conducting DNA extraction with household materials could foster students' creative thinking. We also described how this laboratory practice affected students' knowledge and views. A total of 47 students participated in this study. These students were grouped and asked to utilize available household materials and modify procedures using a hands-on worksheet. Results showed that this approach encouraged creative thinking as well as improved subject-related knowledge. Students also demonstrated positive views about content knowledge, social skills, and creative thinking skills. This study implies that extracting DNA with household materials is able to develop the content knowledge, social skills, and creative thinking of students. © 2016 by The International Union of Biochemistry and Molecular Biology, 45(3):216-225, 2017. © 2016 The International Union of Biochemistry and Molecular Biology.
Zhao, Chao; Jiang, Jingchi; Guan, Yi; Guo, Xitong; He, Bin
2018-05-01
Electronic medical records (EMRs) contain medical knowledge that can be used for clinical decision support (CDS). Our objective is to develop a general system that can extract and represent knowledge contained in EMRs to support three CDS tasks (test recommendation, initial diagnosis, and treatment plan recommendation) given the condition of a patient. We extracted four kinds of medical entities from records and constructed an EMR-based medical knowledge network (EMKN), in which nodes are entities and edges reflect their co-occurrence in a record. Three bipartite subgraphs (bigraphs) were extracted from the EMKN, one to support each task. One part of each bigraph was the given condition (e.g., symptoms), and the other was the condition to be inferred (e.g., diseases). Each bigraph was regarded as a Markov random field (MRF) to support the inference. We proposed three graph-based energy functions and three likelihood-based energy functions. Two of these functions are based on knowledge representation learning and can provide distributed representations of medical entities. Two EMR datasets and three metrics were utilized to evaluate the performance. As a whole, the evaluation results indicate that the proposed system outperformed the baseline methods. The distributed representation of medical entities does reflect similarity relationships with respect to knowledge level. Combining EMKN and MRF is an effective approach for general medical knowledge representation and inference. Different tasks, however, require individually designed energy functions. Copyright © 2018 Elsevier B.V. All rights reserved.
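A much-reduced illustration of the bipartite co-occurrence structure and an energy-style scoring function follows. It is not the proposed system (which uses proper entity extraction, six energy functions, and representation learning); the records and the frequency-based energy are invented for the example.

```python
# Illustrative sketch only: a symptom-disease bipartite co-occurrence graph built
# from toy "records", with candidate diagnoses ranked by a simple frequency-based
# energy (lower energy = better supported by co-occurrence evidence).

import math
from collections import Counter

records = [  # each record: (symptoms observed, diseases documented) -- made up
    ({"cough", "fever"}, {"pneumonia"}),
    ({"cough"}, {"bronchitis"}),
    ({"fever", "rash"}, {"measles"}),
    ({"cough", "fever"}, {"pneumonia"}),
]

edge_counts = Counter()
for symptoms, diseases in records:
    for s in symptoms:
        for d in diseases:
            edge_counts[(s, d)] += 1

def energy(symptoms, disease):
    """Sum of -log(co-occurrence count + 1) over the given symptoms."""
    return -sum(math.log(edge_counts[(s, disease)] + 1) for s in symptoms)

candidates = {d for _, ds in records for d in ds}
query = {"cough", "fever"}
ranking = sorted(candidates, key=lambda d: energy(query, d))
print(ranking)  # 'pneumonia' should rank first for this query
```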
Multilingual Content Extraction Extended with Background Knowledge for Military Intelligence
2011-06-01
Extracted multilingual content is formalized and extended with background knowledge (WordNet [Fel98], YAGO [SKW08]) so that new conclusions (logical inferences) can be drawn by a theorem prover. [Architecture overview: MRS representations are transformed into FOLE formulas and combined with background-knowledge axioms from WordNet, OpenCyc, and YAGO for logical calculation.]
DiMeX: A Text Mining System for Mutation-Disease Association Extraction.
Mahmood, A S M Ashique; Wu, Tsung-Jung; Mazumder, Raja; Vijay-Shanker, K
2016-01-01
The number of published articles describing associations between mutations and diseases is increasing at a fast pace. There is a pressing need to gather such mutation-disease associations into public knowledge bases, but manual curation slows down the growth of such databases. We have addressed this problem by developing a text-mining system (DiMeX) to extract mutation to disease associations from publication abstracts. DiMeX consists of a series of natural language processing modules that preprocess input text and apply syntactic and semantic patterns to extract mutation-disease associations. DiMeX achieves high precision and recall with F-scores of 0.88, 0.91 and 0.89 when evaluated on three different datasets for mutation-disease associations. DiMeX includes a separate component that extracts mutation mentions in text and associates them with genes. This component has been also evaluated on different datasets and shown to achieve state-of-the-art performance. The results indicate that our system outperforms the existing mutation-disease association tools, addressing the low precision problems suffered by most approaches. DiMeX was applied on a large set of abstracts from Medline to extract mutation-disease associations, as well as other relevant information including patient/cohort size and population data. The results are stored in a database that can be queried and downloaded at http://biotm.cis.udel.edu/dimex/. We conclude that this high-throughput text-mining approach has the potential to significantly assist researchers and curators to enrich mutation databases.
Hassanpour, Saeed; O'Connor, Martin J; Das, Amar K
2013-08-12
A variety of informatics approaches have been developed that use information retrieval, NLP and text-mining techniques to identify biomedical concepts and relations within scientific publications or their sentences. These approaches have not typically addressed the challenge of extracting more complex knowledge such as biomedical definitions. In our efforts to facilitate knowledge acquisition of rule-based definitions of autism phenotypes, we have developed a novel semantic-based text-mining approach that can automatically identify such definitions within text. Using an existing knowledge base of 156 autism phenotype definitions and an annotated corpus of 26 source articles containing such definitions, we evaluated and compared the average rank of correctly identified rule definition or corresponding rule template using both our semantic-based approach and a standard term-based approach. We examined three separate scenarios: (1) the snippet of text contained a definition already in the knowledge base; (2) the snippet contained an alternative definition for a concept in the knowledge base; and (3) the snippet contained a definition not in the knowledge base. Our semantic-based approach had a higher average rank than the term-based approach for each of the three scenarios (scenario 1: 3.8 vs. 5.0; scenario 2: 2.8 vs. 4.9; and scenario 3: 4.5 vs. 6.2), with each comparison significant at the p-value of 0.05 using the Wilcoxon signed-rank test. Our work shows that leveraging existing domain knowledge in the information extraction of biomedical definitions significantly improves the correct identification of such knowledge within sentences. Our method can thus help researchers rapidly acquire knowledge about biomedical definitions that are specified and evolving within an ever-growing corpus of scientific publications.
Washington, John W; Jenkins, Thomas M; Rankin, Keegan; Naile, Jonathan E
2015-01-20
Fluorotelomer-based polymers (FTPs) are the primary product of the fluorotelomer industry. Here we report on a 376-day study of the degradability of two commercial acrylate-linked FTPs in four saturated soils and in water. Using an exhaustive serial extraction, we report GC/MS and LC/MS/MS results for 50 species, including fluorotelomer alcohols and acids, and perfluorocarboxylates. Modeling of seven sampling rounds, each consisting of ≥5 replicate microcosm treatments, for one commercial FTP in one soil yielded half-life estimates of 65–112 years and, when the other commercial FTP and soils were evaluated, the estimated half-lives ranged from 33 to 112 years. Experimental controls, consisting of commercial FTP in water, degraded at roughly the same rate as in soil. In a follow-up experiment, commercial FTP in pH 10 water degraded roughly 10-fold faster than the circum-neutral control, suggesting that commercial FTPs can undergo hydroxide-mediated hydrolysis. 8:2 fluorotelomer alcohol generated from FTP degradation in soil was more stable than without FTP present, suggesting a clathrate guest–host association with the FTP. To our knowledge, these are the only degradability-test results for commercial FTPs that have been generated using exhaustive extraction procedures. They unambiguously show that commercial FTPs, the primary product of the fluorotelomer industry, are a source of fluorotelomer and perfluorinated compounds to the environment.
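Half-life estimates like those above follow from a first-order decay assumption, t_1/2 = ln(2)/k. The sketch below fits a rate constant to invented concentration data; it is a generic illustration, not the study's modeling procedure.

```python
# Minimal sketch: half-life estimation from time-series data under first-order decay.
# The data points below are invented.

import math

# (time in days, fraction of parent polymer remaining) -- hypothetical
observations = [(0, 1.00), (100, 0.998), (250, 0.995), (376, 0.992)]

# Least-squares fit of ln(C/C0) = -k * t  (regression through the origin)
num = sum(t * math.log(c) for t, c in observations)
den = sum(t * t for t, _ in observations)
k = -num / den                       # first-order rate constant, 1/day
half_life_years = math.log(2) / k / 365.25

print(f"k = {k:.2e} per day, half-life ~ {half_life_years:.0f} years")
```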
The Development and Validation of the Osteoporosis Prevention and Awareness Tool (OPAAT) in Malaysia
Wong, Kok Thong; Low, Bee Yean
2015-01-01
Objectives To develop and validate the Osteoporosis Prevention and Awareness Tool (OPAAT) in Malaysia. Methods The OPAAT was modified from the Malaysian Osteoporosis Knowledge Tool and developed from an exploratory study on patients. Face and content validity were established by an expert panel. The OPAAT consists of 30 items, categorized into three domains. A higher score indicates a higher knowledge level. English-speaking non-osteoporotic postmenopausal women ≥50 years of age and pharmacists were included in the study. Results A total of 203 patients and 31 pharmacists were recruited. Factor analysis extracted three domains. Flesch reading ease was 59.2. The mean±SD accuracy rate was 0.60±0.22 (range: 0.26-0.94). Cronbach's α for each domain ranged from 0.286 to 0.748. All items were highly correlated (Spearman's rho: 0.761-0.990, p<0.05), with no significant change in the overall test-retest scores, indicating that the OPAAT has achieved stable reliability. Pharmacists had a higher knowledge score than patients (80.9±8.7 vs 63.6±17.4, p<0.001), indicating that the OPAAT was able to discriminate between the knowledge levels of pharmacists and patients. Conclusion The OPAAT was found to be a valid and reliable instrument for assessing patients' knowledge about osteoporosis and its prevention in Malaysia. The OPAAT can be used to identify individuals in need of osteoporosis educational intervention. PMID:25938494
Brust-Renck, Priscila G; Reyna, Valerie F; Wilhelms, Evan A; Wolfe, Christopher R; Widmer, Colin L; Cedillos-Whynott, Elizabeth M; Morant, A Kate
2017-08-01
We used Sharable Knowledge Objects (SKOs) to create an Intelligent Tutoring System (ITS) grounded in Fuzzy-Trace Theory to teach women about obesity prevention: GistFit, getting the gist of healthy eating and exercise. The theory predicts that reliance on gist mental representations (as opposed to verbatim) is more effective in reducing health risks and improving decision making. Technical information was translated into decision-relevant gist representations and gist principles (i.e., healthy values). The SKO was hypothesized to facilitate extracting these gist representations and principles by engaging women in dialogue, "understanding" their responses, and replying appropriately to prompt additional engagement. Participants were randomly assigned to either the obesity prevention tutorial (GistFit) or a control tutorial containing different content using the same technology. Participants were administered assessments of knowledge about nutrition and exercise, gist comprehension, gist principles, behavioral intentions and self-reported behavior. An analysis of engagement in tutorial dialogues and responses to multiple-choice questions to check understanding throughout the tutorial revealed significant correlations between these conversations and scores on subsequent knowledge tests and gist comprehension. Knowledge and comprehension measures correlated with healthier behavior and greater intentions to perform healthy behavior. Differences between GistFit and control tutorials were greater for participants who engaged more fully. Thus, results are consistent with the hypothesis that active engagement with a new gist-based ITS, rather than a passive memorization of verbatim details, was associated with an array of known psychosocial mediators of preventive health decisions, such as knowledge acquisition, and gist comprehension.
Enhancing acronym/abbreviation knowledge bases with semantic information.
Torii, Manabu; Liu, Hongfang
2007-10-11
In the biomedical domain, a terminology knowledge base that associates acronyms/abbreviations (denoted as SFs) with their definitions (denoted as LFs) is highly needed. For the construction of such a terminology knowledge base, we investigate the feasibility of building a system that automatically assigns semantic categories to LFs extracted from text. Given a collection of pairs (SF, LF) derived from text, we i) assess the coverage of LFs and pairs (SF, LF) in the UMLS and justify the need for a semantic category assignment system; and ii) automatically derive name phrases annotated with semantic category and construct a system using machine learning. Utilizing ADAM, an existing collection of (SF, LF) pairs extracted from MEDLINE, our system achieved an f-measure of 87% when assigning eight UMLS-based semantic groups to LFs. The system has been incorporated into a web interface which integrates SF knowledge from multiple SF knowledge bases. Web site: http://gauss.dbb.georgetown.edu/liblab/SFThesurus.
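The machine-learning step (assigning UMLS-style semantic groups to long forms) can be illustrated with a generic bag-of-words classifier. The sketch below is not the authors' system; its tiny training set, labels, and scikit-learn pipeline are assumptions made purely for illustration.

```python
# Sketch of the general approach only: a supervised text classifier that assigns
# semantic groups to long forms. Far too little training data for a real model.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

long_forms = ["magnetic resonance imaging", "computed tomography",
              "tumor necrosis factor", "epidermal growth factor",
              "congestive heart failure", "chronic obstructive pulmonary disease"]
groups = ["PROCEDURE", "PROCEDURE", "CHEMICAL", "CHEMICAL", "DISORDER", "DISORDER"]

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(long_forms, groups)

print(model.predict(["positron emission tomography", "acute heart failure"]))
```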
An expert knowledge-based approach to landslide susceptibility mapping using GIS and fuzzy logic
NASA Astrophysics Data System (ADS)
Zhu, A.-Xing; Wang, Rongxun; Qiao, Jianping; Qin, Cheng-Zhi; Chen, Yongbo; Liu, Jing; Du, Fei; Lin, Yang; Zhu, Tongxin
2014-06-01
This paper presents an expert knowledge-based approach to landslide susceptibility mapping in an effort to overcome the deficiencies of data-driven approaches. The proposed approach consists of three generic steps: (1) extraction of knowledge on the relationship between landslide susceptibility and predisposing factors from domain experts, (2) characterization of predisposing factors using GIS techniques, and (3) prediction of landslide susceptibility under fuzzy logic. The approach was tested in two study areas in China: the Kaixian study area (about 250 km²) and the Three Gorges study area (about 4600 km²). The Kaixian study area was used to develop the approach and to evaluate its validity. The Three Gorges study area was used to test both the portability and the applicability of the developed approach for mapping landslide susceptibility over large study areas. Performance was evaluated by examining whether the mean of the computed susceptibility values at landslide sites was statistically different from that of the entire study area. A z-score test was used to examine the statistical significance of the difference. The computed z for the Kaixian area was 3.70 and the corresponding p-value was less than 0.001. This suggests that the computed landslide susceptibility values are good indicators of landslide occurrences. In the Three Gorges study area, the computed z was 10.75 and the corresponding p-value was less than 0.001. In addition, we divided the susceptibility value into four levels: low (0.0-0.25), moderate (0.25-0.5), high (0.5-0.75) and very high (0.75-1.0). No landslides were found in areas of low susceptibility. Landslide density was about three times higher in areas of very high susceptibility than in areas of moderate susceptibility, and more than twice as high as in areas of high susceptibility. The results from the Three Gorges study area suggest that the extracted expert knowledge can be extrapolated to another study area and that the developed approach can be used in large-scale projects. Results from these case studies suggest that the expert knowledge-based approach is effective in mapping landslide susceptibility and that its performance is maintained when it is moved from the model development area to a new area without changes to the knowledge base.
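The z-score evaluation described above is straightforward to reproduce on toy data. The sketch below compares mean susceptibility at landslide sites against the study-area mean; all values are invented and this is not the study's code.

```python
# Minimal sketch of the evaluation idea: a z-test of whether mean susceptibility at
# landslide sites exceeds the mean over the whole study area.

import math

area_values = [0.12, 0.30, 0.45, 0.22, 0.50, 0.65, 0.18, 0.40, 0.70, 0.28]  # all cells
landslide_values = [0.62, 0.71, 0.55, 0.68]                                 # landslide cells

mu = sum(area_values) / len(area_values)
sigma = math.sqrt(sum((x - mu) ** 2 for x in area_values) / len(area_values))

n = len(landslide_values)
z = (sum(landslide_values) / n - mu) / (sigma / math.sqrt(n))
print(f"z = {z:.2f}")   # large positive z => landslide sites sit in high-susceptibility cells
```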
Information, intelligence, and interface: the pillars of a successful medical information system.
Hadzikadic, M; Harrington, A L; Bohren, B F
1995-01-01
This paper addresses three key issues facing developers of clinical and/or research medical information systems. 1. INFORMATION. The basic function of every database is to store information about the phenomenon under investigation. There are many ways to organize information in a computer; however, only a few will prove optimal for any real-life situation. Computer Science theory has developed several approaches to database structure, with relational theory leading in popularity among end users [8]. Strict conformance to the rules of relational database design rewards the user with consistent data and flexible access to that data. A properly defined database structure minimizes redundancy, i.e., multiple storage of the same information. Redundancy introduces problems when updating a database, since the repeated value has to be updated in all locations--missing even a single value corrupts the whole database, and incorrect reports are produced [8]. To avoid such problems, relational theory offers a formal mechanism for determining the number and content of data files. These files not only preserve the conceptual schema of the application domain, but allow a virtually unlimited number of reports to be efficiently generated. 2. INTELLIGENCE. Flexible access enables the user to harvest additional value from collected data. This value is usually gained via reports defined at the time of database design. Although these reports are indispensable, with proper tools more information can be extracted from the database. For example, machine learning, a sub-discipline of artificial intelligence, has been successfully used to extract knowledge from databases of varying size by uncovering correlations among fields and records [1-6, 9]. This knowledge, represented in the form of decision trees, production rules, and probabilistic networks, clearly adds a flavor of intelligence to the data collection and manipulation system. 3. INTERFACE. Despite the obvious importance of collecting data and extracting knowledge, current systems often impede these processes. Problems stem from a lack of user friendliness and functionality. To overcome these problems, several features of a successful human-computer interface have been identified [7], including the following "golden" rules of dialog design [7]: consistency, use of shortcuts for frequent users, informative feedback, organized sequence of actions, simple error handling, easy reversal of actions, user-oriented focus of control, and reduced short-term memory load. To this list of rules, we added visual representation of both data and query results, since our experience has demonstrated that users react much more positively to visual rather than textual information. In our design of the Orthopaedic Trauma Registry--under development at the Carolinas Medical Center--we have made every effort to follow the above rules. The results were rewarding--the end users not only want to use the product, but also to participate in its development.
Kötter, Thomas; Bartel, Carmen; Schramm, Susanne; Lange, Petra; Höfer, Eva; Hänsel, Michaela; Waffenschmidt, Siw; Waldt, Susanne Ein; Hoffmann-Eßer, Wiebke; Rüther, Alric; Lühmann, Dagmar; Scherer, Martin
2013-01-01
Disease Management Programmes (DMPs) are structured treatment programmes for chronic diseases. The DMP requirements are primarily derived from evidence-based guidelines. DMPs are regularly revised to ensure that they reflect current best practice and medical knowledge. The aim of this study was to assess the need for updating the German DMP module on heart failure by comparing it to relevant guidelines and identifying recommendations that should be revised. We systematically searched for clinical guidelines on heart failure published in German, English or French, and extracted relevant guideline recommendations. All included guidelines were assessed for methodological quality. To identify revision needs in the DMP, we performed a synoptic analysis of the extracted guideline recommendations and DMP requirements. 27 guidelines were included. The extracted recommendations covered all aspects of the management of heart failure. The comparison of guideline recommendations with DMP requirements showed that, overall, guideline recommendations were more detailed than DMP requirements, and that the guidelines covered topics not included in the DMP module. The DMP module is largely consistent with current guidelines on heart failure. We did not identify any need for significant revision of the DMP requirements. However, some specific recommendations of the DMP module could benefit from revision. Copyright © 2013. Published by Elsevier GmbH.
Analysis of a Knowledge-Management-Based Process of Transferring Project Management Skills
ERIC Educational Resources Information Center
Ioi, Toshihiro; Ono, Masakazu; Ishii, Kota; Kato, Kazuhiko
2012-01-01
Purpose: The purpose of this paper is to propose a method for the transfer of knowledge and skills in project management (PM) based on techniques in knowledge management (KM). Design/methodology/approach: The literature contains studies on methods to extract experiential knowledge in PM, but few studies exist that focus on methods to convert…
PREDOSE: A Semantic Web Platform for Drug Abuse Epidemiology using Social Media
Cameron, Delroy; Smith, Gary A.; Daniulaityte, Raminta; Sheth, Amit P.; Dave, Drashti; Chen, Lu; Anand, Gaurish; Carlson, Robert; Watkins, Kera Z.; Falck, Russel
2013-01-01
Objectives The role of social media in biomedical knowledge mining, including clinical, medical and healthcare informatics, prescription drug abuse epidemiology and drug pharmacology, has become increasingly significant in recent years. Social media offers opportunities for people to share opinions and experiences freely in online communities, which may contribute information beyond the knowledge of domain professionals. This paper describes the development of a novel Semantic Web platform called PREDOSE (PREscription Drug abuse Online Surveillance and Epidemiology), which is designed to facilitate the epidemiologic study of prescription (and related) drug abuse practices using social media. PREDOSE uses web forum posts and domain knowledge, modeled in a manually created Drug Abuse Ontology (DAO) (pronounced dow), to facilitate the extraction of semantic information from User Generated Content (UGC). A combination of lexical, pattern-based and semantics-based techniques is used together with the domain knowledge to extract fine-grained semantic information from UGC. In a previous study, PREDOSE was used to obtain the datasets from which new knowledge in drug abuse research was derived. Here, we report on various platform enhancements, including an updated DAO, new components for relationship and triple extraction, and tools for content analysis, trend detection and emerging patterns exploration, which enhance the capabilities of the PREDOSE platform. Given these enhancements, PREDOSE is now more equipped to impact drug abuse research by alleviating traditional labor-intensive content analysis tasks. Methods Using custom web crawlers that scrape UGC from publicly available web forums, PREDOSE first automates the collection of web-based social media content for subsequent semantic annotation. The annotation scheme is modeled in the DAO, and includes domain specific knowledge such as prescription (and related) drugs, methods of preparation, side effects, routes of administration, etc. The DAO is also used to help recognize three types of data, namely: 1) entities, 2) relationships and 3) triples. PREDOSE then uses a combination of lexical and semantic-based techniques to extract entities and relationships from the scraped content, and a top-down approach for triple extraction that uses patterns expressed in the DAO. In addition, PREDOSE uses publicly available lexicons to identify initial sentiment expressions in text, and then a probabilistic optimization algorithm (from related research) to extract the final sentiment expressions. Together, these techniques enable the capture of fine-grained semantic information from UGC, and querying, search, trend analysis and overall content analysis of social media related to prescription drug abuse. Moreover, extracted data are also made available to domain experts for the creation of training and test sets for use in evaluation and refinements in information extraction techniques. Results A recent evaluation of the information extraction techniques applied in the PREDOSE platform indicates 85% precision and 72% recall in entity identification, on a manually created gold standard dataset. In another study, PREDOSE achieved 36% precision in relationship identification and 33% precision in triple extraction, through manual evaluation by domain experts. Given the complexity of the relationship and triple extraction tasks and the abstruse nature of social media texts, we interpret these as favorable initial results. 
Extracted semantic information is currently in use in an online discovery support system, by prescription drug abuse researchers at the Center for Interventions, Treatment and Addictions Research (CITAR) at Wright State University. Conclusion A comprehensive platform for entity, relationship, triple and sentiment extraction from such abstruse texts has never been developed for drug abuse research. PREDOSE has already demonstrated the importance of mining social media by providing data from which new findings in drug abuse research were uncovered. Given the recent platform enhancements, including the refined DAO, components for relationship and triple extraction, and tools for content, trend and emerging pattern analysis, it is expected that PREDOSE will play a significant role in advancing drug abuse epidemiology in future. PMID:23892295
Validation of an Information-Motivation-Behavioral Skills model of diabetes self-care (IMB-DSC).
Osborn, Chandra Y; Egede, Leonard E
2010-04-01
Comprehensive behavior change frameworks are needed to provide guidance for the design, implementation, and evaluation of diabetes self-care programs in diverse populations. We applied the Information-Motivation-Behavioral Skills (IMB) model, a well-validated, comprehensive health behavior change framework, to diabetes self-care. Patients with diabetes were recruited from an outpatient clinic. Information gathered pertained to demographics, diabetes knowledge (information); diabetes fatalism (personal motivation); social support (social motivation); and diabetes self-care (behavior). Hemoglobin A1C values were extracted from the patient medical record. Structural equation models tested the IMB framework. More diabetes knowledge (r=0.22, p<0.05), less fatalistic attitudes (r=-0.20, p<0.05), and more social support (r=0.27, p<0.01) were independent, direct predictors of diabetes self-care behavior; and, through behavior, were related to glycemic control (r=-0.20, p<0.05). Consistent with the IMB model, having more information (more diabetes knowledge), personal motivation (less fatalistic attitudes), and social motivation (more social support) was associated with behavior; and behavior was the sole predictor of glycemic control. The IMB model is an appropriate, comprehensive health behavior change framework for diabetes self-care. The findings indicate that in addition to knowledge, diabetes education programs should target personal and social motivation to effect behavior change. 2009 Elsevier Ireland Ltd. All rights reserved.
Application of AI techniques to infer vegetation characteristics from directional reflectance(s)
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Smith, J. A.; Harrison, P. A.; Harrison, P. R.
1994-01-01
Traditionally, the remote sensing community has relied totally on spectral knowledge to extract vegetation characteristics. However, there are other knowledge bases (KB's) that can be used to significantly improve the accuracy and robustness of inference techniques. Using AI (artificial intelligence) techniques, a KB system (VEG) was developed that integrates input spectral measurements with diverse KB's. These KB's consist of data sets of directional reflectance measurements, knowledge from literature, and knowledge from experts, which are combined into an intelligent and efficient system for making vegetation inferences. VEG accepts spectral data of an unknown target as input, determines the best techniques for inferring the desired vegetation characteristic(s), applies the techniques to the target data, and provides a rigorous estimate of the accuracy of the inference. VEG was developed to: infer spectral hemispherical reflectance from any combination of nadir and/or off-nadir view angles; infer percent ground cover from any combination of nadir and/or off-nadir view angles; infer unknown view angle(s) from known view angle(s) (known as view angle extension); and discriminate between user-defined vegetation classes using spectral and directional reflectance relationships developed from an automated learning algorithm. The errors for these techniques were generally low, ranging between 2% and 15% (proportional root mean square). The system is designed to aid scientists in developing, testing, and applying new inference techniques using directional reflectance data.
KneeTex: an ontology-driven system for information extraction from MRI reports.
Spasić, Irena; Zhao, Bo; Jones, Christopher B; Button, Kate
2015-01-01
In the realm of knee pathology, magnetic resonance imaging (MRI) has the advantage of visualising all structures within the knee joint, which makes it a valuable tool for increasing diagnostic accuracy and planning surgical treatments. Therefore, clinical narratives found in MRI reports convey valuable diagnostic information. A range of studies have proven the feasibility of natural language processing for information extraction from clinical narratives. However, no study focused specifically on MRI reports in relation to knee pathology, possibly due to the complexity of knee anatomy and a wide range of conditions that may be associated with different anatomical entities. In this paper we describe KneeTex, an information extraction system that operates in this domain. As an ontology-driven information extraction system, KneeTex makes active use of an ontology to strongly guide and constrain text analysis. We used automatic term recognition to facilitate the development of a domain-specific ontology with sufficient detail and coverage for text mining applications. In combination with the ontology, high regularity of the sublanguage used in knee MRI reports allowed us to model its processing by a set of sophisticated lexico-semantic rules with minimal syntactic analysis. The main processing steps involve named entity recognition combined with coordination, enumeration, ambiguity and co-reference resolution, followed by text segmentation. Ontology-based semantic typing is then used to drive the template filling process. We adopted an existing ontology, TRAK (Taxonomy for RehAbilitation of Knee conditions), for use within KneeTex. The original TRAK ontology expanded from 1,292 concepts, 1,720 synonyms and 518 relationship instances to 1,621 concepts, 2,550 synonyms and 560 relationship instances. This provided KneeTex with a very fine-grained lexico-semantic knowledge base, which is highly attuned to the given sublanguage. Information extraction results were evaluated on a test set of 100 MRI reports. A gold standard consisted of 1,259 filled template records with the following slots: finding, finding qualifier, negation, certainty, anatomy and anatomy qualifier. KneeTex extracted information with precision of 98.00 %, recall of 97.63 % and F-measure of 97.81 %, the values of which are in line with human-like performance. KneeTex is an open-source, stand-alone application for information extraction from narrative reports that describe an MRI scan of the knee. Given an MRI report as input, the system outputs the corresponding clinical findings in the form of JavaScript Object Notation objects. The extracted information is mapped onto TRAK, an ontology that formally models knowledge relevant for the rehabilitation of knee conditions. As a result, formally structured and coded information allows for complex searches to be conducted efficiently over the original MRI reports, thereby effectively supporting epidemiologic studies of knee conditions.
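As a toy illustration of ontology-driven semantic typing and template filling of the kind KneeTex performs, the sketch below assigns semantic types from a miniature, made-up TRAK-like dictionary and fills a single report template with naive negation detection; the real system's lexico-semantic rules and slot set are far richer.

```python
# Minimal sketch (not the actual KneeTex pipeline): dictionary-based semantic typing
# of terms from a TRAK-like ontology, followed by naive template filling with
# negation detection. Ontology entries and the report sentence are illustrative.
ONTOLOGY = {
    "anterior cruciate ligament": "anatomy",
    "medial meniscus": "anatomy",
    "tear": "finding",
    "effusion": "finding",
    "complete": "finding_qualifier",
}
NEGATION_CUES = {"no", "without", "absent"}

def fill_template(sentence: str):
    tokens = sentence.lower().replace(",", " ").split()
    template = {"finding": None, "finding_qualifier": None,
                "negation": any(cue in tokens for cue in NEGATION_CUES),
                "anatomy": None}
    text = " ".join(tokens)
    for term, semantic_type in ONTOLOGY.items():
        if term in text and semantic_type in template:
            template[semantic_type] = term
    return template

print(fill_template("No tear of the anterior cruciate ligament."))
```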
Kwok, Cannas; Pillay, Rona; Lee, Chun Fan
2016-01-01
Indian women have been consistently reported as having low participation in breast cancer screening practices. A valid and reliable instrument to explore their breast cancer beliefs is essential for development of interventions to promote breast cancer screening practices. The aim of this study was to report the psychometric properties of the Breast Cancer Screening Beliefs Questionnaire (BCSBQ) in an Indian community in Australia. A convenience sample of 242 Indian Australian women was recruited from Indian community organizations and personal networking. Exploratory factor analysis was conducted to study the factor structure. Clinical validity was examined by Cuzick's nonparametric test, and Cronbach's α was used to assess internal consistency reliability. Exploratory factor analysis showed a similar fit to the hypothesized 3-factor structure. The frequency of breast cancer screening practices was significantly associated with attitudes toward general health check-up. Knowledge and perceptions about the breast cancer scale were not significantly associated with clinical breast examinations and mammography. Perceived barriers to mammography were much less evident among women who engaged in breast awareness and clinical breast examination. Results indicated that the BCSBQ had satisfactory validity and internal consistency. Cronbach's α of the 3 subscales ranged from .81 to .91. The BCSBQ is a culturally appropriate, valid, and reliable instrument for assessing the beliefs, knowledge, and attitudes about breast cancer and breast cancer screening practices among women of Indian ethnic extraction living in Australia. The BCSBQ can be used to provide nurses with information relevant for the development of culturally sensitive breast health education programs.
Scurlock-Evans, Laura; Upton, Penney; Upton, Dominic
2014-09-01
Despite clear benefits of the Evidence-Based Practice (EBP) approach to ensuring quality and consistency of care, its uptake within physiotherapy has been inconsistent. Synthesise the findings of research into EBP barriers, facilitators and interventions in physiotherapy and identify methods of enhancing adoption and implementation. Literature concerning physiotherapists' practice between 2000 and 2012 was systematically searched using: Academic Search Complete, Cumulative Index of Nursing and Allied Health Literature Plus, American Psychological Association databases, Medline, Journal Storage, and Science Direct. Reference lists were searched to identify additional studies. Thirty-two studies, focusing either on physiotherapists' EBP knowledge, attitudes or implementation, or EBP interventions in physiotherapy were included. One author undertook all data extraction and a second author reviewed to ensure consistency and rigour. Synthesis was organised around the themes of EBP barriers/enablers, attitudes, knowledge/skills, use and interventions. Many physiotherapists hold positive attitudes towards EBP. However, this does not necessarily translate into consistent, high-quality EBP. Many barriers to EBP implementation are apparent, including: lack of time and skills, and misperceptions of EBP. Only studies published in the English language, in peer-reviewed journals were included, thereby introducing possible publication bias. Furthermore, narrative synthesis may be subject to greater confirmation bias. There is no "one-size fits all" approach to enhancing EBP implementation; assessing organisational culture prior to designing interventions is crucial. Although some interventions appear promising, further research is required to explore the most effective methods of supporting physiotherapists' adoption of EBP. Copyright © 2014 Chartered Society of Physiotherapy. Published by Elsevier Ltd. All rights reserved.
High precision automated face localization in thermal images: oral cancer dataset as test case
NASA Astrophysics Data System (ADS)
Chakraborty, M.; Raman, S. K.; Mukhopadhyay, S.; Patsa, S.; Anjum, N.; Ray, J. G.
2017-02-01
Automated face detection is the pivotal step in computer-vision-aided facial medical diagnosis and biometrics. This paper presents an automatic, subject-adaptive framework for accurate face detection in the long infrared spectrum on our database for oral cancer detection, consisting of malignant, precancerous and normal subjects of varied age groups. Previous works on oral cancer detection using Digital Infrared Thermal Imaging (DITI) reveal that patients and normal subjects differ significantly in their facial thermal distribution. Therefore, it is a challenging task to formulate a completely adaptive framework to accurately localize the face in such a subject-specific modality. Our model consists of first extracting the most probable facial regions by minimum error thresholding, followed by adaptive methods that leverage the horizontal and vertical projections of the segmented thermal image. Additionally, the model incorporates our domain knowledge of exploiting temperature differences between strategic locations of the face. To the best of our knowledge, this is the pioneering work on detecting faces in thermal facial images comprising both patients and normal subjects. Previous works on face detection have not specifically targeted automated medical diagnosis; the face bounding boxes returned by those algorithms are thus loose and not apt for further medical automation. Our algorithm significantly outperforms contemporary face detection algorithms in terms of commonly used metrics for evaluating face detection accuracy. Since our method has been tested on a challenging dataset consisting of both patients and normal subjects of diverse age groups, it can be seamlessly adapted to any DITI-guided facial healthcare or biometric application.
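The localization step described above (thresholding followed by projection profiles) can be illustrated with the sketch below; Otsu's method is used here as a simple stand-in for minimum-error thresholding, and the 10% projection cut-off is an arbitrary illustrative choice, not the authors' adaptive procedure.

```python
import numpy as np

# Illustrative sketch of the projection-based localization idea (not the authors' code):
# threshold a thermal image to keep the warmest (most probable facial) pixels, then use
# horizontal and vertical projection profiles of the binary mask to bound the face.
def otsu_threshold(img: np.ndarray) -> float:
    hist, bin_edges = np.histogram(img.ravel(), bins=256)
    prob = hist.astype(float) / hist.sum()
    bin_centers = (bin_edges[:-1] + bin_edges[1:]) / 2
    best_t, best_var = bin_centers[0], -1.0
    for i in range(1, 256):
        w0, w1 = prob[:i].sum(), prob[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (prob[:i] * bin_centers[:i]).sum() / w0
        mu1 = (prob[i:] * bin_centers[i:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, bin_centers[i]
    return best_t

def localize_face(thermal: np.ndarray):
    mask = thermal > otsu_threshold(thermal)
    rows = mask.sum(axis=1)   # horizontal projection profile
    cols = mask.sum(axis=0)   # vertical projection profile
    r = np.flatnonzero(rows > 0.1 * rows.max())
    c = np.flatnonzero(cols > 0.1 * cols.max())
    return (r[0], r[-1], c[0], c[-1])  # top, bottom, left, right bounds
```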
Rule Extracting based on MCG with its Application in Helicopter Power Train Fault Diagnosis
NASA Astrophysics Data System (ADS)
Wang, M.; Hu, N. Q.; Qin, G. J.
2011-07-01
In order to extract decision rules for fault diagnosis from incomplete historical test records, for knowledge-based damage assessment of helicopter power train structures, a method that can directly extract the optimal generalized decision rules from incomplete information based on granular computing (GrC) was proposed. Based on semantic analysis of unknown attribute values, the granule was extended to handle incomplete information. The maximum characteristic granule (MCG) was defined based on the characteristic relation, and the MCG was used to construct the resolution function matrix. The optimal generalized decision rule was introduced, and, using the basic equivalent forms of propositional logic, the rules were extracted and reduced from the incomplete information table. Combined with a fault diagnosis example for a power train, the application approach of the method was presented, and the validity of the method for knowledge acquisition was demonstrated.
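One common, simplified formulation of the underlying idea is shown below: in an incomplete decision table with '*' for unknown values, a compatibility (tolerance) relation groups objects into granules. This is only a sketch of the general mechanism; the paper's characteristic relation, MCG construction and resolution-function matrix are more elaborate, and the table values are invented.

```python
# Illustrative sketch of granulation over an incomplete decision table.
# '*' denotes a missing attribute value; two objects are compatible if they agree
# on every attribute where both values are known.
table = {
    "x1": {"vibration": "high", "temp": "*",    "decision": "fault"},
    "x2": {"vibration": "high", "temp": "hot",  "decision": "fault"},
    "x3": {"vibration": "low",  "temp": "cool", "decision": "normal"},
}
attributes = ["vibration", "temp"]

def compatible(a, b):
    return all(a[c] == "*" or b[c] == "*" or a[c] == b[c] for c in attributes)

def granule(obj):
    """Granule of obj: all objects compatible with it under the tolerance relation."""
    return {name for name, row in table.items() if compatible(table[obj], row)}

for name in table:
    print(name, granule(name))
```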
Extracting semantically enriched events from biomedical literature
2012-01-01
Background Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Results Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP’09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP’09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. Conclusions We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare. PMID:22621266
Extracting semantically enriched events from biomedical literature.
Miwa, Makoto; Thompson, Paul; McNaught, John; Kell, Douglas B; Ananiadou, Sophia
2012-05-23
Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP'09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP'09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare.
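A minimal illustration of assigning meta-knowledge dimensions to extracted events is sketched below, using simple lexical cue lists for negation and speculation; EventMine-MK itself uses trained machine-learning classifiers over five meta-knowledge types, so this is only a conceptual stand-in, and the cue lists and example sentence are assumptions.

```python
# Cue-based toy version of meta-knowledge assignment for an extracted event.
NEGATION_CUES = {"not", "no", "failed", "absence", "lack"}
SPECULATION_CUES = {"may", "might", "suggest", "possibly", "appears"}

def assign_meta_knowledge(sentence: str, trigger: str):
    tokens = sentence.lower().split()
    return {
        "event_trigger": trigger,
        "negated": any(cue in tokens for cue in NEGATION_CUES),
        "speculated": any(cue in tokens for cue in SPECULATION_CUES),
    }

print(assign_meta_knowledge(
    "These results suggest that p53 may not activate transcription of MDM2.",
    trigger="activate"))
```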
Institutional challenges for mining and sustainability in Peru.
Bebbington, Anthony J; Bury, Jeffrey T
2009-10-13
Global consumption continues to generate growth in mining. In lesser developed economies, this growth offers the potential to generate new resources for development, but also creates challenges to sustainability in the regions in which extraction occurs. This context leads to debate on the institutional arrangements most likely to build synergies between mining, livelihoods, and development, and on the socio-political conditions under which such institutions can emerge. Building from a multiyear, three-country program of research projects, Peru, a global center of mining expansion, serves as an exemplar for analyzing the effects of extractive industry on livelihoods and the conditions under which arrangements favoring local sustainability might emerge. This program is guided by three emergent hypotheses in human-environmental sciences regarding the relationships among institutions, knowledge, learning, and sustainability. The research combines in-depth and comparative case study analysis, and uses mapping and spatial analysis, surveys, in-depth interviews, participant observation, and our own direct participation in public debates on the regulation of mining for development. The findings demonstrate the pressures that mining expansion has placed on water resources, livelihood assets, and social relationships. These pressures are a result of institutional conditions that separate the governance of mineral expansion, water resources, and local development, and of relationships of power that prioritize large scale investment over livelihood and environment. A further problem is the poor communication between mining sector knowledge systems and those of local populations. These results are consistent with themes recently elaborated in sustainability science.
Multiresolution Approach for Noncontact Measurements of Arterial Pulse Using Thermal Imaging
NASA Astrophysics Data System (ADS)
Chekmenev, Sergey Y.; Farag, Aly A.; Miller, William M.; Essock, Edward A.; Bhatnagar, Aruni
This chapter presents a novel computer vision methodology for noncontact and nonintrusive measurements of arterial pulse. This is the only investigation that links the knowledge of human physiology and anatomy, advances in thermal infrared (IR) imaging and computer vision to produce noncontact and nonintrusive measurements of the arterial pulse in both time and frequency domains. The proposed approach has a physical and physiological basis and as such is of a fundamental nature. A thermal IR camera was used to capture the heat pattern from superficial arteries, and a blood vessel model was proposed to describe the pulsatile nature of the blood flow. A multiresolution wavelet-based signal analysis approach was applied to extract the arterial pulse waveform, which lends itself to various physiological measurements. We validated our results using a traditional contact vital signs monitor as a ground truth. Eight people of different ages, races and genders were tested in our study, consistent with Health Insurance Portability and Accountability Act (HIPAA) regulations and institutional review board approval. The resultant arterial pulse waveforms exactly matched the ground truth oximetry readings. The essence of our approach is the automatic detection of the region of measurement (ROM) of the arterial pulse, from which the arterial pulse waveform is extracted. To the best of our knowledge, the correspondence between noncontact thermal IR imaging-based measurements of the arterial pulse in the time domain and traditional contact approaches has never been reported in the literature.
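The multiresolution step can be illustrated as below: a discrete wavelet decomposition of a temperature signal from the region of measurement, keeping only the detail levels whose band plausibly contains the cardiac frequency. The sampling rate, wavelet, and retained levels are assumptions for illustration, not the chapter's actual settings.

```python
import numpy as np
import pywt  # PyWavelets

# Simplified sketch of multiresolution extraction of a pulse waveform from a
# skin-temperature signal: decompose, discard slow drift and fine noise, reconstruct.
fs = 30.0                          # frames per second of the thermal video (assumed)
t = np.arange(0, 20, 1 / fs)
signal = 0.05 * np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 0.05 * t) \
         + 0.02 * np.random.randn(t.size)   # ~72 bpm pulse + drift + noise

coeffs = pywt.wavedec(signal, "db4", level=5)
# coeffs = [cA5, cD5, cD4, cD3, cD2, cD1]; zero the approximation (drift) and the
# finest details (noise), keep the mid levels covering roughly 0.9-3.8 Hz at fs=30 Hz.
filtered = [np.zeros_like(c) for c in coeffs]
for keep in (2, 3):
    filtered[keep] = coeffs[keep]
pulse_waveform = pywt.waverec(filtered, "db4")[: signal.size]
```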
Institutional challenges for mining and sustainability in Peru
Bebbington, Anthony J.; Bury, Jeffrey T.
2009-01-01
Global consumption continues to generate growth in mining. In lesser developed economies, this growth offers the potential to generate new resources for development, but also creates challenges to sustainability in the regions in which extraction occurs. This context leads to debate on the institutional arrangements most likely to build synergies between mining, livelihoods, and development, and on the socio-political conditions under which such institutions can emerge. Building from a multiyear, three-country program of research projects, Peru, a global center of mining expansion, serves as an exemplar for analyzing the effects of extractive industry on livelihoods and the conditions under which arrangements favoring local sustainability might emerge. This program is guided by three emergent hypotheses in human-environmental sciences regarding the relationships among institutions, knowledge, learning, and sustainability. The research combines in-depth and comparative case study analysis, and uses mapping and spatial analysis, surveys, in-depth interviews, participant observation, and our own direct participation in public debates on the regulation of mining for development. The findings demonstrate the pressures that mining expansion has placed on water resources, livelihood assets, and social relationships. These pressures are a result of institutional conditions that separate the governance of mineral expansion, water resources, and local development, and of relationships of power that prioritize large scale investment over livelihood and environment. A further problem is the poor communication between mining sector knowledge systems and those of local populations. These results are consistent with themes recently elaborated in sustainability science. PMID:19805172
Medical knowledge discovery and management.
Prior, Fred
2009-05-01
Although the volume of medical information is growing rapidly, the ability to rapidly convert this data into "actionable insights" and new medical knowledge is lagging far behind. The first step in the knowledge discovery process is data management and integration, which logically can be accomplished through the application of data warehouse technologies. A key insight that arises from efforts in biosurveillance and the global scope of military medicine is that information must be integrated over both time (longitudinal health records) and space (spatial localization of health-related events). Once data are compiled and integrated it is essential to encode the semantics and relationships among data elements through the use of ontologies and semantic web technologies to convert data into knowledge. Medical images form a special class of health-related information. Traditionally knowledge has been extracted from images by human observation and encoded via controlled terminologies. This approach is rapidly being replaced by quantitative analyses that more reliably support knowledge extraction. The goals of knowledge discovery are the improvement of both the timeliness and accuracy of medical decision making and the identification of new procedures and therapies.
Concept of operations for knowledge discovery from Big Data across enterprise data warehouses
NASA Astrophysics Data System (ADS)
Sukumar, Sreenivas R.; Olama, Mohammed M.; McNair, Allen W.; Nutaro, James J.
2013-05-01
The success of data-driven business in government, science, and private industry is driving the need for seamless integration of intra and inter-enterprise data sources to extract knowledge nuggets in the form of correlations, trends, patterns and behaviors previously not discovered due to physical and logical separation of datasets. Today, as volume, velocity, variety and complexity of enterprise data keeps increasing, the next generation analysts are facing several challenges in the knowledge extraction process. Towards addressing these challenges, data-driven organizations that rely on the success of their analysts have to make investment decisions for sustainable data/information systems and knowledge discovery. Options that organizations are considering are newer storage/analysis architectures, better analysis machines, redesigned analysis algorithms, collaborative knowledge management tools, and query builders amongst many others. In this paper, we present a concept of operations for enabling knowledge discovery that data-driven organizations can leverage towards making their investment decisions. We base our recommendations on the experience gained from integrating multi-agency enterprise data warehouses at the Oak Ridge National Laboratory to design the foundation of future knowledge nurturing data-system architectures.
DNA Extraction Techniques for Use in Education
ERIC Educational Resources Information Center
Hearn, R. P.; Arblaster, K. E.
2010-01-01
DNA extraction provides a hands-on introduction to DNA and enables students to gain real life experience and practical knowledge of DNA. Students gain a sense of ownership and are more enthusiastic when they use their own DNA. A cost effective, simple protocol for DNA extraction and visualization was devised. Buccal mucosal epithelia provide a…
Sensitization to fragrance materials in Indonesian cosmetics.
Roesyanto-Mahadi, I D; Geursen-Reitsma, A M; van Joost, T; van den Akker, T W
1990-04-01
2 different groups of patients were patch tested with 2 test series (A and B) containing extracts of fragrance raw materials, traditionally used in Indonesian cosmetics. Series A consisted of diluted extracts of commercially available Indonesian fragrances. Series B consisted of extracts prepared in our department from corresponding indigenous flowers and fruits. Group 1 consisted of 32 patients positive to fragrance-mix, of whom 8 (25%) had positive tests to 1 or more of the different extracts of fragrance raw materials. Reactions were observed to extracts of: Rosa hybrida Hort (7); Canangium odoratum Baill (5); Citrus aurantifolia Swingle (4); Jasminum sambac Ait (2). 6 of the 8 patients had reactions to 1 or more of the components of fragrance-mix: oakmoss (3); cinnamic alcohol (2), isoeugenol (1); cinnamic aldehyde (1) and geraniol (1). Group 2 consisted of 159 patients patch tested on suspicion of contact dermatitis, who were fragrance-mix negative. Only 2 (1.2%) had a positive patch test to the extracts of fragrance raw materials. Specimens taken (as is) from the flowers and citrus fruits (being the basis sources of the fragrance raw materials) were less antigenic. The use of additional test series in Indonesia to detect allergy to traditional cosmetics and perfumes merits further investigation.
Analysis of x-ray hand images for bone age assessment
NASA Astrophysics Data System (ADS)
Serrat, Joan; Vitria, Jordi M.; Villanueva, Juan J.
1990-09-01
In this paper we describe a model-based system for the assessment of skeletal maturity on hand radiographs by the TW2 method. The problem consists in classifying each of a set of bones appearing in an image into one of several stages described in an atlas. A first approach, consisting of independent pre-processing, segmentation and classification phases, is also presented. However, it is only well suited to well-contrasted, low-noise images without superimposed bones, where edge detection by zero crossings of second directional derivatives is able to extract all bone contours, perhaps with small gaps and few false edges in the background. Hence, the use of all available knowledge about the problem domain is needed to build a more general system. We have designed a rule-based system to narrow down the range of possible stages for each bone and guide the analysis process. It calls procedures written in conventional languages to match stage models against the image and obtain the features needed in the classification process.
Neural network explanation using inversion.
Saad, Emad W; Wunsch, Donald C
2007-01-01
An important drawback of many artificial neural networks (ANN) is their lack of explanation capability [Andrews, R., Diederich, J., & Tickle, A. B. (1996). A survey and critique of techniques for extracting rules from trained artificial neural networks. Knowledge-Based Systems, 8, 373-389]. This paper starts with a survey of algorithms which attempt to explain the ANN output. We then present HYPINV, a new explanation algorithm which relies on network inversion, i.e., calculating the ANN input which produces a desired output. HYPINV is a pedagogical algorithm that extracts rules in the form of hyperplanes. It is able to generate rules with arbitrarily desired fidelity, maintaining a fidelity-complexity tradeoff. To our knowledge, HYPINV is the only pedagogical rule extraction method that extracts hyperplane rules from continuous- or binary-attribute neural networks. Different network inversion techniques, involving gradient descent as well as an evolutionary algorithm, are presented. An information theoretic treatment of rule extraction is presented. HYPINV is applied to example synthetic problems, to a real aerospace problem, and compared with similar algorithms using benchmark problems.
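The network-inversion idea at the core of HYPINV can be sketched as below: hold the trained weights fixed and run gradient descent on the input until the network output reaches a target value. The toy network and target are invented, and the hyperplane-rule extraction that HYPINV builds on top of inversion is not shown.

```python
import torch

# Toy illustration of network inversion by gradient descent: find an input x for which
# a (fixed) network produces a desired output y_target.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1), torch.nn.Sigmoid())

y_target = torch.tensor([0.9])
x = torch.zeros(2, requires_grad=True)        # start the search from the origin
opt = torch.optim.SGD([x], lr=0.5)            # only x is optimised; weights stay fixed
for _ in range(500):
    opt.zero_grad()
    loss = (net(x) - y_target).pow(2).sum()
    loss.backward()                            # gradient with respect to the *input*
    opt.step()

print("inverted input:", x.detach(), "network output:", net(x).item())
```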
Induction of belief decision trees from data
NASA Astrophysics Data System (ADS)
AbuDahab, Khalil; Xu, Dong-ling; Keane, John
2012-09-01
In this paper, a method for acquiring belief rule-bases by inductive inference from data is described and evaluated. Existing methods extract traditional rules inductively from data, with consequents that are believed to be either 100% true or 100% false. Belief rules can capture uncertain or incomplete knowledge using uncertain belief degrees in consequents. Instead of using single-valued consequents, each belief rule deals with a set of collectively exhaustive and mutually exclusive consequents. The proposed method extracts belief rules from data which contain uncertain or incomplete knowledge.
1992-04-01
[OCR-damaged workshop agenda excerpt] Listed presentations include "Databases for Planning and Scheduling" by Tim Finin and "Extracting Rules from Software for Knowledge-Bases" by Noah S. Prywes; the remainder of the scanned page is not recoverable.
Molecular phylogenetics of mastodon and Tyrannosaurus rex.
Organ, Chris L; Schweitzer, Mary H; Zheng, Wenxia; Freimark, Lisa M; Cantley, Lewis C; Asara, John M
2008-04-25
We report a molecular phylogeny for a nonavian dinosaur, extending our knowledge of trait evolution within nonavian dinosaurs into the macromolecular level of biological organization. Fragments of collagen alpha1(I) and alpha2(I) proteins extracted from fossil bones of Tyrannosaurus rex and Mammut americanum (mastodon) were analyzed with a variety of phylogenetic methods. Despite missing sequence data, the mastodon groups with elephant and the T. rex groups with birds, consistent with predictions based on genetic and morphological data for mastodon and on morphological data for T. rex. Our findings suggest that molecular data from long-extinct organisms may have the potential for resolving relationships at critical areas in the vertebrate evolutionary tree that have, so far, been phylogenetically intractable.
Chemical name extraction based on automatic training data generation and rich feature set.
Yan, Su; Spangler, W Scott; Chen, Ying
2013-01-01
The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier in this task is the difficulty of obtaining sizable, good-quality data to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge on chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language of chemical names. That is, both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.
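A rough sketch of the automatic training-set generation idea is shown below: entries from an (incomplete) dictionary of chemical names are spliced into template sentences, yielding labelled spans for training an extractor. The dictionary entries and templates are invented, and the paper's statistical modelling of name structure is not reproduced.

```python
import random

# Illustrative generation of labelled, "chemical-like" training text from a dictionary.
random.seed(42)
DICTIONARY = ["2-acetoxybenzoic acid", "sodium chloride", "ibuprofen", "ethyl acetate"]
TEMPLATES = [
    "The sample was washed with {} before analysis.",
    "We observed no reaction between {} and the substrate.",
    "{} was added dropwise over 30 minutes.",
]

def generate_document(n_sentences=5):
    """Return (text, annotations) where annotations are (start, end, 'CHEMICAL') spans."""
    sentences, annotations, offset = [], [], 0
    for _ in range(n_sentences):
        name = random.choice(DICTIONARY)
        template = random.choice(TEMPLATES)
        sentence = template.format(name)
        start = offset + template.index("{}")
        annotations.append((start, start + len(name), "CHEMICAL"))
        sentences.append(sentence)
        offset += len(sentence) + 1            # +1 for the joining space
    return " ".join(sentences), annotations

text, spans = generate_document()
```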
Knowledge Discovery in Textual Documentation: Qualitative and Quantitative Analyses.
ERIC Educational Resources Information Center
Loh, Stanley; De Oliveira, Jose Palazzo M.; Gastal, Fabio Leite
2001-01-01
Presents an application of knowledge discovery in texts (KDT) concerning medical records of a psychiatric hospital. The approach helps physicians to extract knowledge about patients and diseases that may be used for epidemiological studies, for training professionals, and to support physicians to diagnose and evaluate diseases. (Author/AEF)
Learning a Health Knowledge Graph from Electronic Medical Records.
Rotmensch, Maya; Halpern, Yoni; Tlimat, Abdulhakim; Horng, Steven; Sontag, David
2017-07-20
Demand for clinical decision support systems in medicine and self-diagnostic symptom checkers has substantially increased in recent years. Existing platforms rely on knowledge bases manually compiled through a labor-intensive process or automatically derived using simple pairwise statistics. This study explored an automated process to learn high quality knowledge bases linking diseases and symptoms directly from electronic medical records. Medical concepts were extracted from 273,174 de-identified patient records and maximum likelihood estimation of three probabilistic models was used to automatically construct knowledge graphs: logistic regression, naive Bayes classifier and a Bayesian network using noisy OR gates. A graph of disease-symptom relationships was elicited from the learned parameters and the constructed knowledge graphs were evaluated and validated, with permission, against Google's manually-constructed knowledge graph and against expert physician opinions. Our study shows that direct and automated construction of high quality health knowledge graphs from medical records using rudimentary concept extraction is feasible. The noisy OR model produces a high quality knowledge graph reaching precision of 0.85 for a recall of 0.6 in the clinical evaluation. Noisy OR significantly outperforms all tested models across evaluation frameworks (p < 0.01).
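The noisy-OR gate mentioned above can be illustrated numerically as below; the causal probabilities and leak value are invented for the example and are not taken from the learned knowledge graph.

```python
# Noisy-OR gate for a symptom given a set of present diseases: each disease d_i
# independently fails to cause the symptom with probability (1 - p_i), and a leak
# term covers unmodelled causes.
def noisy_or(present_disease_probs, leak=0.01):
    """P(symptom present | the given diseases are present)."""
    p_not_caused = 1.0 - leak
    for p in present_disease_probs:
        p_not_caused *= (1.0 - p)
    return 1.0 - p_not_caused

# e.g. disease A causes fever with p=0.8, disease B with p=0.6
print(noisy_or([0.8, 0.6]))   # 1 - 0.99*0.2*0.4 = 0.9208
```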
NASA Astrophysics Data System (ADS)
Garrett, P. E.; Jigmeddorj, B.; Radich, A. J.; Andreoiu, C.; Ball, G. C.; Bangay, J. C.; Bianco, L.; Bildstein, V.; Chagnon-Lessard, S.; Cross, D. S.; Demand, G. A.; Diaz Varela, A.; Dunlop, R.; Finlay, P.; Garnsworthy, A. B.; Green, K. L.; Hackman, G.; Hadinia, B.; Leach, K. G.; Michetti-Wilson, J.; Orce, J. N.; Rajabali, M. M.; Rand, E. T.; Starosta, K.; Sumithrarachchi, C.; Svensson, C. E.; Triambak, S.; Wang, Z. M.; Williams, S. J.; Wood, J. L.; Wong, J.; Yates, S. W.; Zganjar, E. F.
2016-09-01
The 8π spectrometer, located at TRIUMF-ISAC, was the world's most powerful spectrometer dedicated to β-decay studies until its decommissioning in early 2014 for replacement with the GRIFFIN array. An integral part of the 8π spectrometer was the Pentagonal Array for Conversion Electron Spectroscopy (PACES), consisting of 5 Si(Li) detectors used for charged-particle detection. PACES enabled both γ - e- and e- - e- coincidence measurements, which were crucial for increasing the sensitivity for discrete e- lines in the presence of large backgrounds. Examples from a 124Cs decay experiment, where the data were vital for the expansion of the 124Cs decay scheme, are shown. With sufficient statistics, measurements of conversion coefficients can be used to extract the E0 components of Jπ → Jπ transitions for J ≠ 0, which is demonstrated for data obtained in 110In→110Cd decay. With knowledge of the shapes of the states involved, as obtained, for example, from the use of Kumar-Cline shape invariants, the mixing of the states can be extracted.
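For readers unfamiliar with the quantity being measured, the standard definition of an internal conversion coefficient is recalled below (a general nuclear-spectroscopy relation, not a formula quoted from this work):

```latex
% Ratio of conversion-electron to gamma-ray emission intensities for a transition,
% with the total coefficient summed over atomic shells.
\alpha = \frac{I_{e^-}}{I_{\gamma}}, \qquad
\alpha_{\mathrm{tot}} = \alpha_K + \alpha_L + \alpha_M + \cdots
```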
Wu, Zhenyu; Guo, Yang; Lin, Wenfang; Yu, Shuyang; Ji, Yang
2018-04-05
Predictive maintenance plays an important role in modern Cyber-Physical Systems (CPSs) and data-driven methods have been a worthwhile direction for Prognostics Health Management (PHM). However, two main challenges have significant influences on the traditional fault diagnostic models: one is that extracting hand-crafted features from multi-dimensional sensors with internal dependencies depends too much on expertise knowledge; the other is that imbalance pervasively exists among faulty and normal samples. As deep learning models have proved to be good methods for automatic feature extraction, the objective of this paper is to study an optimized deep learning model for imbalanced fault diagnosis for CPSs. Thus, this paper proposes a weighted Long Recurrent Convolutional LSTM model with sampling policy (wLRCL-D) to deal with these challenges. The model consists of 2-layer CNNs, 2-layer inner LSTMs and 2-Layer outer LSTMs, with under-sampling policy and weighted cost-sensitive loss function. Experiments are conducted on PHM 2015 challenge datasets, and the results show that wLRCL-D outperforms other baseline methods.
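A rough architectural sketch in the spirit of the described model is given below (a 2-layer CNN feeding stacked LSTMs, trained with a class-weighted loss); the layer sizes, the collapsing of the inner/outer LSTM split into a single stacked LSTM, and all hyperparameters are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

# Simplified CNN + stacked-LSTM classifier for multivariate sensor windows,
# with a cost-sensitive loss that up-weights the rare faulty class.
class RecurrentConvNet(nn.Module):
    def __init__(self, n_sensors=10, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                      # two convolutional layers
            nn.Conv1d(n_sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU())
        self.lstm = nn.LSTM(64, 64, num_layers=4, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                              # x: (batch, time, sensors)
        z = self.cnn(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(z)
        return self.fc(out[:, -1, :])                  # classify from the last time step

model = RecurrentConvNet()
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 5.0]))  # weight faulty class
x = torch.randn(8, 50, 10)                             # 8 windows, 50 steps, 10 sensors
loss = criterion(model(x), torch.randint(0, 2, (8,)))
```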
Successful medical treatment for globe penetration following tooth extraction in a dog.
Guerreiro, Cleo E; Appelboam, Helen; Lowe, Robert C
2014-03-01
A five-year-old entire male Tibetan Terrier was referred for left-sided periorbital swelling and blepharospasm 4 days following ipsilateral maxillary tooth extraction. Examination of the left eye revealed mild exophthalmos, pain on retropulsion, and absent menace response and pupillary light reflexes. Examination of the posterior segment was not possible owing to the anterior segment pathology. Differential diagnoses considered were iatrogenic globe penetration and peribulbar abscess/cellulitis. Ocular ultrasound was consistent with a penetrating wound to the globe. Treatment with systemic prednisolone and marbofloxacin, and topical atropine sulfate 1%, prednisolone acetate, and brinzolamide was started. Marked clinical improvement allowed visual confirmation of the perforation. Oral prednisolone was tapered over the following 10 weeks. At final re-examination (10 months), the patient was visual, and fundic examination revealed an additional chorioretinal scar, most likely an exit wound that was obscured by vitreal debris on initial examinations. Neither scar was associated with retinal detachment. To the authors' knowledge, this is the first reported case of successful medical management of iatrogenic globe penetration following exodontic procedures. © 2013 American College of Veterinary Ophthalmologists.
Guo, Yang; Lin, Wenfang; Yu, Shuyang; Ji, Yang
2018-01-01
Predictive maintenance plays an important role in modern Cyber-Physical Systems (CPSs) and data-driven methods have been a worthwhile direction for Prognostics Health Management (PHM). However, two main challenges have significant influences on the traditional fault diagnostic models: one is that extracting hand-crafted features from multi-dimensional sensors with internal dependencies depends too much on expertise knowledge; the other is that imbalance pervasively exists among faulty and normal samples. As deep learning models have proved to be good methods for automatic feature extraction, the objective of this paper is to study an optimized deep learning model for imbalanced fault diagnosis for CPSs. Thus, this paper proposes a weighted Long Recurrent Convolutional LSTM model with sampling policy (wLRCL-D) to deal with these challenges. The model consists of 2-layer CNNs, 2-layer inner LSTMs and 2-Layer outer LSTMs, with under-sampling policy and weighted cost-sensitive loss function. Experiments are conducted on PHM 2015 challenge datasets, and the results show that wLRCL-D outperforms other baseline methods. PMID:29621131
A knowledge-driven approach to cluster validity assessment.
Bolshakova, Nadia; Azuaje, Francisco; Cunningham, Pádraig
2005-05-15
This paper presents an approach to assessing cluster validity based on similarity knowledge extracted from the Gene Ontology. The program is freely available for non-profit use on request from the authors.
Kim, Youngwoo; Hong, Byung Woo; Kim, Seung Ja; Kim, Jong Hyo
2014-07-01
A major challenge when distinguishing glandular tissues on mammograms, especially for area-based estimations, lies in determining a boundary on a hazy transition zone from adipose to glandular tissues. This stems from the nature of mammography, which is a projection of superimposed tissues consisting of different structures. In this paper, the authors present a novel segmentation scheme which incorporates the learned prior knowledge of experts into a level set framework for fully automated mammographic density estimations. The authors modeled the learned knowledge as a population-based tissue probability map (PTPM) that was designed to capture the classification of experts' visual systems. The PTPM was constructed using an image database of a selected population consisting of 297 cases. Three mammogram experts extracted regions for dense and fatty tissues on digital mammograms, which was an independent subset used to create a tissue probability map for each ROI based on its local statistics. This tissue class probability was taken as a prior in the Bayesian formulation and was incorporated into a level set framework as an additional term to control the evolution and followed the energy surface designed to reflect experts' knowledge as well as the regional statistics inside and outside of the evolving contour. A subset of 100 digital mammograms, which was not used in constructing the PTPM, was used to validate the performance. The energy was minimized when the initial contour reached the boundary of the dense and fatty tissues, as defined by experts. The correlation coefficient between mammographic density measurements made by experts and measurements by the proposed method was 0.93, while that with the conventional level set was 0.47. The proposed method showed a marked improvement over the conventional level set method in terms of accuracy and reliability. This result suggests that the proposed method successfully incorporated the learned knowledge of the experts' visual systems and has potential to be used as an automated and quantitative tool for estimations of mammographic breast density levels.
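An illustrative (not the paper's exact) form of such an energy is a Chan-Vese-style region functional augmented with a term that rewards agreement with the population-based tissue probability map:

```latex
% H is the Heaviside function of the level set phi, c1/c2 are mean intensities inside
% and outside the contour, and P_dense(x) is the prior tissue probability map (PTPM).
E(\phi) = \lambda_1 \int_{\Omega} (I - c_1)^2 H(\phi)\,dx
        + \lambda_2 \int_{\Omega} (I - c_2)^2 \bigl(1 - H(\phi)\bigr)\,dx
        + \mu \int_{\Omega} |\nabla H(\phi)|\,dx
        - \beta \int_{\Omega} \log P_{\mathrm{dense}}(x)\, H(\phi)\,dx
```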
Ontology-Based Search of Genomic Metadata.
Fernandez, Javier D; Lenzerini, Maurizio; Masseroli, Marco; Venco, Francesco; Ceri, Stefano
2016-01-01
The Encyclopedia of DNA Elements (ENCODE) is a huge and still expanding public repository of more than 4,000 experiments and 25,000 data files, assembled by a large international consortium since 2007; unknown biological knowledge can be extracted from these huge and largely unexplored data, leading to data-driven genomic, transcriptomic, and epigenomic discoveries. Yet, search of relevant datasets for knowledge discovery is limitedly supported: metadata describing ENCODE datasets are quite simple and incomplete, and not described by a coherent underlying ontology. Here, we show how to overcome this limitation, by adopting an ENCODE metadata searching approach which uses high-quality ontological knowledge and state-of-the-art indexing technologies. Specifically, we developed S.O.S. GeM (http://www.bioinformatics.deib.polimi.it/SOSGeM/), a system supporting effective semantic search and retrieval of ENCODE datasets. First, we constructed a Semantic Knowledge Base by starting with concepts extracted from ENCODE metadata, matched to and expanded on biomedical ontologies integrated in the well-established Unified Medical Language System. We prove that this inference method is sound and complete. Then, we leveraged the Semantic Knowledge Base to semantically search ENCODE data from arbitrary biologists' queries. This allows correctly finding more datasets than those extracted by a purely syntactic search, as supported by the other available systems. We empirically show the relevance of found datasets to the biologists' queries.
Chen, Zhenyu; Li, Jianping; Wei, Liwei
2007-10-01
Recently, gene expression profiling using microarray techniques has been shown to be a promising tool to improve the diagnosis and treatment of cancer. Gene expression data contain a high level of noise and an overwhelming number of genes relative to the number of available samples. This poses a great challenge for machine learning and statistical techniques. The support vector machine (SVM) has been successfully used to classify gene expression data of cancer tissue. In the medical field, it is crucial to deliver a transparent decision process to the user. How to explain the computed solutions and present the extracted knowledge becomes a main obstacle for SVM. A multiple kernel support vector machine (MK-SVM) scheme, consisting of feature selection, rule extraction and prediction modeling, is proposed to improve the explanation capacity of SVM. In this scheme, we show that the feature selection problem can be translated into an ordinary multiple-parameter learning problem. A shrinkage approach, 1-norm-based linear programming, is proposed to obtain the sparse parameters and the corresponding selected features. We propose a novel rule extraction approach using the information provided by the separating hyperplane and support vectors to improve the generalization capacity and comprehensibility of rules and reduce the computational complexity. Two public gene expression datasets, a leukemia dataset and a colon tumor dataset, are used to demonstrate the performance of this approach. Using the small number of selected genes, MK-SVM achieves encouraging classification accuracy: more than 90% for both datasets. Moreover, very simple rules with linguistic labels are extracted. The rule sets have high diagnostic power because of their good classification performance.
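The sparsity-inducing 1-norm idea for gene selection can be approximated with an off-the-shelf L1-penalised linear SVM, as sketched below; this is a stand-in for the paper's linear-programming formulation and rule extraction step, and the synthetic data merely imitate a high-dimensional expression matrix.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

# Synthetic "expression matrix": few samples, many features, few informative genes.
X, y = make_classification(n_samples=60, n_features=2000, n_informative=10,
                           n_redundant=0, random_state=0)
# L1 penalty drives most coefficients to zero, so non-zero weights act as selected genes.
clf = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000).fit(X, y)
selected = np.flatnonzero(clf.coef_.ravel())
print(f"{selected.size} of {X.shape[1]} features kept:", selected[:10])
```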
Web-Based Knowledge Exchange through Social Links in the Workplace
ERIC Educational Resources Information Center
Filipowski, Tomasz; Kazienko, Przemyslaw; Brodka, Piotr; Kajdanowicz, Tomasz
2012-01-01
Knowledge exchange between employees is an essential feature of recent commercial organisations on the competitive market. Based on the data gathered by various information technology (IT) systems, social links can be extracted and exploited in knowledge exchange systems of a new kind. Users of such a system ask their queries and the system…
ERIC Educational Resources Information Center
Wu, Yun-Wu; Weng, Apollo; Weng, Kuo-Hua
2017-01-01
The purpose of this study is to design a knowledge conversion and management digital learning system for architecture design learning, helping students to share, extract, use and create their design knowledge through web-based interactive activities based on socialization, internalization, combination and externalization process in addition to…
2016-01-01
Observations of individual organisms (data) can be combined with expert ecological knowledge of species, especially causal knowledge, to model and extract from flower–visiting data useful information about behavioral interactions between insect and plant organisms, such as nectar foraging and pollen transfer. We describe and evaluate a method to elicit and represent such expert causal knowledge of behavioral ecology, and discuss the potential for wider application of this method to the design of knowledge-based systems for knowledge discovery in biodiversity and ecosystem informatics. PMID:27851814
Young, Ian; Waddell, Lisa; Cahill, Sarah; Kojima, Mina; Clarke, Renata; Rajic, Andrijana
2016-01-01
Low-moisture foods (LMF) are increasingly implicated in outbreaks of foodborne illness resulting in a significant public health burden. To inform the development of a new Codex Alimentarius code of hygienic practice for LMF, we applied a rapid knowledge synthesis and transfer approach to review global research on the burden of illness, prevalence, and interventions to control nine selected microbial hazards in eight categories of LMF. Knowledge synthesis methods included an integrated scoping review (search strategy, relevance screening and confirmation, and evidence mapping), systematic review (detailed data extraction), and meta-analysis of prevalence data. Knowledge transfer of the results was achieved through multiple reporting formats, including evidence summary cards. We identified 214 unique outbreaks and 204 prevalence and 126 intervention studies. ‘Cereals and grains’ (n=142) and Salmonella spp. (n=278) were the most commonly investigated LMF and microbial hazard categories, respectively. Salmonella spp. was implicated in the most outbreaks (n=96, 45%), several of which were large and widespread, resulting in the most hospitalizations (n=895, 89%) and deaths (n=14, 74%). Salmonella spp. had a consistently low prevalence across all LMF categories (0-3%), while other hazards (e.g. B. cereus) were found at highly variable levels. A variety of interventions were investigated in small challenge trials. Key knowledge gaps included under-reporting of LMF outbreaks, limited reporting of microbial concentration data from prevalence studies, and a lack of intervention-efficacy research under commercial conditions. Summary cards were a useful knowledge transfer format to inform complementary risk ranking activities. This review builds upon previous work in this area by synthesizing a broad range of evidence using a structured, transparent, and integrated approach to provide timely evidence-informed inputs into international guidelines. PMID:26613924
Albrecht, Lauren; Archibald, Mandy; Snelgrove-Clarke, Erna; Scott, Shannon D
2016-01-01
Strategies to assist evidence-based decision-making for healthcare professionals are crucial to ensure high quality patient care and outcomes. The goal of this systematic review was to identify and synthesize the evidence on knowledge translation interventions aimed at putting explicit research evidence into child health practice. A comprehensive search of thirteen electronic databases was conducted, restricted by date (1985-2011) and language (English). Articles were included if: 1) studies were randomized controlled trials (RCT), controlled clinical trials (CCT), or controlled before-and-after (CBA) studies; 2) target population was child health professionals; 3) interventions implemented research in child health practice; and 4) outcomes were measured at the professional/process, patient, or economic level. Two reviewers independently extracted data and assessed methodological quality. Study data were aggregated and analyzed using evidence tables. Twenty-one studies (13 RCT, 2 CCT, 6 CBA) were included. The studies employed single (n=9) and multiple interventions (n=12). The methodological quality of the included studies was largely moderate (n=8) or weak (n=11). Of the studies with moderate to strong methodological quality ratings, three demonstrated consistent, positive effect(s) on the primary outcome(s); effective knowledge translation interventions were two single, non-educational interventions and one multiple, educational intervention. This multidisciplinary systematic review in child health setting identified effective knowledge translation strategies assessed by the most rigorous research designs. Given the overall poor quality of the research literature, specific recommendations were made to improve knowledge translation efforts in child health. Copyright © 2016 Elsevier Inc. All rights reserved.
Disambiguation of patent inventors and assignees using high-resolution geolocation data.
Morrison, Greg; Riccaboni, Massimo; Pammolli, Fabio
2017-05-16
Patent data represent a significant source of information on innovation, knowledge production, and the evolution of technology through networks of citations, co-invention and co-assignment. A major obstacle to extracting useful information from this data is the problem of name disambiguation: linking alternate spellings of individuals or institutions to a single identifier to uniquely determine the parties involved in knowledge production and diffusion. In this paper, we describe a new algorithm that uses high-resolution geolocation to disambiguate both inventors and assignees on about 8.5 million patents found in the European Patent Office (EPO), under the Patent Cooperation Treaty (PCT), and in the US Patent and Trademark Office (USPTO). We show this disambiguation is consistent with a number of ground-truth benchmarks of both assignees and inventors, significantly outperforming the use of undisambiguated names to identify unique entities. A significant benefit of this work is the high quality assignee disambiguation with coverage across the world coupled with an inventor disambiguation (that is competitive with other state of the art approaches) in multiple patent offices.
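A toy version of the geolocation-assisted disambiguation idea is sketched below: two inventor records are merged when their normalised names match and their coordinates fall in the same coarse grid cell. The records, coordinates and grid resolution are fabricated, and the published algorithm is considerably more sophisticated.

```python
from collections import defaultdict

# Fabricated inventor records with geocoded addresses.
records = [
    {"name": "J. Smith",   "lat": 48.1374, "lon": 11.5755},   # Munich
    {"name": "John Smith", "lat": 48.1372, "lon": 11.5761},   # Munich, likely same person
    {"name": "John Smith", "lat": 40.7128, "lon": -74.0060},  # New York, likely different
]

def normalise(name: str) -> str:
    parts = name.lower().replace(".", "").split()
    return f"{parts[0][0]} {parts[-1]}"        # first initial + surname

def disambiguate(records, precision=1):
    """Cluster record indices by (normalised name, coarse ~10 km location grid cell)."""
    clusters = defaultdict(list)
    for i, r in enumerate(records):
        key = (normalise(r["name"]), round(r["lat"], precision), round(r["lon"], precision))
        clusters[key].append(i)
    return dict(clusters)

print(disambiguate(records))
```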
Disambiguation of patent inventors and assignees using high-resolution geolocation data
Morrison, Greg; Riccaboni, Massimo; Pammolli, Fabio
2017-01-01
Patent data represent a significant source of information on innovation, knowledge production, and the evolution of technology through networks of citations, co-invention and co-assignment. A major obstacle to extracting useful information from this data is the problem of name disambiguation: linking alternate spellings of individuals or institutions to a single identifier to uniquely determine the parties involved in knowledge production and diffusion. In this paper, we describe a new algorithm that uses high-resolution geolocation to disambiguate both inventors and assignees on about 8.5 million patents found in the European Patent Office (EPO), under the Patent Cooperation Treaty (PCT), and in the US Patent and Trademark Office (USPTO). We show this disambiguation is consistent with a number of ground-truth benchmarks of both assignees and inventors, significantly outperforming the use of undisambiguated names to identify unique entities. A significant benefit of this work is the high quality assignee disambiguation with coverage across the world coupled with an inventor disambiguation (that is competitive with other state of the art approaches) in multiple patent offices. PMID:28509897
Automatic acquisition of domain and procedural knowledge
NASA Technical Reports Server (NTRS)
Ferber, H. J.; Ali, M.
1988-01-01
The design concept and performance of AKAS, an automated knowledge-acquisition system for the development of expert systems, are discussed. AKAS was developed using the FLES knowledge base for the electrical system of the B-737 aircraft and employs a 'learn by being told' strategy. The system comprises four basic modules: a system administration module, a natural-language concept-comprehension module, a knowledge-classification/extraction module, and a knowledge-incorporation module; details of the module architectures are explored.
Argumentation Based Joint Learning: A Novel Ensemble Learning Approach
Xu, Junyi; Yao, Li; Li, Le
2015-01-01
Recently, ensemble learning methods have been widely used to improve classification performance in machine learning. In this paper, we present a novel ensemble learning method: argumentation based multi-agent joint learning (AMAJL), which integrates ideas from multi-agent argumentation, ensemble learning, and association rule mining. In AMAJL, argumentation technology is introduced as an ensemble strategy to integrate multiple base classifiers and generate a high-performance ensemble classifier. We design an argumentation framework named Arena as a communication platform for knowledge integration. Through argumentation based joint learning, high-quality individual knowledge can be extracted, and thus a refined global knowledge base can be generated and used independently for classification. We perform numerous experiments on multiple public datasets using AMAJL and other benchmark methods. The results demonstrate that our method can effectively extract high-quality knowledge for the ensemble classifier and improve classification performance. PMID:25966359
ECO: A Framework for Entity Co-Occurrence Exploration with Faceted Navigation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halliday, K. D.
2010-08-20
Even as highly structured databases and semantic knowledge bases become more prevalent, a substantial amount of human knowledge is reported as written prose. Typical textual reports, such as news articles, contain information about entities (people, organizations, and locations) and their relationships. Automatically extracting such relationships from large text corpora is a key component of corporate and government knowledge bases. The primary goal of the ECO project is to develop a scalable framework for extracting and presenting these relationships for exploration using an easily navigable faceted user interface. ECO uses entity co-occurrence relationships to identify related entities. The system aggregates and indexes information on each entity pair, allowing the user to rapidly discover and mine relational information.
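A minimal sketch of the entity co-occurrence counting that underlies ECO's related-entity relationships, assuming entity mentions per document have already been extracted upstream; the documents and entities below are invented.

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-document entity mentions (in ECO these would come from an
# upstream named-entity extractor run over news articles).
docs = [
    ["Acme Corp", "J. Smith", "Springfield"],
    ["Acme Corp", "J. Smith"],
    ["Acme Corp", "Capital City"],
]

pair_counts = Counter()
for entities in docs:
    # Count each unordered entity pair once per document.
    for a, b in combinations(sorted(set(entities)), 2):
        pair_counts[(a, b)] += 1

# Most strongly co-occurring pairs: the raw material for a faceted browser.
for pair, n in pair_counts.most_common(3):
    print(pair, n)
```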
Quiñones, Karin D; Su, Hua; Marshall, Byron; Eggers, Shauna; Chen, Hsinchun
2007-09-01
Explosive growth in biomedical research has made automated information extraction, knowledge integration, and visualization increasingly important and critically needed. The Arizona BioPathway (ABP) system extracts and displays biological regulatory pathway information from the abstracts of journal articles. This study uses relations extracted from more than 200 PubMed abstracts presented in a tabular and graphical user interface with built-in search and aggregation functionality. This paper presents a task-centered assessment of the usefulness and usability of the ABP system focusing on its relation aggregation and visualization functionalities. Results suggest that our graph-based visualization is more efficient in supporting pathway analysis tasks and is perceived as more useful and easier to use as compared to a text-based literature-viewing method. Relation aggregation significantly contributes to knowledge-acquisition efficiency. Together, the graphic and tabular views in the ABP Visualizer provide a flexible and effective interface for pathway relation browsing and analysis. Our study contributes to pathway-related research and biological information extraction by assessing the value of a multiview, relation-based interface that supports user-controlled exploration of pathway information across multiple granularities.
Enhancing biomedical text summarization using semantic relation extraction.
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
2011-01-01
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from large amount of biomedical literature efficiently. In this paper, we present a method for generating text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) We extract semantic relations in each sentence using the semantic knowledge representation tool SemRep. 2) We develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation. 3) For relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate text summary using an information retrieval based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance and our results are better than the MEAD system, a well-known tool for text summarization.
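A toy illustration of stages 2 and 3, selecting the relations that involve the query concept and then ranking the sentences that express them; SemRep is not invoked here, and the per-sentence relations are invented.

```python
# Toy illustration of stages 2-3: keep the relations that involve the query
# concept, then rank sentences by how many relevant relations they express.
# SemRep is not called here; the per-sentence relations are invented.
sentences = {
    0: "Oseltamivir is used to treat H1N1 infection.",
    1: "H1N1 causes fever and cough in most patients.",
    2: "Hand washing reduces transmission of many viruses.",
}
relations = [          # (sentence id, subject, predicate, object)
    (0, "oseltamivir", "TREATS", "h1n1"),
    (1, "h1n1", "CAUSES", "fever"),
    (1, "h1n1", "CAUSES", "cough"),
    (2, "hand washing", "PREVENTS", "virus transmission"),
]

query = "h1n1"
relevant = [r for r in relations if query in (r[1], r[3])]

scores = {}
for sid, *_ in relevant:
    scores[sid] = scores.get(sid, 0) + 1

summary_ids = sorted(scores, key=scores.get, reverse=True)
for sid in summary_ids:
    print(sentences[sid])          # candidate summary sentences, best first
```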
Using isotopic dilution to assess chemical extraction of labile Ni, Cu, Zn, Cd and Pb in soils.
Garforth, J M; Bailey, E H; Tye, A M; Young, S D; Lofts, S
2016-07-01
Chemical extractants used to measure labile soil metal must ideally select for and solubilise the labile fraction, with minimal solubilisation of non-labile metal. We assessed four extractants (0.43 M HNO3, 0.43 M CH3COOH, 0.05 M Na2H2EDTA and 1 M CaCl2) against these requirements. For soils contaminated by contrasting sources, we compared isotopically exchangeable Ni, Cu, Zn, Cd and Pb (EValue, mg kg(-1)), with the concentrations of metal solubilised by the chemical extractants (MExt, mg kg(-1)). Crucially, we also determined isotopically exchangeable metal in the soil-extractant systems (EExt, mg kg(-1)). Thus 'EExt - EValue' quantifies the concentration of mobilised non-labile metal, while 'EExt - MExt' represents adsorbed labile metal in the presence of the extractant. Extraction with CaCl2 consistently underestimated EValue for Ni, Cu, Zn and Pb, while providing a reasonable estimate of EValue for Cd. In contrast, extraction with HNO3 both consistently mobilised non-labile metal and overestimated the EValue. Extraction with CH3COOH appeared to provide a good estimate of EValue for Cd; however, this was the net outcome of incomplete solubilisation of labile metal, and concurrent mobilisation of non-labile metal by the extractant (MExt
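A worked example of the two derived quantities defined above, using invented concentrations (mg kg-1) for a single soil-extractant system.

```python
# Worked example with invented values (mg kg-1) for one soil-extractant system,
# following the definitions in the abstract:
#   non-labile metal mobilised by the extractant        = EExt - EValue
#   labile metal still adsorbed in presence of extractant = EExt - MExt
E_value = 12.0   # isotopically exchangeable metal in the untreated soil
E_ext   = 15.5   # isotopically exchangeable metal in the soil-extractant system
M_ext   = 13.0   # metal concentration solubilised by the extractant

mobilised_non_labile = E_ext - E_value   # 3.5 mg kg-1
adsorbed_labile      = E_ext - M_ext     # 2.5 mg kg-1

print(f"non-labile metal mobilised: {mobilised_non_labile:.1f} mg kg-1")
print(f"labile metal left adsorbed: {adsorbed_labile:.1f} mg kg-1")
```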
Estimating kinetic mechanisms with prior knowledge I: Linear parameter constraints.
Salari, Autoosa; Navarro, Marco A; Milescu, Mirela; Milescu, Lorin S
2018-02-05
To understand how ion channels and other proteins function at the molecular and cellular levels, one must decrypt their kinetic mechanisms. Sophisticated algorithms have been developed that can be used to extract kinetic parameters from a variety of experimental data types. However, formulating models that not only explain new data, but are also consistent with existing knowledge, remains a challenge. Here, we present a two-part study describing a mathematical and computational formalism that can be used to enforce prior knowledge into the model using constraints. In this first part, we focus on constraints that enforce explicit linear relationships involving rate constants or other model parameters. We develop a simple, linear algebra-based transformation that can be applied to enforce many types of model properties and assumptions, such as microscopic reversibility, allosteric gating, and equality and inequality parameter relationships. This transformation converts the set of linearly interdependent model parameters into a reduced set of independent parameters, which can be passed to an automated search engine for model optimization. In the companion article, we introduce a complementary method that can be used to enforce arbitrary parameter relationships and any constraints that quantify the behavior of the model under certain conditions. The procedures described in this study can, in principle, be coupled to any of the existing methods for solving molecular kinetics for ion channels or other proteins. These concepts can be used not only to enforce existing knowledge but also to formulate and test new hypotheses. © 2018 Salari et al.
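A generic illustration, not the authors' exact formalism, of how linear equality constraints on model parameters can be reduced to a smaller set of independent coordinates via a null-space reparameterization; the constraint matrix below is invented.

```python
import numpy as np
from scipy.linalg import null_space

# Generic illustration: enforce linear equality constraints A @ k = b on a
# parameter vector k (e.g. log rate constants), then optimize only over the
# reduced free coordinates z. Not the authors' exact formalism.
A = np.array([[1.0, -1.0, 0.0],      # k0 == k1   (an equality constraint)
              [0.0,  1.0, 1.0]])     # k1 + k2 == 2
b = np.array([0.0, 2.0])

k_particular = np.linalg.lstsq(A, b, rcond=None)[0]  # one solution of A k = b
N = null_space(A)                                    # basis of the free directions

def to_full(z):
    """Map reduced parameters z (passed to the optimizer) back to full k."""
    return k_particular + N @ z

z = np.array([0.3])                 # here the null space is 1-dimensional
k = to_full(z)
print(k, np.allclose(A @ k, b))     # constraints hold for any choice of z
```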
From specific examples to general knowledge in language learning.
Tamminen, Jakke; Davis, Matthew H; Rastle, Kathleen
2015-06-01
The extraction of general knowledge from individual episodes is critical if we are to learn new knowledge or abilities. Here we uncover some of the key cognitive mechanisms that characterise this process in the domain of language learning. In five experiments adult participants learned new morphological units embedded in fictitious words created by attaching new affixes (e.g., -afe) to familiar word stems (e.g., "sleepafe is a participant in a study about the effects of sleep"). Participants' ability to generalise semantic knowledge about the affixes was tested using tasks requiring the comprehension and production of novel words containing a trained affix (e.g., sailafe). We manipulated the delay between training and test (Experiment 1), the number of unique exemplars provided for each affix during training (Experiment 2), and the consistency of the form-to-meaning mapping of the affixes (Experiments 3-5). In a task where speeded online language processing is required (semantic priming), generalisation was achieved only after a memory consolidation opportunity following training, and only if the training included a sufficient number of unique exemplars. Semantic inconsistency disrupted speeded generalisation unless consolidation was allowed to operate on one of the two affix-meanings before introducing inconsistencies. In contrast, in tasks that required slow, deliberate reasoning, generalisation could be achieved largely irrespective of the above constraints. These findings point to two different mechanisms of generalisation that have different cognitive demands and rely on different types of memory representations. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Susulovska, Solomia; Castillo, Pablo; Archidona-Yuste, Antonio
2017-01-01
Seven needle nematode species of the genus Longidorus have been reported in Ukraine. Nematological surveys for needle nematodes were carried out in Ukraine between 2016 and 2017, and two nematode species of Longidorus (L. caespiticola and L. poessneckensis) were collected from natural and anthropogenically altered habitats on the territory of Opillia and Zakarpattia in Ukraine. Nematodes were extracted from 500 cm3 of soil by a modified sieving and decanting method. Extracted specimens were processed to glycerol, mounted on permanent slides, and subsequently identified morphologically and molecularly. Nematode DNA was extracted from single individuals and PCR assays were conducted as previously described for the D2–D3 expansion segments of 28S rRNA. Sequence alignments for D2–D3 from L. caespiticola showed 97%–99% similarity to other sequences of L. caespiticola deposited in GenBank from Belgium, Bulgaria, Czech Republic, Russia, Slovenia, and Scotland. Similarly, D2–D3 sequence alignments from L. poessneckensis showed 99% similarity to other sequences of L. poessneckensis deposited in GenBank from Slovakia and the Czech Republic. Morphology, morphometry, and molecular data obtained from these samples were consistent with the identification of L. caespiticola and L. poessneckensis. To our knowledge, these are the first reports of L. caespiticola and L. poessneckensis in Ukraine, extending the known geographical distribution of these species. PMID:29353928
Antileishmanial Phenylpropanoids from the Leaves of Hyptis pectinata (L.) Poit
Falcao, Rosangela A.; do Nascimento, Patricia L. A.; de Souza, Silvana A.; da Silva, Telma M. G.; de Queiroz, Aline C.; da Matta, Carolina B. B.; Moreira, Magna S. A.; Camara, Celso A.; Silva, Tania M. S.
2013-01-01
Hyptis pectinata, popularly known in Brazil as “sambacaitá” or “canudinho,” is an aromatic shrub largely grown in the northeast of Brazil. The leaves and bark are used in an infusion for the treatment of throat and skin inflammations, bacterial infections, pain, and cancer. Analogues of rosmarinic acid and flavonoids were obtained from the leaves of Hyptis pectinata and consisted of two new compounds, sambacaitaric acid (1) and 3-O-methyl-sambacaitaric acid (2), and nine known compounds, rosmarinic acid (3), 3-O-methyl-rosmarinic acid (4), ethyl caffeate (5), nepetoidin A (6), nepetoidin B (7), cirsiliol (8), cirsimaritin (9), 7-O-methylluteolin (10), and genkwanin (11). The structures of these compounds were determined by spectroscopic methods. Compounds 1–5 and 7, as well as the ethanol extract, were evaluated in vitro against the promastigote form of L. braziliensis; the hexane, ethyl acetate, and methanol-water fractions were also evaluated. The EtOH extract, the hexane extract, the EtOAc and MeOH:H2O fractions, and compounds 1, 2 and 4 exhibited antileishmanial activity, and compound 1 was as potent as pentamidine. In contrast, compounds 3, 5, and 7 did not present activity against the promastigote form of L. braziliensis below 100 µM. To our knowledge, compounds 1 and 2 are being described for the first time. PMID:23983783
Aggregating Concept Map Data to Investigate the Knowledge of Beginning CS Students
ERIC Educational Resources Information Center
Mühling, Andreas
2016-01-01
Concept maps have a long history in educational settings as a tool for teaching, learning, and assessing. As an assessment tool, they are predominantly used to extract the structural configuration of learners' knowledge. This article presents an investigation of the knowledge structures of a large group of beginning CS students. The investigation…
Fan, Yahui; Zhang, Shaoru; Li, Yan; Li, Yuelu; Zhang, Tianhua; Liu, Weiping; Jiang, Hualin
2018-05-08
TB outbreaks in schools are extremely complex and present a major challenge for public health. Understanding the knowledge, attitudes and practices among student TB patients in such settings is fundamental when it comes to decreasing future TB cases. The objective of this study was to develop a Knowledge, Attitudes and Practices Questionnaire among Student Tuberculosis Patients (STBP-KAPQ) and to evaluate its psychometric properties. This study was conducted in three stages: item construction, pilot testing in 10 student TB patients, and psychometric testing, including reliability and validity. The item pool for the questionnaire was compiled from a literature review and early individual interviews. The questionnaire items were evaluated by the Delphi method based on 12 experts. Reliability and validity were assessed using student TB patients (n = 416) and healthy students (n = 208). Reliability was examined with internal consistency reliability and test-retest reliability. Content validity was calculated with the content validity index (CVI); construct validity was examined using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA); the Public Tuberculosis Knowledge, Attitudes and Practices Questionnaire (PTB-KAPQ) was applied to evaluate criterion validity; and discriminant validity was assessed with a t-test. The final STBP-KAPQ consisted of three dimensions and 25 items. Cronbach's α coefficient and the intraclass correlation coefficient (ICC) were 0.817 and 0.765, respectively. The content validity index (CVI) was 0.962. Seven common factors were extracted by principal factor analysis and varimax rotation, with a cumulative contribution of 66.253%. The resulting CFA model of the STBP-KAPQ exhibited an appropriate model fit (χ2/df = 1.74, RMSEA = 0.082, CFI = 0.923, NNFI = 0.962). The STBP-KAPQ and PTB-KAPQ had a strong correlation in the knowledge part, with a correlation coefficient of 0.606 (p < 0.05). Discriminant validity was supported through a significant difference between student TB patients and healthy students across all domains (p < 0.05). An instrument, the "Knowledge, Attitudes and Practices Questionnaire among Student Tuberculosis Patients" (STBP-KAPQ), was developed. Psychometric testing indicated that it had adequate validity and reliability for use in KAP research with student TB patients in China. The new tool might help public health researchers evaluate the level of KAP in student TB patients, and it could also be used to examine the effects of TB health education.
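For reference, a minimal sketch of how an internal-consistency figure such as the Cronbach's α reported above is computed from an item-response matrix; the responses below are invented and are not the study's data.

```python
import numpy as np

def cronbach_alpha(scores):
    """Standard Cronbach's alpha: scores is respondents x items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

# Invented responses (5 respondents x 4 items) purely to exercise the formula.
responses = [[3, 4, 3, 4],
             [2, 2, 3, 2],
             [4, 5, 4, 4],
             [3, 3, 3, 3],
             [5, 4, 5, 5]]
print(round(cronbach_alpha(responses), 3))
```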
Parental Monitoring and Its Associations With Adolescent Sexual Risk Behavior: A Meta-analysis
Dittus, Patricia J.; Michael, Shannon L.; Becasen, Jeffrey S.; Gloppen, Kari M.; McCarthy, Katharine; Guilamo-Ramos, Vincent
2017-01-01
CONTEXT Increasingly, health care providers are using approaches targeting parents in an effort to improve adolescent sexual and reproductive health. Research is needed to elucidate areas in which providers can target adolescents and parents effectively. Parental monitoring offers one such opportunity, given consistent protective associations with adolescent sexual risk behavior. However, less is known about which components of monitoring are most effective and most suitable for provider-initiated family-based interventions. OBJECTIVE We performed a meta-analysis to assess the magnitude of association between parental monitoring and adolescent sexual intercourse, condom use, and contraceptive use. DATA SOURCES We conducted searches of Medline, the Cumulative Index to Nursing and Allied Health Literature, PsycInfo, Cochrane, the Education Resources Information Center, Social Services Abstracts, Sociological Abstracts, Proquest, and Google Scholar. STUDY SELECTION We selected studies published from 1984 to 2014 that were written in English, included adolescents, and examined relationships between parental monitoring and sexual behavior. DATA EXTRACTION We extracted effect size data to calculate pooled odds ratios (ORs) by using a mixed-effects model. RESULTS Higher overall monitoring (pooled OR, 0.74; 95% confidence interval [CI], 0.69–0.80), monitoring knowledge (pooled OR, 0.81; 95% CI, 0.73–0.90), and rule enforcement (pooled OR, 0.67; 95% CI, 0.59–0.75) were associated with delayed sexual intercourse. Higher overall monitoring (pooled OR, 1.12; 95% CI, 1.01–1.24) and monitoring knowledge (pooled OR, 1.14; 95% CI, 1.01–1.31) were associated with greater condom use. Finally, higher overall monitoring was associated with increased contraceptive use (pooled OR, 1.42; 95% CI, 1.09–1.86), as was monitoring knowledge (pooled OR, 2.27; 95% CI, 1.42–3.63). LIMITATIONS Effect sizes were not uniform across studies, and most studies were cross-sectional. CONCLUSIONS Provider-initiated family-based interventions focused on parental monitoring represent a novel mechanism for enhancing adolescent sexual and reproductive health. PMID:26620067
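A simplified sketch of inverse-variance pooling of log odds ratios (a fixed-effect variant, whereas the review used a mixed-effects model), with invented per-study ORs and confidence intervals, to show where a pooled OR and its CI come from.

```python
import math

# Invented per-study odds ratios and 95% CIs; the review pooled ORs with a
# mixed-effects model, so this fixed-effect inverse-variance sketch is only
# meant to show how a pooled OR and its CI arise.
studies = [(0.70, 0.55, 0.89), (0.80, 0.60, 1.07), (0.72, 0.58, 0.90)]  # (OR, lo, hi)

num = den = 0.0
for or_, lo, hi in studies:
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE recovered from the CI
    w = 1.0 / se**2                                   # inverse-variance weight
    num += w * log_or
    den += w

pooled = num / den
se_pooled = math.sqrt(1.0 / den)
print(f"pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(pooled - 1.96*se_pooled):.2f}-"
      f"{math.exp(pooled + 1.96*se_pooled):.2f})")
```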
Thomas, Evert; Valdivia, Jheyson; Alcázar Caicedo, Carolina; Quaedvlieg, Julia; Wadt, Lucia Helena O; Corvera, Ronald
2017-01-01
Understanding the factors that underlie the production of non-timber forest products (NTFPs), as well as regularly monitoring production levels, are key to allow sustainability assessments of NTFP extractive economies. Brazil nut (Bertholletia excelsa, Lecythidaceae) seed harvesting from natural forests is one of the cornerstone NTFP economies in Amazonia. In the Peruvian Amazon it is organized in a concession system. Drawing on seed production estimates of >135,000 individual Brazil nut trees from >400 concessions and ethno-ecological interviews with >80 concession holders, here we aimed to (i) assess the accuracy of seed production estimates by Brazil nut seed harvesters, and (ii) validate their traditional ecological knowledge (TEK) about the variables that influence Brazil nut production. We compared productivity estimates with actual field measurements carried out in the study area and found a positive correlation between them. Furthermore, we compared the relationships between seed production and a number of phenotypic, phytosanitary and environmental variables described in literature with those obtained for the seed production estimates and found high consistency between them, justifying the use of the dataset for validating TEK and innovative hypothesis testing. As expected, nearly all TEK on Brazil nut productivity was corroborated by our data. This is reassuring as Brazil nut concession holders, and NTFP harvesters at large, rely on their knowledge to guide the management of the trees upon which their extractive economies are based. Our findings suggest that productivity estimates of Brazil nut trees and possibly other NTFP-producing species could replace or complement actual measurements, which are very expensive and labour intensive, at least in areas where harvesters have a tradition of collecting NTFPs from the same trees over multiple years or decades. Productivity estimates might even be sourced from harvesters through registers on an annual basis, thus allowing a more cost-efficient and robust monitoring of productivity levels.
Liquete, Camino; Piroddi, Chiara; Drakou, Evangelia G; Gurney, Leigh; Katsanevakis, Stelios; Charef, Aymen; Egoh, Benis
2013-01-01
Research on ecosystem services has grown exponentially during the last decade. Most studies have focused on assessing and mapping terrestrial ecosystem services, highlighting a knowledge gap on marine and coastal ecosystem services (MCES) and an urgent need to assess them. We reviewed and summarized the existing scientific literature related to MCES with the aim of extracting and classifying the indicators used to assess and map them. We found 145 papers that specifically assessed marine and coastal ecosystem services, from which we extracted 476 indicators. Food provision, in particular fisheries, was the most extensively analyzed MCES, while water purification and coastal protection were the most frequently studied regulating and maintenance services. Recreation and tourism, among the cultural services, were also relatively well assessed. We highlight knowledge gaps regarding the availability of indicators that measure the capacity, flow or benefit derived from each ecosystem service. The majority of the case studies were found in mangroves and coastal wetlands and were mainly concentrated in Europe and North America. Our systematic review highlighted the need for an improved ecosystem service classification for marine and coastal systems, which is herein proposed with definitions and links to previous classifications. This review summarizes the state of available information related to ecosystem services associated with marine and coastal ecosystems. The cataloging of MCES indicators and the integrated classification of MCES provided in this paper establish a background that can facilitate the planning and integration of future assessments. The final goal is to establish a consistent structure and populate it with information able to support the implementation of biodiversity conservation policies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Sheo B.; Ondeyka, John G.; Herath, Kithsiri B.
Natural products continue to serve as one of the best sources for discovery of antibacterial agents as exemplified by the recent discoveries of platensimycin and platencin. Chemical modifications as well as discovery of congeners are the main sources for gaining knowledge of structure-activity relationship of natural products. Screening for congeners in the extracts of the fermentation broths of Streptomyces platensis led to the isolation of platencin A1, a hydroxy congener of platencin. The hydroxylation of the tricyclic enone moiety negatively affected the antibacterial activity and appears to be consistent with the hydrophobic binding pocket of the FabF. Isolation, structure, enzyme-bound structure and activity of platencin A1 and two other congeners have been described.
Acute generalized exanthematous pustulosis induced by the essential oil of Pistacia lentiscus.
Zaraa, I; Ben Taazayet, S; Trojjet, S; El Euch, D; Chelly, I; Haouet, S; Mokni, M; Ben Osman, A
2012-06-01
Acute generalized exanthematous pustulosis (AGEP) is an uncommon pustular eruption characterized by small nonfollicular pustules on an erythematous background, sometimes associated with fever and neutrophilia. Over 90% of cases are drug-induced; however, it can be caused in rare cases by other agents. We report two cases of AGEP secondary to ingestion of Pistacia lentiscus essential oil, the first two such cases to our knowledge. The cutaneous morphology, disease course and histological findings were consistent with a definite diagnosis of AGEP, based on the criteria of the EuroSCAR study group. These two cases highlight the need to consider herbal extracts as a potential rare cause of AGEP and to ensure the safety of herbal medicines. © The Author(s). CED © 2012 British Association of Dermatologists.
Covariant kaon dynamics and kaon flow in heavy ion collisions
NASA Astrophysics Data System (ADS)
Zheng, Yu-Ming; Fuchs, C.; Faessler, Amand; Shekhter, K.; Yan, Yu-Peng; Kobdaj, Chinorat
2004-03-01
The influence of the chiral mean field on the K+ transverse flow in heavy ion collisions at SIS energy is investigated within covariant kaon dynamics. For kaon mesons inside the nuclear medium, a quasiparticle picture including scalar and vector fields is adopted and compared to the standard treatment with a static potential. It is confirmed that a Lorentz force arising from the spatial component of the vector field provides an important contribution to the in-medium kaon dynamics and strongly counterbalances the influence of the vector potential on the K+ in-plane flow. The FOPI data can be reasonably described using in-medium kaon potentials based on effective chiral models. The information on the in-medium K+ potential extracted from kaon flow is consistent with the knowledge from other sources.
Comparative analysis of prodigiosin isolated from endophyte Serratia marcescens.
Khanam, B; Chandra, R
2018-03-01
Extraction of pigments from endophytes is an uphill task. Up till now, there are no efficient methods available to extract the maximum amount of prodigiosin from Serratia marcescens. This is one of the important endophytes of Beta vulgaris L. The present work was carried out for the comparative study of six different extraction methods such as homogenization, ultrasonication, freezing and thawing, heat treatment, organic solvents and inorganic acids to evaluate the efficiency of prodigiosin yield. Our results demonstrated that highest extraction was observed in ultrasonication (98·1 ± 1·7%) while the lowest extraction by freezing and thawing (31·8 ± 3·8%) methods. However, thin layer chromatography, high-performance liquid chromatography and Fourier transform infrared data suggest that bioactive pigment in the extract was prodigiosin. To the best of our knowledge, this is the first comprehensive study of extraction methods and identification and purification of prodigiosin from cell biomass of Ser. marcescens isolated from Beta vulgaris L. The prodigiosin family is a potent drug with anticancer, antimalarial, antibacterial, antifungal, antiproliferative and immunosuppressive activities. Moreover, it has immense potential in pharmaceutical, food and textile industries. For the industrial perspective, it is essential to achieve purified, high yield and cost-effective extraction of prodigiosin. To the best of our knowledge, this is the first comprehensive study on prodigiosin extraction and also the first report on endophyte Serratia marcescens isolated from Beta vulgaris L. The significance of our results is to extract high amount and good quality prodigiosin for commercial application. © 2017 The Society for Applied Microbiology.
Reconstituting protein interaction networks using parameter-dependent domain-domain interactions
2013-01-01
Background We can describe protein-protein interactions (PPIs) as sets of distinct domain-domain interactions (DDIs) that mediate the physical interactions between proteins. Experimental data confirm that DDIs are more consistent than their corresponding PPIs, lending support to the notion that analyses of DDIs may improve our understanding of PPIs and lead to further insights into cellular function, disease, and evolution. However, currently available experimental DDI data cover only a small fraction of all existing PPIs and, in the absence of structural data, determining which particular DDI mediates any given PPI is a challenge. Results We present two contributions to the field of domain interaction analysis. First, we introduce a novel computational strategy to merge domain annotation data from multiple databases. We show that when we merged yeast domain annotations from six annotation databases we increased the average number of domains per protein from 1.05 to 2.44, bringing it closer to the estimated average value of 3. Second, we introduce a novel computational method, parameter-dependent DDI selection (PADDS), which, given a set of PPIs, extracts a small set of domain pairs that can reconstruct the original set of protein interactions, while attempting to minimize false positives. Based on a set of PPIs from multiple organisms, our method extracted 27% more experimentally detected DDIs than existing computational approaches. Conclusions We have provided a method to merge domain annotation data from multiple sources, ensuring large and consistent domain annotation for any given organism. Moreover, we provided a method to extract a small set of DDIs from the underlying set of PPIs and we showed that, in contrast to existing approaches, our method was not biased towards DDIs with low or high occurrence counts. Finally, we used these two methods to highlight the influence of the underlying annotation density on the characteristics of extracted DDIs. Although increased annotations greatly expanded the possible DDIs, the lack of knowledge of the true biological false positive interactions still prevents an unambiguous assignment of domain interactions responsible for all protein network interactions. Executable files and examples are given at: http://www.bhsai.org/downloads/padds/ PMID:23651452
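A toy, greedy set-cover-style sketch of the selection idea: choose a small set of domain pairs that together explain a given set of protein interactions. PADDS itself is parameter-dependent and also controls false positives, which this sketch ignores; the annotations and PPIs below are invented.

```python
# Toy greedy illustration of choosing a small set of domain pairs (DDIs) that
# can "explain" a set of protein interactions (PPIs). PADDS itself is
# parameter-dependent and also penalizes false positives; this sketch ignores that.
domains = {          # protein -> set of annotated domains (invented)
    "P1": {"A", "B"}, "P2": {"C"}, "P3": {"A"}, "P4": {"B", "C"},
}
ppis = {("P1", "P2"), ("P1", "P3"), ("P3", "P4")}

def candidate_ddis(p, q):
    """All domain pairs that could mediate the interaction between p and q."""
    return {tuple(sorted((d1, d2))) for d1 in domains[p] for d2 in domains[q]}

uncovered = set(ppis)
selected = []
while uncovered:
    # Pick the domain pair that explains the most still-uncovered PPIs.
    best = max({ddi for p, q in uncovered for ddi in candidate_ddis(p, q)},
               key=lambda ddi: sum(ddi in candidate_ddis(p, q) for p, q in uncovered))
    selected.append(best)
    uncovered = {(p, q) for p, q in uncovered if best not in candidate_ddis(p, q)}

print(selected)   # a small DDI set reconstructing the original PPI set
```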
Harvesting Intelligence in Multimedia Social Tagging Systems
NASA Astrophysics Data System (ADS)
Giannakidou, Eirini; Kaklidou, Foteini; Chatzilari, Elisavet; Kompatsiaris, Ioannis; Vakali, Athena
As more people adopt tagging practices, social tagging systems tend to form rich knowledge repositories that enable the extraction of patterns reflecting the way content semantics is perceived by the web users. This is of particular importance, especially in the case of multimedia content, since the availability of such content in the web is very high and its efficient retrieval using textual annotations or content-based automatically extracted metadata still remains a challenge. It is argued that complementing multimedia analysis techniques with knowledge drawn from web social annotations may facilitate multimedia content management. This chapter focuses on analyzing tagging patterns and combining them with content feature extraction methods, generating, thus, intelligence from multimedia social tagging systems. Emphasis is placed on using all available "tracks" of knowledge, that is tag co-occurrence together with semantic relations among tags and low-level features of the content. Towards this direction, a survey on the theoretical background and the adopted practices for analysis of multimedia social content are presented. A case study from Flickr illustrates the efficiency of the proposed approach.
An in vitro study on the risk of non-allergic type-I like hypersensitivity to Momordica charantia.
Sagkan, Rahsan Ilikci
2013-10-26
Momordica charantia (MC) is a tropical plant that is extensively used in folk medicine. However, knowledge about the side effects of this plant is scarce compared with knowledge about its therapeutic effects. The aim of this study was to reveal non-allergic type-I-like hypersensitivity to MC in an experiment designed in vitro. In the present study, the expression of CD63 and CD203c on peripheral blood basophils in response to different dilutions of MC extracts was measured using flow cytometry, and the dilutions were compared with one another. In addition, intra-assay CVs of the test extracts were calculated to assess the precision and reproducibility of the test results. It was observed that the fruit extract of MC at 1/100 and 1/1000 dilutions significantly increased active basophils compared to the same extract at a 1/10000 dilution. In conclusion, Momordica charantia may elicit a non-allergic type-I-like hypersensitivity reaction, especially in susceptible individuals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olama, Mohammed M; Nutaro, James J; Sukumar, Sreenivas R
2013-01-01
The success of data-driven business in government, science, and private industry is driving the need for seamless integration of intra- and inter-enterprise data sources to extract knowledge nuggets in the form of correlations, trends, patterns and behaviors previously not discovered due to the physical and logical separation of datasets. Today, as the volume, velocity, variety and complexity of enterprise data keep increasing, next-generation analysts are facing several challenges in the knowledge extraction process. Towards addressing these challenges, data-driven organizations that rely on the success of their analysts have to make investment decisions for sustainable data/information systems and knowledge discovery. Options that organizations are considering include newer storage/analysis architectures, better analysis machines, redesigned analysis algorithms, collaborative knowledge management tools, and query builders, amongst many others. In this paper, we present a concept of operations for enabling knowledge discovery that data-driven organizations can leverage towards making their investment decisions. We base our recommendations on the experience gained from integrating multi-agency enterprise data warehouses at the Oak Ridge National Laboratory to design the foundation of future knowledge-nurturing data-system architectures.
Ontology-guided data preparation for discovering genotype-phenotype relationships.
Coulet, Adrien; Smaïl-Tabbone, Malika; Benlian, Pascale; Napoli, Amedeo; Devignes, Marie-Dominique
2008-04-25
Complexity and amount of post-genomic data constitute two major factors limiting the application of Knowledge Discovery in Databases (KDD) methods in life sciences. Bio-ontologies may nowadays play key roles in knowledge discovery in life science providing semantics to data and to extracted units, by taking advantage of the progress of Semantic Web technologies concerning the understanding and availability of tools for knowledge representation, extraction, and reasoning. This paper presents a method that exploits bio-ontologies for guiding data selection within the preparation step of the KDD process. We propose three scenarios in which domain knowledge and ontology elements such as subsumption, properties, class descriptions, are taken into account for data selection, before the data mining step. Each of these scenarios is illustrated within a case-study relative to the search of genotype-phenotype relationships in a familial hypercholesterolemia dataset. The guiding of data selection based on domain knowledge is analysed and shows a direct influence on the volume and significance of the data mining results. The method proposed in this paper is an efficient alternative to numerical methods for data selection based on domain knowledge. In turn, the results of this study may be reused in ontology modelling and data integration.
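A minimal sketch of the subsumption-based selection scenario, assuming a toy parent-class map in place of a real bio-ontology and invented attribute annotations loosely modeled on the hypercholesterolemia case study.

```python
# Minimal sketch of subsumption-guided data selection before mining. A toy
# parent map stands in for a real bio-ontology; attribute-to-class annotations
# are invented.
is_a = {                      # child class -> parent class
    "LDL_receptor_variant": "lipid_metabolism_gene_variant",
    "APOB_variant": "lipid_metabolism_gene_variant",
    "lipid_metabolism_gene_variant": "gene_variant",
    "eye_colour": "phenotype_trait",
}

def subsumed_by(cls, ancestor):
    """True if cls equals ancestor or is (transitively) a subclass of it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = is_a.get(cls)
    return False

annotations = {               # dataset attribute -> ontology class (invented)
    "ldlr_c.681C>G": "LDL_receptor_variant",
    "apob_p.R3527Q": "APOB_variant",
    "iris_colour": "eye_colour",
}

# Keep only attributes relevant to the question at hand (here: gene variants).
selected = [a for a, cls in annotations.items() if subsumed_by(cls, "gene_variant")]
print(selected)   # the phenotype attribute is filtered out before mining
```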
Espresso coffee foam delays cooling of the liquid phase.
Arii, Yasuhiro; Nishizawa, Kaho
2017-04-01
Espresso coffee foam, called crema, is known to be a marker of the quality of espresso coffee extraction. However, the role of foam in coffee temperature has not been quantitatively clarified. In this study, we used an automatic machine for espresso coffee extraction. We evaluated whether the foam prepared using the machine was suitable for foam analysis. After extraction, the percentage and consistency of the foam were measured using various techniques, and changes in the foam volume were tracked over time. Our extraction method, therefore, allowed consistent preparation of high-quality foam. We also quantitatively determined that the foam phase slowed cooling of the liquid phase after extraction. High-quality foam plays an important role in delaying the cooling of espresso coffee.
FIR: An Effective Scheme for Extracting Useful Metadata from Social Media.
Chen, Long-Sheng; Lin, Zue-Cheng; Chang, Jing-Rong
2015-11-01
Recently, the use of social media for health information exchange has been expanding among patients, physicians, and other health care professionals. In medical areas, social media allows non-experts to access, interpret, and generate medical information for their own care and the care of others. Researchers have paid much attention to social media in medical education, patient-pharmacist communication, adverse drug reaction detection, the impact of social media on medicine and healthcare, and so on. However, relatively few papers discuss how to effectively extract useful knowledge from the huge volume of textual comments in social media. Therefore, this study proposes a Fuzzy adaptive resonance theory network based Information Retrieval (FIR) scheme that combines a Fuzzy adaptive resonance theory (ART) network, Latent Semantic Indexing (LSI), and association rule (AR) discovery to extract knowledge from social media. In our FIR scheme, a Fuzzy ART network is first employed to segment comments. Next, for each customer segment, we use the LSI technique to retrieve important keywords. Then, in order to make the extracted keywords understandable, association rule mining is applied to organize these extracted keywords into metadata. These extracted voices of customers are then transformed into design needs by using Quality Function Deployment (QFD) for further decision making. Unlike conventional information retrieval techniques, which acquire too many keywords to get the key points, our FIR scheme can extract understandable metadata from social media.
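A sketch of the LSI step only, retrieving candidate keywords for one (invented) comment segment with scikit-learn's TfidfVectorizer and TruncatedSVD; the Fuzzy ART segmentation and association-rule steps are omitted.

```python
# Sketch of the LSI step only: find the terms that load most strongly on the
# leading latent dimension of a (tiny, invented) segment of social-media comments.
# Fuzzy ART segmentation and the association-rule step are omitted here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
import numpy as np

segment = [
    "battery drains fast and the phone gets hot",
    "phone battery life is too short",
    "screen is bright but battery is weak",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(segment)                  # term-document matrix (TF-IDF)

svd = TruncatedSVD(n_components=2, random_state=0)
svd.fit(X)

terms = np.array(vec.get_feature_names_out())
loadings = np.abs(svd.components_[0])           # weights on the first LSI topic
print(terms[np.argsort(loadings)[::-1][:5]])    # top candidate keywords
```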
ERIC Educational Resources Information Center
Benoit, Gerald
2002-01-01
Discusses data mining (DM) and knowledge discovery in databases (KDD), taking the view that KDD is the larger view of the entire process, with DM emphasizing the cleaning, warehousing, mining, and visualization of knowledge discovery in databases. Highlights include algorithms; users; the Internet; text mining; and information extraction.…
Information Extraction from Unstructured Text for the Biodefense Knowledge Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Samatova, N F; Park, B; Krishnamurthy, R
2005-04-29
The Bio-Encyclopedia at the Biodefense Knowledge Center (BKC) is being constructed to allow early detection of emerging biological threats to homeland security. It requires highly structured information extracted from a variety of data sources. However, the quantity of new and vital information available from everyday sources cannot be assimilated by hand, and therefore reliable high-throughput information extraction techniques are much anticipated. In support of the BKC, Lawrence Livermore National Laboratory and Oak Ridge National Laboratory, together with the University of Utah, are developing an information extraction system built around the bioterrorism domain. This paper reports two important pieces of our effort integrated in the system: key phrase extraction and semantic tagging. Whereas the two key phrase extraction technologies developed during the course of the project help identify relevant texts, our state-of-the-art semantic tagging system can pinpoint phrases related to emerging biological threats. We are also enhancing and tailoring the Bio-Encyclopedia by augmenting semantic dictionaries and extracting details of important events, such as suspected disease outbreaks. Some of these technologies have already been applied to large corpora of free-text sources vital to the BKC mission, including ProMED-mail, PubMed abstracts, and the DHS's Information Analysis and Infrastructure Protection (IAIP) news clippings. In order to address the challenges involved in incorporating such large amounts of unstructured text, the overall system is focused on precise extraction of the most relevant information for inclusion in the BKC.
Information Extraction Using Controlled English to Support Knowledge-Sharing and Decision-Making
2012-06-01
...terminology or language variants. CE-based information extraction will greatly facilitate the processes in the cognitive and social domains that enable forces... ...processor is run to turn the atomic CE into a more “stylistically felicitous” CE, using techniques such as: aggregating all information about an entity...
Orthology and paralogy constraints: satisfiability and consistency.
Lafond, Manuel; El-Mabrouk, Nadia
2014-01-01
A variety of methods based on sequence similarity, reconciliation, synteny or functional characteristics can be used to infer orthology and paralogy relations between genes of a given gene family G. But is a given set C of orthology/paralogy constraints possible, i.e., can they simultaneously co-exist in an evolutionary history for G? While previous studies have focused on full sets of constraints, here we consider the general case where C does not necessarily involve a constraint for each pair of genes. The problem is subdivided into two parts: (1) Is C satisfiable, i.e. can we find an event-labeled gene tree G inducing C? (2) Is there such a G which is consistent, i.e., such that all displayed triplet phylogenies are included in a species tree? Previous results on the Graph sandwich problem can be used to answer (1), and we provide polynomial-time algorithms for satisfiability and consistency with a given species tree. We also describe a new polynomial-time algorithm for the case of consistency with an unknown species tree and full knowledge of pairwise orthology/paralogy relationships, as well as a branch-and-bound algorithm in the case when unknown relations are present. We show that our algorithms can be used in combination with ProteinOrtho, a sequence similarity-based orthology detection tool, to extract a set of robust orthology/paralogy relationships.
Efficient 3D multi-region prostate MRI segmentation using dual optimization.
Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron
2013-01-01
Efficient and accurate extraction of the prostate, and in particular of its clinically meaningful sub-regions, from 3D MR images is of great interest in image-guided prostate interventions and in the diagnosis of prostate cancer. In this work, we propose a novel multi-region segmentation approach to simultaneously locating the boundaries of the prostate and its two major sub-regions: the central gland and the peripheral zone. The proposed method utilizes prior knowledge of spatial region consistency and employs a customized prostate appearance model to simultaneously segment multiple clinically meaningful regions. We solve the resulting challenging combinatorial optimization problem by means of convex relaxation, for which we introduce a novel spatially continuous flow-maximization model and demonstrate its duality to the investigated convex relaxed optimization problem with the region consistency constraint. Moreover, the proposed continuous max-flow model naturally leads to a new and efficient continuous max-flow based algorithm, which enjoys great advantages in numerics and can be readily implemented on GPUs. Experiments using 15 T2-weighted 3D prostate MR images, assessed in terms of inter- and intra-operator variability, demonstrate the promising performance of the proposed approach.
PREDOSE: a semantic web platform for drug abuse epidemiology using social media.
Cameron, Delroy; Smith, Gary A; Daniulaityte, Raminta; Sheth, Amit P; Dave, Drashti; Chen, Lu; Anand, Gaurish; Carlson, Robert; Watkins, Kera Z; Falck, Russel
2013-12-01
The role of social media in biomedical knowledge mining, including clinical, medical and healthcare informatics, prescription drug abuse epidemiology and drug pharmacology, has become increasingly significant in recent years. Social media offers opportunities for people to share opinions and experiences freely in online communities, which may contribute information beyond the knowledge of domain professionals. This paper describes the development of a novel semantic web platform called PREDOSE (PREscription Drug abuse Online Surveillance and Epidemiology), which is designed to facilitate the epidemiologic study of prescription (and related) drug abuse practices using social media. PREDOSE uses web forum posts and domain knowledge, modeled in a manually created Drug Abuse Ontology (DAO--pronounced dow), to facilitate the extraction of semantic information from User Generated Content (UGC), through combination of lexical, pattern-based and semantics-based techniques. In a previous study, PREDOSE was used to obtain the datasets from which new knowledge in drug abuse research was derived. Here, we report on various platform enhancements, including an updated DAO, new components for relationship and triple extraction, and tools for content analysis, trend detection and emerging patterns exploration, which enhance the capabilities of the PREDOSE platform. Given these enhancements, PREDOSE is now more equipped to impact drug abuse research by alleviating traditional labor-intensive content analysis tasks. Using custom web crawlers that scrape UGC from publicly available web forums, PREDOSE first automates the collection of web-based social media content for subsequent semantic annotation. The annotation scheme is modeled in the DAO, and includes domain specific knowledge such as prescription (and related) drugs, methods of preparation, side effects, and routes of administration. The DAO is also used to help recognize three types of data, namely: (1) entities, (2) relationships and (3) triples. PREDOSE then uses a combination of lexical and semantic-based techniques to extract entities and relationships from the scraped content, and a top-down approach for triple extraction that uses patterns expressed in the DAO. In addition, PREDOSE uses publicly available lexicons to identify initial sentiment expressions in text, and then a probabilistic optimization algorithm (from related research) to extract the final sentiment expressions. Together, these techniques enable the capture of fine-grained semantic information, which facilitate search, trend analysis and overall content analysis using social media on prescription drug abuse. Moreover, extracted data are also made available to domain experts for the creation of training and test sets for use in evaluation and refinements in information extraction techniques. A recent evaluation of the information extraction techniques applied in the PREDOSE platform indicates 85% precision and 72% recall in entity identification, on a manually created gold standard dataset. In another study, PREDOSE achieved 36% precision in relationship identification and 33% precision in triple extraction, through manual evaluation by domain experts. Given the complexity of the relationship and triple extraction tasks and the abstruse nature of social media texts, we interpret these as favorable initial results. 
Extracted semantic information is currently in use in an online discovery support system by prescription drug abuse researchers at the Center for Interventions, Treatment and Addictions Research (CITAR) at Wright State University. A comprehensive platform for entity, relationship, triple and sentiment extraction from such abstruse texts has never been developed for drug abuse research. PREDOSE has already demonstrated the importance of mining social media by providing data from which new findings in drug abuse research were uncovered. Given the recent platform enhancements, including the refined DAO, components for relationship and triple extraction, and tools for content, trend and emerging pattern analysis, it is expected that PREDOSE will play a significant role in advancing drug abuse epidemiology in the future. Copyright © 2013 Elsevier Inc. All rights reserved.
Valx: A system for extracting and structuring numeric lab test comparison statements from text
Hao, Tianyong; Liu, Hongfang; Weng, Chunhua
2017-01-01
Objectives To develop an automated method for extracting and structuring numeric lab test comparison statements from text and evaluate the method using clinical trial eligibility criteria text. Methods Leveraging semantic knowledge from the Unified Medical Language System (UMLS) and domain knowledge acquired from the Internet, Valx takes 7 steps to extract and normalize numeric lab test expressions: 1) text preprocessing, 2) numeric, unit, and comparison operator extraction, 3) variable identification using hybrid knowledge, 4) variable - numeric association, 5) context-based association filtering, 6) measurement unit normalization, and 7) heuristic rule-based comparison statements verification. Our reference standard was the consensus-based annotation among three raters for all comparison statements for two variables, i.e., HbA1c and glucose, identified from all of Type 1 and Type 2 diabetes trials in ClinicalTrials.gov. Results The precision, recall, and F-measure for structuring HbA1c comparison statements were 99.6%, 98.1%, 98.8% for Type 1 diabetes trials, and 98.8%, 96.9%, 97.8% for Type 2 Diabetes trials, respectively. The precision, recall, and F-measure for structuring glucose comparison statements were 97.3%, 94.8%, 96.1% for Type 1 diabetes trials, and 92.3%, 92.3%, 92.3% for Type 2 diabetes trials, respectively. Conclusions Valx is effective at extracting and structuring free-text lab test comparison statements in clinical trial summaries. Future studies are warranted to test its generalizability beyond eligibility criteria text. The open-source Valx enables its further evaluation and continued improvement among the collaborative scientific community. PMID:26940748
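A much-simplified regex illustration of steps 2 through 4 for HbA1c expressions (operator, value and unit extraction plus variable association); the real Valx pipeline adds UMLS-driven variable identification, context-based filtering and unit normalization. The criteria strings below are invented.

```python
import re

# Much-simplified illustration of Valx steps 2-4 for HbA1c: find a comparison
# operator, a number and a unit near the variable mention. The real system uses
# UMLS-driven variable identification, context filtering and unit normalization.
pattern = re.compile(
    r"(?P<var>hba1c|hemoglobin a1c)\s*"
    r"(?P<op><=|>=|<|>|=)?\s*"
    r"(?P<value>\d+(?:\.\d+)?)\s*"
    r"(?P<unit>%|mmol/mol)?",
    re.IGNORECASE,
)

criteria = [
    "Inclusion: HbA1c >= 7.5 % at screening",          # invented examples
    "Hemoglobin A1c < 10%",
]

for text in criteria:
    m = pattern.search(text)
    if m:
        print((m.group("var"), m.group("op") or "=", float(m.group("value")),
               m.group("unit") or "%"))
```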
Building entity models through observation and learning
NASA Astrophysics Data System (ADS)
Garcia, Richard; Kania, Robert; Fields, MaryAnne; Barnes, Laura
2011-05-01
To support the missions and tasks of mixed robotic/human teams, future robotic systems will need to adapt to the dynamic behavior of both teammates and opponents. One of the basic elements of this adaptation is the ability to exploit both long- and short-term temporal data. This adaptation allows robotic systems to predict, anticipate, and influence the future behavior of both opponents and teammates, and affords the system the ability to adjust its own behavior in order to optimize its ability to achieve the mission goals. This work is a preliminary step in the effort to develop online entity behavior models through a combination of learning techniques and observations. As knowledge is extracted from the system through sensor and temporal feedback, agents within the multi-agent system attempt to develop and exploit a basic movement model of an opponent. For the purpose of this work, extraction and exploitation are performed through the use of a discretized two-dimensional game. The game consists of a predetermined number of sentries attempting to keep an unknown intruder agent from penetrating their territory. The sentries utilize temporal data coupled with past opponent observations to hypothesize the probable locations of the opponent and thus optimize their guarding locations.
NASA Astrophysics Data System (ADS)
Hellen, Adam; Mandelis, Andreas; Finer, Yoav; Amaechi, Bennett
2010-02-01
The development of photothermal techniques to detect thermal waves in biological tissue has occurred with a concomitant advancement in the extraction of material thermophysical properties and knowledge regarding the internal structure of a medium. Human molars (n=37) were subjected to demineralization in acid gel (pH 4.5, 10 days), followed by incubation in different fluoride-containing remineralization solutions. PTR-LUM frequency scans (1 Hz - 1 kHz) were performed prior to and during demineralization and remineralization treatments. Transverse Micro-Radiography (TMR) analysis followed at treatment conclusion. A coupled diffuse-photon-density-wave and thermal-wave theoretical model was used to quantitatively evaluate changes in thermal and optical properties of sound, demineralized and remineralized enamel. Amplitude increase and phase lag decrease in demineralized samples were consistent with higher scatter of the diffuse-photon density field and thermal wave confinement to near-surface regions. A remineralized sample illustrates a complex interplay between surface and subsurface processes, confining the thermal-wave centroid toward the dominating layer. PTR-LUM sensitivity to changes in tooth mineralization coupled with optical and thermal property extraction illustrates the technique's potential for non-destructive evaluation of multi-layered turbid media.
Bijak, Michal
2017-11-10
Milk thistle (Silybum marianum) is a medicinal plant that has been used for thousands of years as a remedy for a variety of ailments. The main component of S. marianum fruit extract (silymarin) is a flavonolignan called silybin, which is not only the major element of silymarin but also its most active ingredient, as has been confirmed in various studies. This compound belongs to the flavonoid group known as flavonolignans. Silybin's structure consists of two main units: the first is based on taxifolin, and the second is a phenylpropanoid unit, in this case coniferyl alcohol. These two units are linked together into one structure by an oxeran ring. Since the 1970s, silybin has been regarded in official medicine as a substance with hepatoprotective properties. There is a large body of research demonstrating silybin's many other beneficial properties, but there is still a lack of papers focused on its molecular structure, chemistry, metabolism, and novel forms of administration. Therefore, the aim of this paper is a literature review presenting and systematizing our knowledge of the silybin molecule, with particular emphasis on its structure, chemistry, bioavailability, and metabolism.
A knowledge-base generating hierarchical fuzzy-neural controller.
Kandadai, R M; Tien, J M
1997-01-01
We present an innovative fuzzy-neural architecture that is able to automatically generate a knowledge base, in an extractable form, for use in hierarchical knowledge-based controllers. The knowledge base is in the form of a linguistic rule base appropriate for a fuzzy inference system. First, we modify Berenji and Khedkar's (1992) GARIC architecture to enable it to automatically generate a knowledge base; a pseudosupervised learning scheme using reinforcement learning and error backpropagation is employed. Next, we further extend this architecture to a hierarchical controller that is able to generate its own knowledge base. Example applications are provided to underscore its viability.
Brown, James G; Joyce, Kerry E; Stacey, Dawn; Thomson, Richard G
2015-05-01
Efficacy of patient decision aids (PtDAs) may be influenced by trial participants' identity either as patients seeking to benefit personally from involvement or as volunteers supporting the research effort. To determine if study characteristics indicative of participants' trial identity might influence PtDA efficacy. We undertook exploratory subgroup meta-analysis of the 2011 Cochrane review of PtDAs, including trials that compared PtDA with usual care for treatment decisions. We extracted data on whether participants initiated the care pathway, setting, practitioner interactions, and 6 outcome variables (knowledge, risk perception, decisional conflict, feeling informed, feeling clear about values, and participation). The main subgroup analysis categorized trials as "volunteerism" or "patienthood" on the basis of whether participants initiated the care pathway. A supplementary subgroup analysis categorized trials on the basis of whether any volunteerism factors were present (participants had not initiated the care pathway, had attended a research setting, or had a face-to-face interaction with a researcher). Twenty-nine trials were included. Compared with volunteerism trials, pooled effect sizes were higher in patienthood trials (where participants initiated the care pathway) for knowledge, decisional conflict, feeling informed, feeling clear, and participation. The subgroup difference was statistically significant for knowledge only (P = 0.03). When trials were compared on the basis of whether volunteerism factors were present, knowledge was significantly greater in patienthood trials (P < 0.001), but there was otherwise no consistent pattern of differences in effects across outcomes. There is a tendency toward greater PtDA efficacy in trials in which participants initiate the pathway of care. Knowledge acquisition appears to be greater in trials where participants are predominantly patients rather than volunteers. © The Author(s) 2015.
Knowledge Representation of the Melody and Rhythm in Koto Songs
NASA Astrophysics Data System (ADS)
Deguchi, Sachiko; Shirai, Katsuhiko
This paper describes the knowledge representation of the melody and rhythm in koto songs based on the structure of the domain: the scale, melisma (the melody in a syllable), and bar. We have encoded koto scores and sequentially extracted 2-, 3-, and 4-note melodic patterns from the voice part of the scores. The 2-, 3-, and 4-note patterns used in the melisma are limited, and the top patterns account for high percentages. The 3- and 4-note melodic patterns are examined at each scale degree. These patterns are more restricted than the patterns that are possible under the constraint of the scale; these typical patterns on the scale represent the knowledge of koto players. We have analyzed rhythms in two different ways: by extracting the rhythm associated with each melodic pattern, and by extracting the rhythm within each bar. The former are complicated and the latter are typical, which indicates that koto players recognize melodic patterns and rhythmic patterns independently. Our analyses show the melodic patterns and rhythmic patterns that are acquired by koto players. These patterns will be applied to the description of variations of the melisma to build a score database. They will also be applied to composition and education. The melodic patterns can be extracted from other genres of Japanese traditional music, foreign old folk songs, or chants by using this method.
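The melodic-pattern counts described above amount to n-gram statistics over note sequences. A minimal sketch of that counting step is given below; the encoded melody is invented, since the actual koto score encoding is not reproduced here.

```python
from collections import Counter

def ngram_counts(notes, n):
    """Count all consecutive n-note patterns in a note sequence."""
    return Counter(tuple(notes[i:i + n]) for i in range(len(notes) - n + 1))

# Hypothetical melody encoded as scale degrees; the paper works on the
# encoded voice part of koto scores instead.
melody = [1, 2, 3, 1, 2, 3, 4, 3, 2, 1]
for n in (2, 3, 4):
    print(f"top {n}-note patterns:", ngram_counts(melody, n).most_common(3))
```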
Concept maps: A tool for knowledge management and synthesis in web-based conversational learning.
Joshi, Ankur; Singh, Satendra; Jaswal, Shivani; Badyal, Dinesh Kumar; Singh, Tejinder
2016-01-01
Web-based conversational learning provides an opportunity for shared knowledge base creation through collaboration and collective wisdom extraction. The amount of information generated in such forums is usually very large and multidimensional (in alignment with the desirable preconditions for constructivist knowledge creation), and sometimes the nature of the expected new information cannot be anticipated in advance. Thus, concept maps (crafted from constructed data) as "process summary" tools may be a solution to improve critical thinking and learning by making connections between the facts or knowledge shared by the participants during online discussion. This exploratory paper begins with a description of this innovation as tried on a web-based interacting platform (email list management software), FAIMER-Listserv, and qualitative evidence generated through peer feedback. This process description is further supported by a theoretical construct showing how social constructivism (inclusive of autonomy and complexity) affects conversational learning. The paper rationalizes the use of the concept map as a mid-summary tool for extracting information and making further sense of this apparent complexity.
Knowledge Discovery in Variant Databases Using Inductive Logic Programming
Nguyen, Hoan; Luu, Tien-Dao; Poch, Olivier; Thompson, Julie D.
2013-01-01
Understanding the effects of genetic variation on the phenotype of an individual is a major goal of biomedical research, especially for the development of diagnostics and effective therapeutic solutions. In this work, we describe the use of a recent knowledge discovery from database (KDD) approach using inductive logic programming (ILP) to automatically extract knowledge about human monogenic diseases. We extracted background knowledge from MSV3d, a database of all human missense variants mapped to 3D protein structure. In this study, we identified 8,117 mutations in 805 proteins with known three-dimensional structures that were known to be involved in human monogenic disease. Our results help to improve our understanding of the relationships between structural, functional or evolutionary features and deleterious mutations. Our inferred rules can also be applied to predict the impact of any single amino acid replacement on the function of a protein. The interpretable rules are available at http://decrypthon.igbmc.fr/kd4v/. PMID:23589683
Boutaoui, Nassima; Zaiter, Lahcene; Benayache, Fadila; Benayache, Samir; Carradori, Simone; Cesa, Stefania; Giusti, Anna Maria; Campestre, Cristina; Menghini, Luigi; Innosa, Denise; Locatelli, Marcello
2018-02-20
This study was performed to evaluate metabolite recovery from different extraction methods applied to Thymus algeriensis aerial parts. A high-performance liquid chromatographic method using a photodiode array detector with gradient elution was developed and validated for the simultaneous estimation of different phenolic compounds in the extracts and in their corresponding purified fractions. The experimental results show that microwave-assisted aqueous extraction for 15 min at 100 °C gave the most phenolics-enriched extract, reducing extraction time without degradation effects on bioactives. Sixteen compounds were identified in this extract, 11 phenolic compounds and five flavonoids, all known for their biological activities. Color analysis and determination of chlorophylls and carotenoids complemented the knowledge of the chemical profile of this plant.
Concept recognition for extracting protein interaction relations from biomedical text
Baumgartner, William A; Lu, Zhiyong; Johnson, Helen L; Caporaso, J Gregory; Paquette, Jesse; Lindemann, Anna; White, Elizabeth K; Medvedeva, Olga; Cohen, K Bretonnel; Hunter, Lawrence
2008-01-01
Background: Reliable information extraction applications have been a long-sought goal of the biomedical text mining community, a goal that if reached would provide valuable tools to benchside biologists in their increasingly difficult task of assimilating the knowledge contained in the biomedical literature. We present an integrated approach to concept recognition in biomedical text. Concept recognition provides key information that has been largely missing from previous biomedical information extraction efforts, namely direct links to well-defined knowledge resources that explicitly cement the concept's semantics. The BioCreative II tasks discussed in this special issue have provided a unique opportunity to demonstrate the effectiveness of concept recognition in the field of biomedical language processing. Results: Through the modular construction of a protein interaction relation extraction system, we present several use cases of concept recognition in biomedical text, and relate these use cases to potential uses by the benchside biologist. Conclusion: Current information extraction technologies are approaching performance standards at which concept recognition can begin to deliver high-quality data to the benchside biologist. Our system is available as part of the BioCreative Meta-Server project and on the internet. PMID:18834500
Enhancing Biomedical Text Summarization Using Semantic Relation Extraction
Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao
2011-01-01
Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) we extract semantic relations in each sentence using the semantic knowledge representation tool SemRep; 2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; and 3) for relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate a text summary using an information-retrieval-based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance, and our results are better than those of the MEAD system, a well-known tool for text summarization. PMID:21887336
Attentional effects on rule extraction and consolidation from speech.
López-Barroso, Diana; Cucurell, David; Rodríguez-Fornells, Antoni; de Diego-Balaguer, Ruth
2016-07-01
Incidental learning plays a crucial role in the initial phases of language acquisition. However, the knowledge derived from implicit learning, which is based on prediction-based mechanisms, may become explicit. The role that attention plays in the formation of implicit and explicit knowledge of the learned material is unclear. In the present study, we investigated the role that attention plays in the acquisition of non-adjacent rule learning from speech. In addition, we also tested whether the amount of attention during learning changes the representation of the learned material after a 24-h delay containing sleep. For that, we developed an experiment run on two consecutive days consisting of exposure to an artificial language that contained non-adjacent dependencies (rules) between words, while different conditions were established to manipulate the amount of attention given to the rules (target and non-target conditions). Furthermore, we used both indirect and direct measures of learning, which are more sensitive to implicit and explicit knowledge, respectively. Whereas the indirect measures indicated that learning of the rules occurred regardless of attention, more explicit judgments after learning showed differences in the type of learning reached under the two attention conditions. Twenty-four hours later, indirect measures showed no further improvements during additional language exposure, and explicit judgments indicated that only the information more robustly learned on the previous day was consolidated. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
Denham, Susanne A; Bassett, Hideko Hamada; Thayer, Sara K; Mincic, Melissa S; Sirotkin, Yana S; Zinsser, Katherine
2012-01-01
Social-emotional behavior of 352 3- and 4-year-olds attending private child-care and Head Start programs was observed using the Minnesota Preschool Affect Checklist, Revised (MPAC-R). Goals of the investigation included (a) using MPAC-R data to extract a shortened version, MPAC-R/S, comparing structure, internal consistency, test-retest reliability, and stability of both versions; and, using the shortened measure, to examine (b) age, gender, and risk status differences in social-emotional behaviors; (c) contributions of emotion knowledge and executive function to social-emotional behaviors; and (d) contributions of social-emotional behaviors to early school adjustment and kindergarten academic success. Results show that the reliability of the MPAC-R/S was as good as, or better than, that of the MPAC-R. MPAC-R/S structure, at both times of observation, included emotionally negative/aggressive, emotionally regulated/prosocial, and emotionally positive/productive behaviors; MPAC-R structure was similar but less replicable over time. Age, gender, and risk differences were found. Children's emotion knowledge contributed to later emotionally regulated/prosocial behavior. Finally, preschool emotionally negative/aggressive behaviors were associated with concurrent and kindergarten school success, and there was evidence of social-emotional behavior mediating relations between emotion knowledge or executive function and school outcomes. The importance of portable, empirically supported observation measures of social-emotional behaviors is discussed along with possible applications, teacher utilization, and implementation barriers.
The tool extracts deep phenotypic information from the clinical narrative at the document, episode, and patient levels. The final output is a FHIR-compliant patient-level phenotypic summary that can be consumed by research warehouses or the DeepPhe native visualization tool.
Optimizing the extraction, storage, and analysis of airborne endotoxins
USDA-ARS?s Scientific Manuscript database
While the Limulus amebocyte lysate (LAL) assay is part of most procedures to assess airborne endotoxin exposure, there is no universally agreed upon standard procedure. The purpose of this study was to fill in additional knowledge gaps with respect to the extraction, storage, and analysis of endotox...
The Science and Art of Eyebrow Transplantation by Follicular Unit Extraction
Gupta, Jyoti; Kumar, Amrendra; Chouhan, Kavish; Ariganesh, C; Nandal, Vinay
2017-01-01
Eyebrows constitute a very important and prominent feature of the face. With growing awareness, eyebrow transplantation has become a popular procedure. However, although the eyebrow is a small area, the procedure requires a great deal of precision and knowledge regarding anatomy, brow design, and extraction and implantation technique. This article gives a comprehensive view of eyebrow transplantation, with special emphasis on the follicular unit extraction technique, which has become the most popular technique. PMID:28852290
Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.
Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G
2017-09-01
To investigate whether the use of ensemble learning algorithms improves physical activity recognition accuracy compared to single-classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k-nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. Among the conventional ensemble methods, random forest models provided consistently high activity recognition; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.
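A minimal sketch of the weighted-majority-vote fusion step is shown below. The classifier names, labels, and weights are hypothetical; in the study the weights would come from each classifier's validation performance.

```python
from collections import defaultdict

def weighted_majority_vote(predictions, weights):
    """Fuse single-classifier labels into one decision.

    predictions: classifier name -> predicted activity label
    weights:     classifier name -> weight (e.g. validation F1 score)
    """
    scores = defaultdict(float)
    for clf, label in predictions.items():
        scores[label] += weights[clf]
    return max(scores, key=scores.get)

# Hypothetical per-classifier outputs and validation-derived weights.
preds   = {"tree": "walking", "knn": "running", "svm": "walking", "nn": "running"}
weights = {"tree": 0.70, "knn": 0.80, "svm": 0.75, "nn": 0.85}
print(weighted_majority_vote(preds, weights))  # "running" (1.65 vs 1.45)
```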
Huang, Zhengxing; Dong, Wei; Duan, Huilong; Liu, Jiquan
2018-05-01
Acute coronary syndrome (ACS), as a common and severe cardiovascular disease, is a leading cause of death and the principal cause of serious long-term disability globally. Clinical risk prediction of ACS is important for early intervention and treatment. Existing ACS risk scoring models are based mainly on a small set of hand-picked risk factors and often dichotomize predictive variables to simplify the score calculation. This study develops a regularized stacked denoising autoencoder (SDAE) model to stratify clinical risks of ACS patients from a large volume of electronic health records (EHR). To capture characteristics of patients at similar risk levels, and preserve the discriminating information across different risk levels, two constraints are added on the SDAE to make the reconstructed feature representations contain more risk information about patients, which contributes to a better clinical risk prediction result. We validate our approach on a real clinical dataset consisting of 3464 ACS patient samples. The performance of our approach for predicting ACS risk remains robust and reaches 0.868 and 0.73 in terms of AUC and accuracy, respectively. The obtained results show that the proposed approach achieves a competitive performance compared to state-of-the-art models in dealing with the clinical risk prediction problem. In addition, our approach can extract informative risk factors of ACS via a reconstructive learning strategy. Some of these extracted risk factors are not only consistent with existing medical domain knowledge but also contain suggestive hypotheses that could be validated by further investigations in the medical domain.
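For readers unfamiliar with the building block, the sketch below shows a single denoising autoencoder layer in PyTorch: the input is corrupted with noise and the network is trained to reconstruct the clean input. It omits the paper's stacking and risk-level constraints, and the data dimensions are assumed for illustration.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """One denoising autoencoder layer (stacking and risk constraints omitted)."""
    def __init__(self, n_features, n_hidden, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        noisy = x + self.noise_std * torch.randn_like(x)  # corrupt the input
        return self.decoder(self.encoder(noisy))

# Hypothetical EHR feature matrix: 3464 patients x 50 numeric features.
x = torch.randn(3464, 50)
model = DenoisingAutoencoder(n_features=50, n_hidden=16)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                         # short demonstration loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)                # reconstruct the clean input
    loss.backward()
    optimizer.step()
```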
Kothari, Anita; Hovanec, Nina; Hastie, Robyn; Sibbald, Shannon
2011-07-25
The concept of knowledge management has been prevalent in the business sector for decades. Only recently has knowledge management received attention from the health care sector, in part due to the ever-growing amount of information that health care practitioners must handle. It has become essential to develop a way to manage the information coming into and going out of a health care organization. The purpose of this paper was to summarize previous studies from the business literature that explored specific knowledge management tools, with the aim of extracting lessons that could be applied in the health domain. We searched seven databases using keywords such as "knowledge management", "organizational knowledge", and "business performance". We included articles published between 2000 and 2009; we excluded non-English articles. 83 articles were reviewed and data were extracted to: (1) uncover reasons for initiating knowledge management strategies, (2) identify potential knowledge management strategies/solutions, and (3) describe facilitators and barriers to knowledge management. KM strategies include such things as training sessions, communication technologies, process mapping and communities of practice. Common facilitators and barriers to implementing these strategies are discussed in the business literature, but rigorous studies about the effectiveness of such initiatives are lacking. The health care sector is at a pinnacle place, with incredible opportunities to design, implement (and evaluate) knowledge management systems. While more research needs to be done on how best to do this in healthcare, the lessons learned from the business sector can provide a foundation on which to build.
A new patent-based approach for technology mapping in the pharmaceutical domain.
Russo, Davide; Montecchi, Tiziano; Carrara, Paolo
2013-09-01
The key factor in decision-making is the quality of the information collected and processed in the problem analysis. In most cases, patents represent a very important source of information. The main problem is how to extract such information from a huge corpus of documents with high recall and precision, and in a short time. This article demonstrates a patent search and classification method, called the Knowledge Organizing Module, which consists of creating, almost automatically, a pool of patents based on polysemy expansion and homonymy disambiguation. Once the pool is built, an automatic patent technology landscape is generated to establish the state of the art of a product and to explore competing alternative treatments and/or possible technological opportunities. An exemplary case study is provided; it deals with patent analysis in the field of verruca treatments.
Pharmacy Students' Attitudes Toward Debt.
Park, Taehwan; Yusuf, Akeem A; Hadsall, Ronald S
2015-05-25
To examine pharmacy students' attitudes toward debt. Two hundred thirteen pharmacy students at the University of Minnesota were surveyed using items designed to assess attitudes toward debt. Factor analysis was performed to identify common themes. Subgroup analysis was performed to examine whether students' debt-tolerant attitudes varied according to their demographic characteristics, past loan experience, monthly income, and workload. Principal component extraction with varimax rotation identified 3 factor themes accounting for 49.0% of the total variance: tolerant attitudes toward debt (23.5%); contemplation and knowledge about loans (14.3%); and fear of debt (11.2%). Tolerant attitudes toward debt were higher if students were white or if they had had past loan experience. These 3 themes in students' attitudes toward debt were consistent with those identified in previous research. Pharmacy schools should consider providing a structured financial education to improve student management of debt.
An intelligent assistant for physicians.
Gavrilis, Dimitris; Georgoulas, George; Vasiloglou, Nikolaos; Nikolakopoulos, George
2016-08-01
This paper presents a software tool developed for assisting physicians during the examination process. The tool consists of a number of modules with the aim of making the examination process not only quicker but also fault-proof, moving from a simple electronic medical records management system towards an intelligent assistant for the physician. The intelligent component exploits users' inputs as well as well-established standards to line up possible suggestions for filling in the examination report. As the physician continues using it, the tool keeps extracting new knowledge. The architecture of the tool is presented in brief, while the intelligent component, which builds upon the notion of multilabel learning, is presented in more detail. Our preliminary results from a real test case indicate that the intelligent module can reach quite high performance without a large amount of data.
An Integrated Children Disease Prediction Tool within a Special Social Network.
Apostolova Trpkovska, Marika; Yildirim Yayilgan, Sule; Besimi, Adrian
2016-01-01
This paper proposes a social network with an integrated children's disease prediction system developed using the specially designed Children General Disease Ontology (CGDO). This ontology consists of children's diseases, their relationships with symptoms, and Semantic Web Rule Language (SWRL) rules specially designed for predicting diseases. The prediction process starts when the user fills in data about the observed signs and symptoms, which are then mapped to the CGDO ontology. Once the data are mapped, the prediction results are presented. The prediction phase executes the rules, which extract the predicted disease details based on the SWRL rules specified. The motivation behind the development of this system is to spread knowledge about children's diseases and their symptoms in a very simple way using the specialized social networking website www.emama.mk.
Adaptive multisensor fusion for planetary exploration rovers
NASA Technical Reports Server (NTRS)
Collin, Marie-France; Kumar, Krishen; Pampagnin, Luc-Henri
1992-01-01
The purpose of the adaptive multisensor fusion system currently being designed at NASA/Johnson Space Center is to provide a robotic rover with assured vision and safe navigation capabilities during robotic missions on planetary surfaces. Our approach consists of using multispectral sensing devices ranging from visible to microwave wavelengths to fulfill the needs of perception for space robotics. Based on knowledge of the illumination conditions and the sensors' capabilities, the designed perception system should automatically select the best subset of sensors and their sensing modalities that will allow the perception and interpretation of the environment. Then, based on reflectance and emittance theoretical models, the sensor data are fused to extract the physical and geometrical surface properties of the environment: surface slope, dielectric constant, temperature, and roughness. The theoretical concepts, the design, and first results of the multisensor perception system are presented.
Koh, Gar Yee; Chou, Guixin; Liu, Zhijun
2009-01-01
The aqueous extraction process of the leaves of Rubus suavissimus often brings in a large amount of non-active polysaccharides as part of the constituents. To purify this water extract for potentially elevated bioactivity, alcohol precipitation (AP) consisting of gradient regimens was applied, and its resultants were examined through colorimetric and HPLC analyses. AP was effective in partitioning the aqueous crude extract into a soluble supernatant and an insoluble precipitate, and its effect varied significantly with alcohol regimens. Generally, the higher the alcohol concentration, the purer was the resultant extract. At its maximum, approximately 36% (w/w) of the crude extract, of which 23% was polysaccharides, was precipitated and removed, resulting in a purified extract consisting of over 20% bioactive marker compounds (gallic acid, ellagic acid, rutin, rubusoside, and steviol monoside). The removal of 11% polysaccharides from the crude water extract by alcohol precipitation was complete at the 70% alcohol regimen. Higher alcohol levels resulted in even purer extracts, possibly by removing some compounds of uncertain bioactivity. Alcohol precipitation is an effective way of removing polysaccharides from the water extract of the sweet tea plant and could be used as an initial simple purification tool for many aqueous plant extracts that contain large amounts of polysaccharides. PMID:19419169
Effective low-level processing for interferometric image enhancement
NASA Astrophysics Data System (ADS)
Joo, Wonjong; Cha, Soyoung S.
1995-09-01
The hybrid operation of digital image processing and a knowledge-based AI system has been recognized as a desirable approach for the automated evaluation of noise-ridden interferograms. Early noise/data reduction, before the phase is extracted, is essential for the success of the knowledge-based processing. In this paper, new effective, interactive low-level processing operators, namely a background-matched filter and a directional-smoothing filter, are developed and tested with transonic aerodynamic interferograms. The results indicate that these new operators have promising advantages in noise/data reduction over the conventional ones, leading to the success of high-level, intelligent phase extraction.
A semantic model for multimodal data mining in healthcare information systems.
Iakovidis, Dimitris; Smailis, Christos
2012-01-01
Electronic health records (EHRs) are representative examples of multimodal/multisource data collections, including measurements, images and free texts. The diversity of such information sources and the increasing amounts of medical data produced by healthcare institutes annually pose significant challenges in data mining. In this paper we present a novel semantic model that describes knowledge extracted from the lowest level of a data mining process, where information is represented by multiple features, i.e. measurements or numerical descriptors extracted from measurements, images, texts or other medical data, forming multidimensional feature spaces. Knowledge collected by manual annotation or extracted by unsupervised data mining from one or more feature spaces is modeled through generalized qualitative spatial semantics. This model enables a unified representation of knowledge across multimodal data repositories. It contributes to bridging the semantic gap by enabling direct links between low-level features and higher-level concepts, e.g. describing body parts, anatomies and pathological findings. The proposed model has been developed in the web ontology language based on description logics (OWL-DL) and can be applied to a variety of data mining tasks in medical informatics. Its utility is demonstrated for automatic annotation of medical data.
Bao, X Y; Huang, W J; Zhang, K; Jin, M; Li, Y; Niu, C Z
2018-04-18
There is a huge amount of diagnostic and treatment information in electronic medical records (EMR), which is a concrete manifestation of clinicians' actual diagnosis and treatment details. Many episodes in EMRs, such as complaints, present illness, past history, differential diagnosis, diagnostic imaging and surgical records, reflect details of diagnosis and treatment in the clinical process and are written as natural-language Chinese descriptions. How to extract effective information from these Chinese narrative text data and organize it into tabular form for medical research analysis, so that clinical data can be put to practical use in the real world, is a difficult problem in Chinese medical data processing. Based on the narrative EMR text data of a tertiary hospital in China, a method for learning customized information extraction rules and performing rule-based information extraction is proposed. The overall method consists of three steps. (1) Step 1: a random sample of 600 records (including history of present illness, past history, personal history, family history, etc.) was extracted from the electronic medical record data as a raw corpus. Using our Chinese clinical narrative text annotation platform, trained clinicians and nurses marked the tokens and phrases in the corpus that were to be extracted (with a history of diabetes as an example). (2) Step 2: based on the annotated clinical text corpus, extraction templates were first summarized and induced; these templates were then rewritten as extraction rules using regular expressions in the Perl programming language. Using these extraction rules as a basic knowledge base, we developed Perl extraction packages for extracting data from the EMR text. Finally, the extracted data items were organized in tabular format for later use in clinical research or hospital surveillance. (3) Step 3: the proposed methods were evaluated and validated on the National Clinical Service Data Integration Platform, and the extraction results were checked with a combination of manual and automated verification, demonstrating the effectiveness of the method. For all patients with diabetes as the diagnosed disease in the Department of Endocrinology of the hospital who were discharged in 2015 (1,436 patients in total), extraction of diabetes history from the medical history episodes achieved a recall of 87.6%, an accuracy of 99.5%, and an F-score of 0.93. For a 10% sample of patients with diabetes discharged by August 2017 in the same department (1,223 patients in total), the diabetes history extraction achieved a recall of 89.2%, an accuracy of 99.2%, and an F-score of 0.94. This study adopts a combination of natural language processing and rule-based information extraction, and designs and implements an algorithm for extracting customized information from unstructured Chinese electronic medical record text data, with better results than existing work.
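The sketch below illustrates the flavor of such rule-based extraction (the original system encoded its templates as Perl regular expressions; Python is used here). The phrase patterns, negation rule, and example sentences are invented for the illustration, not taken from the study's rule base.

```python
import re

# Hypothetical rules: each maps a regular expression over the past-history
# narrative to a structured field. The real system encoded its templates as
# Perl regular expressions induced from annotated corpora.
RULES = [
    ("diabetes_history", re.compile(r"糖尿病(病)?史\s*(\d+)\s*年")),  # "... years of diabetes history"
    ("diabetes_history", re.compile(r"(既往|患)糖尿病")),              # "past/known diabetes"
]
NEGATION = re.compile(r"(否认|无)[^，。]*糖尿病")                       # "denies ... diabetes"

def extract_history(past_history_text):
    """Turn one narrative field into a flat, tabular record."""
    row = {"diabetes_history": "no", "diabetes_years": None}
    if NEGATION.search(past_history_text):
        return row
    for field, pattern in RULES:
        m = pattern.search(past_history_text)
        if m:
            row[field] = "yes"
            if m.lastindex and m.group(m.lastindex).isdigit():
                row["diabetes_years"] = int(m.group(m.lastindex))
            break
    return row

print(extract_history("既往糖尿病史10年，规律服用二甲双胍。"))  # history present, 10 years
print(extract_history("否认高血压、糖尿病史。"))                # negated -> no history
```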
Georga, Eleni; Protopappas, Vasilios; Guillen, Alejandra; Fico, Giuseppe; Ardigo, Diego; Arredondo, Maria Teresa; Exarchos, Themis P; Polyzos, Demosthenes; Fotiadis, Dimitrios I
2009-01-01
METABO is a diabetes monitoring and management system which aims at recording and interpreting the patient's context, as well as providing decision support to both the patient and the doctor. The METABO system consists of (a) a Patient's Mobile Device (PMD), (b) different types of unobtrusive biosensors, (c) a Central Subsystem (CS) located remotely at the hospital, and (d) the Control Panel (CP) from which physicians can follow up their patients and also gain access to the CS. METABO provides a multi-parametric monitoring system which facilitates the efficient and systematic recording of dietary, physical activity, medication and medical information (continuous and discontinuous glucose measurements). Based on all recorded contextual information, data mining schemes that run on the PMD are responsible for modeling patients' metabolism, predicting hypo-/hyperglycaemic events, and providing the patient with short- and long-term alerts. In addition, all past and recently recorded data are analyzed to extract patterns of behavior, discover new knowledge and provide explanations to the physician through the CP. Advanced tools in the CP allow the physician to prescribe personalized treatment plans and frequently quantify the patient's adherence to treatment.
Computational methods to extract meaning from text and advance theories of human cognition.
McNamara, Danielle S
2011-01-01
Over the past two decades, researchers have made great advances in the area of computational methods for extracting meaning from text. This research has to a large extent been spurred by the development of latent semantic analysis (LSA), a method for extracting and representing the meaning of words using statistical computations applied to large corpora of text. Since the advent of LSA, researchers have developed and tested alternative statistical methods designed to detect and analyze meaning in text corpora. This research exemplifies how statistical models of semantics play an important role in our understanding of cognition and contribute to the field of cognitive science. Importantly, these models afford large-scale representations of human knowledge and allow researchers to explore various questions regarding knowledge, discourse processing, text comprehension, and language. This topic includes the latest progress by the leading researchers in the endeavor to go beyond LSA. Copyright © 2010 Cognitive Science Society, Inc.
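A minimal LSA-style sketch is shown below: build a term-document matrix and project it into a low-dimensional latent space with a truncated SVD. The toy corpus and two latent dimensions are for illustration only; real LSA applications use large corpora, weighting schemes such as log-entropy, and a few hundred dimensions.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus; real LSA uses large corpora and hundreds of dimensions.
docs = [
    "the doctor examined the patient",
    "the physician treated the patient",
    "the astronomer observed the star",
    "the telescope pointed at the star",
]
X = CountVectorizer().fit_transform(docs)            # term-document counts
doc_vectors = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
print(np.round(cosine_similarity(doc_vectors), 2))
# Documents sharing vocabulary ("patient" vs. "star") should come out more
# similar to each other than to documents from the other topic.
```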
Acetabular rim and surface segmentation for hip surgery planning and dysplasia evaluation
NASA Astrophysics Data System (ADS)
Tan, Sovira; Yao, Jianhua; Yao, Lawrence; Summers, Ronald M.; Ward, Michael M.
2008-03-01
Knowledge of the acetabular rim and surface can be invaluable for hip surgery planning and dysplasia evaluation. The acetabular rim can also be used as a landmark for registration purposes. At present, acetabular features are mostly extracted manually at great cost of time and human labor. Using a recent level set algorithm that can evolve on the surface of a 3D object represented by a triangular mesh, we automatically extracted rims and surfaces of acetabulae. The level set is guided by curvature features on the mesh. It can segment portions of a surface that are bounded by a line of extremal curvature (ridgeline or crestline). The rim of the acetabulum is such an extremal curvature line. Our material consists of eight hemi-pelvis surfaces. The algorithm is initiated by putting a small circle (level set seed) at the center of the acetabular surface. Because this surface distinctively has the form of a cup, we were able to use the Shape Index feature to automatically extract an approximate center. The circle then expands and deforms so as to take the shape of the acetabular rim. The results were visually inspected; only minor errors were detected. The algorithm also proved to be robust: seed placement was satisfactory for all eight hemi-pelvis surfaces without changing any parameters. For the level set evolution we were able to use a single set of parameters for seven out of eight surfaces.
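The Shape Index used to locate the cup-shaped acetabular surface is a simple function of the two principal curvatures. A sketch of Koenderink's formulation is given below; note that sign conventions vary between authors, and the curvature values used here are hypothetical.

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index from principal curvatures (sign conventions vary).

    With this convention it ranges from -1 (spherical cup) through 0 (saddle)
    to +1 (spherical cap).
    """
    k1, k2 = np.maximum(k1, k2), np.minimum(k1, k2)   # enforce k1 >= k2
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

# Hypothetical per-vertex principal curvatures: a concave, cup-like region has
# two negative curvatures, so its shape index lies near -1.
print(shape_index(-0.8, -1.0))   # about -0.93: cup-like, candidate acetabular centre
print(shape_index( 1.0,  1.0))   #  1.0: spherical cap
print(shape_index( 1.0, -1.0))   #  0.0: saddle
```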
Binet, Rachel; Deer, Deanne M; Uhlfelder, Samantha J
2014-06-01
Faster detection of contaminated foods can prevent adulterated foods from being consumed and minimize the risk of an outbreak of foodborne illness. A sensitive molecular detection method is especially important for Shigella because ingestion of as few as 10 of these bacterial pathogens can cause disease. The objectives of this study were to compare the ability of four DNA extraction methods to detect Shigella in six types of produce, post-enrichment, and to evaluate a new and rapid conventional multiplex assay that targets the Shigella ipaH, virB and mxiC virulence genes. This assay can detect less than two Shigella cells in pure culture, even when the pathogen is mixed with background microflora, and it can also differentiate natural Shigella strains from a control strain and eliminate false positive results due to accidental laboratory contamination. The four DNA extraction methods (boiling, PrepMan Ultra [Applied Biosystems], InstaGene Matrix [Bio-Rad], DNeasy Tissue kit [Qiagen]) detected 1.6 × 10³ Shigella CFU/ml post-enrichment, requiring ∼18 doublings to one cell in 25 g of produce pre-enrichment. Lower sensitivity was obtained, depending on produce type and extraction method. The InstaGene Matrix was the most consistent and sensitive and the multiplex assay accurately detected Shigella in less than 90 min, outperforming, to the best of our knowledge, molecular assays currently in place for this pathogen. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Chang, Ya-Ting; Chang, Li-Chiu; Chang, Fi-John
2005-04-01
To bridge the gap between academic research and actual operation, we propose an intelligent control system for reservoir operation. The methodology includes two major processes: knowledge acquisition and implementation, and the inference system. In this study, a genetic algorithm (GA) and a fuzzy rule base (FRB) are used to extract knowledge from the historical inflow data with a design objective function and from the operating rule curves, respectively. The adaptive network-based fuzzy inference system (ANFIS) is then used to implement the knowledge, to create the fuzzy inference system, and to estimate the optimal reservoir operation. To investigate its applicability and practicability, the Shihmen reservoir, Taiwan, is used as a case study. For the purpose of comparison, a simulation of the currently used M-5 operating rule curve is also performed. The results demonstrate that (1) the GA is an efficient way to search for the optimal input-output patterns, (2) the FRB can extract the knowledge from the operating rule curves, and (3) the ANFIS models built on different types of knowledge can produce much better performance than the traditional M-5 curves in real-time reservoir operation. Moreover, we show that the model can be made more intelligent for reservoir operation if more information (or knowledge) is involved.
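As a toy illustration of how a fuzzy rule base maps reservoir state to a release decision, the sketch below evaluates a two-rule, Sugeno-style system with normalized storage and inflow. The rules, membership functions, and consequent values are invented; they are not the paper's GA/ANFIS-derived knowledge.

```python
import numpy as np

def low(x):    # membership of a normalized value in "LOW"
    return float(np.clip(1.0 - x, 0.0, 1.0))

def high(x):   # membership of a normalized value in "HIGH"
    return float(np.clip(x, 0.0, 1.0))

def release_fraction(storage, inflow):
    """Toy zero-order Sugeno evaluation of two invented rules.

    Rule 1: IF storage is LOW  AND inflow is LOW  THEN release 20% of inflow.
    Rule 2: IF storage is HIGH AND inflow is HIGH THEN release 120% of inflow.
    """
    w1 = min(low(storage), low(inflow))
    w2 = min(high(storage), high(inflow))
    return (w1 * 0.2 + w2 * 1.2) / max(w1 + w2, 1e-9)

print(release_fraction(storage=0.3, inflow=0.2))  # conservative release (~0.42)
print(release_fraction(storage=0.9, inflow=0.8))  # drawdown before floods (~1.09)
```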
["Big data" - large data, a lot of knowledge?].
Hothorn, Torsten
2015-01-28
For some years now, the term Big Data has described technologies to extract knowledge from data. Applications of Big Data and their consequences are also increasingly discussed in the mass media. Because medicine is an empirical science, we discuss the meaning of Big Data and its potential for future medical research.
Educational Data Mining Acceptance among Undergraduate Students
ERIC Educational Resources Information Center
Wook, Muslihah; Yusof, Zawiyah M.; Nazri, Mohd Zakree Ahmad
2017-01-01
The acceptance of Educational Data Mining (EDM) technology is on the rise due to its ability to extract new knowledge from large amounts of students' data. This knowledge is important for educational stakeholders, such as policy makers, educators, and students themselves, to enhance efficiency and achievements. However, previous studies on EDM…
Liquete, Camino; Piroddi, Chiara; Drakou, Evangelia G.; Gurney, Leigh; Katsanevakis, Stelios; Charef, Aymen; Egoh, Benis
2013-01-01
Background Research on ecosystem services has grown exponentially during the last decade. Most of the studies have focused on assessing and mapping terrestrial ecosystem services, highlighting a knowledge gap on marine and coastal ecosystem services (MCES) and an urgent need to assess them. Methodology/Principal Findings We reviewed and summarized existing scientific literature related to MCES with the aim of extracting and classifying indicators used to assess and map them. We found 145 papers that specifically assessed marine and coastal ecosystem services, from which we extracted 476 indicators. Food provision, in particular fisheries, was the most extensively analyzed MCES, while water purification and coastal protection were the most frequently studied regulating and maintenance services. Recreation and tourism, under the cultural services, were also relatively well assessed. We highlight knowledge gaps regarding the availability of indicators that measure the capacity, flow or benefit derived from each ecosystem service. The majority of the case studies were found in mangroves and coastal wetlands and were mainly concentrated in Europe and North America. Our systematic review highlighted the need for an improved ecosystem service classification for marine and coastal systems, which is herein proposed with definitions and links to previous classifications. Conclusions/Significance This review summarizes the state of available information related to ecosystem services associated with marine and coastal ecosystems. The cataloging of MCES indicators and the integrated classification of MCES provided in this paper establish a background that can facilitate the planning and integration of future assessments. The final goal is to establish a consistent structure and populate it with information able to support the implementation of biodiversity conservation policies. PMID:23844080
NASA Astrophysics Data System (ADS)
Oxmann, J. F.; Schwendenmann, L.
2014-06-01
Knowledge of calcium phosphate (Ca-P) solubility is crucial for understanding temporal and spatial variations of phosphorus (P) concentrations in water bodies and sedimentary reservoirs. In situ relationships between liquid- and solid-phase levels cannot be fully explained by dissolved analytes alone and need to be verified by determining particular sediment P species. Lack of quantification methods for these species limits the knowledge of the P cycle. To address this issue, we (i) optimized a specifically developed conversion-extraction (CONVEX) method for P species quantification using standard additions, and (ii) simultaneously determined solubilities of Ca-P standards by measuring their pH-dependent contents in the sediment matrix. Ca-P minerals including various carbonate fluorapatite (CFAP) specimens from different localities, fluorapatite (FAP), fish bone apatite, synthetic hydroxylapatite (HAP) and octacalcium phosphate (OCP) were characterized by XRD, Raman, FTIR and elemental analysis. Sediment samples were incubated with and without these reference minerals and then sequentially extracted to quantify Ca-P species by their differential dissolution at pH values between 3 and 8. The quantification of solid-phase phosphates at varying pH revealed solubilities in the following order: OCP > HAP > CFAP (4.5% CO3) > CFAP (3.4% CO3) > CFAP (2.2% CO3) > FAP. Thus, CFAP was less soluble in sediment than HAP, and CFAP solubility increased with carbonate content. Unspiked sediment analyses together with standard addition analyses indicated consistent differential dissolution of natural sediment species vs. added reference species and therefore verified the applicability of the CONVEX method in separately determining the most prevalent Ca-P minerals. We found surprisingly high OCP contents in the coastal sediments analyzed, which supports the hypothesis of apatite formation by an OCP precursor mechanism.
DBpedia and the Live Extraction of Structured Data from Wikipedia
ERIC Educational Resources Information Center
Morsey, Mohamed; Lehmann, Jens; Auer, Soren; Stadler, Claus; Hellmann, Sebastian
2012-01-01
Purpose: DBpedia extracts structured information from Wikipedia, interlinks it with other knowledge bases and freely publishes the results on the web using Linked Data and SPARQL. However, the DBpedia release process is heavyweight and releases are sometimes based on several months old data. DBpedia-Live solves this problem by providing a live…
USDA-ARS?s Scientific Manuscript database
The objective of this study was to fill in additional knowledge gaps with respect to the extraction, storage, and analysis of airborne endotoxin, with a specific focus on samples from a dairy production facility. We utilized polycarbonate filters to collect total airborne endotoxins, sonication as ...
NASA Astrophysics Data System (ADS)
Poux, F.; Neuville, R.; Billen, R.
2017-08-01
Reasoning from information extraction given by point cloud data mining allows contextual adaptation and fast decision making. However, to achieve this perceptive level, a point cloud must be semantically rich, retaining relevant information for the end user. This paper presents an automatic knowledge-based method for pre-processing multi-sensory data and classifying a hybrid point cloud from both terrestrial laser scanning and dense image matching. Using 18 features, including sensor-biased data, each tessera in the high-density point cloud from the 3D captured complex mosaics of Germigny-des-prés (France) is segmented via a colour-based, multi-scale, abstraction-based feature extraction that exploits connectivity. A 2D surface and outline polygon of each tessera is generated by a RANSAC plane extraction and convex hull fitting. Knowledge is then used to classify every tessera based on its size, surface, shape, material properties and its neighbours' classes. The detection and semantic enrichment method shows promising results of 94% correct semantization, a first step toward the creation of an archaeological smart point cloud.
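The RANSAC-plane-plus-convex-hull step mentioned above can be pictured with a minimal sketch; it is not the authors' pipeline, and the tolerance, iteration count and synthetic points are assumptions.

```python
import numpy as np
from scipy.spatial import ConvexHull

def ransac_plane(points, n_iter=200, tol=0.002, seed=0):
    """Fit a plane to a tessera's 3D points by RANSAC; return inlier mask, normal, origin."""
    rng = np.random.default_rng(seed)
    best_mask, best_normal, best_origin = None, None, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-12:
            continue                       # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        mask = np.abs((points - p0) @ n) < tol
        if best_mask is None or mask.sum() > best_mask.sum():
            best_mask, best_normal, best_origin = mask, n, p0
    return best_mask, best_normal, best_origin

def outline_polygon(points, normal, origin):
    """Project plane inliers into 2D plane coordinates and return the convex-hull outline."""
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:           # normal is (anti)parallel to z
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    uv = np.c_[(points - origin) @ u, (points - origin) @ v]
    return uv[ConvexHull(uv).vertices]

# Illustrative use on a synthetic, nearly planar patch of points
pts = np.random.default_rng(1).random((500, 3))
pts[:, 2] *= 0.001
mask, n, o = ransac_plane(pts)
print(outline_polygon(pts[mask], n, o).shape)   # (k, 2) outline vertices
```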
Towards an Age-Phenome Knowledge-base
2011-01-01
Background Currently, data about age-phenotype associations are not systematically organized and cannot be studied methodically. Searching for scientific articles describing phenotypic changes reported as occurring at a given age is not possible for most ages. Results Here we present the Age-Phenome Knowledge-base (APK), in which knowledge about age-related phenotypic patterns and events can be modeled and stored for retrieval. The APK contains evidence connecting specific ages or age groups with phenotypes, such as disease and clinical traits. Using a simple text mining tool developed for this purpose, we extracted instances of age-phenotype associations from journal abstracts related to non-insulin-dependent Diabetes Mellitus. In addition, links between age and phenotype were extracted from clinical data obtained from the NHANES III survey. The knowledge stored in the APK is made available for the relevant research community in the form of 'Age-Cards', each of which holds all the information stored in the APK about a particular age. These Age-Cards are presented in a wiki, allowing community review, amendment and contribution of additional information. In addition to the wiki interaction, complex searches can also be conducted, although these require the user to have some knowledge of database query construction. Conclusions The combination of a knowledge model based repository with community participation in the evolution and refinement of the knowledge-base makes the APK a useful and valuable environment for collecting and curating existing knowledge of the connections between age and phenotypes. PMID:21651792
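A minimal sketch of the kind of simple text mining the abstract describes, assuming a regular expression over abstract sentences and a small phenotype keyword list; it is not the APK tool itself.

```python
import re

AGE_PATTERN = re.compile(
    r"\b(?:aged?\s+)?(\d{1,3})(?:\s*(?:-|to)\s*(\d{1,3}))?\s*(?:years?\s+old|years?|y/o)\b",
    re.I)
PHENOTYPES = ["diabetes", "hypertension", "obesity"]   # illustrative keyword list

def extract_age_phenotype(sentence):
    """Pair every age expression in a sentence with every phenotype keyword it contains."""
    ages = [m.group(0) for m in AGE_PATTERN.finditer(sentence)]
    found = [p for p in PHENOTYPES if p in sentence.lower()]
    return [(age, phen) for age in ages for phen in found]

print(extract_age_phenotype(
    "Prevalence of diabetes increased sharply in subjects aged 45 to 64 years."))
# [('aged 45 to 64 years', 'diabetes')]
```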
Towards an Obesity-Cancer Knowledge Base: Biomedical Entity Identification and Relation Detection
Lossio-Ventura, Juan Antonio; Hogan, William; Modave, François; Hicks, Amanda; Hanna, Josh; Guo, Yi; He, Zhe; Bian, Jiang
2017-01-01
Obesity is associated with increased risks of various types of cancer, as well as a wide range of other chronic diseases. On the other hand, access to health information activates patient participation and improves health outcomes. However, existing online information on obesity and its relationship to cancer is heterogeneous, ranging from pre-clinical models and case studies to mere hypothesis-based scientific arguments. A formal knowledge representation (i.e., a semantic knowledge base) would help better organize and deliver the quality health information related to obesity and cancer that consumers need. Nevertheless, current ontologies describing obesity, cancer and related entities are not designed to guide automatic knowledge base construction from heterogeneous information sources. Thus, in this paper, we present methods for named-entity recognition (NER) to extract biomedical entities from scholarly articles and for detecting if two biomedical entities are related, with the long-term goal of building an obesity-cancer knowledge base. We leverage both linguistic and statistical approaches in the NER task, which supersedes the state-of-the-art results. Further, based on statistical features extracted from the sentences, our method for relation detection obtains an accuracy of 99.3% and an F-measure of 0.993. PMID:28503356
Bortoluzzi, Marcelo Carlos; Martins, Luciana Dorochenko; Takahashi, André; Ribeiro, Bianca; Martins, Ligiane; Pinto, Marcia Helena Baldani
2018-01-01
The scope of this study was to develop and validate a questionnaire (QCirDental) to measure the impacts associated with dental extraction surgery. The QCirDental questionnaire was developed in two steps: (1) question and item generation and selection, and (2) pretest of the questionnaire with evaluation of its measurement properties (internal consistency and responsiveness). The sample was composed of 123 patients. None of the patients had any difficulty in understanding the QCirDental. The instrument was found to have excellent internal consistency, with a Cronbach's alpha reliability coefficient of 0.83. The principal component analysis (Kaiser-Meyer-Olkin Measure of Sampling Adequacy of 0.72 and Bartlett's Test of Sphericity with p < 0.001) showed six (6) dimensions explaining 67.5% of the variance. The QCirDental presented excellent internal consistency, being a questionnaire that is easy to read and understand with adequate semantic and content validity. More than 80% of the patients who underwent dental extraction reported some degree of discomfort within the perioperative period, which highlights the necessity to assess the quality of care and impacts of dental extraction surgery.
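For reference, the internal-consistency figure reported above follows the standard Cronbach's alpha formula; the sketch below applies it to an illustrative, made-up block of item scores rather than the QCirDental data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1 - item_variances / total_variance)

# Illustrative 5-respondent x 4-item block of Likert-type answers
scores = [[4, 5, 4, 5],
          [3, 3, 4, 3],
          [5, 5, 5, 4],
          [2, 3, 2, 3],
          [4, 4, 5, 4]]
print(round(cronbach_alpha(scores), 2))
```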
Gauge invariant spectral Cauchy characteristic extraction
NASA Astrophysics Data System (ADS)
Handmer, Casey J.; Szilágyi, Béla; Winicour, Jeffrey
2015-12-01
We present gauge invariant spectral Cauchy characteristic extraction. We compare gravitational waveforms extracted from a head-on black hole merger simulated in two different gauges by two different codes. We show rapid convergence, demonstrating both gauge invariance of the extraction algorithm and consistency between the legacy Pitt null code and the much faster spectral Einstein code (SpEC).
The effect of polarity of extractives on the durability of wood
Roderquita K. Moore; Jonathan Smaglick; Erick Arellano-ruiz; Michael Leitch; Doreen Mann
2015-01-01
Extractives are low molecular weight compounds and regarded as nonstructural wood constituents. These compounds are present in trees and can be extracted by organic solvents. Extractives consist of several classes of compounds that diversify the biological function of the tree. Fats are an energy source for the wood cells whereas terpenoids, resin acids, and phenolic...
SOLVENT EXTRACTION OF URANIUM VALUES
Feder, H.M.; Ader, M.; Ross, L.E.
1959-02-01
A process is presented for extracting uranium salts from aqueous acidic solutions by organic solvent extraction. It consists of contacting the uranium-bearing solution with a water-immiscible dialkylacetamide having at least 8 carbon atoms in the molecule. Dibutylacetamide is mentioned as a preferred extractant. The organic solvent is usually used with a diluent such as kerosene or CCl4.
Knowledge Extraction and Semantic Annotation of Text from the Encyclopedia of Life
Thessen, Anne E.; Parr, Cynthia Sims
2014-01-01
Numerous digitization and ontological initiatives have focused on translating biological knowledge from narrative text to machine-readable formats. In this paper, we describe two workflows for knowledge extraction and semantic annotation of text data objects featured in an online biodiversity aggregator, the Encyclopedia of Life. One workflow tags text with DBpedia URIs based on keywords. Another workflow finds taxon names in text using GNRD for the purpose of building a species association network. Both workflows work well: the annotation workflow has an F1 Score of 0.941 and the association algorithm has an F1 Score of 0.885. Existing text annotators such as Terminizer and DBpedia Spotlight performed well, but require some optimization to be useful in the ecology and evolution domain. Important future work includes scaling up and improving accuracy through the use of distributional semantics. PMID:24594988
ViDI: Virtual Diagnostics Interface. Volume 1; The Future of Wind Tunnel Testing
NASA Technical Reports Server (NTRS)
Fleming, Gary A. (Technical Monitor); Schwartz, Richard J.
2004-01-01
The quality of data acquired in a given test facility ultimately resides within the fidelity and implementation of the instrumentation systems. Over the last decade, the emergence of robust optical techniques has vastly expanded the envelope of measurement possibilities. At the same time the capabilities for data processing, data archiving and data visualization required to extract the highest level of knowledge from these global, on and off body measurement techniques have equally expanded. Yet today, while the instrumentation has matured to the production stage, an optimized solution for gaining knowledge from the gigabytes of data acquired per test (or even per test point) is lacking. A technological void has to be filled in order to possess a mechanism for near-real time knowledge extraction during wind tunnel experiments. Under these auspices, the Virtual Diagnostics Interface, or ViDI, was developed.
Iyappan, Anandhi; Younesi, Erfan; Redolfi, Alberto; Vrooman, Henri; Khanna, Shashank; Frisoni, Giovanni B; Hofmann-Apitius, Martin
2017-01-01
Ontologies and terminologies are used for interoperability of knowledge and data in a standard manner among interdisciplinary research groups. Existing imaging ontologies capture general aspects of the imaging domain as a whole such as methodological concepts or calibrations of imaging instruments. However, none of the existing ontologies covers the diagnostic features measured by imaging technologies in the context of neurodegenerative diseases. Therefore, the Neuro-Imaging Feature Terminology (NIFT) was developed to organize the knowledge domain of measured brain features in association with neurodegenerative diseases by imaging technologies. The purpose is to identify quantitative imaging biomarkers that can be extracted from multi-modal brain imaging data. This terminology attempts to cover measured features and parameters in brain scans relevant to disease progression. In this paper, we demonstrate the systematic retrieval of measured indices from literature and how the extracted knowledge can be further used for disease modeling that integrates neuroimaging features with molecular processes.
NASA's online machine aided indexing system
NASA Technical Reports Server (NTRS)
Silvester, June P.; Genuardi, Michael T.; Klingbiel, Paul H.
1993-01-01
This report describes the NASA Lexical Dictionary, a machine aided indexing system used online at the National Aeronautics and Space Administration's Center for Aerospace Information (CASI). This system is comprised of a text processor that is based on the computational, non-syntactic analysis of input text, and an extensive 'knowledge base' that serves to recognize and translate text-extracted concepts. The structure and function of the various NLD system components are described in detail. Methods used for the development of the knowledge base are discussed. Particular attention is given to a statistically-based text analysis program that provides the knowledge base developer with a list of concept-specific phrases extracted from large textual corpora. Production and quality benefits resulting from the integration of machine aided indexing at CASI are discussed along with a number of secondary applications of NLD-derived systems including on-line spell checking and machine aided lexicography.
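A toy sketch of the phrase-listing idea described above, assuming simple bigram counting over a small corpus; the real NLD text analysis program is statistically richer, so this is only an illustration of the general approach.

```python
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "in", "to", "for", "is", "on", "with", "was"}

def candidate_phrases(corpus, top_n=5):
    """Count two-word phrases containing no stopword, as crude candidates for review."""
    counts = Counter()
    for doc in corpus:
        tokens = [t.lower().strip(".,;:") for t in doc.split()]
        for w1, w2 in zip(tokens, tokens[1:]):
            if w1 not in STOPWORDS and w2 not in STOPWORDS:
                counts[f"{w1} {w2}"] += 1
    return counts.most_common(top_n)

docs = ["Solar array deployment was verified during thermal vacuum testing.",
        "The solar array deployment mechanism failed during vibration testing."]
print(candidate_phrases(docs, 3))
```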
Green extraction of grape skin phenolics by using deep eutectic solvents.
Cvjetko Bubalo, Marina; Ćurko, Natka; Tomašević, Marina; Kovačević Ganić, Karin; Radojčić Redovniković, Ivana
2016-06-01
Conventional extraction techniques for plant phenolics are usually associated with high organic solvent consumption and long extraction times. In order to establish an environmentally friendly extraction method for grape skin phenolics, deep eutectic solvents (DES) as a green alternative to conventional solvents coupled with highly efficient microwave-assisted and ultrasound-assisted extraction methods (MAE and UAE, respectively) have been considered. Initially, screening of five different DES for proposed extraction was performed and choline chloride-based DES containing oxalic acid as a hydrogen bond donor with 25% of water was selected as the most promising one, resulting in more effective extraction of grape skin phenolic compounds compared to conventional solvents. Additionally, in our study, UAE proved to be the best extraction method with extraction efficiency superior to both MAE and conventional extraction method. The knowledge acquired in this study will contribute to further DES implementation in extraction of biologically active compounds from various plant sources. Copyright © 2016 Elsevier Ltd. All rights reserved.
Tietjen, Ian; Gatonye, Teresia; Ngwenya, Barbara N; Namushe, Amos; Simonambanga, Sundana; Muzila, Mbaki; Mwimanzi, Philip; Xiao, Jianbo; Fedida, David; Brumme, Zabrina L; Brockman, Mark A; Andrae-Marobela, Kerstin
2016-09-15
Human Immunodeficiency Virus (HIV) strains resistant to licensed anti-retroviral drugs (ARVs) continue to emerge. On the African continent, uneven access to ARVs combined with occurrence of side-effects after prolonged ARV therapy have led to searches for traditional medicines as alternative or complementary remedies to conventional HIV/AIDS management. Here we characterize a specific three-step traditional HIV/AIDS treatment regimen consisting of Cassia sieberiana root, Vitex doniana root, and Croton megalobotrys bark by combining qualitative interviews of traditional medical knowledge users in Botswana with in vitro HIV replication studies. Crude extracts from a total of seven medicinal plants were tested for in vitro cytotoxicity and inhibition of wild-type (NL4.3) and ARV-resistant HIV-1 replication in an immortalized GFP-reporter CD4+ T-cell line. C. sieberiana root, V. doniana root, and C. megalobotrys bark extracts inhibited HIV-1NL4.3 replication with dose-dependence and without concomitant cytotoxicity. C. sieberiana and V. doniana extracts inhibited HIV-1 replication by 50% at 84.8µg/mL and at 25µg/mL, respectively, while C. megalobotrys extracts inhibited HIV-1 replication by a maximum of 45% at concentrations as low as 0.05µg/mL. Extracts did not interfere with antiviral activities of licensed ARVs when applied in combination and exhibited comparable efficacies against viruses harboring major resistance mutations to licensed protease, reverse-transcriptase, or integrase inhibitors. We report for the first time a three-step traditional HIV/AIDS regimen, used alone or in combination with standard ARV regimens, where each step exhibited more potent ability to inhibit HIV replication in vitro. Our observations support the "reverse pharmacology" model where documented clinical experiences are used to identify natural products of therapeutic value. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
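The dose-dependent inhibition figures above come from dose-response analysis; the sketch below shows one common way to estimate a half-maximal inhibition concentration with a Hill-type fit. The concentrations and inhibition values are made up for illustration and are not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ic50, n):
    """Increasing Hill curve: percent inhibition as a function of concentration."""
    return top * c**n / (ic50**n + c**n)

conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])       # extract concentration, ug/mL
inhibition = np.array([5.0, 12.0, 30.0, 55.0, 78.0, 90.0])  # percent inhibition of replication

params, _ = curve_fit(hill, conc, inhibition, p0=[100.0, 20.0, 1.0], maxfev=10000)
print(f"estimated half-maximal inhibition concentration: {params[1]:.1f} ug/mL")
```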
Pathak, Jyotishman; Bailey, Kent R; Beebe, Calvin E; Bethard, Steven; Carrell, David S; Chen, Pei J; Dligach, Dmitriy; Endle, Cory M; Hart, Lacey A; Haug, Peter J; Huff, Stanley M; Kaggal, Vinod C; Li, Dingcheng; Liu, Hongfang; Marchant, Kyle; Masanz, James; Miller, Timothy; Oniki, Thomas A; Palmer, Martha; Peterson, Kevin J; Rea, Susan; Savova, Guergana K; Stancl, Craig R; Sohn, Sunghwan; Solbrig, Harold R; Suesse, Dale B; Tao, Cui; Taylor, David P; Westberg, Les; Wu, Stephen; Zhuo, Ning; Chute, Christopher G
2013-01-01
Research objective To develop scalable informatics infrastructure for normalization of both structured and unstructured electronic health record (EHR) data into a unified, concept-based model for high-throughput phenotype extraction. Materials and methods Software tools and applications were developed to extract information from EHRs. Representative and convenience samples of both structured and unstructured data from two EHR systems—Mayo Clinic and Intermountain Healthcare—were used for development and validation. Extracted information was standardized and normalized to meaningful use (MU) conformant terminology and value set standards using Clinical Element Models (CEMs). These resources were used to demonstrate semi-automatic execution of MU clinical-quality measures modeled using the Quality Data Model (QDM) and an open-source rules engine. Results Using CEMs and open-source natural language processing and terminology services engines—namely, Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) and Common Terminology Services (CTS2)—we developed a data-normalization platform that ensures data security, end-to-end connectivity, and reliable data flow within and across institutions. We demonstrated the applicability of this platform by executing a QDM-based MU quality measure that determines the percentage of patients between 18 and 75 years with diabetes whose most recent low-density lipoprotein cholesterol test result during the measurement year was <100 mg/dL on a randomly selected cohort of 273 Mayo Clinic patients. The platform identified 21 and 18 patients for the denominator and numerator of the quality measure, respectively. Validation results indicate that all identified patients meet the QDM-based criteria. Conclusions End-to-end automated systems for extracting clinical information from diverse EHR systems require extensive use of standardized vocabularies and terminologies, as well as robust information models for storing, discovering, and processing that information. This study demonstrates the application of modular and open-source resources for enabling secondary use of EHR data through normalization into standards-based, comparable, and consistent format for high-throughput phenotyping to identify patient cohorts. PMID:24190931
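The quality measure described above can be pictured with a minimal sketch over toy patient records; the record layout, ages and LDL values here are illustrative assumptions, not the CEM/QDM representation the platform actually uses.

```python
from datetime import date

# Toy patient records; layout and values are invented for illustration.
patients = [
    {"id": 1, "age": 63, "diabetic": True,
     "ldl": [(date(2012, 3, 1), 130), (date(2012, 11, 2), 92)]},
    {"id": 2, "age": 54, "diabetic": True,
     "ldl": [(date(2012, 6, 10), 118)]},
    {"id": 3, "age": 80, "diabetic": True,
     "ldl": [(date(2012, 5, 5), 95)]},
]

# Denominator: diabetic patients aged 18-75; numerator: most recent LDL < 100 mg/dL.
denominator = [p for p in patients if p["diabetic"] and 18 <= p["age"] <= 75]
numerator = [p for p in denominator if p["ldl"] and max(p["ldl"])[1] < 100]
print(len(denominator), len(numerator))   # 2 1
```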
Gattinger, Heidrun; Senn, Beate; Hantikainen, Virpi; Köpke, Sascha; Ott, Stefan; Leino-Kilpi, Helena
2017-01-01
Impaired mobility is a prevalent condition among care-dependent persons living in nursing homes. Therefore, competence development of nursing staff in mobility care is important. This study aimed to develop and initially test the Kinaesthetics Competence Self-Evaluation (KCSE) scale for assessing nursing staff's competence in mobility care. The KCSE scale was developed based on an analysis of the concept of nurses' competence in kinaesthetics. Kinaesthetics is a training concept that provides theory and practice about movement foundations that comprise activities of daily living. The scale contains 28 items and four subscales (attitude, dynamic state, knowledge and skills). Content validity was assessed by determining the content validity index within two expert panels. Internal consistency and construct validity were tested within a cross-sectional study in three nursing homes in the German-speaking region of Switzerland between September and November 2015. The content validity index for the entire scale was good (0.93). Based on a sample of nursing staff ( n = 180) the internal consistency results were good for the whole scale (Cronbach's alpha = 0.91) and for the subscales knowledge and skills (α = 0.91, 0.86), acceptable for the subscale attitude (α = 0.63) and weak for the subscale dynamic state (α = 0.54). Most items showed acceptable inter-item and item-total correlations. Based on the exploratory factor analysis, four factors explaining 52% of the variance were extracted. The newly developed KCSE scale is a promising instrument for measuring nursing staff's attitude, dynamic state, knowledge, and skills in mobility care based on kinaesthetics. Despite the need for further psychometric evaluation, the KCSE scale can be used in clinical practice to evaluate competence in mobility care based on kinaesthetics and to identify educational needs for nursing staff.
NASA Astrophysics Data System (ADS)
Thies, Christian; Ostwald, Tamara; Fischer, Benedikt; Lehmann, Thomas M.
2005-04-01
The classification and measuring of objects in medical images are important in radiological diagnostics and education, especially when using large databases as knowledge resources, for instance a picture archiving and communication system (PACS). The main challenge is the modeling of medical knowledge and the diagnostic context to label the sought objects. This task is referred to as closing the semantic gap between low-level pixel information and high-level application knowledge. This work describes an approach which allows labeling of a-priori unknown objects in an intuitive way. Our approach consists of four main components. At first, an image is completely decomposed into all visually relevant partitions on different scales. This provides a hierarchically organized set of regions. Afterwards, for each of the obtained regions a set of descriptive features is computed. In this data structure objects are represented by regions with characteristic attributes. The actual object identification is the formulation of a query. It consists of attributes on which intervals are defined describing those regions that correspond to the sought objects. Since the objects are a-priori unknown, they are described by a medical expert by means of an intuitive graphical user interface (GUI). This GUI is the fourth component. It enables complex object definitions by browsing the data structure and examining the attributes to formulate the query. The query is executed, and if the sought objects have not been identified, its parameterization is refined. By using this heuristic approach, object models for hand radiographs have been developed to extract bones from a single hand in different anatomical contexts. This demonstrates the applicability of the labeling concept. By using a rule for metacarpal bones on a series of 105 images, this type of bone could be retrieved with a precision of 0.53 and a recall of 0.6.
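A minimal sketch of the query mechanism described above, assuming each region carries a flat attribute dictionary and an object model is a set of attribute intervals refined by the expert; the attribute names and thresholds are invented for illustration.

```python
# Toy region table from a hierarchical decomposition; values are illustrative.
regions = [
    {"id": 1, "area": 1200, "elongation": 0.80, "mean_grey": 180, "depth": 2},
    {"id": 2, "area": 350,  "elongation": 0.30, "mean_grey": 90,  "depth": 3},
    {"id": 3, "area": 420,  "elongation": 0.35, "mean_grey": 100, "depth": 3},
]

# An "object model" is a set of attribute intervals describing the sought object.
metacarpal_query = {"area": (300, 500),
                    "elongation": (0.25, 0.45),
                    "mean_grey": (80, 120)}

def matches(region, query):
    return all(lo <= region[attr] <= hi for attr, (lo, hi) in query.items())

print([r["id"] for r in regions if matches(r, metacarpal_query)])   # [2, 3]
```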
Use of metaknowledge in the verification of knowledge-based systems
NASA Technical Reports Server (NTRS)
Morell, Larry J.
1989-01-01
Knowledge-based systems are modeled as deductive systems. The model indicates that the two primary areas of concern in verification are demonstrating consistency and completeness. A system is inconsistent if it asserts something that is not true of the modeled domain. A system is incomplete if it lacks deductive capability. Two forms of consistency are discussed along with appropriate verification methods. Three forms of incompleteness are discussed. The use of metaknowledge, knowledge about knowledge, is explored in connection to each form of incompleteness.
Mahazar, N H; Zakuan, Z; Norhayati, H; MeorHussin, A S; Rukayadi, Y
2017-01-01
Inoculation of a starter culture in cocoa bean fermentation produces consistent, predictable and high-quality fermented cocoa beans. It is important to produce a healthy inoculum in cocoa bean fermentation for better fermented products, as the inoculum can minimize the length of the lag phase in fermentation. The purpose of this study was to optimize the components of the culture medium for the maximum cultivation of Candida sp. and Blastobotrys sp. Molasses and yeast extract were chosen as the medium components, and Response Surface Methodology (RSM) was then employed to optimize their levels. Maximum growth of Candida sp. (7.63 log CFU mL-1) and Blastobotrys sp. (8.30 log CFU mL-1) was obtained from the fermentation. The optimum culture medium for the growth of Candida sp. consisted of 10% (w/v) molasses and 2% (w/v) yeast extract, while that for Blastobotrys sp. consisted of 1.94% (w/v) molasses and 2% (w/v) yeast extract. This study shows that a culture medium consisting of molasses and yeast extract can produce maximum growth of Candida sp. and Blastobotrys sp. as starter cultures for cocoa bean fermentation.
Knowledge-Based Image Analysis.
1981-04-01
Report ETL-0258: Knowledge-Based Image Analysis. George C. Stockman; Barbara A. Lambird; David Lavine; Laveen N. Kanal. Keywords: extraction, verification, region classification, pattern recognition, image analysis.
EVALUATION OF GROUNDWATER EXTRACTION REMEDIES - VOLUME III
This volume is the third of a three-volume report documenting the results of an evaluation of ground-water extraction remedies at hazardous waste sites. It consists of a collection of 112 data base reports presenting general information on sites where ground-water extraction sys...
2013-01-01
Background We introduce a Knowledge-based Decision Support System (KDSS) in order to face the Protein Complex Extraction issue. Using a Knowledge Base (KB) coding the expertise about the proposed scenario, our KDSS is able to suggest both strategies and tools, according to the features of input dataset. Our system provides a navigable workflow for the current experiment and furthermore it offers support in the configuration and running of every processing component of that workflow. This last feature makes our system a crossover between classical DSS and Workflow Management Systems. Results We briefly present the KDSS' architecture and basic concepts used in the design of the knowledge base and the reasoning component. The system is then tested using a subset of Saccharomyces cerevisiae Protein-Protein interaction dataset. We used this subset because it has been well studied in literature by several research groups in the field of complex extraction: in this way we could easily compare the results obtained through our KDSS with theirs. Our system suggests both a preprocessing and a clustering strategy, and for each of them it proposes and eventually runs suited algorithms. Our system's final results are then composed of a workflow of tasks, that can be reused for other experiments, and the specific numerical results for that particular trial. Conclusions The proposed approach, using the KDSS' knowledge base, provides a novel workflow that gives the best results with regard to the other workflows produced by the system. This workflow and its numeric results have been compared with other approaches about PPI network analysis found in literature, offering similar results. PMID:23368995
Dutta, Shubha Ranjan; Singh, Purnima; Passi, Deepak; Patter, Pradeep
2015-09-01
To evaluate the efficacy of autologous platelet-rich plasma (PRP) in bone regeneration and to assess the clinical compatibility of the material in mandibular third molar extraction sockets. To compare the healing of mandibular third molar extraction wounds with and without PRP. Group A consisted of 30 patients in whom PRP was placed in the extraction socket before closure; Group B consisted of 30 control patients whose extraction sockets were closed without any intra-socket medicaments. Patients were allocated to the groups randomly. Soft tissue healing was better in the study site compared to the control site. The results of the study show rapid bone regeneration in the extraction sockets treated with PRP when compared with the sockets without PRP. Bone blending and trabecular bone formation started earlier in the PRP site than in the control, non-PRP site. There was also less postoperative discomfort on the PRP-treated side. Autologous PRP is biocompatible and significantly improved soft tissue healing, bone regeneration and bone density in extraction sockets.
CAMUR: Knowledge extraction from RNA-seq cancer data through equivalent classification rules.
Cestarelli, Valerio; Fiscon, Giulia; Felici, Giovanni; Bertolazzi, Paola; Weitschek, Emanuel
2016-03-01
Nowadays, knowledge extraction methods from Next Generation Sequencing data are highly requested. In this work, we focus on RNA-seq gene expression analysis and specifically on case-control studies with rule-based supervised classification algorithms that build a model able to discriminate cases from controls. State-of-the-art algorithms compute a single classification model that contains few features (genes). On the contrary, our goal is to elicit a higher amount of knowledge by computing many classification models, and therefore to identify most of the genes related to the predicted class. We propose CAMUR, a new method that extracts multiple and equivalent classification models. CAMUR iteratively computes a rule-based classification model, calculates the power set of the genes present in the rules, iteratively eliminates those combinations from the data set, and performs the classification procedure again until a stopping criterion is verified. CAMUR includes an ad-hoc knowledge repository (database) and a querying tool. We analyze three different types of RNA-seq data sets (Breast, Head and Neck, and Stomach Cancer) from The Cancer Genome Atlas (TCGA) and we also validate CAMUR and its models on non-TCGA data. Our experimental results show the efficacy of CAMUR: we obtain several reliable equivalent classification models, from which the most frequent genes, their relationships, and the relation with a particular cancer are deduced. Availability: dmb.iasi.cnr.it/camur.php. Contact: emanuel@iasi.cnr.it. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
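A simplified sketch of the iterate-and-eliminate loop described above, with two deliberate simplifications flagged as assumptions: a decision tree stands in for the rule-based learner, and only the features actually used by each model are removed rather than their full power set.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def iterative_models(X, y, min_score=0.8, max_models=5):
    """Fit a classifier, record the features (genes) it uses, drop them and refit,
    until cross-validated accuracy falls below a threshold."""
    X = np.asarray(X, dtype=float)
    remaining = list(range(X.shape[1]))
    models = []
    for _ in range(max_models):
        if not remaining:
            break
        clf = DecisionTreeClassifier(random_state=0)
        score = cross_val_score(clf, X[:, remaining], y, cv=3).mean()
        if score < min_score:
            break                                            # stopping criterion
        clf.fit(X[:, remaining], y)
        used = [remaining[i] for i in np.flatnonzero(clf.feature_importances_)]
        models.append((used, score))
        remaining = [f for f in remaining if f not in used]  # eliminate used genes
    return models

# Tiny synthetic "expression" matrix: features 0 and 2 both separate the classes.
X = [[5, 1, 9], [6, 2, 8], [1, 1, 2], [0, 2, 1], [5, 3, 9], [1, 3, 2]]
y = [1, 1, 0, 0, 1, 0]
for genes, acc in iterative_models(X, y):
    print("model uses gene indices", genes, "with CV accuracy", round(acc, 2))
```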
Halatchliyski, Iassen; Cress, Ulrike
2014-01-01
Using a longitudinal network analysis approach, we investigate the structural development of the knowledge base of Wikipedia in order to explain the appearance of new knowledge. The data consists of the articles in two adjacent knowledge domains: psychology and education. We analyze the development of networks of knowledge consisting of interlinked articles at seven snapshots from 2006 to 2012 with an interval of one year between them. Longitudinal data on the topological position of each article in the networks is used to model the appearance of new knowledge over time. Thus, the structural dimension of knowledge is related to its dynamics. Using multilevel modeling as well as eigenvector and betweenness measures, we explain the significance of pivotal articles that are either central within one of the knowledge domains or boundary-crossing between the two domains at a given point in time for the future development of new knowledge in the knowledge base. PMID:25365319
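A minimal sketch of the centrality computations such an analysis relies on, using networkx on a toy article-link graph; the article names and links are illustrative assumptions, not the Wikipedia data.

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Memory", "Learning"), ("Memory", "Cognition"), ("Learning", "Cognition"),
    ("Learning", "Instructional design"),                 # boundary-crossing link
    ("Instructional design", "Curriculum"),
    ("Instructional design", "Assessment"),
    ("Curriculum", "Assessment"),
])

eig = nx.eigenvector_centrality(G, max_iter=1000)   # centrality within the network
btw = nx.betweenness_centrality(G)                  # brokerage between the two domains
for article in sorted(G, key=btw.get, reverse=True):
    print(f"{article:22s} eigenvector={eig[article]:.2f}  betweenness={btw[article]:.2f}")
```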
The nature of declarative and nondeclarative knowledge for implicit and explicit learning.
Kirkhart, M W
2001-10-01
Using traditional implicit and explicit artificial-grammar learning tasks, the author investigated the similarities and differences between the acquisition of declarative knowledge under implicit and explicit learning conditions and the functions of the declarative knowledge during testing. Results suggested that declarative knowledge was not predictive of or required for implicit learning but was related to consistency in implicit learning performance. In contrast, declarative knowledge was predictive of and required for explicit learning and was related to consistency in performance. For explicit learning, the declarative knowledge functioned as a guide for other behavior. In contrast, for implicit learning, the declarative knowledge did not serve as a guide for behavior but was instead a post hoc description of the most commonly seen stimuli.
1988-01-19
approach for the analysis of aerial images. In this approach, image analysis is performed at three levels of abstraction, namely iconic or low-level image analysis, symbolic or medium-level image analysis, and semantic or high-level image analysis. Domain-dependent knowledge about prototypical urban
Extraction of Graph Information Based on Image Contents and the Use of Ontology
ERIC Educational Resources Information Center
Kanjanawattana, Sarunya; Kimura, Masaomi
2016-01-01
A graph is an effective form of data representation used to summarize complex information. Explicit information such as the relationship between the X- and Y-axes can be easily extracted from a graph by applying human intelligence. However, implicit knowledge such as information obtained from other related concepts in an ontology also resides in…
Support patient search on pathology reports with interactive online learning based data extraction.
Zheng, Shuai; Lu, James J; Appin, Christina; Brat, Daniel; Wang, Fusheng
2015-01-01
Structural reporting enables semantic understanding and prompt retrieval of clinical findings about patients. While synoptic pathology reporting provides templates for data entries, information in pathology reports remains primarily in narrative free text form. Extracting data of interest from narrative pathology reports could significantly improve the representation of the information and enable complex structured queries. However, manual extraction is tedious and error-prone, and automated tools are often constructed with a fixed training dataset and are not easily adaptable. Our goal is to extract data from pathology reports to support advanced patient search with a highly adaptable semi-automated data extraction system, which can adjust and self-improve by learning from a user's interaction with minimal human effort. We have developed an online machine learning based information extraction system called IDEAL-X. With its graphical user interface, the system's data extraction engine automatically annotates values for users to review upon loading each report text. The system analyzes users' corrections regarding these annotations with online machine learning, and incrementally enhances and refines the learning model as reports are processed. The system also takes advantage of customized controlled vocabularies, which can be adaptively refined during the online learning process to further assist the data extraction. As the accuracy of automatic annotation improves over time, the effort of human annotation is gradually reduced. After all reports are processed, a built-in query engine can be applied to conveniently define queries based on extracted structured data. We have evaluated the system with a dataset of anatomic pathology reports from 50 patients. Extracted data elements include demographic data, diagnosis, genetic marker, and procedure. The system achieves F-1 scores of around 95% for the majority of tests. Extracting data from pathology reports could enable more accurate knowledge to support biomedical research and clinical diagnosis. IDEAL-X provides a bridge that takes advantage of online machine learning based data extraction and the knowledge from human feedback. By combining iterative online learning and adaptive controlled vocabularies, IDEAL-X can deliver highly adaptive and accurate data extraction to support patient search.
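A minimal sketch of the review-and-learn loop described above, assuming a hashing vectorizer and an incrementally trained linear classifier as stand-ins; it is not IDEAL-X, and the report snippets and labels are invented.

```python
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
model = SGDClassifier(random_state=0)
classes = ["diagnosis", "procedure"]            # illustrative target fields

# Each report snippet arrives with the label the user confirms or corrects.
stream = [
    ("invasive ductal carcinoma identified in left breast", "diagnosis"),
    ("core needle biopsy performed under ultrasound guidance", "procedure"),
    ("high grade urothelial carcinoma involving the bladder neck", "diagnosis"),
    ("transurethral resection of bladder tumor completed", "procedure"),
]

fitted = False
for text, user_label in stream:
    X = vectorizer.transform([text])
    if fitted:
        print("suggested:", model.predict(X)[0], "| user confirms/corrects:", user_label)
        model.partial_fit(X, [user_label])                    # learn from the feedback
    else:
        model.partial_fit(X, [user_label], classes=classes)   # first call declares classes
        fitted = True
```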
The Importance of Being a Complement: CED Effects Revisited
ERIC Educational Resources Information Center
Jurka, Johannes
2010-01-01
This dissertation revisits subject island effects (Ross 1967, Chomsky 1973) cross-linguistically. Controlled acceptability judgment studies in German, English, Japanese and Serbian show that extraction out of specifiers is consistently degraded compared to extraction out of complements, indicating that the Condition on Extraction domains (CED,…
García-Salgado, Sara; Quijano, M Ángeles
2016-12-01
Ultrasonic probe sonication (UPS) and microwave-assisted extraction (MAE) were used for rapid single extraction of Cd, Cr, Cu, Ni, Pb, and Zn from soils polluted by former mining activities (Mónica Mine, Bustarviejo, NW Madrid, Spain), using 0.01 mol L-1 calcium chloride (CaCl2), 0.43 mol L-1 acetic acid (CH3COOH), and 0.05 mol L-1 ethylenediaminetetraacetic acid (EDTA) at pH 7 as extracting agents. The optimum extraction conditions by UPS consisted of an extraction time of 2 min for both CaCl2 and EDTA extractions and 15 min for CH3COOH extraction, at 30% ultrasound (US) amplitude, whereas in the case of MAE, they consisted of 5 min at 50 °C for both CaCl2 and EDTA extractions and 15 min at 120 °C for CH3COOH extraction. Extractable concentrations were determined by inductively coupled plasma atomic emission spectrometry (ICP-AES). The proposed methods were compared with a reduced version of the corresponding single extraction procedures proposed by the Standards, Measurements and Testing Programme (SM&T). The results obtained showed a great variability in extraction percentages, depending on the metal, the total concentration level and the soil sample, reaching high values in some areas. However, the correlation analysis showed that total concentration is the most relevant factor for element extractability in these soil samples. From the results obtained, the application of accelerated extraction procedures, such as MAE and UPS, could be considered a useful approach to rapidly evaluate the extractability of the metals studied.
From data to information and knowledge for geospatial applications
NASA Astrophysics Data System (ADS)
Schenk, T.; Csatho, B.; Yoon, T.
2006-12-01
An ever-increasing number of airborne and spaceborne data-acquisition missions with various sensors produce a glut of data. Sensory data rarely contains information in an explicit form such that an application can directly use it. The processing and analyzing of data constitutes a real bottleneck; therefore, automating the processes of gaining useful information and knowledge from the raw data is of paramount interest. This presentation is concerned with the transition from data to information and knowledge. With data we refer to the sensor output, and we note that data very rarely provide direct answers for applications. For example, a pixel in a digital image or a laser point from a LIDAR system (data) has no direct relationship with elevation changes of topographic surfaces or the velocity of a glacier (information, knowledge). We propose to employ the computer vision paradigm to extract information and knowledge as it pertains to a wide range of geoscience applications. After introducing the paradigm, we describe the major steps to be undertaken for extracting information and knowledge from sensory input data. Features play an important role in this process. Thus we focus on extracting features and their perceptual organization into higher-order constructs. We demonstrate these concepts with imaging data and laser point clouds. The second part of the presentation addresses the problem of combining data obtained by different sensors. An absolute prerequisite for successful fusion is to establish a common reference frame. We elaborate on the concept of sensor-invariant features that allow the registration of such disparate data sets as aerial/satellite imagery, 3D laser point clouds, and multi/hyperspectral imagery. Fusion takes place on the data level (sensor registration) and on the information level. We show how fusion increases the degree of automation for reconstructing topographic surfaces. Moreover, fused information gained from the three sensors results in a more abstract surface representation with a rich set of explicit surface information that can be readily used by an analyst for applications such as change detection.
ERIC Educational Resources Information Center
Rice, Stephen; Geels, Kasha; Trafimow, David; Hackett, Holly
2011-01-01
Test scores are used to assess one's general knowledge of a specific area. Although strategies to improve test performance have been previously identified, the consistency with which one uses these strategies has not been analyzed in such a way that allows assessment of how much consistency affects overall performance. Participants completed one…
Biomedical discovery acceleration, with applications to craniofacial development.
Leach, Sonia M; Tipney, Hannah; Feng, Weiguo; Baumgartner, William A; Kasliwal, Priyanka; Schuyler, Ronald P; Williams, Trevor; Spritz, Richard A; Hunter, Lawrence
2009-03-01
The profusion of high-throughput instruments and the explosion of new results in the scientific literature, particularly in molecular biomedicine, is both a blessing and a curse to the bench researcher. Even knowledgeable and experienced scientists can benefit from computational tools that help navigate this vast and rapidly evolving terrain. In this paper, we describe a novel computational approach to this challenge, a knowledge-based system that combines reading, reasoning, and reporting methods to facilitate analysis of experimental data. Reading methods extract information from external resources, either by parsing structured data or using biomedical language processing to extract information from unstructured data, and track knowledge provenance. Reasoning methods enrich the knowledge that results from reading by, for example, noting two genes that are annotated to the same ontology term or database entry. Reasoning is also used to combine all sources into a knowledge network that represents the integration of all sorts of relationships between a pair of genes, and to calculate a combined reliability score. Reporting methods combine the knowledge network with a congruent network constructed from experimental data and visualize the combined network in a tool that facilitates the knowledge-based analysis of that data. An implementation of this approach, called the Hanalyzer, is demonstrated on a large-scale gene expression array dataset relevant to craniofacial development. The use of the tool was critical in the creation of hypotheses regarding the roles of four genes never previously characterized as involved in craniofacial development; each of these hypotheses was validated by further experimental work.
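One common way to combine per-source edge reliabilities into a single score is a noisy-OR aggregation; whether this matches the Hanalyzer's exact formula is an assumption, so the sketch below is only illustrative, with made-up per-source values.

```python
from math import prod

def combined_reliability(source_scores):
    """Noisy-OR aggregation: the edge holds unless every source is wrong."""
    return 1 - prod(1 - s for s in source_scores)

# Illustrative per-source reliabilities for one gene pair.
edge_evidence = {"GO co-annotation": 0.7,
                 "curated interaction database": 0.9,
                 "text-mined co-occurrence": 0.4}
print(round(combined_reliability(edge_evidence.values()), 3))   # 0.982
```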
TARGET's role in knowledge acquisition, engineering, validation, and documentation
NASA Technical Reports Server (NTRS)
Levi, Keith R.
1994-01-01
We investigate the use of the TARGET task analysis tool for use in the development of rule-based expert systems. We found TARGET to be very helpful in the knowledge acquisition process. It enabled us to perform knowledge acquisition with one knowledge engineer rather than two. In addition, it improved communication between the domain expert and knowledge engineer. We also found it to be useful for both the rule development and refinement phases of the knowledge engineering process. Using the network in these phases required us to develop guidelines that enabled us to easily translate the network into production rules. A significant requirement for TARGET remaining useful throughout the knowledge engineering process was the need to carefully maintain consistency between the network and the rule representations. Maintaining consistency not only benefited the knowledge engineering process, but also has significant payoffs in the areas of validation of the expert system and documentation of the knowledge in the system.
Simulation for learning and teaching procedural skills: the state of the science.
Nestel, Debra; Groom, Jeffrey; Eikeland-Husebø, Sissel; O'Donnell, John M
2011-08-01
Simulation is increasingly used to support learning of procedural skills. Our panel was tasked with summarizing the "best evidence." We addressed the following question: To what extent does simulation support learning and teaching in procedural skills? We conducted a literature search from 2000 to 2010 using Medline, CINAHL, ERIC, and PSYCHINFO databases. Inclusion criteria were established and then data extracted from abstracts according to several categories. Although secondary sources of literature were sourced from key informants and participants at the "Research Consensus Summit: State of the Science," they were not included in the data extraction process but were used to inform discussion. Eighty-one of 1,575 abstracts met inclusion criteria. The uses of simulation for learning and teaching procedural skills were diverse. The most commonly reported simulator type was manikins (n = 17), followed by simulated patients (n = 14), anatomic simulators (eg, part-task) (n = 12), and others. For research design, most abstracts (n = 52) were at Level IV of the National Health and Medical Research Council classification (ie, case series, posttest, or pretest/posttest, with no control group, narrative reviews, and editorials). The most frequent Best Evidence Medical Education ranking was for conclusions probable (n = 37). Using the modified Kirkpatrick scale for impact of educational intervention, the most frequent classification was for modification of knowledge and/or skills (Level 2b) (n = 52). Abstracts assessed skills (n = 47), knowledge (n = 32), and attitude (n = 15) with the majority demonstrating improvements after simulation-based interventions. Studies focused on immediate gains and skills assessments were usually conducted in simulation. The current state of the science finds that simulation usually leads to improved knowledge and skills. Learners and instructors express high levels of satisfaction with the method. While most studies focus on short-term gains attained in the simulation setting, a small number support the transfer of simulation learning to clinical practice. Further study is needed to optimize the alignment of learner, instructor, simulator, setting, and simulation for learning and teaching procedural skills. Instructional design and educational theory, contextualization, transferability, accessibility, and scalability must all be considered in simulation-based education programs. More consistently, robust research designs are required to strengthen the evidence.
Using articulated scene models for dynamic 3d scene analysis in vista spaces
NASA Astrophysics Data System (ADS)
Beuter, Niklas; Swadzba, Agnes; Kummert, Franz; Wachsmuth, Sven
2010-09-01
In this paper we describe an efficient but detailed new approach to analyze complex dynamic scenes directly in 3D. The resulting information is important for mobile robots to solve tasks in the area of household robotics. In our work a mobile robot builds an articulated scene model by observing the environment in the visual field, or rather in the so-called vista space. The articulated scene model consists of essential knowledge about the static background, about autonomously moving entities like humans or robots and finally, in contrast to existing approaches, information about articulated parts. These parts describe movable objects like chairs, doors or other tangible entities, which could be moved by an agent. The combination of the static scene, the self-moving entities and the movable objects in one articulated scene model enhances the calculation of each single part. The reconstruction process for parts of the static scene benefits from removal of the dynamic parts and, in turn, the moving parts can be extracted more easily through the knowledge about the background. In our experiments we show that the system simultaneously delivers an accurate static background model, moving persons and movable objects. This information of the articulated scene model enables a mobile robot to detect and keep track of interaction partners, to navigate safely through the environment and finally, to strengthen the interaction with the user through the knowledge about the 3D articulated objects and 3D scene analysis.
Learning the Structure of Biomedical Relationships from Unstructured Text
Percha, Bethany; Altman, Russ B.
2015-01-01
The published biomedical research literature encompasses most of our understanding of how drugs interact with gene products to produce physiological responses (phenotypes). Unfortunately, this information is distributed throughout the unstructured text of over 23 million articles. The creation of structured resources that catalog the relationships between drugs and genes would accelerate the translation of basic molecular knowledge into discoveries of genomic biomarkers for drug response and prediction of unexpected drug-drug interactions. Extracting these relationships from natural language sentences on such a large scale, however, requires text mining algorithms that can recognize when different-looking statements are expressing similar ideas. Here we describe a novel algorithm, Ensemble Biclustering for Classification (EBC), that learns the structure of biomedical relationships automatically from text, overcoming differences in word choice and sentence structure. We validate EBC's performance against manually-curated sets of (1) pharmacogenomic relationships from PharmGKB and (2) drug-target relationships from DrugBank, and use it to discover new drug-gene relationships for both knowledge bases. We then apply EBC to map the complete universe of drug-gene relationships based on their descriptions in Medline, revealing unexpected structure that challenges current notions about how these relationships are expressed in text. For instance, we learn that newer experimental findings are described in consistently different ways than established knowledge, and that seemingly pure classes of relationships can exhibit interesting chimeric structure. The EBC algorithm is flexible and adaptable to a wide range of problems in biomedical text mining. PMID:26219079
Model-based vision system for automatic recognition of structures in dental radiographs
NASA Astrophysics Data System (ADS)
Acharya, Raj S.; Samarabandu, Jagath K.; Hausmann, E.; Allen, K. A.
1991-07-01
X-ray diagnosis of destructive periodontal disease requires assessing serial radiographs by an expert to determine the change in the distance between cemento-enamel junction (CEJ) and the bone crest. To achieve this without the subjectivity of a human expert, a knowledge based system is proposed to automatically locate the two landmarks which are the CEJ and the level of alveolar crest at its junction with the periodontal ligament space. This work is a part of an ongoing project to automatically measure the distance between CEJ and the bone crest along a line parallel to the axis of the tooth. The approach presented in this paper is based on identifying a prominent feature such as the tooth boundary using local edge detection and edge thresholding to establish a reference and then using model knowledge to process sub-regions in locating the landmarks. Segmentation techniques invoked around these regions consists of a neural-network like hierarchical refinement scheme together with local gradient extraction, multilevel thresholding and ridge tracking. Recognition accuracy is further improved by first locating the easily identifiable parts of the bone surface and the interface between the enamel and the dentine and then extending these boundaries towards the periodontal ligament space and the tooth boundary respectively. The system is realized as a collection of tools (or knowledge sources) for pre-processing, segmentation, primary and secondary feature detection and a control structure based on the blackboard model to coordinate the activities of these tools.
Vasli, Parvaneh; Dehghan-Nayeri, Nahid; Khosravi, Laleh
2018-01-01
Despite the emphasis placed on the implementation of continuing professional education programs in Iran, researchers and practitioners have not developed an instrument for assessing the factors that affect the knowledge transfer from such programs to clinical practice. The aim of this study was to design and validate such an instrument for the Iranian context. The research used a three-stage mixed-methods design. In the first stage, in-depth interviews with nurses and content analysis were conducted, after which themes were extracted from the data. In the second stage, the findings of the content analysis and literature review were examined, and preliminary instrument options were developed. In the third stage, qualitative content validity, face validity, content validity ratio, content validity index, and construct validity using exploratory factor analysis were assessed. The reliability of the instrument was measured before and after the determination of construct validity. The preliminary instrument comprised 53 items, and its content validity index was 0.86. In the multi-stage factor analysis, eight questions were excluded, thereby reducing the 11 factors to five and finally to four. The final instrument, with 43 items, consists of the following dimensions: structure and organizational climate, personal characteristics, nature and status of professionals, and nature of educational programs. Managers can use the Iranian instrument to identify factors affecting knowledge transfer of continuing professional education to clinical practice. Copyright © 2017. Published by Elsevier Ltd.
Al Mohtar, Abeer; Kazan, Michel; Taliercio, Thierry; Cerutti, Laurent; Blaize, Sylvain; Bruyant, Aurélien
2017-03-24
We have investigated the effective dielectric response of a subwavelength grating made of highly doped semiconductors (HDS) excited in reflection, using numerical simulations and spectroscopic measurement. The studied system can exhibit strong localized surface resonances and has, therefore, a great potential for surface-enhanced infrared absorption (SEIRA) spectroscopy application. It consists of a highly doped InAsSb grating deposited on lattice-matched GaSb. The numerical analysis demonstrated that the resonance frequencies can be inferred from the dielectric function of an equivalent homogeneous slab by accounting for the complex reflectivity of the composite layer. Fourier transform infrared reflectivity (FTIR) measurements, analyzed with the Kramers-Kronig conversion technique, were used to deduce the effective response in reflection of the investigated system. From the knowledge of this phenomenological dielectric function, transversal and longitudinal energy-loss functions were extracted and attributed to transverse and longitudinal resonance modes frequencies.
Bertoluzzi, Luca; Bisquert, Juan
2017-01-05
The optimization of solar energy conversion devices relies on their accurate and nondestructive characterization. The small voltage perturbation techniques of impedance spectroscopy (IS) have proven to be very powerful to identify the main charge storage modes and charge transfer processes that control device operation. Here we establish the general connection between IS and light modulated techniques such as intensity modulated photocurrent (IMPS) and photovoltage spectroscopies (IMVS) for a general system that converts light to energy. We subsequently show how these techniques are related to the steady-state photocurrent and photovoltage and the external quantum efficiency. Finally, we express the IMPS and IMVS transfer functions in terms of the capacitive and resistive features of a general equivalent circuit of IS for the case of a photoanode used for solar fuel production. We critically discuss how much knowledge can be extracted from the combined use of those three techniques.
Discovering Fine-grained Sentiment in Suicide Notes
Wang, Wenbo; Chen, Lu; Tan, Ming; Wang, Shaojun; Sheth, Amit P.
2012-01-01
This paper presents our solution for the i2b2 sentiment classification challenge. Our hybrid system consists of machine learning and rule-based classifiers. For the machine learning classifier, we investigate a variety of lexical, syntactic and knowledge-based features, and show how much these features contribute to the performance of the classifier through experiments. For the rule-based classifier, we propose an algorithm to automatically extract effective syntactic and lexical patterns from training examples. The experimental results show that the rule-based classifier outperforms the baseline machine learning classifier using unigram features. By combining the machine learning classifier and the rule-based classifier, the hybrid system gains a better trade-off between precision and recall, and yields the highest micro-averaged F-measure (0.5038), which is better than the mean (0.4875) and median (0.5027) micro-average F-measures among all participating teams. PMID:22879770
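For reference, the micro-averaged F-measure quoted above pools true positives, false positives and false negatives across all labels before computing precision and recall; the sketch below shows that calculation on invented annotations, not the i2b2 data.

```python
def micro_f1(gold, predicted):
    """Micro-averaged F-measure: pool TP/FP/FN over all labels, then compute P, R, F."""
    tp = fp = fn = 0
    for g, p in zip(gold, predicted):
        tp += len(set(g) & set(p))
        fp += len(set(p) - set(g))
        fn += len(set(g) - set(p))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Invented gold vs. predicted emotion labels for three sentences.
gold = [["guilt"], ["hopelessness", "love"], ["instructions"]]
pred = [["guilt"], ["hopelessness"], ["information"]]
print(round(micro_f1(gold, pred), 3))   # 0.571
```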
Pattern Recognition Of Blood Vessel Networks In Ocular Fundus Images
NASA Astrophysics Data System (ADS)
Akita, K.; Kuga, H.
1982-11-01
We propose a computer method of recognizing blood vessel networks in color ocular fundus images, which are used in the mass diagnosis of adult diseases such as hypertension and diabetes. A line detection algorithm is applied to extract the blood vessels, and their skeleton patterns are constructed to analyze and describe their structures. The recognition of line segments of arteries and/or veins in the vessel networks consists of three stages. First, a few segments that satisfy a certain constraint are picked up and discriminated as arteries or veins. This is the initial labeling. Then the remaining unknown segments are labeled by utilizing physical-level knowledge. We propose two schemes for this stage: a deterministic labeling and a probabilistic relaxation labeling. Finally, the label of each line segment is checked so as to minimize the total number of labeling contradictions. Some experimental results are also presented.
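The 1982 operators themselves are not reproduced here; the sketch below is a hedged, present-day illustration of the line-detection-and-skeleton step using scikit-image's ridge filter and skeletonization, with a synthetic image standing in for a fundus photograph.

```python
import numpy as np
from skimage import filters, morphology

def vessel_skeleton(fundus_rgb):
    """Illustrative vessel extraction: enhance line-like structures in the green
    channel, threshold, clean up, and thin the vessel map to 1-pixel skeletons."""
    green = fundus_rgb[..., 1]                       # vessels contrast best in green
    enhanced = filters.frangi(green)                 # ridge (line) enhancement
    vessels = enhanced > filters.threshold_otsu(enhanced)
    vessels = morphology.remove_small_objects(vessels, min_size=20)
    return morphology.skeletonize(vessels)

# Synthetic stand-in for a fundus image: a dark diagonal "vessel" on a bright field.
img = np.ones((128, 128, 3))
rr = (np.arange(128) * 0.7).astype(int)
img[rr, np.arange(128), :] = 0.2
skeleton = vessel_skeleton(img)
print("skeleton pixels:", int(skeleton.sum()))
```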
Schütz, Katrin; Persike, Markus; Carle, Reinhold; Schieber, Andreas
2006-04-01
The anthocyanin pattern of artichoke heads (Cynara scolymus L.) has been investigated by high-performance liquid chromatography-electrospray ionization mass spectrometry. For this purpose a suitable extraction and liquid chromatographic method was developed. Besides the main anthocyanins (cyanidin 3,5-diglucoside, cyanidin 3-glucoside, cyanidin 3,5-malonyldiglucoside, cyanidin 3-(3''-malonyl)glucoside, and cyanidin 3-(6''-malonyl)glucoside), several minor compounds were identified. Among these, two peonidin derivatives and one delphinidin derivative were characterized on the basis of their fragmentation patterns. To the best of our knowledge this is the first report on anthocyanins in artichoke heads consisting of aglycones other than those of cyanidin. Quantification of individual compounds was performed by external calibration. Cyanidin 3-(6''-malonyl)glucoside was found to be the major anthocyanin in all the samples analyzed. Total anthocyanin content ranged from 8.4 to 1,705.4 mg kg(-1) dry mass.
Pharmacy Students’ Attitudes Toward Debt
Yusuf, Akeem A.; Hadsall, Ronald S.
2015-01-01
Objective. To examine pharmacy students’ attitudes toward debt. Methods. Two hundred thirteen pharmacy students at the University of Minnesota were surveyed using items designed to assess attitudes toward debt. Factor analysis was performed to identify common themes. Subgroup analysis was performed to examine whether students’ debt-tolerant attitudes varied according to their demographic characteristics, past loan experience, monthly income, and workload. Results. Principal component extraction with varimax rotation identified 3 factor themes accounting for 49.0% of the total variance: tolerant attitudes toward debt (23.5%); contemplation and knowledge about loans (14.3%); and fear of debt (11.2%). Tolerant attitudes toward debt were higher if students were white or if they had had past loan experience. Conclusion. These 3 themes in students’ attitudes toward debt were consistent with those identified in previous research. Pharmacy schools should consider providing a structured financial education to improve student management of debt. PMID:26089561
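As an illustration of the extraction-and-rotation step, the sketch below fits a varimax-rotated factor model to a made-up Likert response matrix; it uses scikit-learn's FactorAnalysis (which supports varimax rotation in versions ≥ 0.24) as a stand-in for the principal component extraction reported in the study.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical matrix: 200 respondents x 14 debt-attitude items (1-5 Likert scores).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 14)).astype(float)

fa = FactorAnalysis(n_components=3, rotation="varimax")  # three factor themes
fa.fit(responses)

loadings = fa.components_.T          # items x factors loading matrix
for i, row in enumerate(loadings):
    print(f"item {i:2d} loads most on factor {np.argmax(np.abs(row))}")
```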
46 CFR 161.002-15 - Sample extraction smoke detection systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
§ 161.002-15 Sample extraction smoke detection systems (46 CFR, Electrical Equipment: Fire-Protective Systems). The smoke detecting system must consist of a means for...
Anawar, Hossain Md
2015-08-01
The oxidative dissolution of sulfidic minerals releases extremely acidic leachate, sulfate, and potentially toxic elements (e.g., As, Ag, Cd, Cr, Cu, Hg, Ni, Pb, Sb, Th, U, Zn) from different mine tailings and waste dumps. For the sustainable rehabilitation and disposal of mining waste, the sources and mechanisms of contaminant generation, and the fate and transport of contaminants, should be clearly understood. Therefore, this study has provided a critical review of (1) recent insights into the mechanisms of oxidation of sulfidic minerals, (2) environmental contamination by mining waste, and (3) remediation and rehabilitation techniques, and then (4) developed the GEMTEC conceptual model/guide [(bio)geochemistry - mine type - mineralogy - geological texture - ore extraction process - climatic knowledge] to provide a new scientific approach and knowledge for remediation of mining wastes and acid mine drainage. This study has suggested pre-mining geological, geochemical, mineralogical and microtextural characterization of different mineral deposits, and post-mining studies of ore extraction processes, physical, geochemical, mineralogical and microbial reactions, natural attenuation and the effect of climate change for sustainable rehabilitation of mining waste. All components of this model should be considered for effective and integrated management of mining waste and acid mine drainage. Copyright © 2015 Elsevier Ltd. All rights reserved.
Wen, Demin; Androjna, Caroline; Vasanji, Amit; Belovich, Joanne; Midura, Ronald J.
2010-01-01
In vivo the hydraulic permeability of cortical bone influences the transport of nutrients, waste products and signaling molecules, thus influencing the metabolic functions of osteocytes and osteoblasts. In the current study two hypotheses were tested: the presence of (1) lipids and (2) collagen matrix in the porous compartment of cortical bone restricts its permeability. Our approach was to measure the radial permeability of adult canine cortical bone before and after extracting lipids with acetone-methanol, and before and after digesting collagen with bacterial collagenase. Our results showed that the permeability of adult canine cortical bone was below 4.0 × 10⁻¹⁷ m², a value consistent with prior knowledge. After extracting lipids, permeability increased to a median value of 8.6 × 10⁻¹⁶ m². After further digesting with collagenase, permeability increased to a median value of 1.4 × 10⁻¹⁴ m². We conclude that the presence of both lipids and collagen matrix within the porous compartment of cortical bone restricts its radial permeability. These novel findings suggest that the chemical composition of the tissue matrix within the porous compartment of cortical bone influences the transport and exchange of nutrients and waste products, and possibly influences the metabolic functions of osteocytes and osteoblasts. PMID:19967451
Gaitanis, George; Magiatis, Prokopios; Stathopoulou, Konstantina; Bassukas, Ioannis D; Alexopoulos, Evangelos C; Velegraki, Aristea; Skaltsounis, Alexios-Leandros
2008-07-01
Malassezia yeasts are connected with seborrheic dermatitis (SD) whereas M. furfur pathogenicity is associated with the production of bioactive indoles. In this study, the production of indoles by M. furfur isolates from healthy and diseased skin was compared, the respective HPLC patterns were analyzed, and substances that are preferentially synthesized by strains isolated from SD lesions were isolated and characterized. Malassezin, pityriacitrin, indole-3-carbaldehyde, and indolo[3,2-b]carbazole (ICZ) were isolated by HPLC from extracts of M. furfur grown in L-tryptophan agar, and identified by nuclear magnetic resonance and mass spectroscopy. Of these, ICZ, a potent ligand of the aryl hydrocarbon receptor (AhR), is described for the first time to our knowledge as a M. furfur metabolite. HPLC-photodiode array detection analysis of strain extracts from 7 healthy subjects and 10 SD patients showed that M. furfur isolates from only SD patients consistently produce malassezin and ICZ. This discriminatory production of AhR agonists provides initial evidence for a previously unreported mechanism triggering development of SD and indicates that the variable pathogenicity patterns recorded for M. furfur-associated SD conditions may be attributed to selective production (P<0.001) of measurable bioactive indoles.
Totton, Sarah C; Glanville, Julie M; Dzikamunhenga, Rungano S; Dickson, James S; O'Connor, Annette M
2016-06-01
In this systematic review, we summarized change in Salmonella prevalence and/or quantity associated with pathogen reduction treatments (washes, sprays, steam) on pork carcasses or skin-on carcass parts in comparative designs (natural or artificial contamination). In January 2015, CAB Abstracts (1910-2015), SCI and CPCI-Science (1900-2015), Medline® and Medline® In-Process (1946-2015) (OVIDSP), Science.gov, and Safe Pork (1996-2012) were searched with no language or publication type restrictions. Reference lists of 24 review articles were checked. Two independent reviewers screened 4001 titles/abstracts and assessed 122 full-text articles for eligibility. Only English-language records were extracted. Fourteen studies (5 in commercial abattoirs) were extracted and risk of bias was assessed by two reviewers independently. Risk of bias due to systematic error was moderate; a major source of bias was the potential differential recovery of Salmonella from treated carcasses due to knowledge of the intervention. The most consistently observed association was a positive effect of acid washes on categorical measures of Salmonella; however, this was based on individual results, not a summary effect measure. There was no strong evidence that any one intervention protocol (acid temperature, acid concentration, water temperature) was clearly superior to others for Salmonella control.
Interactive Cadastral Boundary Delineation from Uav Data
NASA Astrophysics Data System (ADS)
Crommelinck, S.; Höfle, B.; Koeva, M. N.; Yang, M. Y.; Vosselman, G.
2018-05-01
Unmanned aerial vehicles (UAVs) are evolving as an alternative tool to acquire land tenure data. UAVs can capture geospatial data at high quality and resolution in a cost-effective, transparent and flexible manner, from which visible land parcel boundaries, i.e., cadastral boundaries, are delineable. This delineation is currently not automated at all, even though a large portion of cadastral boundaries is marked by physical objects that can be retrieved automatically through image analysis methods. This study proposes (i) a methodology that automatically extracts and processes candidate cadastral boundary features from UAV data, and (ii) a procedure for a subsequent interactive delineation. Part (i) consists of two state-of-the-art computer vision methods, namely gPb contour detection and SLIC superpixels, as well as a classification part assigning costs to each outline according to local boundary knowledge. Part (ii) allows a user-guided delineation by calculating least-cost paths along previously extracted and weighted lines. The approach is tested on visible road outlines in two UAV datasets from Germany. Results show that all roads can be delineated comprehensively. Compared to manual delineation, the number of clicks per 100 m is reduced by up to 86%, while obtaining a similar localization quality. The approach shows promising results for reducing the effort of manual delineation that is currently employed for indirect (cadastral) surveying.
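Part (ii) is essentially a least-cost path search over a cost image in which pixels on candidate boundary outlines are cheap to traverse. The sketch below illustrates that idea on a toy cost surface; the real workflow would derive the costs from the gPb/SLIC outlines and their learned weights.

```python
import numpy as np
from skimage.graph import route_through_array

# Hypothetical cost surface: low cost along a candidate boundary, high cost elsewhere.
cost = np.ones((100, 100)) * 10.0
cost[50, :] = 0.1                      # a cheap horizontal "boundary" to follow

start, end = (50, 5), (50, 90)         # two user clicks, given as (row, col)
path, total_cost = route_through_array(cost, start, end,
                                       fully_connected=True, geometric=True)
print(len(path), "pixels on the delineated boundary, cost =", round(total_cost, 2))
```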
Bourke, Alan K; Klenk, Jochen; Schwickert, Lars; Aminian, Kamiar; Ihlen, Espen A F; Mellone, Sabato; Helbostad, Jorunn L; Chiari, Lorenzo; Becker, Clemens
2016-08-01
Automatic fall detection will promote independent living and reduce the consequences of falls in the elderly by ensuring people can confidently live safely at home for longer. In laboratory studies, inertial sensor technology has been shown to be capable of distinguishing falls from normal activities. However, less than 7% of fall-detection algorithm studies have used fall data recorded from elderly people in real life. The FARSEEING project has compiled a database of real-life falls from elderly people, to gain new knowledge about fall events and to develop fall detection algorithms to combat the problems associated with falls. We have extracted 12 different kinematic, temporal and kinetic related features from a data-set of 89 real-world falls and 368 activities of daily living. Using the extracted features we applied machine learning techniques and produced a selection of algorithms based on different feature combinations. The best algorithm employs 10 different features and produced a sensitivity of 0.88 and a specificity of 0.87 in classifying falls correctly. This algorithm can be used to distinguish real-world falls from normal activities of daily living using a sensor consisting of a tri-axial accelerometer and tri-axial gyroscope located at L5.
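The FARSEEING features and classifier are not specified in full here, so the sketch below is only a generic illustration of the same pattern: compute simple kinematic features from tri-axial accelerometer windows and train a classifier on fall versus daily-activity labels (the data, feature set, and random-forest choice are all placeholders).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(acc_window):
    """A few illustrative kinematic features from a tri-axial accelerometer
    window (n_samples x 3, in g); not the 12 FARSEEING features."""
    mag = np.linalg.norm(acc_window, axis=1)
    return [mag.max(), mag.min(), mag.max() - mag.min(),
            mag.mean(), mag.std(), np.abs(np.diff(mag)).max()]

# Hypothetical training data: windows labelled 1 = fall, 0 = daily activity.
rng = np.random.default_rng(1)
windows = [rng.normal(1.0, s, size=(300, 3)) for s in rng.uniform(0.05, 1.0, 200)]
labels = [int(w.std() > 0.5) for w in windows]       # toy labelling rule

X = np.array([features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```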
Rider, Cynthia V.; Nyska, Abraham; Cora, Michelle C.; Kissling, Grace E.; Smith, Cynthia; Travlos, Gregory S.; Hejtmancik, Milton R.; Fomby, Laurene M.; Colleton, Curtis A.; Ryan, Michael J.; Kooistra, Linda; Morrison, James P.; Chan, Po C.
2014-01-01
Ginkgo biloba extract (GBE) is a popular herbal supplement that is used to improve circulation and brain function. In spite of widespread human exposure to relatively high doses over potentially long periods of time, there is a paucity of data from animal studies regarding the toxicity and carcinogenicity associated with GBE. In order to fill this knowledge gap, three-month and two-year toxicity and carcinogenicity studies with GBE administered by oral gavage to B6C3F1/N mice and F344/N rats were performed as part of the National Toxicology Program’s Dietary Supplements and Herbal Medicines Initiative. The targets of GBE treatment were the liver, thyroid, and nose. These targets were consistent across exposure period, sex, and species, albeit with varying degrees of effect observed among studies. Key findings included a notably high incidence of hepatoblastomas in male and female mice and evidence of carcinogenic potential in the thyroid gland of both mice and rats. Various nonneoplastic lesions were observed beyond control levels in the liver, thyroid gland, and nose of rats and mice administered GBE. Although these results cannot be directly extrapolated to humans, the findings fill an important data gap in assessing risk associated with GBE use. PMID:23960164
Differentiation of Glioblastoma and Lymphoma Using Feature Extraction and Support Vector Machine.
Yang, Zhangjing; Feng, Piaopiao; Wen, Tian; Wan, Minghua; Hong, Xunning
2017-01-01
Differentiation of glioblastoma multiformes (GBMs) and lymphomas using multi-sequence magnetic resonance imaging (MRI) is an important task that is valuable for treatment planning. However, this task is a challenge because GBMs and lymphomas may have a similar appearance in MRI images. This similarity may lead to misclassification and could affect the treatment results. In this paper, we propose a semi-automatic method based on multi-sequence MRI to differentiate these two types of brain tumors. Our method consists of three steps: 1) the key slice is selected from 3D MRIs and regions of interest (ROIs) are drawn around the tumor region; 2) different features are extracted based on prior clinical knowledge and validated using a t-test; and 3) features that are helpful for classification are used to build an original feature vector and a support vector machine is applied to perform classification. In total, 58 GBM cases and 37 lymphoma cases are used to validate our method. A leave-one-out cross-validation strategy is adopted in our experiments. The global accuracy of our method was determined as 96.84%, which indicates that our method is effective for the differentiation of GBM and lymphoma and can be applied in clinical diagnosis. Copyright © Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
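A minimal sketch of steps 2 and 3 on synthetic data is shown below: two-sample t-tests select discriminative features, and a linear support vector machine is evaluated with leave-one-out cross-validation. The feature matrix is invented; only the class sizes (58 GBM, 37 lymphoma) mirror the abstract.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical feature matrix: 95 cases x 20 candidate MRI features; 0 = GBM, 1 = lymphoma.
rng = np.random.default_rng(0)
X = rng.normal(size=(95, 20))
y = np.array([0] * 58 + [1] * 37)
X[y == 1, :5] += 1.0                       # make the first five features informative

# Step 2: keep features whose group means differ significantly (two-sample t-test).
_, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
X_sel = X[:, p < 0.05]

# Step 3: SVM with leave-one-out cross-validation.
acc = cross_val_score(SVC(kernel="linear"), X_sel, y, cv=LeaveOneOut()).mean()
print(f"selected {X_sel.shape[1]} features, LOOCV accuracy = {acc:.3f}")
```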
Herrero, Miguel; Ibáñez, Elena; Cifuentes, Alejandro; Señoráns, Javier
2004-08-27
In this work, different extracts from the microalga Spirulina platensis are obtained using pressurized liquid extraction (PLE) and four different solvents (hexane, light petroleum, ethanol and water). Different extraction temperatures (115 and 170 degrees C) were tested using extraction times ranging from 9 to 15 min. The antioxidant activity of the different extracts is determined by means of an in vitro assay using a free radical method. Moreover, a new and fast method is developed using micellar electrokinetic chromatography with diode array detection (MEKC-DAD) to provide a preliminary analysis on the composition of the extracts. This combined application (i.e., in vitro assays plus MEKC-DAD) allowed the fast characterization of the extracts based on their antioxidant activity and the UV-vis spectra of the different compounds found in the extracts. To our knowledge, this work shows for the first time the great possibilities of the combined use of PLE-in vitro assay-MEKC-DAD to investigate natural sources of antioxidants.
Tan, Si Ying; Melendez-Torres, G J
2016-01-01
Female sex work accounts for about 15% of the global HIV burden in women. Asia is the region with the second highest attributable fraction of the HIV epidemic after sub-Saharan Africa. This review synthesises studies that depict the barriers and facilitators encountered by sex workers in Asia when negotiating consistent condom use. A total of 18 studies published between January 1989 and May 2015 were included in the review. Data were extracted, critically appraised and analysed using a thematic analysis approach. Individual-level factors related to sex workers' knowledge, perception and power, as well as interpersonal-level factors that encompassed dynamics with clients and peer-related factors, presented as both barriers and facilitators to sex workers' condom negotiation process. In addition, the structural environment of sex work, access to resources, poverty, stigma, the legal environment and the role of media were also identified as factors in influencing the condom negotiation process of sex workers. A multisectoral interventional approach that addresses the multilevel barriers encountered by sex workers in condom negotiation is needed. Awareness of safe-sex practice should be collectively enhanced among sex workers, clients and brothel managers.
WH-Questions and Extraction from Temporal Adjuncts: A Case for Movement.
ERIC Educational Resources Information Center
Goodluck, Helen; And Others
A study investigated young children's knowledge of the constraint that prevents questioning from a position inside a temporal adjunct: i.e., knowledge of the ungrammaticality of a question such as "Who did Fred kiss Sue before hugging...?" Subjects were 30 children aged 3 to 5 years, who listened to stories accompanied by pictures and…
Acharya, Dev Raj; Thomas, Malcolm; Cann, Rosemary
2016-01-01
Background: School-based sex education has the potential to prevent unwanted pregnancy and to promote positive sexual health at the individual, family and community level. Objectives: To develop and validate a sexual health questionnaire to measure young people's sexual health knowledge and understanding (SHQ) in Nepalese secondary schools. Materials and Methods: Secondary school students (n = 259, male = 43.63%, female = 56.37%) and local experts (n = 9, male = 90%, female = 10%) participated in this study. Evaluation processes were: content validity (>0.89), plausibility check (>95), item-total correlation (>0.3), factor loading (>0.4), principal component analysis (four factors, Kaiser's criterion), Cronbach's alpha (>0.65), face validity and internal consistency using test-retest reliability (P > 0.05). Results: The principal component analysis revealed four factors to be extracted: sexual health norms and beliefs, source of sexual health information, sexual health knowledge and understanding, and level of sexual awareness. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy demonstrated that the patterns of correlations are relatively compact (>0.80). Cronbach's alpha for each factor was above the cut-off point (0.65). Face validity indicated that the questions were clear to the majority of the respondents. Moreover, there were no significant differences (P > 0.05) in the responses to the items at two time points seven weeks apart. Conclusions: The findings suggest that the SHQ is a valid and reliable instrument to be used in schools to measure sexual health knowledge and understanding. Further analysis such as structural equation modelling (SEM) and confirmatory factor analysis could make the questionnaire more robust and applicable to the wider school population. PMID:27500171
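For completeness, Cronbach's alpha is straightforward to compute from an item-response matrix; the sketch below uses simulated responses (not the SHQ data) and the usual formula alpha = k/(k−1) · (1 − Σ item variances / variance of the total score).

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Hypothetical responses: 259 students x 6 items of one factor (1-5 scale),
# built to be correlated, so alpha will typically exceed the 0.65 cut-off.
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(259, 1))
responses = np.clip(base + rng.integers(-1, 2, size=(259, 6)), 1, 5)
print(round(cronbach_alpha(responses), 3))
```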
A knowledgebase system to enhance scientific discovery: Telemakus
Fuller, Sherrilynne S; Revere, Debra; Bugni, Paul F; Martin, George M
2004-01-01
Background With the rapid expansion of scientific research, the ability to effectively find or integrate new domain knowledge in the sciences is proving increasingly difficult. Efforts to improve and speed up scientific discovery are being explored on a number of fronts. However, much of this work is based on traditional search and retrieval approaches and the bibliographic citation presentation format remains unchanged. Methods Case study. Results The Telemakus KnowledgeBase System provides flexible new tools for creating knowledgebases to facilitate retrieval and review of scientific research reports. In formalizing the representation of the research methods and results of scientific reports, Telemakus offers a potential strategy to enhance the scientific discovery process. While other research has demonstrated that aggregating and analyzing research findings across domains augments knowledge discovery, the Telemakus system is unique in combining document surrogates with interactive concept maps of linked relationships across groups of research reports. Conclusion Based on how scientists conduct research and read the literature, the Telemakus KnowledgeBase System brings together three innovations in analyzing, displaying and summarizing research reports across a domain: (1) research report schema, a document surrogate of extracted research methods and findings presented in a consistent and structured schema format which mimics the research process itself and provides a high-level surrogate to facilitate searching and rapid review of retrieved documents; (2) research findings, used to index the documents, allowing searchers to request, for example, research studies which have studied the relationship between neoplasms and vitamin E; and (3) visual exploration interface of linked relationships for interactive querying of research findings across the knowledgebase and graphical displays of what is known as well as, through gaps in the map, what is yet to be tested. The rationale and system architecture are described and plans for the future are discussed. PMID:15507158
Extracting Phonological Patterns for L2 Word Learning: The Effect of Poor Phonological Awareness
ERIC Educational Resources Information Center
Hu, Chieh-Fang
2014-01-01
An implicit word learning paradigm was designed to test the hypothesis that children who came to the task of L2 vocabulary acquisition with poorer L1 phonological awareness (PA) are less capable of extracting phonological patterns from L2 and thus have difficulties capitalizing on this knowledge to support L2 vocabulary learning. A group of…
Studies of extraction, storage, and testing of pine pollen
J. W. Duffield
1953-01-01
This report assembles the results of a number of small exploratory studies on the extraction, storage, and viability testing of pollen of several species of pines. These studies indicate clearly the need for more knowledge of the physiology of pollen, particularly of the relation between atmospheric humidity at the time of pollen shedding and the subsequent reactions...
Effects of time and rainfall on PCR success using DNA extracted from deer fecal pellets
Todd J. Brinkman; Michael K. Schwartz; David K. Person; Kristine L. Pilgrim; Kris J. Hundertmark
2009-01-01
Non-invasive wildlife research using DNA from feces has become increasingly popular. Recent studies have attempted to solve problems associated with recovering DNA from feces by investigating the influence of factors such as season, diet, collection method, preservation method, extraction protocol, and time. To our knowledge, studies of this nature have not addressed...
Balkanyi, Laszlo; Heja, Gergely; Nagy, Attlia
2014-01-01
Extracting scientifically accurate terminology from an EU public health regulation is part of the knowledge engineering work at the European Centre for Disease Prevention and Control (ECDC). ECDC operates information systems at the crossroads of many areas - posing a challenge for transparency and consistency. Semantic interoperability is based on the Terminology Server (TS). TS value sets (structured vocabularies) describe shared domains such as "diseases", "organisms", "public health terms", "geo-entities", "organizations", "administrative terms" and others. We extracted information from the relevant EC Implementing Decision on case definitions for reporting communicable diseases, listing 53 notifiable infectious diseases and containing clinical, diagnostic, laboratory and epidemiological criteria. We performed a consistency check and a simplification-abstraction; we represented laboratory criteria as triplets ('y' procedural result / of 'x' organism-substance / on 'z' specimen) and identified negations. The resulting new case definition value set represents the various formalized criteria; meanwhile the existing disease value set has been extended, and new signs and symptoms were added. New organisms enriched the organism value set. Other new categories, such as transmission modes, substances, specimens and procedures, have been added to the public health value set. We identified problem areas, such as (a) some classification errors; (b) inconsistent granularity of conditions; (c) seemingly nonsensical criteria and medical trivialities; (d) possible logical errors; and (e) seemingly factual errors that might be phrasing errors. We think our hypothesis regarding room for possible improvements is valid: there are some open issues, and a further improved legal text might lead to more precise epidemiologic data collection. It has to be noted that formal representation for automatic classification of cases was out of scope; such a task would require other formalisms, e.g. those used by rule-based decision support systems.
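One lightweight way to hold the triplet representation described above is a small record type; the sketch below is a hypothetical illustration (the example criteria are invented, not quoted from the Implementing Decision).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LabCriterion:
    """One laboratory criterion in the triplet form used above:
    'y' procedural result / of 'x' organism or substance / on 'z' specimen."""
    result: str                 # y: e.g. "isolation", "detection of nucleic acid"
    target: str                 # x: organism or substance
    specimen: str               # z: specimen
    negated: bool = False       # captures identified negations
    disease: Optional[str] = None

# Hypothetical example entries for illustration only.
criteria = [
    LabCriterion("isolation", "Neisseria meningitidis", "cerebrospinal fluid",
                 disease="invasive meningococcal disease"),
    LabCriterion("detection of nucleic acid", "measles virus", "clinical specimen",
                 disease="measles"),
]
for c in criteria:
    print(f"{c.result} / of {c.target} / on {c.specimen}")
```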
Murray, Jenni; Craigs, Cheryl Leanne; Hill, Kate Mary; Honey, Stephanie; House, Allan
2012-12-08
Healthy lifestyles are an important facet of cardiovascular risk management. Unfortunately many individuals fail to engage with lifestyle change programmes. There are many factors that patients report as influencing their decisions about initiating lifestyle change. This is challenging for health care professionals who may lack the skills and time to address a broad range of barriers to lifestyle behaviour change. Guidance on which factors to focus on during lifestyle consultations may assist healthcare professionals to hone their skills and knowledge, leading to more productive patient interactions and ultimately better uptake of lifestyle behaviour change support. The aim of our study was to clarify which influences reported by patients predict uptake and completion of formal lifestyle change programmes. A systematic narrative review was conducted of quantitative observational studies reporting factors (influences) associated with uptake and completion of lifestyle behaviour change programmes. Quantitative observational studies involving patients at high risk of cardiovascular events were identified through electronic searching and screened against pre-defined selection criteria. Factors were extracted and organised into an existing qualitative framework. In total, 374 factors were extracted from 32 studies. Factors most consistently associated with uptake of lifestyle change related to support from family and friends, transport and other costs, and beliefs about the causes of illness and lifestyle change. Depression and anxiety also appear to influence uptake as well as completion. Many factors show inconsistent patterns with respect to uptake and completion of lifestyle change programmes. There are a small number of factors that consistently appear to influence uptake and completion of cardiovascular lifestyle behaviour change. These factors could be considered during patient consultations to promote a tailored approach to decision making about the most suitable type and level of lifestyle behaviour change support.
Technical design and system implementation of region-line primitive association framework
NASA Astrophysics Data System (ADS)
Wang, Min; Xing, Jinjin; Wang, Jie; Lv, Guonian
2017-08-01
Apart from regions, image edge lines are an important information source, and they deserve more attention in object-based image analysis (OBIA) than they currently receive. In the region-line primitive association framework (RLPAF), we promote straight-edge lines to line primitives to achieve more powerful OBIA. Along with regions, straight lines become basic units for subsequent extraction and analysis of OBIA features. This study develops a new software system called remote-sensing knowledge finder (RSFinder) to implement RLPAF for engineering application purposes. This paper introduces the extended technical framework, a comprehensively designed feature set, the key technology, and the software implementation. To our knowledge, RSFinder is the world's first OBIA system based on two types of primitives, namely, regions and lines. It is fundamentally different from other well-known region-only-based OBIA systems, such as eCognition and the ENVI feature extraction module. This paper provides an important reference for the development of similarly structured OBIA systems and line-involved extraction algorithms for remote sensing information.
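RSFinder's own extraction algorithms are not described in this abstract, so the sketch below only illustrates, under generic assumptions, how the two primitive types could be obtained with off-the-shelf tools: region primitives from SLIC superpixels and straight-line primitives from edge pixels via a probabilistic Hough transform.

```python
import numpy as np
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line
from skimage.segmentation import slic

def region_line_primitives(gray, rgb):
    """Illustrative extraction of the two primitive types: region primitives
    from a superpixel segmentation and straight-line primitives from edges."""
    regions = slic(rgb, n_segments=200, compactness=10)              # region primitives
    edges = canny(gray, sigma=2.0)
    lines = probabilistic_hough_line(edges, threshold=10,
                                     line_length=25, line_gap=3)     # line primitives
    return regions, lines

# Tiny synthetic scene: a bright square "building" on a dark background.
gray = np.zeros((120, 120)); gray[30:90, 40:100] = 1.0
rgb = np.dstack([gray] * 3)
regions, lines = region_line_primitives(gray, rgb)
print(regions.max(), "regions and", len(lines), "straight-line primitives")
```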
Automated Extraction of Substance Use Information from Clinical Texts.
Wang, Yan; Chen, Elizabeth S; Pakhomov, Serguei; Arsoniadis, Elliot; Carter, Elizabeth W; Lindemann, Elizabeth; Sarkar, Indra Neil; Melton, Genevieve B
2015-01-01
Within clinical discourse, social history (SH) includes important information about substance use (alcohol, drug, and nicotine use) as key risk factors for disease, disability, and mortality. In this study, we developed and evaluated a natural language processing (NLP) system for automated detection of substance use statements and extraction of substance use attributes (e.g., temporal and status) based on Stanford Typed Dependencies. The developed NLP system leveraged linguistic resources and domain knowledge from a multi-site social history study, PropBank and the MiPACQ corpus. The system attained F-scores of 89.8, 84.6 and 89.4, respectively, for alcohol, drug, and nicotine use statement detection, as well as average F-scores of 82.1, 90.3, 80.8, 88.7, 96.6, and 74.5, respectively, for extraction of attributes. Our results suggest that NLP systems can achieve good performance on a wide breadth of substance use free-text clinical notes when augmented with linguistic resources and domain knowledge.
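The authors' system builds on Stanford Typed Dependencies; the sketch below is a much simpler keyword-based illustration of the same two subtasks, statement detection and a coarse status attribute, and is not a reconstruction of their pipeline.

```python
import re

SUBSTANCES = {
    "alcohol": r"\b(alcohol|etoh|beer|wine|liquor)\b",
    "nicotine": r"\b(smok\w*|tobacco|cigarette\w*|nicotine)\b",
    "drug": r"\b(cocaine|heroin|marijuana|illicit drug\w*)\b",
}
NEGATION = re.compile(r"\b(denies|no|never|quit|former)\b", re.I)

def substance_statements(note):
    """Return (sentence, substance, status) tuples from a free-text SH section."""
    hits = []
    for sent in re.split(r"(?<=[.;])\s+", note):
        for substance, pattern in SUBSTANCES.items():
            if re.search(pattern, sent, re.I):
                status = "negative/past" if NEGATION.search(sent) else "current/positive"
                hits.append((sent.strip(), substance, status))
    return hits

note = "Social history: Denies tobacco use. Drinks 2 glasses of wine per week."
for hit in substance_statements(note):
    print(hit)
```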
Concurrent evolution of feature extractors and modular artificial neural networks
NASA Astrophysics Data System (ADS)
Hannak, Victor; Savakis, Andreas; Yang, Shanchieh Jay; Anderson, Peter
2009-05-01
This paper presents a new approach for the design of feature-extracting recognition networks that do not require expert knowledge in the application domain. Feature-Extracting Recognition Networks (FERNs) are composed of interconnected functional nodes (feurons), which serve as feature extractors, and are followed by a subnetwork of traditional neural nodes (neurons) that act as classifiers. A concurrent evolutionary process (CEP) is used to search the space of feature extractors and neural networks in order to obtain an optimal recognition network that simultaneously performs feature extraction and recognition. By constraining the hill-climbing search functionality of the CEP on specific parts of the solution space, i.e., individually limiting the evolution of feature extractors and neural networks, it was demonstrated that concurrent evolution is a necessary component of the system. Application of this approach to a handwritten digit recognition task illustrates that the proposed methodology is capable of producing recognition networks that perform in line with other methods without the need for expert knowledge in image processing.
Online Knowledge-Based Model for Big Data Topic Extraction.
Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan
2016-01-01
Lifelong machine learning (LML) models learn with experience, maintaining a knowledge base without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while reducing the processing cost to half.
Estimating individual contribution from group-based structural correlation networks.
Saggar, Manish; Hosseini, S M Hadi; Bruno, Jennifer L; Quintin, Eve-Marie; Raman, Mira M; Kesler, Shelli R; Reiss, Allan L
2015-10-15
Coordinated variations in brain morphology (e.g., cortical thickness) across individuals have been widely used to infer large-scale population brain networks. These structural correlation networks (SCNs) have been shown to reflect synchronized maturational changes in connected brain regions. Further, evidence suggests that SCNs, to some extent, reflect both anatomical and functional connectivity and hence provide a complementary measure of brain connectivity in addition to diffusion weighted networks and resting-state functional networks. Although widely used to study between-group differences in network properties, SCNs are inferred only at the group-level using brain morphology data from a set of participants, thereby not providing any knowledge regarding how the observed differences in SCNs are associated with individual behavioral, cognitive and disorder states. In the present study, we introduce two novel distance-based approaches to extract information regarding individual differences from the group-level SCNs. We applied the proposed approaches to a moderately large dataset (n=100) consisting of individuals with fragile X syndrome (FXS; n=50) and age-matched typically developing individuals (TD; n=50). We tested the stability of proposed approaches using permutation analysis. Lastly, to test the efficacy of our method, individual contributions extracted from the group-level SCNs were examined for associations with intelligence scores and genetic data. The extracted individual contributions were stable and were significantly related to both genetic and intelligence estimates, in both typically developing individuals and participants with FXS. We anticipate that the approaches developed in this work could be used as a putative biomarker for altered connectivity in individuals with neurodevelopmental disorders. Copyright © 2015 Elsevier Inc. All rights reserved.
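One plausible distance-based formulation of "individual contribution" (an illustration consistent with the description above, not necessarily the authors' exact definition) is the distance between the group SCN and the SCN recomputed with one subject left out:

```python
import numpy as np

def individual_contributions(thickness):
    """thickness: subjects x regions matrix of cortical thickness values.
    For each subject, the contribution is the distance between the group
    correlation network and the network recomputed with that subject left out."""
    full_scn = np.corrcoef(thickness, rowvar=False)
    contributions = []
    for i in range(thickness.shape[0]):
        loo = np.delete(thickness, i, axis=0)
        loo_scn = np.corrcoef(loo, rowvar=False)
        contributions.append(np.linalg.norm(full_scn - loo_scn))  # Frobenius distance
    return np.array(contributions)

# Hypothetical data: 50 subjects x 68 cortical regions.
rng = np.random.default_rng(0)
data = rng.normal(2.5, 0.2, size=(50, 68))
scores = individual_contributions(data)
print("most atypical subject:", int(np.argmax(scores)))
```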
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bathke, C. G.; Ebbinghaus, Bartley B.; Collins, Brian A.
2012-08-29
We must anticipate that the day is approaching when details of nuclear weapons design and fabrication will become common knowledge. On that day we must be particularly certain that all special nuclear materials (SNM) are adequately accounted for and protected and that we have a clear understanding of the utility of nuclear materials to potential adversaries. To this end, this paper examines the attractiveness of materials mixtures containing SNM and alternate nuclear materials associated with the plutonium-uranium reduction extraction (Purex), uranium extraction (UREX), coextraction (COEX), thorium extraction (THOREX), and PYROX (an electrochemical refining method) reprocessing schemes. This paper provides a set of figures of merit for evaluating material attractiveness that covers a broad range of proliferant state and subnational group capabilities. The primary conclusion of this paper is that all fissile material must be rigorously safeguarded to detect diversion by a state and must be provided the highest levels of physical protection to prevent theft by subnational groups; no 'silver bullet' fuel cycle has been found that will permit the relaxation of current international safeguards or national physical security protection levels. The work reported herein has been performed at the request of the U.S. Department of Energy (DOE) and is based on the calculation of 'attractiveness levels' that are expressed in terms consistent with, but normally reserved for, the nuclear materials in DOE nuclear facilities. The methodology and findings are presented. Additionally, how these attractiveness levels relate to proliferation resistance and physical security is discussed.
Semantic extraction and processing of medical records for patient-oriented visual index
NASA Astrophysics Data System (ADS)
Zheng, Weilin; Dong, Wenjie; Chen, Xiangjiao; Zhang, Jianguo
2012-02-01
To gain a comprehensive and complete understanding of a patient's health status, doctors need to search the patient's medical records in different healthcare information systems, such as PACS, RIS, HIS and USIS, as a reference for diagnosis and treatment decisions. However, these procedures are time-consuming and tedious. In order to solve this kind of problem, we developed a patient-oriented visual index system (VIS) that uses visual technology to show health status and to retrieve the patient's examination information stored in each system through a 3D human model. In this presentation, we present a new approach for extracting semantic and characteristic information from medical record systems such as RIS/USIS to create the 3D visual index. This approach includes the following steps: (1) building a medical characteristic semantic knowledge base; (2) developing a natural language processing (NLP) engine to perform semantic analysis and logical judgment on text-based medical records; (3) applying the knowledge base and NLP engine to medical records to extract medical characteristics (e.g., positive focus information), and then mapping the extracted information to related organs/parts of the 3D human model to create the visual index. We performed testing on 559 radiological reports containing 853 focuses and successfully extracted information for 828 of them, a focus extraction success rate of about 97.1%.
46 CFR 161.002-15 - Sample extraction smoke detection systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
§ 161.002-15 Sample extraction smoke detection systems (46 CFR). The smoke detecting system must consist of a means for... smoke, together with visual and audible alarms for indicating the presence of smoke. [CGD 94-108, 61 FR...
Alonso-Salces, Rosa M; Barranco, Alejandro; Corta, Edurne; Berrueta, Luis A; Gallo, Blanca; Vicente, Francisca
2005-02-15
A solid-liquid extraction procedure followed by reversed-phase high-performance liquid chromatography (RP-HPLC) coupled with a photodiode array detector (DAD) for the determination of polyphenols in freeze-dried apple peel and pulp is reported. The extraction step consists of sonicating 0.5 g of freeze-dried apple tissue with 30 mL of methanol-water-acetic acid (30:69:1, v/v/v) containing 2 g of ascorbic acid/L, for 10 min in an ultrasonic bath. The whole method was validated, concluding that it is a robust method that presents high extraction efficiencies (peel: >91%, pulp: >95%) and appropriate precisions (within day: R.S.D. (n = 5) <5%, and between days: R.S.D. (n = 5) <7%) at the different concentration levels of polyphenols that can be found in apple samples. The method was compared with one previously published, consisting of pressurized liquid extraction (PLE) followed by RP-HPLC-DAD determination. The advantages and disadvantages of both methods are discussed.
Socioeconomic Status Modifies Interest-Knowledge Associations among Adolescents.
Tucker-Drob, Elliot M; Briley, Daniel A
2012-07-01
Researchers have recently taken a renewed interest in examining the patterns by which noncognitive traits and cognitive traits relate to one another. Few researchers, however, have examined the possibility that such patterns might differ according to environmental context. Using data from a nationally representative sample of approximately 375,000 students from 1,300 high schools in the United States, we examined the relations between socioeconomic status (SES), interests, and knowledge in eleven academic, vocational/professional, and recreational domains. We found little support for the hypothesis that SES-related differences in levels of interest mediate SES-related differences in levels of knowledge. In contrast, we found robust and consistent support for the hypothesis that SES moderates interest-knowledge associations. For 10 out of 11 of the knowledge domains examined, the interest-knowledge association was stronger for individuals living in higher SES contexts. Moderation persisted after controlling for an index of general intelligence. These findings are consistent with the hypothesis that low SES inhibits individuals from selectively investing their time and attention in learning experiences that are consistent with their interests.
Text mining patents for biomedical knowledge.
Rodriguez-Esteban, Raul; Bundschus, Markus
2016-06-01
Biomedical text mining of scientific knowledge bases, such as Medline, has received much attention in recent years. Given that text mining is able to automatically extract biomedical facts that revolve around entities such as genes, proteins, and drugs, from unstructured text sources, it is seen as a major enabler to foster biomedical research and drug discovery. In contrast to the biomedical literature, research into the mining of biomedical patents has not reached the same level of maturity. Here, we review existing work and highlight the associated technical challenges that emerge from automatically extracting facts from patents. We conclude by outlining potential future directions in this domain that could help drive biomedical research and drug discovery. Copyright © 2016 Elsevier Ltd. All rights reserved.
A flower image retrieval method based on ROI feature.
Hong, An-Xiang; Chen, Gang; Li, Jun-Li; Chi, Zhe-Ru; Zhang, Dan
2004-07-01
Flower image retrieval is a very important step for computer-aided plant species recognition. In this paper, we propose an efficient segmentation method based on color clustering and domain knowledge to extract flower regions from flower images. For flower retrieval, we use the color histogram of a flower region to characterize the color features of the flower and two shape-based feature sets, Centroid-Contour Distance (CCD) and Angle Code Histogram (ACH), to characterize the shape features of the flower contour. Experimental results showed that our flower region extraction method based on color clustering and domain knowledge can produce accurate flower regions. Flower retrieval results on a database of 885 flower images collected from 14 plant species showed that our Region-of-Interest (ROI) based retrieval approach using both color and shape features can perform better than a method based on the global color histogram proposed by Swain and Ballard (1991) and a method based on domain knowledge-driven segmentation and color names proposed by Das et al. (1999).
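As an illustration of the CCD shape feature, the sketch below computes a scale-normalised centroid-to-contour distance signature for a toy contour; the binning-by-angle formulation is one common variant and may differ in detail from the paper's definition.

```python
import numpy as np

def centroid_contour_distance(contour, n_bins=36):
    """contour: N x 2 array of (x, y) boundary points of a flower region.
    Returns the CCD signature: mean centroid-to-contour distances collected in
    n_bins evenly spaced angular sectors, normalised for scale invariance."""
    centroid = contour.mean(axis=0)
    diff = contour - centroid
    dist = np.hypot(diff[:, 0], diff[:, 1])
    angle = np.arctan2(diff[:, 1], diff[:, 0])                     # -pi .. pi
    bins = np.floor((angle + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    ccd = np.zeros(n_bins)
    for b in range(n_bins):
        if np.any(bins == b):
            ccd[b] = dist[bins == b].mean()
    return ccd / (ccd.max() + 1e-9)

# Hypothetical contour: a five-petalled deformed circle standing in for a flower.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.c_[(10 + np.cos(5 * t)) * np.cos(t), (10 + np.cos(5 * t)) * np.sin(t)]
print(centroid_contour_distance(contour).round(2))
```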
Recent progress in automatically extracting information from the pharmacogenomic literature
Garten, Yael; Coulet, Adrien; Altman, Russ B
2011-01-01
The biomedical literature holds our understanding of pharmacogenomics, but it is dispersed across many journals. In order to integrate our knowledge, connect important facts across publications and generate new hypotheses we must organize and encode the contents of the literature. By creating databases of structured pharmacogenomic knowledge, we can make the value of the literature much greater than the sum of the individual reports. We can, for example, generate candidate gene lists or interpret surprising hits in genome-wide association studies. Text mining automatically adds structure to the unstructured knowledge embedded in millions of publications, and recent years have seen a surge in work on biomedical text mining, some specific to pharmacogenomics literature. These methods enable extraction of specific types of information and can also provide answers to general, systemic queries. In this article, we describe the main tasks of text mining in the context of pharmacogenomics, summarize recent applications and anticipate the next phase of text mining applications. PMID:21047206
Domain-independent information extraction in unstructured text
DOE Office of Scientific and Technical Information (OSTI.GOV)
Irwin, N.H.
Extracting information from unstructured text has become an important research area in recent years due to the large amount of text now electronically available. This status report describes the findings and work done during the second year of a two-year Laboratory Directed Research and Development Project. Building on the first year's work of identifying important entities, this report details techniques used to group words into semantic categories and to output templates containing selective document content. Using word profiles and category clustering derived during a training run, the time-consuming knowledge-building task can be avoided. Though the output still lacks in completeness when compared to systems with domain-specific knowledge bases, the results do look promising. The two approaches are compatible and could complement each other within the same system. Domain-independent approaches retain appeal as a system that adapts and learns will soon outpace a system with any amount of a priori knowledge.
New developments of a knowledge based system (VEG) for inferring vegetation characteristics
NASA Technical Reports Server (NTRS)
Kimes, D. S.; Harrison, P. A.; Harrison, P. R.
1992-01-01
An extraction technique for inferring physical and biological surface properties of vegetation using nadir and/or directional reflectance data as input has been developed. A knowledge-based system (VEG) accepts spectral data of an unknown target as input, determines the best strategy for inferring the desired vegetation characteristic, applies the strategy to the target data, and provides a rigorous estimate of the accuracy of the inference. Progress in developing the system is presented. VEG combines methods from remote sensing and artificial intelligence, and integrates input spectral measurements with diverse knowledge bases. VEG has been developed to (1) infer spectral hemispherical reflectance from any combination of nadir and/or off-nadir view angles; (2) test and develop new extraction techniques on an internal spectral database; (3) browse, plot, or analyze directional reflectance data in the system's spectral database; (4) discriminate between user-defined vegetation classes using spectral and directional reflectance relationships; and (5) infer unknown view angles from known view angles (known as view angle extension).
FJET Database Project: Extract, Transform, and Load
NASA Technical Reports Server (NTRS)
Samms, Kevin O.
2015-01-01
The Data Mining & Knowledge Management team at Kennedy Space Center is providing data management services to the Frangible Joint Empirical Test (FJET) project at Langley Research Center (LARC). FJET is a project under the NASA Engineering and Safety Center (NESC). The purpose of FJET is to conduct an assessment of mild detonating fuse (MDF) frangible joints (FJs) for human spacecraft separation tasks in support of the NASA Commercial Crew Program. The Data Mining & Knowledge Management team has been tasked with creating and managing a database for the efficient storage and retrieval of FJET test data. This paper details the Extract, Transform, and Load (ETL) process as it is related to gathering FJET test data into a Microsoft SQL relational database, and making that data available to the data users. Lessons learned, procedures implemented, and programming code samples are discussed to help detail the learning experienced as the Data Mining & Knowledge Management team adapted to changing requirements and new technology while maintaining flexibility of design in various aspects of the data management project.
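A minimal ETL sketch in the spirit described above is shown below; the column names, record contents, and SQLite target are hypothetical stand-ins (the actual project loaded FJET test data into Microsoft SQL Server).

```python
import pandas as pd
from sqlalchemy import create_engine

# Extract: in practice this would read instrumentation exports (CSV, Excel, etc.);
# a tiny hypothetical record set keeps the sketch self-contained.
raw = pd.DataFrame({
    "Joint ID": ["FJ-01", "FJ-02", None],
    "Test Date": ["2015-03-02", "2015-03-09", "2015-03-16"],
    "Peak Pressure": [412.5, 398.1, 405.0],
})

# Transform: normalise column names, enforce types, drop incomplete rows.
raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
raw["test_date"] = pd.to_datetime(raw["test_date"])
clean = raw.dropna(subset=["joint_id"])

# Load: SQLite keeps the example runnable; a SQL Server deployment would use a
# "mssql+pyodbc://..." connection string instead.
engine = create_engine("sqlite:///fjet.db")
clean.to_sql("fjet_results", engine, if_exists="append", index=False)
print(pd.read_sql("SELECT COUNT(*) AS n FROM fjet_results", engine))
```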
Iyappan, Anandhi; Younesi, Erfan; Redolfi, Alberto; Vrooman, Henri; Khanna, Shashank; Frisoni, Giovanni B.; Hofmann-Apitius, Martin
2017-01-01
Ontologies and terminologies are used for interoperability of knowledge and data in a standard manner among interdisciplinary research groups. Existing imaging ontologies capture general aspects of the imaging domain as a whole such as methodological concepts or calibrations of imaging instruments. However, none of the existing ontologies covers the diagnostic features measured by imaging technologies in the context of neurodegenerative diseases. Therefore, the Neuro-Imaging Feature Terminology (NIFT) was developed to organize the knowledge domain of measured brain features in association with neurodegenerative diseases by imaging technologies. The purpose is to identify quantitative imaging biomarkers that can be extracted from multi-modal brain imaging data. This terminology attempts to cover measured features and parameters in brain scans relevant to disease progression. In this paper, we demonstrate the systematic retrieval of measured indices from literature and how the extracted knowledge can be further used for disease modeling that integrates neuroimaging features with molecular processes. PMID:28731430
NASA Astrophysics Data System (ADS)
Jung, Chinte; Sun, Chih-Hong
2006-10-01
Motivated by the increasing accessibility of technology, more and more spatial data are being made digitally available. How to extract the valuable knowledge from these large (spatial) databases is becoming increasingly important to businesses, as well. It is essential to be able to analyze and utilize these large datasets, convert them into useful knowledge, and transmit them through GIS-enabled instruments and the Internet, conveying the key information to business decision-makers effectively and benefiting business entities. In this research, we combine the techniques of GIS, spatial decision support system (SDSS), spatial data mining (SDM), and ArcGIS Server to achieve the following goals: (1) integrate databases from spatial and non-spatial datasets about the locations of businesses in Taipei, Taiwan; (2) use the association rules, one of the SDM methods, to extract the knowledge from the integrated databases; and (3) develop a Web-based SDSS GIService as a location-selection tool for business by the product of ArcGIS Server.
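As an illustration of the association-rule step, the sketch below mines simple support/confidence rules from invented location-attribute transactions; the attribute names are placeholders, not the Taipei business data.

```python
from itertools import combinations

# Hypothetical transactions: attributes of candidate store locations.
transactions = [
    {"near_mrt", "high_foot_traffic", "commercial_zone"},
    {"near_mrt", "high_foot_traffic"},
    {"near_school", "residential_zone"},
    {"near_mrt", "commercial_zone", "high_foot_traffic"},
    {"near_school", "high_foot_traffic"},
]

def rules(transactions, min_support=0.4, min_confidence=0.7):
    """Enumerate single-item rules lhs -> rhs that meet support and confidence."""
    n = len(transactions)
    items = set().union(*transactions)
    support = lambda itemset: sum(itemset <= t for t in transactions) / n
    found = []
    for a, b in combinations(sorted(items), 2):
        for lhs, rhs in (({a}, {b}), ({b}, {a})):
            s = support(lhs | rhs)
            if s >= min_support and support(lhs) > 0:
                conf = s / support(lhs)
                if conf >= min_confidence:
                    found.append((lhs, rhs, round(s, 2), round(conf, 2)))
    return found

for lhs, rhs, s, c in rules(transactions):
    print(f"{lhs} -> {rhs}  support={s} confidence={c}")
```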
Knowledge Discovery from Posts in Online Health Communities Using Unified Medical Language System.
Chen, Donghua; Zhang, Runtong; Liu, Kecheng; Hou, Lei
2018-06-19
Patient-reported posts in Online Health Communities (OHCs) contain various valuable information that can help establish knowledge-based online support for online patients. However, utilizing these reports to improve online patient services in the absence of appropriate medical and healthcare expert knowledge is difficult. Thus, we propose a comprehensive knowledge discovery method that is based on the Unified Medical Language System for the analysis of narrative posts in OHCs. First, we propose a domain-knowledge support framework for OHCs to provide a basis for post analysis. Second, we develop a Knowledge-Involved Topic Modeling (KI-TM) method to extract and expand explicit knowledge within the text. We propose four metrics, namely, explicit knowledge rate, latent knowledge rate, knowledge correlation rate, and perplexity, for the evaluation of the KI-TM method. Our experimental results indicate that our proposed method outperforms existing methods in terms of providing knowledge support. Our method enhances knowledge support for online patients and can help develop intelligent OHCs in the future.
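KI-TM itself extends topic modeling with knowledge features and is not reproduced here; the sketch below only shows a plain LDA baseline and the perplexity metric on a few invented posts (assuming scikit-learn ≥ 1.0), as a point of reference for the kind of model being extended.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical patient posts; a real OHC corpus would be far larger.
posts = [
    "insulin dose adjusted after fasting glucose stayed high",
    "metformin side effects nausea and stomach pain",
    "knee replacement recovery and physical therapy exercises",
    "physical therapy helped my knee pain after surgery",
    "glucose monitor readings and insulin timing questions",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(posts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
print("perplexity:", round(lda.perplexity(X), 1))      # one of the four metrics named above
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:]]
    print(f"topic {k}: {top}")
```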
Ruiz-Aceituno, Laura; García-Sarrió, M Jesús; Alonso-Rodriguez, Belén; Ramos, Lourdes; Sanz, M Luz
2016-04-01
Microwave assisted extraction (MAE) and pressurized liquid extraction (PLE) methods using water as solvent have been optimized by means of a Box-Behnken and 3(2) composite experimental designs, respectively, for the effective extraction of bioactive carbohydrates (inositols and inulin) from artichoke (Cynara scolymus L.) external bracts. MAE at 60 °C for 3 min of 0.3 g of sample allowed the extraction of slightly higher concentrations of inositol than PLE at 75 °C for 26.7 min (11.6 mg/g dry sample vs. 7.6 mg/g dry sample). On the contrary, under these conditions, higher concentrations of inulin were extracted with the latter technique (185.4 mg/g vs. 96.4 mg/g dry sample), considering two successive extraction cycles for both techniques. Both methodologies can be considered appropriate for the simultaneous extraction of these bioactive carbohydrates from this particular industrial by-product. To the best of our knowledge this is the first time that these techniques are applied for this purpose. Copyright © 2015 Elsevier Ltd. All rights reserved.
Internet: Education and Application for the Knowledge Warrior
1995-05-01
of the available population to work to support agriculture or mineral extraction. It was during this period in historical development permanent...addresses such as these found on the Internet, ACSC students extracted current information on Chinese politics, environment, culture, leadership... working on a notional scenario, the type of information located was surprising in its level of detail. And while they were able to find almost
A computational intelligent approach to multi-factor analysis of violent crime information system
NASA Astrophysics Data System (ADS)
Liu, Hongbo; Yang, Chao; Zhang, Meng; McLoone, Seán; Sun, Yeqing
2017-02-01
Various scientific studies have explored the causes of violent behaviour from different perspectives, with psychological tests, in particular, applied to the analysis of crime factors. The relationship between bi-factors has also been extensively studied including the link between age and crime. In reality, many factors interact to contribute to criminal behaviour and as such there is a need to have a greater level of insight into its complex nature. In this article we analyse violent crime information systems containing data on psychological, environmental and genetic factors. Our approach combines elements of rough set theory with fuzzy logic and particle swarm optimisation to yield an algorithm and methodology that can effectively extract multi-knowledge from information systems. The experimental results show that our approach outperforms alternative genetic algorithm and dynamic reduct-based techniques for reduct identification and has the added advantage of identifying multiple reducts and hence multi-knowledge (rules). Identified rules are consistent with classical statistical analysis of violent crime data and also reveal new insights into the interaction between several factors. As such, the results are helpful in improving our understanding of the factors contributing to violent crime and in highlighting the existence of hidden and intangible relationships between crime factors.
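The paper's algorithm hybridizes rough sets with fuzzy logic and particle swarm optimisation; the sketch below only illustrates the underlying rough-set idea of a reduct, using a classical greedy dependency-based search on an invented toy table of crime-related attributes.

```python
# Simplified reduct search from rough set theory: greedily add the attribute
# that most increases the dependency of the decision on the chosen attributes.
# Illustration only, not the fuzzy/PSO hybrid described above; toy data invented.
from itertools import groupby

def partition(rows, attrs):
    """Group row indices by their values on the given attributes."""
    key = lambda i: tuple(rows[i][a] for a in attrs)
    idx = sorted(range(len(rows)), key=key)
    return [set(g) for _, g in groupby(idx, key=key)]

def dependency(rows, attrs, decision):
    """Fraction of rows whose class is fully determined by attrs (positive region)."""
    pos = 0
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos += len(block)
    return pos / len(rows)

def greedy_reduct(rows, attrs, decision):
    chosen, best = [], 0.0
    while best < dependency(rows, attrs, decision):
        gains = [(dependency(rows, chosen + [a], decision), a)
                 for a in attrs if a not in chosen]
        best, a = max(gains)
        chosen.append(a)
    return chosen

rows = [  # hypothetical records: (age_group, stress, gene_marker) -> violent
    {"age": "young", "stress": "high", "gene": 1, "violent": "yes"},
    {"age": "young", "stress": "low",  "gene": 0, "violent": "no"},
    {"age": "old",   "stress": "high", "gene": 0, "violent": "no"},
    {"age": "old",   "stress": "high", "gene": 1, "violent": "yes"},
]
print(greedy_reduct(rows, ["age", "stress", "gene"], "violent"))
```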
Information extraction and knowledge graph construction from geoscience literature
NASA Astrophysics Data System (ADS)
Wang, Chengbin; Ma, Xiaogang; Chen, Jianguo; Chen, Jingwen
2018-03-01
Geoscience literature published online is an important part of open data, and brings both challenges and opportunities for data analysis. Compared with studies of numerical geoscience data, there are limited works on information extraction and knowledge discovery from textual geoscience data. This paper presents a workflow and a few empirical case studies for that topic, with a focus on documents written in Chinese. First, we set up a hybrid corpus combining the generic and geology terms from geology dictionaries to train Chinese word segmentation rules of the Conditional Random Fields model. Second, we used the word segmentation rules to parse documents into individual words, and removed the stop-words from the segmentation results to get a corpus constituted of content-words. Third, we used a statistical method to analyze the semantic links between content-words, and we selected the chord and bigram graphs to visualize the content-words and their links as nodes and edges in a knowledge graph, respectively. The resulting graph presents a clear overview of key information in an unstructured document. This study proves the usefulness of the designed workflow, and shows the potential of leveraging natural language processing and knowledge graph technologies for geoscience.
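As a loose sketch of steps two and three (segmentation, stop-word removal, and bigram graph construction), the snippet below uses the off-the-shelf jieba segmenter in place of the authors' CRF model trained on geology dictionaries; the sentence and stop-word list are invented examples.

```python
# Segment a Chinese sentence, drop stop-words, and build a bigram
# co-occurrence graph of content words. jieba stands in for the authors'
# CRF segmenter; text and stop-word list are invented.
import jieba
import networkx as nx

text = "花岗岩主要由石英和长石组成，常见于大陆地壳"
stopwords = {"主要", "由", "和", "组成", "，", "常见", "于"}

words = [w for w in jieba.lcut(text) if w not in stopwords]

graph = nx.Graph()
for left, right in zip(words, words[1:]):      # adjacent content words
    if graph.has_edge(left, right):
        graph[left][right]["weight"] += 1
    else:
        graph.add_edge(left, right, weight=1)

print(graph.edges(data=True))
```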
2013-01-01
Background: Significant emphasis is currently placed on the need to enhance health care decision-making with research-derived evidence. While much has been written on specific strategies to enable these “knowledge-to-action” processes, there is less empirical evidence regarding what happens when knowledge translation (KT) processes do not proceed as planned. The present paper provides a KT case study using the area of health care screening for intimate partner violence (IPV). Methods: A modified citation analysis method was used, beginning with a comprehensive search (August 2009 to October 2012) to capture scholarly and grey literature, and news reports citing a specific randomized controlled trial published in a major medical journal on the effectiveness of screening women, in health care settings, for exposure to IPV. Results of the searches were extracted, coded and analysed using a multi-step mixed qualitative and quantitative content analysis process. Results: The trial was cited in 147 citations from 112 different sources in journal articles, commentaries, books, and government and news reports. The trial also formed part of the evidence base for several national-level practice guidelines and policy statements. The most common interpretations of the trial were “no benefit of screening”, “no harms of screening”, or both. Variation existed in how these findings were represented, ranging from summaries of the findings, to privileging one outcome over others, and to critical qualifications, especially with regard to methodological rigour of the trial. Of note, interpretations were not always internally consistent, with the same evidence used in sometimes contradictory ways within the same source. Conclusions: Our findings provide empirical data on the malleability of “evidence” in knowledge translation processes, and its potential for multiple, often unanticipated, uses. They have implications for understanding how research evidence is used and interpreted in policy and practice, particularly in contested knowledge areas. PMID:23587155
Wathen, C Nadine; Macgregor, Jennifer Cd; Sibbald, Shannon L; Macmillan, Harriet L
2013-04-12
Significant emphasis is currently placed on the need to enhance health care decision-making with research-derived evidence. While much has been written on specific strategies to enable these "knowledge-to-action" processes, there is less empirical evidence regarding what happens when knowledge translation (KT) processes do not proceed as planned. The present paper provides a KT case study using the area of health care screening for intimate partner violence (IPV). A modified citation analysis method was used, beginning with a comprehensive search (August 2009 to October 2012) to capture scholarly and grey literature, and news reports citing a specific randomized controlled trial published in a major medical journal on the effectiveness of screening women, in health care settings, for exposure to IPV. Results of the searches were extracted, coded and analysed using a multi-step mixed qualitative and quantitative content analysis process. The trial was cited in 147 citations from 112 different sources in journal articles, commentaries, books, and government and news reports. The trial also formed part of the evidence base for several national-level practice guidelines and policy statements. The most common interpretations of the trial were "no benefit of screening", "no harms of screening", or both. Variation existed in how these findings were represented, ranging from summaries of the findings, to privileging one outcome over others, and to critical qualifications, especially with regard to methodological rigour of the trial. Of note, interpretations were not always internally consistent, with the same evidence used in sometimes contradictory ways within the same source. Our findings provide empirical data on the malleability of "evidence" in knowledge translation processes, and its potential for multiple, often unanticipated, uses. They have implications for understanding how research evidence is used and interpreted in policy and practice, particularly in contested knowledge areas.
An Investigation of the Technological Pedagogical Content Knowledge of Pre-Service Teachers
ERIC Educational Resources Information Center
Horzum, Mehmet Baris
2013-01-01
This study investigates whether pre-service teachers' learning approach and gender are related to their technological knowledge, their technological content knowledge, their technological pedagogical knowledge and their technological pedagogical content knowledge. The sample of the study consisted of 239 pre-service teachers. It was found that an…
ERIC Educational Resources Information Center
Hoe, Siu Loon; McShane, Steven
2010-01-01
Purpose: The topic of organizational learning is populated with many theories and models; many relate to the enduring organizational learning framework consisting of knowledge acquisition, knowledge dissemination, and knowledge use. However, most of the research either emphasizes structural knowledge acquisition and dissemination as a composite…
[System evaluation on Ginkgo Biloba extract in the treatment of acute cerebral infarction].
Wang, Lin; Zhang, Tao; Bai, Kezhen
2015-10-01
To evaluate the effect and safety of Ginkgo Biloba extract in the treatment of acute cerebral infarction. The Wanfang, China National Knowledge Infrastructure (CNKI) and VIPU databases were screened for literature regarding Ginkgo Biloba extract in the treatment of acute cerebral infarction, including clinical randomized controlled trials. Meta-analysis was performed with RevMan 4.2. Compared with the control group, treatment with Ginkgo Biloba extract enhanced efficacy in the treatment of acute cerebral infarction (OR: 1.60-5.53), with an improved neural function defect score [WMD -3.12 (95% CI: -3.96 to -2.28)]. Ginkgo Biloba extract is beneficial to the improvement of neurological function in patients with acute cerebral infarction and it is safe for patients.
Duckstein, Sarina M; Lorenz, Peter; Stintzing, Florian C
2012-01-01
Hamamelis virginiana, known for its high level of tannins and other phenolics, is widely used for the treatment of dermatological disorders. Although reports on hydroalcoholic and aqueous extracts from Hamamelis leaf and bark exist, knowledge on fermented leaf preparations and the underlying conversion processes is still scant. Aqueous Hamamelis leaf extracts were monitored during fermentation and maturation in order to obtain an insight into the bioconversion of tannins and other phenolics. Aliquots taken during the production period were investigated by HPLC-DAD-MS/MS as well as GC-MS after derivatisation into the corresponding trimethylsilyl compounds. In Hamamelis leaf extracts, the main constituents exhibited changes during the observational period of 6 months. By successive depside bond cleavage, the gallotannins were completely transformed into gallic acid after 1 month. Kaempferol and quercetin glycosides were also converted, although not completely, over 6 months to yield their corresponding aglycones. Following C-ring fission, phloroglucinol was formed from the A-ring of both flavonols. The B-ring afforded 3-hydroxybenzoic acid from quercetin and 3,4-dihydroxybenzoic acid as well as 2-(4-hydroxyphenyl)-ethanol from kaempferol. Interestingly, hydroxycinnamic acids remained almost stable over the same time range. The present study broadens the knowledge on conversion processes in aqueous fermented extracts containing tannins, flavonol glycosides and hydroxycinnamic acids. In particular, the analogy between the microbial metabolism of phenolics from fermented Hamamelis extracts, fermented sourdough by heterofermentative lactic acid bacteria, and the conversion of phenolics by the human microbial flora is indicated. Copyright © 2012 John Wiley & Sons, Ltd.
The method is for extracting an indoor and outdoor air sample consisting of a quartz fiber filter and an XAD-2 cartridge for analysis of neutral persistent organic pollutants. It covers the extraction and concentration of samples that are to be analyzed by gas chromatography/mass...
Validation of heart rate extraction through an iPhone accelerometer.
Kwon, Sungjun; Lee, Jeongsu; Chung, Gih Sung; Park, Kwang Suk
2011-01-01
Ubiquitous medical technology may provide advanced utility for evaluating the status of the patient beyond the clinical environment. The iPhone provides the capacity to measure the heart rate, as it contains a 3-axis accelerometer that is sufficiently sensitive to perceive the tiny body movements caused by the pumping of the heart. In this preliminary study, an iPhone was tested and evaluated as a heart rate extractor for medical use by comparison with a reference electrocardiogram (ECG). The heart rate extracted from the acquired acceleration data, compared with that extracted from the reference ECG signal, demonstrated sufficient accuracy and consistency for the iPhone to function as a reliable heart rate extractor.
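A hedged sketch of the general signal-processing idea, not the authors' exact pipeline: band-pass filter an accelerometer trace around cardiac frequencies and count peaks to estimate beats per minute. The signal below is synthetic.

```python
# Estimate heart rate from a synthetic accelerometer trace by band-pass
# filtering around cardiac frequencies and counting peaks (generic approach,
# not the study's processing chain).
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                       # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)     # 30 s of data
# synthetic z-axis acceleration: 1.2 Hz "heartbeat" plus drift and noise
acc = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.5 * np.sin(2 * np.pi * 0.1 * t) \
      + 0.005 * np.random.randn(t.size)

# band-pass 0.8-3 Hz (roughly 48-180 beats per minute)
b, a = butter(2, [0.8 / (fs / 2), 3.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, acc)

peaks, _ = find_peaks(filtered, distance=fs * 0.4)   # >= 0.4 s between beats
bpm = len(peaks) / (t[-1] - t[0]) * 60
print(f"estimated heart rate: {bpm:.1f} bpm")
```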
Wang, W J; Dong, J; Ren, Z P; Chen, B; He, W; Li, W D; Hao, Z W
2016-07-06
To evaluate the validity, reliability, and acceptability of the scale of knowledge, attitude, and behavior of lifestyle intervention in a diabetes high-risk population (HILKAB), and provide scientific evidence for its usage. By convenience sampling, we selected 406 individuals at high risk for diabetes for a survey using the HILKAB. Pearson correlation coefficients, factor analysis, and independent-samples t-tests for high- and low-score groups were used to evaluate the content validity, construct validity, and discriminant validity of the scale. Reliability of the scale was evaluated by internal consistency, which included Cronbach's α coefficient, θ coefficient, Ω coefficient, and split-half reliability. Scale acceptability was evaluated by the acceptance rate and completion time of the survey. In this study, 366 questionnaires (90.1%) were qualified and the completion time was (8.62±2.79) minutes. Scores for knowledge, attitude, and behavior were 10.60±3.73, 26.56±3.58, and 17.09±9.74, respectively. The scale had good face validity and content validity. The correlation coefficient of items and the dimension to which they belong was between 0.25 and 0.97, and the correlation coefficient of the three dimensions and the entire scale was between 0.64 and 0.91, all with P<0.001. Factor analysis of the scale extracted eight common factors. The cumulative variance contribution rate was 65.23%, thereby reaching the 50% approved standard. Of the 30 items, 29 had factor loadings ≥0.40, indicating the scale had good construct validity. For the high-score group, scores for the knowledge, attitude, and behavior dimensions were 13.89±2.55, 29.56±2.46, and 28.05±2.93, respectively, which were higher than those for the low-score group (7.67±2.78, 23.89±3.35, 6.25±3.13); t-values were 55.14, 119.40, and 95.29, respectively, with P<0.001. The scale consisted of three dimensions: knowledge, attitude, and behavior. The Cronbach's α coefficient was between 0.84 and 0.92, the θ coefficient was between 0.85 and 0.96, the Ω coefficient was between 0.90 and 0.94, and the split-half reliability was between 0.77 and 0.95, reaching the 0.70 standard. The validity, reliability, and acceptability of the HILKAB scale were satisfactory for use in a population at high risk of diabetes.
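One of the reliability statistics reported above, Cronbach's α, can be computed directly from an item-score matrix; the sketch below shows the generic formula on made-up responses and is not the authors' analysis code.

```python
# Cronbach's alpha from an items-by-respondents score matrix (generic formula;
# the response data below are invented).
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

answers = np.array([
    [3, 4, 3, 5],
    [2, 2, 3, 3],
    [4, 5, 4, 5],
    [1, 2, 2, 2],
])
print(round(cronbach_alpha(answers), 2))
```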
Watson, Alice J; O'Rourke, Julia; Jethwani, Kamal; Cami, Aurel; Stern, Theodore A; Kvedar, Joseph C; Chueh, Henry C; Zai, Adrian H
2011-01-01
Knowledge of psychosocial characteristics that helps to identify patients at increased risk for readmission for heart failure (HF) may facilitate timely and targeted care. We hypothesized that certain psychosocial characteristics extracted from the electronic health record (EHR) would be associated with an increased risk for hospital readmission within the next 30 days. We identified 15 psychosocial predictors of readmission. Eleven of these were extracted from the EHR (six from structured data sources and five from unstructured clinical notes). We then analyzed their association with the likelihood of hospital readmission within the next 30 days among 729 patients admitted for HF. Finally, we developed a multivariable predictive model to recognize individuals at high risk for readmission. We found five characteristics (dementia, depression, adherence, declining/refusal of services, and missed clinical appointments) that were associated with an increased risk for hospital readmission: the first four features were captured from unstructured clinical notes, while the last item was captured from a structured data source. Unstructured clinical notes contain important knowledge on the relationship between psychosocial risk factors and an increased risk of readmission for HF that would otherwise have been missed if only structured data were considered. Gathering this EHR-based knowledge can be automated, thus enabling timely and targeted care. Copyright © 2011 The Academy of Psychosomatic Medicine. Published by Elsevier Inc. All rights reserved.
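A minimal sketch of how psychosocial mentions might be flagged in free-text notes with keyword patterns; the note text, keyword lists, and feature names are invented illustrations, and a real pipeline would also need negation handling and clinical NLP rather than bare regular expressions.

```python
# Flag psychosocial risk mentions in a free-text note with keyword patterns.
# Illustrative only; patterns, feature names, and the note are invented.
import re

PATTERNS = {
    "depression": re.compile(r"\bdepress\w*", re.I),
    "dementia": re.compile(r"\bdementia\b", re.I),
    "nonadherence": re.compile(r"\b(non[- ]?adherent|missed doses)\b", re.I),
    "refused_services": re.compile(r"\b(declin\w+|refus\w+) (home care|services)\b", re.I),
}

def psychosocial_flags(note: str) -> dict:
    return {name: bool(p.search(note)) for name, p in PATTERNS.items()}

note = ("Patient with known dementia, reports missed doses of furosemide "
        "and has refused home care services.")
print(psychosocial_flags(note))
```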
Watson, Alice J.; O’Rourke, Julia; Jethwani, Kamal; Cami, Aurel; Stern, Theodore A.; Kvedar, Joseph C.; Chueh, Henry C.; Zai, Adrian H.
2013-01-01
Background: Knowledge of psychosocial characteristics that helps to identify patients at increased risk for readmission for heart failure (HF) may facilitate timely and targeted care. Objective: We hypothesized that certain psychosocial characteristics extracted from the electronic health record (EHR) would be associated with an increased risk for hospital readmission within the next 30 days. Methods: We identified 15 psychosocial predictors of readmission. Eleven of these were extracted from the EHR (six from structured data sources and five from unstructured clinical notes). We then analyzed their association with the likelihood of hospital readmission within the next 30 days among 729 patients admitted for HF. Finally, we developed a multivariable predictive model to recognize individuals at high risk for readmission. Results: We found five characteristics—dementia, depression, adherence, declining/refusal of services, and missed clinical appointments—that were associated with an increased risk for hospital readmission: the first four features were captured from unstructured clinical notes, while the last item was captured from a structured data source. Conclusions: Unstructured clinical notes contain important knowledge on the relationship between psychosocial risk factors and an increased risk of readmission for HF that would otherwise have been missed if only structured data were considered. Gathering this EHR-based knowledge can be automated, thus enabling timely and targeted care. PMID:21777714
Techniques for information extraction from compressed GPS traces : final report.
DOT National Transportation Integrated Search
2015-12-31
Developing techniques for extracting information requires a good understanding of methods used to compress the traces. Many techniques for compressing trace data consisting of position (i.e., latitude/longitude) and time values have been developed....
ERIC Educational Resources Information Center
Sahin, Ömer; Gökkurt, Burçin; Soylu, Yasin
2016-01-01
The aim of the study is to examine prospective mathematics teachers' pedagogical content knowledge in terms of knowledge of understanding students and knowledge of instructional strategies which are the subcomponents of pedagogical content knowledge. The participants of this research consist of 98 prospective teachers who are studying in two…
Broglio-Micheletti, Sônia Maria Forti; Dias, Nivia da Silva; Valente, Ellen Carine Neves; de Souza, Leilianne Alves; Lopes, Diego Olympio Peixoto; Dos Santos, Jakeline Maria
2010-01-01
Organic plant extracts and emulsified oil of Azadirachta indica A. Juss (Meliaceae) (neem) were studied to evaluate their effects in the control of engorged females of Rhipicephalus (Boophilus) microplus (Canestrini, 1887) in the laboratory. Hexane and alcoholic organic extracts at 2% (weight/volume), prepared from seeds and solubilized in 1% dimethylsulfoxide (DMSO), were used in 5-minute immersion tests. The experiment was entirely randomized, consisting of 6 treatments and 5 replicates, each represented by 5 ticks. Control groups consisted of untreated females. Based on the results of this work, we can indicate that the seed extract (hexane fraction) and emulsifiable oil I at a 2% concentration have significant adjuvant potential to control the cattle tick, because they cause mortality in the first days after treatment and interfere with reproduction, showing themselves to be an alternative to the acaricides normally used.
Pickrell, J. A.; Link, R. P.; Simon, J.; Rhoades, H. E.; Gossling, J.
1969-01-01
Twenty-two pigs were inoculated parenterally with various E. coli 0139:K82:H1 preparations. Clinical signs of disease in pigs injected with freeze-thaw extract consisted of early listlessness, diarrhea and, later, hyperirritability of varying intensity in some animals. Hemorrhagic gastroenteritis involving the duodenum, spiral colon and the fundic portion of the stomach, and ulceration of the fundic stomach were observed at post-mortem examination of pigs inoculated parenterally with living culture or freeze-thaw extract. No significant lesions were observed in pigs inoculated with ultrasonic or hypotonic acid-saline extract. In pigs injected with living culture or freeze-thaw extract, the histological alterations consisted of moderate perivascular edema of the brain, marked hepatic parenchymal cell degeneration, hepatic subserosal edema and “toxic” lymph nodes, when compared to the control group. PMID:4237302
Online Knowledge-Based Model for Big Data Topic Extraction
Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan
2016-01-01
Lifelong machine learning (LML) models learn with experience, maintaining a knowledge base without user intervention. Unlike traditional single-domain models they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while reducing the processing cost to half. PMID:27195004
Moradi Khanghahi, Behnam; Jamali, Zahra; Pournaghi Azar, Fatemeh; Naghavi Behzad, Mohammad; Azami-Aghdash, Saber
2013-01-01
Background and aims: Infection control is an important issue in dentistry, and dentists are primarily responsible for observing the relevant procedures. Therefore, the present study evaluated knowledge, attitude, practice, and status of infection control among Iranian dentists through a systematic review of published results. Materials and methods: In this systematic review, the required data were collected by searching for keywords including infection, infection control, behavior, performance, practice, attitude, knowledge, dent*, prevention, Iran* and their Persian equivalents in the PubMed, Science Direct, Iranmedex, SID, Medlib, and Magiran databases with a time limit of 1985 to 2012. Out of 698 articles, 15 completely related articles were finally considered and the rest were excluded due to lack of relevance to the study goals. The required data were extracted, summarized in an extraction table, and analyzed manually. Results: Evaluating the results of the studies indicated inappropriate knowledge, attitude, and practice regarding infection control among Iranian dentists and dental students. The use of personal protection devices and the observance of measures required for infection control were not in accordance with global standards. Conclusion: The knowledge, attitudes, and practice of infection control in Iranian dental settings were found to be inadequate. Therefore, dentists should be educated more on the subject and special programs should be in place to monitor dental settings for observance of infection control standards. PMID:23875081
SU-E-J-71: Spatially Preserving Prior Knowledge-Based Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H; Xing, L
2015-06-15
Purpose: Prior knowledge-based treatment planning is impeded by the use of a single dose volume histogram (DVH) curve. Critical spatial information is lost when the dose distribution is collapsed into a histogram. Even similar patients possess geometric variations that become inaccessible in the form of a single DVH. We propose a simple prior knowledge-based planning scheme that extracts features from a prior dose distribution while still preserving the spatial information. Methods: A prior patient plan is not used as a mere starting point for a new patient; rather, stopping criteria are constructed. Each structure from the prior patient is partitioned into multiple shells. For instance, the PTV is partitioned into an inner, middle, and outer shell. Prior dose statistics are then extracted for each shell and translated into the appropriate Dmin and Dmax parameters for the new patient. Results: The partitioned dose information from a prior case was applied to 14 2-D prostate cases. Using the prior case yielded final DVHs that were comparable to manual planning, even though the DVH for the prior case was different from the DVHs for the 14 cases. Using only a single DVH for the entire organ was also tested for comparison but showed much poorer performance. Different ways of translating the prior dose statistics into parameters for the new patient were also tested. Conclusion: Prior knowledge-based treatment planning needs to salvage the spatial information without transforming the patients on a voxel-to-voxel basis. An efficient balance between the anatomy and dose domains is gained through partitioning the organs into multiple shells. The use of prior knowledge not only serves as a starting point for a new case, but the information extracted from the partitioned shells is also translated into stopping criteria for the optimization problem at hand.
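A hedged sketch of the shell-partitioning step on a synthetic 2-D example: split a PTV mask into inner, middle, and outer shells by distance from the structure surface and read Dmin/Dmax for each shell off a prior dose grid. Grid, mask, and dose values are invented; this is not the planning system itself.

```python
# Partition a synthetic 2-D PTV mask into shells by depth from the surface
# and extract per-shell Dmin/Dmax from a synthetic prior dose grid.
import numpy as np
from scipy.ndimage import distance_transform_edt

y, x = np.mgrid[0:64, 0:64]
ptv = (x - 32) ** 2 + (y - 32) ** 2 <= 15 ** 2          # circular PTV mask
dose = 60.0 * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / (2 * 20.0 ** 2))

depth = distance_transform_edt(ptv)                      # depth inside the PTV
edges = np.quantile(depth[ptv], [0.0, 1 / 3, 2 / 3, 1.0])

for name, lo, hi in zip(["outer", "middle", "inner"], edges[:-1], edges[1:]):
    shell = ptv & (depth >= lo) & (depth <= hi)
    print(f"{name} shell: Dmin={dose[shell].min():.1f} Gy, "
          f"Dmax={dose[shell].max():.1f} Gy")
```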
Requirement of scientific documentation for the development of Naturopathy.
Rastogi, Rajiv
2006-01-01
The past few decades have witnessed an explosion of knowledge in almost every field. This has resulted not only in the advancement of particular subjects but has also influenced the growth of various allied subjects. The present paper explains the advancement of science through efforts made in specific areas and also through discoveries in different allied fields that have an indirect influence upon the subject proper. In Naturopathy, it seems that although nothing in particular has been added to the basic thoughts or fundamental principles of the subject, the entire understanding of treatment has been revolutionised under the influence of the scientific discoveries of the past few decades. The advent of information technology has further added to the boom of knowledge, and it often seems impossible to utilize this information for the good of human beings because it is not logically arranged in our minds. Against this background, the author tries to define documentation, stating that we have today an ocean of information and knowledge about various things, living or dead: plants, animals or human beings; geographical conditions or changing weather and environment. What needs to be done is to extract the relevant knowledge and information required to enrich the subject. The author compares documentation with the churning of milk to extract butter. Documentation, in fact, is the churning of an ocean of information to extract the specific, most appropriate, relevant and defined information and knowledge related to a particular subject. Besides discussing the definition of documentation, the paper highlights the areas of Naturopathy in urgent need of proper documentation. The paper also discusses the present status of Naturopathy in India, proposes short-term and long-term goals to be achieved, and plans strategies for achieving them. The most important aspect of the paper is a due understanding of the limitations of Naturopathy, together with a constant effort to improve it with the growth made in various disciplines of science so far.
Cutting Silica Aerogel for Particle Extraction
NASA Technical Reports Server (NTRS)
Tsou, P.; Brownlee, D. E.; Glesias, R.; Grigoropoulos, C. P.; Weschler, M.
2005-01-01
The detailed laboratory analyses of extraterrestrial particles have revolutionized our knowledge of planetary bodies in the last three decades. This knowledge of the chemical composition, morphology, mineralogy, and isotopics of particles cannot be provided by remote sensing. In order to acquire this detailed information in the laboratory, the samples need to be intact and unmelted. Such intact capture of hypervelocity particles was developed in 1996. Subsequently, silica aerogel was introduced as the preferred medium for intact capture of hypervelocity particles and was later shown to be particularly suitable for the space environment. STARDUST, the 4th NASA Discovery mission to capture samples from 81P/Wild 2 and contemporary interstellar dust, is the culmination of these new technologies. In early laboratory experiments of launching hypervelocity projectiles into aerogel, there was the need to cut aerogel to isolate or extract captured particles/tracks. This is especially challenging for space captures, since there will be many particles/tracks of widely ranging scales closely located, even collocated. It is critical to isolate and extract one particle without compromising its neighbors, since the full significance of a particle is not known until it is extracted and analyzed. To date, three basic techniques have been explored: mechanical cutting, laser cutting and ion beam milling. We report the current findings.
Self-Supervised Chinese Ontology Learning from Online Encyclopedias
Shao, Zhiqing; Ruan, Tong
2014-01-01
Constructing an ontology manually is a time-consuming, error-prone, and tedious task. We present SSCO, a self-supervised learning based Chinese ontology, which contains about 255 thousand concepts, 5 million entities, and 40 million facts. We explore the three largest online Chinese encyclopedias for ontology learning and describe how to transfer the structured knowledge in encyclopedias, including article titles, category labels, redirection pages, taxonomy systems, and InfoBox modules, into ontological form. In order to avoid the errors in encyclopedias and enrich the learnt ontology, we also apply some machine learning based methods. First, we prove statistically and experimentally that the self-supervised machine learning method is practicable for Chinese relation extraction (at least for synonymy and hyponymy), and train some self-supervised models (SVMs and CRFs) for synonymy extraction, concept-subconcept relation extraction, and concept-instance relation extraction; the advantages of our methods are that all training examples are automatically generated from the structural information of encyclopedias and a few general heuristic rules. Finally, we evaluate SSCO in two aspects, scale and precision; manual evaluation results show that the ontology has excellent precision, and high coverage is concluded by comparing SSCO with other famous ontologies and knowledge bases; the experiment results also indicate that the self-supervised models obviously enrich SSCO. PMID:24715819
Self-supervised Chinese ontology learning from online encyclopedias.
Hu, Fanghuai; Shao, Zhiqing; Ruan, Tong
2014-01-01
Constructing an ontology manually is a time-consuming, error-prone, and tedious task. We present SSCO, a self-supervised learning based Chinese ontology, which contains about 255 thousand concepts, 5 million entities, and 40 million facts. We explore the three largest online Chinese encyclopedias for ontology learning and describe how to transfer the structured knowledge in encyclopedias, including article titles, category labels, redirection pages, taxonomy systems, and InfoBox modules, into ontological form. In order to avoid the errors in encyclopedias and enrich the learnt ontology, we also apply some machine learning based methods. First, we prove statistically and experimentally that the self-supervised machine learning method is practicable for Chinese relation extraction (at least for synonymy and hyponymy), and train some self-supervised models (SVMs and CRFs) for synonymy extraction, concept-subconcept relation extraction, and concept-instance relation extraction; the advantages of our methods are that all training examples are automatically generated from the structural information of encyclopedias and a few general heuristic rules. Finally, we evaluate SSCO in two aspects, scale and precision; manual evaluation results show that the ontology has excellent precision, and high coverage is concluded by comparing SSCO with other famous ontologies and knowledge bases; the experiment results also indicate that the self-supervised models obviously enrich SSCO.
Antimicrobial thin films based on ayurvedic plants extracts embedded in a bioactive glass matrix
NASA Astrophysics Data System (ADS)
Floroian, L.; Ristoscu, C.; Candiani, G.; Pastori, N.; Moscatelli, M.; Mihailescu, N.; Negut, I.; Badea, M.; Gilca, M.; Chiesa, R.; Mihailescu, I. N.
2017-09-01
Ayurvedic medicine is one of the oldest medical systems. It is an example of a coherent traditional system which has a time-tested and precise algorithm for medicinal plant selection, based on several ethnopharmacophore descriptors, knowledge of which enables the user to choose the optimal plant for the treatment of a certain pathology. This work aims to link traditional knowledge with biomedical science by using traditional ayurvedic plant extracts with antimicrobial effect in the form of thin films for implant protection. We report on the transfer of novel composites of bioactive glass mixed with antimicrobial plant extracts and polymer, deposited by matrix-assisted pulsed laser evaporation into uniform thin layers onto stainless steel implant-like surfaces. The comprehensive characterization of the deposited films was performed by complementary analyses: Fourier transformed infrared spectroscopy, glow discharge optical emission spectroscopy, scanning electron microscopy, atomic force microscopy, electrochemical impedance spectroscopy, UV-VIS absorption spectroscopy and antimicrobial tests. The results emphasize the multifunctionality of these coatings, which allows halting the leakage of metal and metal oxides into the biological fluids and eventually to inner organs (by the use of the polymer), speeding up osseointegration (due to the use of bioactive glass), exerting antimicrobial effects (by the use of ayurvedic plant extracts) and decreasing the implant price (by the use of cheaper stainless steel).
ERIC Educational Resources Information Center
Sathick, Javubar; Venkat, Jaya
2015-01-01
Mining social web data is a challenging task and finding user interest for personalized and non-personalized recommendation systems is another important task. Knowledge sharing among web users has become crucial in determining usage of web data and personalizing content in various social websites as per the user's wish. This paper aims to design a…
Wang, Huili; Gao, Ming; Wang, Mei; Zhang, Rongbo; Wang, Wenwei; Dahlgren, Randy A; Wang, Xuedong
2015-03-15
Herein, we developed a novel integrated device to perform phase separation based on ultrasound-assisted salt-induced liquid-liquid microextraction for determination of five fluoroquinolones (FQs) in human body fluids. The integrated device consisted of three simple HDPE components used to separate the extraction solvent from the aqueous phase prior to retrieving the extractant. A series of extraction parameters were optimized using the response surface method based on a central composite design. Optimal conditions consisted of 945 μL acetone extraction solvent, pH 2.1, 4.1 min stir time, 5.9 g Na2SO4, and 4.0 min centrifugation. Under optimized conditions, the limits of detection (at S/N=3) were 0.12-0.66 μg/L, the linear range was 0.5-500 μg/L and recoveries were 92.6-110.9% for the five FQs extracted from plasma and urine. The proposed method has several advantages, such as easy construction from inexpensive materials, high extraction efficiency, short extraction time, and compatibility with HPLC analysis. Thus, this method shows excellent prospects for sample pretreatment and analysis of FQs in human body fluids. Copyright © 2015 Elsevier B.V. All rights reserved.
Antioxidant Activity of a Red Lentil Extract and Its Fractions
Amarowicz, Ryszard; Estrella, Isabell; Hernández, Teresa; Dueñas, Montserrat; Troszyńska, Agnieszka; Kosińska, Agnieszka; Pegg, Ronald B.
2009-01-01
Phenolic compounds were extracted from red lentil seeds using 80% (v/v) aqueous acetone. The crude extract was applied to a Sephadex LH-20 column. Fraction 1, consisting of sugars and low-molecular-weight phenolics, was eluted from the column by ethanol. Fraction 2, consisting of tannins, was obtained using acetone-water (1:1; v/v) as the mobile phase. Phenolic compounds present in the crude extract and its fractions demonstrated antioxidant and antiradical activities as revealed from studies using a β-carotene-linoleate model system, the total antioxidant activity (TAA) method, the DPPH radical-scavenging activity assay, and a reducing power evaluation. Results of these assays showed the highest values when tannins (fraction 2) were tested. For instance, the TAA of the tannin fraction was 5.85 μmol Trolox® eq./mg, whereas the crude extract and fraction 1 showed 0.68 and 0.33 μmol Trolox® eq./mg, respectively. The content of total phenolics in fraction 2 was the highest (290 mg/g); the tannin content, determined using the vanillin method and expressed as absorbance units at 500 nm per 1 g, was 129. There were 24 compounds identified in the crude extract using an HPLC-ESI-MS method: quercetin diglycoside, catechin, digallate procyanidin, and p-hydroxybenzoic were the dominant phenolics in the extract. PMID:20054484
Bioactive glass ions as strong enhancers of osteogenic differentiation in human adipose stem cells.
Ojansivu, Miina; Vanhatupa, Sari; Björkvik, Leena; Häkkänen, Heikki; Kellomäki, Minna; Autio, Reija; Ihalainen, Janne A; Hupa, Leena; Miettinen, Susanna
2015-07-01
Bioactive glasses are known for their ability to induce osteogenic differentiation of stem cells. To elucidate the mechanism of the osteoinductivity in more detail, we studied whether ionic extracts prepared from a commercial glass S53P4 and from three experimental glasses (2-06, 1-06 and 3-06) are alone sufficient to induce osteogenic differentiation of human adipose stem cells. Cells were cultured using basic medium or osteogenic medium as extract basis. Our results indicate that cells stay viable in all the glass extracts for the whole culturing period, 14 days. At 14 days the mineralization in osteogenic medium extracts was excessive compared to the control. Parallel to the increased mineralization we observed a decrease in the cell amount. Raman and Laser Induced Breakdown Spectroscopy analyses confirmed that the mineral consisted of calcium phosphates. Consistently, the osteogenic medium extracts also increased osteocalcin production and collagen Type-I accumulation in the extracellular matrix at 13 days. Of the four osteogenic medium extracts, 2-06 and 3-06 induced the best responses of osteogenesis. However, regardless of the enhanced mineral formation, alkaline phosphatase activity was not promoted by the extracts. The osteogenic medium extracts could potentially provide a fast and effective way to differentiate human adipose stem cells in vitro. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
When does prior knowledge disproportionately benefit older adults’ memory?
Badham, Stephen P.; Hay, Mhairi; Foxon, Natasha; Kaur, Kiran; Maylor, Elizabeth A.
2016-01-01
Material consistent with knowledge/experience is generally more memorable than material inconsistent with knowledge/experience – an effect that can be more extreme in older adults. Four experiments investigated knowledge effects on memory with young and older adults. Memory for familiar and unfamiliar proverbs (Experiment 1) and for common and uncommon scenes (Experiment 2) showed similar knowledge effects across age groups. Memory for person-consistent and person-neutral actions (Experiment 3) showed a greater benefit of prior knowledge in older adults. For cued recall of related and unrelated word pairs (Experiment 4), older adults benefited more from prior knowledge only when it provided uniquely useful additional information beyond the episodic association itself. The current data and literature suggest that prior knowledge has the age-dissociable mnemonic properties of (1) improving memory for the episodes themselves (age invariant), and (2) providing conceptual information about the tasks/stimuli extrinsically to the actual episodic memory (particularly aiding older adults). PMID:26473767
Refractive index variance of cells and tissues measured by quantitative phase imaging.
Shan, Mingguang; Kandel, Mikhail E; Popescu, Gabriel
2017-01-23
The refractive index distribution of cells and tissues governs their interaction with light and can report on morphological modifications associated with disease. Through intensity-based measurements, refractive index information can be extracted only via scattering models that approximate light propagation. As a result, current knowledge of refractive index distributions across various tissues and cell types remains limited. Here we use quantitative phase imaging and the statistical dispersion relation (SDR) to extract information about the refractive index variance in a variety of specimens. Due to the phase-resolved measurement in three dimensions, our approach yields refractive index results without prior knowledge about the tissue thickness. With the recent progress in quantitative phase imaging systems, we anticipate that using the SDR will become routine in assessing tissue optical properties.
A tutorial on information retrieval: basic terms and concepts
Zhou, Wei; Smalheiser, Neil R; Yu, Clement
2006-01-01
This informal tutorial is intended for investigators and students who would like to understand the workings of information retrieval systems, including the most frequently used search engines: PubMed and Google. Having a basic knowledge of the terms and concepts of information retrieval should improve the efficiency and productivity of searches. As well, this knowledge is needed in order to follow current research efforts in biomedical information retrieval and text mining that are developing new systems not only for finding documents on a given topic, but also for extracting and integrating knowledge across documents. PMID:16722601
Tassiopoulos, Katherine K; Seage, George R; Sam, Noel E; Ao, Trong T H; Masenga, Elisante J; Hughes, Michael D; Kapiga, Saidi H
2006-07-01
Understanding psychosocial, sexual behavior and knowledge differences between never, inconsistent and consistent condom users can improve interventions to increase condom use in resource-poor countries, but they have not been adequately studied. We examined these differences in a cohort of 961 female hotel and bar workers in Moshi, Tanzania. Forty-nine percent of women reported no condom use; 39% reported inconsistent use, and 12% reported consistent use. Women with multiple sexual partners in the past five years were less likely to be consistent rather than inconsistent users as were women who had ever exchanged sex for gifts or money. Inconsistent users had higher condom knowledge and higher perceived acceptability of condom use than did never users, but they did not differ from consistent users by these factors. There are important differences between women by level of condom use. These findings can help inform interventions to increase condom use.
Chemat, Farid; Rombaut, Natacha; Sicaire, Anne-Gaëlle; Meullemiestre, Alice; Fabiano-Tixier, Anne-Sylvie; Abert-Vian, Maryline
2017-01-01
This review presents a complete picture of current knowledge on ultrasound-assisted extraction (UAE) in food ingredients and products, nutraceutical, cosmetic, pharmaceutical and bioenergy applications. It provides the necessary theoretical background and some details about extraction by ultrasound, the techniques and their combinations, the mechanisms (fragmentation, erosion, capillarity, detexturation, and sonoporation), applications from laboratory to industry, security, and environmental impacts. In addition, the ultrasound extraction procedures and the important parameters influencing its performance are also included, together with the advantages and the drawbacks of each UAE technique. Ultrasound-assisted extraction is a research topic which affects several fields of modern plant-based chemistry. All the reported applications have shown that ultrasound-assisted extraction is a green and economically viable alternative to conventional techniques for food and natural products. The main benefits are a decrease of extraction and processing time, the amount of energy and solvents used, unit operations, and CO2 emissions. Copyright © 2016 Elsevier B.V. All rights reserved.
Sánchez-Muñoz, María Alejandra; Valdez-Solana, Mónica Andrea; Avitia-Domínguez, Claudia; Ramírez-Baca, Patricia; Candelas-Cadillo, María Guadalupe; Aguilera-Ortíz, Miguel; Meza-Velázquez, Jorge Armando; Téllez-Valencia, Alfredo; Sierra-Campos, Erick
2017-01-01
In this study, the potential use of Moringa oleifera as a clotting agent for different types of milk (whole, skim, and soy milk) was investigated. M. oleifera seed extract showed high milk-clotting activity, followed by flower extract. The specific clotting activity of seed extract was 200 times higher than that of flower extract. Seed extract is composed of four main protein bands (43.6, 32.2, 19.4, and 16.3 kDa). Caseinolytic activity, assessed by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and tyrosine quantification, showed a high extent of casein degradation using M. oleifera seed extract. Soy milk cheese was soft and creamy, while skim milk cheese was hard and crumbly. According to these results, it is concluded that seed extract of M. oleifera generates suitable milk-clotting activity for cheesemaking. To our knowledge, this study is the first to report comparative data on M. oleifera milk-clotting activity between different types of soy milk. PMID:28783066
Sánchez-Muñoz, María Alejandra; Valdez-Solana, Mónica Andrea; Avitia-Domínguez, Claudia; Ramírez-Baca, Patricia; Candelas-Cadillo, María Guadalupe; Aguilera-Ortíz, Miguel; Meza-Velázquez, Jorge Armando; Téllez-Valencia, Alfredo; Sierra-Campos, Erick
2017-08-05
In this study, the potential use of Moringa oleifera as a clotting agent for different types of milk (whole, skim, and soy milk) was investigated. M. oleifera seed extract showed high milk-clotting activity, followed by flower extract. The specific clotting activity of seed extract was 200 times higher than that of flower extract. Seed extract is composed of four main protein bands (43.6, 32.2, 19.4, and 16.3 kDa). Caseinolytic activity, assessed by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) and tyrosine quantification, showed a high extent of casein degradation using M. oleifera seed extract. Soy milk cheese was soft and creamy, while skim milk cheese was hard and crumbly. According to these results, it is concluded that seed extract of M. oleifera generates suitable milk-clotting activity for cheesemaking. To our knowledge, this study is the first to report comparative data on M. oleifera milk-clotting activity between different types of soy milk.
Li, Wei; Zhao, Li-Chun; Sun, Yin-Shi; Lei, Feng-Jie; Wang, Zi; Gui, Xiong-Bin; Wang, Hui
2012-01-01
In this work, pressurized liquid extraction (PLE) of three acetophenones (4-hydroxyacetophenone, baishouwubenzophenone, and 2,4-dihydroxyacetophenone) from Cynanchum bungei (ACB) was investigated. The optimal conditions for extraction of ACB were obtained using a Box-Behnken design, consisting of 17 experimental points, as follows: ethanol (100%) as the extraction solvent at a temperature of 120 °C and an extraction pressure of 1500 psi, using one extraction cycle with a static extraction time of 17 min. The extracted samples were analyzed by high-performance liquid chromatography using a UV detector. Under this optimal condition, the experimental values agreed with the values predicted by analysis of variance. The ACB extraction yield with optimal PLE was higher than that obtained by Soxhlet extraction and heat-reflux extraction methods. The results suggest that the PLE method provides a good alternative for acetophenone extraction. PMID:23203079
Analysis of extractables from one euphorbia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nemethy, E.K.; Otvos, J.W.; Calvin, M.
1979-12-01
Chemical analyses have been made of the heptane-extractable material of Euphorbia lathyris, a plant which has been proposed as an energy farm candidate. The heptane extract is 4 to 5% of the dry plant weight and has a heat value of approx. 18 × 10³ Btu/lb. This reduced photosynthetic material consists almost entirely of polycyclic triterpenoids. 2 figures, 4 tables.
NASA Astrophysics Data System (ADS)
Alshehhi, Rasha; Marpu, Prashanth Reddy
2017-04-01
Extraction of road networks in urban areas from remotely sensed imagery plays an important role in many urban applications (e.g. road navigation, geometric correction of urban remote sensing images, updating geographic information systems, etc.). It is normally difficult to accurately differentiate road from its background due to the complex geometry of the buildings and the acquisition geometry of the sensor. In this paper, we present a new method for extracting roads from high-resolution imagery based on hierarchical graph-based image segmentation. The proposed method consists of: 1. Extracting features (e.g., using Gabor and morphological filtering) to enhance the contrast between road and non-road pixels, 2. Graph-based segmentation consisting of (i) Constructing a graph representation of the image based on initial segmentation and (ii) Hierarchical merging and splitting of image segments based on color and shape features, and 3. Post-processing to remove irregularities in the extracted road segments. Experiments are conducted on three challenging datasets of high-resolution images to demonstrate the proposed method and compare with other similar approaches. The results demonstrate the validity and superior performance of the proposed method for road extraction in urban areas.
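A loose sketch of the first two stages of the workflow (feature enhancement and graph-based segmentation), assuming a small Gabor filter bank and scikit-image's Felzenszwalb segmenter as stand-ins; the parameters and the random test image are placeholders, and the hierarchical merge/split and post-processing stages are not reproduced.

```python
# Enhance elongated (road-like) structures with a Gabor filter bank, then run
# a graph-based segmentation on the response. Placeholder parameters and a
# random image stand in for a real satellite tile.
import numpy as np
from skimage.filters import gabor
from skimage.segmentation import felzenszwalb

rng = np.random.default_rng(0)
image = rng.random((128, 128))          # stand-in for a grayscale satellite tile

# maximum response over four orientations highlights elongated structures
responses = [gabor(image, frequency=0.2, theta=t)[0]
             for t in np.linspace(0, np.pi, 4, endpoint=False)]
road_feature = np.max(responses, axis=0)

# graph-based segmentation of the enhanced image
segments = felzenszwalb(road_feature, scale=50, sigma=0.8, min_size=20)
print("number of segments:", segments.max() + 1)
```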
Boehm, A B; Griffith, J; McGee, C; Edge, T A; Solo-Gabriele, H M; Whitman, R; Cao, Y; Getrich, M; Jay, J A; Ferguson, D; Goodwin, K D; Lee, C M; Madison, M; Weisberg, S B
2009-11-01
The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, one rinse step and a 10:1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Method standardization will improve the understanding of how sands affect surface water quality.
Bussey-Smith, Kristin L; Rossen, Roger D
2007-06-01
Educating patients with asthma about the pathophysiology and treatment of their disease is recommended. In recent years, several computer programs have been developed to provide this education. These programs take advantage of the population's increasing skill with computers and the growth of the Internet as a source of health care information. To evaluate the effectiveness of published interactive computerized asthma patient education programs (CAPEPs) that have been subjected to randomized controlled trials (RCTs). The PubMed, ERIC, CINAHL, PsycINFO, and Clinicaltrials.gov databases were searched (through October 3, 2005) using the following terms: asthma, patient, education, interactive, and computer. RCTs in English that evaluated the effect of an interactive CAPEP on the following primary end points were included in the study: hospitalizations, acute care visits, rescue inhaler use, or lung function. Secondary end points included asthma knowledge and symptoms. Trials were screened by title and abstract before full text review. Two independent investigators used a standardized data extraction form to identify the articles chosen for full review. Nine of 406 citations met inclusion criteria. Four CAPEPs were computer games, 7 only studied children, and 4 focused on urban populations. One study each showed that the intervention reduced the number of hospitalizations, acute care visits, or rescue inhaler use. Two studies reported lung function improvements. Four studies showed improvement in asthma knowledge, and 5 studies reported improvements in symptoms. Although interactive CAPEPs may improve patient asthma knowledge and symptoms, their effect on objective clinical outcomes is less consistent.
Managing biological networks by using text mining and computer-aided curation
NASA Astrophysics Data System (ADS)
Yu, Seok Jong; Cho, Yongseong; Lee, Min-Ho; Lim, Jongtae; Yoo, Jaesoo
2015-11-01
In order to understand a biological mechanism in a cell, a researcher must collect a huge number of protein interactions, with experimental data, from experiments and the literature. Text mining systems that extract biological interactions from papers have been used to construct biological networks for a few decades. Even though text mining of the literature is necessary to construct a biological network, few systems with a text mining tool are available for biologists who want to construct their own biological networks. We have developed a biological network construction system called BioKnowledge Viewer that can generate a biological interaction network by using a text mining tool and biological taggers. It also provides Boolean simulation software as a biological modeling system to simulate the model that is made with the text mining tool. A user can download PubMed articles and construct a biological network by using the Multi-level Knowledge Emergence Model (KMEM), MetaMap, and A Biomedical Named Entity Recognizer (ABNER) as text mining tools. To evaluate the system, we constructed an aging-related biological network that consists of 9,415 nodes (genes) by using manual curation. With network analysis, we found that several genes, including JNK, AP-1, and BCL-2, were highly related in the aging-related biological network. We provide a semi-automatic curation environment so that users can obtain a graph database for managing text mining results generated in the server system and can navigate the network with BioKnowledge Viewer, which is freely available at http://bioknowledgeviewer.kisti.re.kr.
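A tiny sketch of the Boolean simulation idea: once interactions have been extracted from text, a signed network can be stepped forward as a synchronous Boolean model. The three-gene rules below are invented for illustration and are not the curated aging network described above.

```python
# Synchronous update of a toy Boolean network; the rules are invented
# illustrations, not the curated aging-related network.
def step(state):
    return {
        "JNK": state["stress"],        # stress activates JNK (toy rule)
        "AP1": state["JNK"],           # JNK activates AP-1 (toy rule)
        "BCL2": not state["AP1"],      # AP-1 represses BCL-2 (toy rule)
        "stress": state["stress"],     # external input, held fixed
    }

state = {"stress": True, "JNK": False, "AP1": False, "BCL2": True}
for t in range(4):
    print(t, state)
    state = step(state)
```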
Rosen, Brittany L; Shepard, Allie; Kahn, Jessica A
2018-03-01
Clinicians' recommendation for the human papillomavirus (HPV) vaccine appears to be an important driver of parental decisions about vaccination. Our aim was to synthesize the best available evidence exploring the perceptions and experiences regarding HPV vaccination, from the perspective of the US clinician. We conducted a comprehensive literature search of Academic Search Complete, CINAHL Plus, Communication & Mass Media Complete, Consumer Health Complete (EBSCOhost), ERIC, Health and Psychosocial Instruments, MEDLINE with full text, and PsycINFO databases. We identified 60 eligible articles: 48 quantitative and 12 qualitative. We extracted the following information: study purpose, use of theory, location, inclusion criteria, and health care provider classification. Results were organized into 5 categories: 1) clinicians' knowledge and beliefs about HPV and the HPV vaccine, 2) clinicians' attitudes and beliefs about recommending HPV vaccines, 3) clinicians' intention to recommend HPV vaccines, 4) clinicians' professional practices regarding HPV vaccination, and 5) patient HPV vaccination rates. Although clinicians were generally supportive of HPV vaccination, there was a discrepancy between clinicians' intentions, recommendation practices, and patient vaccination rates. Studies reported that clinicians tended not to provide strong, consistent recommendations, and were more likely to recommend HPV vaccines to girls versus boys and to older versus younger adolescents. Analyses revealed a number of facilitating factors and barriers to HPV vaccination at the clinician, parent/patient, and systems levels, including clinician knowledge, clinician beliefs, and office procedures that promote vaccination. This review provides an evidence base for multilevel interventions to improve clinician HPV vaccine recommendations and vaccination rates. Copyright © 2017 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
Zhang, Guang Lan; Riemer, Angelika B.; Keskin, Derin B.; Chitkushev, Lou; Reinherz, Ellis L.; Brusic, Vladimir
2014-01-01
High-risk human papillomaviruses (HPVs) are the causes of many cancers, including cervical, anal, vulvar, vaginal, penile and oropharyngeal. To facilitate diagnosis, prognosis and characterization of these cancers, it is necessary to make full use of the immunological data on HPV available through publications, technical reports and databases. These data vary in granularity, quality and complexity. The extraction of knowledge from the vast amount of immunological data using data mining techniques remains a challenging task. To support integration of data and knowledge in virology and vaccinology, we developed a framework called KB-builder to streamline the development and deployment of web-accessible immunological knowledge systems. The framework consists of seven major functional modules, each facilitating a specific aspect of the knowledgebase construction process. Using KB-builder, we constructed the Human Papillomavirus T cell Antigen Database (HPVdb). It contains 2781 curated antigen entries of antigenic proteins derived from 18 genotypes of high-risk HPV and 18 genotypes of low-risk HPV. The HPVdb also catalogs 191 verified T cell epitopes and 45 verified human leukocyte antigen (HLA) ligands. Primary amino acid sequences of HPV antigens were collected and annotated from the UniProtKB. T cell epitopes and HLA ligands were collected from data mining of scientific literature and databases. The data were subject to extensive quality control (redundancy elimination, error detection and vocabulary consolidation). A set of computational tools for an in-depth analysis, such as sequence comparison using BLAST search, multiple alignments of antigens, classification of HPV types based on cancer risk, T cell epitope/HLA ligand visualization, T cell epitope/HLA ligand conservation analysis and sequence variability analysis, has been integrated within the HPVdb. Predicted Class I and Class II HLA binding peptides for 15 common HLA alleles are included in this database as putative targets. HPVdb is a knowledge-based system that integrates curated data and information with tailored analysis tools to facilitate data mining for HPV vaccinology and immunology. To our best knowledge, HPVdb is a unique data source providing a comprehensive list of HPV antigens and peptides. Database URL: http://cvc.dfci.harvard.edu/hpv/ PMID:24705205
The effectiveness of knowledge translation strategies used in public health: a systematic review
2012-01-01
Background Literature related to the effectiveness of knowledge translation (KT) strategies used in public health is lacking. The capacity to seek, analyze, and synthesize evidence-based information in public health is linked to greater success in making policy choices that have the best potential to yield positive outcomes for populations. The purpose of this systematic review is to identify the effectiveness of KT strategies used to promote evidence-informed decision making (EIDM) among public health decision makers. Methods A search strategy was developed to identify primary studies published between 2000 and 2010. Studies were obtained from multiple electronic databases (CINAHL, Medline, EMBASE, and the Cochrane Database of Systematic Reviews). Searches were supplemented by hand searching and checking the reference lists of included articles. Two independent review authors screened studies for relevance, assessed methodological quality of relevant studies, and extracted data from studies using standardized tools. Results After removal of duplicates, the search identified 64,391 titles related to KT strategies. Following title and abstract review, 346 publications were deemed potentially relevant, of which 5 met all relevance criteria on full-text screening. The included publications were of moderate quality and consisted of five primary studies (four randomized controlled trials and one interrupted time series analysis). Results were synthesized narratively. Simple or single KT strategies, including tailored and targeted messaging, were shown in some circumstances to be as effective at changing practice as complex, multifaceted ones. Multifaceted KT strategies led to changes in knowledge but not practice. Knowledge translation strategies shown to be less effective were passive and included access to registries of pre-processed research evidence or print materials. While knowledge brokering did not have a significant effect overall, results suggested that it did have a positive effect on organizations that at baseline placed little value on evidence-informed decision making. Conclusions No single KT strategy was shown to be effective in all contexts. Conclusions about interventions cannot be drawn without considering the characteristics of the knowledge being transferred, the providers, the participants, and the organizations. PMID:22958371
Mulinacci, N; Innocenti, M; Bellumori, M; Giaccherini, C; Martini, V; Michelozzi, M
2011-07-15
Rosmarinus officinalis L. is widely known for its numerous applications in the food field, but also for the increasing interest in its pharmaceutical properties. Two groups of compounds are mainly responsible for the biological activities of the plant: the volatile fraction and the phenolic constituents. The latter group mainly comprises rosmarinic acid, a flavonoid fraction and several diterpenoid compounds structurally derived from carnosic acid. The aim of our work was to optimize the extractive and analytical procedure for the determination of all the phenolic constituents. Moreover, the chemical stability of the main phenols, depending on the storage conditions, the different drying procedures and the extraction solvent, has been evaluated. This method allowed up to 29 different constituents to be detected simultaneously in a relatively short time. The described procedure has the advantage of being able to detect and quantify several classes of compounds, among them numerous minor flavonoids, thus contributing to improving knowledge of the plant. The findings from this study demonstrate that storing the raw fresh material in the freezer is not appropriate for rosemary, mainly because rosmarinic acid rapidly disappears during the freezing/thawing process. Regarding the flavonoid fraction, consistent decreases were observed in samples dried at room temperature compared with the fresh leaf. Rosmarinic acid also appeared very sensitive to mild drying processes. The total diterpenoid content undergoes little change when the leaves are freeze-dried or frozen, and only limited losses are observed when working on leaves dried at room temperature. Nevertheless, it should be taken into account that this fraction is very sensitive to the presence of water during extraction, which favors the conversion of carnosic acid towards its oxidized form, carnosol. From our findings, it appears evident that when evaluating the phenolic content of rosemary leaves, several factors, mainly the type of storage, the drying process and the extraction method, should be carefully taken into account because they can induce partial losses of the antioxidant components. Copyright © 2011 Elsevier B.V. All rights reserved.
Leveraging Semantic Knowledge in IRB Databases to Improve Translation Science
Hurdle, John F.; Botkin, Jeffery; Rindflesch, Thomas C.
2007-01-01
We introduce the notion that research administrative databases (RADs), such as those increasingly used to manage information flow in the Institutional Review Board (IRB), offer a novel, useful, and mine-able data source overlooked by informaticists. As a proof of concept, using an IRB database we extracted all titles and abstracts from system startup through January 2007 (n=1,876); formatted these in a pseudo-MEDLINE format; and processed them through the SemRep semantic knowledge extraction system. Even though SemRep is tuned to find semantic relations in MEDLINE citations, we found that it performed comparably well on the IRB texts. When adjusted to eliminate non-healthcare IRB submissions (e.g., economic and education studies), SemRep extracted an average of 7.3 semantic relations per IRB abstract (compared to an average of 11.1 for MEDLINE citations) with a precision of 70% (compared to 78% for MEDLINE). We conclude that RADs, as represented by IRB data, are mine-able with existing tools, but that performance will improve as these tools are tuned for RAD structures. PMID:18693856
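Casting titles and abstracts into a pseudo-MEDLINE layout, as described above, is a small formatting step. The sketch below is a hedged illustration: the TI/AB field tags follow the familiar MEDLINE convention, but the exact record layout SemRep expects should be checked against its documentation, and the IRB rows shown are invented.

def to_pseudo_medline(records):
    """Format (id, title, abstract) triples as MEDLINE-style text records."""
    chunks = []
    for rec_id, title, abstract in records:
        chunks.append("\n".join([
            f"PMID- {rec_id}",
            f"TI  - {title.strip()}",
            f"AB  - {abstract.strip()}",
        ]))
    return "\n\n".join(chunks) + "\n"

# Invented example rows standing in for real IRB titles and abstracts.
irb_records = [
    ("IRB0001", "A pilot study of medication adherence", "We propose to enroll adult patients ..."),
]
print(to_pseudo_medline(irb_records))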
NASA Technical Reports Server (NTRS)
Jahnsen, Vilhelm J. (Inventor); Campen, Jr., Charles F. (Inventor)
1980-01-01
A sample processor and method for the automatic extraction of families of compounds, known as extracts, from liquid and/or homogenized solid samples are disclosed. The sample processor includes a tube support structure which supports a plurality of extraction tubes, each containing a sample from which families of compounds are to be extracted. The support structure is automatically movable with respect to one or more extraction stations, so that while each tube is at a station a solvent system, consisting of a solvent and reagents, is introduced into it. As a result, an extract is automatically drawn from the tube. The sample processor includes an arrangement for directing the different extracts from each tube to different containers, or for directing similar extracts from different tubes to the same utilization device.
Marquine, María J.; Grilli, Matthew D.; Rapcsak, Steven Z.; Kaszniak, Alfred W.; Ryan, Lee; Walther, Katrin; Glisky, Elizabeth L.
2016-01-01
Functional neuroimaging has revealed that in healthy adults retrieval of personal trait knowledge is associated with increased activation in the medial prefrontal cortex (mPFC). Separately, neuropsychology has shown that the self-referential nature of memory can be disrupted in individuals with mPFC lesions. However, it remains unclear whether damage to the mPFC impairs retrieval of personal trait knowledge. Therefore, in this neuropsychological case study we investigated the integrity of personal trait knowledge in J.S., an individual who sustained bilateral damage to the mPFC as a result of an anterior communicating artery aneurysm. We measured both accuracy and consistency of J.S.’s personal trait knowledge as well as his trait knowledge of another, frequently seen person, and compared his performance to a group of healthy adults. Findings revealed that J.S. had severely impaired accuracy and consistency of his personal trait knowledge relative to control participants. In contrast, J.S.’s accuracy and consistency of other-person trait knowledge was intact in comparison to control participants. Moreover, J.S. showed a normal positivity bias in his trait ratings. These results, albeit based on a single case, implicate the mPFC as critical for retrieval of personal trait knowledge. Findings also cast doubt on the likelihood that the mPFC, in particular the ventral mPFC, is necessary for storage and retrieval of trait knowledge of other people. Therefore, this case study adds to a growing body of evidence that mPFC damage can disrupt the link between self and memory. PMID:27342256
Shahraki, Jafar; Zareh, Mona; Kamalinejad, Mohammad; Pourahmad, Jalal
2014-01-01
This study was conducted to evaluate the cytoprotection of various extracts and bioactive compounds found in Pistacia vera against cytotoxicity, ROS formation, lipid peroxidation, protein carbonylation, and mitochondrial and lysosomal membrane damage in cell toxicity models of diabetes-related carbonyl (glyoxal) and oxidative (hydroperoxide) stress. Methanol, water and ethyl acetate were used to prepare crude pistachio extracts, which were then screened for in-vitro cytoprotection of freshly isolated rat hepatocytes against these toxins. The order of protection by Pistacia vera extracts against both hydroperoxide-induced oxidative stress (ROS formation) and glyoxal-induced protein carbonylation was: pistachio methanolic extract > pistachio water extract, gallic acid, catechin > α-tocopherol and pistachio ethyl acetate extract. Finally, because the methanolic extract provided greater protection than even sole pretreatment with gallic acid, catechin or α-tocopherol, we suggest that cytoprotection depends on the variety of polar and non-polar compounds found in the methanolic extract, and it is likely that multiple cytoprotective mechanisms act against oxidative and carbonyl-induced cytotoxicity. To our knowledge, we are the first to report the cytoprotective activity of Pistacia vera extracts against the oxidative and carbonyl stress seen in a type 2 diabetes hepatocyte model. PMID:25587316
Novel Approaches to Extraction Methods in Recovery of Capsaicin from Habanero Pepper (CNPH 15.192).
Martins, Frederico S; Borges, Leonardo L; Ribeiro, Claudia S C; Reifschneider, Francisco J B; Conceição, Edemilson C
2017-07-01
The objective of this study was to compare three capsaicin extraction methods from Habanero pepper, CNPH 15.192: Soxhlet, Ultrasound-assisted Extraction (UAE), and Shaker-assisted Extraction (SAE). The parameters evaluated were alcohol strength, extraction time, and solid-solvent ratio, using response surface methodology (RSM). All three parameters were significant (p < 0.05) for UAE, while solvent concentration and extraction time were significant for SAE. The optimum conditions for capsaicin UAE and SAE were similar: 95% alcohol, 30 minutes, and a solid-liquid ratio of 2 mg/mL. Soxhlet extraction increased the yield by 10-25%; however, long extraction times (45 minutes) degraded 2% of the capsaicin. The extraction of capsaicin was influenced both by the extraction method and by the operating conditions chosen. The optimized conditions provided savings of time, solvent, and herbal material. Prudent choice of the extraction method is essential to ensure optimal yield of extract, thereby making the study relevant and the knowledge gained useful for further exploitation and application of this resource. Habanero pepper, line CNPH 15.192, possesses higher levels of capsaicin than other species. Higher ethanolic strength is more suitable for obtaining higher levels of capsaicin. The Box-Behnken design proved useful for exploring the best conditions for ultrasound-assisted extraction of capsaicin. Abbreviations used: UAE: Ultrasound-assisted Extraction; SAE: Shaker-assisted Extraction.
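The response-surface step described above amounts to fitting a second-order polynomial to yields measured over the design points and locating its maximum. The sketch below assumes hypothetical Box-Behnken data in coded units for the three UAE factors and uses ordinary least squares rather than the authors' RSM software.

import numpy as np

# Hypothetical Box-Behnken design in coded units (-1, 0, +1) for alcohol strength,
# extraction time, and solid-liquid ratio, with made-up yields; the real design
# points and responses are reported in the paper.
X = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0],
])
y = np.array([1.1, 1.8, 1.3, 2.0, 1.0, 1.7, 1.2, 1.9, 1.2, 1.5, 1.4, 1.8, 1.6])

def model_terms(x):
    """Second-order response-surface model: intercept, linear, interaction, squared terms."""
    a, t, r = x
    return [1, a, t, r, a * t, a * r, t * r, a * a, t * t, r * r]

A = np.array([model_terms(row) for row in X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Scan a coarse grid of coded settings for the predicted maximum yield.
levels = np.linspace(-1, 1, 21)
grid = [(a, t, r) for a in levels for t in levels for r in levels]
best = max(grid, key=lambda x: float(np.dot(model_terms(x), coef)))
print("Predicted optimum (coded units):", np.round(best, 2))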
Valentão, Patrícia; Gonçalves, Rui F; Belo, Cristóvão; de Pinho, Paula Guedes; Andrade, Paula B; Ferreres, Federico
2010-10-01
Piper betle is a species growing in South East Asia, where its leaves are economically and medicinally important. To screen the highest possible number of volatile and semivolatile components, the leaves were subjected to headspace solid-phase microextraction, hydrodistillation and Soxhlet extraction, prior to analysis by GC/MS. Fifty compounds (identified by comparison with standard compounds or tentatively by National Institute of Standards and Technology database) were determined, 23 being described for the first time in this matrix. An aqueous extract was also analysed, in which only seven compounds were characterized. The organic acids' composition of this extract was determined by HPLC/UV and eight compounds are reported for the first time in P. betle. This extract also displayed acetylcholinesterase inhibitory capacity.
Enhanced anion exchange for selective sulfate extraction: overcoming the Hofmeister bias.
Fowler, Christopher J; Haverlock, Tamara J; Moyer, Bruce A; Shriver, James A; Gross, Dustin E; Marquez, Manuel; Sessler, Jonathan L; Hossain, Md Alamgir; Bowman-James, Kristin
2008-11-05
In this communication, a new approach to enhancing the efficacy of liquid-liquid anion exchange is demonstrated. It involves the concurrent use of appropriately chosen hydrogen-bond-donating (HBD) anion receptors in combination with a traditional quaternary ammonium extractant. The fluorinated calixpyrroles 1 and 2 and the tetraamide macrocycle 4 were found to be particularly effective receptors. Specifically, their use allowed the extraction of sulfate by tricaprylmethylammonium nitrate to be effected in the presence of excess nitrate. As such, the present work provides a rare demonstration of overcoming the Hofmeister bias in a competitive environment and the first to the authors' knowledge wherein this difficult-to-achieve objective is attained using a neutral HBD-based anion binding agent under conditions of solvent extraction.
Composition of extracts of airborne grain dusts: lectins and lymphocyte mitogens.
Olenchock, S A; Lewis, D M; Mull, J C
1986-01-01
Airborne grain dusts are heterogeneous materials that can elicit acute and chronic respiratory pathophysiology in exposed workers. Previous characterizations of the dusts include the identification of viable microbial contaminants, mycotoxins, and endotoxins. We provide information on the lectin-like activity of grain dust extracts and its possible biological relationship. Hemagglutination of erythrocytes and immunochemical modulation by antibody to specific lectins showed the presence of these substances in extracts of airborne dusts from barley, corn, and rye. Proliferation of normal rat splenic lymphocytes in vitro provided evidence for direct biological effects on the cells of the immune system. These data expand the knowledge of the composition of grain dusts (extracts), and suggest possible mechanisms that may contribute to respiratory disease in grain workers. PMID:3709474
The smooth (tractor) operator: insights of knowledge engineering.
Cullen, Ralph H; Smarr, Cory-Ann; Serrano-Baquero, Daniel; McBride, Sara E; Beer, Jenay M; Rogers, Wendy A
2012-11-01
The design of and training for complex systems requires in-depth understanding of task demands imposed on users. In this project, we used the knowledge engineering approach (Bowles et al., 2004) to assess the task of mowing in a citrus grove. Knowledge engineering is divided into four phases: (1) Establish goals. We defined specific goals based on the stakeholders involved. The main goal was to identify operator demands to support improvement of the system. (2) Create a working model of the system. We reviewed product literature, analyzed the system, and conducted expert interviews. (3) Extract knowledge. We interviewed tractor operators to understand their knowledge base. (4) Structure knowledge. We analyzed and organized operator knowledge to inform project goals. We categorized the information and developed diagrams to display the knowledge effectively. This project illustrates the benefits of knowledge engineering as a qualitative research method to inform technology design and training. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Strategies for the extraction and analysis of non-extractable polyphenols from plants.
Domínguez-Rodríguez, Gloria; Marina, María Luisa; Plaza, Merichel
2017-09-08
The majority of studies of phenolic compounds from plants focus on the extractable fraction derived from an aqueous or aqueous-organic extraction. However, an important fraction of polyphenols is ignored because it remains retained in the extraction residue. These are the so-called non-extractable polyphenols (NEPs), which are high molecular weight polymeric polyphenols or individual low molecular weight phenolics associated with macromolecules. The scarce information available about NEPs shows that these compounds possess interesting biological activities, which is why interest in studying them has been increasing in recent years. Furthermore, the extraction and characterization of NEPs are considered a challenge because the analytical methodologies developed so far present some limitations. Thus, the present literature review summarizes current knowledge of NEPs and the different methodologies for the extraction of these compounds, with a particular focus on hydrolysis treatments. In addition, this review provides information on the most recent developments in the purification, separation, identification and quantification of NEPs from plants. Copyright © 2017 Elsevier B.V. All rights reserved.
Construction of an annotated corpus to support biomedical information extraction
Thompson, Paul; Iqbal, Syed A; McNaught, John; Ananiadou, Sophia
2009-01-01
Background Information Extraction (IE) is a component of text mining that facilitates knowledge discovery by automatically locating instances of interesting biomedical events from huge document collections. As events are usually centred on verbs and nominalised verbs, understanding the syntactic and semantic behaviour of these words is highly important. Corpora annotated with information concerning this behaviour can constitute a valuable resource in the training of IE components and resources. Results We have defined a new scheme for annotating sentence-bound gene regulation events, centred on both verbs and nominalised verbs. For each event instance, all participants (arguments) in the same sentence are identified and assigned a semantic role from a rich set of 13 roles tailored to biomedical research articles, together with a biological concept type linked to the Gene Regulation Ontology. To our knowledge, our scheme is unique within the biomedical field in terms of the range of event arguments identified. Using the scheme, we have created the Gene Regulation Event Corpus (GREC), consisting of 240 MEDLINE abstracts, in which events relating to gene regulation and expression have been annotated by biologists. A novel method of evaluating various different facets of the annotation task showed that average inter-annotator agreement rates fall within the range of 66% - 90%. Conclusion The GREC is a unique resource within the biomedical field, in that it annotates not only core relationships between entities, but also a range of other important details about these relationships, e.g., location, temporal, manner and environmental conditions. As such, it is specifically designed to support bio-specific tool and resource development. It has already been used to acquire semantic frames for inclusion within the BioLexicon (a lexical, terminological resource to aid biomedical text mining). Initial experiments have also shown that the corpus may viably be used to train IE components, such as semantic role labellers. The corpus and annotation guidelines are freely available for academic purposes. PMID:19852798
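The event annotation scheme described above, in which each verb- or nominalization-centred event carries role-labelled arguments tied to ontology concepts, maps naturally onto a small data structure. The sketch below is illustrative only: the role names, concept types and example sentence are assumptions, not the GREC guidelines or its full 13-role inventory.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    text: str       # the argument span as it appears in the sentence
    role: str       # semantic role, e.g. "Agent", "Theme", "Condition" (illustrative)
    concept: str    # concept type linked to a gene regulation ontology

@dataclass
class GeneRegulationEvent:
    trigger: str                      # verb or nominalised verb anchoring the event
    sentence: str                     # sentence the event is bound to
    arguments: List[Argument] = field(default_factory=list)

# Illustrative annotation for a made-up sentence.
event = GeneRegulationEvent(
    trigger="activates",
    sentence="FNR activates the expression of narG in anaerobic conditions.",
    arguments=[
        Argument("FNR", "Agent", "Transcription_Factor"),
        Argument("narG", "Theme", "Gene"),
        Argument("in anaerobic conditions", "Condition", "Environmental_Condition"),
    ],
)
print(f"{event.trigger}: " + ", ".join(f"{a.role}={a.text}" for a in event.arguments))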
Charlton, Kimberly; Kumar, Saravana
2017-01-01
Objective Despite consistent evidence for the positive impact of contingency planning for falls in older people, implementation of plans often fails. This is likely due to a lack of recognition and knowledge of older people's perspectives on contingency planning. The objective of this research was to explore the perspectives of older people living in the community about use of contingency planning for getting help quickly after a fall. Method A systematic literature search seeking qualitative research was conducted in April 2014, with no limit placed on date of publication. Medline, EMBASE, Ageline, CINAHL, HealthSource- Nursing/Academic Edition, AMED and Psych INFO databases were searched. Three main concepts were explored and linked using Boolean operators: older people, falls and contingency planning. The search was updated until February 2016 with no new articles found. After removal of duplicates, 562 articles were assessed against inclusion and exclusion criteria, resulting in six studies for the meta-synthesis. These studies were critically appraised using the McMaster critical appraisal tool. Bespoke data extraction sheets were developed and a meta-synthesis approach was adopted to extract and synthesise findings. Findings Three themes of 'a mix of attitudes', 'careful deliberations' and 'a source of anxiety' were established. Perspectives of older people were on a continuum between regarding contingency plans as necessary and not necessary. Levels of engagement with the contingency planning process seemed associated with acceptance of their risk of falling and their familiarity with available contingency planning strategies. Conclusion Avoiding a long lie on the floor following a fall is imperative for older people in the community, but there is a lack of knowledge about contingency planning for falls. This meta-synthesis provides new insights into this area of health service delivery and highlights that implementation of plans needs to be directed by the older people rather than the health professionals. PMID:28562596
Charlton, Kimberly; Murray, Carolyn M; Kumar, Saravana
2017-01-01
Despite consistent evidence for the positive impact of contingency planning for falls in older people, implementation of plans often fails. This is likely due to a lack of recognition and knowledge of older people's perspectives on contingency planning. The objective of this research was to explore the perspectives of older people living in the community about use of contingency planning for getting help quickly after a fall. A systematic literature search seeking qualitative research was conducted in April 2014, with no limit placed on date of publication. Medline, EMBASE, Ageline, CINAHL, HealthSource- Nursing/Academic Edition, AMED and Psych INFO databases were searched. Three main concepts were explored and linked using Boolean operators: older people, falls and contingency planning. The search was updated until February 2016 with no new articles found. After removal of duplicates, 562 articles were assessed against inclusion and exclusion criteria, resulting in six studies for the meta-synthesis. These studies were critically appraised using the McMaster critical appraisal tool. Bespoke data extraction sheets were developed and a meta-synthesis approach was adopted to extract and synthesise findings. Three themes of 'a mix of attitudes', 'careful deliberations' and 'a source of anxiety' were established. Perspectives of older people were on a continuum between regarding contingency plans as necessary and not necessary. Levels of engagement with the contingency planning process seemed associated with acceptance of their risk of falling and their familiarity with available contingency planning strategies. Avoiding a long lie on the floor following a fall is imperative for older people in the community, but there is a lack of knowledge about contingency planning for falls. This meta-synthesis provides new insights into this area of health service delivery and highlights that implementation of plans needs to be directed by the older people rather than the health professionals.
Gates, Allison; Shave, Kassi; Featherstone, Robin; Buckreus, Kelli; Ali, Samina; Scott, Shannon; Hartling, Lisa
2017-06-06
There exist many evidence-based interventions available to manage procedural pain in children and neonates, yet they are severely underutilized. Parents play an important role in the management of their child's pain; however, many do not possess adequate knowledge of how to effectively do so. The purpose of the planned study is to systematically review and synthesize current knowledge of the experiences and information needs of parents with regard to the management of their child's pain and distress related to medical procedures in the emergency department. We will conduct a systematic review using rigorous methods and reporting based on the PRISMA statement. We will conduct a comprehensive search of literature published between 2000 and 2016 reporting on parents' experiences and information needs with regard to helping their child manage procedural pain and distress. Ovid MEDLINE, Ovid PsycINFO, CINAHL, and PubMed will be searched. We will also search reference lists of key studies and gray literature sources. Two reviewers will screen the articles following inclusion criteria defined a priori. One reviewer will then extract the data from each article following a data extraction form developed by the study team. The second reviewer will check the data extraction for accuracy and completeness. Any disagreements with regard to study inclusion or data extraction will be resolved via discussion. Data from qualitative studies will be summarized thematically, while those from quantitative studies will be summarized narratively. The second reviewer will confirm the overarching themes resulting from the qualitative and quantitative data syntheses. The Critical Appraisal Skills Programme Qualitative Research Checklist and the Quality Assessment Tool for Quantitative Studies will be used to assess the quality of the evidence from each included study. To our knowledge, no published review exists that comprehensively reports on the experiences and information needs of parents related to the management of their child's procedural pain and distress. A systematic review of parents' experiences and information needs will help to inform strategies to empower them with the knowledge necessary to ensure their child's comfort during a painful procedure. PROSPERO CRD42016043698.
Knowledge Concerning the Mathematical Horizon: A Close View
ERIC Educational Resources Information Center
Guberman, Raisa; Gorev, Dvora
2015-01-01
The main objective of this study is to identify components of teachers' mathematical knowledge for teaching, associated with the knowledge of mathematical horizon (KMH) in order to describe this type of knowledge from the viewpoint of elementary school mathematics teachers. The research population of this study consisted of 118 elementary school…
Briki, Fatma; Vérine, Jérôme; Doucet, Jean; Bénas, Philippe; Fayard, Barbara; Delpech, Marc; Grateau, Gilles; Riès-Kautt, Madeleine
2011-07-20
Amyloidoses are increasingly recognized as a major public health concern in Western countries. All amyloidoses share common morphological, structural, and tinctorial properties. These consist of staining by specific dyes, a fibrillar aspect in electron microscopy and a typical cross-β folding in x-ray diffraction patterns. Most studies that aim at deciphering the amyloid structure rely on fibers generated in vitro or extracted from tissues using protocols that may modify their intrinsic structure. Therefore, the fine details of the in situ architecture of the deposits remain unknown. Here, we present to our knowledge the first data obtained on ex vivo human renal tissue sections using x-ray microdiffraction. The typical cross-β features from fixed paraffin-embedded samples are similar to those formed in vitro or extracted from tissues. Moreover, the fiber orientation maps obtained across glomerular sections reveal an intrinsic texture that is correlated with the glomerulus morphology. These results are of the highest importance to understanding the formation of amyloid deposits and are thus expected to trigger new incentives for tissue investigation. Moreover, the access to intrinsic structural parameters such as fiber size and orientation using synchrotron x-ray microdiffraction, could provide valuable information concerning in situ mechanisms and deposit formation with potential benefits for diagnostic and therapeutic purposes. Copyright © 2011 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Rotter, Thomas; Plishka, Christopher; Lawal, Adegboyega; Harrison, Liz; Sari, Nazmi; Goodridge, Donna; Flynn, Rachel; Chan, James; Fiander, Michelle; Poksinska, Bonnie; Willoughby, Keith; Kinsman, Leigh
2018-01-01
Industrial improvement approaches such as Lean management are increasingly being adopted in health care. Synthesis is necessary to ensure these approaches are evidence based and requires operationalization of concepts to ensure all relevant studies are included. This article outlines the process utilized to develop an operational definition of Lean in health care. The literature search, screening, data extraction, and data synthesis processes followed the recommendations outlined by the Cochrane Collaboration. Development of the operational definition utilized the methods prescribed by Kinsman et al. and Wieland et al. This involved extracting characteristics of Lean, synthesizing similar components to establish an operational definition, applying this definition, and updating the definition to address shortcomings. We identified two defining characteristics of Lean health-care management: (1) Lean philosophy, consisting of Lean principles and continuous improvement, and (2) Lean activities, which include Lean assessment activities and Lean improvement activities. The resulting operational definition requires that an organization or subunit of an organization had integrated Lean philosophy into the organization's mandate, guidelines, or policies and utilized at least one Lean assessment activity or Lean improvement activity. This operational definition of Lean management in health care will act as an objective screening criterion for our systematic review. To our knowledge, this is the first evidence-based operational definition of Lean management in health care.
Dehomogenized Elastic Properties of Heterogeneous Layered Materials in AFM Indentation Experiments.
Lee, Jia-Jye; Rao, Satish; Kaushik, Gaurav; Azeloglu, Evren U; Costa, Kevin D
2018-06-05
Atomic force microscopy (AFM) is used to study mechanical properties of biological materials at submicron length scales. However, such samples are often structurally heterogeneous even at the local level, with different regions having distinct mechanical properties. Physical or chemical disruption can isolate individual structural elements but may alter the properties being measured. Therefore, to determine the micromechanical properties of intact heterogeneous multilayered samples indented by AFM, we propose the Hybrid Eshelby Decomposition (HED) analysis, which combines a modified homogenization theory and finite element modeling to extract layer-specific elastic moduli of composite structures from single indentations, utilizing knowledge of the component distribution to achieve solution uniqueness. Using finite element model-simulated indentation of layered samples with micron-scale thickness dimensions, biologically relevant elastic properties for incompressible soft tissues, and layer-specific heterogeneity of an order of magnitude or less, HED analysis recovered the prescribed modulus values typically within 10% error. Experimental validation using bilayer spin-coated polydimethylsiloxane samples also yielded self-consistent layer-specific modulus values whether arranged as stiff layer on soft substrate or soft layer on stiff substrate. We further examined a biophysical application by characterizing layer-specific microelastic properties of full-thickness mouse aortic wall tissue, demonstrating that the HED-extracted modulus of the tunica media was more than fivefold stiffer than the intima and not significantly different from direct indentation of exposed media tissue. Our results show that the elastic properties of surface and subsurface layers of microscale synthetic and biological samples can be simultaneously extracted from the composite material response to AFM indentation. HED analysis offers a robust approach to studying regional micromechanics of heterogeneous multilayered samples without destructively separating individual components before testing. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Wiltshire, C J; Sutherland, S K; Fenner, P J; Young, A R
2000-01-01
To optimize venom extraction and to undertake preliminary biochemical studies of venom from the box jellyfish (Chironex fleckeri), the Irukandji jellyfish (Carukia barnesi), and the blubber jellyfish (Catostylus mosaicus). Lyophilized crude venoms from box jellyfish tentacles and whole Irukandji jellyfish were prepared in water by homogenization, sonication, and rapid freeze thawing. A second technique, consisting of grinding samples with a glass mortar and pestle and using phosphate-buffered saline, was used to prepare crude venom from isolated nematocysts of the box jellyfish, the bells of Irukandji jellyfish, and the oral lobes of blubber jellyfish. Venoms were compared by use of sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and Western blot test. Toxicity of some venoms was determined by intravenous median lethal dose assay in mice. Different venom extraction techniques produced significantly different crude venoms for both box and Irukandji jellyfish. Irukandji and blubber venom SDS-PAGE protein profiles were established for the first time. Analysis of Western blot tests revealed that box jellyfish antivenin reacted specifically with the venom of each jellyfish. Toxicity was found in Irukandji jellyfish venom derived by use of the mortar-and-pestle method, but not in the lyophilized venom. Glass mortar-and-pestle grinding and use of an appropriate buffer was found to be a simple and suitable method for the preparation of venom from each jellyfish species studied. This study contributes to biochemical investigations of jellyfish venoms, particularly the venom of the Irukandji jellyfish, for which there are, to our knowledge, no published studies. It also highlights the importance of optimizing venom extraction as the first step toward understanding the complex biological effects of jellyfish venoms.
Line drawing extraction from gray level images by feature integration
NASA Astrophysics Data System (ADS)
Yoo, Hoi J.; Crevier, Daniel; Lepage, Richard; Myler, Harley R.
1994-10-01
We describe procedures that extract line drawings from digitized gray level images, without use of domain knowledge, by modeling preattentive and perceptual organization functions of the human visual system. First, edge points are identified by standard low-level processing, based on the Canny edge operator. Edge points are then linked into single-pixel thick straight-line segments and circular arcs: this operation serves to both filter out isolated and highly irregular segments, and to lump the remaining points into a smaller number of structures for manipulation by later stages of processing. The next stages consist in linking the segments into a set of closed boundaries, which is the system's definition of a line drawing. According to the principles of Gestalt psychology, closure allows us to organize the world by filling in the gaps in a visual stimulation so as to perceive whole objects instead of disjoint parts. To achieve such closure, the system selects particular features or combinations of features by methods akin to those of preattentive processing in humans: features include gaps, pairs of straight or curved parallel lines, L- and T-junctions, pairs of symmetrical lines, and the orientation and length of single lines. These preattentive features are grouped into higher-level structures according to the principles of proximity, similarity, closure, symmetry, and feature conjunction. Achieving closure may require supplying missing segments linking contour concavities. Choices are made between competing structures on the basis of their overall compliance with the principles of closure and symmetry. Results include clean line drawings of curvilinear manufactured objects. The procedures described are part of a system called VITREO (viewpoint-independent 3-D recognition and extraction of objects).
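The low-level stage described above (Canny edge detection followed by linking edge points into straight segments) can be approximated with standard OpenCV calls. The sketch below is not the VITREO implementation: the probabilistic Hough transform stands in for the authors' own linking step, and the thresholds and file name are illustrative.

import cv2
import numpy as np

def extract_segments(path, min_length=30, max_gap=5):
    """Detect Canny edges and link them into straight-line segments."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)                      # low/high hysteresis thresholds
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=40,
                               minLineLength=min_length,  # drop short, irregular fragments
                               maxLineGap=max_gap)        # bridge small gaps (closure)
    return [] if segments is None else [tuple(s[0]) for s in segments]

if __name__ == "__main__":
    for x1, y1, x2, y2 in extract_segments("object.png"):
        print(f"segment ({x1},{y1}) -> ({x2},{y2})")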
Extracting BI-RADS Features from Portuguese Clinical Texts.
Nassif, Houssam; Cunha, Filipe; Moreira, Inês C; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês
2012-01-01
In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BIRADS lexicon and on iterative transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser's performance is comparable to the manual method.
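A rule-based concept finder of this kind can be prototyped with a small lexicon and regular expressions. The sketch below is a hedged illustration rather than the authors' semantic grammar: the Portuguese surface forms and BI-RADS features shown are a tiny hypothetical subset of the real lexicon.

import re

# Tiny illustrative lexicon mapping Portuguese surface forms to BI-RADS feature values;
# the real parser uses a semantic grammar over the full BI-RADS lexicon.
SHAPE_TERMS = {"oval": "oval", "redondo": "round", "irregular": "irregular"}
MARGIN_TERMS = {"circunscritas": "circumscribed", "espiculadas": "spiculated"}

def extract_birads_features(report_text):
    """Return (feature, value) pairs found in a Portuguese mammography report."""
    text = report_text.lower()
    features = []
    for term, value in SHAPE_TERMS.items():
        if re.search(r"n[óo]dulo\s+" + term, text):
            features.append(("mass_shape", value))
    for term, value in MARGIN_TERMS.items():
        if re.search(r"margens\s+" + term, text):
            features.append(("margin", value))
    category = re.search(r"bi-?rads\s*([0-6])", text)
    if category:
        features.append(("birads_category", category.group(1)))
    return features

report = "Nódulo oval com margens circunscritas. Classificação BI-RADS 3."
print(extract_birads_features(report))
# [('mass_shape', 'oval'), ('margin', 'circumscribed'), ('birads_category', '3')]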
Example-Based Image Colorization Using Locality Consistent Sparse Representation.
Bo Li; Fuchen Zhao; Zhuo Su; Xiangguo Liang; Yu-Kun Lai; Rosin, Paul L
2017-11-01
Image colorization aims to produce a natural looking color image from a given gray-scale image, which remains a challenging problem. In this paper, we propose a novel example-based image colorization method exploiting a new locality consistent sparse representation. Given a single reference color image, our method automatically colorizes the target gray-scale image by sparse pursuit. For efficiency and robustness, our method operates at the superpixel level. We extract low-level intensity features, mid-level texture features, and high-level semantic features for each superpixel, which are then concatenated to form its descriptor. The collection of feature vectors for all the superpixels from the reference image composes the dictionary. We formulate colorization of target superpixels as a dictionary-based sparse reconstruction problem. Inspired by the observation that superpixels with similar spatial location and/or feature representation are likely to match spatially close regions from the reference image, we further introduce a locality promoting regularization term into the energy formulation, which substantially improves the matching consistency and subsequent colorization results. Target superpixels are colorized based on the chrominance information from the dominant reference superpixels. Finally, to further improve coherence while preserving sharpness, we develop a new edge-preserving filter for chrominance channels with the guidance from the target gray-scale image. To the best of our knowledge, this is the first work on sparse pursuit image colorization from single reference images. Experimental results demonstrate that our colorization method outperforms the state-of-the-art methods, both visually and quantitatively using a user study.
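The dictionary-based sparse reconstruction step described above can be illustrated with scikit-learn's SparseCoder. The sketch below simplifies heavily: the superpixel descriptors and chrominance values are random placeholders, the locality-promoting regularizer is omitted, and colour transfer is reduced to a weighted average over the selected reference superpixels.

import numpy as np
from sklearn.decomposition import SparseCoder

rng = np.random.default_rng(0)

# Placeholder descriptors: rows are superpixels, columns are concatenated
# intensity/texture/semantic features (random stand-ins for real features).
reference_feats = rng.normal(size=(200, 64))     # dictionary: reference superpixels
reference_chroma = rng.uniform(size=(200, 2))    # their chrominance values
target_feats = rng.normal(size=(50, 64))         # target gray-scale superpixels

# Dictionary atoms should be unit-norm for sparse coding.
dictionary = reference_feats / np.linalg.norm(reference_feats, axis=1, keepdims=True)

coder = SparseCoder(dictionary=dictionary,
                    transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)
codes = coder.transform(target_feats)            # sparse weights over reference superpixels

# Colorize each target superpixel from its dominant (largest-weight) reference atoms.
weights = np.abs(codes)
weights /= weights.sum(axis=1, keepdims=True) + 1e-12
target_chroma = weights @ reference_chroma
print(target_chroma.shape)                       # (50, 2) estimated chrominance values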
Tan, Chun-Wei; Kumar, Ajay
2014-07-10
Accurate iris recognition from distantly acquired face or eye images requires effective strategies that can account for significant variations in the quality of the segmented iris image. Such variations are highly correlated with the consistency of the encoded iris features, and knowledge of such fragile bits can be exploited to improve matching accuracy. A non-linear approach is proposed that simultaneously accounts for both the local consistency of the iris bits and the overall quality of the weight map. Our approach therefore more effectively penalizes the fragile bits while simultaneously rewarding more consistent bits. In order to achieve a more stable characterization of local iris features, a Zernike moment-based phase encoding of iris features is proposed. Such Zernike moment-based phase features are computed from partially overlapping regions to more effectively accommodate local pixel region variations in the normalized iris images. A joint strategy is adopted to simultaneously extract and combine both the global and localized iris features. The superiority of the proposed iris matching strategy is ascertained by comparison with several state-of-the-art iris matching algorithms on three publicly available databases: UBIRIS.v2, FRGC, and CASIA.v4-distance. Our experimental results suggest that the proposed strategy achieves significant improvement in iris matching accuracy over competing approaches in the literature, i.e., average improvements of 54.3%, 32.7% and 42.6% in equal error rate for UBIRIS.v2, FRGC, and CASIA.v4-distance, respectively.
PROCESS OF SEPARATING URANIUM FROM AQUEOUS SOLUTION BY SOLVENT EXTRACTION
Warf, J.C.
1958-08-19
A process is described for separating uranium values from aqueous uranyl nitrate solutions. The process consists in contacting the uranium-bearing solution with an organic solvent, tributyl phosphate, preferably diluted with a less viscous organic liquid, whereby the uranyl nitrate is extracted into the organic solvent phase. The uranyl nitrate may be recovered from the solvent phase by back-extracting with an aqueous medium.
A Rapid Approach to Modeling Species-Habitat Relationships
NASA Technical Reports Server (NTRS)
Carter, Geoffrey M.; Breinger, David R.; Stolen, Eric D.
2005-01-01
A growing number of species require conservation or management efforts, and the success of these activities requires knowledge of each species' occurrence pattern. Species-habitat models developed from GIS data sources are commonly used to predict species occurrence. However, commonly used data sources are often developed for purposes other than predicting species occurrence and are of inappropriate scale, and the techniques used to extract predictor variables are often time consuming, cannot be repeated easily, and thus cannot efficiently reflect changing conditions. We used digital orthophotographs and a grid cell classification scheme to develop an efficient technique to extract predictor variables. We combined our classification scheme with a priori hypothesis development using expert knowledge and a previously published habitat suitability index, and used an objective model selection procedure to choose candidate models. We were able to classify a large area (57,000 ha) in a fraction of the time that would be required to map vegetation and were able to test models at varying scales using a windowing process. Interpretation of the selected models confirmed existing knowledge of factors important to Florida scrub-jay habitat occupancy. The potential uses and advantages of using a grid cell classification scheme in conjunction with expert knowledge or a habitat suitability index (HSI) and an objective model selection procedure are discussed.
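The windowing process mentioned above — deriving, for each grid cell, predictors such as the proportion of a habitat class within neighbourhoods of different sizes — is straightforward to express with array operations. The sketch below uses a made-up classified grid; the class codes and window sizes are illustrative, not the study's actual scheme.

import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(1)
# Hypothetical classified grid: 0 = other, 1 = oak scrub, 2 = open sand.
grid = rng.integers(0, 3, size=(200, 200))

def habitat_proportion(classified, class_code, window_cells):
    """Proportion of `class_code` cells within a square moving window around each cell."""
    mask = (classified == class_code).astype(float)
    return uniform_filter(mask, size=window_cells, mode="nearest")

# Predictor layers at two spatial scales, usable as model covariates.
scrub_5 = habitat_proportion(grid, 1, 5)
scrub_25 = habitat_proportion(grid, 1, 25)
print(scrub_5.mean(), scrub_25.mean())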
Zhao, Di; Weng, Chunhua
2011-10-01
In this paper, we propose a novel method that combines PubMed knowledge and Electronic Health Records to develop a weighted Bayesian Network Inference (BNI) model for pancreatic cancer prediction. We selected 20 common risk factors associated with pancreatic cancer and used PubMed knowledge to weigh the risk factors. A keyword-based algorithm was developed to extract and classify PubMed abstracts into three categories that represented positive, negative, or neutral associations between each risk factor and pancreatic cancer. Then we designed a weighted BNI model by adding the normalized weights into a conventional BNI model. We used this model to extract the EHR values for patients with or without pancreatic cancer, which then enabled us to calculate the prior probabilities for the 20 risk factors in the BNI. The software iDiagnosis was designed to use this weighted BNI model for predicting pancreatic cancer. In an evaluation using a case-control dataset, the weighted BNI model significantly outperformed the conventional BNI and two other classifiers (k-Nearest Neighbor and Support Vector Machine). We conclude that the weighted BNI using PubMed knowledge and EHR data shows remarkable accuracy improvement over existing representative methods for pancreatic cancer prediction. Copyright © 2011 Elsevier Inc. All rights reserved.
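The PubMed-weighting step described above — classifying abstracts as reporting positive, negative, or neutral associations and converting the counts into normalized risk-factor weights — can be sketched briefly. The keyword cues and toy corpus below are hypothetical placeholders, and the Bayesian network itself is not shown.

POSITIVE_CUES = {"increased risk", "associated with", "risk factor for"}    # illustrative
NEGATIVE_CUES = {"no association", "not associated", "protective against"}  # illustrative

def classify_abstract(text):
    """Label an abstract as 'positive', 'negative', or 'neutral' for a risk factor."""
    t = text.lower()
    pos = sum(cue in t for cue in POSITIVE_CUES)
    neg = sum(cue in t for cue in NEGATIVE_CUES)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def risk_factor_weights(abstracts_by_factor):
    """Turn per-factor (positive - negative) evidence counts into normalized weights."""
    raw = {}
    for factor, abstracts in abstracts_by_factor.items():
        labels = [classify_abstract(a) for a in abstracts]
        raw[factor] = max(labels.count("positive") - labels.count("negative"), 0)
    total = sum(raw.values()) or 1
    return {factor: score / total for factor, score in raw.items()}

# Hypothetical toy corpus for two of the twenty risk factors.
corpus = {
    "smoking": ["Smoking is a well established risk factor for pancreatic cancer."],
    "diabetes": ["Diabetes was associated with pancreatic cancer.",
                 "One cohort found no association with pancreatic cancer."],
}
print(risk_factor_weights(corpus))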
Gundlapalli, Adi V; Divita, Guy; Redd, Andrew; Carter, Marjorie E; Ko, Danette; Rubin, Michael; Samore, Matthew; Strymish, Judith; Krein, Sarah; Gupta, Kalpana; Sales, Anne; Trautner, Barbara W
2017-07-01
To develop a natural language processing pipeline to extract positively asserted concepts related to the presence of an indwelling urinary catheter in hospitalized patients from the free text of the electronic medical note. The goal is to assist infection preventionists and other healthcare professionals in determining whether a patient has an indwelling urinary catheter when a catheter-associated urinary tract infection is suspected. Currently, data on indwelling urinary catheters is not consistently captured in the electronic medical record in structured format and thus cannot be reliably extracted for clinical and research purposes. We developed a lexicon of terms related to indwelling urinary catheters and urinary symptoms based on domain knowledge, prior experience in the field, and review of medical notes. A reference standard of 1595 randomly selected documents from inpatient admissions was annotated by human reviewers to identify all positively and negatively asserted concepts related to indwelling urinary catheters. We trained a natural language processing pipeline based on the V3NLP framework using 1050 documents and tested on 545 documents to determine agreement with the human reference standard. Metrics reported are positive predictive value and recall. The lexicon contained 590 terms related to the presence of an indwelling urinary catheter in various categories including insertion, care, change, and removal of urinary catheters and 67 terms for urinary symptoms. Nursing notes were the most frequent inpatient note titles in the reference standard document corpus; these also yielded the highest number of positively asserted concepts with respect to urinary catheters. Comparing the performance of the natural language processing pipeline against the human reference standard, the overall recall was 75% and positive predictive value was 99% on the training set; on the testing set, the recall was 72% and positive predictive value was 98%. The performance on extracting urinary symptoms (including fever) was high with recall and precision greater than 90%. We have shown that it is possible to identify the presence of an indwelling urinary catheter and urinary symptoms from the free text of electronic medical notes from inpatients using natural language processing. These are two key steps in developing automated protocols to assist humans in large-scale review of patient charts for catheter-associated urinary tract infection. The challenges associated with extracting indwelling urinary catheter-related concepts also inform the design of electronic medical record templates to reliably and consistently capture data on indwelling urinary catheters. Published by Elsevier Inc.
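The core of such a pipeline — matching a lexicon of catheter terms in free text while respecting nearby negation so that only positively asserted mentions are kept — can be approximated with a short rule-based matcher. The sketch below is illustrative: the terms and negation cues are a small hypothetical subset, not the authors' 590-term lexicon or the V3NLP framework.

import re

CATHETER_TERMS = ["foley catheter", "indwelling urinary catheter", "urinary catheter", "foley"]
NEGATION_CUES = ["no ", "without ", "denies ", "removed ", "discontinued "]  # illustrative

def positively_asserted_mentions(note_text, window=40):
    """Return catheter mentions that are not preceded by a nearby negation cue."""
    text = note_text.lower()
    mentions = []
    for term in CATHETER_TERMS:
        for match in re.finditer(re.escape(term), text):
            preceding = text[max(0, match.start() - window):match.start()]
            if not any(cue in preceding for cue in NEGATION_CUES):
                mentions.append((term, match.start()))
    return mentions

note = "Foley catheter in place, draining clear urine. No suprapubic catheter."
print(positively_asserted_mentions(note))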
Antimycobacterial activity of medicinal plants used by the Mayo people of Sonora, Mexico.
Coronado-Aceves, Enrique Wenceslao; Sánchez-Escalante, José Jesús; López-Cervantes, Jaime; Robles-Zepeda, Ramón Enrique; Velázquez, Carlos; Sánchez-Machado, Dalia Isabel; Garibay-Escobar, Adriana
2016-08-22
Tuberculosis (TB) is an infectious disease mainly caused by Mycobacterium tuberculosis (Mtb), which generates 9 million new cases worldwide each year. The Mayo ethnicity of southern Sonora, Mexico is more than 2000 years old, and the Mayos possess extensive knowledge of traditional medicine. To evaluate the antimycobacterial activity levels of extracts of medicinal plants used by the Mayos against Mtb and Mycobacterium smegmatis (Msm) in the treatment of TB, respiratory diseases and related symptoms. A total of 34 plant species were collected, and 191 extracts were created with n-hexane, dichloromethane, ethyl acetate (EtOAc), methanol and water. Their minimum inhibitory concentrations (MICs) and minimum bactericidal concentrations (MBCs) were determined against Mtb H37Rv using the microplate alamar blue assay (MABA) and against Msm using the resazurin microplate assay (REMA) at 6 and 2 days of exposure, respectively, and at concentrations of 250-1.9µg/mL (n-hexane extracts) and 1000-7.81µg/mL (extracts obtained with dichloromethane, EtOAc, methanol and water). Rhynchosia precatoria (Willd.) DC. (n-hexane root extract), Euphorbia albomarginata Torr. and A. Gray. (EtOAc shoot extract) and Helianthus annuus L. (n-hexane stem extract) were the most active plants against Mtb H37Rv, with MICs of 15.6, 250, 250µg/mL and MBCs of 31.25, 250, 250µg/mL, respectively. R. precatoria (root) was the only active plant against Msm, with MIC and MBC values of ≥250µg/mL. None of the aqueous extracts were active. This study validates the medicinal use of certain plants used by the Mayo people in the treatment of TB and related symptoms. R. precatoria, E. albomarginata and H. annuus are promising plant sources of active compounds that act against Mtb H37Rv. To our knowledge, this is the first time that their antimycobacterial activity has been reported. Crude extracts obtained with n-hexane, EtOAc and dichloromethane were the most active against Mtb H37Rv. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Aniba, Mohamed Radhouene; Siguenza, Sophie; Friedrich, Anne; Plewniak, Frédéric; Poch, Olivier; Marchler-Bauer, Aron; Thompson, Julie Dawn
2009-01-01
The traditional approach to bioinformatics analyses relies on independent task-specific services and applications, using different input and output formats, often idiosyncratic, and frequently not designed to inter-operate. In general, such analyses were performed by experts who manually verified the results obtained at each step in the process. Today, the amount of bioinformatics information continuously being produced means that handling the various applications used to study this information presents a major data management and analysis challenge to researchers. It is now impossible to manually analyse all this information and new approaches are needed that are capable of processing the large-scale heterogeneous data in order to extract the pertinent information. We review the recent use of integrated expert systems aimed at providing more efficient knowledge extraction for bioinformatics research. A general methodology for building knowledge-based expert systems is described, focusing on the unstructured information management architecture, UIMA, which provides facilities for both data and process management. A case study involving a multiple alignment expert system prototype called AlexSys is also presented.
Ontology design patterns to disambiguate relations between genes and gene products in GENIA
2011-01-01
Motivation: Annotated reference corpora play an important role in biomedical information extraction. A semantic annotation of the natural language texts in these reference corpora using formal ontologies is challenging due to the inherent ambiguity of natural language. The provision of formal definitions and axioms for semantic annotations offers the means for ensuring consistency as well as enables the development of verifiable annotation guidelines. Consistent semantic annotations facilitate the automatic discovery of new information through deductive inferences. Results: We provide a formal characterization of the relations used in the recent GENIA corpus annotations. For this purpose, we both select existing axiom systems based on the desired properties of the relations within the domain and develop new axioms for several relations. To apply this ontology of relations to the semantic annotation of text corpora, we implement two ontology design patterns. In addition, we provide a software application to convert annotated GENIA abstracts into OWL ontologies by combining both the ontology of relations and the design patterns. As a result, the GENIA abstracts become available as OWL ontologies and are amenable for automated verification, deductive inferences and other knowledge-based applications. Availability: Documentation, implementation and examples are available from http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/. PMID:22166341
NOUS: Construction and Querying of Dynamic Knowledge Graphs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choudhury, Sutanay; Agarwal, Khushbu; Purohit, Sumit
The ability to construct domain specific knowledge graphs (KG) and perform question-answering or hypothesis generation is a transformative capability. Despite their value, automated construction of knowledge graphs remains an expensive technical challenge that is beyond the reach for most enterprises and academic institutions. We propose an end-to-end framework for developing custom knowledge graph driven analytics for arbitrary application domains. The uniqueness of our system lies A) in its combination of curated KGs along with knowledge extracted from unstructured text, B) support for advanced trending and explanatory questions on a dynamic KG, and C) the ability to answer queries where the answer is embedded across multiple data sources.
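As a rough illustration of the kind of capability such a framework targets, the sketch below merges a few hand-written "curated" triples with triples nominally extracted from text into one directed graph and answers a simple explanatory query. It assumes the networkx library; the entities, relations, and triples are invented and are not taken from NOUS.

```python
# Illustrative sketch only: merge curated triples with text-extracted triples into
# one graph and answer a simple path query. Names and triples are hypothetical.
import networkx as nx

curated_triples = [
    ("aspirin", "inhibits", "COX-1"),
    ("COX-1", "produces", "thromboxane A2"),
]
text_extracted_triples = [
    ("thromboxane A2", "promotes", "platelet aggregation"),
]

kg = nx.MultiDiGraph()
for subj, rel, obj in curated_triples + text_extracted_triples:
    kg.add_edge(subj, obj, relation=rel)

# Explanatory question: how does aspirin relate to platelet aggregation?
path = nx.shortest_path(kg, "aspirin", "platelet aggregation")
for a, b in zip(path, path[1:]):
    rel = list(kg.get_edge_data(a, b).values())[0]["relation"]
    print(f"{a} --{rel}--> {b}")
```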
Extraction and identification of flavonoids from parsley extracts by HPLC analysis
NASA Astrophysics Data System (ADS)
Stan, M.; Soran, M. L.; Varodi, C.; Lung, I.
2012-02-01
Flavonoids are phenolic compounds isolated from a wide variety of plants, and are valuable for their multiple properties, including antioxidant and antimicrobial activities. In the present work, parsley (Petroselinum crispum L.) extracts were obtained by three different extraction techniques: maceration, ultrasonic-assisted and microwave-assisted solvent extractions. The extractions were performed with ethanol-water mixtures in various ratios. From these extracts, flavonoids like the flavones apigenin and luteolin, and the flavonols quercetin and kaempferol were identified using an HPLC Shimadzu apparatus equipped with PDA and MS detectors. The separation method involved a gradient step. The mobile phase consisted of two solvents: acetonitrile and distilled water with 0.1% formic acid. The separation was performed on a RP-C18 column.
Lera, Lydia; Fretes, Gabriela; González, Carmen Gloria; Salinas, Judith; Vio del Rio, Fernando
2015-05-01
An instrument to measure food knowledge, food consumption, cooking skills, food habits and food expenses at school is necessary to assess changes in food practices. To validate an instrument to measure changes in food knowledge, food consumption, cooking skills, food habits and food expenses in Chilean school children aged 8-11 years, from third to fifth grade. A validation of a questionnaire with 42 questions was conducted in two stages: the first assessed temporal stability, concordance and internal consistency in 45 children; the second applied the survey, modified with the results of the first stage, in 90 children, assessing internal consistency. The first survey with 42 questions showed reasonable temporal stability, concordance and internal consistency for cooking skills, habits and food expenditure at school. Internal consistency was good for food consumption, but weaker for food knowledge. In the final validation with 90 children, there was good consistency for food consumption but poor consistency for food knowledge. In addition, children with cooking skills ate more healthy food, and those who spent more money at school consumed less healthy food. Food knowledge questions were therefore eliminated from the instrument, which was finalised with 28 questions about food consumption, cooking skills, food habits and food expenses at school. This instrument is useful to assess changes in food and nutrition education interventions in children aged 8-11 years, in particular to measure cooking skills and food expenses at school. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Yoshioka, Yasuko; Kojima, H; Tamura, A; Tsuji, K; Tamesada, M; Yagi, K; Murakami, N
2012-01-01
The extract of cultured Lentinula edodes mycelia (LEM) is a medicinal food ingredient that has hepatoprotective effects. In this study, we fractionated the LEM extract to explore novel active compounds related to hepatoprotection by using primary cultures of rat hepatocytes exposed to carbon tetrachloride (CCl(4)). The LEM extract and the fractions markedly inhibited the release of alanine aminotransferase (ALT) from hepatocytes damaged by CCl(4) into the culture medium. The strongest hepatocyte-protective activity was seen in a fraction (Fr. 2) in which a 50% ethanol extract was further eluted with 50% methanol and separated using reverse-phase HPLC. Fr. 2 had an average molecular weight of 2753, and the main components are lignin (49%) and saccharides (36%, of which xylose comprises 41%). Therefore, Fr. 2 was presumed to be a low-molecular-weight compound consisting mainly of lignin and xylan-like polysaccharides. The hepatocyte-protective activity was observed even after digestion of xylan-like polysaccharides in Fr.2 and confirmed with low-molecular-weight lignin (LM-lignin) alone. In addition, Fr. 2, the xylan-digested Fr. 2 and LM-lignin showed higher superoxide dismutase (SOD)-like activity than the LEM extract. These results suggested that the effective fraction in the LEM extract related to hepatocyte protection consisted mainly of LM-lignin, and its antioxidant activity partially contributes to the hepatocyte-protective activity of the LEM extract.
Mauritia flexuosa Presents In Vitro and In Vivo Antiplatelet and Antithrombotic Activities
Fuentes, Eduardo; Rodríguez-Pérez, Wilson; Guzmán, Luis; Alarcón, Marcelo; Navarrete, Simón; Forero-Doria, Oscar; Palomo, Iván
2013-01-01
Fruit from the palm Mauritia flexuosa is one of the most important species in Peru, Venezuela, Brazil, Colombia, Bolivia, and Guyana. The present study aimed to investigate the antiplatelet and antithrombotic activities of oil extracted from Mauritia flexuosa. The fatty acid contents were determined by gas chromatography-mass spectrometry. Oil was extracted from the peel of Mauritia flexuosa by Soxhlet extraction. The oil extract inhibited platelet secretion and aggregation induced by ADP, collagen, and TRAP-6 in a concentration-dependent manner (0.1 to 1 mg/mL) without the participation of the adenylyl cyclase pathway and diminished platelet rolling and firm adhesion under flow conditions. Furthermore, the oil extract induced a marked increase in the rolling speed of leukocytes retained on the platelet surface, reflecting a reduction of rolling and less adhesion. At the concentrations used, the oil extract significantly decreased platelet release of sP-selectin, an atherosclerotic-related inflammatory mediator. The oil extract inhibited thrombus growth at the same concentration as aspirin, a classical reference drug. Finally, the data presented herein also demonstrate, for the first time to our knowledge, the protective effect of oil extracted from Mauritia flexuosa on platelet activation and thrombosis formation. PMID:24454503
Text Mining in Biomedical Domain with Emphasis on Document Clustering.
Renganathan, Vinaitheerthan
2017-07-01
With the exponential increase in the number of articles published every year in the biomedical domain, there is a need to build automated systems to extract unknown information from the articles published. Text mining techniques enable the extraction of unknown knowledge from unstructured documents. This paper reviews text mining processes in detail and the software tools available to carry out text mining. It also reviews the roles and applications of text mining in the biomedical domain. Text mining processes, such as search and retrieval of documents, pre-processing of documents, natural language processing, methods for text clustering, and methods for text classification are described in detail. Text mining techniques can facilitate the mining of vast amounts of knowledge on a given topic from published biomedical research articles and draw meaningful conclusions that are not possible otherwise.
Key Relation Extraction from Biomedical Publications.
Huang, Lan; Wang, Ye; Gong, Leiguang; Kulikowski, Casimir; Bai, Tian
2017-01-01
Within the large body of biomedical knowledge, recent findings and discoveries are most often presented as research articles. Their number has been increasing sharply since the turn of the century, presenting ever-growing challenges for search and discovery of knowledge and information related to specific topics of interest, even with the help of advanced online search tools. This is especially true when the goal of a search is to find or discover key relations between important concepts or topic words. We have developed an innovative method for extracting key relations between concepts from abstracts of articles. The method focuses on relations between keywords or topic words in the articles. Early experiments with the method on PubMed publications have shown promising results in searching and discovering keywords and their relationships that are strongly related to the main topic of an article.
Using texts in science education: cognitive processes and knowledge representation.
van den Broek, Paul
2010-04-23
Texts form a powerful tool in teaching concepts and principles in science. How do readers extract information from a text, and what are the limitations in this process? Central to comprehension of and learning from a text is the construction of a coherent mental representation that integrates the textual information and relevant background knowledge. This representation engenders learning if it expands the reader's existing knowledge base or if it corrects misconceptions in this knowledge base. The Landscape Model captures the reading process and the influences of reader characteristics (such as working-memory capacity, reading goal, prior knowledge, and inferential skills) and text characteristics (such as content/structure of presented information, processing demands, and textual cues). The model suggests factors that can optimize--or jeopardize--learning science from text.
NASA Astrophysics Data System (ADS)
Franzen, P.; Gutser, R.; Fantz, U.; Kraus, W.; Falter, H.; Fröschle, M.; Heinemann, B.; McNeely, P.; Nocentini, R.; Riedl, R.; Stäbler, A.; Wünderlich, D.
2011-07-01
The ITER neutral beam system requires a negative hydrogen ion beam of 48 A with an energy of 0.87 MeV, and a negative deuterium beam of 40 A with an energy of 1 MeV. The beam is extracted from a large ion source of dimension 1.9 × 0.9 m2 by an acceleration system consisting of seven grids with 1280 apertures each. Currently, apertures with a diameter of 14 mm in the first grid are foreseen. In 2007, the IPP RF source was chosen as the ITER reference source due to its reduced maintenance compared with arc-driven sources and the successful development at the BATMAN test facility, which is equipped with the small IPP prototype RF source (~1/8 of the area of the ITER NBI source). These results, however, were obtained with an extraction system with 8 mm diameter apertures. This paper reports on a comparison, at BATMAN, of the source performance of an ITER-relevant extraction system equipped with chamfered 14 mm diameter apertures and of the 8 mm diameter aperture extraction system. The most important result is that there is almost no difference in the achieved current density—being consistent with ion trajectory calculations—or in the amount of co-extracted electrons. Furthermore, some aspects of the beam optics of both extraction systems are discussed.
Tavares Estevam, Adriana Carneiro; Alonso Buriti, Flávia Carolina; de Oliveira, Tiago Almeida; Pereira, Elainy Virginia Dos Santos; Florentino, Eliane Rolim; Porto, Ana Lúcia Figueiredo
2016-04-01
The effects of the Gracilaria domingensis seaweed aqueous extract, in comparison with gelatin, on the physicochemical, microbial, and textural characteristics of fermented milks processed with the mixed culture SAB 440 A, composed of Streptococcus thermophilus, Lactobacillus acidophilus, and Bifidobacterium animalis ssp. lactis, were investigated. The addition of G. domingensis aqueous extract did not affect pH, titratable acidity, and microbial viability of fermented milks when compared with the control (with no texture modifier) and the products with added gelatin. Fermented milk with the seaweed aqueous extract added showed firmness, consistency, cohesiveness, and viscosity index at least 10% higher than those observed for the control product (P < 0.05). At 4 h of fermentation, the fermented milks with only G. domingensis extract showed a texture comparable to that observed for products containing only gelatin. At 5 h of fermentation, firmness and consistency increased significantly (P < 0.05) in products with only seaweed extract added, a behavior not observed in products with the full amount of gelatin, probably due to the differences between the interactions of these ingredients with casein during the development of the gel network throughout the acidification of milk. The G. domingensis aqueous extract appears as a promising gelatin alternative to be used as texture modifier in fermented milks and related dairy products. © 2016 Institute of Food Technologists®
Pollard, T D; Ito, S
1970-08-01
The role of filaments in consistency changes and movement in a motile cytoplasmic extract of Amoeba proteus was investigated by correlating light and electron microscopic observations with viscosity measurements. The extract is prepared by the method of Thompson and Wolpert (1963). At 0 degrees C, this extract is nonmotile and similar in structure to ameba cytoplasm, consisting of groundplasm, vesicles, mitochondria, and a few 160 A filaments. The extract undergoes striking ATP-stimulated streaming when warmed to 22 degrees C. Two phases of movement are distinguished. During the first phase, the apparent viscosity usually increases and numerous 50-70 A filaments appear in samples of the extract prepared for electron microscopy, suggesting that the increase in viscosity is caused, at least in part, by the formation of these thin filaments. During this initial phase of ATP-stimulated movement, these thin filaments are not detectable by phase-contrast or polarization microscopy, but later, in the second phase of movement, 70 A filaments aggregate to form birefringent microscopic fibrils. A preparation of pure groundplasm with no 160 A filaments or membranous organelles exhibits little or no ATP-stimulated movement, but 50-70 A filaments form and aggregate into birefringent fibrils. This observation and the structural relationship of the 70 A and the 160 A filaments in the motile extract suggest that both types of filaments may be required for movement. These two types of filaments, 50-70 A and 160 A, are also present in the cytoplasm of intact amebas. Fixed cells could not be used to study the distribution of these filaments during natural ameboid movement because of difficulties in preserving the normal structure of the ameba during preparation for electron microscopy.
Chebrolu, Kranthi K; Jayaprakasha, G K; Jifon, J; Patil, Bhimanagouda S
2011-07-15
Understanding the factors influencing flavanone extraction is critical knowledge for sample preparation. The present study focused on extraction parameters such as solvent, heat, centrifugal speed, centrifuge temperature, sample-to-solvent ratio, extraction cycles, sonication time, microwave time and their interactions during sample preparation. Flavanones were analyzed by high performance liquid chromatography (HPLC) and later identified by liquid chromatography and mass spectrometry (LC-MS). The five flavanones were eluted by a binary mobile phase with 0.03% phosphoric acid and acetonitrile in 20 min and detected at 280 nm, and later identified by mass spectral analysis. Dimethylsulfoxide (DMSO) and dimethyl formamide (DMF) had optimum extraction levels of narirutin, naringin, neohesperidin, didymin and poncirin compared to methanol (MeOH), ethanol (EtOH) and acetonitrile (ACN). Centrifuge temperature had a significant effect on flavanone distribution in the extracts. The DMSO and DMF extracts had homogeneous distribution of flavanones compared to MeOH, EtOH and ACN after centrifugation. Furthermore, ACN showed clear phase separation due to differential densities in the extracts after centrifugation. The number of extraction cycles significantly increased the flavanone levels during extraction. Modulating the sample to solvent ratio increased naringin quantity in the extracts. Current research provides critical information on the role of centrifuge temperature, extraction solvent and their interactions on flavanone distribution in extracts. Published by Elsevier B.V.
The extraction characteristic of Au-Ag from Au concentrate by thiourea solution
NASA Astrophysics Data System (ADS)
Kim, Bongju; Cho, Kanghee; On, Hyunsung; Choi, Nagchoul; Park, Cheonyoung
2013-04-01
The cyanidation process has been used commercially for the past 100 years, but there are ores that are not amenable to treatment by cyanide. Interest in alternative lixiviants, such as thiourea, halogens, thiosulfate and malononitrile, has been revived as a result of a major increase in gold price, which has stimulated new developments in extraction technology, combined with environmental concern. The Au extraction process using the thiourea solvent has many advantages over the cyanidation process, including higher leaching rates, faster extraction times and lower toxicity. The purpose of this study was to investigate the extraction characteristics of Au-Ag from two different Au concentrates (sulfuric acid washed and roasted) under various experimental conditions (thiourea concentration, pH of solvent, temperature) using a thiourea solvent. The extraction experiments showed that Au-Ag extraction was a fast process, reaching equilibrium (maximum extraction rate) within 30 min. The Au-Ag extraction rate was higher for the roasted concentrate than for the sulfuric acid washed concentrate. The highest Au-Ag extraction rates from the roasted concentrate (Au: 70.87%, Ag: 98.12%) were obtained as the thiourea concentration increased, the pH decreased and the extraction temperature increased. This study provides basic knowledge of the extraction method for studies on thiourea as a possible environmentally and economically favourable route for Au-Ag recovery, including hydrometallurgy.
ERIC Educational Resources Information Center
Tchoshanov, Mourat; Cruz, Maria D.; Huereca, Karla; Shakirova, Kadriya; Shakirova, Liliana; Ibragimova, Elena N.
2017-01-01
This mixed methods study examined an association between cognitive types of teachers' mathematical content knowledge and students' performance in lower secondary schools (grades 5 through 9). Teachers (N = 90) completed the Teacher Content Knowledge Survey (TCKS), which consisted of items measuring different cognitive types of teacher knowledge.…
Turker, Hakan; Yıldırım, Arzu Birinci
2015-01-01
The antibacterial activity of ethanolic and aqueous crude extracts from 36 plants in Turkey, including seven endemic species, against fish pathogens was studied using the disc diffusion assay. The extract that was most active against all microbial strains, except Aeromonas salmonicida, was that of Dorycnium pentaphyllum. Some of the extracts also showed a very broad spectrum of potent antimicrobial activity. The extract of Anemone nemorosa showed the highest antimicrobial activity against Vibrio anguillarum. V. anguillarum, a Gram-negative bacterium, appeared to be the most susceptible to the plant extracts used in this experiment. To the best of our knowledge, this is the first report on the antimicrobial activity of 11 of the studied plants. The preliminary screening assay indicated that some of the Turkish plants with antibacterial properties may offer alternative therapeutic agents against bacterial infections in the aquaculture industry. PMID:26019642
The Australian Pharmaceutical Benefits Scheme data collection: a practical guide for researchers.
Mellish, Leigh; Karanges, Emily A; Litchfield, Melisa J; Schaffer, Andrea L; Blanch, Bianca; Daniels, Benjamin J; Segrave, Alicia; Pearson, Sallie-Anne
2015-11-02
The Pharmaceutical Benefits Scheme (PBS) is Australia's national drug subsidy program. This paper provides a practical guide to researchers using PBS data to examine prescribed medicine use. Excerpts of the PBS data collection are available in a variety of formats. We describe the core components of four publicly available extracts (the Australian Statistics on Medicines, PBS statistics online, section 85 extract, under co-payment extract). We also detail common analytical challenges and key issues regarding the interpretation of utilisation using the PBS collection and its various extracts. Research using routinely collected data is increasing internationally. PBS data are a valuable resource for Australian pharmacoepidemiological and pharmaceutical policy research. A detailed knowledge of the PBS, the nuances of data capture, and the extracts available for research purposes are necessary to ensure robust methodology, interpretation, and translation of study findings into policy and practice.
NASA Astrophysics Data System (ADS)
Aziz, Aamer; Hu, Qingmao; Nowinski, Wieslaw L.
2004-04-01
The human cerebral ventricular system is a complex structure that is essential for well-being and whose changes reflect disease. It is clinically imperative that the ventricular system be studied in detail, and for this reason computer-assisted algorithms need to be developed. We have developed a novel (patent pending) and robust anatomical knowledge-driven algorithm for automatic extraction of the cerebral ventricular system from MRI. The algorithm is not only unique in its image processing aspect but also incorporates knowledge of neuroanatomy, radiological properties, and variability of the ventricular system. The ventricular system is divided into six 3D regions based on the anatomy and its variability. Within each ventricular region a 2D region of interest (ROI) is defined and is then further subdivided into sub-regions. Various strict conditions that detect and prevent leakage into the extra-ventricular space are specified for each sub-region based on anatomical knowledge. Each ROI is processed as follows: calculate its local statistics and the local intensity ranges of cerebrospinal fluid, grey matter and white matter; set a seed point within the ROI; grow the region directionally in 3D; check anti-leakage conditions and correct the growing if leakage occurs; and connect all unconnected grown regions by relaxing the growing conditions. The algorithm was tested qualitatively and quantitatively on normal and pathological MRI cases and worked well. In this paper we discuss in more detail the inclusion of anatomical knowledge in the algorithm and the usefulness of our approach from a clinical perspective.
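The abstract above describes seeded, directional region growing with anti-leakage checks. The following minimal sketch shows intensity-bounded 3D region growing with a crude size-based leak guard on a synthetic volume; it is not the authors' patented algorithm, and the seed, intensity bounds, and leak threshold are illustrative assumptions.

```python
# Minimal sketch of intensity-bounded 3D region growing with a crude leak check.
# Not the authors' algorithm; seed, bounds and the leak criterion are assumptions.
from collections import deque
import numpy as np

def grow_region(volume, seed, lo, hi, max_voxels=50000):
    """Grow a 6-connected region from `seed`, keeping voxels with lo <= value <= hi.
    Growing aborts (assumed leak) if the region exceeds max_voxels."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and lo <= volume[nz, ny, nx] <= hi):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
        if mask.sum() > max_voxels:      # anti-leakage guard
            raise RuntimeError("region appears to leak into extra-ventricular space")
    return mask

# Toy usage on a synthetic volume with a dark "ventricle" block.
vol = np.full((40, 40, 40), 200.0)
vol[15:25, 15:25, 15:25] = 30.0          # CSF-like low intensity
ventricle = grow_region(vol, seed=(20, 20, 20), lo=0, hi=60)
print(ventricle.sum(), "voxels grown")
```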
2014-01-01
Automatic reconstruction of metabolic pathways for an organism from genomics and transcriptomics data has been a challenging and important problem in bioinformatics. Traditionally, known reference pathways can be mapped into organism-specific ones based on genome annotation and protein homology. However, this simple knowledge-based mapping method might produce incomplete pathways and generally cannot predict unknown new relations and reactions. In contrast, ab initio metabolic network construction methods can predict novel reactions and interactions, but their accuracy tends to be low, leading to many false positives. Here we combine existing pathway knowledge and a new ab initio Bayesian probabilistic graphical model in a novel fashion to improve automatic reconstruction of metabolic networks. Specifically, we built a knowledge database containing known, individual gene/protein interactions and metabolic reactions extracted from existing reference pathways. Known reactions and interactions were then used as constraints for Bayesian network learning methods to predict metabolic pathways. Using individual reactions and interactions extracted from different pathways of many organisms to guide pathway construction is new and improves both the coverage and accuracy of metabolic pathway construction. We applied this probabilistic knowledge-based approach to construct metabolic networks from yeast gene expression data and compared its results with 62 known metabolic networks in the KEGG database. The experiment showed that the method improved the coverage of metabolic network construction over the traditional reference pathway mapping method and was more accurate than pure ab initio methods. PMID:25374614
Ruhoff, J.R.; Winters, C.E.
1957-11-12
A process is described for the purification of uranyl nitrate by an extraction process. A solution is formed consisting of uranyl nitrate, together with the associated impurities arising from the HNO3 leaching of the ore, in an organic solvent such as ether. If this were back extracted with water to remove the impurities, large quantities of uranyl nitrate would also be extracted and lost. To prevent this, the impure organic solution is extracted with small amounts of saturated aqueous solutions of uranyl nitrate, thereby effectively accomplishing the removal of impurities while not allowing any further extraction of the uranyl nitrate from the organic solvent. After the impurities have been removed, the uranium values are extracted with large quantities of water.
Extraction of small boat harmonic signatures from passive sonar.
Ogden, George L; Zurk, Lisa M; Jones, Mark E; Peterson, Mary E
2011-06-01
This paper investigates the extraction of acoustic signatures from small boats using a passive sonar system. Noise radiated from a small boat consists of broadband noise and harmonically related tones that correspond to engine and propeller specifications. A signal processing method to automatically extract the harmonic structure of noise radiated from small boats is developed. The Harmonic Extraction and Analysis Tool (HEAT) estimates the instantaneous fundamental frequency of the harmonic tones, refines the fundamental frequency estimate using a Kalman filter, and automatically extracts the amplitudes of the harmonic tonals to generate a harmonic signature for the boat. Results are presented that show the HEAT algorithm's ability to extract these signatures. © 2011 Acoustical Society of America
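A toy version of the signature-extraction idea is sketched below: estimate the fundamental frequency from the spectrum of a synthetic "boat" signal, then sample the amplitudes at its harmonics. The published HEAT tool also refines the fundamental over time with a Kalman filter; that step is omitted here, and the sample rate, fundamental frequency, and harmonic structure are made up.

```python
# Rough sketch of harmonic-signature extraction from a synthetic signal:
# find the fundamental from the spectrum, then sample amplitudes at its harmonics.
# The Kalman-filter refinement of the published HEAT tool is omitted.
import numpy as np

fs = 8000.0                      # sample rate (assumed)
t = np.arange(0, 2.0, 1 / fs)
f0 = 37.5                        # hypothetical shaft/firing rate in Hz
x = sum((1.0 / k) * np.sin(2 * np.pi * k * f0 * t) for k in range(1, 6))
x += 0.2 * np.random.randn(t.size)               # broadband noise

spectrum = np.abs(np.fft.rfft(x * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Crude fundamental estimate: strongest peak between 5 and 100 Hz.
band = (freqs > 5.0) & (freqs < 100.0)
f0_est = freqs[band][np.argmax(spectrum[band])]

harmonic_signature = []
for k in range(1, 6):
    idx = np.argmin(np.abs(freqs - k * f0_est))   # nearest bin to the k-th harmonic
    harmonic_signature.append(spectrum[idx])

print(f"estimated fundamental: {f0_est:.2f} Hz")
print("relative harmonic amplitudes:",
      np.round(np.array(harmonic_signature) / max(harmonic_signature), 2))
```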
Comparison of results from simple expressions for MOSFET parameter extraction
NASA Technical Reports Server (NTRS)
Buehler, M. G.; Lin, Y.-S.
1988-01-01
In this paper results are compared from a parameter extraction procedure applied to the linear, saturation, and subthreshold regions for enhancement-mode MOSFETs fabricated in a 3-micron CMOS process. The results indicate that the extracted parameters differ significantly depending on the extraction algorithm and the distribution of I-V data points. It was observed that KP values vary by 30 percent, VT values differ by 50 mV, and Delta L values differ by 1 micron. Thus for acceptance of wafers from foundries and for modeling purposes, the extraction method and data point distribution must be specified. In this paper measurement and extraction procedures that will allow a consistent evaluation of measured parameters are discussed.
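For context, one common linear-region extraction algorithm of the kind such comparisons cover fits I_D versus V_GS to a straight line at a small fixed V_DS and reads KP and VT from the slope and intercept. The sketch below illustrates that single algorithm on synthetic data; the device values are invented, and this is not necessarily the exact procedure used in the paper.

```python
# Sketch of one simple linear-region extraction: fit I_D vs V_GS to a straight line
# at small fixed V_DS and recover KP and VT from slope and intercept.
# Device parameters below are made-up toy values.
import numpy as np

W_over_L = 10.0
VDS = 0.1                                    # small drain bias (V)
VT_true, KP_true = 0.8, 50e-6                # "unknown" device parameters

vgs = np.linspace(1.0, 5.0, 17)
id_meas = KP_true * W_over_L * ((vgs - VT_true) * VDS - VDS**2 / 2)
id_meas += 1e-7 * np.random.randn(vgs.size)  # measurement noise

slope, intercept = np.polyfit(vgs, id_meas, 1)
KP_est = slope / (W_over_L * VDS)
VT_est = -intercept / slope - VDS / 2        # include the V_DS/2 correction

print(f"KP = {KP_est * 1e6:.1f} uA/V^2, VT = {VT_est:.3f} V")
```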
Terreaux, Christian; Wang, Qi; Ioset, Jean-Robert; Ndjoko, Karine; Grimminger, Wolf; Hostettmann, Kurt
2002-04-01
The hydroalcoholic extract of Tinnevelli senna is widely used as a laxative phytomedicine. In order to improve the knowledge of the chemical composition of this extract, LC/MS and LC/MS(n) studies were performed, allowing the on-line identification of most of the known constituents, i. e., flavonoids, anthraquinones and the typical dianthronic sennosides. However, the identity of four compounds could not be ascertained on-line under the given LC/MS conditions. These substances were isolated and their structures elucidated as kaempferol, the naphthalene derivative tinnevellin 8-glucoside and two new carboxylated benzophenone glucosides.
NASA Technical Reports Server (NTRS)
Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.
1989-01-01
Several techniques to perform static and dynamic load balancing techniques for vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same, or have similar computational characteristics. These techniques are evaluated by applying them on a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains when these data decomposition and load balancing techniques are used are significant and the overhead of using these techniques is minimal.
XAFS investigation of polyamidoxime-bound uranyl contests the paradigm from small molecule studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayes, Richard T.; Piechowicz, Marek; Lin, Zekai
In this study, limited resource availability and population growth have motivated interest in harvesting valuable metals from unconventional reserves, but developing selective adsorbents for this task requires structural knowledge of metal binding environments. Amidoxime polymers have been identified as the most promising platform for large-scale extraction of uranium from seawater. However, despite more than 30 years of research, the uranyl coordination environment on these adsorbents has not been positively identified. We report the first XAFS investigation of polyamidoxime-bound uranyl, with EXAFS fits suggesting a cooperative chelating model, rather than the tridentate or η2 motifs proposed by small molecule and computational studies. Samples exposed to environmental seawater also display a feature consistent with a μ2-oxo-bridged transition metal in the uranyl coordination sphere, suggesting in situ formation of a specific binding site or mineralization of uranium on the polymer surface. These unexpected findings challenge several long-held assumptions and have significant implications for the development of polymer adsorbents with high selectivity.
In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function
NASA Astrophysics Data System (ADS)
Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir
2018-03-01
We present results of inflight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We make a full array image of a point-like source by extracting a pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the raytracing model of 1.2 arcmin half-power diameter is consistent with an image of the observed event distributions across pixels. The ratio between the Crab pulsar image and the raytracing shows scatter from pixel to pixel that is 40% or less in all except one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1 σ). In the central 16 pixels, the corresponding ratio is 15% with an error of 6%.
Bright half-cycle optical radiation from relativistic wavebreaking
NASA Astrophysics Data System (ADS)
Miao, Bo; Goers, Andy; Hine, George; Feder, Linus; Salehi, Fatholah; Wahlstrand, Jared; Milchberg, Howard
2015-11-01
Wavebreaking injection of electrons into relativistic plasma wakes generated in near-critical density hydrogen plasmas by sub-terawatt laser pulses is observed to generate an extremely energetic and ultra-broadband radiation flash. The flash is coherent, with a bandwidth of Δλ/λ ~ 0.7, consistent with half-cycle optical emission of duration ~1 fs from violent unidirectional acceleration of electrons to light speed from rest over a distance much less than the radiated wavelength. We studied the temporal duration and coherence of the flash by interfering it in the frequency domain with a well-characterized Xe supercontinuum pulse. Fringes across the full flash spectrum were observed with high visibility, and the extracted flash spectral phase supports it being a nearly transform-limited pulse. To our knowledge, this is the first evidence of bright half-cycle optical emission. This research is supported by the Defense Threat Reduction Agency, the US Department of Energy, and the Air Force Office of Scientific Research.
A Modeling Approach for Burn Scar Assessment Using Natural Features and Elastic Property
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsap, L V; Zhang, Y; Goldgof, D B
2004-04-02
A modeling approach is presented for quantitative burn scar assessment. Emphases are given to: (1) constructing a finite element model from natural image features with an adaptive mesh, and (2) quantifying the Young's modulus of scars using the finite element model and the regularization method. A set of natural point features is extracted from the images of burn patients. A Delaunay triangle mesh is then generated that adapts to the point features. A 3D finite element model is built on top of the mesh with the aid of range images providing the depth information. The Young's modulus of scars is quantified with a simplified regularization functional, assuming that knowledge of the scar's geometry is available. The consistency between the Relative Elasticity Index and the physician's rating based on the Vancouver Scale (a relative scale used to rate burn scars) indicates that the proposed modeling approach has high potential for image-based quantitative burn scar assessment.
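Only the meshing step is illustrated below: a Delaunay triangulation over a set of 2D point features, built with scipy.spatial.Delaunay, where random points stand in for the extracted image features. The finite element model and the Young's modulus regularization themselves are not reproduced here.

```python
# Sketch of the meshing step only: a Delaunay triangulation over 2D point features.
# Random points stand in for image features; the FE model and regularisation are omitted.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(40, 2))    # hypothetical feature locations (pixels)

mesh = Delaunay(points)
print("triangles:", len(mesh.simplices))

# Each simplex indexes three feature points; in a finite element formulation these
# triangles would carry the element stiffness.
first = mesh.simplices[0]
print("vertices of first triangle:\n", points[first])
```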
Learning target masks in infrared linescan imagery
NASA Astrophysics Data System (ADS)
Fechner, Thomas; Rockinger, Oliver; Vogler, Axel; Knappe, Peter
1997-04-01
In this paper we propose a neural network based method for the automatic detection of ground targets in airborne infrared linescan imagery. Instead of using a dedicated feature extraction stage followed by a classification procedure, we propose the following three-step scheme: In the first step of the recognition process, the input image is decomposed into its pyramid representation, thus obtaining a multiresolution signal representation. At the lowest three levels of the Laplacian pyramid, a neural network filter of moderate size is trained to indicate the target location. The last step consists of a fusion process over the several neural network filters to obtain the final result. To perform this fusion we use a belief network to combine the various filter outputs in a statistically meaningful way. In addition, the belief network allows the integration of further knowledge about the image domain. By applying this multiresolution recognition scheme, we obtain nearly scale- and rotation-invariant target recognition with a significantly decreased false alarm rate compared with a single-resolution target recognition scheme.
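The multiresolution decomposition underlying the scheme can be sketched as a simple Laplacian pyramid built with Gaussian smoothing and 2x downsampling, as below; the neural network filters and the belief-network fusion are not shown, and the image is a random stand-in for a linescan frame.

```python
# Sketch of a Laplacian pyramid: Gaussian smoothing, 2x downsampling, and band-pass
# detail images per level. Filter width and level count are arbitrary assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(image, levels=3, sigma=1.0):
    pyramid = []
    current = image.astype(float)
    for _ in range(levels):
        smoothed = gaussian_filter(current, sigma)
        down = smoothed[::2, ::2]
        up = zoom(down, (current.shape[0] / down.shape[0],
                         current.shape[1] / down.shape[1]), order=1)
        pyramid.append(current - up)     # band-pass detail at this scale
        current = down
    pyramid.append(current)              # low-pass residual
    return pyramid

img = np.random.rand(128, 128)           # stand-in for a linescan frame
for i, level in enumerate(laplacian_pyramid(img)):
    print(f"level {i}: shape {level.shape}")
```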
Discovering Peripheral Arterial Disease Cases from Radiology Notes Using Natural Language Processing
Savova, Guergana K.; Fan, Jin; Ye, Zi; Murphy, Sean P.; Zheng, Jiaping; Chute, Christopher G.; Kullo, Iftikhar J.
2010-01-01
As part of the Electronic Medical Records and Genomics Network, we applied, extended and evaluated an open source clinical Natural Language Processing system, Mayo’s Clinical Text Analysis and Knowledge Extraction System, for the discovery of peripheral arterial disease cases from radiology reports. The manually created gold standard consisted of 223 positive, 19 negative, 63 probable and 150 unknown cases. Overall accuracy agreement between the system and the gold standard was 0.93 as compared to a named entity recognition baseline of 0.46. Sensitivity for the positive, probable and unknown cases was 0.93–0.96, and for the negative cases was 0.72. Specificity and negative predictive value for all categories were in the 90’s. The positive predictive value for the positive and unknown categories was in the high 90’s, for the negative category was 0.84, and for the probable category was 0.63. We outline the main sources of errors and suggest improvements. PMID:21347073
A semi-supervised learning framework for biomedical event extraction based on hidden topics.
Zhou, Deyu; Zhong, Dayou
2015-05-01
Scientists have devoted decades of effort to understanding the interactions between proteins or RNA production. This information could extend current knowledge of drug reactions or of the development of certain diseases. Nevertheless, due to its lack of explicit structure, the life science literature, one of the most important sources of this information, is difficult for computer-based systems to access. Therefore, biomedical event extraction, which automatically acquires knowledge of molecular events from research articles, has recently attracted community-wide efforts. Most approaches are based on statistical models, requiring large-scale annotated corpora to precisely estimate the models' parameters; such corpora are usually difficult to obtain in practice. Therefore, employing un-annotated data through semi-supervised learning for biomedical event extraction is a feasible solution and is attracting growing interest. In this paper, a semi-supervised learning framework based on hidden topics for biomedical event extraction is presented. In this framework, sentences in the un-annotated corpus are elaborately and automatically assigned event annotations based on their distances to the sentences in the annotated corpus. More specifically, not only the structures of the sentences, but also the hidden topics embedded in the sentences, are used to describe the distance. The sentences and newly assigned event annotations, together with the annotated corpus, are employed for training. Experiments were conducted on the multi-level event extraction corpus, a gold standard corpus. Experimental results show that the proposed framework achieves an improvement of more than 2.2% in F-score on biomedical event extraction when compared to the state-of-the-art approach. The results suggest that by incorporating un-annotated data, the proposed framework indeed improves the performance of the state-of-the-art event extraction system, and that the similarity between sentences can be precisely described by the hidden topics and structures of the sentences. Copyright © 2015 Elsevier B.V. All rights reserved.
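A toy illustration of the topic-distance idea follows: project sentences into a hidden-topic space with LDA and transfer the event label of the nearest annotated sentence to each un-annotated one. The actual framework also exploits sentence structure; the corpus, labels, and topic count below are invented, and scikit-learn is assumed.

```python
# Toy illustration: transfer event labels from annotated to un-annotated sentences
# using distances in a hidden-topic (LDA) space. Corpus and labels are invented;
# the paper's framework also uses sentence structure, not shown here.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

annotated = ["protein A phosphorylates protein B",
             "gene X is transcribed after stimulation"]
labels = ["Phosphorylation", "Transcription"]
unannotated = ["kinase mediated phosphorylation of the receptor was observed"]

vec = CountVectorizer()
X = vec.fit_transform(annotated + unannotated)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(X)                      # topic mixture per sentence

ann_topics, unann_topics = topics[:len(annotated)], topics[len(annotated):]
for sent, tvec in zip(unannotated, unann_topics):
    dists = np.linalg.norm(ann_topics - tvec, axis=1)
    print(sent, "->", labels[int(np.argmin(dists))])
```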
Text feature extraction based on deep learning: a review.
Liang, Hong; Sun, Xiao; Sun, Yunlei; Gao, Yuan
2017-01-01
Selection of text feature items is a basic and important task for text mining and information retrieval. Traditional methods of feature extraction require handcrafted features, and hand-designing an effective feature is a lengthy process; for new applications, deep learning instead makes it possible to acquire new effective feature representations from training data. As a new feature extraction method, deep learning has made achievements in text mining. The major difference between deep learning and conventional methods is that deep learning automatically learns features from big data instead of adopting handcrafted features, which mainly depend on the prior knowledge of designers and cannot take full advantage of big data. Deep learning can automatically learn feature representations from big data, involving millions of parameters. This paper first outlines the common methods used in text feature extraction, then surveys frequently used deep learning methods for text feature extraction and their applications, and finally forecasts the application of deep learning in feature extraction.
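The contrast between handcrafted and learned features can be made concrete with the small sketch below, which places a fixed TF-IDF featurizer next to an (untrained) mean-of-embeddings encoder standing in for a deep model; the corpus and dimensions are arbitrary toy choices, and scikit-learn and PyTorch are assumed to be available.

```python
# Minimal contrast between a handcrafted feature (TF-IDF) and a learned one
# (an untrained mean-of-embeddings encoder standing in for a deep model).
import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["deep learning learns features from data",
        "handcrafted features depend on prior knowledge"]

# Handcrafted: TF-IDF weights computed by a fixed formula.
tfidf = TfidfVectorizer().fit_transform(docs)
print("TF-IDF feature matrix:", tfidf.shape)

# Learned: token embeddings that would be tuned by backpropagation in a real model.
vocab = {w: i for i, w in enumerate(sorted({w for d in docs for w in d.split()}))}
encoder = nn.EmbeddingBag(num_embeddings=len(vocab), embedding_dim=8, mode="mean")

token_ids = torch.tensor([vocab[w] for d in docs for w in d.split()])
offsets = torch.tensor([0, len(docs[0].split())])   # start index of each document
features = encoder(token_ids, offsets)
print("learned feature matrix:", tuple(features.shape))
```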
Methanolic Extract of Ganoderma lucidum Induces Autophagy of AGS Human Gastric Tumor Cells.
Reis, Filipa S; Lima, Raquel T; Morales, Patricia; Ferreira, Isabel C F R; Vasconcelos, M Helena
2015-09-29
Ganoderma lucidum is one of the most widely studied mushroom species, particularly with regard to its medicinal properties. Previous studies (including those from some of us) have shown some evidence that the methanolic extract of G. lucidum affects cellular autophagy. However, it was not known if it induces autophagy or decreases the autophagic flux. The treatment of a gastric adenocarcinoma cell line (AGS) with the mushroom extract increased the formation of autophagosomes (vacuoles typical of autophagy). Moreover, the cellular levels of LC3-II were also increased, and the cellular levels of p62 decreased, confirming that the extract affects cellular autophagy. When the cells were treated with the extract together with lysosomal protease inhibitors, the cellular levels of LC3-II and p62 increased. The results obtained proved that, in AGS cells, the methanolic extract of G. lucidum causes an induction of autophagy, rather than a reduction in the autophagic flux. To our knowledge, this is the first study proving that statement.
Coordination of knowledge in judging animated motion
NASA Astrophysics Data System (ADS)
Thaden-Koch, Thomas C.; Dufresne, Robert J.; Mestre, Jose P.
2006-12-01
Coordination class theory is used to explain college students’ judgments about animated depictions of moving objects. diSessa’s coordination class theory models a “concept” as a complex knowledge system that can reliably determine a particular type of information in widely varying situations. In the experiment described here, fifty individually interviewed college students judged the realism of two sets of computer animations depicting balls rolling on a pair of tracks. The judgments of students from an introductory physics class were strongly affected by the number of balls depicted (one or two), but the judgments of students from an educational psychology class were not. Coordination analysis of interview transcripts supports the interpretation that physics students’ developing physics knowledge led them to consistently miss or ignore some observations that the other students consistently paid attention to. The analysis highlights the context sensitivity and potential fragility of coordination systems, and leads to the conclusion that students’ developing knowledge systems might not necessarily result in consistently improving performance.
De Angelis, Gino; Davies, Barbara; King, Judy; McEwan, Jessica; Cavallo, Sabrina; Loew, Laurianne; Wells, George A; Brosseau, Lucie
2016-11-30
The transfer of research knowledge into clinical practice can be a continuous challenge for researchers. Information and communication technologies, such as websites and email, have emerged as popular tools for the dissemination of evidence to health professionals. The objective of this systematic review was to identify research on health professionals' perceived usability and practice behavior change of information and communication technologies for the dissemination of clinical practice guidelines. We used a systematic approach to retrieve and extract data about relevant studies. We identified 2248 citations, of which 21 studies met criteria for inclusion; 20 studies were randomized controlled trials, and 1 was a controlled clinical trial. The following information and communication technologies were evaluated: websites (5 studies), computer software (3 studies), Web-based workshops (2 studies), computerized decision support systems (2 studies), electronic educational game (1 study), email (2 studies), and multifaceted interventions that consisted of at least one information and communication technology component (6 studies). Website studies demonstrated significant improvements in perceived usefulness and perceived ease of use, but not for knowledge, reducing barriers, and intention to use clinical practice guidelines. Computer software studies demonstrated significant improvements in perceived usefulness, but not for knowledge and skills. Web-based workshop and email studies demonstrated significant improvements in knowledge, perceived usefulness, and skills. An electronic educational game intervention demonstrated a significant improvement from baseline in knowledge after 12 and 24 weeks. Computerized decision support system studies demonstrated variable findings for improvement in skills. Multifaceted interventions demonstrated significant improvements in beliefs about capabilities, perceived usefulness, and intention to use clinical practice guidelines, but variable findings for improvements in skills. Most multifaceted studies demonstrated significant improvements in knowledge. The findings suggest that health professionals' perceived usability and practice behavior change vary by type of information and communication technology. Heterogeneity and the paucity of properly conducted studies did not allow for a clear comparison between studies and a conclusion on the effectiveness of information and communication technologies as a knowledge translation strategy for the dissemination of clinical practice guidelines. ©Gino De Angelis, Barbara Davies, Judy King, Jessica McEwan, Sabrina Cavallo, Laurianne Loew, George A Wells, Lucie Brosseau. Originally published in JMIR Medical Education (http://mededu.jmir.org), 30.11.2016.
SOLVENT EXTRACTION OF RUTHENIUM
Hyman, H.H.; Leader, G.R.
1959-07-14
The separation of ruthenium from aqueous solutions by solvent extraction is described. According to the invention, a nitrite selected from the group consisting of alkali nitrite and alkaline earth nitrite, in an equimolecular quantity with regard to the quantity of ruthenium present, is added to an aqueous solution containing ruthenium tetranitrate to form a ruthenium complex. Adding an organic solvent such as ethyl ether to the resulting mixture selectively extracts the ruthenium complex.
Boehm, A.B.; Griffith, J.; McGee, C.; Edge, T.A.; Solo-Gabriele, H. M.; Whitman, R.; Cao, Y.; Getrich, M.; Jay, J.A.; Ferguson, D.; Goodwin, K.D.; Lee, C.M.; Madison, M.; Weisberg, S.B.
2009-01-01
Aims: The absence of standardized methods for quantifying faecal indicator bacteria (FIB) in sand hinders comparison of results across studies. The purpose of the study was to compare methods for extraction of faecal bacteria from sands and recommend a standardized extraction technique. Methods and Results: Twenty-two methods of extracting enterococci and Escherichia coli from sand were evaluated, including multiple permutations of hand shaking, mechanical shaking, blending, sonication, number of rinses, settling time, eluant-to-sand ratio, eluant composition, prefiltration and type of decantation. Tests were performed on sands from California, Florida and Lake Michigan. Most extraction parameters did not significantly affect bacterial enumeration. ANOVA revealed significant effects of eluant composition and blending, with both sodium metaphosphate buffer and blending producing reduced counts. Conclusions: The simplest extraction method that produced the highest FIB recoveries consisted of 2 min of hand shaking in phosphate-buffered saline or deionized water, a 30-s settling time, one-rinse step and a 10 : 1 eluant volume to sand weight ratio. This result was consistent across the sand compositions tested in this study but could vary for other sand types. Significance and Impact of the Study: Method standardization will improve the understanding of how sands affect surface water quality. © 2009 The Society for Applied Microbiology.
Knight, Tamsin L; Swindells, Chris M; Craddock, Andrew M; Maharaj, Vinesh J; Buchwald-Werner, Sybille; Ismaili, Smail Alaoui; McWilliam, Simon C
2012-01-01
Hoodia gordonii (Masson) Sweet ex Decne., is a succulent shrub, indigenous to the arid regions of southern Africa. Indigenous people have historically utilised certain species of Hoodia, including H. gordonii, as a source of food and water. Studies by the Council for Scientific and Industrial Research (CSIR, South Africa) identified that extracts of H. gordonii had appetite suppressant activity associated with specific steroid glycosides. A programme to develop weight management products based around this discovery was implemented in 1998. An agronomy programme was established which demonstrated that it was possible to cultivate this novel crop on a commercial scale (in excess of 70 ha). In parallel, a food grade manufacturing process was developed consisting of four main steps: harvesting of H. gordonii plant stems, comminution, drying under controlled conditions and extraction using food grade solvents. Appropriate Quality Control (QC) procedures were developed. The extraction process is capable of delivering a consistent composition despite natural variations in the composition of the dried H. gordonii. Specifications were developed for the resulting extract. The intended use of the standardised H. gordonii extract was as a functional food ingredient for weight management products. Other development studies on characterisation, toxicology and pharmacology are reported separately. Copyright © 2011. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Hamidson, H.; Damiri, N.; Angraini, E.
2018-01-01
This research was conducted to study the effect of the application of several extracts of medicinal plants on the incidence of mosaic disease caused by Cucumber Mosaic Virus (CMV) infection in chili (Capsicum annuum L.) plantations. A Randomized Block Design with eight treatments, including a control, was used throughout the experiment. Treatments consisted of Azadiracta indica (A), Piper bitle (B), Cymbopogon citrates (C), Curcuma domestica (D), Averroa bilimbi (E), Datura stramonium (F), Annona muricata (G) and control (H). Each treatment consisted of three replications. The parameters observed were the incidence of mosaic attack due to CMV, disease severity, plant height, wet and dry weight, and production (number of fruits and total fruit weight) per plant. Results showed that the application of medicinal plant extracts reduced the disease severity due to CMV. Extracts of Annona muricata and Datura stramonium were the most effective in suppressing disease severity caused by the virus, as they were significantly different from the control and from a number of other treatments. The medicinal plant extracts were also found to increase plant height, total plant weight, fruit number and fruit weight. Extracts of Curcuma domestica, Piper bitle and Cymbopogon citrates were the third highest in fruit number and weight and significantly different from the control.
RECOVERY OF METAL VALUES FROM AQUEOUS SOLUTIONS BY SOLVENT EXTRACTION
Moore, R.L.
1959-09-01
An organic solvent mixture is described for extracting actinides from aqueous solutions; the solvent mixture consists of from 10 to 25% by volume of tributyl phosphate and the remainder a chlorine-fluorine-substituted saturated hydrocarbon having two carbon atoms in the molecule.
Reid, Kendra R; Kennedy, Lonnie J; Crick, Eric W; Conte, Eric D
2002-10-25
Presented is a solid-phase extraction sorbent material composed of cationic alkyltrimethylammonium surfactants attached to a strong cation-exchange resin via ion-exchange. The original hydrophilic cation-exchange resin is made hydrophobic by covering the surface with alkyl chains from the hydrophobic portion of the surfactant. The sorbent material now has a better ability to extract hydrophobic molecules from aqueous samples. The entire stationary phase (alkyltrimethylammonium surfactant) is removed along with the analyte during the elution step. The elution step requires a mild elution solvent consisting of 0.25 M Mg2+ in a 50% 2-propanol solution. The main advantage of using a removable stationary phase is that traditionally utilized toxic elution solvents such as methylene chloride, which are necessary to efficiently release strongly hydrophobic species from SPE stationary phases, may now be avoided. Also, the final extract is directly compatible with reversed-phase liquid chromatography. The performance of this procedure is presented using pyrene as a test molecule.
Data Treatment for LC-MS Untargeted Analysis.
Riccadonna, Samantha; Franceschi, Pietro
2018-01-01
Liquid chromatography-mass spectrometry (LC-MS) untargeted experiments require complex chemometrics strategies to extract information from the experimental data. Here we discuss "data preprocessing", the set of procedures performed on the raw data to produce a data matrix which will be the starting point for the subsequent statistical analysis. Data preprocessing is a crucial step on the path to knowledge extraction, which should be carefully controlled and optimized in order to maximize the output of any untargeted metabolomics investigation.
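As a rough illustration of the preprocessing step described above, the sketch below aligns detected peaks from several runs into a single sample-by-feature data matrix by coarse binning of m/z and retention time. Real pipelines (e.g. XCMS, MZmine) use far more sophisticated peak picking and alignment; the peak list, tolerances and column names here are invented.

```python
# Minimal, assumption-laden sketch of building an LC-MS data matrix from peak lists.
import pandas as pd

peaks = pd.DataFrame({
    "sample":    ["s1", "s1", "s2", "s2", "s3"],
    "mz":        [181.071, 255.232, 181.070, 255.234, 181.072],
    "rt":        [312.4, 640.9, 313.1, 641.5, 312.8],          # seconds
    "intensity": [1.2e6, 4.5e5, 9.8e5, 5.1e5, 1.1e6],
})

# Coarse binning: 0.01 Da in m/z, 5 s in retention time (illustrative tolerances only).
peaks["feature"] = (peaks["mz"].round(2).astype(str) + "@" +
                    (peaks["rt"] // 5 * 5).astype(int).astype(str))

data_matrix = peaks.pivot_table(index="sample", columns="feature",
                                values="intensity", aggfunc="sum").fillna(0)
print(data_matrix)   # rows: samples, columns: aligned features
```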
Extracting BI-RADS Features from Portuguese Clinical Texts
Nassif, Houssam; Cunha, Filipe; Moreira, Inês C.; Cruz-Correia, Ricardo; Sousa, Eliana; Page, David; Burnside, Elizabeth; Dutra, Inês
2013-01-01
In this work we build the first BI-RADS parser for Portuguese free texts, modeled after existing approaches to extract BI-RADS features from English medical records. Our concept finder uses a semantic grammar based on the BI-RADS lexicon and on iteratively transferred expert knowledge. We compare the performance of our algorithm to manual annotation by a specialist in mammography. Our results show that our parser's performance is comparable to the manual method. PMID:23797461
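A highly simplified stand-in for the concept finder described above is a handful of regular expressions pulling BI-RADS categories and laterality out of free text. The actual parser uses a semantic grammar over the BI-RADS lexicon; the patterns and the sample report below are illustrative only.

```python
# Toy regex-based extraction of two BI-RADS-related features from a Portuguese report.
import re

REPORT = "Nódulo na mama esquerda, contornos regulares. Classificação BI-RADS 3."

patterns = {
    "birads_category": re.compile(r"BI-?RADS\s*([0-6])", re.IGNORECASE),
    "laterality":      re.compile(r"mama\s+(esquerda|direita)", re.IGNORECASE),
}

features = {name: (m.group(1) if (m := rx.search(REPORT)) else None)
            for name, rx in patterns.items()}
print(features)   # {'birads_category': '3', 'laterality': 'esquerda'}
```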
Novel Approaches to Extraction Methods in Recovery of Capsaicin from Habanero Pepper (CNPH 15.192)
Martins, Frederico S.; Borges, Leonardo L.; Ribeiro, Claudia S. C.; Reifschneider, Francisco J. B.; Conceição, Edemilson C.
2017-01-01
Introduction: The objective of this study was to compare three capsaicin extraction methods: Soxhlet, Ultrasound-assisted Extraction (UAE), and Shaker-assisted Extraction (SAE), applied to Habanero pepper, CNPH 15.192. Materials and Methods: The parameters evaluated were alcohol degree, extraction time, and solid–solvent ratio, using response surface methodology (RSM). Results: All three parameters were found significant (p < 0.05) for UAE, while solvent concentration and extraction time were significant for SAE. The optimum conditions for capsaicin UAE and SAE were similar: 95% alcohol degree, 30 minutes, and a solid–liquid ratio of 2 mg/mL. Soxhlet extraction increased the yield by 10–25%; however, long extraction times (45 minutes) degraded 2% of the capsaicin. Conclusion: The extraction of capsaicin was influenced by the extraction method and by the operating conditions chosen. The optimized conditions provided savings of time, solvent, and herbal material. Prudent choice of the extraction method is essential to ensure optimal yield of extract, thereby making the study relevant and the knowledge gained useful for further exploitation and application of this resource. SUMMARY: Habanero pepper, line CNPH 15.192, possesses higher levels of capsaicin than other species. Higher ethanolic strength is more suitable for obtaining higher levels of capsaicin. A Box-Behnken design proved useful for exploring the best conditions for ultrasound-assisted extraction of capsaicin. Abbreviations used: UAE: Ultrasound-assisted Extraction; SAE: Shaker-assisted Extraction. PMID:28808409
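The three-factor Box-Behnken design mentioned in the summary can be generated directly in coded units (-1, 0, +1), as in the sketch below; the mapping of coded levels to actual alcohol degree, time and solid–solvent ratio is an assumption of the illustration, not taken from the study.

```python
# Sketch: construct a three-factor Box-Behnken design (12 edge runs + centre points).
from itertools import combinations
import numpy as np

def box_behnken(n_factors: int, n_center: int = 3) -> np.ndarray:
    runs = []
    # For each pair of factors, run a 2x2 factorial at +-1 with the remaining factors at 0.
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * n_factors
                row[i], row[j] = a, b
                runs.append(row)
    runs.extend([[0] * n_factors] * n_center)   # centre points
    return np.array(runs)

design = box_behnken(3)    # e.g. factors: alcohol degree, time, solid-solvent ratio
print(design.shape)        # (15, 3): 15 experimental runs
```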
Spatial Knowledge Infrastructures - Creating Value for Policy Makers and Benefits the Community
NASA Astrophysics Data System (ADS)
Arnold, L. M.
2016-12-01
The spatial data infrastructure is arguably one of the most significant advancements in the spatial sector. It's been a game changer for governments, providing for the coordination and sharing of spatial data across organisations and the provision of accessible information to the broader community of users. Today however, end-users such as policy-makers require far more from these spatial data infrastructures. They want more than just data; they want the knowledge that can be extracted from data and they don't want to have to download, manipulate and process data in order to get the knowledge they seek. It's time for the spatial sector to reduce its focus on data in spatial data infrastructures and take a more proactive step in emphasising and delivering the knowledge value. Nowadays, decision-makers want to be able to query the data at will to meet their immediate need for knowledge. This is a new value proposal for the decision-making consumer and will require a shift in thinking. This paper presents a model for a Spatial Knowledge Infrastructure and underpinning methods that will realise a new real-time approach to delivering knowledge. The methods embrace the new capabilities afforded through the semantic web, domain and process ontologies and natural language query processing. Semantic Web technologies today have the potential to transform the spatial industry into more than just a distribution channel for data. The Semantic Web RDF (Resource Description Framework) enables meaning to be drawn from data automatically. While pushing data out to end-users will remain a central role for data producers, the power of the semantic web is that end-users have the ability to marshal a broad range of spatial resources via a query to extract knowledge from available data. This can be done without actually having to configure systems specifically for the end-user. All data producers need do is make data accessible in RDF and the spatial analytics does the rest.
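A toy example of the query-driven approach described above can be written with rdflib, running a SPARQL query over a small spatial RDF graph. The vocabulary (the ex: namespace and its predicates) and the parcel data are made up for the example.

```python
# Sketch: a "policy-maker style" SPARQL query over a tiny spatial RDF graph.
from rdflib import Graph

TURTLE = """
@prefix ex: <http://example.org/spatial#> .
ex:parcel42 ex:landUse "residential" ; ex:floodRisk "high" .
ex:parcel43 ex:landUse "industrial"  ; ex:floodRisk "low" .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

QUERY = """
PREFIX ex: <http://example.org/spatial#>
SELECT ?parcel WHERE {
  ?parcel ex:landUse "residential" ;
          ex:floodRisk "high" .
}
"""
# Which residential parcels are at high flood risk?
for row in g.query(QUERY):
    print(row.parcel)
```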
Orbital transfer vehicle launch operations study: Automated technology knowledge base, volume 4
NASA Technical Reports Server (NTRS)
1986-01-01
A simplified retrieval strategy for compiling automation-related bibliographies from NASA/RECON is presented. Two subsets of NASA Thesaurus subject terms were extracted: a primary list, which is used to obtain an initial set of citations; and a secondary list, which is used to limit or further specify a large initial set of citations. These subject term lists are presented in Appendix A as the Automated Technology Knowledge Base (ATKB) Thesaurus.
Solana, Javier; Cáceres, César; García-Molina, Alberto; Opisso, Eloy; Roig, Teresa; Tormos, José M; Gómez, Enrique J
2015-01-01
Cognitive rehabilitation aims to remediate or alleviate the cognitive deficits appearing after an episode of acquired brain injury (ABI). The purpose of this work is to describe the telerehabilitation platform called Guttmann Neuropersonal Trainer (GNPT), which provides new strategies for cognitive rehabilitation, improving efficiency and access to treatments and increasing knowledge generation from the process. A cognitive rehabilitation process has been modeled to design and develop the system, which allows neuropsychologists to configure and schedule rehabilitation sessions consisting of a set of personalized computerized cognitive exercises grounded on neuroscience and plasticity principles. It provides remote continuous monitoring of patients' performance through an asynchronous communication strategy. An automatic knowledge extraction method has been used to implement a decision support system, improving treatment customization. GNPT has been implemented in 27 rehabilitation centers and in 83 patients' homes, facilitating access to treatment. In total, 1660 patients have been treated. Usability and cost analysis methodologies have been applied to measure efficiency in real clinical environments. The usability evaluation reveals a system usability score higher than 70 for all target users. The cost efficiency study shows a cost ratio of 1:20 compared with face-to-face rehabilitation. GNPT enables brain-damaged patients to continue and further extend rehabilitation beyond the hospital, improving the efficiency of the rehabilitation process. It allows customized therapeutic plans and provides information for the further development of clinical practice guidelines.
Perception of professional ethics by Iranian occupational therapists working with children
Kalantari, Minoo; Kamali, Mohammad; Joolaee, Soodabeh; Rassafiani, Mehdi; Shafarodi, Narges
2015-01-01
Ethics are related to the structure and culture of the society. In addition to specialized ethics for every profession, individuals also hold their own personal beliefs and values. This study aimed to investigate Iranian occupational therapists’ perception of ethical practice when working with children. For this purpose, qualitative content analysis was used and semi-structured interviews were conducted with ten occupational therapists in their convenient place and time. Each interview was transcribed and double-checked by the research team. Units of meaning were extracted from each transcription and then coded and categorized accordingly. The main categories of ethical practice when working with children included personal attributes, responsibility toward clients, and professional responsibility. Personal attributes included four subcategories: veracity, altruism, empathy, and competence. Responsibility toward clients consisted of six subcategories: equality, autonomy, respect for clients, confidentiality, beneficence, and non-maleficence. Professional responsibility included three subcategories: fidelity, development of professional knowledge, and promotion and growth of the profession. Findings of this study indicated that in Iran, occupational therapists’ perception of autonomy, beneficence, non-maleficence, fidelity and competence is different from Western countries, which may be due to a lower knowledge of ethics and other factors such as culture. The results of this study may be used to develop ethical codes for Iranian occupational therapists both during training and on the job. PMID:27354897
Labaj, Wojciech; Papiez, Anna; Polanski, Andrzej; Polanska, Joanna
2017-03-01
Large collections of data in studies on cancers such as leukaemia call for tailored analysis algorithms to ensure optimal information extraction. In this work, a custom-fit pipeline is demonstrated for thorough investigation of the voluminous MILE gene expression data set. Three analyses are accomplished, each for gaining a deeper understanding of the processes underlying leukaemia types and subtypes. First, the main disease groups are tested for differential expression against the healthy control as in a standard case-control study. Here, the basic knowledge on molecular mechanisms is confirmed quantitatively and by literature references. Second, pairwise comparison testing is performed to juxtapose the main leukaemia types among each other. In this case, the general relations are pointed out by means of the Dice coefficient similarity measure. Moreover, lists of candidate main leukaemia group biomarkers are proposed. Finally, with this approach being successful, the third analysis provides insight into all of the studied subtypes, followed by the emergence of four leukaemia subtype biomarkers. In addition, the class-enhanced DEG signature obtained on the basis of the novel pipeline processing leads to significantly better classification power of multi-class data classifiers. The developed methodology, consisting of batch effect adjustment, adaptive noise and feature filtration coupled with adequate statistical testing and biomarker definition, proves to be an effective approach towards knowledge discovery in high-throughput molecular biology experiments.
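The Dice coefficient similarity used in the pairwise comparisons above has a short, standard implementation; the two gene lists below are placeholders, not results from the MILE data set.

```python
# Sketch: Dice similarity between two hypothetical differentially-expressed-gene sets.
def dice(a: set, b: set) -> float:
    """Dice coefficient: 2*|A intersect B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

deg_group_1 = {"FLT3", "RUNX1", "CEBPA", "NPM1"}
deg_group_2 = {"RUNX1", "NPM1", "TP53"}
print(f"Dice similarity: {dice(deg_group_1, deg_group_2):.2f}")  # 0.57
```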
Aggregating concept map data to investigate the knowledge of beginning CS students
NASA Astrophysics Data System (ADS)
Mühling, Andreas
2016-07-01
Concept maps have a long history in educational settings as a tool for teaching, learning, and assessing. As an assessment tool, they are predominantly used to extract the structural configuration of learners' knowledge. This article presents an investigation of the knowledge structures of a large group of beginning CS students. The investigation is based on a method that collects, aggregates, and automatically analyzes the concept maps of a group of learners as a whole, to identify common structural configurations and differences in the learners' knowledge. It shows that those students who have attended CS education in their secondary school life have, on average, configured their knowledge about typical core CS/OOP concepts differently. Also, artifacts of their particular CS curriculum are visible in their externalized knowledge. The data structures and analysis methods necessary for working with concept landscapes have been implemented as a GNU R package that is freely available.
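The aggregation idea described above can be sketched by treating each learner's concept map as a directed graph and merging them into a weighted "concept landscape" whose edge weights count how many learners drew each proposition. The article's method is implemented as a GNU R package; the Python/networkx sketch below, with invented concepts and links, only illustrates the principle.

```python
# Sketch: aggregate individual concept maps into a weighted concept landscape.
import networkx as nx

student_maps = [
    [("class", "object"), ("object", "attribute")],
    [("class", "object"), ("class", "method")],
    [("class", "method"), ("object", "attribute")],
]

landscape = nx.DiGraph()
for cmap in student_maps:
    for src, dst in cmap:
        if landscape.has_edge(src, dst):
            landscape[src][dst]["weight"] += 1
        else:
            landscape.add_edge(src, dst, weight=1)

# Most commonly drawn propositions first.
for src, dst, w in sorted(landscape.edges(data="weight"), key=lambda e: -e[2]):
    print(f"{src} -> {dst}: {w} learners")
```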
Studies on some Pharmacognostic profiles of Pithecellobium dulce Benth. Leaves (Leguminosae)
Sugumaran, M.; Vetrichelvan, T.; Venkapayya, D
2006-01-01
The macroscopical characters of the leaves, leaf constants, physico-chemical constants, extractive values, colour, consistency, pH, extractive values with different solvents, microchemical tests, fluorescence characters of liquid extracts and leaf powder after treatment with different chemical reagents under visible and UV light at 254 nm, and measurements of cells and tissues were studied to fix some pharmacognostical parameters for the leaves of Pithecellobium dulce Benth., which will enable future investigators to identify the plant. A preliminary phytochemical study on different extracts of the leaves was also performed. PMID:22557213
Sequential microfluidic droplet processing for rapid DNA extraction.
Pan, Xiaoyan; Zeng, Shaojiang; Zhang, Qingquan; Lin, Bingcheng; Qin, Jianhua
2011-11-01
This work describes a novel droplet-based microfluidic device, which enables sequential droplet processing for rapid DNA extraction. The microdevice consists of a droplet generation unit, two reagent addition units and three droplet splitting units. The loading/washing/elution steps required for DNA extraction were carried out by sequential microfluidic droplet processing. The movement of superparamagnetic beads, which were used as extraction supports, was controlled with magnetic field. The microdevice could generate about 100 droplets per min, and it took about 1 min for each droplet to perform the whole extraction process. The extraction efficiency was measured to be 46% for λ-DNA, and the extracted DNA could be used in subsequent genetic analysis such as PCR, demonstrating the potential of the device for fast DNA extraction. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Calella, Patrizia; Iacullo, Vittorio Maria; Valerio, Giuliana
2017-04-29
Good knowledge of nutrition is widely thought to be an important aspect of maintaining a balanced and healthy diet. The aim of this study was to develop and validate a new reliable tool to measure general and sport nutrition knowledge (GeSNK) in people who practice sports at different levels. The development of the GeSNK was carried out in six phases as follows: (1) item development and selection by a panel of experts; (2) a pilot study to assess item difficulty and item discrimination; (3) measurement of internal consistency; (4) reliability assessment with a 2-week test-retest analysis; (5) concurrent validity, tested by administering the questionnaire along with two other similar tools; (6) construct validity, assessed by administering the questionnaire to three groups of young adults with different general nutrition and sport nutrition knowledge. The final questionnaire consisted of 62 of the original 183 items. It is a consistent, valid, and suitable instrument that can be applied over time, making it a promising tool for examining the relationship between nutrition knowledge, demographic characteristics, and dietary behavior in adolescents and young adults.
Liu, Wei; Zhou, Chun-Li; Zhao, Jing; Chen, Dong; Li, Quan-Hong
2014-01-01
6-Gingerol is one of the most pharmacologically active and abundant components in ginger and has a wide array of biochemical and pharmacologic activities. In recent years, the application of microwave-assisted extraction (MAE) for obtaining bioactive compounds from plant materials has attracted tremendous research interest and shown great potential. In this study, an efficient MAE technique was developed to extract 6-gingerol from ginger, and its extraction efficiency was compared with conventional extraction techniques. Fresh gingers (Zingiber officinale Rosc.) were harvested at commercial maturity (originally from Laiwu, Shandong, China). In single-factor experiments for the recovery of 6-gingerol, proper ranges of the liquid-to-solid ratio, ethanol proportion, microwave power and extraction time were determined. Based on the values obtained in the single-factor experiments, a Box-Behnken design (BBD) was applied to determine the best combination of extraction variables for the yield of 6-gingerol. The optimum extraction conditions were as follows: microwave power 528 W, liquid-to-solid ratio 26 mL·g(-1), extraction time 31 s and ethanol proportion 78%. Furthermore, more 6-gingerol and total polyphenols were extracted by MAE than by conventional methods, including maceration (MAC), stirring extraction (SE), heat reflux extraction (HRE) and ultrasound-assisted extraction (UAE), and the antioxidant capacity of the extract was also higher. Microwave-assisted extraction showed obvious advantages in terms of high extraction efficiency and antioxidant activity of the extract within the shortest extraction time. Scanning electron microscopy (SEM) images of ginger powder materials after the different extractions were obtained to provide visual evidence of the disruption effect. To the best of our knowledge, this is the first report on the use of MAE for 6-gingerol extraction from ginger, and it could serve as a reference for the extraction of other active compounds from herbal plants.
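The response-surface fitting behind a Box-Behnken optimisation of this kind can be sketched as a second-order polynomial regression of yield on the four extraction variables; the design points and yields below are fabricated, and the model is only an illustration of the general RSM approach, not the study's fitted model.

```python
# Sketch: fit a quadratic response surface for 6-gingerol yield (made-up data).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Columns: microwave power (W), liquid-to-solid ratio (mL/g), time (s), ethanol (%).
X = np.array([
    [400, 20, 20, 60], [400, 30, 40, 90], [500, 20, 40, 60],
    [500, 30, 20, 90], [600, 25, 30, 75], [528, 26, 31, 78],
])
y = np.array([4.1, 4.8, 5.0, 5.3, 5.6, 6.0])   # hypothetical yields, mg/g

rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                    LinearRegression())
rsm.fit(X, y)
print("Predicted yield at the reported optimum:", rsm.predict([[528, 26, 31, 78]]))
```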
Solvent extraction of Cu, Mo, V, and U from leach solutions of copper ore and flotation tailings.
Smolinski, Tomasz; Wawszczak, Danuta; Deptula, Andrzej; Lada, Wieslawa; Olczak, Tadeusz; Rogowski, Marcin; Pyszynska, Marta; Chmielewski, Andrzej Grzegorz
2017-01-01
Flotation tailings from copper production are deposits of copper and other valuable metals, such as Mo, V and U. New hydrometallurgical technologies are more economical and open up new possibilities for metal recovery. This work presents results of a study on the extraction of copper by a mixed extractant consisting of p-toluidine dissolved in toluene. The possibility of simultaneous liquid-liquid extraction of molybdenum and vanadium was examined. D2EHPA solutions were used as the extractant, and the recovery of individual elements was compared for representative samples of ore and copper flotation tailings. Radiometric methods were applied for process optimization.
SOLVENT EXTRACTION PROCESS FOR THE RECOVERY OF METALS FROM PHOSPHORIC ACID
Bailes, R.H.; Long, R.S.
1958-11-01
A solvent extraction process is presented for recovering metal values including uranium, thorium, and other lanthanide and actinide elements from crude industrial phosphoric acid solutions. The process consists of contacting said solution with an immiscible organic solvent extractant containing a diluent and a material selected from the group consisting of mono and di alkyl phosphates, alkyl phosphonates and alkyl phosphites. The uranium enters the extractant phase and is subsequently recovered by any of the methods known to the art. Recovery is improved if the phosphate solution is treated with a reducing agent such as iron or aluminum powder prior to the extraction step.
Knowledge Navigation for Virtual Vehicles
NASA Technical Reports Server (NTRS)
Gomez, Julian E.
2004-01-01
A virtual vehicle is a digital model of the knowledge surrounding a potentially real vehicle. Knowledge consists not only of the tangible information, such as CAD, but also what is known about the knowledge - its metadata. This paper is an overview of technologies relevant to building a virtual vehicle, and an assessment of how to bring those technologies together.
Nanodiamond in Colloidal Suspension: Electrophoresis; Other Observations
NASA Technical Reports Server (NTRS)
Meshik, A. P.; Pravdivtseva, O. V.; Hohenberg, C. M.
2002-01-01
Selective laser extraction has demonstrated that meteoritic diamonds may consist of subpopulations with different optical absorption properties, but it is not clear what makes them optically different. More work is needed to understand the mechanism for selective laser extraction. Additional information is contained in the original extended abstract.
Mahieu, Lieslot; de Casterlé, Bernadette Dierckx; Van Elssen, Kim; Gastmans, Chris
2013-11-01
This paper reports a study testing the content and face validity and internal consistency of the Dutch version of the Aging Sexual Knowledge and Attitudes Scale. The ability of older residents to sexually express themselves is known to be influenced by the knowledge and attitudes of nursing home staff towards later-life sexuality. Although the Aging Sexual Knowledge and Attitudes Scale is a widely used instrument to measure this, no validated Dutch translation is available. Instrument development. Following a standard forward/backward translation into Dutch, the scale was further adapted for use in Flemish nursing home settings. Content and face validity and user-friendliness were assessed. The psychometric properties were determined by means of an exploratory study. Data were collected from March-April 2011 at eight Flemish nursing homes. Reliability was assessed using internal consistency and item-total correlations. Both subscales of the Flemish adaptation showed acceptable content validity. The face validity and user-friendliness were deemed favourable, with hardly any remarks given by the expert panel. Cronbach's α was 0.80 and 0.88 for the knowledge and attitude subscales, respectively. The item-total correlations ranged from 0.21 to 0.48 for the knowledge subscale and from 0.09 to 0.68 for the attitude subscale. We conclude from our study that the Dutch version of the scale has acceptable to good psychometric properties. The Flemish adaptation therefore seems to be a valuable instrument for studying nursing staff's knowledge and attitudes towards aged sexuality in Flanders. © 2013 Blackwell Publishing Ltd.
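The internal-consistency statistic reported above (Cronbach's α) follows directly from the item variances and the total-score variance; the sketch below computes it for a made-up respondent-by-item matrix, so the numeric result is meaningless beyond illustrating the calculation.

```python
# Sketch: Cronbach's alpha for a hypothetical respondents x items response matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(30, 1))                                  # latent knowledge level
responses = (ability + rng.normal(size=(30, 12)) > 0).astype(int)   # 30 staff, 12 items
print(f"alpha = {cronbach_alpha(responses):.2f}")
```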
Ramirez-Rodrigues, Milena M; Plaza, Maria L; Azeredo, Alberto; Balaban, Murat O; Marshall, Maurice R
2011-04-01
Hibiscus cold (25 °C) and hot (90 °C) water extracts were prepared in various time-temperature combinations to determine equivalent extraction conditions regarding their physicochemical and phytochemical properties. Equivalent anthocyanin concentrations were obtained at 25 °C for 240 min and 90 °C for 16 min. Total phenolics were better extracted with hot water, which also resulted in a higher antioxidant capacity in these extracts. Similar polyphenolic profiles were observed between fresh and dried hibiscus extracts. Hibiscus acid and 2 derivatives were found in all extracts. Hydroxybenzoic acids, caffeoylquinic acids, flavonols, and anthocyanins constituted the polyphenolic compounds identified in hibiscus extracts. Two major anthocyanins were found in both cold and hot extracts: delphinidin-3-sambubioside and cyanidin-3-sambubioside. In general, both cold and hot extractions yielded similar phytochemical properties; however, under cold extraction, color degradation was significantly lower and extraction times were 15-fold longer. Hibiscus beverages are prepared from fresh or dried calyces by a hot extraction and pasteurized, which can change organoleptic, nutritional, and color attributes. Nonthermal technologies such as dense phase carbon dioxide may maintain their fresh-like color, flavor, and nutrients. This research compares the physicochemical and phytochemical changes resulting from a cold and hot extraction of fresh and dried hibiscus calyces and adds to the knowledge of work done on color, quality attributes, and antioxidant capacity of unique tropical products. In addition, the research shows how these changes could lead to alternative nonthermal processes for hibiscus.
Development of portable health monitoring system for automatic self-blood glucose measurement
NASA Astrophysics Data System (ADS)
Kim, Huijun; Mizuno, Yoshihumi; Nakamachi, Eiji; Morita, Yusuke
2010-02-01
In this study, a new HMS (Health Monitoring System) device is developed for diabetic patients. This device mainly consists of (I) a 3D blood vessel searching unit and (II) an automatic blood glucose measurement (ABGM) unit. The device has the following features: 1) 3D blood vessel location search, 2) laptop-type form factor, 3) puncturing of a blood vessel using a minimally invasive micro-needle, 4) very small blood sampling volume (10 μl), and 5) automatic blood extraction and blood glucose measurement. In this study, the ABGM unit is described in detail. It employs a syringe-type blood extraction mechanism because of its high accuracy, and it consists of a syringe component and a driving component. The syringe component consists of a syringe, a piston, a magnet, a ratchet and a micro-needle with an inner diameter of about 80 μm; the syringe component is disposable. The driving component consists of body parts, a linear stepping motor, a glucose enzyme sensor and a slider for accurate positioning control. The driving component integrates the glucose enzyme sensor in an all-in-one mechanism for compact size and stable blood transfer. In the design, the thrust force required to drive the slider is set greater than the blood extraction force. Further, only one linear stepping motor is employed for the blood extraction and transportation processes. The experimental results showed more than 80% volume ratio at a piston speed of 2.4 mm/s. Further, blood glucose was measured successfully using the prototype unit. Finally, the availability of our ABGM unit was confirmed.
An assessment of oral health promotion programmes in the United Kingdom.
Passalacqua, A; Reeves, A O; Newton, T; Hughes, R; Dunne, S; Donaldson, N; Wilson, N
2012-02-01
Improving oral health and reducing tooth decay is a key area for action, both in the United Kingdom (UK) and overseas. The World Health Organization (WHO) has highlighted the unique advantage schools have in promoting oral health. We summarise current oral health promotion strategies in the United Kingdom and estimate the spread of their use as well as their impact on oral health and influence on the oral health-related knowledge and behaviour in a patient population. A structured overview of published papers, government publications, official government websites and policy reports. A cross-sectional study of patients referred for a tooth extraction in one dental surgery in south-east London. Statistical methods consisted of logistic and ordinal regressions to model the likelihood of exposure to oral health promotion and of obtaining higher levels of knowledge of oral health issues, respectively. Linear regression was used to model the level of oral health and knowledge of oral health issues. We found three main promotion programmes, namely, National Healthy Schools (NHS), Sure Start and Brushing for life plus a small number of local initiatives. Sure Start targets disadvantaged areas, but is limited. In our observational study, 34% of the patients reported exposure to a settings-based oral health education programme: Sure Start (5%), NHS (7%) and other (22%). This exposure was not influenced by age or gender, but an association with education was detected. Although oral health promotion was not found to influence the actual knowledge of oral health issues, it was found to influence some oral health-related attitudes and perceptions. Participation in an oral health promotion programme was found to be significantly associated with the patients' education, their belief that they can prevent oral disease and the subjective perception of their own oral health. The WHO principles need to be embedded across all schools to achieve a true national oral health promotion programme for the United Kingdom. The National Healthy Schools programme provides the perfect platform. © 2011 John Wiley & Sons A/S.
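The logistic-regression step described above (modelling the likelihood of exposure to an oral health promotion programme) can be sketched in Python with statsmodels; the patient records, variable names and coding below are fabricated for illustration and do not reproduce the study's data or model.

```python
# Sketch: logistic regression of programme exposure on education and age (invented data).
import pandas as pd
import statsmodels.formula.api as smf

patients = pd.DataFrame({
    "exposed":   [1, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0],
    "education": ["higher", "school", "higher", "none", "school", "higher",
                  "none", "school", "school", "none", "higher", "school"],
    "age":       [34, 51, 28, 63, 45, 30, 70, 39, 55, 61, 25, 48],
})

model = smf.logit("exposed ~ C(education) + age", data=patients).fit(disp=False)
print(model.params)   # log-odds of exposure by education level and per year of age
```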
Combined Amplification and Sound Generation for Tinnitus: A Scoping Review.
Tutaj, Lindsey; Hoare, Derek J; Sereda, Magdalena
In most cases, tinnitus is accompanied by some degree of hearing loss. Current tinnitus management guidelines recognize the importance of addressing hearing difficulties, with hearing aids being a common option. Sound therapy is the preferred mode of audiological tinnitus management in many countries, including in the United Kingdom. Combination instruments provide a further option for those with an aidable hearing loss, as they combine amplification with a sound generation option. The aims of this scoping review were to catalog the existing body of evidence on combined amplification and sound generation for tinnitus and consider opportunities for further research or evidence synthesis. A scoping review is a rigorous way to identify and review an established body of knowledge in the field for suggestive but not definitive findings and gaps in current knowledge. A wide variety of databases were used to ensure that all relevant records within the scope of this review were captured, including gray literature, conference proceedings, dissertations and theses, and peer-reviewed articles. Data were gathered using scoping review methodology and consisted of the following steps: (1) identifying potentially relevant records; (2) selecting relevant records; (3) extracting data; and (4) collating, summarizing, and reporting results. Searches using 20 different databases covered peer-reviewed and gray literature and returned 5959 records. After exclusion of duplicates and works that were out of scope, 89 records remained for further analysis. A large number of records identified varied considerably in methodology, applied management programs, and type of devices. There were significant differences in practice between different countries and clinics regarding candidature and fitting of combination aids, partly driven by the application of different management programs. Further studies on the use and effects of combined amplification and sound generation for tinnitus are indicated, including further efficacy studies, evidence synthesis, development of guidelines, and recommended procedures that are based on existing knowledge, expert knowledge, and clinical service evaluations.
2014-01-01
Background Nurses and allied health care professionals (physiotherapists, occupational therapists, speech and language pathologists, dietitians) form more than half of the clinical health care workforce and play a central role in health service delivery. There is a potential to improve the quality of health care if these professionals routinely use research evidence to guide their clinical practice. However, the use of research evidence remains unpredictable and inconsistent. Leadership is consistently described in implementation research as critical to enhancing research use by health care professionals. However, this important literature has not yet been synthesized and there is a lack of clarity on what constitutes effective leadership for research use, or what kinds of intervention effectively develop leadership for the purpose of enabling and enhancing research use in clinical practice. We propose to synthesize the evidence on leadership behaviours amongst front line and senior managers that are associated with research evidence use by nurses and allied health care professionals, and then determine the effectiveness of interventions that promote these behaviours. Methods/Design Using an integrated knowledge translation approach that supports a partnership between researchers and knowledge users throughout the research process, we will follow principles of knowledge synthesis using a systematic method to synthesize different types of evidence involving: searching the literature, study selection, data extraction and quality assessment, and analysis. A narrative synthesis will be conducted to explore relationships within and across studies and meta-analysis will be performed if sufficient homogeneity exists across studies employing experimental randomized control trial designs. Discussion With the engagement of knowledge users in leadership and practice, we will synthesize the research from a broad range of disciplines to understand the key elements of leadership that support and enable research use by health care practitioners, and how to develop leadership for the purpose of enhancing research use in clinical practice. Trial registration PROSPERO CRD42014007660. PMID:24903267