Sample records for generation knowledge base

  1. A knowledge-base generating hierarchical fuzzy-neural controller.

    PubMed

    Kandadai, R M; Tien, J M

    1997-01-01

    We present an innovative fuzzy-neural architecture that is able to automatically generate a knowledge base, in an extractable form, for use in hierarchical knowledge-based controllers. The knowledge base is in the form of a linguistic rule base appropriate for a fuzzy inference system. First, we modify Berenji and Khedkar's (1992) GARIC architecture to enable it to automatically generate a knowledge base; a pseudosupervised learning scheme using reinforcement learning and error backpropagation is employed. Next, we further extend this architecture to a hierarchical controller that is able to generate its own knowledge base. Example applications are provided to underscore its viability.

  2. Automated knowledge generation

    NASA Technical Reports Server (NTRS)

    Myler, Harley R.; Gonzalez, Avelino J.

    1988-01-01

    The general objectives of the NASA/UCF Automated Knowledge Generation Project were the development of an intelligent software system that could access CAD design data bases, interpret them, and generate a diagnostic knowledge base in the form of a system model. The initial area of concentration is the diagnosis of the process control system using the Knowledge-based Autonomous Test Engineer (KATE) diagnostic system. A secondary objective was the study of general problems of automated knowledge generation. A prototype was developed, based on an object-oriented language (Flavors).

  3. Generating Topic Headings during Reading of Screen-Based Text Facilitates Learning of Structural Knowledge and Impairs Learning of Lower-Level Knowledge

    ERIC Educational Resources Information Center

    Clariana, Roy B.; Marker, Anthony W.

    2007-01-01

    This investigation considers the effects of learner-generated headings on memory. Participants (N = 63) completed a computer-based lesson with or without learner-generated text topic headings. Posttests included a cued recall test of factual knowledge and a sorting task measure of structural knowledge. A significant disordinal interaction was…

  4. Intrusion Detection Systems with Live Knowledge System

    DTIC Science & Technology

    2016-05-31

    Ripple-down Rule (RDR) to maintain the knowledge from human experts with a knowledge base generated by the Induct RDR, which is a machine-learning based RDR...propose a novel approach that uses Ripple-down Rule (RDR) to maintain the knowledge from human experts with a knowledge base generated by the Induct RDR...detection model by applying the Induct RDR approach. The proposed Induct RDR (Ripple-Down Rules) approach allows acquiring the phishing detection

  5. Motion Recognition and Modifying Motion Generation for Imitation Robot Based on Motion Knowledge Formation

    NASA Astrophysics Data System (ADS)

    Okuzawa, Yuki; Kato, Shohei; Kanoh, Masayoshi; Itoh, Hidenori

    A knowledge-based approach to imitation learning of motion generation for humanoid robots and an imitative motion generation system based on motion knowledge learning and modification are described. The system has three parts: recognizing, learning, and modifying. The first part recognizes an instructed motion, distinguishing it from the motion knowledge database using a continuous hidden Markov model. When the motion is recognized as being unfamiliar, the second part learns it using locally weighted regression and acquires knowledge of the motion. When a robot recognizes the instructed motion as familiar, or judges that its acquired knowledge is applicable to the motion generation, the third part imitates the instructed motion by modifying a learned motion. This paper reports some performance results: the motion imitation of several radio gymnastics motions.
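
    A minimal sketch of the familiar/unfamiliar decision described above, assuming Gaussian HMMs trained on joint-trajectory data; the hmmlearn library and the threshold value are illustrative assumptions, not the authors' implementation:

    ```python
    # Hypothetical sketch: recognize a motion against stored HMMs, or flag it as unfamiliar.
    import numpy as np
    from hmmlearn import hmm  # assumed dependency; the paper's own HMM code is not shown

    def train_motion_model(trajectories, n_states=4):
        """Fit one Gaussian HMM per known motion from a list of (T, n_joints) arrays."""
        X = np.vstack(trajectories)
        lengths = [len(t) for t in trajectories]
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        return model

    def recognize(motion, models, threshold=-50.0):
        """Return the best-matching motion name, or None if the motion looks unfamiliar."""
        scores = {name: m.score(motion) / len(motion) for name, m in models.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > threshold else None  # None -> learn it as new knowledge
    ```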

  6. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer-aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  7. System diagnostic builder: a rule-generation tool for expert systems that do intelligent data evaluation

    NASA Astrophysics Data System (ADS)

    Nieten, Joseph L.; Burke, Roger

    1993-03-01

    The system diagnostic builder (SDB) is an automated knowledge acquisition tool using state-of-the-art artificial intelligence (AI) technologies. The SDB uses an inductive machine learning technique to generate rules from data sets that are classified by a subject matter expert (SME). Thus, data are captured from the subject system, classified by an expert, and used to drive the rule generation process. These rule bases are used to represent the observable behavior of the subject system, and to represent knowledge about this system. The rule bases can be used in any knowledge-based system which monitors or controls a physical system or simulation. The SDB has demonstrated the utility of using inductive machine learning technology to generate reliable knowledge bases. In fact, we have discovered that the knowledge captured by the SDB can be used in any number of applications. For example, the knowledge bases captured from the SMS (Shuttle Mission Simulator) can be used as black box simulations by intelligent computer-aided training devices. We can also use the SDB to construct knowledge bases for the process control industry, such as chemical production or oil and gas production. These knowledge bases can be used in automated advisory systems to ensure safety, productivity, and consistency.
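
    The core idea, inducing readable rules from expert-classified observations, can be illustrated with a small sketch; scikit-learn's decision tree and its text export stand in for the SDB's own induction algorithm (an assumption, not the tool itself):

    ```python
    # Illustrative only: induce rules from SME-labelled observations of a subject system.
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Each row is a snapshot of observable behaviour; labels come from the subject matter expert.
    X = [[98.1, 0.2], [120.5, 0.9], [99.0, 0.1], [135.2, 1.4]]   # e.g. temperature, vibration
    y = ["nominal", "fault", "nominal", "fault"]

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["temperature", "vibration"]))
    # The printed tree reads as IF/THEN rules that can seed a knowledge base for a
    # monitoring or advisory system.
    ```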

  8. Knowledge-based reasoning in the Paladin tactical decision generation system

    NASA Technical Reports Server (NTRS)

    Chappell, Alan R.

    1993-01-01

    A real-time tactical decision generation system for air combat engagements, Paladin, has been developed. A pilot's job in air combat includes tasks that are largely symbolic. These symbolic tasks are generally performed through the application of experience and training (i.e., knowledge) gathered over years of flying a fighter aircraft. Two such tasks, situation assessment and throttle control, are identified and broken out in Paladin to be handled by specialized knowledge-based systems. Knowledge pertaining to these tasks is encoded into rule bases to provide the foundation for decisions. Paladin uses a custom-built inference engine and a partitioned rule-base structure to produce these symbolic results in real time. This paper provides an overview of knowledge-based reasoning systems as a subset of rule-based systems. The knowledge used by Paladin in generating results, as well as the system design for real-time execution, is discussed.
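
    To make the partitioned rule-base idea concrete, here is a toy forward-chaining engine that only evaluates the partition relevant to the current task; the rules, facts, and partition names are invented for illustration and are not Paladin's:

    ```python
    # Toy forward-chaining over partitioned rule bases (illustrative, not Paladin's engine).
    RULES = {
        "situation_assessment": [
            ({"bandit_in_range", "nose_on"}, "offensive"),
            ({"bandit_in_range", "tail_on"}, "defensive"),
        ],
        "throttle_control": [
            ({"offensive", "closing_fast"}, "reduce_throttle"),
            ({"defensive"}, "max_throttle"),
        ],
    }

    def infer(partition, facts):
        """Fire every rule in one partition whose conditions are satisfied by the facts."""
        facts = set(facts)
        changed = True
        while changed:                       # simple fixed-point loop
            changed = False
            for conditions, conclusion in RULES[partition]:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    facts = infer("situation_assessment", {"bandit_in_range", "nose_on"})
    facts = infer("throttle_control", facts | {"closing_fast"})
    print(facts)  # includes 'offensive' and 'reduce_throttle'
    ```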

  9. Factors Influencing the Creation of a Wiki Culture for Knowledge Management in a Cross-Generational Organizational Setting

    ERIC Educational Resources Information Center

    Macro, Kenneth L., Jr.

    2011-01-01

    Initiatives within organizations that promote sharing of knowledge may be hampered by generational differences. Research on relationships between generations and technology-based knowledge sharing campaigns provides little managerial guidance for practitioners. The purpose of this ethnographic study was to identify the factors that influence the…

  10. A Different Approach to the Generation of Patient Management Problems from a Knowledge-Based System

    PubMed Central

    Barriga, Rosa Maria

    1988-01-01

    Several strategies are proposed to approach the generation of Patient Management Problems from a Knowledge Base and avoid inconsistencies in the results. These strategies are based on a different Knowledge Base structure and on the use of case introductions that describe the patient attributes which are not disease-dependent. This methodology has proven effective in a recent pilot test and is on its way to implementation as part of an educational program at the CWRU School of Medicine.

  11. Advanced software development workstation. Knowledge base design: Design of knowledge base for flight planning application

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    1992-01-01

    The development process of the knowledge base for the generation of Test Libraries for Mission Operations Computer (MOC) Command Support focused on a series of information gathering interviews. These knowledge capture sessions are supporting the development of a prototype for evaluating the capabilities of INTUIT on such an application. The prototype includes functions related to POCC (Payload Operation Control Center) processing. It prompts the end-users for input through a series of panels and then generates the Meds associated with the initialization and the update of hazardous command tables for a POCC Processing TLIB.

  12. Knowledge-Based Planning Model for Courses of Action Generation,

    DTIC Science & Technology

    1986-04-07

    Report cover text (OCR): Knowledge-Based Planning Model for Courses of Action Generation. Army War College, Carlisle Barracks, PA. By Colonel D. R. Collins and Lieutenant Colonel (P) T. A. Baucum. This document may not be released for open publication until it has been cleared by the appropriate military service or government agency.

  13. Going beyond the lesson: Self-generating new factual knowledge in the classroom

    PubMed Central

    Esposito, Alena G.; Bauer, Patricia J.

    2016-01-01

    For children to build a knowledge base, they must integrate and extend knowledge acquired across separate episodes of new learning. Children’s performance was assessed in a task requiring them to self-generate new factual knowledge from the integration of novel facts presented through separate lessons in the classroom. Whether self-generation performance predicted academic outcomes in reading comprehension and mathematics was also examined. The 278 participating children were in grades K-3 (mean age 7.7 years; range 5.5–10.3 years). Children self-generated new factual knowledge through integration in the classroom; age-related increases were observed. Self-generation performance predicted both reading comprehension and mathematics academic outcomes, even when controlling for caregiver education. PMID:27728784

  14. System Diagnostic Builder - A rule generation tool for expert systems that do intelligent data evaluation. [applied to Shuttle Mission Simulator

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph; Burke, Roger

    1993-01-01

    Consideration is given to the System Diagnostic Builder (SDB), an automated knowledge acquisition tool using state-of-the-art AI technologies. The SDB employs an inductive machine learning technique to generate rules from data sets that are classified by a subject matter expert. Thus, data are captured from the subject system, classified, and used to drive the rule generation process. These rule bases are used to represent the observable behavior of the subject system, and to represent knowledge about this system. The knowledge bases captured from the Shuttle Mission Simulator can be used as black box simulations by the Intelligent Computer Aided Training devices. The SDB can also be used to construct knowledge bases for the process control industry, such as chemical production or oil and gas production.

  15. Knowledge-Based Entrepreneurship in a Boundless Research System

    ERIC Educational Resources Information Center

    Dell'Anno, Davide

    2008-01-01

    International entrepreneurship and knowledge-based entrepreneurship have recently generated considerable academic and non-academic attention. This paper explores the "new" field of knowledge-based entrepreneurship in a boundless research system. Cultural barriers to the development of business opportunities by researchers persist in some academic…

  16. Ontology-Based Multiple Choice Question Generation

    PubMed Central

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937
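
    A minimal sketch of the general technique (not OntoQue itself): generate a stem from a class-membership assertion and draw distractors from individuals of other classes in the ontology. The tiny triple set below is invented for illustration.

    ```python
    # Illustrative ontology-based MCQ generation: distractors come from other classes.
    import random

    # (subject, predicate, object) triples standing in for an OWL ontology
    TRIPLES = [
        ("Amoxicillin", "isA", "Antibiotic"),
        ("Penicillin", "isA", "Antibiotic"),
        ("Ibuprofen", "isA", "NSAID"),
        ("Naproxen", "isA", "NSAID"),
        ("Paracetamol", "isA", "Analgesic"),
    ]

    def generate_mcq(answer, n_distractors=3):
        cls = next(o for s, p, o in TRIPLES if s == answer and p == "isA")
        # Distractors: individuals that belong to a *different* class than the answer.
        pool = [s for s, p, o in TRIPLES if p == "isA" and o != cls]
        options = random.sample(pool, min(n_distractors, len(pool))) + [answer]
        random.shuffle(options)
        return {"stem": f"Which of the following is a(n) {cls}?",
                "options": options, "key": answer}

    print(generate_mcq("Ibuprofen"))
    ```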

  17. Development and evaluation of a crowdsourcing methodology for knowledge base construction: identifying relationships between clinical problems and medications

    PubMed Central

    Wright, Adam; Laxmisan, Archana; Ottosen, Madelene J; McCoy, Jacob A; Butten, David; Sittig, Dean F

    2012-01-01

    Objective We describe a novel, crowdsourcing method for generating a knowledge base of problem–medication pairs that takes advantage of manually asserted links between medications and problems. Methods Through iterative review, we developed metrics to estimate the appropriateness of manually entered problem–medication links for inclusion in a knowledge base that can be used to infer previously unasserted links between problems and medications. Results Clinicians manually linked 231 223 medications (55.30% of prescribed medications) to problems within the electronic health record, generating 41 203 distinct problem–medication pairs, although not all were accurate. We developed methods to evaluate the accuracy of the pairs, and after limiting the pairs to those meeting an estimated 95% appropriateness threshold, 11 166 pairs remained. The pairs in the knowledge base accounted for 183 127 total links asserted (76.47% of all links). Retrospective application of the knowledge base linked 68 316 medications not previously linked by a clinician to an indicated problem (36.53% of unlinked medications). Expert review of the combined knowledge base, including inferred and manually linked problem–medication pairs, found a sensitivity of 65.8% and a specificity of 97.9%. Conclusion Crowdsourcing is an effective, inexpensive method for generating a knowledge base of problem–medication pairs that is automatically mapped to local terminologies, up-to-date, and reflective of local prescribing practices and trends. PMID:22582202

  18. Translating three states of knowledge--discovery, invention, and innovation

    PubMed Central

    2010-01-01

    Background Knowledge Translation (KT) has historically focused on the proper use of knowledge in healthcare delivery. A knowledge base has been created through empirical research and resides in scholarly literature. Some knowledge is amenable to direct application by stakeholders who are engaged during or after the research process, as shown by the Knowledge to Action (KTA) model. Other knowledge requires multiple transformations before achieving utility for end users. For example, conceptual knowledge generated through science or engineering may become embodied as a technology-based invention through development methods. The invention may then be integrated within an innovative device or service through production methods. To what extent is KT relevant to these transformations? How might the KTA model accommodate these additional development and production activities while preserving the KT concepts? Discussion Stakeholders adopt and use knowledge that has perceived utility, such as a solution to a problem. Achieving a technology-based solution involves three methods that generate knowledge in three states, analogous to the three classic states of matter. Research activity generates discoveries that are intangible and highly malleable like a gas; development activity transforms discoveries into inventions that are moderately tangible yet still malleable like a liquid; and production activity transforms inventions into innovations that are tangible and immutable like a solid. The paper demonstrates how the KTA model can accommodate all three types of activity and address all three states of knowledge. Linking the three activities in one model also illustrates the importance of engaging the relevant stakeholders prior to initiating any knowledge-related activities. Summary Science and engineering focused on technology-based devices or services change the state of knowledge through three successive activities. Achieving knowledge implementation requires methods that accommodate these three activities and knowledge states. Accomplishing beneficial societal impacts from technology-based knowledge involves the successful progression through all three activities, and the effective communication of each successive knowledge state to the relevant stakeholders. The KTA model appears suitable for structuring and linking these processes. PMID:20205873

  19. Comparison of clinical knowledge bases for summarization of electronic health records.

    PubMed

    McCoy, Allison B; Sittig, Dean F; Wright, Adam

    2013-01-01

    Automated summarization tools that create condition-specific displays may improve clinician efficiency. These tools require new kinds of knowledge that are difficult to obtain. We compared five problem-medication pair knowledge bases generated using four previously described knowledge base development approaches. The number of pairs in the resulting mapped knowledge bases varied widely due to differing mapping techniques from the source terminologies, ranging from 2,873 to 63,977,738 pairs. The number of overlapping pairs across knowledge bases was low, with one knowledge base having half of its pairs overlapping with another knowledge base, and most having less than a third overlapping. Further research is necessary to better evaluate the knowledge bases independently in additional settings, and to identify methods to integrate the knowledge bases.

  20. Critical Analysis of Textbooks: Knowledge-Generating Logics and the Emerging Image of "Global Economic Contexts"

    ERIC Educational Resources Information Center

    Thoma, Michael

    2017-01-01

    This paper presents an approach to the critical analysis of textbook knowledge, which, working from a discourse theory perspective (based on the work of Foucault), refers to the performative nature of language. The critical potential of the approach derives from an analysis of knowledge-generating logics, which produce particular images of reality…

  1. Knowledge management impact of information technology Web 2.0/3.0. The case study of agent software technology usability in knowledge management system

    NASA Astrophysics Data System (ADS)

    Sołtysik-Piorunkiewicz, Anna

    2015-02-01

    How can we measure the impact of internet technology Web 2.0/3.0 on knowledge management? How can we use Web 2.0/3.0 technologies for generating, evaluating, sharing, and organizing knowledge in a knowledge-based organization? How can we evaluate it from a user-centered perspective? The article aims to provide a method for evaluating the usability of web technologies to support knowledge management in knowledge-based organizations across the various stages of the knowledge management cycle, taking into account generating knowledge, evaluating knowledge, sharing knowledge, etc., for modern Internet technologies, illustrated with the example of agent technologies. The method focuses on five areas of evaluation: GUI, functional structure, the way content is published, the organizational aspect, and the technological aspect. The method is based on proposed indicators relating respectively to specific areas of evaluation, taking into account the individual characteristics being scored. Each feature identified in the evaluation is first judged point-wise; this score is then verified and refined by means of appropriate indicators for the given feature. The article proposes indicators to measure the impact of Web 2.0/3.0 technologies on knowledge management and verifies them on an example of agent technology usability in a knowledge management system.

  2. Knowledge-based design of generate-and-patch problem solvers that solve global resource assignment problems

    NASA Technical Reports Server (NTRS)

    Voigt, Kerstin

    1992-01-01

    We present MENDER, a knowledge-based system that implements software design techniques specialized to automatically compile generate-and-patch problem solvers that satisfy global resource assignment problems. We provide empirical evidence of the superior performance of generate-and-patch over generate-and-test, even with constrained generation, for a global constraint in the domain of 2D floorplanning. For a second constraint in 2D floorplanning we show that even when it is possible to incorporate the constraint into a constrained generator, a generate-and-patch problem solver may satisfy the constraint more rapidly. We also briefly summarize how an extended version of our system applies to a constraint in the domain of multiprocessor scheduling.
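
    The generate-and-patch idea can be illustrated with a deliberately small sketch (a one-dimensional stand-in for the paper's 2D-floorplanning constraint; nothing here reproduces MENDER itself): instead of discarding a candidate that violates a global constraint, the solver repairs it.

    ```python
    # Toy contrast between generate-and-test and generate-and-patch (illustrative only).
    # Task: choose widths for modules so the total width does not exceed a global budget.
    import itertools, random

    SIZES = [1, 2, 3, 4]          # candidate widths per module
    N_MODULES, BUDGET = 4, 8      # global resource constraint: sum of widths <= BUDGET

    def generate_and_test():
        """Enumerate candidates and discard any that violate the global constraint."""
        for cand in itertools.product(SIZES, repeat=N_MODULES):
            if sum(cand) <= BUDGET:
                return list(cand)

    def generate_and_patch():
        """Generate one candidate, then patch it toward feasibility instead of rejecting it."""
        cand = [random.choice(SIZES) for _ in range(N_MODULES)]
        while sum(cand) > BUDGET:                 # patch: shrink the widest module
            i = cand.index(max(cand))
            cand[i] -= 1
        return cand

    print(generate_and_test(), generate_and_patch())
    ```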

  3. Case-based tutoring from a medical knowledge base.

    PubMed

    Chin, H L; Cooper, G F

    1989-01-01

    The past decade has seen the emergence of programs that make use of large knowledge bases to assist physicians in diagnosis within the general field of internal medicine. One such program, Internist-I, contains knowledge about over 600 diseases, covering a significant proportion of internal medicine. This paper describes the process of converting a subset of this knowledge base--in the area of cardiovascular diseases--into a probabilistic format, and the use of this resulting knowledge base to teach medical diagnostic knowledge. The system (called KBSimulator--for Knowledge-Based patient Simulator) generates simulated patient cases and uses these cases as a focal point from which to teach medical knowledge. This project demonstrates the feasibility of building an intelligent, flexible instructional system that uses a knowledge base constructed primarily for medical diagnosis.

  4. Innovative Extension Models and Smallholders: How ICT platforms can Deliver Timely Information to Farmers in India.

    NASA Astrophysics Data System (ADS)

    Nagothu, U. S.

    2016-12-01

    Agricultural extension services, among others, contribute to improving rural livelihoods and enhancing economic development. Knowledge development and transfer, from the cognitive science point of view, is about how farmers use and apply their experiential knowledge as well as newly acquired knowledge to solve new problems. This depends on the models adopted and the way knowledge is generated and delivered. New extension models based on ICT platforms and smart phones are promising. Results from a 5-year project (www.climaadapt.org) in India show that farmer-led on-farm validations of technologies and knowledge exchange through ICT-based platforms outperformed state-operated linear extension programs. Innovation here depends on the connectivity and networking between stakeholders involved in generating, transferring and using the knowledge. Key words: Smallholders, Knowledge, Extension, Innovation, India

  5. Empirical Analysis and Refinement of Expert System Knowledge Bases

    DTIC Science & Technology

    1988-08-31

    refinement. Both a simulated case generation program and a random rule basher were developed to enhance rule refinement experimentation. Substantial...the second fiscal year 88 objective was fully met. [Diagram labels: Rule Refinement System, Simulated Rule Basher, Case Generator, Stored Cases, Expert System Knowledge]...generated until the rule is satisfied. Cases may be randomly generated for a given rule or hypothesis. Rule Basher: given that one has a correct

  6. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs.

    PubMed

    McCoy, A B; Wright, A; Krousel-Wood, M; Thomas, E J; McCoy, J A; Sittig, D F

    2015-01-01

    Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes.
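
    A compact sketch of the kind of computation described (link frequency, link ratio, threshold filtering); the toy data, field names, and threshold value are assumptions for illustration, not the study's actual parameters.

    ```python
    # Illustrative crowdsourced KB construction from clinician-asserted problem-medication links.
    from collections import Counter

    # Each tuple: (problem, medication, linked_by_clinician)
    ENTRIES = [
        ("hypertension", "lisinopril", True),
        ("hypertension", "lisinopril", True),
        ("hypertension", "metformin", False),
        ("diabetes", "metformin", True),
        ("diabetes", "metformin", False),
    ]

    link_freq = Counter((p, m) for p, m, linked in ENTRIES if linked)
    pair_freq = Counter((p, m) for p, m, _ in ENTRIES)

    # link ratio = asserted links for the pair / total co-occurrences of the pair
    knowledge_base = {
        pair: link_freq[pair] / pair_freq[pair]
        for pair in link_freq
        if link_freq[pair] / pair_freq[pair] >= 0.5   # cutoff chosen via clinician review
    }
    print(knowledge_base)   # pairs retained for the problem-medication knowledge base
    ```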

  7. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs

    PubMed Central

    Wright, A.; Krousel-Wood, M.; Thomas, E. J.; McCoy, J. A.; Sittig, D. F.

    2015-01-01

    Summary Background Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. Objective We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. Methods We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. Results The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. Conclusions We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes. PMID:26171079

  8. A Natural Language Interface Concordant with a Knowledge Base.

    PubMed

    Han, Yong-Jin; Park, Seong-Bae; Park, Se-Young

    2016-01-01

    The discordance between expressions interpretable by a natural language interface (NLI) system and those answerable by a knowledge base is a critical problem in the field of NLIs. In order to solve this discordance problem, this paper proposes a method to translate natural language questions into formal queries that can be generated from a graph-based knowledge base. The proposed method considers a subgraph of a knowledge base as a formal query. Thus, all formal queries corresponding to a concept or a predicate in the knowledge base can be generated prior to query time, and all possible natural language expressions corresponding to each formal query can also be collected in advance. A natural language expression has a one-to-one mapping with a formal query. Hence, a natural language question is translated into a formal query by matching the question with the most appropriate natural language expression. If the confidence of this matching is not sufficiently high, the proposed method rejects the question and does not answer it. Multi-predicate queries are processed by regarding them as a set of collected expressions. The experimental results show that the proposed method thoroughly handles answerable questions from the knowledge base and rejects unanswerable ones effectively.
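
    A minimal sketch of the matching-with-rejection step described above: each pre-generated formal query is paired with collected natural language expressions, and a question is answered only when it matches one of them with sufficient confidence. The difflib similarity and the confidence cutoff are stand-ins for the paper's actual matcher.

    ```python
    # Illustrative question-to-formal-query matching with a rejection threshold.
    from difflib import SequenceMatcher

    # Expressions collected in advance, each mapped one-to-one to a formal (graph) query.
    EXPRESSIONS = {
        "who directed <film>": "SELECT ?d WHERE { <film> :directedBy ?d }",
        "when was <film> released": "SELECT ?y WHERE { <film> :releaseYear ?y }",
    }

    def translate(question, min_confidence=0.6):
        scored = {expr: SequenceMatcher(None, question.lower(), expr).ratio()
                  for expr in EXPRESSIONS}
        best = max(scored, key=scored.get)
        if scored[best] < min_confidence:
            return None                      # reject: the KB cannot answer this question
        return EXPRESSIONS[best]

    print(translate("who directed <film>?"))
    print(translate("what is the film's budget"))   # likely rejected
    ```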

  9. MMKG: An approach to generate metallic materials knowledge graph based on DBpedia and Wikipedia

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoming; Liu, Xin; Li, Xin; Pan, Dongyu

    2017-02-01

    The research and development of metallic materials play an important role in today's society, and meanwhile a large amount of metallic materials knowledge is generated and made available on the Web (e.g., Wikipedia) for materials experts. However, due to the diversity and complexity of metallic materials knowledge, knowledge utilization may encounter much inconvenience. The idea of a knowledge graph (e.g., DBpedia) provides a good way to organize the knowledge into a comprehensive entity network. Therefore, the motivation of our work is to generate a metallic materials knowledge graph (MMKG) using knowledge available on the Web. In this paper, an approach is proposed to build the MMKG based on DBpedia and Wikipedia. First, we use an algorithm based on directly linked sub-graph semantic distance (DLSSD) to preliminarily extract metallic materials entities from DBpedia according to some predefined seed entities; then, based on the results of the preliminary extraction, we use an algorithm which considers both semantic distance and string similarity (SDSS) to achieve further extraction. Second, due to the absence of materials properties in DBpedia, we use an ontology-based method to extract properties knowledge from the HTML tables of the corresponding Wikipedia pages to enrich the MMKG. A materials ontology is used to locate materials properties tables as well as to identify the structure of the tables. The proposed approach is evaluated by precision, recall, F1 and time performance, and the appropriate thresholds for the algorithms in our approach are determined through experiments. The experimental results show that our approach achieves the expected performance. A tool prototype is also designed to facilitate the process of building the MMKG as well as to demonstrate the effectiveness of our approach.
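
    A rough sketch of the candidate-scoring step, assuming the combined measure is something like a weighted blend of graph-based semantic distance to the seed entities and string similarity; the exact DLSSD/SDSS formulas, weights, and the toy link graph below are assumptions, not the paper's.

    ```python
    # Illustrative seed-based entity extraction combining semantic distance and string similarity.
    from collections import deque
    from difflib import SequenceMatcher

    # Undirected link graph standing in for the "directly linked" structure of DBpedia.
    GRAPH = {
        "Stainless_steel": {"Steel", "Chromium"},
        "Steel": {"Stainless_steel", "Carbon_steel", "Iron"},
        "Carbon_steel": {"Steel"},
        "Iron": {"Steel", "Haemoglobin"},
        "Haemoglobin": {"Iron"},
        "Chromium": {"Stainless_steel"},
    }

    def hops(a, b):
        """Shortest-path length in the link graph (a crude semantic distance)."""
        seen, frontier = {a}, deque([(a, 0)])
        while frontier:
            node, d = frontier.popleft()
            if node == b:
                return d
            for nxt in GRAPH.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
        return float("inf")

    def score(candidate, seeds, w=0.5):
        sem = min(hops(candidate, s) for s in seeds)           # smaller is closer
        string = max(SequenceMatcher(None, candidate, s).ratio() for s in seeds)
        return w * (1.0 / (1.0 + sem)) + (1.0 - w) * string    # higher is more material-like

    seeds = ["Steel", "Stainless_steel"]
    for c in ["Carbon_steel", "Haemoglobin"]:
        print(c, round(score(c, seeds), 2))
    ```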

  10. Argumentation Based Joint Learning: A Novel Ensemble Learning Approach

    PubMed Central

    Xu, Junyi; Yao, Li; Li, Le

    2015-01-01

    Recently, ensemble learning methods have been widely used to improve classification performance in machine learning. In this paper, we present a novel ensemble learning method: argumentation based multi-agent joint learning (AMAJL), which integrates ideas from multi-agent argumentation, ensemble learning, and association rule mining. In AMAJL, argumentation technology is introduced as an ensemble strategy to integrate multiple base classifiers and generate a high performance ensemble classifier. We design an argumentation framework named Arena as a communication platform for knowledge integration. Through argumentation based joint learning, high quality individual knowledge can be extracted, and thus a refined global knowledge base can be generated and used independently for classification. We perform numerous experiments on multiple public datasets using AMAJL and other benchmark methods. The results demonstrate that our method can effectively extract high quality knowledge for ensemble classifier and improve the performance of classification. PMID:25966359

  11. Computer integrated documentation

    NASA Technical Reports Server (NTRS)

    Boy, Guy

    1991-01-01

    The main technical issues of the Computer Integrated Documentation (CID) project are presented. The problem of automating document management and maintenance is analyzed both from an artificial intelligence viewpoint and from a human factors viewpoint. Possible technologies for CID are reviewed: conventional approaches to indexing and information retrieval; hypertext; and knowledge-based systems. A particular effort was made to provide an appropriate representation for contextual knowledge. This representation is used to generate context on hypertext links. Thus, indexing in CID is context sensitive. The implementation of the current version of CID is described. It includes a hypertext data base, a knowledge-based management and maintenance system, and a user interface. A series of theoretical considerations is also presented, such as navigation in hyperspace, acquisition of indexing knowledge, generation and maintenance of large documentation, and relation to other work.

  12. Ontology to relational database transformation for web application development and maintenance

    NASA Astrophysics Data System (ADS)

    Mahmudi, Kamal; Inggriani Liem, M. M.; Akbar, Saiful

    2018-03-01

    Ontology is used as knowledge representation while a database is used as a facts recorder in a KMS (Knowledge Management System). In most applications, data are managed in a database system, updated through the application, and then transformed to knowledge as needed. Once a domain conceptor defines the knowledge in the ontology, the application and database can be generated from the ontology. Most existing frameworks generate the application from its database. In this research, the ontology is used for generating the application. As the data are updated through the application, a mechanism is designed to trigger an update to the ontology so that the application can be rebuilt based on the newest ontology. With this approach, a knowledge engineer has full flexibility to renew the application based on the latest ontology without dependency on a software developer. In many cases, the concept needs to be updated when the data change. The framework was built and tested in a Spring Java environment. A case study was conducted to prove the concept.

  13. A knowledge-based approach to automated flow-field zoning for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1989-01-01

    An automated three-dimensional zonal grid generation capability for computational fluid dynamics is shown through the development of a demonstration computer program capable of automatically zoning the flow field of representative two-dimensional (2-D) aerodynamic configurations. The applicability of a knowledge-based programming approach to the domain of flow-field zoning is examined. Several aspects of flow-field zoning make the application of knowledge-based techniques challenging: the need for perceptual information, the role of individual bias in the design and evaluation of zonings, and the fact that the zoning process is modeled as a constructive, design-type task (for which there are relatively few examples of successful knowledge-based systems in any domain). Engineering solutions to the problems arising from these aspects are developed, and a demonstration system is implemented which can design, generate, and output flow-field zonings for representative 2-D aerodynamic configurations.

  14. An Ontology-Based Conceptual Model For Accumulating And Reusing Knowledge In A DMAIC Process

    NASA Astrophysics Data System (ADS)

    Nguyen, ThanhDat; Kifor, Claudiu Vasile

    2015-09-01

    DMAIC (Define, Measure, Analyze, Improve, and Control) is an important process used to enhance the quality of processes based on knowledge. However, it is difficult to access DMAIC knowledge. Conventional approaches meet a problem arising from structuring and reusing DMAIC knowledge. The main reason is that DMAIC knowledge is not represented and organized systematically. In this article, we overcome the problem based on a conceptual model that is a combination of the DMAIC process, knowledge management, and ontology engineering. The main idea of our model is to utilize ontologies to represent the knowledge generated by each of the DMAIC phases. We build five different knowledge bases for storing all knowledge of the DMAIC phases with the support of necessary tools and appropriate techniques from the information technology area. Consequently, these knowledge bases make knowledge available to experts, managers, and web users during or after DMAIC execution in order to share and reuse existing knowledge.

  15. A Knowledge-Base for a Personalized Infectious Disease Risk Prediction System.

    PubMed

    Vinarti, Retno; Hederman, Lucy

    2018-01-01

    We present a knowledge-base to represent collated infectious disease risk (IDR) knowledge. The knowledge is about the personal and contextual risk of contracting an infectious disease, obtained from declarative sources (e.g. the Atlas of Human Infectious Diseases). Automated prediction requires encoding this knowledge in a form that can produce risk probabilities (e.g. a Bayesian Network - BN). The knowledge-base presented in this paper feeds an algorithm that can auto-generate the BN. Knowledge from 234 infectious diseases was compiled. From this compilation, we designed an ontology and five rule types for modelling IDR knowledge in general. The evaluation aims to assess whether the knowledge-base structure, and its application to three disease-country contexts, meets the needs of a personalized IDR prediction system. From the evaluation results, the knowledge-base conforms to the system's purpose: personalization of infectious disease risk.
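
    One plausible way such a knowledge base could feed an automatically generated probabilistic model is a noisy-OR combination of rule-supplied risk factors; the disease, factors, and probabilities below are invented, and noisy-OR is an assumption since the abstract does not specify how the Bayesian network is parameterized.

    ```python
    # Illustrative noisy-OR combination of rule-based risk factors (not the paper's actual model).
    RISK_RULES = {
        # "disease": {risk factor: probability the factor alone leads to infection}
        "dengue": {"travel_to_endemic_area": 0.20, "rainy_season": 0.10, "no_repellent": 0.05},
    }

    def personal_risk(disease, person_factors):
        """Noisy-OR: P(disease) = 1 - prod(1 - p_i) over the factors present for this person."""
        p_not = 1.0
        for factor, p in RISK_RULES[disease].items():
            if factor in person_factors:
                p_not *= (1.0 - p)
        return 1.0 - p_not

    print(personal_risk("dengue", {"travel_to_endemic_area", "rainy_season"}))  # 0.28
    ```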

  16. Sociopathic Knowledge Bases: Correct Knowledge Can Be Harmful Even Given Unlimited Computation

    DTIC Science & Technology

    1989-08-01

    positive, as false positives generated by a medical program can often be caught by a physician upon further testing. False negatives, however, may be...improvement over the knowledge base tested is obtained. Although our work is largely theoretical research oriented, one example of experiments is...knowledge base, improves the performance by about 10%. ...of tests. First, we divide the cases into a training set and a validation set with 70% vs. 30% each

  17. SWAN: An expert system with natural language interface for tactical air capability assessment

    NASA Technical Reports Server (NTRS)

    Simmons, Robert M.

    1987-01-01

    SWAN is an expert system and natural language interface for assessing the war fighting capability of Air Force units in Europe. The expert system is an object oriented knowledge based simulation with an alternate worlds facility for performing what-if excursions. Responses from the system take the form of generated text, tables, or graphs. The natural language interface is an expert system in its own right, with a knowledge base and rules which understand how to access external databases, models, or expert systems. The distinguishing feature of the Air Force expert system is its use of meta-knowledge to generate explanations in the frame and procedure based environment.

  18. Model-based software design

    NASA Technical Reports Server (NTRS)

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui; Yenne, Britt; Vansickle, Larry; Ballantyne, Michael

    1992-01-01

    Domain-specific knowledge is required to create specifications, generate code, and understand existing systems. Our approach to automating software design is based on instantiating an application domain model with industry-specific knowledge and then using that model to achieve the operational goals of specification elicitation and verification, reverse engineering, and code generation. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model.

  19. Reducing a Knowledge-Base Search Space When Data Are Missing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that without heuristics generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
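
    The 1 + 2^n growth comes from each rule's unknown antecedent being either assumed true or assumed false (plus the no-assumption baseline). A tiny sketch of the enumeration follows, with a priority-based cutoff standing in for the software's heuristics; the rules and priorities are invented for illustration.

    ```python
    # Why missing data explodes the search space, and one simple way to prune it (illustrative).
    from itertools import product

    RULES = {          # rule name -> priority tag for the states it would assert
        "pump_cavitation": 3,
        "sensor_drift":    1,
        "valve_stuck":     2,
        "line_blockage":   3,
    }

    def scenarios(rules, min_priority=0):
        """Enumerate assumed-true/assumed-false combinations for rules above a priority cutoff."""
        considered = [r for r, pri in rules.items() if pri >= min_priority]
        for assignment in product([True, False], repeat=len(considered)):
            yield dict(zip(considered, assignment))

    print(sum(1 for _ in scenarios(RULES)))                  # 2**4 = 16 scenarios (+1 baseline)
    print(sum(1 for _ in scenarios(RULES, min_priority=3)))  # pruning cuts it to 2**2 = 4
    ```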

  20. Ontological knowledge engine and health screening data enabled ubiquitous personalized physical fitness (UFIT).

    PubMed

    Su, Chuan-Jun; Chiang, Chang-Yu; Chih, Meng-Chun

    2014-03-07

    Good physical fitness generally makes the body less prone to common diseases. A personalized exercise plan that promotes a balanced approach to fitness helps promote fitness, while inappropriate forms of exercise can have adverse consequences for health. This paper aims to develop an ontology-driven knowledge-based system for generating custom-designed exercise plans based on a user's profile and health status, incorporating international standard Health Level Seven International (HL7) data on physical fitness and health screening. The generated plan is exposed through Representational State Transfer (REST)-style web services which can be accessed from any Internet-enabled device and deployed in cloud computing environments. To ensure the practicality of the generated exercise plans, the encapsulated knowledge used as a basis for inference in the system is acquired from domain experts. The proposed Ubiquitous Exercise Plan Generation for Personalized Physical Fitness (UFIT) will not only improve health-related fitness through generating personalized exercise plans, but also aid users in avoiding inappropriate workouts.

  1. Ontological Knowledge Engine and Health Screening Data Enabled Ubiquitous Personalized Physical Fitness (UFIT)

    PubMed Central

    Su, Chuan-Jun; Chiang, Chang-Yu; Chih, Meng-Chun

    2014-01-01

    Good physical fitness generally makes the body less prone to common diseases. A personalized exercise plan that promotes a balanced approach to fitness helps promote fitness, while inappropriate forms of exercise can have adverse consequences for health. This paper aims to develop an ontology-driven knowledge-based system for generating custom-designed exercise plans based on a user's profile and health status, incorporating international standard Health Level Seven International (HL7) data on physical fitness and health screening. The generated plan is exposed through Representational State Transfer (REST)-style web services which can be accessed from any Internet-enabled device and deployed in cloud computing environments. To ensure the practicality of the generated exercise plans, the encapsulated knowledge used as a basis for inference in the system is acquired from domain experts. The proposed Ubiquitous Exercise Plan Generation for Personalized Physical Fitness (UFIT) will not only improve health-related fitness through generating personalized exercise plans, but also aid users in avoiding inappropriate workouts. PMID:24608002

  2. The computer integrated documentation project: A merge of hypermedia and AI techniques

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Boy, Guy

    1993-01-01

    To generate intelligent indexing that allows context-sensitive information retrieval, a system must be able to acquire knowledge directly through interaction with users. In this paper, we present the architecture for CID (Computer Integrated Documentation). CID is a system that enables integration of various technical documents in a hypertext framework and includes an intelligent browsing system that incorporates indexing in context. CID's knowledge-based indexing mechanism allows case based knowledge acquisition by experimentation. It utilizes on-line user information requirements and suggestions either to reinforce current indexing in case of success or to generate new knowledge in case of failure. This allows CID's intelligent interface system to provide helpful responses, based on previous experience (user feedback). We describe CID's current capabilities and provide an overview of our plans for extending the system.

  3. Using diagnostic experiences in experience-based innovative design

    NASA Astrophysics Data System (ADS)

    Prabhakar, Sattiraju; Goel, Ashok K.

    1992-03-01

    Designing a novel class of devices requires innovation. Often, the design knowledge of these devices does not identify and address the constraints that are required for their performance in the real-world operating environment. So any new design adapted from these devices tends to be similarly sketchy. In order to address this problem, we propose a case-based reasoning method called performance driven innovation (PDI). We model the design as a dynamic process, arrive at a design by adaptation from known designs, generate failures for this design for some new constraints, and then use this failure knowledge to generate the required design knowledge for the new constraints. In this paper, we discuss two aspects of PDI: the representation of PDI cases and the translation of the failure knowledge into design knowledge for a constraint. Each case in PDI has two components: design and failure knowledge. Both of them are represented using a substance-behavior-function model. Failure knowledge has internal device failure behaviors and external environmental behaviors. The environmental behavior for a constraint, interacting with the design behaviors, results in the internal failure behavior. The failure adaptation strategy generates functions from the failure knowledge which can be addressed using routine design methods. These ideas are illustrated using a coffee-maker example.

  4. Route Generation for a Synthetic Character (BOT) Using a Partial or Incomplete Knowledge Route Generation Algorithm in UT2004 Virtual Environment

    NASA Technical Reports Server (NTRS)

    Hanold, Gregg T.; Hanold, David T.

    2010-01-01

    This paper presents a new Route Generation Algorithm that accurately and realistically represents human route planning and navigation for Military Operations in Urban Terrain (MOUT). The accuracy of this algorithm in representing human behavior is measured using the Unreal Tournament™ 2004 (UT2004) Game Engine to provide the simulation environment in which the differences between the routes taken by the human player and those of a Synthetic Agent (BOT) executing the A-star algorithm and the new Route Generation Algorithm can be compared. The new Route Generation Algorithm computes the BOT route based on partial or incomplete knowledge received from the UT2004 game engine during game play. To allow BOT navigation to occur continuously throughout the game play with incomplete knowledge of the terrain, a spatial network model of the UT2004 MOUT terrain is captured and stored in an Oracle 11g Spatial Data Object (SDO). The SDO allows a partial data query to be executed to generate continuous route updates based on the terrain knowledge and the stored dynamic BOT, Player and environmental parameters returned by the query. The partial data query permits the dynamic adjustment of the planned routes by the Route Generation Algorithm based on the current state of the environment during a simulation. The dynamic nature of this algorithm more accurately allows the BOT to mimic the routes taken by the human executing under the same conditions, thereby improving the realism of the BOT in a MOUT simulation environment.

  5. Building the Knowledge Base to Support the Automatic Animation Generation of Chinese Traditional Architecture

    NASA Astrophysics Data System (ADS)

    Wei, Gongjin; Bai, Weijing; Yin, Meifang; Zhang, Songmao

    We present a practice of applying the Semantic Web technologies in the domain of Chinese traditional architecture. A knowledge base consisting of one ontology and four rule bases is built to support the automatic generation of animations that demonstrate the construction of various Chinese timber structures based on the user's input. Different Semantic Web formalisms are used, e.g., OWL DL, SWRL and Jess, to capture the domain knowledge, including the wooden components needed for a given building, construction sequence, and the 3D size and position of every piece of wood. Our experience in exploiting the current Semantic Web technologies in real-world application systems indicates their prominent advantages (such as the reasoning facilities and modeling tools) as well as the limitations (such as low efficiency).

  6. Evolutionary Local Search of Fuzzy Rules through a novel Neuro-Fuzzy encoding method.

    PubMed

    Carrascal, A; Manrique, D; Ríos, J; Rossi, C

    2003-01-01

    This paper proposes a new approach for constructing fuzzy knowledge bases using evolutionary methods. We have designed a genetic algorithm that automatically builds neuro-fuzzy architectures based on a new indirect encoding method. The neuro-fuzzy architecture represents the fuzzy knowledge base that solves a given problem; the search for this architecture takes advantage of a local search procedure that improves the chromosomes at each generation. Experiments conducted both on artificially generated and real world problems confirm the effectiveness of the proposed approach.

  7. THE VALIDITY OF HUMAN AND COMPUTERIZED WRITING ASSESSMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ronald L. Boring

    2005-09-01

    This paper summarizes an experiment designed to assess the validity of essay grading between holistic and analytic human graders and a computerized grader based on latent semantic analysis. The validity of the grade was gauged by the extent to which the student's knowledge of the topic correlated with the grader's expert knowledge. To assess knowledge, Pathfinder networks were generated by the student essay writers, the holistic and analytic graders, and the computerized grader. It was found that the computer-generated grades more closely matched the definition of valid grading than did human-generated grades.

  8. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high-capacity data manipulation required by the most complex real time models.

  9. Applying Knowledge to Generate Action: A Community-Based Knowledge Translation Framework

    ERIC Educational Resources Information Center

    Campbell, Barbara

    2010-01-01

    Introduction: Practical strategies are needed to translate research knowledge between researchers and users into action. For effective translation to occur, researchers and users should partner during the research process, recognizing the impact that knowledge, when translated into practice, will have on those most affected by that research.…

  10. Knowledge-based zonal grid generation for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automation of flow field zoning in two dimensions is an important step towards reducing the difficulty of three-dimensional grid generation in computational fluid dynamics. Using a knowledge-based approach makes sense, but problems arise which are caused by aspects of zoning involving perception, lack of expert consensus, and design processes. These obstacles are overcome by means of a simple shape and configuration language, a tunable zoning archetype, and a method of assembling plans from selected, predefined subplans. A demonstration system for knowledge-based two-dimensional flow field zoning has been successfully implemented and tested on representative aerodynamic configurations. The results show that this approach can produce flow field zonings that are acceptable to experts with differing evaluation criteria.

  11. Integrative pathway knowledge bases as a tool for systems molecular medicine.

    PubMed

    Liang, Mingyu

    2007-08-20

    There exists a sense of urgency to begin to generate a cohesive assembly of biomedical knowledge as the pace of knowledge accumulation accelerates. The urgency is in part driven by the emergence of systems molecular medicine that emphasizes the combination of systems analysis and molecular dissection in the future of medical practice and research. A potentially powerful approach is to build integrative pathway knowledge bases that link organ systems function with molecules.

  12. New Proposals for Generating and Exploiting Solution-Oriented Knowledge

    ERIC Educational Resources Information Center

    Gredig, Daniel; Sommerfeld, Peter

    2008-01-01

    The claim that professional social work should be based on scientific knowledge is many decades old with knowledge transfer usually moving in the direction from science to practice. The authors critique this model of knowledge transfer and support a hybrid one that places more of an emphasis on professional knowledge and action occurring in the…

  13. Mobile Eye Tracking Methodology in Informal E-Learning in Social Groups in Technology-Enhanced Science Centres

    ERIC Educational Resources Information Center

    Magnussen, Rikke; Zachariassen, Maria; Kharlamov, Nikita; Larsen, Birger

    2017-01-01

    This paper presents a methodological discussion of the potential and challenges of involving mobile eye tracking technology in studies of knowledge generation and learning in a science centre context. The methodological exploration is based on eye-tracking studies of audience interaction and knowledge generation in the technology-enhanced health…

  14. The Knowledge Building Paradigm: A Model of Learning for Net Generation Students

    ERIC Educational Resources Information Center

    Philip, Donald

    2005-01-01

    In this article Donald Philip describes Knowledge Building, a pedagogy based on the way research organizations function. The global economy, Philip argues, is driving a shift from older, industrial models to the model of the business as a learning organization. The cognitive patterns of today's Net Generation students, formed by lifetime exposure…

  15. Knowledge Base for Automatic Generation of Online IMS LD Compliant Course Structures

    ERIC Educational Resources Information Center

    Pacurar, Ecaterina Giacomini; Trigano, Philippe; Alupoaie, Sorin

    2006-01-01

    Our article presents a pedagogical scenarios-based web application that allows the automatic generation and development of pedagogical websites. These pedagogical scenarios are represented in the IMS Learning Design standard. Our application is a web portal helping teachers to dynamically generate web course structures, to edit pedagogical content…

  16. Concept maps: A tool for knowledge management and synthesis in web-based conversational learning.

    PubMed

    Joshi, Ankur; Singh, Satendra; Jaswal, Shivani; Badyal, Dinesh Kumar; Singh, Tejinder

    2016-01-01

Web-based conversational learning provides an opportunity for shared knowledge base creation through collaboration and collective wisdom extraction. The amount of information generated in such forums is usually huge and multidimensional (in alignment with the desirable preconditions for constructivist knowledge creation), and sometimes the nature of the expected new information cannot be anticipated in advance. Thus, concept maps (crafted from the constructed data) as "process summary" tools may be a solution to improve critical thinking and learning by making connections between the facts or knowledge shared by the participants during online discussion. This exploratory paper begins with a description of this innovation as tried on a web-based interaction platform (email list management software), FAIMER-Listserv, and presents qualitative evidence generated through peer feedback. This process description is further supported by a theoretical construct showing how social constructivism (inclusive of autonomy and complexity) affects conversational learning. The paper rationalizes the use of the concept map as a mid-summary tool for extracting information and making further sense of this apparent intricacy.

  17. Knowledge-based approach for generating target system specifications from a domain model

    NASA Technical Reports Server (NTRS)

    Gomaa, Hassan; Kerschberg, Larry; Sugumaran, Vijayan

    1992-01-01

    Several institutions in industry and academia are pursuing research efforts in domain modeling to address unresolved issues in software reuse. To demonstrate the concepts of domain modeling and software reuse, a prototype software engineering environment is being developed at George Mason University to support the creation of domain models and the generation of target system specifications. This prototype environment, which is application domain independent, consists of an integrated set of commercial off-the-shelf software tools and custom-developed software tools. This paper describes the knowledge-based tool that was developed as part of the environment to generate target system specifications from a domain model.

  18. A Knowledge Generation Model via the Hypernetwork

    PubMed Central

    Liu, Jian-Guo; Yang, Guang-Yong; Hu, Zhao-Long

    2014-01-01

The influence of the statistical properties of the network on the knowledge diffusion has been extensively studied. However, the structure evolution and the knowledge generation processes are always integrated simultaneously. By introducing the Cobb-Douglas production function and treating the knowledge growth as a cooperative production of knowledge, in this paper, we present two knowledge-generation dynamic evolving models based on different evolving mechanisms. The first model, named “HDPH model,” adopts the hyperedge growth and the hyperdegree preferential attachment mechanisms. The second model, named “KSPH model,” adopts the hyperedge growth and the knowledge stock preferential attachment mechanisms. We investigate the effect of the parameters (α, β) on the total knowledge stock of the two models. The hyperdegree distribution of the HDPH model can be theoretically analyzed by the mean-field theory. The analytic result indicates that the hyperdegree distribution of the HDPH model obeys the power-law distribution and the exponent is γ = 2 + 1/m. Furthermore, we present the distributions of the knowledge stock for different parameters (α, β). The findings indicate that our proposed models could be helpful for deeply understanding the scientific research cooperation. PMID:24626143

  19. A knowledge generation model via the hypernetwork.

    PubMed

    Liu, Jian-Guo; Yang, Guang-Yong; Hu, Zhao-Long

    2014-01-01

    The influence of the statistical properties of the network on the knowledge diffusion has been extensively studied. However, the structure evolution and the knowledge generation processes are always integrated simultaneously. By introducing the Cobb-Douglas production function and treating the knowledge growth as a cooperative production of knowledge, in this paper, we present two knowledge-generation dynamic evolving models based on different evolving mechanisms. The first model, named "HDPH model," adopts the hyperedge growth and the hyperdegree preferential attachment mechanisms. The second model, named "KSPH model," adopts the hyperedge growth and the knowledge stock preferential attachment mechanisms. We investigate the effect of the parameters (α,β) on the total knowledge stock of the two models. The hyperdegree distribution of the HDPH model can be theoretically analyzed by the mean-field theory. The analytic result indicates that the hyperdegree distribution of the HDPH model obeys the power-law distribution and the exponent is γ = 2 + 1/m. Furthermore, we present the distributions of the knowledge stock for different parameters (α,β). The findings indicate that our proposed models could be helpful for deeply understanding the scientific research cooperation.
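    As a rough illustration of the Cobb-Douglas treatment of knowledge growth described in this abstract, the sketch below (Python; the exponents, the uniform partner choice, and all numbers are assumptions, and the actual models use hyperedge growth with preferential attachment) treats each cooperation as a production event whose output is added to the participants' knowledge stocks.

        # Cobb-Douglas style output from two cooperating knowledge stocks:
        # gain = A * k1^alpha * k2^beta (illustrative, not the authors' code).
        import random

        def knowledge_gain(k1, k2, alpha, beta, A=1.0):
            return A * (k1 ** alpha) * (k2 ** beta)

        random.seed(0)
        stock = [1.0]                      # initial node with unit knowledge stock
        alpha, beta = 0.5, 0.5             # illustrative exponents
        for _ in range(100):
            partner = random.randrange(len(stock))   # uniform attachment for brevity
            gain = knowledge_gain(1.0, stock[partner], alpha, beta)
            stock[partner] += gain                   # existing node gains knowledge
            stock.append(1.0 + gain)                 # newcomer keeps its share

        print(f"total knowledge stock after 100 steps: {sum(stock):.2f}")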

  20. Modeling technology innovation: how science, engineering, and industry methods can combine to generate beneficial socioeconomic impacts.

    PubMed

    Stone, Vathsala I; Lane, Joseph P

    2012-05-16

    Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aspects of public policies, in addition to expanding the knowledge base. For example, beneficial healthcare services and devices are expected to result from investments in research and development (R&D) programs, which assume a causal link to commercial innovation. Such programs are increasingly held accountable for evidence of impact-that is, innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics about research outputs (published discoveries), with less focus on transfer metrics about development outputs (patented prototypes) and almost none on econometrics related to production outputs (commercial innovations). This disparity is particularly problematic for the expressed intent of such programs, as most measurable socioeconomic benefits result from the last category of outputs. This paper proposes a conceptual framework integrating all three knowledge-generating methods into a logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, the integration of the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor. The resulting logic model framework explicitly traces the progress of knowledge from inputs, following it through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), as it generates the intended socio-beneficial impacts. It is a hybrid model for generating technology-based innovations, where best practices in new product development merge with a widely accepted knowledge-translation approach. Given the emphasis on evidence-based practice in the medical and health fields and "bench to bedside" expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes. High-cost/high-risk industries like healthcare require the market deployment of technology-based innovations to improve domestic society in a global economy. An appropriate balance of relevance and rigor in research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model to effectively allocate public funds and thereby deliberately and systematically accomplish socioeconomic benefits.

  1. Modeling technology innovation: How science, engineering, and industry methods can combine to generate beneficial socioeconomic impacts

    PubMed Central

    2012-01-01

    Background Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aspects of public policies, in addition to expanding the knowledge base. For example, beneficial healthcare services and devices are expected to result from investments in research and development (R&D) programs, which assume a causal link to commercial innovation. Such programs are increasingly held accountable for evidence of impact—that is, innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics about research outputs (published discoveries), with less focus on transfer metrics about development outputs (patented prototypes) and almost none on econometrics related to production outputs (commercial innovations). This disparity is particularly problematic for the expressed intent of such programs, as most measurable socioeconomic benefits result from the last category of outputs. Methods This paper proposes a conceptual framework integrating all three knowledge-generating methods into a logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, the integration of the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor. Results The resulting logic model framework explicitly traces the progress of knowledge from inputs, following it through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), as it generates the intended socio-beneficial impacts. It is a hybrid model for generating technology-based innovations, where best practices in new product development merge with a widely accepted knowledge-translation approach. Given the emphasis on evidence-based practice in the medical and health fields and “bench to bedside” expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes. Conclusions High-cost/high-risk industries like healthcare require the market deployment of technology-based innovations to improve domestic society in a global economy. An appropriate balance of relevance and rigor in research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model to effectively allocate public funds and thereby deliberately and systematically accomplish socioeconomic benefits. PMID:22591638

  2. A knowledge-based approach to improving and homogenizing intensity modulated radiation therapy planning quality among treatment centers: an example application to prostate cancer planning.

    PubMed

    Good, David; Lo, Joseph; Lee, W Robert; Wu, Q Jackie; Yin, Fang-Fang; Das, Shiva K

    2013-09-01

    Intensity modulated radiation therapy (IMRT) treatment planning can have wide variation among different treatment centers. We propose a system to leverage the IMRT planning experience of larger institutions to automatically create high-quality plans for outside clinics. We explore feasibility by generating plans for patient datasets from an outside institution by adapting plans from our institution. A knowledge database was created from 132 IMRT treatment plans for prostate cancer at our institution. The outside institution, a community hospital, provided the datasets for 55 prostate cancer cases, including their original treatment plans. For each "query" case from the outside institution, a similar "match" case was identified in the knowledge database, and the match case's plan parameters were then adapted and optimized to the query case by use of a semiautomated approach that required no expert planning knowledge. The plans generated with this knowledge-based approach were compared with the original treatment plans at several dose cutpoints. Compared with the original plan, the knowledge-based plan had a significantly more homogeneous dose to the planning target volume and a significantly lower maximum dose. The volumes of the rectum, bladder, and femoral heads above all cutpoints were nominally lower for the knowledge-based plan; the reductions were significantly lower for the rectum. In 40% of cases, the knowledge-based plan had overall superior (lower) dose-volume histograms for rectum and bladder; in 54% of cases, the comparison was equivocal; in 6% of cases, the knowledge-based plan was inferior for both bladder and rectum. Knowledge-based planning was superior or equivalent to the original plan in 95% of cases. The knowledge-based approach shows promise for homogenizing plan quality by transferring planning expertise from more experienced to less experienced institutions. Copyright © 2013 Elsevier Inc. All rights reserved.
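    A hedged sketch of the general case-matching step described above (not the authors' system; the features, numbers, and plan parameters are hypothetical): the most similar prior case is retrieved from the knowledge database and its plan parameters are reused as the starting point for optimization.

        # Pick the most similar "match" case from a knowledge database of prior plans
        # using simple anatomy-derived features (illustrative only).
        import math

        knowledge_db = [  # hypothetical prior cases: (case_id, feature vector, plan params)
            ("case_A", (68.0, 0.21, 0.35), {"beams": 7, "rectum_max_Gy": 75.0}),
            ("case_B", (54.0, 0.15, 0.28), {"beams": 7, "rectum_max_Gy": 73.5}),
        ]

        def match(query_features):
            """Return the stored plan of the case closest to the query in feature space."""
            case_id, _, params = min(knowledge_db,
                                     key=lambda c: math.dist(query_features, c[1]))
            return case_id, params

        # query: (PTV volume cm^3, rectum/PTV overlap fraction, bladder/PTV overlap fraction)
        print(match((60.0, 0.18, 0.30)))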

  3. Knowledge management for systems biology a general and visually driven framework applied to translational medicine.

    PubMed

    Maier, Dieter; Kalus, Wenzel; Wolff, Martin; Kalko, Susana G; Roca, Josep; Marin de Mas, Igor; Turan, Nil; Cascante, Marta; Falciani, Francesco; Hernandez, Miguel; Villà-Freixa, Jordi; Losko, Sascha

    2011-03-05

To enhance our understanding of complex biological systems like diseases we need to put all of the available data into context and use this to detect relations, patterns and rules which allow predictive hypotheses to be defined. Life science has become a data-rich science with information about the behaviour of millions of entities like genes, chemical compounds, diseases, cell types and organs, which are organised in many different databases and/or spread throughout the literature. Existing knowledge such as genotype-phenotype relations or signal transduction pathways must be semantically integrated and dynamically organised into structured networks that are connected with clinical and experimental data. Different approaches to this challenge exist but so far none has proven entirely satisfactory. To address this challenge we previously developed a generic knowledge management framework, BioXM™, which allows the dynamic, graphic generation of domain-specific knowledge representation models based on specific objects and their relations, supporting annotations and ontologies. Here we demonstrate the utility of BioXM for knowledge management in systems biology as part of the EU FP6 BioBridge project on translational approaches to chronic diseases. From clinical and experimental data, text-mining results and public databases we generate a chronic obstructive pulmonary disease (COPD) knowledge base and demonstrate its use by mining specific molecular networks together with integrated clinical and experimental data. We generate the first semantically integrated COPD-specific public knowledge base and find that, for the integration of clinical and experimental data with pre-existing knowledge, the configuration-based set-up enabled by BioXM reduced implementation time and effort for the knowledge base compared to similar systems implemented as classical software development projects. The knowledge base enables the retrieval of sub-networks including protein-protein interaction, pathway, gene-disease and gene-compound data which are used for subsequent data analysis, modelling and simulation. Pre-structured queries and reports enhance usability; establishing their use in everyday clinical settings requires further simplification with a browser-based interface which is currently under development.

  4. Knowledge management for systems biology a general and visually driven framework applied to translational medicine

    PubMed Central

    2011-01-01

Background To enhance our understanding of complex biological systems like diseases we need to put all of the available data into context and use this to detect relations, patterns and rules which allow predictive hypotheses to be defined. Life science has become a data-rich science with information about the behaviour of millions of entities like genes, chemical compounds, diseases, cell types and organs, which are organised in many different databases and/or spread throughout the literature. Existing knowledge such as genotype-phenotype relations or signal transduction pathways must be semantically integrated and dynamically organised into structured networks that are connected with clinical and experimental data. Different approaches to this challenge exist but so far none has proven entirely satisfactory. Results To address this challenge we previously developed a generic knowledge management framework, BioXM™, which allows the dynamic, graphic generation of domain-specific knowledge representation models based on specific objects and their relations, supporting annotations and ontologies. Here we demonstrate the utility of BioXM for knowledge management in systems biology as part of the EU FP6 BioBridge project on translational approaches to chronic diseases. From clinical and experimental data, text-mining results and public databases we generate a chronic obstructive pulmonary disease (COPD) knowledge base and demonstrate its use by mining specific molecular networks together with integrated clinical and experimental data. Conclusions We generate the first semantically integrated COPD-specific public knowledge base and find that, for the integration of clinical and experimental data with pre-existing knowledge, the configuration-based set-up enabled by BioXM reduced implementation time and effort for the knowledge base compared to similar systems implemented as classical software development projects. The knowledge base enables the retrieval of sub-networks including protein-protein interaction, pathway, gene-disease and gene-compound data which are used for subsequent data analysis, modelling and simulation. Pre-structured queries and reports enhance usability; establishing their use in everyday clinical settings requires further simplification with a browser-based interface which is currently under development. PMID:21375767

  5. Effects of Scaffolds and Scientific Reasoning Ability on Web-Based Scientific Inquiry

    ERIC Educational Resources Information Center

    Wu, Hui-Ling; Weng, Hsiao-Lan; She, Hsiao-Ching

    2016-01-01

    This study examined how background knowledge, scientific reasoning ability, and various scaffolding forms influenced students' science knowledge and scientific inquiry achievements. The students participated in an online scientific inquiry program involving such activities as generating scientific questions and drawing evidence-based conclusions,…

  6. Probabilistic load simulation: Code development status

    NASA Astrophysics Data System (ADS)

    Newell, J. F.; Ho, H.

    1991-05-01

The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are induced in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge-based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.
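    A minimal sketch of the division of labor described above (assumed names and distributions, not the CLS code): the knowledge base supplies load information as probability distributions, and the simulation module draws composite load samples from them.

        # Knowledge base supplies load information; simulation module samples from it.
        import random

        knowledge_base = {   # hypothetical load knowledge: quantity -> (mean, std dev)
            "chamber_pressure_MPa": (20.0, 1.5),
            "turbine_speed_rpm": (35000.0, 800.0),
        }

        def simulate_loads(kb, n_samples=1000, seed=42):
            rng = random.Random(seed)
            return {name: [rng.gauss(mu, sigma) for _ in range(n_samples)]
                    for name, (mu, sigma) in kb.items()}

        spectra = simulate_loads(knowledge_base)
        p = sorted(spectra["chamber_pressure_MPa"])
        print("approx. 99th percentile chamber pressure:",
              round(p[int(0.99 * len(p))], 2))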

  7. Enhancing Learning Outcomes with an Interactive Knowledge-Based Learning Environment Providing Narrative Feedback

    ERIC Educational Resources Information Center

    Stranieri, Andrew; Yearwood, John

    2008-01-01

    This paper describes a narrative-based interactive learning environment which aims to elucidate reasoning using interactive scenarios that may be used in training novices in decision-making. Its design is based on an approach to generating narrative from knowledge that has been modelled in specific decision/reasoning domains. The approach uses a…

  8. Knowledge network model of the energy consumption in discrete manufacturing system

    NASA Astrophysics Data System (ADS)

    Xu, Binzi; Wang, Yan; Ji, Zhicheng

    2017-07-01

Discrete manufacturing systems generate large amounts of data and information because of the development of information technology; hence, a management mechanism is urgently required. In order to incorporate knowledge generated from manufacturing data and production experience, a knowledge network model of the energy consumption in the discrete manufacturing system was put forward based on knowledge network theory and multi-granularity modular ontology technology. This model could provide a standard representation for concepts, terms and their relationships that can be understood by both humans and computers. Besides, the formal description of energy consumption knowledge elements (ECKEs) in the knowledge network was also given. Finally, an application example was used to verify the feasibility of the proposed method.

  9. Content and structure of knowledge base used for virtual control of android arm motion in specified environment

    NASA Astrophysics Data System (ADS)

    Pritykin, F. N.; Nebritov, V. I.

    2018-01-01

The paper presents the configuration of the knowledge base necessary for intelligent control of an android arm mechanism's motion, taking into account different positions of certain forbidden regions. The structure of the knowledge base captures past experience of arm motion synthesis in the space of velocity vectors, with due regard for the known obstacles, and also specifies the knowledge base's intrinsic properties. Knowledge base generation is based on a study of implementations of the arm mechanism's instantaneous states. Computational experiments on virtual control of android arm motion with known forbidden regions using the developed knowledge base are presented. Using the developed knowledge base for virtual control of the arm motion reduces the time needed to compute test assignments. The results of the research can be used in developing control systems for autonomous android robots operating in environments that are known in advance.

  10. Knowledge-based and model-based hybrid methodology for comprehensive waste minimization in electroplating plants

    NASA Astrophysics Data System (ADS)

    Luo, Keqin

    1999-11-01

The electroplating industry, with over 10,000 plating plants nationwide, is one of the major waste generators in industry. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants, which costs the industry tremendously for waste treatment and disposal and hinders the further development of the industry. It has therefore become urgent for the industry to identify the technically most effective and economically most attractive methodologies and technologies to minimize waste while maintaining production competitiveness. This dissertation aims at developing a novel waste minimization (WM) methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, together with an intelligent decision support tool. The WM methodology consists of two parts: a heuristic knowledge-based qualitative WM decision analysis and support methodology, and a fundamental knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules; this becomes the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows a thorough process analysis of bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation, which are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that it is the first time the electroplating industry is able to (i) systematically use available WM strategies, (ii) know quantitatively and accurately what is going on in each tank, and (iii) identify all WM opportunities through process improvement. This work has formed a solid foundation for the further development of powerful technologies for comprehensive WM in the following decade.

  11. The generation of simple compliance boundaries for mobile communication base station antennas using formulae for SAR estimation.

    PubMed

    Thors, B; Hansson, B; Törnevik, C

    2009-07-07

    In this paper, a procedure is proposed for generating simple and practical compliance boundaries for mobile communication base station antennas. The procedure is based on a set of formulae for estimating the specific absorption rate (SAR) in certain directions around a class of common base station antennas. The formulae, given for both whole-body and localized SAR, require as input the frequency, the transmitted power and knowledge of antenna-related parameters such as dimensions, directivity and half-power beamwidths. With knowledge of the SAR in three key directions it is demonstrated how simple and practical compliance boundaries can be generated outside of which the exposure levels do not exceed certain limit values. The conservativeness of the proposed procedure is discussed based on results from numerical radio frequency (RF) exposure simulations with human body phantoms from the recently developed Virtual Family.
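    For orientation only, the sketch below uses the ordinary far-field power-density estimate S = P·G/(4πr²) to show how a compliance distance in a given direction follows from transmitted power, antenna gain, and an exposure limit; the paper's SAR-based formulae and whole-body/localized limits are not reproduced here, and all numbers are illustrative.

        # Compliance distance from the textbook far-field power-density estimate
        # (not the paper's SAR formulae; values are illustrative).
        import math

        def compliance_distance(power_w, gain_linear, limit_w_per_m2):
            """Distance at which the far-field power density drops to the limit."""
            return math.sqrt(power_w * gain_linear / (4.0 * math.pi * limit_w_per_m2))

        gain = 10 ** (18 / 10.0)   # 18 dBi sector antenna, linear gain
        print(round(compliance_distance(20.0, gain, 4.5), 2),
              "m in the main beam direction (20 W, 4.5 W/m^2 reference level)")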

  12. Knowledge environments representing molecular entities for the virtual physiological human.

    PubMed

    Hofmann-Apitius, Martin; Fluck, Juliane; Furlong, Laura; Fornes, Oriol; Kolárik, Corinna; Hanser, Susanne; Boeker, Martin; Schulz, Stefan; Sanz, Ferran; Klinger, Roman; Mevissen, Theo; Gattermayer, Tobias; Oliva, Baldo; Friedrich, Christoph M

    2008-09-13

    In essence, the virtual physiological human (VPH) is a multiscale representation of human physiology spanning from the molecular level via cellular processes and multicellular organization of tissues to complex organ function. The different scales of the VPH deal with different entities, relationships and processes, and in consequence the models used to describe and simulate biological functions vary significantly. Here, we describe methods and strategies to generate knowledge environments representing molecular entities that can be used for modelling the molecular scale of the VPH. Our strategy to generate knowledge environments representing molecular entities is based on the combination of information extraction from scientific text and the integration of information from biomolecular databases. We introduce @neuLink, a first prototype of an automatically generated, disease-specific knowledge environment combining biomolecular, chemical, genetic and medical information. Finally, we provide a perspective for the future implementation and use of knowledge environments representing molecular entities for the VPH.

  13. Exploring the Impacts of Social Networking Sites on Academic Relations in the University

    ERIC Educational Resources Information Center

    Rambe, Patient

    2011-01-01

    Social networking sites (SNS) affordances for persistent interaction, collective generation of knowledge, and formation of peer-based clusters for knowledge sharing render them useful for developing constructivist knowledge environments. However, notwithstanding their academic value, these environments are not necessarily insulated from the…

  14. Automatic two- and three-dimensional mesh generation based on fuzzy knowledge processing

    NASA Astrophysics Data System (ADS)

    Yagawa, G.; Yoshimura, S.; Soneda, N.; Nakao, K.

    1992-09-01

This paper describes the development of a novel automatic FEM mesh generation algorithm based on the fuzzy knowledge processing technique. A number of local nodal patterns are stored in a nodal pattern database of the mesh generation system. These nodal patterns are determined a priori based on certain theories or the past experience of experts in FEM analyses. For example, such human experts can determine certain nodal patterns suitable for stress concentration analyses of cracks, corners, holes and so on. Each nodal pattern possesses a membership function and a procedure of node placement according to this function. In the case of nodal patterns for stress concentration regions, the membership function utilized in the fuzzy knowledge processing has two meanings, i.e. the “closeness” of nodal location to each stress concentration field as well as “nodal density”. This is attributed to the fact that a denser nodal pattern is required near a stress concentration field. All a user has to do in a practical mesh generation process is to choose several local nodal patterns properly and to designate the maximum nodal density of each pattern. After those simple operations by the user, the system places the chosen nodal patterns automatically in an analysis domain and on its boundary, and connects them smoothly by the fuzzy knowledge processing technique. Then triangular or tetrahedral elements are generated by means of the advancing front method. The key issue of the present algorithm is easy control of complex two- or three-dimensional nodal density distributions by means of the fuzzy knowledge processing technique. To demonstrate the fundamental performance of the present algorithm, a prototype system was constructed with an object-oriented language, Smalltalk-80, on a 32-bit microcomputer, Macintosh II. Meshes generated for several two- and three-dimensional domains with cracks, holes and junctions are presented as examples.
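    A minimal sketch of the fuzzy nodal-density idea (assumed functional forms, not the authors' system): each stored nodal pattern carries a membership-like function mapping distance from its feature to a target nodal density, and overlapping patterns are blended by a simple fuzzy aggregation before elements are generated.

        # Membership-like functions: dense nodes near a crack tip, sparse elsewhere.
        import math

        def crack_tip_density(distance, max_density=50.0, decay=5.0):
            return max_density * math.exp(-decay * distance)

        def background_density(_distance, density=2.0):
            return density

        def combined_density(distance):
            # fuzzy aggregation of overlapping nodal patterns (here: simple maximum)
            return max(crack_tip_density(distance), background_density(distance))

        for d in (0.0, 0.2, 0.5, 1.0):
            print(f"distance {d:.1f} -> target nodal density {combined_density(d):.2f}")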

  15. Contrasting burnout, turnover intention, control, value congruence and knowledge sharing between Baby Boomers and Generation X.

    PubMed

    Leiter, Michael P; Jackson, Nicole J; Shaughnessy, Krystelle

    2009-01-01

This paper examines the contrasting role of work values for nurses from two generations: Baby Boomers and Generation X. Differences among nurses regarding core values pertaining to their work have the potential to influence the quality of their work life. These differences may have implications for their vulnerability to job burnout. The analysis is based upon questionnaire surveys of nurses representing Generation X (n = 255) and Baby Boomers (n = 193) that contrasted their responses on job burnout, areas of work life, knowledge transfer and intention to quit. The analysis identified a greater person/organization value mismatch for Generation X nurses than for Baby Boomer nurses. This greater value mismatch was associated with a greater susceptibility to burnout and a stronger intention to quit for Generation X nurses. The article notes the influence of Baby Boomer nurses on the structure of work and the application of new knowledge in health care work settings. Implications for recruitment and retention are discussed with a focus on knowledge transfer activities associated with distinct learning styles. Understanding value differences between generations will help nursing managers to develop more responsive work settings for nurses of all ages.

  16. Realizing Relevance: The Influence of Domain-Specific Information on Generation of New Knowledge through Integration in 4- to 8-Year-Old Children

    ERIC Educational Resources Information Center

    Bauer, Patricia J.; Larkina, Marina

    2017-01-01

    In accumulating knowledge, direct modes of learning are complemented by productive processes, including self-generation based on integration of separate episodes. Effects of the number of potentially relevant episodes on integration were examined in 4- to 8-year-olds (N = 121; racially/ethnically heterogeneous sample, English speakers, from large…

  17. Semantic knowledge fractionations: verbal propositions vs. perceptual input? Evidence from a child with Klinefelter syndrome.

    PubMed

    Robinson, Sally J; Temple, Christine M

    2013-04-01

    This paper addresses the relative independence of different types of lexical- and factually-based semantic knowledge in JM, a 9-year-old boy with Klinefelter syndrome (KS). JM was matched to typically developing (TD) controls on the basis of chronological age. Lexical-semantic knowledge was investigated for common noun (CN) and mathematical vocabulary items (MV). Factually-based semantic knowledge was investigated for general and number facts. For CN items, JM's lexical stores were of a normal size but the volume of correct 'sensory feature' semantic knowledge he generated within verbal item descriptions was significantly reduced. He was also significantly impaired at naming item descriptions and pictures, particularly for fruit and vegetables. There was also weak object decision for fruit and vegetables. In contrast, for MV items, JM's lexical stores were elevated, with no significant difference in the amount and type of correct semantic knowledge generated within verbal item descriptions and normal naming. JM's fact retrieval accuracy was normal for all types of factual knowledge. JM's performance indicated a dissociation between the representation of CN and MV vocabulary items during development. JM's preserved semantic knowledge of facts in the face of impaired semantic knowledge of vocabulary also suggests that factually-based semantic knowledge representation is not dependent on normal lexical-semantic knowledge during development. These findings are discussed in relation to the emergence of distinct semantic knowledge representations during development, due to differing degrees of dependency upon the acquisition and representation of semantic knowledge from verbal propositions and perceptual input.

  18. Formalization of the engineering science discipline - knowledge engineering

    NASA Astrophysics Data System (ADS)

    Peng, Xiao

Knowledge is the most precious ingredient facilitating aerospace engineering research and product development activities. Currently, the most common knowledge retention methods are paper-based documents, such as reports, books and journals. However, those media have innate weaknesses. For example, four generations of flying wing aircraft (Horten, Northrop XB-35/YB-49, Boeing BWB and many others) were mostly developed in isolation. The subsequent engineers were not aware of the previous developments, because these projects were documented in a way that prevented the next generation of engineers from benefiting from the previous lessons learned. In this manner, inefficient knowledge retention methods have become a primary obstacle to knowledge transfer from experienced engineers to the next generation. In addition, the quality of knowledge itself is a vital criterion; thus, an accurate measure of the quality of 'knowledge' is required. Although qualitative knowledge evaluation criteria have been researched in other disciplines, such as the AAA criterion by Ernest Sosa stemming from the field of philosophy, a quantitative knowledge evaluation criterion capable of numerically determining the quality of knowledge for aerospace engineering research and product development activities still needs to be developed. To provide engineers with a high-quality knowledge management tool, the engineering science discipline Knowledge Engineering has been formalized to systematically address knowledge retention issues. This research undertaking formalizes Knowledge Engineering as follows: 1. Categorize knowledge according to its formats and representations for the first time, which serves as the foundation for the subsequent knowledge management function development. 2. Develop an efficiency evaluation criterion for knowledge management by analyzing the characteristics of both knowledge and the parties involved in the knowledge management processes. 3. Propose and develop an innovative Knowledge-Based System (KBS), AVD KBS, forming a systematic approach facilitating knowledge management. 4. Demonstrate the efficiency advantages of AVD KBS over traditional knowledge management methods via selected design case studies. This research formalizes, for the first time, Knowledge Engineering as a distinct discipline by delivering a robust and high-quality knowledge management and process tool, AVD KBS. Formalizing knowledge proves to significantly impact the effectiveness of aerospace knowledge retention and utilization.

  19. Caregiving Antecedents of Secure Base Script Knowledge: A Comparative Analysis of Young Adult Attachment Representations

    ERIC Educational Resources Information Center

    Steele, Ryan D.; Waters, Theodore E. A.; Bost, Kelly K.; Vaughn, Brian E.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn; Roisman, Glenn I.

    2014-01-01

    Based on a subsample (N = 673) of the NICHD Study of Early Child Care and Youth Development (SECCYD) cohort, this article reports data from a follow-up assessment at age 18 years on the antecedents of "secure base script knowledge", as reflected in the ability to generate narratives in which attachment-related difficulties are…

  20. Combining Open-domain and Biomedical Knowledge for Topic Recognition in Consumer Health Questions.

    PubMed

    Mrabet, Yassine; Kilicoglu, Halil; Roberts, Kirk; Demner-Fushman, Dina

    2016-01-01

    Determining the main topics in consumer health questions is a crucial step in their processing as it allows narrowing the search space to a specific semantic context. In this paper we propose a topic recognition approach based on biomedical and open-domain knowledge bases. In the first step of our method, we recognize named entities in consumer health questions using an unsupervised method that relies on a biomedical knowledge base, UMLS, and an open-domain knowledge base, DBpedia. In the next step, we cast topic recognition as a binary classification problem of deciding whether a named entity is the question topic or not. We evaluated our approach on a dataset from the National Library of Medicine (NLM), introduced in this paper, and another from the Genetic and Rare Disease Information Center (GARD). The combination of knowledge bases outperformed the results obtained by individual knowledge bases by up to 16.5% F1 and achieved state-of-the-art performance. Our results demonstrate that combining open-domain knowledge bases with biomedical knowledge bases can lead to a substantial improvement in understanding user-generated health content.

  1. Enrichment assessment of multiple virtual screening strategies for Toll-like receptor 8 agonists based on a maximal unbiased benchmarking data set.

    PubMed

    Pei, Fen; Jin, Hongwei; Zhou, Xin; Xia, Jie; Sun, Lidan; Liu, Zhenming; Zhang, Liangren

    2015-11-01

Toll-like receptor 8 agonists, which activate adaptive immune responses by inducing robust production of T-helper 1-polarizing cytokines, are promising candidates for vaccine adjuvants. As the binding site of toll-like receptor 8 is large and highly flexible, virtual screening by any individual method has inevitable limitations; thus, a comprehensive comparison of different methods may provide insights into seeking an effective strategy for the discovery of novel toll-like receptor 8 agonists. In this study, the performance of knowledge-based pharmacophore, shape-based 3D screening, and combined strategies was assessed against a maximum unbiased benchmarking data set containing 13 actives and 1302 decoys specialized for toll-like receptor 8 agonists. Prior structure-activity relationship knowledge was involved in knowledge-based pharmacophore generation, and a set of antagonists was innovatively used to verify the selectivity of the selected knowledge-based pharmacophore. The benchmarking data set was generated from our recently developed 'mubd-decoymaker' protocol. The enrichment assessment demonstrated considerable performance for our selected three-layer virtual screening strategy: knowledge-based pharmacophore (Phar1) screening, shape-based 3D similarity search (Q4_combo), and then a Gold docking screening. This virtual screening strategy could be further employed to perform large-scale database screening and to discover novel toll-like receptor 8 agonists. © 2015 John Wiley & Sons A/S.

  2. An object-oriented, knowledge-based system for cardiovascular rehabilitation--phase II.

    PubMed Central

    Ryder, R. M.; Inamdar, B.

    1995-01-01

    The Heart Monitor is an object-oriented, knowledge-based system designed to support the clinical activities of cardiovascular (CV) rehabilitation. The original concept was developed as part of graduate research completed in 1992. This paper describes the second generation system which is being implemented in collaboration with a local heart rehabilitation program. The PC UNIX-based system supports an extensive patient database organized by clinical areas. In addition, a knowledge base is employed to monitor patient status. Rule-based automated reasoning is employed to assess risk factors contraindicative to exercise therapy and to monitor administrative and statutory requirements. PMID:8563285

  3. ITMS: Individualized Teaching Material System: Adaptive Integration of Web Pages Distributed in Some Servers.

    ERIC Educational Resources Information Center

    Mitsuhara, Hiroyuki; Kurose, Yoshinobu; Ochi, Youji; Yano, Yoneo

    The authors developed a Web-based Adaptive Educational System (Web-based AES) named ITMS (Individualized Teaching Material System). ITMS adaptively integrates knowledge on the distributed Web pages and generates individualized teaching material that has various contents. ITMS also presumes the learners' knowledge levels from the states of their…

  4. Semantic Data Integration and Knowledge Management to Represent Biological Network Associations.

    PubMed

    Losko, Sascha; Heumann, Klaus

    2017-01-01

    The vast quantities of information generated by academic and industrial research groups are reflected in a rapidly growing body of scientific literature and exponentially expanding resources of formalized data, including experimental data, originating from a multitude of "-omics" platforms, phenotype information, and clinical data. For bioinformatics, the challenge remains to structure this information so that scientists can identify relevant information, to integrate this information as specific "knowledge bases," and to formalize this knowledge across multiple scientific domains to facilitate hypothesis generation and validation. Here we report on progress made in building a generic knowledge management environment capable of representing and mining both explicit and implicit knowledge and, thus, generating new knowledge. Risk management in drug discovery and clinical research is used as a typical example to illustrate this approach. In this chapter we introduce techniques and concepts (such as ontologies, semantic objects, typed relationships, contexts, graphs, and information layers) that are used to represent complex biomedical networks. The BioXM™ Knowledge Management Environment is used as an example to demonstrate how a domain such as oncology is represented and how this representation is utilized for research.

  5. Practice to Evidence: Using Evaluability Assessment to Generate Practice-Based Evidence in Rural South Georgia

    ERIC Educational Resources Information Center

    Honeycutt, Sally; Hermstad, April; Carvalho, Michelle L.; Arriola, Kimberly R. Jacob; Ballard, Denise; Escoffery, Cam; Kegler, Michelle C.

    2017-01-01

    Evidence from formal evaluation of real-world practice can address gaps in the public health knowledge base and provide information about feasible, relevant strategies for varied settings. Interest in evaluability assessment (EA) as an approach for generating practice-based evidence has grown. EA has been central to several structured assessment…

  6. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    PubMed

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the knowledge of the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. For generating a robust and reliable SWM, a large amount of training data is required. However, training data collected by physically recording surgery operations is often limited and data collection is time-consuming and labor-intensive, severely influencing knowledge scalability of the surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling with a low cost and labor efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for the robotic cholecystectomy surgery. The generated workflow was evaluated by 4 web-retrieved videos and 4 operation-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. Satisfied performances in mining web videos and learning surgery-related knowledge show that the webSWM method is promising in scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. A knowledge-driven approach to biomedical document conceptualization.

    PubMed

    Zheng, Hai-Tao; Borchert, Charles; Jiang, Yong

    2010-06-01

Biomedical document conceptualization is the process of clustering biomedical documents based on ontology-represented domain knowledge. The result of this process is the representation of the biomedical documents by a set of key concepts and their relationships. Most clustering methods cluster documents based on invariant domain knowledge. The objective of this work is to develop an effective method to cluster biomedical documents based on various user-specified ontologies, so that users can exploit the concept structures of documents more effectively. We develop a flexible framework to allow users to specify the knowledge bases, in the form of ontologies. Based on the user-specified ontologies, we develop a key concept induction algorithm, which uses latent semantic analysis to identify key concepts and cluster documents. A corpus-related ontology generation algorithm is developed to generate the concept structures of documents. Based on two biomedical datasets, we evaluate the proposed method and five other clustering algorithms. The clustering results of the proposed method outperform those of the five other algorithms in terms of key concept identification. With respect to the first biomedical dataset, our method has F-measure values of 0.7294 and 0.5294 based on the MeSH ontology and gene ontology (GO), respectively. With respect to the second biomedical dataset, our method has F-measure values of 0.6751 and 0.6746 based on the MeSH ontology and GO, respectively. Both results outperform the five other algorithms in terms of F-measure. Based on the MeSH ontology and GO, the generated corpus-related ontologies show informative conceptual structures. The proposed method enables users to specify the domain knowledge to exploit the conceptual structures of biomedical document collections. In addition, the proposed method is able to extract the key concepts and cluster the documents with relatively high precision. Copyright 2010 Elsevier B.V. All rights reserved.
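    A generic sketch of the latent-semantic-analysis and clustering step only (the ontology-driven concept mapping described above is omitted; the library, parameters, and toy documents are assumptions):

        # LSA (truncated SVD over TF-IDF) followed by k-means clustering of documents.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.decomposition import TruncatedSVD
        from sklearn.cluster import KMeans

        docs = [
            "p53 mutation and tumor suppression in lung cancer",
            "BRCA1 variants increase breast cancer risk",
            "insulin signaling pathway in type 2 diabetes",
            "glucose metabolism and insulin resistance",
        ]

        tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
        lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(lsa)
        print(labels)   # documents grouped by latent concept, e.g. [0 0 1 1]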

  8. Model Based Analysis and Test Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  9. Automated generation of patient-tailored electronic care pathways by translating computer-interpretable guidelines into hierarchical task networks.

    PubMed

    González-Ferrer, Arturo; ten Teije, Annette; Fdez-Olivares, Juan; Milian, Krystyna

    2013-02-01

This paper describes a methodology which enables computer-aided support for the planning, visualization and execution of personalized patient treatments in a specific healthcare process, taking into account complex temporal constraints and the allocation of institutional resources. To this end, a translation from a time-annotated computer-interpretable guideline (CIG) model of a clinical protocol into a temporal hierarchical task network (HTN) planning domain is presented. The proposed method uses a knowledge-driven reasoning process to translate knowledge previously described in a CIG into a corresponding HTN planning and scheduling domain, taking advantage of HTNs' known ability to (i) dynamically cope with temporal and resource constraints, and (ii) automatically generate customized plans. The proposed method, focusing on the representation of temporal knowledge and based on the identification of workflow and temporal patterns in a CIG, makes it possible to automatically generate time-annotated and resource-based care pathways tailored to the needs of any possible patient profile. The proposed translation is illustrated through a case study based on a 70-page clinical protocol to manage Hodgkin's disease, developed by the Spanish Society of Pediatric Oncology. We show that an HTN planning domain can be generated from the corresponding specification of the protocol in the Asbru language, providing a running example of this translation. Furthermore, the correctness of the translation is checked, as is the handling of ten different types of temporal patterns represented in the protocol. By interpreting the automatically generated domain with a state-of-the-art HTN planner, a time-annotated care pathway is automatically obtained, customized for the patient's and institutional needs. The generated care pathway can then be used by clinicians to plan and manage the patient's long-term care. The described methodology makes it possible to automatically generate patient-tailored care pathways, leveraging an incremental knowledge-driven engineering process that starts from the expert knowledge of medical professionals. The presented approach makes the most of the strengths inherent in both CIG languages and HTN planning and scheduling techniques: for the former, knowledge acquisition and representation of the original clinical protocol, and for the latter, knowledge reasoning capabilities and an ability to deal with complex temporal and resource constraints. Moreover, the proposed approach provides immediate access to technologies such as business process management (BPM) tools, which are increasingly being used to support healthcare processes. Copyright © 2012 Elsevier B.V. All rights reserved.
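    A data-structure sketch only (the names, fields, and example drug schedule are assumptions rather than the Asbru or HTN syntax used by the authors): a compound guideline step becomes an HTN method that decomposes into ordered subtasks carrying simple temporal annotations.

        # Illustrative HTN-style representation of a time-annotated guideline step.
        from dataclasses import dataclass, field

        @dataclass
        class Task:
            name: str
            duration_days: int = 0              # primitive action duration

        @dataclass
        class Method:
            task: str                           # compound task this method refines
            subtasks: list = field(default_factory=list)
            min_gap_days: int = 0               # temporal constraint between subtasks

        chemo_cycle = Method(
            task="administer_chemotherapy_cycle",
            subtasks=[Task("give_agent_A", 1), Task("give_agent_B", 5),
                      Task("evaluate_response", 1)],
            min_gap_days=2,
        )
        print([t.name for t in chemo_cycle.subtasks])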

  10. The Role of Teachers in Facilitating Mathematics Learning Opportunities in Agriculture, Food, and Natural Resources

    ERIC Educational Resources Information Center

    McKim, Aaron J.; Velez, Jonathan J.; Everett, Michael W.; Sorensen, Tyson J.

    2017-01-01

    Strengthening knowledge and skills in mathematics is critically important to preparing the next generation of innovators, problem solvers, and interdisciplinary thinkers. School-based agricultural education offers a valuable context to co-develop mathematics knowledge and skills alongside knowledge and skills in agriculture, food, and natural…

  11. Effects of Model-Based and Memory-Based Processing on Speed and Accuracy of Grammar String Generation

    ERIC Educational Resources Information Center

    Domangue, Thomas J.; Mathews, Robert C.; Sun, Ron; Roussel, Lewis G.; Guidry, Claire E.

    2004-01-01

    Learners are able to use 2 different types of knowledge to perform a skill. One type is a conscious mental model, and the other is based on memories of instances. The authors conducted 3 experiments that manipulated training conditions designed to affect the availability of 1 or both types of knowledge about an artificial grammar. Participants…

  12. TARGET: Rapid Capture of Process Knowledge

    NASA Technical Reports Server (NTRS)

    Ortiz, C. J.; Ly, H. V.; Saito, T.; Loftin, R. B.

    1993-01-01

    TARGET (Task Analysis/Rule Generation Tool) represents a new breed of tool that blends graphical process flow modeling capabilities with the function of a top-down reporting facility. Since NASA personnel frequently perform tasks that are primarily procedural in nature, TARGET models mission or task procedures and generates hierarchical reports as part of the process capture and analysis effort. Historically, capturing knowledge has proven to be one of the greatest barriers to the development of intelligent systems. Current practice generally requires lengthy interactions between the expert whose knowledge is to be captured and the knowledge engineer whose responsibility is to acquire and represent the expert's knowledge in a useful form. Although much research has been devoted to the development of methodologies and computer software to aid in the capture and representation of some types of knowledge, procedural knowledge has received relatively little attention. In essence, TARGET is one of the first tools of its kind, commercial or institutional, that is designed to support this type of knowledge capture undertaking. This paper describes the design and development of TARGET for the acquisition and representation of procedural knowledge. The strategies employed by TARGET to support use by knowledge engineers, subject matter experts, programmers and managers are discussed, including the method by which the tool employs its graphical user interface to generate a task hierarchy report. Next, the approach used to generate production rules for incorporation in, and development of, a CLIPS-based expert system is elaborated. TARGET also permits experts to visually describe procedural tasks, providing a common medium for knowledge refinement by the expert community and the knowledge engineer and making knowledge consensus possible. The paper briefly touches on the verification and validation issues facing the CLIPS rule generation aspects of TARGET. A description of efforts to support TARGET's interoperability on PCs, Macintoshes and UNIX workstations concludes the paper.

  13. Generating and Executing Complex Natural Language Queries across Linked Data.

    PubMed

    Hamon, Thierry; Mougin, Fleur; Grabar, Natalia

    2015-01-01

    With the recent and intensive research in the biomedical area, the knowledge accumulated is disseminated through various knowledge bases. Links between these knowledge bases are needed in order to use them jointly. Linked Data, the SPARQL language, and natural language question-answering interfaces provide interesting solutions for querying such knowledge bases. We propose a method for translating natural language questions into SPARQL queries. We use Natural Language Processing tools, semantic resources, and the RDF triple descriptions. The method was designed on 50 questions over 3 biomedical knowledge bases, and evaluated on 27 questions. It achieves 0.78 F-measure on the test set. The method for translating natural language questions into SPARQL queries is implemented as a Perl module, available at http://search.cpan.org/~thhamon/RDF-NLP-SPARQLQuery.
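
    The general idea of mapping a question onto a SPARQL query and executing it over RDF data can be pictured with the small Python sketch below. It is not the authors' Perl implementation: it uses rdflib, an invented drug/target toy graph, and a single hard-coded question shape purely for illustration.

      # Sketch only: map a natural-language question to SPARQL and run it over
      # a tiny, invented RDF graph with rdflib.
      from rdflib import Graph, Namespace

      EX = Namespace("http://example.org/")
      TURTLE = """
      @prefix ex: <http://example.org/> .
      ex:aspirin   ex:targets ex:COX1 .
      ex:aspirin   ex:targets ex:COX2 .
      ex:ibuprofen ex:targets ex:COX2 .
      """

      def question_to_sparql(question: str) -> str:
          # Toy "translation": a real system maps recognized entities and relations
          # onto graph patterns; here only one question shape is handled.
          if question.startswith("Which proteins are targeted by"):
              drug = question.rstrip("?").split()[-1].lower()
              return f"SELECT ?p WHERE {{ ex:{drug} ex:targets ?p }}"
          raise ValueError("unsupported question")

      g = Graph()
      g.parse(data=TURTLE, format="turtle")
      query = question_to_sparql("Which proteins are targeted by aspirin?")
      for row in g.query(query, initNs={"ex": EX}):
          print(row.p)   # http://example.org/COX1, http://example.org/COX2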

  14. Evolution of a research prototype expert system for endemic populations of mountain pine beetle in lodgepole pine forests

    Treesearch

    Dale L. Bartos; Kent B. Downing

    1989-01-01

    A knowledge acquisition program was written to aid in obtaining knowledge from the experts concerning endemic populations of mountain pine beetle in lodgepole pine forest. An application expert system is then automatically generated by the knowledge acquisition program that contains the codified base of expert knowledge. Data can then be entered into the expert system...

  15. A knowledge-based, concept-oriented view generation system for clinical data.

    PubMed

    Zeng, Q; Cimino, J J

    2001-04-01

    Information overload is a well-known problem for clinicians who must review large amounts of data in patient records. Concept-oriented views, which organize patient data around clinical concepts such as diagnostic strategies and therapeutic goals, may offer a solution to the problem of information overload. However, although concept-oriented views are desirable, they are difficult to create and maintain. We have developed a general-purpose, knowledge-based approach to the generation of concept-oriented views and have developed a system to test our approach. The system creates concept-oriented views through automated identification of relevant patient data. The knowledge in the system is represented by both a semantic network and rules. The key relevant data identification function is accomplished by a rule-based traversal of the semantic network. This paper focuses on the design and implementation of the system; an evaluation of the system is reported separately.
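
    The core mechanism described above, a rule-constrained traversal of a semantic network that gathers the patient data relevant to a clinical concept, can be sketched as follows. The network, relation names and patient values are invented and far simpler than the published system; this is only meant to illustrate the traversal idea.

      # Illustrative sketch: build a concept-oriented view by a rule-based
      # traversal of a small, invented semantic network.
      from collections import deque

      # semantic network: concept -> list of (relation, concept)
      NETWORK = {
          "diabetes_management": [("assessed-by", "HbA1c"), ("treated-with", "insulin")],
          "insulin":             [("monitored-by", "glucose")],
          "HbA1c": [], "glucose": [],
      }
      # traversal rule: which relations may be followed when building the view
      FOLLOW = {"assessed-by", "treated-with", "monitored-by"}

      PATIENT_DATA = {"HbA1c": 8.1, "glucose": 152, "creatinine": 1.0}

      def concept_view(root):
          """Collect patient data reachable from the root concept via allowed relations."""
          seen, queue, view = {root}, deque([root]), {}
          while queue:
              concept = queue.popleft()
              if concept in PATIENT_DATA:
                  view[concept] = PATIENT_DATA[concept]
              for relation, target in NETWORK.get(concept, []):
                  if relation in FOLLOW and target not in seen:
                      seen.add(target)
                      queue.append(target)
          return view

      print(concept_view("diabetes_management"))   # {'HbA1c': 8.1, 'glucose': 152}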

  16. Progress and challenges in the application of artificial intelligence to computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1987-01-01

    An approach to analyzing CFD knowledge-based systems is proposed which is based, in part, on the concept of knowledge-level analysis. Consideration is given to the expert cooling fan design system, the PAN AIR knowledge system, grid adaptation, and expert zonal grid generation. These AI/CFD systems demonstrate that current AI technology can be successfully applied to well-formulated problems that are solved by means of classification or selection of preenumerated solutions.

  17. Web-Based Learning as a Tool of Knowledge Continuity

    ERIC Educational Resources Information Center

    Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita; Rambely, Azmin Sham

    2013-01-01

    The outbreak of information in a borderless world has prompted lecturers to move forward together with the technological innovation and erudition of knowledge in performing his/her responsibility to educate the young generations to be able to stand above the crowd at the global scene. Teaching and Learning through web-based learning platform is a…

  18. Evidence-Based Practice in Kinesiology: The Theory to Practice Gap Revisited

    ERIC Educational Resources Information Center

    Knudson, Duane

    2005-01-01

    As evidence-based practice sweeps the applied health professions, it is a good time to evaluate the generation of knowledge in Kinesiology and its transmission to professionals and the public. Knowledge transmission has been debated in the past from the perspectives of the theory-to-practice gap and the discipline versus profession emphasis.…

  19. Adults' Autonomic and Subjective Emotional Responses to Infant Vocalizations: The Role of Secure Base Script Knowledge

    ERIC Educational Resources Information Center

    Groh, Ashley M.; Roisman, Glenn I.

    2009-01-01

    This article examines the extent to which secure base script knowledge--as reflected in an adult's ability to generate narratives in which attachment-related threats are recognized, competent help is provided, and the problem is resolved--is associated with adults' autonomic and subjective emotional responses to infant distress and nondistress…

  20. Artificial Intelligence In Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1991-01-01

    Paper compares four first-generation artificial-intelligence (Al) software systems for computational fluid dynamics. Includes: Expert Cooling Fan Design System (EXFAN), PAN AIR Knowledge System (PAKS), grid-adaptation program MITOSIS, and Expert Zonal Grid Generation (EZGrid). Focuses on knowledge-based ("expert") software systems. Analyzes intended tasks, kinds of knowledge possessed, magnitude of effort required to codify knowledge, how quickly constructed, performances, and return on investment. On basis of comparison, concludes Al most successful when applied to well-formulated problems solved by classifying or selecting preenumerated solutions. In contrast, application of Al to poorly understood or poorly formulated problems generally results in long development time and large investment of effort, with no guarantee of success.

  1. Bedside, classroom and bench: collaborative strategies to generate evidence-based knowledge for nursing practice.

    PubMed

    Weaver, Charlotte A; Warren, Judith J; Delaney, Connie

    2005-12-01

    The rise of evidence-based practice (EBP) as a standard for care delivery is rapidly emerging as a global phenomenon that transcends political, economic and geographic boundaries. Evidence-based nursing (EBN) addresses the growing body of nursing knowledge supported by different levels of evidence for best practices in nursing care. Across all health care, including nursing, we face the challenge of how to most effectively close the gap between what is known and what is practiced. There is extensive literature on the barriers and difficulties of translating research findings into practical application. While the literature refers to this challenge as the "Bench to Bedside" lag, this paper presents three collaborative strategies that aim to minimize this gap. The Bedside strategy proposes to use the data generated from care delivery and captured in the massive data repositories of electronic health record (EHR) systems as empirical evidence that can be analysed to discover and then inform best practice. In the Classroom strategy, we describe how evidence-based nursing knowledge is taught in a baccalaureate nursing program. Finally, the Bench strategy describes applied informatics in converting paper-based EBN protocols into the workflow of clinical information systems: protocols are translated into reference and executable knowledge with the goal of placing the latest scientific knowledge at the fingertips of front-line clinicians. In all three strategies, information technology (IT) is presented as the underlying tool that makes this rapid translation of nursing knowledge into practice and education feasible.

  2. Participatory approach to the development of a knowledge base for problem-solving in diabetes self-management.

    PubMed

    Cole-Lewis, Heather J; Smaldone, Arlene M; Davidson, Patricia R; Kukafka, Rita; Tobin, Jonathan N; Cassells, Andrea; Mynatt, Elizabeth D; Hripcsak, George; Mamykina, Lena

    2016-01-01

    To develop an expandable knowledge base of reusable knowledge related to self-management of diabetes that can be used as a foundation for patient-centric decision support tools. The structure and components of the knowledge base were created in participatory design with academic diabetes educators using knowledge acquisition methods. The knowledge base was validated using a scenario-based approach with practicing diabetes educators and individuals with diabetes recruited from Community Health Centers (CHCs) serving economically disadvantaged communities and ethnic minorities in New York. The knowledge base includes eight glycemic control problems, over 150 behaviors known to contribute to these problems coupled with contextual explanations, and over 200 specific action-oriented self-management goals for correcting problematic behaviors, with corresponding motivational messages. The validation of the knowledge base suggested a high level of completeness and accuracy, and identified improvements in cultural appropriateness; these were addressed in new iterations of the knowledge base. The resulting knowledge base is theoretically grounded, incorporates practical and evidence-based knowledge used by diabetes educators in practice settings, and allows for personally meaningful choices by individuals with diabetes. The participatory design approach helped researchers to capture the implicit knowledge of practicing diabetes educators and make it explicit and reusable. The knowledge base proposed here is an important step towards the development of a new generation of patient-centric decision support tools for facilitating chronic disease self-management. While this knowledge base specifically targets diabetes, its overall structure and composition can be generalized to other chronic conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  3. Participatory approach to the development of a knowledge base for problem-solving in diabetes self-management

    PubMed Central

    Cole-Lewis, Heather J.; Smaldone, Arlene M.; Davidson, Patricia R.; Kukafka, Rita; Tobin, Jonathan N.; Cassells, Andrea; Mynatt, Elizabeth D.; Hripcsak, George; Mamykina, Lena

    2015-01-01

    Objective To develop an expandable knowledge base of reusable knowledge related to self-management of diabetes that can be used as a foundation for patient-centric decision support tools. Materials and methods The structure and components of the knowledge base were created in participatory design with academic diabetes educators using knowledge acquisition methods. The knowledge base was validated using a scenario-based approach with practicing diabetes educators and individuals with diabetes recruited from Community Health Centers (CHCs) serving economically disadvantaged communities and ethnic minorities in New York. Results The knowledge base includes eight glycemic control problems, over 150 behaviors known to contribute to these problems coupled with contextual explanations, and over 200 specific action-oriented self-management goals for correcting problematic behaviors, with corresponding motivational messages. The validation of the knowledge base suggested a high level of completeness and accuracy, and identified improvements in cultural appropriateness; these were addressed in new iterations of the knowledge base. Discussion The resulting knowledge base is theoretically grounded, incorporates practical and evidence-based knowledge used by diabetes educators in practice settings, and allows for personally meaningful choices by individuals with diabetes. The participatory design approach helped researchers to capture the implicit knowledge of practicing diabetes educators and make it explicit and reusable. Conclusion The knowledge base proposed here is an important step towards the development of a new generation of patient-centric decision support tools for facilitating chronic disease self-management. While this knowledge base specifically targets diabetes, its overall structure and composition can be generalized to other chronic conditions. PMID:26547253

  4. Drug knowledge bases and their applications in biomedical informatics research.

    PubMed

    Zhu, Yongjun; Elemento, Olivier; Pathak, Jyotishman; Wang, Fei

    2018-01-03

    Recent advances in biomedical research have generated a large volume of drug-related data. To effectively handle this flood of data, many initiatives have been taken to help researchers make good use of it. As a result of these initiatives, many drug knowledge bases have been constructed. They range from simple ones with specific focuses to comprehensive ones that contain information on almost every aspect of a drug. These curated drug knowledge bases have made significant contributions to the development of efficient and effective health information technologies for better health-care service delivery. Understanding and comparing existing drug knowledge bases and how they are applied in various biomedical studies will help us recognize the state of the art and design better knowledge bases in the future. In addition, researchers can gain insights on novel applications of the drug knowledge bases through a review of successful use cases. In this study, we provide a review of existing popular drug knowledge bases and their applications in drug-related studies. We discuss challenges in constructing and using drug knowledge bases as well as future research directions toward a better ecosystem of drug knowledge bases. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  5. Perspectives on knowledge in engineering design

    NASA Technical Reports Server (NTRS)

    Rasdorf, W. J.

    1985-01-01

    Various perspectives are given on the knowledge currently used in engineering design, specifically dealing with knowledge-based expert systems (KBES). Constructing an expert system often reveals inconsistencies in domain knowledge while formalizing it. The types of domain knowledge (facts, procedures, judgments, and control) differ from the classes of that knowledge (creative, innovative, and routine). The feasible tasks for expert systems can be determined based on these types and classes of knowledge. Interpretive tasks require reasoning about a task in light of the knowledge available, whereas generative tasks create potential solutions to be tested against constraints. Only after classifying the domain by type and level can the engineer select a knowledge-engineering tool for the domain being considered. The critical features to be weighed after classification are knowledge representation techniques, control strategies, interface requirements, compatibility with traditional systems, and economic considerations.

  6. The Effects of Embedded Generative Learning Strategies and Collaboration on Knowledge Acquisition in a Cognitive Flexibility-Based Computer Learning Environment

    DTIC Science & Technology

    1998-08-07

    cognitive flexibility theory and generative learning theory, which focus primarily on the individual student’s cognitive development, collaborative... develop "Handling Transfusion Hazards," a computer program based upon cognitive flexibility theory principles. The Program: Handling Transfusion Hazards...computer program was developed according to cognitive flexibility theory principles. A generative version was then developed by embedding

  7. Generation of Test Questions from RDF Files Using PYTHON and SPARQL

    NASA Astrophysics Data System (ADS)

    Omarbekova, Assel; Sharipbay, Altynbek; Barlybaev, Alibek

    2017-02-01

    This article describes the development of a system for the automatic generation of test questions from a knowledge base. The work is applied in nature and provides detailed examples of ontology development and of the implementation of SPARQL queries over RDF documents. It also describes the implementation of the question-generation program in the Python programming language, including the libraries needed for working with RDF files.
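
    A minimal Python sketch in the spirit of the described system is shown below: facts are pulled from an RDF file with a SPARQL query (via rdflib) and turned into multiple-choice questions. The tiny ontology and the question template are invented for illustration and are not the authors' implementation.

      # Sketch: generate simple test questions from RDF triples with rdflib.
      import random
      from rdflib import Graph

      TURTLE = """
      @prefix ex: <http://example.org/> .
      ex:Python   ex:paradigm "object-oriented" .
      ex:Haskell  ex:paradigm "functional" .
      ex:Prolog   ex:paradigm "logic" .
      """

      g = Graph()
      g.parse(data=TURTLE, format="turtle")

      rows = list(g.query("SELECT ?lang ?par WHERE { ?lang <http://example.org/paradigm> ?par }"))
      facts = [(str(lang).rsplit("/", 1)[-1], str(par)) for lang, par in rows]

      # One multiple-choice question per fact: correct answer plus shuffled distractors.
      for lang, correct in facts:
          options = [correct] + [p for _, p in facts if p != correct]
          random.shuffle(options)
          print(f"Q: What is the main paradigm of {lang}?")
          for i, opt in enumerate(options, 1):
              print(f"   {i}. {opt}")
          print(f"   (answer: {correct})\n")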

  8. Introduction: The Growing Importance of Traditional Forest-Related Knowledge

    Treesearch

    Ronald L. Trosper; John A. Parrotta

    2012-01-01

    The knowledge, innovations, and practices of local and indigenous communities have supported their forest-based livelihoods for countless generations. The role of traditional knowledge—and the bio-cultural diversity it sustains—is increasingly recognized as important by decision makers, conservation and development organizations, and the scientific community. However...

  9. Dilemmatic Spaces: High-Stakes Testing and the Possibilities of Collaborative Knowledge Work to Generate Learning Innovations

    ERIC Educational Resources Information Center

    Singh, Parlo; Märtsin, Mariann; Glasswell, Kathryn

    2015-01-01

    This paper examines collaborative researcher-practitioner knowledge work around assessment data in culturally diverse, low socio-economic school communities in Queensland, Australia. Specifically, the paper draws on interview accounts about the work of a cohort of school-based researchers who acted as mediators bridging knowledge flows between a…

  10. The use and generation of illustrative examples in computer-based instructional systems

    NASA Technical Reports Server (NTRS)

    Selig, William John; Johannes, James D.

    1987-01-01

    A method is proposed whereby the underlying domain knowledge is represented such that illustrative examples may be generated on demand. This method has the advantage that the generated example can follow changes in the domain in addition to allowing automatic customization of the example to the individual.

  11. SU-E-T-129: Are Knowledge-Based Planning Dose Estimates Valid for Distensible Organs?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, R; Heron, D; Huq, M

    2015-06-15

    Purpose: Knowledge-based planning programs have become available to assist treatment planning in radiation therapy. Such programs can be used to generate estimated DVHs and planning constraints for organs at risk (OARs), based upon a model generated from previous plans. These estimates are based upon the planning CT scan. However, for distensible OARs like the bladder and rectum, daily variations in volume may make the dose estimates invalid. The purpose of this study is to determine whether knowledge-based DVH dose estimates may be valid for distensible OARs. Methods: The Varian RapidPlan™ knowledge-based planning module was used to generate OAR dose estimates and planning objectives for 10 prostate cases previously planned with VMAT, and final plans were calculated for each. Five weekly setup CBCT scans of each patient were then downloaded and contoured (assuming no change in size and shape of the target volume), and rectum and bladder DVHs were recalculated for each scan. Dose volumes were then compared at 75, 60, and 40 Gy for the bladder and rectum between the planning scan and the CBCTs. Results: Plan doses and estimates matched well at all dose points. Volumes of the rectum and bladder varied widely between planning CT and the CBCTs, ranging from 0.46 to 2.42 for the bladder and 0.71 to 2.18 for the rectum, causing relative dose volumes to vary between planning CT and CBCT, but absolute dose volumes were more consistent. The overall ratio of CBCT/plan dose volumes was 1.02 ± 0.27 for rectum and 0.98 ± 0.20 for bladder in these patients. Conclusion: Knowledge-based planning dose volume estimates for distensible OARs are still valid, in absolute volume terms, between treatment planning scans and CBCTs taken during daily treatment. Further analysis of the data is being undertaken to determine how differences depend upon rectum and bladder filling state. This work has been supported by Varian Medical Systems.

  12. Integrating knowledge and control into hypermedia-based training environments: Experiments with HyperCLIPS

    NASA Technical Reports Server (NTRS)

    Hill, Randall W., Jr.

    1990-01-01

    The issues of knowledge representation and control in hypermedia-based training environments are discussed. The main objective is to integrate the flexible presentation capability of hypermedia with a knowledge-based approach to lesson discourse management. The instructional goals and their associated concepts are represented in a knowledge representation structure called a 'concept network'. Its functional usages are many: it is used to control the navigation through a presentation space, generate tests for student evaluation, and model the student. This architecture was implemented in HyperCLIPS, a hybrid system that creates a bridge between HyperCard, a popular hypertext-like system used for building user interfaces to data bases and other applications, and CLIPS, a highly portable government-owned expert system shell.

  13. Development of an expert system prototype for determining software functional requirements for command management activities at NASA Goddard

    NASA Technical Reports Server (NTRS)

    Liebowitz, J.

    1986-01-01

    The development of an expert system prototype for software functional requirement determination for NASA Goddard's Command Management System, as part of its process of transforming general requests into specific near-earth satellite commands, is described. The present knowledge base was formulated through interactions with domain experts, and was then linked to the existing Knowledge Engineering Systems (KES) expert system application generator. Steps in the knowledge-base development include problem-oriented attribute hierarchy development, knowledge management approach determination, and knowledge base encoding. The KES Parser and Inspector, in addition to backcasting and analogical mapping, were used to validate the expert system-derived requirements for one of the major functions of a spacecraft, the Solar Maximum Mission. Knowledge refinement, evaluation, and implementation procedures of the expert system were then accomplished.

  14. Semantics-based plausible reasoning to extend the knowledge coverage of medical knowledge bases for improved clinical decision support.

    PubMed

    Mohammadhassanzadeh, Hossein; Van Woensel, William; Abidi, Samina Raza; Abidi, Syed Sibte Raza

    2017-01-01

    Capturing complete medical knowledge is challenging, often due to incomplete patient Electronic Health Records (EHR), but also because of valuable, tacit medical knowledge hidden away in physicians' experiences. To extend the coverage of incomplete medical knowledge-based systems beyond their deductive closure, and thus enhance their decision-support capabilities, we argue that innovative, multi-strategy reasoning approaches should be applied. In particular, plausible reasoning mechanisms apply patterns from human thought processes, such as generalization, similarity and interpolation, based on attributional, hierarchical, and relational knowledge. Plausible reasoning mechanisms include inductive reasoning, which generalizes the commonalities among the data to induce new rules, and analogical reasoning, which is guided by data similarities to infer new facts. By further leveraging rich, biomedical Semantic Web ontologies to represent medical knowledge, both known and tentative, we increase the accuracy and expressivity of plausible reasoning, and cope with issues such as data heterogeneity, inconsistency and interoperability. In this paper, we present a Semantic Web-based, multi-strategy reasoning approach, which integrates deductive and plausible reasoning and exploits Semantic Web technology to solve complex clinical decision support queries. We evaluated our system using a real-world medical dataset of patients with hepatitis, from which we randomly removed different percentages of data (5%, 10%, 15%, and 20%) to reflect scenarios with increasing amounts of incomplete medical knowledge. To increase the reliability of the results, we generated 5 independent datasets for each percentage of missing values, which resulted in 20 experimental datasets (in addition to the original dataset). The results show that plausibly inferred knowledge extends the coverage of the knowledge base by, on average, 2%, 7%, 12%, and 16% for datasets with, respectively, 5%, 10%, 15%, and 20% of missing values. This expansion in the KB coverage allowed solving complex disease diagnostic queries that were previously unresolvable, without losing the correctness of the answers. However, compared to deductive reasoning, data-intensive plausible reasoning mechanisms yield a significant performance overhead. We observed that plausible reasoning approaches, by generating tentative inferences and leveraging the domain knowledge of experts, allow us to extend the coverage of medical knowledge bases, resulting in improved clinical decision support. Second, by leveraging OWL ontological knowledge, we are able to increase the expressivity and accuracy of plausible reasoning methods. Third, our approach is applicable to clinical decision support systems for a range of chronic diseases.
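
    One of the plausible-reasoning patterns mentioned above, inductive generalization, can be illustrated with a very small Python sketch: if most members of a class share a fact, the fact is tentatively asserted for members lacking it. The triples, threshold and labels are invented; the actual system operates over OWL ontologies and combines several reasoning strategies.

      # Sketch of inductive generalization over simple, invented triples.
      from collections import defaultdict

      TRIPLES = [
          ("patient1", "has_diagnosis", "hepatitis"), ("patient1", "has_symptom", "fatigue"),
          ("patient2", "has_diagnosis", "hepatitis"), ("patient2", "has_symptom", "fatigue"),
          ("patient3", "has_diagnosis", "hepatitis"), ("patient3", "has_symptom", "fatigue"),
          ("patient4", "has_diagnosis", "hepatitis"),   # symptom record missing
      ]

      def plausible_inferences(triples, min_support=0.6):
          """If most subjects sharing a class also share a (predicate, object) pair,
          tentatively assert that pair for the members that lack it."""
          by_class, facts = defaultdict(set), defaultdict(set)
          for s, p, o in triples:
              if p == "has_diagnosis":
                  by_class[o].add(s)
              else:
                  facts[s].add((p, o))
          inferred = []
          for cls, members in by_class.items():
              counts = defaultdict(int)
              for m in members:
                  for fact in facts[m]:
                      counts[fact] += 1
              for fact, n in counts.items():
                  if n / len(members) >= min_support:
                      for m in members:
                          if fact not in facts[m]:
                              inferred.append((m, *fact, "plausible"))
          return inferred

      print(plausible_inferences(TRIPLES))
      # [('patient4', 'has_symptom', 'fatigue', 'plausible')]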

  15. Planning bioinformatics workflows using an expert system.

    PubMed

    Chen, Xiaoling; Chang, Jeffrey T

    2017-04-15

    Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system composed of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. https://github.com/jefftc/changlab. jeffrey.t.chang@uth.tmc.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
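
    The backwards-chaining idea, starting from a goal data type and recursively finding rules that produce it, can be sketched in a few lines of Python. The rules, data types and tool names below are invented and far simpler than BETSY's knowledge base; the sketch only illustrates how a workflow falls out of backward chaining.

      # Minimal backward-chaining sketch: derive an ordered workflow for a goal.
      RULES = [
          # (produced data type, required data types, tool)
          ("aligned_reads",     ["fastq_reads", "reference_genome"],        "align"),
          ("expression_matrix", ["aligned_reads"],                          "count_features"),
          ("de_genes",          ["expression_matrix", "sample_groups"],     "differential_expression"),
      ]
      AVAILABLE = {"fastq_reads", "reference_genome", "sample_groups"}

      def plan(goal, available, steps=None):
          """Backward-chain from the goal: find a rule producing it, then recursively
          satisfy that rule's inputs; returns an ordered list of (tool, output) steps."""
          if steps is None:
              steps = []
          if goal in available:
              return steps
          for produced, required, tool in RULES:
              if produced == goal:
                  for req in required:
                      plan(req, available, steps)
                  if (tool, produced) not in steps:
                      steps.append((tool, produced))
                  return steps
          raise ValueError(f"no rule produces {goal}")

      print(plan("de_genes", AVAILABLE))
      # [('align', 'aligned_reads'), ('count_features', 'expression_matrix'),
      #  ('differential_expression', 'de_genes')]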

  16. Planning bioinformatics workflows using an expert system

    PubMed Central

    Chen, Xiaoling; Chang, Jeffrey T.

    2017-01-01

    Abstract Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system composed of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: jeffrey.t.chang@uth.tmc.edu PMID:28052928

  17. Guide for applied public health workforce research: an evidence-based approach to workforce development.

    PubMed

    Thacker, Stephen B

    2009-11-01

    Essential to achievement of the public health mission is a knowledgeable, competent, and prepared workforce; yet, there is little application of science and technical knowledge to ensuring the effectiveness of that workforce, be it governmental or private. In this article, I review the evidence for effective workforce development and argue for an increased emphasis on an evidence-based approach to ensuring an effective workforce by encouraging the generation of the evidence base that is required. To achieve this, I propose the appointment of an independent Task Force on Public Health Workforce Practice to oversee the development of a Guide for Public Health Workforce Research and Practice (Workforce Guide), a process that will generate and bring together the workforce evidence base for use by public health practitioners.

  18. The Role of Domain Knowledge in Creative Generation

    ERIC Educational Resources Information Center

    Ward, Thomas B.

    2008-01-01

    Previous studies have shown that a predominant tendency in creative generation tasks is to base new ideas on well-known, specific instances of previous ideas (e.g., basing ideas for imaginary aliens on dogs, cats or bears). However, a substantial minority of individuals has been shown to adopt more abstract approaches to the task and to develop…

  19. Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    Karpatne, A.; Kumar, V.

    2017-12-01

    Generative adversarial networks (GANs), which have been highly successful in a number of applications involving large volumes of labeled and unlabeled data such as computer vision, offer huge potential for modeling the dynamics of physical processes that have traditionally been studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g. ground-water flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology where the number of data samples is small (relative to Internet-scale applications such as image recognition, where machine learning methods have found great success), and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.
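
    As a rough illustration of the first guidance strategy mentioned above (feeding a physics-based model's output into the generator), the following PyTorch sketch wires a toy physics model into a conditional GAN. Everything here, the stand-in "physics model", the network sizes, and the synthetic data, is invented; it shows only the wiring, not the authors' actual method.

      # Highly simplified physics-guided GAN wiring (illustrative only).
      import torch
      import torch.nn as nn

      def physics_model(forcing):
          """Stand-in for a process-based simulation: crude runoff estimate."""
          return 0.6 * forcing + 0.1

      G = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))                  # generator
      D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())    # discriminator
      opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
      opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
      bce = nn.BCELoss()

      forcing = torch.rand(256, 1)                                            # e.g. rainfall
      observed = 0.55 * forcing + 0.12 + 0.02 * torch.randn(256, 1)           # synthetic "observations"

      for _ in range(200):
          sim = physics_model(forcing)                  # physics output guides the generator
          fake = G(torch.cat([torch.randn(256, 1), sim], dim=1))
          # Discriminator sees (forcing, response) pairs and must tell real from generated.
          d_real = D(torch.cat([forcing, observed], dim=1))
          d_fake = D(torch.cat([forcing, fake.detach()], dim=1))
          loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
          opt_d.zero_grad(); loss_d.backward(); opt_d.step()
          # Generator tries to make its responses indistinguishable from observations.
          d_fake = D(torch.cat([forcing, fake], dim=1))
          loss_g = bce(d_fake, torch.ones_like(d_fake))
          opt_g.zero_grad(); loss_g.backward(); opt_g.step()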

  20. Sphinx: merging knowledge-based and ab initio approaches to improve protein loop prediction

    PubMed Central

    Marks, Claire; Nowak, Jaroslaw; Klostermann, Stefan; Georges, Guy; Dunbar, James; Shi, Jiye; Kelm, Sebastian

    2017-01-01

    Abstract Motivation: Loops are often vital for protein function; however, their irregular structures make them difficult to model accurately. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations, and ab initio, where conformations are generated computationally. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. Here, we present a novel method, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. Results: We show that Sphinx is able to generate high-accuracy predictions and decoy sets enriched with near-native loop conformations, performing better than the ab initio algorithm on which it is based. In addition, it is able to provide predictions for every target, unlike some knowledge-based methods. Sphinx can be used successfully for the difficult problem of antibody H3 prediction, outperforming RosettaAntibody, one of the leading H3-specific ab initio methods, both in accuracy and speed. Availability and Implementation: Sphinx is available at http://opig.stats.ox.ac.uk/webapps/sphinx. Contact: deane@stats.ox.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28453681

  1. Sphinx: merging knowledge-based and ab initio approaches to improve protein loop prediction.

    PubMed

    Marks, Claire; Nowak, Jaroslaw; Klostermann, Stefan; Georges, Guy; Dunbar, James; Shi, Jiye; Kelm, Sebastian; Deane, Charlotte M

    2017-05-01

    Loops are often vital for protein function; however, their irregular structures make them difficult to model accurately. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations, and ab initio, where conformations are generated computationally. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. Here, we present a novel method, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. We show that Sphinx is able to generate high-accuracy predictions and decoy sets enriched with near-native loop conformations, performing better than the ab initio algorithm on which it is based. In addition, it is able to provide predictions for every target, unlike some knowledge-based methods. Sphinx can be used successfully for the difficult problem of antibody H3 prediction, outperforming RosettaAntibody, one of the leading H3-specific ab initio methods, both in accuracy and speed. Sphinx is available at http://opig.stats.ox.ac.uk/webapps/sphinx. deane@stats.ox.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.

  2. Integrating Problem-Based Learning with ICT for Developing Trainee Teachers' Content Knowledge and Teaching Skill

    ERIC Educational Resources Information Center

    Karami, Mehdi; Karami, Zohreh; Attaran, Mohammad

    2013-01-01

    Professional teachers can guarantee the progress and the promotion of society because fostering the development of next generation is up to them and depends on their professional knowledge which has two kinds of sources: content knowledge and teaching skill. The aim of the present research was studying the effect of integrating problem-based…

  3. Ontology-based configuration of problem-solving methods and generation of knowledge-acquisition tools: application of PROTEGE-II to protocol-based decision support.

    PubMed

    Tu, S W; Eriksson, H; Gennari, J H; Shahar, Y; Musen, M A

    1995-06-01

    PROTEGE-II is a suite of tools and a methodology for building knowledge-based systems and domain-specific knowledge-acquisition tools. In this paper, we show how PROTEGE-II can be applied to the task of providing protocol-based decision support in the domain of treating HIV-infected patients. To apply PROTEGE-II, (1) we construct a decomposable problem-solving method called episodic skeletal-plan refinement, (2) we build an application ontology that consists of the terms and relations in the domain, and of method-specific distinctions not already captured in the domain terms, and (3) we specify mapping relations that link terms from the application ontology to the domain-independent terms used in the problem-solving method. From the application ontology, we automatically generate a domain-specific knowledge-acquisition tool that is custom-tailored for the application. The knowledge-acquisition tool is used for the creation and maintenance of domain knowledge used by the problem-solving method. The general goal of the PROTEGE-II approach is to produce systems and components that are reusable and easily maintained. This is the rationale for constructing ontologies and problem-solving methods that can be composed from a set of smaller-grained methods and mechanisms. This is also why we tightly couple the knowledge-acquisition tools to the application ontology that specifies the domain terms used in the problem-solving systems. Although our evaluation is still preliminary, for the application task of providing protocol-based decision support, we show that these goals of reusability and easy maintenance can be achieved. We discuss design decisions and the tradeoffs that have to be made in the development of the system.

  4. Knowledge-based IMRT treatment planning for prostate cancer.

    PubMed

    Chanyavanich, Vorakarn; Das, Shiva K; Lee, William R; Lo, Joseph Y

    2011-05-01

    To demonstrate the feasibility of using a knowledge base of prior treatment plans to generate new prostate intensity modulated radiation therapy (IMRT) plans. Each new case would be matched against others in the knowledge base. Once the best match is identified, that clinically approved plan is used to generate the new plan. A database of 100 prostate IMRT treatment plans was assembled into an information-theoretic system. An algorithm based on mutual information was implemented to identify similar patient cases by matching 2D beam's eye view projections of contours. Ten randomly selected query cases were each matched with the most similar case from the database of prior clinically approved plans. Treatment parameters from the matched case were used to develop new treatment plans. The differences in the dose-volume histograms between the new and the original treatment plans were analyzed. On average, the new knowledge-based plan achieves planning target volume coverage very comparable to that of the original plan, to within 2% as evaluated for D98, D95, and D1. Similarly, the dose to the rectum and dose to the bladder are also comparable to the original plan. For the rectum, the mean and standard deviation of the dose percentage differences for D20, D30, and D50 are 1.8% +/- 8.5%, -2.5% +/- 13.9%, and -13.9% +/- 23.6%, respectively. For the bladder, the mean and standard deviation of the dose percentage differences for D20, D30, and D50 are -5.9% +/- 10.8%, -12.2% +/- 14.6%, and -24.9% +/- 21.2%, respectively. A negative percentage difference indicates that the new plan has greater dose sparing as compared to the original plan. The authors demonstrate a knowledge-based approach of using prior clinically approved treatment plans to generate clinically acceptable treatment plans of high quality. This semiautomated approach has the potential to improve the efficiency of the treatment planning process while ensuring that high quality plans are developed.
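
    The matching step, scoring similarity between 2D projections with mutual information and retrieving the best prior case, can be sketched with a few lines of numpy. The masks, grid size and database below are synthetic; the published system matches beam's eye view projections of actual contours across a full plan database.

      # numpy sketch: mutual-information matching of binary projection masks.
      import numpy as np

      def mutual_information(a, b, bins=2):
          """MI between two equally shaped arrays (here: 0/1 projection masks)."""
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0
          return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

      rng = np.random.default_rng(0)
      query = (rng.random((64, 64)) > 0.6).astype(float)       # new patient's BEV mask
      database = {f"case{i}": (rng.random((64, 64)) > 0.6).astype(float) for i in range(5)}
      database["case3"] = np.clip(query + (rng.random((64, 64)) > 0.95), 0, 1)  # near-duplicate

      best = max(database, key=lambda k: mutual_information(query, database[k]))
      print(best)   # expected: case3, the most similar prior case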

  5. From science to action: Principles for undertaking environmental research that enables knowledge exchange and evidence-based decision-making.

    PubMed

    Cvitanovic, C; McDonald, J; Hobday, A J

    2016-12-01

    Effective conservation requires knowledge exchange among scientists and decision-makers to enable learning and support evidence-based decision-making. Efforts to improve knowledge exchange have been hindered by a paucity of empirically-grounded guidance to help scientists and practitioners design and implement research programs that actively facilitate knowledge exchange. To address this, we evaluated the Ningaloo Research Program (NRP), which was designed to generate new scientific knowledge to support evidence-based decisions about the management of the Ningaloo Marine Park in north-western Australia. Specifically, we evaluated (1) outcomes of the NRP, including the extent to which new knowledge informed management decisions; (2) the barriers that prevented knowledge exchange among scientists and managers; (3) the key requirements for improving knowledge exchange processes in the future; and (4) the core capacities that are required to support knowledge exchange processes. While the NRP generated expansive and multidisciplinary science outputs directly relevant to the management of the Ningaloo Marine Park, decision-makers are largely unaware of this knowledge and little has been integrated into decision-making processes. A range of barriers prevented efficient and effective knowledge exchange among scientists and decision-makers, including cultural differences among the groups, institutional barriers within decision-making agencies, scientific outputs that were not translated for decision-makers, and poor alignment between research design and actual knowledge needs. We identify a set of principles to be implemented routinely as part of any applied research program, including: (i) stakeholder mapping prior to the commencement of research programs to identify all stakeholders, (ii) research questions co-developed with stakeholders, (iii) implementation of participatory research approaches, (iv) use of a knowledge broker, and (v) tailored knowledge management systems. Finally, we articulate the individual, institutional and financial capacities that must be developed to underpin successful knowledge exchange strategies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  6. The KAPTUR environment: An operations concept

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This report presents a high-level specification and operations concept for KAPTUR, a development environment based on knowledge acquisition for preservation of tradeoffs and underlying rationales. KAPTUR is intended to do what its name implies: to capture knowledge that is required or generated during the development process, but that is often lost because it is contextual, i.e., it does not appear directly in the end-products of development. Such knowledge includes issues that were raised during development, alternatives that were considered, and the reasons for choosing one alternative over others. Contextual information is usually only maintained as a memory in a developer's mind. As time passes, the memories become more vague and individuals become unavailable, and eventually the knowledge is lost. KAPTUR seeks to mitigate this process of attrition by recording and organizing contextual knowledge as it is generated.

  7. Exploring relation types for literature-based discovery.

    PubMed

    Preiss, Judita; Stevenson, Mark; Gaizauskas, Robert

    2015-09-01

    Literature-based discovery (LBD) aims to identify "hidden knowledge" in the medical literature by: (1) analyzing documents to identify pairs of explicitly related concepts (terms), then (2) hypothesizing novel relations between pairs of unrelated concepts that are implicitly related via a shared concept to which both are explicitly related. Many LBD approaches use simple techniques to identify semantically weak relations between concepts, for example, document co-occurrence. These generate huge numbers of hypotheses, difficult for humans to assess. More complex techniques rely on linguistic analysis, for example, shallow parsing, to identify semantically stronger relations. Such approaches generate fewer hypotheses, but may miss hidden knowledge. The authors investigate this trade-off in detail, comparing techniques for identifying related concepts to discover which are most suitable for LBD. A generic LBD system that can utilize a range of relation types was developed. Experiments were carried out comparing a number of techniques for identifying relations. Two approaches were used for evaluation: replication of existing discoveries and the "time slicing" approach. Previous LBD discoveries could be replicated using relations based either on document co-occurrence or linguistic analysis. Using relations based on linguistic analysis generated many fewer hypotheses, but a significantly greater proportion of them were candidates for hidden knowledge. The use of linguistic analysis-based relations improves the accuracy of LBD without overly damaging coverage. LBD systems often generate huge numbers of hypotheses, which are infeasible to manually review. Improving their accuracy has the potential to make these systems significantly more usable. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association.
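
    The co-occurrence-based discovery step described above follows the classic ABC pattern: concepts A and C never appear together, but both co-occur with some linking concept B, so an A-C relation is hypothesized. The Python sketch below illustrates this with a handful of invented "documents" echoing the well-known fish oil / Raynaud's syndrome example; it is not the authors' system.

      # Sketch of the ABC co-occurrence model for literature-based discovery.
      from itertools import combinations
      from collections import defaultdict

      DOCUMENTS = [
          {"fish_oil", "blood_viscosity"},
          {"blood_viscosity", "raynaud_syndrome"},
          {"fish_oil", "platelet_aggregation"},
          {"platelet_aggregation", "raynaud_syndrome"},
          {"magnesium", "headache"},
      ]

      cooccur = defaultdict(set)
      for doc in DOCUMENTS:
          for a, b in combinations(sorted(doc), 2):
              cooccur[a].add(b)
              cooccur[b].add(a)

      def hidden_links(term):
          """Concepts never seen with `term` but sharing at least one linking concept."""
          direct = cooccur[term]
          hypotheses = defaultdict(set)
          for b in direct:
              for c in cooccur[b]:
                  if c != term and c not in direct:
                      hypotheses[c].add(b)
          return dict(hypotheses)

      print(hidden_links("fish_oil"))
      # {'raynaud_syndrome': {'blood_viscosity', 'platelet_aggregation'}}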

  8. An automated knowledge-based textual summarization system for longitudinal, multivariate clinical data.

    PubMed

    Goldstein, Ayelet; Shahar, Yuval

    2016-06-01

    Design and implement an intelligent free-text summarization system: The system's input includes large numbers of longitudinal, multivariate, numeric and symbolic clinical raw data, collected over varying periods of time, and in different complex contexts, and a suitable medical knowledge base. The system then automatically generates a textual summary of the data. We aim to prove the feasibility of implementing such a system, and to demonstrate its potential benefits for clinicians and for enhancement of quality of care. We have designed a new, domain-independent, knowledge-based system, the CliniText system, for automated summarization in free text of longitudinal medical records of any duration, in any context. The system is composed of six components: (1) A temporal abstraction module generates all possible abstractions from the patient's raw data using a temporal-abstraction knowledge base; (2) The abductive reasoning module infers abstractions or events that were not explicitly included in the database; (3) The pruning module filters out raw or abstract data based on predefined heuristics; (4) The document structuring module organizes the remaining raw or abstract data, according to the desired format; (5) The microplanning module groups the raw or abstract data and creates referring expressions; (6) The surface realization module generates the text, and applies the grammar rules of the chosen language. We have performed an initial technical evaluation of the system in the cardiac intensive-care and diabetes domains. We also summarize the results of a more detailed evaluation study that we performed in the intensive-care domain, which assessed the completeness, correctness, and overall quality of the system's generated text, and its potential benefits to clinical decision making. We assessed these measures for 31 letters originally composed by clinicians, and for the same letters when generated by the CliniText system. We have successfully implemented all of the components of the CliniText system in software. We have also been able to create a comprehensive temporal-abstraction knowledge base to support its functionality, mostly in the intensive-care domain. The initial technical evaluation of the system in the cardiac intensive-care and diabetes domains has shown great promise, proving the feasibility of constructing and operating such systems. The detailed results of the evaluation in the intensive-care domain are beyond the scope of the current paper, and we refer the reader to a more detailed source. In all of the letters composed by clinicians, at least two important items per letter were missed that the CliniText system included. The clinicians' letters received a significantly better grade in three out of four measured quality parameters, as judged by an expert; however, the variance in quality was much higher in the clinicians' letters. In addition, three clinicians answered questions based on the discharge letter 40% faster, and answered four out of the five questions equally well or significantly better, when using the CliniText-generated letters than when using the clinician-composed letters. Constructing a working system for automated summarization in free text of large numbers of varying periods of multivariate longitudinal clinical data is feasible. So is the construction of a large knowledge base, designed to support such a system, in a complex clinical domain, such as the intensive-care domain.
The integration of the quality and functionality results suggests that the optimal discharge letter should exploit both human and machine, possibly by creating a machine-generated draft that will be polished by a human clinician. Copyright © 2016 Elsevier Inc. All rights reserved.
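
    The overall pipeline shape, temporal abstraction of raw values, pruning of uninteresting intervals, and surface realization into text, can be pictured with the toy Python sketch below. The thresholds, time series and phrasing are invented and much simpler than the knowledge-based modules described above.

      # Toy abstraction -> pruning -> realization pipeline (illustrative only).
      RAW = [("glucose", 0, 95), ("glucose", 6, 180), ("glucose", 12, 210), ("glucose", 18, 100)]
      THRESHOLDS = {"glucose": (70, 140)}   # (low, high)

      def abstract(raw):
          """Temporal abstraction: map each numeric sample to a qualitative state."""
          out = []
          for param, hour, value in raw:
              low, high = THRESHOLDS[param]
              state = "high" if value > high else "low" if value < low else "normal"
              out.append((param, hour, state))
          return out

      def prune(states):
          """Keep only clinically 'interesting' (non-normal) intervals, merged by state."""
          merged = []
          for param, hour, state in states:
              if state == "normal":
                  continue
              if merged and merged[-1][0] == (param, state):
                  merged[-1] = ((param, state), merged[-1][1], hour)
              else:
                  merged.append(((param, state), hour, hour))
          return merged

      def realize(intervals):
          """Surface realization: turn intervals into simple English sentences."""
          return [f"{p.capitalize()} was {s} from hour {start} to hour {end}."
                  for (p, s), start, end in intervals]

      print("\n".join(realize(prune(abstract(RAW)))))
      # Glucose was high from hour 6 to hour 12.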

  9. Predicate Oriented Pattern Analysis for Biomedical Knowledge Discovery

    PubMed Central

    Shen, Feichen; Liu, Hongfang; Sohn, Sunghwan; Larson, David W.; Lee, Yugyung

    2017-01-01

    In the current biomedical data movement, numerous efforts have been made to convert and normalize large amounts of traditional structured and unstructured data (e.g., EHRs, reports) into semi-structured data (e.g., RDF, OWL). With the increasing volume of semi-structured data coming into the biomedical community, data integration and knowledge discovery from heterogeneous domains have become important research problems. At the application level, detection of related concepts among medical ontologies is an important goal of life science research. It is even more crucial to figure out how different concepts are related within a single ontology or across multiple ontologies by analysing predicates in different knowledge bases. However, the world today is one of information explosion, and it is extremely difficult for biomedical researchers to find existing or potential predicates for linking cross-domain concepts without support from schema pattern analysis. Therefore, there is a need for a mechanism that performs predicate-oriented pattern analysis to partition heterogeneous ontologies into smaller, more closely related topics, and that generates queries to discover cross-domain knowledge from each topic. In this paper, we present such a model: it analyses predicates based on the closeness of their relationships and generates a similarity matrix. Based on this similarity matrix, we apply an innovative unsupervised learning algorithm to partition large data sets into smaller and more closely related topics and generate meaningful queries to fully discover knowledge over a set of interlinked data sources. We have implemented a prototype system named BmQGen and evaluated the proposed model with a colorectal surgical cohort from the Mayo Clinic. PMID:28983419
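
    A rough sketch of the general approach, building a predicate-predicate similarity matrix from shared subjects and then clustering predicates into topics, is given below. The triples are invented and k-means is used only as a stand-in for the paper's own unsupervised algorithm.

      # Sketch: predicate similarity matrix + clustering into topics.
      import numpy as np
      from sklearn.cluster import KMeans

      TRIPLES = [
          ("pt1", "hasDiagnosis", "crc"), ("pt1", "hasProcedure", "colectomy"),
          ("pt2", "hasDiagnosis", "crc"), ("pt2", "hasProcedure", "colectomy"),
          ("drugA", "hasTarget", "EGFR"), ("drugA", "hasMechanism", "inhibitor"),
          ("drugB", "hasTarget", "VEGF"), ("drugB", "hasMechanism", "antibody"),
      ]

      predicates = sorted({p for _, p, _ in TRIPLES})
      subjects = sorted({s for s, _, _ in TRIPLES})
      # Incidence matrix: which predicate is used with which subject.
      M = np.zeros((len(predicates), len(subjects)))
      for s, p, _ in TRIPLES:
          M[predicates.index(p), subjects.index(s)] = 1

      # Cosine similarity between predicates based on shared subjects.
      norms = np.linalg.norm(M, axis=1, keepdims=True)
      similarity = (M @ M.T) / (norms @ norms.T)

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(similarity)
      for pred, topic in zip(predicates, labels):
          print(topic, pred)
      # Expected grouping: {hasDiagnosis, hasProcedure} vs {hasTarget, hasMechanism}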

  10. Habitat classification modeling with incomplete data: Pushing the habitat envelope

    USGS Publications Warehouse

    Zarnetske, P.L.; Edwards, T.C.; Moisen, Gretchen G.

    2007-01-01

    Habitat classification models (HCMs) are invaluable tools for species conservation, land-use planning, reserve design, and metapopulation assessments, particularly at broad spatial scales. However, species occurrence data are often lacking and typically limited to presence points at broad scales. This lack of absence data precludes the use of many statistical techniques for HCMs. One option is to generate pseudo-absence points so that the many available statistical modeling tools can be used. Traditional techniques generate pseudo-absence points at random across broadly defined species ranges, often failing to include biological knowledge concerning the species-habitat relationship. We incorporated biological knowledge of the species-habitat relationship into pseudo-absence points by creating habitat envelopes that constrain the region from which points were randomly selected. We define a habitat envelope as an ecological representation of a species, or species feature's (e.g., nest) observed distribution (i.e., realized niche) based on a single attribute, or the spatial intersection of multiple attributes. We created HCMs for Northern Goshawk (Accipiter gentilis atricapillus) nest habitat during the breeding season across Utah forests with extant nest presence points and ecologically based pseudo-absence points using logistic regression. Predictor variables were derived from 30-m USDA Landfire and 250-m Forest Inventory and Analysis (FIA) map products. These habitat-envelope-based models were then compared to null envelope models which use traditional practices for generating pseudo-absences. Models were assessed for fit and predictive capability using metrics such as kappa, threshold-independent receiver operating characteristic (ROC) plots, adjusted deviance (D²adj), and cross-validation, and were also assessed for ecological relevance. For all cases, habitat envelope-based models outperformed null envelope models and were more ecologically relevant, suggesting that incorporating biological knowledge into pseudo-absence point generation is a powerful tool for species habitat assessments. Furthermore, given some a priori knowledge of the species-habitat relationship, ecologically based pseudo-absence points can be applied to any species, ecosystem, data resolution, and spatial extent. © 2007 by the Ecological Society of America.
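
    The envelope-constrained pseudo-absence idea can be sketched with synthetic data: background points are drawn only from cells satisfying a coarse habitat envelope rule, and a logistic-regression HCM is then fit. The landscape, the envelope threshold and the single predictor below are invented assumptions; the actual study used Landfire and FIA layers and real nest locations.

      # Sketch: envelope-constrained pseudo-absence generation + logistic regression.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(42)
      canopy = rng.uniform(0, 100, size=(200, 200))          # synthetic canopy-cover raster (%)

      # Presence cells: the synthetic species occupies dense canopy.
      pres_idx = np.argwhere(canopy > 70)
      presence = pres_idx[rng.choice(len(pres_idx), 150, replace=False)]

      # Habitat envelope from prior knowledge: only moderately forested cells
      # (canopy >= 40%) are plausible, so pseudo-absences are drawn from that
      # region rather than from the whole landscape.
      env_idx = np.argwhere(canopy >= 40)
      pseudo_abs = env_idx[rng.choice(len(env_idx), 150, replace=False)]

      X = canopy[tuple(np.vstack([presence, pseudo_abs]).T)].reshape(-1, 1)
      y = np.r_[np.ones(len(presence)), np.zeros(len(pseudo_abs))]

      model = LogisticRegression().fit(X, y)
      print("P(presence) at 85% canopy:", model.predict_proba([[85.0]])[0, 1])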

  11. The COPD Knowledge Base: enabling data analysis and computational simulation in translational COPD research.

    PubMed

    Cano, Isaac; Tényi, Ákos; Schueller, Christine; Wolff, Martin; Huertas Migueláñez, M Mercedes; Gomez-Cabrero, David; Antczak, Philipp; Roca, Josep; Cascante, Marta; Falciani, Francesco; Maier, Dieter

    2014-11-28

    Previously we generated a chronic obstructive pulmonary disease (COPD) specific knowledge base (http://www.copdknowledgebase.eu) from clinical and experimental data, text-mining results and public databases. This knowledge base allowed the retrieval of specific molecular networks together with integrated clinical and experimental data. The COPDKB has now been extended to integrate over 40 public data sources on functional interaction (e.g. signal transduction, transcriptional regulation, protein-protein interaction, gene-disease association). In addition we integrated COPD-specific expression and co-morbidity networks connecting over 6 000 genes/proteins with physiological parameters and disease states. Three mathematical models describing different aspects of systemic effects of COPD were connected to clinical and experimental data. We have completely redesigned the technical architecture of the user interface and now provide html and web browser-based access and form-based searches. A network search enables the use of interconnecting information and the generation of disease-specific sub-networks from general knowledge. Integration with the Synergy-COPD Simulation Environment enables multi-scale integrated simulation of individual computational models while integration with a Clinical Decision Support System allows delivery into clinical practice. The COPD Knowledge Base is the only publicly available knowledge resource dedicated to COPD and combining genetic information with molecular, physiological and clinical data as well as mathematical modelling. Its integrated analysis functions provide overviews about clinical trends and connections while its semantically mapped content enables complex analysis approaches. We plan to further extend the COPDKB by offering it as a repository to publish and semantically integrate data from relevant clinical trials. The COPDKB is freely available after registration at http://www.copdknowledgebase.eu.

  12. Extracting genetic alteration information for personalized cancer therapy from ClinicalTrials.gov

    PubMed Central

    Xu, Jun; Lee, Hee-Jin; Zeng, Jia; Wu, Yonghui; Zhang, Yaoyun; Huang, Liang-Chin; Johnson, Amber; Holla, Vijaykumar; Bailey, Ann M; Cohen, Trevor; Meric-Bernstam, Funda; Bernstam, Elmer V

    2016-01-01

    Objective: Clinical trials investigating drugs that target specific genetic alterations in tumors are important for promoting personalized cancer therapy. The goal of this project is to create a knowledge base of cancer treatment trials with annotations about genetic alterations from ClinicalTrials.gov. Methods: We developed a semi-automatic framework that combines advanced text-processing techniques with manual review to curate genetic alteration information in cancer trials. The framework consists of a document classification system to identify cancer treatment trials from ClinicalTrials.gov and an information extraction system to extract gene and alteration pairs from the Title and Eligibility Criteria sections of clinical trials. By applying the framework to trials at ClinicalTrials.gov, we created a knowledge base of cancer treatment trials with genetic alteration annotations. We then evaluated each component of the framework against manually reviewed sets of clinical trials and generated descriptive statistics of the knowledge base. Results and Discussion: The automated cancer treatment trial identification system achieved a high precision of 0.9944. Together with the manual review process, it identified 20 193 cancer treatment trials from ClinicalTrials.gov. The automated gene-alteration extraction system achieved a precision of 0.8300 and a recall of 0.6803. After validation by manual review, we generated a knowledge base of 2024 cancer trials that are labeled with specific genetic alteration information. Analysis of the knowledge base revealed the trend of increased use of targeted therapy for cancer, as well as top frequent gene-alteration pairs of interest. We expect this knowledge base to be a valuable resource for physicians and patients who are seeking information about personalized cancer therapy. PMID:27013523
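
    As a toy illustration of the gene-alteration extraction step, the sketch below pulls gene and alteration mentions from hypothetical eligibility-criteria sentences with crude regular expressions; the patterns and example text are assumptions for illustration and fall far short of the paper's classification-plus-review framework.

```python
import re

# Hypothetical eligibility-criteria snippets; the real framework classifies
# whole trial records and uses far richer text processing plus manual review.
criteria = [
    "Patients with EGFR L858R mutation or EGFR exon 19 deletion.",
    "Documented BRAF V600E mutation in melanoma.",
    "HER2 amplification confirmed by FISH.",
]

GENE = r"[A-Z][A-Z0-9]{1,5}"          # crude gene-symbol pattern
ALT = r"(?:[A-Z]\d+[A-Z]|amplification|deletion|mutation|exon \d+ deletion)"

pairs = set()
for text in criteria:
    for m in re.finditer(rf"({GENE})\s+({ALT})", text):
        pairs.add((m.group(1), m.group(2)))

for gene, alt in sorted(pairs):
    print(gene, "->", alt)
```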

  13. Extracting genetic alteration information for personalized cancer therapy from ClinicalTrials.gov.

    PubMed

    Xu, Jun; Lee, Hee-Jin; Zeng, Jia; Wu, Yonghui; Zhang, Yaoyun; Huang, Liang-Chin; Johnson, Amber; Holla, Vijaykumar; Bailey, Ann M; Cohen, Trevor; Meric-Bernstam, Funda; Bernstam, Elmer V; Xu, Hua

    2016-07-01

    Clinical trials investigating drugs that target specific genetic alterations in tumors are important for promoting personalized cancer therapy. The goal of this project is to create a knowledge base of cancer treatment trials with annotations about genetic alterations from ClinicalTrials.gov. We developed a semi-automatic framework that combines advanced text-processing techniques with manual review to curate genetic alteration information in cancer trials. The framework consists of a document classification system to identify cancer treatment trials from ClinicalTrials.gov and an information extraction system to extract gene and alteration pairs from the Title and Eligibility Criteria sections of clinical trials. By applying the framework to trials at ClinicalTrials.gov, we created a knowledge base of cancer treatment trials with genetic alteration annotations. We then evaluated each component of the framework against manually reviewed sets of clinical trials and generated descriptive statistics of the knowledge base. The automated cancer treatment trial identification system achieved a high precision of 0.9944. Together with the manual review process, it identified 20 193 cancer treatment trials from ClinicalTrials.gov. The automated gene-alteration extraction system achieved a precision of 0.8300 and a recall of 0.6803. After validation by manual review, we generated a knowledge base of 2024 cancer trials that are labeled with specific genetic alteration information. Analysis of the knowledge base revealed the trend of increased use of targeted therapy for cancer, as well as top frequent gene-alteration pairs of interest. We expect this knowledge base to be a valuable resource for physicians and patients who are seeking information about personalized cancer therapy. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. Need to Knowledge (NtK) Model: an evidence-based framework for generating technological innovations with socio-economic impacts.

    PubMed

    Flagg, Jennifer L; Lane, Joseph P; Lockett, Michelle M

    2013-02-15

    Traditional government policies suggest that upstream investment in scientific research is necessary and sufficient to generate technological innovations. The expected downstream beneficial socio-economic impacts are presumed to occur through non-government market mechanisms. However, there is little quantitative evidence for such a direct and formulaic relationship between public investment at the input end and marketplace benefits at the impact end. Instead, the literature demonstrates that the technological innovation process involves a complex interaction between multiple sectors, methods, and stakeholders. The authors theorize that accomplishing the full process of technological innovation in a deliberate and systematic manner requires an operational-level model encompassing three underlying methods, each designed to generate knowledge outputs in different states: scientific research generates conceptual discoveries; engineering development generates prototype inventions; and industrial production generates commercial innovations. Given the critical roles of engineering and business, the entire innovation process should continuously consider the practical requirements and constraints of the commercial marketplace. The Need to Knowledge (NtK) Model encompasses the activities required to successfully generate innovations, along with associated strategies for effectively communicating knowledge outputs in all three states to the various stakeholders involved. It is intentionally grounded in evidence drawn from academic analysis to facilitate objective and quantitative scrutiny, and industry best practices to enable practical application. The Need to Knowledge (NtK) Model offers a practical, market-oriented approach that avoids the gaps, constraints and inefficiencies inherent in undirected activities and disconnected sectors. The NtK Model is a means to realizing increased returns on public investments in those science and technology programs expressly intended to generate beneficial socio-economic impacts.

  15. Need to Knowledge (NtK) Model: an evidence-based framework for generating technological innovations with socio-economic impacts

    PubMed Central

    2013-01-01

    Background Traditional government policies suggest that upstream investment in scientific research is necessary and sufficient to generate technological innovations. The expected downstream beneficial socio-economic impacts are presumed to occur through non-government market mechanisms. However, there is little quantitative evidence for such a direct and formulaic relationship between public investment at the input end and marketplace benefits at the impact end. Instead, the literature demonstrates that the technological innovation process involves a complex interaction between multiple sectors, methods, and stakeholders. Discussion The authors theorize that accomplishing the full process of technological innovation in a deliberate and systematic manner requires an operational-level model encompassing three underlying methods, each designed to generate knowledge outputs in different states: scientific research generates conceptual discoveries; engineering development generates prototype inventions; and industrial production generates commercial innovations. Given the critical roles of engineering and business, the entire innovation process should continuously consider the practical requirements and constraints of the commercial marketplace. The Need to Knowledge (NtK) Model encompasses the activities required to successfully generate innovations, along with associated strategies for effectively communicating knowledge outputs in all three states to the various stakeholders involved. It is intentionally grounded in evidence drawn from academic analysis to facilitate objective and quantitative scrutiny, and industry best practices to enable practical application. Summary The Need to Knowledge (NtK) Model offers a practical, market-oriented approach that avoids the gaps, constraints and inefficiencies inherent in undirected activities and disconnected sectors. The NtK Model is a means to realizing increased returns on public investments in those science and technology programs expressly intended to generate beneficial socio-economic impacts. PMID:23414369

  16. LIMSI @ 2014 Clinical Decision Support Track

    DTIC Science & Technology

    2014-11-01

    MeSH and BoW runs) was based on the automatic generation of disease hypotheses for which we used data from OrphaNet [4] and the Disease Symptom Knowledge...with the MeSH terms of the top 5 disease hypotheses generated for the case reports. Compared to the other participants we achieved low scores...clinical question types. Query expansion (for both MeSH and BoW runs) was based on the automatic generation of disease hypotheses for which we used data

  17. The sixth generation robot in space

    NASA Technical Reports Server (NTRS)

    Butcher, A.; Das, A.; Reddy, Y. V.; Singh, H.

    1990-01-01

    The knowledge based simulator developed in the artificial intelligence laboratory has become a working test bed for experimenting with intelligent reasoning architectures. Recently, small experiments have been conducted with this simulator with the aim of simulating robot behavior that avoids collision paths. Automatically extending such experiments to intelligent planning robots in space demands advanced reasoning architectures. One such architecture for general purpose problem solving is explored. The robot, seen as a knowledge-base machine, proceeds via a predesigned abstraction mechanism for problem understanding and response generation. The three phases in one such abstraction scheme are: abstraction for representation, abstraction for evaluation, and abstraction for resolution. Such abstractions require multimodality. This multimodality requires the use of intensional variables to deal with beliefs in the system. Abstraction mechanisms help in synthesizing possible propagating lattices for such beliefs. The machine controller enters into a sixth generation paradigm.

  18. Lynx: a database and knowledge extraction engine for integrative medicine.

    PubMed

    Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T Conrad; Maltsev, Natalia

    2014-01-01

    We have developed Lynx (http://lynx.ci.uchicago.edu)--a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces.

  19. Undergraduate Understanding of Climate Change: The Influences of College Major and Environmental Group Membership on Survey Knowledge Scores

    ERIC Educational Resources Information Center

    Huxster, Joanna K.; Uribe-Zarain, Ximena; Kempton, Willett

    2015-01-01

    A survey covering the scientific and social aspects of climate change was administered to examine U.S. undergraduate student mental models, and compare knowledge between groups based on major and environmental group membership. A Knowledge Score (scale 0-35, mean score = 17.84) was generated for respondents at two, central East Coast, U.S.…

  20. Search Path Mapping: A Versatile Approach for Visualizing Problem-Solving Behavior.

    ERIC Educational Resources Information Center

    Stevens, Ronald H.

    1991-01-01

    Computer-based problem-solving examinations in immunology generate graphic representations of students' search paths, allowing evaluation of how organized and focused their knowledge is, how well their organization relates to critical concepts in immunology, where major misconceptions exist, and whether proper knowledge links exist between content…

  1. Web-Mediated Knowledge Synthesis for Educators

    ERIC Educational Resources Information Center

    DeSchryver, Michael

    2015-01-01

    Ubiquitous and instant access to information on the Web is challenging what constitutes 21st century literacies. This article explores the notion of Web-mediated knowledge synthesis, an approach to integrating Web-based learning that may result in generative synthesis of ideas. This article describes the skills and strategies that may support…

  2. Reciprocal and Scholarly Service Learning: Emergent Theoretical Understandings of the University-Community Interface in South Africa

    ERIC Educational Resources Information Center

    Smith-Tolken, Antoinette; Bitzer, Eli

    2017-01-01

    This study addresses underlying principles to interpret scholarly-based service-related teaching and learning. Such principles include addressing specific concerns of communities, transforming theoretical knowledge into lived experiences for students, making the knowledge generated within communities meaningful and forging constant growth and…

  3. Using Evolved Fuzzy Neural Networks for Injury Detection from Isokinetic Curves

    NASA Astrophysics Data System (ADS)

    Couchet, Jorge; Font, José María; Manrique, Daniel

    In this paper we propose an evolutionary fuzzy neural network system for extracting knowledge from a set of time series containing medical information. The series represent isokinetic curves obtained from a group of patients exercising the knee joint on an isokinetic dynamometer. The system has two parts: i) it analyses the time series input in order to generate a simplified model of an isokinetic curve; ii) it applies a grammar-guided genetic program to obtain a knowledge base represented by a fuzzy neural network. Once the knowledge base has been generated, the system is able to perform knee injury detection. The results suggest that evolved fuzzy neural networks perform better than non-evolutionary approaches and have a high accuracy rate during both the training and testing phases. Additionally, they are robust, as the system is able to self-adapt to changes in the problem without human intervention.

  4. Designing a Constraint Based Parser for Sanskrit

    NASA Astrophysics Data System (ADS)

    Kulkarni, Amba; Pokar, Sheetal; Shukl, Devanand

    Verbal understanding (śābdabodha) of any utterance requires the knowledge of how words in that utterance are related to each other. Such knowledge is usually available in the form of cognition of grammatical relations. Generative grammars describe how a language codes these relations. Thus the knowledge of what information various grammatical relations convey is available from the generation point of view and not the analysis point of view. In order to develop a parser based on any grammar one should then know precisely the semantic content of the grammatical relations expressed in a language string, the clues for extracting these relations and finally whether these relations are expressed explicitly or implicitly. Based on the design principles that emerge from this knowledge, we model the parser as finding a directed tree, given a graph with nodes representing the words and edges representing the possible relations between them. Further, we also use the Mīmāṃsā constraint of ākāṅkṣā (expectancy) to rule out non-solutions and sannidhi (proximity) to prioritize the solutions. We have implemented a parser based on these principles and its performance was found to be satisfactory, giving us the confidence to extend its functionality to handle complex sentences.
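
    A minimal sketch of the parsing idea, viewed as choosing a directed tree from candidate word-to-word relations under an expectancy-style constraint, is given below; the example words, candidate edges, scores, and constraints are invented for illustration and do not reproduce the implemented parser.

```python
from itertools import product

# Toy sentence graph: candidate edges are (head, relation, score) per dependent word;
# real clues would come from case endings and kāraka analysis.
candidates = {
    "Rāmaḥ":  [("gacchati", "kartā", 0.9)],                       # agent expectancy
    "vanam":  [("gacchati", "karma", 0.8), ("Rāmaḥ", "karma", 0.1)],
    # "gacchati" is the verb/root and takes no head.
}

def is_tree(assignment):
    """Every non-root word has one head and every head chain reaches the verb."""
    for dep in assignment:
        node, seen = dep, set()
        while node != "gacchati":
            if node in seen:          # cycle -> not a tree
                return False
            seen.add(node)
            node = assignment[node][0]
    return True

best, best_score = None, -1.0
for choice in product(*candidates.values()):
    assignment = dict(zip(candidates, choice))
    # ākāṅkṣā-style constraint: the verb must receive exactly one kartā.
    if sum(rel == "kartā" for _, rel, _ in choice) != 1:
        continue
    if not is_tree(assignment):
        continue
    score = sum(s for _, _, s in choice)      # proximity/priority proxy
    if score > best_score:
        best, best_score = assignment, score

print(best)
```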

  5. Health data and data governance.

    PubMed

    Hovenga, Evelyn J S; Grain, Heather

    2013-01-01

    Health is a knowledge industry, based on data collected to support care, service planning, financing and knowledge advancement. Increasingly there is a need to collect, retrieve and use health record information in an electronic format to provide greater flexibility, as this enables retrieval and display of data in multiple locations and formats irrespective of where the data were collected. Electronically maintained records require greater structure and consistency to achieve this. The use of data held in records generated in real time in clinical systems also has the potential to reduce the time it takes to gain knowledge, as there is less need to collect research-specific information; this, however, is only possible if data governance principles are applied. Connected devices and information systems are now generating huge amounts of data, as never before seen. An ability to analyse and mine very large amounts of data, "Big Data", provides policy and decision makers with new insights into varied aspects of work and information flow and operational business patterns and trends, and drives greater efficiencies and safer, more effective health care. This enables decision makers to apply rules and guidance that have been developed based upon knowledge from many individual patient records through recognition of triggers based upon that knowledge. In clinical decision support systems information about the individual is compared to rules based upon knowledge gained from accumulated information of many to provide guidance at appropriate times in the clinical process. To achieve this, the data in the individual system and the knowledge rules must be represented in a compatible and consistent manner. This chapter describes data attributes; explains the difference between data and information; outlines the requirements for quality data; shows the relevance of health data standards; and describes how data governance impacts representation of content in systems and the use of that information.
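
    A minimal sketch of the rule-comparison idea described above, assuming hypothetical rules and patient fields, might look like this; a real clinical decision support system would of course rely on curated guidelines and consistently governed, coded data.

```python
# Hypothetical knowledge rules distilled from many records; a real CDSS would
# draw these from curated guidelines applied to consistently represented data.
rules = [
    {"if": lambda p: p["systolic_bp"] >= 140, "then": "Review antihypertensive therapy."},
    {"if": lambda p: p["hba1c"] > 7.0 and "metformin" not in p["medications"],
     "then": "Consider starting metformin."},
]

patient = {"systolic_bp": 152, "hba1c": 7.8, "medications": ["lisinopril"]}

# Guidance is generated by comparing the individual record against population-level rules.
for rule in rules:
    if rule["if"](patient):
        print("ALERT:", rule["then"])
```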

  6. Developing genomic knowledge bases and databases to support clinical management: current perspectives.

    PubMed

    Huser, Vojtech; Sincan, Murat; Cimino, James J

    2014-01-01

    Personalized medicine, the ability to tailor diagnostic and treatment decisions for individual patients, is seen as the evolution of modern medicine. We characterize here the informatics resources available today or envisioned in the near future that can support clinical interpretation of genomic test results. We assume a clinical sequencing scenario (germline whole-exome sequencing) in which a clinical specialist, such as an endocrinologist, needs to tailor patient management decisions within his or her specialty (targeted findings) but relies on a genetic counselor to interpret off-target incidental findings. We characterize the genomic input data and list various types of knowledge bases that provide genomic knowledge for generating clinical decision support. We highlight the need for patient-level databases with detailed lifelong phenotype content in addition to genotype data and provide a list of recommendations for personalized medicine knowledge bases and databases. We conclude that no single knowledge base can currently support all aspects of personalized recommendations and that consolidation of several current resources into larger, more dynamic and collaborative knowledge bases may offer a future path forward.

  7. Developing genomic knowledge bases and databases to support clinical management: current perspectives

    PubMed Central

    Huser, Vojtech; Sincan, Murat; Cimino, James J

    2014-01-01

    Personalized medicine, the ability to tailor diagnostic and treatment decisions for individual patients, is seen as the evolution of modern medicine. We characterize here the informatics resources available today or envisioned in the near future that can support clinical interpretation of genomic test results. We assume a clinical sequencing scenario (germline whole-exome sequencing) in which a clinical specialist, such as an endocrinologist, needs to tailor patient management decisions within his or her specialty (targeted findings) but relies on a genetic counselor to interpret off-target incidental findings. We characterize the genomic input data and list various types of knowledge bases that provide genomic knowledge for generating clinical decision support. We highlight the need for patient-level databases with detailed lifelong phenotype content in addition to genotype data and provide a list of recommendations for personalized medicine knowledge bases and databases. We conclude that no single knowledge base can currently support all aspects of personalized recommendations and that consolidation of several current resources into larger, more dynamic and collaborative knowledge bases may offer a future path forward. PMID:25276091

  8. Perceptions of Generation Y Undergraduate Students on Career Choices and Employment Leadership: A Study on Private Higher Education Institutions in Selangor

    ERIC Educational Resources Information Center

    Puspanathan, Clarence Anthony; Ramendran SPR, Charles; Muthurajan, Pragash; Singh, Ninderpal Singh Balwant

    2017-01-01

    The crucial step for organisations which are recruiting Generation Y into their workforce is to understand their perceptions and expectations. This would help organisations emerge with the right strategies to attract and retain the Generation Y cohort. The aim of this paper is to contribute to the body of knowledge base in respect of Generation Y…

  9. An Investigation of Factors That Influence the Hypothesis Generation Ability of Students in School- Based Agricultural Education Programs When Troubleshooting Small Gasoline Engines

    ERIC Educational Resources Information Center

    Blackburn, J. Joey; Robinson, J. Shane

    2017-01-01

    The purpose of this study was to determine if selected factors influenced the ability of students in school-based agricultural education programs to generate a correct hypothesis when troubleshooting small gasoline engines. Variables of interest included students' cognitive style, age, GPA, and content knowledge in small gasoline engines. Kirton's…

  10. Plumbing the brain drain.

    PubMed

    Saravia, Nancy Gore; Miranda, Juan Francisco

    2004-08-01

    Opportunity is the driving force of migration. Unsatisfied demands for higher education and skills, which have been created by the knowledge-based global economy, have generated unprecedented opportunities in knowledge-intensive service industries. These multi-trillion dollar industries include information, communication, finance, business, education and health. The leading industrialized nations are also the focal points of knowledge-intensive service industries and as such constitute centres of research and development activity that proactively draw in talented individuals worldwide through selective immigration policies, employment opportunities and targeted recruitment. Higher education is another major conduit of talent from less-developed countries to the centres of the knowledge-based global economy. Together career and educational opportunities drive "brain drain and recirculation". The departure of a large proportion of the most competent and innovative individuals from developing nations slows the achievement of the critical mass needed to generate the enabling context in which knowledge creation occurs. To favourably modify the asymmetric movement and distribution of global talent, developing countries must implement bold and creative strategies that are backed by national policies to: provide world-class educational opportunities, construct knowledge-based research and development industries, and sustainably finance the required investment for these strategies. Brazil, China and India have moved in this direction, offering world-class education in areas crucial to national development, such as biotechnology and information technology, paralleled by investments in research and development. As a result, only a small proportion of the most highly educated individuals migrate from these countries, and research and development opportunities employ national talent and even attract immigrants.

  11. Effect of age on variability in the production of text-based global inferences.

    PubMed

    Williams, Lynne J; Dunlop, Joseph P; Abdi, Hervé

    2012-01-01

    As we age, our differences in cognitive skills become more visible, an effect especially true for memory and problem solving skills (i.e., fluid intelligence). However, by contrast with fluid intelligence, few studies have examined variability in measures that rely on one's world knowledge (i.e., crystallized intelligence). The current study investigated whether age increased the variability in text-based global inference generation--a measure of crystallized intelligence. Global inference generation requires the integration of textual information and world knowledge and can be expressed as a gist or lesson. Variability in generating two global inferences for a single text was examined in young-old (62 to 69 years), middle-old (70 to 76 years) and old-old (77 to 94 years) adults. The older two groups showed greater variability, with the middle-old group being most variable. These findings suggest that variability may be a characteristic of both fluid and crystallized intelligence in aging.

  12. Lynx: a database and knowledge extraction engine for integrative medicine

    PubMed Central

    Sulakhe, Dinanath; Balasubramanian, Sandhya; Xie, Bingqing; Feng, Bo; Taylor, Andrew; Wang, Sheng; Berrocal, Eduardo; Dave, Utpal; Xu, Jinbo; Börnigen, Daniela; Gilliam, T. Conrad; Maltsev, Natalia

    2014-01-01

    We have developed Lynx (http://lynx.ci.uchicago.edu)—a web-based database and a knowledge extraction engine, supporting annotation and analysis of experimental data and generation of weighted hypotheses on molecular mechanisms contributing to human phenotypes and disorders of interest. Its underlying knowledge base (LynxKB) integrates various classes of information from >35 public databases and private collections, as well as manually curated data from our group and collaborators. Lynx provides advanced search capabilities and a variety of algorithms for enrichment analysis and network-based gene prioritization to assist the user in extracting meaningful knowledge from LynxKB and experimental data, whereas its service-oriented architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces. PMID:24270788

  13. Globalisation and Higher Education in the Arab Gulf States

    ERIC Educational Resources Information Center

    Donn, Gari; Al Manthri, Yahya

    2010-01-01

    In our knowledge-based world, the societies that prosper are the ones that generate knowledge--through research, through the interwoven relationship between the academe and funded research bodies and with industry. They are the new "centre". It is strange indeed to think of the countries of the Arab Gulf States as the…

  14. Games for Learning: Which Template Generates Social Construction of Knowledge?

    ERIC Educational Resources Information Center

    Garcia, Francisco A.

    2015-01-01

    The purpose of this study was to discover how three person teams use game templates (trivia, role-play, or scavenger hunt) to socially construct knowledge. The researcher designed an experimental Internet-based database to facilitate teams creating each game. Teams consisted of teachers, students, hobbyist, and business owners who shared similar…

  15. Toward a Pedagogy of Border Thinking: Building on Latin@ Students' Subaltern Knowledge

    ERIC Educational Resources Information Center

    Cervantes-Soon, Claudia G.; Carrillo, Juan F.

    2016-01-01

    Based on Walter Mignolo's (2000) notion of border thinking, that is, the subaltern knowledge generated from the exterior borders of the modern/colonial world system, this article extends current conceptual frameworks for the implementation of a decolonizing border pedagogy with Latin@ students in secondary schools. In particular, Cervantes-Soon…

  16. Using Students' Knowledge to Generate Individual Feedback: Concept for an Intelligent Educational System on Logistics.

    ERIC Educational Resources Information Center

    Ziems, Dietrich; Neumann, Gaby

    1997-01-01

    Discusses a methods kit for interactive problem-solving exercises in engineering education as well as a methodology for intelligent evaluation of solutions. The quality of a system teaching logistics thinking can be improved using artificial intelligence. Embedding a rule-based diagnosis module that evaluates the student's knowledge actively…

  17. Generating strain signals under consideration of road surface profiles

    NASA Astrophysics Data System (ADS)

    Putra, T. E.; Abdullah, S.; Schramm, D.; Nuawi, M. Z.; Bruckmann, T.

    2015-08-01

    The current study aimed to develop a mechanism for generating strain signals utilising computer-based simulation. The strain data, caused by the acceleration, were obtained from a fatigue data acquisition process involving car movements. Using a mathematical model, the measured strain signals were converted into acceleration data describing the bumpiness of the road surfaces. The acceleration signals were treated as an external disturbance when generating the strain signals. A comparison showed that the actual and simulated strain data have similar patterns. The results are expected to provide new knowledge on generating strain signals via simulation.

  18. [The Role of Nursing Education in the Advancement of the Nursing Profession].

    PubMed

    Chang Yeh, Mei

    2017-02-01

    The present article discusses the role of nursing education in the advancement of the nursing profession in the context of the three facets of knowledge: generation, dissemination, and application. Nursing is an applied science and the application of knowledge in practice is the ultimate goal of the nursing profession. The reform of the healthcare delivery model requires that nurses acquire and utilize evidence-based clinical knowledge, critical thinking, effective communication, and team collaboration skills in order to ensure the quality of patient care and safety. Therefore, baccalaureate education has become the minimal requirement for pre-licensure nursing education. Schools of nursing are responsible to cultivate competent nurses to respond to the demands on the nursing workforce from the healthcare system. Attaining a master's education in nursing helps cultivate Advanced Practice Registered Nurses (APRNs) to further expand the roles and functions of the nursing profession in order to promote the quality of care in clinical practice. Nursing faculty and scholars of higher education institutions generate nursing knowledge and develop professional scholarship through research. Attaining a doctoral education in nursing cultivates faculties and scholars who will continually generate and disseminate nursing knowledge into the future.

  19. Ontology Research and Development. Part 1-A Review of Ontology Generation.

    ERIC Educational Resources Information Center

    Ding, Ying; Foo, Schubert

    2002-01-01

    Discusses the role of ontology in knowledge representation, including enabling content-based access, interoperability, communications, and new levels of service on the Semantic Web; reviews current ontology generation studies and projects as well as problems facing such research; and discusses ontology mapping, information extraction, natural…

  20. Unsupervised Ontology Generation from Unstructured Text. CRESST Report 827

    ERIC Educational Resources Information Center

    Mousavi, Hamid; Kerr, Deirdre; Iseli, Markus R.

    2013-01-01

    Ontologies are a vital component of most knowledge acquisition systems, and recently there has been a huge demand for generating ontologies automatically since manual or supervised techniques are not scalable. In this paper, we introduce "OntoMiner", a rule-based, iterative method to extract and populate ontologies from unstructured or…

  1. Linguistically Motivated Features for CCG Realization Ranking

    ERIC Educational Resources Information Center

    Rajkumar, Rajakrishnan

    2012-01-01

    Natural Language Generation (NLG) is the process of generating natural language text from an input, which is a communicative goal and a database or knowledge base. Informally, the architecture of a standard NLG system consists of the following modules (Reiter and Dale, 2000): content determination, sentence planning (or microplanning) and surface…

  2. Automating the design of scientific computing software

    NASA Technical Reports Server (NTRS)

    Kant, Elaine

    1992-01-01

    SINAPSE is a domain-specific software design system that generates code from specifications of equations and algorithm methods. This paper describes the system's design techniques (planning in a space of knowledge-based refinement and optimization rules), user interaction style (user has option to control decision making), and representation of knowledge (rules and objects). It also summarizes how the system knowledge has evolved over time and suggests some issues in building software design systems to facilitate reuse.

  3. Knowledge Transfer Loss in a Base Realignment and Closure (BRAC) Environment: A Positive or Negative Acquisition Paradigm Shift

    DTIC Science & Technology

    2012-04-01

    between 1946-1964), Gen-Xrs (born between 1965-1980), Millennials (born between 1981-1990), and iGenerationals (born between 1991 and today). Research...generations (03 Traditionalist, 71 Generation X, 04 Millennial, and 00 iGeneration). And there were 34 (11 percent) survey official responses where the...interesting insight into unintended consequences of “mass purge/new blood” on absorbing a short-term...

  4. Beyond rules: The next generation of expert systems

    NASA Technical Reports Server (NTRS)

    Ferguson, Jay C.; Wagner, Robert E.

    1987-01-01

    The PARAGON Representation, Management, and Manipulation system is introduced. The concepts of knowledge representation, knowledge management, and knowledge manipulation are combined in a comprehensive system for solving real world problems requiring high levels of expertise in a real time environment. In most applications the complexity of the problem and the representation used to describe the domain knowledge tend to obscure the information from which solutions are derived. This inhibits the acquisition, verification, and validation of domain knowledge, places severe constraints on the ability to extend and maintain a knowledge base, and makes generic problem-solving strategies difficult to develop. A unique hybrid system was developed to overcome these traditional limitations.

  5. For the Love of the Game: Game- Versus Lecture-Based Learning With Generation Z Patients.

    PubMed

    Adamson, Mary A; Chen, Hengyi; Kackley, Russell; Micheal, Alicia

    2018-02-01

    The current study evaluated adolescent patients' enjoyment of and knowledge gained from game-based learning compared with an interactive lecture format on the topic of mood disorders. It was hypothesized that game-based learning would be statistically more effective than a lecture in knowledge acquisition and satisfaction scores. A pre-post design was implemented in which a convenience sample of 160 adolescent patients were randomized to either a lecture (n = 80) or game-based (n = 80) group. Both groups completed a pretest/posttest and satisfaction survey. Results showed that both groups had significant improvement in knowledge from pretest compared to posttest. Game-based learning was statistically more effective than the interactive lecture in knowledge achievement and satisfaction scores. This finding supports the contention that game-based learning is an active technique that may be used with patient education. [Journal of Psychosocial Nursing and Mental Health Services, 56(2), 29-36.]. Copyright 2018, SLACK Incorporated.

  6. Chemical name extraction based on automatic training data generation and rich feature set.

    PubMed

    Yan, Su; Spangler, W Scott; Chen, Ying

    2013-01-01

    The automation of extracting chemical names from text has significant value to biomedical and life science research. A major barrier in this task is the difficulty of obtaining a sizable, good-quality data set to train a reliable entity extraction model. Another difficulty is the selection of informative features of chemical names, since comprehensive domain knowledge on chemistry nomenclature is required. Leveraging random text generation techniques, we explore the idea of automatically creating training sets for the task of chemical name extraction. Assuming the availability of an incomplete list of chemical names, called a dictionary, we are able to generate well-controlled, random, yet realistic chemical-like training documents. We statistically analyze the construction of chemical names based on the incomplete dictionary, and propose a series of new features, without relying on any domain knowledge. Compared to state-of-the-art models learned from manually labeled data and domain knowledge, our solution shows better or comparable results in annotating real-world data with less human effort. Moreover, we report an interesting observation about the language for chemical names. That is, both the structural and semantic components of chemical names follow a Zipfian distribution, which resembles many natural languages.
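
    The sketch below shows one plausible form of dictionary-driven training-data generation: random, chemical-like sentences with token-level labels produced from an (assumed) incomplete dictionary; the dictionary entries, filler text, and labelling scheme are illustrative assumptions, not the authors' generator.

```python
import random

random.seed(7)

# Incomplete "dictionary" of known chemical names (assumed available, as in the paper's setting).
dictionary = ["acetylsalicylic acid", "2-methylpropan-1-ol", "sodium chloride",
              "benzaldehyde", "ethyl acetate"]
filler = ("the sample was dissolved and heated under reflux then analysed "
          "by chromatography after cooling to room temperature").split()

def make_document(n_tokens=30, p_chemical=0.15):
    """Generate a random but realistic-looking training sentence with token labels."""
    tokens, labels = [], []
    while len(tokens) < n_tokens:
        if random.random() < p_chemical:
            for i, tok in enumerate(random.choice(dictionary).split()):
                tokens.append(tok)
                labels.append("B-CHEM" if i == 0 else "I-CHEM")
        else:
            tokens.append(random.choice(filler))
            labels.append("O")
    return list(zip(tokens, labels))

for tok, lab in make_document(12):
    print(f"{tok:20s} {lab}")
```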

  7. Generation of Natural-Language Textual Summaries from Longitudinal Clinical Records.

    PubMed

    Goldstein, Ayelet; Shahar, Yuval

    2015-01-01

    Physicians are required to interpret, abstract, and present large amounts of clinical data in free text as part of their daily tasks. This is especially true for chronic-disease domains, but it also holds in other clinical domains. We have recently developed a prototype system, CliniText, which, given a time-oriented clinical database, and appropriate formal abstraction and summarization knowledge, combines the computational mechanisms of knowledge-based temporal data abstraction, textual summarization, abduction, and natural-language generation techniques, to generate an intelligent textual summary of longitudinal clinical data. We demonstrate our methodology, and the feasibility of providing a free-text summary of longitudinal electronic patient records, by generating summaries in two very different domains - Diabetes Management and Cardiothoracic surgery. In particular, we explain the process of generating a discharge summary of a patient who had undergone a Coronary Artery Bypass Graft operation, and a brief summary of the treatment of a diabetes patient for five years.
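
    A minimal sketch of two central steps in such a pipeline, temporal abstraction of a lab-value series into qualitative intervals followed by a simple verbalisation, is shown below; the HbA1c values, thresholds, and sentence template are illustrative assumptions and not CliniText's actual knowledge or output.

```python
# Hypothetical HbA1c time series (date, value); thresholds are illustrative only.
series = [("2019-01", 8.4), ("2019-04", 8.1), ("2019-07", 7.2),
          ("2019-10", 6.9), ("2020-01", 6.8)]

def abstract_state(value):
    return "poor control" if value >= 8.0 else "moderate control" if value >= 7.0 else "good control"

# Temporal abstraction: merge consecutive points with the same qualitative state into intervals.
intervals = []
for date, value in series:
    state = abstract_state(value)
    if intervals and intervals[-1][2] == state:
        intervals[-1][1] = date              # extend the current interval
    else:
        intervals.append([date, date, state])

# Natural-language generation: verbalise the abstracted intervals as a summary sentence.
clauses = [f"{state} from {start} to {end}" for start, end, state in intervals]
print("HbA1c showed " + "; then ".join(clauses) + ".")
```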

  8. Analyzing Pre-Service Primary Teachers' Fraction Knowledge Structures through Problem Posing

    ERIC Educational Resources Information Center

    Kilic, Cigdem

    2015-01-01

    In this study it was aimed to determine pre-service primary teachers' knowledge structures of fraction through problem posing activities. A total of 90 pre-service primary teachers participated in this study. A problem posing test consisting of two questions was used and the participants were asked to generate as many as problems based on the…

  9. Identifying and Verifying Earthquake Engineering Concepts to Create a Knowledge Base in STEM Education: A Modified Delphi Study

    ERIC Educational Resources Information Center

    Cavlazoglu, Baki; Stuessy, Carol L.

    2017-01-01

    Stakeholders in STEM education have called for integrating engineering content knowledge into STEM-content classrooms. To answer the call, stakeholders in science education announced a new framework, Next Generation Science Standards, which focuses on the integration of science and engineering in K-12 science education. However, research indicates…

  10. Generation of surgical pathology report using a 5,000-word speech recognizer.

    PubMed

    Tischler, A S; Martin, M R

    1989-10-01

    Pressures to decrease both turnaround time and operating costs simultaneously have placed conflicting demands on traditional forms of medical transcription. The new technology of voice recognition extends the promise of enabling the pathologist or other medical professional to dictate a correct report and have it printed and/or transmitted to a database immediately. The usefulness of voice recognition systems depends on several factors, including ease of use, reliability, speed, and accuracy. These in turn depend on the general underlying design of the systems and inclusion in the systems of a specific knowledge base appropriate for each application. Development of a good knowledge base requires close collaboration between a domain expert and a knowledge engineer with expertise in voice recognition. The authors have recently completed a knowledge base for surgical pathology using the Kurzweil VoiceReport 5,000-word system.

  11. Using background knowledge for picture organization and retrieval

    NASA Astrophysics Data System (ADS)

    Quintana, Yuri

    1997-01-01

    A picture knowledge base management system is described that is used to represent, organize and retrieve pictures from a frame knowledge base. Experiments with human test subjects were conducted to obtain further descriptions of pictures from news magazines. These descriptions were used to represent the semantic content of pictures in frame representations. A conceptual clustering algorithm is described which organizes pictures not only on the observable features, but also on implicit properties derived from the frame representations. The algorithm uses inheritance reasoning to take into account background knowledge in the clustering. The algorithm creates clusters of pictures using a group similarity function that is based on the gestalt theory of picture perception. For each cluster created, a frame is generated which describes the semantic content of pictures in the cluster. Clustering and retrieval experiments were conducted with and without background knowledge. The paper shows how the use of background knowledge and semantic similarity heuristics improves the speed, precision, and recall of queries processed. The paper concludes with a discussion of how natural language processing can be used to assist in the development of knowledge bases and the processing of user queries.
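
    The sketch below gives a rough flavour of clustering with inheritance-based background knowledge: each picture's concepts are expanded through a toy is-a hierarchy before a group similarity is computed; the hierarchy, pictures, similarity function, and threshold are all illustrative assumptions rather than the described system.

```python
# Toy is-a hierarchy providing background knowledge for inheritance reasoning.
isa = {"sparrow": "bird", "eagle": "bird", "bird": "animal",
       "car": "vehicle", "truck": "vehicle"}

def expand(concepts):
    """Add all ancestors of each concept (inheritance-based background knowledge)."""
    out = set()
    for c in concepts:
        while c:
            out.add(c)
            c = isa.get(c)
    return out

# Frame-style picture descriptions: picture id -> concepts mentioned by annotators.
pictures = {"p1": {"sparrow", "tree"}, "p2": {"eagle", "sky"},
            "p3": {"car", "road"}, "p4": {"truck", "road"}}

def group_similarity(a, b):
    ea, eb = expand(pictures[a]), expand(pictures[b])
    return len(ea & eb) / len(ea | eb)

# Greedy conceptual clustering: join a picture to an existing cluster if it is
# sufficiently similar to every member, otherwise start a new cluster.
clusters = []
for pid in pictures:
    for cluster in clusters:
        if all(group_similarity(pid, other) >= 0.2 for other in cluster):
            cluster.append(pid)
            break
    else:
        clusters.append([pid])

print(clusters)
```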

  12. How does the knowledge environment shape procurement practices for orthopaedic medical devices in Mexico?

    PubMed

    Lingg, Myriam; Wyss, Kaspar; Durán-Arenas, Luis

    2016-07-08

    In organisational theory there is an assumption that knowledge is used effectively in healthcare systems that perform well. Actors in healthcare systems focus on managing knowledge of clinical processes like, for example, clinical decision-making to improve patient care. We know little about connecting that knowledge to administrative processes like high-risk medical device procurement. We analysed knowledge-related factors that influence procurement and clinical procedures for orthopaedic medical devices in Mexico. We based our qualitative study on 48 semi-structured interviews with various stakeholders in Mexico: orthopaedic specialists, government officials, and social security system managers or administrators. We took a knowledge-management related perspective (i) to analyse factors of managing knowledge of clinical procedures, (ii) to assess the role of this knowledge in relation to procurement of orthopaedic medical devices, and (iii) to determine how to improve the situation. The results of this study are primarily relevant for Mexico but may also provide impetus to other health systems with highly standardized procurement practices. We found that knowledge of clinical procedures in orthopaedics is generated inconsistently and not always efficiently managed. Its support for procuring orthopaedic medical devices is insufficient. The identified deficiencies were: leaders who lack guidance and direction and thus use knowledge poorly; failure to share knowledge; insufficiently defined formal structures and processes for collecting information and making it available to actors in the health system; lack of strategies to benefit from synergies created by information and knowledge exchange. Many factors are related directly or indirectly to technological aspects, which are insufficiently developed. The content of this manuscript is novel as it analyses knowledge-related factors that influence procurement of orthopaedic medical devices in Mexico. Based on our results we recommend that the procurement mechanism should integrate knowledge from clinical procedures adequately in its decision-making. Without strong guidance, organisational changes, and support from technological solutions to improve the generation and management of knowledge, procurement processes for orthopaedic high-risk medical devices will remain sub-optimal.

  13. More than Anecdotes: Fishers' Ecological Knowledge Can Fill Gaps for Ecosystem Modeling.

    PubMed

    Bevilacqua, Ana Helena V; Carvalho, Adriana R; Angelini, Ronaldo; Christensen, Villy

    2016-01-01

    Ecosystem modeling applied to fisheries remains hampered by a lack of local information. Fishers' knowledge could fill this gap, improving participation in and the management of fisheries. The same fishing area was modeled using two approaches: one based on fishers' knowledge and one based on scientific information. For the former, the data was collected by interviews through the Delphi methodology, and for the latter, the data was gathered from the literature. Agreement between the attributes generated by the fishers' knowledge model and the scientific model is discussed and explored, aiming to improve data availability, the ecosystem model, and fisheries management. The ecosystem attributes produced from the fishers' knowledge model were consistent with those produced by the scientific model, which was elaborated using only scientific data from the literature. This study provides evidence that fishers' knowledge may suitably complement scientific data, and may improve the modeling tools for the research and management of fisheries.

  14. Dynamic knowledge representation using agent-based modeling: ontology instantiation and verification of conceptual models.

    PubMed

    An, Gary

    2009-01-01

    The sheer volume of biomedical research threatens to overwhelm the capacity of individuals to effectively process this information. Adding to this challenge is the multiscale nature of both biological systems and the research community as a whole. Given this volume and rate of generation of biomedical information, the research community must develop methods for robust representation of knowledge in order for individuals, and the community as a whole, to "know what they know." Despite increasing emphasis on "data-driven" research, the fact remains that researchers guide their research using intuitively constructed conceptual models derived from knowledge extracted from publications, knowledge that is generally qualitatively expressed using natural language. Agent-based modeling (ABM) is a computational modeling method that is suited to translating the knowledge expressed in biomedical texts into dynamic representations of the conceptual models generated by researchers. The hierarchical object-class orientation of ABM maps well to biomedical ontological structures, facilitating the translation of ontologies into instantiated models. Furthermore, ABM is suited to producing the nonintuitive behaviors that often "break" conceptual models. Verification in this context is focused at determining the plausibility of a particular conceptual model, and qualitative knowledge representation is often sufficient for this goal. Thus, utilized in this fashion, ABM can provide a powerful adjunct to other computational methods within the research process, as well as providing a metamodeling framework to enhance the evolution of biomedical ontologies.
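
    As a small illustration of turning a qualitative, literature-style rule into a runnable agent-based model, the sketch below encodes a single hypothetical rule as agent behaviour and runs it; the biology, parameters, and update scheme are invented for illustration only.

```python
import random

random.seed(1)

# Minimal agent class mirroring an ontology-style class: a rule of the form
# "activated macrophages secrete TNF, and TNF activates more macrophages"
# is encoded as agent behaviour so the conceptual model can be run and inspected.
class Macrophage:
    def __init__(self):
        self.activated = False

    def step(self, tnf_level):
        if not self.activated and random.random() < min(1.0, tnf_level / 10):
            self.activated = True
        return 1.0 if self.activated else 0.0   # TNF secreted this step

agents = [Macrophage() for _ in range(50)]
agents[0].activated = True                       # initial stimulus
tnf = 1.0

for t in range(10):
    tnf = sum(agent.step(tnf) for agent in agents)
    print(f"t={t}  activated={sum(a.activated for a in agents)}  TNF={tnf:.0f}")
```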

  15. Investigation into Differences in Level of Knowledge about Hypertension between High School Students and Elderly People.

    PubMed

    Sanagawa, Akimasa; Ogasawara, Misa; Kusahara, Yuri; Yasumoto, Miki; Iwaki, Soichiro; Fujii, Satoshi

    2017-01-01

    As a major chronic non-communicable disease, hypertension is the most important risk factor for cardiovascular disease, chronic kidney disease, stroke and, if not treated appropriately, premature death. A population-based approach aimed at decreasing high blood pressure among the general population is an important component of any comprehensive plan to prevent hypertension. However, few studies have investigated generational differences in knowledge about, and consciousness of, hypertension. Thus, we conducted a questionnaire survey about hypertension, with the aim of clarifying differences of understanding about hypertension between high school students and elderly people. The results of this investigation suggested that there is indeed a generational difference: knowledge about hypertension, and awareness of its relationship with salt intake, was higher in elderly people than in high school students. Furthermore, our study showed that among high school students, salt intake consciousness correlated with a family history of hypertension. By contrast, in elderly people, salt intake consciousness is related to age and to an awareness of recommended daily salt intake. This study strongly showed that knowledge and consciousness of hypertension varied among generations, with the elderly being more aware and conscientious about salt intake. Acknowledgement of this generational diversity is critical to developing an effective overall preventive strategy for hypertension.

  16. Plumbing the brain drain.

    PubMed Central

    Saravia, Nancy Gore; Miranda, Juan Francisco

    2004-01-01

    Opportunity is the driving force of migration. Unsatisfied demands for higher education and skills, which have been created by the knowledge-based global economy, have generated unprecedented opportunities in knowledge-intensive service industries. These multi-trillion dollar industries include information, communication, finance, business, education and health. The leading industrialized nations are also the focal points of knowledge-intensive service industries and as such constitute centres of research and development activity that proactively draw in talented individuals worldwide through selective immigration policies, employment opportunities and targeted recruitment. Higher education is another major conduit of talent from less-developed countries to the centres of the knowledge-based global economy. Together career and educational opportunities drive "brain drain and recirculation". The departure of a large proportion of the most competent and innovative individuals from developing nations slows the achievement of the critical mass needed to generate the enabling context in which knowledge creation occurs. To favourably modify the asymmetric movement and distribution of global talent, developing countries must implement bold and creative strategies that are backed by national policies to: provide world-class educational opportunities, construct knowledge-based research and development industries, and sustainably finance the required investment for these strategies. Brazil, China and India have moved in this direction, offering world-class education in areas crucial to national development, such as biotechnology and information technology, paralleled by investments in research and development. As a result, only a small proportion of the most highly educated individuals migrate from these countries, and research and development opportunities employ national talent and even attract immigrants. PMID:15375451

  17. Automatic Generation of Tests from Domain and Multimedia Ontologies

    ERIC Educational Resources Information Center

    Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos

    2011-01-01

    The aim of this article is to present an approach for generating tests in an automatic way. Although other methods have been already reported in the literature, the proposed approach is based on ontologies, representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…

  18. A knowledge-based framework for image enhancement in aviation security.

    PubMed

    Singh, Maneesha; Singh, Sameer; Partridge, Derek

    2004-12-01

    The main aim of this paper is to present a knowledge-based framework for automatically selecting the best image enhancement algorithm from several available on a per image basis in the context of X-ray images of airport luggage. The approach detailed involves a system that learns to map image features that represent its viewability to one or more chosen enhancement algorithms. Viewability measures have been developed to provide an automatic check on the quality of the enhanced image, i.e., is it really enhanced? The choice is based on ground-truth information generated by human X-ray screening experts. Such a system, for a new image, predicts the best-suited enhancement algorithm. Our research details the various characteristics of the knowledge-based system and shows extensive results on real images.
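
    A rough sketch of the selection idea, learning a mapping from per-image viewability features to the best-suited enhancement algorithm, is shown below using synthetic features and a generic classifier; the features, labels, and decision tree are assumptions standing in for the paper's ground-truth-trained system.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Hypothetical per-image "viewability" features: [mean intensity, contrast, edge density].
# Labels are the enhancement algorithm a screener judged best for that image.
X_train = rng.uniform(0, 1, size=(200, 3))
# Toy ground truth: dark images -> gamma correction, low-contrast -> histogram
# equalization, otherwise sharpening.
y_train = np.where(X_train[:, 0] < 0.3, "gamma_correction",
          np.where(X_train[:, 1] < 0.4, "histogram_equalization", "unsharp_masking"))

selector = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

new_image_features = [[0.25, 0.7, 0.5]]          # a dark but contrasty luggage image
print("recommended enhancement:", selector.predict(new_image_features)[0])
```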

  19. Neuro-symbolic representation learning on biological knowledge graphs.

    PubMed

    Alshahrani, Mona; Khan, Mohammad Asif; Maddouri, Omar; Kinjo, Akira R; Queralt-Rosinach, Núria; Hoehndorf, Robert

    2017-09-01

    Biological data and knowledge bases increasingly rely on Semantic Web technologies and the use of knowledge graphs for data integration, retrieval and federated queries. In the past years, feature learning methods that are applicable to graph-structured data have become available, but have not yet been widely applied and evaluated on structured biological knowledge. Results: We develop a novel method for feature learning on biological knowledge graphs. Our method combines symbolic methods, in particular knowledge representation using symbolic logic and automated reasoning, with neural networks to generate embeddings of nodes that encode for related information within knowledge graphs. Through the use of symbolic logic, these embeddings contain both explicit and implicit information. We apply these embeddings to the prediction of edges in the knowledge graph representing problems of function prediction, finding candidate genes of diseases, protein-protein interactions, or drug target relations, and demonstrate performance that matches and sometimes outperforms traditional approaches based on manually crafted features. Our method can be applied to any biological knowledge graph, and will thereby open up the increasing amount of Semantic Web based knowledge bases in biology to use in machine learning and data analytics. Availability and implementation: https://github.com/bio-ontology-research-group/walking-rdf-and-owl. Contact: robert.hoehndorf@kaust.edu.sa. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
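
    A minimal sketch of the graph-walk step such an approach might use: random walks over a toy edge-labelled graph produce "sentences" of node and edge labels that could then be fed to a skip-gram model to learn embeddings. The graph, labels, and walk parameters are illustrative assumptions, not the walking-rdf-and-owl code.

```python
# Minimal sketch: generate random walks over a small edge-labelled knowledge
# graph; the resulting label sequences could be used as a corpus for a
# skip-gram model. The graph below is a toy example, not a biological graph.
import random

edges = {
    "GeneA": [("interacts_with", "GeneB"), ("associated_with", "Disease1")],
    "GeneB": [("interacts_with", "GeneA"), ("has_function", "GO:0008150")],
    "Disease1": [("associated_with", "GeneA")],
    "GO:0008150": [],
}

def random_walk(start, length=4):
    walk, node = [start], start
    for _ in range(length):
        if not edges.get(node):
            break
        label, nxt = random.choice(edges[node])
        walk.extend([label, nxt])   # include edge labels in the walk, RDF-style
        node = nxt
    return walk

corpus = [random_walk(node) for node in edges for _ in range(2)]
for sentence in corpus[:3]:
    print(" -> ".join(sentence))
```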

  20. A Loud Silence: Working with Research-Based Theatre and A/R/Tography

    ERIC Educational Resources Information Center

    Lea, Graham W.; Belliveau, George; Wager, Amanda; Beck, Jaime L.

    2011-01-01

    Arts-based approaches to research have emerged as an integral component of current scholarship in the social sciences, education, health research, and humanities. Integrating arts-based methods and methodologies with research generates possibilities for fresh approaches for creating, translating, and exchanging knowledge (Barone & Eisner, 1997;…

  1. The KATE shell: An implementation of model-based control, monitor and diagnosis

    NASA Technical Reports Server (NTRS)

    Cornell, Matthew

    1987-01-01

    The conventional control and monitor software currently used by the Space Center for Space Shuttle processing has many limitations, such as high maintenance costs, limited diagnostic capabilities, and limited simulation support. These limitations motivated the development of a knowledge-based (or model-based) shell to generically control and monitor electro-mechanical systems. The knowledge base describes the system's structure and function and is used by a software shell to perform real-time constraint checking, low-level control of components, diagnosis of detected faults, sensor validation, automatic generation of schematic diagrams, and automatic recovery from failures. This approach is more versatile and more powerful than the conventional hard-coded approach, although knowledge-based control and monitor systems may not be appropriate for systems that require high-speed reaction times or are not well understood.
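
    A minimal sketch of model-based constraint checking in the spirit described above: a small structural/functional model predicts downstream sensor values, and components whose predictions disagree with measurements become fault candidates. The component names, gains, and tolerances are hypothetical, not the KATE knowledge base.

```python
# Minimal sketch: a structural/functional model predicts sensor values from
# upstream readings; components whose predictions miss the measurement by
# more than a tolerance are flagged as fault candidates. Values are made up.
model = {
    # component: (upstream sensor, downstream sensor, expected gain)
    "valve_V1": ("tank_pressure", "line_pressure", 1.0),
    "pump_P1": ("line_pressure", "outlet_pressure", 2.0),
}

readings = {"tank_pressure": 50.0, "line_pressure": 49.5, "outlet_pressure": 60.0}

def fault_candidates(model, readings, tol=0.1):
    suspects = []
    for component, (up, down, gain) in model.items():
        expected = readings[up] * gain
        if abs(readings[down] - expected) > tol * expected:
            suspects.append((component, expected, readings[down]))
    return suspects

for name, expected, actual in fault_candidates(model, readings):
    print(f"{name}: expected {expected:.1f}, measured {actual:.1f} -> fault candidate")
```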

  2. NASDA knowledge-based network planning system

    NASA Technical Reports Server (NTRS)

    Yamaya, K.; Fujiwara, M.; Kosugi, S.; Yambe, M.; Ohmori, M.

    1993-01-01

    One of the SODS (space operation and data system) sub-systems, NP (network planning), was the first expert system used by NASDA (National Space Development Agency of Japan) for tracking and control of satellites. The major responsibilities of the NP system are, first, the allocation of network and satellite control resources and, second, the generation of the network operation plan (NOP) data used in automated control of the stations and control center facilities. Until now, the first task, network resource scheduling, had been done by network operators. The NP system automatically generates schedules using its knowledge base, which contains information on satellite orbits, station availability, which computer is dedicated to which satellite, and how many stations must be available for a particular satellite pass or time period. The NP system is introduced.
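
    A minimal sketch of the kind of knowledge-based pass allocation described above: each requested satellite pass is assigned to an available station that the knowledge base says supports that satellite. The station names, support table, and time windows are illustrative assumptions, not the NP system.

```python
# Minimal sketch: allocate requested satellite passes to stations using a
# simple knowledge base of which station supports which satellite and when
# each station is busy. All data below are illustrative placeholders.
station_supports = {
    "Katsuura": {"GMS-4", "ETS-V"},
    "Masuda": {"ETS-V"},
    "Okinawa": {"GMS-4"},
}
station_busy = {"Katsuura": [(0, 8)], "Masuda": [], "Okinawa": [(8, 12)]}

requests = [("GMS-4", (9, 10)), ("ETS-V", (10, 12))]  # (satellite, pass window)

def free(station, window):
    start, end = window
    return all(end <= busy_start or start >= busy_end
               for busy_start, busy_end in station_busy[station])

plan = []
for satellite, window in requests:
    for station, supported in station_supports.items():
        if satellite in supported and free(station, window):
            plan.append((satellite, station, window))
            station_busy[station].append(window)
            break

print(plan)
```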

  3. Computer based interpretation of infrared spectra-structure of the knowledge-base, automatic rule generation and interpretation

    NASA Astrophysics Data System (ADS)

    Ehrentreich, F.; Dietze, U.; Meyer, U.; Abbas, S.; Schulz, H.

    1995-04-01

    A main task within the SpecInfo project is to develop interpretation tools that can handle many more of the complicated, more specific spectrum-structure correlations. In the first step, the empirical knowledge about the assignment of structural groups and their characteristic IR bands was collected from the literature and represented in a computer-readable, well-structured form. Vague verbal rules are handled by introducing linguistic variables. The next step was the development of automatic rule-generating procedures. We combined and extended the IDIOTS algorithm with Blaffert's set-theory-based algorithm. The procedures were successfully applied to the SpecInfo database. The realization of the preceding items is a prerequisite for improving the computerized structure elucidation procedure.
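
    A minimal sketch of rule-based band assignment, assuming each rule links a functional group to a characteristic wavenumber interval (a crude stand-in for the linguistic variables used for vague rules); the values are textbook-style approximations, not the SpecInfo rule base.

```python
# Minimal sketch: assign functional groups to observed IR peaks using
# characteristic wavenumber intervals. Intervals are rough textbook values.
rules = {
    "O-H stretch (broad)": (3200, 3550),
    "C=O stretch": (1680, 1750),
    "C-H stretch": (2850, 2960),
}

observed_peaks = [1715, 2920, 3400]  # cm^-1

for peak in observed_peaks:
    matches = [group for group, (lo, hi) in rules.items() if lo <= peak <= hi]
    print(f"{peak} cm^-1 ->", matches or ["no assignment"])
```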

  4. Consulting as a Strategy for Knowledge Transfer

    PubMed Central

    Jacobson, Nora; Butterill, Dale; Goering, Paula

    2005-01-01

    Academic researchers who work on health policy and health services are expected to transfer knowledge to decision makers. Decision makers often do not, however, regard academics’ traditional ways of doing research and disseminating their findings as relevant or useful. This article argues that consulting can be a strategy for transferring knowledge between researchers and decision makers and is effective at promoting the “enlightenment” and “interactive” models of knowledge use. Based on three case studies, it develops a model of knowledge transfer–focused consulting that consists of six stages and four types of work. Finally, the article explores how knowledge is generated in consulting and identifies several classes of factors facilitating its use by decision makers. PMID:15960773

  5. Interpretive Medicine

    PubMed Central

    Reeve, Joanne

    2010-01-01

    Patient-centredness is a core value of general practice; it is defined as the interpersonal processes that support the holistic care of individuals. To date, efforts to demonstrate their relationship to patient outcomes have been disappointing, whilst some studies suggest values may be more rhetoric than reality. Contextual issues influence the quality of patient-centred consultations, impacting on outcomes. The legitimate use of knowledge, or evidence, is a defining aspect of modern practice, and has implications for patient-centredness. Based on a critical review of the literature, on my own empirical research, and on reflections from my clinical practice, I critique current models of the use of knowledge in supporting individualised care. Evidence-Based Medicine (EBM), and its implementation within health policy as Scientific Bureaucratic Medicine (SBM), define best evidence in terms of an epistemological emphasis on scientific knowledge over clinical experience. It provides objective knowledge of disease, including quantitative estimates of the certainty of that knowledge. Whilst arguably appropriate for secondary care, involving episodic care of selected populations referred in for specialist diagnosis and treatment of disease, application to general practice can be questioned given the complex, dynamic and uncertain nature of much of the illness that is treated. I propose that general practice is better described by a model of Interpretive Medicine (IM): the critical, thoughtful, professional use of an appropriate range of knowledges in the dynamic, shared exploration and interpretation of individual illness experience, in order to support the creative capacity of individuals in maintaining their daily lives. Whilst the generation of interpreted knowledge is an essential part of daily general practice, the profession does not have an adequate framework by which this activity can be externally judged to have been done well. Drawing on theory related to the recognition of quality in interpretation and knowledge generation within the qualitative research field, I propose a framework by which to evaluate the quality of knowledge generated within generalist, interpretive clinical practice. I describe three priorities for research in developing this model further, which will strengthen and preserve core elements of the discipline of general practice, and thus promote and support the health needs of the public. PMID:21805819

  6. Interpretive medicine: Supporting generalism in a changing primary care world.

    PubMed

    Reeve, Joanne

    2010-01-01

    Patient-centredness is a core value of general practice; it is defined as the interpersonal processes that support the holistic care of individuals. To date, efforts to demonstrate their relationship to patient outcomes have been disappointing, whilst some studies suggest values may be more rhetoric than reality. Contextual issues influence the quality of patient-centred consultations, impacting on outcomes. The legitimate use of knowledge, or evidence, is a defining aspect of modern practice, and has implications for patient-centredness. Based on a critical review of the literature, on my own empirical research, and on reflections from my clinical practice, I critique current models of the use of knowledge in supporting individualised care. Evidence-Based Medicine (EBM), and its implementation within health policy as Scientific Bureaucratic Medicine (SBM), define best evidence in terms of an epistemological emphasis on scientific knowledge over clinical experience. It provides objective knowledge of disease, including quantitative estimates of the certainty of that knowledge. Whilst arguably appropriate for secondary care, involving episodic care of selected populations referred in for specialist diagnosis and treatment of disease, application to general practice can be questioned given the complex, dynamic and uncertain nature of much of the illness that is treated. I propose that general practice is better described by a model of Interpretive Medicine (IM): the critical, thoughtful, professional use of an appropriate range of knowledges in the dynamic, shared exploration and interpretation of individual illness experience, in order to support the creative capacity of individuals in maintaining their daily lives. Whilst the generation of interpreted knowledge is an essential part of daily general practice, the profession does not have an adequate framework by which this activity can be externally judged to have been done well. Drawing on theory related to the recognition of quality in interpretation and knowledge generation within the qualitative research field, I propose a framework by which to evaluate the quality of knowledge generated within generalist, interpretive clinical practice. I describe three priorities for research in developing this model further, which will strengthen and preserve core elements of the discipline of general practice, and thus promote and support the health needs of the public.

  7. Non-Linear Effects in Knowledge Production

    NASA Astrophysics Data System (ADS)

    Purica, Ionut

    2007-04-01

    The generation of technological knowledge is paramount to our present development; the production of technological knowledge is governed by the same Cobb-Douglas type of model, with the means of research and the level of intelligence replacing capital and labor, respectively. We explore the basic behavior of present-day economies that produce technological knowledge alongside the `usual' industrial production, and determine a basic behavior that turns out to be a `Henon attractor'. Measures are introduced for the gain of technological knowledge and for the information of technological sequences, based respectively on the underlying multi-valued modal logic of technological research and on nonlinear thermodynamic considerations.
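
    The attractor mentioned above can be illustrated by iterating the canonical Henon map; the sketch below uses the classical parameters a = 1.4, b = 0.3 and does not reproduce the authors' knowledge-production model.

```python
# Minimal sketch: iterate the canonical Henon map. Classical parameters only;
# this is purely illustrative of the attractor named in the abstract.
def henon(x, y, a=1.4, b=0.3):
    return 1 - a * x**2 + y, b * x

x, y = 0.1, 0.1
trajectory = []
for _ in range(1000):
    x, y = henon(x, y)
    trajectory.append((x, y))

print(trajectory[-3:])  # late iterates settle onto the Henon attractor
```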

  8. Revisiting Professional Learning Communities to Increase College Readiness: The Importance of Pedagogical Content Knowledge

    ERIC Educational Resources Information Center

    Bausmith, Jennifer Merriman; Barry, Carol

    2011-01-01

    For over a decade, professional learning communities (PLCs) have been touted as an effective way to build upon the knowledge and skills of experienced teachers, yet much of the evidence base is derived from self-reports by practitioners. Although several generations of school reform (the standards movement, No Child Left Behind, and now the Common…

  9. Assessing Learning Progression of Energy Concepts across Middle School Grades: The Knowledge Integration Perspective

    ERIC Educational Resources Information Center

    Lee, Hee-Sun; Liu, Ou Lydia

    2010-01-01

    We use a construct-based assessment approach to measure learning progression of energy concepts across physical, life, and earth science contexts in middle school grades. We model the knowledge integration construct in six levels in terms of the numbers of ideas and links used in student-generated explanations. For this study, we selected 10 items…

  10. When generating answers benefits arithmetic skill: the importance of prior knowledge.

    PubMed

    Rittle-Johnson, Bethany; Kmicikewycz, Alexander Oleksij

    2008-09-01

    People remember information better if they generate the information while studying rather than read the information. However, prior research has not investigated whether this generation effect extends to related but unstudied items and has not been conducted in classroom settings. We compared third graders' success on studied and unstudied multiplication problems after they spent a class period generating answers to problems or reading the answers from a calculator. The effect of condition interacted with prior knowledge. Students with low prior knowledge had higher accuracy in the generate condition, but as prior knowledge increased, the advantage of generating answers decreased. The benefits of generating answers may extend to unstudied items and to classroom settings, but only for learners with low prior knowledge.

  11. Translating knowledge into practice: An exploratory study of dementia-specific training for community-based service providers.

    PubMed

    O'Sullivan, Grace; Hocking, Clare; McPherson, Kathryn

    2017-08-01

    Objective To develop, deliver, and evaluate dementia-specific training designed to inform service delivery by enhancing the knowledge of community-based service providers. Methods This exploratory qualitative study used an interdisciplinary, interuniversity team approach to develop and deliver dementia-specific training. Participants included management, care staff, and clients from three organizations funded to provide services in the community. Data on the acceptability, applicability, and perceived outcomes of the training were gathered through focus group discussions and individual interviews. Transcripts were analyzed to generate open codes which were clustered into themes and sub-themes addressing the content, delivery, and value of the training. Findings Staff valued up-to-date knowledge and "real stories" grounded in practice. Clients welcomed the strengths-based approach. Contractual obligations impact on the application of knowledge in practice. Implications The capacity to implement new knowledge may be limited by the legislative policies which frame service provision, to the detriment of service users.

  12. Evaluating an education/training module to foster knowledge of cockpit weather technology.

    PubMed

    Cobbett, Erin A; Blickensderfer, Elizabeth L; Lanicci, John

    2014-10-01

    Previous research has indicated that general aviation (GA) pilots may use the sophisticated meteorological information available to them via a variety of Next-Generation Weather Radar (NEXRAD) based weather products in a manner that actually decreases flight safety. The current study examined an education/training method designed to enable GA pilots to use NEXRAD-based products effectively in convective weather situations. The training method was lecture combined with paper-based scenario exercises. A multivariate analysis of variance revealed that subjects in the training condition performed significantly better than did subjects in the control condition on several knowledge and attitude measures. Subjects in the training condition improved from a mean score of 66% to 80% on the radar-knowledge test and from 62% to 75% on the scenario-knowledge test. Although additional research is needed, these results demonstrated that pilots can benefit from a well-designed education/training program involving specific areas of aviation weather-related knowledge.

  13. [Trends on generation and reproduction of knowledge about economic evaluation and health].

    PubMed

    Arredondo, A; Parada, I

    2001-08-01

    This paper identifies the trends and recent progress in the generation and reproduction of knowledge on health economic evaluation. Analysis is organized along nine public health action fields, namely: health determinants and predictors, economic value of health, healthcare demand, healthcare supply, microeconomic evaluation of healthcare, healthcare market balance, evaluation of policy instruments, general evaluation of the health system, and healthcare planning, regulation and supervision. Each action field is defined to place the reader in the proper setting and level of analysis. In addition, thematic research topics developed in each action field are proposed and discussed. The analysis of the generation and reproduction of knowledge on the different action fields was based on a review of the bibliographic databases MEDLINE and LILACS for the 1992-2000 period. The results lead to the conclusion that the development and application of economic evaluation of healthcare have been uneven across countries and that applications have grown steadily since 1994, the year healthcare reform began in Latin America.

  14. Uncertainty management by relaxation of conflicting constraints in production process scheduling

    NASA Technical Reports Server (NTRS)

    Dorn, Juergen; Slany, Wolfgang; Stary, Christian

    1992-01-01

    Mathematical-analytical methods as used in Operations Research approaches are often insufficient for scheduling problems. This is due to three reasons: the combinatorial complexity of the search space, conflicting objectives for production optimization, and the uncertainty in the production process. Knowledge-based techniques, especially approximate reasoning and constraint relaxation, are promising ways to overcome these problems. A case study from an industrial CIM environment, namely high-grade steel production, is presented to demonstrate how knowledge-based scheduling with the desired capabilities could work. By using fuzzy set theory, the applied knowledge representation technique covers the uncertainty inherent in the problem domain. Based on this knowledge representation, a classification of jobs according to their importance is defined which is then used for the straightforward generation of a schedule. A control strategy which comprises organizational, spatial, temporal, and chemical constraints is introduced. The strategy supports the dynamic relaxation of conflicting constraints in order to improve tentative schedules.
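
    A minimal sketch of two of the ideas described above, fuzzy classification of jobs by importance and relaxation of the weakest conflicting constraint; the membership functions, jobs, and constraint weights are hypothetical, not the steel-plant knowledge base.

```python
# Minimal sketch: rank jobs by a fuzzy importance measure and relax the
# lowest-weight constraint first when constraints conflict. All numbers are
# illustrative placeholders.
def importance(due_in_hours, grade_value):
    urgency = max(0.0, min(1.0, (48 - due_in_hours) / 48))   # fuzzy "urgent"
    value = max(0.0, min(1.0, grade_value / 100))            # fuzzy "high grade"
    return max(urgency, value)                               # fuzzy OR

jobs = {"heat_A": (12, 90), "heat_B": (40, 30), "heat_C": (6, 55)}
ranked = sorted(jobs, key=lambda j: importance(*jobs[j]), reverse=True)
print("Scheduling order:", ranked)

# If two constraints conflict, relax the one with the lower weight first.
constraints = {"same_ladle_sequence": 0.9, "preferred_start_time": 0.4}
to_relax = min(constraints, key=constraints.get)
print("Relax first:", to_relax)
```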

  15. Pain Assessment and Management in Nursing Education Using Computer-based Simulations.

    PubMed

    Romero-Hall, Enilda

    2015-08-01

    It is very important for nurses to have a clear understanding of the patient's pain experience and of management strategies. However, a review of the nursing literature shows that one of the main barriers to proper pain management practice is lack of knowledge. Nursing schools are in a unique position to address the gap in pain management knowledge by facilitating the acquisition and use of knowledge by the next generation of nurses. The purpose of this article is to discuss the role of computer-based simulations as a reliable educational technology strategy that can enhance the learning experience of nursing students acquiring pain management knowledge and practice. Computer-based simulations provide a significant number of learning affordances that can help change nursing students' attitudes and behaviors toward and practice of pain assessment and management. Copyright © 2015 American Society for Pain Management Nursing. Published by Elsevier Inc. All rights reserved.

  16. D and D knowledge management information tool - a web based system developed to share D and D knowledge worldwide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lagos, L.; Upadhyay, H.; Shoffner, P.

    2013-07-01

    Deactivation and decommissioning (D and D) work is a high risk and technically challenging enterprise within the U.S. Department of Energy complex. During the past three decades, the DOE's Office of Environmental Management has been in charge of carrying out one of the largest environmental restoration efforts in the world: the cleanup of the Manhattan Project legacy. In today's corporate world, worker experiences and knowledge that have developed over time represent a valuable corporate asset. The ever-dynamic workplace, coupled with an aging workforce, presents corporations with the ongoing challenge of preserving work-related experiences and knowledge for cross-generational knowledge transfer to the future workforce [5]. To prevent the D and D knowledge base and expertise from being lost over time, the DOE and the Applied Research Center at Florida International University (FIU) have developed the web-based Knowledge Management Information Tool (KM-IT) to capture and maintain this valuable information in a universally available and easily accessible and usable system. The D and D KM-IT was developed in collaboration with DOE Headquarters (HQ), the Energy Facility Contractors Group (EFCOG), and the ALARA [as low as reasonably achievable] Centers at Savannah River Sites to preserve the D and D information generated and collected by the D and D community. This is an open secured system that can be accessed from https://www.dndkm.org over the web and through mobile devices at https://m.dndkm.org. This knowledge system serves as a centralized repository and provides a common interface for D and D-related activities. It also improves efficiency by reducing the need to rediscover knowledge and promotes the reuse of existing knowledge. It is a community-driven system that facilitates the gathering, analyzing, storing, and sharing of knowledge and information within the D and D community. It assists the DOE D and D community in identifying potential solutions to their problem areas by using the vast resources and knowledge base available throughout the global D and D community. The D and D KM-IT offers a mechanism to the global D and D community for searching relevant D and D information and is focused on providing a single point of access into the collective knowledge base of the D and D community within and outside of the DOE. Collecting information from subject matter specialists, it builds a knowledge repository for future reference archiving Lessons Learned, Best Practices, ALARA reports, and other relevant documents and maintains a secured collaboration platform for the global D and D community to share knowledge. With the dynamic nature and evolution of the D and D knowledge base due to multiple factors such as changes in the workforce, new technologies and methodologies, economics, and regulations, the D and D KM-IT is being developed in a phased and modular fashion. (authors)

  17. Supervised Learning Based Hypothesis Generation from Biomedical Literature.

    PubMed

    Sang, Shengtian; Yang, Zhihao; Li, Zongyao; Lin, Hongfei

    2015-01-01

    Nowadays, the amount of biomedical literature is growing at an explosive speed, and much useful knowledge remains undiscovered in it. Researchers can form biomedical hypotheses by mining this literature. In this paper, we propose a supervised learning based approach to generate hypotheses from biomedical literature. This approach splits the traditional processing of hypothesis generation with the classic ABC model into an AB model and a BC model, both constructed with supervised learning methods. Compared with concept co-occurrence and grammar engineering-based approaches such as SemRep, machine learning based models can usually achieve better performance in information extraction (IE) from texts. By combining the two models, the approach then reconstructs the ABC model and generates biomedical hypotheses from the literature. Experimental results on the three classic Swanson hypotheses show that our approach outperforms the SemRep system.
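
    A minimal sketch of the underlying closed-discovery ABC idea (co-occurrence only, not the supervised AB/BC models of the paper): A and C are linked through intermediate B-terms that co-occur with each of them in the literature. The toy "documents" echo Swanson's fish-oil/Raynaud example and are purely illustrative.

```python
# Minimal sketch: find shared intermediate terms (B) that co-occur with both
# the A-term and the C-term across a toy document collection.
docs = [
    "fish oil reduces blood viscosity",
    "raynaud disease involves high blood viscosity",
    "fish oil inhibits platelet aggregation",
    "platelet aggregation is elevated in raynaud disease",
]

def cooccurring_terms(term):
    terms = set()
    for doc in docs:
        if term in doc:
            terms.update(doc.split())
    terms.discard(term)
    return terms

a_term, c_term = "fish oil", "raynaud"
b_candidates = cooccurring_terms(a_term) & cooccurring_terms(c_term)
print("Shared B-terms linking A and C:", sorted(b_candidates))
```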

  18. Predicting future discoveries from current scientific literature.

    PubMed

    Petrič, Ingrid; Cestnik, Bojan

    2014-01-01

    Knowledge discovery in biomedicine is a time-consuming process starting from basic research, through preclinical testing, towards possible clinical applications. Crossing of conceptual boundaries is often needed for groundbreaking biomedical research that generates highly inventive discoveries. We demonstrate the ability of a creative literature mining method to advance valuable new discoveries based on rare ideas from existing literature. When emerging ideas from scientific literature are put together as fragments of knowledge in a systematic way, they may lead to original, sometimes surprising, research findings. If enough scientific evidence is already published for the association of such findings, they can be considered as scientific hypotheses. In this chapter, we describe a method for the computer-aided generation of such hypotheses based on the existing scientific literature. Our literature-based discovery of NF-kappaB with its possible connections to autism was recently approved by the scientific community, which confirms the ability of our literature mining methodology to accelerate future discoveries based on rare ideas from existing literature.

  19. How Will "Generation Me, Me, Me" Work for Others' Children?

    ERIC Educational Resources Information Center

    Clement, Mary C.

    2016-01-01

    If what is known about the work of teachers is overlaid across the information from the literature about the millennial generation, what comparisons can be made? What may be different with regard to providing millennial teachers the support and motivation they need for their work? How might employers tap in to that knowledge base in hiring,…

  20. Research on complex 3D tree modeling based on L-system

    NASA Astrophysics Data System (ADS)

    Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li

    2018-03-01

    The L-system, as a fractal iterative system, can simulate complex geometric patterns. Based on field observation data of trees and the knowledge of forestry experts, this paper extracted modeling constraint rules and obtained an L-system rule set. Using self-developed L-system modeling software, the rule set was parsed to generate complex 3D tree models. The results showed that the geometric modeling method based on L-systems can be used to describe the morphological structure of complex trees and to generate 3D tree models.
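
    A minimal sketch of L-system string rewriting, the core mechanism referred to above: production rules are applied repeatedly to an axiom, and in a full pipeline the resulting string would be interpreted (turtle-style) to build 3D branch geometry. The rule set here is a generic branching example, not the forestry-derived rules.

```python
# Minimal sketch: repeatedly apply L-system production rules to an axiom.
# "F" draws a segment, "+"/"-" turn, "[" / "]" push/pop branch state when the
# string is later interpreted by a turtle-style renderer.
axiom = "X"
rules = {"X": "F[+X][-X]FX", "F": "FF"}

def rewrite(s, rules, iterations):
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

tree_string = rewrite(axiom, rules, 3)
print(len(tree_string), tree_string[:60])
```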

  1. Community based research for an urban recreation application of benefits-based management

    Treesearch

    William T. Borrie; Joseph W. Roggenbuck

    1995-01-01

    Benefits-based management is an approach to park and recreation management that focuses on the positive outcomes of engaging in recreational experiences. Because one class of possible benefits accrue to the community, a philosophical framework is discussed suggesting that communities are themselves the primary sources, generators, and repositories of knowledge....

  2. Studies in knowledge-based diagnosis of failures in robotic assembly

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Pollard, Nancy S.; Desai, Rajiv S.

    1990-01-01

    The telerobot diagnostic system (TDS) is a knowledge-based system that is being developed for identification and diagnosis of failures in the space robotic domain. The system is able to isolate the symptoms of the failure, generate failure hypotheses based on these symptoms, and test their validity at various levels by interpreting or simulating the effects of the hypotheses on results of plan execution. The implementation of the TDS is outlined. The classification of failures and the types of system models used by the TDS are discussed. A detailed example of the TDS approach to failure diagnosis is provided.
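
    A minimal sketch of the hypothesize-and-test pattern described above, assuming each failure hypothesis predicts a set of observable symptoms and that hypotheses whose predictions are all observed are retained; the failure modes and symptoms are hypothetical, not the TDS knowledge base.

```python
# Minimal sketch: retain failure hypotheses whose predicted symptoms are all
# present in the observed symptom set. Names below are illustrative only.
hypotheses = {
    "gripper_jammed": {"no_part_acquired", "motor_current_high"},
    "part_missing_from_feeder": {"no_part_acquired", "motor_current_normal"},
    "camera_miscalibrated": {"misaligned_insertion"},
}

observed = {"no_part_acquired", "motor_current_high"}

# a hypothesis is retained if every symptom it predicts was actually observed
consistent = [h for h, predicted in hypotheses.items() if predicted <= observed]
print("Consistent failure hypotheses:", consistent)
```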

  3. Critical review on the mechanisms of maturation stress generation in trees

    PubMed Central

    Clair, Bruno

    2016-01-01

    Trees control their posture by generating asymmetric mechanical stress around the periphery of the trunk or branches. This stress is produced in wood during the maturation of the cell wall. When the need for reaction is high, it is accompanied by strong changes in cell organization and composition called reaction wood, namely compression wood in gymnosperms and tension wood in angiosperms. The process by which stress is generated in the cell wall during its formation is not yet known, and various hypothetical mechanisms have been proposed in the literature. Here we aim at discriminating between these models. First, we summarize current knowledge about reaction wood structure, state and behaviour relevant to the understanding of maturation stress generation. Then, the mechanisms proposed in the literature are listed and discussed in order to identify which can be rejected based on their inconsistency with current knowledge at the frontier between plant science and mechanical engineering. PMID:27605169

  4. Knowledge Data Base for Amorphous Metals

    DTIC Science & Technology

    2007-07-26

    not programmatic, updates. Over 100 custom SQL statements that maintain the domain specific data are attached to the workflow entries in a generic...for the form by populating the SQL and run generation tables. Application data may be prepared in different ways for two steps that invoke the same form...run generation mode). There is a single table of SQL commands. Each record has a user-definable ID, the SQL code, and a comment. The run generation

  5. Navigating complexity through knowledge coproduction: Mainstreaming ecosystem services into disaster risk reduction.

    PubMed

    Reyers, Belinda; Nel, Jeanne L; O'Farrell, Patrick J; Sitas, Nadia; Nel, Deon C

    2015-06-16

    Achieving the policy and practice shifts needed to secure ecosystem services is hampered by the inherent complexities of ecosystem services and their management. Methods for the participatory production and exchange of knowledge offer an avenue to navigate this complexity together with the beneficiaries and managers of ecosystem services. We develop and apply a knowledge coproduction approach based on social-ecological systems research and assess its utility in generating shared knowledge and action for ecosystem services. The approach was piloted in South Africa across four case studies aimed at reducing the risk of disasters associated with floods, wildfires, storm waves, and droughts. Different configurations of stakeholders (knowledge brokers, assessment teams, implementers, and bridging agents) were involved in collaboratively designing each study, generating and exchanging knowledge, and planning for implementation. The approach proved useful in the development of shared knowledge on the sizable contribution of ecosystem services to disaster risk reduction. This knowledge was used by stakeholders to design and implement several actions to enhance ecosystem services, including new investments in ecosystem restoration, institutional changes in the private and public sector, and innovative partnerships of science, practice, and policy. By bringing together multiple disciplines, sectors, and stakeholders to jointly produce the knowledge needed to understand and manage a complex system, knowledge coproduction approaches offer an effective avenue for the improved integration of ecosystem services into decision making.

  6. Navigating complexity through knowledge coproduction: Mainstreaming ecosystem services into disaster risk reduction

    PubMed Central

    Reyers, Belinda; Nel, Jeanne L.; O’Farrell, Patrick J.; Sitas, Nadia; Nel, Deon C.

    2015-01-01

    Achieving the policy and practice shifts needed to secure ecosystem services is hampered by the inherent complexities of ecosystem services and their management. Methods for the participatory production and exchange of knowledge offer an avenue to navigate this complexity together with the beneficiaries and managers of ecosystem services. We develop and apply a knowledge coproduction approach based on social–ecological systems research and assess its utility in generating shared knowledge and action for ecosystem services. The approach was piloted in South Africa across four case studies aimed at reducing the risk of disasters associated with floods, wildfires, storm waves, and droughts. Different configurations of stakeholders (knowledge brokers, assessment teams, implementers, and bridging agents) were involved in collaboratively designing each study, generating and exchanging knowledge, and planning for implementation. The approach proved useful in the development of shared knowledge on the sizable contribution of ecosystem services to disaster risk reduction. This knowledge was used by stakeholders to design and implement several actions to enhance ecosystem services, including new investments in ecosystem restoration, institutional changes in the private and public sector, and innovative partnerships of science, practice, and policy. By bringing together multiple disciplines, sectors, and stakeholders to jointly produce the knowledge needed to understand and manage a complex system, knowledge coproduction approaches offer an effective avenue for the improved integration of ecosystem services into decision making. PMID:26082541

  7. Mothers' electrophysiological, subjective, and observed emotional responding to infant crying: The role of secure base script knowledge.

    PubMed

    Groh, Ashley M; Roisman, Glenn I; Haydon, Katherine C; Bost, Kelly; McElwain, Nancy; Garcia, Leanna; Hester, Colleen

    2015-11-01

    This study examined the extent to which secure base script knowledge (reflected in the ability to generate narratives in which attachment-relevant events are encountered, a clear need for assistance is communicated, competent help is provided and accepted, and the problem is resolved) is associated with mothers' electrophysiological, subjective, and observed emotional responses to an infant distress vocalization. While listening to an infant crying, mothers (N = 108, M age = 34 years) lower on secure base script knowledge exhibited smaller shifts in relative left (vs. right) frontal EEG activation from rest, reported smaller reductions in feelings of positive emotion from rest, and expressed greater levels of tension. Findings indicate that lower levels of secure base script knowledge are associated with an organization of emotional responding indicative of a less flexible and more emotionally restricted response to infant distress. Discussion focuses on the contribution of mothers' attachment representations to their ability to effectively manage emotional responding to infant distress in a manner expected to support sensitive caregiving.

  8. Model-driven development of covariances for spatiotemporal environmental health assessment.

    PubMed

    Kolovos, Alexander; Angulo, José Miguel; Modis, Konstantinos; Papantonopoulos, George; Wang, Jin-Feng; Christakos, George

    2013-01-01

    Known conceptual and technical limitations of mainstream environmental health data analysis have directed research to new avenues. The goal is to deal more efficiently with the inherent uncertainty and composite space-time heterogeneity of key attributes, account for multi-sourced knowledge bases (health models, survey data, empirical relationships etc.), and generate more accurate predictions across space-time. Based on a versatile, knowledge synthesis methodological framework, we introduce new space-time covariance functions built by integrating epidemic propagation models and we apply them in the analysis of existing flu datasets. Within the knowledge synthesis framework, the Bayesian maximum entropy theory is our method of choice for the spatiotemporal prediction of the ratio of new infectives (RNI) for a case study of flu in France. The space-time analysis is based on observations during a period of 15 weeks in 1998-1999. We present general features of the proposed covariance functions, and use these functions to explore the composite space-time RNI dependency. We then implement the findings to generate sufficiently detailed and informative maps of the RNI patterns across space and time. The predicted distributions of RNI suggest substantive relationships in accordance with the typical physiographic and climatologic features of the country.
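
    A minimal sketch of a generic separable space-time covariance, C(h, u) = sigma^2 exp(-h/a_s) exp(-u/a_t), evaluated between observation points; this textbook form only illustrates the kind of object being built and is not the epidemic-propagation covariance of the paper.

```python
# Minimal sketch: evaluate a separable exponential space-time covariance
# matrix between a few observation points. Range parameters are illustrative.
import numpy as np

def spacetime_cov(coords, times, sigma2=1.0, a_s=100.0, a_t=3.0):
    d_space = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    d_time = np.abs(times[:, None] - times[None, :])
    return sigma2 * np.exp(-d_space / a_s) * np.exp(-d_time / a_t)

coords = np.array([[0.0, 0.0], [50.0, 20.0], [120.0, 80.0]])  # km
times = np.array([0.0, 1.0, 4.0])                             # weeks
print(spacetime_cov(coords, times))
```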

  9. A feature dictionary supporting a multi-domain medical knowledge base.

    PubMed

    Naeymi-Rad, F

    1989-01-01

    Because different terminology is used by physicians of different specialties in different locations to refer to the same feature (signs, symptoms, test results), it is essential that our knowledge development tools provide a means to access a common pool of terms. This paper discusses the design of an online medical dictionary that provides a solution to this problem for developers of multi-domain knowledge bases for MEDAS (Medical Emergency Decision Assistance System). Our Feature Dictionary supports phrase equivalents for features, feature interactions, feature classifications, and translations to the binary features generated by the expert during knowledge creation. It is also used in the conversion of a domain knowledge to the database used by the MEDAS inference diagnostic sessions. The Feature Dictionary also provides capabilities for complex queries across multiple domains using the supported relations. The Feature Dictionary supports three methods for feature representation: (1) for binary features, (2) for continuous valued features, and (3) for derived features.
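
    A minimal sketch of a feature dictionary of the kind described: phrase equivalents map to a canonical feature, and each entry records its representation type (binary, continuous, or derived). The entries and rule syntax are illustrative, not the MEDAS dictionary.

```python
# Minimal sketch: map phrase equivalents to canonical features, each carrying
# its representation type. Entries below are illustrative placeholders.
features = {
    "chest_pain": {"type": "binary", "synonyms": {"chest pain", "thoracic pain"}},
    "body_temp": {"type": "continuous", "unit": "C",
                  "synonyms": {"temperature", "fever reading"}},
    "fever": {"type": "derived", "rule": "body_temp > 38.0",
              "synonyms": {"febrile", "pyrexia"}},
}

def canonical(phrase):
    phrase = phrase.lower()
    for name, entry in features.items():
        if phrase == name or phrase in entry["synonyms"]:
            return name, entry["type"]
    return None, None

print(canonical("pyrexia"))        # ('fever', 'derived')
print(canonical("thoracic pain"))  # ('chest_pain', 'binary')
```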

  10. Knowledge, attitudes and intention regarding mHealth in generation Y: evidence from a population based cross sectional study in Chakaria, Bangladesh

    PubMed Central

    Rahman, M Shafiqur; Hanifi, Syed; Khatun, Fatema; Iqbal, Mohammad; Rasheed, Sabrina; Ahmed, Tanvir; Hoque, Shahidul; Sharmin, Tamanna; Khan, Nazib-Uz Zaman; Mahmood, Shehrin Shaila; Bhuiya, Abbas

    2017-01-01

    Background and objectives mHealth offers a new opportunity to ensure access to qualified healthcare providers. Therefore, to better understand its potential in Bangladesh, it is important to understand how young people use mobile phones for healthcare. Here we examine the knowledge, attitudes and intentions to use mHealth services among the young population. Design Population based cross sectional household survey. Setting and participants A total of 4909 respondents, aged 18 years and above, under the Chakaria Health and Demographic Surveillance System (HDSS) area, were interviewed during the period November 2012 to April 2013. Methods Participants younger than 30 years of age were defined as young (or generation Y). To examine the level of knowledge about and intention towards mHealth services in generation Y compared with their older counterparts, the percentage of the respective outcome measure from a 2×2 contingency table and adjusted odds ratio (aOR), which controls for potential confounders such as mobile ownership, sex, education, occupation and socioeconomic status, were estimated. The aOR was estimated using both the Cochran–Mantel–Haenszel approach and multivariable logistic regression models controlling for confounders. Results Generation Y had significantly greater access to mobile phones (50% vs 40%) and better knowledge about its use for healthcare (37.8% vs 27.5%; aOR 1.6 (95% CI 1.3 to 2.0)). Furthermore, the level of knowledge about two existing mHealth services in generation Y was significantly higher compared with their older counterparts, with aOR values of 3.2 (95% CI 2.6 to 5.5) and 1.5 (95% CI 1.1 to 1.8), respectively. Similarly, generation Y showed significantly greater intention towards future use of mHealth services compared with their older counterparts (aOR 1.3 (95% CI 1.1 to 1.4)). The observed associations were not modified by sociodemographic factors. Conclusion There is a greater potential for mHealth services in the future among young people compared with older age groups. However, given the low overall use of mHealth, appropriate policy measures need to be formulated to enhance availability, access, utilisation and effectiveness of mHealth services. PMID:29146634

  11. Knowledge, attitudes and intention regarding mHealth in generation Y: evidence from a population based cross sectional study in Chakaria, Bangladesh.

    PubMed

    Rahman, M Shafiqur; Hanifi, Syed; Khatun, Fatema; Iqbal, Mohammad; Rasheed, Sabrina; Ahmed, Tanvir; Hoque, Shahidul; Sharmin, Tamanna; Khan, Nazib-Uz Zaman; Mahmood, Shehrin Shaila; Bhuiya, Abbas

    2017-11-15

    mHealth offers a new opportunity to ensure access to qualified healthcare providers. Therefore, to better understand its potential in Bangladesh, it is important to understand how young people use mobile phones for healthcare. Here we examine the knowledge, attitudes and intentions to use mHealth services among the young population. Population based cross sectional household survey. A total of 4909 respondents, aged 18 years and above, under the Chakaria Health and Demographic Surveillance System (HDSS) area, were interviewed during the period November 2012 to April 2013. Participants younger than 30 years of age were defined as young (or generation Y). To examine the level of knowledge about and intention towards mHealth services in generation Y compared with their older counterparts, the percentage of the respective outcome measure from a 2×2 contingency table and adjusted odds ratio (aOR), which controls for potential confounders such as mobile ownership, sex, education, occupation and socioeconomic status, were estimated. The aOR was estimated using both the Cochran-Mantel-Haenszel approach and multivariable logistic regression models controlling for confounders. Generation Y had significantly greater access to mobile phones (50% vs 40%) and better knowledge about its use for healthcare (37.8% vs 27.5%; aOR 1.6 (95% CI 1.3 to 2.0)). Furthermore, the level of knowledge about two existing mHealth services in generation Y was significantly higher compared with their older counterparts, with aOR values of 3.2 (95% CI 2.6 to 5.5) and 1.5 (95% CI 1.1 to 1.8), respectively. Similarly, generation Y showed significantly greater intention towards future use of mHealth services compared with their older counterparts (aOR 1.3 (95% CI 1.1 to 1.4)). The observed associations were not modified by sociodemographic factors. There is a greater potential for mHealth services in the future among young people compared with older age groups. However, given the low overall use of mHealth, appropriate policy measures need to be formulated to enhance availability, access, utilisation and effectiveness of mHealth services. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  12. Factors influencing analysis of complex cognitive tasks: a framework and example from industrial process control.

    PubMed

    Prietula, M J; Feltovich, P J; Marchak, F

    2000-01-01

    We propose that considering four categories of task factors can facilitate knowledge elicitation efforts in the analysis of complex cognitive tasks: materials, strategies, knowledge characteristics, and goals. A study was conducted to examine the effects of altering aspects of two of these task categories on problem-solving behavior across skill levels: materials and goals. Two versions of an applied engineering problem were presented to expert, intermediate, and novice participants. Participants were to minimize the cost of running a steam generation facility by adjusting steam generation levels and flows. One version was cast in the form of a dynamic, computer-based simulation that provided immediate feedback on flows, costs, and constraint violations, thus incorporating key variable dynamics of the problem context. The other version was cast as a static computer-based model, with no dynamic components, cost feedback, or constraint checking. Experts performed better than the other groups across material conditions, and, when required, the presentation of the goal assisted the experts more than the other groups. The static group generated richer protocols than the dynamic group, but the dynamic group solved the problem in significantly less time. Little effect of feedback was found for intermediates, and none for novices. We conclude that demonstrating differences in performance in this task requires different materials than explicating underlying knowledge that leads to performance. We also conclude that substantial knowledge is required to exploit the information yielded by the dynamic form of the task or the explicit solution goal. This simple model can help to identify the contextual factors that influence elicitation and specification of knowledge, which is essential in the engineering of joint cognitive systems.

  13. Semantic based man-machine interface for real-time communication

    NASA Technical Reports Server (NTRS)

    Ali, M.; Ai, C.-S.

    1988-01-01

    A flight expert system (FLES) was developed to assist pilots in monitoring, diagnosing and recovering from in-flight faults. To provide a communications interface between the flight crew and FLES, a natural language interface (NALI) was implemented. Input to NALI is processed by three processors: (1) the semantic parser; (2) the knowledge retriever; and (3) the response generator. First, the semantic parser extracts meaningful words and phrases to generate an internal representation of the query. At this point, the semantic parser has the ability to map different input forms related to the same concept into the same internal representation. Then, the knowledge retriever analyzes and stores the context of the query to aid in resolving ellipses and pronoun references. At the end of this process, a sequence of retrieval functions is created as a first step in generating the proper response. Finally, the response generator generates the natural language response to the query. The architecture of NALI was designed to process both temporal and nontemporal queries. The architecture and implementation of NALI are described.
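
    A minimal sketch of the three-stage pipeline described above (semantic parsing, knowledge retrieval, response generation), using a toy fault knowledge base; the vocabulary, data, and function names are hypothetical, not the NALI implementation.

```python
# Minimal sketch: a query passes through parsing, retrieval, and response
# generation against a toy knowledge base. All names are illustrative.
knowledge = {"hydraulic_pressure": {"status": "low", "subsystem": "hydraulics"}}

def parse(query):
    # map surface words to an internal representation of the query
    if "pressure" in query.lower():
        return {"attribute": "hydraulic_pressure", "ask": "status"}
    return None

def retrieve(rep):
    entry = knowledge.get(rep["attribute"], {})
    return entry.get(rep["ask"])

def respond(rep, value):
    return f"The {rep['attribute'].replace('_', ' ')} is currently {value}."

rep = parse("What is the hydraulic pressure?")
print(respond(rep, retrieve(rep)))
```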

  14. The Teaching of Protein Synthesis--A Microcomputer Based Method.

    ERIC Educational Resources Information Center

    Goodridge, Frank

    1983-01-01

    Describes two computer programs (BASIC for 32K Commodore PET) for teaching protein synthesis. The first is an interactive test of base-pairing knowledge, and the second generates random DNA nucleotide sequences, with instructions for substitution, insertion, and deletion printed out for each student. (JN)
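
    An illustrative modern equivalent of the second program described (the originals were BASIC programs for the Commodore PET): generate a random DNA sequence and derive its complementary strand by base pairing.

```python
# Minimal sketch: generate a random DNA template strand and its complement
# using Watson-Crick base pairing.
import random

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def random_dna(length=12):
    return "".join(random.choice("ATGC") for _ in range(length))

template = random_dna()
complement = "".join(PAIR[base] for base in template)
print("Template:  ", template)
print("Complement:", complement)
```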

  15. E-pharmacovigilance: development and implementation of a computable knowledge base to identify adverse drug reactions.

    PubMed

    Neubert, Antje; Dormann, Harald; Prokosch, Hans-Ulrich; Bürkle, Thomas; Rascher, Wolfgang; Sojer, Reinhold; Brune, Kay; Criegee-Rieck, Manfred

    2013-09-01

    Computer-assisted signal generation is an important issue for the prevention of adverse drug reactions (ADRs). However, due to poor standardization of patients' medical data and a lack of computable medical drug knowledge the specificity of computerized decision support systems for early ADR detection is too low and thus those systems are not yet implemented in daily clinical practice. We report on a method to formalize knowledge about ADRs based on the Summary of Product Characteristics (SmPCs) and linking them with structured patient data to generate safety signals automatically and with high sensitivity and specificity. A computable ADR knowledge base (ADR-KB) that inherently contains standardized concepts for ADRs (WHO-ART), drugs (ATC) and laboratory test results (LOINC) was built. The system was evaluated in study populations of paediatric and internal medicine inpatients. A total of 262 different ADR concepts related to laboratory findings were linked to 212 LOINC terms. The ADR knowledge base was retrospectively applied to a study population of 970 admissions (474 internal and 496 paediatric patients), who underwent intensive ADR surveillance. The specificity increased from 7% without ADR-KB up to 73% in internal patients and from 19.6% up to 91% in paediatric inpatients, respectively. This study shows that contextual linkage of patients' medication data with laboratory test results is a useful and reasonable instrument for computer-assisted ADR detection and a valuable step towards a systematic drug safety process. The system enables automated detection of ADRs during clinical practice with a quality close to intensive chart review. © 2013 The Authors. British Journal of Clinical Pharmacology © 2013 The British Pharmacological Society.
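
    A minimal sketch of what a computable ADR rule might look like, assuming a drug (ATC code) combined with an out-of-range laboratory result (LOINC code) raises a candidate signal; the codes, threshold, and rule structure are illustrative placeholders, not the published ADR-KB.

```python
# Minimal sketch: raise a candidate ADR signal when a prescribed drug (ATC)
# co-occurs with an out-of-range lab value (LOINC). Threshold is made up.
adr_rules = [
    {"atc": "N02BE01",          # paracetamol (example ATC code)
     "loinc": "1742-6",         # ALT (example LOINC code)
     "max_value": 120.0,        # U/L, hypothetical signal threshold
     "adr": "hepatic enzyme increased"},
]

patient = {
    "drugs": {"N02BE01"},
    "labs": {"1742-6": 310.0},
}

signals = [rule["adr"] for rule in adr_rules
           if rule["atc"] in patient["drugs"]
           and patient["labs"].get(rule["loinc"], 0.0) > rule["max_value"]]
print("Candidate ADR signals:", signals)
```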

  16. Verification and Validation of KBS with Neural Network Components

    NASA Technical Reports Server (NTRS)

    Wen, Wu; Callahan, John

    1996-01-01

    Artificial Neural Networks (ANNs) play an important role in developing robust Knowledge Based Systems (KBS). The ANN based components used in these systems learn to give appropriate predictions through training with correct input-output data patterns. Unlike a traditional KBS that depends on a rule database and a production engine, an ANN based system mimics the decisions of an expert without explicitly formulating if-then rules. In fact, ANNs demonstrate their superiority when such if-then rules are hard for a human expert to generate. Verification of a traditional knowledge based system is based on proof of the consistency and completeness of the rule knowledge base and the correctness of the production engine. These techniques, however, cannot be directly applied to ANN based components. In this position paper, we propose a verification and validation procedure for KBS with ANN based components. The essence of the procedure is to obtain an accurate system specification through incremental modification of the specifications using an ANN rule extraction algorithm.

  17. Transformation based endorsement systems

    NASA Technical Reports Server (NTRS)

    Sudkamp, Thomas

    1988-01-01

    Evidential reasoning techniques classically represent support for a hypothesis by a numeric value or an evidential interval. The combination of support is performed by an arithmetic rule which often requires restrictions to be placed on the set of possibilities. These assumptions usually require the hypotheses to be exhaustive and mutually exclusive. Endorsement based classification systems represent support for the alternatives symbolically rather than numerically. A framework for constructing endorsement systems is presented in which transformations are defined to generate and update the knowledge base. The interaction of the knowledge base and transformations produces a non-monotonic reasoning system. Two endorsement based reasoning systems are presented to demonstrate the flexibility of the transformational approach for reasoning with ambiguous and inconsistent information.

  18. Semantic computing and language knowledge bases

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Wang, Houfeng; Yu, Shiwen

    2017-09-01

    Since the Semantic Web was proposed as the next-generation Web, semantic computing has been drawing more and more attention in both academia and industry. A great deal of research has been conducted on the theory and methodology of the subject, and potential applications have also been investigated and proposed in many fields. The progress of semantic computing made so far cannot be detached from its supporting pivot, language resources such as language knowledge bases. This paper proposes three perspectives on semantic computing from a macro view and describes the current state of the construction of language knowledge bases, as well as the related research and applications carried out on the basis of these resources, via a case study in the Institute of Computational Linguistics at Peking University.

  19. Development of a knowledge acquisition tool for an expert system flight status monitor

    NASA Technical Reports Server (NTRS)

    Disbrow, J. D.; Duke, E. L.; Regenie, V. A.

    1986-01-01

    Two of the main issues in artificial intelligence today are knowledge acquisition and knowledge representation. The Dryden Flight Research Facility of NASA's Ames Research Center is presently involved in the design and implementation of an expert system flight status monitor that will provide expertise and knowledge to aid the flight systems engineer in monitoring today's advanced high-performance aircraft. The flight status monitor can be divided into two sections: the expert system itself and the knowledge acquisition tool. The knowledge acquisition tool, the means it uses to extract knowledge from the domain expert, and how that knowledge is represented for computer use are discussed. An actual aircraft system has been codified by this tool with great success. Future real-time use of the expert system has been facilitated by using the knowledge acquisition tool to easily generate a logically consistent and complete knowledge base.

  20. Development of a knowledge acquisition tool for an expert system flight status monitor

    NASA Technical Reports Server (NTRS)

    Disbrow, J. D.; Duke, E. L.; Regenie, V. A.

    1986-01-01

    Two of the main issues in artificial intelligence today are knowledge acquisition and knowledge representation. The Dryden Flight Research Facility of NASA's Ames Research Center is presently involved in the design and implementation of an expert system flight status monitor that will provide expertise and knowledge to aid the flight systems engineer in monitoring today's advanced high-performance aircraft. The flight status monitor can be divided into two sections: the expert system itself and the knowledge acquisition tool. This paper discusses the knowledge acquisition tool, the means it uses to extract knowledge from the domain expert, and how that knowledge is represented for computer use. An actual aircraft system has been codified by this tool with great success. Future real-time use of the expert system has been facilitated by using the knowledge acquisition tool to easily generate a logically consistent and complete knowledge base.

  1. Integrated Risk and Knowledge Management Program -- IRKM-P

    NASA Technical Reports Server (NTRS)

    Lengyel, David M.

    2009-01-01

    The NASA Exploration Systems Mission Directorate (ESMD) IRKM-P tightly couples risk management and knowledge management processes and tools to produce an effective "modern" work environment. IRKM-P objectives include: (1) to learn lessons from past and current programs (Apollo, Space Shuttle, and the International Space Station); (2) to generate and share new engineering design, operations, and management best practices through preexisting Continuous Risk Management (CRM) procedures and knowledge-management practices; and (3) to infuse those lessons and best practices into current activities. The conceptual framework of the IRKM-P is based on the assumption that risks highlight potential knowledge gaps that might be mitigated through one or more knowledge management practices or artifacts. These same risks also serve as cues for the collection of knowledge, particularly knowledge of technical or programmatic challenges that might recur.

  2. More than Anecdotes: Fishers’ Ecological Knowledge Can Fill Gaps for Ecosystem Modeling

    PubMed Central

    Bevilacqua, Ana Helena V.; Carvalho, Adriana R.; Angelini, Ronaldo; Christensen, Villy

    2016-01-01

    Background: Ecosystem modeling applied to fisheries remains hampered by a lack of local information. Fishers’ knowledge could fill this gap, improving participation in and the management of fisheries. Methodology: The same fishing area was modeled using two approaches: based on fishers’ knowledge and based on scientific information. For the former, the data was collected by interviews through the Delphi methodology, and for the latter, the data was gathered from the literature. Agreement between the attributes generated by the fishers’ knowledge model and scientific model is discussed and explored, aiming to improve data availability, the ecosystem model, and fisheries management. Principal Findings: The ecosystem attributes produced from the fishers’ knowledge model were consistent with the ecosystem attributes produced by the scientific model, and elaborated using only the scientific data from literature. Conclusions/Significance: This study provides evidence that fishers’ knowledge may suitably complement scientific data, and may improve the modeling tools for the research and management of fisheries. PMID:27196131

  3. Hypertension Knowledge-Level Scale (HK-LS): a study on development, validity and reliability.

    PubMed

    Erkoc, Sultan Baliz; Isikli, Burhanettin; Metintas, Selma; Kalyoncu, Cemalettin

    2012-03-01

    This study was conducted to develop a scale to measure knowledge about hypertension among Turkish adults. The Hypertension Knowledge-Level Scale (HK-LS) was generated based on content, face, and construct validity, internal consistency, test re-test reliability, and discriminative validity procedures. The final scale had 22 items with six sub-dimensions. The scale was applied to 457 individuals aged ≥ 18 years, and 414 of them were re-evaluated for test-retest reliability. The six sub-dimensions encompassed 60.3% of the total variance. Cronbach alpha coefficients were 0.82 for the entire scale and 0.92, 0.59, 0.67, 0.77, 0.72, and 0.76 for the sub-dimensions of definition, medical treatment, drug compliance, lifestyle, diet, and complications, respectively. The scale ensured internal consistency in reliability and construct validity, as well as stability over time. Significant relationships were found between knowledge score and age, gender, educational level, and history of hypertension of the participants. No correlation was found between knowledge score and working at an income-generating job. The present scale, developed to measure the knowledge level of hypertension among Turkish adults, was found to be valid and reliable.

  4. Using XML and XSLT for flexible elicitation of mental-health risk knowledge.

    PubMed

    Buckingham, C D; Ahmed, A; Adams, A E

    2007-03-01

    Current tools for assessing risks associated with mental-health problems require assessors to make high-level judgements based on clinical experience. This paper describes how new technologies can enhance qualitative research methods to identify lower-level cues underlying these judgements, which can be collected by people without a specialist mental-health background. Content analysis of interviews with 46 multidisciplinary mental-health experts exposed the cues and their interrelationships, which were represented by a mind map using software that stores maps as XML. All 46 mind maps were integrated into a single XML knowledge structure and analysed by a Lisp program to generate quantitative information about the numbers of experts associated with each part of it. The knowledge was refined by the experts, using software developed in Flash to record their collective views within the XML itself. These views specified how the XML should be transformed by XSLT, a technology for rendering XML, which resulted in a validated hierarchical knowledge structure associating patient cues with risks. Changing knowledge elicitation requirements were accommodated by flexible transformations of XML data using XSLT, which also facilitated generation of multiple data-gathering tools suiting different assessment circumstances and levels of mental-health knowledge.
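    A minimal sketch of the kind of transformation described above, assuming a toy XML knowledge structure and an invented XSLT stylesheet (the cue names, expert counts, and the 20-expert threshold are illustrative, not taken from the study): the stylesheet renders only well-supported cues as questions for a non-specialist data-gathering tool.

```python
from lxml import etree

# Hypothetical fragment of the integrated XML knowledge structure: cues under a
# risk, each annotated with the number of experts who endorsed it.
knowledge_xml = etree.XML("""
<risk name="self-harm">
  <cue id="hopelessness" experts="31"/>
  <cue id="previous-attempt" experts="40"/>
  <cue id="recent-loss" experts="12"/>
</risk>
""")

# Hypothetical XSLT stylesheet: render only cues endorsed by at least 20 experts
# as questions for a non-specialist data-gathering tool.
xslt_doc = etree.XML("""
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/risk">
    <questionnaire risk="{@name}">
      <xsl:for-each select="cue[@experts &gt;= 20]">
        <question cue="{@id}">Is this cue present?</question>
      </xsl:for-each>
    </questionnaire>
  </xsl:template>
</xsl:stylesheet>
""")

transform = etree.XSLT(xslt_doc)
print(str(transform(knowledge_xml)))
```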

  5. Traditional knowledge hiding in plain sight - twenty-first century ethnobotany of the Chácobo in Beni, Bolivia.

    PubMed

    Paniagua Zambrana, Narel Y; Bussmann, Rainer W; Hart, Robbie E; Moya Huanca, Araceli L; Ortiz Soria, Gere; Ortiz Vaca, Milton; Ortiz Álvarez, David; Soria Morán, Jorge; Soria Morán, María; Chávez, Saúl; Chávez Moreno, Bertha; Chávez Moreno, Gualberto; Roca, Oscar; Siripi, Erlin

    2017-10-10

    The Chácobo are a Panoan-speaking tribe of about 1000 members (300+ adults) in Beni, Bolivia. Originally nomadic, the Chácobo were relocated to their current main location in the 1960s. Researchers have visited the Chácobo since 1911. A first, more detailed anthropological report exists from the late 1960s, and ecological-ethnobotanical studies were conducted in the 1980s and 1990s. The presented work represents a complete ethnobotanical inventory of the entire adult Chácobo population, with interviews and plant collection conducted directly by Chácobo counterparts. Based on previous reports and our preliminary studies, we hypothesized that twenty-first century Chácobo plant use centered on income generation, and that traditional plant use related to household utensils, medicine and traditional crop varieties had almost disappeared. To test this hypothesis, we started the "Chácobo Ethnobotany Project," training 10 indigenous Chácobo participants in ethnobotanical interview and plant collection techniques, in order to more fully document Chácobo knowledge and avoid the influence of foreign interviewers. Our study found 331 useful plant species in 241 genera of 95 plant families, with leaves, roots and bark being the most commonly used plant parts. The comprehensive documentation that these methods enabled completely refuted our initial hypothesis of knowledge loss. Traditional crop varieties are still widely grown and traditional knowledge is alive. Moreover, it is being actively recuperated in certain domains by the younger generation. Most Chácobo know, and can name, traditional utensils and tools, although only the older generation still has the skills to manufacture them. While many Chácobo still know the names and uses of medicinal species, the younger generation, however, is often unsure how to identify them. In this paper we illustrate the complexity of perspectives on knowledge at different ages, and the persistence of knowledge over almost a century. We found that traditional knowledge was only partially affected by the processes of exposure to a market economy, and that different knowledge domains experienced different trends as a result of these changes. Overall knowledge was widely distributed, and we did not observe a directional knowledge loss. We stress the importance of not directly inferring processes of knowledge loss, cultural erosion or acculturation when comparing the knowledge of different age groups.

  6. Thermally controlled comb generation and soliton modelocking in microresonators.

    PubMed

    Joshi, Chaitanya; Jang, Jae K; Luke, Kevin; Ji, Xingchen; Miller, Steven A; Klenner, Alexander; Okawachi, Yoshitomo; Lipson, Michal; Gaeta, Alexander L

    2016-06-01

    We report, to the best of our knowledge, the first demonstration of thermally controlled soliton mode-locked frequency comb generation in microresonators. By controlling the electric current through heaters integrated with silicon nitride microresonators, we demonstrate a systematic and repeatable pathway to single- and multi-soliton mode-locked states without adjusting the pump laser wavelength. Such an approach could greatly simplify the generation of mode-locked frequency combs and facilitate applications such as chip-based dual-comb spectroscopy.

  7. On Generating Knowledge in Service to Society

    ERIC Educational Resources Information Center

    Schwandt, Thomas A.

    2005-01-01

    In the US debate over science-based educational research, experimentation is again coming to the fore as ideal for establishing an evidence base of what works in educational interventions thus contributing to the climate of accountability in educational research and practice. Mixed methods research is being positioned in this discussion as a…

  8. Decision Making: New Paradigm for Education.

    ERIC Educational Resources Information Center

    Wales, Charles E.; And Others

    1986-01-01

    Defines education's new paradigm as schooling based on decision making, the critical thinking skills serving it, and the knowledge base supporting it. Outlines a model decision-making process using a hypothetical breakfast problem; a late riser chooses goals, generates ideas, develops an action plan, and implements and evaluates it. (4 references)…

  9. Implementing Anchored Instruction: Guiding Principles for Curriculum Development.

    ERIC Educational Resources Information Center

    McLarty, Kim; And Others

    A curriculum based on "anchored instruction" was developed to enhance students' literacy development and acquisition of knowledge. The curriculum was designed to create a rich, shared environment that generates interest and enables students to identify and define problems while they explore the content from many perspectives. Based on…

  10. Experts' views regarding Australian school-leavers' knowledge of nutrition and food systems.

    PubMed

    Sadegholvad, Sanaz; Yeatman, Heather; Parrish, Anne-Maree; Worsley, Anthony

    2017-10-01

    To explore Australian experts' views regarding strengths and gaps in school-leavers' knowledge of nutrition and food systems (N&FS) and factors that influence that knowledge. Semi-structured interviews were conducted with 21 highly experienced food-related experts in Australia. Qualitative data were analysed thematically using Attride-Stirling's thematic network framework. Two global themes and several organising themes were identified. The first global theme, 'structural curriculum-based problems', emerged from three organising themes of: inconsistencies in provided food education programs at schools in Australia; insufficient coverage of food-related skills and food systems topics in school curricula; and the lack of trained school teachers. The second global theme, 'insufficient levels of school-leavers' knowledge of N&FS', was generated from four organising themes, which together described Australian school-leavers' poor knowledge of N&FS more broadly and a knowledge translation problem for everyday practices. Study findings identified key problems relating to current school-based N&FS education programs in Australia and reported knowledge gaps in relation to N&FS among Australian school-leavers. These findings provide important guidance for N&FS curriculum development, to clearly articulate broadly-based N&FS knowledge acquisition in curriculum policy and education documents for Australian schools. © 2017 The Authors.

  11. 'Best practice' development and transfer in the NHS: the importance of process as well as product knowledge.

    PubMed

    Newell, Sue; Edelman, Linda; Scarbrough, Harry; Swan, Jacky; Bresnen, Mike

    2003-02-01

    A core prescription from the knowledge management movement is that the successful management of organizational knowledge will prevent firms from 'reinventing the wheel', in particular through the transfer of 'best practices'. Our findings challenge this logic. They suggest instead that knowledge is emergent and enacted in practice, and that normally those involved in a given practice have only a partial understanding of the overall practice. Generating knowledge about current practice is therefore a precursor to changing that practice. In this sense, knowledge transfer does not occur independently of or in sequence to knowledge generation, but instead the process of knowledge generation and its transfer are inextricably intertwined. Thus, rather than transferring 'product' knowledge about the new 'best practice' per se, our analysis suggests that it is more useful to transfer 'process' knowledge about effective ways to generate the knowledge of existing practice, which is the essential starting point for attempts to change that practice.

  12. Performing Art-Based Research: Innovation in Graduate Art Therapy Education

    ERIC Educational Resources Information Center

    Moon, Bruce L.; Hoffman, Nadia

    2014-01-01

    This article presents an innovation in art therapy research and education in which art-based performance is used to generate, embody, and creatively synthesize knowledge. An art therapy graduate student's art-based process of inquiry serves to demonstrate how art and performance may be used to identify the research question, to conduct a process…

  13. Constructing Agent Model for Virtual Training Systems

    NASA Astrophysics Data System (ADS)

    Murakami, Yohei; Sugimoto, Yuki; Ishida, Toru

    Constructing highly realistic agents is essential if agents are to be employed in virtual training systems. In training for collaboration based on face-to-face interaction, the generation of emotional expressions is one key. In training for guidance based on one-to-many interaction such as direction giving for evacuations, emotional expressions must be supplemented by diverse agent behaviors to make the training realistic. To reproduce diverse behavior, we characterize agents by using various combinations of operation rules instantiated by the user operating the agent. To accomplish this goal, we introduce a user modeling method based on participatory simulations. These simulations enable us to acquire information observed by each user in the simulation and the operating history. Using these data and the domain knowledge including known operation rules, we can generate an explanation for each behavior. Moreover, the application of hypothetical reasoning, which offers consistent selection of hypotheses, to the generation of explanations allows us to use otherwise incompatible operation rules as domain knowledge. In order to validate the proposed modeling method, we apply it to the acquisition of an evacuee's model in a fire-drill experiment. We successfully acquire a subject's model corresponding to the results of an interview with the subject.

  14. VuWiki: An Ontology-Based Semantic Wiki for Vulnerability Assessments

    NASA Astrophysics Data System (ADS)

    Khazai, Bijan; Kunz-Plapp, Tina; Büscher, Christian; Wegner, Antje

    2014-05-01

    The concept of vulnerability, as well as its implementation in vulnerability assessments, is used in various disciplines and contexts ranging from disaster management and reduction to ecology, public health or climate change and adaptation, and a corresponding multitude of ideas about how to conceptualize and measure vulnerability exists. Three decades of research in vulnerability have generated a complex and growing body of knowledge that challenges newcomers, practitioners and even experienced researchers. To provide a structured representation of the knowledge field "vulnerability assessment", we have set up an ontology-based semantic wiki for reviewing and representing vulnerability assessments: VuWiki, www.vuwiki.org. Based on a survey of 55 vulnerability assessment studies, we first developed an ontology as an explicit reference system for describing vulnerability assessments. We developed the ontology in a theoretically controlled manner based on general systems theory and guided by principles for ontology development in the field of earth and environment (Raskin and Pan 2005). Four key questions form the first level "branches" or categories of the developed ontology: (1) Vulnerability of what? (2) Vulnerability to what? (3) What reference framework was used in the vulnerability assessment? and (4) What methodological approach was used in the vulnerability assessment? These questions correspond to the basic, abstract structure of the knowledge domain of vulnerability assessments and have been deduced from theories and concepts of various disciplines. The ontology was then implemented in a semantic wiki which allows for the classification and annotation of vulnerability assessments. As a semantic wiki, VuWiki does not aim at "synthesizing" a holistic and overarching model of vulnerability. Instead, it provides both scientists and practitioners with a uniform ontology as a reference system and offers easy and structured access to the knowledge field of vulnerability assessments with the possibility for any user to retrieve assessments using specific research criteria. Furthermore, VuWiki can serve as a collaborative knowledge platform that allows for the active participation of those generating and using the knowledge represented in the wiki.

  15. Quantum mechanical energy-based screening of combinatorially generated library of tautomers. TauTGen: a tautomer generator program.

    PubMed

    Harańczyk, Maciej; Gutowski, Maciej

    2007-01-01

    We describe a procedure of finding low-energy tautomers of a molecule. The procedure consists of (i) combinatorial generation of a library of tautomers, (ii) screening based on the results of geometry optimization of initial structures performed at the density functional level of theory, and (iii) final refinement of geometry for the top hits at the second-order Møller-Plesset level of theory followed by single-point energy calculations at the coupled cluster level of theory with single, double, and perturbative triple excitations. The library of initial structures of various tautomers is generated with TauTGen, a tautomer generator program. The procedure proved to be successful for those molecular systems for which common chemical knowledge had not been sufficient to predict the most stable structures.
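    The combinatorial step (i) can be pictured with a short sketch; the proton sites, the number of mobile protons, and the placeholder energy function below are assumptions for illustration and are not TauTGen's actual interface, which builds real 3D structures for subsequent DFT/MP2/CCSD(T) screening.

```python
from itertools import combinations

# Hypothetical heavy-atom sites that can carry a proton, and the number of
# mobile protons to distribute; neither is taken from the paper.
proton_sites = ["N1", "N3", "N7", "N9", "O2", "O6"]
n_protons = 2

# Step (i): combinatorially generate the tautomer library, one candidate per
# placement of the mobile protons on the sites.
tautomer_library = [frozenset(p) for p in combinations(proton_sites, n_protons)]
print(len(tautomer_library), "candidate tautomers")

# Steps (ii)-(iii) would optimize each candidate with DFT and refine the top
# hits at MP2 and CCSD(T); the placeholder below only mimics that interface.
def electronic_energy(tautomer):
    return sum(len(site) + ord(site[0]) for site in tautomer) / 100.0

ranked = sorted(tautomer_library, key=electronic_energy)
print("lowest-energy candidate (toy ranking):", sorted(ranked[0]))
```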

  16. Universal Verification Methodology Based Register Test Automation Flow.

    PubMed

    Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu

    2016-05-01

    In today's SoC design, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in a shorter time. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or integrate models in a test-bench environment because it requires knowledge of SystemVerilog and UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe register specifications in IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template which is translated to an IP-XACT description, from which register models can be easily generated using commercial tools. On the other hand, we also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use register models without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor) and writing a test sequence for each type of register test-case. With the proposed flow, designers can save a considerable amount of time verifying the functionality of registers.
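    A hedged sketch of the spreadsheet-to-IP-XACT translation step, assuming a hypothetical four-column register template; real IP-XACT descriptions carry much more structure (vendor/library/name/version, fields, reset values), and the element names below are simplified for illustration.

```python
import csv, io
import xml.etree.ElementTree as ET

# Hypothetical spreadsheet-style register template (name, offset, width, access).
template = io.StringIO("""name,offset,width,access
CTRL,0x00,32,read-write
STATUS,0x04,32,read-only
IRQ_MASK,0x08,32,read-write
""")

# Translate each spreadsheet row into a simplified IP-XACT-like <register> entry;
# a commercial tool would then generate the UVM register model from such XML.
address_block = ET.Element("addressBlock")
for row in csv.DictReader(template):
    reg = ET.SubElement(address_block, "register")
    ET.SubElement(reg, "name").text = row["name"]
    ET.SubElement(reg, "addressOffset").text = row["offset"]
    ET.SubElement(reg, "size").text = row["width"]
    ET.SubElement(reg, "access").text = row["access"]

print(ET.tostring(address_block, encoding="unicode"))
```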

  17. Assessing children's inference generation: what do tests of reading comprehension measure?

    PubMed

    Bowyer-Crane, Claudine; Snowling, Margaret J

    2005-06-01

    Previous research suggests that children with specific comprehension difficulties have problems with the generation of inferences. This raises important questions as to whether poor comprehenders have poor comprehension skills generally, or whether their problems are confined to specific inference types. The main aims of the study were (a) to classify, in two commonly used tests of reading comprehension, the questions requiring the generation of inferences, and (b) to investigate the relative performance of skilled and less-skilled comprehenders on questions tapping different inference types. The performance of 10 poor comprehenders (mean age 110.06 months) was compared with the performance of 10 normal readers (mean age 112.78 months) on two tests of reading comprehension. A qualitative analysis of the NARA II (form 1) and the WORD comprehension subtest was carried out. Participants were then administered the NARA II, WORD comprehension subtest and a test of non-word reading. The NARA II was heavily reliant on the generation of knowledge-based inferences, while the WORD comprehension subtest was biased towards the retention of literal information. Children identified by the NARA II as having comprehension difficulties performed in the normal range on the WORD comprehension subtests. Further, children with comprehension difficulties performed poorly on questions requiring the generation of knowledge-based and elaborative inferences. However, they were able to answer questions requiring attention to literal information or use of cohesive devices at a level comparable to normal readers. Different reading tests tap different types of inferencing skills. Less-skilled comprehenders have particular difficulty applying real-world knowledge to a text during reading, and this has implications for the formulation of effective intervention strategies.

  18. Defining the requisite knowledge for providers of in-service professional development for K-12 teachers of science: Refining the construct

    NASA Astrophysics Data System (ADS)

    Tucker, Deborah L.

    Purpose. The purpose of this grounded theory study was to refine, using a Delphi study process, the four categories of the theoretical model of the comprehensive knowledge base required by providers of professional development for K-12 teachers of science, a model generated from a review of the literature. Methodology. This grounded theory study used data collected through a modified Delphi technique and interviews to refine and validate the literature-based knowledge base required by providers of professional development for K-12 teachers of science. Twenty-three participants, experts in the fields of science education, how people learn, instructional and assessment strategies, and learning contexts, responded to the study's questions. Findings. By "densifying" the four categories of the knowledge base, this study determined the causal conditions (the science subject matter knowledge), the intervening conditions (how people learn), the strategies (the effective instructional and assessment strategies), and the context (the context and culture of formal learning environments) surrounding the science professional development process. Eight sections were added to the literature-based knowledge base; the final model comprised forty-nine sections. The average length of the operational definitions increased nearly threefold and the number of citations per operational definition increased more than twofold. Conclusions. A four-category comprehensive model that can serve as the foundation for the knowledge base required by science professional developers now exists. Subject matter knowledge includes science concepts, inquiry, the nature of science, and scientific habits of mind; how people learn includes the principles of learning, active learning, andragogy, variations in learners, neuroscience and cognitive science, and change theory; effective instructional and assessment strategies include constructivist learning and inquiry-based teaching, differentiation of instruction, making knowledge and thinking accessible to learners, automatic and fluent retrieval of nonscience-specific skills, science assessment and assessment strategies, science-specific instructional strategies, and safety within a learning environment; and contextual knowledge includes curriculum selection and implementation strategies and knowledge of building program coherence. Recommendations. Further research is recommended into which specific instructional strategies identified in the refined knowledge base have positive, significant effect sizes for adult learners.

  19. A measure of knowledge flow between specific fields: Implications of interdisciplinarity for impact and funding

    PubMed Central

    Solomon, Gregg E. A.; Youtie, Jan; Porter, Alan L.

    2017-01-01

    Encouraging knowledge flow between mutually relevant disciplines is a worthy aim of research policy makers. Yet, it is less clear what types of research promote cross-disciplinary knowledge flow and whether such research generates particularly influential knowledge. Empirical questions remain as to how to identify knowledge-flow mediating research and how to provide support for this research. This study contributes to addressing these gaps by proposing a new way to identify knowledge-flow mediating research at the individual research article level, instead of at more aggregated levels. We identify journal articles that link two mutually relevant disciplines in three ways—aggregating, bridging, and diffusing. We then examine the likelihood that these papers receive subsequent citations or have funding acknowledgments. Our case study of cognitive science and educational research knowledge flow suggests that articles that aggregate knowledge from multiple disciplines are cited significantly more often than are those whose references are drawn primarily from a single discipline. Interestingly, the articles that meet the criteria for being considered knowledge-flow mediators are less likely to reflect funding, based on reported acknowledgements, than were those that did not meet these criteria. Based on these findings, we draw implications for research policymakers. PMID:29016631
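    One plausible operationalisation of the three linkage types is sketched below; the classification rule and the field labels are assumptions for illustration and are not necessarily the exact criteria used in the paper.

```python
# Classify an article by the disciplines of its references and of the papers
# citing it: "aggregating" if its references draw on both fields, "diffusing"
# if both fields cite it, "bridging" if it references one field while being
# cited from the other. This is a simplified, illustrative rule.
def classify(ref_fields, citing_fields,
             field_a="cognitive science", field_b="educational research"):
    refs_both = {field_a, field_b} <= set(ref_fields)
    cited_both = {field_a, field_b} <= set(citing_fields)
    if refs_both:
        return "aggregating"
    if cited_both:
        return "diffusing"
    if (field_a in ref_fields and field_b in citing_fields) or \
       (field_b in ref_fields and field_a in citing_fields):
        return "bridging"
    return "single-field"

print(classify(["cognitive science", "educational research"], ["educational research"]))
print(classify(["cognitive science"], ["educational research"]))
```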

  20. Mission planning and simulation via intelligent agents

    NASA Technical Reports Server (NTRS)

    Gargan, Robert A., Jr.; Tilley, Randall W.

    1987-01-01

    A system that can operate from a flight manifest to plan and simulate payload preparation and transport via Shuttle flights is described. The design alternatives and the prototype implementation of the payload hardware and inventory tracking system are discussed. It is shown how intelligent agents can be used to generate mission schedules, and how, through the use of these intelligent agents, knowledge becomes separated into small manageable knowledge bases.

  1. Locally-Based Kernel PLS Smoothing to Non-Parametric Regression Curve Fitting

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Trejo, Leonard J.; Wheeler, Kevin; Korsmeyer, David (Technical Monitor)

    2002-01-01

    We present a novel smoothing approach to non-parametric regression curve fitting. This is based on kernel partial least squares (PLS) regression in reproducing kernel Hilbert space. It is our concern to apply the methodology for smoothing experimental data where some level of knowledge about the approximate shape, local inhomogeneities or points where the desired function changes its curvature is known a priori or can be derived based on the observed noisy data. We propose locally-based kernel PLS regression that extends the previous kernel PLS methodology by incorporating this knowledge. We compare our approach with existing smoothing splines, hybrid adaptive splines and wavelet shrinkage techniques on two generated data sets.
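    A minimal, global (not locally-weighted) kernel PLS smoothing sketch, assuming a Gaussian kernel and arbitrary parameter values; it extracts a few score vectors in feature space and projects the noisy response onto them, which is the core of the kernel PLS idea the abstract builds on.

```python
import numpy as np

def rbf_kernel(x, width=0.2):
    # Gaussian (RBF) Gram matrix for 1-D inputs.
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

def kernel_pls_smooth(x, y, n_components=4, width=0.2):
    """Project y onto a few kernel PLS score vectors (global variant, single response)."""
    n = y.size
    J = np.eye(n) - np.full((n, n), 1.0 / n)        # centering matrix
    Kd = J @ rbf_kernel(x, width) @ J               # centered Gram matrix
    y_c = y - y.mean()
    yd, scores = y_c.copy(), []
    for _ in range(n_components):
        t = Kd @ yd                                 # score vector for this component
        t /= np.linalg.norm(t)
        scores.append(t)
        P = np.eye(n) - np.outer(t, t)              # deflate the Gram matrix and response
        Kd = P @ Kd @ P
        yd = yd - t * (t @ yd)
    T = np.column_stack(scores)                     # orthonormal score vectors
    return y.mean() + T @ (T.T @ y_c)               # fitted (smoothed) values

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(x.size)
y_smooth = kernel_pls_smooth(x, y)
print("MSE vs. true curve:", float(np.mean((y_smooth - np.sin(2 * np.pi * x)) ** 2)))
```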

  2. A flexible telerobotic system for space operations

    NASA Technical Reports Server (NTRS)

    Sliwa, N. O.; Will, R. W.

    1987-01-01

    The objective and design of a proposed goal-oriented knowledge-based telerobotic system for space operations is described. This design effort encompasses the elements of the system executive and user interface and the distribution and general structure of the knowledge base, the displays, and the task sequencing. The objective of the design effort is to provide an expandable structure for a telerobotic system that provides cooperative interaction between the human operator and computer control. The initial phase of the implementation provides a rule-based, goal-oriented script generator to interface to the existing control modes of a telerobotic research system, in the Intelligent Systems Research Lab at NASA Research Center.

  3. Building a knowledge based economy in Russia using guided entrepreneurship

    NASA Astrophysics Data System (ADS)

    Reznik, Boris N.; Daniels, Marc; Ichim, Thomas E.; Reznik, David L.

    2005-06-01

    Despite advanced scientific and technological (S&T) expertise, the Russian economy is presently based upon manufacturing and raw material exports. Currently, governmental incentives are attempting to leverage the existing scientific infrastructure through the concept of building a Knowledge Based Economy. However, socio-economic changes do not occur solely by decree, but by alteration of approach to the market. Here we describe the "Guided Entrepreneurship" plan, a series of steps needed for generation of an army of entrepreneurs, which initiate a chain reaction of S&T-driven growth. The situation in Russia is placed in the framework of other areas where Guided Entrepreneurship has been successful.

  4. Investigating students' view on STEM in learning about electrical current through STS approach

    NASA Astrophysics Data System (ADS)

    Tupsai, Jiraporn; Yuenyong, Chokchai

    2018-01-01

    This study aims to investigate Grade 11 students' views on Science, Technology, Engineering and Mathematics (STEM) in learning about electrical current through the Science Technology Society (STS) approach [8]. The participants were 60 Grade 11 students at the Demonstration Secondary School, Khon Kaen University, Khon Kaen Province, Thailand. The methodology follows the interpretive paradigm. The teaching and learning about Electrical Current through the STS approach was carried out over 6 weeks. The Electrical Current unit was developed based on the framework [8] that consists of five stages: (1) identification of social issues, (2) identification of potential solutions, (3) need for knowledge, (4) decision making, and (5) socialization. To start, the question "what if this world lacked electricity" was posed in class in order to move students toward the problem of how to design Electricity Generation from Clean Energy. Students were expected to apply scientific and other knowledge to the design of Electricity Generation. Students' views on STEM were collected during their learning through participant observation and students' tasks, and were categorized as they applied their knowledge to designing the Electricity Generation. The findings indicated that students worked cooperatively to solve the problem, applying content knowledge from Science and Mathematics and process skills from Technology and Engineering. This showed that students integrated science, technology, engineering and mathematics to design their possible solutions in learning about Electrical Current. The paper also discusses implications for science teaching and learning through STS in Thailand.

  5. Preservice Science Teachers' Epistemological Beliefs and Informal Reasoning Regarding Socioscientific Issues

    NASA Astrophysics Data System (ADS)

    Ozturk, Nilay; Yilmaz-Tuzun, Ozgul

    2017-12-01

    This study investigated preservice elementary science teachers' (PSTs) informal reasoning regarding socioscientific issues (SSI), their epistemological beliefs, and the relationship between informal reasoning and epistemological beliefs. From several SSIs, nuclear power usage was selected for this study. A total of 647 Turkish PSTs enrolled in three large universities in Turkey completed the open-ended questionnaire, which assessed the participants' informal reasoning about the target SSI, and Schommer's (1990) Epistemological Questionnaire. The participants' epistemological beliefs were assessed quantitatively and their informal reasoning was assessed both qualitatively and quantitatively. The findings revealed that PSTs preferred to generate evidence-based arguments rather than intuitive-based arguments; however, they failed to generate quality evidence and present different types of evidence to support their claims. Furthermore, among the reasoning quality indicators, PSTs mostly generated supportive argument construction. Regarding the use of reasoning modes, types of risk arguments and political-oriented arguments emerged as the new reasoning modes. The study demonstrated that the PSTs had different epistemological beliefs in terms of innate ability, omniscient authority, certain knowledge, and quick learning. Correlational analyses revealed that there was a strong negative correlation between the PSTs' certain knowledge and counterargument construction, and there were negative correlations between the PSTs' innate ability, certain knowledge, and quick learning dimensions of epistemological beliefs and their total argument construction. This study has implications for both science teacher education and the practice of science education. For example, PST teacher education programs should give sufficient importance to training teachers that are skillful and knowledgeable regarding SSIs. To achieve this, specific SSI-related courses should form part of science teacher education programs.

  6. PU-ICE Summary Information.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, Michael

    The Generator Knowledge Report for the Plutonium Isentropic Compression Experiment Containment Systems (GK Report) provides information for the Plutonium Isentropic Compression Experiment (Pu-ICE) program to support waste management and characterization efforts. Attachment 3-18 presents generator knowledge (GK) information specific to the eighteenth Pu-ICE conducted in August 2015, also known as ‘Shot 18 (Aug 2015) and Pu-ICE Z-2841 (1).’ Shot 18 (Aug 2015) was generated on August 28, 2015 (1). Calculations based on the isotopic content of Shot 18 (Aug 2015) and the measured mass of the containment system demonstrate the post-shot containment system is low-level waste (LLW). Therefore, this containment system will be managed at Sandia National Laboratory/New Mexico (SNL/NM) as LLW. Attachment 3-18 provides documentation of the TRU concentration and documents the concentration of any hazardous constituents.
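    The classification calculation referred to above can be illustrated with a hedged sketch: total alpha activity of transuranic isotopes divided by waste mass, compared against the 100 nCi/g transuranic-waste threshold. The isotope inventory and mass below are invented and are not the Shot 18 (Aug 2015) data.

```python
# Hedged sketch: classify a containment system as LLW or TRU waste by the
# concentration of alpha-emitting transuranic isotopes (half-life > 20 years).
# The 100 nCi/g threshold is the standard TRU-waste criterion; the inventory
# below is entirely hypothetical.
TRU_THRESHOLD_NCI_PER_G = 100.0

def tru_concentration(isotope_activity_nci, waste_mass_g):
    """Total TRU alpha activity (nCi) divided by the total waste mass (g)."""
    return sum(isotope_activity_nci.values()) / waste_mass_g

inventory_nci = {"Pu-239": 1.2e6, "Pu-240": 3.0e5, "Am-241": 5.0e4}  # hypothetical
containment_mass_g = 2.5e5                                            # hypothetical

conc = tru_concentration(inventory_nci, containment_mass_g)
label = "TRU waste" if conc > TRU_THRESHOLD_NCI_PER_G else "low-level waste (LLW)"
print(f"{conc:.1f} nCi/g -> {label}")
```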

  7. New approach to generating insights for aging research based on literature mining and knowledge integration

    PubMed Central

    Kwon, Yeondae; Natori, Yukikazu

    2017-01-01

    The proportion of the elderly population in most countries worldwide is increasing dramatically. Therefore, social interest in the fields of health, longevity, and anti-aging has been increasing as well. However, the basic research results obtained from a reductionist approach in biology and a bioinformatic approach in genome science have limited usefulness for generating insights on future health, longevity, and anti-aging-related research on a case-by-case basis. We propose a new approach that uses our literature mining technique and bioinformatics, which leads to a better perspective on research trends by providing an expanded knowledge base to work from. We demonstrate that our approach provides useful information that deepens insights on future trends and differs from data obtained conventionally, and this methodology is already paving the way for a new field in aging-related research based on literature mining. One compelling example of this is how our new approach can be a useful tool in drug repositioning. PMID:28817730

  8. Automation of route identification and optimisation based on data-mining and chemical intuition.

    PubMed

    Lapkin, A A; Heer, P K; Jacob, P-M; Hutchby, M; Cunningham, W; Bull, S D; Davidson, M G

    2017-09-21

    Data-mining of Reaxys and network analysis of the combined literature and in-house reactions set were used to generate multiple possible reaction routes to convert a bio-waste feedstock, limonene, into a pharmaceutical API, paracetamol. The network analysis of data provides a rich knowledge-base for generation of the initial reaction screening and development programme. Based on the literature and the in-house data, an overall flowsheet for the conversion of limonene to paracetamol was proposed. Each individual reaction-separation step in the sequence was simulated as a combination of the continuous flow and batch steps. The linear model generation methodology allowed us to identify the reaction steps requiring further chemical optimisation. The generated model can be used for global optimisation and generation of environmental and other performance indicators, such as cost indicators. However, the identified further challenge is to automate model generation to evolve optimal multi-step chemical routes and optimal process configurations.
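    A toy sketch of the network-analysis step, assuming a hypothetical handful of intermediates; the mined Reaxys network in the study is far larger, and only the final acetylation of 4-aminophenol to paracetamol is a well-known literature step.

```python
import networkx as nx

# Hypothetical, heavily simplified reaction network mined from the literature:
# nodes are compounds, directed edges are single literature reactions.
reactions = [
    ("limonene", "intermediate A"),
    ("limonene", "intermediate B"),
    ("intermediate A", "4-aminophenol"),
    ("intermediate B", "intermediate C"),
    ("intermediate C", "4-aminophenol"),
    ("4-aminophenol", "paracetamol"),   # acetylation, the final known step
]
G = nx.DiGraph(reactions)

# Enumerate candidate routes ordered by step count, as a screening programme would.
for route in sorted(nx.all_simple_paths(G, "limonene", "paracetamol"), key=len):
    print(" -> ".join(route))
```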

  9. Fertility preservation: a pilot study to assess previsit patient knowledge quantitatively.

    PubMed

    Balthazar, Ursula; Fritz, Marc A; Mersereau, Jennifer E

    2011-05-01

    To provide a quantitative assessment of patient knowledge about fertility and fertility preservation treatment options before the initial fertility preservation consultation at a university-based fertility preservation center. Prospective pilot survey containing 13 items assessing patient knowledge about fertility preservation, including the available treatment options and their requirements, success rates, and associated risks. University-based IVF center. Women aged 18 to 41 years with illnesses requiring treatments posing a serious threat to future fertility who were referred for fertility preservation consultation between April 2009 and June 2010. None. Knowledge score. Forty-one eligible patients were identified, and all completed surveys before their consultation. A knowledge score was generated for each patient with 1 point awarded for each correct answer. Overall, patients had poor previsit fertility preservation knowledge (mean score 5.9±2.7). Higher knowledge scores were correlated with personal experience with infertility and previous exposure to fertility preservation treatment information. There was no correlation between knowledge score and age, relationship status, pregnancy history, education, or income. Patients seen for fertility preservation consultation at our university-based center generally tend to be in their early 30s, white, well educated, and married. Previsit knowledge about fertility preservation treatment options was poor and did not correlate with age, education, and relationship status. Copyright © 2011 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.

  10. The design and development of a third generation OSEE instrument

    NASA Technical Reports Server (NTRS)

    Perey, D. F.; Yost, W. T.; Stone, F. D.; Welch, C. S.; Scales, E.; Gasser, E. S.; Joe, E.; Goodman, T.; Pascual, X.; Hefner, B.

    1995-01-01

    Optically Stimulated Electron Emission (OSEE) has been used to quantify surface contamination in the aerospace community. As advances are made towards the understanding of OSEE, it is desirable to incorporate technological advances with succeeding generations of instrumentation, so that improvements in the practical application of OSEE may be disseminated among the user community. Several studies undertaken by Yost, Welch, Abedin and others have expanded the knowledge base related to the underlying principles of OSEE. The conclusions of these studies, together with inputs from the user community were the foundation upon which the development of a third generation OSEE instrument was based. This manuscript describes the significant improvements incorporated into a third generation OSEE instrument as well as the elements unique to its design.

  11. Knowledge 'Translation' as social learning: negotiating the uptake of research-based knowledge in practice.

    PubMed

    Salter, K L; Kothari, A

    2016-02-29

    Knowledge translation and evidence-based practice have relied on research derived from clinical trials, which are considered to be methodologically rigorous. The result is practice recommendations based on a narrow view of evidence. We discuss how, within a practice environment, in fact individuals adopt and apply new evidence derived from multiple sources through ongoing, iterative learning cycles. The discussion is presented in four sections. After elaborating on the multiple forms of evidence used in practice, in section 2 we argue that the practitioner derives contextualized knowledge through reflective practice. Then, in section 3, the focus shifts from the individual to the team with consideration of social learning and theories of practice. In section 4 we discuss the implications of integrative and negotiated knowledge exchange and generation within the practice environment. Namely, how can we promote the use of research within a team-based, contextualized knowledge environment? We suggest support for: 1) collaborative learning environments for active learning and reflection, 2) engaged scholarship approaches so that practice can inform research in a collaborative manner and 3) leveraging authoritative opinion leaders for their clinical expertise during the shared negotiation of knowledge and research. Our approach also points to implications for studying evidence-informed practice: the identification of practice change (as an outcome) ought to be supplemented with understandings of how and when social negotiation processes occur to achieve integrated knowledge. This article discusses practice knowledge as dependent on the practice context and on social learning processes, and suggests how research knowledge uptake might be supported from this vantage point.

  12. Knowledge-based segmentation and feature analysis of hand and wrist radiographs

    NASA Astrophysics Data System (ADS)

    Efford, Nicholas D.

    1993-07-01

    The segmentation of hand and wrist radiographs for applications such as skeletal maturity assessment is best achieved by model-driven approaches incorporating anatomical knowledge. The reasons for this are discussed, and a particular frame-based or 'blackboard' strategy for the simultaneous segmentation of the hand and estimation of bone age via the TW2 method is described. The new approach is structured for optimum robustness and computational efficiency: features of interest are detected and analysed in order of their size and prominence in the image, the largest and most distinctive being dealt with first, and the evidence generated by feature analysis is used to update a model of hand anatomy and hence guide later stages of the segmentation. Closed bone boundaries are formed by a hybrid technique combining knowledge-based, one-dimensional edge detection with model-assisted heuristic tree searching.

  13. Network-based approaches to climate knowledge discovery

    NASA Astrophysics Data System (ADS)

    Budich, Reinhard; Nyberg, Per; Weigel, Tobias

    2011-11-01

    Climate Knowledge Discovery Workshop; Hamburg, Germany, 30 March to 1 April 2011. Do complex networks combined with semantic Web technologies offer the next generation of solutions in climate science? To address this question, a first Climate Knowledge Discovery (CKD) Workshop, hosted by the German Climate Computing Center (Deutsches Klimarechenzentrum (DKRZ)), brought together climate and computer scientists from major American and European laboratories, data centers, and universities, as well as representatives from industry, the broader academic community, and the semantic Web communities. The participants, representing six countries, were concerned with large-scale Earth system modeling and computational data analysis. The motivation for the meeting was the growing problem that climate scientists generate data faster than it can be interpreted and the need to prepare for further exponential data increases. Current analysis approaches are focused primarily on traditional methods, which are best suited for large-scale phenomena and coarse-resolution data sets. The workshop focused on the open discussion of ideas and technologies to provide the next generation of solutions to cope with the increasing data volumes in climate science.

  14. Next generation data systems and knowledge products to support agricultural producers and science-based policy decision making.

    PubMed

    Capalbo, Susan M; Antle, John M; Seavert, Clark

    2017-07-01

    Research on next generation agricultural systems models shows that the most important current limitation is data, both for on-farm decision support and for research investment and policy decision making. One of the greatest data challenges is to obtain reliable data on farm management decision making, both for current conditions and under scenarios of changed bio-physical and socio-economic conditions. This paper presents a framework for the use of farm-level and landscape-scale models and data to provide analysis that could be used in NextGen knowledge products, such as mobile applications or personal computer data analysis and visualization software. We describe two analytical tools - AgBiz Logic and TOA-MD - that demonstrate the current capability of farm-level and landscape-scale models. The use of these tools is explored with a case study of an oilseed crop, Camelina sativa, which could be used to produce jet aviation fuel. We conclude with a discussion of innovations needed to facilitate the use of farm and policy-level models to generate data and analysis for improved knowledge products.

  15. An expert system prototype for aiding in the development of software functional requirements for NASA Goddard's command management system: A case study and lessons learned

    NASA Technical Reports Server (NTRS)

    Liebowitz, Jay

    1986-01-01

    At NASA Goddard, the role of the command management system (CMS) is to transform general requests for spacecraft operations into detailed operational plans to be uplinked to the spacecraft. The CMS is part of the NASA Data System which entails the downlink of science and engineering data from NASA near-earth satellites to the user, and the uplink of command and control data to the spacecraft. Presently, it takes one to three years, with meetings once or twice a week, to determine functional requirements for CMS software design. As an alternative approach to the present technique of developing CMS software functional requirements, an expert system prototype was developed to aid in this function. Specifically, the knowledge base was formulated through interactions with domain experts, and was then linked to an existing expert system application generator called 'Knowledge Engineering System (Version 1.3).' Knowledge base development focused on four major steps: (1) develop the problem-oriented attribute hierarchy; (2) determine the knowledge management approach; (3) encode the knowledge base; and (4) validate, test, certify, and evaluate the knowledge base and the expert system prototype as a whole. Backcasting was accomplished for validating and testing the expert system prototype. Knowledge refinement, evaluation, and implementation procedures of the expert system prototype were then transacted.

  16. Knowledge-based requirements analysis for automating software development

    NASA Technical Reports Server (NTRS)

    Markosian, Lawrence Z.

    1988-01-01

    We present a new software development paradigm that automates the derivation of implementations from requirements. In this paradigm, informally-stated requirements are expressed in a domain-specific requirements specification language. This language is machine-understandable and requirements expressed in it are captured in a knowledge base. Once the requirements are captured, more detailed specifications and eventually implementations are derived by the system using transformational synthesis. A key characteristic of the process is that the required human intervention is in the form of providing problem- and domain-specific engineering knowledge, not in writing detailed implementations. We describe a prototype system that applies the paradigm in the realm of communication engineering: the prototype automatically generates implementations of buffers following analysis of the requirements on each buffer.
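    A toy illustration of the paradigm, assuming an invented requirement record and a single transformation rule; the actual prototype targets communication-engineering buffers and a far richer specification language.

```python
from collections import deque

# Hypothetical machine-understandable requirement captured in the knowledge base.
buffer_requirement = {
    "name": "telemetry_buffer",
    "discipline": "FIFO",        # domain knowledge: ordering policy
    "capacity": 64,              # domain knowledge: result of a sizing analysis
    "overflow_policy": "drop-oldest",
}

def synthesize_buffer(req):
    """Derive an implementation from the requirement by a transformation rule
    (a toy stand-in for the transformational-synthesis step)."""
    if req["discipline"] != "FIFO":
        raise NotImplementedError("only FIFO buffers handled in this sketch")
    maxlen = req["capacity"] if req["overflow_policy"] == "drop-oldest" else None
    return deque(maxlen=maxlen)

buf = synthesize_buffer(buffer_requirement)
for sample in range(70):          # 70 items into a 64-slot drop-oldest FIFO
    buf.append(sample)
print(len(buf), buf[0], buf[-1])  # -> 64 6 69
```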

  17. The Formative Method for Adapting Psychotherapy (FMAP): A community-based developmental approach to culturally adapting therapy

    PubMed Central

    Hwang, Wei-Chin

    2010-01-01

    How do we culturally adapt psychotherapy for ethnic minorities? Although there has been growing interest in doing so, few therapy adaptation frameworks have been developed. The majority of these frameworks take a top-down theoretical approach to adapting psychotherapy. The purpose of this paper is to introduce a community-based developmental approach to modifying psychotherapy for ethnic minorities. The Formative Method for Adapting Psychotherapy (FMAP) is a bottom-up approach that involves collaborating with consumers to generate and support ideas for therapy adaptation. It involves five phases that target developing, testing, and reformulating therapy modifications. These phases include: (a) generating knowledge and collaborating with stakeholders, (b) integrating generated information with theory and empirical and clinical knowledge, (c) reviewing the initial culturally adapted clinical intervention with stakeholders and revising the culturally adapted intervention, (d) testing the culturally adapted intervention, and (e) finalizing the culturally adapted intervention. Application of the FMAP is illustrated using examples from a study adapting psychotherapy for Chinese Americans, but the method can also be readily applied to modify therapy for other ethnic groups. PMID:20625458

  18. French Plans for Fifth Generation Computer Systems.

    DTIC Science & Technology

    1984-12-07

    ...French industry in electronics, computers, software, and services and to make... of Japan's Fifth Generation Project, the French scientific and industrial com-... The National Projects... The French Ministry of Research and... centrally managed project in France that covers all the facets of the... Centre National de Recherche Scientifique (CNRS) Cooperative Research... systems, man-computer interaction, novel computer structures, knowledge-based computer systems

  19. A protocol for generating a high-quality genome-scale metabolic reconstruction.

    PubMed

    Thiele, Ines; Palsson, Bernhard Ø

    2010-01-01

    Network reconstructions are a common denominator in systems biology. Bottom-up metabolic network reconstructions have been developed over the last 10 years. These reconstructions represent structured knowledge bases that abstract pertinent information on the biochemical transformations taking place within specific target organisms. The conversion of a reconstruction into a mathematical format facilitates a myriad of computational biological studies, including evaluation of network content, hypothesis testing and generation, analysis of phenotypic characteristics and metabolic engineering. To date, genome-scale metabolic reconstructions for more than 30 organisms have been published and this number is expected to increase rapidly. However, these reconstructions differ in quality and coverage that may minimize their predictive potential and use as knowledge bases. Here we present a comprehensive protocol describing each step necessary to build a high-quality genome-scale metabolic reconstruction, as well as the common trials and tribulations. Therefore, this protocol provides a helpful manual for all stages of the reconstruction process.
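    The 'conversion of a reconstruction into a mathematical format' mentioned above can be illustrated with a hedged toy example: a two-metabolite, three-reaction network written as a stoichiometric matrix and analysed by flux balance analysis (linear programming). The network below is invented and much smaller than any genome-scale reconstruction.

```python
import numpy as np
from scipy.optimize import linprog

# Toy reconstruction (hypothetical): metabolites A, B; reactions
#   R1: uptake -> A,   R2: A -> B,   R3: B -> biomass (objective)
S = np.array([[ 1, -1,  0],   # metabolite A balance
              [ 0,  1, -1]])  # metabolite B balance
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake limited to 10 flux units
c = [0, 0, -1]                             # linprog minimizes, so negate biomass

# Flux balance analysis: maximize biomass flux subject to S v = 0 and the bounds.
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds, method="highs")
print("optimal fluxes:", res.x, "biomass flux:", -res.fun)
```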

  20. A protocol for generating a high-quality genome-scale metabolic reconstruction

    PubMed Central

    Thiele, Ines; Palsson, Bernhard Ø.

    2011-01-01

    Network reconstructions are a common denominator in systems biology. Bottom-up metabolic network reconstructions have developed over the past 10 years. These reconstructions represent structured knowledge-bases that abstract pertinent information on the biochemical transformations taking place within specific target organisms. The conversion of a reconstruction into a mathematical format facilitates myriad computational biological studies including evaluation of network content, hypothesis testing and generation, analysis of phenotypic characteristics, and metabolic engineering. To date, genome-scale metabolic reconstructions for more than 30 organisms have been published and this number is expected to increase rapidly. However, these reconstructions differ in quality and coverage that may minimize their predictive potential and use as knowledge-bases. Here, we present a comprehensive protocol describing each step necessary to build a high-quality genome-scale metabolic reconstruction as well as common trials and tribulations. Therefore, this protocol provides a helpful manual for all stages of the reconstruction process. PMID:20057383

  1. Using a Clinical Knowledge Base to Assess Comorbidity Interrelatedness Among Patients with Multiple Chronic Conditions.

    PubMed

    Zulman, Donna M; Martins, Susana B; Liu, Yan; Tu, Samson W; Hoffman, Brian B; Asch, Steven M; Goldstein, Mary K

    2015-01-01

    Decision support tools increasingly integrate clinical knowledge such as medication indications and contraindications with electronic health record (EHR) data to support clinical care and patient safety. The availability of this encoded information and patient data provides an opportunity to develop measures of clinical decision complexity that may be of value for quality improvement and research efforts. We investigated the feasibility of using encoded clinical knowledge and EHR data to develop a measure of comorbidity interrelatedness (the degree to which patients' co-occurring conditions interact to generate clinical complexity). Using a common clinical scenario, decisions about blood pressure medications in patients with hypertension, we quantified comorbidity interrelatedness by calculating the number of indications and contraindications to blood pressure medications that are generated by patients' comorbidities (e.g., diabetes, gout, depression). We examined properties of comorbidity interrelatedness using data from a decision support system for hypertension in the Veterans Affairs Health Care System.
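    A hedged sketch of the counting measure, assuming an invented mapping from comorbidities to indicated and contraindicated antihypertensive classes; real decision support systems encode far richer, guideline-based rules.

```python
# Hypothetical clinical knowledge: comorbidity -> (indicated, contraindicated)
# antihypertensive classes. This table is invented for illustration only.
KNOWLEDGE = {
    "diabetes":      ({"ACE inhibitor"}, set()),
    "gout":          (set(), {"thiazide diuretic"}),
    "depression":    (set(), {"beta blocker"}),
    "heart failure": ({"beta blocker", "ACE inhibitor"}, set()),
}

def comorbidity_interrelatedness(comorbidities):
    """Count medication indications plus contraindications generated by the
    patient's co-occurring conditions (the proposed complexity measure)."""
    indications, contraindications = set(), set()
    for condition in comorbidities:
        ind, contra = KNOWLEDGE.get(condition, (set(), set()))
        indications |= ind
        contraindications |= contra
    return len(indications) + len(contraindications)

print(comorbidity_interrelatedness(["diabetes", "gout", "depression"]))  # -> 3
```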

  2. Towards a New Generation of Agricultural System Data, Models and Knowledge Products: Design and Improvement

    NASA Technical Reports Server (NTRS)

    Antle, John M.; Basso, Bruno; Conant, Richard T.; Godfray, H. Charles J.; Jones, James W.; Herrero, Mario; Howitt, Richard E.; Keating, Brian A.; Munoz-Carpena, Rafael; Rosenzweig, Cynthia

    2016-01-01

    This paper presents ideas for a new generation of agricultural system models that could meet the needs of a growing community of end-users exemplified by a set of Use Cases. We envision new data, models and knowledge products that could accelerate the innovation process that is needed to achieve the goal of sustainable local, regional and global food security. We identify desirable features for models, and describe some of the potential advances that we envisage for model components and their integration. We propose an implementation strategy that would link a "pre-competitive" space for model development to a "competitive space" for knowledge product development and through private-public partnerships for new data infrastructure. Specific model improvements would be based on further testing and evaluation of existing models, the development and testing of modular model components and integration, and linkages of model integration platforms to new data management and visualization tools.

  3. Towards a new generation of agricultural system data, models and knowledge products: Design and improvement.

    PubMed

    Antle, John M; Basso, Bruno; Conant, Richard T; Godfray, H Charles J; Jones, James W; Herrero, Mario; Howitt, Richard E; Keating, Brian A; Munoz-Carpena, Rafael; Rosenzweig, Cynthia; Tittonell, Pablo; Wheeler, Tim R

    2017-07-01

    This paper presents ideas for a new generation of agricultural system models that could meet the needs of a growing community of end-users exemplified by a set of Use Cases. We envision new data, models and knowledge products that could accelerate the innovation process that is needed to achieve the goal of sustainable local, regional and global food security. We identify desirable features for models, and describe some of the potential advances that we envisage for model components and their integration. We propose an implementation strategy that would link a "pre-competitive" space for model development to a "competitive space" for knowledge product development and through private-public partnerships for new data infrastructure. Specific model improvements would be based on further testing and evaluation of existing models, the development and testing of modular model components and integration, and linkages of model integration platforms to new data management and visualization tools.

  4. Maintaining consistency between planning hierarchies: Techniques and applications

    NASA Technical Reports Server (NTRS)

    Zoch, David R.

    1987-01-01

    In many planning and scheduling environments, it is desirable to be able to view and manipulate plans at different levels of abstraction, allowing the users the option of viewing and manipulating either a very detailed representation of the plan or a high-level, more abstract version of the plan. Generating a detailed plan from a more abstract plan requires domain-specific planning/scheduling knowledge; the reverse process of generating a high-level plan from a detailed plan (Reverse Plan Maintenance, or RPM) requires having the system remember the actions it took based on its domain-specific knowledge and its reasons for taking those actions. This reverse plan maintenance process is described as implemented in a specific planning and scheduling tool, the Mission Operations Planning Assistant (MOPA), as well as the applications of RPM to other planning and scheduling problems, emphasizing the knowledge that is needed to maintain the correspondence between the different hierarchical planning levels.

  5. A Cervical Cancer Community-Based Participatory Research Project in a Native American Community

    ERIC Educational Resources Information Center

    Christopher, Suzanne; Gidley, Allison L.; Letiecq, Bethany; Smith, Adina; McCormick, Alma Knows His Gun

    2008-01-01

    The Messengers for Health on the Apsaalooke Reservation project uses a community-based participatory research (CBPR) approach and lay health advisors (LHAs) to generate knowledge and awareness about cervical cancer prevention among community members in a culturally competent manner. Northern Plains Native Americans, of whom Apsaalooke women are a…

  6. How Quality Improvement Practice Evidence Can Advance the Knowledge Base.

    PubMed

    OʼRourke, Hannah M; Fraser, Kimberly D

    2016-01-01

    Recommendations for the evaluation of quality improvement interventions have been made in order to improve the evidence base of whether, to what extent, and why quality improvement interventions affect chosen outcomes. The purpose of this article is to articulate why these recommendations are appropriate to improve the rigor of quality improvement intervention evaluation as a research endeavor, but inappropriate for the purposes of everyday quality improvement practice. To support our claim, we describe the differences between quality improvement interventions that occur for the purpose of practice as compared to research. We then carefully consider how feasibility, ethics, and the aims of evaluation each impact how quality improvement interventions that occur in practice, as opposed to research, can or should be evaluated. Recommendations that fit the evaluative goals of practice-based quality improvement interventions are needed to support fair appraisal of the distinct evidence they produce. We describe a current debate on the nature of evidence to assist in reenvisioning how quality improvement evidence generated from practice might complement that generated from research, and contribute in a value-added way to the knowledge base.

  7. Team Science, Justice, and the Co-Production of Knowledge.

    PubMed

    Tebes, Jacob Kraemer

    2018-06-08

    Science increasingly consists of interdisciplinary team-based research to address complex social, biomedical, public health, and global challenges through a practice known as team science. In this article, I discuss the added value of team science, including participatory team science, for generating scientific knowledge. Participatory team science involves the inclusion of public stakeholders on science teams as co-producers of knowledge. I also discuss how constructivism offers a common philosophical foundation for both community psychology and team science, and how this foundation aligns well with contemporary developments in science that emphasize the co-production of knowledge. I conclude with a discussion of how the co-production of knowledge in team science can promote justice. © Society for Community Research and Action 2018.

  8. The development of a novel knowledge-based weaning algorithm using pulmonary parameters: a simulation study.

    PubMed

    Guler, Hasan; Kilic, Ugur

    2018-03-01

    Weaning is important for patients and clinicians who have to determine the correct weaning time so that patients do not become dependent on the ventilator. Several predictors have already been developed, such as the rapid shallow breathing index (RSBI), the pressure time index (PTI), and the Jabour weaning index. Many important dimensions of weaning are sometimes ignored by these predictors. This study is an attempt to develop a knowledge-based weaning process via fuzzy logic that eliminates the disadvantages of the present predictors. Sixteen vital parameters listed in the published literature have been used to determine the weaning decisions in the developed system. Because this number of individual parameters is too large to use directly, related parameters were grouped together to determine acid-base balance, adequate oxygenation, adequate pulmonary function, hemodynamic stability, and the psychological status of the patients. To test the performance of the developed algorithm, 20 clinical scenarios were generated using Monte Carlo simulations and the Gaussian distribution method. The developed knowledge-based algorithm and the RSBI predictor were applied to the generated scenarios. Finally, a clinician evaluated each clinical scenario independently. Student's t test was used to assess statistical differences between the developed weaning algorithm, RSBI, and the clinician's evaluation. According to the results obtained, there were no statistical differences between the proposed methods and the clinician evaluations.
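    As a small illustration of the scenario-generation and predictor side of such a study, the sketch below draws synthetic patients from Gaussian distributions and computes the rapid shallow breathing index for each. The distribution parameters are invented for illustration, and the 105 breaths/min/L cut-off is the commonly cited RSBI threshold rather than anything specific to this paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def generate_scenarios(n=20):
    """Draw synthetic patients from Gaussian distributions (means/SDs are illustrative)."""
    return [
        {
            "respiratory_rate": rng.normal(22, 6),     # breaths/min
            "tidal_volume_l":   rng.normal(0.45, 0.1), # litres
        }
        for _ in range(n)
    ]

def rsbi(scenario):
    """Rapid shallow breathing index = respiratory rate / tidal volume (breaths/min/L)."""
    return scenario["respiratory_rate"] / scenario["tidal_volume_l"]

for s in generate_scenarios():
    decision = "consider weaning" if rsbi(s) < 105 else "continue ventilation"
    print(f"RSBI = {rsbi(s):6.1f} -> {decision}")
```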

  9. Validation and detection of vessel landmarks by using anatomical knowledge

    NASA Astrophysics Data System (ADS)

    Beck, Thomas; Bernhardt, Dominik; Biermann, Christina; Dillmann, Rüdiger

    2010-03-01

    The detection of anatomical landmarks is an important prerequisite to analyze medical images fully automatically. Several machine learning approaches have been proposed to parse 3D CT datasets and to determine the location of landmarks with associated uncertainty. However, it is a challenging task to incorporate high-level anatomical knowledge to improve these classification results. We propose a new approach to validate candidates for vessel bifurcation landmarks, which is also applied to systematically search for missed landmarks and to validate ambiguous ones. A knowledge base is trained that provides human-readable geometric information on the vascular system (mainly vessel lengths, radii and curvature) for the validation of landmarks and to guide the search process. To analyze the bifurcation area surrounding a vessel landmark of interest, a new approach is proposed which is based on Fast Marching and incorporates anatomical information from the knowledge base. Using the proposed algorithms, an anatomical knowledge base has been generated based on 90 manually annotated CT images containing different parts of the body. To evaluate the landmark validation, a set of 50 carotid datasets was tested in combination with a state-of-the-art landmark detector, with excellent results. Besides the carotid bifurcation, the algorithm is designed to handle a wide range of vascular landmarks, e.g., the celiac, superior mesenteric, renal, aortic, iliac and femoral bifurcations.
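    The validation idea, checking a detected landmark candidate against human-readable geometric knowledge, can be sketched as a simple range test. The structure and all numeric ranges below are illustrative assumptions, not values from the trained knowledge base described in the paper.

```python
from dataclasses import dataclass

@dataclass
class VesselStats:
    """Human-readable geometric ranges learned from annotated data (values illustrative)."""
    radius_mm: tuple[float, float]
    parent_length_mm: tuple[float, float]

KNOWLEDGE_BASE = {
    "carotid_bifurcation": VesselStats(radius_mm=(2.0, 5.0), parent_length_mm=(40.0, 120.0)),
    "renal_bifurcation":   VesselStats(radius_mm=(1.5, 4.0), parent_length_mm=(20.0, 80.0)),
}

def validate_candidate(landmark: str, radius_mm: float, parent_length_mm: float) -> bool:
    """Accept a detected landmark only if its local geometry matches the knowledge base."""
    stats = KNOWLEDGE_BASE[landmark]
    return (stats.radius_mm[0] <= radius_mm <= stats.radius_mm[1]
            and stats.parent_length_mm[0] <= parent_length_mm <= stats.parent_length_mm[1])

print(validate_candidate("carotid_bifurcation", radius_mm=3.2, parent_length_mm=75.0))  # True
```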

  10. Meliponiculture in Quilombola communities of Ipiranga and Gurugi, Paraíba state, Brazil: an ethnoecological approach

    PubMed Central

    2014-01-01

    Background The Quilombola communities of Ipiranga and Gurugi, located in the Atlantic Rainforest in the south of Paraíba state, have stories that are interwoven throughout time. The practice of meliponiculture has been carried out for generations in these social groups and has produced an elaborate body of ecological knowledge about native stingless bees, the melliferous flora and the management techniques used. The traditional knowledge that the Quilombola have of stingless bees is of utmost importance for the establishment of conservation strategies for many species. Methods To deepen the study of the beekeepers' ecological knowledge, participant observation was used together with structured and semi-structured interviews, as well as the collection of the entomological and botanical categories of bees and plants mentioned. With the aim of recording knowledge related to the meliponiculture previously practiced by the residents, the oral history method was employed. Results and discussion Results show that the informants sampled possess knowledge of twelve categories of stingless bees (Apidae: Meliponini), classified according to morphological, behavioral and ecological characteristics. Their management techniques are represented by the making of the traditional cortiço, and the melliferous flora is composed of many species predominant in the Atlantic Rainforest. From recording the memories and recollections of the individuals, it was observed that an intricate system of beliefs has permeated the keeping of uruçu bees (Melipona scutellaris) for generations. Conclusion According to the management techniques used by the beekeepers, the keeping of stingless bees in the communities is considered a traditional activity that is embedded within a network of ecological knowledge and beliefs accumulated by generations over time, and is undergoing a process of transformation that provides new meanings to such knowledge, as can be observed in the practices of young people. PMID:24410767

  11. Critical review on the mechanisms of maturation stress generation in trees.

    PubMed

    Alméras, Tancrède; Clair, Bruno

    2016-09-01

    Trees control their posture by generating asymmetric mechanical stress around the periphery of the trunk or branches. This stress is produced in wood during the maturation of the cell wall. When the need for reaction is high, it is accompanied by strong changes in cell organization and composition called reaction wood, namely compression wood in gymnosperms and tension wood in angiosperms. The process by which stress is generated in the cell wall during its formation is not yet known, and various hypothetical mechanisms have been proposed in the literature. Here we aim at discriminating between these models. First, we summarize current knowledge about reaction wood structure, state and behaviour relevant to the understanding of maturation stress generation. Then, the mechanisms proposed in the literature are listed and discussed in order to identify which can be rejected based on their inconsistency with current knowledge at the frontier between plant science and mechanical engineering. © 2016 The Author(s).

  12. Assessing knowledge ambiguity in the creation of a model based on expert knowledge and comparison with the results of a landscape succession model in central Labrador. Chapter 10.

    Treesearch

    Frederik Doyon; Brian Sturtevant; Michael J. Papaik; Andrew Fall; Brian Miranda; Daniel D. Kneeshaw; Christian Messier; Marie-Josee Fortin; Patrick M.A. James

    2012-01-01

    Sustainable forest management (SFM) recognizes that the spatial and temporal patterns generated at different scales by natural landscape and stand dynamics processes should serve as a guide for managing the forest within its range of natural variability. Landscape simulation modeling is a powerful tool that can help encompass such complexity and support SFM planning....

  13. The Power of Storytelling: A Native Hawaiian Approach to Science Communication

    NASA Astrophysics Data System (ADS)

    Frank, K. L.

    2016-12-01

    Generational assimilation of observational data enabled Native Hawaiians to preserve a holistic understanding of the connectivity, structure and function - from mountain to sea - within their island ecosystems. Their intimate understandings of the geographic and temporal variability in winds, rains, and currents, and how these factors governed the extent and distribution of biodiversity were perpetuated through stories, songs and chants. Many of these oral histories - which conveyed information via anthropomorphized characters in entertaining and engaging plots - preserved the scientific integrity of traditional phenomenological observations and remain shockingly consistent with contemporary biogeochemical and geophysical observations. These indigenous methods of communicating scientific knowledge are clear models for contemporary best practices in geoscience communication. Storytelling is a tried and true mechanism that both engages and teaches diverse audiences of all ages, ethnicities and skill levels. Scientific storytelling - which can either be examinations of indigenous stories through scientific lenses, or generations of new stories based on scientific observation - enables multiple layers of meaning and levels of knowledge acquisition that bridge cultural and historical place-based knowledge with contemporary knowledge systems. Here, I will share my journey of optimizing the engagement of Native Hawaiian communities (students, land managers, stewards, practitioners, etc…) with my biogeochemical research on a Native Hawaiian coastal estuarine environment (Heʻeia Fishpond). I will speak about the importance and effectiveness of disseminating research in culturally accessible formats by framing research in the context of traditional knowledge to help elevate the perception of "science" in the Hawaiian community.

  14. Biomedical physics in continuing medical education: an analysis of learning needs.

    PubMed

    Rotomskis, Ricardas; Karenauskaite, Violeta; Balzekiene, Aiste

    2009-01-01

    To examine the learning and practice needs of medical professionals in the field of continuing education of biomedical physics in Lithuania. The study was based on a questionnaire survey of 309 medical professionals throughout Lithuania, 3 focus group discussions, and 18 interviews with medical and physics experts. The study showed that medical professionals lack knowledge of physics: only 15.1% of the respondents admitted that they had enough knowledge in biomedical physics to understand the functioning of the medical devices that they used, and 7.5% of respondents indicated that they had enough knowledge to understand and adopt medical devices of the new generation. Physics knowledge was valued more highly by medical professionals with scientific degrees. As regards continuing medical education, it was revealed that personal motivation (88.7%) and responsibility for patients (44.3%) were the most important motives for upgrading competencies, whereas workload (65.4%) and financial limits (45.3%) were the main obstacles. The most popular teaching methods were those based on practical work (78.9%), and the least popular was project work (27.8%). The study revealed that biomedical physics knowledge was needed in both specializations and practical work, and the most important factor for determining its need was professional aspirations. Medical professionals' understanding of medical devices, especially those of the new generation, is essentially functional in nature. Professional upgrading courses contain only fragmented biomedical physics content, and new courses should be developed jointly by experts in physics and medicine to meet the specialized needs of medical professionals.

  15. A digital protection system incorporating knowledge based learning

    NASA Astrophysics Data System (ADS)

    Watson, Karan; Russell, B. Don; McCall, Kurt

    A digital system architecture used to diagnose the operating state and health of electric distribution lines and to generate actions for line protection is presented. The architecture is described functionally and, to a limited extent, at the hardware level. This architecture incorporates multiple analysis and fault-detection techniques utilizing a variety of parameters. In addition, a knowledge-based decision maker, a long-term memory retention and recall scheme, and a learning environment are described. Preliminary laboratory implementations of the system elements have been completed. Enhanced protection for electric distribution feeders is provided by this system. Advantages of the system are enumerated.

  16. Contextualizing Next Generation Science Standards to Guide Climate Education in the U.S. Affiliated Pacific Islands (USAPI)

    NASA Astrophysics Data System (ADS)

    Sussman, A.; Fletcher, C. H.; Sachs, J. P.

    2012-12-01

    The USAPI has a population of about 1,800,000 people spread across 4.9 million square miles of the Pacific Ocean. The Pacific Islands are characterized by a multitude of indigenous cultures and languages. Many USAPI students live considerably below the poverty line. The Pacific Island region is projected to experience some of the most profound negative impacts of climate change considerably sooner than other regions. Funded by the National Science Foundation (NSF), the Pacific Islands Climate Education Partnership (PCEP) has developed a detailed strategic plan to collaboratively improve climate knowledge among the region's students and citizens in ways that exemplify modern science and indigenous environmental knowledge, address the urgency of climate change impacts, and honor indigenous cultures. Students and citizens within the region will have the knowledge and skills to advance understanding of climate change, and to adapt to its impacts. Core PCEP partners contribute expertise in climate science, the science of learning, the region's education infrastructure, and the region's cultures and indigenous knowledge and practices. PCEP's strategic education plan is guided by a general, multidisciplinary K-14 Climate Education Framework (CEF) that organizes fundamental science concepts and practices within appropriate grade-span progressions. This CEF is based largely upon the National Research Council's "A Framework for K-12 Science Education: Practices, Crosscutting Concepts, and Core Ideas" and the emerging Next Generation Science Standards. While the CEF is based upon these national Next Generation documents, it is also informed and strongly influenced by the region's geographic, climatic, cultural and socioeconomic contexts, notably indigenous knowledge and practices. Guided by the CEF, the PCEP in its initial development/planning phase has prototyped regional approaches to professional development, contextualizing curricula, and supporting community/school partnerships. With new, multiyear NSF implementation funding, the PCEP is building upon these prototypes and the strategic education plan to transform climate education across the region. Examples include a program of climate education certification being developed among the region's community colleges; research-based professional development focused on improving teachers' pedagogical content knowledge that has demonstrated striking success with both teacher and student outcomes; regional curricula based on local ecosystems and in local languages as well as English; and local school/community partnerships that combine the climate education work with local community climate adaptation projects. PCEP's interactive web-based environment (http://pcep.dsp.wested.org) interlinks the region's locations, organizations and people with information about climate science and climate impacts. This system enables the region's diverse stakeholders to access and contribute to the same information pool. This web-based environment both supports the development of PCEP resources such as the CEF and their continuing evolution and dissemination.

  17. Interactive knowledge networks for interdisciplinary course navigation within Moodle.

    PubMed

    Scherl, Andre; Dethleffsen, Kathrin; Meyer, Michael

    2012-12-01

    Web-based hypermedia learning environments are widely used in modern education and seem particularly well suited for interdisciplinary learning. Previous work has identified guidance through these complex environments as a crucial problem of their acceptance and efficiency. We reasoned that map-based navigation might provide straightforward and effortless orientation. To achieve this, we developed a clickable and user-oriented concept map-based navigation plugin. This tool is implemented as an extension of Moodle, a widely used learning management system. It visualizes inner and interdisciplinary relations between learning objects and is generated dynamically depending on user set parameters and interactions. This plugin leaves the choice of navigation type to the user and supports direct guidance. Previously developed and evaluated face-to-face interdisciplinary learning materials bridging physiology and physics courses of a medical curriculum were integrated as learning objects, the relations of which were defined by metadata. Learning objects included text pages, self-assessments, videos, animations, and simulations. In a field study, we analyzed the effects of this learning environment on physiology and physics knowledge as well as the transfer ability of third-term medical students. Data were generated from pre- and posttest questionnaires and from tracking student navigation. Use of the hypermedia environment resulted in a significant increase of knowledge and transfer capability. Furthermore, the efficiency of learning was enhanced. We conclude that hypermedia environments based on Moodle and enriched by concept map-based navigation tools can significantly support interdisciplinary learning. Implementation of adaptivity may further strengthen this approach.

  18. Automatic capture of attention by conceptually generated working memory templates.

    PubMed

    Sun, Sol Z; Shen, Jenny; Shaw, Mark; Cant, Jonathan S; Ferber, Susanne

    2015-08-01

    Many theories of attention propose that the contents of working memory (WM) can act as an attentional template, which biases processing in favor of perceptually similar inputs. While support has been found for this claim, it is unclear how attentional templates are generated when searching real-world environments. We hypothesized that in naturalistic settings, attentional templates are commonly generated from conceptual knowledge, an idea consistent with sensorimotor models of knowledge representation. Participants performed a visual search task in the delay period of a WM task, where the item in memory was either a colored disk or a word associated with a color concept (e.g., "Rose," associated with red). During search, we manipulated whether a singleton distractor in the array matched the contents of WM. Overall, we found that search times were impaired in the presence of a memory-matching distractor. Furthermore, the degree of impairment did not differ based on the contents of WM. Put differently, regardless of whether participants were maintaining a perceptually colored disk identical to the singleton distractor, or whether they were simply maintaining a word associated with the color of the distractor, the magnitude of attentional capture was the same. Our results suggest that attentional templates can be generated from conceptual knowledge, in the physical absence of the visual feature.

  19. A generative tool for building health applications driven by ISO 13606 archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Martínez-Costa, Catalina; Fernández-Breis, Jesualdo Tomás

    2012-10-01

    The use of Electronic Healthcare Records (EHR) standards in the development of healthcare applications is crucial for achieving the semantic interoperability of clinical information. Advanced EHR standards make use of the dual model architecture, which provides a solution for clinical interoperability based on the separation of the information and knowledge. However, the impact of such standards is biased by the limited availability of tools that facilitate their usage and practical implementation. In this paper, we present an approach for the automatic generation of clinical applications for the ISO 13606 EHR standard, which is based on the dual model architecture. This generator has been generically designed, so it can be easily adapted to other dual model standards and can generate applications for multiple technological platforms. Such good properties are based on the combination of standards for the representation of generic user interfaces and model-driven engineering techniques.

  20. A generator for unique quantum random numbers based on vacuum states

    NASA Astrophysics Data System (ADS)

    Gabriel, Christian; Wittmann, Christoffer; Sych, Denis; Dong, Ruifang; Mauerer, Wolfgang; Andersen, Ulrik L.; Marquardt, Christoph; Leuchs, Gerd

    2010-10-01

    Random numbers are a valuable component in diverse applications that range from simulations over gambling to cryptography. The quest for true randomness in these applications has engendered a large variety of different proposals for producing random numbers based on the foundational unpredictability of quantum mechanics. However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique. Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source, combined with its verifiably unique randomness, are important attributes for achieving high-reliability, high-speed and low-cost quantum random number generators.
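    The post-processing step of such a generator can be illustrated with a toy example: treat zero-mean Gaussian samples as stand-ins for the measured vacuum quadratures and turn them into bits by thresholding at the median. The sketch below is only illustrative; a real device would digitize actual homodyne data and apply a randomness extractor calibrated to the measured entropy rather than this naive sign test.

```python
import numpy as np

rng = np.random.default_rng()  # stands in for the homodyne detector output

# Quadrature measurements of the vacuum state follow a zero-mean Gaussian.
quadratures = rng.normal(loc=0.0, scale=1.0, size=10_000)

# Naive binary extraction: sign of each sample relative to the sample median.
# A real QRNG would apply a seeded randomness extractor sized to the measured
# min-entropy; this is purely an illustrative sketch.
bits = (quadratures > np.median(quadratures)).astype(np.uint8)

print("first 32 bits:", "".join(map(str, bits[:32])))
print("bias from 0.5:", abs(bits.mean() - 0.5))
```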

  1. Measurement-Based Investigation of Inter- and Intra-Area Effects of Wind Power Plant Integration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Alicia J.; Singh, Mohit; Muljadi, Eduard

    This paper has a two-pronged objective: the first objective is to analyze the general effects of wind power plant (WPP) integration and the resulting displacement of conventional power plant (CPP) inertia on power system stability, and the second is to demonstrate the efficacy of PMU data in power system stability analyses, specifically when knowledge of the network is incomplete. Traditionally, modal analysis applies small-signal stability analysis based on eigenvalues and the assumption of complete knowledge of the network and all of its components. The analysis presented here differs because it is a measurement-based investigation and employs simulated measurement data. Even if knowledge of the network were incomplete, this methodology would allow for monitoring and analysis of modes. This allows non-utility entities to study power system stability. To generate inter- and intra-area modes, Kundur's well-known two-area four-generator system is modeled in PSCAD/EMTDC. A doubly-fed induction generator based WPP model, based on the Western Electricity Coordinating Council (WECC) standard model, is included to analyze the effects of wind power on system modes. The two-area system and WPP are connected in various configurations with respect to WPP placement, CPP inertia and WPP penetration level. Analysis is performed on the data generated by the simulations. For each simulation run, a different configuration is chosen and a large disturbance is applied. The sampling frequency is set to resemble the sampling frequency at which data is available from phasor measurement units (PMUs). The estimate of power spectral density of these signals is made using the Yule-Walker algorithm. The resulting analysis shows that the presence of a WPP does not, of itself, lead to the introduction of new modes. The analysis also shows, however, that displacement of inertia may lead to the introduction of new modes. The effects of location of inertia displacement (i.e. the effects on modes if WPP integration leads to displacement of inertia in its own region or in another region) and of WPP controls such as droop control and synthetic inertia are also examined. In future work, the methods presented here will be applied to real-world phasor data to examine the effects of integration of variable generation and displacement of CPP inertia on inter- and intra-area modes.
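    The Yule-Walker step can be sketched directly: fit an autoregressive model to a PMU-like signal by solving the Yule-Walker equations and evaluate the resulting AR spectrum to locate oscillatory modes. The synthetic ring-down signal, sampling rate and AR order below are illustrative assumptions, not the study's simulation data.

```python
import numpy as np
from scipy.linalg import toeplitz

fs = 30.0                       # typical PMU reporting rate, samples/s (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic ring-down with a 0.6 Hz inter-area and a 1.4 Hz local mode plus noise.
x = (np.exp(-0.05 * t) * np.sin(2 * np.pi * 0.6 * t)
     + 0.5 * np.exp(-0.10 * t) * np.sin(2 * np.pi * 1.4 * t)
     + 0.05 * rng.standard_normal(t.size))

def yule_walker_psd(x, order, freqs, fs):
    """Estimate an AR(order) model via the Yule-Walker equations and return its PSD."""
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size   # biased autocorrelation
    a = np.linalg.solve(toeplitz(r[:order]), r[1:order + 1])    # AR coefficients
    sigma2 = r[0] - a @ r[1:order + 1]                          # innovation variance
    z = np.exp(-2j * np.pi * freqs / fs)
    denom = 1 - sum(a[k] * z ** (k + 1) for k in range(order))
    return sigma2 / (fs * np.abs(denom) ** 2)

freqs = np.linspace(0.1, 3.0, 300)
psd = yule_walker_psd(x, order=20, freqs=freqs, fs=fs)
print("dominant mode near %.2f Hz" % freqs[np.argmax(psd)])
```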

  2. A knowledge-based design framework for airplane conceptual and preliminary design

    NASA Astrophysics Data System (ADS)

    Anemaat, Wilhelmus A. J.

    The goal of the work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure which can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment, with the same AAA methods (i.e., the knowledge), is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This will lead to the following benefits: (1) Reduced design time: computer-aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: due to less training and fewer calculation errors, substantial savings in design time and related cost can be obtained. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge-based design have been developed for detailed design, currently no such integrated knowledge-based conceptual and preliminary airplane design system exists. The third-generation AAA methods have been tested over a ten-year period on many different airplane designs. Using AAA methods will demonstrate significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single-engine propeller airplanes, business jets, airliners, and UAVs to fighters. Data for the varied sizing methods will be compared with AAA results, to validate these methods. One new design, a Light Sport Aircraft (LSA), will be developed as an exercise to use the tool for designing a new airplane. Using these tools will show an improvement in efficiency over using separate programs due to the automatic recalculation with any change of input data. The direct visual feedback of 3D geometry in the AAA-AML will lead to quicker resolution of problems as opposed to conventional methods.

  3. The representation of semantic knowledge in a child with Williams syndrome.

    PubMed

    Robinson, Sally J; Temple, Christine M

    2009-05-01

    This study investigated whether there are distinct types of semantic knowledge with distinct representational bases during development. The representation of semantic knowledge in a teenage child (S.T.) with Williams syndrome was explored for the categories of animals, fruit, and vegetables, manipulable objects, and nonmanipulable objects. S.T.'s lexical stores were of a normal size but the volume of "sensory feature" semantic knowledge she generated in oral descriptions was reduced. In visual recognition decisions, S.T. made more false positives to nonitems than did controls. Although overall naming of pictures was unimpaired, S.T. exhibited a category-specific anomia for nonmanipulable objects and impaired naming of visual-feature descriptions of animals. S.T.'s performance was interpreted as reflecting the impaired integration of distinctive features from perceptual input, which may impact upon nonmanipulable objects to a greater extent than the other knowledge categories. Performance was used to inform adult-based models of semantic representation, with category structure proposed to emerge due to differing degrees of dependency upon underlying knowledge types, feature correlations, and the acquisition of information from modality-specific processing modules.

  4. From hospital information system components to the medical record and clinical guidelines & protocols.

    PubMed

    Veloso, M; Estevão, N; Ferreira, P; Rodrigues, R; Costa, C T; Barahona, P

    1997-01-01

    This paper introduces an ongoing project towards the development of a new generation HIS, aiming at the integration of clinical and administrative information within a common framework. Its design incorporates explicit knowledge about domain objects and professional activities to be processed by the system together with related knowledge management services and act management services. The paper presents the conceptual model of the proposed HIS architecture, that supports a rich and fully integrated patient data model, enabling the implementation of a dynamic electronic patient record tightly coupled with computerised guideline knowledge bases.

  5. Extending Cross-Generational Knowledge Flow Research in Edge Organizations

    DTIC Science & Technology

    2008-06-01

    ...letting Protégé generate the basic user interface, and then gradually write widgets and plug-ins to customize its look-and-feel and behavior. ... (2007a) focused on cross-generational knowledge flows in edge organizations. We found that cross-generational biases affect tacit knowledge transfer... In the software engineering field, many matured methodologies already exist, such as Rational Unified Process (Hunt, 2003) or Extreme Programming (Beck

  6. High-power, cladding-pumped all-fiber laser with selective transverse mode generation property.

    PubMed

    Li, Lei; Wang, Meng; Liu, Tong; Leng, Jinyong; Zhou, Pu; Chen, Jinbao

    2017-06-10

    We demonstrate, to the best of our knowledge, the first cladding-pumped all-fiber oscillator configuration with selective transverse mode generation based on a mode-selective fiber Bragg grating pair. Operating in the second-order (LP11) mode, a maximum output power of 4.2 W is obtained with a slope efficiency of about 38%. This is the highest reported output power of single higher-order transverse mode generation in an all-fiber configuration. The intensity distribution profile and spectral evolution have also been investigated in this paper. Our work suggests the potential of realizing higher power with selective transverse mode operation based on a mode-selective fiber Bragg grating pair.

  7. Bermuda Triangle or three to tango: generation Y, e-health and knowledge management.

    PubMed

    Yee, Kwang Chien

    2007-01-01

    Generation Y workers are slowly gathering critical mass in the healthcare sector. The sustainability of future healthcare is highly dependent on this group of workers. This generation of workers loves technology and thrives in stimulating environments. They have a great thirst for life experience and therefore move from one working environment to another. The healthcare system has a hierarchical operational, information and knowledge structure, which unfortunately might not be the ideal ground to integrate with generation Y. The challenges ahead present a fantastic opportunity for electronic health implementation and knowledge management to flourish. Generation Y workers, however, have very different expectations of technology utilisation, technology design and knowledge presentation. This paper will argue that a clear understanding of this group of workers is essential for researchers in health informatics and knowledge management in order to provide integrated socio-technical solutions for this group of future workers. The sustainability of a quality healthcare system will depend upon the integration of generation Y, health informatics and knowledge management strategies in a re-invented healthcare system.

  8. Use of knowledge-sharing web-based portal in gross and microscopic anatomy.

    PubMed

    Durosaro, Olayemi; Lachman, Nirusha; Pawlina, Wojciech

    2008-12-01

    Changes in worldwide healthcare delivery require a review of current medical school curriculum structure to develop learning outcomes that ensure mastery of knowledge and clinical competency. In the last 3 years, Mayo Medical School implemented an outcomes-based curriculum to encompass new graduate outcomes. Standard courses were replaced by 6-week clinically-integrated didactic blocks separated by student self-selected academic enrichment activities. Gross and microscopic anatomy were integrated with radiology and genetics, respectively. Laboratory components include virtual microscopy and anatomical dissection. Students assigned to teams utilise computer portals to share learning experiences. High-resolution computed tomographic (CT) scans of cadavers prior to dissection were made available for correlative learning between the cadaveric material and radiologic images. Students work in teams on assigned presentations that include histology, cell and molecular biology, genetics and genomics using the Nexus Portal, based on DrupalEd, to share their observations, reflections and dissection findings. The new generation of medical students is clearly comfortable utilising web-based programmes that maximise their learning potential in conceptually difficult and labor-intensive courses. A team-based learning approach emphasising the use of knowledge-sharing computer portals maximises opportunities for students to master their knowledge and improve cognitive skills to ensure clinical competency.

  9. Understanding natural language for spacecraft sequencing

    NASA Technical Reports Server (NTRS)

    Katz, Boris; Brooks, Robert N., Jr.

    1987-01-01

    The paper describes a natural language understanding system, START, that translates English text into a knowledge base. The understanding and the generating modules of START share a Grammar which is built upon reversible transformations. Users can retrieve information by querying the knowledge base in English; the system then produces an English response. START can be easily adapted to many different domains. One such domain is spacecraft sequencing. A high-level overview of sequencing as it is practiced at JPL is presented in the paper, and three areas within this activity are identified for potential application of the START system. Examples are given of an actual dialog with START based on simulated data for the Mars Observer mission.

  10. The Sydney West Knowledge Portal: Evaluating the Growth of a Knowledge Portal to Support Translational Research.

    PubMed

    Janssen, Anna; Robinson, Tracy Elizabeth; Provan, Pamela; Shaw, Tim

    2016-06-29

    The Sydney West Translational Cancer Research Centre is an organization funded to build capacity for translational research in cancer. Translational research is essential for ensuring the integration of best available evidence into practice and for improving patient outcomes. However, there is a low level of awareness regarding what it is and how to conduct it optimally. One solution to addressing this gap is the design and deployment of web-based knowledge portals to disseminate new knowledge and engage with and connect dispersed networks of researchers. A knowledge portal is a web-based platform for increasing knowledge dissemination and management in a specialized area. To measure the design and growth of a web-based knowledge portal for increasing individual awareness of translational research and to build organizational capacity for the delivery of translational research projects in cancer. An adaptive methodology was used to capture the design and growth of a web-based knowledge portal in cancer. This involved stakeholder consultations to inform initial design of the portal. Once the portal was live, site analytics were reviewed to evaluate member usage of the portal and to measure growth in membership. Knowledge portal membership grew consistently for the first 18 months after deployment, before leveling out. Analysis of site metrics revealed members were most likely to visit portal pages with community-generated content, particularly pages with a focus on translational research. This was closely followed by pages that disseminated educational material about translational research. Preliminary data from this study suggest that knowledge portals may be beneficial tools for translating new evidence and fostering an environment of communication and collaboration.

  11. The Sydney West Knowledge Portal: Evaluating the Growth of a Knowledge Portal to Support Translational Research

    PubMed Central

    2016-01-01

    Background The Sydney West Translational Cancer Research Centre is an organization funded to build capacity for translational research in cancer. Translational research is essential for ensuring the integration of best available evidence into practice and for improving patient outcomes. However, there is a low level of awareness regarding what it is and how to conduct it optimally. One solution to addressing this gap is the design and deployment of web-based knowledge portals to disseminate new knowledge and engage with and connect dispersed networks of researchers. A knowledge portal is a web-based platform for increasing knowledge dissemination and management in a specialized area. Objective To measure the design and growth of a web-based knowledge portal for increasing individual awareness of translational research and to build organizational capacity for the delivery of translational research projects in cancer. Methods An adaptive methodology was used to capture the design and growth of a web-based knowledge portal in cancer. This involved stakeholder consultations to inform initial design of the portal. Once the portal was live, site analytics were reviewed to evaluate member usage of the portal and to measure growth in membership. Results Knowledge portal membership grew consistently for the first 18 months after deployment, before leveling out. Analysis of site metrics revealed members were most likely to visit portal pages with community-generated content, particularly pages with a focus on translational research. This was closely followed by pages that disseminated educational material about translational research. Conclusions Preliminary data from this study suggest that knowledge portals may be beneficial tools for translating new evidence and fostering an environment of communication and collaboration. PMID:27357641

  12. Bridging the Field Trip Gap: Integrating Web-Based Video as a Teaching and Learning Partner in Interior Design Education

    ERIC Educational Resources Information Center

    Roehl, Amy

    2013-01-01

    This study utilizes web-based video as a strategy to transfer knowledge about the interior design industry in a format that interests the current generation of students. The model of instruction developed is based upon online video as an engaging, economical, and time-saving alternative to a field trip, guest speaker, or video teleconference.…

  13. Building a knowledge-based statistical potential by capturing high-order inter-residue interactions and its applications in protein secondary structure assessment.

    PubMed

    Li, Yaohang; Liu, Hui; Rata, Ionel; Jakobsson, Eric

    2013-02-25

    The rapidly increasing number of protein crystal structures available in the Protein Data Bank (PDB) has naturally made statistical analyses feasible in studying complex high-order inter-residue correlations. In this paper, we report a context-based secondary structure potential (CSSP) for assessing the quality of predicted protein secondary structures generated by various prediction servers. CSSP is a sequence-position-specific knowledge-based potential generated based on the potentials of mean force approach, where high-order inter-residue interactions are taken into consideration. The CSSP potential is effective in identifying secondary structure predictions with good quality. In 56% of the targets in the CB513 benchmark, the optimal CSSP potential is able to recognize the native secondary structure or a prediction with Q3 accuracy higher than 90% as best scored in the predicted secondary structures generated by 10 popularly used secondary structure prediction servers. In more than 80% of the CB513 targets, the predicted secondary structures with the lowest CSSP potential values yield higher than 80% Q3 accuracy. Similar performance of CSSP is found on the CASP9 targets as well. Moreover, our computational results also show that the CSSP potential using triplets outperforms the CSSP potential using doublets and is currently better than the CSSP potential using quartets.
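    The potentials-of-mean-force idea behind such a knowledge-based potential can be illustrated with a much smaller toy: count how often short secondary-structure contexts occur in a reference set, convert the frequencies into pseudo-energies with an inverse-Boltzmann relation, and score predictions by summing those energies. The window size, reference state and training strings below are invented for illustration and are far simpler than the CSSP construction.

```python
import math
from collections import Counter
from itertools import product

STATES = "HEC"  # helix, strand, coil

def train_context_potential(structures, k=3):
    """Inverse-Boltzmann pseudo-energies for k-residue secondary-structure contexts."""
    observed = Counter()
    for ss in structures:
        observed.update(ss[i:i + k] for i in range(len(ss) - k + 1))
    total = sum(observed.values())
    uniform = 1.0 / len(STATES) ** k  # simplistic reference state
    potential = {}
    for ctx in map("".join, product(STATES, repeat=k)):
        p_obs = (observed[ctx] + 1) / (total + len(STATES) ** k)  # add-one smoothing
        potential[ctx] = -math.log(p_obs / uniform)
    return potential

def score(prediction, potential, k=3):
    """Lower total pseudo-energy means a more native-like prediction."""
    return sum(potential[prediction[i:i + k]] for i in range(len(prediction) - k + 1))

pmf = train_context_potential(["CCHHHHHHCCEEEECC", "CCEEEECCHHHHHCC"])
print(score("CCHHHHHHCCEEEECC", pmf) < score("CHCHCHCHCHCHCHCH", pmf))  # True
```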

  14. Improving the Acquisition of Basic Technical Surgical Skills with VR-Based Simulation Coupled with Computer-Based Video Instruction.

    PubMed

    Rojas, David; Kapralos, Bill; Dubrowski, Adam

    2016-01-01

    Next to practice, feedback is the most important variable in skill acquisition. Feedback can vary in content and in the way it is delivered. Health professions education research has extensively examined the different effects provided by the different feedback methodologies. In this paper, we compared two different types of knowledge of performance (KP) feedback. The first type was video-based KP feedback, while the second type consisted of computer-generated KP feedback. Results of this study showed that computer-generated performance feedback is more effective than video-based performance feedback. The combination of the two feedback methodologies provides trainees with a better understanding.

  15. The dynamics of team cognition: A process-oriented theory of knowledge emergence in teams.

    PubMed

    Grand, James A; Braun, Michael T; Kuljanin, Goran; Kozlowski, Steve W J; Chao, Georgia T

    2016-10-01

    Team cognition has been identified as a critical component of team performance and decision-making. However, theory and research in this domain continues to remain largely static; articulation and examination of the dynamic processes through which collectively held knowledge emerges from the individual- to the team-level is lacking. To address this gap, we advance and systematically evaluate a process-oriented theory of team knowledge emergence. First, we summarize the core concepts and dynamic mechanisms that underlie team knowledge-building and represent our theory of team knowledge emergence (Step 1). We then translate this narrative theory into a formal computational model that provides an explicit specification of how these core concepts and mechanisms interact to produce emergent team knowledge (Step 2). The computational model is next instantiated into an agent-based simulation to explore how the key generative process mechanisms described in our theory contribute to improved knowledge emergence in teams (Step 3). Results from the simulations demonstrate that agent teams generate collectively shared knowledge more effectively when members are capable of processing information more efficiently and when teams follow communication strategies that promote equal rates of information sharing across members. Lastly, we conduct an empirical experiment with real teams participating in a collective knowledge-building task to verify that promoting these processes in human teams also leads to improved team knowledge emergence (Step 4). Discussion focuses on implications of the theory for examining team cognition processes and dynamics as well as directions for future research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
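    A toy agent-based version of the sharing-rate result can be sketched as follows: agents start with private facts, share them at either equal or unequal per-agent rates, and team knowledge is measured as the mean fraction of all facts each member holds. The mechanism and parameters are deliberately simplified stand-ins for the published computational model, not a reimplementation of it.

```python
import random

random.seed(1)

def simulate(share_probs, n_facts_each=5, steps=60):
    """Each agent starts with private facts; at each step an agent may share one fact."""
    n = len(share_probs)
    all_facts = set(range(n * n_facts_each))
    memory = [set(range(i * n_facts_each, (i + 1) * n_facts_each)) for i in range(n)]
    for _ in range(steps):
        for i, p in enumerate(share_probs):
            if memory[i] and random.random() < p:
                fact = random.choice(sorted(memory[i]))
                listener = random.choice([j for j in range(n) if j != i])
                memory[listener].add(fact)
    return sum(len(m) / len(all_facts) for m in memory) / n  # mean knowledge coverage

equal = simulate([0.5, 0.5, 0.5, 0.5])
unequal = simulate([0.9, 0.5, 0.3, 0.1])
print(f"mean coverage, equal sharing rates:   {equal:.2f}")
print(f"mean coverage, unequal sharing rates: {unequal:.2f}")
```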

  16. Critical maternal health knowledge gaps in low- and middle-income countries for the post-2015 era.

    PubMed

    Kendall, Tamil; Langer, Ana

    2015-06-05

    Effective interventions to promote maternal health and address obstetric complications exist; however, 800 women die every day during pregnancy and childbirth from largely preventable causes, and more than 90% of these deaths occur in low- and middle-income countries (LMIC). In 2014, the Maternal Health Task Force consulted 26 global maternal health researchers to identify persistent and critical knowledge gaps to be filled to reduce maternal morbidity and mortality and improve maternal health. The vision of maternal health articulated was comprehensive, and priorities for knowledge generation encompassed improving the availability, accessibility, acceptability, and quality of institutional labor and delivery services and other effective interventions, such as contraception and safe abortion services. Respondents emphasized the need for health systems research to identify models that can deliver what is known to be effective to prevent and treat the main causes of maternal death at scale in different contexts and to sustain coverage and quality over time. Researchers also emphasized the development of tools to measure quality of care and promote ongoing quality improvement at the facility, district, and national level. Knowledge generation to improve distribution and retention of healthcare workers, facilitate task shifting, develop and evaluate training models to improve "hands-on" skills and promote evidence-based practice, and increase managerial capacity at different levels of the health system was also prioritized. Interviewees noted that attitudes, behavior, and power relationships between health professionals and within institutions must be transformed to achieve coverage of high-quality maternal health services in LMIC. The increasing burden of non-communicable diseases, urbanization, and the persistence of social and economic inequality were identified as emerging challenges that require knowledge generation to improve health system responses and evaluate progress. Respondents emphasized evaluating effectiveness, feasibility, and equity impacts of health system interventions. A prominent role for implementation science, evidence for policy advocacy, and interdisciplinary collaboration were identified as critical areas for knowledge generation to improve maternal health in the post-2015 era.

  17. The Interaction Systems Generated by the Teacher's Didactic Imprinting

    ERIC Educational Resources Information Center

    Peralta, Nadia S.; Roselli, Néstor D.

    2015-01-01

    The current study aimed to identify and analyze the systems of interaction implemented by teachers in university classes, based on their teaching imprints. It focused on the interactions that occurred in natural school contexts and on the construction of knowledge based on those interactions. A form to observe the different behaviors was designed in order…

  18. Designing a Strategic Plan through an Emerging Knowledge Generation Process: The ATM Experience

    ERIC Educational Resources Information Center

    Zanotti, Francesco

    2012-01-01

    Purpose: The aim of this contribution is to describe a new methodology for designing strategic plans and how it was implemented by ATM, a public transportation agency based in Milan, Italy. Design/methodology/approach: This methodology is founded on a new system theory, called "quantum systemics". It is based on models and metaphors both…

  19. Exploring Teacher Use of an Online Forum to Develop Game-Based Learning Literacy

    ERIC Educational Resources Information Center

    Barany, Amanda; Shah, Mamta; Foster, Aroutis

    2017-01-01

    Game-based learning researchers have emphasized the importance of teachers' game literacy and knowledge of pedagogical approaches involved in successfully adopting an instructional approach (Bell and Gresalfi, 2017). In this paper, we describe findings from an online resource that teachers used to generate a repository of games for use both during…

  20. The Brains behind Brain-Based Research: The Tale of Two Postsecondary Online Learners

    ERIC Educational Resources Information Center

    McGuckin, Dawn; Ladhani, Mubeen

    2010-01-01

    This paper is written from the perspective of two postsecondary students who realized the implications for brain-based learning in the online environment. This paper explores the relationship between online learning in regards to how the brain generates meaning and understanding, the role of emotions, the collaborative construction of knowledge,…

  1. Knowledge-based approach to system integration

    NASA Technical Reports Server (NTRS)

    Blokland, W.; Krishnamurthy, C.; Biegl, C.; Sztipanovits, J.

    1988-01-01

    To solve complex problems, one can often use the decomposition principle. However, a problem is seldom decomposable into completely independent subproblems. System integration deals with the problem of resolving the interdependencies and the integration of the subsolutions. A natural method of decomposition is the hierarchical one. High-level specifications are broken down into lower-level specifications until they can be transformed into solutions relatively easily. By automating the hierarchical decomposition and solution generation, an integrated system is obtained in which the declaration of high-level specifications is enough to solve the problem. We offer a knowledge-based approach to integrate the development and building of control systems. Process modeling is supported by using graphic editors. The user selects and connects icons that represent subprocesses and might refer to prewritten programs. The graphical editor assists the user in selecting parameters for each subprocess and allows the testing of a specific configuration. Next, from the definitions created by the graphical editor, the actual control program is built. Fault-diagnosis routines are generated automatically as well. Since the user is not required to write program code and knowledge about the process is present in the development system, the user is not required to have expertise in many fields.

  2. Beyond the classroom: using technology to meet the educational needs of multigenerational perinatal nurses.

    PubMed

    Gallo, Ana-Maria

    2011-01-01

    For the first time in history, there are 4 distinct generations of nurses working side by side at the clinical bedside: Veterans, Baby Boomers, Generation X, and Generation Y. All the generations have their own unique personalities, beliefs, values, and learning styles. Approaches to learning range from the traditional instructional methods preferred by Veteran nurses to the more technology-driven approaches (e.g., Web-based learning, webinars, simulations, podcasts, and blogs) favored by Generation Y. Nurse educators and clinical nurse specialists must consider each generation's style of learning to best engage, stimulate, and promote the transfer and assimilation of new knowledge. This article briefly describes the generational learning style differences and explores educational modalities that are alternatives to traditional classroom instruction.

  3. Constraint methods that accelerate free-energy simulations of biomolecules.

    PubMed

    Perez, Alberto; MacCallum, Justin L; Coutsias, Evangelos A; Dill, Ken A

    2015-12-28

    Atomistic molecular dynamics simulations of biomolecules are critical for generating narratives about biological mechanisms. The power of atomistic simulations is that these are physics-based methods that satisfy Boltzmann's law, so they can be used to compute populations, dynamics, and mechanisms. But physical simulations are computationally intensive and do not scale well to the sizes of many important biomolecules. One way to speed up physical simulations is by coarse-graining the potential function. Another way is to harness structural knowledge, often by imposing spring-like restraints. But harnessing external knowledge in physical simulations is problematic because knowledge, data, or hunches have errors, noise, and combinatoric uncertainties. Here, we review recent principled methods for imposing restraints to speed up physics-based molecular simulations that promise to scale to larger biomolecules and motions.
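    The spring-like restraints mentioned above are typically implemented as simple penalty terms added to the potential energy. The sketch below shows a flat-bottom harmonic distance restraint, a common way to impose noisy or uncertain structural knowledge; the functional form is standard, while the parameter values are illustrative.

```python
import numpy as np

def flat_bottom_restraint(r, r0, width, k):
    """Energy and force of a flat-bottom harmonic distance restraint.

    No penalty while |r - r0| <= width; a spring-like quadratic penalty outside
    that band, which tolerates the uncertainty in the external knowledge.
    """
    excess = np.clip(np.abs(r - r0) - width, 0.0, None)
    energy = 0.5 * k * excess ** 2
    force = -k * excess * np.sign(r - r0)   # force = -dE/dr
    return energy, force

r = np.linspace(2.0, 12.0, 6)               # distances in angstrom (illustrative)
e, f = flat_bottom_restraint(r, r0=6.0, width=1.5, k=2.0)
print(np.round(e, 2))
print(np.round(f, 2))
```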

  4. The generic task toolset: High level languages for the construction of planning and problem solving systems

    NASA Technical Reports Server (NTRS)

    Chandrasekaran, B.; Josephson, J.; Herman, D.

    1987-01-01

    The current generation of languages for the construction of knowledge-based systems is criticized as being at too low a level of abstraction, and the need for higher-level languages for building problem-solving systems is advanced. A notion of generic information processing tasks in knowledge-based problem solving is introduced. A toolset which can be used to build expert systems in a way that enhances intelligibility and productivity in knowledge acquisition and system construction is described. The power of these ideas is illustrated by paying special attention to a high-level language called DSPL. A description is given of how it was used in the construction of a system called MPA, which assists with planning in the domain of offensive counter air missions.

  5. Linguistic Knowledge and Reasoning for Error Diagnosis and Feedback Generation.

    ERIC Educational Resources Information Center

    Delmonte, Rodolfo

    2003-01-01

    Presents four sets of natural language processing-based exercises for which error correction and feedback are produced by means of a rich database in which linguistic information is encoded either at the lexical or the grammatical level. (Author/VWL)

  6. Dual-loop self-optimizing robust control of wind power generation with Doubly-Fed Induction Generator.

    PubMed

    Chen, Quan; Li, Yaoyu; Seem, John E

    2015-09-01

    This paper presents a self-optimizing robust control scheme that can maximize the power generation for a variable speed wind turbine with Doubly-Fed Induction Generator (DFIG) operated in Region 2. A dual-loop control structure is proposed to synergize the conversion from aerodynamic power to rotor power and the conversion from rotor power to the electrical power. The outer loop is an Extremum Seeking Control (ESC) based generator torque regulation via the electric power feedback. The ESC can search for the optimal generator torque constant to maximize the rotor power without wind measurement or accurate knowledge of the power map. The inner loop is a vector-control based scheme that can both regulate the generator torque requested by the ESC and also maximize the conversion from the rotor power to grid power. An ℋ(∞) controller is synthesized for this maximization, with performance specifications defined based upon the spectrum of the rotor power obtained by the ESC. Also, the controller is designed to be robust against the variations of some generator parameters. The proposed control strategy is validated via a simulation study combining several software packages, including TurbSim and FAST developed by NREL, Simulink, and SimPowerSystems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
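
    To make the extremum-seeking idea in the outer loop concrete, the toy sketch below perturbs a torque-gain estimate and climbs toward the maximum of a made-up power map; the power map, gains, and filter constants are illustrative assumptions, not the controller described above.

      # Minimal perturbation-based extremum-seeking sketch (illustrative only,
      # not the paper's DFIG controller).
      import math

      def power_map(k):
          # Hypothetical stand-in for electrical power vs. generator torque gain;
          # the optimum in this toy example is k = 2.0.
          return -(k - 2.0) ** 2 + 4.0

      def extremum_seeking(k0=0.5, t_end=60.0, dt=0.01, a=0.1, omega=5.0, gain=5.0):
          k_hat, p_avg, t = k0, power_map(k0), 0.0
          while t < t_end:
              dither = a * math.sin(omega * t)
              p = power_map(k_hat + dither)                    # measured power feedback
              p_avg += dt * (p - p_avg)                        # slow washout (low-pass) filter
              p_hp = p - p_avg                                 # high-passed power signal
              k_hat += dt * gain * p_hp * math.sin(omega * t)  # demodulate and climb the gradient
              t += dt
          return k_hat

      if __name__ == "__main__":
          print(round(extremum_seeking(), 2))  # settles close to the toy optimum, ~2.0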

  7. What is knowledge and when should it be implemented?

    PubMed

    O'Grady, Laura

    2012-10-01

    A primary purpose of research is to generate new knowledge. Scientific advances have progressively identified optimal ways to achieve this purpose. Included in this evolution are the notions of evidence-based medicine, decision aids, shared decision making, measurement and evaluation as well as implementation. The importance of including qualitative and quantitative methods in our research is now understood. We have debated the meaning of evidence and how to implement it. However, we have yet to consider how to include in our study findings other types of information such as tacit and experiential knowledge. This key consideration needs to take place before we translate new findings or 'knowledge' into clinical practice. This article critiques assumptions regarding the nature of knowledge and suggests a framework for implementing research findings into practice. © 2012 Blackwell Publishing Ltd.

  8. Development of theory-based knowledge translation interventions to facilitate the implementation of evidence-based guidelines on the early management of adults with traumatic spinal cord injury.

    PubMed

    Bérubé, Mélanie; Albert, Martin; Chauny, Jean-Marc; Contandriopoulos, Damien; DuSablon, Anne; Lacroix, Sébastien; Gagné, Annick; Laflamme, Élise; Boutin, Nathalie; Delisle, Stéphane; Pauzé, Anne-Marie; MacThiong, Jean-Marc

    2015-12-01

    Optimal, early management following a spinal cord injury (SCI) can limit individuals' disabilities and costs related to their care. Several knowledge syntheses were recently published to guide health care professionals with regard to early interventions in SCI patients. However, no knowledge translation (KT) intervention, selected according to a behaviour change theory, has been proposed to facilitate the use of SCI guidelines in an acute care setting. To develop theory-informed KT interventions to promote the application of evidence-based recommendations on the acute care management of SCI patients. The first four phases of the knowledge-to-action model were used to establish the study design. Knowledge selection was based on the Grading of Recommendations Assessment, Development and Evaluation system. Knowledge adaptation to the local context was sourced from the ADAPTE process. The theoretical domains framework oriented the selection and development of the interventions based on an assessment of barriers and enablers to knowledge application. Twenty-nine recommendations were chosen and operationalized as measurable clinical indicators. Barriers related to knowledge, skills, perceived capacities, beliefs about consequences, social influences, and the environmental context and resources theoretical domains were identified. The mapping of behaviour change techniques associated with those barriers led to the development of an online educational curriculum, interdisciplinary clinical pathways, as well as policies and procedures. This research project allowed us to develop KT interventions according to a thorough behavioural change methodology. Exposure to the generated interventions will support health care professionals in providing the best care to SCI patients. © 2015 John Wiley & Sons, Ltd.

  9. Framing of scientific knowledge as a new category of health care research.

    PubMed

    Salvador-Carulla, Luis; Fernandez, Ana; Madden, Rosamond; Lukersmith, Sue; Colagiuri, Ruth; Torkfar, Ghazal; Sturmberg, Joachim

    2014-12-01

    The new area of health system research requires a revision of the taxonomy of scientific knowledge that may facilitate a better understanding and representation of complex health phenomena in research discovery, corroboration and implementation. A position paper by an expert group following an iterative approach. 'Scientific evidence' should be differentiated from 'elicited knowledge' of experts and users, and this latter typology should be described beyond the traditional qualitative framework. Within this context, 'framing of scientific knowledge' (FSK) is defined as a group of studies of prior expert knowledge specifically aimed at generating formal scientific frames. To be distinguished from other unstructured frames, FSK must be explicit, standardized, based on the available evidence, agreed by a group of experts and subject to the principles of commensurability, transparency for corroboration and transferability that characterize scientific research. A preliminary typology of scientific framing studies is presented. This typology includes, among others, health declarations, position papers, expert-based clinical guides, conceptual maps, classifications, expert-driven health atlases and expert-driven studies of costs and burden of illness. This grouping of expert-based studies constitutes a different kind of scientific knowledge and should be clearly differentiated from 'evidence' gathered from experimental and observational studies in health system research. © 2014 John Wiley & Sons, Ltd.

  10. Impact of a brief addiction medicine training experience on knowledge self-assessment among medical learners.

    PubMed

    Klimas, Jan; Ahamad, Keith; Fairgrieve, Christoper; McLean, Mark; Mead, Annabel; Nolan, Seonaid; Wood, Evan

    2017-01-01

    Implementation of evidence-based approaches to the treatment of various substance use disorders is needed to tackle the existing epidemic of substance use and related harms. Most clinicians, however, lack knowledge and practical experience with these approaches. Given this deficit, the authors examined the impact of an inpatient elective in addiction medicine amongst medical trainees on addiction-related knowledge and medical management. Trainees who completed an elective with a hospital-based Addiction Medicine Consult Team (AMCT) in Vancouver, Canada, from May 2015 to May 2016, completed a 9-item self-evaluation scale before and immediately after the elective. A total of 48 participants completed both pre and post AMCT elective surveys. On average, participants were 28 years old (interquartile range [IQR] = 27-29) and contributed 20 days (IQR = 13-27) of clinical service. Knowledge of addiction medicine increased significantly post elective (mean difference [MD] = 8.63, standard deviation [SD] = 18.44; P = .002). The most and the least improved areas of knowledge were relapse prevention and substance use screening, respectively. Completion of a clinical elective with a hospital-based AMCT appears to improve medical trainees' addiction-related knowledge. Further evaluation and expansion of addiction medicine education is warranted to develop the next generation of skilled addiction care providers.
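
    A paired pre/post comparison of the kind reported here reduces to testing the within-learner score differences; the sketch below uses fabricated scores (not the study data) and assumes SciPy is available.

      # Illustrative paired pre/post comparison on fabricated scores (not study data).
      from statistics import mean, stdev
      from scipy import stats

      pre  = [55, 60, 48, 70, 62, 58, 66, 51]   # hypothetical pre-elective self-ratings
      post = [63, 71, 55, 78, 70, 69, 74, 60]   # hypothetical post-elective self-ratings

      diffs = [b - a for a, b in zip(pre, post)]
      t, p = stats.ttest_rel(post, pre)          # paired t-test on the same learners
      print(f"mean difference = {mean(diffs):.2f} (SD = {stdev(diffs):.2f}), p = {p:.3f}")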

  11. School-Based Educational Intervention to Improve Children's Oral Health-Related Knowledge.

    PubMed

    Blake, Holly; Dawett, Bhupinder; Leighton, Paul; Rose-Brady, Laura; Deery, Chris

    2015-07-01

    To evaluate a brief oral health promotion intervention delivered in schools by a primary care dental practice, aimed at changing oral health care knowledge and oral health-related behaviors in children. Cohort study with pretest-posttest design. Three primary schools. One hundred and fifty children (aged 9-12 years). Children received a 60-minute theory-driven classroom-based interactive educational session delivered by a dental care professional and received take-home literature on oral health. All children completed a questionnaire on oral health-related knowledge and self-reported oral health-related behaviors before, immediately after, and 6 weeks following the intervention. Children's dental knowledge significantly improved following the intervention, with improvement evident at immediate follow-up and maintained 6 weeks later. Significantly more children reported using dental floss 6 weeks after the intervention compared with baseline. No significant differences were detected in toothbrushing or dietary behaviors. School-based preventative oral health education delivered by primary care dental practices can generate short-term improvements in children's knowledge of oral health and some aspects of oral hygiene behavior. Future research should engage parents/carers and include objective clinical and behavioral outcomes in controlled study designs. © 2014 Society for Public Health Education.

  12. SSME fault monitoring and diagnosis expert system

    NASA Technical Reports Server (NTRS)

    Ali, Moonis; Norman, Arnold M.; Gupta, U. K.

    1989-01-01

    An expert system, called LEADER, has been designed and implemented for automatic learning, detection, identification, verification, and correction of anomalous propulsion system operations in real time. LEADER employs a set of sensors to monitor engine component performance and to detect, identify, and validate abnormalities with respect to varying engine dynamics and behavior. Two diagnostic approaches are adopted in the architecture of LEADER. In the first approach, fault diagnosis is performed through learning and identifying engine behavior patterns. LEADER, utilizing this approach, generates a few hypotheses about possible abnormalities. These hypotheses are then validated based on the SSME design and functional knowledge. The second approach directs the processing of engine sensory data and performs reasoning based on the SSME design, functional knowledge, and the deep-level knowledge, i.e., the first principles (physics and mechanics) of SSME subsystems and components. This paper describes LEADER's architecture, which integrates a design-based reasoning approach with neural network-based fault pattern matching techniques. The fault diagnosis results obtained through the analyses of SSME ground test data are presented and discussed.

  13. Caregiving Antecedents of Secure Base Script Knowledge: A Comparative Analysis of Young Adult Attachment Representations

    PubMed Central

    Steele, Ryan D.; Waters, Theodore E. A.; Bost, Kelly K.; Vaughn, Brian E.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn; Roisman, Glenn I.

    2015-01-01

    Based on a sub-sample (N = 673) of the NICHD Study of Early Child Care and Youth Development (SECCYD) cohort, this paper reports data from a follow-up assessment at age 18 years on the antecedents of secure base script knowledge, as reflected in the ability to generate narratives in which attachment-related difficulties are recognized, competent help is provided, and the problem is resolved. Secure base script knowledge was (a) modestly to moderately correlated with more well established assessments of adult attachment, (b) associated with mother-child attachment in the first three years of life and with observations of maternal and paternal sensitivity from childhood to adolescence, and (c) partially accounted for associations previously documented in the SECCYD cohort between early caregiving experiences and Adult Attachment Interview states of mind (Booth-LaForce & Roisman, 2014) as well as self-reported attachment styles (Fraley, Roisman, Booth-LaForce, Owen, & Holland, 2013). PMID:25264703

  14. Model-based diagnostics for Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Stephan, Amy; Martin, Eric R.; Lerutte, Marcel G.

    1991-01-01

    An innovative approach to fault management was recently demonstrated for the NASA LeRC Space Station Freedom (SSF) power system testbed. This project capitalized on research in model-based reasoning, which uses knowledge of a system's behavior to monitor its health. The fault management system (FMS) can isolate failures online, or in a post analysis mode, and requires no knowledge of failure symptoms to perform its diagnostics. An in-house tool called MARPLE was used to develop and run the FMS. MARPLE's capabilities are similar to those available from commercial expert system shells, although MARPLE is designed to build model-based as opposed to rule-based systems. These capabilities include functions for capturing behavioral knowledge, a reasoning engine that implements a model-based technique known as constraint suspension, and a tool for quickly generating new user interfaces. The prototype produced by applying MARPLE to SSF not only demonstrated that model-based reasoning is a valuable diagnostic approach, but it also suggested several new applications of MARPLE, including an integration and testing aid, and a complement to state estimation.
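
    Constraint suspension, the model-based technique mentioned here, can be illustrated roughly as follows: each component's behavior is a constraint, and a component is implicated if suspending its constraint makes the observations consistent again. The two-adder circuit and values below are hypothetical, not the MARPLE models.

      # Rough sketch of constraint-suspension diagnosis (illustrative, not MARPLE).
      # Each component's behavior is a constraint relating its inputs to its output.

      def make_model():
          # A tiny system: adder1 computes x = a + b, adder2 computes y = x + c.
          return {
              "adder1": lambda v: v["x"] == v["a"] + v["b"],
              "adder2": lambda v: v["y"] == v["x"] + v["c"],
          }

      def consistent(model, observations, suspended):
          """True if all non-suspended component constraints hold for the observations."""
          return all(check(observations) for name, check in model.items()
                     if name != suspended)

      def diagnose(model, observations):
          """Return components whose suspension restores consistency."""
          if consistent(model, observations, suspended=None):
              return []                       # no fault detected
          return [name for name in model
                  if consistent(model, observations, suspended=name)]

      if __name__ == "__main__":
          # Observed values: adder1's output x is wrong (should be 5), so y is wrong too.
          obs = {"a": 2, "b": 3, "c": 4, "x": 7, "y": 11}
          print(diagnose(make_model(), obs))  # -> ['adder1']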

  15. Evaluation of an automated knowledge-based textual summarization system for longitudinal clinical data, in the intensive care domain.

    PubMed

    Goldstein, Ayelet; Shahar, Yuval; Orenbuch, Efrat; Cohen, Matan J

    2017-10-01

    To examine the feasibility of the automated creation of meaningful free-text summaries of longitudinal clinical records, using a new general methodology that we had recently developed; and to assess the potential benefits to the clinical decision-making process of using such a method to generate draft letters that can be further manually enhanced by clinicians. We had previously developed a system, CliniText (CTXT), for automated summarization in free text of longitudinal medical records, using a clinical knowledge base. In the current study, we created an Intensive Care Unit (ICU) clinical knowledge base, assisted by two ICU clinical experts in an academic tertiary hospital. The CTXT system generated free-text summary letters from the data of 31 different patients, which were compared to the respective original physician-composed discharge letters. The main evaluation measures were (1) relative completeness, quantifying the data items missed by one of the letters but included by the other, and their importance; (2) quality parameters, such as readability; (3) functional performance, assessed by the time needed, by three clinicians reading each of the summaries, to answer five key questions, based on the discharge letter (e.g., "What are the patient's current respiratory requirements?"), and by the correctness of the clinicians' answers. Completeness: In 13/31 (42%) of the letters the number of important items missed in the CTXT-generated letter was actually less than or equal to the number of important items missed by the MD-composed letter. In each of the MD-composed letters, at least two important items that were mentioned by the CTXT system were missed (a mean of 7.2±5.74). In addition, the standard deviation in the number of missed items in the MD letters (STD=15.4) was much higher than the standard deviation in the CTXT-generated letters (STD=5.3). Quality: The MD-composed letters obtained a significantly better grade in three out of four measured parameters. However, the standard deviation in the quality of the MD-composed letters was much greater than the standard deviation in the quality of the CTXT-generated letters (STD=6.25 vs. STD=2.57, respectively). Functional evaluation: The clinicians answered the five questions on average 40% faster (p<0.001) when using the CTXT-generated letters than when using the MD-composed letters. In four out of the five questions the clinicians' correctness was equal to or significantly better (p<0.005) when using the CTXT-generated letters than when using the MD-composed letters. An automatic knowledge-based summarization system, such as the CTXT system, has the capability to model complex clinical domains, such as the ICU, and to support interpretation and summarization tasks such as the creation of a discharge summary letter. Based on the results, we suggest that the use of such systems could potentially enhance the standardization of the letters, significantly increase their completeness, and reduce the time to write the discharge summary. The results also suggest that using the resultant structured letters might reduce the decision time, and enhance the decision quality, of decisions made by other clinicians. Copyright © 2017 Elsevier B.V. All rights reserved.
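
    The completeness comparison described above boils down to counting, per letter, the important items each version missed and comparing the distributions; a minimal sketch with invented counts (not the study's data) is shown below.

      # Illustrative completeness comparison on fabricated per-letter counts.
      from statistics import mean, stdev

      # Number of important items missed per discharge letter, for a few letters.
      missed_by_system = [2, 0, 5, 3, 1, 4]    # hypothetical automated summaries
      missed_by_md     = [7, 2, 30, 4, 9, 12]  # hypothetical physician letters

      for label, missed in [("system", missed_by_system), ("physician", missed_by_md)]:
          print(f"{label}: mean missed = {mean(missed):.1f}, SD = {stdev(missed):.1f}")

      # Fraction of letters where the automated summary missed no more than the MD letter.
      better_or_equal = sum(s <= m for s, m in zip(missed_by_system, missed_by_md))
      print(f"system <= MD in {better_or_equal}/{len(missed_by_md)} letters")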

  16. Addressing the translational dilemma: dynamic knowledge representation of inflammation using agent-based modeling.

    PubMed

    An, Gary; Christley, Scott

    2012-01-01

    Given the panoply of system-level diseases that result from disordered inflammation, such as sepsis, atherosclerosis, cancer, and autoimmune disorders, understanding and characterizing the inflammatory response is a key target of biomedical research. Untangling the complex behavioral configurations associated with a process as ubiquitous as inflammation represents a prototype of the translational dilemma: the ability to translate mechanistic knowledge into effective therapeutics. A critical failure point in the current research environment is a throughput bottleneck at the level of evaluating hypotheses of mechanistic causality; these hypotheses represent the key step toward the application of knowledge for therapy development and design. Addressing the translational dilemma will require utilizing the ever-increasing power of computers and computational modeling to increase the efficiency of the scientific method in the identification and evaluation of hypotheses of mechanistic causality. More specifically, development needs to focus on facilitating the ability of non-computer trained biomedical researchers to utilize and instantiate their knowledge in dynamic computational models. This is termed "dynamic knowledge representation." Agent-based modeling is an object-oriented, discrete-event, rule-based simulation method that is well suited for biomedical dynamic knowledge representation. Agent-based modeling has been used in the study of inflammation at multiple scales. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggest that this modeling framework is well suited for addressing the translational dilemma. This review describes agent-based modeling, gives examples of its applications in the study of inflammation, and introduces a proposed general expansion of the use of modeling and simulation to augment the generation and evaluation of knowledge by the biomedical research community at large.
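
    To give a flavor of the agent-based, rule-based style of dynamic knowledge representation described here, the toy model below places "macrophage" agents on a grid and lets simple rules govern migration, activation near injured tissue, and clearance; the rules and parameters are invented for illustration.

      # Toy agent-based sketch of an inflammatory response (illustrative rules only).
      import random

      SIZE, STEPS, N_CELLS = 20, 50, 30
      random.seed(1)

      injury = {(x, y) for x in range(8, 12) for y in range(8, 12)}  # damaged patch
      cells = [{"pos": (random.randrange(SIZE), random.randrange(SIZE)),
                "active": False} for _ in range(N_CELLS)]

      def neighbors(pos):
          x, y = pos
          return [((x + dx) % SIZE, (y + dy) % SIZE)
                  for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

      for step in range(STEPS):
          for cell in cells:
              # Rule 1: random walk (migration).
              cell["pos"] = random.choice(neighbors(cell["pos"]))
              # Rule 2: activate when on or adjacent to injured tissue.
              if cell["pos"] in injury or any(n in injury for n in neighbors(cell["pos"])):
                  cell["active"] = True
          # Rule 3: activated cells clear the injured site they currently occupy.
          injury -= {c["pos"] for c in cells if c["active"]}

      print("remaining injured sites:", len(injury),
            "| activated cells:", sum(c["active"] for c in cells))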

  17. Acquisition, representation and rule generation for procedural knowledge

    NASA Technical Reports Server (NTRS)

    Ortiz, Chris; Saito, Tim; Mithal, Sachin; Loftin, R. Bowen

    1991-01-01

    Current research into the design and continuing development of a system for the acquisition of procedural knowledge, its representation in useful forms, and proposed methods for automated C Language Integrated Production System (CLIPS) rule generation is discussed. The Task Analysis and Rule Generation Tool (TARGET) is intended to permit experts, individually or collectively, to visually describe and refine procedural tasks. The system is designed to represent the acquired knowledge in the form of graphical objects with the capacity for generating production rules in CLIPS. The generated rules can then be integrated into applications such as NASA's Intelligent Computer Aided Training (ICAT) architecture. Also described are proposed methods for use in translating the graphical and intermediate knowledge representations into CLIPS rules.
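
    Automated rule generation of the kind described can be pictured as serializing an intermediate task representation into CLIPS defrule text; the task structure and fact names below are hypothetical, not TARGET's internal representation.

      # Hypothetical sketch: emit CLIPS rules from a simple procedural-task structure.
      task_steps = [
          {"name": "open-valve", "when": "(pressure low)", "then": "(assert (valve open))"},
          {"name": "start-pump", "when": "(valve open)",   "then": "(assert (pump on))"},
      ]

      def to_clips_rule(step):
          """Serialize one task step as a CLIPS defrule."""
          return (f"(defrule {step['name']}\n"
                  f"   {step['when']}\n"
                  f"   =>\n"
                  f"   {step['then']})")

      if __name__ == "__main__":
          print("\n\n".join(to_clips_rule(s) for s in task_steps))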

  18. Multi-Disciplinary Knowledge Synthesis for Human Health Assessment on Earth and in Space

    NASA Astrophysics Data System (ADS)

    Christakos, G.

    We discuss methodological developments in multi-disciplinary knowledge synthesis (KS) of human health assessment. A theoretical KS framework can provide the rational means for the assimilation of various information bases (general, site-specific etc.) that are relevant to the life system of interest. KS-based techniques produce a realistic representation of the system, provide a rigorous assessment of the uncertainty sources, and generate informative health state predictions across space-time. The underlying epistemic cognition methodology is based on teleologic criteria and stochastic logic principles. The mathematics of KS involves a powerful and versatile spatiotemporal random field model that accounts rigorously for the uncertainty features of the life system and imposes no restriction on the shape of the probability distributions or the form of the predictors. KS theory is instrumental in understanding natural heterogeneities, assessing crucial human exposure correlations and laws of physical change, and explaining toxicokinetic mechanisms and dependencies in a spatiotemporal life system domain. It is hoped that a better understanding of KS fundamentals would generate multi-disciplinary models that are useful for the maintenance of human health on Earth and in Space.

  19. A Comparison of Functional Models for Use in the Function-Failure Design Method

    NASA Technical Reports Server (NTRS)

    Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.

    2006-01-01

    When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur with products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base that it draws from, and therefore it is of utmost importance to develop a knowledge base that will be suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: At what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs as well as redesigns. High level and more detailed functional descriptions are derived for each failed component based on NTSB accident reports. To best record this data, standardized functional and failure mode vocabularies are used. Two separate function-failure knowledge bases are then created and compared. Results indicate that encoding failure data using more detailed functional models allows for a more robust knowledge base. Interestingly, however, when applying the EFDM, high level descriptions continue to produce useful results when using the knowledge base generated from the detailed functional models.
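
    The function-failure knowledge base at the heart of the EFDM can be thought of as a matrix of historical failure-mode counts indexed by function; the sketch below shows the lookup idea with invented functions and counts, not the Bell 206 data.

      # Illustrative function-failure matrix lookup (invented counts, not NTSB data).
      from collections import Counter

      # Historical counts of failure modes observed for each elemental function.
      function_failure = {
          "transfer torque":   Counter({"fatigue": 12, "fracture": 5, "wear": 3}),
          "convert energy":    Counter({"overheating": 7, "fatigue": 2}),
          "regulate pressure": Counter({"leak": 9, "corrosion": 4}),
      }

      def likely_failures(product_functions, top_n=3):
          """Aggregate historical failure modes for a new design's functions."""
          total = Counter()
          for fn in product_functions:
              total += function_failure.get(fn, Counter())
          return total.most_common(top_n)

      if __name__ == "__main__":
          print(likely_failures(["transfer torque", "convert energy"]))
          # -> [('fatigue', 14), ('overheating', 7), ('fracture', 5)]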

  20. The spatial data and knowledge gateways at the International Water Management Institute (IWMI)

    NASA Astrophysics Data System (ADS)

    Thenkabail, P. S.; Biradar, C. M.; Noojipady, P.; Islam, A.; Velpuri, M.; Vithanage, J.; Kulawardhana, W.; Li, Yuan Jie; Dheeravath, V.; Gunasinghe, S.; Alankara, R.

    2006-10-01

    In this paper we discuss spatial data and knowledge base (SDKB) gateway portals developed by the International Water Management Institute (IWMI). Our vision is to generate and/or facilitate easy and free access to state-of-the-art SDKB of excellence globally. Our mission is to make SDKB accessible online, globally, for free. The IWMI data storehouse pathway (IWMIDSP; http://www.iwmidsp.org) is a pathfinder global public good (GPG) portal on remote sensing and GIS (RS/GIS) data and products with specific emphasis on river basin data, but also storing valuable data on Nations, Regions, and the World. A number of other specialty GPG portals have also been released. These include Global map of irrigated area (http://www.iwmigiam.org), Drought monitoring system for southwest Asia (http://dms.iwmi.org), Tsunami satellite sensor data catalogue (http://tsdc.iwmi.org), and Knowledge base system (KBS) for Sri Lanka (http://www.iwmikbs.org). The IWMIDSP has been the backbone of several other projects such as global irrigated area mapping, the drought monitoring system, wetlands, and knowledge base systems. A discussion on these pathfinder web portals follows.

  1. Changing Nephrology Nurses' Beliefs about the Value of Evidence-Based Practice and Their Ability to Implement in Clinical Practice.

    PubMed

    Hain, Debra; Haras, Mary S

    2015-01-01

    A rapidly evolving healthcare environment demands sound research evidence to inform clinical practice and improve patient outcomes. Over the past several decades, nurses have generated new knowledge by conducting research studies, but it takes time for this evidence to be implemented in practice. As nurses strive to be leaders and active participants in healthcare redesign, it is essential that they possess the requisite knowledge and skills to engage in evidence-based practice (EBP). Professional nursing organizations can make substantial contributions to moving healthcare quality forward by providing EBP workshops similar to those conducted by the American Nephrology Nurses' Association.

  2. Building validation tools for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.

    1987-01-01

    The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (higher-order language) is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.
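
    A small example of the kind of logic check such a validation tool performs is scanning a rule base for pairs of rules whose conditions overlap but whose conclusions conflict; the rule encoding below is a simplification invented for illustration, not EVA's meta-language.

      # Simplified logic check for conflicting rules (illustrative, not EVA).
      # Each rule: a set of condition literals and one conclusion literal.
      rules = [
          {"name": "R1", "if": {"temp_high", "pressure_high"}, "then": "shutdown"},
          {"name": "R2", "if": {"temp_high"},                  "then": "keep_running"},
          {"name": "R3", "if": {"pressure_low"},               "then": "keep_running"},
      ]

      CONFLICTS = {("shutdown", "keep_running"), ("keep_running", "shutdown")}

      def find_conflicts(rules):
          """Report rule pairs that can fire together yet reach conflicting conclusions."""
          issues = []
          for i, r1 in enumerate(rules):
              for r2 in rules[i + 1:]:
                  overlap = r1["if"] <= r2["if"] or r2["if"] <= r1["if"]
                  if overlap and (r1["then"], r2["then"]) in CONFLICTS:
                      issues.append((r1["name"], r2["name"]))
          return issues

      if __name__ == "__main__":
          print(find_conflicts(rules))   # -> [('R1', 'R2')]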

  3. A review of turbine blade tip heat transfer.

    PubMed

    Bunker, R S

    2001-05-01

    This paper presents a review of the publicly available knowledge base concerning turbine blade tip heat transfer, from the early fundamental research which laid the foundations of our knowledge, to current experimental and numerical studies utilizing engine-scaled blade cascades and turbine rigs. Focus is placed on high-pressure, high-temperature axial-turbine blade tips, which are prevalent in the majority of today's aircraft engines and power generating turbines. The state of our current understanding of turbine blade tip heat transfer is in the transitional phase between fundamentals supported by engine-based experience, and the ability to a priori correctly predict and efficiently design blade tips for engine service.

  4. Importance of Knowledge Management in the Higher Educational Institutes

    ERIC Educational Resources Information Center

    Namdev Dhamdhere, Sangeeta

    2015-01-01

    Every academic institution contributes to knowledge. The generated information and knowledge are to be compiled at a central place and disseminated in society for further growth. It is observed that the generated knowledge in the academic institute is not stored or captured properly. It is also observed that many a times generated…

  5. Teaching and Learning Morphology: A Reflection on Generative Vocabulary Instruction

    ERIC Educational Resources Information Center

    Templeton, Shane

    2012-01-01

    Students' knowledge of morphology can play a critical role in vocabulary development, and by extension, reading comprehension and writing. This reflection describes the nature of this knowledge and how it may be developed through the examination of generative vocabulary knowledge and the role of the spelling system in developing this knowledge. In…

  6. Making Each Other’s Daily Life: Nurse Assistants’ Experiences and Knowledge on Developing a Meaningful Daily Life in Nursing Homes

    PubMed Central

    James, Inger; Fredriksson, Carin; Wahlström, Catrin; Kihlgren, Annica; Blomberg, Karin

    2014-01-01

    Background: In a larger action research project, guidelines were generated for how a meaningful daily life could be developed for older persons. In this study, we focused on the nurse assistants' (NAs) perspectives, as their knowledge is essential for a well-functioning team and quality of care. The aim was to learn from NAs' experiences and knowledge about how to develop a meaningful daily life for older persons in nursing homes and the meaning NAs ascribe to their work. Methods: The project is based on Participatory and Appreciative Action and Reflection. Data were generated through interviews, participating observations and informal conversations with 27 NAs working in nursing homes in Sweden, and a thematic analysis was used. Result: NAs developed a meaningful daily life by sensing and finding the "right" way of being (Theme 1). They sense and read the older person in order to judge how the person was feeling (Theme 2). They adapt to the older person (Theme 3) and share their daily life (Theme 4). NAs use emotional involvement to develop a meaningful daily life for the older person and meaning in their own work (Theme 5), ultimately making each other's daily lives meaningful. Conclusion: It was obvious that NAs based the development of a meaningful daily life on different forms of knowledge: theoretical and practical knowledge, and practical wisdom, all of which are intertwined. These results could be used within the team to constitute a meaningful daily life for older persons in nursing homes. PMID:25246997

  7. A combined park management framework based on regulatory and behavioral strategies: use of visitors' knowledge to assess effectiveness.

    PubMed

    Papageorgiou, K

    2001-07-01

    In light of the increasing mandate for greater efficiency in conservation of natural reserves such as national parks, the present study suggests educational approaches as a tool to achieve conservation purposes. Currently, the management of human-wildlife interactions is dominated by regulatory strategies, but considerable potential exists for environmental education to enhance knowledge in the short run and to prompt attitude change in the long run. A framework for conservation based on both traditional regulatory- and behavior-oriented strategies was proposed whereby the level of knowledge that park visitors have acquired comprises an obvious outcome and establishes a basis upon which the effectiveness of regulatory- and behavior-based regimes could be assessed. The perceptions regarding park-related issues of two distinct visitor groups (locals and nonlocals) are summarized from a survey undertaken in Vikos-Aoos national park. The findings suggest a superficial knowledge of certain concepts but little profound understanding of their content, indicating that knowledge-raising efforts should go a long way towards establishing a positive attitude toward the resource. Visitors' poor knowledge of the park's operating regulations contests the efficiency of the presently dominant regulatory management regime. While geographical distances did not appear to significantly differentiate knowledge between the two groups, wilderness experience (as certified by visits to other parks) proved to be an impetus for generating substantial learner interest in critical park issues among nonlocal visitors. School education and media were found to be significant knowledge providers.

  8. Web 2.0 Technologies for Effective Knowledge Management in Organizations: A Qualitative Analysis

    ERIC Educational Resources Information Center

    Nath, Anupam Kumar

    2012-01-01

    A new generation of Internet-based collaborative tools, commonly known as Web 2.0, has increased in popularity, availability, and power in the last few years (Kane and Fichman, 2009). Web 2.0 is a set of Internet-based applications that harness network effects by facilitating collaborative and participative computing (O'Reilly, 2006).…

  9. Praxis-based research networks: An emerging paradigm for research that is rigorous, relevant, and inclusive.

    PubMed

    Werner, James J; Stange, Kurt C

    2014-01-01

    Practice-based research networks (PBRNs) have developed a grounded approach to conducting practice-relevant and translational research in community practice settings. Seismic shifts in the health care landscape are shaping PBRNs that work across organizational and institutional margins to address complex problems. Praxis-based research networks combine PBRN knowledge generation with multistakeholder learning, experimentation, and application of practical knowledge. The catalytic processes in praxis-based research networks are cycles of action and reflection based on experience, observation, conceptualization, and experimentation by network members and partners. To facilitate co-learning and solution-building, these networks have a flexible architecture that allows pragmatic inclusion of stakeholders based on the demands of the problem and the needs of the network. Praxis-based research networks represent an evolving trend that combines the core values of PBRNs with new opportunities for relevance, rigor, and broad participation. © Copyright 2014 by the American Board of Family Medicine.

  10. Swarm-based medicine.

    PubMed

    Putora, Paul Martin; Oldenburg, Jan

    2013-09-19

    Occasionally, medical decisions have to be taken in the absence of evidence-based guidelines. Other sources can be drawn upon to fill in the gaps, including experience and intuition. Authorities or experts, with their knowledge and experience, may provide further input--known as "eminence-based medicine". Due to the Internet and digital media, interactions among physicians now take place at a higher rate than ever before. With the rising number of interconnected individuals and their communication capabilities, the medical community is obtaining the properties of a swarm. The way individual physicians act depends on other physicians; medical societies act based on their members. Swarm behavior might facilitate the generation and distribution of knowledge as an unconscious process. As such, "swarm-based medicine" may add a further source of information to the classical approaches of evidence- and eminence-based medicine. How to integrate swarm-based medicine into practice is left to the individual physician, but even this decision will be influenced by the swarm.

  11. SU-G-TeP4-14: Quality Control of Treatment Planning Using Knowledge-Based Planning Across a System of Radiation Oncology Practices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Masi, K; Ditman, M; Marsh, R

    Purpose: There is potentially a wide variation in plan quality for a certain disease site, even for clinics located in the same system of hospitals. We have used a prostate-specific knowledge-based planning (KBP) model as a quality control tool to investigate the variation in prostate treatment planning across a network of affiliated radiation oncology departments. Methods: A previously created KBP model was applied to 10 patients each from 4 community-based clinics (Clinics A, B, C, and D). The KBP model was developed using RapidPlan (Eclipse v13.5, Varian Medical Systems) from 60 prostate/prostate bed IMRT plans that were originally planned using an in-house treatment planning system at the central institution of the community-based clinics. The dosimetric plan quality (target coverage and normal-tissue sparing) of each model-generated plan was compared to the respective clinically-used plan. Each community-based clinic utilized the same planning goals to develop the clinically-used plans that were used at the main institution. Results: Across all 4 clinics, the model-generated plans decreased the mean dose to the rectum by varying amounts (on average, 12.5, 2.6, 4.5, and 2.7 Gy for Clinics A, B, C, and D, respectively). The mean dose to the bladder also decreased with the model-generated plans (5.4, 2.3, 3.0, and 4.1 Gy, respectively). The KBP model also identified that target coverage (D95%) improvements were possible for Clinics A, B, and D (0.12, 1.65, and 2.75%) while target coverage decreased by 0.72% for Clinic C, demonstrating potentially different trade-offs made in clinical plans at different institutions. Conclusion: Quality control of dosimetric plan quality across a system of radiation oncology practices is possible with knowledge-based planning. By using a quality KBP model, smaller community-based clinics can potentially identify the areas of their treatment plans that may be improved, whether it be in normal-tissue sparing or improved target coverage. M. Matuszak has research funding for KBP from Varian Medical Systems.
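
    The per-clinic comparison reported here reduces to averaging the paired differences between clinical and model-generated plan metrics; a minimal sketch with made-up doses (not the study's data) follows.

      # Illustrative comparison of clinical vs. knowledge-based-plan doses.
      from statistics import mean

      # Mean rectum dose (Gy) per patient: (clinical plan, model-generated plan).
      clinic_A = [(45.0, 31.5), (50.2, 38.0), (47.8, 36.1), (52.4, 39.9)]

      reductions = [clinical - kbp for clinical, kbp in clinic_A]
      print(f"mean rectum dose reduction with KBP: {mean(reductions):.1f} Gy")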

  12. Design, development, and evaluation of a second generation interactive Simulator for Engineering Ethics Education (SEEE2).

    PubMed

    Alfred, Michael; Chung, Christopher A

    2012-12-01

    This paper describes a second generation Simulator for Engineering Ethics Education. Details describing the first generation activities of this overall effort are published in Chung and Alfred (Sci Eng Ethics 15:189-199, 2009). The second generation research effort represents a major development in the interactive simulator educational approach. As with the first generation effort, the simulator places students in first person perspective scenarios involving different types of ethical situations. Students must still gather data, assess the situation, and make decisions. The approach still requires students to develop their own ability to identify and respond to ethical engineering situations. However, whereas the generation one effort involved the use of a dogmatic model based on the National Society of Professional Engineers' Code of Ethics, the new generation two model is based on a mathematical model of the actual experiences of engineers involved in ethical situations. This approach also allows the use of feedback in the form of decision effectiveness and professional career impact. Statistical comparisons indicate a 59 percent increase in overall knowledge and a 19 percent improvement in teaching effectiveness over an Internet Engineering Ethics resource-based approach.

  13. Brief history of agricultural systems modeling.

    PubMed

    Jones, James W; Antle, John M; Basso, Bruno; Boote, Kenneth J; Conant, Richard T; Foster, Ian; Godfray, H Charles J; Herrero, Mario; Howitt, Richard E; Janssen, Sander; Keating, Brian A; Munoz-Carpena, Rafael; Porter, Cheryl H; Rosenzweig, Cynthia; Wheeler, Tim R

    2017-07-01

    Agricultural systems science generates knowledge that allows researchers to consider complex problems or take informed agricultural decisions. The rich history of this science exemplifies the diversity of systems and scales over which they operate and have been studied. Modeling, an essential tool in agricultural systems science, has been accomplished by scientists from a wide range of disciplines, who have contributed concepts and tools over more than six decades. As agricultural scientists now consider the "next generation" models, data, and knowledge products needed to meet the increasingly complex systems problems faced by society, it is important to take stock of this history and its lessons to ensure that we avoid re-invention and strive to consider all dimensions of associated challenges. To this end, we summarize here the history of agricultural systems modeling and identify lessons learned that can help guide the design and development of next generation of agricultural system tools and methods. A number of past events combined with overall technological progress in other fields have strongly contributed to the evolution of agricultural system modeling, including development of process-based bio-physical models of crops and livestock, statistical models based on historical observations, and economic optimization and simulation models at household and regional to global scales. Characteristics of agricultural systems models have varied widely depending on the systems involved, their scales, and the wide range of purposes that motivated their development and use by researchers in different disciplines. Recent trends in broader collaboration across institutions, across disciplines, and between the public and private sectors suggest that the stage is set for the major advances in agricultural systems science that are needed for the next generation of models, databases, knowledge products and decision support systems. The lessons from history should be considered to help avoid roadblocks and pitfalls as the community develops this next generation of agricultural systems models.

  14. Brief history of agricultural systems modeling

    DOE PAGES

    Jones, James W.; Antle, John M.; Basso, Bruno; ...

    2017-06-21

    Agricultural systems science generates knowledge that allows researchers to consider complex problems or take informed agricultural decisions. The rich history of this science exemplifies the diversity of systems and scales over which they operate and have been studied. Modeling, an essential tool in agricultural systems science, has been accomplished by scientists from a wide range of disciplines, who have contributed concepts and tools over more than six decades. As agricultural scientists now consider the "next generation" models, data, and knowledge products needed to meet the increasingly complex systems problems faced by society, it is important to take stock of this history and its lessons to ensure that we avoid re-invention and strive to consider all dimensions of associated challenges. To this end, we summarize here the history of agricultural systems modeling and identify lessons learned that can help guide the design and development of next generation of agricultural system tools and methods. A number of past events combined with overall technological progress in other fields have strongly contributed to the evolution of agricultural system modeling, including development of process-based bio-physical models of crops and livestock, statistical models based on historical observations, and economic optimization and simulation models at household and regional to global scales. Characteristics of agricultural systems models have varied widely depending on the systems involved, their scales, and the wide range of purposes that motivated their development and use by researchers in different disciplines. Recent trends in broader collaboration across institutions, across disciplines, and between the public and private sectors suggest that the stage is set for the major advances in agricultural systems science that are needed for the next generation of models, databases, knowledge products and decision support systems. Furthermore, the lessons from history should be considered to help avoid roadblocks and pitfalls as the community develops this next generation of agricultural systems models.

  15. Brief history of agricultural systems modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, James W.; Antle, John M.; Basso, Bruno

    Agricultural systems science generates knowledge that allows researchers to consider complex problems or take informed agricultural decisions. The rich history of this science exemplifies the diversity of systems and scales over which they operate and have been studied. Modeling, an essential tool in agricultural systems science, has been accomplished by scientists from a wide range of disciplines, who have contributed concepts and tools over more than six decades. As agricultural scientists now consider the "next generation" models, data, and knowledge products needed to meet the increasingly complex systems problems faced by society, it is important to take stock of this history and its lessons to ensure that we avoid re-invention and strive to consider all dimensions of associated challenges. To this end, we summarize here the history of agricultural systems modeling and identify lessons learned that can help guide the design and development of next generation of agricultural system tools and methods. A number of past events combined with overall technological progress in other fields have strongly contributed to the evolution of agricultural system modeling, including development of process-based bio-physical models of crops and livestock, statistical models based on historical observations, and economic optimization and simulation models at household and regional to global scales. Characteristics of agricultural systems models have varied widely depending on the systems involved, their scales, and the wide range of purposes that motivated their development and use by researchers in different disciplines. Recent trends in broader collaboration across institutions, across disciplines, and between the public and private sectors suggest that the stage is set for the major advances in agricultural systems science that are needed for the next generation of models, databases, knowledge products and decision support systems. Furthermore, the lessons from history should be considered to help avoid roadblocks and pitfalls as the community develops this next generation of agricultural systems models.

  16. Brief History of Agricultural Systems Modeling

    NASA Technical Reports Server (NTRS)

    Jones, James W.; Antle, John M.; Basso, Bruno O.; Boote, Kenneth J.; Conant, Richard T.; Foster, Ian; Godfray, H. Charles J.; Herrrero, Mario; Howitt, Richard E.; Janssen, Sandor; hide

    2016-01-01

    Agricultural systems science generates knowledge that allows researchers to consider complex problems or take informed agricultural decisions. The rich history of this science exemplifies the diversity of systems and scales over which they operate and have been studied. Modeling, an essential tool in agricultural systems science, has been accomplished by scientists from a wide range of disciplines, who have contributed concepts and tools over more than six decades. As agricultural scientists now consider the next generation models, data, and knowledge products needed to meet the increasingly complex systems problems faced by society, it is important to take stock of this history and its lessons to ensure that we avoid re-invention and strive to consider all dimensions of associated challenges. To this end, we summarize here the history of agricultural systems modeling and identify lessons learned that can help guide the design and development of next generation of agricultural system tools and methods. A number of past events combined with overall technological progress in other fields have strongly contributed to the evolution of agricultural system modeling, including development of process-based bio-physical models of crops and livestock, statistical models based on historical observations, and economic optimization and simulation models at household and regional to global scales. Characteristics of agricultural systems models have varied widely depending on the systems involved, their scales, and the wide range of purposes that motivated their development and use by researchers in different disciplines. Recent trends in broader collaboration across institutions, across disciplines, and between the public and private sectors suggest that the stage is set for the major advances in agricultural systems science that are needed for the next generation of models, databases, knowledge products and decision support systems. The lessons from history should be considered to help avoid roadblocks and pitfalls as the community develops this next generation of agricultural systems models.

  17. Big data to smart data in Alzheimer's disease: The brain health modeling initiative to foster actionable knowledge.

    PubMed

    Geerts, Hugo; Dacks, Penny A; Devanarayan, Viswanath; Haas, Magali; Khachaturian, Zaven S; Gordon, Mark Forrest; Maudsley, Stuart; Romero, Klaus; Stephenson, Diane

    2016-09-01

    Massive investment and technological advances in the collection of extensive and longitudinal information on thousands of Alzheimer patients result in large amounts of data. These "big-data" databases can potentially advance CNS research and drug development. However, although necessary, they are not sufficient, and we posit that they must be matched with analytical methods that go beyond retrospective data-driven associations with various clinical phenotypes. Although these empirically derived associations can generate novel and useful hypotheses, they need to be organically integrated in a quantitative understanding of the pathology that can be actionable for drug discovery and development. We argue that mechanism-based modeling and simulation approaches, where existing domain knowledge is formally integrated using complexity science and quantitative systems pharmacology, can be combined with data-driven analytics to generate predictive actionable knowledge for drug discovery programs, target validation, and optimization of clinical development. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  18. COOKING-RELATED PARTICLE CONCENTRATIONS MEASURED IN AN OCCUPIED TOWNHOME IN RESTON, VA

    EPA Science Inventory

    In non-smoking households, cooking is one of the most significant sources of indoor particles. To date, there are limited data available regarding indoor particle concentrations generated by different types of cooking. To increase the knowledge base associated with particles ...

  19. Effect of Age on Variability in the Production of Text-Based Global Inferences

    PubMed Central

    Williams, Lynne J.; Dunlop, Joseph P.; Abdi, Hervé

    2012-01-01

    As we age, our differences in cognitive skills become more visible, an effect especially true for memory and problem solving skills (i.e., fluid intelligence). However, by contrast with fluid intelligence, few studies have examined variability in measures that rely on one’s world knowledge (i.e., crystallized intelligence). The current study investigated whether age increased the variability in text based global inference generation–a measure of crystallized intelligence. Global inference generation requires the integration of textual information and world knowledge and can be expressed as a gist or lesson. Variability in generating two global inferences for a single text was examined in young-old (62 to 69 years), middle-old (70 to 76 years) and old-old (77 to 94 years) adults. The older two groups showed greater variability, with the middle elderly group being most variable. These findings suggest that variability may be a characteristic of both fluid and crystallized intelligence in aging. PMID:22590523

  20. Machine Learning for Knowledge Extraction from PHR Big Data.

    PubMed

    Poulymenopoulou, Michaela; Malamateniou, Flora; Vassilacopoulos, George

    2014-01-01

    Cloud computing, Internet of things (IOT) and NoSQL database technologies can support a new generation of cloud-based PHR services that contain heterogeneous (unstructured, semi-structured and structured) patient data (health, social and lifestyle) from various sources, including automatically transmitted data from Internet-connected devices in the patient's living space (e.g. medical devices connected to patients at home care). The patient data stored in such PHR systems constitute big data whose analysis with the use of appropriate machine learning algorithms is expected to improve diagnosis and treatment accuracy, to cut healthcare costs and, hence, to improve the overall quality and efficiency of healthcare provided. This paper describes a health data analytics engine which uses machine learning algorithms for analyzing cloud-based PHR big health data towards knowledge extraction to support better healthcare delivery as regards disease diagnosis and prognosis. This engine comprises data preparation, model generation and data analysis modules and runs on the cloud, taking advantage of the map/reduce paradigm provided by Apache Hadoop.
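
    To illustrate the map/reduce style of analysis mentioned here, a job can be expressed as a mapper emitting (key, 1) pairs and a reducer summing them; the record format and field positions below are assumptions for the sketch, and the shuffle phase is simulated locally rather than run on Hadoop.

      # Minimal map/reduce-style aggregation of PHR records, simulated locally
      # (illustrative; in Hadoop the map and reduce steps would run as separate tasks).
      from itertools import groupby

      records = [                       # hypothetical PHR entries: patient_id \t diagnosis code
          "p01\tE11", "p02\tI10", "p03\tE11", "p04\tE11", "p05\tI10",
      ]

      def mapper(record):
          _, diagnosis = record.split("\t")
          yield diagnosis, 1            # emit (key, 1) per occurrence

      def reducer(key, values):
          return key, sum(values)       # aggregate counts per diagnosis code

      # Simulated shuffle-and-sort phase: group mapper output by key.
      mapped = sorted(kv for rec in records for kv in mapper(rec))
      counts = [reducer(k, [v for _, v in group])
                for k, group in groupby(mapped, key=lambda kv: kv[0])]
      print(counts)                     # -> [('E11', 3), ('I10', 2)]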

  1. Generative and Item-Specific Knowledge of Language

    ERIC Educational Resources Information Center

    Morgan, Emily Ida Popper

    2016-01-01

    The ability to generate novel utterances compositionally using generative knowledge is a hallmark property of human language. At the same time, languages contain non-compositional or idiosyncratic items, such as irregular verbs, idioms, etc. This dissertation asks how and why language achieves a balance between these two systems--generative and…

  2. Integrating machine learning techniques into robust data enrichment approach and its application to gene expression data.

    PubMed

    Erdoğdu, Utku; Tan, Mehmet; Alhajj, Reda; Polat, Faruk; Rokne, Jon; Demetrick, Douglas

    2013-01-01

    The availability of enough samples for effective analysis and knowledge discovery has been a challenge in the research community, especially in the area of gene expression data analysis. Thus, the approaches being developed for data analysis have mostly suffered from the lack of enough data to train and test the constructed models. We argue that the process of sample generation could be successfully automated by employing some sophisticated machine learning techniques. An automated sample generation framework could successfully complement the actual sample generation from real cases. This argument is validated in this paper by describing a framework that integrates multiple models (perspectives) for sample generation. We illustrate its applicability for producing new gene expression data samples, a highly demanding area that has not received attention. The three perspectives employed in the process are based on models that are not closely related. This independence eliminates the bias of having the approach cover only certain characteristics of the domain and produce samples skewed in one direction. The first model is based on the Probabilistic Boolean Network (PBN) representation of the gene regulatory network underlying the given gene expression data. The second model integrates a Hierarchical Markov Model (HIMM), and the third model employs a genetic algorithm. Each model learns as many characteristics of the domain being analysed as possible and tries to incorporate them when generating new samples. In other words, the models base their analysis on domain knowledge implicitly present in the data itself. The developed framework has been extensively tested by checking how well the new samples complement the original samples. The produced results are very promising in showing the effectiveness, usefulness and applicability of the proposed multi-model framework.
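
    The abstract names the three sample-generation perspectives but gives no implementation detail. As a hedged sketch of the first (PBN-based) perspective only, the toy probabilistic Boolean network below draws synthetic binary expression profiles after a burn-in period; the three genes and their predictor functions are invented for illustration.

```python
import random

# Toy probabilistic Boolean network over three hypothetical genes (g0, g1, g2).
# Each gene has candidate Boolean predictors, each paired with a selection probability.
PREDICTORS = {
    0: [(lambda s: s[1] and not s[2], 0.7), (lambda s: s[1], 0.3)],
    1: [(lambda s: s[0] or s[2], 1.0)],
    2: [(lambda s: not s[0], 0.6), (lambda s: s[0] and s[1], 0.4)],
}

def step(state):
    """One synchronous PBN update: sample a predictor per gene, then evaluate it."""
    new_state = []
    for gene in sorted(PREDICTORS):
        funcs, weights = zip(*PREDICTORS[gene])
        f = random.choices(funcs, weights=weights, k=1)[0]
        new_state.append(int(f(state)))
    return tuple(new_state)

def generate_samples(n_samples, burn_in=50):
    """Produce synthetic binary expression profiles from long-run PBN behaviour."""
    samples = []
    for _ in range(n_samples):
        state = tuple(random.randint(0, 1) for _ in PREDICTORS)
        for _ in range(burn_in):           # let the network settle
            state = step(state)
        samples.append(state)
    return samples

print(generate_samples(5))
```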

  3. A logic programming approach to medical errors in imaging.

    PubMed

    Rodrigues, Susana; Brandão, Paulo; Nelas, Luís; Neves, José; Alves, Victor

    2011-09-01

    In 2000, the Institute of Medicine reported disturbing numbers on the scope and impact of medical error in the process of health delivery. Nevertheless, a solution to this problem may lie in the adoption of adverse event reporting and learning systems that can help to identify hazards and risks. It is crucial to apply models that identify the root causes of adverse events and enhance the sharing of knowledge and experience. The efficiency of efforts to improve patient safety has been frustratingly slow. Some of this lack of progress may be attributed to the absence of systems that take into account the characteristics of information about the real world. In our daily lives, we normally formulate most of our decisions based on incomplete, uncertain and even forbidden or contradictory information. One's knowledge is based less on exact facts and more on hypotheses, perceptions or indications. From the data collected in our adverse event treatment and learning system for medical imaging, and through the use of Extended Logic Programming for knowledge representation and reasoning, together with new methodologies for problem solving, namely those based on agents and multi-agent systems, we intend to generate reports that identify the most relevant causes of error and define improvement strategies, drawing conclusions about the impact, place of occurrence, and form or type of event recorded in the healthcare institutions. The Eindhoven Classification Model was extended and adapted to the medical imaging field and used to classify the root causes of adverse events. Extended Logic Programming was used for knowledge representation with incomplete information, allowing the universe of discourse to be modelled in terms of default data and knowledge. A systematization of the evolution of the body of knowledge about Quality of Information embedded in the Root Cause Analysis was accomplished. An adverse event reporting and learning system was developed based on the presented approach to medical errors in imaging. This system was deployed in two Portuguese healthcare institutions, with an appealing outcome. The system made it possible to verify that the majority of occurrences were concentrated in a few events that could have been avoided. The developed system allowed automatic knowledge extraction, enabling report generation with strategies for the improvement of quality of care. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  4. Prior knowledge-based approach for associating ...

    EPA Pesticide Factsheets

    Evaluating the potential human health and/or ecological risks associated with exposures to complex chemical mixtures in the ambient environment is one of the central challenges of chemical safety assessment and environmental protection. There is a need for approaches that can help to integrate chemical monitoring and bio-effects data to evaluate risks associated with chemicals present in the environment. We used prior knowledge about chemical-gene interactions to develop a knowledge assembly model for detected chemicals at five locations near two wastewater treatment plants. The assembly model was used to generate hypotheses about the biological impacts of the chemicals at each location. The hypotheses were tested using empirical hepatic gene expression data from fathead minnows exposed for 12 d at each location. Empirical gene expression data were also mapped to the assembly models to statistically evaluate the likelihood of a chemical contributing to the observed biological responses. The prior knowledge approach was able to reasonably hypothesize the biological impacts at one site but not the other. Chemicals most likely contributing to the observed biological responses were identified at each location. Despite limitations to the approach, knowledge assembly models have strong potential for associating chemical occurrence with potential biological effects and providing a foundation for hypothesis generation to guide research and/or monitoring efforts relat...
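
    The assembly-model mechanics are not spelled out in the abstract. The sketch below illustrates one plausible reading of the approach: map detected chemicals to genes via prior knowledge, then test whether those genes are over-represented among the differentially expressed genes at a site using a hypergeometric tail probability. The chemical-gene pairs, gene symbols and counts are hypothetical.

```python
from math import comb

# Hypothetical prior-knowledge map: chemical -> genes it is reported to interact with.
CHEM_GENES = {
    "bisphenol_A": {"esr1", "vtg1", "cyp1a"},
    "fluoxetine":  {"slc6a4", "htr1a"},
}

def hypergeom_sf(k, N, K, n):
    """P(X >= k) for X ~ Hypergeometric(N in population, K marked, n drawn)."""
    return sum(comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)) / comb(N, n)

def test_chemical(chemical, degs, background_size):
    """Enrichment of a chemical's linked genes among the differentially expressed genes."""
    linked = CHEM_GENES[chemical]
    overlap = len(linked & degs)
    p = hypergeom_sf(overlap, background_size, len(linked), len(degs))
    return overlap, p

# Hypothetical differentially expressed genes at one monitoring site,
# against a background of 10,000 measured transcripts.
degs = {"esr1", "vtg1", "sod1", "hsp70"}
for chem in CHEM_GENES:
    overlap, p = test_chemical(chem, degs, background_size=10_000)
    print(f"{chem}: overlap={overlap}, p={p:.3g}")
```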

  5. The SwissLipids knowledgebase for lipid biology

    PubMed Central

    Liechti, Robin; Hyka-Nouspikel, Nevila; Niknejad, Anne; Gleizes, Anne; Götz, Lou; Kuznetsov, Dmitry; David, Fabrice P.A.; van der Goot, F. Gisou; Riezman, Howard; Bougueleret, Lydie; Xenarios, Ioannis; Bridge, Alan

    2015-01-01

    Motivation: Lipids are a large and diverse group of biological molecules with roles in membrane formation, energy storage and signaling. Cellular lipidomes may contain tens of thousands of structures, a staggering degree of complexity whose significance is not yet fully understood. High-throughput mass spectrometry-based platforms provide a means to study this complexity, but the interpretation of lipidomic data and its integration with prior knowledge of lipid biology suffers from a lack of appropriate tools to manage the data and extract knowledge from it. Results: To facilitate the description and exploration of lipidomic data and its integration with prior biological knowledge, we have developed a knowledge resource for lipids and their biology—SwissLipids. SwissLipids provides curated knowledge of lipid structures and metabolism which is used to generate an in silico library of feasible lipid structures. These are arranged in a hierarchical classification that links mass spectrometry analytical outputs to all possible lipid structures, metabolic reactions and enzymes. SwissLipids provides a reference namespace for lipidomic data publication, data exploration and hypothesis generation. The current version of SwissLipids includes over 244 000 known and theoretically possible lipid structures, over 800 proteins, and curated links to published knowledge from over 620 peer-reviewed publications. We are continually updating the SwissLipids hierarchy with new lipid categories and new expert curated knowledge. Availability: SwissLipids is freely available at http://www.swisslipids.org/. Contact: alan.bridge@isb-sib.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25943471

  6. Gaining customer knowledge: obtaining and using customer judgments for hospitalwide quality improvement.

    PubMed

    Nelson, E C; Caldwell, C; Quinn, D; Rose, R

    1991-03-01

    Customer knowledge is an essential feature of hospitalwide quality improvement. All systems and processes have customers. The aim is to use customer knowledge and voice of the customer measurement to plan, design, improve, and monitor these systems and processes continuously. In this way, the hospital stands the best chance of meeting customers' needs and, hopefully, delivering services that are so outstanding that customers will be surprised and delighted. There are many methods, both soft and hard, that can be used to increase customer knowledge. One useful strategy is to use a family of quality measures that reflect the voice of the customer. These measures can generate practical and powerful customer knowledge information that is essential to performing strategic planning, deploying quality policy, designing new services, finding targets for improvements, and monitoring those continuous improvements based on customers' judgments.

  7. Generation of Signs within Semantic and Phonological Categories: Data from Deaf Adults and Children Who Use American Sign Language

    ERIC Educational Resources Information Center

    Beal-Alvarez, Jennifer S.; Figueroa, Daileen M.

    2017-01-01

    Two key areas of language development include semantic and phonological knowledge. Semantic knowledge relates to word and concept knowledge. Phonological knowledge relates to how language parameters combine to create meaning. We investigated signing deaf adults' and children's semantic and phonological sign generation via one-minute tasks,…

  8. A Novel Multiple Choice Question Generation Strategy: Alternative Uses for Controlled Vocabulary Thesauri in Biomedical-Sciences Education.

    PubMed

    Lopetegui, Marcelo A; Lara, Barbara A; Yen, Po-Yin; Çatalyürek, Ümit V; Payne, Philip R O

    2015-01-01

    Multiple choice questions play an important role in training and evaluating biomedical science students. However, the resource-intensive nature of question generation limits their open availability, restricting their contribution mainly to evaluation purposes. Although applied-knowledge questions require a complex formulation process, the creation of concrete-knowledge questions (i.e., definitions, associations) could be assisted by the use of informatics methods. We envisioned a novel and simple algorithm that exploits validated knowledge repositories and generates concrete-knowledge questions by leveraging concepts' relationships. In this manuscript we present the development and validation of a prototype which successfully produced meaningful concrete-knowledge questions, opening new applications for existing knowledge repositories and potentially benefiting students of all biomedical sciences disciplines.
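
    The prototype's algorithm is not reproduced in the abstract; the sketch below shows the general idea of concrete-knowledge question generation from a controlled vocabulary: use a concept's definition as the stem, the concept as the key, and sibling concepts from the same category as distractors. The mini-thesaurus and concept names are made up for illustration.

```python
import random

# Hypothetical mini-thesaurus: concept -> (definition, parent category).
THESAURUS = {
    "mitochondrion": ("organelle that produces ATP via oxidative phosphorylation", "organelle"),
    "ribosome":      ("complex that synthesizes proteins from mRNA", "organelle"),
    "lysosome":      ("membrane-bound organelle containing hydrolytic enzymes", "organelle"),
    "nucleolus":     ("nuclear sub-structure where ribosomal RNA is transcribed", "organelle"),
}

def make_question(concept, n_distractors=3):
    """Build a concrete-knowledge MCQ: definition as stem, concept as key,
    sibling concepts sharing the same parent category as distractors."""
    definition, parent = THESAURUS[concept]
    siblings = [c for c, (_, p) in THESAURUS.items() if p == parent and c != concept]
    distractors = random.sample(siblings, min(n_distractors, len(siblings)))
    options = distractors + [concept]
    random.shuffle(options)
    stem = f"Which term matches the definition: '{definition}'?"
    return stem, options, concept

stem, options, answer = make_question("mitochondrion")
print(stem)
for letter, opt in zip("ABCD", options):
    print(f"  {letter}. {opt}")
print("Answer:", answer)
```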

  9. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the methods' selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction methods' selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The benefits provided by the system favor better performance of construction projects. PMID:24453925

  10. Genetic counselors’ (GC) knowledge, awareness, and understanding of clinical next-generation sequencing (NGS) genomic testing

    PubMed Central

    Boland, PM; Ruth, K; Matro, JM; Rainey, KL; Fang, CY; Wong, YN; Daly, MB; Hall, MJ

    2014-01-01

    Genomic tests are increasingly complex, less expensive, and more widely available with the advent of next-generation sequencing (NGS). We assessed knowledge and perceptions among genetic counselors pertaining to NGS genomic testing via an online survey. Associations between selected characteristics and perceptions were examined. Recent education on NGS testing was common, but practical experience was limited. Perceived understanding of clinical NGS was modest, specifically concerning tumor testing. Greater perceived understanding of clinical NGS testing correlated with more time spent in cancer-related counseling, exposure to NGS testing, and NGS-focused education. Substantial disagreement about the role of counseling for tumor-based testing was seen. Finally, a majority of counselors agreed with the need for more education about clinical NGS testing, supporting this approach to optimizing implementation. PMID:25523111

  11. RAVE: Rapid Visualization Environment

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos

    1994-01-01

    Visualization is used in the process of analyzing large, multidimensional data sets. However, selecting and creating visualizations that are appropriate for the characteristics of a particular data set and that satisfy the analyst's goals is difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. The performance of these tasks requires the utilization of several types of domain knowledge that data analysts often do not have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.

  12. Next generation agricultural system data, models and knowledge products: Introduction.

    PubMed

    Antle, John M; Jones, James W; Rosenzweig, Cynthia E

    2017-07-01

    Agricultural system models have become important tools to provide predictive and assessment capability to a growing array of decision-makers in the private and public sectors. Despite ongoing research and model improvements, many of the agricultural models today are direct descendants of research investments initially made 30-40 years ago, and many of the major advances in data, information and communication technology (ICT) of the past decade have not been fully exploited. The purpose of this Special Issue of Agricultural Systems is to lay the foundation for the next generation of agricultural systems data, models and knowledge products. The Special Issue is based on a "NextGen" study led by the Agricultural Model Intercomparison and Improvement Project (AgMIP) with support from the Bill and Melinda Gates Foundation.

  13. Next Generation Agricultural System Data, Models and Knowledge Products: Introduction

    NASA Technical Reports Server (NTRS)

    Antle, John M.; Jones, James W.; Rosenzweig, Cynthia E.

    2016-01-01

    Agricultural system models have become important tools to provide predictive and assessment capability to a growing array of decision-makers in the private and public sectors. Despite ongoing research and model improvements, many of the agricultural models today are direct descendants of research investments initially made 30-40 years ago, and many of the major advances in data, information and communication technology (ICT) of the past decade have not been fully exploited. The purpose of this Special Issue of Agricultural Systems is to lay the foundation for the next generation of agricultural systems data, models and knowledge products. The Special Issue is based on a 'NextGen' study led by the Agricultural Model Intercomparison and Improvement Project (AgMIP) with support from the Bill and Melinda Gates Foundation.

  14. Evaluating the effectiveness of a practical inquiry-based learning bioinformatics module on undergraduate student engagement and applied skills.

    PubMed

    Brown, James A L

    2016-05-06

    A pedagogic intervention, in the form of an inquiry-based peer-assisted learning project (as a practical student-led bioinformatics module), was assessed for its ability to increase students' engagement, practical bioinformatic skills and process-specific knowledge. Elements assessed were process-specific knowledge following module completion, qualitative student-based module evaluation and the novelty, scientific validity and quality of written student reports. Bioinformatics is often the starting point for laboratory-based research projects, therefore high importance was placed on allowing students to individually develop and apply processes and methods of scientific research. Students led a bioinformatic inquiry-based project (within a framework of inquiry), discovering, justifying and exploring individually discovered research targets. Detailed assessable reports were produced, displaying data generated and the resources used. Mimicking research settings, undergraduates were divided into small collaborative groups, with distinctive central themes. The module was evaluated by assessing the quality and originality of the students' targets through reports, reflecting students' use and understanding of concepts and tools required to generate their data. Furthermore, evaluation of the bioinformatic module was assessed semi-quantitatively using pre- and post-module quizzes (a non-assessable activity, not contributing to their grade), which incorporated process- and content-specific questions (indicative of their use of the online tools). Qualitative assessment of the teaching intervention was performed using post-module surveys, exploring student satisfaction and other module specific elements. Overall, a positive experience was found, as was a post module increase in correct process-specific answers. In conclusion, an inquiry-based peer-assisted learning module increased students' engagement, practical bioinformatic skills and process-specific knowledge. © 2016 by The International Union of Biochemistry and Molecular Biology, 44:304-313 2016. © 2016 The International Union of Biochemistry and Molecular Biology.

  15. Tools for knowledge acquisition within the NeuroScholar system and their application to anatomical tract-tracing data

    PubMed Central

    Burns, Gully APC; Cheng, Wei-Cheng

    2006-01-01

    Background Knowledge bases that summarize the published literature provide useful online references for specific areas of systems-level biology that are not otherwise supported by large-scale databases. In the field of neuroanatomy, groups of small focused teams have constructed medium size knowledge bases to summarize the literature describing tract-tracing experiments in several species. Despite years of collation and curation, these databases only provide partial coverage of the available published literature. Given that the scientists reading these papers must all generate the interpretations that would normally be entered into such a system, we attempt here to provide general-purpose annotation tools to make it easy for members of the community to contribute to the task of data collation. Results In this paper, we describe an open-source, freely available knowledge management system called 'NeuroScholar' that allows straightforward structured markup of the PDF files according to a well-designed schema to capture the essential details of this class of experiment. Although the example worked through in this paper is quite specific to neuroanatomical connectivity, the design is freely extensible and could conceivably be used to construct local knowledge bases for other experiment types. Knowledge representations of the experiment are also directly linked to the contributing textual fragments from the original research article. Through the use of this system, not only could members of the community contribute to the collation task, but input data can be gathered for automated approaches to permit knowledge acquisition through the use of Natural Language Processing (NLP). Conclusion We present a functional, working tool to permit users to populate knowledge bases for neuroanatomical connectivity data from the literature through the use of structured questionnaires. This system is open-source, fully functional and available for download from [1]. PMID:16895608

  16. Ensuring long-term utility of the AOP framework and knowledge for multiple stakeholders

    EPA Science Inventory

    1.Introduction There is a need to increase the development and implementation of predictive approaches to support chemical safety assessment. These predictive approaches feature generation of data from tools such as computational models, pathway-based in vitro assays, and short-t...

  17. Goethe's Faust Revisited: Lessons from DIT Research.

    ERIC Educational Resources Information Center

    Nucci, Larry

    2002-01-01

    Discusses the Defining Issues Test as an invaluable tool for research and practice in moral education. Explains that because such instruments are based upon previous developmental research, they are unsuitable for research on moral development. Argues that these measures stand in the way of generating new knowledge. (CAJ)

  18. Automated unit-level testing with heuristic rules

    NASA Technical Reports Server (NTRS)

    Carlisle, W. Homer; Chang, Kai-Hsiung; Cross, James H.; Keleher, William; Shackelford, Keith

    1990-01-01

    Software testing plays a significant role in the development of complex software systems. Current testing methods generally require significant effort to generate meaningful test cases. The QUEST/Ada system is a prototype system designed using CLIPS to experiment with expert-system-based test case generation. The prototype is designed to test for condition coverage, and attempts to generate test cases to cover all feasible branches contained in an Ada program. This paper reports on heuristics used by the system. These heuristics vary according to the amount of knowledge obtained by preprocessing and execution of the boolean conditions in the program.
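
    QUEST/Ada's CLIPS heuristics are not listed in the abstract; the sketch below substitutes a simple random search that tries to drive each boolean condition of a hypothetical unit under test to both outcomes, keeping one input per coverage goal. It illustrates the condition-coverage objective, not the expert-system heuristics themselves.

```python
import random

# Hypothetical unit under test with two branch conditions on integer inputs x, y.
def unit_under_test(x, y):
    if x > 10 and y < 5:      # condition C1
        return "path_A"
    if x == y:                # condition C2
        return "path_B"
    return "path_C"

CONDITIONS = {
    "C1_true":  lambda x, y: x > 10 and y < 5,
    "C1_false": lambda x, y: not (x > 10 and y < 5),
    "C2_true":  lambda x, y: x == y,
    "C2_false": lambda x, y: x != y,
}

def generate_tests(budget=2000):
    """Random search over inputs, keeping the first case that covers each condition outcome."""
    covered = {}
    for _ in range(budget):
        x, y = random.randint(-20, 20), random.randint(-20, 20)
        for name, pred in CONDITIONS.items():
            if name not in covered and pred(x, y):
                covered[name] = (x, y)
        if len(covered) == len(CONDITIONS):
            break
    return covered

for goal, case in generate_tests().items():
    print(goal, "->", case, "=>", unit_under_test(*case))
```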

  19. A reusable knowledge acquisition shell: KASH

    NASA Technical Reports Server (NTRS)

    Westphal, Christopher; Williams, Stephen; Keech, Virginia

    1991-01-01

    KASH (Knowledge Acquisition SHell) is proposed to assist a knowledge engineer by providing a set of utilities for constructing knowledge acquisition sessions based on interviewing techniques. The information elicited from domain experts during the sessions is guided by a question dependency graph (QDG). The QDG, defined by the knowledge engineer, consists of a series of control questions about the domain that are used to organize the knowledge of an expert. The content information supplied by the expert, in response to the questions, is represented in the form of a concept map. These maps can be constructed in a top-down or bottom-up manner guided by the QDG and are used by KASH to generate the rules for a large class of expert system domains. Additionally, the concept maps can support the representation of temporal knowledge. The high degree of reusability encountered in the QDG and concept maps can vastly reduce the development times and costs associated with producing intelligent decision aids, training programs, and process control functions.
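
    KASH itself is not available here; the sketch below illustrates the described flow under assumed data structures: a toy question dependency graph drives an interview session, the answers are recorded as a concept map, and one traversal is collapsed into an if-then rule. The node names, prompts and canned answers are hypothetical.

```python
# Hypothetical question dependency graph: each node holds a prompt and follow-up questions.
QDG = {
    "symptom": {"prompt": "Name an observable symptom:", "next": ["cause"]},
    "cause":   {"prompt": "What typically causes it?", "next": ["action"]},
    "action":  {"prompt": "What corrective action applies?", "next": []},
}

# Canned expert answers stand in for an interactive interview session.
ANSWERS = {"symptom": "high vibration", "cause": "bearing wear", "action": "replace bearing"}

def run_session(start="symptom"):
    """Walk the QDG, recording (question, answer) pairs as concept-map nodes."""
    concept_map, frontier = [], [start]
    while frontier:
        node = frontier.pop(0)
        answer = ANSWERS[node]                      # would be input(QDG[node]["prompt"]) live
        concept_map.append((node, answer))
        frontier.extend(QDG[node]["next"])
    return concept_map

def to_rule(concept_map):
    """Collapse one traversal into a flat production rule."""
    facts = {role: value for role, value in concept_map}
    return (f"IF symptom is '{facts['symptom']}' AND cause is '{facts['cause']}' "
            f"THEN recommend '{facts['action']}'")

print(to_rule(run_session()))
```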

  20. Basic self-knowledge and transparency.

    PubMed

    Borgoni, Cristina

    2018-01-01

    Cogito-like judgments, a term coined by Burge (1988), comprise thoughts such as 'I am now thinking', 'I [hereby] judge that Los Angeles is at the same latitude as North Africa', or 'I [hereby] intend to go to the opera tonight'. It is widely accepted that we form cogito-like judgments in an authoritative and not merely empirical manner. We have privileged self-knowledge of the mental state that is self-ascribed in a cogito-like judgment. Thus, models of self-knowledge that aim to explain privileged self-knowledge should have the resources to explain the special self-knowledge involved in cogito judgments. My objective in this paper is to examine whether a transparency model of self-knowledge (i.e., models based on Evans' 1982 remarks) can provide such an explanation: granted that cogito judgments are paradigmatic cases of privileged self-knowledge, does the transparency procedure explain why this is so? The paper advances a negative answer, arguing that the transparency procedure cannot generate the type of thought constitutive of cogito judgments.

  1. "I Ulu No Ka Lala I Ke Kumu", The Branches Grow Because of the Trunk: Ancestral Knowledge as Refusal

    ERIC Educational Resources Information Center

    Chandler, Kapua L.

    2018-01-01

    This paper will discuss the ways that Native Hawaiian scholars are engaging in innovative strategies that incorporate ancestral knowledges into the academy. Ancestral knowledges are highly valued as Indigenous communities strive to pass on such wisdom and lessons from generation to generation. Ancestral knowledges are all around us no matter where…

  2. Shifting from Implicit to Explicit Knowledge: Different Roles of Early- and Late-Night Sleep

    ERIC Educational Resources Information Center

    Yordanova, Juliana; Kolev, Vasil; Verleger, Rolf; Bataghva, Zhamak; Born, Jan; Wagner, Ullrich

    2008-01-01

    Sleep has been shown to promote the generation of explicit knowledge as indicated by the gain of insight into previously unrecognized task regularities. Here, we explored whether this generation of explicit knowledge depends on pre-sleep implicit knowledge, and specified the differential roles of slow-wave sleep (SWS) vs. rapid eye movement (REM)…

  3. Knowledge-based support for the participatory design and implementation of shift systems.

    PubMed

    Gissel, A; Knauth, P

    1998-01-01

    This study developed a knowledge-based software system to support the participatory design and implementation of shift systems as a joint planning process including shift workers, the workers' committee, and management. The system was developed using a model-based approach. During the 1st phase, group discussions were repeatedly conducted with 2 experts. Thereafter a structure model of the process was generated and subsequently refined by the experts in additional semistructured interviews. Next, a factual knowledge base of 1713 relevant studies was collected on the effects of shift work. Finally, a prototype of the knowledge-based system was tested on 12 case studies. During the first 2 phases of the system, important basic information about the tasks to be carried out is provided for the user. During the 3rd phase this approach uses the problem-solving method of case-based reasoning to determine a shift rota which has already proved successful in other applications. It can then be modified in the 4th phase according to the shift workers' preferences. The last 2 phases support the final testing and evaluation of the system. The application of this system has shown that it is possible to obtain shift rotas suitable to actual problems and representative of good ergonomic solutions. A knowledge-based approach seems to provide valuable support for the complex task of designing and implementing a new shift system. The separation of the task into several phases, the provision of information at all stages, and the integration of all parties concerned seem to be essential factors for the success of the application.
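
    The abstract mentions case-based reasoning for proposing an initial shift rota but gives no algorithmic detail; below is a minimal, hedged sketch of the retrieval step only: return the stored rota whose workplace attributes are closest to the new problem. The attributes, cases and unweighted distance measure are invented for illustration.

```python
# Hypothetical case base: workplace attributes -> shift rota that worked there.
CASES = [
    ({"staff": 20, "night_work": 1, "weekend_work": 1}, "3-shift, forward-rotating, 2-2-2"),
    ({"staff": 12, "night_work": 0, "weekend_work": 1}, "2-shift, early/late, rotating free weekends"),
    ({"staff": 45, "night_work": 1, "weekend_work": 0}, "3-shift, slow backward rotation"),
]

def distance(a, b):
    """Simple unweighted distance over shared numeric attributes."""
    return sum(abs(a[k] - b[k]) for k in a)

def retrieve(problem):
    """Return the rota of the most similar stored case (the CBR retrieval step)."""
    return min(CASES, key=lambda case: distance(problem, case[0]))[1]

new_problem = {"staff": 18, "night_work": 1, "weekend_work": 1}
print(retrieve(new_problem))
```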

  4. A prototype knowledge-based simulation support system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, T.R.; Roberts, S.D.

    1987-04-01

    As a preliminary step toward the goal of an intelligent automated system for simulation modeling support, we explore the feasibility of the overall concept by generating and testing a prototypical framework. A prototype knowledge-based computer system was developed to support a senior level course in industrial engineering so that the overall feasibility of an expert simulation support system could be studied in a controlled and observable setting. The system behavior mimics the diagnostic (intelligent) process performed by the course instructor and teaching assistants, finding logical errors in INSIGHT simulation models and recommending appropriate corrective measures. The system was programmed in a non-procedural language (PROLOG) and designed to run interactively with students working on course homework and projects. The knowledge-based structure supports intelligent behavior, providing its users with access to an evolving accumulation of expert diagnostic knowledge. The non-procedural approach facilitates the maintenance of the system and helps merge the roles of expert and knowledge engineer by allowing new knowledge to be easily incorporated without regard to the existing flow of control. The background, features and design of the system are described and preliminary results are reported. Initial success is judged to demonstrate the utility of the reported approach and support the ultimate goal of an intelligent modeling system which can support simulation modelers outside the classroom environment. Finally, future extensions are suggested.

  5. Engaging Karen refugee students in science learning through a cross-cultural learning community

    NASA Astrophysics Data System (ADS)

    Harper, Susan G.

    2017-02-01

    This research explored how Karen (first-generation refugees from Burma) elementary students engaged with the Next Generation Science Standards (NGSS) practice of constructing scientific explanations based on evidence within the context of a cross-cultural learning community. In this action research, the researcher and a Karen parent served as co-teachers for fourth- and fifth-grade Karen and non-Karen students in a science and culture after-school programme in a public elementary school in the rural southeastern United States. Photovoice provided a critical platform for students to create their own cultural discourses for the learning community. The theoretical framework of critical pedagogy of place provided a way for the learning community to decolonise and re-inhabit the learning spaces with knowledge they co-constructed. Narrative analysis of video transcripts of the after-school programme, ethnographic interviews, and focus group discussions from Photovoice revealed a pattern of emerging agency by Karen students in the scientific practice of constructing scientific explanations based on evidence and in Karen language lessons. This evidence suggests that science learning embedded within a cross-cultural learning community can empower refugee students to construct their own hybrid cultural knowledge and leverage that knowledge to engage in a meaningful way with the epistemology of science.

  6. Sustainable development and next generation's health: a long-term perspective about the consequences of today's activities for food safety.

    PubMed

    Frazzoli, Chiara; Petrini, Carlo; Mantovani, Alberto

    2009-01-01

    Development is defined as sustainable when it meets the needs of the present without compromising the ability of future generations to meet their own needs. Pivoting on social, environmental and economic aspects of food chain sustainability, this paper presents the concept of sustainable food safety based on the prevention of risks and the burden of poor health for generations to come. In this respect, the assessment of long-term, transgenerational risks is still hampered by serious scientific uncertainties. Critical issues for the development of a sustainable food safety framework may include: endocrine disrupters as emerging contaminants that specifically target developing organisms; toxicological risk assessment in countries at the turning point of development; translating knowledge into toxicity indexes to support risk management approaches, such as hazard analysis and critical control points (HACCP); and the interplay between chemical hazards and social determinants. Efforts towards comprehensive knowledge and management of the key factors of sustainable food safety appear critical to the effectiveness of overall sustainability policies.

  7. An American knowledge base in England - Alternate implementations of an expert system flight status monitor

    NASA Technical Reports Server (NTRS)

    Butler, G. F.; Graves, A. T.; Disbrow, J. D.; Duke, E. L.

    1989-01-01

    A joint activity on knowledge-based systems between the Dryden Flight Research Facility of the NASA Ames Research Center (Ames-Dryden) and the Royal Aerospace Establishment (RAE) has been agreed upon. Under the agreement, a flight status monitor knowledge base developed at Ames-Dryden has been implemented using the real-time AI (artificial intelligence) toolkit MUSE, which was developed in the UK. Here, the background to the cooperation is described and the details of the flight status monitor and a prototype MUSE implementation are presented. It is noted that the capabilities of the expert-system flight status monitor to monitor data downlinked from the flight test aircraft and to generate information on the state and health of the system for the test engineers provide increased safety during flight testing of new systems. Furthermore, the expert-system flight status monitor provides the systems engineers with ready access to the large amount of information required to describe a complex aircraft system.

  8. Field-based generation and social validation of managers and staff competencies for small community residences.

    PubMed

    Thousand, J S; Burchard, S N; Hasazi, J E

    1986-01-01

    Characteristics and competencies for four staff positions in community residences for individuals with mental retardation were identified utilizing multiple empirical and deductive methods with field-based practitioners and field-based experts. The more commonly used competency generation methods of expert opinion and job performance analysis generated a high degree of knowledge and skill-based competencies similar to course curricula. Competencies generated by incumbent practitioners through open-ended methods of personal structured interview and critical incident analysis were ones which related to personal style, interpersonal interaction, and humanistic orientation. Although seldom included in staff, paraprofessional, or professional training curricula, these latter competencies include those identified by Carl Rogers as essential for developing an effective helping relationship in a therapeutic situation (i.e., showing liking, interest, and respect for the clients; being able to communicate positive regard to the client). Of 21 core competency statements selected as prerequisites to employment for all four staff positions, the majority (17 of 21) represented interpersonal skills important to working with others, including responsiveness to resident needs, personal valuation of persons with mental retardation, and normalization principles.

  9. Development of clinical practice guidelines.

    PubMed

    Hollon, Steven D; Areán, Patricia A; Craske, Michelle G; Crawford, Kermit A; Kivlahan, Daniel R; Magnavita, Jeffrey J; Ollendick, Thomas H; Sexton, Thomas L; Spring, Bonnie; Bufka, Lynn F; Galper, Daniel I; Kurtzman, Howard

    2014-01-01

    Clinical practice guidelines (CPGs) are intended to improve mental, behavioral, and physical health by promoting clinical practices that are based on the best available evidence. The American Psychological Association (APA) is committed to generating patient-focused CPGs that are scientifically sound, clinically useful, and informative for psychologists, other health professionals, training programs, policy makers, and the public. The Institute of Medicine (IOM) 2011 standards for generating CPGs represent current best practices in the field. These standards involve multidisciplinary guideline development panels charged with generating recommendations based on comprehensive systematic reviews of the evidence. The IOM standards will guide the APA as it generates CPGs that can be used to inform the general public and the practice community regarding the benefits and harms of various treatment options. CPG recommendations are advisory rather than compulsory. When used appropriately, high-quality guidelines can facilitate shared decision making and identify gaps in knowledge.

  10. Mechanism and modulation of terahertz generation from a semimetal - graphite

    PubMed Central

    Ye, Tong; Meng, Sheng; Zhang, Jin; E, Yiwen; Yang, Yuping; Liu, Wuming; Yin, Yan; Wang, Li

    2016-01-01

    Semi-metals might offer a stronger interaction and better confinement for terahertz waves than semiconductors, while preserving tunability. In particular, graphene-based materials are envisioned as terahertz modulators, filters and ultra-broadband sources. However, the understanding of terahertz generation from those materials is still not clear, which limits our ability to recognize their potential and improve device performance. Graphite, the mother material of graphene and a typical bulk semi-metal, is a good system for studying semi-metals and graphene-based materials. Here we experimentally modulate and maximize the terahertz signal from a graphite surface and thus reveal the mechanism: the surface field drives photon-induced carriers into a transient current that radiates the terahertz wave. We also discuss the differences between graphite and semiconductors; in particular, graphite shows very weak temperature dependence from room temperature to 80 °C. This knowledge will help us understand terahertz generation and achieve maximum output and electrical modulation in semi-metal or graphene-based devices. PMID:26972818

  11. Mechanism and modulation of terahertz generation from a semimetal--graphite.

    PubMed

    Ye, Tong; Meng, Sheng; Zhang, Jin; E, Yiwen; Yang, Yuping; Liu, Wuming; Yin, Yan; Wang, Li

    2016-03-14

    Semi-metals might offer a stronger interaction and better confinement for terahertz waves than semiconductors, while preserving tunability. In particular, graphene-based materials are envisioned as terahertz modulators, filters and ultra-broadband sources. However, the understanding of terahertz generation from those materials is still not clear, which limits our ability to recognize their potential and improve device performance. Graphite, the mother material of graphene and a typical bulk semi-metal, is a good system for studying semi-metals and graphene-based materials. Here we experimentally modulate and maximize the terahertz signal from a graphite surface and thus reveal the mechanism: the surface field drives photon-induced carriers into a transient current that radiates the terahertz wave. We also discuss the differences between graphite and semiconductors; in particular, graphite shows very weak temperature dependence from room temperature to 80 °C. This knowledge will help us understand terahertz generation and achieve maximum output and electrical modulation in semi-metal or graphene-based devices.

  12. A Case Study: Leadership Style and Practice Leveraging Knowledge Management in Multigenerational Professional Learning Communities

    ERIC Educational Resources Information Center

    Giles-Weeks, Veda

    2014-01-01

    Age-related demographic changes within public school organizations are resulting in leadership challenges in leveraging organizational knowledge across four unique generational cohorts. Competitive success within schools has linkages to organizational cohesiveness and knowledge management (KM). Generational cohorts maintain values affecting…

  13. Newly graduated nurses' use of knowledge sources: a meta-ethnography.

    PubMed

    Voldbjerg, Siri Lygum; Grønkjaer, Mette; Sørensen, Erik Elgaard; Hall, Elisabeth O C

    2016-08-01

    To advance evidence on newly graduated nurses' use of knowledge sources. Clinical decisions need to be evidence-based, and understanding the knowledge sources that newly graduated nurses use will inform both education and practice. Qualitative studies on newly graduated nurses' use of knowledge sources are increasing, though generated from scattered healthcare contexts. Therefore, a metasynthesis of qualitative research on what knowledge sources new graduates use in decision-making was conducted. Meta-ethnography. Nineteen reports, representing 17 studies, published from 2000-2014 were identified from iterative searches in relevant databases from May 2013-May 2014. Included reports were appraised for quality, and Noblit and Hare's meta-ethnography guided the interpretation and synthesis of data. Newly graduated nurses' use of knowledge sources during their first 2 years postgraduation was interpreted in the main theme 'self and others as knowledge sources,' with two subthemes 'doing and following' and 'knowing and doing,' each with several elucidating categories. The metasynthesis revealed a line of argument among the report findings underscoring progression in knowledge use and perception of competence and confidence among newly graduated nurses. The transition phase, feelings of confidence, and the ability to use critical thinking and reflection have a great impact on which knowledge sources are incorporated in clinical decisions. The synthesis accentuates that if newly graduated nurses' qualifications and skills in evidence-based practice are to be used, clinical practice needs to provide a supportive environment which nurtures critical thinking and questions and articulates the use of multiple knowledge sources. © 2016 John Wiley & Sons Ltd.

  14. Use of information and communication technologies for teaching physics at the Technical University

    NASA Astrophysics Data System (ADS)

    Polezhaev, V. D.; Polezhaeva, L. N.; Kamenev, V. V.

    2017-01-01

    The paper discusses ways to improve the methods and algorithms of automated knowledge control and approaches to the establishment and effective functioning of electronic teaching complexes, which include tests of a new generation whose use is not limited to control purposes only. The possibilities of the computer-based testing system SCIENTIA are presented. This system is a tool to automate the control of knowledge that can be used for the assessment and monitoring of students' knowledge in different types of exams, self-control of students' knowledge, making test materials, creating a unified database of tests on a wide range of subjects, etc. Successful operation of the information system has been confirmed in practice during the study of the physics course by students at the Technical University.

  15. Data, knowledge and method bases in chemical sciences. Part IV. Current status in databases.

    PubMed

    Braibanti, Antonio; Rao, Rupenaguntla Sambasiva; Rao, Gollapalli Nagesvara; Ramam, Veluri Anantha; Rao, Sattiraju Veera Venkata Satyanarayana

    2002-01-01

    Computer readable databases have become an integral part of chemical research, right from planning data acquisition to interpretation of the information generated. The databases available today are numerical, spectral and bibliographic. Data representation by different schemes (relational, hierarchical and object-based) is demonstrated. A quality index (QI) throws light on the quality of data. The objective, prospects and impact of database activity on expert systems are discussed. The number and size of corporate databases available on international networks have grown beyond a manageable number, leading to databases about their contents. Subsets of corporate or small databases have been developed by groups of chemists. The features and role of knowledge-based or intelligent databases are described.

  16. Hydrological research in Ethiopia

    NASA Astrophysics Data System (ADS)

    Gebremichael, M.

    2012-12-01

    Almost all major development problems in Ethiopia are water-related: food insecurity, low economic development, recurrent droughts, disastrous floods, poor health conditions, and low energy availability. In order to develop and manage existing water resources in a sustainable manner, knowledge is required about water availability, water quality, water demand in various sectors, and the impacts of water resource projects on health and the environment. The lack of ground-based data has been a major challenge for generating this knowledge. Current advances in remote sensing and computer simulation technology could provide an alternative source of datasets. In this talk, I will present the challenges and opportunities in using remote sensing datasets and hydrological models in regions such as Africa where ground-based datasets are scarce.

  17. Community Intelligence in Knowledge Curation: An Application to Managing Scientific Nomenclature

    PubMed Central

    Zou, Dong; Li, Ang; Liu, Guocheng; Chen, Fei; Wu, Jiayan; Xiao, Jingfa; Wang, Xumin; Yu, Jun; Zhang, Zhang

    2013-01-01

    Harnessing community intelligence in knowledge curation bears significant promise in dealing with communication and education in the flood of scientific knowledge. As knowledge is accumulated at ever-faster rates, scientific nomenclature, a particular kind of knowledge, is concurrently generated in all kinds of fields. Since nomenclature is a system of terms used to name things in a particular discipline, accurate translation of scientific nomenclature in different languages is of critical importance, not only for communications and collaborations with English-speaking people, but also for knowledge dissemination among people in the non-English-speaking world, particularly young students and researchers. However, translations of scientific nomenclature from English to other languages often lack accuracy and standardization, especially for languages that do not belong to the same language family as English. To address this issue, here we propose for the first time the application of community intelligence in scientific nomenclature management, namely, harnessing collective intelligence for translation of scientific nomenclature from English to other languages. As community intelligence applied to knowledge curation is primarily aided by wiki and Chinese is the native language for about one-fifth of the world’s population, we put the proposed application into practice, by developing a wiki-based English-to-Chinese Scientific Nomenclature Dictionary (ESND; http://esnd.big.ac.cn). ESND is a wiki-based, publicly editable and open-content platform, exploiting the whole power of the scientific community in collectively and collaboratively managing scientific nomenclature. Based on community curation, ESND is capable of achieving accurate, standard, and comprehensive scientific nomenclature, demonstrating a valuable application of community intelligence in knowledge curation. PMID:23451119

  18. Community intelligence in knowledge curation: an application to managing scientific nomenclature.

    PubMed

    Dai, Lin; Xu, Chao; Tian, Ming; Sang, Jian; Zou, Dong; Li, Ang; Liu, Guocheng; Chen, Fei; Wu, Jiayan; Xiao, Jingfa; Wang, Xumin; Yu, Jun; Zhang, Zhang

    2013-01-01

    Harnessing community intelligence in knowledge curation bears significant promise in dealing with communication and education in the flood of scientific knowledge. As knowledge is accumulated at ever-faster rates, scientific nomenclature, a particular kind of knowledge, is concurrently generated in all kinds of fields. Since nomenclature is a system of terms used to name things in a particular discipline, accurate translation of scientific nomenclature in different languages is of critical importance, not only for communications and collaborations with English-speaking people, but also for knowledge dissemination among people in the non-English-speaking world, particularly young students and researchers. However, translations of scientific nomenclature from English to other languages often lack accuracy and standardization, especially for languages that do not belong to the same language family as English. To address this issue, here we propose for the first time the application of community intelligence in scientific nomenclature management, namely, harnessing collective intelligence for translation of scientific nomenclature from English to other languages. As community intelligence applied to knowledge curation is primarily aided by wiki and Chinese is the native language for about one-fifth of the world's population, we put the proposed application into practice, by developing a wiki-based English-to-Chinese Scientific Nomenclature Dictionary (ESND; http://esnd.big.ac.cn). ESND is a wiki-based, publicly editable and open-content platform, exploiting the whole power of the scientific community in collectively and collaboratively managing scientific nomenclature. Based on community curation, ESND is capable of achieving accurate, standard, and comprehensive scientific nomenclature, demonstrating a valuable application of community intelligence in knowledge curation.

  19. A continuous quality improvement team approach to adverse drug reaction reporting.

    PubMed

    Flowers, P; Dzierba, S; Baker, O

    1992-07-01

    Crossfunctional teams can generate more new ideas, concepts, and possible solutions than does a department-based process alone. Working collaboratively can increase knowledge of teams using CQI approaches and appropriate tools. CQI produces growth and development at multiple levels resulting from involvement in the process of incremental improvement.

  20. Respondents as Interlocutors: Translating Deliberative Democratic Principles to Qualitative Interviewing Ethics

    ERIC Educational Resources Information Center

    Curato, Nicole

    2012-01-01

    The epistemic interview is a conversational practice, which aims to generate knowledge by subjecting respondents' beliefs to dialectical tests of reasons. Developed by Svend Brinkmann, this model draws inspiration from Socratic dialogues where the interviewer asks confronting questions to press respondents to articulate the normative bases of…

  1. Next generation sequencing of the genomes of 11 international RWA biotypes

    USDA-ARS?s Scientific Manuscript database

    Scientists researching poorly characterized species struggle to gain understanding of the species they study on a sub-cellular level due to the time and investment required to build up an informative knowledge base. This becomes problematic when a poorly characterized species is a pest of a major e...

  2. The Next Generation: Our Legacy, Their Future

    ERIC Educational Resources Information Center

    Boyce, B. Ann

    2008-01-01

    In this "Seventeenth Delphine Hanna Commemorative Lecture," Boyce draws on the legacy of Delphine Hanna's work in science-based curriculum to address the need for today's educators to balance both professional mission and disciplinary knowledge. In the mid 1960s, Franklin Henry proposed the notion that the foundation of physical…

  3. Does Active Learning Improve Students' Knowledge of and Attitudes toward Research Methods?

    ERIC Educational Resources Information Center

    Campisi, Jay; Finn, Kevin E.

    2011-01-01

    We incorporated an active, collaborative-based research project in our undergraduate Research Methods course for first-year sports medicine majors. Working in small groups, students identified a research question, generated a hypothesis to be tested, designed an experiment, implemented the experiment, analyzed the data, and presented their…

  4. Understanding Critical Thinking to Create Better Doctors

    ERIC Educational Resources Information Center

    Zayapragassarazan, Zayabalaradjane; Menon, Vikas; Kar, Sitanshu Sekhar; Batmanabane, Gitanjali

    2016-01-01

    Medical students master an enormous body of knowledge, but lack systematic problem-solving ability and effective clinical decision making. High profile reports have called for reforms in medical education to create a better generation of doctors who can cope with the system-based problems they would encounter in an interdisciplinary and…

  5. Interactive Distance Education: A Cognitive Load Perspective

    ERIC Educational Resources Information Center

    Kalyuga, Slava

    2012-01-01

    Evidence-based approaches to the design of the next generation of interactive distance education need to take into account established multimedia learning principles. Cognitive load theory is a theory that has significantly contributed to the development of such principles. It has applied our knowledge of major features and processing limitations…

  6. Infusing Action Mazes into Language Assessment Class Using Quandary

    ERIC Educational Resources Information Center

    Kiliçkaya, Ferit

    2017-01-01

    It is widely acknowledged that problem solving is one of today's prominent skills and is an ongoing activity where learners are actively involved in seeking information, generating new knowledge based on this information, and making decisions accordingly. In this respect, through infusing problem solving into the curriculum of language teaching, it…

  7. A Generational Opportunity: A 21st Century Learning Content Delivery System

    ERIC Educational Resources Information Center

    McElroy, Patrick

    2007-01-01

    This paper describes a collaboratively developed, open marketplace for network-based learning and research content for the higher education community. It explores how available technologies and standards can facilitate a new knowledge creation industry for higher education learning content that engages all stakeholders in new ways. The Advisory…

  8. The Red and White Yeast Lab: An Introduction to Science as a Process.

    ERIC Educational Resources Information Center

    White, Brian T.

    1999-01-01

    Describes an experimental system based on an engineered strain of bakers' yeast that is designed to involve students in the process by which scientific knowledge is generated. Students are asked to determine why the yeast grow to form a reproducible pattern of red and white. (WRM)

  9. Promoting Prospective Elementary Teachers' Learning to Use Formative Assessment for Life Science Instruction

    ERIC Educational Resources Information Center

    Sabel, Jaime L.; Forbes, Cory T.; Zangori, Laura

    2015-01-01

    To support elementary students' learning of core, standards-based life science concepts highlighted in the "Next Generation Science Standards," prospective elementary teachers should develop an understanding of life science concepts and learn to apply their content knowledge in instructional practice to craft elementary science learning…

  10. Designing Project-Based Instruction to Foster Generative and Mechanistic Understandings in Genetics

    ERIC Educational Resources Information Center

    Duncan, Ravit Golan; Tseng, Katie Ann

    2011-01-01

    The acquisition of scientific knowledge is fraught with difficulties and challenges for the learner. The very nature of some scientific domains contributes to the learning difficulties students' experience. Phenomena in these domains are composed of multiple organization levels featuring complicated interactions within and across these levels.…

  11. Effective Prototype Costing Policies in Research Universities: Are They Possible?

    ERIC Educational Resources Information Center

    McClure, Maureen W.; Abu-Duhou, Ibtisam

    Policy problems of prototype costing at research universities are discussed, based on a case study of a clinical treatment prototype program at a research university hospital. Prototype programs generate reproducible knowledge with useful applications and are primarily developed in professional schools. The potential of using costing prototypes…

  12. A Qualitative Approach to Portfolios: The Early Assessment for Exceptional Potential Model.

    ERIC Educational Resources Information Center

    Shaklee, Beverly D.; Viechnicki, Karen J.

    1995-01-01

    The Early Assessment for Exceptional Potential portfolio assessment model assesses children as exceptional learners, users, generators, and pursuers of knowledge. It is based on use of authentic learning opportunities; interaction of assessment, curriculum, and instruction; multiple criteria derived from multiple sources; and systematic teacher…

  13. Spurious symptom reduction in fault monitoring

    NASA Technical Reports Server (NTRS)

    Shontz, William D.; Records, Roger M.; Choi, Jai J.

    1993-01-01

    Previous work accomplished on NASA's Faultfinder concept suggested that the concept was jeopardized by spurious symptoms generated in the monitoring phase. The purpose of the present research was to investigate methods of reducing the generation of spurious symptoms during in-flight engine monitoring. Two approaches for reducing spurious symptoms were investigated. A knowledge base of rules was constructed to filter known spurious symptoms and a neural net was developed to improve the expectation values used in the monitoring process. Both approaches were effective in reducing spurious symptoms individually. However, the best results were obtained using a hybrid system combining the neural net capability to improve expectation values with the rule-based logic filter.
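
    A minimal sketch of the hybrid idea, assuming an arbitrary spurious-symptom rule list, scikit-learn's MLPRegressor as the expectation-value model, and an illustrative 5% deviation threshold (none of these details come from the Faultfinder work itself):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Assumed labels for known spurious symptom patterns filtered by the rule base.
      KNOWN_SPURIOUS = {"EGT_SPIKE_DURING_BLEED", "N1_DROP_ON_SENSOR_SWITCH"}

      def rule_filter(symptom_label):
          """Keep a symptom only if it is not a known spurious pattern."""
          return symptom_label not in KNOWN_SPURIOUS

      def train_expectation_model(flight_conditions, observed_values):
          """Learn expected engine-parameter values from flight conditions."""
          return MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                              random_state=0).fit(flight_conditions, observed_values)

      def symptom_detected(model, conditions, measurement, symptom_label, tol=0.05):
          """Flag a symptom only if the measurement deviates from the learned
          expectation AND the rule base does not mark it as spurious."""
          expected = model.predict(conditions.reshape(1, -1))[0]
          deviates = abs(measurement - expected) > tol * abs(expected)
          return deviates and rule_filter(symptom_label)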

  14. Model compilation: An approach to automated model derivation

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Baudin, Catherine; Iwasaki, Yumi; Nayak, Pandurang; Tanaka, Kazuo

    1990-01-01

    An approach to automated model derivation for knowledge-based systems is introduced. The approach, model compilation, involves procedurally generating the set of domain models used by a knowledge-based system. An implemented example illustrates how this approach can be used to derive models of different precision and abstraction, tailored to different tasks, from a given set of base domain models. In particular, two implemented model compilers are described, each of which takes as input a base model that describes the structure and behavior of a simple electromechanical device, the Reaction Wheel Assembly of NASA's Hubble Space Telescope. The compilers transform this relatively general base model into simple task-specific models for troubleshooting and redesign, respectively, by applying a sequence of model transformations. Each transformation in this sequence produces an increasingly more specialized model. The compilation approach lessens the burden of updating and maintaining consistency among models by enabling their automatic regeneration.
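
    Stripped to its skeleton, the compilation idea can be sketched as a base model pushed through a sequence of transformations, each yielding a more specialized, task-specific model. The model content and the two transformations below are purely illustrative stand-ins, not the compilers described in the report:

      from copy import deepcopy

      def compile_model(base_model, transformations):
          """Apply a sequence of transformations, each producing a more specialized model."""
          model = deepcopy(base_model)          # never mutate the shared base model
          for transform in transformations:
              model = transform(model)
          return model

      # Two illustrative transformation steps (hypothetical, for demonstration only):
      def keep_structure_only(model):
          model["behavior"] = {}                # troubleshooting needs connectivity, not dynamics
          return model

      def collapse_to_functions(model):
          model["components"] = sorted({c["function"] for c in model["components"].values()})
          return model

      base = {
          "components": {"wheel": {"function": "store momentum"},
                         "motor": {"function": "apply torque"}},
          "behavior": {"spin_up": "torque increases wheel speed"},
      }
      troubleshooting_model = compile_model(base, [keep_structure_only])
      redesign_model = compile_model(base, [keep_structure_only, collapse_to_functions])
      print(troubleshooting_model)
      print(redesign_model)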

  15. Anomaly Detection for Next-Generation Space Launch Ground Operations

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Iverson, David L.; Hall, David R.; Taylor, William M.; Patterson-Hine, Ann; Brown, Barbara; Ferrell, Bob A.; Waterman, Robert D.

    2010-01-01

    NASA is developing new capabilities that will enable future human exploration missions while reducing mission risk and cost. The Fault Detection, Isolation, and Recovery (FDIR) project aims to demonstrate the utility of integrated vehicle health management (IVHM) tools in the domain of ground support equipment (GSE) to be used for the next generation launch vehicles. In addition to demonstrating the utility of IVHM tools for GSE, FDIR aims to mature promising tools for use on future missions and document the level of effort - and hence cost - required to implement an application with each selected tool. One of the FDIR capabilities is anomaly detection, i.e., detecting off-nominal behavior. The tool we selected for this task uses a data-driven approach. Unlike rule-based and model-based systems that require manual extraction of system knowledge, data-driven systems take a radically different approach to reasoning. At the basic level, they start with data that represent nominal functioning of the system and automatically learn expected system behavior. The behavior is encoded in a knowledge base that represents "in-family" system operations. During real-time system monitoring or during post-flight analysis, incoming data is compared to that nominal system operating behavior knowledge base; a distance representing deviation from nominal is computed, providing a measure of how far "out of family" current behavior is. We describe the selected tool for FDIR anomaly detection - Inductive Monitoring System (IMS), how it fits into the FDIR architecture, the operations concept for the GSE anomaly monitoring, and some preliminary results of applying IMS to a Space Shuttle GSE anomaly.
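
    The data-driven monitoring idea described above can be illustrated with a minimal sketch: learn clusters of nominal behavior, then measure the distance of new data to the nearest nominal cluster. The use of scikit-learn's KMeans, the synthetic four-channel sensor data, and the cluster count are assumptions for illustration; this is the generic pattern, not IMS itself:

      import numpy as np
      from sklearn.cluster import KMeans

      def build_nominal_knowledge_base(nominal_data, n_clusters=8):
          """Learn 'in-family' behavior by clustering nominal sensor vectors (rows)."""
          return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(nominal_data)

      def deviation_from_nominal(model, sample):
          """Distance from a new sensor vector to the closest nominal cluster center."""
          return float(np.min(np.linalg.norm(model.cluster_centers_ - sample, axis=1)))

      rng = np.random.default_rng(0)
      nominal = rng.normal(0.0, 1.0, size=(500, 4))                           # nominal operations data
      model = build_nominal_knowledge_base(nominal)
      print(deviation_from_nominal(model, np.array([0.1, -0.2, 0.0, 0.3])))   # small: "in family"
      print(deviation_from_nominal(model, np.array([8.0, 9.0, -7.5, 10.0])))  # large: "out of family"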

  16. Literature search strategies for conducting knowledge-building and theory-generating qualitative systematic reviews.

    PubMed

    Finfgeld-Connett, Deborah; Johnson, E Diane

    2013-01-01

    To report literature search strategies for the purpose of conducting knowledge-building and theory-generating qualitative systematic reviews. Qualitative systematic reviews lie on a continuum from knowledge-building and theory-generating to aggregating and summarizing. Different types of literature searches are needed to optimally support these dissimilar reviews. Articles published between 1989-Autumn 2011. These documents were identified using a hermeneutic approach and multiple literature search strategies. Redundancy is not the sole measure of validity when conducting knowledge-building and theory-generating systematic reviews. When conducting these types of reviews, literature searches should be consistent with the goal of fully explicating concepts and the interrelationships among them. To accomplish this objective, a 'berry picking' approach is recommended along with strategies for overcoming barriers to finding qualitative research reports. To enhance integrity of knowledge-building and theory-generating systematic reviews, reviewers are urged to make literature search processes as transparent as possible, despite their complexity. This includes fully explaining and rationalizing what databases were used and how they were searched. It also means describing how literature tracking was conducted and grey literature was searched. In the end, the decision to cease searching also needs to be fully explained and rationalized. Predetermined linear search strategies are unlikely to generate search results that are adequate for purposes of conducting knowledge-building and theory-generating qualitative systematic reviews. Instead, it is recommended that iterative search strategies take shape as reviews evolve. © 2012 Blackwell Publishing Ltd.

  17. An Integrated Planning Representation Using Macros, Abstractions, and Cases

    NASA Technical Reports Server (NTRS)

    Baltes, Jacky; MacDonald, Bruce

    1992-01-01

    Planning will be an essential part of future autonomous robots and integrated intelligent systems. This paper focuses on learning problem-solving knowledge in planning systems. The system is based on a common representation for macros, abstractions, and cases. Therefore, it is able to exploit both classical and case-based techniques. The general operators in a successful plan derivation would be assessed for their potential usefulness, and some are stored. The feasibility of this approach was studied through the implementation of a learning system for abstraction. New macros are motivated by trying to improve the operator set. One heuristic used to improve the operator set is generating operators with more general preconditions than existing ones. This heuristic leads naturally to abstraction hierarchies. This investigation showed promising results on the Towers of Hanoi problem. The paper concludes by describing methods for learning other problem-solving knowledge. This knowledge can be represented by allowing operators at different levels of abstraction in a refinement.

  18. Medical knowledge packages and their integration into health-care information systems and the World Wide Web.

    PubMed

    Adlassnig, Klaus-Peter; Rappelsberger, Andrea

    2008-01-01

    Software-based medical knowledge packages (MKPs) are packages of highly structured medical knowledge that can be integrated into various health-care information systems or the World Wide Web. They have been established to provide different forms of clinical decision support such as textual interpretation of combinations of laboratory test results, generating diagnostic hypotheses as well as confirmed and excluded diagnoses to support differential diagnosis in internal medicine, or for early identification and automatic monitoring of hospital-acquired infections. Technically, an MKP may consist of a number of inter-connected Arden Medical Logic Modules. Several MKPs have been integrated thus far into hospital, laboratory, and departmental information systems. This has resulted in useful and widely accepted software-based clinical decision support for the benefit of the patient, the physician, and the organization funding the health care system.

  19. Application of in vitro biopharmaceutical methods in development of immediate release oral dosage forms intended for paediatric patients.

    PubMed

    Batchelor, Hannah K; Kendall, Richard; Desset-Brethes, Sabine; Alex, Rainer; Ernest, Terry B

    2013-11-01

    Biopharmaceutics is routinely used in the design and development of medicines to generate science based evidence to predict in vivo performance; the application of this knowledge specifically to paediatric medicines development is yet to be explored. The aim of this review is to present the current status of available biopharmaceutical tools and tests including solubility, permeability and dissolution that may be appropriate for use in the development of immediate release oral paediatric medicines. The existing tools used in adults are discussed together with any limitations for their use within paediatric populations. The results of this review highlight several knowledge gaps in current methodologies in paediatric biopharmaceutics. The authors provide recommendations based on existing knowledge to adapt tests to better represent paediatric patient populations and also provide suggestions for future research that may lead to better tools to evaluate paediatric medicines. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Mathematics/Arithmetic Knowledge-Based Way of Thinking and Its Maintenance Needed for Engineers

    NASA Astrophysics Data System (ADS)

    Harada, Shoji

    Examining curricula among universities revealed that no significant difference can be seen in math classes or related subjects. However, the amount and depth of those studies generally differed depending on the content of the curriculum and the level of achievement at entrance to the individual university. The universalization of higher education shows that students have many problems in learning higher-level traditional math and that their memory of the math they learned quickly fades away after passing the exam. This means that further development of engineers' higher-math-knowledge-based thinking after graduation from university cannot be taken for granted. Under these circumstances, the present author, as a fan of math, proposes how to maintain the way of thinking generated by math knowledge. What is necessary for engineers is to pay attention to common books dealing with elementary mathematics or arithmetic-related matters. This surely leads engineers to nourish a math/arithmetic knowledge-based way of thinking.

  1. From data mining rules to medical logical modules and medical advices.

    PubMed

    Gomoi, Valentin; Vida, Mihaela; Robu, Raul; Stoicu-Tivadar, Vasile; Bernad, Elena; Lupşe, Oana

    2013-01-01

    Using data mining in collaboration with Clinical Decision Support Systems adds new knowledge as support for medical diagnosis. The current work presents a tool which translates data mining rules, supporting the generation of medical advice, into the Arden Syntax formalism. The developed system was tested with data related to 2326 births that took place in 2010 at the Bega Obstetrics - Gynaecology Hospital, Timişoara. Based on processing these data, 14 medical rules regarding the Apgar score were generated and then translated into the Arden Syntax language.
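
    As a rough illustration of the translation step, a mined if-then rule can be emitted as a simplified Arden-Syntax-style MLM skeleton. The slot layout below is abbreviated and the rule content is hypothetical; it is neither a complete, validated MLM nor the tool described in the paper:

      def rule_to_mlm(name, condition, advice):
          """Render a mined if-then rule as a simplified Arden-Syntax-style skeleton."""
          return f"""maintenance:
        mlmname: {name};;
      library:
        purpose: advice generated from a data mining rule;;
      knowledge:
        data: apgar := read last {{apgar score}};;
        logic:
          if {condition} then conclude true; endif;;
        action:
          write "{advice}";;
      end:
      """

      print(rule_to_mlm("low_apgar_alert",
                        "apgar < 7",
                        "Apgar score below 7: consider closer neonatal monitoring."))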

  2. A method of computer aided design with self-generative models in NX Siemens environment

    NASA Astrophysics Data System (ADS)

    Grabowik, C.; Kalinowski, K.; Kempa, W.; Paprocka, I.

    2015-11-01

    Currently, CAD/CAE/CAM systems make it possible to create 3D virtual design models that capture a certain amount of knowledge. These models are especially useful in the automation of routine design tasks. Such models are known as self-generative or auto-generative, and they can behave in an intelligent way. The main difference between auto-generative and fully parametric models lies in the auto-generative models' ability to self-organize. In this case, design model self-organizing means that, aside from making automatic changes to a model's quantitative features, these models possess the knowledge of how those changes should be made. Moreover, they are able to change qualitative features according to specific knowledge. In spite of the undoubted strengths of self-generative models, they are not often used in the constructional design process, mainly because of their usually great complexity. This complexity makes the creation of self-generative models time- and labour-consuming, and it requires quite substantial investment outlays. The creation of a self-generative model consists of three stages: knowledge and information acquisition, model type selection, and model implementation. In this paper, methods of computer-aided design with self-generative models in NX Siemens CAD/CAE/CAM software are presented. There are five methods of self-generative model preparation in NX: with a parametric relations model, part families, a GRIP language application, knowledge fusion, and the OPEN API mechanism. Examples of each type of self-generative model are presented in the paper. These methods make the constructional design process much faster. It is suggested that this kind of self-generative model be prepared when design variants need to be created. The research conducted on assessing the usefulness of the elaborated models showed that they are highly recommended for the automation of routine tasks. It remains difficult, however, to say which method of self-generative model preparation is preferable; this always depends on the complexity of the problem. The easiest way to prepare such a model is with the parametric relations model, while the hardest is with the OPEN API mechanism. From a knowledge-processing point of view, the best choice is the application of knowledge fusion.

  3. Child-orientated environmental education influences adult knowledge and household behaviour

    NASA Astrophysics Data System (ADS)

    Damerell, P.; Howe, C.; Milner-Gulland, E. J.

    2013-03-01

    Environmental education is frequently undertaken as a conservation intervention designed to change the attitudes and behaviour of recipients. Much conservation education is aimed at children, with the rationale that children influence the attitudes of their parents, who will consequently change their behaviour. Empirical evidence to substantiate this suggestion is very limited, however. For the first time, we use a controlled trial to assess the influence of wetland-related environmental education on the knowledge of children and their parents and household behaviour. We demonstrate adults exhibiting greater knowledge of wetlands and improved reported household water management behaviour when their child has received wetland-based education at Seychelles wildlife clubs. We distinguish between ‘folk’ knowledge of wetland environments and knowledge obtained from formal education, with intergenerational transmission of each depending on different factors. Our study provides the first strong support for the suggestion that environmental education can be transferred between generations and indirectly induce targeted behavioural changes.

  4. Learning Ecosystem Complexity: A Study on Small-Scale Fishers' Ecological Knowledge Generation

    ERIC Educational Resources Information Center

    Garavito-Bermúdez, Diana

    2018-01-01

    Small-scale fisheries are learning contexts of importance for generating, transferring and updating ecological knowledge of natural environments through everyday work practices. The rich knowledge fishers have of local ecosystems is the result of the intimate relationship fishing communities have had with their natural environments across…

  5. Pre-coding assisted generation of a frequency quadrupled optical vector D-band millimeter wave with one Mach-Zehnder modulator.

    PubMed

    Zhou, Wen; Li, Xinying; Yu, Jianjun

    2017-10-30

    We propose QPSK millimeter-wave (mm-wave) vector signal generation for D-band based on balanced precoding-assisted photonic frequency quadrupling technology employing a single intensity modulator without an optical filter. The intensity MZM is driven by a balanced pre-coding 37-GHz QPSK RF signal. The modulated optical subcarriers are directly sent into a single-ended photodiode to generate a 148-GHz QPSK vector signal. We experimentally demonstrate 1-Gbaud 148-GHz QPSK mm-wave vector signal generation and investigate the bit-error-rate (BER) performance of the vector signals at 148 GHz. The experimental results show that a BER as low as 1.448 × 10⁻³ can be achieved when the optical power into the photodiode is 8.8 dBm. To the best of our knowledge, this is the first realization of frequency-quadrupled vector mm-wave signal generation at D-band based on only one MZM without an optical filter.

  6. Achieving realistic performance and decision-making capabilities in computer-generated air forces

    NASA Astrophysics Data System (ADS)

    Banks, Sheila B.; Stytz, Martin R.; Santos, Eugene, Jr.; Zurita, Vincent B.; Benslay, James L., Jr.

    1997-07-01

    For a computer-generated force (CGF) system to be useful in training environments, it must be able to operate at multiple skill levels, exhibit competency at assigned missions, and comply with current doctrine. Because of the rapid rate of change in distributed interactive simulation (DIS) and the expanding set of performance objectives for any computer-generated force, the system must also be modifiable at reasonable cost and incorporate mechanisms for learning. Therefore, CGF applications must have adaptable decision mechanisms and behaviors and perform automated incorporation of past reasoning and experience into their decision processes. The CGF must also possess multiple skill levels for classes of entities, gracefully degrade its reasoning capability in response to system stress, possess an expandable modular knowledge structure, and perform adaptive mission planning. Furthermore, correctly performing individual entity behaviors is not sufficient. Issues related to complex inter-entity behavioral interactions, such as the need to maintain formation and share information, must also be considered. The CGF must also be able to acceptably respond to unforeseen circumstances and be able to make decisions in spite of uncertain information. Because of the need for increased complexity in the virtual battlespace, the CGF should exhibit complex, realistic behavior patterns within the battlespace. To achieve these necessary capabilities, an extensible software architecture, an expandable knowledge base, and an adaptable decision-making mechanism are required. Our lab has addressed these issues in detail. The resulting DIS-compliant system is called the automated wingman (AW). The AW is based on fuzzy logic, the common object database (CODB) software architecture, and a hierarchical knowledge structure. We describe the techniques we used to enable us to make progress toward a CGF entity that satisfies the requirements presented above. We present our design and implementation of an adaptable decision-making mechanism that uses multi-layered, fuzzy-logic-controlled situational analysis. Because our research indicates that fuzzy logic can perform poorly under certain circumstances, we combine fuzzy logic inferencing with adversarial game tree techniques for decision making in strategic and tactical engagements. We describe the approach we employed to achieve this fusion. We also describe the automated wingman's system architecture and knowledge base architecture.
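
    A toy sketch of fuzzy-logic situational analysis of the kind described above: triangular membership functions over a threat-distance input, a two-rule base, and weighted-average defuzzification. The labels, membership shapes, and rule set are illustrative assumptions, not the automated wingman's actual knowledge base:

      def tri(x, a, b, c):
          """Triangular membership function rising from a, peaking at b, falling to c."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def engagement_posture(threat_distance_km):
          near = tri(threat_distance_km, 0.0, 10.0, 30.0)   # membership in "threat is near"
          far = tri(threat_distance_km, 20.0, 50.0, 80.0)   # membership in "threat is far"
          # Rule 1: IF threat is near THEN posture score = 1.0 (engage)
          # Rule 2: IF threat is far  THEN posture score = 0.0 (hold formation)
          score = (near * 1.0 + far * 0.0) / max(near + far, 1e-9)  # weighted-average defuzzification
          return "engage" if score > 0.5 else "hold formation"

      print(engagement_posture(10.0))   # engage
      print(engagement_posture(50.0))   # hold formation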

  7. Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations

    NASA Astrophysics Data System (ADS)

    Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.

    2006-12-01

    Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. In order to account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we have shown that it is possible to extract relationships from ESML-based representations, and capture the results in the Resource Description Framework (RDF). Thus we explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate with a real-world example this approach for introducing annotations into automatically generated knowledge representations.
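
    A hedged sketch of capturing one feature-based annotation as RDF, pointing into an ESML document via an XPointer expression. It uses the rdflib library; the namespace, predicate names, resource URIs, and XPointer string are illustrative assumptions rather than the vocabulary used in the work above:

      from rdflib import Graph, Literal, Namespace, URIRef

      EX = Namespace("http://example.org/annotation#")   # hypothetical annotation vocabulary

      g = Graph()
      annotation = URIRef("http://example.org/annotations/gap-2006-07-14")
      g.add((annotation, EX.annotates, URIRef("http://example.org/data/ggp-record.esml")))
      g.add((annotation, EX.xpointer,
             Literal("xpointer(//segment[@id='s42']/range-to(//segment[@id='s47']))")))
      g.add((annotation, EX.featureType, Literal("data gap: instrument servicing")))

      print(g.serialize(format="turtle"))   # external annotation, original ESML left untouched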

  8. Geospatial Standards and the Knowledge Generation Lifecycle

    NASA Technical Reports Server (NTRS)

    Khalsa, Siri Jodha S.; Ramachandran, Rahul

    2014-01-01

    Standards play an essential role at each stage in the sequence of processes by which knowledge is generated from geoscience observations, simulations and analysis. This paper provides an introduction to the field of informatics and the knowledge generation lifecycle in the context of the geosciences. In addition we discuss how the newly formed Earth Science Informatics Technical Committee is helping to advance the application of standards and best practices to make data and data systems more usable and interoperable.

  9. Endodontic Microbiology and Pathobiology: Current State of Knowledge.

    PubMed

    Fouad, Ashraf F

    2017-01-01

    Newer research tools and an expanding basic science knowledge base have allowed the exploration of endodontic diseases in the pulp and periapical tissues in novel ways. The use of next-generation sequencing, bioinformatics analyses, and genome-wide association studies, to name just a few of these innovations, has allowed the identification of hundreds of microorganisms and of host response factors. This review addresses recent advances in endodontic microbiology and the host response and discusses the potential for future innovations in this area. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. A New Educational Method to Acquire and Transfer Experience-based Wisdom for Power Engineers

    NASA Astrophysics Data System (ADS)

    Kyomoto, Sumie; Doi, Atsushi

    The electric power industry faces circumstances where advances in system automation technologies and enhancements in operational reliability make on-the-job training (OJT) opportunities less frequent; consequently, it becomes difficult to rely simply on a traditional OJT-based method for successfully passing experiential knowledge and skills from one generation of technicians to another. In addition, the “year 2007 issue” puts the companies concerned at risk of losing sophisticated skills or know-how which veterans in their employment have accumulated over many years of service. This paper discusses, in light of the usefulness of “guided experience” under an apprentice system, a training/education scheme designed to realize an inheritance of experienced personnel's know-how, in particular tacit knowledge, and a new educational system which is based on this notion. A system is proposed which involves: 1) making use of a work simulator, 2) accumulating tacit knowledge which experienced personnel use as the way or process to identify, analyze and solve complex problems in specific challenging situations, and 3) realizing “learning by doing” which is supported by the database of tacit knowledge. A trial on a prototype has proved the feasibility of this system.

  11. Knowledge Based Systems: A Critical Survey of Major Concepts, Issues, and Techniques. M.S. Thesis Final Report, 1 Jul. 1985 - 31 Dec. 1987

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kavi, Srinu

    1984-01-01

    This Working Paper Series entry presents a detailed survey of knowledge-based systems. After being in a relatively dormant state for many years, only recently is Artificial Intelligence (AI) - that branch of computer science that attempts to have machines emulate intelligent behavior - accomplishing practical results. Most of these results can be attributed to the design and use of Knowledge-Based Systems, KBSs (or expert systems) - problem-solving computer programs that can reach a level of performance comparable to that of a human expert in some specialized problem domain. These systems can act as a consultant for various requirements like medical diagnosis, military threat analysis, project risk assessment, etc. These systems possess knowledge to enable them to make intelligent decisions. They are, however, not meant to replace the human specialists in any particular domain. A critical survey of recent work in interactive KBSs is reported. A case study (MYCIN) of a KBS, a list of existing KBSs, and an introduction to the Japanese Fifth Generation Computer Project are provided as appendices. Finally, an extensive set of KBS-related references is provided at the end of the report.

  12. A Voronoi interior adjacency-based approach for generating a contour tree

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Qiao, Chaofei; Zhao, Renliang

    2004-05-01

    A contour tree is a good graphical tool for representing the spatial relations of contour lines and has found many applications in map generalization, map annotation, terrain analysis, etc. A new approach for generating contour trees by introducing a Voronoi-based interior adjacency set concept is proposed in this paper. The immediate interior adjacency set is employed to identify all of the children contours of each contour without contour elevations. It has advantages over existing methods such as the point-in-polygon method and the region growing-based method. This new approach can be used for spatial data mining and knowledge discovering, such as the automatic extraction of terrain features and construction of multi-resolution digital elevation model.
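
    Once each contour's immediate interior children are known (the step the Voronoi-based interior adjacency set provides, which is assumed here rather than implemented), assembling the contour tree itself is straightforward. A minimal sketch with hypothetical contour identifiers:

      from collections import defaultdict

      def build_contour_tree(child_relations):
          """child_relations: iterable of (parent_contour_id, child_contour_id) pairs."""
          children = defaultdict(list)
          all_ids, child_ids = set(), set()
          for parent, child in child_relations:
              children[parent].append(child)
              all_ids.update((parent, child))
              child_ids.add(child)
          roots = sorted(all_ids - child_ids)   # outermost contours have no parent
          return roots, dict(children)

      roots, tree = build_contour_tree([("c0", "c1"), ("c1", "c2"), ("c1", "c3")])
      print(roots)   # ['c0']
      print(tree)    # {'c0': ['c1'], 'c1': ['c2', 'c3']}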

  13. Vision for a Global Registry of Anticipated Public Health Studies

    PubMed Central

    Choi, Bernard C.K.; Frank, John; Mindell, Jennifer S.; Orlova, Anna; Lin, Vivian; Vaillancourt, Alain D.M.G.; Puska, Pekka; Pang, Tikki; Skinner, Harvey A.; Marsh, Marsha; Mokdad, Ali H.; Yu, Shun-Zhang; Lindner, M. Cristina; Sherman, Gregory; Barreto, Sandhi M.; Green, Lawrence W.; Svenson, Lawrence W.; Sainsbury, Peter; Yan, Yongping; Zhang, Zuo-Feng; Zevallos, Juan C.; Ho, Suzanne C.; de Salazar, Ligia M.

    2007-01-01

    In public health, the generation, management, and transfer of knowledge all need major improvement. Problems in generating knowledge include an imbalance in research funding, publication bias, unnecessary studies, adherence to fashion, and undue interest in novel and immediate issues. Impaired generation of knowledge, combined with a dated and inadequate process for managing knowledge and an inefficient system for transferring knowledge, means a distorted body of evidence available for decision-making in public health. This article hopes to stimulate discussion by proposing a Global Registry of Anticipated Public Health Studies. This prospective, comprehensive system for tracking research in public health could help enhance collaboration and improve efficiency. Practical problems must be discussed before such a vision can be further developed. PMID:17413073

  14. Identifying problems and generating recommendations for enhancing complex systems: applying the abstraction hierarchy framework as an analytical tool.

    PubMed

    Xu, Wei

    2007-12-01

    This study adopts J. Rasmussen's (1985) abstraction hierarchy (AH) framework as an analytical tool to identify problems and pinpoint opportunities to enhance complex systems. The process of identifying problems and generating recommendations for complex systems using conventional methods is usually conducted based on incompletely defined work requirements. As the complexity of systems rises, the sheer mass of data generated from these methods becomes unwieldy to manage in a coherent, systematic form for analysis. There is little known work on adopting a broader perspective to fill these gaps. AH was used to analyze an aircraft-automation system in order to further identify breakdowns in pilot-automation interactions. Four steps follow: developing an AH model for the system, mapping the data generated by various methods onto the AH, identifying problems based on the mapped data, and presenting recommendations. The breakdowns lay primarily with automation operations that were more goal directed. Identified root causes include incomplete knowledge content and ineffective knowledge structure in pilots' mental models, lack of effective higher-order functional domain information displayed in the interface, and lack of sufficient automation procedures for pilots to effectively cope with unfamiliar situations. The AH is a valuable analytical tool to systematically identify problems and suggest opportunities for enhancing complex systems. It helps further examine the automation awareness problems and identify improvement areas from a work domain perspective. Applications include the identification of problems and generation of recommendations for complex systems as well as specific recommendations regarding pilot training, flight deck interfaces, and automation procedures.

  15. On the acquisition and representation of procedural knowledge

    NASA Technical Reports Server (NTRS)

    Saito, T.; Ortiz, C.; Loftin, R. B.

    1992-01-01

    Historically, knowledge acquisition has proven to be one of the greatest barriers to the development of intelligent systems. Current practice generally requires lengthy interactions between the expert whose knowledge is to be captured and the knowledge engineer whose responsibility is to acquire and represent the knowledge in a useful form. Although much research has been devoted to the development of methodologies and computer software to aid in the capture and representation of some types of knowledge, little attention has been devoted to procedural knowledge. NASA personnel frequently perform tasks that are primarily procedural in nature. Previous work in the field of knowledge acquisition is reviewed, and the focus then turns to knowledge acquisition for procedural tasks, with special attention devoted to the Navy's VISTA tool. The design and development of a system for the acquisition and representation of procedural knowledge, TARGET (Task Analysis and Rule Generation Tool), is described. TARGET is intended as a tool that permits experts to visually describe procedural tasks and as a common medium for knowledge refinement by the expert and knowledge engineer. The system is designed to represent the acquired knowledge in the form of production rules. Systems such as TARGET have the potential to profoundly reduce the time, difficulties, and costs of developing knowledge-based systems for the performance of procedural tasks.

  16. Building a developmental toxicity ontology.

    PubMed

    Baker, Nancy; Boobis, Alan; Burgoon, Lyle; Carney, Edward; Currie, Richard; Fritsche, Ellen; Knudsen, Thomas; Laffont, Madeleine; Piersma, Aldert H; Poole, Alan; Schneider, Steffen; Daston, George

    2018-04-03

    As more information is generated about modes of action for developmental toxicity and more data are generated using high-throughput and high-content technologies, it is becoming necessary to organize that information. This report discusses the need for a systematic representation of knowledge about developmental toxicity (i.e., an ontology) and proposes a method to build one based on knowledge of developmental biology and mode of action/adverse outcome pathways in developmental toxicity. This report is the result of a consensus working group developing a plan to create an ontology for developmental toxicity that spans multiple levels of biological organization. This report provides a description of some of the challenges in building a developmental toxicity ontology and outlines a proposed methodology to meet those challenges. As the ontology is built on currently available web-based resources, a review of these resources is provided. Case studies on one of the most well-understood morphogens and developmental toxicants, retinoic acid, are presented as examples of how such an ontology might be developed. This report outlines an approach to constructing a developmental toxicity ontology. Such an ontology will facilitate computer-based prediction of substances likely to induce human developmental toxicity. © 2018 Wiley Periodicals, Inc.

  17. Is it time to drop the 'knowledge translation' metaphor? A critical literature review.

    PubMed

    Greenhalgh, Trisha; Wieringa, Sietse

    2011-12-01

    The literature on 'knowledge translation' presents challenges for the reviewer because different terms have been used to describe the generation, sharing and application of knowledge and different research approaches embrace different philosophical positions on what knowledge is. We present a narrative review of this literature which deliberately sought to highlight rather than resolve tensions between these different framings. Our findings suggest that while 'translation' is a widely used metaphor in medicine, it constrains how we conceptualise and study the link between knowledge and practice. The 'translation' metaphor has, arguably, led to particular difficulties in the fields of 'evidence-based management' and 'evidence-based policymaking' - where it seems that knowledge obstinately refuses to be driven unproblematically into practice. Many non-medical disciplines such as philosophy, sociology and organization science conceptualise knowledge very differently, as being (for example) 'created', 'constructed', 'embodied', 'performed' and 'collectively negotiated' - and also as being value-laden and tending to serve the vested interests of dominant élites. We propose that applying this wider range of metaphors and models would allow us to research the link between knowledge and practice in more creative and critical ways. We conclude that research should move beyond a narrow focus on the 'know-do gap' to cover a richer agenda, including: (a) the situation-specific practical wisdom (phronesis) that underpins clinical judgement; (b) the tacit knowledge that is built and shared among practitioners ('mindlines'); (c) the complex links between power and knowledge; and (d) approaches to facilitating macro-level knowledge partnerships between researchers, practitioners, policymakers and commercial interests.

  18. Knowledge and practice regarding dengue and chikungunya: a cross-sectional study among Healthcare workers and community in Northern Tanzania.

    PubMed

    Kajeguka, Debora C; Desrochers, Rachelle E; Mwangi, Rose; Mgabo, Maseke R; Alifrangis, Michael; Kavishe, Reginald A; Mosha, Franklin W; Kulkarni, Manisha A

    2017-05-01

    To investigate knowledge and prevention practices regarding dengue and chikungunya amongst community members, as well as knowledge, treatment and diagnostic practices among healthcare workers. We conducted a cross-sectional survey with 125 community members and 125 healthcare workers from 13 health facilities in six villages in the Hai district of Tanzania. A knowledge score was generated based on participant responses to a structured questionnaire, with a score of 40 or higher (of 80 and 50 total scores for community members and healthcare workers, respectively) indicating good knowledge. We also conducted a qualitative survey (n = 40) to further assess knowledge and practice regarding dengue and chikungunya fever. 15.2% (n = 19) of community members had good knowledge regarding dengue, whereas 53.6% (n = 67) of healthcare workers did. 20.3% (n = 16) of participants from lowland areas and 6.5% (n = 3) from highland areas had good knowledge of dengue (χ² = 4.25, P = 0.03). Only 2.4% (n = 3) of all participants had a good knowledge score for chikungunya. In the qualitative study, community members expressed uncertainty about dengue and chikungunya. Some healthcare workers thought that they were new diseases. There is insufficient knowledge regarding dengue and chikungunya fever among community members and healthcare workers. Health promotion activities on these diseases, based on Ecological Health Model components, should be developed to increase knowledge and improve preventive practices. © 2017 John Wiley & Sons Ltd.

  19. Effects of newspaper coverage on public knowledge about modifiable cancer risks.

    PubMed

    Stryker, Jo Ellen; Moriarty, Cortney M; Jensen, Jakob D

    2008-07-01

    This study explores the relationship between cancer newspaper coverage and public knowledge about cancer prevention, confirming self-reported associations between news exposure and cancer prevention knowledge with descriptions of newspaper coverage of modifiable cancer risks. Content analyses (N = 954) revealed that newspapers pay relatively little attention to cancer prevention. However, there is greater newspaper attention to tobacco and diet than to exercise, sun, and alcohol. Survey analysis (the National Cancer Institute's Health Information National Trends Survey) revealed that after controlling for differences based on gender, race, age, income, and education, attention to health news was significantly associated with knowledge about cancer risks associated with food and smoking but not for knowledge about exercise, sun, or alcohol. These findings conform to the findings of the content analysis data and provide a validation of a self-reported measure of media exposure, as well as evidence suggesting a threshold below which news coverage may not generate public knowledge about cancer prevention.

  20. A standard based approach for biomedical knowledge representation.

    PubMed

    Farkash, Ariel; Neuvirth, Hani; Goldschmidt, Yaara; Conti, Costanza; Rizzi, Federica; Bianchi, Stefano; Salvi, Erika; Cusi, Daniele; Shabo, Amnon

    2011-01-01

    The new generation of health information standards, where the syntax and semantics of the content is explicitly formalized, allows for interoperability in healthcare scenarios and analysis in clinical research settings. Studies involving clinical and genomic data include accumulating knowledge as relationships between genotypic and phenotypic information as well as associations within the genomic and clinical worlds. Some involve analysis results targeted at a specific disease; others are of a predictive nature specific to a patient and may be used by decision support applications. Representing knowledge is as important as representing data since data is more useful when coupled with relevant knowledge. Any further analysis and cross-research collaboration would benefit from persisting knowledge and data in a unified way. This paper describes a methodology used in Hypergenes, an EC FP7 project targeting Essential Hypertension, which captures data and knowledge using standards such as HL7 CDA and Clinical Genomics, aligned with the CEN EHR 13606 specification. We demonstrate the benefits of such an approach for clinical research as well as in healthcare oriented scenarios.

  1. Drawing on Dynamic Local Knowledge through Student-Generated Photography

    ERIC Educational Resources Information Center

    Coles-Ritchie, Marilee; Monson, Bayley; Moses, Catherine

    2015-01-01

    In this research, the authors explored how teachers using student-generated photography draw on local knowledge. The study draws on the framework of funds of knowledge to highlight the assets marginalized students bring to the classroom and the need for culturally relevant pedagogy to address the needs of a diverse public school population. The…

  2. Does the social capital in networks of “fish and fire” scientists and managers suggest learning?

    Treesearch

    A. Paige Fischer; Ken Vance-Borland; Kelly M. Burnett; Susan Hummel; Janean H. Creighton; Sherri L. Johnson; Lorien Jasny

    2014-01-01

    Patterns of social interaction influence how knowledge is generated, communicated, and applied. Theories of social capital and organizational learning suggest that interactions within disciplinary or functional groups foster communication of knowledge, whereas interactions across groups foster generation of new knowledge. We used social network analysis to examine...

  3. Knowledge Creation in Higher Education and the Nigerian Academics: Practices and Challenges

    ERIC Educational Resources Information Center

    Oku, Obianuju O.; Ike-Obioha, Benny Uzo

    2012-01-01

    Institutions of Higher Learning (Universities, Polytechnics and Colleges of Education) are prepared as centres of excellence in their tripartite role as reservoirs and transmitters of knowledge from generation to generation, the advancement of the horizons of knowledge by research, and the provision of high-level manpower. To be able to discharge…

  4. The Effects of Domain Knowledge and Instructional Manipulation on Creative Idea Generation

    ERIC Educational Resources Information Center

    Hao, Ning

    2010-01-01

    The experiment was designed to explore the effects of domain knowledge, instructional manipulation, and the interaction between them on creative idea generation. Three groups of participants who respectively possessed the domain knowledge of biology, sports, or neither were asked to finish two tasks: imagining an extraterrestrial animal and…

  5. Inference of Gene Regulatory Networks Incorporating Multi-Source Biological Knowledge via a State Space Model with L1 Regularization

    PubMed Central

    Hasegawa, Takanori; Yamaguchi, Rui; Nagasaki, Masao; Miyano, Satoru; Imoto, Seiya

    2014-01-01

    Comprehensive understanding of gene regulatory networks (GRNs) is a major challenge in the field of systems biology. Currently, there are two main approaches in GRN analysis using time-course observation data, namely an ordinary differential equation (ODE)-based approach and a statistical model-based approach. The ODE-based approach can generate complex dynamics of GRNs according to biologically validated nonlinear models. However, it cannot be applied to ten or more genes to simultaneously estimate system dynamics and regulatory relationships due to the computational difficulties. The statistical model-based approach uses highly abstract models to simply describe biological systems and to infer relationships among several hundreds of genes from the data. However, the high abstraction generates false regulations that are not permitted biologically. Thus, when dealing with several tens of genes of which the relationships are partially known, a method that can infer regulatory relationships based on a model with low abstraction and that can emulate the dynamics of ODE-based models while incorporating prior knowledge is urgently required. To accomplish this, we propose a method for inference of GRNs using a state space representation of a vector auto-regressive (VAR) model with L1 regularization. This method can estimate the dynamic behavior of genes based on linear time-series modeling constructed from an ODE-based model and can infer the regulatory structure among several tens of genes maximizing prediction ability for the observational data. Furthermore, the method is capable of incorporating various types of existing biological knowledge, e.g., drug kinetics and literature-recorded pathways. The effectiveness of the proposed method is shown through a comparison of simulation studies with several previous methods. For an application example, we evaluated mRNA expression profiles over time upon corticosteroid stimulation in rats, thus incorporating corticosteroid kinetics/dynamics, literature-recorded pathways and transcription factor (TF) information. PMID:25162401
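
    A simplified stand-in for the approach above: an L1-regularized VAR(1) model, x_t ≈ A x_{t-1}, fitted per gene, with the sparse entries of A read as candidate regulatory edges. The full method's state space representation and prior-knowledge terms are omitted, the regularization strength is arbitrary, and the data below are synthetic:

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(1)
      T, G = 40, 10                                # time points, genes
      X = rng.normal(size=(T, G))                  # stand-in expression time series

      A = np.zeros((G, G))                         # A[i, j]: influence of gene j on gene i
      for i in range(G):
          model = Lasso(alpha=0.1, max_iter=10000).fit(X[:-1], X[1:, i])
          A[i] = model.coef_                       # L1 penalty drives most entries to zero

      edges = [(j, i) for i in range(G) for j in range(G) if abs(A[i, j]) > 1e-6]
      print(f"{len(edges)} candidate regulatory edges")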

  6. The Company of Others: Generating Knowhow in Later Life

    ERIC Educational Resources Information Center

    Kimberley, Helen; Golding, Barry; Simons, Bonnie

    2016-01-01

    This paper explores some important aspects of the generation of practical knowledge through later life. It is about the relationship between knowledge generation, agency and capability, developed informally through the life experiences in and through the Company of Others. It emphasises how the everyday processes of socialisation create invaluable…

  7. Dynamic Courseware Generation on the WWW.

    ERIC Educational Resources Information Center

    Vassileva, Julita; Deters, Ralph

    1998-01-01

    The Dynamic Courseware Generator (DCG), which runs on a Web server, was developed for the authoring of adaptive computer-assisted learning courses. It generates an individual course according to the learner's goals and previous knowledge, and dynamically adapts the course according to the learner's success in knowledge acquisition. The tool may be…

  8. On the suitability and development of layout templates for analog layout reuse and layout-aware synthesis

    NASA Astrophysics Data System (ADS)

    Castro-Lopez, Rafael; Fernandez, Francisco V.; Rodriguez Vazquez, Angel

    2005-06-01

    Accelerating the synthesis of increasingly complex analog integrated circuits is key to bridging the widening gap between what we can integrate and what we can design while meeting ever-tightening time-to-market constraints. It is a well-known fact in the semiconductor industry that such a goal can only be attained by means of adequate CAD methodologies, techniques, and accompanying tools. This is particularly important in analog physical synthesis (a.k.a. layout generation), where large sensitivities of the circuit performances to the many subtle details of layout implementation (device matching, loading and coupling effects, reliability, and area features are of utmost importance to analog designers) render complete automation a truly challenging task. To approach the problem, two directions have been traditionally considered, knowledge-based and optimization-based, both with their own pros and cons. Besides, recently reported solutions oriented to speed up the overall design flow by means of reuse-based practices or by cutting off time-consuming, error-prone spins between electrical and layout synthesis (a technique known as layout-aware synthesis) rely on an outstandingly rapid yet efficient layout generation method. This paper analyses the suitability of procedural layout generation based on templates (a knowledge-based approach) by examining the requirements that both layout reuse and layout-aware solutions impose, and how layout templates face them. The ability to capture the know-how of experienced layout designers and the turnaround times for layout instancing are considered the main comparative aspects in relation to other layout generation approaches. A discussion on the benefit-cost trade-off of using layout templates is also included. In addition to this analysis, the paper delves deeper into systematic techniques to develop fully reusable layout templates for analog circuits, either for a change of the circuit sizing (i.e., layout retargeting) or a change of the fabrication process (i.e., layout migration). Several examples implemented with Cadence's Virtuoso tool suite are provided as a demonstration of the paper's contributions.

  9. Inferring Facts From Fiction: Reading Correct and Incorrect Information Affects Memory for Related Information

    PubMed Central

    Butler, Andrew C.; Dennis, Nancy A.; Marsh, Elizabeth J.

    2012-01-01

    People can acquire both true and false knowledge about the world from fictional stories (Marsh & Fazio, 2007). The present study explored whether the benefits and costs of learning about the world from fictional stories extend beyond memory for directly stated pieces of information. Of interest was whether readers would use correct and incorrect story references to make deductive inferences about related information in the story, and then integrate those inferences into their knowledge bases. Subjects read stories containing correct, neutral, and misleading references to facts about the world; each reference could be combined with another reference that occurred in a later sentence to make a deductive inference. Later, they answered general knowledge questions that tested for these deductive inferences. The results showed that subjects generated and retained the deductive inferences regardless of whether the inferences were consistent or inconsistent with world knowledge, and irrespective of whether the references were placed consecutively in the text or separated by many sentences. Readers learn more than what is directly stated in stories; they use references to the real world to make both correct and incorrect inferences that are integrated into their knowledge bases. PMID:22640369

  10. A conceptual framework for teaching research in nursing.

    PubMed

    Wright, S C D

    2005-08-01

    Though research is often referred to as the lifeblood, hallmark or cornerstone of the development of a profession (Brink, 1996:2), teaching research in nursing is a challenge. The challenge does not just lie in teaching the subject, but in the resistance and unwillingness of students to engage in it. In the experience of the researcher, registered nurses identify themselves with being a nurse and a caregiver; the role of researcher has never been internalised. The challenge is to achieve the outcome envisaged, namely, nurses who are knowledgeable consumers of research as well as continuously productive scholars in their application of nursing. Research generates knowledge, and knowledge is the basis of caring with excellence. Nursing is an art and a science, and the science must produce the knowledge upon which the art is based. The purpose of this article is to propose a conceptual framework for how to teach research in order to achieve such a successful outcome. The conceptual framework proposed in this article is based on four pillars: theoretical knowledge of research, scientific writing, psychological support and experiential learning. The importance of the research facilitator, not just as a teacher but also as a positive role model, is also described.

  11. A novel knowledge-based potential for RNA 3D structure evaluation

    NASA Astrophysics Data System (ADS)

    Yang, Yi; Gu, Qi; Zhang, Ben-Gong; Shi, Ya-Zhou; Shao, Zhi-Gang

    2018-03-01

    Ribonucleic acids (RNAs) play a vital role in biology, and knowledge of their three-dimensional (3D) structure is required to understand their biological functions. Recently structural prediction methods have been developed to address this issue, but a series of RNA 3D structures are generally predicted by most existing methods. Therefore, the evaluation of the predicted structures is generally indispensable. Although several methods have been proposed to assess RNA 3D structures, the existing methods are not precise enough. In this work, a new all-atom knowledge-based potential is developed for more accurately evaluating RNA 3D structures. The potential not only includes local and nonlocal interactions but also fully considers the specificity of each RNA by introducing a retraining mechanism. Based on extensive test sets generated from independent methods, the proposed potential correctly distinguished the native state and ranked near-native conformations to effectively select the best. Furthermore, the proposed potential precisely captured RNA structural features such as base-stacking and base-pairing. Comparisons with existing potential methods show that the proposed potential is very reliable and accurate in RNA 3D structure evaluation. Project supported by the National Science Foundation of China (Grants Nos. 11605125, 11105054, 11274124, and 11401448).
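
    The generic inverse-Boltzmann construction behind such knowledge-based potentials, u(r) = -kT ln(P_observed(r) / P_reference(r)) accumulated over atom-pair distance bins, can be sketched as follows. The data are synthetic and the binning, reference state, and kT value are illustrative; this shows the standard construction only, not the specific potential or retraining mechanism proposed here:

      import numpy as np

      BINS = np.linspace(0.0, 20.0, 41)            # 40 distance bins (angstroms, assumed range)

      def knowledge_based_potential(observed_distances, reference_distances, kT=1.0):
          """u(r) = -kT * ln(P_obs(r) / P_ref(r)) over distance bins."""
          p_obs, _ = np.histogram(observed_distances, bins=BINS, density=True)
          p_ref, _ = np.histogram(reference_distances, bins=BINS, density=True)
          eps = 1e-9                               # avoid log(0) in empty bins
          return -kT * np.log((p_obs + eps) / (p_ref + eps))

      def score_structure(pair_distances, potential):
          """Sum the potential over a structure's atom-pair distances (lower = more native-like)."""
          idx = np.clip(np.digitize(pair_distances, BINS) - 1, 0, len(potential) - 1)
          return float(np.sum(potential[idx]))

      rng = np.random.default_rng(0)
      native_like = rng.normal(6.0, 1.5, 5000)     # distances sampled from native-like structures
      reference = rng.uniform(0.0, 20.0, 5000)     # uniform reference state
      u = knowledge_based_potential(native_like, reference)
      print(score_structure(rng.normal(6.0, 1.5, 200), u))    # near-native: lower score
      print(score_structure(rng.uniform(0.0, 20.0, 200), u))  # random decoy: higher score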

  12. Structure refinement of membrane proteins via molecular dynamics simulations.

    PubMed

    Dutagaci, Bercem; Heo, Lim; Feig, Michael

    2018-07-01

    A refinement protocol based on physics-based techniques established for water-soluble proteins is tested for membrane protein structures. Initial structures were generated by homology modeling and sampled via molecular dynamics simulations in explicit lipid bilayer and aqueous solvent systems. Snapshots from the simulations were selected based on scoring with either knowledge-based or implicit membrane-based scoring functions and averaged to obtain refined models. The protocol resulted in consistent and significant refinement of the membrane protein structures, similar to the performance of refinement methods for soluble proteins. Refinement success was similar between sampling in the presence of lipid bilayers and aqueous solvent, but the presence of lipid bilayers may benefit the improvement of lipid-facing residues. Scoring with knowledge-based functions (DFIRE and RWplus) was found to be as good as scoring using implicit membrane-based scoring functions, suggesting that differences in internal packing are more important than orientations relative to the membrane during the refinement of membrane protein homology models. © 2018 Wiley Periodicals, Inc.
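
    The select-and-average step described above reduces to scoring snapshots, keeping the best-scoring subset, and averaging their coordinates. A minimal sketch with placeholder coordinates and scores (the actual protocol uses DFIRE, RWplus, or implicit membrane scores, none of which are implemented here):

      import numpy as np

      def refine(snapshots, scores, keep=5):
          """snapshots: (n_snapshots, n_atoms, 3); scores: lower is better.
          Keep the best-scoring snapshots and average them into a refined model."""
          best = np.argsort(scores)[:keep]
          return snapshots[best].mean(axis=0)

      rng = np.random.default_rng(0)
      snaps = rng.normal(size=(50, 120, 3))    # placeholder MD snapshots
      scores = rng.normal(size=50)             # placeholder scoring-function values
      refined_model = refine(snaps, scores)
      print(refined_model.shape)               # (120, 3): one averaged coordinate set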

  13. Adaptive critic neural network-based object grasping control using a three-finger gripper.

    PubMed

    Jagannathan, S; Galan, Gustavo

    2004-03-01

    Grasping of objects has been a challenging task for robots. The complex grasping task can be defined as object contact control and manipulation subtasks. In this paper, object contact control subtask is defined as the ability to follow a trajectory accurately by the fingers of a gripper. The object manipulation subtask is defined in terms of maintaining a predefined applied force by the fingers on the object. A sophisticated controller is necessary since the process of grasping an object without a priori knowledge of the object's size, texture, softness, gripper, and contact dynamics is rather difficult. Moreover, the object has to be secured accurately and considerably fast without damaging it. Since the gripper, contact dynamics, and the object properties are not typically known beforehand, an adaptive critic neural network (NN)-based hybrid position/force control scheme is introduced. The feedforward action generating NN in the adaptive critic NN controller compensates the nonlinear gripper and contact dynamics. The learning of the action generating NN is performed on-line based on a critic NN output signal. The controller ensures that a three-finger gripper tracks a desired trajectory while applying desired forces on the object for manipulation. Novel NN weight tuning updates are derived for the action generating and critic NNs so that Lyapunov-based stability analysis can be shown. Simulation results demonstrate that the proposed scheme successfully allows fingers of a gripper to secure objects without the knowledge of the underlying gripper and contact dynamics of the object compared to conventional schemes.

  14. Clinician adoption patterns and patient outcome results in use of evidence-based nursing plans of care.

    PubMed

    Kim, Tae Youn; Lang, Norma M; Berg, Karen; Weaver, Charlotte; Murphy, Judy; Ela, Sue

    2007-10-11

    Delivery of safe, effective and appropriate health care is an imperative facing health care organizations globally. While many initiatives have been launched in a number of countries to address this need from a medical perspective, a similar focus for generating evidence-based nursing knowledge has been missing. This paper reports on a collaborative evidence-based practice (EBP) research initiative that adds nursing knowledge into computerized care protocols. Here, a brief overview of the study's aims, purpose and methodology is presented as well as results of data analysis and lessons learned. The research team examined nurses' adoption patterns of EBP recommendations with respect to activity tolerance using four-month patient data collected from a pilot hospital. Study findings indicate a need for more focus on the system design and implementation process with the next rollout phase to promote evidence-based nursing practice.

  15. Clinician Adoption Patterns and Patient Outcome Results in Use of Evidence-Based Nursing Plans of Care

    PubMed Central

    Kim, Tae Youn; Lang, Norma M.; Berg, Karen; Weaver, Charlotte; Murphy, Judy; Ela, Sue

    2007-01-01

    Delivery of safe, effective and appropriate health care is an imperative facing health care organizations globally. While many initiatives have been launched in a number of countries to address this need from a medical perspective, a similar focus for generating evidence-based nursing knowledge has been missing [1]. This paper reports on a collaborative evidence-based practice (EBP) research initiative that adds nursing knowledge into computerized care protocols. Here, a brief overview of the study’s aims, purpose and methodology is presented as well as results of data analysis and lessons learned. The research team examined nurses’ adoption patterns of EBP recommendations with respect to activity tolerance using four-month patient data collected from a pilot hospital. Study findings indicate a need for more focus on the system design and implementation process with the next rollout phase to promote evidence-based nursing practice. PMID:18693871

  16. Working with Multilingual Learners and Vocabulary Knowledge for Secondary Schools: Developing Word Consciousness

    ERIC Educational Resources Information Center

    Cox, Robyn; O'Brien, Katherine; Walsh, Maureen; West, Helen

    2015-01-01

    This paper reports on a 10 week vocabulary focused intervention based on the Word Generation program (Snow, 2002, 2010; SERP, 2011) in primary and secondary schools, which demonstrated clear improvements, particularly with students who are EAL/D learners. Teachers across English, Science, Maths and Social Sciences developed professional learning…

  17. Heterogeneous Catalysis with Renewed Attention: Principles, Theories, and Concepts

    ERIC Educational Resources Information Center

    Dumeignil, Franck; Paul, Jean-Francois; Paul, Sebastien

    2017-01-01

    With the development of a strong bioeconomy sector related to the creation of next-generation biorefineries, heterogeneous catalysis is receiving renewed attention. Indeed, catalysis is at the core of biorefinery design, and many new catalysts and catalytic processes are being developed. On the one hand, they are based on knowledge acquired during…

  18. Revisiting the Media Generation: Youth Media Use and Computational Literacy Instruction

    ERIC Educational Resources Information Center

    Jenson, Jennifer; Droumeva, Milena

    2017-01-01

    An ongoing challenge of 21st century learning is ensuring everyone has the requisite skills to participate in a digital, knowledge-based economy. Once an anathema to parents and teachers, digital games are increasingly at the forefront of conversations about ways to address student engagement and provoke challenges to media pedagogies. While…

  19. Automatic Detection of Student Mental Models during Prior Knowledge Activation in MetaTutor

    ERIC Educational Resources Information Center

    Rus, Vasile; Lintean, Mihai; Azevedo, Roger

    2009-01-01

    This paper presents several methods for automatically detecting students' mental models in MetaTutor, an intelligent tutoring system that teaches students self-regulatory processes during the learning of complex science topics. In particular, we focus on detecting students' mental models based on student-generated paragraphs during prior knowledge…

  20. Understanding the social acceptability of natural resource decisionmaking processes by using a knowledge base modeling approach.

    Treesearch

    Christina Kakoyannis; Bruce Shindler; George Stankey

    2001-01-01

    Natural resource managers are being confronted with increasing conflict and litigation with those who find their management plans unacceptable. Compatible and sustainable management decisions necessitate that natural resource agencies generate plans that are not only biologically possible and economically feasible but also socially acceptable. Currently, however, we...

  1. Integration of HTML documents into an XML-based knowledge repository.

    PubMed

    Roemer, Lorrie K; Rocha, Roberto A; Del Fiol, Guilherme

    2005-01-01

    The Emergency Patient Instruction Generator (EPIG) is an electronic content compiler / viewer / editor developed by Intermountain Health Care. The content is vendor-licensed HTML patient discharge instructions. This work describes the process by which discharge instructions were converted from ASCII-encoded HTML to XML and then loaded into a database for use by EPIG.

  2. Competence-Based Self-Study System for Suggesting Study Materials Links

    ERIC Educational Resources Information Center

    Nitchot, Athitaya; Gilbert, Lester; Wills, Gary B.

    2014-01-01

    The article proposes a self-study system which suggests web links to learners. The suggestions depend upon the learner's chosen competences selected from a competence structure for a particular knowledge domain. Three experiments were conducted, where the first compared the perceived usefulness and value of the links generated by different…

  3. Personalized Surgical Risk Assessment Using Population-Based Data Analysis

    ERIC Educational Resources Information Center

    AbuSalah, Ahmad Mohammad

    2013-01-01

    The volume of information generated by healthcare providers is growing at a relatively high speed. This tremendous growth has created a gap between knowledge and clinical practice that experts say could be narrowed with the proper use of healthcare data to guide clinical decisions and tools that support rapid information availability at the…

  4. Effect of Changing Treatment Disinfectants on the Microbiology of Distributed Water and Pipe Biofilm Communities using Conventional and Metagenomic Approaches

    EPA Science Inventory

    The purpose of this research was to add to our knowledge of chlorine and monochloramine disinfectants, with regards to effects on the microbial communities in distribution systems. A whole metagenome-based approach using sophisticated molecular tools (e.g., next generation sequen...

  5. University Continuing Education: Strategies for an Uncertain Future.

    ERIC Educational Resources Information Center

    Baskett, H. K.; Hamilton, A. Bruce

    Some of the most common predictions relating to university continuing education units provide a base from which a discussion of future strategies can begin. These include the following: the Big Generation (i.e., baby boomers) is here; knowledge, not products, is the major focus of society; competition for the traditional university continuing…

  6. Infection of five Phytophthora ramorum hosts in response to increasing inoculum levels

    Treesearch

    Paul Tooley; Marsha Browning; Robert Leighty

    2013-01-01

    The objective of this work was to establish inoculum density relationships between Phytophthora ramorum and selected hosts based on whole plant inoculations. Knowledge of levels of initial inoculum needed to generate epidemics is needed for disease prediction and development of pest risk assessments. Sporangia of six P. ramorum...

  7. Generating "Good Enough" Evidence for Co-Production

    ERIC Educational Resources Information Center

    Durose, Catherine; Needham, Catherine; Mangan, Catherine; Rees, James

    2017-01-01

    Co-production is not a new concept but it is one with renewed prominence and reach in contemporary policy discourse. It refers to joint working between people or groups who have traditionally been separated into categories of user and producer. The article focuses on the co-production of public services, offering theory-based and knowledge-based…

  8. Benefits of Career Development Events as Perceived by School-Based, Agricultural Education Teachers

    ERIC Educational Resources Information Center

    Lundry, Jerrod; Ramsey, Jon W.; Edwards, M. Craig; Robinson, J. Shane

    2015-01-01

    Agriculture is the nation's largest employer with more than 24 million people working in some phase of the agricultural industry; however, the knowledge and skills needed in today's agricultural industry are lacking. Assuring future generations are agriculturally literate and taught the significance of agriculture is crucial. Systematic delivery…

  9. Examining Teacher Framing, Student Reasoning, and Student Agency in School-Based Citizen Science

    ERIC Educational Resources Information Center

    Harris, Emily Mae

    2017-01-01

    This dissertation presents three interrelated studies examining opportunities for student learning through contributory citizen science (CS), where students collect and contribute data to help generate new scientific knowledge. I draw on sociocultural perspectives of learning to analyze three cases where teachers integrated CS into school science,…

  10. Developing the next generation of forest ecosystem models

    Treesearch

    Christopher R. Schwalm; Alan R. Ek

    2002-01-01

    Forest ecology and management are model-rich areas for research. Models are often cast as either empirical or mechanistic. With evolving climate change, hybrid models gain new relevance because of their ability to integrate existing mechanistic knowledge with empiricism based on causal thinking. The utility of hybrid platforms results in the combination of...

  11. Lessons for Religious Education from Cognitive Science of Religion

    ERIC Educational Resources Information Center

    Brelsford, Theodore

    2005-01-01

    Recent work in the cognitive sciences provides new neurological/biological and evolutionary bases for understanding the construction of knowledge (in the form of sets of ideas containing functionally useful inferences) and the capacity for imagination (as the ability to run inferences and generate ideas from information) in the human mind. In…

  12. A Collaborative Learning Environment for Management Education Based on Experiential Learning

    ERIC Educational Resources Information Center

    Lidon, Ivan; Rebollar, Ruben; Moller, Charles

    2011-01-01

    In many areas of applied sciences, such as management and engineering, the generation and dissemination of theory and knowledge is increasingly woven into practice. This leaves teaching and research institutions with the challenge of developing and organising teaching activities that are effective from a student learning perspective. This paper…

  13. Methodological Reflections: Supervisory Discourses and Practice-Based Learning

    ERIC Educational Resources Information Center

    Sarja, Anneli; Janhonen, Sirpa

    2009-01-01

    The concept of dialogue is often examined apart from the social and historical context in which it is embedded. This paper identifies how dialogue between a superior and a subordinate generates a reorganisation of situated knowledge in the education and training of nurse teachers. We created an analytic method of supervisory discourse founded on…

  14. Knowledge-based graphical interfaces for presenting technical information

    NASA Technical Reports Server (NTRS)

    Feiner, Steven

    1988-01-01

    Designing effective presentations of technical information is extremely difficult and time-consuming. Moreover, the combination of increasing task complexity and declining job skills makes the need for high-quality technical presentations especially urgent. We believe that this need can ultimately be met through the development of knowledge-based graphical interfaces that can design and present technical information. Since much material is most naturally communicated through pictures, our work has stressed the importance of well-designed graphics, concentrating on generating pictures and laying out displays containing them. We describe APEX, a testbed picture generation system that creates sequences of pictures that depict the performance of simple actions in a world of 3D objects. Our system supports rules for determining automatically the objects to be shown in a picture, the style and level of detail with which they should be rendered, the method by which the action itself should be indicated, and the picture's camera specification. We then describe work on GRIDS, an experimental display layout system that addresses some of the problems in designing displays containing these pictures, determining the position and size of the material to be presented.

  15. Enhancing biomedical text summarization using semantic relation extraction.

    PubMed

    Shang, Yue; Li, Yanpeng; Lin, Hongfei; Yang, Zhihao

    2011-01-01

    Automatic text summarization for a biomedical concept can help researchers to get the key points of a certain topic from a large amount of biomedical literature efficiently. In this paper, we present a method for generating a text summary for a given biomedical concept, e.g., H1N1 disease, from multiple documents based on semantic relation extraction. Our approach includes three stages: 1) we extract semantic relations in each sentence using the semantic knowledge representation tool SemRep; 2) we develop a relation-level retrieval method to select the relations most relevant to each query concept and visualize them in a graphic representation; and 3) for relations in the relevant set, we extract informative sentences that can interpret them from the document collection to generate the text summary using an information retrieval-based method. Our major focus in this work is to investigate the contribution of semantic relation extraction to the task of biomedical text summarization. The experimental results on summarization for a set of diseases show that the introduction of semantic knowledge improves the performance and that our results are better than those of the MEAD system, a well-known tool for text summarization.
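
    The relation-level retrieval stage can be illustrated with a toy ranking function; the triples below stand in for SemRep output and the relevance score is a deliberately crude assumption, not the paper's method.

    ```python
    # Sketch of relation-level retrieval: score extracted subject-predicate-object
    # triples against the query concept and keep the most relevant ones.
    def rank_relations(relations, query_concept, top_k=3):
        """relations: list of (subject, predicate, object, sentence) tuples."""
        def relevance(rel):
            subj, pred, obj, _ = rel
            # exact mention of the query concept counts most; shared tokens count
            # less (a crude stand-in for semantic similarity)
            mention = int(query_concept in (subj, obj))
            overlap = len(set(query_concept.split()) & set((subj + " " + obj).split()))
            return 2 * mention + overlap
        return sorted(relations, key=relevance, reverse=True)[:top_k]

    triples = [
        ("H1N1 virus", "CAUSES", "influenza", "H1N1 virus causes influenza ..."),
        ("oseltamivir", "TREATS", "H1N1 virus", "Oseltamivir treats H1N1 ..."),
        ("aspirin", "TREATS", "headache", "Aspirin treats headache ..."),
    ]
    for rel in rank_relations(triples, "H1N1 virus"):
        print(rel[3])
    ```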

  16. Attitude, knowledge and behaviour towards evidence-based medicine of physical therapists, students, teachers and supervisors in the Netherlands: a survey.

    PubMed

    Scholten-Peeters, Gwendolijne G M; Beekman-Evers, Monique S; van Boxel, Annemiek C J W; van Hemert, Sjanna; Paulis, Winifred D; van der Wouden, Johannes C; Verhagen, Arianne P

    2013-08-01

    Evidence-based medicine (EBM) has gained widespread acceptance in physical therapy. However, because little is known about the attitudes, knowledge and behaviour of physical therapists towards EBM, and their participation in research to generate EBM, we explored these aspects among physical therapy students, teachers, supervisors and practising physical therapists. This is a cross-sectional survey in which participants completed a web-based questionnaire to determine their attitudes, knowledge and behaviour regarding EBM, and their participation in research. Questionnaires were sent to 814 participants of which 165 were returned. The overall mean score for attitude was 4.3 [standard deviation (SD) 1.0; range 1-7], which indicates a weak positive attitude. Teachers scored the highest (4.9, SD 1.2) and students the lowest (4.1, SD 0.8). Although most participants had some understanding of the technical terms used in EBM, only teachers felt able to explain these terms to others. Of the students, 45% rated their perceived EBM knowledge as bad and 45% as average, whereas 78% of the teachers considered that they had good knowledge. To answer clinical questions, most students generally use textbooks (96%) and the opinion of their supervisors (87.7%). There is a weak positive attitude of physical therapists, teachers, supervisors and students towards participating in research in general practice, but there is a lack of knowledge and active behaviour regarding EBM, especially among physical therapy students. © 2011 John Wiley & Sons Ltd.

  17. Exploration of Novel Materials for Development of Next Generation OPV Devices: Cooperative Research and Development Final Report, CRADA Number CRD-10-398

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olson, D.

    2012-09-01

    Organic-based solar cells offer the potential for low cost, scalable conversion of solar energy. This project aims to utilize the extensive organic synthesis capabilities of ConocoPhillips to produce novel acceptor and donor materials, as well as, potentially, interface modifiers, to produce improved OPV devices with greater efficiency and stability. The synthetic effort will be based on the knowledge base and modeling being done at NREL to identify new candidate materials.

  18. Secure, Autonomous, Intelligent Controller for Integrating Distributed Sensor Webs

    NASA Technical Reports Server (NTRS)

    Ivancic, William D.

    2007-01-01

    This paper describes the infrastructure and protocols necessary to enable near-real-time commanding, access to space-based assets, and the secure interoperation between sensor webs owned and controlled by various entities. Select terrestrial and aeronautics-based sensor webs will be used to demonstrate time-critical interoperability of integrated, intelligent sensor webs, both among terrestrial assets and between terrestrial and space-based assets. For this work, a Secure, Autonomous, Intelligent Controller and knowledge generation unit is implemented using Virtual Mission Operation Center technology.

  19. Development of Asset Fault Signatures for Prognostic and Health Management in the Nuclear Industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vivek Agarwal; Nancy J. Lybeck; Randall Bickford

    2014-06-01

    Proactive online monitoring in the nuclear industry is being explored using the Electric Power Research Institute’s Fleet-Wide Prognostic and Health Management (FW-PHM) Suite software. The FW-PHM Suite is a set of web-based diagnostic and prognostic tools and databases that serves as an integrated health monitoring architecture. The FW-PHM Suite has four main modules: Diagnostic Advisor, Asset Fault Signature (AFS) Database, Remaining Useful Life Advisor, and Remaining Useful Life Database. This paper focuses on the development of asset fault signatures to assess the health status of generator step-up transformers and emergency diesel generators in nuclear power plants. Asset fault signatures describe the distinctive features, based on technical examinations, that can be used to detect a specific fault type. At the most basic level, fault signatures are comprised of an asset type, a fault type, and a set of one or more fault features (symptoms) that are indicative of the specified fault. The AFS Database is populated with asset fault signatures via a content development exercise that is based on the results of intensive technical research and on the knowledge and experience of technical experts. The developed fault signatures capture this knowledge and implement it in a standardized approach, thereby streamlining the diagnostic and prognostic process. This will support the automation of proactive online monitoring techniques in nuclear power plants to diagnose incipient faults, perform proactive maintenance, and estimate the remaining useful life of assets.

  20. Autonomous mental development with selective attention, object perception, and knowledge representation

    NASA Astrophysics Data System (ADS)

    Ban, Sang-Woo; Lee, Minho

    2008-04-01

    Knowledge-based clustering and autonomous mental development remain high-priority research topics, for which the learning techniques of neural networks are used to achieve optimal performance. In this paper, we present a new framework that can automatically generate a relevance map from sensory data that can represent knowledge regarding objects and infer new knowledge about novel objects. The proposed model is based on an understanding of the visual 'what' pathway in our brain. A stereo saliency map model can selectively decide salient object areas by additionally considering a local symmetry feature. The incremental object perception model forms clusters for the construction of an ontology map in the color and form domains in order to perceive an arbitrary object, which is implemented by the growing fuzzy topology adaptive resonance theory (GFTART) network. Log-polar transformed color and form features for a selected object are used as inputs to the GFTART network. The clustered information is relevant for describing specific objects, and the proposed model can automatically infer an unknown object by using the learned information. Experimental results with real data have demonstrated the validity of this approach.

  1. Design, implementation, use, and preliminary evaluation of SEBASTIAN, a standards-based Web service for clinical decision support.

    PubMed

    Kawamoto, Kensaku; Lobach, David F

    2005-01-01

    Despite their demonstrated ability to improve care quality, clinical decision support systems are not widely used. In part, this limited use is due to the difficulty of sharing medical knowledge in a machine-executable format. To address this problem, we developed a decision support Web service known as SEBASTIAN. In SEBASTIAN, individual knowledge modules define the data requirements for assessing a patient, the conclusions that can be drawn using that data, and instructions on how to generate those conclusions. Using standards-based XML messages transmitted over HTTP, client decision support applications provide patient data to SEBASTIAN and receive patient-specific assessments and recommendations. SEBASTIAN has been used to implement four distinct decision support systems; an architectural overview is provided for one of these systems. Preliminary assessments indicate that SEBASTIAN fulfills all original design objectives, including the re-use of executable medical knowledge across diverse applications and care settings, the straightforward authoring of knowledge modules, and use of the framework to implement decision support applications with significant clinical utility.
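
    A hedged sketch of the interaction pattern described above, in which a client posts patient data as XML over HTTP and receives patient-specific assessments back; the endpoint path and element names are hypothetical and do not reflect SEBASTIAN's actual message schema.

    ```python
    # Sketch of an XML-over-HTTP decision support request. The URL and XML
    # element names are made up for illustration.
    import xml.etree.ElementTree as ET
    import requests  # third-party library (pip install requests)

    def request_assessment(base_url, patient_data):
        root = ET.Element("decisionSupportRequest")
        for name, value in patient_data.items():
            item = ET.SubElement(root, "dataItem", attrib={"name": name})
            item.text = str(value)
        response = requests.post(
            f"{base_url}/evaluate",                     # hypothetical endpoint
            data=ET.tostring(root, encoding="utf-8"),
            headers={"Content-Type": "application/xml"},
            timeout=10,
        )
        response.raise_for_status()
        return ET.fromstring(response.content)          # assessment elements

    # usage (against a hypothetical server):
    # result = request_assessment("https://cds.example.org", {"hba1c": 8.4})
    ```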

  2. Knowledge Acquisition and Management for the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Votava, P.; Michaelis, A.; Nemani, R. R.

    2013-12-01

    NASA Earth Exchange (NEX) is a data, computing and knowledge collaboratory that houses NASA satellite, climate and ancillary data where a focused community can come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform with access to large supercomputing resources. As more and more projects are executed on NEX, we are increasingly focusing on capturing the knowledge of the NEX users and providing mechanisms for sharing it with the community in order to facilitate reuse and accelerate research. There are many possible knowledge contributions to NEX: a wiki entry on the NEX portal contributed by a developer, information extracted from a publication in an automated way, or a workflow captured during code execution on the supercomputing platform. The goal of the NEX knowledge platform is to capture and organize this information and make it easily accessible to the NEX community and beyond. The knowledge acquisition process consists of three main facets - data and metadata, workflows and processes, and web-based information. Once the knowledge is acquired, it is processed in a number of ways ranging from custom metadata parsers to entity extraction using natural language processing techniques. The processed information is linked with existing taxonomies and aligned with an internal ontology (which heavily reuses a number of external ontologies). This forms a knowledge graph that can then be used to improve users' search query results as well as provide additional analytics capabilities to the NEX system. Such a knowledge graph will be an important building block in creating a dynamic knowledge base for the NEX community where knowledge is both generated and easily shared.
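
    The knowledge-graph idea can be illustrated with a toy triple store used for query expansion; the dataset names and predicates below are purely illustrative and are not NEX's internal ontology.

    ```python
    # Toy sketch: metadata and extracted entities are stored as triples, linked
    # to a small taxonomy, and then used to expand a user's search query.
    triples = set()

    def add(subject, predicate, obj):
        triples.add((subject, predicate, obj))

    # knowledge captured from metadata parsers / entity extraction (illustrative)
    add("MOD13Q1", "isA", "vegetation-index dataset")
    add("MOD13Q1", "derivedFrom", "MODIS")
    add("vegetation-index dataset", "relevantTo", "drought monitoring")

    def expand_query(term):
        """Return items linked directly or one hop away from a search term."""
        hits = {s for s, p, o in triples if o == term}
        related = {s for s, p, o in triples if o in hits}
        return hits | related

    print(expand_query("drought monitoring"))
    ```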

  3. Networks as Opportunities for Knowledge Creation among Professionals: How Optimized by Counsellor Educators?

    ERIC Educational Resources Information Center

    Obi, Ifeoma E.

    2012-01-01

    Knowledge creation involves the generation of new ideas, facts and insights through interaction with people to meet challenges and changes. Online and offline professional groups and networks are some of the avenues for generating new knowledge and innovation in practices. Guidance counselling is one of the areas that needs to constantly remain…

  4. An Analysis of Three Different Approaches to Student Teacher Mentoring and Their Impact on Knowledge Generation in Practicum Settings

    ERIC Educational Resources Information Center

    Mena, Juanjo; García, Marisa; Clarke, Anthony; Barkatsas, Anastasios

    2016-01-01

    Mentoring in Teacher Education is a key component in the professional development of student teachers. However, little research focuses on the knowledge shared and generated in mentoring conversations. In this paper, we explore the knowledge student teachers articulate in mentoring conversations under three different post-lesson approaches to…

  5. Entrepreneurial Regions: Do Macro-Psychological Cultural Characteristics of Regions Help Solve the “Knowledge Paradox” of Economics?

    PubMed Central

    Obschonka, Martin; Stuetzer, Michael; Gosling, Samuel D.; Rentfrow, Peter J.; Lamb, Michael E.; Potter, Jeff; Audretsch, David B.

    2015-01-01

    In recent years, modern economies have shifted away from being based on physical capital and towards being based on new knowledge (e.g., new ideas and inventions). Consequently, contemporary economic theorizing and key public policies have been based on the assumption that resources for generating knowledge (e.g., education, diversity of industries) are essential for regional economic vitality. However, policy makers and scholars have discovered that, contrary to expectations, the mere presence of, and investments in, new knowledge does not guarantee a high level of regional economic performance (e.g., high entrepreneurship rates). To date, this “knowledge paradox” has resisted resolution. We take an interdisciplinary perspective to offer a new explanation, hypothesizing that “hidden” regional culture differences serve as a crucial factor that is missing from conventional economic analyses and public policy strategies. Focusing on entrepreneurial activity, we hypothesize that the statistical relation between knowledge resources and entrepreneurial vitality (i.e., high entrepreneurship rates) in a region will depend on “hidden” regional differences in entrepreneurial culture. To capture such “hidden” regional differences, we derive measures of entrepreneurship-prone culture from two large personality datasets from the United States (N = 935,858) and Great Britain (N = 417,217). In both countries, the findings were consistent with the knowledge-culture-interaction hypothesis. A series of nine additional robustness checks underscored the robustness of these results. Naturally, these purely correlational findings cannot provide direct evidence for causal processes, but the results nonetheless yield a remarkably consistent and robust picture in the two countries. In doing so, the findings raise the idea of regional culture serving as a new causal candidate, potentially driving the knowledge paradox; such an explanation would be consistent with research on the psychological characteristics of entrepreneurs. PMID:26098674

  6. Entrepreneurial Regions: Do Macro-Psychological Cultural Characteristics of Regions Help Solve the "Knowledge Paradox" of Economics?

    PubMed

    Obschonka, Martin; Stuetzer, Michael; Gosling, Samuel D; Rentfrow, Peter J; Lamb, Michael E; Potter, Jeff; Audretsch, David B

    2015-01-01

    In recent years, modern economies have shifted away from being based on physical capital and towards being based on new knowledge (e.g., new ideas and inventions). Consequently, contemporary economic theorizing and key public policies have been based on the assumption that resources for generating knowledge (e.g., education, diversity of industries) are essential for regional economic vitality. However, policy makers and scholars have discovered that, contrary to expectations, the mere presence of, and investments in, new knowledge does not guarantee a high level of regional economic performance (e.g., high entrepreneurship rates). To date, this "knowledge paradox" has resisted resolution. We take an interdisciplinary perspective to offer a new explanation, hypothesizing that "hidden" regional culture differences serve as a crucial factor that is missing from conventional economic analyses and public policy strategies. Focusing on entrepreneurial activity, we hypothesize that the statistical relation between knowledge resources and entrepreneurial vitality (i.e., high entrepreneurship rates) in a region will depend on "hidden" regional differences in entrepreneurial culture. To capture such "hidden" regional differences, we derive measures of entrepreneurship-prone culture from two large personality datasets from the United States (N = 935,858) and Great Britain (N = 417,217). In both countries, the findings were consistent with the knowledge-culture-interaction hypothesis. A series of nine additional robustness checks underscored the robustness of these results. Naturally, these purely correlational findings cannot provide direct evidence for causal processes, but the results nonetheless yield a remarkably consistent and robust picture in the two countries. In doing so, the findings raise the idea of regional culture serving as a new causal candidate, potentially driving the knowledge paradox; such an explanation would be consistent with research on the psychological characteristics of entrepreneurs.

  7. Reflections on Wittrock's Generative Model of Learning: A Motivation Perspective

    ERIC Educational Resources Information Center

    Anderman, Eric M.

    2010-01-01

    In this article, I examine developments in research on achievement motivation and comment on how those developments are reflected in Wittrock's generative model of learning. Specifically, I focus on the roles of prior knowledge, the generation of knowledge, and beliefs about ability. Examples from Wittrock's theory and from current motivational…

  8. Automatic computation of 2D cardiac measurements from B-mode echocardiography

    NASA Astrophysics Data System (ADS)

    Park, JinHyeong; Feng, Shaolei; Zhou, S. Kevin

    2012-03-01

    We propose a robust and fully automatic algorithm which computes the 2D echocardiography measurements recommended by the American Society of Echocardiography. The algorithm employs knowledge-based imaging technologies which can learn the expert's knowledge from the training images and the expert's annotations. Based on the models constructed in the learning stage, the algorithm searches for the initial locations of the measurement landmark points by utilizing the structure of the left ventricle, including the mitral and aortic valves. It employs a pseudo-anatomic M-mode image, generated by accumulating the line images in the 2D parasternal long-axis view over time, to refine the measurement landmark points. Experimental results with a large volume of data show that the algorithm runs fast and is robust, comparable to an expert.
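
    The pseudo-anatomic M-mode construction lends itself to a short sketch: sample intensities along a fixed scan line in each frame and stack the profiles over time. The frame shape and line endpoints below are assumed values, not the paper's data.

    ```python
    # Sketch of a pseudo-anatomic M-mode image: sample the intensities along a
    # fixed line in every frame of a 2D sequence and stack them over time,
    # giving a (line-position x time) image whose bright bands can be used to
    # refine measurement landmarks.
    import numpy as np

    def pseudo_m_mode(frames, p0, p1, n_samples=128):
        """frames: (t, h, w) array; p0, p1: (row, col) endpoints of the line."""
        rows = np.linspace(p0[0], p1[0], n_samples).astype(int)
        cols = np.linspace(p0[1], p1[1], n_samples).astype(int)
        # each column of the result is one frame's intensity profile along the line
        return np.stack([f[rows, cols] for f in frames], axis=1)

    frames = np.random.rand(60, 240, 320)              # toy 60-frame loop
    m_mode = pseudo_m_mode(frames, (40, 60), (200, 260))
    print(m_mode.shape)                                # (128, 60)
    ```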

  9. Requirements analysis, domain knowledge, and design

    NASA Technical Reports Server (NTRS)

    Potts, Colin

    1988-01-01

    Two improvements to current requirements analysis practices are suggested: domain modeling, and the systematic application of analysis heuristics. Domain modeling is the representation of relevant application knowledge prior to requirements specification. Artificial intelligence techniques may eventually be applicable for domain modeling. In the short term, however, restricted domain modeling techniques, such as that in JSD, will still be of practical benefit. Analysis heuristics are standard patterns of reasoning about the requirements. They usually generate questions of clarification or issues relating to completeness. Analysis heuristics can be represented and therefore systematically applied in an issue-based framework. This is illustrated by an issue-based analysis of JSD's domain modeling and functional specification heuristics. They are discussed in the context of the preliminary design of simple embedded systems.

  10. The need and potential for building an integrated knowledge-base of the Earth-Human system

    NASA Astrophysics Data System (ADS)

    Jacobs, Clifford

    2011-03-01

    The pursuit of scientific understanding is increasingly based on interdisciplinary research. To understand more deeply the planet and its interactions requires a progressively more holistic approach, exploring knowledge coming from all scientific and engineering disciplines including but not limited to, biology, chemistry, computer sciences, geosciences, material sciences, mathematics, physics, cyberinfrastucture, and social sciences. Nowhere is such an approach more critical than in the study of global climate change in which one of the major challenges is the development of next-generation Earth System Models that include coupled and interactive representations of ecosystems, agricultural working lands and forests, urban environments, biogeochemistry, atmospheric chemistry, ocean and atmospheric currents, the water cycle, land ice, and human activities.

  11. MLM Builder: An Integrated Suite for Development and Maintenance of Arden Syntax Medical Logic Modules

    PubMed Central

    Sailors, R. Matthew

    1997-01-01

    The Arden Syntax specification for sharable computerized medical knowledge bases has not been widely utilized in the medical informatics community because of a lack of tools for developing Arden Syntax knowledge bases (Medical Logic Modules). The MLM Builder is a Microsoft Windows-hosted CASE (Computer Aided Software Engineering) tool designed to aid in the development and maintenance of Arden Syntax Medical Logic Modules (MLMs). The MLM Builder consists of the MLM Writer (an MLM generation tool), OSCAR (an anagram of Object-oriented ARden Syntax Compiler), a test database, and the MLManager (an MLM management information system). Working together, these components form a self-contained, unified development environment for the creation, testing, and maintenance of Arden Syntax Medical Logic Modules.

  12. Evaluating the Process of Generating a Clinical Trial Protocol

    PubMed Central

    Franciosi, Lui G.; Butterfield, Noam N.; MacLeod, Bernard A.

    2002-01-01

    The research protocol is the principal document in the conduct of a clinical trial. Its generation requires knowledge about the research problem, the potential experimental confounders, and the relevant Good Clinical Practices for conducting the trial. However, such information is not always available to authors during the writing process. A checklist of over 80 items has been developed to better understand the considerations made by authors in generating a protocol. It is based on the most cited requirements for designing and implementing the randomised controlled trial. Items are categorised according to the trial's research question, experimental design, statistics, ethics, and standard operating procedures. This quality assessment tool evaluates the extent that a generated protocol deviates from the best-planned clinical trial.

  13. WormBase 2014: new views of curated biology

    PubMed Central

    Harris, Todd W.; Baran, Joachim; Bieri, Tamberlyn; Cabunoc, Abigail; Chan, Juancarlos; Chen, Wen J.; Davis, Paul; Done, James; Grove, Christian; Howe, Kevin; Kishore, Ranjana; Lee, Raymond; Li, Yuling; Muller, Hans-Michael; Nakamura, Cecilia; Ozersky, Philip; Paulini, Michael; Raciti, Daniela; Schindelman, Gary; Tuli, Mary Ann; Auken, Kimberly Van; Wang, Daniel; Wang, Xiaodong; Williams, Gary; Wong, J. D.; Yook, Karen; Schedl, Tim; Hodgkin, Jonathan; Berriman, Matthew; Kersey, Paul; Spieth, John; Stein, Lincoln; Sternberg, Paul W.

    2014-01-01

    WormBase (http://www.wormbase.org/) is a highly curated resource dedicated to supporting research using the model organism Caenorhabditis elegans. With an electronic history predating the World Wide Web, WormBase contains information ranging from the sequence and phenotype of individual alleles to genome-wide studies generated using next-generation sequencing technologies. In recent years, we have expanded the contents to include data on additional nematodes of agricultural and medical significance, bringing the knowledge of C. elegans to bear on these systems and providing support for underserved research communities. Manual curation of the primary literature remains a central focus of the WormBase project, providing users with reliable, up-to-date and highly cross-linked information. In this update, we describe efforts to organize the original atomized and highly contextualized curated data into integrated syntheses of discrete biological topics. Next, we discuss our experiences coping with the vast increase in available genome sequences made possible through next-generation sequencing platforms. Finally, we describe some of the features and tools of the new WormBase Web site that help users better find and explore data of interest. PMID:24194605

  14. Adaptive interface for personalizing information seeking.

    PubMed

    Narayanan, S; Koppaka, Lavanya; Edala, Narasimha; Loritz, Don; Daley, Raymond

    2004-12-01

    An adaptive interface autonomously adjusts its display and available actions to current goals and abilities of the user by assessing user status, system task, and the context. Knowledge content adaptability is needed for knowledge acquisition and refinement tasks. In the case of knowledge content adaptability, the requirements of interface design focus on the elicitation of information from the user and the refinement of information based on patterns of interaction. In such cases, the emphasis on adaptability is on facilitating information search and knowledge discovery. In this article, we present research on adaptive interfaces that facilitates personalized information seeking from a large data warehouse. The resulting proof-of-concept system, called source recommendation system (SRS), assists users in locating and navigating data sources in the repository. Based on the initial user query and an analysis of the content of the search results, the SRS system generates a profile of the user tailored to the individual's context during information seeking. The user profiles are refined successively and are used in progressively guiding the user to the appropriate set of sources within the knowledge base. The SRS system is implemented as an Internet browser plug-in to provide a seamless and unobtrusive, personalized experience to the users during the information search process. The rationale behind our approach, system design, empirical evaluation, and implications for research on adaptive interfaces are described in this paper.
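
    A toy sketch of the profile-building and re-ranking loop described above; the term-weighting scheme, source names, and update weights are assumptions, not the SRS implementation.

    ```python
    # Sketch of profile refinement for source recommendation: build a term-weight
    # profile from the query, update it from the content of results the user
    # opens, and use it to re-rank candidate data sources.
    from collections import Counter

    def update_profile(profile, text, weight=1.0):
        for term in text.lower().split():
            profile[term] += weight
        return profile

    def rank_sources(profile, sources):
        """sources: mapping of source name -> descriptive text."""
        def score(desc):
            return sum(profile[t] for t in desc.lower().split())
        return sorted(sources, key=lambda name: score(sources[name]), reverse=True)

    profile = update_profile(Counter(), "regional sales forecasts")
    profile = update_profile(profile, "quarterly sales by region and product", 0.5)
    sources = {
        "sales_mart": "sales transactions by region product and quarter",
        "hr_warehouse": "employee records payroll and benefits",
    }
    print(rank_sources(profile, sources))
    ```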

  15. Concept of operations for knowledge discovery from Big Data across enterprise data warehouses

    NASA Astrophysics Data System (ADS)

    Sukumar, Sreenivas R.; Olama, Mohammed M.; McNair, Allen W.; Nutaro, James J.

    2013-05-01

    The success of data-driven business in government, science, and private industry is driving the need for seamless integration of intra and inter-enterprise data sources to extract knowledge nuggets in the form of correlations, trends, patterns and behaviors previously not discovered due to physical and logical separation of datasets. Today, as volume, velocity, variety and complexity of enterprise data keeps increasing, the next generation analysts are facing several challenges in the knowledge extraction process. Towards addressing these challenges, data-driven organizations that rely on the success of their analysts have to make investment decisions for sustainable data/information systems and knowledge discovery. Options that organizations are considering are newer storage/analysis architectures, better analysis machines, redesigned analysis algorithms, collaborative knowledge management tools, and query builders amongst many others. In this paper, we present a concept of operations for enabling knowledge discovery that data-driven organizations can leverage towards making their investment decisions. We base our recommendations on the experience gained from integrating multi-agency enterprise data warehouses at the Oak Ridge National Laboratory to design the foundation of future knowledge nurturing data-system architectures.

  16. Organizational culture and knowledge management in the electric power generation industry

    NASA Astrophysics Data System (ADS)

    Mayfield, Robert D.

    Scarcity of knowledge and expertise is a challenge in the electric power generation industry. Today's most pervasive knowledge issues result from employee turnover and the constant movement of employees from project to project inside organizations. To address scarcity of knowledge and expertise, organizations must enable employees to capture, transfer, and use mission-critical explicit and tacit knowledge. The purpose of this qualitative grounded theory research was to examine the relationship between and among organizations within the electric power generation industry developing knowledge management processes designed to retain, share, and use the industry, institutional, and technical knowledge upon which the organizations depend. The research findings show that knowledge management is a business problem within the domain of information systems and management. The risks associated with losing mission critical-knowledge can be measured using metrics on employee retention, recruitment, productivity, training and benchmarking. Certain enablers must be in place in order to engage people, encourage cooperation, create a knowledge-sharing culture, and, ultimately change behavior. The research revealed the following change enablers that support knowledge management strategies: (a) training - blended learning, (b) communities of practice, (c) cross-functional teams, (d) rewards and recognition programs, (e) active senior management support, (f) communication and awareness, (g) succession planning, and (h) team organizational culture.

  17. Hybrid approach for robust diagnostics of cutting tools

    NASA Astrophysics Data System (ADS)

    Ramamurthi, K.; Hough, C. L., Jr.

    1994-03-01

    A new multisensor based hybrid technique has been developed for robust diagnosis of cutting tools. The technique combines the concepts of pattern classification and real-time knowledge based systems (RTKBS) and draws upon their strengths; learning facility in the case of pattern classification and a higher level of reasoning in the case of RTKBS. It eliminates some of their major drawbacks: false alarms or delayed/lack of diagnosis in case of pattern classification and tedious knowledge base generation in case of RTKBS. It utilizes a dynamic distance classifier, developed upon a new separability criterion and a new definition of robust diagnosis for achieving these benefits. The promise of this technique has been proven concretely through an on-line diagnosis of drill wear. Its suitability for practical implementation is substantiated by the use of practical, inexpensive, machine-mounted sensors and low-cost delivery systems.
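
    One plausible reading of the distance-classifier component is a normalized nearest-class-mean classifier, sketched below with made-up drill-wear classes; it does not implement the paper's specific separability criterion or the rule layer it is combined with.

    ```python
    # Sketch of a distance-based fault classifier: each wear state is represented
    # by the mean and spread of its training feature vectors, and a new
    # multisensor sample is assigned to the nearest class in normalized terms.
    import numpy as np

    class DistanceClassifier:
        def fit(self, X, y):
            self.classes_ = sorted(set(y))
            self.means_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
            self.stds_ = {c: X[y == c].std(axis=0) + 1e-9 for c in self.classes_}
            return self

        def predict(self, x):
            d = {c: np.linalg.norm((x - self.means_[c]) / self.stds_[c])
                 for c in self.classes_}
            return min(d, key=d.get)

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
    y = np.array(["sharp"] * 50 + ["worn"] * 50)
    clf = DistanceClassifier().fit(X, y)
    print(clf.predict(np.array([2.8, 3.1, 2.9])))      # expected: "worn"
    ```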

  18. An AI-based communication system for motor and speech disabled persons: design methodology and prototype testing.

    PubMed

    Sy, B K; Deller, J R

    1989-05-01

    An intelligent communication device is developed to assist the nonverbal, motor disabled in the generation of written and spoken messages. The device is centered on a knowledge base of the grammatical rules and message elements. A "belief" reasoning scheme based on both the information from external sources and the embedded knowledge is used to optimize the process of message search. The search for the message elements is conceptualized as a path search in the language graph, and a special frame architecture is used to construct and to partition the graph. Bayesian "belief" reasoning from the Dempster-Shafer theory of evidence is augmented to cope with time-varying evidence. An "information fusion" strategy is also introduced to integrate various forms of external information. Experimental testing of the prototype system is discussed.
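
    Dempster's rule of combination, the core operation behind the belief reasoning mentioned above, can be sketched directly; the frame of discernment (candidate message elements) and mass values are toy assumptions.

    ```python
    # Sketch of Dempster's rule of combination: two mass assignments over sets of
    # candidate message elements are fused, with conflicting mass renormalized away.
    from itertools import product

    def combine(m1, m2):
        """m1, m2: dicts mapping frozensets of hypotheses to mass values."""
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
        if conflict >= 1.0:
            raise ValueError("total conflict; sources cannot be combined")
        return {k: v / (1.0 - conflict) for k, v in combined.items()}

    m_switch = {frozenset({"water", "food"}): 0.7,
                frozenset({"water", "food", "tv"}): 0.3}
    m_context = {frozenset({"water"}): 0.6,
                 frozenset({"water", "food", "tv"}): 0.4}
    print(combine(m_switch, m_context))
    ```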

  19. An Ontology-Based Approach to Incorporate User-Generated Geo-Content Into SDI

    NASA Astrophysics Data System (ADS)

    Deng, D.-P.; Lemmens, R.

    2011-08-01

    The Web is changing the way people share and communicate information because of the emergence of various Web technologies, which enable people to contribute information on the Web. User-Generated Geo-Content (UGGC) is a potential resource of geographic information. Because of the different production methods, UGGC often cannot fit into formal geographic information models; there is a semantic gap between UGGC and formal geographic information. To integrate UGGC into geographic information, this study conducts an ontology-based process to bridge this semantic gap. This ontology-based process includes five steps: Collection, Extraction, Formalization, Mapping, and Deployment. In addition, this study applies the process to Twitter messages relevant to the Japan earthquake disaster. Using this process, we extract disaster relief information from Twitter messages and develop a knowledge base for GeoSPARQL queries over disaster relief information.
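
    A small sketch of the Formalization and Deployment steps, assuming the third-party rdflib library and a made-up vocabulary; real GeoSPARQL spatial functions would additionally require a GeoSPARQL-capable store.

    ```python
    # Sketch: relief information extracted from tweets is expressed as RDF
    # triples and queried with SPARQL. The vocabulary and item names are made up.
    from rdflib import Graph, Literal, Namespace, RDF  # third-party (pip install rdflib)

    EX = Namespace("http://example.org/relief#")
    g = Graph()

    tweet = EX["tweet42"]                               # hypothetical extracted item
    g.add((tweet, RDF.type, EX.ReliefRequest))
    g.add((tweet, EX.need, Literal("drinking water")))
    g.add((tweet, EX.placeName, Literal("Sendai")))

    results = g.query("""
        PREFIX ex: <http://example.org/relief#>
        SELECT ?need ?place WHERE {
            ?r a ex:ReliefRequest ; ex:need ?need ; ex:placeName ?place .
        }
    """)
    for need, place in results:
        print(need, place)
    ```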

  20. Putting Priors in Mixture Density Mercer Kernels

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms such as different versions of EM, or numeric optimization methods such as conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
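
    One common way to realize a mixture-density kernel is sketched here under stated assumptions: fit an ensemble of Gaussian mixture models (standing in for draws from the Bayesian prior) and define the kernel as the averaged probability that two points share a mixture component. This follows the general construction described above, not the AUTOBAYES-generated code, and uses the third-party scikit-learn library.

    ```python
    # Sketch of a mixture-density kernel built from an ensemble of Gaussian
    # mixture models; K is symmetric positive semidefinite by construction
    # because each term is R @ R.T.
    import numpy as np
    from sklearn.mixture import GaussianMixture  # third-party (pip install scikit-learn)

    def mixture_density_kernel(X, n_models=5, n_components=3, seed=0):
        rng = np.random.default_rng(seed)
        K = np.zeros((len(X), len(X)))
        for _ in range(n_models):
            gmm = GaussianMixture(n_components=n_components,
                                  random_state=int(rng.integers(1_000_000))).fit(X)
            R = gmm.predict_proba(X)                 # responsibilities, shape (n, k)
            K += R @ R.T                             # component-membership agreement
        return K / n_models

    X = np.random.default_rng(1).normal(size=(40, 2))
    K = mixture_density_kernel(X)
    print(K.shape, np.allclose(K, K.T))
    ```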

  1. Model authoring system for fail safe analysis

    NASA Technical Reports Server (NTRS)

    Sikora, Scott E.

    1990-01-01

    The Model Authoring System is a prototype software application for generating fault tree analyses and failure mode and effects analyses for circuit designs. Utilizing established artificial intelligence and expert system techniques, the circuits are modeled as a frame-based knowledge base in an expert system shell, which allows the use of object-oriented programming and an inference engine. The behavior of the circuit is then captured through IF-THEN rules, which are then searched to generate either a graphical fault tree analysis or a failure mode and effects analysis. Sophisticated authoring techniques allow the circuit to be easily modeled, permit its behavior to be quickly defined, and provide abstraction features to deal with complexity.

  2. Executable medical guidelines with Arden Syntax-Applications in dermatology and obstetrics.

    PubMed

    Seitinger, Alexander; Rappelsberger, Andrea; Leitich, Harald; Binder, Michael; Adlassnig, Klaus-Peter

    2016-08-12

    Clinical decision support systems (CDSSs) are being developed to assist physicians in processing extensive data and new knowledge based on recent scientific advances. Structured medical knowledge in the form of clinical alerts or reminder rules, decision trees or tables, clinical protocols or practice guidelines, score algorithms, and others constitutes the core of CDSSs. Several medical knowledge representation and guideline languages have been developed for the formal computerized definition of such knowledge. One of these languages is Arden Syntax for Medical Logic Systems, an International Health Level Seven (HL7) standard whose development started in 1989; its latest version, 2.10, was presented in 2014. In the present report we discuss Arden Syntax as a modern medical knowledge representation and processing language, and show that this language is not only well suited to define clinical alerts, reminders, and recommendations, but can also be used to implement and process computerized medical practice guidelines. We describe how contemporary software such as Java, server software, web services, and XML is used to implement CDSSs based on Arden Syntax, with special emphasis on clinical decision support (CDS) that employs practice guidelines as its clinical knowledge base. Two guideline-based applications using Arden Syntax for medical knowledge representation and processing were developed. The first is a software platform for implementing practice guidelines from dermatology; this application employs fuzzy set theory and logic to represent linguistic and propositional uncertainty in medical data, knowledge, and conclusions. The second application implements a reminder system based on clinically published standard operating procedures in obstetrics to prevent deviations from state-of-the-art care; a to-do list with necessary actions specifically tailored to the gestational week/labor/delivery is generated. Today, with the latest versions of Arden Syntax and the application of contemporary software development methods, Arden Syntax has become a powerful and versatile medical knowledge representation and processing language, well suited to implement a large range of CDSSs, including clinical-practice-guideline-based CDSSs. Moreover, such CDS can be provided and shared as a service by different medical institutions, redefining the sharing of medical knowledge. Arden Syntax is also highly flexible and gives developers the freedom to use up-to-date software design and programming patterns for external patient data access. Copyright © 2016. Published by Elsevier B.V.
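
    For illustration only, the data/logic/action structure of a medical logic module can be mimicked in Python rather than Arden Syntax; the reminder below is a hypothetical obstetric example with assumed thresholds and field names, not one of the deployed guideline modules.

    ```python
    # Sketch of the data / logic / action slots of a medical logic module:
    # read the needed data items, evaluate the rule, and emit a recommendation.
    def gestational_reminder(patient):
        # "data" slot: what the module needs from the record (field names assumed)
        week = patient.get("gestational_week")
        gbs_screened = patient.get("gbs_screening_done", False)

        # "logic" slot: the guideline condition (week window is illustrative)
        due = week is not None and 35 <= week <= 37 and not gbs_screened

        # "action" slot: the reminder added to the clinician's to-do list
        if due:
            return ["Order group B streptococcus screening."]
        return []

    print(gestational_reminder({"gestational_week": 36}))
    ```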

  3. Research Trend Visualization by MeSH Terms from PubMed.

    PubMed

    Yang, Heyoung; Lee, Hyuck Jai

    2018-05-30

    Motivation: PubMed is a primary source of biomedical information, comprising a search tool and the biomedical literature from MEDLINE (the US National Library of Medicine's premier bibliographic database), life science journals and online books. Complementary tools to PubMed have been developed to help users search for literature and acquire knowledge. However, these tools are insufficient to overcome the users' difficulties given the proliferation of biomedical literature, and a new method is needed for searching knowledge in the biomedical field. Methods: A new method is proposed in this study for visualizing recent research trends based on the documents retrieved for a search query given by the user. The Medical Subject Headings (MeSH) are used as the primary analytical element: MeSH terms are extracted from the literature, the correlations between them are calculated, and a MeSH network, called MeSH Net, is generated as the final result based on the Pathfinder Network algorithm. Results: A case study for verification of the proposed method was carried out on a research area defined by the search query (immunotherapy and cancer and "tumor microenvironment"). The MeSH Net generated by the method is in good agreement with the actual research activities in the research area (immunotherapy). Conclusion: A prototype application generating MeSH Net was developed. The application, which could be used as a "guide map for travelers", allows users to quickly and easily acquire knowledge of research trends. The combination of PubMed and MeSH Net is expected to be an effective complementary system for researchers in the biomedical field experiencing difficulties with search and information analysis.
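
    The term-correlation step can be sketched with a simple co-occurrence count over retrieved records, followed by keeping the strongest links per term as a crude stand-in for the Pathfinder Network pruning used in the paper; the MeSH annotations below are toy data.

    ```python
    # Sketch of MeSH co-occurrence counting and a simple strongest-links filter.
    from collections import Counter
    from itertools import combinations

    records = [                                        # toy MeSH annotations
        {"Immunotherapy", "Neoplasms", "Tumor Microenvironment"},
        {"Immunotherapy", "Neoplasms", "T-Lymphocytes"},
        {"Tumor Microenvironment", "Neoplasms", "Macrophages"},
    ]

    cooc = Counter()
    for rec in records:
        for a, b in combinations(sorted(rec), 2):
            cooc[(a, b)] += 1

    def top_links(term, k=2):
        links = [(pair, n) for pair, n in cooc.items() if term in pair]
        return sorted(links, key=lambda x: x[1], reverse=True)[:k]

    print(top_links("Neoplasms"))
    ```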

  4. Using Data Crawlers and Semantic Web to Build Financial XBRL Data Generators: The SONAR Extension Approach

    PubMed Central

    Rodríguez-García, Miguel Ángel; Rodríguez-González, Alejandro; Valencia-García, Rafael; Gómez-Berbís, Juan Miguel

    2014-01-01

    Precise, reliable and real-time financial information is critical for added-value financial services after the economic turmoil from which markets are still struggling to recover. Since the Web has become the most significant data source, intelligent crawlers based on Semantic Technologies have become trailblazers in the search of knowledge combining natural language processing and ontology engineering techniques. In this paper, we present the SONAR extension approach, which will leverage the potential of knowledge representation by extracting, managing, and turning scarce and disperse financial information into well-classified, structured, and widely used XBRL format-oriented knowledge, strongly supported by a proof-of-concept implementation and a thorough evaluation of the benefits of the approach. PMID:24587726

  5. Using data crawlers and semantic Web to build financial XBRL data generators: the SONAR extension approach.

    PubMed

    Rodríguez-García, Miguel Ángel; Rodríguez-González, Alejandro; Colomo-Palacios, Ricardo; Valencia-García, Rafael; Gómez-Berbís, Juan Miguel; García-Sánchez, Francisco

    2014-01-01

    Precise, reliable and real-time financial information is critical for added-value financial services after the economic turmoil from which markets are still struggling to recover. Since the Web has become the most significant data source, intelligent crawlers based on Semantic Technologies have become trailblazers in the search of knowledge combining natural language processing and ontology engineering techniques. In this paper, we present the SONAR extension approach, which will leverage the potential of knowledge representation by extracting, managing, and turning scarce and disperse financial information into well-classified, structured, and widely used XBRL format-oriented knowledge, strongly supported by a proof-of-concept implementation and a thorough evaluation of the benefits of the approach.

  6. A History of U.S. Navy Airborne and Shipboard Periscope Detection Radar Design and Development

    DTIC Science & Technology

    2014-01-01

    military applications were originally large ground-based units designed, developed, and employed by the British for detecting inbound German aircraft...evaluation (RDT&E) and the operational employment of PDR sensors has involved a rich and proud history of military endeavor. This history is embodied in...retire from the military and civilian workforce, their knowledge base, their memory, and the lessons learned become lost to subsequent generations

  7. Geometric pre-patterning based tuning of the period doubling onset strain during thin film wrinkling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saha, Sourabh K.

    Wrinkling of supported thin films is an easy-to-implement and low-cost fabrication technique for generation of stretch-tunable periodic micro and nano-scale structures. However, the tunability of such structures is often limited by the emergence of an undesirable period doubled mode at high strains. Predictively tuning the onset strain for period doubling via existing techniques requires one to have extensive knowledge about the nonlinear pattern formation behavior. Herein, a geometric pre-patterning based technique is introduced to delay the onset of period doubling that can be implemented to predictively tune the onset strain even with limited system knowledge. The technique comprises pre-patterning the film/base bilayer with a sinusoidal pattern that has the same period as the natural wrinkle period of the system. The effectiveness of this technique has been verified via physical and computational experiments on the polydimethylsiloxane/glass bilayer system. It is observed that the period doubling onset strain can be increased from the typical value of 20% for flat films to greater than 30% with a modest pre-pattern aspect ratio (2∙amplitude/period) of 0.15. In addition, finite element simulations reveal that (i) the onset strain can be increased up to a limit by increasing the amplitude of the pre-patterns and (ii) the delaying effect can be captured entirely by the pre-pattern geometry. As a result, one can implement this technique even with limited system knowledge, such as material properties or film thickness, by simply replicating pre-existing wrinkled patterns to generate pre-patterned bilayers. Thus, geometric pre-patterning is a practical scheme to suppress period doubling that can increase the operating range of stretch-tunable wrinkle-based devices by at least 50%.
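
    A quick back-of-the-envelope check of the figures quoted above (a pre-pattern aspect ratio of 0.15 and the at-least-50% gain in operating range implied by raising the onset strain from roughly 20% to above 30%) can be sketched as follows; the amplitude and period values are arbitrary illustrative numbers, not measurements from the study.

```python
# Back-of-the-envelope check of the numbers quoted in the abstract.
period = 1.0            # natural wrinkle period (arbitrary length units)
amplitude = 0.075       # pre-pattern amplitude chosen so that 2*A/period = 0.15

aspect_ratio = 2 * amplitude / period
print(f"pre-pattern aspect ratio: {aspect_ratio:.2f}")       # 0.15

onset_flat = 0.20       # period-doubling onset strain for flat films (~20%)
onset_patterned = 0.30  # onset strain with the pre-pattern (>30% reported)
gain = (onset_patterned - onset_flat) / onset_flat
print(f"increase in single-period operating range: {gain:.0%}")  # >= 50%
```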

  8. The Indigenous Phenology Network: Engage, Observe, and Adapt to Change

    NASA Astrophysics Data System (ADS)

    Miller, B. W.; Davíd-Chavez, D. M.; Elevitch, C.; Hamilton, A.; Hatfield, S. C.; Jones, K. D.; Rabin, R.; Rosemartin, A.; Souza, M. K.; Sparrow, E.

    2017-12-01

    The Indigenous Phenology Network (IPN) is a grassroots organization whose participants are interested in understanding changes to seasonality and timing of life cycle events, and forecasting impacts to lands and species of importance to native peoples. The group focuses on building relationships, ensuring benefit to indigenous communities, and integrating indigenous and western knowledge systems. The IPN's work is guided by the Relational Doctrine, a set of principles founded on the notion that all things are connected. This multimedia presentation and dialogue will bring together IPN members and their experiences in diverse communities and landscapes facing impacts from a changing climate and extreme weather events. Impacts on water supply, vegetation, wildlife, and living conditions, and ideas for minimizing and responding to the projected impacts of continued change will be discussed in the context of multi-generational, place-based traditional knowledge and community resilience. Scalable, community-based gardens, for example, provide a sustainable source of traditional, locally grown food, most valuable in times of disaster when supplies from the outside world are unavailable. Following the concept of Victory Gardens, the model of small-scale agroforestry (VICTree Gardens - Virtually Interconnected Community Tree Gardens), being implemented in Hawaii, has the potential to provide a diverse diet of food grown in very limited space. Gardens build resilience by connecting people with each other, with local food, and with nature. We envision community-based projects which will apply local, multi-generational knowledge to adapt the gardens to changing environments. Going forward, direct observation of garden conditions can be combined with satellite and ground-based measurements of environmental conditions, such as soil moisture, soil and air temperature, precipitation, and phenology, to further assess and manage these gardens in the context of the surrounding landscape.

  9. Latino Definitions of Success: A Cultural Model of Intercultural Competence

    PubMed Central

    Torres, Lucas

    2010-01-01

    The present study sought to examine Latino intercultural competence via two separate methodologies. Phase 1 entailed discovering and generating themes regarding the features of intercultural competence based on semistructured interviews of 15 Latino adults. Phase 2 included conducting a cultural consensus analysis from the quantitative responses of 46 Latino adults to determine the cultural model of intercultural competence. The major results indicated that the participants, despite variations in socioeconomic and generational statuses, shared a common knowledge base regarding the competencies needed for Latinos to successfully navigate different cultures. Overall, the cultural model of Latino intercultural competence includes a set of skills that integrates traditional cultural values along with attributes of self-efficacy. The findings are discussed within a competence-based conceptualization of cultural adaptation and potential advancements in acculturation research. PMID:20333325

  10. System and method for integrating and accessing multiple data sources within a data warehouse architecture

    DOEpatents

    Musick, Charles R [Castro Valley, CA; Critchlow, Terence [Livermore, CA; Ganesh, Madhaven [San Jose, CA; Slezak, Tom [Livermore, CA; Fidelis, Krzysztof [Brentwood, CA

    2006-12-19

    A system and method is disclosed for integrating and accessing multiple data sources within a data warehouse architecture. The metadata formed by the present method provide a way to declaratively present domain specific knowledge, obtained by analyzing data sources, in a consistent and useable way. Four types of information are represented by the metadata: abstract concepts, databases, transformations and mappings. A mediator generator automatically generates data management computer code based on the metadata. The resulting code defines a translation library and a mediator class. The translation library provides a data representation for domain specific knowledge represented in a data warehouse, including "get" and "set" methods for attributes that call transformation methods and derive a value of an attribute if it is missing. The mediator class defines methods that take "distinguished" high-level objects as input and traverse their data structures and enter information into the data warehouse.
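
    A minimal sketch of the accessor pattern described in the abstract, in which a generated "get" method falls back to a transformation that derives a missing attribute value, might look like the following; the class, method, and attribute names are invented for illustration and are not taken from the patent.

```python
# Hypothetical sketch of the accessor pattern described above: "get" consults
# stored values first and falls back to a registered transformation that
# derives the value from other attributes if it is missing. Names invented.

class TranslatedRecord:
    """Minimal stand-in for an object produced by the mediator generator."""

    def __init__(self, **values):
        self._values = dict(values)
        self._transforms = {}          # attribute -> derivation function

    def register_transform(self, attr, func):
        self._transforms[attr] = func

    def set(self, attr, value):
        self._values[attr] = value

    def get(self, attr):
        if attr in self._values:
            return self._values[attr]
        if attr in self._transforms:   # derive the missing value on demand
            derived = self._transforms[attr](self)
            self._values[attr] = derived
            return derived
        raise KeyError(attr)


# Usage: derive a value in daltons when only kilodaltons were loaded.
rec = TranslatedRecord(mass_kda=52.3)
rec.register_transform("mass_da", lambda r: r.get("mass_kda") * 1000.0)
print(rec.get("mass_da"))  # 52300.0
```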

  11. Genetic counselors' (GC) knowledge, awareness, understanding of clinical next-generation sequencing (NGS) genomic testing.

    PubMed

    Boland, P M; Ruth, K; Matro, J M; Rainey, K L; Fang, C Y; Wong, Y N; Daly, M B; Hall, M J

    2015-12-01

    Genomic tests are increasingly complex, less expensive, and more widely available with the advent of next-generation sequencing (NGS). We assessed knowledge and perceptions among genetic counselors pertaining to NGS genomic testing via an online survey. Associations between selected characteristics and perceptions were examined. Recent education on NGS testing was common, but practical experience was limited. Perceived understanding of clinical NGS was modest, specifically concerning tumor testing. Greater perceived understanding of clinical NGS testing correlated with more time spent in cancer-related counseling, exposure to NGS testing, and NGS-focused education. Substantial disagreement about the role of counseling for tumor-based testing was seen. Finally, a majority of counselors agreed with the need for more education about clinical NGS testing, supporting this approach to optimizing implementation. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Studying biodiversity: is a new paradigm really needed?

    USGS Publications Warehouse

    Nichols, James D.; Cooch, Evan G.; Nichols, Jonathan M.; Sauer, John R.

    2012-01-01

    Authors in this journal have recommended a new approach to the conduct of biodiversity science. This data-driven approach requires the organization of large amounts of ecological data, analysis of these data to discover complex patterns, and subsequent development of hypotheses corresponding to detected patterns. This proposed new approach has been contrasted with more-traditional knowledge-based approaches in which investigators deduce consequences of competing hypotheses to be confronted with actual data, providing a basis for discriminating among the hypotheses. We note that one approach is directed at hypothesis generation, whereas the other is also focused on discriminating among competing hypotheses. Here, we argue for the importance of applying existing knowledge to the separate issues of (a) hypothesis selection and generation and (b) hypothesis discrimination and testing. In times of limited conservation funding, the relative efficiency of different approaches to learning should be an important consideration in decisions about how to study biodiversity.

  13. Expert system development for commonality analysis in space programs

    NASA Technical Reports Server (NTRS)

    Yeager, Dorian P.

    1987-01-01

    This report is a combination of foundational mathematics and software design. A mathematical model of the Commonality Analysis problem was developed and some important properties discovered. The complexity of the problem is described herein and techniques, both deterministic and heuristic, for reducing that complexity are presented. Weaknesses are pointed out in the existing software (System Commonality Analysis Tool) and several improvements are recommended. It is recommended that: (1) an expert system for guiding the design of new databases be developed; (2) a distributed knowledge base be created and maintained for the purpose of encoding the commonality relationships between design items in commonality databases; (3) a software module be produced which automatically generates commonality alternative sets from commonality databases using the knowledge associated with those databases; and (4) a more complete commonality analysis module be written which is capable of generating any type of feasible solution.

  14. Reasoning and Knowledge Acquisition Framework for 5G Network Analytics

    PubMed Central

    2017-01-01

    Autonomic self-management is a key challenge for next-generation networks. This paper proposes an automated analysis framework to infer knowledge in 5G networks with the aim to understand the network status and to predict potential situations that might disrupt the network operability. The framework is based on the Endsley situational awareness model, and integrates automated capabilities for metrics discovery, pattern recognition, prediction techniques and rule-based reasoning to infer anomalous situations in the current operational context. Those situations should then be mitigated, either proactively or reactively, by a more complex decision-making process. The framework is driven by a use case methodology, where the network administrator is able to customize the knowledge inference rules and operational parameters. The proposal has also been instantiated to prove its adaptability to a real use case. To this end, a reference network traffic dataset was used to identify suspicious patterns and to predict the behavior of the monitored data volume. The preliminary results suggest a good level of accuracy on the inference of anomalous traffic volumes based on a simple configuration. PMID:29065473
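
    As a hedged illustration of the rule-based inference step over monitored traffic volumes, the snippet below predicts the next volume with a simple moving average and flags large deviations with a configurable rule; the window, threshold, and synthetic traffic values are assumptions, not the framework's actual configuration.

```python
# Toy stand-in for the rule-based inference step: predict the next traffic
# volume with a moving average and flag anomalous deviations with a
# configurable rule, echoing the customizable inference rules in the text.
from statistics import mean

def detect_anomalies(volumes, window=5, threshold=0.5):
    """Yield (index, observed, predicted) for suspicious traffic volumes."""
    for i in range(window, len(volumes)):
        predicted = mean(volumes[i - window:i])
        observed = volumes[i]
        if abs(observed - predicted) > threshold * predicted:  # rule: >50% deviation
            yield i, observed, predicted

traffic = [100, 102, 98, 101, 99, 100, 240, 103, 97, 101]   # synthetic volumes
for i, obs, pred in detect_anomalies(traffic):
    print(f"t={i}: observed {obs}, expected ~{pred:.0f} -> anomalous")
```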

  15. A knowledge based system for scientific data visualization

    NASA Technical Reports Server (NTRS)

    Senay, Hikmet; Ignatius, Eve

    1992-01-01

    A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design, and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques having applicability in earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis and medical imaging.
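
    The design methodology outlined above (partition the data set, map partitions to visualization primitives, compose the result) can be sketched roughly as follows; the characterization rules and primitive names are simplified assumptions rather than VISTA's actual knowledge base.

```python
# Simplified sketch of the design pipeline described above: characterize each
# data attribute, map it to a visualization primitive, and combine the picks
# into a composite design. The rules below are illustrative, not VISTA's.

def characterize(name, values):
    kind = "quantitative" if all(isinstance(v, (int, float)) for v in values) else "nominal"
    return {"name": name, "kind": kind, "cardinality": len(set(values))}

def map_to_primitive(part):
    if part["kind"] == "quantitative":
        return "position"          # generally the most accurately perceived channel
    return "color_hue" if part["cardinality"] <= 7 else "shape"

def compose(dataset):
    partitions = [characterize(n, v) for n, v in dataset.items()]
    return {p["name"]: map_to_primitive(p) for p in partitions}

data = {"elevation": [10.2, 33.5, 7.1], "land_cover": ["forest", "urban", "water"]}
print(compose(data))   # e.g. {'elevation': 'position', 'land_cover': 'color_hue'}
```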

  16. Reasoning and Knowledge Acquisition Framework for 5G Network Analytics.

    PubMed

    Sotelo Monge, Marco Antonio; Maestre Vidal, Jorge; García Villalba, Luis Javier

    2017-10-21

    Autonomic self-management is a key challenge for next-generation networks. This paper proposes an automated analysis framework to infer knowledge in 5G networks with the aim to understand the network status and to predict potential situations that might disrupt the network operability. The framework is based on the Endsley situational awareness model, and integrates automated capabilities for metrics discovery, pattern recognition, prediction techniques and rule-based reasoning to infer anomalous situations in the current operational context. Those situations should then be mitigated, either proactively or reactively, by a more complex decision-making process. The framework is driven by a use case methodology, where the network administrator is able to customize the knowledge inference rules and operational parameters. The proposal has also been instantiated to prove its adaptability to a real use case. To this end, a reference network traffic dataset was used to identify suspicious patterns and to predict the behavior of the monitored data volume. The preliminary results suggest a good level of accuracy on the inference of anomalous traffic volumes based on a simple configuration.

  17. Informatics, evidence-based care, and research; implications for national policy: a report of an American Medical Informatics Association health policy conference.

    PubMed

    Bloomrosen, Meryl; Detmer, Don E

    2010-01-01

    There is an increased level of activity in the biomedical and health informatics world (e-prescribing, electronic health records, personal health records) that, in the near future, will yield a wealth of available data that we can exploit meaningfully to strengthen knowledge building and evidence creation, and ultimately improve clinical and preventive care. The American Medical Informatics Association (AMIA) 2008 Health Policy Conference was convened to focus and propel discussions about informatics-enabled evidence-based care, clinical research, and knowledge management. Conference participants explored the potential of informatics tools and technologies to improve the evidence base on which providers and patients can draw to diagnose and treat health problems. The paper presents a model of an evidence continuum that is dynamic, collaborative, and powered by health informatics technologies. The conference's findings are described, and recommendations on terminology harmonization, facilitation of the evidence continuum in a "wired" world, development and dissemination of clinical practice guidelines and other knowledge support strategies, and the role of diverse stakeholders in the generation and adoption of evidence are presented.

  18. Empirical study of fuzzy compatibility measures and aggregation operators

    NASA Astrophysics Data System (ADS)

    Cross, Valerie V.; Sudkamp, Thomas A.

    1992-02-01

    Two fundamental requirements for the generation of support using incomplete and imprecise information are the ability to measure the compatibility of discriminatory information with domain knowledge and the ability to fuse information obtained from disparate sources. A generic architecture utilizing the generalized fuzzy relational database model has been developed to empirically investigate the support generation capabilities of various compatibility measures and aggregation operators. This paper examines the effectiveness of combinations of compatibility measures from the set-theoretic, geometric distance, and logic-based classes paired with t-norm and generalized mean families of aggregation operators.
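
    For readers unfamiliar with the operator families named above, a brief sketch of a t-norm (minimum and product) and the generalized (power) mean applied to compatibility scores may help; the scores are made-up values, and the implementations are textbook forms rather than the paper's experimental setup.

```python
# Illustration of the aggregation operator families named above: t-norms
# (minimum and product) and the generalized (power) mean over compatibility
# scores in [0, 1] obtained from different sources. Scores are made up.

def t_norm_min(scores):
    out = 1.0
    for s in scores:
        out = min(out, s)
    return out

def t_norm_product(scores):
    out = 1.0
    for s in scores:
        out *= s
    return out

def generalized_mean(scores, p=1.0):
    """Power mean; p=1 arithmetic mean, p->0 geometric limit, p=-1 harmonic."""
    return (sum(s ** p for s in scores) / len(scores)) ** (1.0 / p)

compat = [0.8, 0.6, 0.9]   # compatibility of evidence with domain knowledge
print(t_norm_min(compat), t_norm_product(compat), generalized_mean(compat, p=2))
```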

  19. Embedded Process Modeling, Analogy-Based Option Generation and Analytical Graphic Interaction for Enhanced User-Computer Interaction: An Interactive Storyboard of Next Generation User-Computer Interface Technology. Phase 1

    DTIC Science & Technology

    1988-03-01

    structure of the interface is a mapping from the physical world [for example, the use of icons, which have inherent meaning to users but represent...design alternatives. Mechanisms for linking the user to the computer include physical devices (keyboards), actions taken with the devices (keystrokes... [remainder of excerpt is garbled figure text from Fig. 9, INTACVAL]

  20. The Effect of Concept Mapping with Different Levels of Generativity and Learners' Self-Regulated Learning Skills on Knowledge Acquisition and Representation

    ERIC Educational Resources Information Center

    Lim, Kyu Yon

    2008-01-01

    The purpose of this study was to investigate the effectiveness of concept mapping strategies with different levels of generativity in terms of knowledge acquisition and knowledge representation. Also, it examined whether or not learners' self-regulated learning (SRL) skills influenced the effectiveness of concept mapping strategies with different…
