Science.gov

Sample records for generation knowledge base

  1. Knowledge-based zonal grid generation for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automation of flow field zoning in two dimensions is an important step towards reducing the difficulty of three-dimensional grid generation in computational fluid dynamics. Using a knowledge-based approach makes sense, but problems arise which are caused by aspects of zoning involving perception, lack of expert consensus, and design processes. These obstacles are overcome by means of a simple shape and configuration language, a tunable zoning archetype, and a method of assembling plans from selected, predefined subplans. A demonstration system for knowledge-based two-dimensional flow field zoning has been successfully implemented and tested on representative aerodynamic configurations. The results show that this approach can produce flow field zonings that are acceptable to experts with differing evaluation criteria.

  2. Incorporating Feature-Based Annotations into Automatically Generated Knowledge Representations

    NASA Astrophysics Data System (ADS)

    Lumb, L. I.; Lederman, J. I.; Aldridge, K. D.

    2006-12-01

    Earth Science Markup Language (ESML) is efficient and effective in representing scientific data in an XML-based formalism. However, features of the data being represented are not accounted for in ESML. Such features might derive from events (e.g., a gap in data collection due to instrument servicing), identifications (e.g., a scientifically interesting area/volume in an image), or some other source. To account for features in an ESML context, we consider them from the perspective of annotation, i.e., the addition of information to existing documents without changing the originals. Although it is possible to extend ESML to incorporate feature-based annotations internally (e.g., by extending the XML schema for ESML), there are a number of complicating factors that we identify. Rather than pursuing the ESML-extension approach, we focus on an external representation for feature-based annotations via the XML Pointer Language (XPointer). In previous work (Lumb & Aldridge, HPCS 2006, IEEE, doi:10.1109/HPCS.2006.26), we showed that it is possible to extract relationships from ESML-based representations and capture the results in the Resource Description Framework (RDF). We therefore explore and report on this same requirement for XPointer-based annotations of ESML representations. As in our past efforts, the Global Geodynamics Project (GGP) allows us to illustrate this approach for introducing annotations into automatically generated knowledge representations with a real-world example.

  3. Automated knowledge generation

    NASA Technical Reports Server (NTRS)

    Myler, Harley R.; Gonzalez, Avelino J.

    1988-01-01

    The general objectives of the NASA/UCF Automated Knowledge Generation Project were the development of an intelligent software system that could access CAD design data bases, interpret them, and generate a diagnostic knowledge base in the form of a system model. The initial area of concentration is the diagnosis of a process control system using the Knowledge-based Autonomous Test Engineer (KATE) diagnostic system. A secondary objective was the study of general problems of automated knowledge generation. A prototype was developed in an object-oriented language (Flavors).
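
    The abstract does not give KATE's actual algorithms, but the core idea of deriving a diagnostic model from design data can be sketched. Below is a minimal, hypothetical illustration (the component names and the `connections` table are invented): connectivity records stand in for a CAD database, and the derived model answers which upstream components could explain an observed failure.

```python
# Illustrative sketch (not KATE itself): derive a diagnostic model from
# CAD-style connectivity records, then use it to find fault candidates.
from collections import defaultdict

# Hypothetical design records: (source component, destination component)
connections = [
    ("power_supply", "controller"),
    ("controller", "valve"),
    ("valve", "flow_sensor"),
]

def build_model(records):
    """Turn connectivity records into an upstream-dependency map."""
    upstream = defaultdict(set)
    for src, dst in records:
        upstream[dst].add(src)
    return upstream

def fault_candidates(model, failed_component):
    """All components whose failure could explain the observed failure."""
    candidates, stack = set(), [failed_component]
    while stack:
        comp = stack.pop()
        for dep in model[comp]:
            if dep not in candidates:
                candidates.add(dep)
                stack.append(dep)
    return candidates

model = build_model(connections)
print(sorted(fault_candidates(model, "flow_sensor")))
# -> ['controller', 'power_supply', 'valve']
```

    A real system would read these records from the CAD database rather than a literal list, but the transformation step, design data in, diagnostic model out, is the same shape.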

  4. Generating New Knowledge Bases in Educational Administration Professional Preparation Programs.

    ERIC Educational Resources Information Center

    Powers, P. J.

    This paper examines college and university educational administration (EDAD) professional-preparation programs and their current inertia caused by an intellectually based "war over standards" of knowledge and information. It describes how much of EDAD professional-preparation programs' approach to knowledge is largely premised in conventional…

  5. Automated knowledge acquisition for second generation knowledge base systems: A conceptual analysis and taxonomy

    SciTech Connect

    Williams, K.E.; Kotnour, T.

    1991-12-31

    In this paper, we present a conceptual analysis of knowledge-base development methodologies. The purpose of this research is to help overcome the high cost and lack of efficiency in developing knowledge base representations for artificial intelligence applications. To accomplish this purpose, we analyzed the available methodologies and developed a knowledge-base development methodology taxonomy. We review manual, machine-aided, and machine-learning methodologies. A set of developed characteristics allows description and comparison among the methodologies. We present the results of this conceptual analysis of methodologies and recommendations for development of more efficient and effective tools.

  7. Knowledge-based reasoning in the Paladin tactical decision generation system

    NASA Technical Reports Server (NTRS)

    Chappell, Alan R.

    1993-01-01

    A real-time tactical decision generation system for air combat engagements, Paladin, has been developed. A pilot's job in air combat includes tasks that are largely symbolic. These symbolic tasks are generally performed through the application of experience and training (i.e., knowledge) gathered over years of flying a fighter aircraft. Two such tasks, situation assessment and throttle control, are identified and broken out in Paladin to be handled by specialized knowledge-based systems. Knowledge pertaining to these tasks is encoded into rule bases to provide the foundation for decisions. Paladin uses a custom-built inference engine and a partitioned rule-base structure to produce these symbolic results in real time. This paper provides an overview of knowledge-based reasoning systems as a subset of rule-based systems. The knowledge used by Paladin in generating results, as well as the system design for real-time execution, is discussed.
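
    Paladin's inference engine and rules are not published in this abstract, so the following is only a hedged sketch of the general pattern it describes: a forward-chaining engine applied to a partitioned rule base, with one partition for situation assessment and one for throttle control. All rule contents here are invented.

```python
# Minimal forward-chaining engine over a partitioned rule base.
# Each rule is (tuple of condition facts, conclusion fact).
def run_partition(rules, facts):
    """Fire rules repeatedly until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and all(c in facts for c in conditions):
                facts.add(conclusion)
                changed = True
    return facts

# Two partitions, consulted independently (hypothetical rules).
situation_rules = [
    (("enemy_in_front", "closing_fast"), "offensive_situation"),
    (("offensive_situation",), "seek_firing_position"),
]
throttle_rules = [
    (("seek_firing_position", "speed_low"), "throttle_max"),
]

facts = {"enemy_in_front", "closing_fast", "speed_low"}
facts = run_partition(situation_rules, facts)
facts = run_partition(throttle_rules, facts)
print("throttle_max" in facts)  # -> True
```

    Partitioning keeps each inference pass small, which is one plausible reason a rule-base structure like this can meet real-time constraints.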

  8. Automatic generation of a metamodel from an existing knowledge base to assist the development of a new analogous knowledge base.

    PubMed

    Bouaud, J; Séroussi, B

    2002-01-01

    Knowledge acquisition is a key step in the development of knowledge-based systems, and methods have been proposed to help elicit a domain-specific task model from a generic task model. We explored how an existing validated knowledge base (KB) represented by a decision tree could be automatically processed to infer a higher-level domain-specific task model. OncoDoc is a guideline-based decision support system applied to breast cancer therapy. Assuming task identity and ontological proximity between the breast and lung cancer domains, the generalization of the breast cancer KB should allow us to build a metamodel to serve as a guide for the elaboration of a new specific KB on lung cancer. Two types of parametrized generalization methods, based on tree structure simplification and ontological abstraction, were used. We defined a similarity distance and a generalization coefficient to select the best metamodel, identified as the most generalized metamodel closest to the original decision tree. PMID:12463788

  9. Travel-time correction surface generation for the DOE Knowledge Base

    SciTech Connect

    Hipp, J.; Young, C.; Keyser, R.

    1997-08-01

    The DOE Knowledge Base data storage and access model consists of three parts: raw data processing, intermediate surface generation, and final output surface interpolation. This paper concentrates on the second step, surface generation, specifically applied to travel-time correction data. The surface generation for the intermediate step is accomplished using a modified kriging solution that provides robust error estimates for each interpolated point and satisfies many important physical requirements, including support for data points of differing quality, a user-definable range of influence for each point, blending of both interpolated values and error estimates to background values beyond those ranges, and the ability to account for the effects of geologic region boundaries. These requirements are outlined and discussed and are linked to requirements specified for the final output model in the DOE Knowledge Base. Future work will focus on testing the entire Knowledge Base model using the regional calibration data sets being gathered by researchers at Los Alamos and Lawrence Livermore National Laboratories.
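
    As a rough illustration of the requirements listed above (not the actual DOE kriging algorithm), the sketch below interpolates with per-point quality weights and a per-point range of influence, falling back to a background value where no data point has influence. All parameter names and values are assumptions.

```python
import math

# Illustrative stand-in for the paper's modified kriging: an inverse-distance
# scheme with a per-point quality weight and range of influence, returning a
# background value beyond all ranges. The real DOE scheme differs.
def interpolate(x, y, points, background=0.0):
    """points: list of (px, py, value, quality, influence_range)."""
    num = wsum = 0.0
    for px, py, value, quality, rng in points:
        d = math.hypot(x - px, y - py)
        if d >= rng:
            continue                       # outside this point's influence
        if d == 0.0:
            return value                   # exact hit on a data point
        w = quality * (1.0 - d / rng) / d  # taper to zero at the range edge
        num += w * value
        wsum += w
    if wsum == 0.0:
        return background                  # no data nearby: background value
    return num / wsum

points = [(0.0, 0.0, 2.0, 1.0, 5.0)]       # one data point at the origin
print(interpolate(1.0, 0.0, points))       # near the point -> its value, 2.0
print(interpolate(10.0, 0.0, points, background=-1.0))  # beyond range -> -1.0
```

    A kriging solution would additionally return a per-point error estimate; here the same weight sum could be reused as a crude confidence proxy.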

  10. Knowledge-based approach for generating target system specifications from a domain model

    NASA Technical Reports Server (NTRS)

    Gomaa, Hassan; Kerschberg, Larry; Sugumaran, Vijayan

    1992-01-01

    Several institutions in industry and academia are pursuing research efforts in domain modeling to address unresolved issues in software reuse. To demonstrate the concepts of domain modeling and software reuse, a prototype software engineering environment is being developed at George Mason University to support the creation of domain models and the generation of target system specifications. This prototype environment, which is application domain independent, consists of an integrated set of commercial off-the-shelf software tools and custom-developed software tools. This paper describes the knowledge-based tool that was developed as part of the environment to generate target system specifications from a domain model.

  11. Generating Topic Headings during Reading of Screen-Based Text Facilitates Learning of Structural Knowledge and Impairs Learning of Lower-Level Knowledge

    ERIC Educational Resources Information Center

    Clariana, Roy B.; Marker, Anthony W.

    2007-01-01

    This investigation considers the effects of learner-generated headings on memory. Participants (N = 63) completed a computer-based lesson with or without learner-generated text topic headings. Posttests included a cued recall test of factual knowledge and a sorting task measure of structural knowledge. A significant disordinal interaction was…

  12. Scaling up explanation generation: Large-scale knowledge bases and empirical studies

    SciTech Connect

    Lester, J.C.; Porter, B.W.

    1996-12-31

    To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited. This paper reports on a seven year effort to empirically study explanation generation from semantically rich, large-scale knowledge bases. We first describe Knight, a robust explanation system that constructs multisentential and multi-paragraph explanations from the Biology Knowledge Base, a large-scale knowledge base in the domain of botanical anatomy, physiology, and development. We then introduce the Two Panel evaluation methodology and describe how Knight's performance was assessed with this methodology in the most extensive empirical evaluation conducted on an explanation system. In this evaluation, Knight scored within "half a grade" of domain experts, and its performance exceeded that of one of the domain experts.
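
    The abstract's three-stage pipeline, content selection, discourse organization, and surface realization, can be caricatured in a few lines. This is a toy sketch over an invented mini knowledge base, not Knight or the Biology Knowledge Base.

```python
# Toy explanation pipeline in the shape the abstract describes:
# selection -> organization -> realization, over an invented mini KB.
KB = {
    "root": [("is-a", "plant organ"),
             ("function", "absorbing water"),
             ("part-of", "root system")],
}

def select(kb, concept, wanted=("is-a", "function")):
    """Content selection: keep only the relations the plan asks for."""
    return [(r, v) for r, v in kb[concept] if r in wanted]

def organize(facts):
    """Discourse planning: definitions before functional descriptions."""
    order = {"is-a": 0, "function": 1}
    return sorted(facts, key=lambda rv: order[rv[0]])

def realize(concept, facts):
    """Surface realization with simple per-relation templates."""
    templates = {"is-a": "A {c} is a {v}.",
                 "function": "Its function is {v}."}
    return " ".join(templates[r].format(c=concept, v=v) for r, v in facts)

print(realize("root", organize(select(KB, "root"))))
# -> "A root is a plant organ. Its function is absorbing water."
```

    Real systems replace each stage with substantial machinery (discourse plan libraries, grammars), but the division of labor is the same.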

  13. Foundations on Generation of Relationships Between Classes Based on Initial Business Knowledge

    NASA Astrophysics Data System (ADS)

    Nikiforova, Oksana; Pavlova, Natalya

    This chapter focuses on the development of the main component of the platform independent model (PIM) of Model Driven Architecture, i.e., the class diagram defined in the Unified Modeling Language (UML), which carries the details necessary for transformation into a platform specific model (PSM). It is important to formulate core principles for developing a well-structured class diagram at a conceptual level, using knowledge of the problem domain, which consists of two interrelated models of system aspects: business processes and concept presentation. Since the definition of relationships between classes is important for PSM generation, the chapter investigates how such relationships can be defined. The hypothesis advanced is that a class structure can be derived from initial business information. Information about the problem domain is presented in the form of a two-hemisphere model describing two interrelated parts of the most important aspects of a system, namely the business process and concept models; these serve as the source for deriving the class diagram. The capability to generate a class diagram from the two-hemisphere model is expressed as a collection of graph transformations and illustrated with examples showing how different kinds of relationships (namely aggregation, dependency, and generalization) are defined.

  14. Knowledge Base for Automatic Generation of Online IMS LD Compliant Course Structures

    ERIC Educational Resources Information Center

    Pacurar, Ecaterina Giacomini; Trigano, Philippe; Alupoaie, Sorin

    2006-01-01

    Our article presents a pedagogical scenarios-based web application that allows the automatic generation and development of pedagogical websites. These pedagogical scenarios are represented in the IMS Learning Design standard. Our application is a web portal helping teachers to dynamically generate web course structures, to edit pedagogical content…

  15. Knowledge-based design of generate-and-patch problem solvers that solve global resource assignment problems

    NASA Technical Reports Server (NTRS)

    Voigt, Kerstin

    1992-01-01

    We present MENDER, a knowledge-based system that implements software design techniques specialized to automatically compile generate-and-patch problem solvers for global resource assignment problems. We provide empirical evidence of the superior performance of generate-and-patch over generate-and-test, even with constrained generation, for a global constraint in the domain of '2D-floorplanning'. For a second constraint in '2D-floorplanning' we show that, even when it is possible to incorporate the constraint into a constrained generator, a generate-and-patch problem solver may satisfy the constraint more rapidly. We also briefly summarize how an extended version of our system applies to a constraint in the domain of 'multiprocessor scheduling'.

  16. Automated knowledge-based fuzzy models generation for weaning of patients receiving ventricular assist device (VAD) therapy.

    PubMed

    Tsipouras, Markos G; Karvounis, Evaggelos C; Tzallas, Alexandros T; Goletsis, Yorgos; Fotiadis, Dimitrios I; Adamopoulos, Stamatis; Trivella, Maria G

    2012-01-01

    The SensorART project focuses on the management of heart failure (HF) patients who are treated with implantable ventricular assist devices (VADs). This work presents the way crisp models are transformed into fuzzy ones in the weaning module, one of the core modules of the specialist's decision support system (DSS) in SensorART. The weaning module is a DSS that supports the medical expert in deciding whether to wean the patient and remove the VAD. The weaning module has been developed following a "mixture of experts" philosophy, with the experts being fuzzy knowledge-based models automatically generated from an initial crisp knowledge-based set of rules and criteria for weaning. PMID:23366361

  17. A knowledge generation model via the hypernetwork.

    PubMed

    Liu, Jian-Guo; Yang, Guang-Yong; Hu, Zhao-Long

    2014-01-01

    The influence of the statistical properties of a network on knowledge diffusion has been extensively studied. However, structure evolution and knowledge generation are processes that unfold simultaneously. By introducing the Cobb-Douglas production function and treating knowledge growth as a cooperative production of knowledge, in this paper we present two knowledge-generation dynamic evolving models based on different evolving mechanisms. The first model, named the "HDPH model," adopts the hyperedge growth and hyperdegree preferential attachment mechanisms. The second model, named the "KSPH model," adopts the hyperedge growth and knowledge stock preferential attachment mechanisms. We investigate the effect of the parameters (α,β) on the total knowledge stock of the two models. The hyperdegree distribution of the HDPH model can be analyzed theoretically by mean-field theory. The analytic result indicates that the hyperdegree distribution of the HDPH model obeys a power-law distribution with exponent γ = 2 + 1/m. Furthermore, we present the distributions of the knowledge stock for different parameters (α,β). The findings indicate that the proposed models could be helpful for a deeper understanding of scientific research cooperation.
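
    A hedged sketch of an HDPH-style growth process as described in the abstract: each step adds a new node in a hyperedge with m existing nodes chosen preferentially by hyperdegree (the number of hyperedges a node belongs to). The initial configuration and other details are assumptions, so this is illustrative only.

```python
import random

# Sketch of hyperedge growth with hyperdegree preferential attachment.
def grow_hdph(steps, m=2, seed=42):
    rng = random.Random(seed)
    hyperedges = [list(range(m + 1))]           # seed hyperedge of m+1 nodes
    hyperdegree = {i: 1 for i in range(m + 1)}
    for _ in range(steps):
        nodes = list(hyperdegree)
        weights = [hyperdegree[n] for n in nodes]
        chosen = set()
        while len(chosen) < m:                  # m distinct existing nodes,
            chosen.add(rng.choices(nodes, weights=weights)[0])  # pref. by degree
        new_node = len(hyperdegree)             # one fresh node per step
        edge = sorted(chosen) + [new_node]
        hyperedges.append(edge)
        for n in edge:
            hyperdegree[n] = hyperdegree.get(n, 0) + 1
    return hyperedges, hyperdegree

edges, deg = grow_hdph(100, m=2)
print(len(edges), len(deg))  # 101 hyperedges over 103 nodes
```

    Plotting the hyperdegree distribution of a large run of such a simulation is how one would check the mean-field prediction γ = 2 + 1/m numerically.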

  18. Knowledge Grid Based Knowledge Supply Model

    NASA Astrophysics Data System (ADS)

    Zhen, Lu; Jiang, Zuhua

    This paper is concerned with a knowledge supply model in a knowledge grid environment, intended to realize knowledge sharing globally. By integrating members, roles, and tasks in a workflow, three sorts of knowledge demands are derived. Based on this knowledge demand information, a knowledge supply model is proposed for the purpose of delivering the right knowledge to the right persons. The knowledge grid, acting as a platform for implementing knowledge supply, is also discussed, mainly from the view of knowledge space. A prototype system for knowledge supply has been implemented and applied in product development.

  19. Knowledge based programming at KSC

    NASA Technical Reports Server (NTRS)

    Tulley, J. H., Jr.; Delaune, C. I.

    1986-01-01

    Various KSC knowledge-based systems projects are discussed. The objectives of the knowledge-based automatic test equipment and Shuttle connector analysis network projects are described. It is observed that knowledge-based programs must handle factual and expert knowledge; the characteristics of these two types of knowledge are examined. Applications for the knowledge-based programming technique are considered.

  20. Medical Knowledge Bases.

    ERIC Educational Resources Information Center

    Miller, Randolph A.; Giuse, Nunzia B.

    1991-01-01

    Few commonly available, successful computer-based tools exist in medical informatics. Faculty expertise can be included in computer-based medical information systems. Computers allow dynamic recombination of knowledge to answer questions unanswerable with print textbooks. Such systems can also create stronger ties between academic and clinical…

  1. The role of textual semantic constraints in knowledge-based inference generation during reading comprehension: A computational approach.

    PubMed

    Yeari, Menahem; van den Broek, Paul

    2015-01-01

    The present research adopted a computational approach to explore the extent to which the semantic content of texts constrains the activation of knowledge-based inferences. Specifically, we examined whether textual semantic constraints (TSC) can explain (1) the activation of predictive inferences, (2) the activation of bridging inferences and (3) the higher prevalence of the activation of bridging inferences compared to predictive inferences. To examine these hypotheses, we computed the strength of semantic associations between texts and probe items as presented to human readers in previous behavioural studies, using the Latent Semantic Analysis (LSA) algorithm. We tested whether stronger semantic associations are observed for inferred items compared to control items. Our results show that in 15 out of 17 planned comparisons, the computed strength of semantic associations successfully simulated the activation of inferences. These findings suggest that TSC play a central role in the activation of knowledge-based inferences.
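
    The study computed association strengths with LSA over a large corpus; as a simplified stand-in, the sketch below measures textual semantic association as the cosine between raw term-by-document co-occurrence vectors (no SVD step), over an invented toy corpus.

```python
import math

# Simplified stand-in for the LSA computation: cosine similarity between
# raw co-occurrence vectors rather than SVD-reduced LSA vectors.
docs = ["rain storm cloud", "rain storm wind",
        "piano music note", "piano music chord"]

def term_vector(term):
    """Count of the term in each document."""
    return [d.split().count(term) for d in docs]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(term_vector("rain"), term_vector("storm")))   # high: shared contexts
print(cosine(term_vector("rain"), term_vector("piano")))   # zero: disjoint contexts
```

    In the actual methodology the vectors come from an SVD-reduced space trained on a large corpus, and the probe-item similarity plays the role of the predicted inference activation.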

  2. Geospatial Standards and the Knowledge Generation Lifecycle

    NASA Technical Reports Server (NTRS)

    Khalsa, Siri Jodha S.; Ramachandran, Rahul

    2014-01-01

    Standards play an essential role at each stage in the sequence of processes by which knowledge is generated from geoscience observations, simulations and analysis. This paper provides an introduction to the field of informatics and the knowledge generation lifecycle in the context of the geosciences. In addition we discuss how the newly formed Earth Science Informatics Technical Committee is helping to advance the application of standards and best practices to make data and data systems more usable and interoperable.

  3. Health Knowledge Among the Millennial Generation

    PubMed Central

    Lloyd, Tom; Shaffer, Michele L.; Stetter, Christy; Widome, Mark D.; Repke, John; Weitekamp, Michael R.; Eslinger, Paul J.; Bargainnier, Sandra S.; Paul, Ian M.

    2013-01-01

    The Millennial Generation, also known as Generation Y, is the demographic cohort following Generation X, and is generally regarded to be composed of those individuals born between 1980 and 2000. They are the first to grow up in an environment where health-related information is widely available via the internet, TV and other electronic media, yet we know very little about the scope of their health knowledge. This study was undertaken to quantify two domains of clinically relevant health knowledge: factual content and the ability to solve health-related questions (application) in nine clinically related medical areas. Study subjects correctly answered, on average, 75% of health application questions but only 54% of health content questions. Since students were better able to correctly answer questions dealing with applications compared to those on factual content, contemporary US high school students may not use traditional hierarchical learning models in the acquisition of their health knowledge. PMID:25170479

  4. Generating tsunami risk knowledge at community level as a base for planning and implementation of risk reduction strategies

    NASA Astrophysics Data System (ADS)

    Wegscheider, S.; Post, J.; Zosseder, K.; Mück, M.; Strunz, G.; Riedlinger, T.; Muhari, A.; Anwar, H. Z.

    2011-02-01

    More than 4 million Indonesians live in tsunami-prone areas along the southern and western coasts of Sumatra, Java and Bali. Although a Tsunami Early Warning Center in Jakarta now exists, installed after the devastating 2004 tsunami, it is essential to develop tsunami risk knowledge within the exposed communities as a basis for tsunami disaster management. These communities need to implement risk reduction strategies to mitigate potential consequences. The major aims of this paper are to present a risk assessment methodology which (1) identifies areas of high tsunami risk in terms of potential loss of life, (2) bridges the gaps between research and practical application, and (3) can be implemented at community level. High risk areas have a great need for action to improve people's response capabilities towards a disaster, thus reducing the risk. The methodology developed here is based on a GIS approach and combines hazard probability, hazard intensity, population density and people's response capability to assess the risk. Within the framework of the GITEWS (German-Indonesian Tsunami Early Warning System) project, the methodology was applied to three pilot areas, one of which is southern Bali. Bali's tourism is concentrated for a great part in the communities of Kuta, Legian and Seminyak. Here alone, about 20 000 people live in high and very high tsunami risk areas. The development of risk reduction strategies is therefore of significant interest. A risk map produced for the study area in Bali can be used for local planning activities and the development of risk reduction strategies.
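
    The abstract does not disclose the GITEWS weighting scheme, but the kind of per-cell GIS combination it describes, hazard probability, hazard intensity, population density, and response capability, can be sketched. The multiplicative form, thresholds, and the example cell below are all assumptions.

```python
# Hedged sketch of a per-cell risk combination; the actual GITEWS scheme
# is not given in the abstract, so this multiplicative form is invented.
def cell_risk(hazard_prob, hazard_intensity, pop_density, response_capability):
    """Expected-loss-style score: higher capability lowers the risk."""
    return hazard_prob * hazard_intensity * pop_density * (1.0 - response_capability)

def classify(risk, high=10.0, very_high=50.0):
    """Bin a raw score into map classes (thresholds are invented)."""
    if risk >= very_high:
        return "very high"
    if risk >= high:
        return "high"
    return "moderate or low"

# One hypothetical coastal cell: likely inundation, dense population,
# weak response capability.
r = cell_risk(hazard_prob=0.8, hazard_intensity=0.9, pop_density=120.0,
              response_capability=0.3)
print(classify(r))  # -> "very high"
```

    Applied cell by cell over a raster, a classification like this yields exactly the kind of risk map described for Kuta, Legian and Seminyak.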

  5. Knowledge Generation Model for Visual Analytics.

    PubMed

    Sacha, Dominik; Stoffel, Andreas; Stoffel, Florian; Kwon, Bum Chul; Ellis, Geoffrey; Keim, Daniel A

    2014-12-01

    Visual analytics enables us to analyze huge information spaces in order to support complex decision making and data exploration. Humans play a central role in generating knowledge from the snippets of evidence emerging from visual data analysis. Although prior research provides frameworks that generalize this process, their scope is often narrowly focused so they do not encompass different perspectives at different levels. This paper proposes a knowledge generation model for visual analytics that ties together these diverse frameworks, yet retains previously developed models (e.g., KDD process) to describe individual segments of the overall visual analytic processes. To test its utility, a real world visual analytics system is compared against the model, demonstrating that the knowledge generation process model provides a useful guideline when developing and evaluating such systems. The model is used to effectively compare different data analysis systems. Furthermore, the model provides a common language and description of visual analytic processes, which can be used for communication between researchers. Finally, the model reflects areas of research that future researchers can embark on. PMID:26356874

  7. Generating tsunami risk knowledge at community level as a base for planning and implementation of risk reduction strategies

    NASA Astrophysics Data System (ADS)

    Wegscheider, Stephanie; Post, Joachim; Mück, Matthias; Zosseder, Kai; Muhari, Abdul; Anwar, Herryal Z.; Gebert, Niklas; Strunz, Günter; Riedlinger, Torsten

    2010-05-01

    More than 4 million Indonesians live in tsunami-prone areas on the southern and western coasts of Sumatra, Java and Bali. Depending on the location of the tsunamigenic earthquake, in many cases the time to reach a tsunami-safe area is as short as 15 or 20 minutes. To increase the chances of a successful evacuation, comprehensive and thorough planning and preparation are necessary. For this purpose, detailed knowledge of potential hazard impact and safe areas, exposed elements such as people, critical facilities and lifelines, deficiencies in response capabilities, and evacuation routes is crucial. The major aims of this paper are (i) to assess and quantify people's response capabilities and (ii) to identify high-risk areas with a great need of action to improve response capabilities and thus reduce the risk. The major factor influencing people's ability to evacuate successfully is time. The estimated time of arrival of a tsunami at the coast, which determines the overall time available for evacuation after a tsunami is triggered, can be derived by analyzing modeled tsunami scenarios for the respective area. In most cases, however, this available time frame is diminished by other time components, including the time until natural or technical warning signs are received and the time until reaction follows a warning (understanding the warning and deciding to take appropriate action). For the time to receive a warning, we assume that the early warning centre is able to fulfil the Indonesian presidential decree to issue a warning within 5 minutes. Reaction time is difficult to quantify, as human factors such as educational level, beliefs, tsunami knowledge and experience play a role. Although we are aware of the great importance of this factor and of minimizing the reaction time, it is not considered in this paper. Quantifying the needed evacuation time is based on a GIS approach. This approach is relatively simple and enables local
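
    The GIS-based evacuation-time computation alluded to above can be sketched as a shortest-path problem on a raster: each cell's travel time to the nearest safe cell, via Dijkstra's algorithm. The grid, cell size, and walking speed below are invented for illustration.

```python
import heapq

# Sketch of per-cell evacuation time on a raster via multi-source Dijkstra.
def evacuation_times(walkable, safe_cells, cell_m=10.0, speed_ms=1.0):
    """walkable: 2D list of booleans; returns seconds-to-safety per cell."""
    rows, cols = len(walkable), len(walkable[0])
    step = cell_m / speed_ms                     # seconds per 4-neighbour move
    times = [[float("inf")] * cols for _ in range(rows)]
    heap = [(0.0, r, c) for r, c in safe_cells]
    for _, r, c in heap:
        times[r][c] = 0.0                        # safe cells need no travel
    heapq.heapify(heap)
    while heap:
        t, r, c = heapq.heappop(heap)
        if t > times[r][c]:
            continue                             # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and walkable[nr][nc]:
                if t + step < times[nr][nc]:
                    times[nr][nc] = t + step
                    heapq.heappush(heap, (t + step, nr, nc))
    return times

grid = [[True] * 3 for _ in range(3)]
print(evacuation_times(grid, [(0, 0)])[2][2])    # 4 moves x 10 s -> 40.0
```

    Cells whose computed time exceeds the tsunami arrival time minus warning and reaction time are the ones needing improved routes or shelters.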

  9. Reflexive Professionalism as a Second Generation of Evidence-Based Practice: Some Considerations on the Special Issue "What Works? Modernizing the Knowledge-Base of Social Work"

    ERIC Educational Resources Information Center

    Otto, Hans-Uwe; Polutta, Andreas; Ziegler, Holger

    2009-01-01

    This article refers sympathetically to the thoughtful debates and positions in the "Research on Social Work Practice" ("RSWP"; Special Issue, July, 2008 issue) on "What Works? Modernizing the Knowledge-Base of Social Work." It highlights the need for empirical efficacy and effectiveness research in social work and appreciates empirical rigor…

  10. Foundation: Transforming data bases into knowledge bases

    NASA Technical Reports Server (NTRS)

    Purves, R. B.; Carnes, James R.; Cutts, Dannie E.

    1987-01-01

    One approach to transforming information stored in relational data bases into knowledge based representations and back again is described. This system, called Foundation, allows knowledge bases to take advantage of vast amounts of pre-existing data. A benefit of this approach is inspection, and even population, of data bases through an intelligent knowledge-based front-end.

  11. Need to Knowledge (NtK) Model: an evidence-based framework for generating technological innovations with socio-economic impacts

    PubMed Central

    2013-01-01

    Background Traditional government policies suggest that upstream investment in scientific research is necessary and sufficient to generate technological innovations. The expected downstream beneficial socio-economic impacts are presumed to occur through non-government market mechanisms. However, there is little quantitative evidence for such a direct and formulaic relationship between public investment at the input end and marketplace benefits at the impact end. Instead, the literature demonstrates that the technological innovation process involves a complex interaction between multiple sectors, methods, and stakeholders. Discussion The authors theorize that accomplishing the full process of technological innovation in a deliberate and systematic manner requires an operational-level model encompassing three underlying methods, each designed to generate knowledge outputs in different states: scientific research generates conceptual discoveries; engineering development generates prototype inventions; and industrial production generates commercial innovations. Given the critical roles of engineering and business, the entire innovation process should continuously consider the practical requirements and constraints of the commercial marketplace. The Need to Knowledge (NtK) Model encompasses the activities required to successfully generate innovations, along with associated strategies for effectively communicating knowledge outputs in all three states to the various stakeholders involved. It is intentionally grounded in evidence drawn from academic analysis to facilitate objective and quantitative scrutiny, and industry best practices to enable practical application. Summary The Need to Knowledge (NtK) Model offers a practical, market-oriented approach that avoids the gaps, constraints and inefficiencies inherent in undirected activities and disconnected sectors. 
The NtK Model is a means to realizing increased returns on public investments in those science and technology

  12. Knowledge-based nursing diagnosis

    NASA Astrophysics Data System (ADS)

    Roy, Claudette; Hay, D. Robert

    1991-03-01

    Nursing diagnosis is an integral part of the nursing process and determines the interventions leading to outcomes for which the nurse is accountable. Diagnosis under the time constraints of modern nursing can benefit from computer assistance. A knowledge-based engineering approach was developed to address these problems. The design issues that had to be addressed to make the system practical extended beyond the capture of knowledge. The issues involved in implementing a professional knowledge base in a clinical setting are discussed. System functions, structure, interfaces, the health care environment, and terminology and taxonomy are discussed. An integrated system concept from assessment through intervention and evaluation is outlined.

  13. Reusing Design Knowledge Based on Design Cases and Knowledge Map

    ERIC Educational Resources Information Center

    Yang, Cheng; Liu, Zheng; Wang, Haobai; Shen, Jiaoqi

    2013-01-01

    Design knowledge was reused for innovative design work to support designers with product design knowledge and help designers who lack rich experiences to improve their design capacity and efficiency. First, based on the ontological model of product design knowledge constructed by taxonomy, implicit and explicit knowledge was extracted from some…

  14. Knowledge-based flow field zoning

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automation of flow field zoning in two dimensions is an important step towards easing the three-dimensional grid generation bottleneck in computational fluid dynamics. A knowledge-based approach works well, but certain aspects of flow field zoning make the use of such an approach challenging. A knowledge-based flow field zoner, called EZGrid, was implemented and tested on representative two-dimensional aerodynamic configurations. Results are shown which illustrate the way in which EZGrid incorporates the effects of physics, shape description, position, and user bias in flow field zoning.

  15. Expert and Knowledge Based Systems.

    ERIC Educational Resources Information Center

    Demaid, Adrian; Edwards, Lyndon

    1987-01-01

    Discusses the nature and current state of knowledge-based systems and expert systems. Describes an expert system from the viewpoints of a computer programmer and an applications expert. Addresses concerns related to materials selection and forecasts future developments in the teaching of materials engineering. (ML)

  16. Population Education: A Knowledge Base.

    ERIC Educational Resources Information Center

    Jacobson, Willard J.

    To aid junior high and high school educators and curriculum planners as they develop population education programs, the book provides an overview of the population education knowledge base. In addition, it suggests learning activities, discussion questions, and background information which can be integrated into courses dealing with population,…

  17. Epistemology of knowledge based simulation

    SciTech Connect

    Reddy, R.

    1987-04-01

    Combining artificial intelligence concepts with traditional simulation methodologies yields a powerful design support tool known as knowledge-based simulation. This approach turns a descriptive simulation tool into a prescriptive one, which recommends specific goals. Much work in the area of general goal processing and explanation of recommendations remains to be done.

  18. Case-Based Tutoring from a Medical Knowledge Base

    PubMed Central

    Chin, Homer L.

    1988-01-01

    The past decade has seen the emergence of programs that make use of large knowledge bases to assist physicians in diagnosis within the general field of internal medicine. One such program, Internist-I, contains knowledge about over 600 diseases, covering a significant proportion of internal medicine. This paper describes the process of converting a subset of this knowledge base--in the area of cardiovascular diseases--into a probabilistic format, and the use of this resulting knowledge base to teach medical diagnostic knowledge. The system (called KBSimulator--for Knowledge-Based patient Simulator) generates simulated patient cases and uses these cases as a focal point from which to teach medical knowledge. It interacts with the student in a mixed-initiative fashion, presenting patients for the student to diagnose, and allowing the student to obtain further information on his/her own initiative in the context of that patient case. The system scores the student, and uses these scores to form a rudimentary model of the student. This resulting model of the student is then used to direct the generation of subsequent patient cases. This project demonstrates the feasibility of building an intelligent, flexible instructional system that uses a knowledge base constructed primarily for medical diagnosis.
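
    The case-generation and scoring loop this abstract describes can be sketched minimally; the disease names, probabilities, and function names below are invented for illustration and are not the actual Internist-I knowledge base.

```python
import random

# Illustrative sketch of generating a simulated patient case from a
# probabilistic disease -> finding knowledge base (contents hypothetical).
KB = {
    "aortic stenosis": {"systolic murmur": 0.9, "syncope": 0.4, "dyspnea": 0.6},
    "mitral regurgitation": {"systolic murmur": 0.8, "dyspnea": 0.5, "fatigue": 0.4},
}

def simulate_patient(disease, rng):
    """Sample the set of findings a simulated patient presents with."""
    return {f for f, p in KB[disease].items() if rng.random() < p}

def score_diagnosis(true_disease, student_answer):
    """Rudimentary student score used to drive later case selection."""
    return 1.0 if student_answer == true_disease else 0.0

rng = random.Random(42)  # fixed seed so the sketch is reproducible
case = simulate_patient("aortic stenosis", rng)
```

In the described system, accumulated scores form a student model that biases which diseases the next simulated cases are drawn from.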

  19. Automated knowledge-base refinement

    NASA Technical Reports Server (NTRS)

    Mooney, Raymond J.

    1994-01-01

    Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.

  20. New Proposals for Generating and Exploiting Solution-Oriented Knowledge

    ERIC Educational Resources Information Center

    Gredig, Daniel; Sommerfeld, Peter

    2008-01-01

    The claim that professional social work should be based on scientific knowledge is many decades old with knowledge transfer usually moving in the direction from science to practice. The authors critique this model of knowledge transfer and support a hybrid one that places more of an emphasis on professional knowledge and action occurring in the…

  1. Knowledge based jet engine diagnostics

    NASA Technical Reports Server (NTRS)

    Jellison, Timothy G.; Dehoff, Ronald L.

    1987-01-01

    A fielded expert system automates equipment fault isolation and recommends corrective maintenance action for Air Force jet engines. The knowledge based diagnostics tool was developed as an expert system interface to the Comprehensive Engine Management System, Increment IV (CEMS IV), the standard Air Force base level maintenance decision support system. XMAM (trademark), the Expert Maintenance Tool, automates procedures for troubleshooting equipment faults, provides a facility for interactive user training, and fits within a diagnostics information feedback loop to improve the troubleshooting and equipment maintenance processes. The application of expert diagnostics to the Air Force A-10A aircraft TF-34 engine equipped with the Turbine Engine Monitoring System (TEMS) is presented.

  2. Generative and Item-Specific Knowledge of Language

    ERIC Educational Resources Information Center

    Morgan, Emily Ida Popper

    2016-01-01

    The ability to generate novel utterances compositionally using generative knowledge is a hallmark property of human language. At the same time, languages contain non-compositional or idiosyncratic items, such as irregular verbs, idioms, etc. This dissertation asks how and why language achieves a balance between these two systems--generative and…

  3. Cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward A.; Buchanan, Bruce G.

    1988-01-01

    This final report covers work performed under Contract NCC2-220 between NASA Ames Research Center and the Knowledge Systems Laboratory, Stanford University. The period of research was from March 1, 1987 to February 29, 1988. Topics covered were as follows: (1) concurrent architectures for knowledge-based systems; (2) methods for the solution of geometric constraint satisfaction problems, and (3) reasoning under uncertainty. The research in concurrent architectures was co-funded by DARPA, as part of that agency's Strategic Computing Program. The research has been in progress since 1985, under DARPA and NASA sponsorship. The research in geometric constraint satisfaction has been done in the context of a particular application, that of determining the 3-D structure of complex protein molecules, using the constraints inferred from NMR measurements.

  4. Drawing on Dynamic Local Knowledge through Student-Generated Photography

    ERIC Educational Resources Information Center

    Coles-Ritchie, Marilee; Monson, Bayley; Moses, Catherine

    2015-01-01

    In this research, the authors explored how teachers using student-generated photography draw on local knowledge. The study draws on the framework of funds of knowledge to highlight the assets marginalized students bring to the classroom and the need for culturally relevant pedagogy to address the needs of a diverse public school population. The…

  5. A Property Restriction Based Knowledge Merging Method

    NASA Astrophysics Data System (ADS)

    Che, Haiyan; Chen, Wei; Feng, Tie; Zhang, Jiachen

    Merging new instance knowledge, extracted from the Web according to a domain ontology, into the knowledge base (KB for short) is essential for knowledge management and must be done carefully, since it may introduce redundant or contradictory knowledge; the quality of the knowledge in the KB, on which a knowledge-based system depends to provide users with high-quality services, will suffer from such "bad" knowledge. This paper advocates a property-restriction-based knowledge merging method that can identify equivalent instances and redundant or contradictory knowledge according to the property restrictions defined in the domain ontology, consolidate the knowledge about equivalent instances, and discard the redundancy and conflicts to keep the KB compact and consistent. This merging method has been used in a semantic search engine project, CRAB, and the results are satisfactory.
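
    The idea of using property restrictions to detect equivalent, redundant, or contradictory instances can be sketched as below. The property names and restriction sets are hypothetical stand-ins for what a domain ontology would declare (e.g. OWL inverse-functional and functional properties), not the method's actual data model.

```python
# Minimal sketch of property-restriction-based merging (names hypothetical).
# An inverse-functional property (e.g. "isbn") identifies equivalent
# instances; a functional property (e.g. "publicationYear") may hold only
# one value, so disagreement signals contradictory knowledge.

INVERSE_FUNCTIONAL = {"isbn"}
FUNCTIONAL = {"publicationYear"}

def merge_instance(kb, new):
    """Merge `new` into `kb` (a list of dicts); return (kb, conflicts)."""
    for inst in kb:
        if any(p in new and inst.get(p) == new.get(p) for p in INVERSE_FUNCTIONAL):
            conflicts = [p for p in FUNCTIONAL
                         if p in inst and p in new and inst[p] != new[p]]
            if not conflicts:
                inst.update(new)      # consolidate equivalent instances
            return kb, conflicts      # redundant or contradictory knowledge
    kb.append(new)                    # genuinely new instance
    return kb, []

kb = [{"isbn": "0-306-40615-2", "publicationYear": 1980}]
kb, conflicts = merge_instance(kb, {"isbn": "0-306-40615-2", "title": "X"})
```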

  6. Sustaining knowledge in the neutron generator community and benchmarking study.

    SciTech Connect

    Barrentine, Tameka C.; Kennedy, Bryan C.; Saba, Anthony W.; Turgeon, Jennifer L.; Schneider, Julia Teresa; Stubblefield, William Anthony; Baldonado, Esther

    2008-03-01

    In 2004, the Responsive Neutron Generator Product Deployment department embarked upon a partnership with the Systems Engineering and Analysis knowledge management (KM) team to develop knowledge management systems for the neutron generator (NG) community. This partnership continues today. The most recent challenge was to improve the current KM system (KMS) development approach by identifying a process that will allow staff members to capture knowledge as they learn it. This 'as-you-go' approach will lead to a sustainable KM process for the NG community. This paper presents a historical overview of NG KMSs, as well as research conducted to move toward sustainable KM.

  7. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
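
    The LINDA-style tuple space coordination mentioned above can be sketched in a few lines; this is a single-process toy with None as a wildcard, whereas the described architecture backs the space with a database and distributes it across agents.

```python
# A minimal Linda-style tuple space, sketching the coordination mechanism
# described above (single-process; no blocking semantics).

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, t):
        """Deposit a tuple into the space."""
        self._tuples.append(t)

    def _match(self, t, pattern):
        return len(t) == len(pattern) and all(
            p is None or p == v for p, v in zip(pattern, t))

    def rd(self, pattern):
        """Read (without removing) the first matching tuple; None = wildcard."""
        return next((t for t in self._tuples if self._match(t, pattern)), None)

    def in_(self, pattern):
        """Withdraw the first matching tuple from the space."""
        t = self.rd(pattern)
        if t is not None:
            self._tuples.remove(t)
        return t

ts = TupleSpace()
ts.out(("fault", "bus-A", "overcurrent"))
hit = ts.rd(("fault", None, None))  # -> ("fault", "bus-A", "overcurrent")
```

Knowledge agents then communicate by depositing and withdrawing such tuples instead of addressing each other directly.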

  8. A Knowledge-Based Imagery Exploitation System

    NASA Astrophysics Data System (ADS)

    Smyrniotis, Chuck; Payton, Paul M.; Barrett, Eamon B.

    1989-03-01

    Automation of major portions of the imagery exploitation process is becoming a necessity for meeting current and future imagery exploitation needs. In this paper we describe a prototype Automated Exploitation System (AES) which addresses requirements for monitoring objects of interest and situation assessment in large geographic areas. The purpose of AES is to aid the image analyst in performing routine, commonplace tasks more effectively. AES consists of four main subsystems: Cue Extractor (CE), Knowledge-Based Exploitation (KBE), Interactive Workstation (IWS), and a database subsystem. The CE processes raw image data and identifies objects and target cues based on pixel- and object-model data. Cues and image registration coefficients are passed to KBE for screening and verification, situation assessment and planning. KBE combines the cues with ground-truth and doctrinal knowledge in screening the cues to determine their importance. KBE generates reports on the image analysis, which it passes on to the IWS, from which an image analyst can monitor, observe, and evaluate system functionality as well as respond to critical items identified by KBE. The database subsystem stores and shares reference imagery, collateral information and digital terrain data to support both automated and interactive processing. This partitioning of functions to subsystems facilitates hierarchical application of knowledge in image interpretation. The current AES prototype helps in the identification, capture, representation, and refinement of knowledge. The KBE subsystem, which is the primary focus of the present paper, runs on a Symbolics 3675 computer and its software is written in the ART expert system and the LISP language.

  9. Knowledge Base Editor (SharpKBE)

    NASA Technical Reports Server (NTRS)

    Tikidjian, Raffi; James, Mark; Mackey, Ryan

    2007-01-01

    The SharpKBE software provides a graphical user interface environment for domain experts to build and manage knowledge base systems. Knowledge bases can be exported/translated to various target languages automatically, including customizable target languages.

  10. Acquisition, representation and rule generation for procedural knowledge

    NASA Technical Reports Server (NTRS)

    Ortiz, Chris; Saito, Tim; Mithal, Sachin; Loftin, R. Bowen

    1991-01-01

    Current research into the design and continuing development of a system for the acquisition of procedural knowledge, its representation in useful forms, and proposed methods for automated C Language Integrated Production System (CLIPS) rule generation is discussed. The Task Analysis and Rule Generation Tool (TARGET) is intended to permit experts, individually or collectively, to visually describe and refine procedural tasks. The system is designed to represent the acquired knowledge in the form of graphical objects with the capacity for generating production rules in CLIPS. The generated rules can then be integrated into applications such as NASA's Intelligent Computer Aided Training (ICAT) architecture. Also described are proposed methods for use in translating the graphical and intermediate knowledge representations into CLIPS rules.
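
    The rule-generation step this abstract describes, going from a graphical task representation to CLIPS production rules, can be sketched as string emission from a simple step record. The step format and names below are invented for illustration; TARGET's actual intermediate representation differs.

```python
# Hypothetical sketch of emitting a CLIPS rule from a procedural task step.

def step_to_clips(name, preconditions, action):
    """Render one task step as a CLIPS defrule string (ordered facts)."""
    conds = "\n   ".join(f"({c})" for c in preconditions)
    return (f"(defrule {name}\n"
            f"   {conds}\n"
            f"   =>\n"
            f"   (assert ({action})))")

rule = step_to_clips("open-valve",
                     ["power on", "pressure nominal"],
                     "valve open")
```

The generated text can then be loaded into CLIPS with `load` or `build`, which is how such rules would reach an application like the ICAT architecture.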

  11. Knowledge-Based Entrepreneurship in a Boundless Research System

    ERIC Educational Resources Information Center

    Dell'Anno, Davide

    2008-01-01

    International entrepreneurship and knowledge-based entrepreneurship have recently generated considerable academic and non-academic attention. This paper explores the "new" field of knowledge-based entrepreneurship in a boundless research system. Cultural barriers to the development of business opportunities by researchers persist in some academic…

  12. A Discussion of Knowledge Based Design

    NASA Technical Reports Server (NTRS)

    Wood, Richard M.; Bauer, Steven X. S.

    1999-01-01

    A discussion of knowledge and Knowledge-Based design as related to the design of aircraft is presented. The paper discusses the perceived problem with existing design studies and introduces the concepts of design and knowledge for a Knowledge-Based design system. A review of several Knowledge-Based design activities is provided. A Virtual Reality, Knowledge-Based system is proposed and reviewed. The feasibility of Virtual Reality to improve the efficiency and effectiveness of aerodynamic and multidisciplinary design, evaluation, and analysis of aircraft through the coupling of virtual reality technology and a Knowledge-Based design system is also reviewed. The final section of the paper discusses future directions for design and the role of Knowledge-Based design.

  13. Knowledge-based expert system configurator

    SciTech Connect

    Wakefield, K.A.; Gould, S.S.

    1990-01-01

    The term "knowledge-based expert system" usually brings to mind a rather extensive list of commercially available expert system shells, with the associated complexity of implementing the given inferencing strategies to drive a rule base of knowledge for solving particular classes of problems. A significant amount of learning time is required to understand all of the intricacies of these systems in order to utilize their salient features effectively while working around the "canned" constraints. The amount of effort required to prototype a first attempt is therefore substantial and can quickly lead to the unfortunate effect of reticence toward applying expert systems. This paper describes an alternative to the use of specialized shells for developing or prototyping first-attempt knowledge-based expert systems using Lotus 123, a commonly used spreadsheet software package, and discusses the advantages of this approach. The working example presented makes use of the available forward-chaining capabilities to determine automatically the hardware jumper and switch configuration for a distributed process control system. Hardware configuration control documentation is generated for use by field engineers and maintenance technicians. 4 refs., 4 figs.
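
    The forward-chaining configurator idea reads naturally as a fixed-point iteration over condition/conclusion rules. The sketch below uses invented rules and setting names purely for illustration; the paper's implementation expresses the same pattern in spreadsheet formulas.

```python
# Minimal forward-chaining sketch of the configurator idea
# (rules and settings invented for illustration).

RULES = [
    ({"board": "ctrl-A", "baud": 9600}, {"jumper J3": "2-3"}),
    ({"board": "ctrl-A"},               {"switch S1": "ON"}),
]

def forward_chain(facts):
    """Fire rules whose conditions hold until no new facts are derived."""
    derived = dict(facts)
    changed = True
    while changed:
        changed = False
        for cond, concl in RULES:
            if all(derived.get(k) == v for k, v in cond.items()):
                for k, v in concl.items():
                    if derived.get(k) != v:
                        derived[k] = v
                        changed = True
    return derived

config = forward_chain({"board": "ctrl-A", "baud": 9600})
```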

  14. Knowledge-Based Network Operations

    NASA Astrophysics Data System (ADS)

    Wu, Chuan-lin; Hung, Chaw-Kwei; Stedry, Steven P.; McClure, James P.; Yeh, Show-Way

    1988-03-01

    An expert system is being implemented to enhance the operability of the Ground Communication Facility (GCF) of the Jet Propulsion Laboratory's (JPL) Deep Space Network (DSN). The DSN is a tracking network for all of JPL's spacecraft plus a subset of spacecraft launched by other NASA centers. A GCF upgrade task is set to replace the aging current GCF system with new, modern equipment capable of a knowledge-based monitor and control approach. The expert system, implemented with KEE on a SUN workstation, performs network fault management, configuration management, and performance management in real time. Monitor data are collected from each processor and DSCC every five seconds. In addition to serving as input parameters for the expert system, the extracted management information is used to update a management information database. For monitor and control purposes, the software of each processor is divided into layers following the OSI standard, and each layer is modeled as a finite state machine. A System Management Application Process (SMAP) is implemented at the application layer; it coordinates the layer managers of the same processor and communicates with peer SMAPs on other processors. The expert system will be tuned by augmenting the production rules as operations proceed, and its performance will be measured.

  15. Effective knowledge-based potentials.

    PubMed

    Ferrada, Evandro; Melo, Francisco

    2009-07-01

    Empirical or knowledge-based potentials have many applications in structural biology such as the prediction of protein structure, protein-protein, and protein-ligand interactions and in the evaluation of stability for mutant proteins, the assessment of errors in experimentally solved structures, and the design of new proteins. Here, we describe a simple procedure to derive and use pairwise distance-dependent potentials that rely on the definition of effective atomic interactions, which attempt to capture interactions that are more likely to be physically relevant. Based on a difficult benchmark test composed of proteins with different secondary structure composition and representing many different folds, we show that the use of effective atomic interactions significantly improves the performance of potentials at discriminating between native and near-native conformations. We also found that, in agreement with previous reports, the potentials derived from the observed effective atomic interactions in native protein structures contain a larger amount of mutual information. A detailed analysis of the effective energy functions shows that atom connectivity effects, which mostly arise when deriving the potential by the incorporation of those indirect atomic interactions occurring beyond the first atomic shell, are clearly filtered out. The shape of the energy functions for direct atomic interactions representing hydrogen bonding and disulfide and salt bridges formation is almost unaffected when effective interactions are taken into account. On the contrary, the shape of the energy functions for indirect atom interactions (i.e., those describing the interaction between two atoms bound to a direct interacting pair) is clearly different when effective interactions are considered. Effective energy functions for indirect interacting atom pairs are not influenced by the shape or the energy minimum observed for the corresponding direct interacting atom pair. Our results
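
    Pairwise distance-dependent potentials of the kind this abstract discusses are conventionally derived by inverse-Boltzmann statistics, E(d) = -ln(P_obs(d) / P_ref(d)). The sketch below shows that standard form with invented counts; it is not the effective-interaction filtering procedure the paper itself contributes.

```python
import math

# Sketch of the standard inverse-Boltzmann form behind pairwise
# distance-dependent potentials: E(d) = -ln(P_obs(d) / P_ref(d)).
# Counts below are invented; real derivations use large structure sets.

def potential(obs_counts, ref_counts):
    """Per-bin pseudo-energies from observed vs. reference distance counts."""
    n_obs, n_ref = sum(obs_counts), sum(ref_counts)
    return [-math.log((o / n_obs) / (r / n_ref))
            for o, r in zip(obs_counts, ref_counts)]

# A distance bin observed more often than the reference state expects
# gets a negative (favorable) pseudo-energy.
energies = potential(obs_counts=[30, 10, 10], ref_counts=[10, 20, 20])
```

The paper's contribution can be read as restricting which atom pairs contribute to the observed counts, so that indirect (beyond-first-shell) contacts do not distort the resulting energy functions.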

  16. NASDA knowledge-based network planning system

    NASA Technical Reports Server (NTRS)

    Yamaya, K.; Fujiwara, M.; Kosugi, S.; Yambe, M.; Ohmori, M.

    1993-01-01

    One of the SODS (Space Operation and Data System) subsystems, NP (network planning), was the first expert system used by NASDA (National Space Development Agency of Japan) for tracking and control of satellites. The major responsibilities of the NP system are, first, the allocation of network and satellite control resources and, second, the generation of the network operation plan data (NOP) used in automated control of the stations and control center facilities. Until now, the first task, network resource scheduling, was done by network operators. The NP system automatically generates schedules using its knowledge base, which contains information on satellite orbits, station availability, which computer is dedicated to which satellite, and how many stations must be available for a particular satellite pass or a certain time period. The NP system is introduced.

  17. Generating Pedagogical Content Knowledge in Teacher Education Students

    ERIC Educational Resources Information Center

    van den Berg, Ed

    2015-01-01

    Some pre-service teaching activities can contribute much to the learning of pedagogical content knowledge (PCK) and subsequent teaching as these activities are "generating" PCK within the pre-service teacher's own classroom. Three examples are described: preparing exhibitions of science experiments, assessing preconceptions, and teaching…

  18. Ethics, Inclusiveness, and the UCEA Knowledge Base.

    ERIC Educational Resources Information Center

    Strike, Kenneth A.

    1995-01-01

    Accepts most of Bull and McCarthy's rejection of the ethical boundary thesis in this same "EAQ" issue. Reinterprets their argument, using a three-part model of administrative knowledge. Any project for constructing an educational administration knowledge base is suspect, since little "pure" empirical and instrumental knowledge will be confirmed by…

  19. Multi-Generational Knowledge Sharing for NASA Engineers

    NASA Technical Reports Server (NTRS)

    Topousis, Daria E.

    2009-01-01

    NASA, like many other organizations, is facing major challenges when it comes to its workforce. The average age of its personnel is 46, and 68 percent of its population is between 35 and 55. According to the U.S. Government Accounting Office, if the workforce continues aging, not enough engineers will have moved up the ranks and have the requisite skills to enable NASA to meet its vision for space exploration. In order to meet its goals of developing a new generation of spacecraft to support human spaceflight to the moon and Mars, the agency must engage and retain younger generations of workers and bridge the gaps between the four generations working today. Knowledge sharing among the generations is more critical than ever. This paper describes the strategies used to develop the NASA Engineering Network with the goal of engaging different generations.

  20. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer-aided design (CAD) databases and automatically extracting a process model is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure is designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in the formats required by various model-based reasoning tools.
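
    The label-and-connectivity-to-frames translation described above can be sketched as follows; the record format, library, and slot names are invented for illustration and do not reflect AKG's actual data structures.

```python
# Hypothetical sketch of turning CAD label/connectivity records into a
# frame-based model, in the spirit of AKG (formats invented here).

COMPONENT_LIBRARY = {"valve": {"ports": 2}, "pump": {"ports": 2}}

def build_frames(netlist):
    """netlist: iterable of (label, component_type, connected_labels)."""
    frames = {}
    for label, ctype, conns in netlist:
        frames[label] = {
            "is-a": ctype,                                   # type slot
            "ports": COMPONENT_LIBRARY[ctype]["ports"],      # from library
            "connected-to": list(conns),                     # topology slot
        }
    return frames

frames = build_frames([("P1", "pump", ["V1"]),
                       ("V1", "valve", ["P1"])])
```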

  1. The experimenters' regress reconsidered: Replication, tacit knowledge, and the dynamics of knowledge generation.

    PubMed

    Feest, Uljana

    2016-08-01

    This paper revisits the debate between Harry Collins and Allan Franklin, concerning the experimenters' regress. Focusing my attention on a case study from recent psychology (regarding experimental evidence for the existence of a Mozart Effect), I argue that Franklin is right to highlight the role of epistemological strategies in scientific practice, but that his account does not sufficiently appreciate Collins's point about the importance of tacit knowledge in experimental practice. In turn, Collins rightly highlights the epistemic uncertainty (and skepticism) surrounding much experimental research. However, I will argue that his analysis of tacit knowledge fails to elucidate the reasons why scientists often are (and should be) skeptical of other researchers' experimental results. I will present an analysis of tacit knowledge in experimental research that not only answers to this desideratum, but also shows how such skepticism can in fact be a vital enabling factor for the dynamic processes of experimental knowledge generation. PMID:27474184

  2. Decision Support and Knowledge-Based Systems.

    ERIC Educational Resources Information Center

    Konsynski, Benn R.; And Others

    1988-01-01

    A series of articles addresses issues concerning decision support and knowledge based systems. Topics covered include knowledge-based systems for information centers; object oriented systems; strategic information systems case studies; user perception; manipulation of certainty factors by individuals and expert systems; spreadsheet program use;…

  3. A knowledge base browser using hypermedia

    NASA Technical Reports Server (NTRS)

    Pocklington, Tony; Wang, Lui

    1990-01-01

    A hypermedia system is being developed to browse CLIPS (C Language Integrated Production System) knowledge bases. This system will be used to help train flight controllers for the Mission Control Center. Browsing the knowledge base is accomplished either by navigating through the various collection nodes that have already been defined or through the query languages.
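The two browsing modes mentioned above (navigation through predefined collection nodes versus querying) can be sketched as follows; the collection names, rule identifiers, and rule texts are invented for illustration and are not from the actual knowledge base.

```python
# Toy model of a hypermedia knowledge-base browser: collection nodes link to
# one another like hypermedia pages, and a query mode searches rule text.
# All names below are hypothetical.

collections = {
    "fault-detection": {"links": ["telemetry"], "rules": ["R1"]},
    "telemetry":       {"links": ["fault-detection"], "rules": ["R2", "R3"]},
}
rule_text = {
    "R1": "IF bus-voltage LOW THEN flag main-bus",
    "R2": "IF downlink LOST THEN switch antenna",
    "R3": "IF bus-voltage LOW AND downlink LOST THEN safe-mode",
}

def browse(node):
    """Navigation mode: show a collection node's rules and outgoing links."""
    c = collections[node]
    return {"rules": [rule_text[r] for r in c["rules"]], "links": c["links"]}

def query(term):
    """Query mode: find every rule whose text mentions the term."""
    return [r for r, text in rule_text.items() if term in text]

print(query("bus-voltage"))   # ['R1', 'R3']
```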

  4. Reality based scenarios facilitate knowledge network development.

    PubMed

    Manning, J; Broughton, V; McConnell, E A

    1995-03-01

    The challenge in nursing education is to create a learning environment that enables students to learn new knowledge, access previously acquired information from a variety of disciplines, and apply this newly constructed knowledge to the complex and constantly changing world of practice. Faculty at the University of South Australia, School of Nursing, City Campus describe the use of reality-based scenarios to acquire domain-specific knowledge and develop well-connected associative knowledge networks, both of which facilitate theory-based practice and the student's transition to the role of registered nurse.

  5. Games for Learning: Which Template Generates Social Construction of Knowledge?

    ERIC Educational Resources Information Center

    Garcia, Francisco A.

    2015-01-01

    The purpose of this study was to discover how three-person teams use game templates (trivia, role-play, or scavenger hunt) to socially construct knowledge. The researcher designed an experimental Internet-based database to facilitate teams creating each game. Teams consisted of teachers, students, hobbyists, and business owners who shared similar…

  6. Introducing T-shaped managers. Knowledge management's next generation.

    PubMed

    Hansen, M T; von Oetinger, B

    2001-03-01

    Most companies do a poor job of capitalizing on the wealth of expertise scattered across their organizations. That's because they tend to rely on centralized knowledge-management systems and technologies. But such systems are really only good at distributing explicit knowledge, the kind that can be captured and codified for general use. They're not very good at transferring implicit knowledge, the kind needed to generate new insights and creative ways of tackling business problems or opportunities. The authors suggest another approach, something they call T-shaped management, which requires executives to share knowledge freely across their organization (the horizontal part of the "T"), while remaining fiercely committed to their individual business unit's performance (the vertical part). A few companies are starting to use this approach, and one--BP Amoco--has been especially successful. From BP's experience, the authors have gleaned five ways that T-shaped managers help companies capitalize on their inherent knowledge. They increase efficiency by transferring best practices. They improve the quality of decision making companywide. They grow revenues through shared expertise. They develop new business opportunities through the cross-pollination of ideas. And they make bold strategic moves possible by delivering well-coordinated implementation. All that takes time, and BP's managers have had to learn how to balance that time against the attention they must pay to their own units. The authors suggest, however, that it's worth the effort to find such a balance to more fully realize the immense value of the knowledge lying idle within so many companies.

  7. Knowledge-based robotic grasping

    SciTech Connect

    Stansfield, S.A.

    1989-01-01

    In this paper, we describe a general-purpose robotic grasping system for use in unstructured environments. Using computer vision and a compact set of heuristics, the system automatically generates the robot arm and hand motions required for grasping an unmodeled object. The utility of such a system is most evident in environments where the robot will have to grasp and manipulate a variety of unknown objects, but where many of the manipulation tasks may be relatively simple. Examples of such domains are planetary exploration and astronaut assistance, undersea salvage and rescue, and nuclear waste site clean-up. This work implements a two-stage model of grasping: stage one is an orientation of the hand and wrist and a ballistic reach toward the object; stage two is hand preshaping and adjustment. Visual features are first extracted from the unmodeled object. These features and their relations are used by an expert system to generate a set of valid reach/grasps for the object. These grasps are then used in driving the robot hand and arm to bring the fingers into contact with the object in the desired configuration. 16 refs., 14 figs.

  8. Knowledge Based Systems and Metacognition in Radar

    NASA Astrophysics Data System (ADS)

    Capraro, Gerard T.; Wicks, Michael C.

    An airborne, ground-looking radar sensor's performance may be enhanced by selecting algorithms adaptively as the environment changes. A short description of an airborne intelligent radar system (AIRS) is presented, with a description of the knowledge-based filter and detection portions. A second level of artificial intelligence (AI) processing is presented that monitors, tests, and learns how to improve and control the first level. This approach is based upon metacognition, a way forward for developing knowledge-based systems.

  9. Verification of knowledge bases based on containment checking

    SciTech Connect

    Levy, A.Y.; Rousset, M.C.

    1996-12-31

    Building complex knowledge-based applications requires encoding large amounts of domain knowledge. After acquiring knowledge from domain experts, much of the effort in building a knowledge base goes into verifying that the knowledge is encoded correctly. We consider the problem of verifying hybrid knowledge bases that contain both Horn rules and a terminology in a description logic. Our approach to the verification problem is based on showing a close relationship to the problem of query containment. Our first contribution, based on this relationship, is a thorough analysis of the decidability and complexity of the verification problem for knowledge bases containing recursive rules and the interpreted predicates =, ≤, <, and ≠. Second, we show that important new classes of constraints on correct inputs and outputs can be expressed in a hybrid setting, in which a description logic class hierarchy is also considered, and we present the first complete algorithm for verifying such hybrid knowledge bases.
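The reduction to query containment can be illustrated with the classical homomorphism test for conjunctive queries; this is a minimal sketch that omits the interpreted predicates and description-logic terminology the paper actually handles.

```python
from itertools import product

# Containment test for conjunctive queries: q1 is contained in q2 iff there
# is a homomorphism from q2 into the canonical database of q1 (q1's body
# atoms with its variables frozen as constants) that maps head to head.

def contained_in(q1, q2):
    """A query is (head, body): head a tuple of variables, body a list of
    (predicate, args) atoms. Returns True iff q1 is contained in q2."""
    head1, body1 = q1
    head2, body2 = q2
    frozen = {(p, tuple(args)) for p, args in body1}     # canonical database
    consts = sorted({c for _, args in body1 for c in args})
    vars2 = sorted({v for _, args in body2 for v in args})
    for assignment in product(consts, repeat=len(vars2)):  # try every mapping
        sub = dict(zip(vars2, assignment))
        if all((p, tuple(sub[a] for a in args)) in frozen
               for p, args in body2) \
                and tuple(sub[v] for v in head2) == tuple(head1):
            return True
    return False

# q_len2(x, z): a path of length two from x to z.
q_len2 = (("x", "z"), [("e", ("x", "y")), ("e", ("y", "z"))])
# q_edge(x, z): a direct edge from x to z.
q_edge = (("x", "z"), [("e", ("x", "z"))])
# q_loop(x, x): x has a self-loop.
q_loop = (("x", "x"), [("e", ("x", "x"))])

print(contained_in(q_loop, q_len2))   # True: a self-loop is a 2-step path
print(contained_in(q_edge, q_len2))   # False
```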

  10. Ontology-Based Multiple Choice Question Generation

    PubMed Central

    Al-Yahya, Maha

    2014-01-01

    With recent advancements in Semantic Web technologies, a new trend in MCQ item generation has emerged through the use of ontologies. Ontologies are knowledge representation structures that formally describe entities in a domain and their relationships, thus enabling automated inference and reasoning. Ontology-based MCQ item generation is still in its infancy, but substantial research efforts are being made in the field. However, the applicability of these models for use in an educational setting has not been thoroughly evaluated. In this paper, we present an experimental evaluation of an ontology-based MCQ item generation system known as OntoQue. The evaluation was conducted using two different domain ontologies. The findings of this study show that ontology-based MCQ generation systems produce satisfactory MCQ items to a certain extent. However, the evaluation also revealed a number of shortcomings with current ontology-based MCQ item generation systems with regard to the educational significance of an automatically constructed MCQ item, the knowledge level it addresses, and its language structure. Furthermore, for the task to be successful in producing high-quality MCQ items for learning assessments, this study suggests a novel, holistic view that incorporates learning content, learning objectives, lexical knowledge, and scenarios into a single cohesive framework. PMID:24982937
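As a hedged sketch of the general idea (not the OntoQue system itself), an MCQ item can be generated from ontology triples by taking the stem from a property assertion, the key from the asserted value, and the distractors from sibling values of the same property; the triples and wording below are invented.

```python
import random

# Sketch of ontology-based MCQ generation over (subject, predicate, object)
# triples. Distractors are other objects of the same predicate that are not
# correct answers for the subject. All triples here are hypothetical.

triples = [
    ("Aspirin",  "treats", "Pain"),
    ("Aspirin",  "treats", "Fever"),
    ("Insulin",  "treats", "Diabetes"),
    ("Warfarin", "treats", "Thrombosis"),
]

def make_mcq(subject, predicate, rng=random):
    answers = [o for s, p, o in triples if s == subject and p == predicate]
    answer = answers[0]                   # first asserted object is the key
    distractors = sorted({o for s, p, o in triples
                          if p == predicate and o not in answers})
    options = distractors[:3] + [answer]
    rng.shuffle(options)
    return {"stem": f"Which of the following does {subject} treat?",
            "options": options, "answer": answer}

item = make_mcq("Aspirin", "treats", random.Random(0))
print(item["stem"])      # Which of the following does Aspirin treat?
print(item["answer"])    # Pain
```

Note that the stem is a fixed template here; the abstract's observed shortcomings in language structure and knowledge level are exactly what such naive templating produces.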

  13. The Knowledge Bases of the Expert Teacher.

    ERIC Educational Resources Information Center

    Turner-Bisset, Rosie

    1999-01-01

    Presents a model for knowledge bases for teaching that will act as a mental map for understanding the complexity of teachers' professional knowledge. Describes the sources and evolution of the model, explains how the model functions in practice, and provides an illustration using an example of teaching in history. (CMK)

  14. The Knowledge Base for Teaching. ERIC Digest.

    ERIC Educational Resources Information Center

    Strom, Sharon

    This digest examines the knowledge base for teaching, noting that many critical decisions about educational structure, policy, and assessment rely on it. The professionalization of teaching depends on showing that teaching requires mastery of a specialized body of knowledge that is applied ethically. Serious deliberation is needed in the…

  15. Updating knowledge bases with disjunctive information

    SciTech Connect

    Zhang, Yan; Foo, Norman Y.

    1996-12-31

    It is well known that the minimal change principle has been widely used in knowledge base updates. However, recent research has shown that conventional minimal change methods, e.g., the PMA, are generally problematic for updating knowledge bases with disjunctive information. In this paper, we propose two different approaches to deal with this problem: one is called minimal change with exceptions (MCE), the other minimal change with maximal disjunctive inclusions (MCD). The first method is syntax-based, while the second is model-theoretic. We show that these two approaches are equivalent for propositional knowledge base updates, and that the second method is also appropriate for first-order knowledge base updates. We then prove that our new update approaches still satisfy the standard Katsuno and Mendelzon update postulates.
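The disjunction problem with the conventional PMA that motivates this paper can be reproduced in a few lines; the sketch below implements the standard PMA (possible models approach) over propositional models represented as sets of true atoms.

```python
# PMA-style update: each model of the KB is replaced by the models of the
# new information that are closest to it, where distance is the size of the
# symmetric difference of the sets of true atoms.

def pma_update(kb_models, update_models):
    """Models are frozensets of the atoms that are true."""
    result = set()
    for w in kb_models:
        dist = lambda m: len(w ^ m)      # symmetric-difference size
        best = min(dist(m) for m in update_models)
        result |= {m for m in update_models if dist(m) == best}
    return result

# KB: exactly atom a is true.  New information: the disjunction "b or c",
# whose models over the atoms {a, b, c} are enumerated below.
kb = {frozenset({"a"})}
b_or_c = {frozenset(s) for s in ({"a", "b"}, {"a", "c"}, {"a", "b", "c"},
                                 {"b"}, {"c"}, {"b", "c"})}
updated = pma_update(kb, b_or_c)
print(sorted(sorted(m) for m in updated))   # [['a', 'b'], ['a', 'c']]
```

The PMA keeps only the minimally changed models {a, b} and {a, c}, excluding {a, b, c}; this forced exclusivity of the disjunction is the kind of behavior the paper's MCE and MCD approaches are designed to avoid.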

  16. The Role of Domain Knowledge in Creative Generation

    ERIC Educational Resources Information Center

    Ward, Thomas B.

    2008-01-01

    Previous studies have shown that a predominant tendency in creative generation tasks is to base new ideas on well-known, specific instances of previous ideas (e.g., basing ideas for imaginary aliens on dogs, cats or bears). However, a substantial minority of individuals has been shown to adopt more abstract approaches to the task and to develop…

  17. The Coming of Knowledge-Based Business.

    ERIC Educational Resources Information Center

    Davis, Stan; Botkin, Jim

    1994-01-01

    Economic growth will come from knowledge-based businesses whose "smart" products filter and interpret information. Businesses will come to think of themselves as educators and their customers as learners. (SK)

  18. Knowledge-based system for computer security

    SciTech Connect

    Hunteman, W.J.

    1988-01-01

    The rapid expansion of computer security information and technology has provided little support for the security officer to identify and implement the safeguards needed to secure a computing system. The Department of Energy Center for Computer Security is developing a knowledge-based computer security system to provide expert knowledge to the security officer. The system is policy-based and incorporates a comprehensive list of system attack scenarios and safeguards that implement the required policy while defending against the attacks. 10 figs.

  19. Methodology for testing and validating knowledge bases

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.
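Premise (3) above, checking structural properties with graph algorithms, can be illustrated by detecting a circular inference chain in a rule dependency graph; the rules here are hypothetical, not from the described toolset.

```python
# Structural check on a knowledge base: rules are edges "premise -> conclusion";
# a cycle in the dependency graph is a circular inference chain, which a
# validation tool would flag. Rule contents are hypothetical.

rules = [("engine-hot", "alarm"), ("alarm", "shutdown"),
         ("shutdown", "engine-hot")]          # a circular chain

def find_cycle(edges):
    """Return one cycle as a list of nodes, or None if the graph is acyclic."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    def dfs(node, path):
        if node in path:
            return path[path.index(node):]    # the circular portion
        for nxt in graph.get(node, []):
            hit = dfs(nxt, path + [node])
            if hit:
                return hit
        return None
    for start in graph:
        hit = dfs(start, [])
        if hit:
            return hit
    return None

print(find_cycle(rules))   # ['engine-hot', 'alarm', 'shutdown']
```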

  20. Factors Influencing the Creation of a Wiki Culture for Knowledge Management in a Cross-Generational Organizational Setting

    ERIC Educational Resources Information Center

    Macro, Kenneth L., Jr.

    2011-01-01

    Initiatives within organizations that promote sharing of knowledge may be hampered by generational differences. Research on relationships between generations and technology-based knowledge sharing campaigns provides little managerial guidance for practitioners. The purpose of this ethnographic study was to identify the factors that influence the…

  1. An Insulating Glass Knowledge Base

    SciTech Connect

    Michael L. Doll; Gerald Hendrickson; Gerard Lagos; Russell Pylkki; Chris Christensen; Charlie Cureija

    2005-08-01

    This report discusses issues relevant to insulating glass (IG) durability performance by presenting the observations and developed conclusions in a logical, sequential format. This concluding effort discusses Phase II activities and focuses on beginning to quantify IG durability issues, continuing the approach presented in the Phase I activities (Appendix 1), which discuss a qualitative assessment of durability issues. Phase II developed a focus around two specific IG design classes previously presented in Phase I of this project. The typical box spacer and thermoplastic spacer designs, including their Failure Modes and Effects Analysis (FMEA) and fault tree diagrams, were chosen to address two currently used IG design options with varying components and failure modes. The system failures occur due to failures of components or their interfaces. Efforts to begin quantifying the durability issues focused on the development and delivery of an included computer-based IG durability simulation program. The effort to deliver the foundation for a comprehensive IG durability simulation tool is necessary to address advancements needed to meet current and future building envelope energy performance goals. This need is based upon the current lack of IG field failure data and the lengthy field observation time necessary for this data collection. Ultimately, the simulation program is intended to be used by designers throughout the current and future industry supply chain. Its use is intended to advance IG durability as expectations grow around energy conservation and with the growth of embedded technologies as required to meet energy needs. In addition, the tool has the immediate benefit of providing insight for research and improvement prioritization. Included in the simulation model presentation are elements and/or methods to address IG materials, design, process, quality, induced stress (environmental and other factors), validation, etc.
In addition, acquired data

  2. Distributed, cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    Some current research in the development and application of distributed, cooperating knowledge-based systems technology is addressed. The focus of the current research is the spacecraft ground operations environment. The underlying hypothesis is that, because of the increasing size, complexity, and cost of planned systems, conventional procedural approaches to the architecture of automated systems will give way to a more comprehensive knowledge-based approach. A hallmark of these future systems will be the integration of multiple knowledge-based agents which understand the operational goals of the system and cooperate with each other and the humans in the loop to attain the goals. The current work includes the development of a reference model for knowledge-base management, the development of a formal model of cooperating knowledge-based agents, the use of a testbed for prototyping and evaluating various knowledge-based concepts, and beginning work on the establishment of an object-oriented model of an intelligent end-to-end (spacecraft to user) system. An introductory discussion of these activities is presented, the major concepts and principles being investigated are highlighted, and their potential use in other application domains is indicated.

  3. Building a knowledge base to support a digital library.

    PubMed

    Mendonça, E A; Cimino, J J

    2001-01-01

    As part of an effort to develop a knowledge base to support searching online medical literature according to individual needs, we have studied the possibility of using the co-occurrence of MeSH terms in MEDLINE citations, associated with the search strategies optimal for evidence-based medicine, for the automated construction of a knowledge base. This study evaluates the relevance of the relationships between the semantic pairs generated by the process and the clinical validity of the semantic types involved. Of the semantic pairs proposed by our method, a group of clinicians judged sixty percent to be relevant. The remaining forty percent included semantic types considered unimportant by clinicians. The knowledge extraction method showed reasonable results. We believe it can be appropriate for the task of retrieving information from the medical record in order to guide users during a search and retrieval process. Future directions include validation of the knowledge, based on an evaluation of system performance.

  4. The importance of knowledge-based technology.

    PubMed

    Cipriano, Pamela F

    2012-01-01

    Nurse executives are responsible for a workforce that can provide safer and more efficient care in a complex sociotechnical environment. National quality priorities rely on technologies to provide data collection, share information, and leverage analytic capabilities to interpret findings and inform approaches to care that will achieve better outcomes. As a key steward for quality, the nurse executive exercises leadership to provide the infrastructure to build and manage nursing knowledge and instill accountability for following evidence-based practices. These actions contribute to a learning health system where new knowledge is captured as a by-product of care delivery enabled by knowledge-based electronic systems. The learning health system also relies on rigorous scientific evidence embedded into practice at the point of care. The nurse executive optimizes use of knowledge-based technologies, integrated throughout the organization, that have the capacity to help transform health care.

  5. Reuse: A knowledge-based approach

    NASA Technical Reports Server (NTRS)

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui

    1992-01-01

    This paper describes our research in automating the reuse process through the use of application domain models. Application domain models are explicit formal representations of the application knowledge necessary to understand, specify, and generate application programs. Furthermore, they provide a unified repository for the operational structure, rules, policies, and constraints of a specific application area. In our approach, domain models are expressed in terms of a transaction-based meta-modeling language. This paper has described in detail the creation and maintenance of hierarchical structures. These structures are created through a process that includes reverse engineering of data models with supplementary enhancement from application experts. Source code is also reverse engineered but is not a major source of domain model instantiation at this time. In the second phase of the software synthesis process, program specifications are interactively synthesized from an instantiated domain model. These specifications are currently integrated into a manual programming process but will eventually be used to derive executable code with mechanically assisted transformations. This research is performed within the context of programming-in-the-large types of systems. Although our goals are ambitious, we are implementing the synthesis system in an incremental manner through which we can realize tangible results. The client/server architecture is capable of supporting 16 simultaneous X/Motif users and tens of thousands of attributes and classes. Domain models have been partially synthesized from five different application areas. As additional domain models are synthesized and additional knowledge is gathered, we will inevitably add to and modify our representation. However, our current experience indicates that it will scale and expand to meet our modeling needs.

  6. The Knowledge Building Paradigm: A Model of Learning for Net Generation Students

    ERIC Educational Resources Information Center

    Philip, Donald

    2005-01-01

    In this article Donald Philip describes Knowledge Building, a pedagogy based on the way research organizations function. The global economy, Philip argues, is driving a shift from older, industrial models to the model of the business as a learning organization. The cognitive patterns of today's Net Generation students, formed by lifetime exposure…

  7. Metadata based mediator generation

    SciTech Connect

    Critchlow, T

    1998-03-01

    Mediators are a critical component of any data warehouse, particularly one utilizing partially materialized views; they transform data from its source format to the warehouse representation while resolving semantic and syntactic conflicts. The close relationship between mediators and databases requires a mediator to be updated whenever an associated schema is modified. This maintenance may be a significant undertaking if a warehouse integrates several dynamic data sources. However, failure to perform these updates quickly significantly reduces the reliability of the warehouse, because queries do not have access to the most current data. This may result in incorrect or misleading responses, and reduce user confidence in the warehouse. This paper describes a metadata framework, and associated software, designed to automate a significant portion of the mediator generation task and thereby reduce the effort involved in adapting to schema changes. By allowing the DBA to concentrate on identifying the modifications at a high level, instead of reprogramming the mediator, turnaround time is reduced and warehouse reliability is improved.
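The core idea, generating the mediator from declarative metadata rather than hand-coding it, can be sketched as below; the field names and transforms are hypothetical, not the paper's actual framework.

```python
# A mediator generated from declarative metadata: a schema change is absorbed
# by editing the mapping, not by reprogramming the mediator. Field names and
# transforms are hypothetical.

mapping = {                  # warehouse field -> (source field, transform)
    "name":    ("full_name", str.strip),
    "dob":     ("birth_date", lambda s: s.replace("/", "-")),
    "site_id": ("site", int),
}

def make_mediator(mapping):
    """Generate a row-level mediator function from the mapping metadata."""
    def mediate(source_row):
        return {w: fn(source_row[s]) for w, (s, fn) in mapping.items()}
    return mediate

mediator = make_mediator(mapping)
print(mediator({"full_name": " Ada Lovelace ", "birth_date": "1815/12/10",
                "site": "42"}))
# {'name': 'Ada Lovelace', 'dob': '1815-12-10', 'site_id': 42}
```

If the source renames `birth_date`, only the mapping entry changes and `make_mediator` regenerates the mediator, which is the maintenance saving the abstract describes.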

  8. Ontology-based knowledge discovery in pharmacogenomics.

    PubMed

    Coulet, Adrien; Smaïl-Tabbone, Malika; Napoli, Amedeo; Devignes, Marie-Dominique

    2011-01-01

    One current challenge in biomedicine is to analyze large amounts of complex biological data to extract domain knowledge. This work focuses on the use of knowledge-based techniques such as knowledge discovery (KD) and knowledge representation (KR) in pharmacogenomics, where knowledge units represent genotype-phenotype relationships in the context of a given treatment. An objective is to design a knowledge base (KB, here also referred to as an ontology) and then to use it in the KD process itself. A method is proposed for dealing with two main tasks: (1) building a KB from heterogeneous data related to genotype, phenotype, and treatment, and (2) applying KD techniques to knowledge assertions for extracting genotype-phenotype relationships. An application was carried out on a clinical trial concerned with the variability of drug response to montelukast treatment. Genotype-genotype and genotype-phenotype associations were retrieved together with new associations, allowing the extension of the initial KB. This experiment shows the potential of KR and KD processes, especially for designing KBs, checking KB consistency, and reasoning for problem solving.

  9. Bridging the gap: simulations meet knowledge bases

    NASA Astrophysics Data System (ADS)

    King, Gary W.; Morrison, Clayton T.; Westbrook, David L.; Cohen, Paul R.

    2003-09-01

    Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Actions (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge engineers making it an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.

  10. Web-Based Learning as a Tool of Knowledge Continuity

    ERIC Educational Resources Information Center

    Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita; Rambely, Azmin Sham

    2013-01-01

    The outbreak of information in a borderless world has prompted lecturers to move forward together with the technological innovation and erudition of knowledge in performing his/her responsibility to educate the young generations to be able to stand above the crowd at the global scene. Teaching and Learning through web-based learning platform is a…

  11. Knowledge structure of form specification of the next generation GPS information system

    NASA Astrophysics Data System (ADS)

    Lu, Wenlong; Liu, Xiaojun; Jiang, Xiangqian; Xu, Zhengao

    2006-11-01

    The next generation GPS (Dimensional and Geometrical Product Specification and Verification) is an important basic technical standard system for manufacturing that aims to enrich the GPS specification language so it can express the functional requirements of products, thereby reducing correlation uncertainty, specification uncertainty, and so on. On one hand, the richer and more precise indications may also be more verbose, and thus likely to lengthen design time. On the other hand, designers cannot use the standards effectively even with a good comprehension of them. To resolve this problem, this paper proposes a GPS information system, which will further help to greatly reduce product development cycles and costs. Acquisition and representation of knowledge are among the most difficult steps in successfully developing the knowledge base of this GPS information system, because they affect the development efficiency, speed, and maintainability of the system, much as data structures do in ordinary programming. For knowledge modeling of this GPS information system, a new modeling mechanism based on category theory is put forward in this paper. The resulting knowledge model is called the Geometrical Knowledge Model (GKM). The information system is built on category theory because of its formality and high level of abstraction. Finally, the basic knowledge structure of the next generation GPS roundness specification is given in the paper.

  12. A Natural Language Interface Concordant with a Knowledge Base.

    PubMed

    Han, Yong-Jin; Park, Seong-Bae; Park, Se-Young

    2016-01-01

    The discordance between expressions interpretable by a natural language interface (NLI) system and those answerable by a knowledge base is a critical problem in the field of NLIs. In order to solve this discordance problem, this paper proposes a method to translate natural language questions into formal queries that can be generated from a graph-based knowledge base. The proposed method considers a subgraph of a knowledge base as a formal query. Thus, all formal queries corresponding to a concept or a predicate in the knowledge base can be generated prior to query time and all possible natural language expressions corresponding to each formal query can also be collected in advance. A natural language expression has a one-to-one mapping with a formal query. Hence, a natural language question is translated into a formal query by matching the question with the most appropriate natural language expression. If the confidence of this matching is not sufficiently high the proposed method rejects the question and does not answer it. Multipredicate queries are processed by regarding them as a set of collected expressions. The experimental results show that the proposed method thoroughly handles answerable questions from the knowledge base and rejects unanswerable ones effectively. PMID:26904105
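A toy version of the matching-with-rejection scheme might look like the following; the expressions, formal queries, and the use of a string-similarity ratio as the confidence score are all assumptions made for illustration, not the paper's actual method.

```python
from difflib import SequenceMatcher

# Each collected natural language expression maps one-to-one to a formal
# query. A question is matched to the most similar expression; if the match
# confidence falls below the threshold, the question is rejected (None)
# rather than answered. Expressions and queries are hypothetical.

EXPRESSIONS = {
    "who directed <film>":      "SELECT ?d WHERE { <film> directedBy ?d }",
    "when was <film> released": "SELECT ?y WHERE { <film> releaseYear ?y }",
}

def translate(question, threshold=0.75):
    best, score = None, 0.0
    for expr, query in EXPRESSIONS.items():
        s = SequenceMatcher(None, question, expr).ratio()
        if s > score:
            best, score = query, s
    return best if score >= threshold else None   # None = rejected

print(translate("who directed <film>"))           # exact match -> the query
print(translate("average rainfall in spring"))    # unanswerable -> None
```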

  15. Knowledge-based Autonomous Test Engineer (KATE)

    NASA Technical Reports Server (NTRS)

    Parrish, Carrie L.; Brown, Barbara L.

    1991-01-01

    Mathematical models of system components have long been used to allow simulators to predict system behavior to various stimuli. Recent efforts to monitor, diagnose, and control real-time systems using component models have experienced similar success. NASA Kennedy is continuing the development of a tool for implementing real-time knowledge-based diagnostic and control systems called KATE (Knowledge based Autonomous Test Engineer). KATE is a model-based reasoning shell designed to provide autonomous control, monitoring, fault detection, and diagnostics for complex engineering systems by applying its reasoning techniques to an exchangeable quantitative model describing the structure and function of the various system components and their systemic behavior.
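The core model-based reasoning loop the abstract describes can be illustrated with a toy component model. This is a hedged sketch, not NASA's actual KATE code: the pump/valve model, sensor names, and tolerance value are invented for the example.

```python
def predict_flow(valve_open, pump_on, nominal_flow=10.0):
    """Toy quantitative component model: flow is present only when the
    pump is running and the valve is open."""
    return nominal_flow if (valve_open and pump_on) else 0.0

def detect_fault(valve_open, pump_on, measured_flow, tolerance=0.5):
    """Flag a candidate fault when the measured sensor value diverges
    from what the component model predicts for the commanded state."""
    expected = predict_flow(valve_open, pump_on)
    return abs(measured_flow - expected) > tolerance
```

With the pump commanded on and the valve open, a reading near the nominal flow passes, while a reading of zero is flagged; diagnosing *which* component failed is then a search over the model, which KATE performs against its exchangeable system model.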

  16. The Value of Knowledge and the Values of the New Knowledge Worker: Generation X in the New Economy.

    ERIC Educational Resources Information Center

    Bogdanowicz, Maureen S.; Bailey, Elaine K.

    2002-01-01

    Knowledge is increasingly a corporate asset, but it poses a challenge to human resource development, especially with workers such as those of Generation X who are concerned with their employability. Companies that value knowledge must value knowledge workers. (Contains 31 references.) (SK)

  17. Knowledge-based diagnosis for aerospace systems

    NASA Technical Reports Server (NTRS)

    Atkinson, David J.

    1988-01-01

    The need for automated diagnosis in aerospace systems and the approach of using knowledge-based systems are examined. Research issues in knowledge-based diagnosis which are important for aerospace applications are treated along with a review of recent relevant research developments in Artificial Intelligence. The design and operation of some existing knowledge-based diagnosis systems are described. The systems described and compared include the LES expert system for liquid oxygen loading at NASA Kennedy Space Center, the FAITH diagnosis system developed at the Jet Propulsion Laboratory, the PES procedural expert system developed at SRI International, the CSRL approach developed at Ohio State University, the StarPlan system developed by Ford Aerospace, the IDM integrated diagnostic model, and the DRAPhys diagnostic system developed at NASA Langley Research Center.

  18. Knowledge-based scheduling of arrival aircraft

    NASA Technical Reports Server (NTRS)

    Krzeczowski, K.; Davis, T.; Erzberger, H.; Lev-Ram, I.; Bergh, C.

    1995-01-01

    A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, and workload criteria, such as conflict avoidance. The objective of the algorithms is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper will describe the scheduling algorithms, give examples of their use, and present data regarding their potential benefits to the air traffic system.
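The sequencing-and-separation idea above can be sketched with a simple first-come-first-served rule. This is a hypothetical illustration, not the actual knowledge-base logic: the callsigns, arrival times, and 90-second separation value are all invented.

```python
MIN_SEPARATION = 90  # assumed seconds between successive landings

def schedule(etas):
    """etas: {callsign: estimated arrival time in seconds}.
    Order aircraft by estimated arrival time, then push each assigned
    landing time back just far enough to keep the minimum separation,
    trading a small delay for conflict avoidance."""
    assigned, last_time = {}, None
    for callsign, eta in sorted(etas.items(), key=lambda kv: kv[1]):
        t = eta if last_time is None else max(eta, last_time + MIN_SEPARATION)
        assigned[callsign] = t
        last_time = t
    return assigned

times = schedule({"AAL12": 0, "UAL7": 30, "DAL9": 400})
# UAL7 is delayed to 90 s to preserve separation behind AAL12; DAL9,
# already well separated, keeps its estimated time.
```

The knowledge base described in the abstract layers many such rules hierarchically and weighs them against controller-workload criteria rather than applying a single separation constant.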

  19. Knowledge-based commodity distribution planning

    NASA Technical Reports Server (NTRS)

    Saks, Victor; Johnson, Ivan

    1994-01-01

    This paper presents an overview of a Decision Support System (DSS) that incorporates Knowledge-Based (KB) and commercial off the shelf (COTS) technology components. The Knowledge-Based Logistics Planning Shell (KBLPS) is a state-of-the-art DSS with an interactive map-oriented graphics user interface and powerful underlying planning algorithms. KBLPS was designed and implemented to support skilled Army logisticians to prepare and evaluate logistics plans rapidly, in order to support corps-level battle scenarios. KBLPS represents a substantial advance in graphical interactive planning tools, with the inclusion of intelligent planning algorithms that provide a powerful adjunct to the planning skills of commodity distribution planners.

  20. Knowledge-based programming support tool

    SciTech Connect

    Harandi, M.T.

    1983-01-01

    This paper presents an overview of a knowledge based programming support tool. Although the system would not synthesize programs automatically, it has the capability of aiding programmers in various phases of program production such as design, coding, debugging and testing. The underlying design principles of this system are similar to those governing the implementation of knowledge-based expertise in other domains of human mental skill. The system is composed of several major units, each an expert system for a sub-domain of program development process. It implements various elements of programming expertise as an interactive system equipped with provisions by which the domain specialist could easily and effectively transfer to the system the knowledge it needs for its decision making. 19 references.

  1. Knowledge-based machine indexing from natural language text: Knowledge base design, development, and maintenance

    NASA Technical Reports Server (NTRS)

    Genuardi, Michael T.

    1993-01-01

    One strategy for machine-aided indexing (MAI) is to provide a concept-level analysis of the textual elements of documents or document abstracts. In such systems, natural-language phrases are analyzed in order to identify and classify concepts related to a particular subject domain. The overall performance of these MAI systems is largely dependent on the quality and comprehensiveness of their knowledge bases. These knowledge bases function to (1) define the relations between a controlled indexing vocabulary and natural language expressions; (2) provide a simple mechanism for disambiguation and the determination of relevancy; and (3) allow the extension of concept-hierarchical structure to all elements of the knowledge file. After a brief description of the NASA Machine-Aided Indexing system, concerns related to the development and maintenance of MAI knowledge bases are discussed. Particular emphasis is given to statistically-based text analysis tools designed to aid the knowledge base developer. One such tool, the Knowledge Base Building (KBB) program, presents the domain expert with a well-filtered list of synonyms and conceptually-related phrases for each thesaurus concept. Another tool, the Knowledge Base Maintenance (KBM) program, functions to identify areas of the knowledge base affected by changes in the conceptual domain (for example, the addition of a new thesaurus term). An alternate use of the KBM as an aid in thesaurus construction is also discussed.
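The first knowledge-base function listed above, relating a controlled indexing vocabulary to natural-language expressions, can be sketched as a phrase-to-concept lookup. This is a minimal illustration with invented phrases and concept labels, not the NASA MAI knowledge file itself.

```python
# Invented fragment of a phrase-to-concept table: natural-language phrases
# found in document text map to controlled thesaurus concepts.
PHRASE_TO_CONCEPT = {
    "grid generation": "COMPUTATIONAL GRIDS",
    "flow field": "FLOW DISTRIBUTION",
    "expert system": "EXPERT SYSTEMS",
}

def index_text(text):
    """Return the controlled index terms whose trigger phrases occur in
    the text, giving a concept-level analysis of the passage."""
    text = text.lower()
    return sorted({c for phrase, c in PHRASE_TO_CONCEPT.items() if phrase in text})

terms = index_text("An expert system for flow field grid generation.")
# -> ['COMPUTATIONAL GRIDS', 'EXPERT SYSTEMS', 'FLOW DISTRIBUTION']
```

A real MAI knowledge base adds the disambiguation and concept-hierarchy machinery the abstract describes; tools like KBB and KBM exist precisely because tables like this are large and must track a changing thesaurus.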

  2. Photography-based image generator

    NASA Astrophysics Data System (ADS)

    Dalton, Nicholas M.; Deering, Charles S.

    1989-09-01

    A two-channel Photography Based Image Generator system was developed to drive the Helmet Mounted Laser Projector at the Naval Training System Center at Orlando, Florida. This projector is a two-channel system that displays a wide field-of-view color image with a high-resolution inset to efficiently match the pilot's visual capability. The image generator is a derivative of the LTV-developed visual system installed in the A-7E Weapon System Trainer at NAS Cecil Field. The Photography Based Image Generator is based on patented LTV technology for high resolution, multi-channel, real world visual simulation. Special provisions were developed for driving the NTSC-developed and patented Helmet Mounted Laser Projector. These include a special 1023-line raster format, an electronic image blending technique, spherical lens mapping for dome projection, a special computer interface for head/eye tracking and flight parameters, special software, and a number of data bases. Good gaze angle tracking is critical to the use of the NTSC projector in a flight simulation environment. The Photography Based Image Generator provides superior dynamic response by performing a relatively simple perspective transformation on stored, high-detail photography instead of generating this detail by "brute force" computer image generation methods. With this approach, high detail can be displayed and updated at the television field rate (60 Hz).

  3. Viewing Knowledge Bases as Qualitative Models.

    ERIC Educational Resources Information Center

    Clancey, William J.

    The concept of a qualitative model provides a unifying perspective for understanding how expert systems differ from conventional programs. Knowledge bases contain qualitative models of systems in the world, that is, primarily non-numeric descriptions that provide a basis for explaining and predicting behavior and formulating action plans. The…

  4. The adverse outcome pathway knowledge base

    EPA Science Inventory

    The rapid advancement of the Adverse Outcome Pathway (AOP) framework has been paralleled by the development of tools to store, analyse, and explore AOPs. The AOP Knowledge Base (AOP-KB) project has brought three independently developed platforms (Effectopedia, AOP-Wiki, and AOP-X...

  5. Improving the Knowledge Base in Teacher Education.

    ERIC Educational Resources Information Center

    Rockler, Michael J.

    Education in the United States for most of the last 50 years has built its knowledge base on a single dominating foundation--behavioral psychology. This paper analyzes the history of behaviorism. Syntheses are presented of the theories of Ivan P. Pavlov, J. B. Watson, and B. F. Skinner, all of whom contributed to the body of works on behaviorism.…

  6. Improving structural similarity based virtual screening using background knowledge

    PubMed Central

    2013-01-01

    Background: Virtual screening in the form of similarity rankings is often applied in the early drug discovery process to rank and prioritize compounds from a database. This similarity ranking can be achieved with structural similarity measures. However, their general nature can lead to insufficient performance in some application cases. In this paper, we provide a link between ranking-based virtual screening and fragment-based data mining methods. The inclusion of binding-relevant background knowledge into a structural similarity measure improves the quality of the similarity rankings. This background knowledge, in the form of binding-relevant substructures, can be derived either by hand selection or by automated fragment-based data mining methods. Results: In virtual screening experiments we show that our approach clearly improves enrichment factors with both applied variants of our approach: the extension of the structural similarity measure with background knowledge in the form of a hand-selected relevant substructure, or the extension of the similarity measure with background knowledge derived with data mining methods. Conclusion: Our study shows that adding binding-relevant background knowledge can lead to significantly improved similarity rankings in virtual screening and that even basic data mining approaches can lead to competitive results, making hand-selection of the background knowledge less crucial. This is especially important in drug discovery and development projects where no receptor structure is available or, more frequently, no verified binding mode is known, and mostly ligand-based approaches can be applied to generate hit compounds. PMID:24341870
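The extension the abstract describes can be sketched as a baseline similarity plus a bonus for shared binding-relevant substructures. This is a hedged illustration: representing molecules as feature sets, the Tanimoto baseline, and the additive weighting scheme are assumptions for the example, not the paper's exact formulation.

```python
def tanimoto(a, b):
    """Baseline structural similarity between two molecules represented
    as sets of structural features (e.g. fragment fingerprints)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def similarity_with_background(a, b, relevant_fragments, weight=0.3):
    """Augment the baseline with background knowledge: award a bonus for
    each binding-relevant fragment (hand-selected or mined) that both
    molecules contain, capped so the score stays in [0, 1]."""
    base = tanimoto(a, b)
    shared = sum(1 for f in relevant_fragments if f in a and f in b)
    bonus = weight * shared / max(len(relevant_fragments), 1)
    return min(base + bonus, 1.0)
```

Two compounds that agree on a known binding-relevant fragment are thus ranked above a pair with the same overall feature overlap but no such fragment, which is exactly the effect credited with the improved enrichment factors.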

  7. PharmGKB: The Pharmacogenomics Knowledge Base

    PubMed Central

    Thorn, Caroline F.; Klein, Teri E.; Altman, Russ B.

    2014-01-01

    The Pharmacogenomics Knowledge Base, PharmGKB, is an interactive tool for researchers investigating how genetic variation affects drug response. The PharmGKB Web site, http://www.pharmgkb.org, displays genotype, molecular, and clinical knowledge integrated into pathway representations and Very Important Pharmacogene (VIP) summaries with links to additional external resources. Users can search and browse the knowledgebase by genes, variants, drugs, diseases, and pathways. Registration is free to the entire research community, but subject to agreement to use for research purposes only and not to redistribute. Registered users can access and download data to aid in the design of future pharmacogenetics and pharmacogenomics studies. PMID:23824865

  8. Tools for constructing knowledge-based systems

    SciTech Connect

    Cross, G.R.

    1986-03-01

    The original expert systems for the most part were handcrafted directly, using various dialects of the LISP programming language. The inference and knowledge representation components of these systems can be separated from the domain-specific portion of the expert system and can be used again for an entirely different task. Some of these tools, generically called shells, are discussed. Although these shells provide help in building knowledge-based systems, considerable skill in artificial intelligence programming is still necessary to create an expert system that accomplishes a nontrivial task.

  9. An Ontology-Based Conceptual Model For Accumulating And Reusing Knowledge In A DMAIC Process

    NASA Astrophysics Data System (ADS)

    Nguyen, ThanhDat; Kifor, Claudiu Vasile

    2015-09-01

    DMAIC (Define, Measure, Analyze, Improve, and Control) is an important process for enhancing the quality of other processes on the basis of knowledge. However, DMAIC knowledge is difficult to access, and conventional approaches struggle to structure and reuse it, mainly because it is not represented and organized systematically. In this article, we address the problem with a conceptual model that combines the DMAIC process, knowledge management, and ontology engineering. The main idea of our model is to use ontologies to represent the knowledge generated by each DMAIC phase. We build five knowledge bases, one for each phase, with the support of the necessary tools and appropriate techniques from the information technology area. These knowledge bases make knowledge available to experts, managers, and web users during or after DMAIC execution, so that existing knowledge can be shared and reused.

  10. An Ebola virus-centered knowledge base

    PubMed Central

    Kamdar, Maulik R.; Dumontier, Michel

    2015-01-01

    Ebola virus (EBOV), of the family Filoviridae viruses, is a NIAID category A, lethal human pathogen. It is responsible for causing Ebola virus disease (EVD) that is a severe hemorrhagic fever and has a cumulative death rate of 41% in the ongoing epidemic in West Africa. There is an ever-increasing need to consolidate and make available all the knowledge that we possess on EBOV, even if it is conflicting or incomplete. This would enable biomedical researchers to understand the molecular mechanisms underlying this disease and help develop tools for efficient diagnosis and effective treatment. In this article, we present our approach for the development of an Ebola virus-centered Knowledge Base (Ebola-KB) using Linked Data and Semantic Web Technologies. We retrieve and aggregate knowledge from several open data sources, web services and biomedical ontologies. This knowledge is transformed to RDF, linked to the Bio2RDF datasets and made available through a SPARQL 1.1 Endpoint. Ebola-KB can also be explored using an interactive Dashboard visualizing the different perspectives of this integrated knowledge. We showcase how different competency questions, asked by domain users researching the druggability of EBOV, can be formulated as SPARQL Queries or answered using the Ebola-KB Dashboard. Database URL: http://ebola.semanticscience.org. PMID:26055098

  13. Satellite Contamination and Materials Outgassing Knowledge base

    NASA Technical Reports Server (NTRS)

    Minor, Jody L.; Kauffman, William J. (Technical Monitor)

    2001-01-01

    Satellite contamination continues to be a design problem that engineers must take into account when developing new satellites. To help with this issue, NASA's Space Environments and Effects (SEE) Program funded the development of the Satellite Contamination and Materials Outgassing Knowledge base. This engineering tool brings together in one location information about the outgassing properties of aerospace materials based upon ground-testing data, the effects of outgassing that have been observed during flight, and measurements of the contamination environment by on-orbit instruments. The knowledge base contains information using the ASTM Standard E-1559 and also consolidates data from missions using quartz-crystal microbalances (QCMs). The data contained in the knowledge base were shared with NASA by U.S. government agencies and industry as well as by international space agencies. The term 'knowledgebase' was used because so much information and capability were brought together in one comprehensive engineering design tool. It is the SEE Program's intent to continually add material contamination data as they become available, creating a dynamic tool whose value to the user is ever increasing. The SEE Program firmly believes that NASA, and ultimately the entire contamination user community, will greatly benefit from this new engineering tool, and it highly encourages the community not only to use the tool but also to add data to it.

  14. Presentation planning using an integrated knowledge base

    NASA Technical Reports Server (NTRS)

    Arens, Yigal; Miller, Lawrence; Sondheimer, Norman

    1988-01-01

    A description is given of user interface research aimed at bringing together multiple input and output modes in a way that handles mixed mode input (commands, menus, forms, natural language), interacts with a diverse collection of underlying software utilities in a uniform way, and presents the results through a combination of output modes including natural language text, maps, charts and graphs. The system, Integrated Interfaces, derives much of its ability to interact uniformly with the user and the underlying services and to build its presentations, from the information present in a central knowledge base. This knowledge base integrates models of the application domain (Navy ships in the Pacific region, in the current demonstration version); the structure of visual displays and their graphical features; the underlying services (data bases and expert systems); and interface functions. The emphasis is on a presentation planner that uses the knowledge base to produce multi-modal output. There has been a flurry of recent work in user interface management systems. (Several recent examples are listed in the references). Existing work is characterized by an attempt to relieve the software designer of the burden of handcrafting an interface for each application. The work has generally focused on intelligently handling input. This paper deals with the other end of the pipeline - presentations.

  15. XML-Based SHINE Knowledge Base Interchange Language

    NASA Technical Reports Server (NTRS)

    James, Mark; Mackey, Ryan; Tikidjian, Raffi

    2008-01-01

    The SHINE Knowledge Base Interchange Language software has been designed to more efficiently send new knowledge bases to spacecraft that have been embedded with the Spacecraft Health Inference Engine (SHINE) tool. The intention of the behavioral model is to capture most of the information generally associated with a spacecraft functional model, while specifically addressing the needs of execution within SHINE and Livingstone. As such, it has some constructs that are based on one or the other.

  16. Knowledge-Based Query Construction Using the CDSS Knowledge Base for Efficient Evidence Retrieval.

    PubMed

    Afzal, Muhammad; Hussain, Maqbool; Ali, Taqdir; Hussain, Jamil; Khan, Wajahat Ali; Lee, Sungyoung; Kang, Byeong Ho

    2015-08-28

    Finding appropriate evidence to support clinical practices is always challenging, and the construction of a query to retrieve such evidence is a fundamental step. Typically, evidence is found using manual or semi-automatic methods, which are time-consuming and sometimes make it difficult to construct knowledge-based complex queries. To overcome the difficulty in constructing knowledge-based complex queries, we utilized the knowledge base (KB) of the clinical decision support system (CDSS), which has the potential to provide sufficient contextual information. To automatically construct knowledge-based complex queries, we designed methods to parse rule structure in KB of CDSS in order to determine an executable path and extract the terms by parsing the control structures and logic connectives used in the logic. The automatically constructed knowledge-based complex queries were executed on the PubMed search service to evaluate the results on the reduction of retrieved citations with high relevance. The average number of citations was reduced from 56,249 citations to 330 citations with the knowledge-based query construction approach, and relevance increased from 1 term to 6 terms on average. The ability to automatically retrieve relevant evidence maximizes efficiency for clinicians in terms of time, based on feedback collected from clinicians. This approach is generally useful in evidence-based medicine, especially in ambient assisted living environments where automation is highly important.
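The construction step the abstract describes, walking a CDSS rule, collecting the terms used along its executable path, and joining them into a boolean search string, can be sketched as follows. The rule format here is invented for illustration; the paper's KB uses its own rule structure, control constructs, and logic connectives.

```python
# Hypothetical CDSS rule: conditions ("if") plus a recommendation ("then").
rule = {
    "if": [("diagnosis", "type 2 diabetes"), ("lab", "hemoglobin a1c")],
    "then": ("recommend", "metformin"),
}

def build_query(rule):
    """Collect the terms appearing in the rule's conditions and action,
    then join them with AND into a PubMed-style boolean query string."""
    terms = [value for _, value in rule["if"]]
    terms.append(rule["then"][1])
    # Quote multi-word terms so a boolean search treats them as phrases.
    return " AND ".join(f'"{t}"' if " " in t else t for t in terms)

query = build_query(rule)
# -> "type 2 diabetes" AND "hemoglobin a1c" AND metformin
```

Executing such a multi-term query against a search service narrows the result set sharply compared with a single-term query, which is the citation-reduction effect the abstract reports.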

  18. Micromotor-based energy generation.

    PubMed

    Singh, Virendra V; Soto, Fernando; Kaufmann, Kevin; Wang, Joseph

    2015-06-01

    A micromotor-based strategy for energy generation, utilizing the conversion of liquid-phase hydrogen to usable hydrogen gas (H2), is described. The new motion-based H2-generation concept relies on the movement of Pt-black/Ti Janus microparticle motors in a solution of sodium borohydride (NaBH4) fuel. This is the first report of using NaBH4 for powering micromotors. The autonomous motion of these catalytic micromotors, as well as their bubble generation, leads to enhanced mixing and transport of NaBH4 towards the Pt-black catalytic surface (compared to static microparticles or films), and hence to a substantially faster rate of H2 production. The practical utility of these micromotors is illustrated by powering a hydrogen-oxygen fuel cell car by an on-board motion-based hydrogen and oxygen generation. The new micromotor approach paves the way for the development of efficient on-site energy generation for powering external devices or meeting growing demands on the energy grid.

  19. Clips as a knowledge based language

    NASA Technical Reports Server (NTRS)

    Harrington, James B.

    1987-01-01

    CLIPS is a language for writing expert systems applications on a personal or small computer. Here, the CLIPS programming language is described and compared to three other artificial intelligence (AI) languages (LISP, Prolog, and OPS5) with regard to the processing they provide for the implementation of a knowledge based system (KBS). A discussion is given on how CLIPS would be used in a control system.

  20. Empirical Analysis and Refinement of Expert System Knowledge Bases

    PubMed Central

    Weiss, Sholom M.; Politakis, Peter; Ginsberg, Allen

    1986-01-01

    Recent progress in knowledge base refinement for expert systems is reviewed. Knowledge base refinement is characterized by the constrained modification of rule-components in an existing knowledge base. The goals are to localize specific weaknesses in a knowledge base and to improve an expert system's performance. Systems that automate some aspects of knowledge base refinement can have a significant impact on the related problems of knowledge base acquisition, maintenance, verification, and learning from experience. The SEEK empirical analysis and refinement system is reviewed and its successor system, SEEK2, is introduced. Important areas for future research in knowledge base refinement are described.

  1. Generative Knowledge Interviewing: A Method for Knowledge Transfer and Talent Management at the University of Michigan

    ERIC Educational Resources Information Center

    Peet, Melissa R.; Walsh, Katherine; Sober, Robin; Rawak, Christine S.

    2010-01-01

    Experts and leaders within most fields possess knowledge that is largely tacit and unconscious in nature. The leaders of most organizations do not "know what they know" and cannot share their knowledge with others. The loss of this essential knowledge is of major concern to organizations. This study tested an innovative method of tacit knowledge…

  2. Knowledge-based systems in Japan

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward; Engelmore, Robert S.; Friedland, Peter E.; Johnson, Bruce B.; Nii, H. Penny; Schorr, Herbert; Shrobe, Howard

    1994-01-01

    This report summarizes a study of the state-of-the-art in knowledge-based systems technology in Japan, organized by the Japanese Technology Evaluation Center (JTEC) under the sponsorship of the National Science Foundation and the Advanced Research Projects Agency. The panel visited 19 Japanese sites in March 1992. Based on these site visits plus other interactions with Japanese organizations, both before and after the site visits, the panel prepared a draft final report. JTEC sent the draft to the host organizations for their review. The final report was published in May 1993.

  3. A knowledge based software engineering environment testbed

    NASA Technical Reports Server (NTRS)

    Gill, C.; Reedy, A.; Baker, L.

    1985-01-01

    The Carnegie Group Incorporated and Boeing Computer Services Company are developing a testbed which will provide a framework for integrating conventional software engineering tools with Artificial Intelligence (AI) tools to promote automation and productivity. The emphasis is on the transfer of AI technology to the software development process. Experiments relate to AI issues such as scaling up, inference, and knowledge representation. In its first year, the project has created a model of software development by representing software activities; developed a module representation formalism to specify the behavior and structure of software objects; integrated the model with the formalism to identify shared representation and inheritance mechanisms; demonstrated object programming by writing procedures and applying them to software objects; used data-directed and goal-directed reasoning to, respectively, infer the cause of bugs and evaluate the appropriateness of a configuration; and demonstrated knowledge-based graphics. Future plans include introduction of knowledge-based systems for rapid prototyping or rescheduling; natural language interfaces; blackboard architecture; and distributed processing.

  4. Explanation-based knowledge acquisition of electronics

    NASA Astrophysics Data System (ADS)

    Kieras, David E.

    1992-08-01

    This is the final report in a project that examined how knowledge of practical electronics could be acquired from materials similar to that appearing in electronics training textbooks, from both an artificial intelligence perspective and an experimental psychology perspective. Practical electronics training materials present a series of basic circuits accompanied by an explanation of how the circuit performs the desired function. More complex circuits are then explained in terms of these basic circuits. This material thus presents schema knowledge for individual circuit types in the form of explanations of circuit behavior. Learning from such material would thus consist of first instantiating any applicable schemas, and then constructing a new schema based on the circuit structure and behavior described in the explanation. If the basic structure of the material is an effective approach to learning, learning about a new circuit should be easier when the relevant schemas are available than when they are not. This result was obtained both for an artificial intelligence system that used standard explanation-based learning mechanisms and with human learners in a laboratory setting, but the benefits of already having the relevant schemas were not large in these materials. The close examination of learning in this domain, and the structure of knowledge, should be useful to future cognitive analyses of training in technical domains.

  5. Compilation for critically constrained knowledge bases

    SciTech Connect

    Schrag, R.

    1996-12-31

    We show that many "critically constrained" Random 3SAT knowledge bases (KBs) can be compiled into disjunctive normal form easily by using a variant of the "Davis-Putnam" proof procedure. From these compiled KBs we can answer all queries about entailment of conjunctive normal formulas, also easily, compared to a "brute-force" approach to approximate knowledge compilation into unit clauses for the same KBs. We exploit this fact to develop an aggressive hybrid approach which attempts to compile a KB exactly until a given resource limit is reached, then falls back to approximate compilation into unit clauses. The resulting approach handles all of the critically constrained Random 3SAT KBs with average savings of an order of magnitude over the brute-force approach.
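
    The connection between the Davis-Putnam search and DNF compilation can be sketched as follows: each satisfying branch of the search tree is one DNF term. This is a simplified, hedged illustration (plain recursive DPLL with unit propagation, not the paper's variant).

```python
# Minimal DPLL-style sketch: enumerate satisfying partial assignments of a
# CNF clause set; the list of assignments returned is a DNF compilation.

def dpll(clauses, assignment=()):
    """clauses: iterable of clauses; each clause a tuple of signed ints
    (positive = variable true, negative = false), as in DIMACS CNF."""
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                          # clause already satisfied
        rest = tuple(l for l in clause if -l not in assignment)
        if not rest:
            return []                         # empty clause: dead branch
        simplified.append(rest)
    if not simplified:
        return [assignment]                   # all satisfied: one DNF term
    for clause in simplified:
        if len(clause) == 1:                  # unit propagation
            return dpll(simplified, assignment + clause)
    lit = simplified[0][0]                    # branch on a literal
    return (dpll(simplified, assignment + (lit,)) +
            dpll(simplified, assignment + (-lit,)))

# (x1 or x2) and (not x1 or x2): every model makes x2 true.
terms = dpll([(1, 2), (-1, 2)])
```

Entailment queries against the compiled DNF then reduce to checking each term, which is the "easy" step the abstract refers to.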

  6. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information from the bench chemist to biologist, medical practitioner to pharmaceutical scientist in a structured format. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although, data are of variable types the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference centric or compound centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.

  8. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  9. Wavelet-Based Grid Generation

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1996-01-01

    Wavelets can provide a basis set in which the basis functions are constructed by dilating and translating a fixed function known as the mother wavelet. The mother wavelet can be seen as a high-pass filter in the frequency domain. The process of dilating this high-pass filter can be seen as altering the frequency range that is 'passed' or detected. The process of translation moves this high-pass filter throughout the domain, thereby providing a mechanism to detect the frequencies or scales of information at every location. This is exactly the type of information that is needed for effective grid generation. This paper provides motivation to use wavelets for grid generation in addition to providing the final product: source code for wavelet-based grid generation.
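
    The filtering idea above can be sketched in one dimension: a single level of the Haar transform acts as the high-pass filter, and large detail coefficients flag locations whose local scale content calls for a finer grid. This is a hedged sketch under simplifying assumptions (Haar wavelet, one level, hypothetical function names), not the paper's source code.

```python
# One-level Haar high-pass filtering and threshold-based cell refinement.

def haar_details(samples):
    """Haar detail coefficients: scaled pairwise differences."""
    return [(samples[i] - samples[i + 1]) / 2
            for i in range(0, len(samples) - 1, 2)]

def refine_cells(samples, threshold):
    """Indices of sample pairs where |detail| exceeds the threshold,
    i.e. cells a grid generator would subdivide."""
    return [i for i, d in enumerate(haar_details(samples))
            if abs(d) > threshold]

# A signal that is flat on the left, with a jump in its right half:
# only the cell straddling the jump is flagged for refinement.
signal = [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
cells = refine_cells(signal, 0.25)
```

Translating the filter across the domain (the list comprehension's index sweep) is what localizes the high-frequency content, matching the mechanism the abstract describes.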

  10. Design of a knowledge-based welding advisor

    SciTech Connect

    Kleban, S.D.

    1996-06-01

    Expert system implementation can take numerous forms ranging from traditional declarative rule-based systems with if-then syntax to imperative programming languages that capture expertise in procedural code. The artificial intelligence community generally thinks of expert systems as rules or rule-bases and an inference engine to process the knowledge. The welding advisor developed at Sandia National Laboratories and described in this paper deviates from this by codifying expertise using object representation and methods. Objects allow computer scientists to model the world as humans perceive it, giving us a very natural way to encode expert knowledge. The design of the welding advisor, which generates and evaluates solutions, will be compared and contrasted to a traditional rule-based system.

  11. A prototype knowledge-based simulation support system

    SciTech Connect

    Hill, T.R.; Roberts, S.D.

    1987-04-01

    As a preliminary step toward the goal of an intelligent automated system for simulation modeling support, we explore the feasibility of the overall concept by generating and testing a prototypical framework. A prototype knowledge-based computer system was developed to support a senior level course in industrial engineering so that the overall feasibility of an expert simulation support system could be studied in a controlled and observable setting. The system behavior mimics the diagnostic (intelligent) process performed by the course instructor and teaching assistants, finding logical errors in INSIGHT simulation models and recommending appropriate corrective measures. The system was programmed in a non-procedural language (PROLOG) and designed to run interactively with students working on course homework and projects. The knowledge-based structure supports intelligent behavior, providing its users with access to an evolving accumulation of expert diagnostic knowledge. The non-procedural approach facilitates the maintenance of the system and helps merge the roles of expert and knowledge engineer by allowing new knowledge to be easily incorporated without regard to the existing flow of control. The background, features and design of the system are described and preliminary results are reported. Initial success is judged to demonstrate the utility of the reported approach and support the ultimate goal of an intelligent modeling system which can support simulation modelers outside the classroom environment. Finally, future extensions are suggested.

  12. Knowledge-based public health situation awareness

    NASA Astrophysics Data System (ADS)

    Mirhaji, Parsa; Zhang, Jiajie; Srinivasan, Arunkumar; Richesson, Rachel L.; Smith, Jack W.

    2004-09-01

    There have been numerous efforts to create comprehensive databases from multiple sources to monitor the dynamics of public health and most specifically to detect the potential threats of bioterrorism before widespread dissemination. But there is little evidence that these systems are timely and dependable, or that they can reliably distinguish man-made from natural incidents. One must evaluate the value of so-called 'syndromic surveillance systems' along with the costs involved in design, development, implementation and maintenance of such systems and the costs involved in investigation of the inevitable false alarms. In this article we will introduce a new perspective to the problem domain with a shift in paradigm from 'surveillance' toward 'awareness'. As we conceptualize a rather different approach to tackle the problem, we will introduce a different methodology in application of information science, computer science, cognitive science and human-computer interaction concepts in design and development of so-called 'public health situation awareness systems'. We will share some of our design and implementation concepts for the prototype system that is under development in the Center for Biosecurity and Public Health Informatics Research, in the University of Texas Health Science Center at Houston. The system is based on a knowledgebase containing ontologies with different layers of abstraction, from multiple domains, that provide the context for information integration, knowledge discovery, interactive data mining, information visualization, information sharing and communications. The modular design of the knowledgebase and its knowledge representation formalism enables incremental evolution of the system from a partial system to a comprehensive knowledgebase of 'public health situation awareness' as it acquires new knowledge through interactions with domain experts or automatic discovery of new knowledge.

  13. Irrelevance Reasoning in Knowledge Based Systems

    NASA Technical Reports Server (NTRS)

    Levy, A. Y.

    1993-01-01

    This dissertation considers the problem of reasoning about irrelevance of knowledge in a principled and efficient manner. Specifically, it is concerned with two key problems: (1) developing algorithms for automatically deciding what parts of a knowledge base are irrelevant to a query and (2) the utility of relevance reasoning. The dissertation describes a novel tool, the query-tree, for reasoning about irrelevance. Based on the query-tree, we develop several algorithms for deciding what formulas are irrelevant to a query. Our general framework sheds new light on the problem of detecting independence of queries from updates. We present new results that significantly extend previous work in this area. The framework also provides a setting in which to investigate the connection between the notion of irrelevance and the creation of abstractions. We propose a new approach to research on reasoning with abstractions, in which we investigate the properties of an abstraction by considering the irrelevance claims on which it is based. We demonstrate the potential of the approach for the cases of abstraction of predicates and projection of predicate arguments. Finally, we describe an application of relevance reasoning to the domain of modeling physical devices.

  14. Adaptive Knowledge Management of Project-Based Learning

    ERIC Educational Resources Information Center

    Tilchin, Oleg; Kittany, Mohamed

    2016-01-01

    The goal of an approach to Adaptive Knowledge Management (AKM) of project-based learning (PBL) is to intensify subject study through guiding, inducing, and facilitating development knowledge, accountability skills, and collaborative skills of students. Knowledge development is attained by knowledge acquisition, knowledge sharing, and knowledge…

  15. MetaShare: Enabling Knowledge-Based Data Management

    NASA Astrophysics Data System (ADS)

    Pennington, D. D.; Salayandia, L.; Gates, A.; Osuna, F.

    2013-12-01

    MetaShare is a free and open source knowledge-based system for supporting data management planning, now required by some agencies and publishers. MetaShare supports users as they describe the types of data they will collect, expected standards, and expected policies for sharing. MetaShare's semantic model captures relationships between disciplines, tools, data types, data formats, and metadata standards. As the user plans their data management activities, MetaShare recommends choices based on practices and decisions from a community that has used the system for similar purposes, and extends the knowledge base to capture new relationships. The MetaShare knowledge base is being seeded with information for geoscience and environmental science domains, and is currently undergoing testing at the University of Texas at El Paso. Through time and usage, it is expected to grow to support a variety of research domains, enabling community-based learning of data management practices. Knowledge of a user's choices during the planning phase can be used to support other tasks in the data life cycle, e.g., collecting, disseminating, and archiving data. A key barrier to scientific data sharing is the lack of sufficient metadata that provides context under which data were collected. The next phase of MetaShare development will automatically generate data collection instruments with embedded metadata and semantic annotations based on the information provided during the planning phase. While not comprehensive, this metadata will be sufficient for discovery and will enable users to focus on more detailed descriptions of their data. Details are available at: Salayandia, L., Pennington, D., Gates, A., and Osuna, F. (accepted). MetaShare: From data management plans to knowledge base systems. AAAI Fall Symposium Series Workshop on Discovery Informatics, November 15-17, 2013, Arlington, VA.

  16. A knowledge-based care protocol system for ICU.

    PubMed

    Lau, F; Vincent, D D

    1995-01-01

    There is a growing interest in using care maps in ICU. So far, the emphasis has been on developing the critical path, problem/outcome, and variance reporting for specific diagnoses. This paper presents a conceptual knowledge-based care protocol system design for the ICU. It is based on the manual care map currently in use for managing myocardial infarction in the ICU of the Sturgeon General Hospital in Alberta. The proposed design uses expert rules, object schemas, case-based reasoning, and quantitative models as sources of its knowledge. Also being developed is a decision model with explicit linkages for outcome-process-measure from the care map. The resulting system is intended as a bedside charting and decision-support tool for caregivers. Proposed usage includes charting by acknowledgment, generation of alerts, and critiques on variances/events recorded, recommendations for planned interventions, and comparison with historical cases. Currently, a prototype is being developed on a PC-based network with Visual Basic, Level-Expert Object, and xBase. A clinical trial is also planned to evaluate whether this knowledge-based care protocol can reduce the length of stay of patients with myocardial infarction in the ICU. PMID:8591604

  17. Generating Synergy between Conceptual Change and Knowledge Building

    ERIC Educational Resources Information Center

    Lee, Chwee Beng

    2010-01-01

    This paper is an initial effort to review the reciprocity between the theoretical traditions of "conceptual change" and "knowledge building" by discussing the underlying epistemological assumptions, objectives, conceptions of concepts and ideas, and mechanisms that bring forth the respective goals of these two traditions. The basis for generating…

  18. Examining the "Whole Child" to Generate Usable Knowledge

    ERIC Educational Resources Information Center

    Rappolt-Schlichtmann, Gabrielle; Ayoub, Catherine C.; Gravel, Jenna W.

    2009-01-01

    Despite the promise of scientific knowledge contributing to issues facing vulnerable children, families, and communities, typical approaches to research have made applications challenging. While contemporary theories of human development offer appropriate complexity, research has mostly failed to address dynamic developmental processes. Research…

  19. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  20. Microcomputer logarithmic time base generator

    NASA Astrophysics Data System (ADS)

    Wills, L. J.; Ly, Nhan G.

    1985-11-01

    A new circuit is introduced to generate the logarithmic time base function with good resolution. By using a single-chip microcomputer with EPROM program storage, the circuitry is simplified and can be easily reproduced. The output function covers more than six decades of time and has 590 discrete points per decade with an accuracy of one discrete point per decade or ±0.16%. The design overcomes two well-known problems in using the logarithmic time base. First, because the time increments are derived from a real-time register, there is a precise reference for zero time; second, a series of time base interval marks is output for correctly calibrating the time axis.

  1. Knowledge-based decision support for patient monitoring in cardioanesthesia.

    PubMed

    Schecke, T; Langen, M; Popp, H J; Rau, G; Käsmacher, H; Kalff, G

    1992-01-01

    An approach to generating 'intelligent alarms' is presented that aggregates many information items, i.e. measured vital signs, recent medications, etc., into state variables that more directly reflect the patient's physiological state. Based on these state variables the described decision support system AES-2 also provides therapy recommendations. The assessment of the state variables and the generation of therapeutic advice follow a knowledge-based approach. Aspects of uncertainty, e.g. a gradual transition between 'normal' and 'below normal', are considered applying a fuzzy set approach. Special emphasis is laid on the ergonomic design of the user interface, which is based on color graphics and finger touch input on the screen. Certain simulation techniques considerably support the design process of AES-2 as is demonstrated with a typical example from cardioanesthesia. PMID:1402299
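
    The gradual transition between 'normal' and 'below normal' that the abstract mentions is the standard fuzzy-set device of a graded membership function. The sketch below is a hedged illustration of that idea; the breakpoint values and variable names are hypothetical, not taken from AES-2.

```python
# Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c],
# linear ramps in between, so a vital sign moves gradually between
# 'below normal' and 'normal' instead of crossing a hard threshold.

def trapezoid(x, a, b, c, d):
    """Degree of membership in [0, 1] for value x."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)    # rising edge
    return (d - x) / (d - c)        # falling edge

# Degree to which a mean arterial pressure of 68 mmHg counts as 'normal'
# (illustrative breakpoints: 60-70 rising, 100-110 falling).
normal = trapezoid(68.0, 60.0, 70.0, 100.0, 110.0)
```

A monitoring rule can then weight its therapy recommendation by this degree rather than firing on a crisp cutoff.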

  2. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the methods selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction methods' selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The described benefits as provided by the system favor a better performance of construction projects. PMID:24453925

  3. Selection of construction methods: a knowledge-based approach.

    PubMed

    Ferrada, Ximena; Serpell, Alfredo; Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the methods selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction methods' selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The described benefits as provided by the system favor a better performance of construction projects.

  4. What's in a Word? Using Content Vocabulary to "Generate" Growth in General Academic Vocabulary Knowledge

    ERIC Educational Resources Information Center

    Flanigan, Kevin; Templeton, Shane; Hayes, Latisha

    2012-01-01

    The role of vocabulary knowledge in supporting students' comprehension and understanding of their content-area reading is critical. This article explores how content-area teachers can help students become aware of, understand, and apply generative knowledge about English words to grow and develop their vocabularies. Generative vocabulary…

  5. An object-based methodology for knowledge representation in SGML

    SciTech Connect

    Kelsey, R.L.; Hartley, R.T.; Webster, R.B.

    1997-11-01

    An object-based methodology for knowledge representation and its Standard Generalized Markup Language (SGML) implementation is presented. The methodology includes class, perspective domain, and event constructs for representing knowledge within an object paradigm. The perspective construct allows for representation of knowledge from multiple and varying viewpoints. The event construct allows actual use of knowledge to be represented. The SGML implementation of the methodology facilitates usability; structured yet flexible knowledge design; and sharing and reuse of knowledge class libraries.

  6. Knowledge based cluster ensemble for cancer discovery from biomolecular data.

    PubMed

    Yu, Zhiwen; Wong, Hau-San; You, Jane; Yang, Qinmin; Liao, Hongying

    2011-06-01

    The adoption of microarray techniques in biological and medical research provides a new way for cancer diagnosis and treatment. In order to perform successful diagnosis and treatment of cancer, discovering and classifying cancer types correctly is essential. Class discovery is one of the most important tasks in cancer classification using biomolecular data. Most of the existing works adopt single clustering algorithms to perform class discovery from biomolecular data. However, single clustering algorithms have limitations, which include a lack of robustness, stability, and accuracy. In this paper, we propose a new cluster ensemble approach called knowledge based cluster ensemble (KCE) which incorporates the prior knowledge of the data sets into the cluster ensemble framework. Specifically, KCE represents the prior knowledge of a data set in the form of pairwise constraints. Then, the spectral clustering algorithm (SC) is adopted to generate a set of clustering solutions. Next, KCE transforms pairwise constraints into confidence factors for these clustering solutions. After that, a consensus matrix is constructed by considering all the clustering solutions and their corresponding confidence factors. The final clustering result is obtained by partitioning the consensus matrix. Compared with single clustering algorithms and conventional cluster ensemble approaches, knowledge-based cluster ensemble approaches are more robust, stable, and accurate. The experiments on cancer data sets show that: 1) KCE works well on these data sets; 2) KCE not only outperforms most of the state-of-the-art single clustering algorithms, but also outperforms most of the state-of-the-art cluster ensemble approaches.
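
    The constraint-to-confidence step can be sketched simply: score each candidate clustering by the fraction of must-link and cannot-link constraints it satisfies. This is a hedged illustration of the idea, with a hypothetical scoring rule; the paper's KCE generates its candidate solutions with spectral clustering and combines them through a consensus matrix, which is not reproduced here.

```python
# Confidence factor for a clustering solution, derived from prior knowledge
# expressed as pairwise constraints on sample indices.

def confidence(labels, must_link, cannot_link):
    """Fraction of constraints satisfied by a label assignment."""
    ok = sum(labels[i] == labels[j] for i, j in must_link)
    ok += sum(labels[i] != labels[j] for i, j in cannot_link)
    total = len(must_link) + len(cannot_link)
    return ok / total if total else 1.0

# Two candidate clusterings of five samples, with prior knowledge that
# samples (0, 1) share a class and samples (0, 4) do not.
must, cannot = [(0, 1)], [(0, 4)]
c1 = confidence([0, 0, 1, 1, 1], must, cannot)   # satisfies both constraints
c2 = confidence([0, 1, 0, 1, 0], must, cannot)   # violates both constraints
```

Weighting each solution's votes in the consensus matrix by such a factor is what lets the prior knowledge steer the final partition.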

  7. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS for the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBS's cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBS's as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBS's will be developed, and modification of the SSF to support these tools and methods will be addressed.

  8. Model Based Analysis and Test Generation for Flight Software

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Schumann, Johann M.; Mehlitz, Peter C.; Lowry, Mike R.; Karsai, Gabor; Nine, Harmon; Neema, Sandeep

    2009-01-01

    We describe a framework for model-based analysis and test case generation in the context of a heterogeneous model-based development paradigm that uses and combines MathWorks and UML 2.0 models and the associated code generation tools. This paradigm poses novel challenges to analysis and test case generation that, to the best of our knowledge, have not been addressed before. The framework is based on a common intermediate representation for different modeling formalisms and leverages and extends model checking and symbolic execution tools for model analysis and test case generation, respectively. We discuss the application of our framework to software models for a NASA flight mission.

  9. Automated knowledge base development from CAD/CAE databases

    NASA Technical Reports Server (NTRS)

    Wright, R. Glenn; Blanchard, Mary

    1988-01-01

    Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development process through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include the development of a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge-based systems that are inherently free of error.

  10. The Weather Lab: An Instruction-Based Assessment Tool Built from a Knowledge-Based System.

    ERIC Educational Resources Information Center

    Mioduser, David; Venezky, Richard L.; Gong, Brian

    1998-01-01

    Presents the Weather Lab, a computer-based tool for assessing student knowledge and understanding of weather phenomena by involving students in generating weather forecasts or manipulating weather components affecting the final formulation of a forecast. Contains 37 references. (Author/ASK)

  11. Building validation tools for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.

    1987-01-01

    The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (higher-order language), is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.
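
    As a toy illustration of the kind of checking such validation tools automate (the rule base and the two checks below are invented for illustration; they are not EVA's actual implementation), a checker can flag rule pairs that share an identical premise but state the same or a contradictory conclusion:

```python
# Hypothetical sketch of two classic rule-base checks: redundancy (two rules
# with identical conditions and identical actions) and conflict (identical
# conditions but different actions). The rules below are invented examples.

def check_rules(rules):
    """rules: list of (frozenset_of_conditions, action) pairs."""
    redundant, conflicting = [], []
    for i in range(len(rules)):
        for j in range(i + 1, len(rules)):
            (ci, ai), (cj, aj) = rules[i], rules[j]
            if ci == cj:
                if ai == aj:
                    redundant.append((i, j))    # same rule stated twice
                else:
                    conflicting.append((i, j))  # same premise, different conclusion
    return redundant, conflicting

rules = [
    (frozenset({"pressure_high", "valve_open"}), "close_valve"),
    (frozenset({"pressure_high", "valve_open"}), "close_valve"),  # redundant with rule 0
    (frozenset({"pressure_high", "valve_open"}), "open_vent"),    # conflicts with rules 0 and 1
]
redundant, conflicting = check_rules(rules)
```

    Real systems such as EVA go much further (semantic checks, reachability, test case generation), but the pairwise comparison above captures the spirit of the Structure and Logic Checks.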

  12. Knowledge-based generalization of metabolic models.

    PubMed

    Zhukova, Anna; Sherman, David James

    2014-07-01

    Genome-scale metabolic model reconstruction is a complicated process beginning with (semi-)automatic inference of the reactions participating in the organism's metabolism, followed by many iterations of network analysis and improvement. Despite advances in automatic model inference and analysis tools, reconstruction may still miss some reactions or add erroneous ones. Consequently, a human expert's analysis of the model will continue to play an important role in all the iterations of the reconstruction process. This analysis is hampered by the size of the genome-scale models (typically thousands of reactions), which makes it hard for a human to understand them. To aid human experts in curating and analyzing metabolic models, we have developed a method for knowledge-based generalization that provides a higher-level view of a metabolic model, masking its inessential details while presenting its essential structure. The method groups biochemical species in the model into semantically equivalent classes based on the ChEBI ontology, identifies reactions that become equivalent with respect to the generalized species, and factors those reactions into generalized reactions. Generalization allows curators to quickly identify divergences from the expected structure of the model, such as alternative paths or missing reactions, that are the priority targets for further curation. We have applied our method to genome-scale yeast metabolic models and shown that it improves understanding by helping to identify both specificities and potential errors. PMID:24766276
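
    The grouping-and-factoring step described above can be sketched in a few lines (the miniature species-to-class mapping and the two reactions below are invented stand-ins for ChEBI classes and a real genome-scale model):

```python
# Illustrative sketch (not the authors' implementation) of knowledge-based
# generalization: species are mapped to semantically equivalent ontology
# classes, and reactions that become identical under that mapping are
# factored into a single generalized reaction.

ancestor = {  # species -> equivalence class (stand-in for a ChEBI ancestor)
    "palmitate": "fatty acid",
    "stearate": "fatty acid",
    "palmitoyl-CoA": "fatty acyl-CoA",
    "stearoyl-CoA": "fatty acyl-CoA",
    "CoA": "CoA",
    "ATP": "ATP",
}

reactions = [
    ({"palmitate", "CoA", "ATP"}, {"palmitoyl-CoA"}),
    ({"stearate", "CoA", "ATP"}, {"stearoyl-CoA"}),
]

def generalize(reactions, ancestor):
    """Count how many concrete reactions collapse into each generalized one."""
    seen = {}
    for substrates, products in reactions:
        key = (frozenset(ancestor[s] for s in substrates),
               frozenset(ancestor[p] for p in products))
        seen[key] = seen.get(key, 0) + 1
    return seen

generalized = generalize(reactions, ancestor)
```

    Here both concrete reactions factor into one generalized "fatty acid + CoA + ATP -> fatty acyl-CoA" reaction; a curator inspecting the generalized view would immediately notice if one expected variant were missing.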

  13. Sustaining knowledge in the neutron generator community and benchmarking study. Phase II.

    SciTech Connect

    Huff, Tameka B.; Stubblefield, William Anthony; Cole, Benjamin Holland, II; Baldonado, Esther

    2010-08-01

    This report documents the second phase of work under the Sustainable Knowledge Management (SKM) project for the Neutron Generator organization at Sandia National Laboratories. Previous work under this project is documented in SAND2008-1777, Sustaining Knowledge in the Neutron Generator Community and Benchmarking Study. Knowledge management (KM) systems are necessary to preserve critical knowledge within organizations. A successful KM program should focus on people and the process for sharing, capturing, and applying knowledge. The Neutron Generator organization is developing KM systems to ensure knowledge is not lost. A benchmarking study involving site visits to outside industry plus additional resource research was conducted during this phase of the SKM project. The findings presented in this report are recommendations for making an SKM program successful. The recommendations are activities that promote sharing, capturing, and applying knowledge. The benchmarking effort, including the site visits to Toyota and Halliburton, provided valuable information on how the SEA KM team could incorporate a KM solution for not just the neutron generators (NG) community but the entire laboratory. The laboratory needs a KM program that allows members of the workforce to access, share, analyze, manage, and apply knowledge. KM activities, such as communities of practice (COP) and sharing best practices, provide a solution towards creating an enabling environment for KM. As more and more people leave organizations through retirement and job transfer, the need to preserve knowledge is essential. Creating an environment for the effective use of knowledge is vital to achieving the laboratory's mission.

  14. Ontology-based knowledge base model construction-OntoKBCF.

    PubMed

    Jing, Xia; Kay, Stephen; Hardiker, Nicholas; Marley, Tom

    2007-01-01

    Semantic web technologies are used in the construction of a bio-health knowledge base model, which, when coupled with an Electronic Health Record (EHR), is to be used by clinicians. Specifically, this ontology provides the basis for a domain knowledge resource that attempts to bridge biological and clinical information. The prototype is focused on a Cystic Fibrosis exemplar, and the content of the model includes: Cochrane reviews; a time-oriented description; gene therapy; and the most common cystic fibrosis gene mutations. The facts within the model range from nucleo-base mutation and amino acid change to clinical phenotype. The knowledge is represented by layers from the micro level to the macro level. Here, emphasis is placed upon the details between levels (i.e., the vertical axis) and these are made available to bridge the knowledge from different levels. The description of gender, age, mutation and clinical manifestations are clues for matching points within an EHR system. OWL is the ontology representation language used and the output from Protégé-OWL is an XML-based file format, which facilitates further application and communication.

  15. Knowledge-based reusable software synthesis system

    NASA Technical Reports Server (NTRS)

    Donaldson, Cammie

    1989-01-01

    The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.

  16. Improving the implementation of evidence-based practice: a knowledge management perspective.

    PubMed

    Sandars, John; Heller, Richard

    2006-06-01

    Experience of knowledge management initiatives in non-health care organizations can offer useful insights, and strategies, to implement evidence-based practice in health care. Knowledge management offers a structured process for the generation, storage, distribution and application of knowledge in organizations. This includes both tacit knowledge (personal experience) and explicit knowledge (evidence). Communities of practice are a key component of knowledge management and have been recognized to be essential for the implementation of change in organizations. It is within communities of practice that tacit knowledge is actively integrated with explicit knowledge. Organizational factors that limit the development of knowledge management, including communities of practice, in non-health care organizations need to be overcome if the potential is to be achieved within health care.

  17. Case-based reasoning: The marriage of knowledge base and data base

    NASA Technical Reports Server (NTRS)

    Pulaski, Kirt; Casadaban, Cyprian

    1988-01-01

    The coupling of data and knowledge has a synergistic effect when building an intelligent data base. The goal is to integrate the data and knowledge almost to the point of indistinguishability, permitting them to be used interchangeably. Examples given in this paper suggest that Case-Based Reasoning is a more integrated way to link data and knowledge than pure rule-based reasoning.
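
    A minimal sketch makes the data/knowledge coupling concrete: in case-based reasoning, stored cases are plain data records, and "reasoning" is retrieval of the most similar case, so the same store serves as both database and knowledge base. The case attributes and diagnoses below are invented for illustration:

```python
# Toy case-based reasoning retrieval step (invented example, not from the
# paper): each case pairs a data record with the knowledge it carries.

cases = [
    ({"temp": 90, "vibration": 2}, "bearing wear"),
    ({"temp": 40, "vibration": 9}, "loose mount"),
    ({"temp": 45, "vibration": 1}, "normal"),
]

def similarity(a, b):
    # negated Manhattan distance over the problem's numeric attributes
    return -sum(abs(a[k] - b[k]) for k in a)

def retrieve(problem, cases):
    """Return the stored case most similar to the new problem."""
    return max(cases, key=lambda case: similarity(problem, case[0]))

_, diagnosis = retrieve({"temp": 88, "vibration": 3}, cases)
```

    A rule-based system would need an explicit rule for every region of the attribute space; here the cases themselves are both the data and the knowledge, queried interchangeably.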

  18. Generating HRD Related "General Knowledge" from Mode 2 "Design Science" Research: A Cumulative Study of Manager and Managerial Leader Effectiveness

    ERIC Educational Resources Information Center

    Hamlin, Robert G.

    2007-01-01

    This paper illustrates how Mode 2 "design science" research can generate HRD related "general knowledge" in support of evidence-based practice. It describes a "derived-etic" study that compares and contrasts the findings of six "emic" studies previously carried out within six different public and private/corporate sector organizations in…

  19. The representation of knowledge within model-based control systems

    SciTech Connect

    Weygand, D.P.; Koul, R.

    1987-01-01

    Representation of knowledge in artificially intelligent systems is discussed. Types of knowledge that might need to be represented in AI systems are listed, and include knowledge about objects, events, knowledge about how to do things, and knowledge about what human beings know (meta-knowledge). The use of knowledge in AI systems is discussed in terms of acquiring and retrieving knowledge and reasoning about known facts. Different kinds of reasoning or representation are then described, with some examples given. These include formal reasoning or logical representation, which is related to mathematical logic; production systems, which are based on the idea of condition-action pairs (productions); procedural reasoning, which uses pre-formed plans to solve problems; frames, which provide a structure for representing knowledge in an organized manner; and direct analogical representations, which represent knowledge in a manner that permits some observation without deduction. (LEW)

  20. Processing large sensor data sets for safeguards : the knowledge generation system.

    SciTech Connect

    Thomas, Maikel A.; Smartt, Heidi Anne; Matthews, Robert F.

    2012-04-01

    Modern nuclear facilities, such as reprocessing plants, present inspectors with significant challenges due in part to the sheer amount of equipment that must be safeguarded. The Sandia-developed and patented Knowledge Generation system was designed to automatically analyze large amounts of safeguards data to identify anomalous events of interest by comparing sensor readings with those expected from a process of interest and operator declarations. This paper describes a demonstration of the Knowledge Generation system using simulated accountability tank sensor data to represent part of a reprocessing plant. The demonstration indicated that Knowledge Generation has the potential to address several problems critical to the future of safeguards. It could be extended to facilitate remote inspections and trigger random inspections. Knowledge Generation could analyze data to establish trust hierarchies, to facilitate safeguards use of operator-owned sensors.
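
    The core comparison of sensor readings against operator declarations can be sketched as follows (the tolerance, the declaration model, and the data are invented; the actual Knowledge Generation system correlates many sensors and a process model rather than a single stream):

```python
# Hypothetical sketch: flag readings that diverge from what an operator
# declaration predicts, the simplest form of declaration-based anomaly
# detection over safeguards sensor data.

def find_anomalies(readings, expected, tolerance):
    """Return indices where |reading - expected| exceeds the tolerance."""
    return [i for i, (r, e) in enumerate(zip(readings, expected))
            if abs(r - e) > tolerance]

declared_level = [10.0, 10.0, 25.0, 25.0, 10.0]  # levels from the declaration
measured_level = [10.1,  9.8, 24.9, 31.0, 10.2]  # simulated tank sensor data
anomalies = find_anomalies(measured_level, declared_level, tolerance=1.0)
```

    Only the fourth reading diverges from the declaration, so only it would be raised to an inspector; everything consistent with the declared process is filtered out, which is how such a system addresses the information overload problem.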

  1. Managing, Understanding, Applying, and Creating Knowledge in the Information Age: Next-Generation Challenges and Opportunities

    ERIC Educational Resources Information Center

    Goldman, Susan R.; Scardamalia, Marlene

    2013-01-01

    New media, new knowledge practices, and concepts point to the need for greater understanding of cognitive processes underlying knowledge acquisition and generation in open informational worlds. The authors of the articles in this special issue address cognitive and instructional challenges surrounding multiple document comprehension--a…

  2. The Effects of Domain Knowledge and Instructional Manipulation on Creative Idea Generation

    ERIC Educational Resources Information Center

    Hao, Ning

    2010-01-01

    The experiment was designed to explore the effects of domain knowledge, instructional manipulation, and the interaction between them on creative idea generation. Three groups of participants who respectively possessed the domain knowledge of biology, sports, or neither were asked to finish two tasks: imagining an extraterrestrial animal and…

  3. A Knowledge-Based System Developer for aerospace applications

    NASA Technical Reports Server (NTRS)

    Shi, George Z.; Wu, Kewei; Fensky, Connie S.; Lo, Ching F.

    1993-01-01

    A prototype Knowledge-Based System Developer (KBSD) has been developed for aerospace applications by utilizing artificial intelligence technology. The KBSD directly acquires knowledge from domain experts through a graphical interface and then builds expert systems from that knowledge. This raises the state of the art of knowledge acquisition/expert system technology to a new level by lessening the need for skilled knowledge engineers. The feasibility, applicability, and efficiency of the proposed concept were established, justifying continued development of the prototype into a full-scale, general-purpose knowledge-based system developer. The KBSD has great commercial potential. It will provide a marketable software shell which alleviates the need for knowledge engineers and increases productivity in the workplace. The KBSD will therefore make knowledge-based systems available to a large portion of industry.

  4. Route Generation for a Synthetic Character (BOT) Using a Partial or Incomplete Knowledge Route Generation Algorithm in UT2004 Virtual Environment

    NASA Technical Reports Server (NTRS)

    Hanold, Gregg T.; Hanold, David T.

    2010-01-01

    This paper presents a new Route Generation Algorithm that accurately and realistically represents human route planning and navigation for Military Operations in Urban Terrain (MOUT). The accuracy of this algorithm in representing human behavior is measured using the Unreal Tournament 2004 (UT2004) game engine to provide the simulation environment in which the differences between the routes taken by the human player and those of a Synthetic Agent (BOT) executing the A-star algorithm and the new Route Generation Algorithm can be compared. The new Route Generation Algorithm computes the BOT route based on partial or incomplete knowledge received from the UT2004 game engine during game play. To allow BOT navigation to occur continuously throughout game play with incomplete knowledge of the terrain, a spatial network model of the UT2004 MOUT terrain is captured and stored in an Oracle 11g Spatial Data Object (SDO). The SDO allows a partial data query to be executed to generate continuous route updates based on the terrain knowledge and the stored dynamic BOT, player, and environmental parameters returned by the query. The partial data query permits dynamic adjustment of the planned routes by the Route Generation Algorithm based on the current state of the environment during a simulation. The dynamic nature of this algorithm allows the BOT to more accurately mimic the routes taken by a human executing under the same conditions, thereby improving the realism of the BOT in a MOUT simulation environment.
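
    The replan-on-discovery idea behind routing with partial knowledge can be sketched on a small grid (this toy BFS replanner is an invented illustration of the general technique, not the paper's algorithm or its UT2004/spatial-database machinery):

```python
# Hypothetical sketch: an agent plans over what it *knows*, discovers
# obstacles only on contact, and replans with the updated knowledge.
from collections import deque

def bfs_path(start, goal, blocked, size):
    """Shortest grid path treating only *known* obstacles as blocked."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in prev):
                prev[nxt] = cur
                queue.append(nxt)
    return None

def navigate(start, goal, true_obstacles, size=4):
    """Replan whenever the next step turns out to be blocked."""
    known, pos, route = set(), start, [start]
    while pos != goal:
        path = bfs_path(pos, goal, known, size)
        step = path[1]
        if step in true_obstacles:  # partial knowledge: sensed only now
            known.add(step)
            continue                # replan with the updated knowledge
        pos = step
        route.append(pos)
    return route

route = navigate((0, 0), (3, 0), true_obstacles={(1, 0), (1, 1)})
```

    The agent starts with an empty obstacle map, commits to a route, and revises it as the environment reveals itself, which is the essence of continuous route updates under incomplete terrain knowledge.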

  5. Creating a knowledge base of biological research papers

    SciTech Connect

    Hafner, C.D.; Baclawski, K.; Futrelle, R.P.; Fridman, N.

    1994-12-31

    Intelligent text-oriented tools for representing and searching the biological research literature are being developed, which combine object-oriented databases with artificial intelligence techniques to create a richly structured knowledge base of Materials and Methods sections of biological research papers. A knowledge model of experimental processes, biological and chemical substances, and analytical techniques is described, based on the representation techniques of taxonomic semantic nets and knowledge frames. Two approaches to populating the knowledge base with the contents of biological research papers are described: natural language processing and an interactive knowledge definition tool.

  6. [Trends on generation and reproduction of knowledge about economic evaluation and health].

    PubMed

    Arredondo, A; Parada, I

    2001-08-01

    This paper identifies the trends and recent progress in the generation and reproduction of knowledge on health economic evaluation. Analysis is organized along nine public health action fields, namely: health determinants and predictors, economic value of health, healthcare demand, healthcare supply, microeconomic evaluation of healthcare, healthcare market balance, evaluation of policy instruments, general evaluation of the health system, and healthcare planning, regulation and supervision. Each action field is defined to place the reader in the proper setting and level of analysis. In addition, thematic research topics developed in each action field are proposed and discussed. The generation and reproduction of knowledge on the different action fields was based on a review of the bibliographic databases MEDLINE and LILACS for the 1992-2000 period. Results lead to the conclusion that the development and application of economic evaluation of healthcare has been uneven across countries and that there has been a steady increase in applications since 1994, the year healthcare reform began in Latin America.

  7. System Engineering for the NNSA Knowledge Base

    NASA Astrophysics Data System (ADS)

    Young, C.; Ballard, S.; Hipp, J.

    2006-05-01

    To improve ground-based nuclear explosion monitoring capability, GNEM R&E (Ground-based Nuclear Explosion Monitoring Research & Engineering) researchers at the national laboratories have collected an extensive set of raw data products. These raw data are used to develop higher level products (e.g. 2D and 3D travel time models) to better characterize the Earth at regional scales. The processed products and selected portions of the raw data are stored in an archiving and access system known as the NNSA (National Nuclear Security Administration) Knowledge Base (KB), which is engineered to meet the requirements of operational monitoring authorities. At its core, the KB is a data archive, and the effectiveness of the KB is ultimately determined by the quality of the data content, but access to that content is completely controlled by the information system in which that content is embedded. Developing this system has been the task of Sandia National Laboratories (SNL), and in this paper we discuss some of the significant challenges we have faced and the solutions we have engineered. One of the biggest system challenges with raw data has been integrating database content from the various sources to yield an overall KB product that is comprehensive, thorough and validated, yet minimizes the amount of disk storage required. Researchers at different facilities often use the same data to develop their products, and this redundancy must be removed in the delivered KB, ideally without requiring any additional effort on the part of the researchers. Further, related data content must be grouped together for KB user convenience. Initially SNL used whatever tools were already available for these tasks, and did the other tasks manually. The ever-growing volume of KB data to be merged, as well as a need for more control of merging utilities, led SNL to develop our own Java software package, consisting of a low-level database utility library upon which we have built several

  8. Organizational culture and knowledge management in the electric power generation industry

    NASA Astrophysics Data System (ADS)

    Mayfield, Robert D.

    Scarcity of knowledge and expertise is a challenge in the electric power generation industry. Today's most pervasive knowledge issues result from employee turnover and the constant movement of employees from project to project inside organizations. To address scarcity of knowledge and expertise, organizations must enable employees to capture, transfer, and use mission-critical explicit and tacit knowledge. The purpose of this qualitative grounded theory research was to examine the relationship between and among organizations within the electric power generation industry developing knowledge management processes designed to retain, share, and use the industry, institutional, and technical knowledge upon which the organizations depend. The research findings show that knowledge management is a business problem within the domain of information systems and management. The risks associated with losing mission critical-knowledge can be measured using metrics on employee retention, recruitment, productivity, training and benchmarking. Certain enablers must be in place in order to engage people, encourage cooperation, create a knowledge-sharing culture, and, ultimately change behavior. The research revealed the following change enablers that support knowledge management strategies: (a) training - blended learning, (b) communities of practice, (c) cross-functional teams, (d) rewards and recognition programs, (e) active senior management support, (f) communication and awareness, (g) succession planning, and (h) team organizational culture.

  9. Joint Knowledge Generation Between Climate Science and Infrastructure Engineering

    NASA Astrophysics Data System (ADS)

    Stoner, A. M. K.; Hayhoe, K.; Jacobs, J. M.

    2015-12-01

    Over the past decade the engineering community has become increasingly aware of the need to incorporate climate projections into the planning and design of sensitive infrastructure. However, this is a task that is easier said than done. This presentation will discuss some of the successes and hurdles experienced over the past year, from a climate scientist's perspective, in working with engineers in infrastructure research and applied engineering through the Infrastructure & Climate Network (ICNet). Engineers rely on strict building codes and ordinances, and can be the subject of lawsuits if those codes are not followed. Matters are further complicated by the uncertainty inherent to climate projections, which includes short-term natural variability as well as the influence of scientific uncertainty and even human behavior on the rate and magnitude of change. Climate scientists typically address uncertainty by creating projections based on multiple models following different future scenarios. This uncertainty is difficult to incorporate into engineering projects, however, because engineers cannot build two different bridges, one allowing for a lower amount of change and another for a higher amount. More often than not there is a considerable difference between the costs of building two such bridges, which means that available funds often are the deciding factor. Discussions of climate science are often well received by engineers who work in the research area of infrastructure; going a step further, however, and implementing it in applied engineering projects can be challenging. This presentation will discuss some of the challenges and opportunities inherent to collaborations between climate scientists and transportation engineers, drawing from a range of studies including truck weight restrictions on roads during the spring thaw and bridge deck performance under environmental forcings.

  10. Agent-Based Knowledge Discovery for Modeling and Simulation

    SciTech Connect

    Haack, Jereme N.; Cowell, Andrew J.; Marshall, Eric J.; Fligg, Alan K.; Gregory, Michelle L.; McGrath, Liam R.

    2009-09-15

    This paper describes an approach to using agent technology to extend the automated discovery mechanism of the Knowledge Encapsulation Framework (KEF). KEF is a suite of tools to enable the linking of knowledge inputs (relevant, domain-specific evidence) to modeling and simulation projects, as well as other domains that require an effective collaborative workspace for knowledge-based tasks. This framework can be used to capture evidence (e.g., trusted material such as journal articles and government reports), discover new evidence (covering both trusted and social media), enable discussions surrounding domain-specific topics and provide automatically generated semantic annotations for improved corpus investigation. The current KEF implementation is presented within a semantic wiki environment, providing a simple but powerful collaborative space for team members to review, annotate, discuss and align evidence with their modeling frameworks. The novelty in this approach lies in the combination of automatically tagged and user-vetted resources, which increases user trust in the environment, leading to ease of adoption for the collaborative environment.

  11. Knowledge-based environment for hierarchical modeling and simulation

    SciTech Connect

    Kim, Taggon.

    1988-01-01

    This dissertation develops a knowledge-based environment for hierarchical modeling and simulation of discrete-event systems as the major part of a longer, ongoing research project in artificial intelligence and distributed simulation. In developing the environment, a knowledge representation framework for modeling and simulation, which unifies structural and behavioral knowledge of simulation models, is proposed by incorporating knowledge-representation schemes from artificial intelligence within simulation models. The knowledge base created using the framework is composed of a structural knowledge base called the entity structure base and a behavioral knowledge base called the model base. The DEVS-Scheme, a realization of the DEVS (Discrete Event System Specification) formalism in a LISP-based, object-oriented environment, is extended to facilitate the specification of behavioral knowledge of models, especially for kernel models that are suited to model massively parallel computer architectures. The ESP Scheme, a realization of the entity structure formalism in a frame-theoretic representation, is extended to represent structural knowledge of models and to manage it in the structural knowledge base.
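
    The behavioral side of the DEVS formalism can be illustrated with a toy atomic model: a state, a time-advance function, an output function, and an internal transition (this Python "pulse generator" and driver loop are invented for illustration; DEVS-Scheme itself is a LISP-based environment):

```python
# Toy atomic DEVS model: the four behavioral ingredients are the state,
# ta(s) (time advance), lambda(s) (output), and delta_int(s) (internal
# transition). Invented example, not DEVS-Scheme code.

class PulseGenerator:
    """Atomic model emitting 'pulse' every `period` time units."""
    def __init__(self, period):
        self.period = period
        self.count = 0              # state: number of pulses emitted so far
    def time_advance(self):
        return self.period          # ta(s): time until the next internal event
    def output(self):
        return "pulse"              # lambda(s): output produced at that event
    def internal(self):
        self.count += 1             # delta_int(s): new state after the event

def simulate(model, until):
    """Minimal event-driven loop: jump from internal event to internal event."""
    t, trace = 0.0, []
    while t + model.time_advance() <= until:
        t += model.time_advance()
        trace.append((t, model.output()))
        model.internal()
    return trace

trace = simulate(PulseGenerator(period=2.0), until=7.0)
```

    A full DEVS environment adds external transitions and hierarchical coupling of such atomic models, which is what the entity structure base organizes.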

  12. An object-based methodology for knowledge representation

    SciTech Connect

    Kelsey, R.L.; Hartley, R.T.; Webster, R.B.

    1997-11-01

    An object-based methodology for knowledge representation is presented. The constructs and notation of the methodology are described and illustrated with examples. The "blocks world," a classic artificial intelligence problem, is used to illustrate some of the features of the methodology, including perspectives and events. Representing knowledge with perspectives can enrich the detail of the knowledge and facilitate potential lines of reasoning. Events allow example uses of the knowledge to be represented along with the contained knowledge. Other features include the extensibility and maintainability of knowledge represented in the methodology.

  13. Knowledge Sharing in an American Multinational Company Based in Malaysia

    ERIC Educational Resources Information Center

    Ling, Chen Wai; Sandhu, Manjit S.; Jain, Kamal Kishore

    2009-01-01

    Purpose: This paper seeks to examine the views of executives working in an American based multinational company (MNC) about knowledge sharing, barriers to knowledge sharing, and strategies to promote knowledge sharing. Design/methodology/approach: This study was carried out in phases. In the first phase, a topology of organizational mechanisms for…

  14. Knowledge-Based Aid: A Four Agency Comparative Study

    ERIC Educational Resources Information Center

    McGrath, Simon; King, Kenneth

    2004-01-01

    Part of the response of many development cooperation agencies to the challenges of globalisation, ICTs and the knowledge economy is to emphasise the importance of knowledge for development. This paper looks at the discourses and practices of ''knowledge-based aid'' through an exploration of four agencies: the World Bank, DFID, Sida and JICA. It…

  15. Weather, knowledge base and life-style

    NASA Astrophysics Data System (ADS)

    Bohle, Martin

    2015-04-01

    Why main-stream curiosity for earth-science topics, that is, why appraise these topics as being of public interest? Namely, to influence how humankind's activities intersect the geosphere. How to main-stream that curiosity for earth-science topics? Namely, by weaving diverse concerns into common threads drawing on a wide range of perspectives: be it the beauty or particularity of ordinary or special phenomena, evaluating hazards for or from mundane environments, or connecting scholarly investigation with the concerns of citizens at large; using for the threading traditional or modern media, arts or story-telling. Three examples: First, "weather"; weather is a topic of primordial interest for most people: weather impacts on human lives, be it for settlement, food, mobility, hunting, fishing, or battle. It is the single earth-science topic that went "prime-time" when, in the early 1950s, the broadcasting of weather forecasts started and meteorologists began presenting their work to the public daily. Second, "knowledge base"; earth-sciences are relevant for modern societies' economies and value setting: earth-sciences provide insights into the evolution of life-bearing planets, the functioning of Earth's systems, and the impact of humankind's activities on biogeochemical systems on Earth. These insights bear on the production of goods, living conditions and individual well-being. Third, "life-style"; citizens' urban culture limits their experiential connections: earth-science related phenomena are witnessed rarely, even most weather phenomena. In the past, traditional rural communities mediated their rich experiences through earth-centric story-telling. In the course of the global urbanisation process this culture has given place to society-centric story-telling. 
Only recently has anthropogenic global change triggered discussions on geoengineering, hazard mitigation and demographics, which, interwoven with arts, linguistics and cultural histories, offer a rich narrative

  16. Knowledge Generation

    SciTech Connect

    BRABSON,JOHN M.; DELAND,SHARON M.

    2000-11-02

    Unattended monitoring systems are being studied as a means of reducing both the cost and intrusiveness of present nuclear safeguards approaches. Such systems present the classic information overload problem to anyone trying to interpret the resulting data, not only because of the sheer quantity of data but also because of the problems inherent in trying to correlate information from more than one source. As a consequence, analysis efforts to date have mostly concentrated on checking thresholds or diagnosing failures. Clearly, more sophisticated analysis techniques are required to enable automated verification of expected activity-level concepts in order to make automated judgments about safety, sensor system integrity, sensor data quality, diversion, and accountancy.

  17. A Knowledge Representation Language for Large Knowledge Bases and "Intelligent" Information Retrieval Systems.

    ERIC Educational Resources Information Center

    Zarri, Gian Piero

    1990-01-01

    Describes a conceptual Knowledge Representation Language (KRL), developed at the French National Center for Scientific Research, that is used for the construction and use of Large Knowledge Bases (LKBs) and/or Intelligent Information Retrieval Systems (IIRSs). Semantic factors are discussed, and the specialization hierarchies used are explained.…

  18. Approximate Degrees of Similarity between a User's Knowledge and the Tutorial Systems' Knowledge Base

    ERIC Educational Resources Information Center

    Mogharreban, Namdar

    2004-01-01

    A typical tutorial system functions by means of interaction between four components: the expert knowledge base component, the inference engine component, the learner's knowledge component and the user interface component. In typical tutorial systems the interaction and the sequence of presentation as well as the mode of evaluation are…

  19. Construction of Expert Knowledge Monitoring and Assessment System Based on Integral Method of Knowledge Evaluation

    ERIC Educational Resources Information Center

    Golovachyova, Viktoriya N.; Menlibekova, Gulbakhyt Zh.; Abayeva, Nella F.; Ten, Tatyana L.; Kogaya, Galina D.

    2016-01-01

    Using computer-based monitoring systems that rely on tests could be the most effective way of knowledge evaluation. The problem of objective knowledge assessment by means of testing takes on a new dimension in the context of new paradigms in education. The analysis of the existing test methods enabled us to conclude that tests with selected…

  20. Genetic counselors' (GC) knowledge, awareness, understanding of clinical next-generation sequencing (NGS) genomic testing.

    PubMed

    Boland, P M; Ruth, K; Matro, J M; Rainey, K L; Fang, C Y; Wong, Y N; Daly, M B; Hall, M J

    2015-12-01

    Genomic tests are increasingly complex, less expensive, and more widely available with the advent of next-generation sequencing (NGS). We assessed knowledge and perceptions among genetic counselors pertaining to NGS genomic testing via an online survey. Associations between selected characteristics and perceptions were examined. Recent education on NGS testing was common, but practical experience was limited. Perceived understanding of clinical NGS was modest, specifically concerning tumor testing. Greater perceived understanding of clinical NGS testing correlated with more time spent in cancer-related counseling, exposure to NGS testing, and NGS-focused education. Substantial disagreement about the role of counseling for tumor-based testing was seen. Finally, a majority of counselors agreed with the need for more education about clinical NGS testing, supporting this approach to optimizing implementation.

  1. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs

    PubMed Central

    Wright, A.; Krousel-Wood, M.; Thomas, E. J.; McCoy, J. A.; Sittig, D. F.

    2015-01-01

    Summary Background Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. Objective We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. Methods We first retrieved medications and problems entered in the electronic health record by clinicians during routine care over a six-month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair, then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. Results The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. Conclusions We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes. PMID:26171079
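
The link-frequency/link-ratio step can be sketched as follows, under an assumed reading (the paper's exact definitions may differ): link frequency as the raw co-occurrence count of a problem-medication pair, and link ratio as that count divided by the medication's total occurrence count, with a clinician-chosen threshold on the ratio.

```python
from collections import Counter

def build_pair_stats(records):
    """records: iterable of (problem, medication) pairs observed in the EHR.
    Returns {(problem, medication): (link_frequency, link_ratio)}."""
    pair_counts = Counter(records)
    med_counts = Counter(med for _, med in records)
    return {
        (prob, med): (n, n / med_counts[med])
        for (prob, med), n in pair_counts.items()
    }

def knowledge_base(records, min_ratio):
    """Keep pairs whose link ratio meets a clinician-chosen threshold cutoff."""
    return {pair for pair, (n, ratio) in build_pair_stats(records).items()
            if ratio >= min_ratio}
```

A pair that dominates a medication's usage (e.g. lisinopril mostly linked to hypertension) survives the cutoff, while an incidental co-occurrence (lisinopril with cough) does not.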

  2. Children's ability to impute inferentially based knowledge.

    PubMed

    Rai, Roshan; Mitchell, Peter

    2006-01-01

    Do young children appreciate the importance of access to premises when judging what another person knows? In Experiment 1, 5-year-olds (N=31) were sensitive to another person's access to premises when predicting that person's ability to point to a target after eliminating alternatives in a set of 3 cartoon characters. Experiment 2 replicated the finding when 5- to 6-year-olds (N=102) judged who the other person thought the target was, and whether the other person knew who the target was. Experiment 3 demonstrated that children aged 5-7 years (N=107) more successfully imputed inference by elimination than syllogistical inferential knowledge. Findings suggest that an early understanding of inference by elimination offers a route into understanding that people can sometimes gain knowledge without direct perceptual access.

  3. Advancing the hydrogen safety knowledge base

    SciTech Connect

    Weiner, S. C.

    2014-08-29

    The International Energy Agency's Hydrogen Implementing Agreement (IEA HIA) was established in 1977 to pursue collaborative hydrogen research and development and information exchange among its member countries. Information and knowledge dissemination is a key aspect of the work within IEA HIA tasks, and case studies, technical reports and presentations/publications often result from the collaborative efforts. The work conducted in hydrogen safety under Task 31 and its predecessor, Task 19, can positively impact the objectives of national programs even in cases for which a specific task report is not published. As a result, the interactions within Task 31 illustrate how technology information and knowledge exchange among participating hydrogen safety experts serve the objectives intended by the IEA HIA.

  4. Advancing the hydrogen safety knowledge base

    DOE PAGES

    Weiner, S. C.

    2014-08-29

    The International Energy Agency's Hydrogen Implementing Agreement (IEA HIA) was established in 1977 to pursue collaborative hydrogen research and development and information exchange among its member countries. Information and knowledge dissemination is a key aspect of the work within IEA HIA tasks, and case studies, technical reports and presentations/publications often result from the collaborative efforts. The work conducted in hydrogen safety under Task 31 and its predecessor, Task 19, can positively impact the objectives of national programs even in cases for which a specific task report is not published. As a result, the interactions within Task 31 illustrate how technology information and knowledge exchange among participating hydrogen safety experts serve the objectives intended by the IEA HIA.

  5. How Quality Improvement Practice Evidence Can Advance the Knowledge Base.

    PubMed

    O'Rourke, Hannah M; Fraser, Kimberly D

    2016-01-01

    Recommendations for the evaluation of quality improvement interventions have been made in order to improve the evidence base of whether, to what extent, and why quality improvement interventions affect chosen outcomes. The purpose of this article is to articulate why these recommendations are appropriate to improve the rigor of quality improvement intervention evaluation as a research endeavor, but inappropriate for the purposes of everyday quality improvement practice. To support our claim, we describe the differences between quality improvement interventions that occur for the purpose of practice as compared to research. We then carefully consider how feasibility, ethics, and the aims of evaluation each impact how quality improvement interventions that occur in practice, as opposed to research, can or should be evaluated. Recommendations that fit the evaluative goals of practice-based quality improvement interventions are needed to support fair appraisal of the distinct evidence they produce. We describe a current debate on the nature of evidence to assist in reenvisioning how quality improvement evidence generated from practice might complement that generated from research, and contribute in a value-added way to the knowledge base. PMID:27584696

  6. A knowledge based system for scientific data visualization

    NASA Technical Reports Server (NTRS)

    Senay, Hikmet; Ignatius, Eve

    1992-01-01

    A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design, and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques having applicability in earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis and medical imaging.
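
The design methodology, a sequence of transformations from data set to partitions to primitives to a composite design, can be sketched as below. The variable kinds, field names, and the kind-to-primitive table are hypothetical stand-ins, not VISTA's actual perceptual knowledge.

```python
# Illustrative pipeline: dataset -> partitions -> primitives -> composite design.

def decompose(dataset):
    """Split a dataset description into per-variable partitions."""
    return [{"name": n, "kind": k} for n, k in dataset["variables"]]

# Toy perceptual-effectiveness table (assumed, not VISTA's real rules):
PRIMITIVE_FOR_KIND = {
    "quantitative": "position",
    "ordinal": "color_saturation",
    "nominal": "shape",
}

def map_to_primitives(partitions):
    """Assign each partition the visualization primitive for its data kind."""
    return [(p["name"], PRIMITIVE_FOR_KIND[p["kind"]]) for p in partitions]

def compose(mapped):
    """Combine the per-partition primitives into one composite design."""
    return {"layers": mapped}

dataset = {"variables": [("temperature", "quantitative"), ("region", "nominal")]}
design = compose(map_to_primitives(decompose(dataset)))
```

The user-modification step the abstract mentions would then edit the `layers` list before rendering.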

  7. Knowledge base and neural network approach for protein secondary structure prediction.

    PubMed

    Patel, Maulika S; Mazumdar, Himanshu S

    2014-11-21

    Protein structure prediction is of great relevance given the abundant genomic and proteomic data generated by the genome sequencing projects. Protein secondary structure prediction is addressed as a subtask in determining the protein tertiary structure and function. In this paper, a novel algorithm, KB-PROSSP-NN, which is a combination of a knowledge base and modeling of the exceptions in the knowledge base using neural networks for protein secondary structure prediction (PSSP), is proposed. The knowledge base is derived from a proteomic sequence-structure database and consists of the statistics of association between the 5-residue words and corresponding secondary structure. The predicted results obtained using the knowledge base are refined with a backpropagation neural network algorithm. The neural net models the exceptions of the knowledge base. Q3 accuracies of 90% and 82% are achieved on the RS126 and CB396 test sets respectively, which suggests an improvement over existing state-of-the-art methods.
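
The knowledge-base half of KB-PROSSP-NN, statistics associating 5-residue words with secondary structure, can be sketched as below. Labeling each word with its central residue's structure is an assumption of this sketch, and the neural-network refinement of exceptions is omitted.

```python
from collections import Counter, defaultdict

def build_knowledge_base(sequences, structures):
    """Count how often each 5-residue word co-occurs with each secondary-structure
    label (here: the label of the word's central residue)."""
    kb = defaultdict(Counter)
    for seq, ss in zip(sequences, structures):
        for i in range(len(seq) - 4):
            kb[seq[i:i + 5]][ss[i + 2]] += 1
    return kb

def predict(kb, seq, default="C"):
    """Predict per-residue structure from the most frequent association;
    unseen positions fall back to a default (coil) label."""
    pred = [default] * len(seq)
    for i in range(len(seq) - 4):
        word = seq[i:i + 5]
        if word in kb:
            pred[i + 2] = kb[word].most_common(1)[0][0]
    return "".join(pred)
```

In the full algorithm, positions where this lookup disagrees with the training labels would be handed to the neural network as "exceptions".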

  8. THINK Back: KNowledge-based Interpretation of High Throughput data.

    PubMed

    Farfán, Fernando; Ma, Jun; Sartor, Maureen A; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Results of high throughput experiments can be challenging to interpret. Current approaches have relied on bulk processing the set of expression levels, in conjunction with easily obtained external evidence, such as co-occurrence. While such techniques can be used to reason probabilistically, they are not designed to shed light on what any individual gene, or a network of genes acting together, may be doing. Our belief is that today we have the information extraction ability and the computational power to perform more sophisticated analyses that consider the individual situation of each gene. The use of such techniques should lead to qualitatively superior results. The specific aim of this project is to develop computational techniques to generate a small number of biologically meaningful hypotheses based on observed results from high throughput microarray experiments, gene sequences, and next-generation sequences. Through the use of relevant known biomedical knowledge, as represented in published literature and public databases, we can generate meaningful hypotheses that will aid biologists to interpret their experimental data. We are currently developing novel approaches that exploit the rich information encapsulated in biological pathway graphs. Our methods perform a thorough and rigorous analysis of biological pathways, using complex factors such as the topology of the pathway graph and the frequency in which genes appear on different pathways, to provide more meaningful hypotheses to describe the biological phenomena captured by high throughput experiments, when compared to other existing methods that only consider partial information captured by biological pathways. PMID:22536867
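
The frequency factor mentioned above, down-weighting genes that appear on many pathways, can be illustrated with a toy inverse-frequency score in the spirit of IDF weighting. This is an illustrative sketch only, not the project's actual scoring method, and the topology factor is omitted.

```python
import math

def gene_pathway_frequency(pathways):
    """pathways: {pathway_name: set of genes}. Returns {gene: #pathways it appears on}."""
    freq = {}
    for genes in pathways.values():
        for g in genes:
            freq[g] = freq.get(g, 0) + 1
    return freq

def score_pathway(pathway_genes, hit_genes, freq, n_pathways):
    """Weight each hit gene by log(N / pathway-frequency), so ubiquitous genes
    contribute less evidence than pathway-specific ones."""
    return sum(math.log(n_pathways / freq[g])
               for g in pathway_genes & hit_genes)
```

A pathway containing a pathway-specific hit gene thus outscores one whose only hits are genes shared by every pathway.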

  9. THINK Back: KNowledge-based Interpretation of High Throughput data

    PubMed Central

    2012-01-01

    Results of high throughput experiments can be challenging to interpret. Current approaches have relied on bulk processing the set of expression levels, in conjunction with easily obtained external evidence, such as co-occurrence. While such techniques can be used to reason probabilistically, they are not designed to shed light on what any individual gene, or a network of genes acting together, may be doing. Our belief is that today we have the information extraction ability and the computational power to perform more sophisticated analyses that consider the individual situation of each gene. The use of such techniques should lead to qualitatively superior results. The specific aim of this project is to develop computational techniques to generate a small number of biologically meaningful hypotheses based on observed results from high throughput microarray experiments, gene sequences, and next-generation sequences. Through the use of relevant known biomedical knowledge, as represented in published literature and public databases, we can generate meaningful hypotheses that will aid biologists to interpret their experimental data. We are currently developing novel approaches that exploit the rich information encapsulated in biological pathway graphs. Our methods perform a thorough and rigorous analysis of biological pathways, using complex factors such as the topology of the pathway graph and the frequency in which genes appear on different pathways, to provide more meaningful hypotheses to describe the biological phenomena captured by high throughput experiments, when compared to other existing methods that only consider partial information captured by biological pathways. PMID:22536867

  10. THINK Back: KNowledge-based Interpretation of High Throughput data.

    PubMed

    Farfán, Fernando; Ma, Jun; Sartor, Maureen A; Michailidis, George; Jagadish, Hosagrahar V

    2012-03-13

    Results of high throughput experiments can be challenging to interpret. Current approaches have relied on bulk processing the set of expression levels, in conjunction with easily obtained external evidence, such as co-occurrence. While such techniques can be used to reason probabilistically, they are not designed to shed light on what any individual gene, or a network of genes acting together, may be doing. Our belief is that today we have the information extraction ability and the computational power to perform more sophisticated analyses that consider the individual situation of each gene. The use of such techniques should lead to qualitatively superior results. The specific aim of this project is to develop computational techniques to generate a small number of biologically meaningful hypotheses based on observed results from high throughput microarray experiments, gene sequences, and next-generation sequences. Through the use of relevant known biomedical knowledge, as represented in published literature and public databases, we can generate meaningful hypotheses that will aid biologists to interpret their experimental data. We are currently developing novel approaches that exploit the rich information encapsulated in biological pathway graphs. Our methods perform a thorough and rigorous analysis of biological pathways, using complex factors such as the topology of the pathway graph and the frequency in which genes appear on different pathways, to provide more meaningful hypotheses to describe the biological phenomena captured by high throughput experiments, when compared to other existing methods that only consider partial information captured by biological pathways.

  11. Bermuda Triangle or three to tango: generation Y, e-health and knowledge management.

    PubMed

    Yee, Kwang Chien

    2007-01-01

    Generation Y workers are slowly gathering critical mass in the healthcare sector. The sustainability of future healthcare is highly dependent on this group of workers. This generation of workers loves technology and thrives in stimulating environments. They have a great thirst for life experience and therefore move from one working environment to another. The healthcare system has a hierarchical operational, information and knowledge structure, which unfortunately might not be the ideal ground to integrate with generation Y. The challenges ahead present a fantastic opportunity for electronic health implementation and knowledge management to flourish. Generation Y workers, however, have very different expectations of technology utilisation, technology design and knowledge presentation. This paper will argue that a clear understanding of this group of workers is essential for researchers in health informatics and knowledge management in order to provide socio-technically integrated solutions for this group of future workers. The sustainability of a quality healthcare system will depend upon the integration of generation Y, health informatics and knowledge management strategies in a re-invented healthcare system. PMID:17911902

  12. Bermuda Triangle or three to tango: generation Y, e-health and knowledge management.

    PubMed

    Yee, Kwang Chien

    2007-01-01

    Generation Y workers are slowly gathering critical mass in the healthcare sector. The sustainability of future healthcare is highly dependent on this group of workers. This generation of workers loves technology and thrives in stimulating environments. They have a great thirst for life experience and therefore move from one working environment to another. The healthcare system has a hierarchical operational, information and knowledge structure, which unfortunately might not be the ideal ground to integrate with generation Y. The challenges ahead present a fantastic opportunity for electronic health implementation and knowledge management to flourish. Generation Y workers, however, have very different expectations of technology utilisation, technology design and knowledge presentation. This paper will argue that a clear understanding of this group of workers is essential for researchers in health informatics and knowledge management in order to provide socio-technically integrated solutions for this group of future workers. The sustainability of a quality healthcare system will depend upon the integration of generation Y, health informatics and knowledge management strategies in a re-invented healthcare system.

  13. Toffler's Powershift: Creating New Knowledge Bases in Higher Education.

    ERIC Educational Resources Information Center

    Powers, Patrick James

    This paper examines the creation of new knowledge bases in higher education in light of the ideas of Alvin Toffler, whose trilogy "Future Shock" (1970), "The Third Wave" (1980), and "Powershift" (1990) focus on the processes, directions, and control of change, respectively. It discusses the increasingly important role that knowledge bases, the…

  14. Applying Knowledge-Based Techniques to Software Development.

    ERIC Educational Resources Information Center

    Harandi, Mehdi T.

    1986-01-01

    Reviews the overall structure and design principles of a knowledge-based programming support tool, the Knowledge-Based Programming Assistant, which is being developed at the University of Illinois Urbana-Champaign. The system's major units (program design, program coding, and intelligent debugging) and additional functions are described. (MBR)

  15. Knowledge Generation via a Simple Grammar Supporting an Intelligent User Interface

    PubMed Central

    Carmony, Lowell A.; Naeymi-Rad, Frank; Rosenthal, Robert; Naeymi-Rad, Shon; Trace, David A.; Rackow, Eric; Weil, Max Harry; Evens, Martha

    1988-01-01

    A dictionary of standard medical terms (called the Feature Dictionary), a grammar to control the format of features, and a standard portable file in which to archive patient data will permit the automatic comparison and evaluation of competing knowledge bases for MEDAS (Medical Emergency Decision Assistance System), as well as provide the user with an intelligent interface for the entry of patient data. The feature dictionary consists of simple binary features such as “Abdominal Pain”, continuous-valued features such as “White Blood Count = 14,000”, and derived or computed features such as “Blood Pressure = Systolic - Diastolic”, but the medical expert describes knowledge to the system in terms of compound features such as “Sex = Female & Age > 2 & Hematocrit 37 to 42”. The new system contains a grammar for compound and complex features and a run time parser to translate these features into reverse Polish notation. The parse trees are used to generate rules to support an intelligent user interface. Thus, the user need only set the binary features and enter the values for the continuous features and then the system at run time automatically sets the derived features as well as the range and compound features that are needed for MEDAS' Bayesian multi-membership Inference.
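
The compound-feature translation described above (e.g. "Sex = Female & Age > 2" into reverse Polish notation) can be sketched with a minimal shunting-yard pass. This is an illustrative toy, not the MEDAS grammar: the operator set and precedences are assumptions, and range constructs like "37 to 42" and parentheses are not handled.

```python
# Toy operator precedences (assumed): comparisons bind tighter than conjunction.
PRECEDENCE = {"&": 1, "|": 1, "=": 2, ">": 2, "<": 2}

def to_rpn(expression):
    """Translate a whitespace-tokenized infix feature expression to RPN
    using the shunting-yard algorithm."""
    output, stack = [], []
    for tok in expression.split():
        if tok in PRECEDENCE:
            # Pop operators of equal or higher precedence before pushing.
            while stack and PRECEDENCE[stack[-1]] >= PRECEDENCE[tok]:
                output.append(stack.pop())
            stack.append(tok)
        else:
            output.append(tok)  # operand (feature name or value)
    while stack:
        output.append(stack.pop())
    return output
```

The resulting postfix token list plays the role of the parse output from which the run-time system could evaluate compound features against patient data.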

  16. Knowledge Management in Role Based Agents

    NASA Astrophysics Data System (ADS)

    Kır, Hüseyin; Ekinci, Erdem Eser; Dikenelli, Oguz

    In the multi-agent system literature, the role concept is increasingly researched to provide an abstraction that scopes the beliefs, norms and goals of agents and shapes the relationships of the agents in the organization. In this research, we propose a knowledgebase architecture to increase the applicability of roles in the MAS domain by drawing inspiration from the self concept in the role theory of sociology. The proposed knowledgebase architecture has a granulated structure that is dynamically organized according to the agent's identification in a social environment. Thanks to this dynamic structure, agents are enabled to work on consistent knowledge in spite of inevitable conflicts between roles and the agent. The knowledgebase architecture has also been implemented and incorporated into the SEAGENT multi-agent system development framework.

  17. Analyzing Knowledge Base Content Development and Review: Recommendations for a Robust Knowledge Management Infrastructure

    PubMed Central

    Wilkinson, Steven G.; Rocha, Roberto A.; Rhodes, Julie

    2002-01-01

    Change is a necessary function of good medicine and quality health care and will undoubtedly be vital for the future. As advances in medicine continue, so will change, requiring the maintenance of existing knowledge as well as the integration of new knowledge. In order to understand the current process at Intermountain Health Care and to see how we might improve it, we retrospectively studied the changes made to a knowledge base during the year 2001. The findings have implications that are guiding our efforts in designing a knowledge management infrastructure. Additionally, we propose to integrate recommendations from other researchers into the design that will not only assist in the development and maintenance of knowledge, but will also support change tracking and version control.

  18. KAT: A Flexible XML-based Knowledge Authoring Environment

    PubMed Central

    Hulse, Nathan C.; Rocha, Roberto A.; Del Fiol, Guilherme; Bradshaw, Richard L.; Hanna, Timothy P.; Roemer, Lorrie K.

    2005-01-01

    As part of an enterprise effort to develop new clinical information systems at Intermountain Health Care, the authors have built a knowledge authoring tool that facilitates the development and refinement of medical knowledge content. At present, users of the application can compose order sets and an assortment of other structured clinical knowledge documents based on XML schemas. The flexible nature of the application allows the immediate authoring of new types of documents once an appropriate XML schema and accompanying Web form have been developed and stored in a shared repository. The need for a knowledge acquisition tool stems largely from the desire for medical practitioners to be able to write their own content for use within clinical applications. We hypothesize that medical knowledge content for clinical use can be successfully created and maintained through XML-based document frameworks containing structured and coded knowledge. PMID:15802477
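
The abstract's central idea, that a new document type becomes authorable as soon as its schema is registered in a shared repository, can be sketched as below. Python's standard library cannot validate XML Schema (XSD), so the "schema" here is reduced to a required-child-element list; the document type and element names are hypothetical, not KAT's actual schemas.

```python
import xml.etree.ElementTree as ET

# Stand-in for the shared schema repository: document type -> required children.
SCHEMAS = {"order_set": {"title", "orders"}}

def register_schema(doc_type, required_children):
    """Registering a schema immediately makes the new document type authorable."""
    SCHEMAS[doc_type] = set(required_children)

def validate(xml_text):
    """Check a knowledge document against the registered schema for its root tag."""
    root = ET.fromstring(xml_text)
    required = SCHEMAS.get(root.tag)
    if required is None:
        return False  # unknown document type: no schema registered yet
    present = {child.tag for child in root}
    return required <= present
```

Adding a "protocol" document type, for instance, is then one `register_schema` call rather than a code change, which mirrors the flexibility the abstract claims.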

  19. pfSNP: An integrated potentially functional SNP resource that facilitates hypotheses generation through knowledge syntheses.

    PubMed

    Wang, Jingbo; Ronaghi, Mostafa; Chong, Samuel S; Lee, Caroline G L

    2011-01-01

    Currently, >14,000,000 single nucleotide polymorphisms (SNPs) are reported. Identifying phenotype-affecting SNPs among these many SNPs poses significant challenges. Although several Web resources are available that can inform about the functionality of SNPs, these resources are mainly annotation databases and are not very comprehensive. In this article, we present a comprehensive, well-annotated, integrated pfSNP (potentially functional SNPs) Web resource (http://pfs.nus.edu.sg/), which is aimed to facilitate better hypothesis generation through knowledge syntheses mediated by better data integration and a user-friendly Web interface. pfSNP integrates >40 different algorithms/resources to interrogate >14,000,000 SNPs from the dbSNP database for SNPs of potential functional significance based on previous published reports, inferred potential functionality from genetic approaches as well as predicted potential functionality from sequence motifs. Its query interface has the user-friendly "auto-complete, prompt-as-you-type" feature and is highly customizable, facilitating different combinations of queries using Boolean logic. Additionally, to facilitate better understanding of the results and aid in hypothesis generation, gene/pathway-level information with text clouds highlighting enriched tissues/pathways as well as detailed related information are also provided on the results page. Hence, the pfSNP resource will be of great interest to scientists focusing on association studies as well as those interested to experimentally address the functionality of SNPs.

  20. AOP Knowledge Base/Wiki Tool Set

    EPA Science Inventory

    Utilizing ToxCast Data and Lifestage Physiologically-Based Pharmacokinetic (PBPK) models to Drive Adverse Outcome Pathways (AOPs)-Based Margin of Exposures (ABME) to Chemicals. Hisham A. El-Masri1, Nicole C. Klienstreur2, Linda Adams1, Tamara Tal1, Stephanie Padilla1, Kristin Is...

  1. EHR based Genetic Testing Knowledge Base (iGTKB) Development

    PubMed Central

    2015-01-01

    Background The gap between a large, growing number of genetic tests and a suboptimal clinical workflow for incorporating these tests into regular clinical practice poses barriers to effective reliance on advanced genetic technologies to improve the quality of healthcare. A promising solution to fill this gap is to develop an intelligent genetic test recommendation system that not only can provide a comprehensive view of genetic tests as education resources, but also can recommend the most appropriate genetic tests to patients based on clinical evidence. In this study, we developed an EHR based Genetic Testing Knowledge Base for Individualized Medicine (iGTKB). Methods We extracted genetic testing information and patient medical records from EHR systems at Mayo Clinic. Clinical features were semi-automatically annotated from the clinical notes by applying a Natural Language Processing (NLP) tool, the MedTagger suite. To prioritize clinical features for each genetic test, we compared odds ratios across four population groups. Genetic tests, genetic disorders and clinical features with their odds ratios were used to establish iGTKB, which is to be integrated into the Genetic Testing Ontology (GTO). Results Overall, five genetic tests were operated with a sample size greater than 100 at Mayo Clinic in 2013. A total of 1,450 patients who were tested by one of the five genetic tests were selected. We assembled 243 clinical features from the Human Phenotype Ontology (HPO) for these five genetic tests. There are 60 clinical features with at least one mention in the clinical notes of patients taking the test. Twenty-eight clinical features with high odds ratios (greater than 1) were selected as dominant features and deposited into iGTKB with their associated information about genetic tests and genetic disorders. Conclusions In this study, we developed an EHR based genetic testing knowledge base, iGTKB. iGTKB will be integrated into the GTO by providing relevant
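
The feature-prioritization step, keeping clinical features whose odds ratio exceeds 1, can be sketched as below. The 2x2 layout, the 0.5 zero-cell correction, and the OR > 1 cutoff are illustrative assumptions; the study's comparison across four population groups is not reproduced here.

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Classic 2x2 odds ratio; a Haldane-style 0.5 correction guards against
    zero cells (a common choice, not necessarily iGTKB's)."""
    a, b, c, d = (x + 0.5 for x in (exposed_cases, exposed_controls,
                                    unexposed_cases, unexposed_controls))
    return (a * d) / (b * c)

def dominant_features(counts):
    """counts: {feature: (a, b, c, d)} cell counts; keep features with OR > 1,
    mirroring the abstract's selection of dominant features."""
    return {f for f, cells in counts.items() if odds_ratio(*cells) > 1}
```

A feature mentioned far more often in tested patients' notes than in the comparison group yields OR > 1 and survives the selection.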

  2. Knowledge Management

    ERIC Educational Resources Information Center

    Deepak

    2005-01-01

    Knowledge Management (KM) is the process through which organizations generate value from their intellectual and knowledge-based assets. Frequently generating value from such assets means sharing them among employees, divisions and even with other companies in order to develop best practices. This article discusses three basic aspects of…

  3. The data dictionary: A view into the CTBT knowledge base

    SciTech Connect

    Shepherd, E.R.; Keyser, R.G.; Armstrong, H.M.

    1997-08-01

    The data dictionary for the Comprehensive Test Ban Treaty (CTBT) knowledge base provides a comprehensive, current catalog of the projected contents of the knowledge base. It is written from a data-definition view of the knowledge base and therefore organizes information in a fashion that allows logical storage within the computer. The data dictionary introduces two organizational categories of data: the datatype, which is a broad, high-level category of data, and the dataset, which is a specific instance of a datatype. The knowledge base, and thus the data dictionary, consists of a fixed, relatively small number of datatypes, but new datasets are expected to be added on a regular basis. The data dictionary is a tangible result of the design effort for the knowledge base and is intended to be used by anyone who accesses the knowledge base for any purpose, such as populating it with data, accessing the data with automatic data processing (ADP) routines, or browsing through the data for verification purposes. For these reasons, it is important to discuss the development of the data dictionary as well as to describe its contents in order to better understand its usefulness; that is the purpose of this paper.

  4. The process for integrating the NNSA knowledge base.

    SciTech Connect

    Wilkening, Lisa K.; Carr, Dorthe Bame; Young, Christopher John; Hampton, Jeff; Martinez, Elaine

    2009-03-01

    From 2002 through 2006, the Ground Based Nuclear Explosion Monitoring Research & Engineering (GNEMRE) program at Sandia National Laboratories defined and modified a process for merging different types of integrated research products (IRPs) from various researchers into a cohesive, well-organized collection known as the NNSA Knowledge Base, to support operational treaty monitoring. This process includes defining the Knowledge Base structure, systematically and logically aggregating IRPs into a complete set, and verifying and validating that the integrated Knowledge Base works as expected.

  5. Caregiving Antecedents of Secure Base Script Knowledge: A Comparative Analysis of Young Adult Attachment Representations

    ERIC Educational Resources Information Center

    Steele, Ryan D.; Waters, Theodore E. A.; Bost, Kelly K.; Vaughn, Brian E.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn; Roisman, Glenn I.

    2014-01-01

    Based on a subsample (N = 673) of the NICHD Study of Early Child Care and Youth Development (SECCYD) cohort, this article reports data from a follow-up assessment at age 18 years on the antecedents of "secure base script knowledge", as reflected in the ability to generate narratives in which attachment-related difficulties are…

  6. Using Knowledge-Based Systems to Support Learning of Organizational Knowledge: A Case Study

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Nash, Rebecca L.; Phan, Tu-Anh T.; Bailey, Teresa R.

    2003-01-01

    This paper describes the deployment of a knowledge system to support learning of organizational knowledge at the Jet Propulsion Laboratory (JPL), a US national research laboratory whose mission is planetary exploration and to 'do what no one has done before.' Data collected over 19 weeks of operation were used to assess system performance with respect to design considerations, participation, effectiveness of communication mechanisms, and individual-based learning. These results are discussed in the context of organizational learning research and implications for practice.

  7. Knowledge-based graphical interfaces for presenting technical information

    NASA Technical Reports Server (NTRS)

    Feiner, Steven

    1988-01-01

    Designing effective presentations of technical information is extremely difficult and time-consuming. Moreover, the combination of increasing task complexity and declining job skills makes the need for high-quality technical presentations especially urgent. We believe that this need can ultimately be met through the development of knowledge-based graphical interfaces that can design and present technical information. Since much material is most naturally communicated through pictures, our work has stressed the importance of well-designed graphics, concentrating on generating pictures and laying out displays containing them. We describe APEX, a testbed picture generation system that creates sequences of pictures that depict the performance of simple actions in a world of 3D objects. Our system supports rules for determining automatically the objects to be shown in a picture, the style and level of detail with which they should be rendered, the method by which the action itself should be indicated, and the picture's camera specification. We then describe work on GRIDS, an experimental display layout system that addresses some of the problems in designing displays containing these pictures, determining the position and size of the material to be presented.

  8. Learning Science-Based Fitness Knowledge in Constructivist Physical Education

    ERIC Educational Resources Information Center

    Sun, Haichun; Chen, Ang; Zhu, Xihe; Ennis, Catherine D.

    2012-01-01

    Teaching fitness-related knowledge has become critical in developing children's healthful living behavior. The purpose of this study was to examine the effects of a science-based, constructivist physical education curriculum on learning fitness knowledge critical to healthful living in elementary school students. The schools (N = 30) were randomly…

  9. Developing Learning Progression-Based Teacher Knowledge Measures

    ERIC Educational Resources Information Center

    Jin, Hui; Shin, HyoJeong; Johnson, Michele E.; Kim, JinHo; Anderson, Charles W.

    2015-01-01

    This study developed learning progression-based measures of science teachers' content knowledge (CK) and pedagogical content knowledge (PCK). The measures focus on an important topic in secondary science curriculum using scientific reasoning (i.e., tracing matter, tracing energy, and connecting scales) to explain plants gaining weight and…

  10. Knowledge-Based Hierarchies: Using Organizations to Understand the Economy

    ERIC Educational Resources Information Center

    Garicano, Luis; Rossi-Hansberg, Esteban

    2015-01-01

    Incorporating the decision of how to organize the acquisition, use, and communication of knowledge into economic models is essential to understand a wide variety of economic phenomena. We survey the literature that has used knowledge-based hierarchies to study issues such as the evolution of wage inequality, the growth and productivity of firms,…

  11. PROUST: Knowledge-Based Program Understanding.

    ERIC Educational Resources Information Center

    Johnson, W. Lewis; Soloway, Elliot

    This report describes PROUST, a computer-based system for online analyses and understanding of PASCAL programs written by novice programmers, which takes as input a program and a non-algorithmic description of the program requirements and finds the most likely mapping between the requirements and the code. Both the theory and processing techniques…

  12. Knowledge-Creative Learning with Data Bases.

    ERIC Educational Resources Information Center

    Hunter, Beverly

    1987-01-01

    Provides examples of computer-based classroom activities which support the skills, content, and democratic values goals of social studies. It also outlines an approach to teaching inquiry, information handling, and group interaction skills; explains the role of databases in supporting content objectives; and gives examples of values issues that…

  13. Thinking with Images: An Exploration into Information Retrieval and Knowledge Generation.

    ERIC Educational Resources Information Center

    Weedman, Judith

    2002-01-01

    Explored how images were used in a social science research project and analyzed the use of verbs to identify three functions that images fill. Argues that the separation between finding documents and using them is artificial, and explores the use of image documents to help integrate information retrieval more closely with knowledge generation.…

  14. How To Manage the Emerging Generational Divide in the Contemporary Knowledge-Rich Workplace.

    ERIC Educational Resources Information Center

    Novicevic, Milorad M.; Buckley, M. Ronald

    2001-01-01

    Addresses the manager's dilemmas and options in resolving emerging latent intergenerational conflict in the contemporary knowledge-rich workplace. Topics include a theoretical framework for generational divide management; the polarization in task requirements; social and environmental factors; differences in employee needs and expectations; and…

  15. "Comments on Greenhow, Robelia, and Hughes": Technologies that Facilitate Generating Knowledge and Possibly Wisdom

    ERIC Educational Resources Information Center

    Dede, Chris

    2009-01-01

    Greenhow, Robelia, and Hughes (2009) argue that Web 2.0 media are well suited to enhancing the education research community's purpose of generating and sharing knowledge. The author of this comment article first articulates how a research infrastructure with capabilities for communal bookmarking, photo and video sharing, social networking, wikis,…

  16. Towards a Reconceptualisation of "Word" for High Frequency Word Generation in Word Knowledge Studies

    ERIC Educational Resources Information Center

    Sibanda, Jabulani; Baxen, Jean

    2014-01-01

    The present paper derives from a PhD study investigating the nexus between Grade 4 textbook vocabulary demands and Grade 3 isiXhosa-speaking learners' knowledge of that vocabulary to enable them to read to learn in Grade 4. The paper challenges the efficacy of the four current definitions of "word" for generating high frequency…

  17. A relational data-knowledge base system and its potential in developing a distributed data-knowledge system

    NASA Technical Reports Server (NTRS)

    Rahimian, Eric N.; Graves, Sara J.

    1988-01-01

    A new approach used in constructing a relational data-knowledge base system is described. The relational database is well suited for distribution due to its property of allowing data fragmentation and fragmentation transparency. An example is formulated of a simple relational data-knowledge base which may be generalized for use in developing a relational distributed data-knowledge base system. The efficiency and ease of application of such a data-knowledge base management system are briefly discussed. Also discussed are the potentials of the developed model for sharing the data-knowledge base as well as possible areas of difficulty in implementing the relational data-knowledge base management system.
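
    The fragmentation and fragmentation-transparency properties mentioned above can be illustrated with a toy sketch (all names hypothetical): a relation stored as horizontal fragments, behind an interface that lets queries ignore the split.

```python
# Toy sketch of horizontal fragmentation with fragmentation transparency:
# one logical relation is stored as per-site fragments, but queries are
# posed against the relation as a whole.

class FragmentedRelation:
    def __init__(self):
        self.fragments = {}  # site name -> list of rows (dicts)

    def add_fragment(self, site, rows):
        self.fragments[site] = rows

    def select(self, predicate):
        # The caller never names a fragment: that is the transparency.
        return [row for rows in self.fragments.values()
                for row in rows if predicate(row)]

facts = FragmentedRelation()
facts.add_fragment("site_a", [{"id": 1, "topic": "grids"}])
facts.add_fragment("site_b", [{"id": 2, "topic": "zoning"},
                              {"id": 3, "topic": "grids"}])
print(facts.select(lambda r: r["topic"] == "grids"))
# matches rows 1 and 3, regardless of which site stores them
```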

  18. Knowledge Based Estimation of Material Release Transients

    1998-07-29

    KBERT is an easy-to-use desktop decision support tool for estimating public and in-facility worker doses and the consequences of radioactive material releases in non-reactor nuclear facilities. It automatically calculates release and respirable fractions based on published handbook data, and calculates material transport concurrently with personnel evacuation simulations. Any facility layout can be modeled easily using the intuitive graphical user interface.
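
    The release and respirable fractions computed from handbook data are commonly combined in the five-factor source-term formula of DOE-HDBK-3010 (ST = MAR x DR x ARF x RF x LPF). The sketch below uses that general formula with illustrative values; it is not KBERT's actual implementation.

```python
# Sketch: five-factor source-term formula (DOE-HDBK-3010 style).
# ST = MAR * DR * ARF * RF * LPF; the numbers below are illustrative only.

def source_term(mar_g, dr, arf, rf, lpf):
    """mar_g: material at risk (grams); dr: damage ratio;
    arf: airborne release fraction; rf: respirable fraction;
    lpf: leak path factor. Returns respirable release (grams)."""
    return mar_g * dr * arf * rf * lpf

# Hypothetical free-fall spill of 1 kg of powder with handbook-style
# ARF/RF values and a 0.1 leak path factor.
st = source_term(mar_g=1000.0, dr=1.0, arf=2e-3, rf=0.3, lpf=0.1)
print(st)  # 1000 * 1 * 0.002 * 0.3 * 0.1 -> about 0.06 g
```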

  19. A clinical trial of a knowledge-based medical record.

    PubMed

    Safran, C; Rind, D M; Davis, R B; Sands, D Z; Caraballo, E; Rippel, K; Wang, Q; Rury, C; Makadon, H J; Cotton, D J

    1995-01-01

    To meet the needs of primary care physicians caring for patients with HIV infection, we developed a knowledge-based medical record to allow the on-line patient record to play an active role in the care process. These programs integrate the on-line patient record, rule-based decision support, and full-text information retrieval into a clinical workstation for the practicing clinician. To determine whether use of a knowledge-based medical record was associated with more rapid and complete adherence to practice guidelines and improved quality of care, we performed a controlled clinical trial among physicians and nurse practitioners caring for 349 patients infected with the human immunodeficiency virus (HIV); 191 patients were treated by 65 physicians and nurse practitioners assigned to the intervention group, and 158 patients were treated by 61 physicians and nurse practitioners assigned to the control group. During the 18-month study period, the computer generated 303 alerts in the intervention group and 388 in the control group. The median response time of clinicians to these alerts was 11 days in the intervention group and 52 days in the control group (P < 0.0001, log-rank test). During the study, the computer generated 432 primary care reminders for the intervention group and 360 reminders for the control group. The median response time of clinicians to these reminders was 114 days in the intervention group and more than 500 days in the control group (P < 0.0001, log-rank test). Of the 191 patients in the intervention group, 67 (35%) had one or more hospitalizations, compared with 70 (44%) of the 158 patients in the control group (P = 0.04, Wilcoxon test stratified for initial CD4 count). There was no difference in survival between the intervention and control groups (P = 0.18, log-rank test). We conclude that our clinical workstation significantly changed physicians' behavior in terms of their response to alerts regarding primary care interventions and that these

  20. Nonlinear Knowledge in Kernel-Based Multiple Criteria Programming Classifier

    NASA Astrophysics Data System (ADS)

    Zhang, Dongling; Tian, Yingjie; Shi, Yong

    The Kernel-based Multiple Criteria Linear Programming (KMCLP) model is used as a classification method that can learn from training examples. In traditional machine learning, by contrast, data sets are classified using prior knowledge alone. Some works combine these two classification principles to overcome the shortcomings of each approach. In this paper, we propose a model that incorporates nonlinear knowledge into KMCLP in order to handle problems where the input consists not only of training examples but also of nonlinear prior knowledge. On a real-world breast cancer diagnosis case, the model shows better performance than a model based solely on training data.
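
    The kernel idea behind KMCLP can be illustrated with a much simpler stand-in: a kernel perceptron with an RBF kernel. This is not the authors' multiple-criteria linear program and it omits the prior-knowledge constraints; it only shows how a kernel lets a linear method fit a nonlinearly separable problem.

```python
import math

# Stand-in illustration of kernel-based classification (NOT the KMCLP
# model itself): a kernel perceptron with an RBF kernel, trained on a
# toy problem that is not linearly separable in the input space.

def rbf(x, z, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def train(X, y, epochs=20):
    alpha = [0.0] * len(X)  # dual coefficients, one per training example
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            s = sum(a * yj * rbf(xj, xi)
                    for a, yj, xj in zip(alpha, y, X))
            if yi * s <= 0:          # misclassified: bump its coefficient
                alpha[i] += 1.0
    return alpha

def predict(alpha, X, y, x):
    s = sum(a * yj * rbf(xj, x) for a, yj, xj in zip(alpha, y, X))
    return 1 if s >= 0 else -1

# XOR-like data: positive on one diagonal, negative on the other.
X = [(0, 0), (1, 1), (0, 1), (1, 0)]
y = [1, 1, -1, -1]
alpha = train(X, y)
print([predict(alpha, X, y, x) for x in X])  # -> [1, 1, -1, -1]
```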

  1. Online Knowledge-Based Model for Big Data Topic Extraction

    PubMed Central

    Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan

    2016-01-01

    Lifelong machine learning (LML) models learn with experience, maintaining a knowledge base without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while reducing the processing cost by half. PMID:27195004

  2. PmiRKB: a plant microRNA knowledge base

    PubMed Central

    Meng, Yijun; Gou, Lingfeng; Chen, Dijun; Mao, Chuanzao; Jin, Yongfeng; Wu, Ping; Chen, Ming

    2011-01-01

    MicroRNAs (miRNAs), one type of small RNAs (sRNAs) in plants, play an essential role in gene regulation. Several miRNA databases have been established; however, successively generated new datasets need to be collected, organized and analyzed. To this end, we have constructed a plant miRNA knowledge base (PmiRKB) that provides four major functional modules. In the ‘SNP’ module, single nucleotide polymorphism (SNP) data of seven Arabidopsis (Arabidopsis thaliana) accessions and 21 rice (Oryza sativa) subspecies were collected to inspect the SNPs within pre-miRNAs (precursor microRNAs) and miRNA-target RNA duplexes. Depending on their locations, SNPs can affect the secondary structures of pre-miRNAs, or interactions between miRNAs and their targets. A second module, ‘Pri-miR’, can be used to investigate the tissue-specific, transcriptional contexts of pre- and pri-miRNAs (primary microRNAs), based on massively parallel signature sequencing data. The third module, ‘MiR-Tar’, was designed to validate thousands of miRNA-target pairs by using parallel analysis of RNA end (PARE) data. Correspondingly, the fourth module, ‘Self-reg’, also used PARE data to investigate the metabolism of miRNA precursors, including precursor processing and miRNA- or miRNA*-mediated self-regulation effects on their host precursors. PmiRKB can be freely accessed at http://bis.zju.edu.cn/pmirkb/. PMID:20719744

  3. Analyzing Data Generated Through Deliberative Dialogue: Bringing Knowledge Translation Into Qualitative Analysis.

    PubMed

    Plamondon, Katrina M; Bottorff, Joan L; Cole, Donald C

    2015-11-01

    Deliberative dialogue (DD) is a knowledge translation strategy that can serve to generate rich data and bridge health research with action. An intriguing alternative to other modes of generating data, the purposeful and evidence-informed conversations characteristic of DD generate data inclusive of collective interpretations. These data are thus dialogic, presenting complex challenges for qualitative analysis. In this article, we discuss the nature of data generated through DD, orienting ourselves toward a theoretically grounded approach to analysis. We offer an integrated framework for analysis, balancing analytical strategies of categorizing and connecting with the use of empathetic and suspicious interpretive lenses. In this framework, data generation and analysis occur in concert, alongside engaging participants and synthesizing evidence. An example of application is provided, demonstrating nuances of the framework. We conclude with reflections on the strengths and limitations of the framework, suggesting how it may be relevant in other qualitative health approaches.

  4. Evolution of co-management: role of knowledge generation, bridging organizations and social learning.

    PubMed

    Berkes, Fikret

    2009-04-01

    Over a period of some 20 years, different aspects of co-management (the sharing of power and responsibility between the government and local resource users) have come to the forefront. The paper focuses on a selection of these: knowledge generation, bridging organizations, social learning, and the emergence of adaptive co-management. Co-management can be considered a knowledge partnership. Different levels of organization, from local to international, have comparative advantages in the generation and mobilization of knowledge acquired at different scales. Bridging organizations provide a forum for the interaction of these different kinds of knowledge, and the coordination of other tasks that enable co-operation: accessing resources, bringing together different actors, building trust, resolving conflict, and networking. Social learning is one of these tasks, essential both for the co-operation of partners and an outcome of the co-operation of partners. It occurs most efficiently through joint problem solving and reflection within learning networks. Through successive rounds of learning and problem solving, learning networks can incorporate new knowledge to deal with problems at increasingly larger scales, with the result that maturing co-management arrangements become adaptive co-management in time.

  6. Big Data Analytics in Immunology: A Knowledge-Based Approach

    PubMed Central

    Zhang, Guang Lan

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow. PMID:25045677

  7. Big data analytics in immunology: a knowledge-based approach.

    PubMed

    Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow. PMID:25045677

  9. A situational approach to the design of a patient-oriented disease-specific knowledge base.

    PubMed Central

    Kim, Matthew I.; Ladenson, Paul; Johnson, Kevin B.

    2002-01-01

    We have developed a situational approach to the organization of disease-specific information that seeks to provide patients with targeted access to content in a knowledge base. Our approach focuses on dividing a defined knowledge base into sections corresponding to discrete clinical events associated with the evaluation and treatment of a specific disorder. Common reasons for subspecialty referral are used to generate situational statements that serve as entry points into the knowledge base. Each section includes defining questions generated using keywords associated with specific topics. Defining questions are linked to patient-focused answers. Evaluation of a thyroid cancer web site designed using this approach has identified high ratings for usability, relevance, and comprehension of retrieved information. This approach may be particularly useful in the development of resources for newly diagnosed patients. PMID:12463852
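
    The organization described, situational statements serving as entry points into sections whose defining questions link to patient-focused answers, can be sketched as a small data structure. All statements, questions, and answers below are hypothetical, not drawn from the actual thyroid cancer site.

```python
# Sketch of the situational organization described above: situational
# statements are entry points into sections; each section holds defining
# questions linked to patient-focused answers. Content is hypothetical.

knowledge_base = {
    "My doctor found a thyroid nodule": {          # situational statement
        "section": "Evaluation",
        "questions": {
            "What tests will I need?":
                "A fine-needle aspiration biopsy is commonly performed...",
            "What does the biopsy result mean?":
                "Results are usually reported as benign, malignant, or...",
        },
    },
    "I have been scheduled for thyroid surgery": {
        "section": "Treatment",
        "questions": {
            "What are the risks of surgery?":
                "The main risks involve the voice and calcium regulation...",
        },
    },
}

def entry_point(statement):
    """Route a patient's situation to its section and defining questions."""
    node = knowledge_base[statement]
    return node["section"], sorted(node["questions"])

print(entry_point("My doctor found a thyroid nodule"))
```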

  10. A knowledge-based approach to automated flow-field zoning for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1989-01-01

    An automated three-dimensional zonal grid generation capability for computational fluid dynamics is shown through the development of a demonstration computer program capable of automatically zoning the flow field of representative two-dimensional (2-D) aerodynamic configurations. The applicability of a knowledge-based programming approach to the domain of flow-field zoning is examined. Several aspects of flow-field zoning make the application of knowledge-based techniques challenging: the need for perceptual information, the role of individual bias in the design and evaluation of zonings, and the fact that the zoning process is modeled as a constructive, design-type task (for which there are relatively few examples of successful knowledge-based systems in any domain). Engineering solutions to the problems arising from these aspects are developed, and a demonstration system is implemented which can design, generate, and output flow-field zonings for representative 2-D aerodynamic configurations.

  11. Knowledge.

    ERIC Educational Resources Information Center

    Online-Offline, 1999

    1999-01-01

    This theme issue on knowledge includes annotated listings of Web sites, CD-ROMs and computer software, videos, books, and additional resources that deal with knowledge and differences between how animals and humans learn. Sidebars discuss animal intelligence, learning proper behavior, and getting news from the Internet. (LRW)

  12. Evaluation of database technologies for the CTBT Knowledge Base prototype

    SciTech Connect

    Keyser, R.; Shepard-Dombroski, E.; Baur, D.; Hipp, J.; Moore, S.; Young, C.; Chael, E.

    1996-11-01

    This document examines a number of different software technologies in the rapidly changing field of database management systems, evaluates these systems in light of the expected needs of the Comprehensive Test Ban Treaty (CTBT) Knowledge Base, and makes some recommendations for the initial prototypes of the Knowledge Base. The Knowledge Base requirements are examined and then used as criteria for evaluation of the database management options. A mock-up of the data expected in the Knowledge Base is used as a basis for examining how four different database technologies deal with the problems of storing and retrieving the data. Based on these requirements and the results of the evaluation, the recommendation is that the Illustra database be considered for the initial prototype of the Knowledge Base. Illustra offers a unique blend of performance, flexibility, and features that will aid in the implementation of the prototype. At the same time, Illustra provides a high level of compatibility with the hardware and software environments present at the US NDC (National Data Center) and the PIDC (Prototype International Data Center).

  13. Knowledge-based fault diagnosis system for refuse collection vehicle

    SciTech Connect

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y.

    2015-05-15

    The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of waste compactor truck to local authorities as well as waste management companies. The company faces difficulty acquiring knowledge from its expert when the expert is absent. To solve this problem, the expert's knowledge can be stored in an expert system, which can then provide the necessary support to the company when the expert is not available. This also allows the process and tools to be standardized and made more accurate. The knowledge input to the expert system is based on design guidelines and experience from the expert. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  15. Spark, an application based on Serendipitous Knowledge Discovery.

    PubMed

    Workman, T Elizabeth; Fiszman, Marcelo; Cairelli, Michael J; Nahl, Diane; Rindflesch, Thomas C

    2016-04-01

    Findings from information-seeking behavior research can inform application development. In this report we provide a system description of Spark, an application based on findings from Serendipitous Knowledge Discovery studies and data structures known as semantic predications. Background information and the previously published IF-SKD model (outlining Serendipitous Knowledge Discovery in online environments) illustrate the potential use of information-seeking behavior in application design. A detailed overview of the Spark system illustrates how methodologies in design and retrieval functionality enable production of semantic predication graphs tailored to evoke Serendipitous Knowledge Discovery in users.

  16. Adults' Autonomic and Subjective Emotional Responses to Infant Vocalizations: The Role of Secure Base Script Knowledge

    ERIC Educational Resources Information Center

    Groh, Ashley M.; Roisman, Glenn I.

    2009-01-01

    This article examines the extent to which secure base script knowledge--as reflected in an adult's ability to generate narratives in which attachment-related threats are recognized, competent help is provided, and the problem is resolved--is associated with adults' autonomic and subjective emotional responses to infant distress and nondistress…

  17. Knowledge discovery based on experiential learning corporate culture management

    NASA Astrophysics Data System (ADS)

    Tu, Kai-Jan

    2014-10-01

    A good corporate culture based on humanistic theory can make an enterprise's management very effective, giving all of its members strong cohesion and a shared sense of purpose. With an experiential learning model, the enterprise can establish a corporate culture with an enthusiastic learning spirit, develop the innovation capability needed for positive knowledge growth, and meet fierce global marketing competition. A case study of Trend's corporate culture offers proof of the industry knowledge growth rate equation as a contribution to experiential learning corporate culture management.

  18. Arranging ISO 13606 archetypes into a knowledge base.

    PubMed

    Kopanitsa, Georgy

    2014-01-01

    To enable the efficient reuse of standards-based medical data, we propose to develop a higher-level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analyzed for their ability to be applied in the implementation of a higher-level model that establishes relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. It presently supports several relationships and will be extended in the future. PMID:25160140

  19. [Medical practice and clinical research: keys to generate knowledge and improve care].

    PubMed

    Martínez Castuera-Gómez, Carla; Talavera, Juan O

    2013-01-01

    Increased quality in medical care may be accomplished immediately if clinical research is integrated into daily clinical practice. The generation of medical knowledge involves four steps: an unanswered question awakened by clinical practice, the critical analysis of specialized literature, the development of a research protocol, and, finally, the publication of outcomes. Decision making and continuous training thereby become part of an effective strategy for improving medical care.

  20. Knowledge management: An abstraction of knowledge base and database management systems

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel D.

    1990-01-01

    Artificial intelligence application requirements demand powerful representation capabilities as well as efficiency for real-time domains. Many tools exist, the most prevalent being expert systems tools such as ART, KEE, OPS5, and CLIPS. Other tools just emerging from the research environment are truth maintenance systems for representing non-monotonic knowledge, constraint systems, object oriented programming, and qualitative reasoning. Unfortunately, as many knowledge engineers have experienced, simply applying a tool to an application requires a large amount of effort to bend the application to fit. Much work goes into supporting work to make the tool integrate effectively. A Knowledge Management Design System (KNOMAD), is described which is a collection of tools built in layers. The layered architecture provides two major benefits; the ability to flexibly apply only those tools that are necessary for an application, and the ability to keep overhead, and thus inefficiency, to a minimum. KNOMAD is designed to manage many knowledge bases in a distributed environment providing maximum flexibility and expressivity to the knowledge engineer while also providing support for efficiency.

  1. Intelligent technique for knowledge reuse of dental medical records based on case-based reasoning.

    PubMed

    Gu, Dong-Xiao; Liang, Chang-Yong; Li, Xing-Guo; Yang, Shan-Lin; Zhang, Pei

    2010-04-01

    With the rapid development of both information technology and the management of modern medical regulation, the generation of medical records tends to be increasingly intelligent. In this paper, Case-Based Reasoning is applied to the process of generating records of dental cases. Based on an analysis of the features of dental records, a case base is constructed. A mixed case retrieval method (FAIES) is proposed for the knowledge reuse of dental records by adopting fuzzy mathematics, an improved similarity algorithm based on Euclidean-Lagrangian distance, and a PULL & PUSH weight adjustment strategy. Finally, an intelligent system for generating dental cases (CBR-DENT) is constructed. The effectiveness of the system, the efficiency of the retrieval method, the extent of adaptation, and the adaptation efficiency are tested using the constructed case base. It is demonstrated that FAIES is very effective in reducing the time spent writing medical records and improving their efficiency and quality. FAIES is also proven to be an effective diagnostic aid and provides a new idea for the management of medical records and its applications.

  2. Dynamic Strategic Planning in a Professional Knowledge-Based Organization

    ERIC Educational Resources Information Center

    Olivarius, Niels de Fine; Kousgaard, Marius Brostrom; Reventlow, Susanne; Quelle, Dan Grevelund; Tulinius, Charlotte

    2010-01-01

    Professional, knowledge-based institutions have a particular form of organization and culture that makes special demands on the strategic planning supervised by research administrators and managers. A model for dynamic strategic planning based on a pragmatic utilization of the multitude of strategy models was used in a small university-affiliated…

  3. Developing and Assessing Teachers' Knowledge of Game-Based Learning

    ERIC Educational Resources Information Center

    Shah, Mamta; Foster, Aroutis

    2015-01-01

    Research focusing on the development and assessment of teacher knowledge in game-based learning is in its infancy. A mixed-methods study was undertaken to educate pre-service teachers in game-based learning using the Game Network Analysis (GaNA) framework. Fourteen pre-service teachers completed a methods course, which prepared them in game…

  4. Category vs. Object Knowledge in Category-Based Induction

    ERIC Educational Resources Information Center

    Murphy, Gregory L.; Ross, Brian H.

    2010-01-01

    In one form of category-based induction, people make predictions about unknown properties of objects. There is a tension between predictions made based on the object's specific features (e.g., objects above a certain size tend not to fly) and those made by reference to category-level knowledge (e.g., birds fly). Seven experiments with artificial…

  5. Malaysia Transitions toward a Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Mustapha, Ramlee; Abdullah, Abu

    2004-01-01

    The emergence of a knowledge-based economy (k-economy) has spawned a "new" notion of workplace literacy, changing the relationship between employers and employees. The traditional covenant where employees expect a stable or lifelong employment will no longer apply. The retention of employees will most probably be based on their skills and…

  6. The Knowledge-Based Technology Applications Center (KBTAC) seminar series. Volume 1, Introduction to knowledge-based systems

    SciTech Connect

    Meyer, W.; Scherer, J.; DeLuke, R.; Wood, R.M.

    1992-12-01

    Knowledge-based systems are a means of capturing and productively and efficiently using a utility's accumulated knowledge and expertise. The first step in this process is to identify what types of problems and applications can benefit from the use of expert systems. Once potential applications have been identified, it is necessary to involve management in supporting the use and development of the expert system. To do that, management must be made aware of the costs and benefits associated with the development, routine use, and maintenance of these systems. To truly understand how knowledge-based systems differ from conventional programming, the manager and potential user need to become familiar with the concept of symbolic reasoning, or programming in which knowledge is manipulated, not just data as in conventional programming. Knowledge-based systems use all the information manipulation found in conventional programming but add knowledge-based programming. How does a program use knowledge? In a knowledge-based system, that is accomplished by the inferencing process. Rules allow reasoning to flow backward from a conclusion or result to circumstances or causes; alternatively, certain data or information can lead to a conclusion or result. The reader will be led through this process of symbolic reasoning, including the presentation of several examples. The software available to develop expert systems is discussed, as is the hardware on which that software is operable. Costs and other features of the hardware are presented in detail. Finally, the many different ways in which KBTAC can assist in developing expert systems are discussed. This assistance ranges from phone calls to assistance at KBTAC's site or at your utility.
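
    The inferencing process this abstract describes, in which data lead forward to conclusions via rules, can be sketched as a minimal forward-chaining loop. This is an illustrative sketch only, not code from the KBTAC seminar; the facts and rules (a hypothetical pump-leak diagnosis) are invented for the example:

```python
# Minimal forward-chaining sketch: each rule maps a set of premise facts
# to a conclusion. Fact and rule names here are hypothetical examples.
rules = [
    ({"low_pressure", "pump_running"}, "suspect_leak"),
    ({"suspect_leak", "fluid_on_floor"}, "confirmed_leak"),
]

def forward_chain(facts):
    """Repeatedly fire any rule whose premises are all known facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"low_pressure", "pump_running", "fluid_on_floor"})
```

    Backward chaining, also mentioned in the abstract, would instead start from a goal conclusion and search for rules whose premises can be established.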

  7. Error Generation in CATS-Based Agents

    NASA Technical Reports Server (NTRS)

    Callantine, Todd

    2003-01-01

    This research presents a methodology for generating errors from a model of nominally preferred correct operator activities, given a particular operational context, and maintaining an explicit link to the erroneous contextual information to support analyses. It uses the Crew Activity Tracking System (CATS) model as the basis for error generation. This report describes how the process works, and how it may be useful for supporting agent-based system safety analyses. The report presents results obtained by applying the error-generation process and discusses implementation issues. The research is supported by the System-Wide Accident Prevention Element of the NASA Aviation Safety Program.

  8. Simulation-Based Rule Generation Considering Readability

    PubMed Central

    Yahagi, H.; Shimizu, S.; Ogata, T.; Hara, T.; Ota, J.

    2015-01-01

    A rule generation method is proposed for an aircraft control problem in an airport. Designing appropriate rules for the motion coordination of taxiing aircraft, which is conducted by ground control, is important. However, previous studies did not consider the readability of rules, which matters because the rules must be operated and maintained by humans. Therefore, in this study, using an indicator of readability, we propose a method of rule generation based on parallel algorithm discovery and orchestration (PADO). Applied to the aircraft control problem, the proposed algorithm generates more readable and more robust rules and is found to be superior to previous methods. PMID:27347501

  9. Pneumatic tire-based piezoelectric power generation

    NASA Astrophysics Data System (ADS)

    Makki, Noaman; Pop-Iliev, Remon

    2011-03-01

    Plug-in Hybrid Electric Vehicles (PHEVs) and Extended Range Electric Vehicles (EREVs) currently mainly rely on Internal Combustion Engines (ICE) utilizing conventional fuels to recharge batteries in order to extend their range. Even though Piezo-based power generation devices have surfaced in recent years harvesting vibration energy, their output has only been sufficient to power up sensors and other such smaller devices. The permanent need for a cleaner power generation technique still remains. This paper investigates the possibility of using piezoceramics for power generation within the vehicle's wheel assembly by exploiting the rotational motion of the wheel and the continuously variable contact point between the pneumatic tire and the road.

  10. Using affective knowledge to generate and validate a set of emotion-related, action words.

    PubMed

    Portch, Emma; Havelka, Jelena; Brown, Charity; Giner-Sorolla, Roger

    2015-01-01

    Emotion concepts are built through situated experience. Abstract word meaning is grounded in this affective knowledge, giving words the potential to evoke emotional feelings and reactions (e.g., Vigliocco et al., 2009). In the present work we explore whether words differ in the extent to which they evoke 'specific' emotional knowledge. Using a categorical approach, in which an affective 'context' is created, it is possible to assess whether words proportionally activate knowledge relevant to different emotional states (e.g., 'sadness', 'anger'; Stevenson, Mikels & James, 2007a). We argue that this method may be particularly effective when assessing the emotional meaning of action words (e.g., Schacht & Sommer, 2009). In study 1 we use a constrained feature generation task to derive a set of action words that participants associated with six basic emotional states (see full list in Appendix S1). Generation frequencies were taken to indicate the likelihood that a word would evoke emotional knowledge relevant to the state with which it had been paired. In study 2 a rating task was used to assess the strength of association between the six most frequently generated, or 'typical', action words and the corresponding emotion labels. Participants were presented with a series of sentences in which action words (typical and atypical) and labels were paired, e.g., "If you are feeling 'sad', how likely would you be to act in the following way?" … 'cry.' Findings suggest that typical associations were robust. Participants always gave higher ratings to typical vs. atypical action word and label pairings, even when (a) rating direction was manipulated (the label or verb appeared first in the sentence), and (b) the typical behaviours were to be performed by the raters themselves, or others. Our findings suggest that emotion-related action words vary in the extent to which they evoke knowledge relevant for different emotional states. When measuring affective grounding, it may then be

  11. KBGIS-2: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, T.; Peuquet, D.; Menon, S.; Agarwal, P.

    1986-01-01

    The architecture and working of a recently implemented knowledge-based geographic information system (KBGIS-2) that was designed to satisfy several general criteria for the geographic information system are described. The system has four major functions that include query-answering, learning, and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial objects language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is currently performing all its designated tasks successfully, although currently implemented on inadequate hardware. Future reports will detail the performance characteristics of the system, and various new extensions are planned in order to enhance the power of KBGIS-2.
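
    The quadtree form mentioned above can be illustrated with a generic region quadtree, which recursively subdivides a grid until each quadrant is uniform. This is a sketch of the general data structure, not KBGIS-2's actual implementation:

```python
# Generic region-quadtree sketch: subdivide a square boolean grid until
# each quadrant holds a single value. A leaf is that value; an internal
# node is a list of four children [NW, NE, SW, SE].
def build(grid, x, y, size):
    vals = {grid[y + j][x + i] for j in range(size) for i in range(size)}
    if len(vals) == 1:               # uniform quadrant: collapse to a leaf
        return vals.pop()
    h = size // 2                    # mixed quadrant: split into four
    return [build(grid, x,     y,     h), build(grid, x + h, y,     h),
            build(grid, x,     y + h, h), build(grid, x + h, y + h, h)]

grid = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 1, 0, 0],
        [1, 1, 0, 0]]
tree = build(grid, 0, 0, 4)   # → [0, 1, [0, 1, 1, 1], 0]
```

    Uniform regions collapse to single nodes, which is what makes quadtrees compact for large, multilayered spatial data bases.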

  12. KBGIS-II: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, Terence; Peuquet, Donna; Menon, Sudhakar; Agarwal, Pankaj

    1986-01-01

    The architecture and working of a recently implemented Knowledge-Based Geographic Information System (KBGIS-II), designed to satisfy several general criteria for the GIS, is described. The system has four major functions including query-answering, learning and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial object language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is performing all its designated tasks successfully. Future reports will relate performance characteristics of the system.

  13. Knowledge representation to support reasoning based on multiple models

    NASA Technical Reports Server (NTRS)

    Gillam, April; Seidel, Jorge P.; Parker, Alice C.

    1990-01-01

    Model Based Reasoning is a powerful tool used to design and analyze systems, which are often composed of numerous interactive, interrelated subsystems. Models of the subsystems are written independently and may be used together while they are still under development. Thus the models are not static. They evolve as information becomes obsolete, as improved artifact descriptions are developed, and as system capabilities change. Researchers are using three methods to support knowledge/data base growth, to track the model evolution, and to handle knowledge from diverse domains. First, the representation methodology is based on having pools, or types, of knowledge from which each model is constructed. In addition information is explicit. This includes the interactions between components, the description of the artifact structure, and the constraints and limitations of the models. The third principle we have followed is the separation of the data and knowledge from the inferencing and equation solving mechanisms. This methodology is used in two distinct knowledge-based systems: one for the design of space systems and another for the synthesis of VLSI circuits. It has facilitated the growth and evolution of our models, made accountability of results explicit, and provided credibility for the user community. These capabilities have been implemented and are being used in actual design projects.

  14. Knowledge Management System Based on Web 2.0 Technologies

    NASA Astrophysics Data System (ADS)

    Jimenez, Guillermo; Barradas, Carlos

    Most research work on knowledge management systems has addressed knowledge representation, storage, and retrieval. However, user interaction has suffered from the same limitations faced by most current Web-based systems. Web 2.0 technologies bring completely new elements that make it possible to design user interfaces similar to those that could be built in the windowing environments of current desktop platforms. These technologies open new possibilities to enhance the user experience when working with Web-based applications. This chapter shows how Web 2.0 technologies could be used to design user interaction in a knowledge management system. The details presented could be useful for improving online interaction with Web-based support systems (WSS) in other application domains.

  15. TVS: An Environment For Building Knowledge-Based Vision Systems

    NASA Astrophysics Data System (ADS)

    Weymouth, Terry E.; Amini, Amir A.; Tehrani, Saeid

    1989-03-01

    Advances in the field of knowledge-guided computer vision require the development of large scale projects and experimentation with them. One factor which impedes such development is the lack of software environments which combine standard image processing and graphics abilities with the ability to perform symbolic processing. In this paper, we describe a software environment that assists in the development of knowledge-based computer vision projects. We have built, upon Common LISP and C, a software development environment which combines standard image processing tools and a standard blackboard-based system, with the flexibility of the LISP programming environment. This environment has been used to develop research projects in knowledge-based computer vision and dynamic vision for robot navigation.

  16. A knowledge-based approach to software development

    SciTech Connect

    White, D.A.

    1995-09-01

    Traditional software development consists of many knowledge intensive and intellectual activities related to understanding a problem to be solved and designing a solution to that problem. These activities are informal, subjective, and undocumented and are the same for original development and subsequent support. Since 1982, the USAF Rome Laboratory has been developing the Knowledge-Based Software Assistant (KBSA), a revolutionary new paradigm for software development that will achieve orders of magnitude improvement in productivity and quality. KBSA does not pursue the improvement of traditional technologies or methodologies such as new programming languages and management procedures to fulfill this objective, but has instead adopted a revolutionary new approach. KBSA is a knowledge-based, computer-mediated paradigm for the evolutionary definition, specification, development, and long-term support of software. The computer becomes an 'intelligent partner' and 'corporate memory' in this paradigm, formally capturing the appropriate knowledge and actively using this knowledge to provide assistance and automation. The productivity of developers will dramatically improve because of the increased assistance, automation and re-utilization of domain and programming knowledge. The quality of software, both correctness and satisfying requirements, will also improve because the development process is formal and easier to use.

  17. Effects of Delays on 6-Year-Old Children's Self-Generation and Retention of Knowledge through Integration

    ERIC Educational Resources Information Center

    Varga, Nicole L.; Bauer, Patricia J.

    2013-01-01

    The current research was an investigation of the effect of delay on self-generation and retention of knowledge derived through integration by 6-year-old children. Children were presented with novel facts from passages read aloud to them (i.e., "stem" facts) and tested for self-generation of new knowledge through integration of the facts. In…

  18. A knowledge-based system for prototypical reasoning

    NASA Astrophysics Data System (ADS)

    Lieto, Antonio; Minieri, Andrea; Piana, Alberto; Radicioni, Daniele P.

    2015-04-01

    In this work we present a knowledge-based system equipped with a hybrid, cognitively inspired architecture for the representation of conceptual information. The proposed system aims at extending the classical representational and reasoning capabilities of the ontology-based frameworks towards the realm of the prototype theory. It is based on a hybrid knowledge base, composed of a classical symbolic component (grounded on a formal ontology) with a typicality based one (grounded on the conceptual spaces framework). The resulting system attempts to reconcile the heterogeneous approach to the concepts in Cognitive Science with the dual process theories of reasoning and rationality. The system has been experimentally assessed in a conceptual categorisation task where common sense linguistic descriptions were given in input, and the corresponding target concepts had to be identified. The results show that the proposed solution substantially extends the representational and reasoning 'conceptual' capabilities of standard ontology-based systems.

  19. Database and knowledge base integration in decision support systems.

    PubMed Central

    Johansson, B.; Shahsavar, N.; Ahlfeldt, H.; Wigertz, O.

    1996-01-01

    Since decision support systems (DSS) in medicine often are linked to clinical databases it is important to find methods that facilitate the work for DSS developers to implement database queries in the knowledge base (KB). This paper presents a method for linking clinical databases to a KB with Arden Syntax modules. The method is based on a query meta database including templates for SQL queries. During knowledge module authoring the medical expert only refers to a code in the query meta database. Our method uses standard tools so it can be implemented on different platforms and linked to different clinical databases. PMID:8947666
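
    The query meta database described above can be sketched as a simple lookup: a knowledge module refers to a query only by code, while the SQL template lives in one central place. This is an illustrative sketch, not the authors' implementation; the query code, template, and table names are hypothetical:

```python
# Sketch of a query meta database: knowledge modules reference a query
# by code; the SQL template is maintained centrally. The code, template,
# and table/column names below are hypothetical examples.
QUERY_META = {
    "Q_LAST_GLUCOSE": (
        "SELECT value FROM lab_results "
        "WHERE patient_id = ? AND test = 'glucose' "
        "ORDER BY taken_at DESC LIMIT 1"
    ),
}

def resolve(code):
    """Return the SQL template that a knowledge module's code refers to."""
    return QUERY_META[code]

sql = resolve("Q_LAST_GLUCOSE")
```

    Keeping templates in one place means a schema change is fixed once in the meta database rather than in every knowledge module that uses the query.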

  20. Reducing a Knowledge-Base Search Space When Data Are Missing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that without heuristics generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
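
    The abstract's example (16 rules yielding 65,537 scenarios) corresponds to a search space of 1 + 2^n. One reading of that count, assumed here for illustration, is that each of the n rules may or may not apply (2^n combinations) plus one baseline scenario:

```python
# Unheuristic search-space size for n rules: every subset of rules may
# apply (2**n combinations), plus one baseline scenario.
def scenario_count(n_rules):
    return 1 + 2 ** n_rules

print(scenario_count(16))   # the abstract's example: 16 rules -> 65537
```

    The growth is exponential, which is why the software's heuristics are needed to prune the space to a manageable size.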

  1. The Generation of Social Education Knowledge and the Problem of Relativism.

    ERIC Educational Resources Information Center

    Maxcy, Spencer J.

    What is deemed to be socially relevant knowledge as it comes from social educational theorists and inquirers is not a singular conception. The prevailing notion that only pluralistic and relativist, or positivist epistemological concepts of truth adequately capture social education inquiry and products, and that claims to human action based on…

  2. Polynomial driven time base and PN generator

    NASA Technical Reports Server (NTRS)

    Brokl, S. S.

    1983-01-01

    In support of the planetary radar upgrade new hardware was designed to increase resolution and take advantage of new technology. Included is a description of the Polynomial Driven Time Base and PN Generator which is used for range gate coding in the planetary radar system.

  3. Knowledge-based simulation using object-oriented programming

    NASA Technical Reports Server (NTRS)

    Sidoran, Karen M.

    1993-01-01

    Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.
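
    The object-oriented formulation described above, in which factual knowledge is an entity's attributes and behavioral knowledge is its methods, can be sketched as follows. This is a generic illustration, not RASE code; the Tank entity and its fuel-burn rule are hypothetical:

```python
# Sketch of the object-oriented view of a simulation entity: factual
# knowledge as attributes, behavioral knowledge as methods. The Tank
# entity and its numbers are hypothetical examples.
class Tank:
    def __init__(self, fuel, speed):
        self.fuel = fuel          # factual knowledge: attributes
        self.speed = speed

    def step(self, minutes):
        """Behavioral knowledge: how the entity acts each simulated tick."""
        burn = 0.5 * minutes      # assumed burn rate of 0.5 units/minute
        if self.fuel < burn:      # out of fuel: the entity halts
            self.fuel, self.speed = 0, 0
        else:
            self.fuel -= burn

t = Tank(fuel=10, speed=30)
t.step(minutes=4)                 # burns 2 units; t.fuel is now 8
```

    In a discrete-event simulation, the scheduler would invoke each entity's behavioral methods as its events come due.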

  4. Developing a Knowledge Base and Taxonomy in Instructional Technology.

    ERIC Educational Resources Information Center

    Caffarella, Edward P.; Fly, Kenneth

    The purpose of this study was to test the feasibility of using a model adapted from the instructional design and technology (ID&T) taxonomy model proposed by the Association for Educational Communications and Technology (AECT) Definitions and Terminology Committee to build an ID&T knowledge base. The model was tested by mapping a random sample of…

  5. Knowledge Based Engineering for Spatial Database Management and Use

    NASA Technical Reports Server (NTRS)

    Peuquet, D. (Principal Investigator)

    1984-01-01

    The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) are examined. Questions involving the performance and modification to the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.

  6. Cataloging and Expert Systems: AACR2 as a Knowledge Base.

    ERIC Educational Resources Information Center

    Hjerppe, Roland; Olander, Birgitta

    1989-01-01

    Describes a project that developed two expert systems for library cataloging using the second edition of the Anglo American Cataloging Rules (AACR2) as a knowledge base. The discussion covers cataloging as interpretation, the structure of AACR2, and the feasibility of using expert systems for cataloging in traditional library settings. (26…

  7. Planning and Implementing a High Performance Knowledge Base.

    ERIC Educational Resources Information Center

    Cortez, Edwin M.

    1999-01-01

    Discusses the conceptual framework for developing a rapid-prototype high-performance knowledge base for the four mission agencies of the United States Department of Agriculture and their university partners. Describes the background of the project and methods used for establishing the requirements; examines issues and problems surrounding semantic…

  8. CACTUS: Command and Control Training Using Knowledge-Based Simulations

    ERIC Educational Resources Information Center

    Hartley, Roger; Ravenscroft, Andrew; Williams, R. J.

    2008-01-01

    The CACTUS project was concerned with command and control training of large incidents where public order may be at risk, such as large demonstrations and marches. The training requirements and objectives of the project are first summarized justifying the use of knowledge-based computer methods to support and extend conventional training…

  9. PLAN-IT - Knowledge-based mission sequencing

    NASA Technical Reports Server (NTRS)

    Biefeld, Eric W.

    1987-01-01

    PLAN-IT (Plan-Integrated Timelines), a knowledge-based approach to assist in mission sequencing, is discussed. PLAN-IT uses a large set of scheduling techniques known as strategies to develop and maintain a mission sequence. The approach implemented by PLAN-IT and the current applications of PLAN-IT for sequencing at NASA are reported.

  10. Value Creation in the Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Liu, Fang-Chun

    2013-01-01

    Effective investment strategies help companies form dynamic core organizational capabilities allowing them to adapt and survive in today's rapidly changing knowledge-based economy. This dissertation investigates three valuation issues that challenge managers with respect to developing business-critical investment strategies that can have…

  11. After the Crash: Research-Based Theater for Knowledge Transfer

    ERIC Educational Resources Information Center

    Colantonio, Angela; Kontos, Pia C.; Gilbert, Julie E.; Rossiter, Kate; Gray, Julia; Keightley, Michelle L.

    2008-01-01

    Introduction: The aim of this project was to develop and evaluate a research-based dramatic production for the purpose of transferring knowledge about traumatic brain injury (TBI) to health care professionals, managers, and decision makers. Methods: Using results drawn from six focus group discussions with key stakeholders (consumers, informal…

  12. Integrating knowledge based functionality in commercial hospital information systems.

    PubMed

    Müller, M L; Ganslandt, T; Eich, H P; Lang, K; Ohmann, C; Prokosch, H U

    2000-01-01

    Successful integration of knowledge-based functions in the electronic patient record depends on direct and context-sensitive accessibility and availability to clinicians and must suit their workflow. In this paper we describe an exemplary integration of an existing standalone scoring system for acute abdominal pain into two different commercial hospital information systems using Java/CORBA technology.

  13. Ada as an implementation language for knowledge based systems

    NASA Technical Reports Server (NTRS)

    Rochowiak, Daniel

    1990-01-01

    Debates about the selection of programming languages often produce cultural collisions that are not easily resolved. This is especially true in the case of Ada and knowledge based programming. The construction of programming tools provides a desirable alternative for resolving the conflict.

  14. An Empirical Analysis of Knowledge Based Hypertext Navigation

    PubMed Central

    Snell, J.R.; Boyle, C.

    1990-01-01

    Our purpose is to investigate the effectiveness of knowledge-based navigation in a dermatology hypertext network. The chosen domain is a set of dermatology class notes implemented in Hypercard and SINS. The study measured the time, number of moves, and success rates of subjects finding solutions to ten questions. The subjects were required to navigate within a dermatology hypertext network in order to find the solution to each question. Our results indicate that knowledge-based navigation can assist the user in finding information of interest in fewer node visits (moves) than traditional button-based browsing or keyword searching. The time necessary to find an item of interest was lower for the traditional methods. There was no difference in success rates between the two test groups.

  15. 'Medical Knowledge' and 'Tradition' of Colonial Korea: Focused on Kudo's "Gynecology"-based Knowledge.

    PubMed

    Hong, Yang Hee

    2013-08-01

    This article attempts to illuminate the ways in which Kudo's medical knowledge based on 'gynecological science' constructed the cultural 'traditions' of colonial Korea. Kudo appears to have been quite an influential figure in colonial Korea in that his writings on the relationship between women's crime, gynecological science and Chosŏn society carried a significant amount of intellectual authority. Here, I examine Kudo's position within colonial Korea as a producer and propagator of medical knowledge, and then see how women's bodies were understood according to his gynecological knowledge. It also traces the ways in which Kudo's gynecological knowledge represents Chosŏn society and in turn invents the 'traditions' of Chosŏn. Kudo's knowledge of "gynecology," which had been formed as it traveled through states such as Japan, Germany and France, served as an important reference for his representation of colonial Korean society. Kudo was a proponent of biological evolution, particularly the rules of 'atavism' put forth by the criminal anthropologist Cesare Lombroso, and argued that a unique social environment caused 'alteration of sexual urges' and primitive cruelty in Chosŏn women. According to Kudo, the social environment was none other than the practice of 'early marriage,' which went against the physiology of women. To Kudo, 'early marriage' was an old 'tradition' of Chosŏn and the cause of heinous crimes, as well as an unmistakable indicator of both the primitiveness and savageness of Chosŏn. While Lombroso considered personal factors such as stress as the cause of women's crimes, Kudo saw Chosŏn women's crimes as a national characteristic. Moreover, he compared the occurrence rate of husband murders by provinces, based on which he categorized the northern population of Chosŏn as barbaric Manchurian and the southern population as the superior Japanese, a combination of racism and scientific knowledge. Kudo's writings provide an insight into the

  17. Recognition mechanisms for schema-based knowledge representations

    SciTech Connect

    Havens, W.S.

    1983-01-01

    The author considers generalizing formal recognition methods from parsing theory to schemata knowledge representations. Within artificial intelligence, recognition tasks include aspects of natural language understanding, computer vision, episode understanding, speech recognition, and others. The notion of schemata as a suitable knowledge representation for these tasks is discussed. A number of problems with current schemata-based recognition systems are presented. To gain insight into alternative approaches, the formal context-free parsing method of Earley is examined. It is shown to suggest a useful control structure model for integrating top-down and bottom-up search in schemata representations. 46 references.
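    The Earley control structure the abstract points to, alternating top-down prediction with bottom-up scanning and completion, can be shown with a compact recognizer. The toy grammar and sentence below are invented for illustration; they are not from the cited work.

```python
# Hedged sketch of an Earley recognizer: predict (top-down), scan, and
# complete (bottom-up) steps over a chart of dotted-rule states.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["det", "noun"], ["noun"]],
    "VP": [["verb", "NP"]],
}

def earley_recognize(tokens, start="S"):
    # A state is (head, body, dot position, origin position).
    chart = [set() for _ in range(len(tokens) + 1)]
    for body in GRAMMAR[start]:
        chart[0].add((start, tuple(body), 0, 0))
    for i in range(len(tokens) + 1):
        agenda = list(chart[i])
        while agenda:
            head, body, dot, origin = agenda.pop()
            if dot < len(body):
                nxt = body[dot]
                if nxt in GRAMMAR:                       # predict
                    for prod in GRAMMAR[nxt]:
                        st = (nxt, tuple(prod), 0, i)
                        if st not in chart[i]:
                            chart[i].add(st); agenda.append(st)
                elif i < len(tokens) and tokens[i] == nxt:  # scan
                    chart[i + 1].add((head, body, dot + 1, origin))
            else:                                        # complete
                for h2, b2, d2, o2 in list(chart[origin]):
                    if d2 < len(b2) and b2[d2] == head:
                        st = (h2, b2, d2 + 1, o2)
                        if st not in chart[i]:
                            chart[i].add(st); agenda.append(st)
    return any(h == start and d == len(b) and o == 0
               for h, b, d, o in chart[len(tokens)])

print(earley_recognize(["det", "noun", "verb", "noun"]))  # True
```

    The relevance to schemata is the control structure: prediction plays the role of schema expectation, and completion propagates confirmed sub-schemata upward.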

  18. Integrating knowledge and control into hypermedia-based training environments: Experiments with HyperCLIPS

    NASA Technical Reports Server (NTRS)

    Hill, Randall W., Jr.

    1990-01-01

    The issues of knowledge representation and control in hypermedia-based training environments are discussed. The main objective is to integrate the flexible presentation capability of hypermedia with a knowledge-based approach to lesson discourse management. The instructional goals and their associated concepts are represented in a knowledge representation structure called a 'concept network'. Its functional usages are many: it is used to control the navigation through a presentation space, generate tests for student evaluation, and model the student. This architecture was implemented in HyperCLIPS, a hybrid system that creates a bridge between HyperCard, a popular hypertext-like system used for building user interfaces to data bases and other applications, and CLIPS, a highly portable government-owned expert system shell.

  19. The Influence of Self-Regulated Learning and Prior Knowledge on Knowledge Acquisition in Computer-Based Learning Environments

    ERIC Educational Resources Information Center

    Bernacki, Matthew

    2010-01-01

    This study examined how learners construct textbase and situation model knowledge in hypertext computer-based learning environments (CBLEs) and documented the influence of specific self-regulated learning (SRL) tactics, prior knowledge, and characteristics of the learner on posttest knowledge scores from exposure to a hypertext. A sample of 160…

  20. ISPE: A knowledge-based system for fluidization studies

    SciTech Connect

    Reddy, S.

    1991-01-01

    Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all specified "goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
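    The prepare/execute/analyze/modify loop described in steps (1) through (3) can be sketched as follows. The simulator, goal test, and adjustment rule here are invented stand-ins; IPSE itself wraps ASPEN and is written in C, not Python.

```python
# Hedged sketch of the iterative simulation-refinement loop: run, check
# goals, revise inputs, repeat until satisfied or out of attempts.
def run_simulator(params):
    # stand-in for writing the input file and executing the target simulator
    return {"yield": 0.1 * params["temperature"]}

def goals_met(results, target_yield):
    return results["yield"] >= target_yield

def adjust(params):
    # stand-in for knowledge-based rules that revise the input data
    return dict(params, temperature=params["temperature"] + 50)

def iterate(params, target_yield, max_runs=20):
    for run in range(1, max_runs + 1):
        results = run_simulator(params)        # step (2): execute
        if goals_met(results, target_yield):   # step (3): analyze
            return run, params, results
        params = adjust(params)                # modify inputs and repeat
    raise RuntimeError("goals not met within max_runs")

run, params, results = iterate({"temperature": 300}, target_yield=39.9)
print(run, params)
```

    The knowledge-based contribution of a system like IPSE lies in `adjust` and `goals_met`: the rules encoding what a modeler would change and why.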

  1. Design Study: Rocket Based MHD Generator

    NASA Technical Reports Server (NTRS)

    1997-01-01

    This report addresses the technical feasibility and design of a rocket based MHD generator using a sub-scale LOx/RP rocket motor. The design study was constrained by assuming the generator must function within the performance and structural limits of an existing magnet and by assuming realistic limits on (1) the axial electric field, (2) the Hall parameter, (3) current density, and (4) heat flux (given the criteria of heat sink operation). The major results of the work are summarized as follows: (1) A Faraday type of generator with rectangular cross section is designed to operate with a combustor pressure of 300 psi. Based on a magnetic field strength of 1.5 Tesla, the electrical power output from this generator is estimated to be 54.2 KW with potassium seed (weight fraction 3.74%) and 92 KW with cesium seed (weight fraction 9.66%). The former corresponds to an enthalpy extraction ratio of 2.36% while that for the latter is 4.16%; (2) A conceptual design of the Faraday MHD channel is proposed, based on a maximum operating time of 10 to 15 seconds. This concept utilizes a phenolic back wall for inserting the electrodes and inter-electrode insulators. Copper electrodes and aluminum oxide insulators are suggested for this channel; and (3) A testing configuration for the sub-scale rocket based MHD system is proposed. An estimate of performance of an ideal rocket based MHD accelerator is performed. With a current density constraint of 5 Amps/cm(exp 2) and a conductivity of 30 Siemens/m, the push power density can be 250, 431, and 750 MW/m(sup 3) when the induced voltage uB has values of 5, 10, and 15 KV/m, respectively.

  2. The Knowledge-Based Software Assistant: Beyond CASE

    NASA Technical Reports Server (NTRS)

    Carozzoni, Joseph A.

    1993-01-01

    This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.

  3. Knowledge acquisition for case-based reasoning systems

    NASA Technical Reports Server (NTRS)

    Riesbeck, Christopher K.

    1988-01-01

    Case-based reasoning (CBR) is a simple idea: solve new problems by adapting old solutions to similar problems. The CBR approach offers several potential advantages over rule-based reasoning: rules are not combined blindly in a search for solutions, solutions can be explained in terms of concrete examples, and performance can improve automatically as new problems are solved and added to the case library. Moving CBR from the university research environment to the real world requires smooth interfaces for getting knowledge from experts. Described are the basic elements of an interface for acquiring three basic bodies of knowledge that any case-based reasoner requires: the case library of problems and their solutions, the analysis rules that flesh out input problem specifications so that relevant cases can be retrieved, and the adaptation rules that adjust old solutions to fit new problems.
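    The retrieve-then-adapt cycle, plus the automatic learning the abstract mentions (new solutions joining the library), can be illustrated in a few lines. The case library, feature scheme, and single adaptation rule are invented for this sketch; a real CBR system would use the richer analysis and adaptation rules the abstract describes.

```python
# Hedged sketch of case-based reasoning: retrieve the most similar old
# case, adapt its solution, and store the new case in the library.
CASE_LIBRARY = [
    # (problem features, solution)
    ({"guests": 4, "formal": False}, {"dish": "pasta", "portions": 4}),
    ({"guests": 10, "formal": True}, {"dish": "roast", "portions": 10}),
]

def similarity(a, b):
    # count matching features, with a small penalty for numeric distance
    score = sum(1 for k in a if a.get(k) == b.get(k))
    return score - abs(a.get("guests", 0) - b.get("guests", 0)) / 100

def solve(problem):
    # retrieve the most similar old case ...
    old_problem, old_solution = max(
        CASE_LIBRARY, key=lambda case: similarity(problem, case[0]))
    # ... then adapt its solution to the new problem (one toy rule)
    new_solution = dict(old_solution, portions=problem["guests"])
    CASE_LIBRARY.append((problem, new_solution))  # learn the new case
    return new_solution

print(solve({"guests": 6, "formal": False}))
```

    Retrieval here finds the informal four-guest case and the adaptation rule rescales its portions, after which the six-guest case is itself available for future retrievals.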

  4. Spinning fantasy: themes, structure, and the knowledge base.

    PubMed

    Lucariello, J

    1987-04-01

    The influence of the child's knowledge base, in terms of event schemas, on symbolic play behavior was investigated. The pretend play behavior of 10 mother-child (2-0 to 2-4) dyads was observed in 2 play contexts. Play was examined for thematic content and the following structural components: self-other relations, substitute/imaginary objects, action integration, and planfulness. The highest levels of symbolic play behavior emerged in pretense episodes whose thematic content was event based. Additionally, thematic content affected the respective roles of mother and child in the construction of pretense. In pretense activity based on themes with which the child was familiar (e.g., routine events), the child, as well as the mother, participated in advanced levels of symbolic play activity, coconstructing pretense. In pretense based on themes unfamiliar to the child, the mother was almost exclusively responsible for the pretense. Thus, the development of child symbolic play appears to be related to the knowledge base in that its emergence is domain-specific--limited to themes for which the child has knowledge--before being more widely manifested. PMID:2435465

  5. Using the DOE Knowledge Base for Special Event Analysis

    SciTech Connect

    Armstrong, H.M.; Harris, J.M.; Young, C.J.

    1998-10-20

    The DOE Knowledge Base is a library of detailed information whose purpose is to support the United States National Data Center (USNDC) in its mission to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). One of the important tasks which the USNDC must accomplish is to periodically perform detailed analysis of events of high interest, so-called "Special Events", to provide the national authority with information needed to make policy decisions. In this paper we investigate some possible uses of the Knowledge Base for Special Event Analysis (SEA), and make recommendations for improving Knowledge Base support for SEA. To analyze an event in detail, there are two basic types of data which must be used: sensor-derived data (waveforms, arrivals, events, etc.) and regionalized contextual data (known sources, geological characteristics, etc.). Currently there is no single package which can provide full access to both types of data, so for our study we use a separate package for each: MatSeis, the Sandia Labs-developed MATLAB-based seismic analysis package, for waveform data analysis, and ArcView, an ESRI product, for contextual data analysis. Both packages are well-suited to prototyping because they provide a rich set of currently available functionality and yet are also flexible and easily extensible. Using these tools and Phase I Knowledge Base data sets, we show how the Knowledge Base can improve both the speed and the quality of SEA. Empirically-derived interpolated correction information can be accessed to improve both location estimates and associated error estimates. This information can in turn be used to identify any known nearby sources (e.g. mines, volcanoes), which may then trigger specialized processing of the sensor data. Based on the location estimate, preferred magnitude formulas and discriminants can be retrieved, and any known blockages can be identified to prevent miscalculations. Relevant historic events can be identified either by

  6. SAFOD Brittle Microstructure and Mechanics Knowledge Base (BM2KB)

    NASA Astrophysics Data System (ADS)

    Babaie, Hassan A.; Broda, Cindi M.; Hadizadeh, Jafar; Kumar, Anuj

    2013-07-01

    Scientific drilling near Parkfield, California has established the San Andreas Fault Observatory at Depth (SAFOD), which provides the solid earth community with short range geophysical and fault zone material data. The BM2KB ontology was developed in order to formalize the knowledge about brittle microstructures in the fault rocks sampled from the SAFOD cores. A knowledge base, instantiated from this domain ontology, stores and presents the observed microstructural and analytical data with respect to implications for brittle deformation and mechanics of faulting. These data can be searched on the knowledge base's Web interface by selecting a set of terms (classes, properties) from different drop-down lists that are dynamically populated from the ontology. In addition to this general search, a query can also be conducted to view data contributed by a specific investigator. A search by sample is done using the EarthScope SAFOD Core Viewer that allows a user to locate samples on high resolution images of core sections belonging to different runs and holes. The class hierarchy of the BM2KB ontology was initially designed using the Unified Modeling Language (UML), which was used as a visual guide to develop the ontology in OWL applying the Protégé ontology editor. Various Semantic Web technologies, such as the RDF, RDFS, and OWL ontology languages, the SPARQL query language, and the Pellet reasoning engine, were used to develop the ontology. An interactive Web application interface was developed through Jena, a Java-based framework, with AJAX technology, JSP pages, and Java servlets, and deployed via an Apache tomcat server. The interface allows the registered user to submit data related to their research on a sample of the SAFOD core. The submitted data, after initial review by the knowledge base administrator, are added to the extensible knowledge base and become available in subsequent queries to all types of users. The interface facilitates inference capabilities in the
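    The core operation behind a SPARQL-driven interface like the one described is matching triple patterns against an RDF store. The sketch below shows that idea over an in-memory list of triples; the sample identifiers and property names are invented, not taken from the BM2KB ontology.

```python
# Hedged sketch of triple-pattern matching, the basic-graph-pattern step
# that a SPARQL engine performs against an RDF knowledge base.
TRIPLES = [
    ("sample42", "rdf:type", "FaultRockSample"),
    ("sample42", "hasMicrostructure", "cataclasite"),
    ("sample42", "collectedFrom", "holeG_run4"),
    ("sample7", "rdf:type", "FaultRockSample"),
    ("sample7", "hasMicrostructure", "gouge"),
]

def match(pattern):
    """Return variable bindings for one triple pattern.

    Terms starting with '?' are variables; anything else must match exactly.
    """
    results = []
    for triple in TRIPLES:
        bindings = {}
        if all(term == value or (term.startswith("?") and
                                 bindings.setdefault(term, value) == value)
               for term, value in zip(pattern, triple)):
            results.append(bindings)
    return results

# Roughly: SELECT ?s WHERE { ?s hasMicrostructure cataclasite }
print(match(("?s", "hasMicrostructure", "cataclasite")))
```

    A real SPARQL engine joins the bindings of several such patterns and, with a reasoner like Pellet, also matches triples entailed by the ontology rather than only those asserted.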

  7. Knowledge-based architecture for airborne mine and minefield detection

    NASA Astrophysics Data System (ADS)

    Agarwal, Sanjeev; Menon, Deepak; Swonger, C. W.

    2004-09-01

    One of the primary lessons learned from airborne mid-wave infrared (MWIR) based mine and minefield detection research and development over the last few years has been the fact that no single algorithm or static detection architecture is able to meet mine and minefield detection performance specifications. This is true not only because of the highly varied environmental and operational conditions under which an airborne sensor is expected to perform but also due to the highly data dependent nature of sensors and algorithms employed for detection. Attempts to make the algorithms themselves more robust to varying operating conditions have only been partially successful. In this paper, we present a knowledge-based architecture to tackle this challenging problem. The detailed algorithm architecture is discussed for such a mine/minefield detection system, with a description of each functional block and data interface. This dynamic and knowledge-driven architecture will provide more robust mine and minefield detection for a highly multi-modal operating environment. The acquisition of the knowledge for this system is predominantly data driven, incorporating not only the analysis of historical airborne mine and minefield imagery data collection, but also other "all source data" that may be available such as terrain information and time of day. This "all source data" is extremely important and embodies causal information that drives the detection performance. This information is not being used by current detection architectures. Data analysis for knowledge acquisition will facilitate better understanding of the factors that affect the detection performance and will provide insight into areas for improvement for both sensors and algorithms. Important aspects of this knowledge-based architecture, its motivations and the potential gains from its implementation are discussed, and some preliminary results are presented.

  8. Local knowledge in community-based approaches to medicinal plant conservation: lessons from India

    PubMed Central

    Shukla, Shailesh; Gardner, James

    2006-01-01

    Background Community-based approaches to conservation of natural resources, in particular medicinal plants, have attracted attention of governments, non-governmental organizations and international funding agencies. This paper highlights the community-based approaches used by an Indian NGO, the Rural Communes Medicinal Plant Conservation Centre (RCMPCC). The RCMPCC recognized and legitimized the role of local medicinal knowledge along with other knowledge systems to a wider audience, i.e. higher levels of government. Methods Besides a review of relevant literature, the research used a variety of qualitative techniques, such as semi-structured, in-depth interviews and participant observations in one of the project sites of RCMPCC. Results The review of local medicinal plant knowledge systems reveals that even though medicinal plants and associated knowledge systems (particularly local knowledge) are gaining wider recognition at the global level, the efforts to recognize and promote the un-codified folk systems of medicinal knowledge are still inadequate. In a country like India, such neglect is evident through the lack of legal recognition and supporting policies. On the other hand, community-based approaches like local healers' workshops or village biologist programs implemented by RCMPCC are useful in combining both local (folk and codified) and formal systems of medicine. Conclusion Despite the high reliance on the local medicinal knowledge systems for health needs in India, the formal policies and national support structures are inadequate for traditional systems of medicine and almost absent for folk medicine. On the other hand, NGOs like the RCMPCC have demonstrated that community-based and local approaches such as local healers' workshops and the village biologist program can synergistically forge linkages between local knowledge and the formal sciences (in this case botany and ecology) and generate positive impacts at various levels. PMID:16603082

  9. Knowledge/geometry-based Mobile Autonomous Robot Simulator (KMARS)

    NASA Technical Reports Server (NTRS)

    Cheng, Linfu; Mckendrick, John D.; Liu, Jeffrey

    1990-01-01

    Ongoing applied research is focused on developing guidance systems for robot vehicles. Problems facing the basic research needed to support this development (e.g., scene understanding, real-time vision processing, etc.) are major impediments to progress. Due to the complexity and the unpredictable nature of a vehicle's area of operation, more advanced vehicle control systems must be able to learn about obstacles within the range of their sensor(s). A better understanding of the basic exploration process is needed to provide critical support to developers of both sensor systems and intelligent control systems which can be used in a wide spectrum of autonomous vehicles. Elcee Computek, Inc. has been working under contract to the Flight Dynamics Laboratory, Wright Research and Development Center, Wright-Patterson AFB, Ohio to develop a Knowledge/Geometry-based Mobile Autonomous Robot Simulator (KMARS). KMARS has two parts: a geometry base and a knowledge base. The knowledge base part of the system employs the expert-system shell CLIPS ('C' Language Integrated Production System) and the necessary rules that control both the vehicle's use of an obstacle-detecting sensor and the overall exploration process. The initial phase of the project has focused on the simulation of a point robot vehicle operating in a 2D environment.

  10. Elder knowledge and sustainable livelihoods in post-Soviet Russia: finding dialogue across the generations.

    PubMed

    Crate, Susan A

    2006-01-01

    Russia's indigenous peoples have been struggling with economic, environmental, and socio-cultural dislocation since the fall of the Soviet Union in 1991. In northern rural areas, the end of the Soviet Union most often meant the end of agro-industrial state farm operations that employed and fed surrounding rural populations. Most communities adapted to this loss by reinstating some form of pre-Soviet household-level food production based on hunting, fishing, and/or herding. However, mass media, globalization, and modernity challenge the intergenerational knowledge exchange that grounds subsistence practices. Parts of the circumpolar north have been relatively successful in valuing and integrating elder knowledge within their communities. This has not been the case in Russia. This article presents results of an elder knowledge project in northeast Siberia, Russia that shows how rural communities can both document and use elder knowledge to bolster local definitions of sustainability and, at the same time, initiate new modes of communication between village youth and elders.

  11. Measuring Knowledge Elaboration Based on a Computer-Assisted Knowledge Map Analytical Approach to Collaborative Learning

    ERIC Educational Resources Information Center

    Zheng, Lanqin; Huang, Ronghuai; Hwang, Gwo-Jen; Yang, Kaicheng

    2015-01-01

    The purpose of this study is to quantitatively measure the level of knowledge elaboration and explore the relationships between prior knowledge of a group, group performance, and knowledge elaboration in collaborative learning. Two experiments were conducted to investigate the level of knowledge elaboration. The collaborative learning objective in…

  12. NSIDC Knowledge Base: Using Knowledge Networking Tools to Help Data Users to Help Themselves

    NASA Astrophysics Data System (ADS)

    McAllister, M.; Tressel, S.

    2012-12-01

    In the age of information, scientists and non-scientists alike expect answers to their questions to be available on an LCD display with just a few clicks of a mouse. Over the past decade, NSIDC User Services has seen a sizable increase in total data users, with a growing percentage coming from non-science backgrounds. In order to meet the demands of so many curious minds and to better appeal to the diversifying user community, NSIDC User Services is in the process of utilizing professional helpdesk software to create the NSIDC Knowledge Base: a multimedia platform for supporting data users. Ultimately, searchable, referenced articles on common user problems and FAQs will appear beside video tutorials demonstrating how to use the data. Links to other data centers' user support departments will be offered when questions expand beyond the scope of NSIDC. The NSIDC Knowledge Base aims to be a resource allowing users to help themselves as well as a gateway to finding resources at related data centers.

  13. Distance learning, problem based learning and dynamic knowledge networks.

    PubMed

    Giani, U; Martone, P

    1998-06-01

    This paper is an attempt to develop a distance learning model grounded upon a strict integration of problem based learning (PBL), dynamic knowledge networks (DKN) and web tools, such as hypermedia documents, synchronous and asynchronous communication facilities, etc. The main objective is to develop a theory of distance learning based upon the idea that learning is a highly dynamic cognitive process aimed at connecting different concepts in a network of mutually supporting concepts. Moreover, this process is supposed to be the result of a social interaction that has to be facilitated by the web. The model was tested by creating a virtual classroom of medical and nursing students and activating a learning session on the concept of knowledge representation in health sciences.

  14. Network fingerprint: a knowledge-based characterization of biomedical networks.

    PubMed

    Cui, Xiuliang; He, Haochen; He, Fuchu; Wang, Shengqi; Li, Fei; Bo, Xiaochen

    2015-08-26

    It can be difficult for biomedical researchers to understand complex molecular networks due to their unfamiliarity with the mathematical concepts employed. To represent molecular networks with clear meanings and familiar forms for biomedical researchers, we introduce a knowledge-based computational framework to decipher biomedical networks by making systematic comparisons to well-studied "basic networks". A biomedical network is characterized as a spectrum-like vector called "network fingerprint", which contains similarities to basic networks. This knowledge-based multidimensional characterization provides a more intuitive way to decipher molecular networks, especially for large-scale network comparisons and clustering analyses. As an example, we extracted network fingerprints of 44 disease networks in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The comparisons among the network fingerprints of disease networks revealed informative disease-disease and disease-signaling pathway associations, illustrating that the network fingerprinting framework will lead to new approaches for better understanding of biomedical networks.
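    The fingerprint idea above can be illustrated with a minimal sketch (not the authors' code): a network is reduced to a vector of similarity scores against a set of well-studied basic networks. The similarity measure used here (Jaccard overlap of edge sets) is an assumption for illustration; the paper's actual measure may differ.

```python
def jaccard_similarity(edges_a, edges_b):
    """Jaccard overlap between two edge sets."""
    a, b = set(edges_a), set(edges_b)
    return len(a & b) / len(a | b) if a | b else 0.0

def network_fingerprint(network_edges, basic_networks):
    """Similarity vector of a network against each well-studied basic network."""
    return [jaccard_similarity(network_edges, basic) for basic in basic_networks]

# Toy example: a triangle compared against a triangle and a two-edge path.
triangle = [(1, 2), (2, 3), (1, 3)]
path = [(1, 2), (2, 3)]
print(network_fingerprint(triangle, [triangle, path]))
```

    Two networks with similar fingerprint vectors are then close in this knowledge-based space, which is what enables the large-scale comparisons and clustering analyses described.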

  15. Developing genomic knowledge bases and databases to support clinical management: current perspectives.

    PubMed

    Huser, Vojtech; Sincan, Murat; Cimino, James J

    2014-01-01

    Personalized medicine, the ability to tailor diagnostic and treatment decisions for individual patients, is seen as the evolution of modern medicine. We characterize here the informatics resources available today or envisioned in the near future that can support clinical interpretation of genomic test results. We assume a clinical sequencing scenario (germline whole-exome sequencing) in which a clinical specialist, such as an endocrinologist, needs to tailor patient management decisions within his or her specialty (targeted findings) but relies on a genetic counselor to interpret off-target incidental findings. We characterize the genomic input data and list various types of knowledge bases that provide genomic knowledge for generating clinical decision support. We highlight the need for patient-level databases with detailed lifelong phenotype content in addition to genotype data and provide a list of recommendations for personalized medicine knowledge bases and databases. We conclude that no single knowledge base can currently support all aspects of personalized recommendations and that consolidation of several current resources into larger, more dynamic and collaborative knowledge bases may offer a future path forward.

  16. A knowledge based model of electric utility operations. Final report

    SciTech Connect

    1993-08-11

    This report consists of an appendix to provide a documentation and help capability for an analyst using the developed expert system of electric utility operations running in CLIPS. This capability is provided through a separate package running under the WINDOWS Operating System and keyed to provide displays of text, graphics and mixed text and graphics that explain and elaborate on the specific decisions being made within the knowledge based expert system.

  17. Current and future trends in metagenomics : Development of knowledge bases

    NASA Astrophysics Data System (ADS)

    Mori, Hiroshi; Yamada, Takuji; Kurokawa, Ken

    Microbes are essential for every part of life on Earth. Numerous microbes inhabit the biosphere, many of which are uncharacterized or uncultivable. They form complex microbial communities that deeply affect the surrounding environment. Metagenome analysis provides a radically new way of examining such complex microbial communities without isolation or cultivation of individual community members. In this article, we present a brief discussion of metagenomics and the development of knowledge bases, and also discuss future trends in metagenomics.

  18. Generation of DNA nanocircles containing mismatched bases.

    PubMed

    Xiao, Yu; Jung, Caroline; Marx, Andreas D; Winkler, Ines; Wyman, Claire; Lebbink, Joyce H G; Friedhoff, Peter; Cristovao, Michele

    2011-10-01

    The DNA mismatch repair (MMR) system recognizes and repairs errors that escaped the proofreading function of DNA polymerases. To study molecular details of the MMR mechanism, in vitro biochemical assays require specific DNA substrates carrying mismatches and strand discrimination signals. Current approaches used to generate MMR substrates are time-consuming and/or not very flexible with respect to sequence context. Here we report an approach to generate small circular DNA containing a mismatch (nanocircles). Our method is based on the nicking of PCR products resulting in single-stranded 3' overhangs, which form DNA circles after annealing and ligation. Depending on the DNA template, one can generate mismatched circles containing a single hemimethylated GATC site (for use with the bacterial system) and/or nicking sites to generate DNA circles nicked in the top or bottom strand (for assays with the bacterial or eukaryotic MMR system). The size of the circles varied (323 to 1100 bp), their sequence was determined by the template DNA, and purification of the circles was achieved by ExoI/ExoIII digestion and/or gel extraction. The quality of the nanocircles was assessed by scanning-force microscopy and their suitability for in vitro repair initiation was examined using recombinant Escherichia coli MMR proteins.

  19. Knowledge-based system for the design of heat exchangers

    NASA Astrophysics Data System (ADS)

    Cochran, W. J.; Hainley, Don; Khartabil, Loay

    1993-03-01

    A knowledge based system has been developed to assist engineers in the design of compact heat exchangers. The main objectives of this project were to: (1) automate aspects of heat exchanger design; (2) produce multiple successful designs quickly; and (3) optimize these designs based on specific constraints or criteria. Productivity improvements from use of this system have been as much as two orders of magnitude. The design of heat exchangers is a time-consuming, iterative process. For a given set of requirements a design engineer uses his or her knowledge and experience to pick an initial design point and then calculates (with a large Fortran program) the performance for that design. If performance data do not meet requirements, various design parameters are modified and performance is calculated again. An expert system now embodies design expertise (rules for design decisions), allowing automation of this iterative process and substantial time savings for engineers. In addition, optimizing successful designs is now practical, whereas in the past it was generally infeasible due to the amount of labor involved. A configuration system was also developed that serves as a `front-end' for the design system. The configuration system matches design requirements to existing products and offers suggestions for initial design points. Both were developed with the KAPPA knowledge based system shell. The two KAPPA programs and the Fortran program for numerical calculations are integrated within a Windows 3.1 environment on a 486 PC.

  20. A proven knowledge-based approach to prioritizing process information

    NASA Technical Reports Server (NTRS)

    Corsberg, Daniel R.

    1991-01-01

    Many space-related processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect is rapid analysis of the changing process information. During a disturbance, this task can overwhelm humans as well as computers. Humans deal with this by applying heuristics in determining significant information. A simple, knowledge-based approach to prioritizing information is described. The approach models those heuristics that humans would use in similar circumstances. The approach described has received two patents and was implemented in the Alarm Filtering System (AFS) at the Idaho National Engineering Laboratory (INEL). AFS was first developed for application in a nuclear reactor control room. It has since been used in chemical processing applications, where it has had a significant impact on control room environments. The approach uses knowledge-based heuristics to analyze data from process instrumentation and respond to that data according to knowledge encapsulated in objects and rules. While AFS cannot perform the complete diagnosis and control task, it has proven to be extremely effective at filtering and prioritizing information. AFS was used for over two years as a first level of analysis for human diagnosticians. Given the approach's proven track record in a wide variety of practical applications, it should be useful in both ground- and space-based systems.

  1. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  2. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul

    1994-01-01

    This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called Declarative, and Hybrid. Software architectures were sought, employing the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common over a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered that particular processing methods from the stochastic and knowledge-based worlds are duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.

  3. Embedded knowledge-based system for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Aboutalib, A. O.

    1990-10-01

    The development of a reliable Automatic Target Recognition (ATR) system is considered a very critical and challenging problem. Existing ATR systems have inherent limitations in terms of recognition performance and the ability to learn and adapt. Artificial intelligence techniques have the potential to improve the performance of ATR systems. In this paper, we present a novel knowledge-engineering tool, termed the Automatic Reasoning Process (ARP), that can be used to automatically develop and maintain a knowledge base (K-B) for ATR systems. In its learning mode, the ARP utilizes learning samples to automatically develop the ATR K-B, which consists of minimum-size sets of necessary and sufficient conditions for each target class. In its operational mode, the ARP infers the target class from sensor data using the ATR K-B system. The ARP also has the capability to reason under uncertainty, and can support both statistical and model-based approaches to ATR development. The capabilities of the ARP are compared and contrasted with those of another knowledge-engineering tool, termed the Automatic Rule Induction (ARI), which is based on maximizing mutual information. The ARP has been implemented in LISP on a VAX-GPX workstation.

  4. Virus-based piezoelectric energy generation

    NASA Astrophysics Data System (ADS)

    Lee, Byung Yang; Zhang, Jinxing; Zueger, Chris; Chung, Woo-Jae; Yoo, So Young; Wang, Eddie; Meyer, Joel; Ramesh, Ramamoorthy; Lee, Seung-Wuk

    2012-06-01

    Piezoelectric materials can convert mechanical energy into electrical energy, and piezoelectric devices made of a variety of inorganic materials and organic polymers have been demonstrated. However, synthesizing such materials often requires toxic starting compounds, harsh conditions and/or complex procedures. Previously, it was shown that hierarchically organized natural materials such as bones, collagen fibrils and peptide nanotubes can display piezoelectric properties. Here, we demonstrate that the piezoelectric and liquid-crystalline properties of M13 bacteriophage (phage) can be used to generate electrical energy. Using piezoresponse force microscopy, we characterize the structure-dependent piezoelectric properties of the phage at the molecular level. We then show that self-assembled thin films of phage can exhibit piezoelectric strengths of up to 7.8 pm V-1. We also demonstrate that it is possible to modulate the dipole strength of the phage, hence tuning the piezoelectric response, by genetically engineering the major coat proteins of the phage. Finally, we develop a phage-based piezoelectric generator that produces up to 6 nA of current and 400 mV of potential and use it to operate a liquid-crystal display. Because biotechnology techniques enable large-scale production of genetically modified phages, phage-based piezoelectric materials potentially offer a simple and environmentally friendly approach to piezoelectric energy generation.

  5. An Analysis of Three Different Approaches to Student Teacher Mentoring and Their Impact on Knowledge Generation in Practicum Settings

    ERIC Educational Resources Information Center

    Mena, Juanjo; García, Marisa; Clarke, Anthony; Barkatsas, Anastasios

    2016-01-01

    Mentoring in Teacher Education is a key component in the professional development of student teachers. However, little research focuses on the knowledge shared and generated in mentoring conversations. In this paper, we explore the knowledge student teachers articulate in mentoring conversations under three different post-lesson approaches to…

  6. Shapelearner: Towards Shape-Based Visual Knowledge Harvesting

    NASA Astrophysics Data System (ADS)

    Wang, Zheng; Liang, Ti

    2016-06-01

    The explosion of images on the Web has led to a number of efforts to organize images semantically and compile collections of visual knowledge. While there has been enormous progress on categorizing entire images or bounding boxes, only a few studies have targeted fine-grained image understanding at the level of specific shape contours. For example, given an image of a cat, we would like a system to not merely recognize the existence of a cat, but also to distinguish between the cat's legs, head, tail, and so on. In this paper, we present ShapeLearner, a system that acquires such visual knowledge about object shapes and their parts. ShapeLearner jointly learns this knowledge from sets of segmented images. The space of label and segmentation hypotheses is pruned and then evaluated using Integer Linear Programming. ShapeLearner places the resulting knowledge in a semantic taxonomy based on WordNet and is able to exploit this hierarchy in order to analyze new kinds of objects that it has not observed before. We conduct experiments using a variety of shape classes from several representative categories and demonstrate the accuracy and robustness of our method.

  7. Pointing knowledge accuracy of the star tracker based ATP system

    NASA Astrophysics Data System (ADS)

    Lee, Shinhak; Ortiz, Gerardo G.; Alexander, James W.

    2005-04-01

    The pointing knowledge for deep space optical communications must be accurate, and the estimate update rate needs to be sufficiently high to compensate for spacecraft vibration. Our objective is to meet these two requirements, high accuracy and high update rate, using combinations of star trackers and inertial sensors. Star trackers are very accurate and provide absolute pointing knowledge, but at a low update rate that depends on the star magnitude. Inertial sensors, on the other hand, provide relative pointing knowledge at high update rates. In this paper, we describe how the star tracker and inertial sensor measurements are combined to reduce the pointing knowledge jitter. The method is based on `iterative averaging' of the star tracker and gyro measurements. Angle sensor measurements fill in between two gyro measurements for a higher update rate, and the total RMS error (or jitter) increases in the RSS (root-sum-squared) sense. The estimated pointing jitter is on the order of 150 nrad, which is well below the typical requirements for deep space optical communications. This 150 nrad jitter can be achieved with an 8 cm diameter telescope aperture. Additional assumptions include 1/25 pixel accuracy per star, SIRTF-class gyros (ARW = 0.0001 deg/root-hr), 5 Hz star trackers with a ~5.0 degree FOV, a detector of 1000 by 1000 pixels, and stars of roughly 9 to 9.5 magnitude.
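    The RSS combination of independent error sources mentioned in the abstract can be sketched as follows; the component jitter values below are illustrative assumptions, not the paper's actual error budget.

```python
import math

def rss(*components_nrad):
    """Combine independent error sources root-sum-squared, in nanoradians."""
    return math.sqrt(sum(c * c for c in components_nrad))

# Hypothetical component jitters chosen only to illustrate the arithmetic.
star_tracker_jitter = 120.0  # nrad, assumed
gyro_jitter = 90.0           # nrad, assumed

total = rss(star_tracker_jitter, gyro_jitter)
print(f"total pointing jitter: {total:.0f} nrad")  # prints 150 nrad
```

    The point of the RSS rule is that uncorrelated error sources add in quadrature, so the total is dominated by the largest single contributor rather than by the sum of all of them.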

  8. Development of an Inquiry-Based Learning Support System Based on an Intelligent Knowledge Exploration Approach

    ERIC Educational Resources Information Center

    Wu, Ji-Wei; Tseng, Judy C. R.; Hwang, Gwo-Jen

    2015-01-01

    Inquiry-Based Learning (IBL) is an effective approach for promoting active learning. When inquiry-based learning is incorporated into instruction, teachers provide guiding questions for students to actively explore the required knowledge in order to solve the problems. Although the World Wide Web (WWW) is a rich knowledge resource for students to…

  9. Designing and Developing a NASA Research Projects Knowledge Base and Implementing Knowledge Management and Discovery Techniques

    NASA Astrophysics Data System (ADS)

    Dabiru, L.; O'Hara, C. G.; Shaw, D.; Katragadda, S.; Anderson, D.; Kim, S.; Shrestha, B.; Aanstoos, J.; Frisbie, T.; Policelli, F.; Keblawi, N.

    2006-12-01

    The Research Project Knowledge Base (RPKB) is currently being designed and will be implemented in a manner that is fully compatible and interoperable with enterprise architecture tools developed to support NASA's Applied Sciences Program. Through user needs assessment and collaboration with Stennis Space Center, Goddard Space Flight Center, and NASA's DEVELOP staff, insight into information needs for the RPKB was gathered from across NASA scientific communities of practice. To enable efficient, consistent, standard, structured, and managed data entry and research results compilation, a prototype RPKB has been designed and fully integrated with the existing NASA Earth Science Systems Components database. The RPKB will compile research project and keyword information of relevance to the six major science focus areas, 12 national applications, and the Global Change Master Directory (GCMD). The RPKB will include information about projects awarded from NASA research solicitations, project investigator information, research publications, NASA data products employed, and model or decision support tools used or developed as well as new data product information. The RPKB will be developed in a multi-tier architecture that will include a SQL Server relational database backend, middleware, and front end client interfaces for data entry. The purpose of this project is to intelligently harvest the results of research sponsored by the NASA Applied Sciences Program and related research program results. We present various approaches for a wide spectrum of knowledge discovery of research results, publications, projects, etc. from the NASA Systems Components database and global information systems and show how this is implemented in SQL Server database. The application of knowledge discovery is useful for intelligent query answering and multiple-layered database construction. Using advanced EA tools such as the Earth Science Architecture Tool (ESAT), RPKB will enable NASA and

  10. Temporal and contextual knowledge in model-based expert systems

    NASA Technical Reports Server (NTRS)

    Toth-Fejel, Tihamer; Heher, Dennis

    1987-01-01

    A basic paradigm that allows representation of physical systems with a focus on context and time is presented. Paragon provides the capability to quickly capture an expert's knowledge in a cognitively resonant manner. From that description, Paragon creates a simulation model in LISP, which when executed, verifies that the domain expert did not make any mistakes. The Achilles' heel of rule-based systems has been the lack of a systematic methodology for testing, and Paragon's developers are certain that the model-based approach overcomes that problem. The reason this testing is now possible is that software, which is very difficult to test, has in essence been transformed into hardware.

  11. Conceptualizing In-service Secondary School Science Teachers' Knowledge Base for Climate Change Content

    NASA Astrophysics Data System (ADS)

    Campbell, K. M.; Roehrig, G.; Dalbotten, D. M.; Bhattacharya, D.; Nam, Y.; Varma, K.; Wang, J.

    2011-12-01

    The need to deepen teachers' knowledge of the science of climate change is crucial under a global climate change (GCC) scenario. With effective collaboration between researchers, scientists and teachers, conceptual frameworks can be developed for creating climate change content for classroom implementation. Here, we discuss how teachers' conceptualized content knowledge about GCC changes over the course of a professional development program in which they are provided with place-based and culturally congruent content. The NASA-funded Global Climate Change Education (GCCE) project, "CYCLES: Teachers Discovering Climate Change from a Native Perspective", is a 3-year teacher professional development program designed to develop culturally-sensitive approaches for GCCE in Native American communities using traditional knowledge, data and tools. As a part of this program, we assessed the progression in the content knowledge of participating teachers about GCC. Teachers were provided thematic GCC content focused on the elements of the medicine wheel (Earth, Fire, Air, Water, and Life) during a one week summer workshop. Content was organized to emphasize explanations of the natural world as interconnected and cyclical processes and to align with the Climate and Earth Science Literacy Principles and NASA resources. Year 1 workshop content was focused on the theme of "Earth" and teacher knowledge was progressively increased by providing content under the themes of 1) understanding of timescale, 2) understanding of local and global perspectives, 3) understanding of proxy data and 4) ecosystem connectivity. We used a phenomenographical approach for data analysis to qualitatively investigate different ways in which the teachers experienced and conceptualized GCC. We analyzed categories of teachers' climate change knowledge using information generated by tools such as photo elicitation interviews, concept maps and reflective journal perceptions. Preliminary findings from the pre

  12. A knowledge-based system for optimization of fuel reload configurations

    SciTech Connect

    Galperin, A.; Kimhi, S.; Segev, M.

    1989-05-01

    The authors discuss a knowledge-based production system developed for generating optimal fuel reload configurations. The system was based on a heuristic search method and implemented in the Common Lisp programming language. The knowledge base embodied the reactor physics, reactor operations, and a general approach to fuel management strategy. The database included a description of the physical system involved, i.e., the core geometry and fuel storage. The fifth cycle of the Three Mile Island Unit 1 pressurized water reactor was chosen as a test case. Application of the system to the test case revealed a self-learning process by which a relatively large number of near-optimal configurations were discovered. Several selected solutions were subjected to detailed analysis and demonstrated excellent performance. To summarize, applicability of the proposed heuristic search method in the domain of nuclear fuel management was proved unequivocally.
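    The general shape of a heuristic (best-first) search like the one the abstract describes can be sketched as below. The state representation and scoring function here are placeholders for illustration, not the authors' reactor-physics model or reload rules.

```python
import heapq

def best_first_search(start, neighbors, score, max_expansions=1000):
    """Expand states in order of heuristic score; return the best state seen.

    `neighbors(state)` yields candidate successor states and `score(state)`
    is the heuristic figure of merit to maximize (both are problem-specific).
    """
    frontier = [(-score(start), start)]  # max-heap via negated scores
    best = start
    seen = {start}
    for _ in range(max_expansions):
        if not frontier:
            break
        neg_score, state = heapq.heappop(frontier)
        if -neg_score > score(best):
            best = state
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-score(nxt), nxt))
    return best

# Toy example: walk the integers to maximize a concave score peaking at 7.
print(best_first_search(0, lambda s: [s - 1, s + 1], lambda s: -(s - 7) ** 2))  # prints 7
```

    In the fuel-management setting, a state would be a candidate loading pattern, the neighbors would be configurations reachable by swapping assemblies, and the score would come from the embedded reactor-physics knowledge.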

  13. Knowledge-based inference engine for online video dissemination

    NASA Astrophysics Data System (ADS)

    Zhou, Wensheng; Kuo, C.-C. Jay

    2000-10-01

    To facilitate easy access to the rich information of multimedia over the Internet, we develop a knowledge-based classification system that supports automatic indexing and filtering based on semantic concepts for the dissemination of on-line real-time media. Automatic segmentation, annotation and summarization of media for fast information browsing and updating are achieved at the same time. In the proposed system, a real-time scene-change detection proxy performs an initial video structuring process by splitting a video clip into scenes. Motion and visual features are extracted in real time for every detected scene by using online feature extraction proxies. Higher semantics are then derived through a joint use of low-level features along with inference rules in the knowledge base. Inference rules are derived through a supervised learning process based on representative samples. On-line media filtering based on semantic concepts becomes possible by using the proposed video inference engine. Video streams are either blocked or sent to certain channels depending on whether or not the video stream matches the user's profile. The proposed system is extensively evaluated by applying the engine to video of basketball games.
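    The mapping from low-level features to semantic labels via inference rules can be sketched as a simple forward-chaining pass. The feature names, thresholds, and labels below are invented for illustration; the paper's actual rules are learned from representative samples.

```python
# Hypothetical rules: each pairs a predicate over a feature dict with a label.
RULES = [
    (lambda f: f["motion"] > 0.7 and f["dominant_color"] == "wood", "fast break"),
    (lambda f: f["motion"] < 0.2 and f["faces"] >= 1, "close-up"),
]

def infer_semantics(features, rules=RULES, default="unclassified"):
    """Return the label of the first matching rule (single forward-chaining pass)."""
    for predicate, label in rules:
        if predicate(features):
            return label
    return default

# A high-motion scene over a wooden court triggers the first rule.
scene = {"motion": 0.9, "dominant_color": "wood", "faces": 0}
print(infer_semantics(scene))  # prints "fast break"
```

    A filtering proxy would then compare the inferred label against the user's profile and block or forward the stream accordingly.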

  14. Knowledge retrieval as one type of knowledge-based decision support in medicine: results of an evaluation study.

    PubMed

    Haux, R; Grothe, W; Runkel, M; Schackert, H K; Windeler, H J; Winter, A; Wirtz, R; Herfarth, C; Kunze, S

    1996-04-01

    We report on a prospective, prolective observational study, supplying information on how physicians and other health care professionals retrieve medical knowledge on-line within the Heidelberg University Hospital information system. Within this hospital information system, on-line access to medical knowledge has been realised by installing a medical knowledge server in the range of about 24 GB and by providing access to it by health care professional workstations in wards, physicians' rooms, etc. During the study, we observed about 96 accesses per working day. The main group of health care professionals retrieving medical knowledge were physicians and medical students. Primary reasons for its utilisation were identified as support for the users' scientific work (50%), own clinical cases (19%), general medical problems (14%) and current clinical problems (13%). Health care professionals had access to medical knowledge bases such as MEDLINE (79%), drug bases ('Rote Liste', 6%), and to electronic text books and knowledge base systems as well. Sixty-five percent of accesses to medical knowledge were judged to be successful. In our opinion, medical knowledge retrieval can serve as a first step towards knowledge processing in medicine. We point out the consequences for the management of hospital information systems in order to provide the prerequisites for such a type of knowledge retrieval.

  15. Knowledge-based imaging-sensor fusion system

    NASA Technical Reports Server (NTRS)

    Westrom, George

    1989-01-01

    An imaging system which applies knowledge-based technology to supervise and control both sensor hardware and computation in the imaging system is described. It includes the development of an imaging system breadboard which brings together into one system work that we and others have pursued for LaRC for several years. The goal is to combine Digital Signal Processing (DSP) with Knowledge-Based Processing and also include Neural Net processing. The system is considered a smart camera. Imagine that there is a microgravity experiment on-board Space Station Freedom with a high frame rate, high resolution camera. All the data cannot possibly be acquired from a laboratory on Earth. In fact, only a small fraction of the data will be received. Again, imagine being responsible for some experiments on Mars with the Mars Rover: the data rate is a few kilobits per second for data from several sensors and instruments. Would it not be preferable to have a smart system which would have some human knowledge and yet follow some instructions and attempt to make the best use of the limited bandwidth for transmission? The system concept, current status of the breadboard system and some recent experiments at the Mars-like Amboy Lava Fields in California are discussed.

  16. Knowledge-based approach to fault diagnosis and control in distributed process environments

    NASA Astrophysics Data System (ADS)

    Chung, Kwangsue; Tou, Julius T.

    1991-03-01

    This paper presents a new design approach to knowledge-based decision support systems for fault diagnosis and control for quality assurance and productivity improvement in automated manufacturing environments. Based on the observed manifestations, the knowledge-based diagnostic system hypothesizes a set of the most plausible disorders by mimicking the reasoning process of a human diagnostician. The data integration technique is designed to generate error-free hierarchical category files. A novel approach to diagnostic problem solving has been proposed by integrating the PADIKS (Pattern-Directed Knowledge-Based System) concept and the symbolic model of diagnostic reasoning based on the categorical causal model. The combination of symbolic causal reasoning and pattern-directed reasoning produces a highly efficient diagnostic procedure and generates a more realistic expert behavior. In addition, three distinctive constraints are designed to further reduce the computational complexity and to eliminate non-plausible hypotheses involved in the multiple disorders problem. The proposed diagnostic mechanism, which consists of three different levels of reasoning operations, significantly reduces the computational complexity in the diagnostic problem with uncertainty by systematically shrinking the hypotheses space. This approach is applied to the test and inspection data collected from a PCB manufacturing operation.

  17. User Generated Content Consumption and Social Networking in Knowledge-Sharing OSNs

    NASA Astrophysics Data System (ADS)

    Lussier, Jake T.; Raeder, Troy; Chawla, Nitesh V.

    Knowledge-sharing online social networks are becoming increasingly pervasive and popular. While the user-to-user interactions in these networks have received substantial attention, the consumption of user generated content has not been studied extensively. In this work, we use data gathered from digg.com to present novel findings and draw important sociological conclusions regarding the intimate relationship between consumption and social networking. We first demonstrate that individuals' consumption habits influence their friend networks, consistent with the concept of homophily. We then show that one's social network can also influence the consumption of a submission through the activation of an extended friend network. Finally, we investigate the level of reciprocity, or balance, in the network and uncover relationships that are significantly less balanced than expected.
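    The reciprocity (balance) measure mentioned above can be sketched with a standard definition: the fraction of directed edges whose reverse edge also exists. This is a common formulation and an assumption here; the paper's exact metric may differ.

```python
def reciprocity(edges):
    """Fraction of directed edges (u, v) for which (v, u) is also present."""
    edge_set = set(edges)
    if not edge_set:
        return 0.0
    return sum((v, u) in edge_set for u, v in edge_set) / len(edge_set)

# Toy friend network: one mutual pair plus one one-way follow.
links = [("a", "b"), ("b", "a"), ("a", "c")]
print(reciprocity(links))  # prints 0.666... (2 of 3 edges reciprocated)
```

    A network "significantly less balanced than expected" is one whose observed reciprocity falls well below the value obtained for a comparable randomized network.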

  18. MEGen: A Physiologically Based Pharmacokinetic Model Generator

    PubMed Central

    Loizou, George; Hogg, Alex

    2011-01-01

    Physiologically based pharmacokinetic (PBPK) models are being used in an increasing number of different areas. However, they are perceived as complex, data hungry, resource intensive, and time consuming. In addition, model validation and verification are hindered by the relative complexity of the equations. To begin to address these issues, a web application called MEGen for the rapid construction and documentation of bespoke deterministic PBPK model code is under development. MEGen comprises a parameter database and a model code generator that produces code for use in several commercial software packages and one that is freely available. Here we present an overview of the current capabilities of MEGen and discuss future developments. PMID:22084631
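MEGen generates full multi-compartment physiological models; as a minimal sketch of the simplest pharmacokinetic building block such generators emit, a one-compartment model after an intravenous bolus dose can be written as C(t) = (dose/Vd)·exp(-ke·t). The function name and parameter values below are illustrative assumptions, not MEGen output:

```python
import math

def one_compartment_pk(dose, vd, ke, t):
    """Plasma concentration at time t after an IV bolus dose in a
    one-compartment model.

    dose: administered amount (e.g. mg)
    vd:   volume of distribution (e.g. L)
    ke:   first-order elimination rate constant (1/h)
    t:    time since dosing (h)
    """
    return dose / vd * math.exp(-ke * t)
```

At t = 0 this reduces to dose/Vd, and the concentration then decays exponentially with rate ke.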

  19. Knowledge-based system for flight information management. Thesis

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.

    1990-01-01

    The use of knowledge-based system (KBS) architectures to manage information on the primary flight display (PFD) of commercial aircraft is described. The PFD information management strategy used tailored the information on the PFD to the tasks the pilot performed. The KBS design and implementation of the task-tailored PFD information management application is described. The knowledge acquisition and subsequent system design of a flight-phase-detection KBS is also described. The flight-phase output of this KBS was used as input to the task-tailored PFD information management KBS. The implementation and integration of this KBS with existing aircraft systems and the other KBS is described. Flight tests of both KBSs, collectively called the Task-Tailored Flight Information Manager (TTFIM), are examined; the tests verified their implementation and integration and validated the software engineering advantages of the KBS approach in an operational environment.

  20. Structure of the knowledge base for an expert labeling system

    NASA Technical Reports Server (NTRS)

    Rajaram, N. S.

    1981-01-01

    One of the principal objectives of the NASA AgRISTARS program is the inventory of global crop resources using remotely sensed data gathered by Land Satellites (LANDSAT). A central problem in any such crop inventory procedure is the interpretation of LANDSAT images and identification of the parts of each image which are covered by a particular crop of interest. This task of labeling is largely a manual one done by trained human analysts and consequently presents obstacles to the development of totally automated crop inventory systems. However, developments in knowledge engineering, as well as the widespread availability of inexpensive hardware and software for artificial intelligence work, offer possibilities for developing expert systems for the labeling of crops. Such a knowledge-based approach to labeling is presented.

  1. Disease Related Knowledge Summarization Based on Deep Graph Search

    PubMed Central

    Wu, Xiaofang; Yang, Zhihao; Li, ZhiHeng; Lin, Hongfei; Wang, Jian

    2015-01-01

    The volume of published biomedical literature on disease related knowledge is expanding rapidly. Traditional information retrieval (IR) techniques, when applied to large databases such as PubMed, often return large, unmanageable lists of citations that do not fulfill the searcher's information needs. In this paper, we present an approach to automatically construct disease-related knowledge summaries from biomedical literature. In this approach, first, Kullback-Leibler divergence combined with a mutual information metric is used to extract disease-salient information. Then a deep search based on depth-first search (DFS) is applied to find hidden (indirect) relations between biomedical entities. Finally, a random walk algorithm is used to filter out weak relations. The experimental results show that our approach achieves a precision of 60% and a recall of 61% on salient information extraction for carcinoma of the bladder and outperforms the Combo method. PMID:26413521
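The salient-information step above can be sketched in miniature. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: terms are scored by their per-term contribution to the KL divergence between a disease-specific corpus and a background corpus, with add-one smoothing; the function name and toy corpora are invented:

```python
import math
from collections import Counter

def salient_terms(disease_docs, background_docs, top_k=3):
    """Rank terms by their contribution to KL(disease || background),
    a simple proxy for disease-salient vocabulary."""
    fg = Counter(w for d in disease_docs for w in d.split())
    bg = Counter(w for d in background_docs for w in d.split())
    fg_total = sum(fg.values())
    bg_total = sum(bg.values())
    scores = {}
    for w, c in fg.items():
        p = c / fg_total
        # add-one smoothing so unseen background terms do not divide by zero
        q = (bg.get(w, 0) + 1) / (bg_total + len(fg))
        scores[w] = p * math.log(p / q)
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

Terms frequent in the disease corpus but rare in the background (e.g. "tumor" against generic molecular-biology text) receive the highest scores.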

  2. The COPD Knowledge Base: enabling data analysis and computational simulation in translational COPD research

    PubMed Central

    2014-01-01

    Background Previously we generated a chronic obstructive pulmonary disease (COPD) specific knowledge base (http://www.copdknowledgebase.eu) from clinical and experimental data, text-mining results and public databases. This knowledge base allowed the retrieval of specific molecular networks together with integrated clinical and experimental data. Results The COPDKB has now been extended to integrate over 40 public data sources on functional interaction (e.g. signal transduction, transcriptional regulation, protein-protein interaction, gene-disease association). In addition, we integrated COPD-specific expression and co-morbidity networks connecting over 6 000 genes/proteins with physiological parameters and disease states. Three mathematical models describing different aspects of systemic effects of COPD were connected to clinical and experimental data. We have completely redesigned the technical architecture of the user interface and now provide HTML and web browser-based access and form-based searches. A network search enables the use of interconnecting information and the generation of disease-specific sub-networks from general knowledge. Integration with the Synergy-COPD Simulation Environment enables multi-scale integrated simulation of individual computational models while integration with a Clinical Decision Support System allows delivery into clinical practice. Conclusions The COPD Knowledge Base is the only publicly available knowledge resource dedicated to COPD and combining genetic information with molecular, physiological and clinical data as well as mathematical modelling. Its integrated analysis functions provide overviews about clinical trends and connections while its semantically mapped content enables complex analysis approaches. We plan to further extend the COPDKB by offering it as a repository to publish and semantically integrate data from relevant clinical trials. The COPDKB is freely available after registration at http

  3. Social studies of volcanology: knowledge generation and expert advice on active volcanoes

    NASA Astrophysics Data System (ADS)

    Donovan, Amy; Oppenheimer, Clive; Bravo, Michael

    2012-04-01

    This paper examines the philosophy and evolution of volcanological science in recent years, particularly in relation to the growth of volcanic hazard and risk science. It uses the lens of Science and Technology Studies to examine the ways in which knowledge generation is controlled and directed by social forces, particularly during eruptions, which constitute landmarks in the development of new technologies and models. It also presents data from a survey of volcanologists carried out during late 2008 and early 2009. These data concern the felt purpose of the science according to the volcanologists who participated and their impressions of the most important eruptions in historical time. It demonstrates that volcanologists are motivated both by the academic science environment and by a social concern for managing the impact of volcanic hazards on populations. Also discussed are the eruptions that have most influenced the discipline and the role of scientists in policymaking on active volcanoes. Expertise in volcanology can become the primary driver of public policy very suddenly when a volcano erupts, placing immense pressure on volcanologists. In response, the epistemological foundations of volcanology are on the move, with an increasing volume of research into risk assessment and management. This requires new, integrated methodologies for knowledge collection that transcend scientific disciplinary boundaries.

  4. Evidence-based medicine and the reconfiguration of medical knowledge.

    PubMed

    Timmermans, Stefan; Kolker, Emily S

    2004-01-01

    Over the past decade, different parties in the health care field have developed and disseminated clinical practice guidelines as part of evidence-based medicine. These formal tools based on a scientific evaluation of the research literature purport to tell health care professionals how to practice medicine. Because clinical practice guidelines shift the knowledge base in the health care field through standardization, they remain controversial within and outside medicine. In this paper, we evaluate the predictive accuracy of four medical professionalization theories--functionalism, Freidson's theory of professional dominance, deprofessionalization theory, and the theory of countervailing powers--to account for (1) the shift from pathophysiology to epidemiology with guidelines, (2) the creation of practice guidelines, and (3) the effects of clinical practice guidelines on the autonomy of health professionals. In light of the mixed predictive record of professionalization theories, we conclude with a need for "evidence-based sociology" and a recalibration of basic premises underlying professionalization theories.

  5. Dynamic reasoning in a knowledge-based system

    NASA Technical Reports Server (NTRS)

    Rao, Anand S.; Foo, Norman Y.

    1988-01-01

    Any space-based system, whether it is a robot arm assembling parts in space or an onboard system monitoring the space station, has to react to changes which cannot be foreseen. As a result, apart from having domain-specific knowledge as in current expert systems, a space-based AI system should also have general principles of change. This paper presents a modal logic which can not only represent change but also reason with it. Three primitive operations, expansion, contraction and revision, are introduced, and axioms which specify how the knowledge base should change when the external world changes are also specified. Accordingly, the notion of dynamic reasoning is introduced, which, unlike existing forms of reasoning, provides general principles of change. Dynamic reasoning is based on two main principles, namely minimize change and maximize coherence. A possible-world semantics which incorporates these two principles is also discussed. The paper concludes by discussing how the dynamic reasoning system can be used to specify actions and hence form an integral part of an autonomous reasoning and planning system.
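The three primitive operations can be illustrated at the level of simple literal belief sets. This toy sketch is an assumption of ours, far simpler than the paper's modal logic: beliefs are string literals, negation is a "~" prefix, and revision follows the Levi identity (contract by the negation, then expand):

```python
def negate(p):
    """Toy negation over string literals: '~door_open' <-> 'door_open'."""
    return p[1:] if p.startswith("~") else "~" + p

def expand(beliefs, p):
    """Expansion: add p to the belief set without checking consistency."""
    return beliefs | {p}

def contract(beliefs, p):
    """Naive contraction: remove p itself (real contraction must also
    remove beliefs that entail p)."""
    return beliefs - {p}

def revise(beliefs, p):
    """Revision via the Levi identity: retract ~p, then add p."""
    return expand(contract(beliefs, negate(p)), p)
```

Revising {"door_open"} by "~door_open" first drops the contradicting belief and then adds the new one, keeping the set consistent, which is the minimal-change behavior the paper axiomatizes.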

  6. Knowledge-Based Systems Approach to Wilderness Fire Management.

    NASA Astrophysics Data System (ADS)

    Saveland, James M.

    The 1988 and 1989 forest fire seasons in the Intermountain West highlight the shortcomings of current fire policy. To fully implement an optimization policy that minimizes the costs and net value change of resources affected by fire, long-range fire severity information is essential, yet lacking. This information is necessary for total mobility of suppression forces, implementing contain and confine suppression strategies, effectively dealing with multiple fire situations, scheduling summer prescribed burning, and wilderness fire management. A knowledge-based system, Delphi, was developed to help provide long-range information. Delphi provides: (1) a narrative of advice on where a fire might spread, if allowed to burn, (2) a summary of recent weather and fire danger information, and (3) a Bayesian analysis of long-range fire danger potential. Uncertainty is inherent in long-range information. Decision theory and judgment research can be used to help understand the heuristics experts use to make decisions under uncertainty, heuristics responsible both for expert performance and bias. Judgment heuristics and resulting bias are examined from a fire management perspective. Signal detection theory and receiver operating curve (ROC) analysis can be used to develop a long-range forecast to improve decisions. ROC analysis mimics some of the heuristics and compensates for some of the bias. Most importantly, ROC analysis displays a continuum of bias from which an optimum operating point can be selected. ROC analysis is especially appropriate for long-range forecasting since (1) the occurrence of possible future events is stated in terms of probability, (2) prediction skill is displayed, (3) inherent trade-offs are displayed, and (4) fire danger is explicitly defined. Statements on the probability of the energy release component of the National Fire Danger Rating System exceeding a critical value later in the fire season can be made in early July in the Intermountain West.
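The ROC machinery described above can be sketched as follows. This is an illustrative reconstruction, not Delphi's code: each threshold over a forecast score yields a (false-alarm rate, hit rate) point, and one simple way to pick an operating point is to maximize Youden's J = hit rate - false-alarm rate (a real fire application would instead weight misses and false alarms by their costs):

```python
def roc_point(scores, labels, threshold):
    """(False-alarm rate, hit rate) at a given threshold.

    scores: forecast scores; labels: 1 if the event occurred, else 0.
    Assumes both classes are present in the labels.
    """
    hits = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fas = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    pos = sum(labels)
    neg = len(labels) - pos
    return fas / neg, hits / pos

def best_threshold(scores, labels, thresholds):
    """Operating point maximizing Youden's J = hit rate - false-alarm rate."""
    def youden_j(t):
        fpr, tpr = roc_point(scores, labels, t)
        return tpr - fpr
    return max(thresholds, key=youden_j)
```

Sweeping thresholds traces the ROC curve, making the trade-off between misses and false alarms explicit, which is precisely the "continuum of bias" the abstract refers to.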

  7. Governing Long-Term Risks in Radioactive Waste Management: Reversibility and Knowledge Transfer Across Generations

    NASA Astrophysics Data System (ADS)

    Lehtonen, M.

    2014-12-01

    Safe management of the long-lived and high-level radioactive waste originating primarily from nuclear power stations requires isolating and confining the waste for periods up to 100 000 years. Disposal in deep geological formations is currently the solution advocated by international organisations (e.g. the IAEA and the OECD-NEA) and governments, but nowhere in the world is such a repository for civilian nuclear waste in operation yet. Concerns about the governance of the involved risks and uncertainties over such long periods lie at the heart of the controversies that have slowed down the identification of a solution. In order to draw lessons potentially relevant for the governance of long-term climate risks, this paper examines the ways in which two interrelated aspects have been addressed in nuclear waste management in France, the US, and the Nordic countries. The first issue concerns "reversibility" - i.e. the possibility on one hand to retrieve the waste once it has been disposed of in a repository, and on the other to return at any point in time along the decision-making process to the previous decision-making phase. Reversibility today constitutes a fundamental, legally binding requirement in French radioactive waste policy. While itself a strategy for managing risk and uncertainty, reversibility nevertheless poses significant safety challenges of its own. The second topic goes beyond the timescales (max. 300 years) in which reversibility is usually considered applicable, addressing the question of intergenerational knowledge transfer, comparing the Nordic and the American approaches to the issue. The key challenge here is ensuring the transfer to future generations - for periods up to 100 000 years - of sufficient knowledge concerning the siting, characteristics and management of the waste deposited in a repository. Even more fundamentally, instead of knowledge transfer, should we rather aim at "active forgetting", in order to prevent the curious in the

  8. Next Generation Climate Change Experiments Needed to Advance Knowledge and for Assessment of CMIP6

    SciTech Connect

    Katzenberger, John; Arnott, James; Wright, Alyson

    2014-10-30

    The Aspen Global Change Institute hosted a technical science workshop entitled, “Next generation climate change experiments needed to advance knowledge and for assessment of CMIP6,” on August 4-9, 2013 in Aspen, CO. Jerry Meehl (NCAR), Richard Moss (PNNL), and Karl Taylor (LLNL) served as co-chairs for the workshop, which included the participation of 32 scientists representing most of the major climate modeling centers for a total of 160 participant days. In August 2013, AGCI convened a high-level meeting of representatives from major climate modeling centers around the world to assess achievements and lessons learned from the most recent generation of coordinated modeling experiments, known as phase 5 of the Coupled Model Intercomparison Project (CMIP5), as well as to scope out the science questions and coordination structure desired for the next anticipated phase of modeling experiments, called CMIP6. The workshop allowed for reflection on the coordination of the CMIP5 process as well as intercomparison of model results, such as were assessed in the most recent IPCC 5th Assessment Report, Working Group 1. For example, this slide from Masahiro Watanabe examines the performance of a range of models in capturing the Atlantic Meridional Overturning Circulation (AMOC).

  9. Knowledge based system for Satellite data product selection

    NASA Astrophysics Data System (ADS)

    Goyal, R.; Jayasudha, T.; Pandey, P.; Rama Devi, D.; Rebecca, A.; Manju Sarma, M.; Lakshmi, B.

    2014-11-01

    In recent years, the use of satellite data for geospatial applications has multiplied and contributed significantly towards the development of society. Satellite data requirements, in terms of spatial and spectral resolution, periodicity of data, level of correction and other parameters, vary for different applications. For major applications, remote sensing data alone may not suffice and additional data, such as field data, may be required. An application user, even though versatile in his application, may not know which satellite data is best suited for his application, how to use the data, and what information can be derived from the data. Remote sensing domain experts have the proficiency to select appropriate data for remote sensing applications. Embedding this domain expertise in the system and building a knowledge-based system for satellite data product selection is therefore vital. Non-specialist data users need user-friendly software that guides them to the most suitable satellite data product on the basis of their application. Such a tool will aid the use of appropriate remotely sensed data across various sectors of application users. Additionally, consumers will be less concerned about the technical particulars of the platforms that provide satellite data, instead focusing on the content and value of the data product, meeting timelines and ease of access. Embedding knowledge is a popular and effective means of increasing the power of a system. This paper describes a system, driven by the built-in knowledge of domain experts, for satellite data product selection for geospatial applications.

  10. Verification of Legal Knowledge-base with Conflictive Concept

    NASA Astrophysics Data System (ADS)

    Hagiwara, Shingo; Tojo, Satoshi

    In this paper, we propose a verification methodology for large-scale legal knowledge. When a legal code is revised, we are forced to revise other affected code as well to keep the law consistent. Thus, our task is to revise the affected area properly and to investigate its adequacy. In this study, we extend the notion of inconsistency beyond ordinary logical inconsistency to include conceptual conflicts. We obtain these conflicts from taxonomy data, and thus we can avoid tedious manual declarations of opponent words. In the verification process, we adopt extended disjunctive logic programming (EDLP) to tolerate multiple consequences for a given set of antecedents. In addition, we employ abductive logic programming (ALP), regarding the situations to which the rules are applied as premises. Also, we restrict a legal knowledge base to an acyclic program to avoid circular definitions and to justify the relevance of verdicts. Therefore, detecting cyclic parts of legal knowledge is one of our objectives. The system is composed of two subsystems: a preprocessor implemented in Ruby to facilitate string manipulation, and a verifier implemented in Prolog to perform the logical inference. We also employ the XML format in the system to retain readability. In this study, we verify actual ordinances of Toyama prefecture and report the experimental results.

  11. Caregiving Antecedents of Secure Base Script Knowledge: A Comparative Analysis of Young Adult Attachment Representations

    PubMed Central

    Steele, Ryan D.; Waters, Theodore E. A.; Bost, Kelly K.; Vaughn, Brian E.; Truitt, Warren; Waters, Harriet S.; Booth-LaForce, Cathryn; Roisman, Glenn I.

    2015-01-01

    Based on a sub-sample (N = 673) of the NICHD Study of Early Child Care and Youth Development (SECCYD) cohort, this paper reports data from a follow-up assessment at age 18 years on the antecedents of secure base script knowledge, as reflected in the ability to generate narratives in which attachment-related difficulties are recognized, competent help is provided, and the problem is resolved. Secure base script knowledge was (a) modestly to moderately correlated with more well established assessments of adult attachment, (b) associated with mother-child attachment in the first three years of life and with observations of maternal and paternal sensitivity from childhood to adolescence, and (c) partially accounted for associations previously documented in the SECCYD cohort between early caregiving experiences and Adult Attachment Interview states of mind (Booth-LaForce & Roisman, 2014) as well as self-reported attachment styles (Fraley, Roisman, Booth-LaForce, Owen, & Holland, 2013). PMID:25264703

  12. A knowledge-based framework for image enhancement in aviation security.

    PubMed

    Singh, Maneesha; Singh, Sameer; Partridge, Derek

    2004-12-01

    The main aim of this paper is to present a knowledge-based framework for automatically selecting the best image enhancement algorithm, from several available, on a per-image basis in the context of X-ray images of airport luggage. The approach detailed involves a system that learns to map image features that represent its viewability to one or more chosen enhancement algorithms. Viewability measures have been developed to provide an automatic check on the quality of the enhanced image, i.e., is it really enhanced? The choice is based on ground-truth information generated by human X-ray screening experts. Such a system, for a new image, predicts the best-suited enhancement algorithm. Our research details the various characteristics of the knowledge-based system and shows extensive results on real images.

  13. ProbOnto: ontology and knowledge base of probability distributions

    PubMed Central

    Swat, Maciej J.; Grenon, Pierre; Wimalaratne, Sarala

    2016-01-01

    Motivation: Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. Results: ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Availability and Implementation: http://probonto.org Contact: mjswat@ebi.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153608

  14. Category vs. Object Knowledge in Category-based Induction

    PubMed Central

    Murphy, Gregory L.; Ross, Brian H.

    2009-01-01

    In one form of category-based induction, people make predictions about unknown properties of objects. There is a tension between predictions made based on the object’s specific features (e.g., objects above a certain size tend not to fly) and those made by reference to category-level knowledge (e.g., birds fly). Seven experiments with artificial categories investigated these two sources of induction by looking at whether people used information about correlated features within categories, suggesting that they focused on feature-feature relations rather than summary categorical information. The results showed that people relied heavily on such correlations, even when there was no reason to think that the correlations exist in the population. The results suggested that people’s use of this strategy is largely unreflective, rather than strategically chosen. These findings have important implications for models of category-based induction, which generally ignore feature-feature relations. PMID:20526447

  15. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.

    1991-01-01

    The purpose is to develop algorithms and architectures for embedding artificial intelligence in aircraft guidance and control systems. With the approach adopted, AI-computing is used to create an outer guidance loop for driving the usual aircraft autopilot. That is, a symbolic processor monitors the operation and performance of the aircraft. Then, based on rules and other stored knowledge, commands are automatically formulated for driving the autopilot so as to accomplish desired flight operations. The focus is on developing a software system which can respond to linguistic instructions, input in a standard format, so as to formulate a sequence of simple commands to the autopilot. The instructions might be a fairly complex flight clearance, input either manually or by data-link. Emphasis is on a software system which responds much like a pilot would, employing not only precise computations but also knowledge which is less precise and more like common sense. The approach is based on prior work to develop a generic 'shell' architecture for an AI-processor, which may be tailored to many applications by describing the application in appropriate processor databases (libraries). Such descriptions include numerical models of the aircraft and flight control system, as well as symbolic (linguistic) descriptions of flight operations, rules, and tactics.

  16. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic contents, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand-sides of rules contain high level and low level features, while the right-hand-sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
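MYCIN's evidence-combination rule for certainty factors (CFs), referenced above, has a standard closed form; a minimal sketch follows (the function name is ours, and this is the textbook rule rather than the authors' CLIPS implementation):

```python
def combine_cf(x, y):
    """MYCIN certainty-factor combination for two pieces of evidence
    bearing on the same hypothesis. CFs lie in [-1, 1]:
    positive supports the hypothesis, negative opposes it."""
    if x >= 0 and y >= 0:
        return x + y - x * y          # two supporting pieces reinforce
    if x < 0 and y < 0:
        return x + y + x * y          # two opposing pieces reinforce (negatively)
    return (x + y) / (1 - min(abs(x), abs(y)))  # conflicting evidence attenuates
```

For instance, two rules that each support "basketball" with CF 0.6 and 0.5 combine to 0.8, strengthening the conclusion without ever exceeding 1.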

  18. Portable Knowledge-Based Diagnostic And Maintenance Systems

    NASA Astrophysics Data System (ADS)

    Darvish, John; Olson, Noreen S.

    1989-03-01

    It is difficult to diagnose faults and maintain weapon systems because (1) they are highly complex pieces of equipment composed of multiple mechanical, electrical, and hydraulic assemblies, and (2) talented maintenance personnel are continuously being lost through the attrition process. To solve this problem, we developed a portable diagnostic and maintenance aid that uses a knowledge-based expert system. This aid incorporates diagnostics, operational procedures, repair and replacement procedures, and regularly scheduled maintenance into one compact, 18-pound graphics workstation. Drawings and schematics can be pulled up from the CD-ROM to assist the operator in answering the expert system's questions. Work for this aid began with the development of the initial knowledge-based expert system in a fast prototyping environment using a LISP machine. The second phase saw the development of a personal computer-based system that used videodisc technology to pictorially assist the operator. The current version of the aid eliminates the high expenses associated with videodisc preparation by scanning in the art work already in the manuals. A number of generic software tools have been developed that streamlined the construction of each iteration of the aid; these tools will be applied to the development of future systems.

  19. Manned spaceflight activity planning with knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Mogilensky, J.; Dalton, R. E.; Scarl, E. A.

    1983-01-01

    An on-board expert system, capable of assisting with crew-activity planning and platform-status monitoring, could provide unprecedented autonomy to the crew of a permanently manned space station. To demonstrate this concept's feasibility, an existing knowledge-based system is adapted to support Space Shuttle crew-activity timeline planning. Proposed timeline changes are to be checked for compliance with crew capabilities and mission operating guidelines, so that a nonexpert can be guided through a successful plan modification. Early lessons that have been learned about the scope of the adaptation needed to achieve this objective are presented.

  20. Melody-based knowledge discovery in musical pieces

    NASA Astrophysics Data System (ADS)

    Rybnik, Mariusz; Jastrzebska, Agnieszka

    2016-06-01

    The paper is focused on automated knowledge discovery in musical pieces, based on transformations of digital musical notation. Usually a single musical piece is analyzed to discover its structure as well as the traits of its separate voices. Melody and rhythm are processed with the use of three proposed operators that serve as meta-data. In this work we focus on melody, so the processed data are labeled using fuzzy labels created for detecting various voice characteristics. A comparative analysis of two musical pieces may be performed as well, comparing them in terms of various rhythmic or melodic traits (as a whole or with voice separation).

  1. A Metadata based Knowledge Discovery Methodology for Seeding Translational Research.

    PubMed

    Kothari, Cartik R; Payne, Philip R O

    2015-01-01

    In this paper, we present a semantic, metadata based knowledge discovery methodology for identifying teams of researchers from diverse backgrounds who can collaborate on interdisciplinary research projects: projects in areas that have been identified as high-impact areas at The Ohio State University. This methodology involves the semantic annotation of keywords and the postulation of semantic metrics to improve the efficiency of the path exploration algorithm as well as to rank the results. Results indicate that our methodology can discover groups of experts from diverse areas who can collaborate on translational research projects.

  2. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  3. Co-Constructing Artefacts and Knowledge in Net-Based Teams: Implications for the Design of Collaborative Learning Environments

    ERIC Educational Resources Information Center

    Reimann, Peter

    2005-01-01

    Computer-based learning environments for science and mathematics education support predominantly individual learning; from first generation drill and practice programs to today's advanced, knowledge-based tutorial systems, one learner interacting with one computer has been the typical setting. Mathematics educators, however, increasingly…

  4. Knowledge-Based Reinforcement Learning for Data Mining

    NASA Astrophysics Data System (ADS)

    Kudenko, Daniel; Grzes, Marek

    Experts have developed heuristics that help them plan and schedule resources in their workplace. However, this domain knowledge is often rough and incomplete. When the domain knowledge is used directly by an automated expert system, the solutions are often sub-optimal due to the incompleteness of the knowledge, the uncertainty of environments, and the possibility of encountering unexpected situations. RL, on the other hand, can overcome the weaknesses of the heuristic domain knowledge and produce optimal solutions. In the talk we propose two techniques, which represent first steps in the area of knowledge-based RL (KBRL). The first technique [1] uses high-level STRIPS operator knowledge in reward shaping to focus the search for the optimal policy. Empirical results show that the plan-based reward shaping approach outperforms other RL techniques, including alternative manual and MDP-based reward shaping when it is used in its basic form. We showed that MDP-based reward shaping may fail, and successful experiments with STRIPS-based shaping suggest modifications which can overcome the problems encountered. The STRIPS-based method we propose allows the same domain knowledge to be expressed in a different way, and the domain expert can choose whether to define an MDP or a STRIPS planning task. We also evaluated the robustness of the proposed STRIPS-based technique to errors in the plan knowledge. In case STRIPS knowledge is not available, we propose a second technique [2] that shapes the reward with hierarchical tile coding. Where the Q-function is represented with low-level tile coding, a V-function with coarser tile coding can be learned in parallel and used to approximate the potential for ground states. In the context of data mining, our KBRL approaches can also be used for any data collection task where the acquisition of data may incur considerable cost. In addition, observing the data collection agent in specific scenarios may lead to new insights into optimal data
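
The plan-based reward shaping in technique [1] is potential-based: a shaping term F(s, s') = gamma * phi(s') - phi(s) is added to the environment reward, with the potential phi derived from progress through a high-level plan. A minimal sketch, with an invented plan and potentials:

```python
# Potential-based reward shaping: phi encodes progress through a
# high-level (e.g. STRIPS) plan. Plan steps and values are illustrative.
GAMMA = 0.99
plan = ["at_start", "holding_key", "door_open", "at_goal"]
phi = {state: float(i) for i, state in enumerate(plan)}  # potential = plan progress

def shaped_reward(env_reward: float, s: str, s_next: str) -> float:
    """Environment reward plus the potential-based shaping term."""
    return env_reward + GAMMA * phi[s_next] - phi[s]

# Shaping of this form preserves the optimal policy
# (Ng, Harada & Russell, 1999): over a trajectory the terms telescope.
traj = ["at_start", "holding_key", "door_open", "at_goal"]
bonus = sum(GAMMA * phi[b] - phi[a] for a, b in zip(traj, traj[1:]))
```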

  5. Knowledge of asthma guidelines: results of a UK General Practice Airways Group (GPIAG) web-based 'Test your Knowledge' quiz.

    PubMed

    Pinnock, Hilary; Holmes, Steve; Levy, Mark L; McArthur, Ruth; Small, Iain

    2010-06-01

    A web-based questionnaire, comprising 11 multiple-choice questions, tested the knowledge of visitors to the General Practice Airways Group (GPIAG) online summary of the British asthma guideline. On average, the 413 respondents answered fewer than half the questions correctly. GPs' scores were significantly lower than those of practice nurses. Improving clinicians' knowledge of asthma is a prerequisite for improving management.

  6. Ontology-Based Empirical Knowledge Verification for Professional Virtual Community

    ERIC Educational Resources Information Center

    Chen, Yuh-Jen

    2011-01-01

    A professional virtual community provides an interactive platform for enterprise experts to create and share their empirical knowledge cooperatively, and the platform contains a tremendous amount of hidden empirical knowledge that knowledge experts have preserved in the discussion process. Therefore, enterprise knowledge management highly…

  7. A knowledge base for Vitis vinifera functional analysis

    PubMed Central

    2015-01-01

    Background Vitis vinifera (grapevine) is the most important fruit species in the modern world. Wine and table grape sales contribute significantly to the economy of major wine-producing countries. The most relevant goals in wine production concern quality and safety. In order to significantly improve the achievement of these objectives and to gain biological knowledge about cultivars, a genomic approach is the most reliable strategy. The recent grapevine genome sequencing offers the opportunity to study the potential roles of genes and microRNAs in fruit maturation and other physiological and pathological processes. Although several systems allowing the analysis of plant genomes have been reported, none of them has been designed specifically for the functional analysis of grapevine genomes of cultivars under environmental stress in connection with microRNA data. Description Here we introduce a novel knowledge base, called BIOWINE, designed for the functional analysis of Vitis vinifera genomes of cultivars present in Sicily. The system allows the analysis of RNA-seq experiments on two different cultivars, namely Nero d'Avola and Nerello Mascalese. Samples were taken under different climatic conditions, phenological phases, diseases, and geographic locations. The BIOWINE web interface is equipped with data analysis modules for grapevine genomes. In particular, users may analyze the current genome assembly together with the RNA-seq data through a customized version of GBrowse. The web interface allows users to perform gene set enrichment by exploiting third-party databases. Conclusions BIOWINE is a knowledge base implementing a set of bioinformatics tools for the analysis of grapevine genomes. The system aims to increase our understanding of the grapevine varieties and species of Sicilian products, focusing on adaptability to different climatic conditions, phenological phases, diseases, and geographic locations. PMID:26050794

  8. Design of Composite Structures Using Knowledge-Based and Case Based Reasoning

    NASA Technical Reports Server (NTRS)

    Lambright, Jonathan Paul

    1996-01-01

    A method of using knowledge-based and case-based reasoning to assist designers during conceptual design tasks for composite structures was proposed. The cooperative use of heuristics, procedural knowledge, and previous similar design cases suggests a potential reduction in design cycle time and ultimately product lead time. The hypothesis of this work is that the design process for composite structures can be improved by using Case-Based Reasoning (CBR) and Knowledge-Based (KB) reasoning in the early design stages. The technique of using knowledge-based and case-based reasoning facilitates the gathering of disparate information into one location that is easily and readily available. The method suggests that the inclusion of downstream life-cycle issues in the conceptual design phase reduces the potential for defective and sub-optimal composite structures. Three industry experts were interviewed extensively. The experts provided design rules, previous design cases, and test problems. A knowledge-based reasoning system was developed using the CLIPS (C Language Interpretive Procedural System) environment, and a case-based reasoning system was developed using the Design Memory Utility For Sharing Experiences (MUSE) environment. A Design Characteristic State (DCS) was used to document the design specifications, constraints, and problem areas using attribute-value pair relationships. The DCS provided consistent design information between the knowledge base and case base. Results indicated that the use of knowledge-based and case-based reasoning provided a robust design environment for composite structures. The knowledge base provided design guidance from well-defined rules and procedural knowledge. The case base provided suggestions on design and manufacturing techniques based on previous similar designs, along with warnings of potential problems and pitfalls. The case base complemented the knowledge base and extended the problem-solving capability beyond the existence of
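
The DCS's attribute-value pair representation lends itself to a simple similarity-based retrieval sketch. The attribute names, cases, and matching rule below are illustrative assumptions, not the system described in the abstract:

```python
def similarity(query: dict, case: dict) -> float:
    """Fraction of query attribute-value pairs the stored case matches exactly."""
    if not query:
        return 0.0
    hits = sum(1 for k, v in query.items() if case.get(k) == v)
    return hits / len(query)

def retrieve(query: dict, case_base: list) -> dict:
    """Return the stored case whose DCS is most similar to the query DCS."""
    return max(case_base, key=lambda c: similarity(query, c["dcs"]))

case_base = [
    {"id": "wing-panel-01",
     "dcs": {"material": "carbon-epoxy", "layup": "quasi-isotropic", "load": "bending"}},
    {"id": "fuselage-07",
     "dcs": {"material": "glass-epoxy", "layup": "unidirectional", "load": "torsion"}},
]
best = retrieve({"material": "carbon-epoxy", "load": "bending"}, case_base)
```

A real CBR system would weight attributes and adapt the retrieved case; the sketch only shows the retrieval step.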

  9. Concept maps: A tool for knowledge management and synthesis in web-based conversational learning.

    PubMed

    Joshi, Ankur; Singh, Satendra; Jaswal, Shivani; Badyal, Dinesh Kumar; Singh, Tejinder

    2016-01-01

    Web-based conversational learning provides an opportunity for shared knowledge-base creation through collaboration and the extraction of collective wisdom. Usually, the amount of information generated in such forums is huge and multidimensional (in alignment with the desirable preconditions for constructivist knowledge creation), and sometimes the nature of the expected new information may not be anticipated in advance. Thus, concept maps (crafted from the constructed data) as "process summary" tools may be a solution to improve critical thinking and learning by making connections between the facts or knowledge shared by the participants during online discussion. This exploratory paper begins with a description of this innovation as tried on a web-based interaction platform (email list management software), FAIMER-Listserv, and generated qualitative evidence through peer feedback. This process description is further supported by a theoretical construct which shows how social constructivism (inclusive of autonomy and complexity) affects conversational learning. The paper rationalizes the use of the concept map as a mid-summary tool for extracting information and further sense-making out of this apparent intricacy. PMID:27563577

  12. Hospital nurses' use of knowledge-based information resources.

    PubMed

    Tannery, Nancy Hrinya; Wessel, Charles B; Epstein, Barbara A; Gadd, Cynthia S

    2007-01-01

    The purpose of this study was to evaluate the information-seeking practices of nurses before and after access to a library's electronic collection of information resources. This is a pre/post intervention study of nurses at a rural community hospital. The hospital contracted with an academic health sciences library for access to a collection of online knowledge-based resources. Self-report surveys were used to obtain information about nurses' computer use and how they locate and access information to answer questions related to their patient care activities. In 2001, self-report surveys were sent to the hospital's 573 nurses during implementation of access to online resources with a post-implementation survey sent 1 year later. At the initiation of access to the library's electronic resources, nurses turned to colleagues and print textbooks or journals to satisfy their information needs. After 1 year of access, 20% of the nurses had begun to use the library's electronic resources. The study outcome suggests ready access to knowledge-based electronic information resources can lead to changes in behavior among some nurses.

  13. Framework Support For Knowledge-Based Software Development

    NASA Astrophysics Data System (ADS)

    Huseth, Steve

    1988-03-01

    The advent of personal engineering workstations has brought substantial information processing power to the individual programmer. Advanced tools and environment capabilities supporting the software lifecycle are just beginning to become generally available. However, many of these tools are addressing only part of the software development problem by focusing on rapid construction of self-contained programs by a small group of talented engineers. Additional capabilities are required to support the development of large programming systems where a high degree of coordination and communication is required among large numbers of software engineers, hardware engineers, and managers. A major player in realizing these capabilities is the framework supporting the software development environment. In this paper we discuss our research toward a Knowledge-Based Software Assistant (KBSA) framework. We propose the development of an advanced framework containing a distributed knowledge base that can support the data representation needs of tools, provide environmental support for the formalization and control of the software development process, and offer a highly interactive and consistent user interface.

  14. Mindtagger: A Demonstration of Data Labeling in Knowledge Base Construction

    PubMed Central

    Shin, Jaeho; Ré, Christopher; Cafarella, Michael

    2016-01-01

    End-to-end knowledge base construction systems using statistical inference are enabling more people to automatically extract high-quality domain-specific information from unstructured data. As a result of deploying the DeepDive framework across several domains, we found new challenges in debugging and improving such end-to-end systems to construct high-quality knowledge bases. DeepDive has an iterative development cycle in which users improve the data. To help our users, we needed to develop principles for analyzing the system's errors as well as provide tooling for inspecting and labeling the various data products of the system. We created guidelines for error analysis modeled after our colleagues' best practices, in which data labeling plays a critical role in every step of the analysis. To enable more productive and systematic data labeling, we created Mindtagger, a versatile tool that can be configured to support a wide range of tasks. In this demonstration, we show in detail what data labeling tasks are modeled in our error analysis guidelines and how each of them is performed using Mindtagger. PMID:27144082

  15. The AI Bus architecture for distributed knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain

    1991-01-01

    The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with inter-agent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources, and to each other, without violating their security.

  16. Studies in knowledge-based diagnosis of failures in robotic assembly

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Pollard, Nancy S.; Desai, Rajiv S.

    1990-01-01

    The telerobot diagnostic system (TDS) is a knowledge-based system that is being developed for identification and diagnosis of failures in the space robotic domain. The system is able to isolate the symptoms of the failure, generate failure hypotheses based on these symptoms, and test their validity at various levels by interpreting or simulating the effects of the hypotheses on results of plan execution. The implementation of the TDS is outlined. The classification of failures and the types of system models used by the TDS are discussed. A detailed example of the TDS approach to failure diagnosis is provided.

  17. Knowledge-based navigation of complex information spaces

    SciTech Connect

    Burke, R.D.; Hammond, K.J.; Young, B.C.

    1996-12-31

    While the explosion of on-line information has brought new opportunities for finding and using electronic data, it has also brought to the forefront the problem of isolating useful information and making sense of large multi-dimensional information spaces. We have developed an approach to building data "tour guides," called FINDME systems. These programs know enough about an information space to be able to help a user navigate through it. The user not only comes away with items of useful information but also with insights into the structure of the information space itself. In these systems, we have combined ideas of instance-based browsing, structuring retrieval around the critiquing of previously retrieved examples, and retrieval strategies: knowledge-based heuristics for finding relevant information. We illustrate these techniques with several examples, concentrating especially on the RENTME system, a FINDME system for helping users find suitable rental apartments in the Chicago metropolitan area.
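
Retrieval structured around critiquing a previously retrieved example can be sketched as follows; the apartment data and critique vocabulary are invented for illustration:

```python
# Sketch of retrieval-by-critiquing in the FINDME style: the user tweaks
# a previously retrieved example ("cheaper", "bigger") and the system
# re-retrieves candidates that satisfy the critique.
apartments = [
    {"id": 1, "rent": 900,  "bedrooms": 1},
    {"id": 2, "rent": 1200, "bedrooms": 2},
    {"id": 3, "rent": 700,  "bedrooms": 1},
]

CRITIQUES = {
    "cheaper": lambda cand, cur: cand["rent"] < cur["rent"],
    "bigger":  lambda cand, cur: cand["bedrooms"] > cur["bedrooms"],
}

def critique(current, name, items):
    """Return candidates satisfying the critique, closest in rent first."""
    ok = [a for a in items if CRITIQUES[name](a, current) and a is not current]
    return sorted(ok, key=lambda a: abs(a["rent"] - current["rent"]))

result = critique(apartments[1], "cheaper", apartments)  # cheaper than apartment 2
```

Keeping the current example as the reference point is what gives the user a feel for the structure of the space, rather than just a ranked list.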

  18. TMS for Instantiating a Knowledge Base With Incomplete Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A computer program that belongs to the class known among software experts as output truth-maintenance systems (output TMSs) has been devised as one of a number of software tools for reducing the size of the knowledge base that must be searched during execution of artificial-intelligence software of the rule-based inference-engine type when data are missing. This program determines whether the consequences of activating two or more rules can be combined without causing a logical inconsistency. For example, in a case involving hypothetical scenarios that could lead to turning a given device on or off, the program determines whether a scenario involving a given combination of rules could lead to turning the device both on and off at the same time, in which case that combination of rules would not be included in the scenario.
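
The consistency test described can be sketched as follows, under the simplifying assumption that each rule's consequents are variable assignments; a combination is inconsistent when two rules would assign one variable different values. Rule contents are invented for illustration.

```python
def combine(*rule_consequents):
    """Merge rule consequents; return None on a logical inconsistency."""
    merged = {}
    for consequents in rule_consequents:
        for var, value in consequents.items():
            if var in merged and merged[var] != value:
                return None  # e.g. device both 'on' and 'off' at once
            merged[var] = value
    return merged

r1 = {"heater": "on", "valve": "open"}
r2 = {"pump": "on"}
r3 = {"heater": "off"}

ok = combine(r1, r2)   # consistent combination
bad = combine(r1, r3)  # heater 'on' vs 'off': rejected
```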

  19. Knowledge-based classification of neuronal fibers in entire brain.

    PubMed

    Xia, Yan; Turken, U; Whitfield-Gabrieli, Susan L; Gabrieli, John D

    2005-01-01

    This work presents a framework driven by parcellation of brain gray matter in standard normalized space to classify the neuronal fibers obtained from diffusion tensor imaging (DTI) in entire human brain. Classification of fiber bundles into groups is an important step for the interpretation of DTI data in terms of functional correlates of white matter structures. Connections between anatomically delineated brain regions that are considered to form functional units, such as a short-term memory network, are identified by first clustering fibers based on their terminations in anatomically defined zones of gray matter according to Talairach Atlas, and then refining these groups based on geometric similarity criteria. Fiber groups identified this way can then be interpreted in terms of their functional properties using knowledge of functional neuroanatomy of individual brain regions specified in standard anatomical space, as provided by functional neuroimaging and brain lesion studies. PMID:16685847

  20. Knowledge-based assistance in costing the space station DMS

    NASA Technical Reports Server (NTRS)

    Henson, Troy; Rone, Kyle

    1988-01-01

    The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.

  1. Systems, methods and apparatus for verification of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Rash, James L. (Inventor); Erickson, John D. (Inventor); Gracinin, Denis (Inventor); Rouff, Christopher A. (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments, domain knowledge is translated into a knowledge-based system. In some embodiments, a formal specification is derived from rules of a knowledge-based system, the formal specification is analyzed, and flaws in the formal specification are used to identify and correct errors in the domain knowledge, from which a knowledge-based system is translated.

  2. A rainfall simulator based on multifractal generator

    NASA Astrophysics Data System (ADS)

    Akrour, Nawal; mallet, Cecile; barthes, Laurent; chazottes, Aymeric

    2015-04-01

    Precipitation is due to complex meteorological phenomena and, unlike other geophysical constituents such as water vapour concentration, presents a relaxation behaviour leading to an alternation of dry and wet periods; precipitation can thus be described as an intermittent process. The spatial and temporal variability of this phenomenon is significant and covers large scales. This high variability can cause extreme events which are difficult to observe properly because of their suddenness and their localized character. For all these reasons, precipitation is difficult to model. This study aims to adapt a one-dimensional time-series model previously developed by the authors [Akrour et al., 2013, 2014] into a two-dimensional rainfall generator. The original time-series model can be divided into three major steps: rain support generation, intra-event rain-rate generation using multifractals, and finally a calibration process. We use the same kind of methodology in the present study. Based on a dataset obtained from the Météo France meteorological radar with a spatial resolution of 1 km x 1 km, the approach is as follows. First, the extraction of the rain support (rain/no-rain areas) allows the retrieval of the rain support structure function (variogram) and fractal properties; this leads us to use either the rain support modelisation proposed by ScleissXXX [ref] or real rain supports extracted directly from radar rain maps. Then, rain rates are generated over rain areas with a 2D multifractal Fractionally Integrated Flux (FIF) model [ref]. This second stage is followed by a calibration/forcing step (forcing the average rain rate per event), added in order to provide rain rates coherent with the observed rain-rate distribution. The forcing process is based on a relation identified between the average rain rate of observed events and their surfaces. The presentation will first explain the different steps presented above, then some results
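
The three-step pipeline (rain support, multifractal rain rates, calibration) can be sketched in one dimension. A simple random multiplicative cascade stands in for the 2D FIF model, and all parameters are illustrative assumptions:

```python
import random

def cascade(n_levels: int, rng: random.Random) -> list:
    """1-D multiplicative cascade: each level splits mass unevenly in two."""
    field = [1.0]
    for _ in range(n_levels):
        nxt = []
        for v in field:
            w = rng.uniform(0.3, 0.7)  # random, mean-preserving weight split
            nxt.extend([v * 2 * w, v * 2 * (1 - w)])
        field = nxt
    return field

def calibrate(field: list, target_mean: float) -> list:
    """Force the field's mean rain rate to the observed event mean."""
    m = sum(field) / len(field)
    return [v * target_mean / m for v in field]

rng = random.Random(42)
rates = calibrate(cascade(8, rng), target_mean=4.2)  # 256 pixels, mean 4.2 mm/h
```

The calibration step mirrors the forcing described above: the generated field is rescaled so that each event's mean matches the value predicted from observed events.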

  3. Machine discovery based on numerical data generated in computer experiments

    SciTech Connect

    Murata, Tsuyoshi; Shimura, Masamichi

    1996-12-31

    In the discovery of useful theorems or formulas, experimental data acquisition plays a fundamental role. Most previous discovery systems with the ability to experiment, however, require much knowledge for evaluating experimental results, or require plans for common experiments that are given to the systems in advance. Only a few systems have attempted to make experiments that enable discovery based on acquired experimental data without depending on given initial knowledge. This paper proposes a new approach for discovering useful theorems in the domain of plane geometry by employing experimentation. In this domain, drawing a figure and observing it correspond to experimentation, since these two processes are preparations for acquiring geometrical data. EXPEDITION, a discovery system based on experimental data acquisition, generates figures by itself and acquires expressions describing relations among line segments and angles in the figures. Such expressions can be extracted from the numerical data obtained in the computer experiments. By using simple heuristics for drawing and observing figures, the system succeeds in discovering many new useful theorems and formulas as well as rediscovering well-known theorems, such as the power theorems and Thales' theorem.

  4. Knowledge-based control of an adaptive interface

    NASA Technical Reports Server (NTRS)

    Lachman, Roy

    1989-01-01

    The analysis, development strategy, and preliminary design for an intelligent, adaptive interface are reported. The design philosophy couples knowledge-based system technology with standard human-factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent-interface prototype because text-processing software can be developed initially as fully functional but limited to a small set of commands; the program's complexity can then be increased incrementally. The rule base covers the operator's behavior and three types of instructions to the underlying application software. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent-interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.
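
The inference cycle described above (match rule antecedents against facts, send the consequents of fired rules as commands) is a plain forward-chaining loop. A minimal sketch with invented rules and facts:

```python
# Rules pair a set of antecedent facts with a command consequent.
# Rule contents are illustrative, not from the described system.
rules = [
    {"if": {"novice_user", "long_pause"}, "then": "show_hint"},
    {"if": {"repeated_error"},            "then": "offer_tutorial"},
    {"if": {"expert_user", "long_pause"}, "then": "do_nothing"},
]

def run_engine(facts: set) -> list:
    """Fire every rule whose antecedents are all present in the fact base."""
    commands = []
    for rule in rules:
        if rule["if"] <= facts:            # all antecedents satisfied
            commands.append(rule["then"])  # consequent becomes a command
    return commands

cmds = run_engine({"novice_user", "long_pause", "repeated_error"})
```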

  5. Autonomous Cryogenic Load Operations: Knowledge-Based Autonomous Test Engineer

    NASA Technical Reports Server (NTRS)

    Schrading, J. Nicolas

    2013-01-01

    The Knowledge-Based Autonomous Test Engineer (KATE) program has a long history at KSC. Now a part of the Autonomous Cryogenic Load Operations (ACLO) mission, this software system has been sporadically developed over the past 20 years. Originally designed to provide health and status monitoring for a simple water-based fluid system, it was proven to be a capable autonomous test engineer for determining sources of failure in the system. As part of a new goal to provide this same anomaly-detection capability for a complicated cryogenic fluid system, software engineers, physicists, interns and KATE experts are working to upgrade the software capabilities and graphical user interface. Much progress was made during this effort to improve KATE. A display of the entire cryogenic system's graph, with nodes for components and edges for their connections, was added to the KATE software. A searching functionality was added to the new graph display, so that users could easily center their screen on specific components. The GUI was also modified so that it displayed information relevant to the new project goals. In addition, work began on adding new pneumatic and electronic subsystems into the KATE knowledge base, so that it could provide health and status monitoring for those systems. Finally, many fixes for bugs, memory leaks, and memory errors were implemented and the system was moved into a state in which it could be presented to stakeholders. Overall, the KATE system was improved and necessary additional features were added so that a presentation of the program and its functionality in the next few months would be a success.

  6. The Knowledge Base Interface for Parametric Grid Information

    SciTech Connect

    Hipp, James R.; Simons, Randall W.; Young, Chris J.

    1999-08-03

    The parametric grid capability of the Knowledge Base (KBase) provides an efficient, robust way to store and access interpolatable information that is needed to monitor the Comprehensive Nuclear-Test-Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use an approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation. The method involves three basic steps: data preparation, data storage, and data access. In past presentations we have discussed the first step in detail. In this paper we focus on the latter two, describing in detail the type of information which must be stored and the interface used to retrieve parametric grid data from the Knowledge Base. Once data have been properly prepared, the information (tessellation and associated value surfaces) needed to support the interface functionality can be entered into the KBase. The primary types of parametric grid data that must be stored include (1) generic header information; (2) base model, station, and phase names and associated IDs used to construct surface identifiers; (3) surface accounting information; (4) tessellation accounting information; (5) mesh data for each tessellation; (6) correction data defined for each surface at each node of the surface's owning tessellation; (7) mesh refinement calculation set-up and flag information; and (8) kriging calculation set-up and flag information. The eight data components not only represent the results of the data preparation process but also include all required input information for several population tools that would enable the complete regeneration of the data results if that should be necessary.

  7. A knowledge-based system design/information tool

    NASA Technical Reports Server (NTRS)

    Allen, James G.; Sikora, Scott E.

    1990-01-01

    The objective of this effort was to develop a Knowledge Capture System (KCS) for the Integrated Test Facility (ITF) at the Dryden Flight Research Facility (DFRF). The DFRF is a NASA Ames Research Center (ARC) facility. This system was used to capture the design and implementation information for NASA's high angle-of-attack research vehicle (HARV), a modified F/A-18A. In particular, the KCS was used to capture specific characteristics of the design of the HARV fly-by-wire (FBW) flight control system (FCS). The KCS utilizes artificial intelligence (AI) knowledge-based system (KBS) technology. The KCS enables the user to capture the following characteristics of automated systems: the system design; the hardware (H/W) design and implementation; the software (S/W) design and implementation; and the utilities (electrical and hydraulic) design and implementation. A generic version of the KCS was developed which can be used to capture the design information for any automated system. The deliverable items for this project consist of the prototype generic KCS and an application, which captures selected design characteristics of the HARV FCS.

  8. Compiling knowledge-based systems from KEE to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications - most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Described here are the first efforts to develop a system for compiling KBSs developed in KEE to Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights on Ada as an artificial intelligence programming language, potential solutions of some of the engineering difficulties encountered in early work, and inspiration on future system development.

  9. Evidence-based decision-making 7: Knowledge translation.

    PubMed

    Manns, Braden J

    2015-01-01

    There is a significant gap between what is known and what is implemented by key stakeholders in practice (the evidence to practice gap). The primary purpose of knowledge translation is to address this gap, bridging evidence to clinical practice. The knowledge to action cycle is one framework for knowledge translation that integrates policy-makers throughout the research cycle. The knowledge to action cycle begins with the identification of a problem (usually a gap in care provision). After identification of the problem, knowledge creation is undertaken, depicted at the center of the cycle as a funnel. Knowledge inquiry is at the wide end of the funnel, and moving down the funnel, the primary data is synthesized into knowledge products in the form of educational materials, guidelines, decision aids, or clinical pathways. The remaining components of the knowledge to action cycle refer to the action of applying the knowledge that has been created. This includes adapting knowledge to local context, assessing barriers to knowledge use, selecting, tailoring, and implementing interventions, monitoring knowledge use, evaluating outcomes, and sustaining knowledge use. Each of these steps is connected by bidirectional arrows and ideally involves healthcare decision-makers and key stakeholders at each transition.

  10. A rule-based software test data generator

    NASA Technical Reports Server (NTRS)

    Deason, William H.; Brown, David B.; Chang, Kai-Hsiung; Cross, James H., II

    1991-01-01

    Rule-based software test data generation is proposed as an alternative to either path/predicate analysis or random data generation. A prototype rule-based test data generator for Ada programs is constructed and compared to a random test data generator. Four Ada procedures are used in the comparison. Approximately 2000 rule-based test cases and 100,000 randomly generated test cases are automatically generated and executed. The success of the two methods is compared using standard coverage metrics. Simple statistical tests are performed, showing that even this primitive rule-based test data generation prototype is significantly better than random data generation. This result demonstrates that rule-based test data generation is feasible and shows great promise in assisting test engineers, especially when the rule base is developed further.
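    The contrast the abstract draws can be illustrated with a toy comparison: a boundary-value rule reaches every branch of a small procedure with six cases, while random generation needs many more. The rule, the procedure, and its ranges below are invented for illustration, not the paper's Ada prototype.

```python
import random

def rule_based_cases(lo, hi):
    """Boundary-value rule: probe just below, at, and just above bounds."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def random_cases(lo, hi, n, seed=0):
    """Uniform random inputs from a wide surrounding interval."""
    rng = random.Random(seed)
    return [rng.randint(lo - 100, hi + 100) for _ in range(n)]

def classify(x):
    """Toy procedure under test, with three branches keyed to 0..10."""
    if x < 0:
        return "low"
    if x > 10:
        return "high"
    return "in-range"

def branch_coverage(cases):
    """Set of branches exercised by a list of inputs."""
    return {classify(x) for x in cases}

# Six rule-based cases already reach all three branches.
rb_coverage = branch_coverage(rule_based_cases(0, 10))
```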

  11. Knowledge-based generalization of metabolic networks: a practical study.

    PubMed

    Zhukova, Anna; Sherman, David J

    2014-04-01

    The complex process of genome-scale metabolic network reconstruction involves semi-automatic reaction inference, analysis, and refinement through curation by human experts. Unfortunately, decisions by experts are hampered by the complexity of the network, which can mask errors in the inferred network. In order to aid an expert in making sense out of the thousands of reactions in the organism's metabolism, we developed a method for knowledge-based generalization that provides a higher-level view of the network, highlighting the particularities and essential structure, while hiding the details. In this study, we show the application of this generalization method to 1,286 metabolic networks of organisms in Path2Models that describe fatty acid metabolism. We compare the generalised networks and show that we successfully highlight the aspects that are important for their curation and comparison. PMID:24712528

  12. Image-based querying of urban knowledge databases

    NASA Astrophysics Data System (ADS)

    Cho, Peter; Bae, Soonmin; Durand, Fredo

    2009-05-01

    We extend recent automated computer vision algorithms to reconstruct the global three-dimensional structures for photos and videos shot at fixed points in outdoor city environments. Mosaics of digital stills and embedded videos are georegistered by matching a few of their 2D features with 3D counterparts in aerial ladar imagery. Once image planes are aligned with world maps, abstract urban knowledge can propagate from the latter into the former. We project geotagged annotations from a 3D map into a 2D video stream and demonstrate their tracking of buildings and streets in a clip with significant panning motion. We also present an interactive tool which enables users to select city features of interest in video frames and retrieve their geocoordinates and ranges. Implications of this work for future augmented reality systems based upon mobile smart phones are discussed.
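    Projecting a geotagged 3D annotation into a 2D frame reduces, at its core, to a pinhole camera projection once the image plane is georegistered. The intrinsics and pose below are made-up illustration values, not the paper's calibration.

```python
import numpy as np

# Toy pinhole projection of a georegistered 3D annotation into an image
# frame. K, R, and t are hypothetical camera parameters.

K = np.array([[800.0, 0.0, 320.0],    # focal lengths and principal point
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                         # camera aligned with world axes
t = np.zeros(3)                       # camera at the world origin

def project(point_3d):
    """Map a world point to pixel coordinates (u, v)."""
    cam = R @ point_3d + t            # world frame -> camera frame
    uvw = K @ cam                     # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]

u, v = project(np.array([1.0, 0.5, 10.0]))   # -> (400.0, 280.0)
```

    Tracking an annotation through a panning clip amounts to repeating this projection with a per-frame rotation R.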

  14. A model for a knowledge-based system's life cycle

    NASA Technical Reports Server (NTRS)

    Kiss, Peter A.

    1990-01-01

    The American Institute of Aeronautics and Astronautics has initiated a Committee on Standards for Artificial Intelligence. Presented here are the initial efforts of one of the working groups of that committee. The purpose here is to present a candidate model for the development life cycle of Knowledge Based Systems (KBS). The intent is for the model to be used by the Aerospace Community and eventually be evolved into a standard. The model is rooted in the evolutionary model, borrows from the spiral model, and is embedded in the standard Waterfall model for software development. Its intent is to satisfy the development of both stand-alone and embedded KBSs. The phases of the life cycle are detailed, as are the review points that constitute the key milestones throughout the development process. The applicability and strengths of the model are discussed along with areas needing further development and refinement by the aerospace community.

  15. Strategic Positioning of HRM in Knowledge-Based Organizations

    ERIC Educational Resources Information Center

    Thite, Mohan

    2004-01-01

    With knowledge management as the strategic intent and learning to learn as the strategic weapon, the current management focus is on how to leverage knowledge faster and better than competitors. Research demonstrates that it is the cultural mindset of the people in the organisation that primarily defines success in knowledge intensive…

  16. Background Knowledge in Learning-Based Relation Extraction

    ERIC Educational Resources Information Center

    Do, Quang Xuan

    2012-01-01

    In this thesis, we study the importance of background knowledge in relation extraction systems. We not only demonstrate the benefits of leveraging background knowledge to improve the systems' performance but also propose a principled framework that allows one to effectively incorporate knowledge into statistical machine learning models for…

  17. A Large-Scale Knowledge Management Method Based on the Analysis of the Use of Online Knowledge Resources

    PubMed Central

    Del Fiol, Guilherme; Cimino, James J; Maviglia, Saverio M; Strasberg, Howard R; Jackson, Brian R; Hulse, Nathan C

    2010-01-01

    Online health knowledge resources can be integrated into electronic health record systems using decision support tools known as “infobuttons.” In this study we describe a knowledge management method based on the analysis of knowledge resource use via infobuttons in multiple institutions. Methods: We conducted a two-phase analysis of laboratory test infobutton sessions at three healthcare institutions accessing two knowledge resources. The primary study measure was session coverage, i.e. the rate of infobutton sessions in which resources retrieved relevant content. Results: In Phase One, resources covered 78.5% of the study sessions. In addition, a subset of 38 noncovered tests that most frequently raised questions was identified. In Phase Two, content development guided by the outcomes of Phase One resulted in a 4% average coverage increase. Conclusion: The described method is a valuable approach to large-scale knowledge management in rapidly changing domains. PMID:21346957
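    The study's primary measure, session coverage, is a simple ratio and can be sketched directly. The boolean-per-session log format below is a simplification invented here; the study's actual data schema is richer.

```python
# Session coverage: the fraction of infobutton sessions in which a
# knowledge resource retrieved relevant content.

def session_coverage(sessions):
    """sessions: iterable of booleans, True if content was relevant."""
    sessions = list(sessions)
    return sum(sessions) / len(sessions)

# 785 covered sessions out of 1000 reproduces the 78.5% Phase One figure.
coverage = session_coverage([True] * 785 + [False] * 215)
```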

  18. Knowledge-based data analysis comes of age

    PubMed Central

    2010-01-01

    The emergence of high-throughput technologies for measuring biological systems has introduced problems for data interpretation that must be addressed for proper inference. First, analysis techniques need to be matched to the biological system, reflecting in their mathematical structure the underlying behavior being studied. When this is not done, mathematical techniques will generate answers, but the values and reliability estimates may not accurately reflect the biology. Second, analysis approaches must address the vast excess in variables measured (e.g. transcript levels of genes) over the number of samples (e.g. tumors, time points), known as the ‘large-p, small-n’ problem. In large-p, small-n paradigms, standard statistical techniques generally fail, and computational learning algorithms are prone to overfit the data. Here we review the emergence of techniques that match mathematical structure to the biology, the use of integrated data and prior knowledge to guide statistical analysis, and the recent emergence of analysis approaches utilizing simple biological models. We show that novel biological insights have been gained using these techniques. PMID:19854753
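    The 'large-p, small-n' failure mode described above is easy to demonstrate: with far more variables than samples, ordinary least squares fits pure noise perfectly, so a near-zero training error carries no biological meaning. The data below are synthetic and the sizes arbitrary.

```python
import numpy as np

# With p >> n, the columns of X span the sample space almost surely,
# so least squares reproduces even a random outcome vector exactly.

rng = np.random.default_rng(0)
n, p = 10, 200                       # 10 "tumors", 200 "transcripts"
X = rng.normal(size=(n, p))
y = rng.normal(size=n)               # outcome is pure noise

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # minimum-norm solution
train_error = float(np.linalg.norm(X @ beta - y))   # essentially zero
```

    Guarding against this overfit is precisely where the integrated prior knowledge and simple biological models reviewed here come in.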

  19. Dilemmatic Spaces: High-Stakes Testing and the Possibilities of Collaborative Knowledge Work to Generate Learning Innovations

    ERIC Educational Resources Information Center

    Singh, Parlo; Märtsin, Mariann; Glasswell, Kathryn

    2015-01-01

    This paper examines collaborative researcher-practitioner knowledge work around assessment data in culturally diverse, low socio-economic school communities in Queensland, Australia. Specifically, the paper draws on interview accounts about the work of a cohort of school-based researchers who acted as mediators bridging knowledge flows between a…

  20. A national knowledge-based crop recognition in Mediterranean environment

    NASA Astrophysics Data System (ADS)

    Cohen, Yafit; Shoshany, Maxim

    2002-08-01

    Population growth, urban expansion, land degradation, civil strife and war may place plant natural resources for food and agriculture at risk. Crop and yield monitoring is basic information necessary for wise management of these resources. Satellite remote sensing techniques have proven to be cost-effective in widespread agricultural lands in Africa, America, Europe and Australia. However, they have had limited success in Mediterranean regions that are characterized by a high rate of spatio-temporal ecological heterogeneity and high fragmentation of farming lands. An integrative knowledge-based approach is needed for this purpose, which combines imagery and geographical data within the framework of an intelligent recognition system. This paper describes the development of such a crop recognition methodology and its application to an area that comprises approximately 40% of the cropland in Israel. This area contains eight crop types that represent 70% of Israeli agricultural production. Multi-date Landsat TM images representing seasonal vegetation cover variations were converted to normalized difference vegetation index (NDVI) layers. Field boundaries were delineated by merging Landsat data with SPOT-panchromatic images. Crop recognition was then achieved in two-phases, by clustering multi-temporal NDVI layers using unsupervised classification, and then applying 'split-and-merge' rules to these clusters. These rules were formalized through comprehensive learning of relationships between crop types, imagery properties (spectral and NDVI) and auxiliary data including agricultural knowledge, precipitation and soil types. Assessment of the recognition results using ground data from the Israeli Agriculture Ministry indicated an average recognition accuracy exceeding 85% which accounts for both omission and commission errors. The two-phase strategy implemented in this study is apparently successful for heterogeneous regions. This is due to the fact that it allows
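    The NDVI layers at the heart of the two-phase scheme are computed per pixel as (NIR - Red) / (NIR + Red). The band values below are illustrative reflectances, not actual Landsat TM data.

```python
import numpy as np

# NDVI computation for a tiny 2x2 scene. Dense vegetation pushes NDVI
# toward 1; bare soil and water sit near or below 0.

nir = np.array([[0.50, 0.40],
                [0.30, 0.10]])   # near-infrared band (illustrative)
red = np.array([[0.10, 0.20],
                [0.30, 0.10]])   # red band (illustrative)

ndvi = (nir - red) / (nir + red)
```

    Stacking one such layer per acquisition date yields the multi-temporal NDVI stack that the unsupervised clustering step operates on.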

  1. Incremental Knowledge Base Construction Using DeepDive

    PubMed Central

    Shin, Jaeho; Wu, Sen; Wang, Feiran; De Sa, Christopher; Zhang, Ce; Ré, Christopher

    2016-01-01

    Populating a database with unstructured information is a long-standing problem in industry and research that encompasses problems of extraction, cleaning, and integration. Recent names used for this problem include dealing with dark data and knowledge base construction (KBC). In this work, we describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems, and we present techniques to make the KBC process more efficient. We observe that the KBC process is iterative, and we develop techniques to incrementally produce inference results for KBC systems. We propose two methods for incremental inference, based respectively on sampling and variational techniques. We also study the tradeoff space of these methods and develop a simple rule-based optimizer. DeepDive includes all of these contributions, and we evaluate DeepDive on five KBC systems, showing that it can speed up KBC inference tasks by up to two orders of magnitude with negligible impact on quality. PMID:27144081
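    The sampling-based inference the abstract mentions can be illustrated on a toy factor graph: estimate a fact's marginal probability by Gibbs sampling and check it against exact enumeration. The weights and two-variable model below are invented for illustration and are not DeepDive's actual engine.

```python
import math
import random

# Two boolean variables with unary weights and a pairwise "agreement"
# factor (all weights hypothetical).
w1, w2, w12 = 0.5, -0.3, 1.0

def unnorm(x1, x2):
    """Unnormalized probability: exp of the weighted sum of fired factors."""
    return math.exp(w1 * x1 + w2 * x2 + w12 * (x1 == x2))

# Exact marginal P(x1 = 1) by enumeration (feasible only for tiny graphs).
z = sum(unnorm(a, b) for a in (0, 1) for b in (0, 1))
exact = sum(unnorm(1, b) for b in (0, 1)) / z

def gibbs_marginal(sweeps=30000, seed=1):
    """Estimate P(x1 = 1) by Gibbs sampling the two conditionals."""
    rng = random.Random(seed)
    x1, x2, hits = 0, 0, 0
    for _ in range(sweeps):
        p1 = 1 / (1 + unnorm(0, x2) / unnorm(1, x2))
        x1 = 1 if rng.random() < p1 else 0
        p2 = 1 / (1 + unnorm(x1, 0) / unnorm(x1, 1))
        x2 = 1 if rng.random() < p2 else 0
        hits += x1
    return hits / sweeps

estimate = gibbs_marginal()
```

    Incremental inference then amounts to reusing such samples when only part of the factor graph changes, rather than restarting the chain from scratch.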

  2. Identification of threats using linguistics-based knowledge extraction.

    SciTech Connect

    Chew, Peter A.

    2008-09-01

    One of the challenges increasingly facing intelligence analysts, along with professionals in many other fields, is the vast amount of data which needs to be reviewed and converted into meaningful information, and ultimately into rational, wise decisions by policy makers. The advent of the world wide web (WWW) has magnified this challenge. A key hypothesis which has guided us is that threats come from ideas (or ideology), and ideas are almost always put into writing before the threats materialize. While in the past the 'writing' might have taken the form of pamphlets or books, today's medium of choice is the WWW, precisely because it is a decentralized, flexible, and low-cost method of reaching a wide audience. However, a factor which complicates matters for the analyst is that material published on the WWW may be in any of a large number of languages. In 'Identification of Threats Using Linguistics-Based Knowledge Extraction', we have sought to use Latent Semantic Analysis (LSA) and other similar text analysis techniques to map documents from the WWW, in whatever language they were originally written, to a common language-independent vector-based representation. This then opens up a number of possibilities. First, similar documents can be found across language boundaries. Secondly, a set of documents in multiple languages can be visualized in a graphical representation. These alone offer potentially useful tools and capabilities to the intelligence analyst whose knowledge of foreign languages may be limited. Finally, we can test the over-arching hypothesis--that ideology, and more specifically ideology which represents a threat, can be detected solely from the words which express the ideology--by using the vector-based representation of documents to predict additional features (such as the ideology) within a framework based on supervised learning. In this report, we present the results of a three-year project of the same name. We believe these results clearly
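    The core LSA step, mapping documents into a low-rank concept space where they can be compared independently of surface vocabulary, can be sketched with a truncated SVD of a term-document matrix. The terms and counts below are invented, and real LSA applies weighting (e.g. log-entropy) that is omitted here.

```python
import numpy as np

# Tiny term-document matrix: rows are terms, columns are documents.
# Docs 1 and 3 share vocabulary; doc 2 uses disjoint terms.
td = np.array([[2, 0, 1],   # term "threat"
               [1, 0, 1],   # term "ideology"
               [0, 3, 0],   # term "weather"
               [0, 2, 0]])  # term "rain"

U, s, Vt = np.linalg.svd(td, full_matrices=False)
k = 2                                       # truncation rank
doc_vectors = (np.diag(s[:k]) @ Vt[:k]).T   # one k-dim vector per document

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

sim_13 = cosine(doc_vectors[0], doc_vectors[2])  # shared terms -> near 1
sim_12 = cosine(doc_vectors[0], doc_vectors[1])  # disjoint terms -> near 0
```

    In the multilingual setting, the matrix is built over a parallel corpus so that translated documents land near each other in the same concept space.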

  3. The Knowledge-Based Economy and E-Learning: Critical Considerations for Workplace Democracy

    ERIC Educational Resources Information Center

    Remtulla, Karim A.

    2007-01-01

    The ideological shift by nation-states to "a knowledge-based economy" (also referred to as "knowledge-based society") is causing changes in the workplace. Brought about by the forces of globalisation and technological innovation, the ideologies of the "knowledge-based economy" are not limited to influencing the production, consumption and economic…

  4. Effects of people knowledge on science learning in a computer-based learning environment

    NASA Astrophysics Data System (ADS)

    Hong, Huang-Yao

    A weakness inherent in science education has been, and continues to be, its emphasis principally on the teaching of scientific knowledge, i.e. knowledge of the object (or the observed). Little attention has been directed to the teaching of people knowledge about scientists, i.e. knowledge of the subject (or the observer), who generates scientific knowledge. This study explored the possible effects of people knowledge on science learning. Participants in the study were 323 tenth graders from nine classes in a public school in Taipei, Taiwan. They were randomly assigned to three groups to self-study science in a computer-based learning environment. The control group was instructed to study various scientific laws discovered by three scientists in three science lessons. The other two groups were instructed to study the same science lessons after studying one of two kinds of people knowledge about the three scientists: achievement-oriented people knowledge (APK) and process-oriented people knowledge (PPK). APK profiles scientists' scientific achievements, and PPK describes scientists' struggles before making the scientific discoveries. The main findings were: Firstly, it was found from problem-solving tests that all three groups performed equally well in applying what they learned from the lessons to solve textbook problems. However, in applying what they learned to interpret the relationships between scientific laws, only the PPK group performed better. Secondly, regarding learning interest, among the students who showed high personal interest in science, the APK group tended to consider the lessons as less interesting than the control group. Among the students who demonstrated low personal interest in science, the PPK group tended to consider the science lessons as more interesting than the control group. 
Thirdly, in describing their image of the three scientists, the APK group tended to emphasize the abilities and successes of the scientists, whereas the PPK group

  5. Using Teacher-Generated Ecological Models to Assess Knowledge Gained During Teacher Training

    NASA Astrophysics Data System (ADS)

    Dresner, M.; Moldenke, A.

    2005-12-01

    Developing a capacity for systems thinking (ways to understand complex systems) requires both immersion in challenging, real-world problem contexts and exposure to systems analysis language, tools and procedures, such as ecosystem modeling. Modeling is useful as a means of conveying complex, dynamic interactions. Models of ecosystems can facilitate an ability to be attentive to whole systems by illustrating multiple factors of interaction, feedback, subsystems and inputs and outputs, which lead to a greater understanding of ecosystem functioning. Concept mapping, which uses hierarchically organized models of students' ideas, is used in assessment, but it does not have any outside utility. Ecosystem models, on the other hand, are legitimate end-products in and of themselves. A change made in a learner-generated model that conforms to patterns observed in nature by the learner can be seen as a reflection of his or her understanding. Starting with their own reflections on previous ecological knowledge, teachers will model components of the ecosystem they are about to study. 'Teaching models' will be used to familiarize learners with the symbolic language of models and to teach some basic ecology concepts. Teachers then work directly with ecologists in conducting research, using the steps of a straightforward study as a guide, and then observe and discuss patterns in the data they have collected. Higher-order thinking skills are practiced through the reflective use of ecological models. Through a series of questions including analysis, relational reasoning, synthesis, testing, and explaining, pairs of teachers describe to one another the principles and theories about ecology that they think might be operating in their models. They describe the consequences of human-caused impacts and possible causal patterns. They explain any differences in their understanding of ecosystem interactions before and after their research experiences.

  6. A protein relational database and protein family knowledge bases to facilitate structure-based design analyses.

    PubMed

    Mobilio, Dominick; Walker, Gary; Brooijmans, Natasja; Nilakantan, Ramaswamy; Denny, R Aldrin; Dejoannis, Jason; Feyfant, Eric; Kowticwar, Rupesh K; Mankala, Jyoti; Palli, Satish; Punyamantula, Sairam; Tatipally, Maneesh; John, Reji K; Humblet, Christine

    2010-08-01

    The Protein Data Bank is the most comprehensive source of experimental macromolecular structures. It can, however, be difficult at times to locate relevant structures with the Protein Data Bank search interface. This is particularly true when searching for complexes containing specific interactions between protein and ligand atoms. Moreover, searching within a family of proteins can be tedious. For example, one cannot search for some conserved residue as residue numbers vary across structures. We describe herein three databases, Protein Relational Database, Kinase Knowledge Base, and Matrix Metalloproteinase Knowledge Base, containing protein structures from the Protein Data Bank. In Protein Relational Database, atom-atom distances between protein and ligand have been precalculated allowing for millisecond retrieval based on atom identity and distance constraints. Ring centroids, centroid-centroid and centroid-atom distances and angles have also been included permitting queries for pi-stacking interactions and other structural motifs involving rings. Other geometric features can be searched through the inclusion of residue pair and triplet distances. In Kinase Knowledge Base and Matrix Metalloproteinase Knowledge Base, the catalytic domains have been aligned into common residue numbering schemes. Thus, by searching across Protein Relational Database and Kinase Knowledge Base, one can easily retrieve structures wherein, for example, a ligand of interest is making contact with the gatekeeper residue.
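    The precalculation idea behind the Protein Relational Database, computing all protein-ligand atom pair distances once so that distance-constraint queries become lookups rather than geometry, can be sketched in a few lines. The coordinates and atom names below are invented for illustration; the real system stores the table in a relational database, not an in-memory dict.

```python
import itertools
import math

# Hypothetical protein and ligand atoms with 3D coordinates.
protein_atoms = {"GLU70:OE1": (1.0, 0.0, 0.0), "LEU83:N": (4.0, 0.0, 0.0)}
ligand_atoms = {"LIG:N1": (0.0, 0.0, 0.0), "LIG:O2": (3.0, 4.0, 0.0)}

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Precomputed table keyed by (protein atom, ligand atom).
pair_distances = {
    (p, l): dist(pa, la)
    for (p, pa), (l, la) in itertools.product(protein_atoms.items(),
                                              ligand_atoms.items())
}

def query(max_dist, protein_pattern=""):
    """Retrieve pairs within max_dist whose protein atom id matches the
    given substring -- a lookup, with no geometry recomputed."""
    return sorted(pair for pair, d in pair_distances.items()
                  if d <= max_dist and protein_pattern in pair[0])

hits = query(2.0)   # contacts within 2 Angstroms
```

    The paper's centroid-centroid distances and angles for pi-stacking queries would be additional precomputed columns in the same spirit.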

  7. Generating procedural and conceptual knowledge of fractions by pre-service teachers

    NASA Astrophysics Data System (ADS)

    Chinnappan, Mohan; Forrester, Tricia

    2014-12-01

    Knowledge that teachers bring to the teaching context is of interest to key stakeholders in improving levels of numeracy attained by learners. In this regard, the centrality of, and the need to investigate, the quality of teachers' mathematical knowledge for teaching mathematics has been gaining momentum in recent years. There is a general consensus that teachers need a robust body of content and pedagogical knowledge related to mathematics and that one impacts on the other. However, in current debates about this interconnection between content knowledge and pedagogical content knowledge, there is limited analysis about the procedural-conceptual nature of content knowledge that, we argue, has significant impact on the development of pedagogical content knowledge. In this report, this issue is investigated by examining the state of procedural and conceptual knowledge of two cohorts of pre-service teachers and analyzing the impact of a representational reasoning teaching and learning (RRTL) approach aimed at supporting a balanced development of these two dimensions of Content Knowledge.

  8. Computer game-based and traditional learning method: a comparison regarding students’ knowledge retention

    PubMed Central

    2013-01-01

    Background Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a need to change traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as a means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing pathology undergraduate students. Methods Students were randomized to participate in one of the learning methods and the data analyst was blinded to which method of learning the students had received. Students’ prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple choice questionnaire. Students’ performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for Anatomy questions and for Physiology questions. Results Students that received the game-based method performed better in the post-test assessment only when considering the Anatomy questions section. Students that received the traditional lecture performed better in both post-test and long-term post-test when considering the Anatomy and Physiology questions. Conclusions The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective in improving students’ short- and long-term knowledge retention. PMID:23442203

  9. Development of a Prototype Model-Form Uncertainty Knowledge Base

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2016-01-01

    Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty. The term model-form means that among the choices to be made during a design process within an analysis, there are different forms of the analysis process, which each give different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structures analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions, so as to enable the possible reduction of uncertainties in the design process through resource investment. An extensive literature search has been conducted to identify and quantify typical model-form uncertainties present within aerospace design. An initial attempt has been made to assemble the results of this literature search into a searchable KB, usable in real time during probabilistic design sessions. A concept of operations and the basic structure of a model-form uncertainty KB are described. Key operations within the KB are illustrated. Current limitations in the KB, and possible workarounds are explained.

  10. Mathematics/Arithmetic Knowledge-Based Way of Thinking and Its Maintenance Needed for Engineers

    NASA Astrophysics Data System (ADS)

    Harada, Shoji

    Examining curricula among universities revealed no significant difference in math classes or related subjects. However, the amount and depth of those studies, in general, differed depending on the content of the curriculum and the level of achievement at entrance to the individual university. The universalization of higher education shows that students have many problems in learning higher-level traditional math and that their memory of the math they learned quickly fades away after passing the exam. This makes it difficult for engineers to further develop a higher-math knowledge base after graduating from university. Under these circumstances, the present author, as a fan of math, proposes how to maintain the way of thinking generated by math knowledge. What is necessary for engineers is to pay attention to common books dealing with elementary mathematics or arithmetic-related matters. This surely leads engineers to nourish a math/arithmetic knowledge-based way of thinking.

  11. Design of knowledge-based image retrieval system: implications from radiologists' cognitive processes

    NASA Astrophysics Data System (ADS)

    Liu Sheng, Olivia R.; Wei, Chih-Ping; Ozeki, Takeshi; Ovitt, Theron W.; Ishida, Jiro

    1992-07-01

    In a radiological examination reading, radiologists usually compare a newly generated examination with previous examinations of the same patient. For this reason, the retrieval of old images is a critical design requirement of totally digital radiology using Picture Archiving and Communication Systems (PACS). To achieve the required performance in a PACS with a hierarchical and possibly distributed image archival system, pre-fetching of images from slower or remote storage devices to the local buffers of workstations is proposed. Image Retrieval Expert System (IRES) is a knowledge-based image retrieval system which will predict and then pre-fetch relevant old images. Previous work on IRES design focused on the knowledge acquisition phase and the development of an efficient modeling methodology and architecture. The goal of this paper is to evaluate the effectiveness of the current IRES design and to identify appropriate directions for exploring other design features and alternatives by means of a cognitive study and an associated survey study.

  12. LLNL Middle East, North Africa and Western Eurasia Knowledge Base

    SciTech Connect

    O'Boyle, J; Ruppert, S D; Hauk, T F; Dodge, D A; Ryall, F; Firpo, M A

    2001-07-12

    The Lawrence Livermore National Laboratory (LLNL) Ground-Based Nuclear Event Monitoring (GNEM) program has made significant progress populating a comprehensive Seismic Research Knowledge Base (SRKB) and deriving calibration parameters for the Middle East, North Africa and Western Eurasia (ME/NA/WE) regions. The LLNL SRKB provides not only a coherent framework in which to store and organize very large volumes of collected seismic waveforms, associated event parameter information, and spatial contextual data, but also provides an efficient data processing/research environment for deriving location and discrimination correction surfaces. The SRKB is a flexible and extensible framework consisting of a relational database (RDB), Geographical Information System (GIS), and associated product/data visualization and data management tools. This SRKB framework is designed to accommodate large volumes of data (almost 3 million waveforms from 57,000 events) in diverse formats from many sources (both LLNL-derived research and integrated contractor products), in addition to maintaining detailed quality control and metadata. We have developed expanded look-up tables for critical station parameter information (including location and response) and an integrated and reconciled event catalog data set (including specification of preferred origin solutions and associated phase arrivals) for the PDE, CMT, ISC, REB and selected regional catalogs. Using the SRKB framework, we are combining travel-time observations, event characterization studies, and regional tectonic models to assemble a library of ground truth information and phenomenology (e.g. travel-time and amplitude) correction surfaces required for support of the ME/NA/WE regionalization program. We also use the SRKB to integrate data and research products from a variety of sources, such as contractors and universities, to merge and maintain quality control of the data sets. Corrections and parameters distilled from the LLNL SRKB
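    The station look-up tables and reconciled event catalog with preferred origins described above can be sketched with a minimal relational schema. This is an illustrative sketch only; the table and column names below are hypothetical, not the actual SRKB schema.

    ```python
    import sqlite3

    # Hypothetical sketch of SRKB-style look-up tables: a station parameter
    # table, and an event catalog in which one origin solution per event is
    # flagged as preferred. Names and values are illustrative.
    con = sqlite3.connect(":memory:")
    con.executescript("""
    CREATE TABLE station (sta TEXT PRIMARY KEY, lat REAL, lon REAL);
    CREATE TABLE origin  (evid INTEGER, catalog TEXT, lat REAL, lon REAL,
                          preferred INTEGER DEFAULT 0);
    """)
    con.execute("INSERT INTO station VALUES ('ABKT', 37.93, 58.12)")
    con.executemany("INSERT INTO origin VALUES (?, ?, ?, ?, ?)",
                    [(1001, 'PDE', 36.10, 52.50, 1),
                     (1001, 'ISC', 36.12, 52.48, 0)])
    # A reconciled catalog query returns exactly one preferred origin per event.
    row = con.execute("SELECT catalog FROM origin "
                      "WHERE evid = 1001 AND preferred = 1").fetchone()
    ```

    The point of the preferred-origin flag is that downstream calibration tools can join against a single authoritative solution while the competing catalog entries remain available for quality control.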

  13. SAFOD Brittle Microstructure and Mechanics Knowledge Base (SAFOD BM2KB)

    NASA Astrophysics Data System (ADS)

    Babaie, H. A.; Hadizadeh, J.; di Toro, G.; Mair, K.; Kumar, A.

    2008-12-01

    We have developed a knowledge base to store and present the data collected by a group of investigators studying the microstructures and mechanics of brittle faulting using core samples from the SAFOD (San Andreas Fault Observatory at Depth) project. The investigations are carried out with a variety of analytical and experimental methods, primarily to better understand the physics of strain localization in fault gouge. The knowledge base instantiates a specially designed brittle rock deformation ontology developed at Georgia State University. The inference rules embedded in the Semantic Web languages used in our ontology (OWL, RDF, and RDFS) allow the Pellet reasoner used in this application to derive additional truths about the ontology and knowledge of this domain. Access to the knowledge base is via a public website, which is designed to provide the knowledge acquired by all the investigators involved in the project. The stored data will be products of studies such as: experiments (e.g., high-velocity friction experiments), analyses (e.g., microstructural, chemical, mass transfer, mineralogical, surface, image, texture), microscopy (optical, HRSEM, FESEM, HRTEM), tomography, porosity measurement, microprobe, and cathodoluminescence. Data about laboratories, experimental conditions, methods, assumptions, equipment, and the mechanical properties and lithology of the studied samples will also be presented on the website per investigation. The ontology was modeled applying the UML (Unified Modeling Language) in Rational Rose, and implemented in OWL-DL (Web Ontology Language) using the Protégé ontology editor. The UML model was converted to OWL-DL by first mapping it to Ecore (.ecore) and Generator model (.genmodel) with the help of the EMF (Eclipse Modeling Framework) plugin in Eclipse. The Ecore model was then mapped to a .uml file, which was later converted into an .owl file and subsequently imported into the Protégé ontology editing environment.
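    The "additional truths" a reasoner derives from an ontology come largely from entailment rules such as the transitivity of rdfs:subClassOf. The sketch below illustrates that single rule in plain Python; the class names are hypothetical examples, not taken from the actual SAFOD ontology, and a real reasoner like Pellet applies a much richer rule set.

    ```python
    # Minimal sketch of one RDFS entailment rule: rdfs:subClassOf is
    # transitive, so asserted direct superclasses imply derived ones.
    def infer_superclasses(subclass_of, cls):
        """subclass_of: dict mapping a class to its direct superclasses.
        Returns every superclass derivable by transitivity."""
        derived, stack = set(), [cls]
        while stack:
            for parent in subclass_of.get(stack.pop(), ()):
                if parent not in derived:
                    derived.add(parent)
                    stack.append(parent)
        return derived

    # Illustrative taxonomy fragment (hypothetical class names):
    taxonomy = {
        "HighVelocityFrictionExperiment": ["Experiment"],
        "Experiment": ["Investigation"],
    }
    ```

    Given the fragment above, an instance asserted to be a HighVelocityFrictionExperiment is also inferred to be an Experiment and an Investigation, which is what lets the website answer queries posed at any level of the taxonomy.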

  14. A Theory of Information Genetics: How Four Subforces Generate Information and the Implications for Total Quality Knowledge Management.

    ERIC Educational Resources Information Center

    Tsai, Bor-sheng

    2002-01-01

    Proposes a model called information genetics to elaborate on the origin of information generation. Explains conceptual and data models; and describes a software program that was developed for citation data mining, infomapping, and information repackaging for total quality knowledge management in Web representation. (Contains 112 references.)…

  15. Prospector II: Towards a knowledge base for mineral deposits

    USGS Publications Warehouse

    McCammon, R.B.

    1994-01-01

    What began in the mid-seventies as a research effort in designing an expert system to aid geologists in exploring for hidden mineral deposits has in the late eighties become a full-sized knowledge-based system to aid geologists in conducting regional mineral resource assessments. Prospector II, the successor to Prospector, is interactive-graphics oriented, flexible in its representation of mineral deposit models, and suited to regional mineral resource assessment. In Prospector II, the geologist enters the findings for an area, selects the deposit models or examples of mineral deposits for consideration, and the program compares the findings with the models or the examples selected, noting the similarities, differences, and missing information. The models or the examples selected are ranked according to scores that are based on the comparisons with the findings. Findings can be reassessed and the process repeated if necessary. The results provide the geologist with a rationale for identifying those mineral deposit types that the geology of an area permits. In the future, Prospector II can assist in the creation of new models used in regional mineral resource assessment and in striving toward an ultimate classification of mineral deposits. © 1994 International Association for Mathematical Geology.

  16. A knowledge based expert system for condition monitoring

    SciTech Connect

    Selkirk, C.G.; Roberge, P.R.; Fisher, G.F.; Yeung, K.K.

    1994-12-31

    Condition monitoring (CM) is the focus of many maintenance philosophies around the world today. In the Canadian Forces (CF), CM has played an important role in the maintenance of aircraft systems since the introduction of spectrometric oil analysis (SOAP) over twenty years ago. Other techniques in use in the CF today include vibration analysis (VA), ferrography, and filter debris analysis (FDA). To improve the usefulness and utility gained from these CM techniques, work is currently underway to incorporate expert systems into them. An expert system for FDA is being developed which will aid filter debris analysts in identifying wear debris and wear level trends, and which will provide the analyst with reference examples in an attempt to standardize results. Once completed, this knowledge based expert system will provide a blueprint from which other CM expert systems can be created. Amalgamating these specific systems into a broad based global system will provide the CM analyst with a tool that will be able to correlate data and results from each of the techniques, thereby increasing the utility of each individual method of analysis. This paper will introduce FDA and then outline the development of the FDA expert system and future applications.

  17. Speech-Language Pathologists' Knowledge of Genetics: Perceived Confidence, Attitudes, Knowledge Acquisition and Practice-Based Variables

    ERIC Educational Resources Information Center

    Tramontana, G. Michael; Blood, Ingrid M.; Blood, Gordon W.

    2013-01-01

    The purpose of this study was to determine (a) the general knowledge bases demonstrated by school-based speech-language pathologists (SLPs) in the area of genetics, (b) the confidence levels of SLPs in providing services to children and their families with genetic disorders/syndromes, (c) the attitudes of SLPs regarding genetics and communication…

  18. Virus-based piezoelectric energy generator

    SciTech Connect

    2012-01-01

    Lawrence Berkeley National Laboratory scientists have developed a way to generate power using harmless viruses that convert mechanical energy into electricity. The milestone could lead to tiny devices that harvest electrical energy from the vibrations of everyday tasks. The first part of the video shows how Berkeley Lab scientists harness the piezoelectric properties of the virus to convert the force of a finger tap into electricity. The second part reveals the "viral-electric" generators in action, first by pressing only one of the generators, then by pressing two at the same time, which produces more current.

  19. Lynx: a knowledge base and an analytical workbench for integrative medicine.

    PubMed

    Sulakhe, Dinanath; Xie, Bingqing; Taylor, Andrew; D'Souza, Mark; Balasubramanian, Sandhya; Hashemifar, Somaye; White, Steven; Dave, Utpal J; Agam, Gady; Xu, Jinbo; Wang, Sheng; Gilliam, T Conrad; Maltsev, Natalia

    2016-01-01

    Lynx (http://lynx.ci.uchicago.edu) is a web-based database and a knowledge extraction engine. It supports annotation and analysis of high-throughput experimental data and generation of weighted hypotheses regarding genes and molecular mechanisms contributing to human phenotypes or conditions of interest. Since the last release, the Lynx knowledge base (LynxKB) has been periodically updated with the latest versions of the existing databases and supplemented with additional information from public databases. These additions have enriched the data annotations provided by Lynx and improved the performance of Lynx analytical tools. Moreover, the Lynx analytical workbench has been supplemented with new tools for reconstruction of co-expression networks and feature-and-network-based prioritization of genetic factors and molecular mechanisms. These developments facilitate the extraction of meaningful knowledge from experimental data and LynxKB. The Service Oriented Architecture provides public access to LynxKB and its analytical tools via user-friendly web services and interfaces. PMID:26590263

  1. Generating Procedural and Conceptual Knowledge of Fractions by Pre-Service Teachers

    ERIC Educational Resources Information Center

    Chinnappan, Mohan; Forrester, Tricia

    2014-01-01

    Knowledge that teachers bring to the teaching context is of interest to key stakeholders in improving levels of numeracy attained by learners. In this regard, the centrality of, and the need to investigate, the quality of teachers' mathematical knowledge for teaching mathematics has been gaining momentum in recent years. There is a general…

  2. Multilayered Knowledge: Understanding the Structure and Enactment of Teacher Educators' Specialized Knowledge Base

    ERIC Educational Resources Information Center

    Selmer, Sarah; Bernstein, Malayna; Bolyard, Johnna

    2016-01-01

    In order to corroborate and grow teacher educator knowledge (TEK) scholarship, this paper describes an in-depth-focused exploration of a group of teacher educators providing professional development. Our grounded data analysis allowed us to define different major elements, sub-elements, and components that comprise TEK, as well as make explicit…

  3. Advanced Coal-Based Power Generations

    NASA Technical Reports Server (NTRS)

    Robson, F. L.

    1982-01-01

    Advanced power-generation systems using coal-derived fuels are evaluated in two-volume report. Report considers fuel cells, combined gas- and steam-turbine cycles, and magnetohydrodynamic (MHD) energy conversion. Presents technological status of each type of system and analyzes performance of each operating on medium-Btu fuel gas, either delivered via pipeline to powerplant or generated by coal-gasification process at plantsite.

  4. Development of a Knowledge Base for Enduser Consultation of AAL-Systems.

    PubMed

    Röll, Natalie; Stork, Wilhelm; Rosales, Bruno; Stephan, René; Knaup, Petra

    2016-01-01

    Manufacturer information, user experiences and product availability of assistive living technologies are usually not known to citizens or consultation centers. The differing levels of knowledge concerning the availability of technology show the need to build up a knowledge base. The aim of this contribution is to define requirements for the development of knowledge bases for AAL consultations. The major requirements, such as a maintainable and easy-to-use structure, were implemented in a web-based knowledge base, which went into production and was used in ~3700 consulting interviews at municipal technology information centers. Within this field phase, the implementation of the requirements for a knowledge base in the field of AAL consulting was evaluated and further developed.

  5. Increasing levels of assistance in refinement of knowledge-based retrieval systems

    NASA Technical Reports Server (NTRS)

    Baudin, Catherine; Kedar, Smadar; Pell, Barney

    1994-01-01

    The task of incrementally acquiring and refining the knowledge and algorithms of a knowledge-based system in order to improve its performance over time is discussed. In particular, the design of DE-KART, a tool whose goal is to provide increasing levels of assistance in acquiring and refining indexing and retrieval knowledge for a knowledge-based retrieval system, is presented. DE-KART starts with knowledge that was entered manually, and increases its level of assistance in acquiring and refining that knowledge, both in terms of the increased level of automation in interacting with users, and in terms of the increased generality of the knowledge. DE-KART is at the intersection of machine learning and knowledge acquisition: it is a first step towards a system which moves along a continuum from interactive knowledge acquisition to increasingly automated machine learning as it acquires more knowledge and experience.

  6. Knowledge-based medical image analysis and representation for integrating content definition with the radiological report.

    PubMed

    Kulikowski, C A; Gong, L; Mezrich, R S

    1995-03-01

    Technology breakthroughs in high-speed, high-capacity, and high-performance desktop computers and workstations make the possibility of integrating multimedia medical data to better support clinical decision making, computer-aided education, and research not only attractive, but feasible. To systematically evaluate results from increasingly automated image segmentation it is necessary to correlate them with the expert judgments of radiologists and other clinical specialists interpreting the images. These are contained in increasingly computerized radiological reports and other related clinical records. But to make automated comparison feasible it is necessary to first ensure compatibility of the knowledge content of images with the descriptions contained in these records. Enough common vocabulary, language, and knowledge representation components must be represented on the computer, followed by automated extraction of image-content descriptions from the text, which can then be matched to the results of automated image segmentation. A knowledge-based approach to image segmentation is essential to obtain the structured image descriptions needed for matching against the expert's descriptions. We have developed a new approach to medical image analysis which helps generate such descriptions: a knowledge-based object-centered hierarchical planning method for automatically composing the image analysis processes. The problem-solving steps of specialists are represented at the knowledge level in terms of goals, tasks, and domain objects and concepts separately from the implementation level for specific representations of different image types, and generic analysis methods. This system can serve as a major functional component in incrementally building and updating a structured and integrated hybrid information system of patient data.(ABSTRACT TRUNCATED AT 250 WORDS)

  7. Widening the Knowledge Acquisition Bottleneck for Constraint-Based Tutors

    ERIC Educational Resources Information Center

    Suraweera, Pramuditha; Mitrovic, Antonija; Martin, Brent

    2010-01-01

    Intelligent Tutoring Systems (ITS) are effective tools for education. However, developing them is a labour-intensive and time-consuming process. A major share of the effort is devoted to acquiring the domain knowledge that underlies the system's intelligence. The goal of this research is to reduce this knowledge acquisition bottleneck and better…

  8. Examining the Mismatch between Pupil and Teacher Knowledge in Acid-Base Chemistry.

    ERIC Educational Resources Information Center

    Erduran, Sibel

    2003-01-01

    Reports a mismatch between teacher and pupil knowledge of acid-base chemistry as a result of controversial episodes from three science lessons. Suggests that the teacher's knowledge is guided by textbook information while the pupil's knowledge is based on direct experimental experience. Proposes that classroom activities should support the…

  9. A Generalized Knowledge-Based Discriminatory Function for Biomolecular Interactions

    PubMed Central

    Bernard, Brady; Samudrala, Ram

    2010-01-01

    Several novel and established knowledge-based discriminatory function formulations and reference state derivations have been evaluated to identify parameter sets capable of distinguishing native and near-native biomolecular interactions from incorrect ones. We developed the r·m·r function, a novel atomic level radial distribution function with mean reference state that averages over all pairwise atom types from a reduced atom type composition, using experimentally determined intermolecular complexes in the Cambridge Structural Database (CSD) and the Protein Data Bank (PDB) as the information sources. We demonstrate that r·m·r had the best discriminatory accuracy and power for protein-small molecule and protein-DNA interactions, regardless of whether the native complex was included or excluded from the test set. The superior performance of the r·m·r discriminatory function compared to seventeen alternative functions evaluated on publicly available test sets for protein-small molecule and protein-DNA interactions indicated that the function was not over optimized through back testing on a single class of biomolecular interactions. The initial success of the reduced composition and superior performance with the CSD as the distribution set over the PDB implies that further improvements and generality of the function are possible by deriving probabilities from subsets of the CSD, using structures that consist of only the atom types to be considered for given biomolecular interactions. The method is available as a web server module at http://protinfo.compbio.washington.edu. PMID:19127590
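    The general shape of a knowledge-based discriminatory function with a mean reference state can be sketched briefly. This is a hedged illustration of the family of functions the abstract describes, not the authors' r·m·r implementation: the score sums, over observed atom pairs, the negative log ratio of the pair-type-specific distance distribution to a reference distribution averaged over all pair types.

    ```python
    import numpy as np

    # Sketch of a pairwise knowledge-based score with a mean reference state
    # (illustrative; not the published r·m·r parameterization).
    def score_complex(distances, pair_types, p_obs, p_ref, bin_edges):
        """distances: (N,) pairwise atom distances; pair_types: (N,) type
        indices; p_obs: (T, B) per-pair-type distance probabilities over B
        bins; p_ref: (B,) reference distribution averaged over all T types.
        Lower scores indicate more native-like interactions."""
        bins = np.clip(np.digitize(distances, bin_edges) - 1,
                       0, p_obs.shape[1] - 1)
        eps = 1e-9  # guard against empty bins
        return float(-np.sum(np.log((p_obs[pair_types, bins] + eps)
                                    / (p_ref[bins] + eps))))
    ```

    With uniform observed distributions the score is zero by construction; a complex whose distances fall in bins that are over-represented for their pair types scores negative, which is the sense in which the function discriminates near-native from incorrect poses.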

  10. Knowledge-based operation and management of communications systems

    NASA Technical Reports Server (NTRS)

    Heggestad, Harold M.

    1988-01-01

    Expert systems techniques are being applied in operation and control of the Defense Communications System (DCS), which has the mission of providing reliable worldwide voice, data and message services for U.S. forces and commands. Thousands of personnel operate DCS facilities, and many of their functions match the classical expert system scenario: complex, skill-intensive environments with a full spectrum of problems in training and retention, cost containment, modernization, and so on. Two of these functions are: (1) fault isolation and restoral of dedicated circuits at Tech Control Centers, and (2) network management for the Defense Switched Network (the modernized dial-up voice system currently replacing AUTOVON). An expert system for the first of these is deployed for evaluation purposes at Andrews Air Force Base, and plans are being made for procurement of operational systems. In the second area, knowledge obtained with a sophisticated simulator is being embedded in an expert system. The background, design and status of both projects are described.

  11. Knowledge base navigator facilitating regional analysis inter-tool communication.

    SciTech Connect

    Hampton, Jeffery Wade; Chael, Eric Paul; Hart, Darren M.; Merchant, Bion John; Chown, Matthew N.

    2004-08-01

    To make use of some portions of the National Nuclear Security Administration (NNSA) Knowledge Base (KB) for which no current operational monitoring applications were available, Sandia National Laboratories have developed a set of prototype regional analysis tools (MatSeis, EventID Tool, CodaMag Tool, PhaseMatch Tool, Dendro Tool, Infra Tool, etc.), and we continue to maintain and improve these. Individually, these tools have proven effective in addressing specific monitoring tasks, but collectively their number and variety tend to overwhelm KB users, so we developed another application - the KB Navigator - to launch the tools and facilitate their use for real monitoring tasks. The KB Navigator is a flexible, extensible java application that includes a browser for KB data content, as well as support to launch any of the regional analysis tools. In this paper, we will discuss the latest versions of KB Navigator and the regional analysis tools, with special emphasis on the new overarching inter-tool communication methodology that we have developed to make the KB Navigator and the tools function together seamlessly. We use a peer-to-peer communication model, which allows any tool to communicate with any other. The messages themselves are passed as serialized XML, and the conversion from Java to XML (and vice versa) is done using Java Architecture for XML Binding (JAXB).
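    The peer-to-peer messaging described above, in which any tool can talk to any other by exchanging serialized XML, can be illustrated with a minimal round-trip. The actual system does this in Java via JAXB; the Python sketch below only shows the serialize/deserialize pattern, and the message type and fields are hypothetical, not the real inter-tool schema.

    ```python
    import xml.etree.ElementTree as ET

    # Sketch of serialized-XML inter-tool messages (illustrative names only).
    def to_xml(msg_type, payload):
        """Serialize a flat message dict into an XML string."""
        root = ET.Element("toolMessage", attrib={"type": msg_type})
        for key, value in payload.items():
            child = ET.SubElement(root, key)
            child.text = str(value)
        return ET.tostring(root, encoding="unicode")

    def from_xml(text):
        """Recover (message type, payload dict) from the wire format."""
        root = ET.fromstring(text)
        return root.get("type"), {child.tag: child.text for child in root}

    # One tool announces an event selection; any peer can decode it.
    wire = to_xml("selectEvent", {"evid": "12345", "catalog": "PDE"})
    msg_type, payload = from_xml(wire)
    ```

    Because every message crosses the wire as self-describing XML, a new tool can join the peer-to-peer mesh by implementing only the serialization contract, which is the design property the KB Navigator relies on.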

  12. A knowledge-based modeling for plantar pressure image reconstruction.

    PubMed

    Ostadabbas, Sarah; Nourani, Mehrdad; Saeed, Adnan; Yousefi, Rasoul; Pompeo, Matthew

    2014-10-01

    It is known that prolonged pressure on the plantar area is one of the main factors in developing foot ulcers. With current technology, electronic pressure monitoring systems can be placed as an insole into regular shoes to continuously monitor the plantar area and provide evidence on ulcer formation process as well as insight for proper orthotic footwear design. The reliability of these systems heavily depends on the spatial resolution of their sensor platforms. However, due to the cost and energy constraints, practical wireless in-shoe pressure monitoring systems have a limited number of sensors, i.e., typically K < 10. In this paper, we present a knowledge-based regression model (SCPM) to reconstruct a spatially continuous plantar pressure image from a small number of pressure sensors. This model makes use of high-resolution pressure data collected clinically to train a per-subject regression function. SCPM is shown to outperform all other tested interpolation methods for K < 60 sensors, with less than one-third of the error for K = 10 sensors. SCPM bridges the gap between the technological capability and medical need and can play an important role in the adoption of sensing insole for a wide range of medical applications.
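    The reconstruction task, estimating a continuous pressure image from K sparse sensors, can be sketched with a simple interpolation baseline. SCPM itself is a clinically trained per-subject regression; the inverse-distance weighting below is only an example of the interpolation methods it was compared against, with illustrative variable names.

    ```python
    import numpy as np

    # Baseline sketch: inverse-distance-weighted reconstruction of a plantar
    # pressure field from K sparse sensor readings (not the SCPM model).
    def reconstruct(sensor_xy, sensor_p, grid_xy, power=2.0):
        """sensor_xy: (K, 2) sensor positions; sensor_p: (K,) pressures;
        grid_xy: (M, 2) query positions; returns (M,) estimated pressures."""
        # Pairwise distances between every query point and every sensor.
        d = np.linalg.norm(grid_xy[:, None, :] - sensor_xy[None, :, :], axis=2)
        w = 1.0 / np.maximum(d, 1e-9) ** power  # nearer sensors dominate
        return (w @ sensor_p) / w.sum(axis=1)
    ```

    The weakness of such purely geometric schemes, which motivates a knowledge-based model, is that they ignore plantar anatomy: with K = 10 sensors, pressure peaks between sensors (e.g., under a metatarsal head) are smoothed away, whereas a regression trained on high-resolution clinical data can restore them.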

  13. Data-driven sequence learning or search: What are the prerequisites for the generation of explicit sequence knowledge?

    PubMed Central

    Schwager, Sabine; Rünger, Dennis; Gaschler, Robert; Frensch, Peter A.

    2012-01-01

    In incidental sequence learning situations, there is often a number of participants who can report the task-inherent sequential regularity after training. Two kinds of mechanisms for the generation of this explicit knowledge have been proposed in the literature. First, a sequence representation may become explicit when its strength reaches a certain level (Cleeremans, 2006), and secondly, explicit knowledge may emerge as the result of a search process that is triggered by unexpected events that occur during task processing and require an explanation (the unexpected-event hypothesis; Haider & Frensch, 2009). Our study aimed at systematically exploring the contribution of both mechanisms to the generation of explicit sequence knowledge in an incidental learning situation. We varied the amount of specific sequence training and inserted unexpected events into a 6-choice serial reaction time task. Results support the unexpected-event view, as the generation of explicit sequence knowledge could not be predicted by the representation strength acquired through implicit sequence learning. Rather sequence detection turned out to be more likely when participants were shifted to the fixed repeating sequence after training than when practicing one and the same fixed sequence without interruption. The behavioral effects of representation strength appear to be related to the effectiveness of unexpected changes in performance as triggers of a controlled search. PMID:22723812

  14. A magnetoelectric composite based signal generator

    NASA Astrophysics Data System (ADS)

    Fetisov, Y. K.; Serov, V. N.; Fetisov, L. Y.; Makovkin, S. A.; Viehland, D.; Srinivasan, G.

    2016-05-01

    Self-oscillations in an active loop consisting of a wide-band amplifier and a magnetoelectric composite in the feedback circuit have been observed. The composite with a ferroelectric lead zirconate titanate bimorph and ferromagnetic Metglas serves as a resonator that determines the frequency of oscillations and provides the feedback voltage. Under amplitude balance and phase matching conditions, the device generated signals at 2.3 kHz, at the bending resonance frequency of the composite. The oscillations were observed over a specific range of magnetic bias H. The shape of the signal generated is dependent on electrical circuit parameters and magnitude and orientation of H.

  15. Knowledge-based optical coatings design and manufacturing

    NASA Astrophysics Data System (ADS)

    Guenther, Karl H.; Gonzalez, Avelino J.; Yoo, Hoi J.

    1990-12-01

    The theory of thin film optics is well developed for the spectral analysis of a given optical coating. The inverse synthesis - designing an optical coating for a certain spectral performance - is more complicated. Usually a multitude of theoretical designs is feasible because most design problems are over-determined with the number of layers possible with three variables each (n, k, t). The expertise of a good thin film designer comes in at this point with a mostly intuitive selection of certain designs based on previous experience and current manufacturing capabilities. Manufacturing a designed coating poses yet another subset of multiple solutions, as thin if in deposition technology has evolved over the years with a vast variety of different processes. The abundance of published literature may often be more confusing than helpful to the practicing thin film engineer, even if he has time and opportunity to read it. The choice of the right process is also severely limited by the given manufacturing hardware and cost considerations which may not easily allow for the adaption of a new manufacturing approach, even if it promises to be better technically (it ought to be also cheaper). On the user end of the thin film coating business, the typical optical designer or engineer who needs an optical coating may have limited or no knowledge at all about the theoretical and manufacturing criteria for the optimum selection of what he needs. This can be sensed frequently by overly tight tolerances and requirements for optical performance which sometimes stretch the limits of mother nature. We introduce here a know1edge-based system (KBS) intended to assist expert designers and manufacturers in their task of maximizing results and minimizing errors, trial runs, and unproductive time. It will help the experts to manipulate parameters which are largely determined through heuristic reasoning by employing artificial intelligence techniques. 
In a later stage, the KBS will include a

  16. Citizen science in hydrology and water resources: opportunities for knowledge generation, ecosystem service management, and sustainable development

    NASA Astrophysics Data System (ADS)

    Buytaert, Wouter; Zulkafli, Zed; Grainger, Sam; Acosta, Luis; Bastiaensen, Johan; De Bièvre, Bert; Bhusal, Jagat; Chanie, Tilashwork; Clark, Julian; Dewulf, Art; Foggin, Marc; Hannah, David; Hergarten, Christian; Isaeva, Aiganysh; Karpouzoglou, Timos; Pandey, Bhopal; Paudel, Deepak; Sharma, Keshav; Steenhuis, Tammo; Tilahun, Seifu; Van Hecken, Gert; Zhumanova, Munavar

    2014-10-01

    The participation of the general public in the research design, data collection and interpretation process together with scientists is often referred to as citizen science. While citizen science itself has existed since the start of scientific practice, developments in sensing technology, data processing and visualisation, and communication of ideas and results, are creating a wide range of new opportunities for public participation in scientific research. This paper reviews the state of citizen science in a hydrological context and explores the potential of citizen science to complement more traditional ways of scientific data collection and knowledge generation for hydrological sciences and water resources management. Although hydrological data collection often involves advanced technology, the advent of robust, cheap and low-maintenance sensing equipment provides unprecedented opportunities for data collection in a citizen science context. These data have a significant potential to create new hydrological knowledge, especially in relation to the characterisation of process heterogeneity, remote regions, and human impacts on the water cycle. However, the nature and quality of data collected in citizen science experiments is potentially very different from those of traditional monitoring networks. This poses challenges in terms of their processing, interpretation, and use, especially with regard to assimilation of traditional knowledge, the quantification of uncertainties, and their role in decision support. It also requires care in designing citizen science projects such that the generated data complement optimally other available knowledge. Lastly, we reflect on the challenges and opportunities in the integration of hydrologically-oriented citizen science in water resources management, the role of scientific knowledge in the decision-making process, and the potential contestation to established community institutions posed by co-generation of new knowledge.

  17. yOWL: an ontology-driven knowledge base for yeast biologists.

    PubMed

    Villanueva-Rosales, Natalia; Dumontier, Michel

    2008-10-01

    Knowledge management is an ongoing challenge for the biological community: large, diverse and continuously growing information requires increasingly sophisticated methods to store, integrate and query knowledge. The semantic web initiative provides a new knowledge engineering framework to represent, share and discover information. In this paper, we describe our efforts towards the development of an ontology-based knowledge base, covering aspects from ontology design and population using "semantic" data mashups to automated reasoning and semantic query answering. Based on yeast data obtained from the Saccharomyces Genome Database and UniProt, we discuss the challenges encountered during the building of the knowledge base and how they were overcome.

  18. Sensor explication: knowledge-based robotic plan execution through logical objects.

    PubMed

    Budenske, J; Gini, M

    1997-01-01

    Complex robot tasks are usually described as high level goals, with no details on how to achieve them. However, details must be provided to generate primitive commands to control a real robot. A sensor explication concept that makes details explicit from general commands is presented. We show how the transformation from high-level goals to primitive commands can be performed at execution time, and we propose an architecture based on reconfigurable objects that contain domain knowledge and knowledge about the sensors and actuators available. Our approach is based on two premises: 1) plan execution is an information gathering process in which determining what information is relevant is a great part of the process; and 2) plan execution requires that many details are made explicit. We show how our approach is used in solving the task of moving a robot to and through an unknown, and possibly narrow, doorway, where sonic range data are used to find the doorway, walls, and obstacles. We illustrate the difficulty of such a task using data from a large number of experiments we conducted with a real mobile robot. The laboratory results illustrate how the proper application of knowledge in the integration and utilization of sensors and actuators increases the robustness of plan execution.

  20. A development environment for knowledge-based medical applications on the World-Wide Web.

    PubMed

    Riva, A; Bellazzi, R; Lanzola, G; Stefanelli, M

    1998-11-01

    The World-Wide Web (WWW) is increasingly being used as a platform to develop distributed applications, particularly in contexts, such as medical ones, where high usability and availability are required. In this paper we propose a methodology for the development of knowledge-based medical applications on the web, based on the use of an explicit domain ontology to automatically generate parts of the system. We describe a development environment, centred on the LISPWEB Common Lisp HTTP server, that supports this methodology, and we show how it facilitates the creation of complex web-based applications by overcoming the limitations that normally affect the adequacy of the web for this purpose. Finally, we present an outline of a system for the management of diabetic patients built using the LISPWEB environment. PMID:9821518

  1. Modeling Rule-Based Item Generation

    ERIC Educational Resources Information Center

    Geerlings, Hanneke; Glas, Cees A. W.; van der Linden, Wim J.

    2011-01-01

    An application of a hierarchical IRT model for items in families generated through the application of different combinations of design rules is discussed. Within the families, the items are assumed to differ only in surface features. The parameters of the model are estimated in a Bayesian framework, using a data-augmented Gibbs sampler. An obvious…

  2. Enhancing Automatic Biological Pathway Generation with GO-based Gene Similarity

    SciTech Connect

    Sanfilippo, Antonio P.; Baddeley, Robert L.; Beagley, Nathaniel; Riensche, Roderick M.; Gopalan, Banu

    2009-08-03

    One of the greatest challenges in today’s analysis of microarray gene expression data is to identify pathways across regulated genes that underlie structural and functional changes of living cells in specific pathologies. Most current approaches to pathway generation are based on a reverse engineering approach in which pathway plausibility is solely induced from observed pathway data. These approaches tend to lack generality, as they are too dependent on the pathway observables from which they are induced. By contrast, alternative approaches that rely on prior biological knowledge may err in the opposite direction, as the prior knowledge is usually not sufficiently tuned to the pathology of focus. In this paper, we present a novel pathway generation approach that combines insights from the reverse engineering and knowledge-based approaches to increase the biological plausibility and specificity of induced regulatory networks.

  3. New knowledge-based genetic algorithm for excavator boom structural optimization

    NASA Astrophysics Data System (ADS)

    Hua, Haiyan; Lin, Shuwen

    2014-03-01

    Due to the insufficiency of utilizing knowledge to guide complex optimal searching, existing genetic algorithms fail to effectively solve the excavator boom structural optimization problem. To improve optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with the genetic algorithm is established to extract, handle and utilize shallow and deep implicit constraint knowledge to guide the optimal searching of the genetic algorithm cyclically. Based on this dual evolution mechanism, knowledge evolution and population evolution can be connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. Then, new knowledge-based selection, crossover and mutation operators are proposed to integrate optimal process knowledge and domain culture to guide the excavator boom structural optimization. Eight testing algorithms, which include different genetic operators, are taken as examples to solve the structural optimization of a medium-sized excavator boom. By comparing the optimization results, it is shown that the algorithm including all the new knowledge-based genetic operators improves the evolutionary rate and searching ability more markedly than the other testing algorithms, which demonstrates the effectiveness of knowledge in guiding optimal searching. The proposed knowledge-based genetic algorithm, combining multi-level knowledge evolution with numerical optimization, provides a new effective method for solving complex engineering optimization problems.
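The dual-mechanism idea above can be pictured with a toy sketch: a real-coded GA whose mutation operator consults a heuristic rule, standing in for the paper's extracted constraint knowledge. The objective, bounds, and rule below are invented for illustration and are not taken from the paper.

```python
import random

random.seed(42)

BOUNDS = [(0.1, 2.0)] * 3                 # three hypothetical section thicknesses

def weight(x):                            # objective: minimize total "weight"
    return sum(x)

def stress_ok(x):                         # toy constraint standing in for a stress limit
    return 1.0 / min(x) <= 8.0

def knowledge_mutate(x, rate=0.3):
    """Mutate, but apply a heuristic rule: sections near the constraint
    boundary are grown rather than perturbed at random."""
    y = list(x)
    for i, (lo, hi) in enumerate(BOUNDS):
        if random.random() < rate:
            if 1.0 / y[i] > 6.0:                      # near-critical section
                y[i] = min(hi, y[i] * 1.2)            # domain rule: thicken it
            else:
                y[i] = min(hi, max(lo, y[i] + random.gauss(0.0, 0.1)))
    return y

def evolve(pop_size=30, gens=50):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        feasible = [x for x in pop if stress_ok(x)] or pop
        feasible.sort(key=weight)
        parents = feasible[: pop_size // 2]
        children = [knowledge_mutate(random.choice(parents)) for _ in range(pop_size)]
        pop = parents + children
    final = [x for x in pop if stress_ok(x)] or pop
    return min(final, key=weight)

best = evolve()
```

In this toy setup the rule nudges near-infeasible designs back toward feasibility, mimicking the shallow constraint knowledge the abstract describes.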

  4. Effects of delays on 6-year-old children’s self-generation and retention of knowledge through integration

    PubMed Central

    Varga, Nicole L.; Bauer, Patricia J.

    2013-01-01

    The present research was an investigation of the effect of delay on self-generation and retention of knowledge derived through integration by 6-year-old children. Children were presented with novel facts from passages read aloud to them (stem facts) and tested for self-generation of new knowledge through integration of the facts. In Experiment 1, children integrated the stem facts at Session 1 and retained the self-generated memory traces over 1 week. In Experiment 2, 1-week delays were imposed either between the to-be-integrated facts (between-stem delay) or after the stem facts but before the test (before-test delay). Integration performance was diminished in both conditions. Moreover, memory for individual stem facts was lower in Experiment 2 than in Experiment 1, suggesting that self-generation through integration promoted memory for explicitly taught information. The results indicate the importance of tests for promoting self-generation through integration as well as for retaining newly self-generated and explicitly taught information. PMID:23563162

  5. Quantum mechanical energy-based screening of combinatorially generated library of tautomers. TauTGen: a tautomer generator program.

    PubMed

    Harańczyk, Maciej; Gutowski, Maciej

    2007-01-01

    We describe a procedure for finding low-energy tautomers of a molecule. The procedure consists of (i) combinatorial generation of a library of tautomers, (ii) screening based on the results of geometry optimization of initial structures performed at the density functional level of theory, and (iii) final refinement of geometry for the top hits at the second-order Møller-Plesset level of theory, followed by single-point energy calculations at the coupled cluster level of theory with single, double, and perturbative triple excitations. The library of initial structures of various tautomers is generated with TauTGen, a tautomer generator program. The procedure proved successful for those molecular systems for which common chemical knowledge had not been sufficient to predict the most stable structures.
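The combinatorial step (i) can be sketched in a few lines of Python; the site labels and proton count below are hypothetical, and TauTGen additionally builds 3-D starting geometries for each placement, which this sketch omits.

```python
from itertools import combinations

def enumerate_tautomers(sites, n_protons):
    """Yield each candidate tautomer as the set of protonated sites."""
    for occupied in combinations(sites, n_protons):
        yield frozenset(occupied)

sites = ["N1", "N3", "N7", "N9", "O2", "O4"]   # hypothetical labile sites
library = list(enumerate_tautomers(sites, 2))
print(len(library))   # C(6, 2) = 15 candidate structures
```

Each candidate in the library would then proceed to the DFT screening (step ii) and MP2/coupled-cluster refinement (step iii) described above.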

  6. Estimating evaporative vapor generation from automobiles based on parking activities.

    PubMed

    Dong, Xinyi; Tschantz, Michael; Fu, Joshua S

    2015-07-01

    A new approach is proposed to quantify evaporative vapor generation based on real parking activity data. Compared with existing methods, two improvements are applied in this new approach to reduce uncertainties. First, evaporative vapor generation from diurnal parking events is usually calculated from an estimated average parking duration for the whole fleet, whereas in this study the vapor generation rate is calculated from the distribution of parking activities. Second, rather than using the daily temperature gradient, this study uses hourly temperature observations to derive hourly incremental vapor generation rates. The parking distribution and hourly incremental vapor generation rates are then combined with Wade-Reddy's equation to estimate the weighted average evaporative generation. We find that hourly incremental rates better describe the temporal variations of vapor generation, and that the weighted vapor generation rate is 5-8% less than a calculation that does not consider parking activity.
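The weighting idea can be sketched as follows; the hourly rates and parked fractions are invented toy numbers, not the Wade-Reddy parameters or observed data.

```python
# hourly incremental vapor generation rate (g per vehicle per hour),
# peaking with afternoon temperature (toy values)
hourly_rate = [0.05 + 0.03 * max(0, 8 - abs(h - 14)) for h in range(24)]

# fraction of the fleet parked in each hour (toy values)
parked = [0.9 if h < 7 or h > 18 else 0.4 for h in range(24)]

# weighted daily generation per vehicle, summed hour by hour instead of
# applying a single fleet-average parking duration
daily_g_per_vehicle = sum(r * p for r, p in zip(hourly_rate, parked))
```

Summing hour by hour is what lets the hourly temperature observations and the parking-activity distribution both enter the estimate.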

  7. Next Generation Multimedia Distributed Data Base Systems

    NASA Technical Reports Server (NTRS)

    Pendleton, Stuart E.

    1997-01-01

    The paradigm of client/server computing is changing. The model of a server running a monolithic application and supporting clients at the desktop is giving way to a different model that blurs the line between client and server. We are on the verge of plunging into the next generation of computing technology--distributed object-oriented computing. This is not only a change in requirements but a change in opportunities, and it requires a new way of thinking for Information System (IS) developers. The information system demands caused by global competition are requiring even more access to decision-making tools. Simply put, object-oriented technology has been developed to supersede the current design process of information systems, which is not capable of handling next-generation multimedia.

  8. Evolving Expert Knowledge Bases: Applications of Crowdsourcing and Serious Gaming to Advance Knowledge Development for Intelligent Tutoring Systems

    ERIC Educational Resources Information Center

    Floryan, Mark

    2013-01-01

    This dissertation presents a novel effort to develop ITS technologies that adapt by observing student behavior. In particular, we define an evolving expert knowledge base (EEKB) that structures a domain's information as a set of nodes and the relationships that exist between those nodes. The structure of this model is not the particularly novel…

  9. Rangeland degradation assessment: a new strategy based on the ecological knowledge of indigenous pastoralists

    NASA Astrophysics Data System (ADS)

    Behmanesh, Bahareh; Barani, Hossein; Abedi Sarvestani, Ahmad; Shahraki, Mohammad Reza; Sharafatmandrad, Mohsen

    2016-04-01

    In a changing world, the prevalence of land degradation is becoming a serious problem, especially in countries with arid and semi-arid rangelands. There are many techniques to assess rangeland degradation that rely on scientific knowledge but ignore indigenous people. Indigenous people have accumulated precious knowledge about land management through generations of experience. Therefore, a study was conducted to find out how indigenous people assess rangeland degradation and how their ecological knowledge can be used for rangeland degradation assessment. Interviews were conducted with pastoralists at two sites (Dasht and Mirza Baylu), parts of which lie within Golestan National Park (north-eastern Iran). A structured questionnaire was designed based on 17 indicators taken from the literature and from preliminary discussions with pastoralists in order to evaluate land degradation. A qualitative Likert five-point scale was used for scoring rangeland degradation indicators. The results revealed that pastoralists pay more attention to edaphic indicators than to vegetative and other indicators. There were significant differences between the inside and outside of the park in terms of rangeland degradation indicators for both sites. The results show that the rangelands outside of the park at both sites were degraded compared to those inside the park, especially in the areas close to villages. It can be concluded that pastoralists have a wealth of knowledge about vegetation and grazing animal habits that can be used in rangeland degradation assessment. It is therefore necessary to document their indigenous ecological knowledge and involve them in the process of rangeland degradation assessment.

  10. Voice care knowledge by dysphonic and healthy individuals of different generations.

    PubMed

    Moreti, Felipe; Zambon, Fabiana; Behlau, Mara

    2016-01-01

    The purpose of this study was to identify the opinions of both dysphonic and vocally healthy individuals regarding the factors that affect their voices positively and negatively, analyzing them according to the generation to which the participants belong. Eight hundred sixty-six individuals (304 dysphonic and 562 vocally healthy; 196 men and 670 women) were categorized by generation: 22 individuals in the Silent Generation (1926-1945), 180 in the Baby Boomers (1946-1964), 285 in Generation X (1965-1981), and 379 in Generation Y (1982-2003). Participants responded to two open questions: "Cite five things that you believe are good/bad for your voice". Five thousand two hundred sixty answers were identified (2478 positive and 2782 negative) and organized into 365 factors related to voice care. The three most prevalent positive and negative factors for each generation were as follows: Silent Generation - positive factors: 1 - water, honey and pomegranate, 2 - apple, and 3 - ginger tea, voice exercises and gargling; negative factors: 1 - cold drinks, 2 - excessive speaking, and 3 - alcoholic drinks, smoking and screaming; Baby Boomers - positive factors: 1 - water, 2 - apple, and 3 - sleeping well; negative factors: 1 - cold drinks, 2 - screaming, and 3 - smoking; Generation X - positive factors: 1 - water, 2 - apple, and 3 - vocal warm-up; negative factors: 1 - screaming, 2 - smoking, and 3 - alcoholic drinks; and Generation Y - positive factors: 1 - water, 2 - apple, and 3 - vocal warm-up; negative factors: 1 - screaming, 2 - smoking, and 3 - alcoholic drinks. The impact of generation was greater on the frequency of the responses than on their type. Water and apple were the most frequently cited positive factors for all the generations investigated, whereas screaming and smoking were the most frequently mentioned negative factors. Behavioral aspects related to popular beliefs were reported more frequently by the older generations. PMID:27652928
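The tallying behind "most prevalent factors per generation" amounts to a frequency count of normalised free-text answers; a minimal sketch with invented responses:

```python
from collections import Counter

answers = {  # invented toy responses, already normalised to factor labels
    "Baby Boomers": ["water", "apple", "sleeping well", "water"],
    "Generation Y": ["water", "apple", "vocal warm-up", "water"],
}

# most frequently cited factor in each generation
top = {gen: Counter(a).most_common(1)[0][0] for gen, a in answers.items()}
print(top)   # 'water' ranks first in both toy groups
```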

  12. RegenBase: a knowledge base of spinal cord injury biology for translational research.

    PubMed

    Callahan, Alison; Abeyruwan, Saminda W; Al-Ali, Hassan; Sakurai, Kunie; Ferguson, Adam R; Popovich, Phillip G; Shah, Nigam H; Visser, Ubbo; Bixby, John L; Lemmon, Vance P

    2016-01-01

    Spinal cord injury (SCI) research is a data-rich field that aims to identify the biological mechanisms resulting in loss of function and mobility after SCI, as well as develop therapies that promote recovery after injury. SCI experimental methods, data and domain knowledge are locked in the largely unstructured text of scientific publications, making large scale integration with existing bioinformatics resources and subsequent analysis infeasible. The lack of standard reporting for experiment variables and results also makes experiment replicability a significant challenge. To address these challenges, we have developed RegenBase, a knowledge base of SCI biology. RegenBase integrates curated literature-sourced facts and experimental details, raw assay data profiling the effect of compounds on enzyme activity and cell growth, and structured SCI domain knowledge in the form of the first ontology for SCI, using Semantic Web representation languages and frameworks. RegenBase uses consistent identifier schemes and data representations that enable automated linking among RegenBase statements and also to other biological databases and electronic resources. By querying RegenBase, we have identified novel biological hypotheses linking the effects of perturbagens to observed behavioral outcomes after SCI. RegenBase is publicly available for browsing, querying and download. Database URL: http://regenbase.org PMID:27055827
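A toy, plain-Python illustration (not the actual RegenBase stack, which uses Semantic Web languages) of how consistent identifiers let independently curated statements be chained into a hypothesis; every identifier and fact below is invented.

```python
triples = {  # (subject, predicate, object) statements sharing identifiers
    ("compound:C1", "inhibits", "enzyme:E9"),
    ("enzyme:E9", "suppresses", "process:axon_growth"),
    ("process:axon_growth", "improves", "outcome:locomotor_score"),
}

def objects(subject, predicate):
    """Return all objects linked to `subject` by `predicate`."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# join compound -> enzyme -> process -> outcome across the three statements
hypotheses = [
    (s, out)
    for s, p, o in triples if p == "inhibits"
    for proc in objects(o, "suppresses")
    for out in objects(proc, "improves")
]
print(hypotheses)   # [('compound:C1', 'outcome:locomotor_score')]
```

The join works only because the same identifier (e.g. `enzyme:E9`) appears in both statements, which is the role the abstract's "consistent identifier schemes" play at scale.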

  15. Coal and Coal/Biomass-Based Power Generation

    EPA Science Inventory

    For Frank Princiotta's book, Global Climate Change--The Technology Challenge. Coal is a key, growing component in power generation globally. It generates 50% of U.S. electricity, and criteria emissions from coal-based power generation are being reduced. However, CO2 emissions m...

  16. Optimal Test Design with Rule-Based Item Generation

    ERIC Educational Resources Information Center

    Geerlings, Hanneke; van der Linden, Wim J.; Glas, Cees A. W.

    2013-01-01

    Optimal test-design methods are applied to rule-based item generation. Three different cases of automated test design are presented: (a) test assembly from a pool of pregenerated, calibrated items; (b) test generation on the fly from a pool of calibrated item families; and (c) test generation on the fly directly from calibrated features defining…

  17. Will They Engage? Political Knowledge, Participation and Attitudes of Generations X and Y.

    ERIC Educational Resources Information Center

    Soule, Suzanne

    Most data support the thesis of declining civic engagement among Generations X and Y. If levels of civic engagement remain depressed across the life cycles of Generations X and Y, U.S. democracy may be threatened, for there will be fewer engaged people to fulfill the obligations of democratic citizens. The hope is that youths' indifference to…

  18. Confronting the Technological Pedagogical Knowledge of Finnish Net Generation Student Teachers

    ERIC Educational Resources Information Center

    Valtonen, Teemu; Pontinen, Susanna; Kukkonen, Jari; Dillon, Patrick; Vaisanen, Pertti; Hacklin, Stina

    2011-01-01

    The research reported here is concerned with a critical examination of some of the assumptions concerning the "Net Generation" capabilities of 74 first-year student teachers in a Finnish university. There are assumptions that: (i) Net Generation students are adept at learning through discovery and thinking in a hypertext-like manner (Oblinger &…

  19. When Generating Answers Benefits Arithmetic Skill: The Importance of Prior Knowledge

    ERIC Educational Resources Information Center

    Rittle-Johnson, Bethany; Kmicikewycz, Alexander Oleksij

    2008-01-01

    People remember information better if they generate the information while studying rather than read the information. However, prior research has not investigated whether this generation effect extends to related but unstudied items and has not been conducted in classroom settings. We compared third graders' success on studied and unstudied…

  20. A combined park management framework based on regulatory and behavioral strategies: use of visitors' knowledge to assess effectiveness.

    PubMed

    Papageorgiou, K

    2001-07-01

    In light of the increasing mandate for greater efficiency in conservation of natural reserves such as national parks, the present study suggests educational approaches as a tool to achieve conservation purposes. Currently, the management of human-wildlife interactions is dominated by regulatory strategies, but considerable potential exists for environmental education to enhance knowledge in the short run and to prompt attitude change in the long run. A framework for conservation based on both traditional regulatory- and behavior-oriented strategies was proposed, whereby the level of knowledge that park visitors have acquired comprises an obvious outcome and establishes a basis upon which the effectiveness of regulatory- and behavior-based regimes could be assessed. The perceptions regarding park-related issues of two distinct visitor groups (locals and nonlocals) are summarized from a survey undertaken in Vikos-Aoos national park. The findings suggest superficial knowledge of certain concepts but little deep understanding of their content, indicating that knowledge-raising efforts should go a long way towards establishing a positive attitude for the resource. Visitors' poor knowledge of the park's operation regulation contests the efficiency of the presently dominant regulatory management regime. While geographical distance did not appear to significantly differentiate knowledge between the two groups, wilderness experience (as evidenced by visits to other parks) proved to be an impetus for generating substantial learner interest in critical park issues among nonlocal visitors. School education and media were found to be significant knowledge providers.

  1. A combined park management framework based on regulatory and behavioral strategies: use of visitors' knowledge to assess effectiveness.

    PubMed

    Papageorgiou, K

    2001-07-01

    In light of the increasing mandate for greater efficiency in the conservation of natural reserves such as national parks, the present study suggests educational approaches as a tool to achieve conservation purposes. Currently, the management of human-wildlife interactions is dominated by regulatory strategies, but considerable potential exists for environmental education to enhance knowledge in the short run and to prompt attitude change in the long run. A framework for conservation based on both traditional regulatory- and behavior-oriented strategies was proposed, whereby the level of knowledge that park visitors have acquired constitutes an obvious outcome and establishes a basis upon which the effectiveness of regulatory- and behavior-based regimes can be assessed. The perceptions regarding park-related issues of two distinct visitor groups (locals and nonlocals) are summarized from a survey undertaken in Vikos-Aoos national park. The findings suggest superficial knowledge of certain concepts but little profound understanding of their content, indicating that knowledge-raising efforts should go a long way towards establishing a positive attitude toward the resource. Visitors' poor knowledge of the park's operating regulations contests the efficiency of the presently dominant regulatory management regime. While geographical distance did not appear to significantly differentiate knowledge between the two groups, wilderness experience (as evidenced by visits to other parks) proved to be an impetus for generating substantial learner interest in critical park issues among nonlocal visitors. School education and the media were found to be significant knowledge providers. PMID:11437001

  2. KBSIM: a system for interactive knowledge-based simulation.

    PubMed

    Hakman, M; Groth, T

    1991-01-01

    The KBSIM system integrates quantitative simulation with symbolic reasoning techniques, under the control of a user interface management system, using a relational database management system for data storage and interprocess communication. The system stores and processes knowledge from three distinct knowledge domains, viz. (i) knowledge about the processes of the system under investigation, expressed in terms of a Continuous System Simulation Language (CSSL); (ii) heuristic knowledge on how to reach the goals of the simulation experiment, expressed in terms of a Rule Description Language (RDL); and (iii) knowledge about the requirements of the intended users, expressed in terms of a User Interface Description Language (UIDL). The user works in an interactive environment, controlling the simulation course using a mouse and a large screen containing a set of 'live' charts and forms. The user is assisted by an embedded 'expert system' module that continuously watches both the system's behavior and the user's actions, producing alerts, alarms, comments and advice. The system was developed on a Hewlett-Packard 9000/350 workstation under the HP-Unix and HP-Windows operating systems, using the MIMER database management system, and Fortran, Prolog/Lisp and C as implementation languages. The KBSIM system has great potential for supporting problem solving, design of working procedures and teaching related to the management of highly dynamic systems. PMID:2060297
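    The embedded watcher described above can be sketched in miniature: rules pair a predicate over the simulation state with a message, loosely in the spirit of KBSIM's RDL. This is an illustration only; the rule shapes and thresholds below are invented, and KBSIM itself was implemented in Fortran, Prolog/Lisp and C.

```python
# Minimal rule-watcher sketch: each rule is a (predicate, message) pair
# evaluated against the current simulation state.
def watch(state, rules):
    """Return the messages of all rules whose predicate fires on `state`."""
    return [message for predicate, message in rules if predicate(state)]

# Hypothetical alert rules, for illustration only.
rules = [
    (lambda s: s["pressure"] > 100.0, "ALARM: pressure above limit"),
    (lambda s: s["temp"] > 80.0, "ALERT: temperature rising"),
]
```

    A real system would evaluate such rules continuously against both the simulation variables and the user's actions; here a single call inspects one state snapshot.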

  3. Writing a bachelor thesis generates transferable knowledge and skills useable in nursing practice.

    PubMed

    Lundgren, Solveig M; Robertsson, Barbro

    2013-11-01

    Generic skills or transferable skills have been discussed in terms of whether or not skills learned in one context can be transferred into another context. The current study aimed to explore nurses' self-perceptions of the knowledge and skills they had obtained while writing a Bachelor's thesis in nursing education, and their experience of the extent to which these transferred to and were utilized in their current work. Responding nurses (N=42) had all worked from 1 to 1.5 years after their final examination and had completed a questionnaire structured with open-ended questions. Only five nurses reported that they were unable to use any of the knowledge and skills they had obtained from writing a thesis. A majority of the nurses (37/42) could give many examples of the practical application of the skills and knowledge they had obtained. Our findings indicate that writing a thesis as part of an undergraduate degree program plays a major role in the acquisition and development of knowledge and skills which can subsequently be transferred into and utilized in nursing practice.

  4. The Best Defense is a Good Offense. Keep the Focus on Knowledge Generation and Communication

    ERIC Educational Resources Information Center

    McAnear, Anita

    2005-01-01

    Helping students become information seekers, synthesizers, analyzers, evaluators, innovative thinkers, problem solvers, decision makers, producers of knowledge, communicators, and collaborators is one way to create an environment that minimizes cheating, plagiarism, and copyright violations. In such an environment, you may also be able to take…

  5. Acts of Discovery: Using Collaborative Research to Mobilize and Generate Knowledge about Visual Arts Teaching Practice

    ERIC Educational Resources Information Center

    Mitchell, Donna Mathewson

    2014-01-01

    Visual arts teachers engage in complex work on a daily basis. This work is informed by practical knowledge that is rarely examined or drawn on in research or in the development of policy. Focusing on the work of secondary visual arts teachers, this article reports on a research program conducted in a regional area of New South Wales, Australia.…

  6. Making Sense of Knowledge Transfer and Social Capital Generation for a Pacific Island Aid Infrastructure Project

    ERIC Educational Resources Information Center

    Manu, Christopher; Walker, Derek H. T.

    2006-01-01

    Purpose: The purpose of this research is to investigate how lessons learned from a case study of a construction project undertaken in the Pacific Islands relates to the interaction between social capital and knowledge transfer. The paper is reflective in nature focusing upon the experiences of one of the authors, being a Pacific Islander and…

  7. A knowledge-based design framework for airplane conceptual and preliminary design

    NASA Astrophysics Data System (ADS)

    Anemaat, Wilhelmus A. J.

    The goal of work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure which can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment, with the same AAA methods (i.e. the knowledge), is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This will lead to the following benefits: (1) Reduced design time: computer aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: due to less training and fewer calculation errors, substantial savings in design time and related cost can be obtained. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge-based design have been developed for detailed design, currently no such integrated knowledge-based conceptual and preliminary airplane design system exists. The third-generation AAA methods have been tested over a ten-year period on many different airplane designs. Using AAA methods will demonstrate significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single-engine propeller aircraft, business jets, airliners, and UAVs to fighters. Data from the varied sizing methods will be compared with AAA results to validate these methods. One new design, a Light Sport Aircraft (LSA), will be developed as an exercise in using the tool to design a new airplane.

  8. Integrating Problem-Based Learning with ICT for Developing Trainee Teachers' Content Knowledge and Teaching Skill

    ERIC Educational Resources Information Center

    Karami, Mehdi; Karami, Zohreh; Attaran, Mohammad

    2013-01-01

    Professional teachers can guarantee the progress and the promotion of society because fostering the development of next generation is up to them and depends on their professional knowledge which has two kinds of sources: content knowledge and teaching skill. The aim of the present research was studying the effect of integrating problem-based…

  10. GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies

    PubMed Central

    Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain

    2015-01-01

    Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/. PMID:26313379
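    The knowledge-based-potential side of such methods is commonly built on the inverse-Boltzmann idea: pair types seen more often in native structures than in a reference state get favorable (negative) energies. The sketch below is a toy illustration of that idea, not GARN's implementation; the pair alphabet, add-one smoothing, and the 8.0 distance cutoff are all assumptions.

```python
import math
from collections import Counter
from itertools import combinations

def knowledge_based_pair_potential(observed_pairs, reference_pairs):
    """Inverse-Boltzmann potential: E(a,b) = -ln(f_obs(a,b) / f_ref(a,b))."""
    obs = Counter(observed_pairs)
    ref = Counter(reference_pairs)
    n_obs, n_ref = sum(obs.values()), sum(ref.values())
    potential = {}
    for pair in set(obs) | set(ref):
        f_obs = (obs[pair] + 1) / (n_obs + len(obs))  # add-one smoothing
        f_ref = (ref[pair] + 1) / (n_ref + len(ref))
        potential[pair] = -math.log(f_obs / f_ref)
    return potential

def score(conformation, potential, cutoff=8.0):
    """Sum pair energies over element pairs closer than a distance cutoff.

    `conformation` is a list of (type, (x, y, z)) coarse-grained elements.
    """
    total = 0.0
    for (t1, p1), (t2, p2) in combinations(conformation, 2):
        if math.dist(p1, p2) < cutoff:
            total += potential.get(tuple(sorted((t1, t2))), 0.0)
    return total
```

    A sampler can then rank candidate coarse-grained placements by this score, favoring arrangements whose contacts resemble those of known native structures.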

  11. A Conceptual Framework for a Web-based Knowledge Construction Support System.

    ERIC Educational Resources Information Center

    Kang, Myunghee; Byun, Hoseung Paul

    2001-01-01

    Provides a conceptual model for a Web-based Knowledge Construction Support System (KCSS) that helps learners acquire factual knowledge and supports the construction of new knowledge through individual internalization and collaboration with other people. Considers learning communities, motivation, cognitive styles, learning strategies,…

  12. A Comparison of Books and Hypermedia for Knowledge-based Sports Coaching.

    ERIC Educational Resources Information Center

    Vickers, Joan N.; Gaines, Brian R.

    1988-01-01

    Summarizes and illustrates the knowledge-based approach to instructional material design. A series of sports coaching handbooks and hypermedia presentations of the same material are described and the different instantiations of the knowledge and training structures are compared. Figures show knowledge structures for badminton and the architecture…

  13. The Research for Knowledge Management System of Virtual Enterprise Based on Multi-agent

    NASA Astrophysics Data System (ADS)

    Bo, Yang; Xu, Shenghua

    By analyzing the features of virtual enterprises and their knowledge management systems, this research introduces complex adaptive systems into the knowledge management system of the virtual enterprise. It proposes a model for the knowledge management system of a virtual enterprise and discusses the functions of each agent as well as the mutual communication and coordination mechanisms.

  14. Analysis of a Knowledge-Management-Based Process of Transferring Project Management Skills

    ERIC Educational Resources Information Center

    Ioi, Toshihiro; Ono, Masakazu; Ishii, Kota; Kato, Kazuhiko

    2012-01-01

    Purpose: The purpose of this paper is to propose a method for the transfer of knowledge and skills in project management (PM) based on techniques in knowledge management (KM). Design/methodology/approach: The literature contains studies on methods to extract experiential knowledge in PM, but few studies exist that focus on methods to convert…

  15. Knowledge-based deformable surface model with application to segmentation of brain structures in MRI

    NASA Astrophysics Data System (ADS)

    Ghanei, Amir; Soltanian-Zadeh, Hamid; Elisevich, Kost; Fessler, Jeffrey A.

    2001-07-01

    We have developed a knowledge-based deformable surface for segmentation of medical images. This work has been done in the context of segmentation of the hippocampus from brain MRI, due to its challenge and clinical importance. The model has a polyhedral discrete structure and is initialized automatically by analyzing brain MRI slice by slice and finding a few landmark features in each slice using an expert system. The expert system decides on the presence of the hippocampus and its general location in each slice. The landmarks found are connected together by a triangulation method to generate a closed initial surface. The surface then deforms under defined internal and external force terms to generate an accurate and reproducible boundary for the hippocampus. The anterior and posterior (AP) limits of the hippocampus are estimated by automatic analysis of the location of the brain stem and some of the features extracted in the initialization process. These data are combined with a priori knowledge using Bayes' method to estimate a probability density function (pdf) for the length of the structure in the sagittal direction. The hippocampus AP limits are found by optimizing this pdf. The model has been tested on real clinical data and the results show very good model performance.
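    The Bayesian combination step can be illustrated with the standard conjugate Gaussian update (a minimal sketch assuming both the a priori length distribution and the feature-derived estimate are Gaussian; the abstract does not state the actual densities used, and the numbers below are invented):

```python
def gaussian_posterior(prior_mean, prior_var, meas_mean, meas_var):
    """Conjugate Gaussian update: posterior precision is the sum of the
    two precisions, and the MAP estimate is the precision-weighted mean."""
    precision = 1.0 / prior_var + 1.0 / meas_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + meas_mean / meas_var)
    return post_mean, post_var
```

    With an a priori length of 40 (variance 16) and a feature-derived estimate of 44 (variance 16), the posterior pdf peaks at 42 with variance 8; optimizing the pdf then amounts to reading off that peak.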

  16. Model-based reasoning for system and software engineering: The Knowledge From Pictures (KFP) environment

    NASA Technical Reports Server (NTRS)

    Bailin, Sydney; Paterra, Frank; Henderson, Scott; Truszkowski, Walt

    1993-01-01

    This paper presents a discussion of current work in the area of graphical modeling and model-based reasoning being undertaken by the Automation Technology Section, Code 522.3, at Goddard. The work was initially motivated by the growing realization that the knowledge acquisition process was a major bottleneck in the generation of fault detection, isolation, and repair (FDIR) systems for application in automated Mission Operations. As with most research activities, this work started out with a simple objective: to develop a proof-of-concept system demonstrating that a draft rule-base for an FDIR system could be automatically realized by reasoning from a graphical representation of the system to be monitored. This work was called Knowledge From Pictures (KFP) (Truszkowski et al. 1992). As the work has progressed, the KFP tool has become an environment populated by a set of tools that support a more comprehensive approach to model-based reasoning. This paper continues by giving an overview of the graphical modeling objectives of the work, describing the three tools that now populate the KFP environment, briefly discussing related work in the field, and indicating future directions for the KFP environment.

  17. Optical generation of fuzzy-based rules.

    PubMed

    Gur, Eran; Mendlovic, David; Zalevsky, Zeev

    2002-08-10

    In the last third of the 20th century, fuzzy logic has risen from a mathematical concept to an applicable approach in soft computing. Today, fuzzy logic is used in control systems for various applications, such as washing machines, train-brake systems, automobile automatic gear, and so forth. The approach of optical implementation of fuzzy inferencing was given by the authors in previous papers, giving an extra emphasis to applications with two dominant inputs. In this paper the authors introduce a real-time optical rule generator for the dual-input fuzzy-inference engine. The paper briefly goes over the dual-input optical implementation of fuzzy-logic inferencing. Then, the concept of constructing a set of rules from given data is discussed. Next, the authors show ways to implement this procedure optically. The discussion is accompanied by an example that illustrates the transformation from raw data into fuzzy set rules.
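    The step from raw data to fuzzy set rules can be sketched with a Wang-Mendel-style procedure for a dual-input engine: each (input, input, output) sample votes for the rule built from its strongest-membership fuzzy sets. This is a software illustration of the concept only; the paper implements rule generation optically, and the triangular partitions below are invented.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

SETS = {  # hypothetical partition of a [0, 10] universe of discourse
    "low": (-5.0, 0.0, 5.0),
    "med": (0.0, 5.0, 10.0),
    "high": (5.0, 10.0, 15.0),
}

def best_label(x):
    """Fuzzy set in which x has the highest membership."""
    return max(SETS, key=lambda name: triangular(x, *SETS[name]))

def generate_rules(samples):
    """One candidate rule per (in1, in2, out) sample; keep the strongest
    rule for each antecedent, as in Wang-Mendel rule generation."""
    rules = {}
    for x1, x2, y in samples:
        antecedent = (best_label(x1), best_label(x2))
        strength = (triangular(x1, *SETS[antecedent[0]])
                    * triangular(x2, *SETS[antecedent[1]]))
        if antecedent not in rules or strength > rules[antecedent][1]:
            rules[antecedent] = (best_label(y), strength)
    return {ant: out for ant, (out, _) in rules.items()}
```

    The resulting rule table ("if in1 is low and in2 is low then out is low", and so on) is exactly the kind of rule set a dual-input fuzzy-inference engine consumes.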

  18. Meta-data based mediator generation

    SciTech Connect

    Critchlaw, T

    1998-06-28

    Mediators are a critical component of any data warehouse; they transform data from source formats to the warehouse representation while resolving semantic and syntactic conflicts. The close relationship between mediators and databases requires a mediator to be updated whenever an associated schema is modified. Failure to quickly perform these updates significantly reduces the reliability of the warehouse because queries do not have access to the most current data. This may result in incorrect or misleading responses, and reduce user confidence in the warehouse. Unfortunately, this maintenance may be a significant undertaking if a warehouse integrates several dynamic data sources. This paper describes a meta-data framework, and associated software, designed to automate a significant portion of the mediator generation task and thereby reduce the effort involved in adapting to schema changes. By allowing the DBA to concentrate on identifying the modifications at a high level, instead of reprogramming the mediator, turnaround time is reduced and warehouse reliability is improved.
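    The idea of driving mediator generation from meta-data can be sketched as follows. This is a minimal illustration, not the framework described in the paper; the source and warehouse schemas and field names are hypothetical.

```python
# Each warehouse field is described by meta-data: the source field it is
# drawn from and an optional transform. When a source schema changes, the
# DBA edits this table and regenerates the mediator -- no reprogramming.
def make_mediator(field_map):
    def mediate(source_record):
        out = {}
        for target, (source_field, transform) in field_map.items():
            value = source_record.get(source_field)
            out[target] = transform(value) if transform else value
        return out
    return mediate

# Hypothetical mapping from a source schema to a warehouse schema.
field_map = {
    "sample_id": ("SampleID", None),
    "temperature_k": ("temp_celsius", lambda c: c + 273.15),
    "site": ("SiteName", str.strip),
}
mediate = make_mediator(field_map)
```

    Renaming a source column then means changing one entry of `field_map`, which is the high-level, declarative maintenance the paper argues for.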

  19. Towards a knowledge-based system to assist the Brazilian data-collecting system operation

    NASA Technical Reports Server (NTRS)

    Rodrigues, Valter; Simoni, P. O.; Oliveira, P. P. B.; Oliveira, C. A.; Nogueira, C. A. M.

    1988-01-01

    A study is reported which was carried out to show how a knowledge-based approach would lead to a flexible tool to assist the operation task in a satellite-based environmental data collection system. Some characteristics of a hypothesized system comprised of a satellite and a network of Interrogable Data Collecting Platforms (IDCPs) are pointed out. The Knowledge-Based Planning Assistant System (KBPAS) and some aspects about how knowledge is organized in the IDCP's domain are briefly described.

  20. Novel nonlinear knowledge-based mean force potentials based on machine learning.

    PubMed

    Dong, Qiwen; Zhou, Shuigeng

    2011-01-01

    The prediction of 3D structures of proteins from amino acid sequences is one of the most challenging problems in molecular biology. An essential task for solving this problem with coarse-grained models is to deduce effective interaction potentials. The development and evaluation of new energy functions is critical to accurately modeling the properties of biological macromolecules. Knowledge-based mean force potentials are derived from statistical analysis of proteins of known structures. Current knowledge-based potentials are almost always in the form of a weighted linear sum of interaction pairs. In this study, a class of novel nonlinear knowledge-based mean force potentials is presented. The potential parameters are obtained by nonlinear classifiers, instead of relative frequencies of interaction pairs against a reference state or linear classifiers. The support vector machine is used to derive the potential parameters on data sets that contain both native structures and decoy structures. Five knowledge-based mean force Boltzmann-based or linear potentials are introduced and their corresponding nonlinear potentials are implemented. They are the DIH potential (single-body residue-level Boltzmann-based potential), the DFIRE-SCM potential (two-body residue-level Boltzmann-based potential), the FS potential (two-body atom-level Boltzmann-based potential), the HR potential (two-body residue-level linear potential), and the T32S3 potential (two-body atom-level linear potential). Experiments are performed on well-established decoy sets, including the LKF data set, the CASP7 data set, and the Decoys “R”Us data set. The evaluation metrics include the energy Z score and the ability of each potential to discriminate native structures from a set of decoy structures. Experimental results show that all nonlinear potentials significantly outperform the corresponding Boltzmann-based or linear potentials, and the proposed discriminative framework is effective in developing knowledge-based
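    The paper derives its nonlinear potentials with support vector machines; as a stdlib-only stand-in, a dual-form (kernel) perceptron shows the same core idea of learning a nonlinear decision function over native and decoy examples and reading it as an energy. The 2-D features, RBF width, and sign convention below are invented for illustration.

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def train_kernel_perceptron(X, y, epochs=20, gamma=1.0):
    """Dual-form perceptron: f(x) = sum_i alpha_i * y_i * K(x_i, x)."""
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        for j, (xj, yj) in enumerate(zip(X, y)):
            f = sum(a * yi * rbf(xi, xj, gamma)
                    for a, yi, xi in zip(alpha, y, X))
            if yj * f <= 0:  # misclassified: reinforce this example
                alpha[j] += 1.0
    return alpha

def nonlinear_potential(x, X, y, alpha, gamma=1.0):
    """Lower (more negative) score = more native-like, by convention."""
    return -sum(a * yi * rbf(xi, x, gamma)
                for a, yi, xi in zip(alpha, y, X))
```

    As in the paper's discriminative framework, the learned function assigns low energies to native-like feature vectors and high energies to decoy-like ones, without being restricted to a weighted linear sum of pair counts.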

  2. A Map-Based Service Supporting Different Types of Geographic Knowledge for the Public

    PubMed Central

    Zhou, Mengjie; Wang, Rui; Tian, Jing; Ye, Ning; Mai, Shumin

    2016-01-01

    The internet enables the rapid and easy creation, storage, and transfer of knowledge; however, services that transfer geographic knowledge and facilitate the public understanding of geographic knowledge are still underdeveloped to date. Existing online maps (or atlases) can support limited types of geographic knowledge. In this study, we propose a framework for map-based services to represent and transfer different types of geographic knowledge to the public. A map-based service provides tools to ensure the effective transfer of geographic knowledge. We discuss the types of geographic knowledge that should be represented and transferred to the public, and we propose guidelines and a method to represent various types of knowledge through a map-based service. To facilitate the effective transfer of geographic knowledge, tools such as auxiliary background knowledge and auxiliary map-reading tools are provided through interactions with maps. An experiment conducted to illustrate our idea and to evaluate the usefulness of the map-based service is described; the results demonstrate that the map-based service is useful for transferring different types of geographic knowledge. PMID:27045314

  3. Knowledge and Use of Intervention Practices by Community-Based Early Intervention Service Providers

    ERIC Educational Resources Information Center

    Paynter, Jessica M.; Keen, Deb

    2015-01-01

    This study investigated staff attitudes, knowledge and use of evidence-based practices (EBP) and links to organisational culture in a community-based autism early intervention service. An EBP questionnaire was completed by 99 metropolitan and regionally-based professional and paraprofessional staff. Participants reported greater knowledge and use…

  4. Investigating Knowledge Integration in Web-Based Thematic Learning Using Concept Mapping Assessment

    ERIC Educational Resources Information Center

    Liu, Ming-Chou; Wang, Jhen-Yu

    2010-01-01

    Theme-based learning (TBL) refers to learning modes which adopt the following sequence: (a) finding the theme; (b) finding a focus of interest based on the theme; (c) finding materials based on the focus of interest; (d) integrating the materials to establish shared knowledge; (e) publishing and sharing the integrated knowledge. We have created an…

  5. Building a Knowledge-Based Economy and Society.

    ERIC Educational Resources Information Center

    Bryson, Jo

    This paper provides an overview of the forces shaping the future of the knowledge economy and society, including: the speed and type of change that is occurring; the technologies that are propelling it; the technology and information choices that competitors are making; which organizations are in the lead; who has the most to gain and to lose; the…

  6. Of Tacit Knowledge, Texts and Thing-based Learning (TBL)

    ERIC Educational Resources Information Center

    Rangachari, P. K.

    2008-01-01

    Practical knowledge has two dimensions--a visible, codified component that resembles the tip of an iceberg. The larger but crucial tacit component which lies submerged consists of values, procedures and tricks of the trade and cannot be easily documented or codified. Undergraduate science students were given an opportunity to explore this…

  7. What Portion of the Knowledge Base Do Practicing Administrators Utilize?

    ERIC Educational Resources Information Center

    Wildman, Louis

    There is a lack of empirical evidence describing the actual problems encountered by school leaders and the knowledge that they use to find solutions to those problems. This paper presents findings of a study that explored the problems faced by members of a graduate educational-administration class. The participants, 22 practicing public school…

  8. Relationships among Hypermedia-Based Mental Models and Hypermedia Knowledge.

    ERIC Educational Resources Information Center

    Ayersman, David J.; Reed, W. Michael

    1998-01-01

    Analysis of data from two studies of undergraduates (n=12 and n=18) enrolled in a hypermedia-in-education course at West Virginia University determined that the group with more hypermedia knowledge more frequently cited nonlinear models, supporting the premise that students require hypermedia experience before they can use nonlinear information…

  9. Knowledge-based design of a soluble bacteriorhodopsin.

    PubMed

    Gibas, C; Subramaniam, S

    1997-10-01

    Much knowledge has been accrued from high resolution protein structures. This knowledge provides rules and guidelines for the rational design of soluble proteins. We have extracted these rules and applied them to redesigning the structure of bacteriorhodopsin and to creating blueprints for a monomeric, soluble seven-helix bundle protein. Such a protein is likely to have desirable properties, such as ready crystallization, which membrane proteins lack and an internal structure similar to that of the native protein. While preserving residues shown to be necessary for protein function, we made modifications to the rest of the sequence, distributing polar and charged residues over the surface of the protein to achieve an amino acid composition as akin to that of soluble helical proteins as possible. A secondary goal was to increase apolar contacts in the helix intercalation regions of the protein. The scheme used to design the model sequences requires knowledge of the number and orientation of helices and some information about interior contacts, but detailed structural knowledge is not required to use a scheme of this type.

  10. Drug Education Based on a Knowledge, Attitude, and Experience Study

    ERIC Educational Resources Information Center

    Grant, John A.

    1971-01-01

    Results of a questionnaire concerning factual knowledge of attitudes toward, and experience with a variety of drugs are reported. It was concluded that marihuana and other drugs are readily available to secondary school students, and widespread experimentation exists; however, a strict dichotomy exists between marihuana and other drugs. (Author/BY)

  11. Knowledge Base of Pronunciation Teaching: Staking out the Territory

    ERIC Educational Resources Information Center

    Baker, Amanda; Murphy, John

    2011-01-01

    Despite decades of advocacy for greater investigative attention, research into pronunciation instruction in the teaching of English as a second language (ESL) and English as a foreign language (EFL) continues to be limited. This limitation is particularly evident in explorations of teacher cognition (e.g., teachers' knowledge, beliefs, and…

  12. The Reliability of Randomly Generated Math Curriculum-Based Measurements

    ERIC Educational Resources Information Center

    Strait, Gerald G.; Smith, Bradley H.; Pender, Carolyn; Malone, Patrick S.; Roberts, Jarod; Hall, John D.

    2015-01-01

    "Curriculum-Based Measurement" (CBM) is a direct method of academic assessment used to screen and evaluate students' skills and monitor their responses to academic instruction and intervention. Interventioncentral.org offers a math worksheet generator at no cost that creates randomly generated "math curriculum-based measures"…
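    A randomly generated math CBM probe of the kind the study examines might be sketched as follows (a toy single-skill generator; the actual parameters of the Interventioncentral.org worksheet generator are not reproduced here):

```python
import random

def generate_cbm_probe(n_items=5, seed=None):
    """Randomly generate a single-skill addition probe: a list of
    (question, answer) pairs drawn from two-digit addends."""
    rng = random.Random(seed)
    items = []
    for _ in range(n_items):
        a, b = rng.randint(10, 99), rng.randint(10, 99)
        items.append((f"{a} + {b} =", a + b))
    return items
```

    Because each probe is drawn at random from the same item pool, alternate forms are nominally parallel, which is precisely why their reliability is an empirical question.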

  13. BioGraph: unsupervised biomedical knowledge discovery via automated hypothesis generation

    PubMed Central

    2011-01-01

    We present BioGraph, a data integration and data mining platform for the exploration and discovery of biomedical information. The platform offers prioritizations of putative disease genes, supported by functional hypotheses. We show that BioGraph can retrospectively confirm recently discovered disease genes and identify potential susceptibility genes, outperforming existing technologies, without requiring prior domain knowledge. Additionally, BioGraph allows for generic biomedical applications beyond gene discovery. BioGraph is accessible at http://www.biograph.be. PMID:21696594

  14. Diagnosis by integrating model-based reasoning with knowledge-based reasoning

    NASA Technical Reports Server (NTRS)

    Bylander, Tom

    1988-01-01

    Our research investigates how observations can be categorized by integrating a qualitative physical model with experiential knowledge. Our domain is diagnosis of pathologic gait in humans, in which the observations are the gait motions, muscle activity during gait, and physical exam data, and the diagnostic hypotheses are the potential muscle weaknesses, muscle mistimings, and joint restrictions. Patients with underlying neurological disorders typically have several malfunctions. Among the problems that need to be faced are: the ambiguity of the observations, the ambiguity of the qualitative physical model, correspondence of the observations and hypotheses to the qualitative physical model, the inherent uncertainty of experiential knowledge, and the combinatorics involved in forming composite hypotheses. Our system divides the work so that the knowledge-based reasoning suggests which hypotheses appear more likely than others, the qualitative physical model is used to determine which hypotheses explain which observations, and another process combines these functionalities to construct a composite hypothesis based on explanatory power and plausibility. We speculate that the reasoning architecture of our system is generally applicable to complex domains in which a less-than-perfect physical model and less-than-perfect experiential knowledge need to be combined to perform diagnosis.
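    The division of labor described above can be sketched as a greedy assembly of a composite hypothesis: a knowledge-based plausibility ranking proposes candidates, the model supplies which observations each hypothesis explains, and the combiner favors explanatory power. The gait hypotheses, observation names, and scores below are invented; the system's actual assembly procedure is not specified in this abstract.

```python
def assemble_composite(observations, explains, plausibility):
    """Greedily add the most plausible hypothesis that still explains at
    least one unexplained observation; return (composite, unexplained)."""
    remaining = set(observations)
    composite = []
    for hyp in sorted(explains, key=plausibility.get, reverse=True):
        covered = explains[hyp] & remaining
        if covered:
            composite.append(hyp)
            remaining -= covered
        if not remaining:
            break
    return composite, remaining
```

    This keeps the combinatorics in check: instead of enumerating all hypothesis subsets, only hypotheses that add explanatory power are admitted, in plausibility order.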

  15. Construction of dynamic stochastic simulation models using knowledge-based techniques

    NASA Technical Reports Server (NTRS)

    Williams, M. Douglas; Shiva, Sajjan G.

    1990-01-01

    Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).

  16. Generation Of Manufacturing Routing And Operations Using Structured Knowledge As Basis To Application Of Computer Aided In Process Planning

    NASA Astrophysics Data System (ADS)

    Oswaldo, Luiz Agostinho

    2011-01-01

    Computer-aided generation of manufacturing routings and operations is currently accomplished mainly by searching for similarities among existing routings, resulting in standard process routings grouped by similarity between parts or routings. This article proposes a methodology for developing manufacturing routings and detailed operations whose steps define the initial, intermediate, and final operations, starting from the rough piece and proceeding to the final specifications, which must correspond one-to-one with the part design specifications. Each step uses so-called rules of precedence to link and chain the routing operations. The rules of precedence order and prioritize knowledge of the various manufacturing processes, taking into account the theories of machining, forging, assembly, and heat treatment, as well as the theories of tolerance accumulation and process capability, among others. The methodology also relies on the availability of manufacturing databases covering process tolerances, deviations of the machine tool, cutting tool, fixturing devices, and workpiece, and process capabilities. Stating and applying rules of precedence that link manufacturing concepts in a logical, structured way makes it possible to replace the tacit knowledge currently held in manufacturing engineering departments with structured knowledge when generating manufacturing routings and operations. Consequently, the development of computer-aided process planning is facilitated, because the methodology applies structured knowledge.
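One way to read the "rules of precedence" idea is as ordering constraints between operations, which reduces routing generation to a topological sort. The sketch below makes that assumption explicit; the operation names and rules are illustrative, not from the paper.

```python
from collections import defaultdict, deque

def order_operations(operations, precedence_rules):
    """Order routing operations so that every rule (earlier, later) is
    respected, via Kahn's topological-sort algorithm. A cycle means the
    stated rules of precedence contradict each other."""
    successors = defaultdict(list)
    indegree = {op: 0 for op in operations}
    for earlier, later in precedence_rules:
        successors[earlier].append(later)
        indegree[later] += 1

    ready = deque(op for op in operations if indegree[op] == 0)
    routing = []
    while ready:
        op = ready.popleft()
        routing.append(op)
        for nxt in successors[op]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(routing) != len(operations):
        raise ValueError("contradictory precedence rules (cycle detected)")
    return routing

# Hypothetical routing for a machined part.
ops = ["forging", "rough_turning", "heat_treatment", "finish_turning", "inspection"]
rules = [("forging", "rough_turning"),
         ("rough_turning", "heat_treatment"),
         ("heat_treatment", "finish_turning"),
         ("finish_turning", "inspection")]
routing = order_operations(ops, rules)
```

Encoding precedence knowledge as explicit (earlier, later) pairs is one simple way to turn the tacit sequencing knowledge the article mentions into a structured, machine-checkable form.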

  17. Constructing Clinical Decision Support Systems for Adverse Drug Event Prevention: A Knowledge-based Approach.

    PubMed

    Koutkias, Vassilis; Kilintzis, Vassilis; Stalidis, George; Lazou, Katerina; Collyda, Chrysa; Chazard, Emmanuel; McNair, Peter; Beuscart, Regis; Maglaveras, Nicos

    2010-11-13

    A knowledge-based approach is proposed that is employed for the construction of a framework suitable for the management and effective use of knowledge on Adverse Drug Event (ADE) prevention. The framework has as its core part a Knowledge Base (KB) comprised of rule-based knowledge sources, that is accompanied by the necessary inference and query mechanisms to provide healthcare professionals and patients with decision support services in clinical practice, in terms of alerts and recommendations on preventable ADEs. The relevant Knowledge Based System (KBS) is developed in the context of the EU-funded research project PSIP (Patient Safety through Intelligent Procedures in Medication). In the current paper, we present the foundations of the framework, its knowledge model and KB structure, as well as recent progress as regards the population of the KB, the implementation of the KBS, and results on the KBS verification in decision support operation.
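A rule-based KB of the kind described reduces, at its smallest, to conditions evaluated against a patient record with an alert emitted when all conditions hold. The sketch below shows that minimal inference step; the example rule, field names, and thresholds are hypothetical and not taken from the PSIP knowledge base.

```python
def evaluate_rules(patient, rules):
    """Fire every rule whose conditions all hold for the patient record
    and collect its alert message: the minimal inference step of a
    rule-based ADE-prevention knowledge base."""
    alerts = []
    for rule in rules:
        if all(condition(patient) for condition in rule["conditions"]):
            alerts.append(rule["alert"])
    return alerts

# Hypothetical rule: NSAID prescribed to a patient with renal impairment.
rules = [{
    "conditions": [
        lambda p: "ibuprofen" in p["medications"],
        lambda p: p["creatinine_clearance"] < 30,
    ],
    "alert": "NSAID with severe renal impairment: risk of acute kidney injury",
}]
patient = {"medications": ["ibuprofen", "metformin"], "creatinine_clearance": 25}
alerts = evaluate_rules(patient, rules)
```

A production system like the one the paper describes adds knowledge management, verification, and query machinery around this core, but the rule-evaluation step is the part that yields alerts and recommendations at the point of care.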

  18. A New Knowledge Reduction Algorithm Based on Decision Power in Rough Set

    NASA Astrophysics Data System (ADS)

    Xu, Jiucheng; Sun, Lin

    Many researchers are developing fast data mining methods for processing huge data sets efficiently, but current reduction algorithms based on rough sets still have several disadvantages. In this paper, we point out their limitations for reduct generation, introduce a new measure of knowledge to discuss the roughness of rough sets, and develop an efficient algorithm for knowledge reduction based on rough sets. We modify the mean decision power and propose an algebraic definition of decision power. To select an optimal attribute reduction, we present a judgment criterion for the decision as an inequality and obtain some important conclusions. A complete algorithm for attribute reduction is designed. Analysis of a worked example shows that the proposed heuristic information is better and more efficient than alternatives, and that the presented method reduces time complexity and improves performance. We report experimental results on several data sets from the UCI Machine Learning Repository and compare them with other methods. The results show that the proposed method is promising, which enlarges the application areas of rough sets.
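The abstract does not give the decision-power formula, but the standard rough-set machinery it builds on can be sketched: partition rows by attribute values, measure how much of the data the condition attributes classify consistently (the positive region), and greedily grow a reduct. This uses the classical dependency degree as a stand-in for the paper's decision-power measure.

```python
from itertools import groupby

def partition(rows, attrs):
    """Group row indices into equivalence classes by their values on attrs."""
    key = lambda i: tuple(rows[i][a] for a in attrs)
    idx = sorted(range(len(rows)), key=key)
    return [set(g) for _, g in groupby(idx, key=key)]

def dependency(rows, cond_attrs, dec_attr):
    """Fraction of rows whose condition class lies wholly inside one
    decision class (the size of the positive region)."""
    dec_classes = partition(rows, [dec_attr])
    pos = sum(len(block) for block in partition(rows, cond_attrs)
              if any(block <= d for d in dec_classes))
    return pos / len(rows)

def greedy_reduct(rows, cond_attrs, dec_attr):
    """Greedily add the attribute that raises the dependency most, until
    it matches the full attribute set: a classic heuristic reduct search."""
    full = dependency(rows, cond_attrs, dec_attr)
    reduct, remaining = [], list(cond_attrs)
    while dependency(rows, reduct, dec_attr) < full:
        best = max(remaining, key=lambda a: dependency(rows, reduct + [a], dec_attr))
        reduct.append(best)
        remaining.remove(best)
    return reduct

# Toy decision table: the decision d depends on attribute a alone.
rows = [{"a": 0, "b": 0, "d": 0}, {"a": 0, "b": 1, "d": 0},
        {"a": 1, "b": 0, "d": 1}, {"a": 1, "b": 1, "d": 1}]
reduct = greedy_reduct(rows, ["a", "b"], "d")
```

On the toy table the search drops the redundant attribute `b`, which is exactly the kind of reduction the paper's heuristic aims to perform more efficiently.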

  19. Fuzzy knowledge base construction through belief networks based on Lukasiewicz logic

    NASA Technical Reports Server (NTRS)

    Lara-Rosano, Felipe

    1992-01-01

    In this paper, a procedure is proposed to build a fuzzy knowledge base founded on fuzzy belief networks and Lukasiewicz logic. Fuzzy procedures are developed to do the following: to assess the belief values of a consequent, in terms of the belief values of its logical antecedents and the belief value of the corresponding logical function; and to update belief values when new evidence is available.
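The abstract names the logic but not the formulas. Using the standard Łukasiewicz connectives (strong conjunction max(0, a + b - 1)), a consequent's belief can be bounded from the beliefs of its antecedents and of the rule itself; the simple max-based update below is one plausible policy, not necessarily the paper's.

```python
def luk_and(*values):
    """Łukasiewicz t-norm (strong conjunction), folded over the arguments:
    luk_and(a, b) = max(0, a + b - 1)."""
    result = 1.0
    for v in values:
        result = max(0.0, result + v - 1.0)
    return result

def consequent_belief(antecedent_beliefs, rule_belief):
    """Lower bound on the consequent's belief via modus ponens in
    Łukasiewicz logic: conjoin the antecedents, then the rule belief."""
    return luk_and(luk_and(*antecedent_beliefs), rule_belief)

def update_belief(current, new_evidence):
    """Keep the stronger support when new evidence arrives (a simple
    monotone update policy; the paper's actual scheme may differ)."""
    return max(current, new_evidence)

# Two antecedents believed at 0.9 and 0.8, rule believed at 0.95:
# luk_and(0.9, 0.8) = 0.7, then luk_and(0.7, 0.95) = 0.65.
b = consequent_belief([0.9, 0.8], rule_belief=0.95)
```

Note the characteristic behaviour of the Łukasiewicz t-norm: beliefs erode additively under conjunction, so long inference chains with moderately believed antecedents quickly drive the consequent's bound toward zero.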

  20. Organizational Knowledge Capitalization Based on Product Patterns and Web 2.0 Technology

    NASA Astrophysics Data System (ADS)

    Sanchez-Segura, Maria-Isabel; Mora-Soto, Arturo; Medina-Dominguez, Fuensanta; Amescua, Antonio

    Knowledge is now seen as one of the most important assets of an organization; nevertheless, assessing the value of organizational knowledge is a challenge because most of it exists as tacit knowledge in the minds of the organization's members. To overcome this issue, the authors propose a working strategy to gather and explicitly represent organizational knowledge in order to foster its capitalization. On the one hand, they propose a pattern-based knowledge representation artifact combined with a transactive memory system that allows knowledge to be encoded, stored, and retrieved collectively in a smart manner; on the other hand, they propose the use of Web 2.0 technologies and tools to facilitate knowledge access and representation and to foster cooperation among the organization's members towards knowledge capitalization.

  1. A JAVA implementation of a medical knowledge base for decision support.

    PubMed

    Ambrosiadou, V; Goulis, D; Shankararaman, V; Shamtani, G

    1999-01-01

    Distributed decision support is a challenging issue requiring the implementation of advanced computer science techniques together with development tools that offer ease of communication and efficient search and control performance. This paper presents a JAVA implementation of a knowledge base model called ARISTOTELES, which may be used to support the development of medical knowledge bases by clinicians in diverse specialized areas of interest. The advantages evident from applying such a cognitive model are ease of knowledge acquisition, modular construction of the knowledge base, and greater acceptance by clinicians.

  2. Generating Vocabulary Knowledge for At-Risk Middle School Readers: Contrasting Program Effects and Growth Trajectories

    ERIC Educational Resources Information Center

    Lawrence, Joshua F.; Rolland, Rebecca Givens; Branum-Martin, Lee; Snow, Catherine E.

    2014-01-01

    We tested whether urban middle-school students from mostly low-income homes had improved academic vocabulary when they participated in a freely available vocabulary program, Word Generation (WG). To understand how this program may support students at risk for long-term reading difficulty, we examined treatment interactions with baseline…

  3. The Role of Domain-Specific Knowledge in Generative Reasoning about Complicated Multileveled Phenomena

    ERIC Educational Resources Information Center

    Duncan, Ravit Golan

    2007-01-01

    Promoting the ability to reason generatively about novel phenomena and problems students may encounter in their everyday lives is a major goal of science education. This goal proves to be a formidable challenge in domains, such as molecular genetics, for which the accumulated scientific understandings are daunting in both amount and complexity. To…

  4. Designing a Knowledge Representation Approach for the Generation of Pedagogical Interventions by MTTs

    ERIC Educational Resources Information Center

    Paquette, Luc; Lebeau, Jean-François; Beaulieu, Gabriel; Mayers, André

    2015-01-01

    Model-tracing tutors (MTTs) have proven effective for the tutoring of well-defined tasks, but the pedagogical interventions they produce are limited and usually require the inclusion of pedagogical content, such as text message templates, in the model of the task. The capability to generate pedagogical content would be beneficial to MTT…

  5. Stories as Knowledge: Bringing the Lived Experience of First-Generation College Students into the Academy

    ERIC Educational Resources Information Center

    Jehangir, Rashne

    2010-01-01

    This longitudinal study of first-generation, low-income students examines the impact of their participation in a multicultural learning community (MLC) designed to challenge the isolation and marginalization they experience at a large, predominantly White research university. The MLC employed multicultural curriculum and critical pedagogy to bring…

  6. Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris

    2010-01-01

    Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.

  7. Virk: an active learning-based system for bootstrapping knowledge base development in the neurosciences.

    PubMed

    Ambert, Kyle H; Cohen, Aaron M; Burns, Gully A P C; Boudreau, Eilis; Sonmez, Kemal

    2013-01-01

    The frequency and volume of newly-published scientific literature is quickly making manual maintenance of publicly-available databases of primary data unrealistic and costly. Although machine learning (ML) can be useful for developing automated approaches to identifying scientific publications containing relevant information for a database, developing such tools necessitates manually annotating an unrealistic number of documents. One approach to this problem, active learning (AL), builds classification models by iteratively identifying documents that provide the most information to a classifier. Although this approach has been shown to be effective for related problems, in the context of scientific database curation it falls short. We present Virk, an AL system that, while being trained, simultaneously learns a classification model and identifies documents having information of interest for a knowledge base. Our approach uses a support vector machine (SVM) classifier with input features derived from neuroscience-related publications from the primary literature. Using our approach, we were able to increase the size of the Neuron Registry, a knowledge base of neuron-related information, by 90% in 3 months. Using standard biocuration methods, it would have taken between 1 and 2 years to make the same number of contributions to the Neuron Registry. Here, we describe the system pipeline in detail and evaluate its performance against other approaches to sampling in AL.
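The AL loop the abstract describes can be sketched generically: query the documents the current model is least certain about, have the curator label them, retrain, repeat. The toy threshold "model" below stands in for the paper's SVM; everything in the example (feature encoding, oracle, training rule) is an illustrative assumption, not Virk's actual pipeline.

```python
def uncertainty_sampling(pool, score, batch_size):
    """Rank unlabelled documents by |decision score| (distance from the
    boundary, as an SVM margin would give) and return the least certain."""
    return sorted(pool, key=lambda doc: abs(score(doc)))[:batch_size]

def active_learning_loop(pool, oracle, train, score, rounds, batch_size):
    """Simplified AL curation loop: query the most uncertain documents,
    obtain labels from the curator (oracle), retrain, repeat. The labelled
    set doubles as the harvest of relevant documents for the knowledge base."""
    labelled, model = [], train([])
    for _ in range(rounds):
        for doc in uncertainty_sampling(pool, lambda d: score(model, d), batch_size):
            labelled.append((doc, oracle(doc)))
            pool.remove(doc)
        model = train(labelled)
    return labelled, model

# Toy stand-in for an SVM: documents are scalar "relevance features",
# the model is a threshold, the score is the signed distance to it.
def train(labelled):
    pos = [d for d, y in labelled if y]
    neg = [d for d, y in labelled if not y]
    return (max(neg) + min(pos)) / 2 if pos and neg else 0.0

score = lambda model, d: d - model
oracle = lambda d: int(d > 0)          # the human curator's judgment
pool = [-0.9, -0.5, -0.1, 0.1, 0.5, 0.9]
labelled, model = active_learning_loop(pool, oracle, train, score, rounds=2, batch_size=2)
```

The key property the paper exploits is visible even here: the documents nearest the decision boundary are queried first, so curator effort concentrates exactly where labels are most informative.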

  8. Problem-Based Knowledge Access: Useful Design Principles for Clinical Hypertexts

    PubMed Central

    Estey, Greg; Oliver, Diane E.; Chueh, Henry C.; Levinson, John R.; Zielstorff, Rita D.; Barnett, G. Octo

    1990-01-01

    This paper describes experience with the development of two systems for clinical knowledge access. Six design principles for implementing such systems in hypertext or related media are offered under the heading “Problem-based Knowledge Access”. These design principles concern: contextual indexing, categories of knowledge access, constraints on hypertext linking, graphical indexing, maintaining the illusion of “no navigation” and problem-based content hierarchies.

  9. Metadata-based generation and management of knowledgebases from molecular biological databases.

    PubMed

    Eccles, J R; Saldanha, J W

    1990-06-01

    Present-day knowledge-based systems (or expert systems) and databases constitute 'islands of computing' with little or no connection to each other. The use of software to provide a communication channel between the two, and to integrate their separate functions, is particularly attractive in certain data-rich domains where there are already pre-existing database systems containing the data required by the relevant knowledge-based system. Our evolving program, GENPRO, provides such a communication channel. The original methodology has been extended to provide interactive Prolog clause input with syntactic and semantic verification. This enables automatic generation of clauses from the source database, together with complete management of subsequent interfacing to the specified knowledge-based system. The particular data-rich domain used in this paper is protein structure, where processes which require reasoning (modelled by knowledge-based systems), such as the inference of protein topology, protein model-building and protein structure prediction, often require large amounts of raw data (i.e., facts about particular proteins) in the form of logic programming ground clauses. These are generated in the proper format by use of the concept of metadata. PMID:2397635
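The core step the abstract describes, generating Prolog ground clauses from database rows using a metadata description of the relation, can be sketched as follows. The predicate name, field list, and quoting rule here are illustrative assumptions, not GENPRO's actual metadata format, and the quoting is a simplification of full Prolog atom syntax.

```python
def to_prolog_clauses(metadata, records):
    """Turn database rows into Prolog ground clauses. The metadata gives
    the predicate name and the ordered fields of the source relation;
    string values become atoms, quoted when not purely alphanumeric."""
    def term(value):
        if isinstance(value, (int, float)):
            return str(value)
        v = str(value).lower()
        return v if v.isalnum() else f"'{v}'"

    pred = metadata["predicate"]
    fields = metadata["fields"]
    return [f"{pred}({', '.join(term(r[f]) for f in fields)})." for r in records]

# Hypothetical protein-structure relation, in the spirit of the paper's domain.
meta = {"predicate": "helix", "fields": ["protein", "start", "end"]}
rows = [{"protein": "myoglobin", "start": 3, "end": 18},
        {"protein": "lysozyme", "start": 5, "end": 14}]
clauses = to_prolog_clauses(meta, rows)
# helix(myoglobin, 3, 18).
# helix(lysozyme, 5, 14).
```

Because the metadata, not the code, names the predicate and fields, the same generator serves any source relation, which is the point of the metadata-driven approach.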

  10. Knowledge-based real-space explorations for low-resolution structure determination.

    PubMed

    Furnham, Nicholas; Doré, Andrew S; Chirgadze, Dimitri Y; de Bakker, Paul I W; Depristo, Mark A; Blundell, Tom L

    2006-08-01

    The accurate and effective interpretation of low-resolution data in X-ray crystallography is becoming increasingly important as structural initiatives turn toward large multiprotein complexes. Substantial challenges remain due to the poor information content and ambiguity in the interpretation of electron density maps at low resolution. Here, we describe a semiautomated procedure that employs a restraint-based conformational search algorithm, RAPPER, to produce a starting model for the structure determination of ligase interacting factor 1 in complex with a fragment of DNA ligase IV at low resolution. The combined use of experimental data and a priori knowledge of protein structure enabled us not only to generate an all-atom model but also to reaffirm the inferred sequence registry. This approach provides a means to quickly extract useful information from experimental data that would otherwise be discarded and to take into account the uncertainty in the interpretation, an overriding issue for low-resolution data.

  11. Towards knowledge-based systems in clinical practice: development of an integrated clinical information and knowledge management support system.

    PubMed

    Kalogeropoulos, Dimitris A; Carson, Ewart R; Collinson, Paul O

    2003-09-01

    Given that clinicians presented with identical clinical information will act in different ways, there is a need to introduce into routine clinical practice methods and tools to support the scientific homogeneity and accountability of healthcare decisions and actions. The benefits expected from such action include an overall reduction in cost, improved quality of care, and greater patient and public satisfaction. Computer-based medical data processing has yielded methods and tools for managing the task away from the hospital management level and closer to the desired disease and patient management level. To this end, advanced applications of information and disease process modelling technologies have already demonstrated an ability to significantly augment clinical decision making as a by-product. The widespread acceptance of evidence-based medicine as the basis of cost-conscious and concurrently quality-accountable clinical practice suffices as evidence supporting this claim. Electronic libraries are one step towards an online status of this key healthcare delivery quality control environment. Nonetheless, to date, the underlying information and knowledge management technologies have failed to be integrated into any form of pragmatic or marketable online and real-time clinical decision making tool. One of the main obstacles that needs to be overcome is the development of systems that treat both information and knowledge as clinical objects with the same modelling requirements. This paper describes the development of such a system in the form of an intelligent clinical information management system: a system which, at the most fundamental level of clinical decision support, facilitates both the organised acquisition of clinical information and knowledge and provides a test-bed for the development and evaluation of knowledge-based decision support functions.

  12. Learning from Evolution: Thellungiella Generates New Knowledge on Essential and Critical Components of Abiotic Stress Tolerance in Plants

    PubMed Central

    Amtmann, Anna

    2009-01-01

    Thellungiella salsuginea (halophila) is a close relative of Arabidopsis thaliana but, unlike A. thaliana, it grows well in extreme conditions of cold, salt, and drought as well as nitrogen limitation. Over the last decade, many laboratories have started to use Thellungiella to investigate the physiological, metabolic, and molecular mechanisms of abiotic stress tolerance in plants, and new knowledge has been gained in particular with respect to ion transport and gene expression. The advantage of Thellungiella over other extremophile model plants is that it can be directly compared with Arabidopsis, and therefore generate information on both essential and critical components of stress tolerance. Thellungiella research is supported by a growing body of technical resources comprising physiological and molecular protocols, ecotype collections, expressed sequence tags, cDNA-libraries, microarrays, and a pending genome sequence. This review summarizes the current state of knowledge on Thellungiella and re-evaluates its usefulness as a model for research into plant stress tolerance. PMID:19529830

  13. A Discourse Based Approach to the Language Documentation of Local Ecological Knowledge

    ERIC Educational Resources Information Center

    Odango, Emerson Lopez

    2016-01-01

    This paper proposes a discourse-based approach to the language documentation of local ecological knowledge (LEK). The knowledge, skills, beliefs, cultural worldviews, and ideologies that shape the way a community interacts with its environment can be examined through the discourse in which LEK emerges. 'Discourse-based' refers to two components:…

  14. Preparing Oral Examinations of Mathematical Domains with the Help of a Knowledge-Based Dialogue System.

    ERIC Educational Resources Information Center

    Schmidt, Peter

    A conception of discussing mathematical material in the domain of calculus is outlined. Applications include that university students work at their knowledge and prepare for their oral examinations by utilizing the dialog system. The conception is based upon three pillars. One central pillar is a knowledge base containing the collections of…

  15. The Knowledge Base as an Extension of Distance Learning Reference Service

    ERIC Educational Resources Information Center

    Casey, Anne Marie

    2012-01-01

    This study explores knowledge bases as extension of reference services for distance learners. Through a survey and follow-up interviews with distance learning librarians, this paper discusses their interest in creating and maintaining a knowledge base as a resource for reference services to distance learners. It also investigates their perceptions…

  16. Mapping and Managing Knowledge and Information in Resource-Based Learning

    ERIC Educational Resources Information Center

    Tergan, Sigmar-Olaf; Graber, Wolfgang; Neumann, Anja

    2006-01-01

    In resource-based learning scenarios, students are often overwhelmed by the complexity of task-relevant knowledge and information. Techniques for the external interactive representation of individual knowledge in graphical format may help them to cope with complex problem situations. Advanced computer-based concept-mapping tools have the potential…

  17. GUIDON-WATCH: A Graphic Interface for Viewing a Knowledge-Based System. Technical Report #14.

    ERIC Educational Resources Information Center

    Richer, Mark H.; Clancey, William J.

    This paper describes GUIDON-WATCH, a graphic interface that uses multiple windows and a mouse to allow a student to browse a knowledge base and view reasoning processes during diagnostic problem solving. The GUIDON project at Stanford University is investigating how knowledge-based systems can provide the basis for teaching programs, and this…

  18. Public School Teachers' Knowledge, Perception, and Implementation of Brain-Based Learning Practices

    ERIC Educational Resources Information Center

    Wachob, David A.

    2012-01-01

    The purpose of this study was to determine K-12 teachers' knowledge, beliefs, and practices of brain-based learning strategies in western Pennsylvania schools. The following five research questions were explored: (a) What is the extent of knowledge K-12 public school teachers have about the indicators of brain-based learning and Brain Gym?;…

  19. A comparison of LISP and MUMPS as implementation languages for knowledge-based systems.

    PubMed

    Curtis, A C

    1984-10-01

    Major components of knowledge-based systems are summarized, along with the programming language features generally useful in their implementation. LISP and MUMPS are briefly described and compared as vehicles for building knowledge-based systems. The paper concludes with suggestions for extensions to MUMPS that might increase its usefulness in artificial intelligence applications without affecting the essential nature of the language.

  20. Comparison of LISP and MUMPS as implementation languages for knowledge-based systems

    SciTech Connect

    Curtis, A.C.

    1984-01-01

    Major components of knowledge-based systems are summarized, along with the programming language features generally useful in their implementation. LISP and MUMPS are briefly described and compared as vehicles for building knowledge-based systems. The paper concludes with suggestions for extensions to MUMPS which might increase its usefulness in artificial intelligence applications without affecting the essential nature of the language. 8 references.