Sample records for large knowledge-based systems

  1. Knowledge representation by connection matrices: A method for the on-board implementation of large expert systems

    NASA Technical Reports Server (NTRS)

    Kellner, A.

    1987-01-01

    Extremely large knowledge sources and efficient knowledge access characterizing future real-life artificial intelligence applications represent crucial requirements for on-board artificial intelligence systems due to obvious computer time and storage constraints on spacecraft. A type of knowledge representation and corresponding reasoning mechanism is proposed which is particularly suited for the efficient processing of such large knowledge bases in expert systems.
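
    The record gives no implementation details, so the following is only a minimal sketch of the general idea, with an invented four-concept knowledge base: associations are stored as a 0/1 connection matrix, and one reasoning step is a matrix-vector product that propagates activation.

    ```python
    import numpy as np

    # Hypothetical 4-concept knowledge base; entry [i, j] = 1 means
    # "concept i supports/implies concept j".
    concepts = ["low_voltage", "battery_fault", "switch_to_backup", "log_event"]
    C = np.array([
        [0, 1, 0, 0],   # low_voltage      -> battery_fault
        [0, 0, 1, 1],   # battery_fault    -> switch_to_backup, log_event
        [0, 0, 0, 1],   # switch_to_backup -> log_event
        [0, 0, 0, 0],
    ])

    def infer(active, steps=3):
        """Propagate activation through the connection matrix."""
        a = active.copy()
        for _ in range(steps):
            a = np.clip(a + a @ C, 0, 1)  # one inference step per product
        return a

    start = np.array([1, 0, 0, 0])        # observe low_voltage
    result = infer(start)
    print([c for c, v in zip(concepts, result) if v])
    # -> ['low_voltage', 'battery_fault', 'switch_to_backup', 'log_event']
    ```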

  2. Small Knowledge-Based Systems in Education and Training: Something New Under the Sun.

    ERIC Educational Resources Information Center

    Wilson, Brent G.; Welsh, Jack R.

    1986-01-01

    Discusses artificial intelligence, robotics, natural language processing, and expert or knowledge-based systems research; examines two large expert systems, MYCIN and XCON; and reviews the resources required to build large expert systems and affordable smaller systems (intelligent job aids) for training. Expert system vendors and products are…

  3. DataHub knowledge based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.

    1991-01-01

    Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.

  4. Terminological reference of a knowledge-based system: the data dictionary.

    PubMed

    Stausberg, J; Wormek, A; Kraut, U

    1995-01-01

    The development of open and integrated knowledge bases makes new demands on the definition of the terminology used. The definitions should be maintained in a data dictionary separated from the knowledge base. Within work on a reference model of medical knowledge, a data dictionary has been developed and used in different applications: a term definition shell, a documentation tool and a knowledge base. The data dictionary includes the part of the terminology that is largely independent of any particular knowledge model. For that reason, the data dictionary can be used as a basis for integrating knowledge bases into information systems, for knowledge sharing and reuse, and for modular development of knowledge-based systems.

  5. A Knowledge-Based System Developer for aerospace applications

    NASA Technical Reports Server (NTRS)

    Shi, George Z.; Wu, Kewei; Fensky, Connie S.; Lo, Ching F.

    1993-01-01

    A prototype Knowledge-Based System Developer (KBSD) has been developed for aerospace applications by utilizing artificial intelligence technology. The KBSD acquires knowledge directly from domain experts through a graphical interface and then builds expert systems from that knowledge. This raises the state of the art of knowledge acquisition/expert system technology to a new level by lessening the need for skilled knowledge engineers. The feasibility, applicability, and efficiency of the proposed concept were established, justifying a continuation that would develop the prototype into a full-scale general-purpose knowledge-based system developer. The KBSD has great commercial potential. It will provide a marketable software shell which alleviates the need for knowledge engineers and increases productivity in the workplace. The KBSD will therefore make knowledge-based systems available to a large portion of industry.

  6. Assessment of Teachers' Reactions to a Knowledge- and Skills-Based Pay Structure at an International School

    ERIC Educational Resources Information Center

    Lowe, Joel Courtney

    2013-01-01

    This study explores teachers' reactions to a knowledge- and skills-based pay (KSBP) system implemented in a large international school. Such systems are designed to set teacher compensation based on demonstrated professional knowledge and skills as opposed to the traditional scale based on years of experience and degrees attained. This study fills…

  7. Knowledge-based reasoning in the Paladin tactical decision generation system

    NASA Technical Reports Server (NTRS)

    Chappell, Alan R.

    1993-01-01

    A real-time tactical decision generation system for air combat engagements, Paladin, has been developed. A pilot's job in air combat includes tasks that are largely symbolic. These symbolic tasks are generally performed through the application of experience and training (i.e., knowledge) gathered over years of flying a fighter aircraft. Two such tasks, situation assessment and throttle control, are identified and broken out in Paladin to be handled by specialized knowledge-based systems. Knowledge pertaining to these tasks is encoded into rule bases to provide the foundation for decisions. Paladin uses a custom-built inference engine and a partitioned rule-base structure to produce these symbolic results in real time. This paper provides an overview of knowledge-based reasoning systems as a subset of rule-based systems. The knowledge used by Paladin in generating results, as well as the system design for real-time execution, is discussed.
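
    Paladin's rule syntax and inference engine are not shown in the record; the sketch below illustrates only the partitioned rule-base idea it describes, with invented air-combat rules: grouping rules by task lets the engine match a small partition instead of the whole rule base.

    ```python
    # Hedged sketch of a partitioned rule base: each partition holds rules
    # for one symbolic task, so the engine scans only the relevant subset.
    rules = {
        "situation_assessment": [
            (lambda s: s["range_nm"] < 1.0 and s["angle_off"] < 30,
             ("situation", "offensive")),
            (lambda s: s["range_nm"] < 1.0 and s["angle_off"] >= 150,
             ("situation", "defensive")),
        ],
        "throttle_control": [
            (lambda s: s.get("situation") == "defensive",
             ("throttle", "max_afterburner")),
            (lambda s: s.get("situation") == "offensive" and s["closure_kts"] > 100,
             ("throttle", "reduce")),
        ],
    }

    def run_partition(state, task):
        """Forward-chain over one partition only (cheap enough for real time)."""
        for condition, (key, value) in rules[task]:
            if condition(state):
                state[key] = value
                break  # first matching rule wins in this sketch
        return state

    state = {"range_nm": 0.6, "angle_off": 170, "closure_kts": 50}
    state = run_partition(state, "situation_assessment")
    state = run_partition(state, "throttle_control")
    print(state)  # situation: defensive, throttle: max_afterburner
    ```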

  8. Space shuttle main engine anomaly data and inductive knowledge based systems: Automated corporate expertise

    NASA Technical Reports Server (NTRS)

    Modesitt, Kenneth L.

    1987-01-01

    Progress is reported on the development of SCOTTY, an expert knowledge-based system to automate the analysis procedure following test firings of the Space Shuttle Main Engine (SSME). The integration of a large-scale relational data base system, a computer graphics interface for experts and end-user engineers, potential extension of the system to flight engines, application of the system for training of newly-hired engineers, technology transfer to other engines, and the essential qualities of good software engineering practices for building expert knowledge-based systems are among the topics discussed.

  9. Case-based tutoring from a medical knowledge base.

    PubMed

    Chin, H L; Cooper, G F

    1989-01-01

    The past decade has seen the emergence of programs that make use of large knowledge bases to assist physicians in diagnosis within the general field of internal medicine. One such program, Internist-I, contains knowledge about over 600 diseases, covering a significant proportion of internal medicine. This paper describes the process of converting a subset of this knowledge base--in the area of cardiovascular diseases--into a probabilistic format, and the use of this resulting knowledge base to teach medical diagnostic knowledge. The system (called KBSimulator--for Knowledge-Based patient Simulator) generates simulated patient cases and uses these cases as a focal point from which to teach medical knowledge. This project demonstrates the feasibility of building an intelligent, flexible instructional system that uses a knowledge base constructed primarily for medical diagnosis.
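
    The record does not give the probabilistic format, so the following is a hypothetical minimal version of case generation: choose a disease, then sample each finding from an assumed conditional probability table; the invented numbers stand in for the converted Internist-I knowledge.

    ```python
    import random

    # Hypothetical disease -> P(finding | disease) table, standing in for
    # the probabilistic reformatting of the cardiovascular subset.
    knowledge_base = {
        "aortic_stenosis":      {"systolic_murmur": 0.9, "syncope": 0.4, "dyspnea": 0.6},
        "mitral_regurgitation": {"systolic_murmur": 0.8, "syncope": 0.1, "dyspnea": 0.5},
    }

    def simulate_case(rng):
        disease = rng.choice(list(knowledge_base))
        findings = {f: rng.random() < p
                    for f, p in knowledge_base[disease].items()}
        return disease, findings

    rng = random.Random(42)
    disease, findings = simulate_case(rng)
    print(disease, findings)  # the tutor would show findings, hide the disease
    ```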

  10. Automated knowledge base development from CAD/CAE databases

    NASA Technical Reports Server (NTRS)

    Wright, R. Glenn; Blanchard, Mary

    1988-01-01

    Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development process through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include the development of a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge based systems that are inherently free of error.

  11. Experiments in Knowledge Refinement for a Large Rule-Based System

    DTIC Science & Technology

    1993-08-01

    Empirical analysis to refine expert system knowledge bases. Artificial Intelligence, 22:23-48, 1984. ...The Addison-Wesley series in artificial intelligence. Addison-Wesley, Reading, Massachusetts, 1981. Cooke, 1991: Roger M. Cooke. Experts in... ...ment for classification systems. Artificial Intelligence, 35:197-226, 1988. Overall, we believe that it will be possible to build a heuristic system

  12. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs.

    PubMed

    McCoy, A B; Wright, A; Krousel-Wood, M; Thomas, E J; McCoy, J A; Sittig, D F

    2015-01-01

    Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes.
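
    A rough sketch of the pipeline as described, under one plausible reading of the metrics (link frequency = clinician-entered links per pair; link ratio = links divided by co-occurrences); all records, the threshold, and the gold standard below are invented.

    ```python
    from collections import Counter

    # Hypothetical EHR extract: (problem, medication, clinician_linked) events.
    events = [
        ("hypertension", "lisinopril", True),
        ("hypertension", "lisinopril", True),
        ("hypertension", "metformin", False),
        ("diabetes",     "metformin", True),
        ("diabetes",     "metformin", True),
        ("diabetes",     "lisinopril", False),
    ]

    link_freq = Counter((p, m) for p, m, linked in events if linked)
    co_occur  = Counter((p, m) for p, m, _ in events)

    THRESHOLD = 0.5  # in the study, the cutoff was set by clinician review
    knowledge_base = {
        pair: link_freq[pair] / co_occur[pair]
        for pair in co_occur
        if link_freq[pair] / co_occur[pair] >= THRESHOLD
    }
    print(knowledge_base)

    # Evaluation against a gold standard, as in the study:
    gold = {("hypertension", "lisinopril"), ("diabetes", "metformin"),
            ("diabetes", "insulin")}
    found = set(knowledge_base)
    recall    = len(found & gold) / len(gold)
    precision = len(found & gold) / len(found)
    print(f"recall={recall:.1%} precision={precision:.1%}")
    ```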

  13. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs

    PubMed Central

    Wright, A.; Krousel-Wood, M.; Thomas, E. J.; McCoy, J. A.; Sittig, D. F.

    2015-01-01

    Summary. Background: Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. Objective: We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. Methods: We first retrieved medications and problems entered in the electronic health record by clinicians during routine care during a six month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. Results: The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. Conclusions: We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge, to compare crowdsourcing with other approaches, and to evaluate if incorporating the knowledge into electronic health records improves patient outcomes. PMID:26171079

  14. Engineering youth service system infrastructure: Hawaii's continued efforts at large-scale implementation through knowledge management strategies.

    PubMed

    Nakamura, Brad J; Mueller, Charles W; Higa-McMillan, Charmaine; Okamura, Kelsie H; Chang, Jaime P; Slavin, Lesley; Shimabukuro, Scott

    2014-01-01

    Hawaii's Child and Adolescent Mental Health Division provides a unique illustration of a youth public mental health system with a long and successful history of large-scale quality improvement initiatives. Many advances are linked to flexibly organizing and applying knowledge gained from the scientific literature and move beyond installing a limited number of brand-named treatment approaches that might be directly relevant only to a small handful of system youth. This article takes a knowledge-to-action perspective and outlines five knowledge management strategies currently under way in Hawaii. Each strategy represents one component of a larger coordinated effort at engineering a service system focused on delivering both brand-named treatment approaches and complementary strategies informed by the evidence base. The five knowledge management examples are (a) a set of modular-based professional training activities for currently practicing therapists, (b) an outreach initiative for supporting youth evidence-based practices training at Hawaii's mental health-related professional programs, (c) an effort to increase consumer knowledge of and demand for youth evidence-based practices, (d) a practice and progress agency performance feedback system, and (e) a sampling of system-level research studies focused on understanding treatment as usual. We end by outlining a small set of lessons learned and a longer term vision for embedding these efforts into the system's infrastructure.

  15. Reasoning Mind Genie 2: An Intelligent Tutoring System as a Vehicle for International Transfer of Instructional Methods in Mathematics

    ERIC Educational Resources Information Center

    Khachatryan, George A.; Romashov, Andrey V.; Khachatryan, Alexander R.; Gaudino, Steven J.; Khachatryan, Julia M.; Guarian, Konstantin R.; Yufa, Nataliya V.

    2014-01-01

    Effective mathematics teachers have a large body of professional knowledge, which is largely undocumented and shared by teachers working in a given country's education system. The volume and cultural nature of this knowledge make it particularly challenging to share curricula and instructional methods between countries. Thus, approaches based on…

  16. KBGIS-II: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, Terence; Peuquet, Donna; Menon, Sudhakar; Agarwal, Pankaj

    1986-01-01

    The architecture and working of a recently implemented Knowledge-Based Geographic Information System (KBGIS-II), designed to satisfy several general criteria for the GIS, is described. The system has four major functions including query-answering, learning and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial object language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is performing all its designated tasks successfully. Future reports will relate performance characteristics of the system.
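
    The record names quadtrees but no code; the sketch below shows the pruning idea on a toy occupancy grid: uniform quadrants collapse to single nodes, so a search can skip whole regions that cannot contain the target.

    ```python
    # Hedged sketch (not the KBGIS-II code): a quadtree over a 2^k x 2^k grid,
    # with a search that prunes whole quadrants that cannot contain the target.
    def build(grid, x, y, size):
        """Return 'full', 'empty', or a 4-tuple of child quadrants."""
        vals = {grid[j][i] for j in range(y, y + size) for i in range(x, x + size)}
        if vals == {1}: return "full"
        if vals == {0}: return "empty"
        h = size // 2
        return (build(grid, x, y, h),     build(grid, x + h, y, h),
                build(grid, x, y + h, h), build(grid, x + h, y + h, h))

    def count_full(node, size):
        """Count occupied cells without visiting empty quadrants."""
        if node == "empty": return 0
        if node == "full":  return size * size
        h = size // 2
        return sum(count_full(child, h) for child in node)

    grid = [[1, 1, 0, 0],
            [1, 1, 0, 0],
            [0, 0, 0, 1],
            [0, 0, 1, 1]]
    tree = build(grid, 0, 0, 4)
    print(count_full(tree, 4))  # -> 7
    ```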

  17. Using Topdown Conceptual Analysis To Accelerate The Learning Of New Domains For Knowledge Engineers & Domain Experts

    NASA Astrophysics Data System (ADS)

    Xuan, Albert L.; Shinghal, Rajjan

    1989-03-01

    As the need for knowledge-based systems increases, an increasing number of domain experts are becoming interested in taking a more active part in the building of knowledge-based systems. However, such a domain expert often must deal with a large number of unfamiliar terms, concepts, facts, procedures and principles based on different approaches and schools of thought. He (for brevity, we shall use masculine pronouns for both genders) may need the help of a knowledge engineer (KE) in building the knowledge-based system but may encounter a number of problems. For instance, much of the early interaction between him and the knowledge engineer may be spent in educating each other about their separate kinds of expertise. Since the knowledge engineer will usually be ignorant of the knowledge domain while the domain expert (DE) will have little knowledge about knowledge-based systems, a great deal of time will be wasted on these issues as the DE and the KE train each other to the point where a fruitful interaction can occur. In some situations, it may not even be possible for the DE to find a suitable KE to work with because he has no time to train the latter in his domain. This will engender the need for the DE to be more knowledgeable about knowledge-based systems and for the KE to find methods and techniques which will allow them to learn new domains as fast as they can. In any event, it is likely that the process of building knowledge-based systems will be smoother and more efficient if the domain expert is knowledgeable about the methods and techniques of knowledge-based systems building.

  18. Computer Assisted Multi-Center Creation of Medical Knowledge Bases

    PubMed Central

    Giuse, Nunzia Bettinsoli; Giuse, Dario A.; Miller, Randolph A.

    1988-01-01

    Computer programs which support different aspects of medical care have been developed in recent years. Their capabilities range from diagnosis to medical imaging, and include hospital management systems and therapy prescription. In spite of their diversity these systems have one commonality: their reliance on a large body of medical knowledge in computer-readable form. This knowledge enables such programs to draw inferences, validate hypotheses, and in general to perform their intended task. As has been clear to developers of such systems, however, the creation and maintenance of medical knowledge bases are very expensive. Practical and economical difficulties encountered during this long-term process have discouraged most attempts. This paper discusses knowledge base creation and maintenance, with special emphasis on medical applications. We first describe the methods currently used and their limitations. We then present our recent work on developing tools and methodologies which will assist in the process of creating a medical knowledge base. We focus, in particular, on the possibility of multi-center creation of the knowledge base.

  19. An evidential approach to problem solving when a large number of knowledge systems is available

    NASA Technical Reports Server (NTRS)

    Dekorvin, Andre

    1989-01-01

    Some recent problems are no longer formulated in terms of imprecise facts, missing data or inadequate measuring devices. Instead, questions pertaining to knowledge and information itself arise and can be phrased independently of any particular area of knowledge. The problem considered in the present work is how to model a problem solver that is trying to find the answer to some query. The problem solver has access to a large number of knowledge systems that specialize in diverse features. In this context, feature means an indicator of what the possibilities for the answer are. The knowledge systems should not be accessed more than once, in order to have truly independent sources of information. Moreover, these systems are allowed to run in parallel. Since access might be expensive, it is necessary to construct a management policy for accessing these knowledge systems. To help in the access policy, some control knowledge systems are available. Control knowledge systems have knowledge about the performance parameters status of the knowledge systems. In order to carry out the double goal of estimating what units to access and to answer the given query, diverse pieces of evidence must be fused. The Dempster-Shafer Theory of Evidence is used to pool the knowledge bases.
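
    The abstract names the Dempster-Shafer theory; below is a standard statement of Dempster's rule of combination for two independent sources over a two-hypothesis frame (the mass assignments are invented).

    ```python
    from itertools import product

    # Dempster's rule of combination for two mass functions over the frame
    # {"A", "B"}; frozensets are focal elements, and the whole frame means
    # "unknown". Mass on conflicting intersections is renormalized away.
    def combine(m1, m2):
        combined, conflict = {}, 0.0
        for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2
        # Normalize by the non-conflicting mass (1 - K).
        return {s: v / (1.0 - conflict) for s, v in combined.items()}

    A, B, AB = frozenset("A"), frozenset("B"), frozenset("AB")
    source1 = {A: 0.6, AB: 0.4}          # one knowledge system's evidence
    source2 = {A: 0.3, B: 0.5, AB: 0.2}  # an independent system's evidence
    print(combine(source1, source2))     # A: 0.6, B: ~0.286, unknown: ~0.114
    ```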

  20. Large system change challenges: addressing complex critical issues in linked physical and social domains

    NASA Astrophysics Data System (ADS)

    Waddell, Steve; Cornell, Sarah; Hsueh, Joe; Ozer, Ceren; McLachlan, Milla; Birney, Anna

    2015-04-01

    Most action to address contemporary complex challenges, including the urgent issues of global sustainability, occurs piecemeal and without meaningful guidance from leading complex change knowledge and methods. The potential benefit of using such knowledge is greater efficacy of effort and investment. However, this knowledge and its associated tools and methods are under-utilized because understanding about them is low, fragmented between diverse knowledge traditions, and often requires shifts in mindsets and skills from expert-led to participant-based action. We have been engaged in diverse action-oriented research efforts in Large System Change for sustainability. For us, "large" systems can be characterized as large-scale systems - up to global - with many components, of many kinds (physical, biological, institutional, cultural/conceptual), operating at multiple levels, driven by multiple forces, and presenting major challenges for people involved. We see change of such systems as complex challenges, in contrast with simple or complicated problems, or chaotic situations. In other words, issues and sub-systems have unclear boundaries, interact with each other, and are often contradictory; dynamics are non-linear; issues are not "controllable", and "solutions" are "emergent" and often paradoxical. Since choices are opportunity-, power- and value-driven, these social, institutional and cultural factors need to be made explicit in any actionable theory of change. Our emerging network is sharing and building a knowledge base of experience, heuristics, and theories of change from multiple disciplines and practice domains. We will present our views on focal issues for the development of the field of large system change, which include processes of goal-setting and alignment; leverage of systemic transitions and transformation; and the role of choice in influencing critical change processes, when only some sub-systems or levels of the system behave in purposeful ways, while others are undeniably and unavoidably deterministic.

  1. Expert systems and simulation models; Proceedings of the Seminar, Tucson, AZ, November 18, 19, 1985

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The seminar presents papers on modeling and simulation methodology, artificial intelligence and expert systems, environments for simulation/expert system development, and methodology for simulation/expert system development. Particular attention is given to simulation modeling concepts and their representation, modular hierarchical model specification, knowledge representation, and rule-based diagnostic expert system development. Other topics include the combination of symbolic and discrete event simulation, real time inferencing, and the management of large knowledge-based simulation projects.

  2. An application of object-oriented knowledge representation to engineering expert systems

    NASA Technical Reports Server (NTRS)

    Logie, D. S.; Kamil, H.; Umaretiya, J. R.

    1990-01-01

    The paper describes an object-oriented knowledge representation and its application to engineering expert systems. The object-oriented approach promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects and organized by defining relationships between the objects. An Object Representation Language (ORL) was implemented as a tool for building and manipulating the object base. Rule-based knowledge representation is then used to simulate engineering design reasoning. Using a common object base, very large expert systems can be developed, comprised of small, individually processed, rule sets. The integration of these two schemes makes it easier to develop practical engineering expert systems. The general approach to applying this technology to the domain of the finite element analysis, design, and optimization of aerospace structures is discussed.

  3. Automated Induction Of Rule-Based Neural Networks

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J.; Goodman, Rodney M.

    1994-01-01

    Prototype expert systems, implemented in software and functionally equivalent to neural networks, are set up automatically and placed into operation within minutes, following an information-theoretic approach to the automated acquisition of knowledge from large example databases. The approach is based largely on the use of the ITRULE computer program.
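
    Smyth and Goodman's ITRULE work ranks candidate rules by an information-theoretic goodness measure; the sketch below computes the J-measure usually associated with it. Treat the exact form and the example probabilities as assumptions, not a transcription of ITRULE.

    ```python
    from math import log2

    def j_measure(p_y, p_x, p_x_given_y):
        """Information content of the rule 'if Y=y then X=x' (bits)."""
        def term(p, q):
            return 0.0 if p == 0 else p * log2(p / q)
        # Cross-entropy between posterior and prior of X, weighted by p(y).
        return p_y * (term(p_x_given_y, p_x) + term(1 - p_x_given_y, 1 - p_x))

    # Rank two hypothetical rules induced from a database:
    rules = {
        "if humid then rain": j_measure(p_y=0.4, p_x=0.3, p_x_given_y=0.7),
        "if windy then rain": j_measure(p_y=0.5, p_x=0.3, p_x_given_y=0.35),
    }
    for rule, score in sorted(rules.items(), key=lambda kv: -kv[1]):
        print(f"{rule}: {score:.3f} bits")
    ```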

  4. KBGIS-2: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, T.; Peuquet, D.; Menon, S.; Agarwal, P.

    1986-01-01

    The architecture and working of a recently implemented knowledge-based geographic information system (KBGIS-2) that was designed to satisfy several general criteria for the geographic information system are described. The system has four major functions that include query-answering, learning, and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial objects language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is currently performing all its designated tasks successfully, although implemented on inadequate hardware. Future reports will detail the performance characteristics of the system, and various new extensions are planned in order to enhance the power of KBGIS-2.

  5. AI in medicine on its way from knowledge-intensive to data-intensive systems.

    PubMed

    Horn, W

    2001-08-01

    The last 20 years of research and development in the field of artificial intelligence in medicine (AIM) show a path from knowledge-intensive systems, which try to capture the essential knowledge of experts in a knowledge-based system, to the data-intensive systems available today. Nowadays enormous amounts of information are accessible electronically. Large datasets are collected by continuously monitoring physiological parameters of patients. Knowledge-based systems are needed to make use of all the data available and to help us cope with the information explosion. In addition, temporal data analysis and intelligent information visualization can help us get a summarized view of the change over time of clinical parameters. Integrating AIM modules into the daily-routine software environment of our care providers gives us a great chance of maintaining and improving the quality of care.

  6. An overview of expert systems. [artificial intelligence

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An expert system is defined and its basic structure is discussed. The knowledge base, the inference engine, and uses of expert systems are discussed. Architecture is considered, including choice of solution direction, reasoning in the presence of uncertainty, searching small and large search spaces, handling large search spaces by transforming them and by developing alternative or additional spaces, and dealing with time. Existing expert systems are reviewed. Tools for building such systems, construction, and knowledge acquisition and learning are discussed. Centers of research and funding sources are listed. The state-of-the-art, current problems, required research, and future trends are summarized.

  7. Knowledge based systems for intelligent robotics

    NASA Technical Reports Server (NTRS)

    Rajaram, N. S.

    1982-01-01

    It is pointed out that the construction of large space platforms, such as space stations, has to be carried out in the outer space environment. As it is extremely expensive to support human workers in space for long periods, the only feasible solution appears to be the development and deployment of highly capable robots for most of the tasks. Robots for space applications will have to possess characteristics which are very different from those needed by robots in industry. The present investigation is concerned with the needs of space robotics and the technologies which can be of assistance in meeting these needs, giving particular attention to knowledge bases. 'Intelligent' robots are required for the solution of the problems that arise. The collection of facts and rules needed for accomplishing such solutions forms the 'knowledge base' of the system.

  8. Knowledge-based machine indexing from natural language text: Knowledge base design, development, and maintenance

    NASA Technical Reports Server (NTRS)

    Genuardi, Michael T.

    1993-01-01

    One strategy for machine-aided indexing (MAI) is to provide a concept-level analysis of the textual elements of documents or document abstracts. In such systems, natural-language phrases are analyzed in order to identify and classify concepts related to a particular subject domain. The overall performance of these MAI systems is largely dependent on the quality and comprehensiveness of their knowledge bases. These knowledge bases function to (1) define the relations between a controlled indexing vocabulary and natural language expressions; (2) provide a simple mechanism for disambiguation and the determination of relevancy; and (3) allow the extension of concept-hierarchical structure to all elements of the knowledge file. After a brief description of the NASA Machine-Aided Indexing system, concerns related to the development and maintenance of MAI knowledge bases are discussed. Particular emphasis is given to statistically-based text analysis tools designed to aid the knowledge base developer. One such tool, the Knowledge Base Building (KBB) program, presents the domain expert with a well-filtered list of synonyms and conceptually-related phrases for each thesaurus concept. Another tool, the Knowledge Base Maintenance (KBM) program, functions to identify areas of the knowledge base affected by changes in the conceptual domain (for example, the addition of a new thesaurus term). An alternate use of the KBM as an aid in thesaurus construction is also discussed.
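
    The NASA knowledge base itself is not reproduced here; the sketch below shows only the first of the three functions listed, with invented phrase-to-concept entries: natural-language trigger phrases are mapped onto a controlled indexing vocabulary.

    ```python
    # Hedged sketch of the core MAI knowledge-base lookup: natural-language
    # phrases mapped to controlled thesaurus concepts (entries are
    # illustrative, not the NASA vocabulary).
    knowledge_base = {
        "heat shield":        "THERMAL PROTECTION",
        "thermal protection": "THERMAL PROTECTION",
        "reentry vehicle":    "REENTRY VEHICLES",
        "solid rocket":       "SOLID PROPELLANT ROCKET ENGINES",
    }

    def index(abstract: str):
        """Return controlled indexing terms whose trigger phrases occur."""
        text = abstract.lower()
        return sorted({concept for phrase, concept in knowledge_base.items()
                       if phrase in text})

    print(index("The reentry vehicle uses an ablative heat shield."))
    # -> ['REENTRY VEHICLES', 'THERMAL PROTECTION']
    ```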

  9. Using a Dialogue System Based on Dialogue Maps for Computer Assisted Second Language Learning

    ERIC Educational Resources Information Center

    Choi, Sung-Kwon; Kwon, Oh-Woog; Kim, Young-Kil; Lee, Yunkeun

    2016-01-01

    In order to use dialogue systems for computer assisted second-language learning systems, one of the difficult issues in such systems is how to construct large-scale dialogue knowledge that matches the dialogue modelling of a dialogue system. This paper describes how we have accomplished the short-term construction of large-scale and…

  10. Capturing flight system test engineering expertise: Lessons learned

    NASA Technical Reports Server (NTRS)

    Woerner, Irene Wong

    1991-01-01

    Within a few years, JPL will be challenged by the most active mission set in history. Concurrently, flight systems are increasingly more complex. Presently, the knowledge to conduct integration and test of spacecraft and large instruments is held by a few key people, each with many years of experience. JPL is in danger of losing a significant amount of this critical expertise, through retirement, during a period when demand for this expertise is rapidly increasing. The most critical issue at hand is to collect and retain this expertise and develop tools that would ensure the ability to successfully perform the integration and test of future spacecraft and large instruments. The proposed solution was to capture and codify a subset of existing knowledge, and to utilize this captured expertise in knowledge-based systems. First-year results and activities planned for the second year of this ongoing effort are described. Topics discussed include lessons learned in knowledge acquisition and elicitation techniques, life-cycle paradigms, and rapid prototyping of a knowledge-based advisor (Spacecraft Test Assistant) and a hypermedia browser (Test Engineering Browser). The prototype Spacecraft Test Assistant supports a subset of integration and test activities for flight systems. Browser is a hypermedia tool that allows users easy perusal of spacecraft test topics. A knowledge acquisition tool called ConceptFinder, which was developed to search through large volumes of data for related concepts, is also described; it is being modified to semi-automate the process of creating hypertext links.

  11. Chapter 1: Biomedical knowledge integration.

    PubMed

    Payne, Philip R O

    2012-01-01

    The modern biomedical research and healthcare delivery domains have seen an unparalleled increase in the rate of innovation and novel technologies over the past several decades. Catalyzed by paradigm-shifting public and private programs focusing upon the formation and delivery of genomic and personalized medicine, the need for high-throughput and integrative approaches to the collection, management, and analysis of heterogeneous data sets has become imperative. This need is particularly pressing in the translational bioinformatics domain, where many fundamental research questions require the integration of large scale, multi-dimensional clinical phenotype and bio-molecular data sets. Modern biomedical informatics theory and practice has demonstrated the distinct benefits associated with the use of knowledge-based systems in such contexts. A knowledge-based system can be defined as an intelligent agent that employs a computationally tractable knowledge base or repository in order to reason upon data in a targeted domain and reproduce expert performance relative to such reasoning operations. The ultimate goal of the design and use of such agents is to increase the reproducibility, scalability, and accessibility of complex reasoning tasks. Examples of the application of knowledge-based systems in biomedicine span a broad spectrum, from the execution of clinical decision support, to epidemiologic surveillance of public data sets for the purposes of detecting emerging infectious diseases, to the discovery of novel hypotheses in large-scale research data sets. In this chapter, we will review the basic theoretical frameworks that define core knowledge types and reasoning operations with particular emphasis on the applicability of such conceptual models within the biomedical domain, and then go on to introduce a number of prototypical data integration requirements and patterns relevant to the conduct of translational bioinformatics that can be addressed via the design and use of knowledge-based systems.

  12. Chapter 1: Biomedical Knowledge Integration

    PubMed Central

    Payne, Philip R. O.

    2012-01-01

    The modern biomedical research and healthcare delivery domains have seen an unparalleled increase in the rate of innovation and novel technologies over the past several decades. Catalyzed by paradigm-shifting public and private programs focusing upon the formation and delivery of genomic and personalized medicine, the need for high-throughput and integrative approaches to the collection, management, and analysis of heterogeneous data sets has become imperative. This need is particularly pressing in the translational bioinformatics domain, where many fundamental research questions require the integration of large scale, multi-dimensional clinical phenotype and bio-molecular data sets. Modern biomedical informatics theory and practice has demonstrated the distinct benefits associated with the use of knowledge-based systems in such contexts. A knowledge-based system can be defined as an intelligent agent that employs a computationally tractable knowledge base or repository in order to reason upon data in a targeted domain and reproduce expert performance relative to such reasoning operations. The ultimate goal of the design and use of such agents is to increase the reproducibility, scalability, and accessibility of complex reasoning tasks. Examples of the application of knowledge-based systems in biomedicine span a broad spectrum, from the execution of clinical decision support, to epidemiologic surveillance of public data sets for the purposes of detecting emerging infectious diseases, to the discovery of novel hypotheses in large-scale research data sets. In this chapter, we will review the basic theoretical frameworks that define core knowledge types and reasoning operations with particular emphasis on the applicability of such conceptual models within the biomedical domain, and then go on to introduce a number of prototypical data integration requirements and patterns relevant to the conduct of translational bioinformatics that can be addressed via the design and use of knowledge-based systems. PMID:23300416

  13. Graphical explanation in an expert system for Space Station Freedom rack integration

    NASA Technical Reports Server (NTRS)

    Craig, F. G.; Cutts, D. E.; Fennel, T. R.; Purves, B.

    1990-01-01

    The rationale and methodology used to incorporate graphics into explanations provided by an expert system for Space Station Freedom rack integration is examined. The rack integration task is typical of a class of constraint satisfaction problems for large programs where expertise from several areas is required. Graphically oriented approaches are used to explain the conclusions made by the system, the knowledge base content, and even at more abstract levels the control strategies employed by the system. The implemented architecture combines hypermedia and inference engine capabilities. The advantages of this architecture include: closer integration of user interface, explanation system, and knowledge base; the ability to embed links to deeper knowledge underlying the compiled knowledge used in the knowledge base; and allowing for more direct control of explanation depth and duration by the user. The graphical techniques employed range from simple static presentation of schematics to dynamic creation of a series of pictures presented motion picture style. User models control the type, amount, and order of information presented.

  14. NASA's online machine aided indexing system

    NASA Technical Reports Server (NTRS)

    Silvester, June P.; Genuardi, Michael T.; Klingbiel, Paul H.

    1993-01-01

    This report describes the NASA Lexical Dictionary, a machine aided indexing system used online at the National Aeronautics and Space Administration's Center for Aerospace Information (CASI). This system is comprised of a text processor that is based on the computational, non-syntactic analysis of input text, and an extensive 'knowledge base' that serves to recognize and translate text-extracted concepts. The structure and function of the various NLD system components are described in detail. Methods used for the development of the knowledge base are discussed. Particular attention is given to a statistically-based text analysis program that provides the knowledge base developer with a list of concept-specific phrases extracted from large textual corpora. Production and quality benefits resulting from the integration of machine aided indexing at CASI are discussed along with a number of secondary applications of NLD-derived systems including on-line spell checking and machine aided lexicography.

  15. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improving mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operators' and planners' experience and is used for generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, which includes the correction of given violations. The method was demonstrated on a large-scale power system discussed in the paper.
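
    The paper's actual rules are not given; the fragment below is an invented illustration of the selection layer: symbolic rules map planning-problem features to an optimization algorithm and its parameters.

    ```python
    # Hedged sketch of the rule layer: symbolic rules pick a solver and
    # settings from planning-problem features (names are illustrative).
    def select_optimizer(problem):
        if problem["nonlinear"] and problem["discrete_vars"]:
            return {"algorithm": "branch_and_bound", "relaxation": "nlp"}
        if problem["nonlinear"]:
            return {"algorithm": "sqp", "tolerance": 1e-6}
        if problem["variables"] > 10_000:
            return {"algorithm": "interior_point", "sparse": True}
        return {"algorithm": "simplex"}

    plan = {"nonlinear": False, "discrete_vars": False, "variables": 50_000}
    print(select_optimizer(plan))  # large linear case -> interior point, sparse
    ```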

  16. Graph-based real-time fault diagnostics

    NASA Technical Reports Server (NTRS)

    Padalkar, S.; Karsai, G.; Sztipanovits, J.

    1988-01-01

    A real-time fault detection and diagnosis capability is absolutely crucial in the design of large-scale space systems. Some of the existing AI-based fault diagnostic techniques, like expert systems and qualitative modelling, are frequently ill-suited for this purpose. Expert systems are often inadequately structured, difficult to validate, and suffer from knowledge acquisition bottlenecks. Qualitative modelling techniques sometimes generate a large number of failure source alternatives, thus hampering speedy diagnosis. In this paper we present a graph-based technique which is well suited for real-time fault diagnosis, structured knowledge representation and acquisition, and testing and validation. A Hierarchical Fault Model of the system to be diagnosed is developed. At each level of the hierarchy, there exist fault propagation digraphs denoting causal relations between failure modes of subsystems. The edges of such a digraph are weighted with fault propagation time intervals. Efficient and restartable graph algorithms are used for on-line speedy identification of failure source components.
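
    A hedged sketch of the diagnostic idea on an invented system: edges carry [min, max] propagation-time intervals, and a hypothesized source is accepted only if every observed alarm time falls inside the window reachable from it.

    ```python
    # Fault propagation digraph with interval-weighted edges (illustrative
    # system, not the paper's). Alarm times are relative to fault onset.
    edges = {
        "pump":     [("valve", (1, 3)), ("sensor_A", (2, 5))],
        "valve":    [("sensor_B", (1, 2))],
        "sensor_A": [],
        "sensor_B": [],
    }
    alarms = {"sensor_A": 4, "sensor_B": 4}  # component -> observed alarm time

    def reachable_windows(src, t0=0):
        """Earliest/latest failure times reachable from a hypothesized source."""
        windows = {src: (t0, t0)}
        frontier = [src]
        while frontier:
            node = frontier.pop()
            lo, hi = windows[node]
            for nxt, (dmin, dmax) in edges[node]:
                win = (lo + dmin, hi + dmax)
                if windows.get(nxt) != win:
                    windows[nxt] = win
                    frontier.append(nxt)
        return windows

    def consistent_sources():
        out = []
        for src in edges:
            w = reachable_windows(src)
            if all(n in w and w[n][0] <= t <= w[n][1] for n, t in alarms.items()):
                out.append(src)
        return out

    print(consistent_sources())  # -> ['pump']
    ```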

  17. Probabilistic load simulation: Code development status

    NASA Astrophysics Data System (ADS)

    Newell, J. F.; Ho, H.

    1991-05-01

    The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are induced in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge-based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.
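
    The CLS load models and data are not in the record; the sketch below only mirrors the described split, with invented distributions: a knowledge base supplies load-model parameters, and a simulation module draws composite load samples from them.

    ```python
    import random

    # Hedged sketch of the CLS split: a "knowledge base" supplies load models
    # and parameters; a simulation module draws composite load spectra.
    # Distributions and numbers are illustrative, not engine data.
    knowledge_base = {
        "thrust_osc": {"dist": "normal",  "mean": 1.00, "sd": 0.05},
        "thermal":    {"dist": "normal",  "mean": 0.90, "sd": 0.10},
        "vibration":  {"dist": "uniform", "low": 0.70, "high": 1.30},
    }

    def sample_load(component_loads, rng):
        total = 0.0
        for spec in component_loads.values():
            if spec["dist"] == "normal":
                total += rng.gauss(spec["mean"], spec["sd"])
            else:
                total += rng.uniform(spec["low"], spec["high"])
        return total

    rng = random.Random(0)
    samples = sorted(sample_load(knowledge_base, rng) for _ in range(10_000))
    print(f"median={samples[5000]:.3f}  99th pct={samples[9900]:.3f}")
    ```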

  18. XML-based data model and architecture for a knowledge-based grid-enabled problem-solving environment for high-throughput biological imaging.

    PubMed

    Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif

    2008-03-01

    High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services. The first level provides tools for extracting spatiotemporal knowledge from image sets and the second level provides high-level knowledge management and reasoning services. We then present cellular imaging markup language, an extensible markup language-based language for modeling of biological images and representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.
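
    The CIML schema is not reproduced in the record, so the element names below are invented; the sketch shows the general pattern of encoding a spatiotemporal imaging event in XML and reading it back.

    ```python
    import xml.etree.ElementTree as ET

    # Hedged sketch only: an illustrative XML encoding of a spatiotemporal
    # imaging event (element names are invented, not the actual markup).
    doc = """
    <event type="cell_division" t_start="12.5" t_end="14.0">
      <object id="cell_7">
        <location x="104" y="220" frame="25"/>
      </object>
      <object id="cell_7a">
        <location x="101" y="225" frame="28"/>
      </object>
    </event>
    """

    event = ET.fromstring(doc)
    print(event.get("type"), event.get("t_start"))
    for obj in event.findall("object"):
        loc = obj.find("location")
        print(obj.get("id"), (loc.get("x"), loc.get("y")), "frame", loc.get("frame"))
    ```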

  19. Software-engineering challenges of building and deploying reusable problem solvers.

    PubMed

    O'Connor, Martin J; Nyulas, Csongor; Tu, Samson; Buckeridge, David L; Okhmatovskaia, Anna; Musen, Mark A

    2009-11-01

    Problem solving methods (PSMs) are software components that represent and encode reusable algorithms. They can be combined with representations of domain knowledge to produce intelligent application systems. A goal of research on PSMs is to provide principled methods and tools for composing and reusing algorithms in knowledge-based systems. The ultimate objective is to produce libraries of methods that can be easily adapted for use in these systems. Despite the intuitive appeal of PSMs as conceptual building blocks, in practice, these goals are largely unmet. There are no widely available tools for building applications using PSMs and no public libraries of PSMs available for reuse. This paper analyzes some of the reasons for the lack of widespread adoption of PSM techniques and illustrates our analysis by describing our experiences developing a complex, high-throughput software system based on PSM principles. We conclude that many fundamental principles in PSM research are useful for building knowledge-based systems. In particular, the task-method decomposition process, which provides a means for structuring knowledge-based tasks, is a powerful abstraction for building systems of analytic methods. However, despite the power of PSMs in the conceptual modeling of knowledge-based systems, software engineering challenges have been seriously underestimated. The complexity of integrating control knowledge modeled by developers using PSMs with the domain knowledge that they model using ontologies creates a barrier to widespread use of PSM-based systems. Nevertheless, the surge of recent interest in ontologies has led to the production of comprehensive domain ontologies and of robust ontology-authoring tools. These developments present new opportunities to leverage the PSM approach.

  20. Software-engineering challenges of building and deploying reusable problem solvers

    PubMed Central

    O’CONNOR, MARTIN J.; NYULAS, CSONGOR; TU, SAMSON; BUCKERIDGE, DAVID L.; OKHMATOVSKAIA, ANNA; MUSEN, MARK A.

    2012-01-01

    Problem solving methods (PSMs) are software components that represent and encode reusable algorithms. They can be combined with representations of domain knowledge to produce intelligent application systems. A goal of research on PSMs is to provide principled methods and tools for composing and reusing algorithms in knowledge-based systems. The ultimate objective is to produce libraries of methods that can be easily adapted for use in these systems. Despite the intuitive appeal of PSMs as conceptual building blocks, in practice, these goals are largely unmet. There are no widely available tools for building applications using PSMs and no public libraries of PSMs available for reuse. This paper analyzes some of the reasons for the lack of widespread adoption of PSM techniques and illustrates our analysis by describing our experiences developing a complex, high-throughput software system based on PSM principles. We conclude that many fundamental principles in PSM research are useful for building knowledge-based systems. In particular, the task–method decomposition process, which provides a means for structuring knowledge-based tasks, is a powerful abstraction for building systems of analytic methods. However, despite the power of PSMs in the conceptual modeling of knowledge-based systems, software engineering challenges have been seriously underestimated. The complexity of integrating control knowledge modeled by developers using PSMs with the domain knowledge that they model using ontologies creates a barrier to widespread use of PSM-based systems. Nevertheless, the surge of recent interest in ontologies has led to the production of comprehensive domain ontologies and of robust ontology-authoring tools. These developments present new opportunities to leverage the PSM approach. PMID:23565031

  1. Knowledge management: An abstraction of knowledge base and database management systems

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel D.

    1990-01-01

    Artificial intelligence application requirements demand powerful representation capabilities as well as efficiency for real-time domains. Many tools exist, the most prevalent being expert system tools such as ART, KEE, OPS5, and CLIPS. Other tools just emerging from the research environment are truth maintenance systems for representing non-monotonic knowledge, constraint systems, object-oriented programming, and qualitative reasoning. Unfortunately, as many knowledge engineers have experienced, simply applying a tool to an application requires a large amount of effort to bend the application to fit. Much supporting work goes into making the tool integrate effectively. A Knowledge Management Design System (KNOMAD) is described, which is a collection of tools built in layers. The layered architecture provides two major benefits: the ability to flexibly apply only those tools that are necessary for an application, and the ability to keep overhead, and thus inefficiency, to a minimum. KNOMAD is designed to manage many knowledge bases in a distributed environment, providing maximum flexibility and expressivity to the knowledge engineer while also providing support for efficiency.

  2. A knowledge-based, concept-oriented view generation system for clinical data.

    PubMed

    Zeng, Q; Cimino, J J

    2001-04-01

    Information overload is a well-known problem for clinicians who must review large amounts of data in patient records. Concept-oriented views, which organize patient data around clinical concepts such as diagnostic strategies and therapeutic goals, may offer a solution to the problem of information overload. However, although concept-oriented views are desirable, they are difficult to create and maintain. We have developed a general-purpose, knowledge-based approach to the generation of concept-oriented views and have developed a system to test our approach. The system creates concept-oriented views through automated identification of relevant patient data. The knowledge in the system is represented by both a semantic network and rules. The key relevant data identification function is accomplished by a rule-based traversal of the semantic network. This paper focuses on the design and implementation of the system; an evaluation of the system is reported separately.
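
    The semantic network and rules of the actual system are not given; the sketch below illustrates the mechanism with an invented network: a rule set gates which link types may be traversed, and only data mapped to reachable concepts enter the view.

    ```python
    # Hedged sketch: patient data are selected for a concept-oriented view by
    # walking a small semantic network from a target concept along
    # rule-permitted link types (network content is invented).
    network = {
        "diabetes": [("assessed_by", "hba1c"), ("treated_by", "insulin")],
        "insulin":  [("monitored_by", "glucose")],
        "hba1c":    [], "glucose": [],
    }
    traversal_rules = {"assessed_by", "treated_by", "monitored_by"}

    def relevant_concepts(start):
        seen, stack = set(), [start]
        while stack:
            concept = stack.pop()
            if concept in seen:
                continue
            seen.add(concept)
            for link, target in network.get(concept, []):
                if link in traversal_rules:  # rules gate which links to follow
                    stack.append(target)
        return seen

    patient_data = {"hba1c": 8.1, "glucose": 145, "sodium": 139}
    view = {k: v for k, v in patient_data.items()
            if k in relevant_concepts("diabetes")}
    print(view)  # view for 'diabetes': hba1c and glucose only, sodium filtered
    ```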

  3. HSTDEK: Developing a methodology for construction of large-scale, multi-use knowledge bases

    NASA Technical Reports Server (NTRS)

    Freeman, Michael S.

    1987-01-01

    The primary research objective of the Hubble Space Telescope Design/Engineering Knowledgebase (HSTDEK) is to develop a methodology for constructing and maintaining large-scale knowledge bases which can be used to support multiple applications. To ensure the validity of its results, this research is being pursued in the context of a real-world system, the Hubble Space Telescope. The HSTDEK objectives are described in detail. The history and motivation of the project are briefly described. The technical challenges faced by the project are outlined.

  4. Speech recognition: Acoustic phonetic and lexical knowledge representation

    NASA Astrophysics Data System (ADS)

    Zue, V. W.

    1983-02-01

    The purpose of this program is to develop a speech data base facility under which the acoustic characteristics of speech sounds in various contexts can be studied conveniently; investigate the phonological properties of a large lexicon of, say, 10,000 words, and determine to what extent the phonotactic constraints can be utilized in speech recognition; study the acoustic cues that are used to mark word boundaries; develop a test bed in the form of a large-vocabulary, IWR system to study the interactions of acoustic, phonetic and lexical knowledge; and develop a limited continuous speech recognition system with the goal of recognizing any English word from its spelling in order to assess the interactions of higher-level knowledge sources.
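
    As a toy stand-in for the phonotactic-constraint idea (real systems work from phonetic classes, not spelling): coding words into broad consonant/vowel patterns lets a coarse acoustic hypothesis prune most of the lexicon before detailed matching.

    ```python
    VOWELS = set("aeiou")

    def broad_class(word):
        """Map a spelling to a crude consonant/vowel pattern (toy stand-in
        for a broad phonetic classification)."""
        return "".join("V" if ch in VOWELS else "C" for ch in word)

    lexicon = ["cat", "dog", "tree", "data", "speech", "ran"]
    observed_pattern = "CVC"  # what a coarse acoustic front end might yield
    candidates = [w for w in lexicon if broad_class(w) == observed_pattern]
    print(candidates)  # -> ['cat', 'dog', 'ran']: constraints prune the search
    ```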

  5. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge-based agents (KBAs) which engage in a collaborative problem-solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code, thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  6. Computer integrated documentation

    NASA Technical Reports Server (NTRS)

    Boy, Guy

    1991-01-01

    The main technical issues of the Computer Integrated Documentation (CID) project are presented. The problem of automating document management and maintenance is analyzed both from an artificial intelligence viewpoint and from a human factors viewpoint. Possible technologies for CID are reviewed: conventional approaches to indexing and information retrieval; hypertext; and knowledge-based systems. A particular effort was made to provide an appropriate representation for contextual knowledge. This representation is used to generate context on hypertext links. Thus, indexing in CID is context sensitive. The implementation of the current version of CID is described. It includes a hypertext data base, a knowledge-based management and maintenance system, and a user interface. A series of theoretical considerations is also presented, covering navigation in hyperspace, the acquisition of indexing knowledge, the generation and maintenance of large documentation, and the relation to other work.

  7. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition fo the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  8. Knowledge network model of the energy consumption in discrete manufacturing system

    NASA Astrophysics Data System (ADS)

    Xu, Binzi; Wang, Yan; Ji, Zhicheng

    2017-07-01

    Discrete manufacturing systems generate large amounts of data and information as information technology develops, so a management mechanism is urgently required. In order to incorporate knowledge generated from manufacturing data and production experience, a knowledge network model of the energy consumption in the discrete manufacturing system was put forward based on knowledge network theory and multi-granularity modular ontology technology. This model provides a standard representation for concepts, terms and their relationships which can be understood by both humans and computers. The formal description of energy consumption knowledge elements (ECKEs) in the knowledge network is also given. Finally, an application example is used to verify the feasibility of the proposed method.
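
    A minimal sketch of such a knowledge network, with ECKEs as attributed nodes and typed relations as edges; all element names, attributes, and relation types below are invented placeholders:

    ```python
    # Hypothetical sketch: energy-consumption knowledge elements as a graph.
    ecke_nodes = {
        "milling": {"granularity": "process", "unit": "kWh"},
        "spindle_power": {"granularity": "parameter", "unit": "kW"},
        "idle_loss": {"granularity": "phenomenon", "unit": "kWh"},
    }
    relations = [
        ("spindle_power", "determines", "milling"),
        ("idle_loss", "contributes_to", "milling"),
    ]

    def neighbors(node, rel=None):
        """Return knowledge elements linked to `node`, optionally filtered by relation type."""
        return [(s, r) for s, r, t in relations if t == node and (rel is None or r == rel)]

    # Both the parameter and the phenomenon link into the process element.
    print(neighbors("milling"))
    ```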

  9. Design and Analysis Techniques for Concurrent Blackboard Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Mcmanus, John William

    1992-01-01

    Blackboard systems are a natural progression of knowledge-based systems into a more powerful problem solving technique. They provide a way for several highly specialized knowledge sources to cooperate to solve large, complex problems. Blackboard systems incorporate the concepts developed by rule-based and expert systems programmers and include the ability to add conventionally coded knowledge sources. The small and specialized knowledge sources are easier to develop and test, and can be hosted on hardware specifically suited to the task that they are solving. The Formal Model for Blackboard Systems was developed to provide a consistent method for describing a blackboard system. A set of blackboard system design tools has been developed and validated for implementing systems that are expressed using the Formal Model. The tools are used to test and refine a proposed blackboard system design before the design is implemented. My research has shown that the level of independence and specialization of the knowledge sources directly affects the performance of blackboard systems. Using the design, simulation, and analysis tools, I developed a concurrent object-oriented blackboard system that is faster, more efficient, and more powerful than existing systems. The use of the design and analysis tools provided the highly specialized and independent knowledge sources required for my concurrent blackboard system to achieve its design goals.
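
    The cooperating knowledge sources can be reduced to a minimal control loop for illustration; the sketch below uses invented knowledge sources and blackboard fields and is not the thesis's Formal Model or its concurrent implementation:

    ```python
    # Minimal blackboard control shell: fire any applicable knowledge source
    # until no source can contribute further (invented toy problem).
    blackboard = {"raw": [3, 1, 2], "sorted": None, "report": None}

    def sorter(bb):
        if bb["raw"] and bb["sorted"] is None:
            bb["sorted"] = sorted(bb["raw"])
            return True
        return False

    def reporter(bb):
        if bb["sorted"] and bb["report"] is None:
            bb["report"] = f"min={bb['sorted'][0]}, max={bb['sorted'][-1]}"
            return True
        return False

    knowledge_sources = [sorter, reporter]
    progress = True
    while progress:  # each pass lets any source that recognizes its trigger act
        progress = any(ks(blackboard) for ks in knowledge_sources)

    print(blackboard["report"])  # -> min=1, max=3
    ```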

  10. D and D Knowledge Management Information Tool - 2012 - 12106

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Upadhyay, H.; Lagos, L.; Quintero, W.

    2012-07-01

    Deactivation and decommissioning (D and D) work is a high priority activity across the Department of Energy (DOE) complex. Subject matter specialists (SMS) associated with the different ALARA (As-Low-As-Reasonably-Achievable) Centers, DOE sites, Energy Facility Contractors Group (EFCOG) and the D and D community have gained extensive knowledge and experience over the years in the cleanup of the legacy waste from the Manhattan Project. To prevent the D and D knowledge and expertise from being lost over time from the evolving and aging workforce, DOE and the Applied Research Center (ARC) at Florida International University (FIU) proposed to capture and maintain this valuable information in a universally available and easily usable system. D and D KM-IT provides single-point access to all D and D related activities through its knowledge base. It is a community-driven system. D and D KM-IT makes D and D knowledge available to the people who need it at the time they need it and in a readily usable format. It uses the World Wide Web as the primary source for content in addition to information collected from subject matter specialists and the D and D community. It brings information in real time through web-based custom search processes and its dynamic knowledge repository. Future developments include developing a document library, providing D and D information access on mobile devices for the Technology module and Hotline, and coordinating multiple subject matter specialists to support the Hotline. The goal is to deploy a high-end, sophisticated and secured system to serve as a single large knowledge base for all D and D activities. The system consolidates a large amount of information available on the web and presents it to users in the simplest way possible. (authors)

  11. Ontology-Based Search of Genomic Metadata.

    PubMed

    Fernandez, Javier D; Lenzerini, Maurizio; Masseroli, Marco; Venco, Francesco; Ceri, Stefano

    2016-01-01

    The Encyclopedia of DNA Elements (ENCODE) is a huge and still expanding public repository of more than 4,000 experiments and 25,000 data files, assembled by a large international consortium since 2007; unknown biological knowledge can be extracted from these huge and largely unexplored data, leading to data-driven genomic, transcriptomic, and epigenomic discoveries. Yet, the search for datasets relevant to knowledge discovery is only weakly supported: the metadata describing ENCODE datasets are quite simple and incomplete, and not described by a coherent underlying ontology. Here, we show how to overcome this limitation by adopting an ENCODE metadata searching approach which uses high-quality ontological knowledge and state-of-the-art indexing technologies. Specifically, we developed S.O.S. GeM (http://www.bioinformatics.deib.polimi.it/SOSGeM/), a system supporting effective semantic search and retrieval of ENCODE datasets. First, we constructed a Semantic Knowledge Base by starting with concepts extracted from ENCODE metadata, matched to and expanded on biomedical ontologies integrated in the well-established Unified Medical Language System. We prove that this inference method is sound and complete. Then, we leveraged the Semantic Knowledge Base to semantically search ENCODE data from arbitrary biologists' queries. This correctly finds more datasets than a purely syntactic search, as supported by the other available systems. We empirically show the relevance of the found datasets to the biologists' queries.
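
    The gain of semantic over purely syntactic search can be illustrated with a toy query-expansion sketch; the miniature ontology, dataset metadata, and function names are invented, whereas the actual system relies on UMLS-integrated ontologies and indexing technologies:

    ```python
    # Invented sketch: expand a query term with ontological synonyms and
    # descendants, then match the expanded term set against dataset metadata.
    ontology = {
        "leukemia": {"synonyms": ["leukaemia"], "children": ["AML", "CML"]},
        "AML": {"synonyms": ["acute myeloid leukemia"], "children": []},
        "CML": {"synonyms": ["chronic myeloid leukemia"], "children": []},
    }

    def expand(term):
        """Collect the term, its synonyms, and all descendant concepts."""
        seen, stack = set(), [term]
        while stack:
            t = stack.pop()
            if t in seen:
                continue
            seen.add(t)
            entry = ontology.get(t, {"synonyms": [], "children": []})
            seen.update(entry["synonyms"])
            stack.extend(entry["children"])
        return seen

    datasets = {"ds1": "AML cell line ChIP-seq", "ds2": "kidney RNA-seq"}
    terms = {t.lower() for t in expand("leukemia")}
    hits = [d for d, meta in datasets.items() if any(t in meta.lower() for t in terms)]
    print(hits)  # semantic expansion finds ds1, which "leukemia" alone would miss
    ```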

  12. Artificial Intelligence In Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1991-01-01

    Paper compares four first-generation artificial-intelligence (AI) software systems for computational fluid dynamics. Includes: Expert Cooling Fan Design System (EXFAN), PAN AIR Knowledge System (PAKS), grid-adaptation program MITOSIS, and Expert Zonal Grid Generation (EZGrid). Focuses on knowledge-based ("expert") software systems. Analyzes intended tasks, kinds of knowledge possessed, magnitude of effort required to codify knowledge, how quickly constructed, performances, and return on investment. On basis of comparison, concludes AI most successful when applied to well-formulated problems solved by classifying or selecting preenumerated solutions. In contrast, application of AI to poorly understood or poorly formulated problems generally results in long development time and large investment of effort, with no guarantee of success.

  13. An American knowledge base in England - Alternate implementations of an expert system flight status monitor

    NASA Technical Reports Server (NTRS)

    Butler, G. F.; Graves, A. T.; Disbrow, J. D.; Duke, E. L.

    1989-01-01

    A joint activity on knowledge-based systems has been agreed upon between the Dryden Flight Research Facility of the NASA Ames Research Center (Ames-Dryden) and the Royal Aerospace Establishment (RAE). Under the agreement, a flight status monitor knowledge base developed at Ames-Dryden has been implemented using the real-time AI (artificial intelligence) toolkit MUSE, which was developed in the UK. Here, the background to the cooperation is described and the details of the flight status monitor and a prototype MUSE implementation are presented. The capabilities of the expert-system flight status monitor to monitor data downlinked from the flight test aircraft and to generate information on the state and health of the system for the test engineers provide increased safety during flight testing of new systems. Furthermore, the expert-system flight status monitor provides the systems engineers with ready access to the large amount of information required to describe a complex aircraft system.

  14. KAM (Knowledge Acquisition Module): A tool to simplify the knowledge acquisition process

    NASA Technical Reports Server (NTRS)

    Gettig, Gary A.

    1988-01-01

    Analysts, knowledge engineers and information specialists are faced with increasing volumes of time-sensitive data in text form, either as free text or highly structured text records. Rapid access to the relevant data in these sources is essential. However, due to the volume and organization of the contents, and limitations of human memory and association, frequently: (1) important information is not located in time; (2) reams of irrelevant data are searched; and (3) interesting or critical associations are missed due to physical or temporal gaps involved in working with large files. The Knowledge Acquisition Module (KAM) is a microcomputer-based expert system designed to assist knowledge engineers, analysts, and other specialists in extracting useful knowledge from large volumes of digitized text and text-based files. KAM formulates non-explicit, ambiguous, or vague relations, rules, and facts into a manageable and consistent formal code. A library of system rules or heuristics is maintained to control the extraction of rules, relations, assertions, and other patterns from the text. These heuristics can be added, deleted or customized by the user. The user can further control the extraction process with optional topic specifications. This allows the user to cluster extracts based on specific topics. Because KAM formalizes diverse knowledge, it can be used by a variety of expert systems and automated reasoning applications. KAM can also perform important roles in computer-assisted training and skill development. Current research efforts include the applicability of neural networks to aid in the extraction process and the conversion of these extracts into standard formats.

  15. Strategies for adding adaptive learning mechanisms to rule-based diagnostic expert systems

    NASA Technical Reports Server (NTRS)

    Stclair, D. C.; Sabharwal, C. L.; Bond, W. E.; Hacke, Keith

    1988-01-01

    Rule-based diagnostic expert systems can be used to perform many of the diagnostic chores necessary in today's complex space systems. These expert systems typically take a set of symptoms as input and produce diagnostic advice as output. The primary objective of such expert systems is to provide accurate and comprehensive advice which can be used to help return the space system in question to nominal operation. The development and maintenance of diagnostic expert systems is time and labor intensive, since the services of both knowledge engineer(s) and domain expert(s) are required. The use of adaptive learning mechanisms to incrementally evaluate and refine rules promises to reduce both the time and labor costs associated with such systems. This paper describes the basic adaptive learning mechanisms of strengthening, weakening, generalization, discrimination, and discovery. Next, basic strategies are discussed for adding these learning mechanisms to rule-based diagnostic expert systems. These strategies support the incremental evaluation and refinement of rules in the knowledge base by comparing the set of advice given by the expert system (A) with the correct diagnosis (C). Techniques are described for selecting those rules in the knowledge base which should participate in adaptive learning. The strategies presented may be used with a wide variety of learning algorithms, and they are applicable to a large number of rule-based diagnostic expert systems. They may be used to provide either immediate or deferred updating of the knowledge base.
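
    A minimal sketch of the strengthening and weakening mechanisms, assuming a simple credit-assignment scheme in which the rules that fired are compared against the correct diagnosis; the rule contents, strengths, and learning rate are invented placeholders:

    ```python
    # Hypothetical sketch: strengthen rules whose advice matched the correct
    # diagnosis C, weaken those whose advice did not.
    rules = {
        "r1": {"strength": 0.5, "advice": "replace_gyro"},
        "r2": {"strength": 0.5, "advice": "reset_bus"},
    }

    def update_strengths(fired, correct_diagnosis, lr=0.1):
        """Credit-assignment pass over the rules that fired on this case."""
        for rid in fired:
            rule = rules[rid]
            if rule["advice"] == correct_diagnosis:
                rule["strength"] = min(1.0, rule["strength"] + lr)   # strengthen
            else:
                rule["strength"] = max(0.0, rule["strength"] - lr)   # weaken

    update_strengths(fired=["r1", "r2"], correct_diagnosis="replace_gyro")
    print({r: v["strength"] for r, v in rules.items()})  # r1 up, r2 down
    ```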

  16. Structure of the knowledge base for an expert labeling system

    NASA Technical Reports Server (NTRS)

    Rajaram, N. S.

    1981-01-01

    One of the principal objectives of the NASA AgRISTARS program is the inventory of global crop resources using remotely sensed data gathered by Land Satellites (LANDSAT). A central problem in any such crop inventory procedure is the interpretation of LANDSAT images and the identification of the parts of each image which are covered by a particular crop of interest. This task of labeling is largely a manual one done by trained human analysts and consequently presents obstacles to the development of totally automated crop inventory systems. However, developments in knowledge engineering, as well as the widespread availability of inexpensive hardware and software for artificial intelligence work, offer possibilities for developing expert systems for the labeling of crops. Such a knowledge-based approach to labeling is presented.

  17. Flood AI: An Intelligent System for Discovery and Communication of Disaster Knowledge

    NASA Astrophysics Data System (ADS)

    Demir, I.; Sermet, M. Y.

    2017-12-01

    Communities are not immune from extreme events or natural disasters that can lead to large-scale consequences for the nation and public. Improving resilience to better prepare for, plan for, recover from, and adapt to disasters is critical to reducing the impacts of extreme events. The National Research Council (NRC) report discusses how to increase resilience to extreme events through a vision of a resilient nation in the year 2030. The report highlights the importance of data and information, identifies gaps and knowledge challenges that need to be addressed, and suggests that every individual have access to risk and vulnerability information to make their communities more resilient. This project presents an intelligent system, Flood AI, for flooding to improve societal preparedness by providing a knowledge engine using voice recognition, artificial intelligence, and natural language processing based on a generalized ontology for disasters with a primary focus on flooding. The knowledge engine utilizes the flood ontology and concepts to connect user input to relevant knowledge discovery channels on flooding by developing a data acquisition and processing framework utilizing environmental observations, forecast models, and knowledge bases. Communication channels of the framework include web-based systems, agent-based chat bots, smartphone applications, automated web workflows, and smart home devices, opening knowledge discovery for flooding to many unique use cases.

  18. Facilitating Metacognitive Processes of Academic Genre-Based Writing Using an Online Writing System

    ERIC Educational Resources Information Center

    Yeh, Hui-Chin

    2015-01-01

    Few studies have investigated how metacognitive processes foster the application of genre knowledge to students' academic writing. This is largely due to its internal and unobservable characteristics. To bridge this gap, an online writing system based on metacognition, involving the stages of planning, monitoring, evaluating, and revising, was…

  19. Northeast Artificial Intelligence Consortium annual report. Volume 2. 1988. Discussing, using, and recognizing plans (NLP). Interim report, January-December 1988

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shapiro, S.C.; Woolf, B.

    The Northeast Artificial Intelligence Consortium (NAIC) was created by the Air Force Systems Command, Rome Air Development Center, and the Office of Scientific Research. Its purpose is to conduct pertinent research in artificial intelligence and to perform activities ancillary to this research. This report describes progress that has been made in the fourth year of the existence of the NAIC on the technical research tasks undertaken at the member universities. The topics covered in general are: versatile expert system for equipment maintenance, distributed AI for communications system control, automatic photointerpretation, time-oriented problem solving, speech understanding systems, knowledge base maintenance, hardware architectures for very large systems, knowledge-based reasoning and planning, and a knowledge acquisition, assistance, and explanation system. The specific topic for this volume is the recognition of plans expressed in natural language, followed by their discussion and use.

  20. The Spacecraft Materials Selector: An Artificial Intelligence System for Preliminary Design Trade Studies, Materials Assessments, and Estimates of Environments Present

    NASA Technical Reports Server (NTRS)

    Pippin, H. G.; Woll, S. L. B.

    2000-01-01

    Institutions need ways to retain valuable information even as experienced individuals leave an organization. Modern electronic systems have enough capacity to retain large quantities of information that can mitigate the loss of experience. Performance information for long-term space applications is relatively scarce, and specific information (typically held by a few individuals within a single project) is often rather narrowly distributed. Spacecraft operate under severe conditions, and the consequences of hardware and/or system failures, in terms of cost, loss of information, and time required to replace the loss, are extreme. These risk factors place a premium on appropriate choice of materials and components for space applications. An expert system is a very cost-effective method for sharing valuable and scarce information about spacecraft performance. Boeing has an artificial intelligence software package, called the Boeing Expert System Tool (BEST), to construct and operate knowledge bases to selectively recall and distribute information about specific subjects. A specific knowledge base to evaluate the on-orbit performance of selected materials on spacecraft has been developed under contract to the NASA SEE program. The performance capabilities of the Spacecraft Materials Selector (SMS) knowledge base are described. The knowledge base is a backward-chaining, rule-based system. The user answers a sequence of questions, and the expert system provides estimates of the optical and mechanical performance of selected materials under specific environmental conditions. The initial operating capability of the system will include data for Kapton, silverized Teflon, selected paints, silicone-based materials, and certain metals. For situations where a mission profile (launch date, orbital parameters, mission duration, spacecraft orientation) is not precisely defined, the knowledge base still attempts to provide qualitative observations about materials performance and likely exposures. Prior to the NASA contract, a knowledge base, the Spacecraft Environments Assistant (SEA), was developed by Boeing to estimate the environmental factors important for a specific spacecraft mission profile. The NASA SEE program has funded specific enhancements to the capability of this knowledge base. The SEA qualitatively identifies over 25 environmental factors that may influence the performance of a spacecraft during its operational lifetime. For cases where sufficiently detailed answers are provided to the questions asked by the knowledge base, atomic oxygen fluence levels, proton and/or electron fluence and dose levels, and solar exposure hours are calculated. The SMS knowledge base incorporates the previously developed SEA knowledge base. A case history for a previous flight experiment will be shown as an example, and the capabilities and limitations of the system will be discussed.
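
    The backward-chaining style of inference described above can be sketched in a few lines; the rules, facts, and goal below are invented stand-ins and do not reflect Boeing's actual knowledge content:

    ```python
    # Hypothetical sketch of backward chaining: a goal holds if it is a known
    # fact, or if some rule concludes it and all of that rule's subgoals hold.
    rules = {
        ("risk", "atomic_oxygen_erosion"): [("env", "atomic_oxygen"), ("surface", "kapton")],
        ("env", "atomic_oxygen"): [("orbit", "LEO")],
    }

    def prove(goal, facts):
        """Recursively establish `goal` from facts and rule subgoals."""
        if goal in facts:
            return True
        subgoals = rules.get(goal)
        return subgoals is not None and all(prove(g, facts) for g in subgoals)

    facts = {("orbit", "LEO"), ("surface", "kapton")}   # gathered by the Q&A dialogue
    print(prove(("risk", "atomic_oxygen_erosion"), facts))  # -> True
    ```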

  1. An automated knowledge-based textual summarization system for longitudinal, multivariate clinical data.

    PubMed

    Goldstein, Ayelet; Shahar, Yuval

    2016-06-01

    Design and implement an intelligent free-text summarization system: the system's input includes large numbers of longitudinal, multivariate, numeric and symbolic clinical raw data, collected over varying periods of time and in different complex contexts, together with a suitable medical knowledge base. The system then automatically generates a textual summary of the data. We aim to prove the feasibility of implementing such a system and to demonstrate its potential benefits for clinicians and for the enhancement of quality of care. We have designed a new, domain-independent, knowledge-based system, the CliniText system, for automated summarization in free text of longitudinal medical records of any duration, in any context. The system is composed of six components: (1) a temporal abstraction module generates all possible abstractions from the patient's raw data using a temporal-abstraction knowledge base; (2) the abductive reasoning module infers abstractions or events from the data which were not explicitly included in the database; (3) the pruning module filters out raw or abstract data based on predefined heuristics; (4) the document structuring module organizes the remaining raw or abstract data according to the desired format; (5) the microplanning module groups the raw or abstract data and creates referring expressions; (6) the surface realization module generates the text and applies the grammar rules of the chosen language. We have performed an initial technical evaluation of the system in the cardiac intensive-care and diabetes domains. We also summarize the results of a more detailed evaluation study that we performed in the intensive-care domain, which assessed the completeness, correctness, and overall quality of the system's generated text, and its potential benefits to clinical decision making. We assessed these measures for 31 letters originally composed by clinicians, and for the same letters when generated by the CliniText system. We have successfully implemented all of the components of the CliniText system in software. We have also been able to create a comprehensive temporal-abstraction knowledge base to support its functionality, mostly in the intensive-care domain. The initial technical evaluation of the system in the cardiac intensive-care and diabetes domains has shown great promise, proving the feasibility of constructing and operating such systems. The detailed results of the evaluation in the intensive-care domain are outside the scope of the current paper, and we refer the reader to a more detailed source. In all of the letters composed by clinicians, at least two important items per letter were missed that were included by the CliniText system. The clinicians' letters received a significantly better grade in three out of four measured quality parameters, as judged by an expert; however, the variance in quality was much higher in the clinicians' letters. In addition, three clinicians answered questions based on the discharge letter 40% faster, and answered four out of the five questions equally well or significantly better, when using the CliniText-generated letters than when using the clinician-composed letters. Constructing a working system for automated summarization in free text of large numbers of longitudinal, multivariate clinical data collected over varying periods is feasible. So is the construction of a large knowledge base, designed to support such a system, in a complex clinical domain such as the intensive-care domain. The integration of the quality and functionality results suggests that the optimal discharge letter should exploit both human and machine, possibly by creating a machine-generated draft that is then polished by a human clinician. Copyright © 2016 Elsevier Inc. All rights reserved.
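
    The six-component pipeline lends itself to a schematic sketch in which each module is reduced to a placeholder function; the data items and phrasing below are invented, and the real system's temporal-abstraction knowledge base and surface grammar are far richer:

    ```python
    # Schematic sketch of the six-stage summarization pipeline (toy content).
    def temporal_abstraction(raw):   # (1) raw data -> interval abstractions
        return [{"concept": "glucose", "state": "HIGH", "days": (1, 3)}]

    def abduce(abstractions):        # (2) infer events not explicit in the data
        return abstractions + [{"concept": "insulin adjustment", "inferred": True}]

    def prune(items):                # (3) drop items below an importance heuristic
        return [i for i in items if i.get("state") == "HIGH" or i.get("inferred")]

    def structure(items):            # (4) order content for the target document
        return sorted(items, key=lambda i: i.get("days", (0, 0)))

    def microplan(items):            # (5) group content, build referring expressions
        return [f"{i['concept']} was {i.get('state', 'performed')}" for i in items]

    def realize(phrases):            # (6) apply the surface grammar of the language
        return ". ".join(p.capitalize() for p in phrases) + "."

    summary = realize(microplan(structure(prune(abduce(temporal_abstraction([]))))))
    print(summary)
    ```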

  2. A knowledge-based approach to identification and adaptation in dynamical systems control

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Wong, C. M.

    1988-01-01

    Artificial intelligence techniques are applied to the problems of model form and parameter identification of large-scale dynamic systems. The object-oriented knowledge representation is discussed in the context of causal modeling and qualitative reasoning. Structured sets of rules are used for implementing qualitative component simulations, for catching qualitative discrepancies and quantitative bound violations, and for making reconfiguration and control decisions that affect the physical system. These decisions are executed by backward-chaining through a knowledge base of control action tasks. This approach was implemented for two examples: a triple quadrupole mass spectrometer and a two-phase thermal testbed. Results of tests with both of these systems demonstrate that the software replicates some or most of the functionality of a human operator, thereby reducing the need for a human-in-the-loop in the lower levels of control of these complex systems.

  3. Tools for knowledge acquisition within the NeuroScholar system and their application to anatomical tract-tracing data

    PubMed Central

    Burns, Gully APC; Cheng, Wei-Cheng

    2006-01-01

    Background Knowledge bases that summarize the published literature provide useful online references for specific areas of systems-level biology that are not otherwise supported by large-scale databases. In the field of neuroanatomy, groups of small focused teams have constructed medium-size knowledge bases to summarize the literature describing tract-tracing experiments in several species. Despite years of collation and curation, these databases only provide partial coverage of the available published literature. Given that the scientists reading these papers must all generate the interpretations that would normally be entered into such a system, we attempt here to provide general-purpose annotation tools to make it easy for members of the community to contribute to the task of data collation. Results In this paper, we describe an open-source, freely available knowledge management system called 'NeuroScholar' that allows straightforward structured markup of PDF files according to a well-designed schema to capture the essential details of this class of experiment. Although the example worked through in this paper is quite specific to neuroanatomical connectivity, the design is freely extensible and could conceivably be used to construct local knowledge bases for other experiment types. Knowledge representations of the experiment are also directly linked to the contributing textual fragments from the original research article. Through the use of this system, not only could members of the community contribute to the collation task, but input data can be gathered for automated approaches to permit knowledge acquisition through the use of Natural Language Processing (NLP). Conclusion We present a functional, working tool to permit users to populate knowledge bases for neuroanatomical connectivity data from the literature through the use of structured questionnaires. This system is open-source, fully functional and available for download from [1]. PMID:16895608

  4. Research directions in large scale systems and decentralized control

    NASA Technical Reports Server (NTRS)

    Tenney, R. R.

    1980-01-01

    Control theory provides a well established framework for dealing with automatic decision problems and a set of techniques for automatic decision making which exploit special structure, but it does not deal well with complexity. The potential exists for combining control theoretic and knowledge based concepts into a unified approach. The elements of control theory are diagrammed, including modern control and large scale systems.

  5. Scaffolded Writing and Rewriting in the Discipline: A Web-Based Reciprocal Peer Review System

    ERIC Educational Resources Information Center

    Cho, Kwangsu; Schunn, Christian D.

    2007-01-01

    This paper describes how SWoRD (scaffolded writing and rewriting in the discipline), a web-based reciprocal peer review system, supports writing practice, particularly for large content courses in which writing is considered critical but not feasibly included. To help students gain content knowledge as well as writing and reviewing skills, SWoRD…

  6. A CLIPS based personal computer hardware diagnostic system

    NASA Technical Reports Server (NTRS)

    Whitson, George M.

    1991-01-01

    Often the person designated to repair personal computers has little or no knowledge of how to repair a computer. Described here is a simple expert system to aid these inexperienced repair people. The first component of the system leads the repair person through a number of simple system checks such as making sure that all cables are tight and that the dip switches are set correctly. The second component of the system assists the repair person in evaluating error codes generated by the computer. The final component of the system applies a large knowledge base to attempt to identify the component of the personal computer that is malfunctioning. We have implemented and tested our design with a full system to diagnose problems for an IBM compatible system based on the 8088 chip. In our tests, the inexperienced repair people found the system very useful in diagnosing hardware problems.

  7. A machine independent expert system for diagnosing environmentally induced spacecraft anomalies

    NASA Technical Reports Server (NTRS)

    Rolincik, Mark J.

    1991-01-01

    A new rule-based, machine-independent analytical tool for diagnosing spacecraft anomalies, the EnviroNET expert system, was developed. Expert systems provide an effective method for storing knowledge, allow computers to sift through large amounts of data to pinpoint significant parts, and, most importantly, use heuristics in addition to algorithms, which allow approximate reasoning and inference and the ability to attack problems that are not rigidly defined. The EnviroNET expert system knowledge base currently contains over two hundred rules, and links to databases which include past environmental data, satellite data, and previously known anomalies. The environmental causes considered are bulk charging, single event upsets (SEU), surface charging, and total radiation dose.

  8. A Game Based e-Learning System to Teach Artificial Intelligence in the Computer Sciences Degree

    ERIC Educational Resources Information Center

    de Castro-Santos, Amable; Fajardo, Waldo; Molina-Solana, Miguel

    2017-01-01

    Our students taking the Artificial Intelligence and Knowledge Engineering courses often encounter a large number of problems to solve which are not directly related to the subject to be learned. To solve this problem, we have developed a game based e-learning system. The elected game, that has been implemented as an e-learning system, allows to…

  9. KAT: A Flexible XML-based Knowledge Authoring Environment

    PubMed Central

    Hulse, Nathan C.; Rocha, Roberto A.; Del Fiol, Guilherme; Bradshaw, Richard L.; Hanna, Timothy P.; Roemer, Lorrie K.

    2005-01-01

    As part of an enterprise effort to develop new clinical information systems at Intermountain Health Care, the authors have built a knowledge authoring tool that facilitates the development and refinement of medical knowledge content. At present, users of the application can compose order sets and an assortment of other structured clinical knowledge documents based on XML schemas. The flexible nature of the application allows the immediate authoring of new types of documents once an appropriate XML schema and accompanying Web form have been developed and stored in a shared repository. The need for a knowledge acquisition tool stems largely from the desire for medical practitioners to be able to write their own content for use within clinical applications. We hypothesize that medical knowledge content for clinical use can be successfully created and maintained through XML-based document frameworks containing structured and coded knowledge. PMID:15802477

  10. A Fresh Look at Spanish Scientific Publishing in the Framework of International Standards

    ERIC Educational Resources Information Center

    Kindelan, Paz

    2009-01-01

    Research has become a key element in the knowledge-based society with its role of producing and disseminating results. In this context, scientific publishing becomes the means by which research activity and knowledge production are circulated to the scientific community and society at large. However, there are factors influencing the system of…

  11. Systems survivor: a program for house staff in systems-based practice.

    PubMed

    Turley, Christine B; Roach, Richard; Marx, Marilyn

    2007-01-01

    The Systems-Based Practice competency expanded the scope of graduate medical education. Innovative approaches are needed to teach this material. We have designed and implemented a rotation in Systems-Based Practice focused on the interrelationships of patient care, clinical revenue, and the physician's role within health care systems. Experiential learning occurs during a 5-day rotation through 26 areas encompassing the clinical revenue cycle, guided by "expert" staff. Using a reversal of the TV show Survivor, house staff begin conceptually "alone" and discover they are members of a large, dedicated team. Assessment results, including a system knowledge test and course evaluations, are presented. Twenty-five residents from four clinical departments participated in Year 1. An increase in pretest to posttest knowledge scores of 14.8% (p

  12. Building distributed rule-based systems using the AI Bus

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain C.

    1990-01-01

    The AI Bus software architecture was designed to support the construction of large-scale, production-quality applications in areas of high technology flux, running in heterogeneous distributed environments and utilizing a mix of knowledge-based and conventional components. These goals led to its current development as a layered, object-oriented library for cooperative systems. This paper describes the concepts and design of the AI Bus and its implementation status as a library of reusable and customizable objects, structured by layers from operating system interfaces up to high-level knowledge-based agents. Each agent is a semi-autonomous process with specialized expertise, and consists of a number of knowledge sources (a knowledge base and inference engine). Inter-agent communication mechanisms are based on blackboards and Actors-style acquaintances. As a conservative first implementation, we used C++ on top of Unix, and wrapped an embedded CLIPS with methods for the knowledge source class. This involved designing standard protocols for communication and functions which use these protocols in rules. Embedding several CLIPS objects within a single process was an unexpected problem because of global variables; the solution involved constructing and recompiling a C++ version of CLIPS. We are currently working on a more radical approach to incorporating CLIPS by separating out its pattern matcher, rule and fact representations, and other components as true object-oriented modules.
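
    The Actors-style acquaintance mechanism can be illustrated with a minimal message-passing sketch; the class and role names are invented, and since the original is a C++ library embedding CLIPS, this is only a structural illustration:

    ```python
    # Invented sketch: agents exchanging messages through named acquaintances.
    import queue

    class Agent:
        def __init__(self, name):
            self.name = name
            self.inbox = queue.Queue()
            self.acquaintances = {}   # role -> another agent

        def send(self, role, message):
            """Post a message to the acquaintance filling the given role."""
            self.acquaintances[role].inbox.put((self.name, message))

        def process(self):
            """Drain the inbox; a real agent would run knowledge sources here."""
            while not self.inbox.empty():
                sender, message = self.inbox.get()
                print(f"{self.name} received {message!r} from {sender}")

    planner, executor = Agent("planner"), Agent("executor")
    planner.acquaintances["actuator"] = executor
    planner.send("actuator", {"task": "recalibrate", "priority": 2})
    executor.process()
    ```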

  13. Management of Knowledge Representation Standards Activities

    NASA Technical Reports Server (NTRS)

    Patil, Ramesh S.

    1993-01-01

    Ever since the mid-seventies, researchers have recognized that capturing knowledge is the key to building large and powerful AI systems. In the years since, we have also found that representing knowledge is difficult and time consuming. In spite of the tools developed to help with knowledge acquisition, knowledge base construction remains one of the major costs in building an AI system: for almost every system we build, a new knowledge base must be constructed from scratch. As a result, most systems remain small to medium in size. Even if we build several systems within a general area, such as medicine or electronics diagnosis, significant portions of the domain must be represented for every system we create. The cost of this duplication of effort has been high and will become prohibitive as we attempt to build larger and larger systems. To overcome this barrier we must find ways of preserving existing knowledge bases and of sharing, re-using, and building on them. This report describes the efforts undertaken over the last two years to identify the issues underlying the current difficulties in sharing and reuse, and a community-wide initiative to overcome them. First, we discuss four bottlenecks to sharing and reuse, present a vision of a future in which these bottlenecks have been ameliorated, and describe the efforts of the initiative's four working groups to address these bottlenecks. We then address the supporting technology and infrastructure that is critical to enabling the vision of the future. Finally, we consider topics of longer-range interest by reviewing some of the research issues raised by our vision.

  14. A Knowledge-Based System for the Computer Assisted Diagnosis of Endoscopic Images

    NASA Astrophysics Data System (ADS)

    Kage, Andreas; Münzenmayer, Christian; Wittenberg, Thomas

    Due to current demographic developments, the use of Computer-Assisted Diagnosis (CAD) systems is becoming a more important part of clinical workflows and clinical decision making. Because changes in the mucosa of the esophagus can indicate the first stage of cancerous developments, there is large interest in detecting and correctly diagnosing any such lesion. We present a knowledge-based system which is able to support a physician with the interpretation and diagnosis of endoscopic images of the esophagus. Our system is designed to support the physician directly during the examination of the patient, thus providing diagnostic assistance at the point of care (POC). Based on an interactively marked region in an endoscopic image of interest, the system provides a diagnostic suggestion drawn from an annotated reference image database. Furthermore, using relevance feedback mechanisms, the results can be enhanced interactively.

  15. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information in a structured format to users ranging from the bench chemist to the biologist, and from the medical practitioner to the pharmaceutical scientist. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference-centric or compound-centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.

  16. Problem-Oriented Corporate Knowledge Base Models on the Case-Based Reasoning Approach Basis

    NASA Astrophysics Data System (ADS)

    Gluhih, I. N.; Akhmadulin, R. K.

    2017-07-01

    One of the urgent directions for enhancing the efficiency of production processes and enterprise activities management is the creation and use of corporate knowledge bases. The article suggests a concept of problem-oriented corporate knowledge bases (PO CKB), in which knowledge is arranged around possible problem situations and represents a tool for making and implementing decisions in such situations. For knowledge representation in PO CKB, a case-based reasoning approach is adopted. Under this approach, the content of a case as a knowledge base component has been defined; based on the situation tree, a PO CKB knowledge model has been developed, in which knowledge about typical situations as well as specific examples of situations and solutions is represented. A generalized problem-oriented corporate knowledge base structural chart and possible modes of its operation have been suggested. The obtained models allow creating and using corporate knowledge bases for the support of decision making and implementation, training, staff skill upgrading and analysis of the decisions taken. The universal interpretation of the terms “situation” and “solution” adopted in the work allows using the suggested models to develop problem-oriented corporate knowledge bases in different subject domains. It has been suggested to use the developed models for making corporate knowledge bases of enterprises that operate engineering systems and networks at large production facilities.
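
    A minimal case-based reasoning sketch in the spirit of PO CKB retrieval: the stored problem situation most similar to the current one is retrieved and its solution reused. The cases, attributes, and the nearest-neighbour similarity metric are invented placeholders:

    ```python
    # Hypothetical sketch: retrieve the most similar stored case and reuse
    # its solution (the "retrieve" and "reuse" steps of a CBR cycle).
    cases = [
        ({"equipment": "pump", "symptom": "vibration", "load": "high"},
         "rebalance impeller"),
        ({"equipment": "pump", "symptom": "leak", "load": "low"},
         "replace seal"),
    ]

    def similarity(a, b):
        """Fraction of shared attribute values (simple nearest-neighbour metric)."""
        keys = set(a) | set(b)
        return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

    def retrieve(situation):
        return max(cases, key=lambda c: similarity(c[0], situation))

    current = {"equipment": "pump", "symptom": "vibration", "load": "medium"}
    best_case, solution = retrieve(current)
    print(solution)  # -> 'rebalance impeller'
    ```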

  17. Advanced data structures for the interpretation of image and cartographic data in geo-based information systems

    NASA Technical Reports Server (NTRS)

    Peuquet, D. J.

    1986-01-01

    A growing need to use geographic information systems (GIS) to improve the flexibility and overall performance of very large, heterogeneous databases was examined. The Vaster structure and the Topological Grid structure were compared to test whether such hybrid structures represent an improvement in performance. The use of artificial intelligence in a geographic/earth-sciences database context is being explored. The architecture of the Knowledge Based GIS (KBGIS) has a dual object/spatial database and a three-tier hierarchical search subsystem. Quadtree Spatial Spectra (QTSS) are derived, based on the quadtree data structure, to generate and represent spatial distribution information for large volumes of spatial data.
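
    The quadtree structure underlying QTSS can be illustrated with a toy recursive subdivision; the grid and function name are invented, and KBGIS's actual structures are considerably more elaborate:

    ```python
    # Toy quadtree: subdivide a square region until each quadrant is uniform.
    def build_quadtree(grid, x, y, size):
        """Return a uniform cell value, or a dict of four child quadrants."""
        values = {grid[j][i] for j in range(y, y + size) for i in range(x, x + size)}
        if len(values) == 1:
            return values.pop()           # homogeneous leaf
        half = size // 2
        return {
            "nw": build_quadtree(grid, x, y, half),
            "ne": build_quadtree(grid, x + half, y, half),
            "sw": build_quadtree(grid, x, y + half, half),
            "se": build_quadtree(grid, x + half, y + half, half),
        }

    grid = [
        [0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 0, 1],
        [0, 0, 1, 1],
    ]
    print(build_quadtree(grid, 0, 0, 4))  # three uniform quadrants, one subdivided
    ```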

  18. Factors shaping the evolution of electronic documentation systems

    NASA Technical Reports Server (NTRS)

    Dede, Christopher J.; Sullivan, Tim R.; Scace, Jacque R.

    1990-01-01

    The main goal is to prepare the space station technical and managerial structure for likely changes in the creation, capture, transfer, and utilization of knowledge. By anticipating advances, the design of Space Station Project (SSP) information systems can be tailored to facilitate a progression of increasingly sophisticated strategies as the space station evolves. Future generations of advanced information systems will use increases in power to deliver environmentally meaningful, contextually targeted, interconnected data (knowledge). The concept of a Knowledge Base Management System is emerging when the problem is focused on how information systems can perform such a conversion of raw data. Such a system would include traditional management functions for large space databases. Added artificial intelligence features might encompass co-existing knowledge representation schemes; effective control structures for deductive, plausible, and inductive reasoning; means for knowledge acquisition, refinement, and validation; explanation facilities; and dynamic human intervention. The major areas covered include: alternative knowledge representation approaches; advanced user interface capabilities; computer-supported cooperative work; the evolution of information system hardware; standardization, compatibility, and connectivity; and organizational impacts of information intensive environments.

  19. Adaptive interface for personalizing information seeking.

    PubMed

    Narayanan, S; Koppaka, Lavanya; Edala, Narasimha; Loritz, Don; Daley, Raymond

    2004-12-01

    An adaptive interface autonomously adjusts its display and available actions to current goals and abilities of the user by assessing user status, system task, and the context. Knowledge content adaptability is needed for knowledge acquisition and refinement tasks. In the case of knowledge content adaptability, the requirements of interface design focus on the elicitation of information from the user and the refinement of information based on patterns of interaction. In such cases, the emphasis on adaptability is on facilitating information search and knowledge discovery. In this article, we present research on adaptive interfaces that facilitates personalized information seeking from a large data warehouse. The resulting proof-of-concept system, called source recommendation system (SRS), assists users in locating and navigating data sources in the repository. Based on the initial user query and an analysis of the content of the search results, the SRS system generates a profile of the user tailored to the individual's context during information seeking. The user profiles are refined successively and are used in progressively guiding the user to the appropriate set of sources within the knowledge base. The SRS system is implemented as an Internet browser plug-in to provide a seamless and unobtrusive, personalized experience to the users during the information search process. The rationale behind our approach, system design, empirical evaluation, and implications for research on adaptive interfaces are described in this paper.
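
    The profile-refinement loop can be sketched as follows, assuming a simple term-weighting scheme; the weights, tags, and source names are invented, and the actual SRS profiling is more involved:

    ```python
    # Hypothetical sketch: fold query terms and accepted results into a user
    # profile, then re-rank data sources against that profile.
    from collections import Counter

    profile = Counter()

    def refine_profile(query_terms, accepted_result_terms):
        """Successively fold interaction evidence into the user profile."""
        profile.update(query_terms)
        profile.update({t: 2 for t in accepted_result_terms})  # stronger signal

    def rank_sources(sources):
        return sorted(sources, key=lambda s: -sum(profile[t] for t in s["tags"]))

    sources = [
        {"name": "sensor-archive", "tags": ["telemetry", "radar"]},
        {"name": "finance-warehouse", "tags": ["budget"]},
    ]
    refine_profile(["radar", "coverage"], ["telemetry"])
    print([s["name"] for s in rank_sources(sources)])  # sensor-archive ranked first
    ```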

  20. Drug transport across the blood–brain barrier

    PubMed Central

    Pardridge, William M

    2012-01-01

    The blood–brain barrier (BBB) prevents the brain uptake of most pharmaceuticals. This property arises from the epithelial-like tight junctions within the brain capillary endothelium. The BBB is anatomically and functionally distinct from the blood–cerebrospinal fluid barrier at the choroid plexus. Certain small molecule drugs may cross the BBB via lipid-mediated free diffusion, providing the drug has a molecular weight <400 Da and forms <8 hydrogen bonds. These chemical properties are lacking in the majority of small molecule drugs, and all large molecule drugs. Nevertheless, drugs can be reengineered for BBB transport, based on the knowledge of the endogenous transport systems within the BBB. Small molecule drugs can be synthesized that access carrier-mediated transport (CMT) systems within the BBB. Large molecule drugs can be reengineered with molecular Trojan horse delivery systems to access receptor-mediated transport (RMT) systems within the BBB. Peptide and antisense radiopharmaceuticals are made brain-penetrating with the combined use of RMT-based delivery systems and avidin–biotin technology. Knowledge on the endogenous CMT and RMT systems expressed at the BBB enable new solutions to the problem of BBB drug transport. PMID:22929442
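
    The two chemical criteria quoted above reduce to a simple screening predicate; the sketch below is illustrative only, and the example molecules' values are round numbers rather than measured data:

    ```python
    # Toy screen based on the stated criteria: MW < 400 Da and < 8 hydrogen bonds.
    def may_cross_bbb(molecular_weight_da, hydrogen_bonds):
        """Crude screen for lipid-mediated free diffusion across the BBB."""
        return molecular_weight_da < 400 and hydrogen_bonds < 8

    print(may_cross_bbb(180, 3))    # small lipophilic molecule -> True
    print(may_cross_bbb(1200, 15))  # peptide-sized molecule -> False
    ```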

  1. Knowledge-based approaches to the maintenance of a large controlled medical terminology.

    PubMed Central

    Cimino, J J; Clayton, P D; Hripcsak, G; Johnson, S B

    1994-01-01

    OBJECTIVE: Develop a knowledge-based representation for a controlled terminology of clinical information to facilitate creation, maintenance, and use of the terminology. DESIGN: The Medical Entities Dictionary (MED) is a semantic network, based on the Unified Medical Language System (UMLS), with a directed acyclic graph to represent multiple hierarchies. Terms from four hospital systems (laboratory, electrocardiography, medical records coding, and pharmacy) were added as nodes in the network. Additional knowledge about terms, added as semantic links, was used to assist in integration, harmonization, and automated classification of disparate terminologies. RESULTS: The MED contains 32,767 terms and is in active clinical use. Automated classification was successfully applied to terms for laboratory specimens, laboratory tests, and medications. One benefit of the approach has been the automated inclusion of medications into multiple pharmacologic and allergenic classes that were not present in the pharmacy system. Another benefit has been the reduction of maintenance efforts by 90%. CONCLUSION: The MED is a hybrid of terminology and knowledge. It provides domain coverage, synonymy, consistency of views, explicit relationships, and multiple classification while preventing redundancy, ambiguity (homonymy) and misclassification. PMID:7719786
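
    Automated classification over a directed acyclic graph of terms can be sketched as attaching a new term under every class whose defining properties its links satisfy, then keeping only the most specific matches; the classes and properties below are invented, not MED content:

    ```python
    # Hypothetical sketch: classify a new term into a small DAG of classes.
    classes = {
        "Medication": {"kind": "drug"},
        "Beta-blocker": {"kind": "drug", "mechanism": "beta-adrenergic antagonist"},
    }
    parents = {"Beta-blocker": ["Medication"]}   # directed acyclic hierarchy

    def classify(term_properties):
        """Return the most specific classes whose criteria the term satisfies."""
        matched = [c for c, crit in classes.items()
                   if all(term_properties.get(k) == v for k, v in crit.items())]
        # drop any matched class that is an ancestor of another matched class
        ancestors = {a for c in matched for a in parents.get(c, [])}
        return [c for c in matched if c not in ancestors]

    new_term = {"kind": "drug", "mechanism": "beta-adrenergic antagonist"}
    print(classify(new_term))  # -> ['Beta-blocker']
    ```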

  2. Riverine settlement in the evolution of prehistoric land-use systems in the Middle Rio Grande Valley, New Mexico

    Treesearch

    Joseph A. Tainter; Bonnie Bagley Tainter

    1996-01-01

    Ecosystem management should be based on the fullest possible knowledge of ecological structures and processes. In prehistoric North America, the involvement of Indian populations in ecosystem processes ranged from inadvertent alteration of the distribution and abundance of species to large-scale management of landscapes. The knowledge needed to manage ecosystems today...

  3. Creating a Ten-Year Science and Innovation Framework for the UK: A Perspective Based on US Experience

    ERIC Educational Resources Information Center

    Crawley, Edward F.; Greenwald, Suzanne B.

    2006-01-01

    The sustainability of a competitive, national economy depends largely on the ability of companies to deliver innovative knowledge-intensive goods and services to the market. These are the ultimate outputs of a scientific knowledge system. Ideas flow from the critical, identifiable phases of (a) the discovery, (b) the development, (c) the…

  4. A bioinformatics knowledge discovery in text application for grid computing

    PubMed Central

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-01-01

    Background A fundamental activity in biomedical research is Knowledge Discovery, which has the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible infrastructure to tackle the intensive use of Information and Communication resources in life science. The goal of this work was to develop a software middleware solution in order to exploit the many knowledge discovery applications on scalable and distributed computing systems to achieve intensive use of ICT resources. Methods The development of a grid application for Knowledge Discovery in Text using a middleware-solution-based methodology is presented. The system must be able to take a user application model and process the jobs, with the aim of creating many parallel jobs to distribute on the computational nodes. Finally, the system must be aware of the computational resources available and their status, and must be able to monitor the execution of parallel jobs. These operative requirements led to the design of a middleware to be specialized using user application modules. It includes a graphical user interface in order to access a node search system, a load balancing system and a transfer optimizer to reduce communication costs. Results A middleware solution prototype and its performance evaluation in terms of the speed-up factor are shown. It was written in JAVA on Globus Toolkit 4 to build the grid infrastructure based on GNU/Linux computer grid nodes. A test was carried out and the results are shown for the named entity recognition search of symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. Conclusion In this paper we discuss the development of a grid application based on a middleware solution. It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a computation of Knowledge Discovery in Database was applied to the output produced by the KDT user module to extract new knowledge about symptom and pathology bio-entities. PMID:19534749

  5. A bioinformatics knowledge discovery in text application for grid computing.

    PubMed

    Castellano, Marcello; Mastronardi, Giuseppe; Bellotti, Roberto; Tarricone, Gianfranco

    2009-06-16

    A fundamental activity in biomedical research is Knowledge Discovery, which has the ability to search through large amounts of biomedical information such as documents and data. High-performance computational infrastructures, such as Grid technologies, are emerging as a possible infrastructure to tackle the intensive use of Information and Communication resources in life science. The goal of this work was to develop a software middleware solution in order to exploit the many knowledge discovery applications on scalable and distributed computing systems to achieve intensive use of ICT resources. The development of a grid application for Knowledge Discovery in Text using a middleware-solution-based methodology is presented. The system must be able to take a user application model and process the jobs, with the aim of creating many parallel jobs to distribute on the computational nodes. Finally, the system must be aware of the computational resources available and their status, and must be able to monitor the execution of parallel jobs. These operative requirements led to the design of a middleware to be specialized using user application modules. It includes a graphical user interface in order to access a node search system, a load balancing system and a transfer optimizer to reduce communication costs. A middleware solution prototype and its performance evaluation in terms of the speed-up factor are shown. It was written in JAVA on Globus Toolkit 4 to build the grid infrastructure based on GNU/Linux computer grid nodes. A test was carried out and the results are shown for the named entity recognition search of symptoms and pathologies. The search was applied to a collection of 5,000 scientific documents taken from PubMed. In this paper we discuss the development of a grid application based on a middleware solution. It has been tested on a knowledge discovery in text process to extract new and useful information about symptoms and pathologies from a large collection of unstructured scientific documents. As an example, a computation of Knowledge Discovery in Database was applied to the output produced by the KDT user module to extract new knowledge about symptom and pathology bio-entities.

  6. Health data and data governance.

    PubMed

    Hovenga, Evelyn J S; Grain, Heather

    2013-01-01

    Health is a knowledge industry, based on data collected to support care, service planning, financing and knowledge advancement. Increasingly there is a need to collect, retrieve and use health record information in an electronic format to provide greater flexibility, as this enables retrieval and display of data in multiple locations and formats irrespective of where the data were collected. Electronically maintained records require greater structure and consistency to achieve this. The use of data held in records generated in real time in clinical systems also has the potential to reduce the time it takes to gain knowledge, as there is less need to collect research specific information, this is only possible if data governance principles are applied. Connected devices and information systems are now generating huge amounts of data, as never before seen. An ability to analyse and mine very large amounts of data, "Big Data", provides policy and decision makers with new insights into varied aspects of work and information flow and operational business patterns and trends, and drives greater efficiencies, and safer and more effective health care. This enables decision makers to apply rules and guidance that have been developed based upon knowledge from many individual patient records through recognition of triggers based upon that knowledge. In clinical decision support systems information about the individual is compared to rules based upon knowledge gained from accumulated information of many to provide guidance at appropriate times in the clinical process. To achieve this the data in the individual system, and the knowledge rules must be represented in a compatible and consistent manner. This chapter describes data attributes; explains the difference between data and information; outlines the requirements for quality data; shows the relevance of health data standards; and describes how data governance impacts representation of content in systems and the use of that information.

  7. The neuron classification problem

    PubMed Central

    Bota, Mihail; Swanson, Larry W.

    2007-01-01

    A systematic account of neuron cell types is a basic prerequisite for determining the vertebrate nervous system global wiring diagram. With comprehensive lineage and phylogenetic information unavailable, a general ontology based on structure-function taxonomy is proposed and implemented in a knowledge management system, and a prototype analysis of select regions (including retina, cerebellum, and hypothalamus) is presented. The supporting Brain Architecture Knowledge Management System (BAMS) Neuron ontology is online and its user interface allows queries about terms and their definitions, classification criteria based on the original literature and “Petilla Convention” guidelines, hierarchies, and relations—with annotations documenting each ontology entry. Combined with three BAMS modules for neural regions, connections between regions and neuron types, and molecules, the Neuron ontology provides a general framework for physical descriptions and computational modeling of neural systems. The knowledge management system interacts with other web resources, is accessible in both XML and RDF/OWL, is extendible to the whole body, and awaits large-scale data population requiring community participation for timely implementation. PMID:17582506

  8. Ontological modelling of knowledge management for human-machine integrated design of ultra-precision grinding machine

    NASA Astrophysics Data System (ADS)

    Hong, Haibo; Yin, Yuehong; Chen, Xing

    2016-11-01

    Despite the rapid development of computer science and information technology, an efficient human-machine integrated enterprise information system for designing complex mechatronic products has yet to be fully accomplished, partly because of inharmonious communication among collaborators. Therefore, one challenge in human-machine integration is how to establish an appropriate knowledge management (KM) model to support the integration and sharing of heterogeneous product knowledge. To address the diversity of design knowledge, this article proposes an ontology-based model that achieves an unambiguous and normative representation of knowledge. First, an ontology-based human-machine integrated design framework is described, and corresponding ontologies and sub-ontologies are established according to different purposes and scopes. Second, a similarity-calculation-based ontology integration method composed of ontology mapping and ontology merging is introduced. An ontology-search-based knowledge sharing method is then developed. Finally, a case of human-machine integrated design of a large ultra-precision grinding machine is used to demonstrate the effectiveness of the method.

  9. Mechanisms Affecting the Sustainability and Scale-up of a System-Wide Numeracy Reform

    ERIC Educational Resources Information Center

    Bobis, Janette

    2011-01-01

    With deliberate system-level reform now being acted upon around the world, both successful and unsuccessful cases provide a rich source of knowledge from which we can learn to improve large-scale reform. Research surrounding the effectiveness of a theory-based system-wide numeracy reform operating in primary schools across Australia is examined to…

  10. The AI Bus architecture for distributed knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain

    1991-01-01

    The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture and its current implementation status as a Unix C++ library of reusable objects are described. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources and to each other while not violating their security.

  11. Expert system shell to reason on large amounts of data

    NASA Technical Reports Server (NTRS)

    Giuffrida, Giovanni

    1994-01-01

    Current database management systems (DBMSs) do not provide a sophisticated environment for developing rule-based expert system applications. Some newer DBMSs come with a rule mechanism of some sort; these are the active and deductive database systems. However, neither is featured richly enough to support a full rule-based implementation. On the other hand, current expert system shells do not provide any link with external databases: all the data are kept in the system's working memory, which is maintained in main memory. For some applications the limited size of the available working memory can constrain development. Typically these are applications that require reasoning over huge amounts of data, which do not fit into the computer's main memory. Moreover, in some cases these data may already be available in database systems and continuously updated while the expert system is running. This paper proposes an architecture which employs knowledge discovery techniques to reduce the amount of data to be stored in main memory; in this architecture a standard DBMS is coupled with a rule-based language. The data are stored in the DBMS. An interface between the two systems is responsible for inducing knowledge from the set of relations. Such induced knowledge is then transferred to the rule-based language's working memory.
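
    A minimal sketch of the coupling this record proposes, under assumed details: the raw rows stay in the DBMS (SQLite here), and only compact facts induced from the relations are loaded into the rule engine's working memory. The schema, threshold, and rule are illustrative, not from the paper.

        # Sketch: DBMS holds the bulk data; only induced summary facts enter
        # the rule system's working memory, keeping main-memory use small.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE readings (patient TEXT, temp REAL)")
        conn.executemany("INSERT INTO readings VALUES (?, ?)",
                         [("p1", 39.2), ("p1", 38.9), ("p2", 36.8)])

        # 'Interface' step: induce one compact fact per patient instead of
        # copying every row into working memory.
        working_memory = [
            {"patient": p, "avg_temp": avg}
            for p, avg in conn.execute(
                "SELECT patient, AVG(temp) FROM readings GROUP BY patient")
        ]

        # A single forward rule applied to the induced facts.
        for fact in working_memory:
            if fact["avg_temp"] > 38.0:          # illustrative threshold
                print(f"rule fired: {fact['patient']} has sustained fever")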

  12. The application of SSADM to modelling the logical structure of proteins.

    PubMed

    Saldanha, J; Eccles, J

    1991-10-01

    A logical design that describes the overall structure of proteins, together with a more detailed design describing secondary and some supersecondary structures, has been constructed using the computer-aided software engineering (CASE) tool, Auto-mate. Auto-mate embodies the philosophy of the Structured Systems Analysis and Design Method (SSADM) which enables the logical design of computer systems. Our design will facilitate the building of large information systems, such as databases and knowledge bases in the field of protein structure, by the derivation of system requirements from our logical model prior to producing the final physical system. In addition, the study has highlighted the ease of employing SSADM as a formalism in which to conduct the transferral of concepts from an expert into a design for a knowledge-based system that can be implemented on a computer (the knowledge-engineering exercise). It has been demonstrated how SSADM techniques may be extended for the purpose of modelling the constituent Prolog rules. This facilitates the integration of the logical system design model with the derived knowledge-based system.

  13. Knowledge base and sensor bus messaging service architecture for critical tsunami warning and decision-support

    NASA Astrophysics Data System (ADS)

    Sabeur, Z. A.; Wächter, J.; Middleton, S. E.; Zlatev, Z.; Häner, R.; Hammitzsch, M.; Loewe, P.

    2012-04-01

    The intelligent management of large volumes of environmental monitoring data for early tsunami warning requires the deployment of a robust and scalable service-oriented infrastructure, supported by an agile knowledge base for critical decision support. In the TRIDEC project (TRIDEC 2010-2013), a sensor observation service bus of the TRIDEC system is being developed for the advancement of complex tsunami event processing and management. Further, a dedicated TRIDEC system knowledge base is being implemented to enable on-demand access to semantically rich, OGC SWE compliant hydrodynamic observations and operationally oriented meta-information for multiple subscribers. TRIDEC decision support requires a scalable and agile real-time processing architecture which enables fast response to evolving subscribers' requirements as the tsunami crisis develops. This is also achieved with the support of intelligent processing services which specialise in multi-level fusion methods with relevance feedback and deep learning. The TRIDEC knowledge base development work, coupled with that of the generic sensor bus platform, will be presented to demonstrate advanced decision support with situation awareness in the context of tsunami early warning and crisis management.

  14. AiGERM: A logic programming front end for GERM

    NASA Technical Reports Server (NTRS)

    Hashim, Safaa H.

    1990-01-01

    AiGerm (Artificially Intelligent Graphical Entity Relation Modeler) is a relational database query and programming language front end for MCC (Mission Control Center)/STP's (Space Test Program) Germ (Graphical Entity Relational Modeling) system. It is intended as an add-on component of the Germ system to be used for navigating very large networks of information. It can also function as an expert system shell for prototyping knowledge-based systems. AiGerm provides an interface between the programming language and Germ.

  15. Framework Support For Knowledge-Based Software Development

    NASA Astrophysics Data System (ADS)

    Huseth, Steve

    1988-03-01

    The advent of personal engineering workstations has brought substantial information processing power to the individual programmer. Advanced tools and environment capabilities supporting the software lifecycle are just beginning to become generally available. However, many of these tools are addressing only part of the software development problem by focusing on rapid construction of self-contained programs by a small group of talented engineers. Additional capabilities are required to support the development of large programming systems where a high degree of coordination and communication is required among large numbers of software engineers, hardware engineers, and managers. A major player in realizing these capabilities is the framework supporting the software development environment. In this paper we discuss our research toward a Knowledge-Based Software Assistant (KBSA) framework. We propose the development of an advanced framework containing a distributed knowledge base that can support the data representation needs of tools, provide environmental support for the formalization and control of the software development process, and offer a highly interactive and consistent user interface.

  16. Advanced techniques for the storage and use of very large, heterogeneous spatial databases

    NASA Technical Reports Server (NTRS)

    Peuquet, Donna J.

    1987-01-01

    Progress is reported in the development of a prototype knowledge-based geographic information system. The overall purpose of this project is to investigate and demonstrate the use of advanced methods in order to greatly improve the capabilities of geographic information system technology in the handling of large, multi-source collections of spatial data in an efficient manner, and to make these collections of data more accessible and usable for the Earth scientist.

  17. The Impact of Collaborative and Individualized Student Response System Strategies on Learner Motivation, Metacognition, and Knowledge Transfer

    ERIC Educational Resources Information Center

    Jones, M. E.; Antonenko, P. D.; Greenwood, C. M.

    2012-01-01

    This study investigated the impact of collaborative and individualized student response system-based instruction on learner motivation, metacognition, and concept transfer in a large-enrolment undergraduate science course. Participants in the collaborative group responded to conceptual questions, discussed their responses in small groups, and…

  18. A Data Management System Integrating Web-Based Training and Randomized Trials

    ERIC Educational Resources Information Center

    Muroff, Jordana; Amodeo, Maryann; Larson, Mary Jo; Carey, Margaret; Loftin, Ralph D.

    2011-01-01

    This article describes a data management system (DMS) developed to support a large-scale randomized study of an innovative web-course that was designed to improve substance abuse counselors' knowledge and skills in applying a substance abuse treatment method (i.e., cognitive behavioral therapy; CBT). The randomized trial compared the performance…

  19. Applications of Artificial Intelligence to Information Search and Retrieval: The Development and Testing of an Intelligent Technical Information System.

    ERIC Educational Resources Information Center

    Harvey, Francis A.

    This paper describes the evolution and development of an intelligent information system, i.e., a knowledge base for steel structures being undertaken as part of the Technical Information Center for Steel Structures at Lehigh University's Center of Advanced Technology for Large Structural Systems (ATLSS). The initial development of the Technical…

  20. Knowledge-based reusable software synthesis system

    NASA Technical Reports Server (NTRS)

    Donaldson, Cammie

    1989-01-01

    The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.

  1. Will the future of knowledge work automation transform personalized medicine?

    PubMed

    Naik, Gauri; Bhide, Sanika S

    2014-09-01

    Today, we live in a world of 'information overload' which demands a high level of knowledge-based work. However, advances in computer hardware and software have opened possibilities to automate 'routine cognitive tasks' for knowledge processing. Engineering intelligent software systems that can process large data sets using unstructured commands and subtle judgments, and that have the ability to learn 'on the fly', is a significant step towards the automation of knowledge work. The applications of this technology to high-throughput genomic analysis, database updating, reporting of clinically significant variants, and diagnostic imaging are explored using case studies.

  2. A knowledge engineering approach to recognizing and extracting sequences of nucleic acids from scientific literature.

    PubMed

    García-Remesal, Miguel; Maojo, Victor; Crespo, José

    2010-01-01

    In this paper we present a knowledge engineering approach to automatically recognize and extract genetic sequences from scientific articles. To carry out this task, we use a preliminary recognizer based on a finite state machine to extract all candidate DNA/RNA sequences. The latter are then fed into a knowledge-based system that automatically discards false positives and refines noisy and incorrectly merged sequences. We created the knowledge base by manually analyzing different manuscripts containing genetic sequences. Our approach was evaluated using a test set of 211 full-text articles in PDF format containing 3134 genetic sequences. On this set, we achieved 87.76% precision and 97.70% recall. This method can facilitate different research tasks, including text mining, information extraction, and information retrieval research dealing with large collections of documents containing genetic sequences.
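
    The two-stage pipeline described above can be illustrated with a short sketch: a coarse finite-state recognizer (written here as a regular expression) proposes candidate sequences, and simple knowledge-based checks repair line-wrapped matches and discard false positives. The length threshold and cleanup rules are assumptions for illustration; the authors' actual knowledge base is far richer.

        # Stage 1: coarse finite-state recognition of candidate sequences.
        # Stage 2: knowledge-based refinement and false-positive filtering.
        import re

        # Runs of nucleotide letters, allowing whitespace breaks introduced
        # by PDF line-wrapping.
        CANDIDATE = re.compile(r"[ACGTUacgtu\s]{12,}")

        def refine(raw):
            seq = "".join(raw.split()).upper()   # undo line-wrapping
            if len(seq) < 12:                    # too short: likely noise
                return None
            if len(set(seq)) < 3:                # e.g. 'AAAA...' from tables
                return None
            return seq

        def extract_sequences(text):
            return [s for m in CANDIDATE.finditer(text)
                    if (s := refine(m.group())) is not None]

        text = "primer GATTACAGATT ACA GGTC was used; see AAAAAAAAAAAAAA in Table 2."
        print(extract_sequences(text))           # -> ['GATTACAGATTACAGGTC']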

  3. Supporting Knowledge Transfer in IS Deployment Projects

    NASA Astrophysics Data System (ADS)

    Schönström, Mikael

    To deploy a new information system is an expensive and complex task, and it seldom results in successful usage where the system adds strategic value to the firm (e.g. Sharma et al. 2003). It has been argued that innovation diffusion is a knowledge integration problem (Newell et al. 2000). Knowledge about business processes, deployment processes, information systems and technology is needed in a large-scale deployment of a corporate IS. These deployments can therefore to a large extent be argued to be a knowledge management (KM) problem. An effective deployment requires that knowledge about the system is effectively transferred to the target organization (Ko et al. 2005).

  4. Towards building a disease-phenotype knowledge base: extracting disease-manifestation relationship from literature

    PubMed Central

    Xu, Rong; Li, Li; Wang, QuanQiu

    2013-01-01

    Motivation: Systems approaches to studying phenotypic relationships among diseases are emerging as an active area of research for both novel disease gene discovery and drug repurposing. Currently, systematic study of disease phenotypic relationships on a phenome-wide scale is limited because large-scale machine-understandable disease–phenotype relationship knowledge bases are often unavailable. Here, we present an automatic approach to extract disease–manifestation (D-M) pairs (one specific type of disease–phenotype relationship) from the wide body of published biomedical literature. Data and Methods: Our method leverages external knowledge and limits the amount of human effort required. For the text corpus, we used 119 085 682 MEDLINE sentences (21 354 075 citations). First, we used D-M pairs from existing biomedical ontologies as prior knowledge to automatically discover D-M–specific syntactic patterns. We then extracted additional pairs from MEDLINE using the learned patterns. Finally, we analysed correlations between disease manifestations and disease-associated genes and drugs to demonstrate the potential of this newly created knowledge base in disease gene discovery and drug repurposing. Results: In total, we extracted 121 359 unique D-M pairs with a high precision of 0.924. Among the extracted pairs, 120 419 (99.2%) have not been captured in existing structured knowledge sources. We have shown that disease manifestations correlate positively with both disease-associated genes and drug treatments. Conclusions: The main contribution of our study is the creation of a large-scale and accurate D-M phenotype relationship knowledge base. This unique knowledge base, when combined with existing phenotypic, genetic and proteomic datasets, can have profound implications in our deeper understanding of disease etiology and in rapid drug repurposing. Availability: http://nlp.case.edu/public/data/DMPatternUMLS/ Contact: rxx@case.edu PMID:23828786
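
    As a rough illustration of the pattern-based extraction step (with a hand-written pattern standing in for the automatically learned ones), the sketch below anchors one lexico-syntactic pattern with seed terms and applies it to a sentence. The vocabularies and the pattern are illustrative assumptions, not the study's learned patterns.

        # Seed terms from an ontology anchor a lexico-syntactic pattern,
        # which is then applied to new sentences to extract D-M pairs.
        import re

        DISEASES = {"marfan syndrome", "scurvy"}                # assumed seeds
        MANIFESTATIONS = {"aortic dilatation", "gum bleeding"}  # assumed seeds

        # One learned-style pattern: "<M> is a ... manifestation of <D>"
        PATTERN = re.compile(
            r"(?P<m>[\w ]+?) is a (?:\w+ )*manifestation of (?P<d>[\w ]+)",
            re.IGNORECASE)

        def extract_dm_pairs(sentence):
            pairs = []
            for match in PATTERN.finditer(sentence):
                d = match.group("d").strip().lower()
                m = match.group("m").strip().lower()
                # keep pairs whose slots look like known entity types
                if d in DISEASES or m in MANIFESTATIONS:
                    pairs.append((d, m))
            return pairs

        print(extract_dm_pairs(
            "Aortic dilatation is a common manifestation of Marfan syndrome"))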

  5. Simulation Of Combat With An Expert System

    NASA Technical Reports Server (NTRS)

    Provenzano, J. P.

    1989-01-01

    Proposed expert system predicts outcomes of combat situations. Called COBRA (combat outcome based on rules for attrition), the system selects rules for mathematical modeling of losses and discrete events in combat according to previous experiences. Used with another software module known as the "Game". The Game/COBRA software system, consisting of the Game and COBRA modules, provides for both quantitative and qualitative aspects in simulations of battles. Although COBRA is intended for simulation of large-scale military exercises, the concepts embodied in it have much broader applicability. In industrial research, the knowledge-based system enables qualitative as well as quantitative simulations.

  6. Analysis of Nursing Clinical Decision Support Requests and Strategic Plan in a Large Academic Health System.

    PubMed

    Whalen, Kimberly; Bavuso, Karen; Bouyer-Ferullo, Sharon; Goldsmith, Denise; Fairbanks, Amanda; Gesner, Emily; Lagor, Charles; Collins, Sarah

    2016-01-01

    To understand requests for nursing Clinical Decision Support (CDS) interventions at a large integrated health system undergoing vendor-based EHR implementation, and to establish a process to guide both short-term implementation and long-term strategic goals to meet nursing CDS needs, we conducted an environmental scan of the current state of nursing CDS over three months. The environmental scan consisted of a literature review and an analysis of CDS requests received from across our health system. We identified existing high-priority CDS and paper-based tools used in nursing practice at our health system that guide decision-making. A total of 46 nursing CDS requests were received. Fifty-six percent (n=26) were specific to a clinical specialty; 22 percent (n=10) focused on facilitating clinical consults in the inpatient setting. "Risk Assessments/Risk Reduction/Promotion of Healthy Habits" (n=23) was the most requested High Priority Category for nursing CDS. A continuum of types of nursing CDS needs emerged using the Data-Information-Knowledge-Wisdom Conceptual Framework: 1) facilitating data capture, 2) meeting information needs, 3) guiding knowledge-based decision making, and 4) exposing analytics for wisdom-based clinical interpretation by the nurse. Identifying and prioritizing paper-based tools that can be modified into electronic CDS is a challenge. CDS strategy is an evolving process that relies on close collaboration and engagement with clinical sites for short-term implementation and should be incorporated into a long-term strategic plan that can be optimized and achieved over time. The Data-Information-Knowledge-Wisdom Conceptual Framework, in conjunction with the High Priority Categories established, may be a useful tool to guide a strategic approach for meeting short-term nursing CDS needs and aligning with the organizational strategic plan.

  7. Knowledge-acquisition tools for medical knowledge-based systems.

    PubMed

    Lanzola, G; Quaglini, S; Stefanelli, M

    1995-03-01

    Knowledge-based systems (KBS) have been proposed to solve a large variety of medical problems. A strategic issue for KBS development and maintenance is the effort required from both knowledge engineers and domain experts. The proposed solution is building efficient knowledge acquisition (KA) tools. This paper presents a set of KA tools we are developing within a European project called GAMES II. They have been designed after the formulation of an epistemological model of medical reasoning. The main goal is that of developing a computational framework which allows knowledge engineers and domain experts to interact cooperatively in developing a medical KBS. To this aim, a set of reusable software components is highly recommended. Their design was facilitated by the development of a methodology for KBS construction. It views this process as comprising two activities: tailoring the epistemological model to the specific medical task to be executed, and translating this model into a computational architecture in such a way that the connections between computational structures and their knowledge-level counterparts are maintained. The KA tools we developed are illustrated with examples from the behavior of a KBS we are building for the management of children with acute myeloid leukemia.

  8. Moving Knowledge Around: Strategies for Fostering Equity within Educational Systems

    ERIC Educational Resources Information Center

    Ainscow, Mel

    2012-01-01

    This paper describes and analyses the work of a large scale improvement project in England in order to find more effective ways of fostering equity within education systems. The project involved an approach based on an analysis of local context, and used processes of networking and collaboration in order to make better use of available expertise.…

  9. Association of Systemic Anaplastic Large Cell Lymphoma and Active Toxoplasmosis in a Child.

    PubMed

    Sayyahfar, Shirin; Karimi, Abdollah; Gharib, Atoosa; Fahimzad, Alireza

    2015-08-01

    Anaplastic large cell lymphoma is a subset of non-Hodgkin lymphoma and an unusual disease in children. Herein we report a 7-year-old girl with a large necrotic skin ulcer on the chest caused by the systemic form of anaplastic large cell lymphoma, with simultaneous active toxoplasmosis diagnosed by PCR on a lymph node specimen. There are a few reports suggesting a role for Toxoplasma infection in causing malignancies such as lymphoma in adults. To our knowledge, this is the first report of simultaneous systemic anaplastic large cell lymphoma and active toxoplasmosis documented by positive PCR on tissue biopsy in a child. This case report suggests paying more attention to accompanying Toxoplasma gondii infection as a probable cause of some types of lymphoma.

  10. Systems, methods and apparatus for verification of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Rash, James L. (Inventor); Gracinin, Denis (Inventor); Erickson, John D. (Inventor); Rouff, Christopher A. (Inventor); Hinchey, Michael G. (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments, domain knowledge is translated into a knowledge-based system. In some embodiments, a formal specification is derived from rules of a knowledge-based system, the formal specification is analyzed, and flaws in the formal specification are used to identify and correct errors in the domain knowledge, from which a knowledge-based system is translated.

  11. Improving Cyber-Security of Smart Grid Systems via Anomaly Detection and Linguistic Domain Knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ondrej Linda; Todd Vollmer; Milos Manic

    The planned large-scale deployment of smart grid network devices will generate a large amount of information exchanged over various types of communication networks. The implementation of these critical systems will require appropriate cyber-security measures. A network anomaly detection solution is considered in this work. In common network architectures multiple communication streams are simultaneously present, making it difficult to build an anomaly detection solution for the entire system. In addition, common anomaly detection algorithms require specification of a sensitivity threshold, which inevitably leads to a tradeoff between false positive and false negative rates. In order to alleviate these issues, this paper proposes a novel anomaly detection architecture. The designed system applies the previously developed network security cyber-sensor method to individual selected communication streams, allowing accurate normal network behavior models to be learned. Furthermore, the developed system dynamically adjusts the sensitivity threshold of each anomaly detection algorithm based on domain knowledge about the specific network system. It is proposed to model this domain knowledge using Interval Type-2 Fuzzy Logic rules, which linguistically describe the relationship between various features of the network communication and the possibility of a cyber attack. The proposed method was tested on an experimental smart grid system, demonstrating enhanced cyber-security.
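
    The threshold-adjustment idea can be sketched compactly. The code below is a deliberately simplified, type-1-style stand-in for the paper's Interval Type-2 Fuzzy Logic rules: linguistic memberships over one traffic feature shift a base sensitivity threshold up or down. The feature, memberships, and numbers are illustrative assumptions.

        # Simplified fuzzy-rule threshold adjustment: busier-looking streams
        # get a relaxed threshold (fewer false positives), quiet streams a
        # strict one (fewer false negatives).
        def tri(x, a, b, c):
            """Triangular membership function."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

        def sensitivity_threshold(packet_rate, base=0.5):
            low = tri(packet_rate, -1, 0, 50)
            med = tri(packet_rate, 25, 75, 125)
            high = tri(packet_rate, 100, 200, 10**6)
            # Weighted-average defuzzification over the rule consequents.
            adjust = (low * -0.2 + med * 0.0 + high * 0.2) / max(low + med + high, 1e-9)
            return base + adjust

        for rate in (10, 75, 300):
            print(f"rate={rate:4d} pkts/s -> threshold {sensitivity_threshold(rate):.2f}")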

  12. Technology and testing.

    PubMed

    Quellmalz, Edys S; Pellegrino, James W

    2009-01-02

    Large-scale testing of educational outcomes already benefits from technological applications that address logistics such as the development, administration, and scoring of tests, as well as the reporting of results. Innovative applications of technology also provide rich, authentic tasks that challenge the sorts of integrated knowledge, critical thinking, and problem solving seldom well addressed in paper-based tests. Such tasks can be used in both large-scale and classroom-based assessments. Balanced assessment systems can be developed that integrate curriculum-embedded, benchmark, and summative assessments across classroom, district, state, national, and international levels. We discuss here the potential of technology to launch a new era of integrated, learning-centered assessment systems.

  13. A relational data-knowledge base system and its potential in developing a distributed data-knowledge system

    NASA Technical Reports Server (NTRS)

    Rahimian, Eric N.; Graves, Sara J.

    1988-01-01

    A new approach to constructing a relational data-knowledge base system is described. The relational database is well suited for distribution due to its support for data fragmentation and fragmentation transparency. An example of a simple relational data-knowledge base is formulated, which may be generalized for use in developing a relational distributed data-knowledge base system. The efficiency and ease of application of such a data-knowledge base management system is briefly discussed. Also discussed are the potential of the developed model for sharing the data-knowledge base, as well as possible areas of difficulty in implementing the relational data-knowledge base management system.

  14. Multi-agent based control of large-scale complex systems employing distributed dynamic inference engine

    NASA Astrophysics Data System (ADS)

    Zhang, Daili

    Increasing societal demand for automation has led to considerable efforts to control large-scale complex systems, especially in the area of autonomous intelligent control methods. The control system of a large-scale complex system needs to satisfy four system level requirements: robustness, flexibility, reusability, and scalability. Corresponding to the four system level requirements, there arise four major challenges. First, it is difficult to get accurate and complete information. Second, the system may be physically highly distributed. Third, the system evolves very quickly. Fourth, emergent global behaviors of the system can be caused by small disturbances at the component level. The Multi-Agent Based Control (MABC) method as an implementation of distributed intelligent control has been the focus of research since the 1970s, in an effort to solve the above-mentioned problems in controlling large-scale complex systems. However, to the author's best knowledge, all MABC systems for large-scale complex systems with significant uncertainties are problem-specific and thus difficult to extend to other domains or larger systems. This situation is partly due to the control architecture of multiple agents being determined by agent to agent coupling and interaction mechanisms. Therefore, the research objective of this dissertation is to develop a comprehensive, generalized framework for the control system design of general large-scale complex systems with significant uncertainties, with the focus on distributed control architecture design and distributed inference engine design. A Hybrid Multi-Agent Based Control (HyMABC) architecture is proposed by combining hierarchical control architecture and module control architecture with logical replication rings. First, it decomposes a complex system hierarchically; second, it combines the components in the same level as a module, and then designs common interfaces for all of the components in the same module; third, replications are made for critical agents and are organized into logical rings. This architecture maintains clear guidelines for complexity decomposition and also increases the robustness of the whole system. Multiple Sectioned Dynamic Bayesian Networks (MSDBNs) as a distributed dynamic probabilistic inference engine, can be embedded into the control architecture to handle uncertainties of general large-scale complex systems. MSDBNs decomposes a large knowledge-based system into many agents. Each agent holds its partial perspective of a large problem domain by representing its knowledge as a Dynamic Bayesian Network (DBN). Each agent accesses local evidence from its corresponding local sensors and communicates with other agents through finite message passing. If the distributed agents can be organized into a tree structure, satisfying the running intersection property and d-sep set requirements, globally consistent inferences are achievable in a distributed way. By using different frequencies for local DBN agent belief updating and global system belief updating, it balances the communication cost with the global consistency of inferences. In this dissertation, a fully factorized Boyen-Koller (BK) approximation algorithm is used for local DBN agent belief updating, and the static Junction Forest Linkage Tree (JFLT) algorithm is used for global system belief updating. MSDBNs assume a static structure and a stable communication network for the whole system. 
However, for a real system, sub-Bayesian networks as nodes could be lost, and the communication network could be shut down due to partial damage in the system. Therefore, on-line and automatic MSDBNs structure formation is necessary for making robust state estimations and increasing survivability of the whole system. A Distributed Spanning Tree Optimization (DSTO) algorithm, a Distributed D-Sep Set Satisfaction (DDSSS) algorithm, and a Distributed Running Intersection Satisfaction (DRIS) algorithm are proposed in this dissertation. Combining these three distributed algorithms and a Distributed Belief Propagation (DBP) algorithm in MSDBNs makes state estimations robust to partial damage in the whole system. Combining the distributed control architecture design and the distributed inference engine design leads to a process of control system design for a general large-scale complex system. As applications of the proposed methodology, the control system design of a simplified ship chilled water system and a notional ship chilled water system have been demonstrated step by step. Simulation results not only show that the proposed methodology gives a clear guideline for control system design for general large-scale complex systems with dynamic and uncertain environment, but also indicate that the combination of MSDBNs and HyMABC can provide excellent performance for controlling general large-scale complex systems.
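
    The fully factorized Boyen-Koller step mentioned above admits a compact sketch: each variable keeps only a marginal belief, and the factored product of parent marginals pushes beliefs through the transition model (evidence conditioning is omitted for brevity). The two-variable network and CPTs below are illustrative assumptions, not from the dissertation.

        # Fully factorized BK-style belief update for a tiny two-variable DBN.
        import itertools
        import numpy as np

        # Binary variables X0, X1; parents of each lie in the previous slice.
        parents = {0: [0, 1], 1: [1]}
        # cpt[i][parent assignment] = P(X_i = 1 | parents)
        cpt = {0: {(0, 0): 0.1, (0, 1): 0.4, (1, 0): 0.6, (1, 1): 0.9},
               1: {(0,): 0.3, (1,): 0.9}}

        def bk_step(beliefs):
            """beliefs[i] = P(X_i = 1); returns next-slice factored beliefs."""
            new = np.zeros_like(beliefs)
            for i, pa in parents.items():
                for assignment in itertools.product((0, 1), repeat=len(pa)):
                    # Factored joint of the parents: product of marginals.
                    weight = np.prod([beliefs[p] if v else 1 - beliefs[p]
                                      for p, v in zip(pa, assignment)])
                    new[i] += weight * cpt[i][assignment]
            return new

        beliefs = np.array([0.5, 0.5])
        for t in range(3):
            beliefs = bk_step(beliefs)
            print(f"t={t + 1}: P(X0=1)={beliefs[0]:.3f}, P(X1=1)={beliefs[1]:.3f}")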

  15. Food Consumption and its impact on Cardiovascular Disease: Importance of Solutions focused on the globalized food system

    PubMed Central

    Anand, Sonia S.; Hawkes, Corinna; de Souza, Russell J.; Mente, Andrew; Dehghan, Mahshid; Nugent, Rachel; Zulyniak, Michael A.; Weis, Tony; Bernstein, Adam M.; Krauss, Ronald; Kromhout, Daan; Jenkins, David J.A.; Malik, Vasanti; Martinez-Gonzalez, Miguel A.; Mozafarrian, Dariush; Yusuf, Salim; Willett, Walter C.; Popkin, Barry M

    2015-01-01

    Based on a 3-day consensus, major scholars in the field created an in-depth review of current knowledge on the role of diet in CVD, the changing global food system and global dietary patterns, and potential policy solutions. Evidence from different countries and age/race/ethnicity/socioeconomic groups suggests that the health effects of foods, macronutrients, and dietary patterns on CVD are far more consistent than previously appreciated, though regional knowledge gaps are highlighted. Large gaps remain in knowledge about the association of macronutrients with CVD in low- and middle-income countries (LMIC), particularly in relation to dietary patterns. Our understanding of foods and macronutrients in relationship to CVD is broadly clear; however, major gaps exist both in dietary pattern research and in ways to change diets and food systems. Based on the current evidence, the traditional Mediterranean-type diet, including plant foods and emphasizing plant protein sources, provides a well-tested healthy dietary pattern to reduce CVD. PMID:26429085

  16. Planning bioinformatics workflows using an expert system.

    PubMed

    Chen, Xiaoling; Chang, Jeffrey T

    2017-04-15

    Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprising a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. https://github.com/jefftc/changlab. jeffrey.t.chang@uth.tmc.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
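
    Backward chaining from a goal data type to a workflow, as described above, can be sketched in a few lines. The rules below name a tool, its input types, and its output type; all tool and type names are illustrative assumptions and do not reflect BETSY's actual rule format.

        # Backward chaining: to produce the goal type, find a rule whose
        # output matches, recursively satisfy its inputs, then record the
        # tool invocation. Simplification: no backtracking over partially
        # expanded branches.
        RULES = [
            ("align_reads",   ["fastq", "reference"], "bam"),
            ("call_variants", ["bam", "reference"],   "vcf"),
            ("download_ref",  [],                     "reference"),
        ]

        def plan(goal, available, steps=None):
            """Return an ordered list of tool invocations producing `goal`."""
            steps = [] if steps is None else steps
            if goal in available:
                return steps
            for tool, inputs, output in RULES:
                if output == goal:
                    for needed in inputs:      # recursively satisfy inputs
                        if plan(needed, available, steps) is None:
                            break
                    else:
                        steps.append(tool)
                        available.add(output)
                        return steps
            return None                        # goal unreachable from the data

        print(plan("vcf", {"fastq"}))
        # -> ['download_ref', 'align_reads', 'call_variants']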

  17. Planning bioinformatics workflows using an expert system

    PubMed Central

    Chen, Xiaoling; Chang, Jeffrey T.

    2017-01-01

    Abstract Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprising a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: jeffrey.t.chang@uth.tmc.edu PMID:28052928

  18. Incorporating linguistic knowledge for learning distributed word representations.

    PubMed

    Wang, Yan; Liu, Zhiyuan; Sun, Maosong

    2015-01-01

    Combined with neural language models, distributed word representations achieve significant advantages in computational linguistics and text mining. Most existing models estimate distributed word vectors from large-scale data in an unsupervised fashion, which, however, does not take rich linguistic knowledge into consideration. Linguistic knowledge can be represented as either link-based knowledge or preference-based knowledge, and we propose knowledge regularized word representation models (KRWR) to incorporate this prior knowledge for learning distributed word representations. Experimental results demonstrate that our estimated word representations achieve better performance in the task of semantic relatedness ranking. This indicates that our methods can efficiently encode both prior knowledge from knowledge bases and statistical knowledge from large-scale text corpora into a unified word representation model, which will benefit many tasks in text mining.
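
    A minimal sketch of the regularization idea follows: a toy co-occurrence objective is augmented with a penalty that pulls the vectors of linked words together. This is not the paper's KRWR model; the corpus pairs, link pairs, and hyperparameters are illustrative assumptions.

        # Toy embedding training: a statistical co-occurrence term plus a
        # knowledge regularizer that shrinks the distance between vectors
        # of words linked in a knowledge base.
        import numpy as np

        rng = np.random.default_rng(0)
        vocab = {"doctor": 0, "physician": 1, "banana": 2}
        E = rng.normal(scale=0.1, size=(len(vocab), 8))   # word vectors

        cooccur = [("doctor", "physician"), ("doctor", "banana")]  # toy corpus
        links = [("doctor", "physician")]                          # prior knowledge
        lam, lr = 0.5, 0.1

        for _ in range(200):
            grad = np.zeros_like(E)
            # statistical part: push co-occurring words to high dot product
            for a, b in cooccur:
                i, j = vocab[a], vocab[b]
                s = 1.0 / (1.0 + np.exp(-E[i] @ E[j]))    # sigmoid score
                grad[i] -= (1 - s) * E[j]
                grad[j] -= (1 - s) * E[i]
            # knowledge regularizer: pull linked word vectors together
            for a, b in links:
                i, j = vocab[a], vocab[b]
                grad[i] += lam * (E[i] - E[j])
                grad[j] += lam * (E[j] - E[i])
            E -= lr * grad

        def cos(u, v):
            return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

        print("doctor~physician:", round(cos(E[0], E[1]), 3))
        print("doctor~banana:   ", round(cos(E[0], E[2]), 3))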

  19. Incorporating Linguistic Knowledge for Learning Distributed Word Representations

    PubMed Central

    Wang, Yan; Liu, Zhiyuan; Sun, Maosong

    2015-01-01

    Combined with neural language models, distributed word representations achieve significant advantages in computational linguistics and text mining. Most existing models estimate distributed word vectors from large-scale data in an unsupervised fashion, which, however, does not take rich linguistic knowledge into consideration. Linguistic knowledge can be represented as either link-based knowledge or preference-based knowledge, and we propose knowledge regularized word representation models (KRWR) to incorporate this prior knowledge for learning distributed word representations. Experimental results demonstrate that our estimated word representations achieve better performance in the task of semantic relatedness ranking. This indicates that our methods can efficiently encode both prior knowledge from knowledge bases and statistical knowledge from large-scale text corpora into a unified word representation model, which will benefit many tasks in text mining. PMID:25874581

  20. Extending TOPS: Ontology-driven Anomaly Detection and Analysis System

    NASA Astrophysics Data System (ADS)

    Votava, P.; Nemani, R. R.; Michaelis, A.

    2010-12-01

    Terrestrial Observation and Prediction System (TOPS) is a flexible modeling software system that integrates ecosystem models with frequent satellite and surface weather observations to produce ecosystem nowcasts (assessments of current conditions) and forecasts useful in natural resources management, public health and disaster management. We have been extending TOPS to include a capability for automated anomaly detection and analysis of both on-line (streaming) and off-line data. In order to best capture the knowledge about data hierarchies, Earth science models and implied dependencies between anomalies and occurrences of observable events such as urbanization, deforestation, or fires, we have developed an ontology to serve as a knowledge base. We can query the knowledge base and answer questions about dataset compatibilities, similarities and dependencies, so that we can, for example, automatically analyze similar datasets in order to verify a given anomaly occurrence in multiple data sources. We are further extending the system to go beyond anomaly detection towards reasoning about possible causes of anomalies, which are also encoded in the knowledge base as either learned or implied knowledge. This enables us to scale up the analysis by eliminating a large number of anomalies early in the processing, either through failure to verify them from other sources or by matching them directly with other observable events, without having to perform an extensive and time-consuming exploration and analysis. The knowledge is captured using the OWL ontology language, where connections are defined in a schema that is later extended by including specific instances of datasets and models. The information is stored using a Sesame server and is accessible through both a Java API and web services using the SeRQL and SPARQL query languages. Inference is provided by the OWLIM component integrated with Sesame.
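
    A small local sketch of the kind of query the record describes, using rdflib in place of a Sesame server (SPARQL is common to both). The tiny schema and instance data are illustrative assumptions, not the project's actual ontology.

        # Build a toy anomaly knowledge graph and ask which datasets could
        # corroborate anomalies attributed to deforestation.
        from rdflib import Graph, Literal, Namespace, RDF

        TOPS = Namespace("http://example.org/tops#")   # assumed namespace
        g = Graph()
        g.add((TOPS.ndvi_anomaly_42, RDF.type, TOPS.Anomaly))
        g.add((TOPS.ndvi_anomaly_42, TOPS.observedIn, Literal("MODIS NDVI")))
        g.add((TOPS.ndvi_anomaly_42, TOPS.possibleCause, TOPS.Deforestation))

        q = """
        PREFIX tops: <http://example.org/tops#>
        SELECT ?anomaly ?dataset WHERE {
            ?anomaly a tops:Anomaly ;
                     tops:possibleCause tops:Deforestation ;
                     tops:observedIn ?dataset .
        }
        """
        for row in g.query(q):
            print(row.anomaly, "seen in", row.dataset)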

  1. Employing Model-Based Reasoning in Interdisciplinary Research Teams: Evidence-Based Practices for Integrating Knowledge Across Systems

    NASA Astrophysics Data System (ADS)

    Pennington, D. D.; Vincent, S.

    2017-12-01

    The NSF-funded project "Employing Model-Based Reasoning in Socio-Environmental Synthesis (EMBeRS)" has developed a generic model for exchanging knowledge across disciplines that is based on findings from the cognitive, learning, social, and organizational sciences addressing teamwork in complex problem solving situations. Two ten-day summer workshops for PhD students from large, NSF-funded interdisciplinary projects working on a variety of water issues were conducted in 2016 and 2017, testing the model by collecting a variety of data, including surveys, interviews, audio/video recordings, material artifacts and documents, and photographs. This presentation will introduce the EMBeRS model, the design of workshop activities based on the model, and results from surveys and interviews with the participating students. Findings suggest that this approach is very effective for developing a shared, integrated research vision across disciplines, compared with activities typically provided by most large research projects, and that students believe the skills developed in the EMBeRS workshops are unique and highly desirable.

  2. Combining knowledge discovery from databases (KDD) and case-based reasoning (CBR) to support diagnosis of medical images

    NASA Astrophysics Data System (ADS)

    Stranieri, Andrew; Yearwood, John; Pham, Binh

    1999-07-01

    The development of data warehouses for the storage and analysis of very large corpora of medical image data represents a significant trend in health care and research. Amongst other benefits, the trend toward warehousing enables the use of techniques for automatically discovering knowledge from large and distributed databases. In this paper, we present an application design in which knowledge discovery from databases (KDD) techniques enhance the performance of the problem-solving strategy known as case-based reasoning (CBR) for the diagnosis of radiological images. The problem of diagnosing abnormalities of the cervical spine is used to illustrate the method. The design of a case-based medical image diagnostic support system has three essential characteristics. The first is a case representation that comprises textual descriptions of the image, visual features that are known to be useful for indexing images, and additional visual features to be discovered by data mining many existing images. The second characteristic of the approach presented here involves the development of a case base that comprises an optimal number and distribution of cases. The third characteristic involves the automatic discovery, using KDD techniques, of adaptation knowledge to enhance the performance of the case-based reasoner. Together, the three characteristics of our approach can overcome real-time efficiency obstacles that otherwise militate against applying CBR to the domain of medical image analysis.
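
    The retrieval step of such a case-based reasoner can be sketched briefly: stored cases carry numeric visual features plus a diagnosis, feature weights (which the KDD step would mine) define a similarity measure, and the nearest case supplies the candidate diagnosis. The features, weights, and cases below are illustrative assumptions.

        # Nearest-case retrieval over weighted numeric features.
        import math

        CASE_BASE = [
            {"disc_height": 0.9, "vertebral_alignment": 0.2, "dx": "normal"},
            {"disc_height": 0.4, "vertebral_alignment": 0.7, "dx": "degeneration"},
            {"disc_height": 0.3, "vertebral_alignment": 0.9, "dx": "subluxation"},
        ]
        WEIGHTS = {"disc_height": 0.6, "vertebral_alignment": 0.4}  # assumed mined

        def similarity(case, query):
            """Weighted inverse distance over shared numeric features."""
            d = math.sqrt(sum(w * (case[f] - query[f]) ** 2
                              for f, w in WEIGHTS.items()))
            return 1.0 / (1.0 + d)

        def retrieve(query):
            return max(CASE_BASE, key=lambda c: similarity(c, query))

        query = {"disc_height": 0.32, "vertebral_alignment": 0.85}
        best = retrieve(query)
        print(f"closest case -> {best['dx']} (sim={similarity(best, query):.2f})")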

  3. Development of a GIService based on spatial data mining for location choice of convenience stores in Taipei City

    NASA Astrophysics Data System (ADS)

    Jung, Chinte; Sun, Chih-Hong

    2006-10-01

    Motivated by the increasing accessibility of technology, more and more spatial data are being made digitally available. How to extract valuable knowledge from these large (spatial) databases is becoming increasingly important to businesses as well. It is essential to be able to analyze and utilize these large datasets, convert them into useful knowledge, and transmit them through GIS-enabled instruments and the Internet, conveying the key information to business decision-makers effectively and benefiting business entities. In this research, we combine the techniques of GIS, spatial decision support systems (SDSS), spatial data mining (SDM), and ArcGIS Server to achieve the following goals: (1) integrate databases from spatial and non-spatial datasets about the locations of businesses in Taipei, Taiwan; (2) use association rules, one of the SDM methods, to extract knowledge from the integrated databases; and (3) develop a Web-based SDSS GIService, built on ArcGIS Server, as a location-selection tool for businesses.
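
    As a rough illustration of the association-rule step, the sketch below counts support and confidence for "site attribute implies profitable store" rules over a toy table of candidate locations. The attributes and data are invented for illustration and are not from the study.

        # Support/confidence counting for simple association rules.
        rows = [  # each row: attributes observed at one store location
            {"near_mrt", "high_foot_traffic", "profitable"},
            {"near_mrt", "high_foot_traffic", "profitable"},
            {"near_mrt", "low_rent"},
            {"high_foot_traffic", "profitable"},
            {"low_rent"},
        ]

        def rule_stats(antecedent, consequent):
            a = sum(antecedent <= r for r in rows)            # antecedent count
            ac = sum(antecedent <= r and consequent in r for r in rows)
            support = ac / len(rows)
            confidence = ac / a if a else 0.0
            return support, confidence

        for ante in [{"near_mrt"}, {"high_foot_traffic"},
                     {"near_mrt", "high_foot_traffic"}]:
            s, c = rule_stats(ante, "profitable")
            print(f"{sorted(ante)} -> profitable: support={s:.2f}, confidence={c:.2f}")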

  4. Towards Modeling False Memory With Computational Knowledge Bases.

    PubMed

    Li, Justin; Kohanyi, Emma

    2017-01-01

    One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling. Copyright © 2016 Cognitive Science Society, Inc.
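
    A minimal sketch of the spreading-activation mechanism such a model uses: studied list words start fully active, one pass of spreading pushes activation along weighted association links, and the unstudied critical lure accumulates activation from all of its studied associates. The network and weights are illustrative assumptions.

        # Spreading activation over a toy semantic network for the DRM task.
        EDGES = {  # undirected association strengths
            ("bed", "sleep"): 0.7, ("rest", "sleep"): 0.6,
            ("dream", "sleep"): 0.8, ("bed", "rest"): 0.3,
        }

        def neighbors(node):
            for (a, b), w in EDGES.items():
                if a == node: yield b, w
                if b == node: yield a, w

        def spread(studied, decay=0.5):
            activation = {w: 1.0 for w in studied}   # studied words fully active
            for word in studied:                     # one pass of spreading
                for nbr, w in neighbors(word):
                    activation[nbr] = activation.get(nbr, 0.0) + decay * w
            return activation

        act = spread(["bed", "rest", "dream"])
        # The lure "sleep" accumulates activation from every studied associate,
        # the mechanism behind false recognition in the DRM paradigm.
        print(sorted(act.items(), key=lambda kv: -kv[1]))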

  5. An on-line expert system for diagnosing environmentally induced spacecraft anomalies using CLIPS

    NASA Technical Reports Server (NTRS)

    Lauriente, Michael; Rolincik, Mark; Koons, Harry C; Gorney, David

    1993-01-01

    A new rule-based expert system for diagnosing spacecraft anomalies is under development. The knowledge base consists of over two hundred rules and provides links to historical and environmental databases. Environmental causes considered are bulk charging, single event upsets (SEU), surface charging, and total radiation dose. The system's driver translates forward-chaining rules into a backward-chaining sequence, prompting the user for information pertinent to the causes considered. The use of heuristics frees the user from searching through large amounts of irrelevant information and allows the user to answer any question with varying degrees of confidence, or with 'unknown'. The expert system not only provides scientists with needed risk analysis and confidence estimates not available in standard numerical models or databases, but is also an effective learning tool. In addition, the architecture of the expert system allows easy additions to the knowledge base and the database. For example, new frames concerning orbital debris and ionospheric scintillation are being considered. The system currently runs on a MicroVAX and uses the C Language Integrated Production System (CLIPS).

  6. DOD Space Systems: Additional Knowledge Would Better Support Decisions about Disaggregating Large Satellites

    DTIC Science & Technology

    2014-10-01

    ... considering new approaches. According to Air Force Space Command, U.S. space systems face intentional and unintentional threats, which have increased ... life cycle costs.
    • Demand for more satellites may stimulate new entrants and competition to lower acquisition costs.
    • Smaller, less complex ...
    Fiscal constraints and growing threats to space systems have led DOD to consider alternatives for acquiring space-based capabilities, including ...

  7. Knowledge-based expert systems and a proof-of-concept case study for multiple sequence alignment construction and analysis.

    PubMed

    Aniba, Mohamed Radhouene; Siguenza, Sophie; Friedrich, Anne; Plewniak, Frédéric; Poch, Olivier; Marchler-Bauer, Aron; Thompson, Julie Dawn

    2009-01-01

    The traditional approach to bioinformatics analyses relies on independent task-specific services and applications, using different input and output formats, often idiosyncratic, and frequently not designed to inter-operate. In general, such analyses were performed by experts who manually verified the results obtained at each step in the process. Today, the amount of bioinformatics information continuously being produced means that handling the various applications used to study this information presents a major data management and analysis challenge to researchers. It is now impossible to manually analyse all this information and new approaches are needed that are capable of processing the large-scale heterogeneous data in order to extract the pertinent information. We review the recent use of integrated expert systems aimed at providing more efficient knowledge extraction for bioinformatics research. A general methodology for building knowledge-based expert systems is described, focusing on the unstructured information management architecture, UIMA, which provides facilities for both data and process management. A case study involving a multiple alignment expert system prototype called AlexSys is also presented.

  8. Knowledge-based expert systems and a proof-of-concept case study for multiple sequence alignment construction and analysis

    PubMed Central

    Aniba, Mohamed Radhouene; Siguenza, Sophie; Friedrich, Anne; Plewniak, Frédéric; Poch, Olivier; Marchler-Bauer, Aron

    2009-01-01

    The traditional approach to bioinformatics analyses relies on independent task-specific services and applications, using different input and output formats, often idiosyncratic, and frequently not designed to inter-operate. In general, such analyses were performed by experts who manually verified the results obtained at each step in the process. Today, the amount of bioinformatics information continuously being produced means that handling the various applications used to study this information presents a major data management and analysis challenge to researchers. It is now impossible to manually analyse all this information and new approaches are needed that are capable of processing the large-scale heterogeneous data in order to extract the pertinent information. We review the recent use of integrated expert systems aimed at providing more efficient knowledge extraction for bioinformatics research. A general methodology for building knowledge-based expert systems is described, focusing on the unstructured information management architecture, UIMA, which provides facilities for both data and process management. A case study involving a multiple alignment expert system prototype called AlexSys is also presented. PMID:18971242

  9. An integrated science-based methodology to assess potential ...

    EPA Pesticide Factsheets

    There is an urgent need for broad and integrated studies that address the risks of engineered nanomaterials (ENMs) along the different endpoints of the society, environment, and economy (SEE) complex adaptive system. This article presents an integrated science-based methodology to assess the potential risks of engineered nanomaterials. To achieve the study objective, two major tasks are accomplished, knowledge synthesis and algorithmic computational methodology. The knowledge synthesis task is designed to capture “what is known” and to outline the gaps in knowledge from an ENMs risk perspective. The algorithmic computational methodology is geared toward the provision of decisions and an understanding of the risks of ENMs along different endpoints for the constituents of the SEE complex adaptive system. The approach presented herein allows for addressing the formidable task of assessing the implications and risks of exposure to ENMs, with the long-term goal to build a decision-support system to guide key stakeholders in the SEE system towards building sustainable ENMs and nano-enabled products. The following specific aims are formulated to achieve the study objective: (1) to propose a system of systems (SoS) architecture that builds a network management among the different entities in the large SEE system to track the flow of ENMs emission, fate and transport from the source to the receptor; (2) to establish a staged approach for knowledge synthesis methodology ...

  10. Mechanical Transformation of Task Heuristics into Operational Procedures

    DTIC Science & Technology

    1981-04-14

    Introduction: A central theme of recent research in artificial intelligence is that intelligent task performance requires large amounts of knowledge... [garbled OCR of a Lisp hearts-playing example omitted] ...Production rules as a representation for a knowledge based consultation system. Artificial Intelligence 8:15-45, Spring, 1977. [Davis 77b] R. Davis

  11. On the accuracy of modelling the dynamics of large space structures

    NASA Technical Reports Server (NTRS)

    Diarra, C. M.; Bainum, P. M.

    1985-01-01

    Proposed space missions will require large-scale, lightweight, space-based structural systems. Large space structure technology (LSST) systems will have to accommodate, among others: ocean data systems; electronic mail systems; large multibeam antenna systems; and space-based solar power systems. The structures are to be delivered into orbit by the space shuttle. Because of their inherent size, modelling techniques and scaling algorithms must be developed so that system performance can be predicted accurately prior to launch and assembly. When the size and weight-to-area ratio of proposed LSST systems dictate that the entire system be considered flexible, there are two basic modelling methods which can be used. The first is a continuum approach, a mathematical formulation for predicting the motion of a general orbiting flexible body, in which elastic deformations are considered small compared with characteristic body dimensions. This approach is based on a priori knowledge of the frequencies and shape functions of all modes included within the system model. Alternatively, finite element techniques can be used to model the entire structure as a system of lumped masses connected by a series of (restoring) springs and possibly dampers. In addition, a computational algorithm was developed to evaluate the coefficients of the various coupling terms in the equations of motion as applied to the finite element model of the Hoop/Column.
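
    The lumped-mass formulation mentioned above reduces to the standard matrix equations of structural dynamics, given here as textbook background rather than as material from the paper:

        M \ddot{q}(t) + C \dot{q}(t) + K q(t) = F(t), \qquad K \phi_i = \omega_i^2 M \phi_i,

    where M, C and K are the mass, damping and stiffness matrices assembled from the lumped masses, dampers and springs, q(t) collects the nodal displacements, and the eigenproblem on the right yields the modal frequencies and shape functions that the continuum approach must know a priori.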

  12. Accessibility Principles for Reading Assessments

    ERIC Educational Resources Information Center

    Thurlow, Martha L.; Laitusis, Cara Cahalan; Dillon, Deborah R.; Cook, Linda L.; Moen, Ross E.; Abedi, Jamal; O'Brien, David G.

    2009-01-01

    Within the context of standards-based educational systems, states are using large-scale reading assessments to help ensure that all children have the opportunity to learn essential knowledge and skills. The challenge for developers of accessible reading assessments is to develop assessments that measure only those student characteristics that are…

  13. Co-production of knowledge: An Inuit Indigenous Knowledge perspective

    NASA Astrophysics Data System (ADS)

    Daniel, R.; Behe, C.

    2017-12-01

    A "co-production of knowledge" approach brings together different knowledge systems while building equitable and collaborative partnerships from `different ways of knowing.' Inuit Indigenous Knowledge is a systematic way of thinking applied to phenomena across biological, physical, cultural and spiritual systems; rooted with a holistic understanding of ecosystems (ICC Alaska 2016). A holistic image of Arctic environmental change is attained by bringing Indigenous Knowledge (IK) holders and scientists together through a co-production of knowledge framework. Experts from IK and science should be involved together from the inception of a project. IK should be respected as its own knowledge system and should not be translated into science. A co-production of knowledge approach is important in developing adaptation policies and practices, for sustainability and to address biodiversity conservation (Daniel et al. 2016). Co-production of knowledge is increasingly being recognized by the scientific community at-large. However, in many instances the concept is being incorrectly applied. This talk will build on the important components of co-production of knowledge from an Inuit perspective and specifically IK. In this presentation we will differentiate the co-production of knowledge from a multi-disciplinary approach or multi-evidence based decision-making. We underscore the role and value of different knowledge systems with different methodologies and the need for collaborative approaches in identifying research questions. We will also provide examples from our experiences with Indigenous communities and scientists in the Arctic. References: Inuit Circumpolar Council of Alaska. 2016. Alaskan Inuit Food Security Conceptual Framework: How to Assess the Arctic From An Inuit Perspective, 201pp. Daniel, R., C. Behe, J. Raymond-Yakoubian, E. Krummel, and S. Gearhead. Arctic Observing Summit White Paper Synthesis, Theme 6: Interfacing Indigenous Knowledge, Community-based Monitoring and Scientific Methods for Sustained Arctic Observations. http://www.arcticobservingsummit.org/sites/arcticobservingsummit.org/files/Daniel_Laing_Kielsen%20Holm_et_al-AOS2016-Theme-6-IK-CBM-Synthesis-updated-2016-04.pdf

  14. ECO: A Framework for Entity Co-Occurrence Exploration with Faceted Navigation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halliday, K. D.

    2010-08-20

    Even as highly structured databases and semantic knowledge bases become more prevalent, a substantial amount of human knowledge is reported as written prose. Typical textual reports, such as news articles, contain information about entities (people, organizations, and locations) and their relationships. Automatically extracting such relationships from large text corpora is a key component of corporate and government knowledge bases. The primary goal of the ECO project is to develop a scalable framework for extracting and presenting these relationships for exploration using an easily navigable faceted user interface. ECO uses entity co-occurrence relationships to identify related entities. The system aggregates and indexes information on each entity pair, allowing the user to rapidly discover and mine relational information.

  15. Unveiling health attitudes and creating good-for-you foods: the genomics metaphor, consumer innovative web-based technologies.

    PubMed

    Moskowitz, H R; German, J B; Saguy, I S

    2005-01-01

    This article presents an integrated analysis of three emerging knowledge bases in the nutrition and consumer products industries, and how they may affect the food industry. These knowledge bases produce new vistas for corporate product development, especially with respect to those foods that are positioned as 'good for you.' Couched within the current thinking of state-of-the-art knowledge and information, this article highlights how today's thinking about accelerated product development can be introduced into the food and health industries to complement these three research areas. The three knowledge bases are: (1) the genomics revolution, which has opened new insights into understanding the interactions of the personal needs of individual consumers with nutritionally relevant components of foods; (2) the investigation of food choice through scientific studies; and (3) the development of large-scale databases (mega-studies) about the consumer mind. These knowledge bases, combined with new methods to understand the consumer through research, make possible more focused product development. The confluence of trends outlined in this article provides the corporation with the beginnings of a new path to a knowledge-based, principles-grounded product-development system. The approaches hold the potential to create foods based upon people's nutritional requirements combined with their individual preferences. Integrating these emerging knowledge areas with new consumer research techniques may well reshape how the food industry develops new products to satisfy consumer needs and wants.

  16. Knowledge base rule partitioning design for CLIPS

    NASA Technical Reports Server (NTRS)

    Mainardi, Joseph D.; Szatkowski, G. P.

    1990-01-01

    This paper describes a knowledge base (KB) partitioning approach to solve the problem of real-time performance using the CLIPS AI shell when the KB contains large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The main objective of the Expert System advanced development project (ADP-2302) is to provide robust systems that respond to new data frames at 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance without undue impact during load and unload cycles. The second objective is to produce highly reliable intelligent systems, which requires simple and automated approaches to the KB verification and validation (V&V) task. Partitioning the KB reduces overall rule interaction complexity. Reduced interaction simplifies the necessary V&V testing by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading the rules that directly apply to the current condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests of real-time performance for specific machines under real-time operating systems have not been completed, but are planned as part of the analysis process to validate the design.
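
    A minimal sketch of the 'block' load/unload idea, with plain Python standing in for CLIPS constructs (all names and rules are illustrative):

        from collections import namedtuple

        Rule = namedtuple("Rule", ["name", "condition", "action"])

        class PartitionedKB:
            """Only rules in currently loaded blocks take part in matching,
            keeping the match network small for the active mission phase."""

            def __init__(self):
                self.blocks = {}  # block name -> list of rules

            def load_block(self, name, rules):
                self.blocks[name] = rules      # e.g. on entering 'pre-launch'

            def unload_block(self, name):
                self.blocks.pop(name, None)    # strip rules no longer relevant

            def fire(self, fact):
                for rules in self.blocks.values():
                    for rule in rules:
                        if rule.condition(fact):
                            rule.action(fact)

        kb = PartitionedKB()
        kb.load_block("pre-launch", [Rule("tank-pressure-check",
                                          lambda f: f.get("tank_psi", 0) > 50,
                                          lambda f: print("vent tank"))])
        kb.fire({"tank_psi": 60})  # prints 'vent tank'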

  17. The Intelligent Program Editor: A Knowledge Based System for Supporting Program and Documentation Maintenance.

    DTIC Science & Technology

    1983-03-01

    As an illustration of this fact, the collection of tools designed to alleviate these maintenance costs for large systems typically... provide knowledge which other systems (such as the IP) can employ; analysis and manipulation tools that... [garbled OCR of an example editing session omitted] In the second approach, the user identifies a collection of items

  18. Knowledge Discovery, Integration and Communication for Extreme Weather and Flood Resilience Using Artificial Intelligence: Flood AI Alpha

    NASA Astrophysics Data System (ADS)

    Demir, I.; Sermet, M. Y.

    2016-12-01

    Nobody is immune from extreme events or natural hazards that can lead to large-scale consequences for the nation and public. One of the solutions to reduce the impacts of extreme events is to invest in improving resilience with the ability to better prepare, plan, recover, and adapt to disasters. The National Research Council (NRC) report discusses the topic of how to increase resilience to extreme events through a vision of a resilient nation in the year 2030. The report highlights the importance of data, information, and the gaps and knowledge challenges that need to be addressed, and suggests that every individual have access to risk and vulnerability information to make their communities more resilient. This abstract presents our project on developing a resilience framework for flooding to improve societal preparedness, with the following objectives: (a) develop a generalized ontology for extreme events with primary focus on flooding; (b) develop a knowledge engine with voice recognition, artificial intelligence, natural language processing, and an inference engine. The knowledge engine will utilize the flood ontology and concepts to connect user input to relevant knowledge discovery outputs on flooding; (c) develop a data acquisition and processing framework from existing environmental observations, forecast models, and social networks. The system will utilize the framework, capabilities and user base of the Iowa Flood Information System (IFIS) to populate and test the system; (d) develop a communication framework to support user interaction and delivery of information to users. The interaction and delivery channels will include voice and text input via a web-based system (e.g. IFIS), agent-based bots (e.g. Microsoft Skype, Facebook Messenger), smartphone and augmented reality applications (e.g. smart assistant), and automated web workflows (e.g. IFTTT, CloudWork) to open knowledge discovery for flooding to thousands of community-extensible web workflows.

  19. GalenOWL: Ontology-based drug recommendations discovery

    PubMed Central

    2012-01-01

    Background Identification of drug-drug and drug-disease interactions can pose a difficult problem to cope with, as the increasingly large number of available drugs, coupled with ongoing research activities in the pharmaceutical domain, makes the task of discovering relevant information difficult. Although international standards, such as the ICD-10 classification and the UNII registration, have been developed in order to enable efficient knowledge sharing, medical staff needs to be constantly updated in order to effectively discover drug interactions before prescription. The use of Semantic Web technologies has been proposed in earlier works in order to tackle this problem. Results This work presents a semantic-enabled online service, named GalenOWL, capable of offering real-time drug-drug and drug-disease interaction discovery. For enabling this kind of service, medical information and terminology had to be translated to ontological terms and appropriately coupled with medical knowledge of the field. International standards such as the aforementioned ICD-10 and UNII provide the backbone of the common representation of medical data, while the medical knowledge of drug interactions is represented by a rule base which makes use of the aforementioned standards. Details of the system architecture are presented, along with an outline of the difficulties that had to be overcome. A comparison of the developed ontology-based system with a similar system developed using a traditional business logic rule engine is performed, giving insights on the advantages and drawbacks of both implementations. Conclusions The use of Semantic Web technologies has been found to be a good match for developing drug recommendation systems. Ontologies can effectively encapsulate medical knowledge, and rule-based reasoning can capture and encode the drug interaction knowledge. PMID:23256945
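
    Stripped of the ontology machinery, the underlying rule check can be sketched as follows (the codes and the single rule are placeholders invented for the sketch, not GalenOWL content):

        # Illustrative rule base keyed on ingredient (UNII) and diagnosis (ICD-10) codes.
        INTERACTION_RULES = [
            {"drug": "UNII:EXAMPLE01", "contraindicated_with": "ICD-10:N18",
             "message": "hypothetical rule: avoid this drug in chronic kidney disease"},
        ]

        def check_prescription(drug_code, patient_diagnoses):
            """Return the warnings of every rule the prescription would violate."""
            return [rule["message"] for rule in INTERACTION_RULES
                    if rule["drug"] == drug_code
                    and rule["contraindicated_with"] in patient_diagnoses]

        print(check_prescription("UNII:EXAMPLE01", {"ICD-10:N18", "ICD-10:I10"}))

    The ontology-based version expresses the same logic declaratively, so the reasoner, rather than hand-written loops, finds the rule matches.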

  20. A Hybrid Neuro-Fuzzy Model For Integrating Large Earth-Science Datasets

    NASA Astrophysics Data System (ADS)

    Porwal, A.; Carranza, J.; Hale, M.

    2004-12-01

    A GIS-based hybrid neuro-fuzzy approach to integration of large earth-science datasets for mineral prospectivity mapping is described. It implements a Takagi-Sugeno type fuzzy inference system in the framework of a four-layered feed-forward adaptive neural network. Each unique combination of the datasets is considered a feature vector whose components are derived by knowledge-based ordinal encoding of the constituent datasets. A subset of feature vectors with a known output target vector (i.e., unique conditions known to be associated with either a mineralized or a barren location) is used for the training of an adaptive neuro-fuzzy inference system. Training involves iterative adjustment of parameters of the adaptive neuro-fuzzy inference system using a hybrid learning procedure for mapping each training vector to its output target vector with minimum sum of squared error. The trained adaptive neuro-fuzzy inference system is used to process all feature vectors. The output for each feature vector is a value that indicates the extent to which a feature vector belongs to the mineralized class or the barren class. These values are used to generate a prospectivity map. The procedure is demonstrated by an application to regional-scale base metal prospectivity mapping in a study area located in the Aravalli metallogenic province (western India). A comparison of the hybrid neuro-fuzzy approach with pure knowledge-driven fuzzy and pure data-driven neural network approaches indicates that the former offers a superior method for integrating large earth-science datasets for predictive spatial mathematical modelling.
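
    The Takagi-Sugeno inference step at the core of such a system is compact enough to sketch directly (a generic first-order formulation, not the paper's trained model): each rule fires with a strength equal to the product of its membership grades, and the crisp output is the firing-strength-weighted average of the rules' linear outputs.

        import numpy as np

        def takagi_sugeno(x, rules):
            """First-order Takagi-Sugeno inference over input vector x.
            Each rule is (membership_fns, coeffs) with one membership
            function per input component and coeffs = [bias, w1, w2, ...]."""
            weights, outputs = [], []
            for memberships, coeffs in rules:
                w = np.prod([m(xi) for m, xi in zip(memberships, x)])
                weights.append(w)
                outputs.append(coeffs[0] + float(np.dot(coeffs[1:], x)))
            weights = np.asarray(weights)
            return float(np.dot(weights, outputs) / weights.sum())

        gauss = lambda c, s: (lambda v: np.exp(-((v - c) ** 2) / (2 * s ** 2)))
        rules = [([gauss(0.2, 0.1), gauss(0.8, 0.2)], [0.1, 1.0, -0.5]),
                 ([gauss(0.7, 0.2), gauss(0.3, 0.2)], [0.9, -0.2, 0.4])]
        print(takagi_sugeno([0.4, 0.6], rules))

    In the neuro-fuzzy setting, the membership parameters and linear coefficients are exactly what the adaptive network layers tune during training.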

  1. Collaborative filtering to improve navigation of large radiology knowledge resources.

    PubMed

    Kahn, Charles E

    2005-06-01

    Collaborative filtering is a knowledge-discovery technique that can help guide readers to items of potential interest based on the experience of prior users. This study sought to determine the impact of collaborative filtering on navigation of a large, Web-based radiology knowledge resource. Collaborative filtering was applied to a collection of 1,168 radiology hypertext documents available via the Internet. An item-based collaborative filtering algorithm identified each document's six most closely related documents based on 248,304 page views in an 18-day period. Documents were amended to include links to their related documents, and use was analyzed over the next 5 days. The mean number of documents viewed per visit increased from 1.57 to 1.74 (P < 0.0001). Collaborative filtering can increase a radiology information resource's utilization and can improve its usefulness and ease of navigation. The technique holds promise for improving navigation of large Internet-based radiology knowledge resources.
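
    The item-based co-occurrence computation the study describes can be sketched as follows (session handling and scoring are simplified; the top-6 cutoff matches the study's setup):

        from collections import defaultdict
        from itertools import combinations

        def related_documents(visits, top_k=6):
            """Score documents as related when they are viewed in the same
            visit, then keep each document's top_k co-occurrence partners."""
            counts = defaultdict(lambda: defaultdict(int))
            for session in visits:                 # one set of document ids per visit
                for a, b in combinations(sorted(set(session)), 2):
                    counts[a][b] += 1
                    counts[b][a] += 1
            return {doc: sorted(nbrs, key=nbrs.get, reverse=True)[:top_k]
                    for doc, nbrs in counts.items()}

        links = related_documents([{"chest-ct", "pe-protocol"},
                                   {"chest-ct", "pe-protocol", "d-dimer"}])
        print(links["chest-ct"])  # ['pe-protocol', 'd-dimer']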

  2. Inductive knowledge acquisition experience with commercial tools for space shuttle main engine testing

    NASA Technical Reports Server (NTRS)

    Modesitt, Kenneth L.

    1990-01-01

    Since 1984, an effort has been underway at Rocketdyne, manufacturer of the Space Shuttle Main Engine (SSME), to automate much of the analysis procedure conducted after engine test firings. Previously published articles at national and international conferences have contained the context of and justification for this effort. Here, progress is reported in building the full system, including the extensions of integrating large databases with the system, known as Scotty. Inductive knowledge acquisition has proven itself to be a key factor in the success of Scotty. The combination of a powerful inductive expert system building tool (ExTran), a relational data base management system (Reliance), and software engineering principles and Computer-Assisted Software Engineering (CASE) tools makes for a practical, useful and state-of-the-art application of an expert system.
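
    Inductive knowledge acquisition of this kind derives rules from classified examples rather than from interviews; a modern stand-in for ExTran's decision-tree induction makes the idea concrete (the feature names and values below are invented for the sketch):

        from sklearn.tree import DecisionTreeClassifier, export_text

        # Toy post-test-firing records: [chamber pressure, mixture ratio]
        # with the analyst's verdict as the class label (values illustrative).
        X = [[3100, 0.92], [3150, 0.95], [2800, 0.60], [2750, 0.55]]
        y = ["nominal", "nominal", "anomaly", "anomaly"]

        tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
        print(export_text(tree, feature_names=["chamber_pressure", "mixture_ratio"]))

    The printed tree is directly readable as if-then rules, the form in which induced knowledge is typically reviewed and corrected by domain experts.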

  3. Genomics Portals: integrative web-platform for mining genomics data.

    PubMed

    Shinde, Kaustubh; Phatak, Mukta; Freudenberg, Johannes M; Chen, Jing; Li, Qian; Joshi, Vineet K; Hu, Zhen; Ghosh, Krishnendu; Meller, Jaroslaw; Medvedovic, Mario

    2010-01-13

    A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold a tremendous potential for gaining new insights into functioning of living systems. Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc), and the integration with an extensive knowledge base that can be used in such analysis. The integrated access to primary genomics data, functional knowledge and analytical tools makes Genomics Portals platform a unique tool for interpreting results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals backend databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org.

  4. Genomics Portals: integrative web-platform for mining genomics data

    PubMed Central

    2010-01-01

    Background A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold a tremendous potential for gaining new insights into functioning of living systems. Results Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc), and the integration with an extensive knowledge base that can be used in such analysis. Conclusion The integrated access to primary genomics data, functional knowledge and analytical tools makes Genomics Portals platform a unique tool for interpreting results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals backend databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org. PMID:20070909

  5. Fuzzy logic techniques for rendezvous and docking of two geostationary satellites

    NASA Technical Reports Server (NTRS)

    Ortega, Guillermo

    1995-01-01

    Large assemblies in space require the ability to manage rendezvous and docking operations. In the future, these techniques will be required for the gradual build-up of big telecommunication platforms in geostationary orbit. The paper discusses the use of fuzzy logic to model and implement a control system for the docking/berthing of two satellites in geostationary orbit. The system, mounted in a chaser vehicle, determines the actual state of both satellites and generates torques to execute maneuvers to establish the structural latching. The paper describes the proximity operations to collocate the two satellites in the same orbital window, the fuzzy guidance and navigation of the chaser approaching the target, and the final fuzzy berthing. The fuzzy logic system represents a knowledge-based controller that realizes the closed-loop operations autonomously, replacing the conventional control algorithms. The goal is to produce smooth control actions in the proximity of the target and during the docking to avoid disturbance torques in the final assembly orbit. The knowledge of the fuzzy controller consists of a data base of rules and the definitions of the fuzzy sets. The knowledge of an experienced spacecraft controller is captured into a set of rules forming the Rules Data Base.

  6. An expert system for diagnosing environmentally induced spacecraft anomalies

    NASA Technical Reports Server (NTRS)

    Rolincik, Mark; Lauriente, Michael; Koons, Harry C.; Gorney, David

    1992-01-01

    A new rule-based, machine-independent analytical tool was designed for diagnosing spacecraft anomalies using an expert system. Expert systems provide an effective method for saving knowledge, allow computers to sift through large amounts of data pinpointing significant parts, and, most importantly, use heuristics in addition to algorithms, which allow approximate reasoning and inference and the ability to attack problems not rigidly defined. The knowledge base consists of over two hundred (200) rules and provides links to historical and environmental databases. The environmental causes considered are bulk charging, single event upsets (SEU), surface charging, and total radiation dose. The system's driver translates forward chaining rules into a backward chaining sequence, prompting the user for information pertinent to the causes considered. The use of heuristics frees the user from searching through large amounts of irrelevant information and allows the user to input partial information (varying degrees of confidence in an answer) or 'unknown' to any question. The modularity of the expert system allows for easy updates and modifications. It not only provides scientists with needed risk analysis and confidence not found in algorithmic programs, but is also an effective learning tool, and the window implementation makes it very easy to use. The system currently runs on a MicroVAX II at Goddard Space Flight Center (GSFC). The inference engine used is NASA's C Language Integrated Production System (CLIPS).

  7. Computer-assisted education and interdisciplinary breast cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Whatmough, Pamela; Gale, Alastair G.; Wilson, A. R. M.

    1996-04-01

    The diagnosis of breast disease for screening or symptomatic women is largely arrived at by a multi-disciplinary team. We report work on the development and assessment of an interdisciplinary computer-based learning system to support the diagnosis of this disease. The diagnostic process is first modelled from different viewpoints and then appropriate knowledge structures pertinent to the domains of the radiologist, pathologist and surgeon are depicted. Initially, the underlying inter-relationships of the mammographic diagnostic approach were detailed; this aspect is largely considered here. Ultimately a system is envisaged which will link these specialties and act as a diagnostic aid as well as a multi-media educational system.

  8. Expert system for computer-assisted annotation of MS/MS spectra.

    PubMed

    Neuhauser, Nadin; Michalski, Annette; Cox, Jürgen; Mann, Matthias

    2012-11-01

    An important step in mass spectrometry (MS)-based proteomics is the identification of peptides by their fragment spectra. Regardless of the identification score achieved, almost all tandem-MS (MS/MS) spectra contain remaining peaks that are not assigned by the search engine. These peaks may be explainable by human experts, but the scale of modern proteomics experiments makes this impractical. In computer science, Expert Systems are a mature technology to implement a list of rules generated by interviews with practitioners. We here develop such an Expert System, making use of literature knowledge as well as a large body of high mass accuracy and pure fragmentation spectra. Interestingly, we find that even with high mass accuracy data, rule sets can quickly become too complex, leading to over-annotation. Therefore we establish a rigorous false discovery rate, calculated by random insertion of peaks from a large collection of other MS/MS spectra, and use it to develop an optimized knowledge base. This rule set correctly annotates almost all peaks of medium or high abundance. For high resolution HCD data, median intensity coverage of fragment peaks in MS/MS spectra increases from 58% by search engine annotation alone to 86%. The resulting annotation performance surpasses a human expert, especially on complex spectra such as those of larger phosphorylated peptides. Our system is also applicable to high resolution collision-induced dissociation data. It is available both as a part of MaxQuant and via a webserver that only requires an MS/MS spectrum and the corresponding peptide sequence, and which outputs publication-quality, annotated MS/MS spectra (www.biochem.mpg.de/mann/tools/). It provides expert knowledge to beginners in the field of MS-based proteomics and helps advanced users to focus on unusual and possibly novel types of fragment ions.
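
    The false-discovery-rate control described above can be sketched as follows, with the rule set abstracted behind an annotate callable and the sampling simplified relative to the paper:

        import random

        def annotation_fdr(spectrum, foreign_peaks, annotate, n_trials=1000):
            """Estimate how often the rule set annotates peaks that cannot be
            real: inject peaks drawn from unrelated MS/MS spectra and count
            the fraction the rules nonetheless explain."""
            false_hits = sum(
                1 for _ in range(n_trials)
                if annotate(spectrum, random.choice(foreign_peaks))
            )
            return false_hits / n_trials

        # A deliberately permissive rule set that "explains" everything scores 1.0:
        print(annotation_fdr({}, [101.07, 175.12], lambda s, p: True))

    Rules are then pruned until this estimate falls below the chosen threshold.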

  9. Expert System for Computer-assisted Annotation of MS/MS Spectra*

    PubMed Central

    Neuhauser, Nadin; Michalski, Annette; Cox, Jürgen; Mann, Matthias

    2012-01-01

    An important step in mass spectrometry (MS)-based proteomics is the identification of peptides by their fragment spectra. Regardless of the identification score achieved, almost all tandem-MS (MS/MS) spectra contain remaining peaks that are not assigned by the search engine. These peaks may be explainable by human experts, but the scale of modern proteomics experiments makes this impractical. In computer science, Expert Systems are a mature technology to implement a list of rules generated by interviews with practitioners. We here develop such an Expert System, making use of literature knowledge as well as a large body of high mass accuracy and pure fragmentation spectra. Interestingly, we find that even with high mass accuracy data, rule sets can quickly become too complex, leading to over-annotation. Therefore we establish a rigorous false discovery rate, calculated by random insertion of peaks from a large collection of other MS/MS spectra, and use it to develop an optimized knowledge base. This rule set correctly annotates almost all peaks of medium or high abundance. For high resolution HCD data, median intensity coverage of fragment peaks in MS/MS spectra increases from 58% by search engine annotation alone to 86%. The resulting annotation performance surpasses a human expert, especially on complex spectra such as those of larger phosphorylated peptides. Our system is also applicable to high resolution collision-induced dissociation data. It is available both as a part of MaxQuant and via a webserver that only requires an MS/MS spectrum and the corresponding peptide sequence, and which outputs publication-quality, annotated MS/MS spectra (www.biochem.mpg.de/mann/tools/). It provides expert knowledge to beginners in the field of MS-based proteomics and helps advanced users to focus on unusual and possibly novel types of fragment ions. PMID:22888147

  10. Conceptual astronomy: A novel model for teaching postsecondary science courses

    NASA Astrophysics Data System (ADS)

    Zeilik, Michael; Schau, Candace; Mattern, Nancy; Hall, Shannon; Teague, Kathleen W.; Bisard, Walter

    1997-10-01

    An innovative, conceptually based instructional model for teaching large undergraduate astronomy courses was designed, implemented, and evaluated in the Fall 1995 semester. This model was based on cognitive and educational theories of knowledge and, we believe, is applicable to other large postsecondary science courses. Major components were: (a) identification of the basic important concepts and their interrelationships that are necessary for connected understanding of astronomy in novice students; (b) use of these concepts and their interrelationships throughout the design, implementation, and evaluation stages of the model; (c) identification of students' prior knowledge and misconceptions; and (d) implementation of varied instructional strategies targeted toward encouraging conceptual understanding in students (i.e., instructional concept maps, cooperative small group work, homework assignments stressing concept application, and a conceptually based student assessment system). Evaluation included the development and use of three measures of conceptual understanding and one of attitudes toward studying astronomy. Over the semester, students showed very large increases in their understanding as assessed by a conceptually based multiple-choice measure of misconceptions, a select-and-fill-in concept map measure, and a relatedness-ratings measure. Attitudes, which were slightly positive before the course, changed slightly in a less favorable direction.

  11. Enhanced situational awareness in the maritime domain: an agent-based approach for situation management

    NASA Astrophysics Data System (ADS)

    Brax, Christoffer; Niklasson, Lars

    2009-05-01

    Maritime Domain Awareness (MDA) is important for both civilian and military applications. An important part of MDA is the detection of unusual vessel activities such as piracy, smuggling, poaching, collisions, etc. Today's interconnected sensor systems provide us with huge amounts of information over large geographical areas, which can make operators reach their cognitive capacity and start to miss important events. We propose an agent-based situation management system that automatically analyses sensor information to detect unusual activity and anomalies. The system combines knowledge-based detection with data-driven anomaly detection. The system is evaluated using information from both radar and AIS sensors.

  12. Analysis of Nursing Clinical Decision Support Requests and Strategic Plan in a Large Academic Health System

    PubMed Central

    Bavuso, Karen; Bouyer-Ferullo, Sharon; Goldsmith, Denise; Fairbanks, Amanda; Gesner, Emily; Lagor, Charles; Collins, Sarah

    2016-01-01

    Summary Objectives To understand requests for nursing Clinical Decision Support (CDS) interventions at a large integrated health system undergoing vendor-based EHR implementation. In addition, to establish a process to guide both short-term implementation and long-term strategic goals to meet nursing CDS needs. Materials and Methods We conducted an environmental scan to understand the current state of nursing CDS over three months. The environmental scan consisted of a literature review and an analysis of CDS requests received from across our health system. We identified existing high-priority CDS and paper-based tools used in nursing practice at our health system that guide decision-making. Results A total of 46 nursing CDS requests were received. Fifty-six percent (n=26) were specific to a clinical specialty; twenty-two percent (n=10) were focused on facilitating clinical consults in the inpatient setting. “Risk Assessments/Risk Reduction/Promotion of Healthy Habits” (n=23) was the most requested High Priority Category for nursing CDS. A continuum of types of nursing CDS needs emerged using the Data-Information-Knowledge-Wisdom Conceptual Framework: 1) facilitating data capture, 2) meeting information needs, 3) guiding knowledge-based decision making, and 4) exposing analytics for wisdom-based clinical interpretation by the nurse. Conclusion Identifying and prioritizing paper-based tools that can be modified into electronic CDS is a challenge. CDS strategy is an evolving process that relies on close collaboration and engagement with clinical sites for short-term implementation and should be incorporated into a long-term strategic plan that can be optimized and achieved over time. The Data-Information-Knowledge-Wisdom Conceptual Framework, in conjunction with the High Priority Categories established, may be a useful tool to guide a strategic approach for meeting short-term nursing CDS needs and aligning with the organizational strategic plan. PMID:27437036

  13. A Discussion of Knowledge Based Design

    NASA Technical Reports Server (NTRS)

    Wood, Richard M.; Bauer, Steven X. S.

    1999-01-01

    A discussion of knowledge and Knowledge-Based design as related to the design of aircraft is presented. The paper discusses the perceived problem with existing design studies and introduces the concepts of design and knowledge for a Knowledge-Based design system. A review of several Knowledge-Based design activities is provided. A Virtual Reality, Knowledge-Based system is proposed and reviewed. The feasibility of Virtual Reality to improve the efficiency and effectiveness of aerodynamic and multidisciplinary design, evaluation, and analysis of aircraft through the coupling of virtual reality technology and a Knowledge-Based design system is also reviewed. The final section of the paper discusses future directions for design and the role of Knowledge-Based design.

  14. Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    Karpatne, A.; Kumar, V.

    2017-12-01

    Generative adversarial networks (GANs), which have been highly successful in a number of applications involving large volumes of labeled and unlabeled data such as computer vision, offer huge potential for modeling the dynamics of physical processes that have traditionally been studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g., GANs) for studying physical processes currently suffering from large knowledge gaps, e.g. ground-water flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology, where the number of data samples is small (relative to Internet-scale applications such as image recognition, where machine learning methods have found great success) and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.
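
    One common way to couple the two, stated only as a sketch of the theory-guided idea rather than the authors' exact formulation, is to penalize generated samples that violate known physics:

        L_G = L_adv(G) + \lambda \, \lVert f_phys(G(z)) \rVert^2,

    where f_phys measures the violation of physical constraints (e.g., mass balance in a groundwater model) by a generated sample G(z), and \lambda trades off adversarial realism against physical consistency.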

  15. Liberal Entity Extraction: Rapid Construction of Fine-Grained Entity Typing Systems.

    PubMed

    Huang, Lifu; May, Jonathan; Pan, Xiaoman; Ji, Heng; Ren, Xiang; Han, Jiawei; Zhao, Lin; Hendler, James A

    2017-03-01

    The ability to automatically recognize and type entities in natural language without prior knowledge (e.g., predefined entity types) is a major challenge in processing such data. Most existing entity typing systems are limited to certain domains, genres, and languages. In this article, we propose a novel unsupervised entity-typing framework by combining symbolic and distributional semantics. We start from learning three types of representations for each entity mention: general semantic representation, specific context representation, and knowledge representation based on knowledge bases. Then we develop a novel joint hierarchical clustering and linking algorithm to type all mentions using these representations. This framework does not rely on any annotated data, predefined typing schema, or handcrafted features; therefore, it can be quickly adapted to a new domain, genre, and/or language. Experiments on genres (news and discussion forum) show comparable performance with state-of-the-art supervised typing systems trained from a large amount of labeled data. Results on various languages (English, Chinese, Japanese, Hausa, and Yoruba) and domains (general and biomedical) demonstrate the portability of our framework.

  16. Liberal Entity Extraction: Rapid Construction of Fine-Grained Entity Typing Systems

    PubMed Central

    Huang, Lifu; May, Jonathan; Pan, Xiaoman; Ji, Heng; Ren, Xiang; Han, Jiawei; Zhao, Lin; Hendler, James A.

    2017-01-01

    The ability to automatically recognize and type entities in natural language without prior knowledge (e.g., predefined entity types) is a major challenge in processing such data. Most existing entity typing systems are limited to certain domains, genres, and languages. In this article, we propose a novel unsupervised entity-typing framework by combining symbolic and distributional semantics. We start from learning three types of representations for each entity mention: general semantic representation, specific context representation, and knowledge representation based on knowledge bases. Then we develop a novel joint hierarchical clustering and linking algorithm to type all mentions using these representations. This framework does not rely on any annotated data, predefined typing schema, or handcrafted features; therefore, it can be quickly adapted to a new domain, genre, and/or language. Experiments on genres (news and discussion forum) show comparable performance with state-of-the-art supervised typing systems trained from a large amount of labeled data. Results on various languages (English, Chinese, Japanese, Hausa, and Yoruba) and domains (general and biomedical) demonstrate the portability of our framework. PMID:28328252

  17. Knowledge-Based Personal Health System to empower outpatients of diabetes mellitus by means of P4 Medicine.

    PubMed

    Bresó, Adrián; Sáez, Carlos; Vicente, Javier; Larrinaga, Félix; Robles, Montserrat; García-Gómez, Juan Miguel

    2015-01-01

    Diabetes Mellitus (DM) affects hundreds of millions of people worldwide and imposes a large economic burden on healthcare systems. We present a web-based patient empowering system (PHSP4) that ensures continuous monitoring and assessment of the health state of patients with DM (type I and II). PHSP4 is a Knowledge-Based Personal Health System (PHS) which follows the trend of P4 Medicine (Personalized, Predictive, Preventive, and Participative). It provides messages to outpatients and clinicians about the achievement of objectives, follow-up, and treatments adjusted to the patient's condition. Additionally, it calculates a four-component risk vector of the pathologies associated with DM: nephropathy, diabetic retinopathy, diabetic foot, and cardiovascular event. The core of the system is a Rule-Based System whose Knowledge Base is composed of a set of rules implementing the recommendations of the American Diabetes Association (ADA) (American Diabetes Association: http://www.diabetes.org/ ) clinical guideline. PHSP4 is designed to be standardized and to facilitate its interoperability by means of terminologies (SNOMED-CT [The International Health Terminology Standards Development Organization: http://www.ihtsdo.org/snomed-ct/ ] and UCUM [The Unified Code for Units of Measure: http://unitsofmeasure.org/ ]) and standardized clinical documents (HL7 CDA R2 [Health Level Seven International: http://www.hl7.org/index.cfm ]) for managing the Electronic Health Record (EHR). We have evaluated the functionality of the system and users' acceptance of it using simulated and real data, and a questionnaire based on the Technology Acceptance Model (TAM) methodology. Finally, results show the reliability of the system and its high acceptance among clinicians.

  18. Construction of dynamic stochastic simulation models using knowledge-based techniques

    NASA Technical Reports Server (NTRS)

    Williams, M. Douglas; Shiva, Sajjan G.

    1990-01-01

    Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).

  19. Large-System Transformation in Health Care: A Realist Review

    PubMed Central

    Best, Allan; Greenhalgh, Trisha; Lewis, Steven; Saul, Jessie E; Carroll, Simon; Bitz, Jennifer

    2012-01-01

    Context An evidence base that addresses issues of complexity and context is urgently needed for large-system transformation (LST) and health care reform. Fundamental conceptual and methodological challenges also must be addressed. The Saskatchewan Ministry of Health in Canada requested a six-month synthesis project to guide four major policy development and strategy initiatives focused on patient- and family-centered care, primary health care renewal, quality improvement, and surgical wait lists. The aims of the review were to analyze examples of successful and less successful transformation initiatives, to synthesize knowledge of the underlying mechanisms, to clarify the role of government, and to outline options for evaluation. Methods We used realist review, whose working assumption is that a particular intervention triggers particular mechanisms of change. Mechanisms may be more or less effective in producing their intended outcomes, depending on their interaction with various contextual factors. We explain the variations in outcome as the interplay between context and mechanisms. We nested this analytic approach in a macro framing of complex adaptive systems (CAS). Findings Our rapid realist review identified five “simple rules” of LST that were likely to enhance the success of the target initiatives: (1) blend designated leadership with distributed leadership; (2) establish feedback loops; (3) attend to history; (4) engage physicians; and (5) include patients and families. These principles play out differently in different contexts affecting human behavior (and thereby contributing to change) through a wide range of different mechanisms. Conclusions Realist review methodology can be applied in combination with a complex system lens on published literature to produce a knowledge synthesis that informs a prospective change effort in large-system transformation. A collaborative process engaging both research producers and research users contributes to local applications of universal principles and mid-range theories, as well as to a more robust knowledge base for applied research. We conclude with suggestions for the future development of synthesis and evaluation methods. PMID:22985277

  20. Intravesicular taxane-induced dermatotoxicity in a 78-year-old man with urothelial carcinoma and primary cutaneous anaplastic large cell lymphoma.

    PubMed

    Pelletier, Daniel J; O'Donnell, Michael; Stone, Mary Seabury; Liu, Vincent

    2018-06-01

    Patients treated with intravesical bacillus Calmette-Guérin therapy for urothelial carcinoma often become refractory and experience recurrent disease, thus necessitating alternative intravesical treatment modalities if the patient is to be spared the morbidities associated with radical cystectomy. Intravesical treatment with taxane-based chemotherapy, such as docetaxel, has gained traction in urologic oncology, proving to be an effective salvage therapy in such patients. Systemic taxane-based chemotherapeutic regimens have long been used in several advanced malignancies, and their systemic side-effects and associated histologic correlates have been extensively documented. In contrast to adverse effects associated with systemic administration, intravesical taxane administration has thus far proven to be well-tolerated, with little to no systemic absorption. To our knowledge, features of taxane-induced systemic effects have not been reported in this setting. Herein, we report a case of a patient with recurrent urothelial carcinoma treated with intravesical docetaxel, along with primary cutaneous anaplastic large cell lymphoma, who developed characteristic dermatotoxic histologic findings associated with intravenous taxane administration. As such histopathologic findings often represent close mimickers of neoplastic and infectious etiologies, knowledge of the potential for systemic manifestations of taxane therapy in patients treated topically may prevent potentially costly diagnostic pitfalls. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Workflow based framework for life science informatics.

    PubMed

    Tiwari, Abhishek; Sekhar, Arvind K T

    2007-10-01

    Workflow technology is a generic mechanism to integrate diverse types of available resources (databases, servers, software applications and different services) which facilitate knowledge exchange within traditionally divergent fields such as molecular biology, clinical research, computational science, physics, chemistry and statistics. Researchers can easily incorporate and access diverse, distributed tools and data to develop their own research protocols for scientific analysis. Application of workflow technology has been reported in areas like drug discovery, genomics, large-scale gene expression analysis, proteomics, and systems biology. In this article, we discuss existing workflow systems and the trends in applications of workflow-based systems.
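
    At its core, such a system executes a directed acyclic graph of tasks; a few lines of Python sketch the pattern (task names and wiring are illustrative):

        def run_workflow(tasks, dag):
            """Run callables in dependency order; dag maps task -> prerequisites.
            Each task receives the dict of results produced so far."""
            order, seen = [], set()

            def visit(name):
                if name in seen:
                    return
                seen.add(name)
                for dep in dag.get(name, []):
                    visit(dep)
                order.append(name)

            for name in tasks:
                visit(name)
            results = {}
            for name in order:
                results[name] = tasks[name](results)
            return results

        results = run_workflow(
            {"fetch": lambda r: "ATGC", "align": lambda r: r["fetch"].lower()},
            {"align": ["fetch"]})
        print(results["align"])  # 'atgc'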

  2. A Method for Populating the Knowledge Base of AFIT’s Domain-Oriented Application Composition System

    DTIC Science & Technology

    1993-12-01

    Analysis (FODA). The approach identifies prominent features (similarities) and distinctive features (differences) of software systems within an... analysis approaches we have summarized, the researchers described FODA in sufficient detail to use on large domain analysis projects (ones with... Software Technology Center, July 1991. 18. Kang, Kyo C. and others. Feature-Oriented Domain Analysis (FODA) Feasibility Study. Technical Report, Software

  3. Executable medical guidelines with Arden Syntax-Applications in dermatology and obstetrics.

    PubMed

    Seitinger, Alexander; Rappelsberger, Andrea; Leitich, Harald; Binder, Michael; Adlassnig, Klaus-Peter

    2016-08-12

    Clinical decision support systems (CDSSs) are being developed to assist physicians in processing extensive data and new knowledge based on recent scientific advances. Structured medical knowledge in the form of clinical alerts or reminder rules, decision trees or tables, clinical protocols or practice guidelines, score algorithms, and others, constitutes the core of CDSSs. Several medical knowledge representation and guideline languages have been developed for the formal computerized definition of such knowledge. One of these languages is Arden Syntax for Medical Logic Systems, an International Health Level Seven (HL7) standard whose development started in 1989. Its latest version is 2.10, which was presented in 2014. In the present report we discuss Arden Syntax as a modern medical knowledge representation and processing language, and show that this language is not only well suited to define clinical alerts, reminders, and recommendations, but can also be used to implement and process computerized medical practice guidelines. We describe how contemporary software, such as Java, server software, web services, and XML, is used to implement CDSSs based on Arden Syntax. Special emphasis is given to clinical decision support (CDS) that employs practice guidelines as its clinical knowledge base. Two guideline-based applications using Arden Syntax for medical knowledge representation and processing were developed. The first is a software platform for implementing practice guidelines from dermatology. This application employs fuzzy set theory and logic to represent linguistic and propositional uncertainty in medical data, knowledge, and conclusions. The second application implements a reminder system based on clinically published standard operating procedures in obstetrics to prevent deviations from state-of-the-art care. A to-do list with necessary actions specifically tailored to the gestational week/labor/delivery is generated. Today, with the latest versions of Arden Syntax and the application of contemporary software development methods, Arden Syntax has become a powerful and versatile medical knowledge representation and processing language, well suited to implement a large range of CDSSs, including clinical-practice-guideline-based CDSSs. Moreover, such CDS is provided and can be shared as a service by different medical institutions, redefining the sharing of medical knowledge. Arden Syntax is also highly flexible and provides developers the freedom to use up-to-date software design and programming patterns for external patient data access. Copyright © 2016. Published by Elsevier B.V.
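
    The obstetrics reminder logic reduces to an evoke-logic-action pattern, sketched here in Python rather than in Arden Syntax's own slot-based format (the schedule content is invented for illustration, not clinical guidance):

        def obstetrics_reminders(gestational_week, documented):
            """Return the to-do items due by this gestational week that are
            not yet documented in the record (illustrative schedule only)."""
            schedule = {24: ["glucose screening"],
                        28: ["anti-D prophylaxis check"],
                        36: ["fetal position assessment"]}
            due = [item
                   for week, items in schedule.items() if week <= gestational_week
                   for item in items]
            return [item for item in due if item not in documented]

        print(obstetrics_reminders(30, {"glucose screening"}))
        # ['anti-D prophylaxis check']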

  4. An intelligent knowledge-based and customizable home care system framework with ubiquitous patient monitoring and alerting techniques.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining efficiently and rapidly the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system further to meet the monitoring demands of new patient and caregiver health care by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent homecare system demonstrates that it can accomplish the extensible, customizable, and configurable demands of the ubiquitous healthcare systems to meet the different demands of patients and caregivers under various rehabilitation and nursing conditions.

  5. An Intelligent Knowledge-Based and Customizable Home Care System Framework with Ubiquitous Patient Monitoring and Alerting Techniques

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining efficiently and rapidly the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system further to meet the monitoring demands of new patient and caregiver health care by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent homecare system demonstrates that it can accomplish the extensible, customizable, and configurable demands of the ubiquitous healthcare systems to meet the different demands of patients and caregivers under various rehabilitation and nursing conditions. PMID:23112650

  6. Semantically Enhanced Online Configuration of Feedback Control Schemes.

    PubMed

    Milis, Georgios M; Panayiotou, Christos G; Polycarpou, Marios M

    2018-03-01

    Recent progress toward the realization of the "Internet of Things" has improved the ability of physical and soft/cyber entities to operate effectively within large-scale, heterogeneous systems. It is important that such capacity be accompanied by feedback control capabilities sufficient to ensure that the overall systems behave according to their specifications and meet their functional objectives. To achieve this, such systems require new architectures that facilitate the online deployment, composition, interoperability, and scalability of control system components. Most current control systems lack scalability and interoperability because their design is based on a fixed configuration of specific components, with knowledge of their individual characteristics only implicitly passed through the design. This paper addresses the need for flexibility when components are replaced or new components are installed, as happens when an existing component is upgraded or a new application requires an additional component, without readjusting or redesigning the overall system. A semantically enhanced feedback control architecture is introduced for a class of systems, aimed at accommodating new components into a closed-loop control framework by exploiting the semantic inference capabilities of an ontology-based knowledge model. This architecture supports continuous operation of the control system, a crucial property for large-scale systems, for which interruptions have a negative impact on key performance metrics that may include human comfort and welfare or economic costs. A case-study example from the smart buildings domain illustrates the proposed architecture and semantic inference mechanisms.
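
    As a sketch of the semantic matching such an architecture performs, the following Python fragment binds a newly installed component to an open role in a control loop by comparing semantic descriptions; the component model and role descriptions are hypothetical simplifications of what an ontology-based knowledge model would infer.

        # Match a new component's semantic description to an open loop role.
        new_component = {"type": "TemperatureSensor", "zone": "room_3",
                         "provides": "measurement"}

        open_roles = [
            {"role": "feedback_signal", "needs": "measurement",
             "device_type": "TemperatureSensor", "zone": "room_3"},
            {"role": "actuation", "needs": "command",
             "device_type": "HeatingValve", "zone": "room_3"},
        ]

        def bind(component, roles):
            """Return the first loop role whose requirements the component meets."""
            for role in roles:
                if (component["provides"] == role["needs"]
                        and component["type"] == role["device_type"]
                        and component["zone"] == role["zone"]):
                    return role["role"]
            return None

        print(bind(new_component, open_roles))   # -> feedback_signal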

  7. Collaborative data model and data base development for paleoenvironmental and archaeological domain using Semantic MediaWiki

    NASA Astrophysics Data System (ADS)

    Willmes, C.

    2017-12-01

    Within the Collaborative Research Centre 806 (CRC 806), an interdisciplinary research project that needs to manage data, information, and knowledge from heterogeneous domains such as archaeology, the cultural sciences, and the geosciences, a collaborative internal knowledge base system was developed. The system is based on the open source MediaWiki software, well known as the platform behind Wikipedia, which provides a web-based collaborative knowledge and information management platform. It is enhanced with the Semantic MediaWiki (SMW) extension, which makes it possible to store and manage structured data within the wiki and provides complex query and API interfaces to the structured data stored in the SMW database. An additional open source tool called mobo improves the data model development process and supports automated data imports, from small spreadsheets to large relational databases. Mobo is a command line tool that helps build and deploy SMW structure in an agile, schema-driven development style; the data model formalizations, expressed in JSON-Schema format, can be managed and collaboratively developed using version control systems such as git. The combination of a well-equipped collaborative web platform provided by MediaWiki, the ability to store and query structured data in this collaborative database provided by SMW, and the automated data import and data model development enabled by mobo results in a powerful yet flexible collaborative knowledge base system. Furthermore, SMW allows the application of Semantic Web technology: the structured data can be exported to RDF, so a triple store with a SPARQL endpoint can be set up on top of the database, and the JSON-Schema based data models can be extended to JSON-LD to profit from the possibilities of Linked Data technology.
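
    For flavor, the sketch below queries structured data from a Semantic MediaWiki instance through SMW's "ask" API module; the wiki URL, category, and property names are placeholders rather than the CRC 806 wiki's actual schema.

        # Query an SMW instance for pages matching a semantic condition.
        import requests

        WIKI_API = "https://example.org/wiki/api.php"   # hypothetical endpoint
        ask = "[[Category:Site]][[Has epoch::Upper Palaeolithic]]|?Has coordinates"

        resp = requests.get(WIKI_API,
                            params={"action": "ask", "query": ask, "format": "json"})
        for page, result in resp.json()["query"]["results"].items():
            print(page, result["printouts"].get("Has coordinates"))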

  8. Object-oriented fault tree evaluation program for quantitative analyses

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Koen, B. V.

    1988-01-01

    Object-oriented programming can be combined with fault tree techniques to give a significantly improved environment for evaluating the safety and reliability of large complex systems for space missions. Deep knowledge about system components and interactions, available from reliability studies and other sources, can be described using objects that make up a knowledge base. This knowledge base can be interrogated throughout the design process, during system testing, and during operation, and can be easily modified to reflect design changes in order to maintain a consistent information source. An object-oriented environment for reliability assessment has been developed on a Texas Instruments (TI) Explorer LISP workstation. The program, which directly evaluates system fault trees, utilizes the object-oriented extension to LISP called Flavors that is available on the Explorer. The object representation of a fault tree facilitates the storage and retrieval of information associated with each event in the tree, including tree structural information and intermediate results obtained during the tree reduction process. Reliability data associated with each basic event are stored in the fault tree objects. The object-oriented environment on the Explorer also includes a graphical tree editor, which was modified to display and edit the fault trees.
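
    The object representation described above can be sketched in a few lines of Python: each event stores its own reliability data, and gates evaluate the failure probability of their subtree (assuming independent basic events); the structure and probabilities are invented for illustration.

        class BasicEvent:
            def __init__(self, name, p):
                self.name, self.p = name, p      # reliability data lives in the object
            def probability(self):
                return self.p

        class Gate:
            def __init__(self, name, kind, children):
                self.name, self.kind, self.children = name, kind, children
            def probability(self):
                ps = [child.probability() for child in self.children]
                prob = 1.0
                if self.kind == "AND":           # all children must fail
                    for p in ps:
                        prob *= p
                    return prob
                for p in ps:                     # OR: at least one child fails
                    prob *= 1.0 - p
                return 1.0 - prob

        top = Gate("system_loss", "OR", [
            Gate("redundant_pair", "AND",
                 [BasicEvent("pump_A", 1e-3), BasicEvent("pump_B", 1e-3)]),
            BasicEvent("controller", 5e-4),
        ])
        print(f"Top event probability: {top.probability():.2e}")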

  9. Autonomous and Autonomic Swarms

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G.; Rash, James L.; Truszkowski, Walter F.; Rouff, Christopher A.; Sterritt, Roy

    2005-01-01

    A watershed in systems engineering is represented by the advent of swarm-based systems that accomplish missions through cooperative action by a (large) group of autonomous individuals, each having simple capabilities and no global knowledge of the group's objective. Such systems, with individuals capable of surviving in hostile environments, pose unprecedented challenges to system developers. Design, testing, and verification at much higher levels will be required, together with the corresponding tools, to bring such systems to fruition. Concepts for possible future NASA space exploration missions include autonomous, autonomic swarms. Engineering swarm-based missions begins with understanding autonomy and autonomicity and how to design, test, and verify systems that have those properties and, simultaneously, the capability to accomplish prescribed mission goals. Formal methods-based technologies, both projected and in development, are described in terms of their potential utility to swarm-based system developers.

  10. Case-based Reasoning for Automotive Engine Performance Tune-up

    NASA Astrophysics Data System (ADS)

    Vong, C. M.; Huang, H.; Wong, P. K.

    2010-05-01

    Automotive engine performance tune-up is greatly affected by the calibration of the engine's electronic control unit (ECU). ECU calibration is traditionally done by trial and error, a method that consumes a large amount of time and money because it requires a large number of dynamometer tests. To resolve this problem, case-based reasoning (CBR) is employed so that an existing and effective ECU setup can be adapted to fit another, similar class of engines. The adaptation procedure is done through a more sophisticated step called case-based adaptation (CBA) [1, 2]. CBA is an effective knowledge management tool that can interactively learn expert adaptation knowledge. The paper briefly reviews the methodologies of CBR and CBA, and then describes their application to ECU calibration via a case study. With CBR and CBA, the efficiency of calibrating an ECU can be enhanced. A prototype system has also been developed to verify the usefulness of CBR in ECU calibration.
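
    A minimal sketch of the retrieval step, in Python: the most similar past engine case is found by weighted nearest-neighbour matching and its setup reused as the starting point for adaptation; the attributes, weights, and setups are hypothetical, not the paper's actual case structure.

        cases = [
            {"displacement": 1.6, "cylinders": 4, "turbo": 0, "setup": "map_A"},
            {"displacement": 2.0, "cylinders": 4, "turbo": 1, "setup": "map_B"},
            {"displacement": 3.0, "cylinders": 6, "turbo": 0, "setup": "map_C"},
        ]
        weights = {"displacement": 0.5, "cylinders": 0.3, "turbo": 0.2}

        def similarity(query, case):
            """Higher is more similar: negated weighted attribute distance."""
            return -sum(w * abs(query[k] - case[k]) for k, w in weights.items())

        query = {"displacement": 1.8, "cylinders": 4, "turbo": 0}
        best = max(cases, key=lambda c: similarity(query, c))
        print("Reuse setup:", best["setup"])   # starting point for case-based adaptation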

  11. A novel knowledge-based system for interpreting complex engineering drawings: theory, representation, and implementation.

    PubMed

    Lu, Tong; Tai, Chiew-Lan; Yang, Huafei; Cai, Shijie

    2009-08-01

    We present a novel knowledge-based system to automatically convert real-life engineering drawings into content-oriented, high-level descriptions. The proposed method divides the complex interpretation process into two parts: knowledge representation and knowledge-based interpretation. We propose a new hierarchical, descriptor-based knowledge representation method to organize the various types of engineering objects and their complex high-level relations. The descriptors are defined using an Extended Backus-Naur Form (EBNF), facilitating modification and maintenance. When interpreting a set of related engineering drawings, the knowledge-based interpretation system first constructs an EBNF-tree from the knowledge representation file, then searches for potential engineering objects guided by a depth-first order of the nodes in the EBNF-tree. Experimental results and comparisons with other interpretation systems demonstrate that our knowledge-based system is accurate and robust for high-level interpretation of complex real-life engineering projects.
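
    The interpretation strategy can be sketched as a depth-first recognition pass over a descriptor tree: a high-level object is recognized when all of its child descriptors are recognized. The descriptors below are invented stand-ins for the EBNF-defined engineering objects.

        # Descriptor tree: object -> child descriptors (empty = primitive).
        tree = {"girder": ["flange", "web"],
                "flange": [], "web": ["stiffener"], "stiffener": []}

        detected_primitives = {"flange", "web", "stiffener"}   # from drawing analysis

        def interpret(node):
            """Depth-first: a node is recognized if all of its children are."""
            children = tree[node]
            if not children:
                return node in detected_primitives
            return all(interpret(child) for child in children)

        print("girder recognized:", interpret("girder"))   # -> True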

  12. Knowledge information management toolkit and method

    DOEpatents

    Hempstead, Antoinette R.; Brown, Kenneth L.

    2006-08-15

    A system is provided for managing user entry and/or modification of knowledge information into a knowledge base file having an integrator support component and a data source access support component. The system includes processing circuitry, memory, a user interface, and a knowledge base toolkit. The memory communicates with the processing circuitry and is configured to store at least one knowledge base. The user interface communicates with the processing circuitry and is configured for user entry and/or modification of knowledge pieces within a knowledge base. The knowledge base toolkit is configured for converting knowledge in at least one knowledge base from a first knowledge base form into a second knowledge base form. A method is also provided.

  13. An architecture for rule based system explanation

    NASA Technical Reports Server (NTRS)

    Fennel, T. R.; Johannes, James D.

    1990-01-01

    A system architecture is presented which incorporates both graphics and text into explanations provided by rule-based expert systems. This architecture facilitates explanation of the knowledge base content, the control strategies employed by the system, and the conclusions made by the system. The suggested approach combines hypermedia and inference engine capabilities. Advantages include: closer integration of the user interface, explanation system, and knowledge base; the ability to embed links to deeper knowledge underlying the compiled knowledge used in the knowledge base; and more direct control of explanation depth and duration by the user. User models are suggested to control the type, amount, and order of information presented.

  14. NASA/DOD Aerospace Knowledge Diffusion Research Project. Paper 59: Japanese Technological Innovation. Implications for Large Commercial Aircraft and Knowledge Diffusion

    NASA Technical Reports Server (NTRS)

    Pinelli, Thomas E.; Barclay, Rebecca O.; Kotler, Mindy L.

    1997-01-01

    This paper explores three factors (public policy, the Japanese national innovation system, and knowledge) that influence technological innovation in Japan. To establish a context for the paper, we examine Japanese culture and the U.S. and Japanese patent systems in the background section. A brief history of the Japanese aircraft industry as a source of knowledge and technology for other industries is presented. Japanese and U.S. alliances and linkages in three sectors (biotechnology, semiconductors, and large commercial aircraft (LCA)) and the importation, absorption, and diffusion of knowledge and technology are examined next. The paper closes with implications for diffusing knowledge and technology, U.S. public policy, and LCA.

  15. Domain-independent information extraction in unstructured text

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Irwin, N.H.

    Extracting information from unstructured text has become an important research area in recent years due to the large amount of text now electronically available. This status report describes the findings and work done during the second year of a two-year Laboratory Directed Research and Development project. Building on the first year's work of identifying important entities, this report details techniques used to group words into semantic categories and to output templates containing selective document content. Using word profiles and category clustering derived during a training run, the time-consuming knowledge-building task can be avoided. Though the output still lacks completeness when compared to systems with domain-specific knowledge bases, the results do look promising. The two approaches are compatible and could complement each other within the same system. Domain-independent approaches retain appeal because a system that adapts and learns will soon outpace a system with any amount of a priori knowledge.

  16. Exascale Storage Systems the SIRIUS Way

    NASA Astrophysics Data System (ADS)

    Klasky, S. A.; Abbasi, H.; Ainsworth, M.; Choi, J.; Curry, M.; Kurc, T.; Liu, Q.; Lofstead, J.; Maltzahn, C.; Parashar, M.; Podhorszki, N.; Suchyta, E.; Wang, F.; Wolf, M.; Chang, C. S.; Churchill, M.; Ethier, S.

    2016-10-01

    As the exascale computing age emerges, data-related issues are becoming critical factors that determine how and where we do computing. Popular approaches used by traditional I/O solutions and storage libraries are becoming increasingly bottlenecked by their assumptions about data movement, re-organization, and storage. While new technologies, such as "burst buffers", can help address some of the short-term performance issues, it is essential that we reexamine the underlying storage and I/O infrastructure to effectively support the requirements and challenges at exascale and beyond. In this paper we present a new approach to the exascale Storage System and I/O (SSIO), based on allowing users to inject application knowledge into the system and leveraging this knowledge to better manage, store, and access large data volumes so as to minimize the time to scientific insight. Central to our approach is the distinction between the data, the metadata, and the knowledge contained therein, transferred from the user to the system by describing the "utility" of data as it ages.

  17. Towards Semantic e-Science for Traditional Chinese Medicine

    PubMed Central

    Chen, Huajun; Mao, Yuxin; Zheng, Xiaoqing; Cui, Meng; Feng, Yi; Deng, Shuiguang; Yin, Aining; Zhou, Chunying; Tang, Jinming; Jiang, Xiaohong; Wu, Zhaohui

    2007-01-01

    Background: Recent advances in Web and information technologies, together with the increasing decentralization of organizational structures, have resulted in massive amounts of information resources and domain-specific services in Traditional Chinese Medicine (TCM). The massive volume and diversity of available information and services have made it difficult to achieve seamless and interoperable e-Science for knowledge-intensive disciplines like TCM. Therefore, information integration and service coordination are two major challenges in e-Science for TCM; sophisticated approaches to integrate scientific data and services for TCM e-Science are still lacking. Results: We present a comprehensive approach to building dynamic and extendable e-Science applications for knowledge-intensive disciplines like TCM, based on semantic and knowledge-based techniques. The semantic e-Science infrastructure for TCM supports large-scale database integration and service coordination in a virtual organization. We use domain ontologies to integrate TCM database resources and services in a semantic cyberspace and deliver a semantically superior experience, including browsing, searching, querying, and knowledge discovery, to users. We have developed a collection of semantic-based toolkits to facilitate information sharing and collaborative research among TCM scientists and researchers. Conclusion: Semantic and knowledge-based techniques are well suited to knowledge-intensive disciplines like TCM, and it is possible to build an on-demand e-Science system for TCM based on existing semantic and knowledge-based techniques. The approach presented in the paper integrates heterogeneous, distributed TCM databases and services, and provides scientists with a semantically superior experience to support collaborative research in the TCM discipline. PMID:17493289

  18. Automated knowledge generation

    NASA Technical Reports Server (NTRS)

    Myler, Harley R.; Gonzalez, Avelino J.

    1988-01-01

    The general objectives of the NASA/UCF Automated Knowledge Generation Project were the development of an intelligent software system that could access CAD design data bases, interpret them, and generate a diagnostic knowledge base in the form of a system model. The initial area of concentration is the diagnosis of a process control system using the Knowledge-based Autonomous Test Engineer (KATE) diagnostic system. A secondary objective was the study of general problems of automated knowledge generation. A prototype was developed, based on the object-oriented LISP extension Flavors.

  19. Machine learning research 1989-90

    NASA Technical Reports Server (NTRS)

    Porter, Bruce W.; Souther, Arthur

    1990-01-01

    Multifunctional knowledge bases offer a significant advance in artificial intelligence because they can support numerous expert tasks within a domain. As a result they amortize the costs of building a knowledge base over multiple expert systems and they reduce the brittleness of each system. Due to the inevitable size and complexity of multifunctional knowledge bases, their construction and maintenance require knowledge engineering and acquisition tools that can automatically identify interactions between new and existing knowledge. Furthermore, their use requires software for accessing those portions of the knowledge base that coherently answer questions. Considerable progress was made in developing software for building and accessing multifunctional knowledge bases. A language was developed for representing knowledge, along with software tools for editing and displaying knowledge, a machine learning program for integrating new information into existing knowledge, and a question answering system for accessing the knowledge base.

  20. NetWeaver for EMDS user guide (version 1.1): a knowledge base development system.

    Treesearch

    Keith M. Reynolds

    1999-01-01

    The guide describes use of the NetWeaver knowledge base development system. Knowledge representation in NetWeaver is based on object-oriented fuzzy-logic networks that offer several significant advantages over the more traditional rule-based representation. Compared to rule-based knowledge bases, NetWeaver knowledge bases are easier to build, test, and maintain because...

  1. Machine intelligence and autonomy for aerospace systems

    NASA Technical Reports Server (NTRS)

    Heer, Ewald (Editor); Lum, Henry (Editor)

    1988-01-01

    The present volume discusses progress toward intelligent robot systems in aerospace applications, NASA Space Program automation and robotics efforts, the supervisory control of telerobotics in space, machine intelligence and crew/vehicle interfaces, expert-system terms and building tools, and knowledge-acquisition for autonomous systems. Also discussed are methods for validation of knowledge-based systems, a design methodology for knowledge-based management systems, knowledge-based simulation for aerospace systems, knowledge-based diagnosis, planning and scheduling methods in AI, the treatment of uncertainty in AI, vision-sensing techniques in aerospace applications, image-understanding techniques, tactile sensing for robots, distributed sensor integration, and the control of articulated and deformable space structures.

  2. The Semi-opened Infrastructure Model (SopIM): A Frame to Set Up an Organizational Learning Process

    NASA Astrophysics Data System (ADS)

    Grundstein, Michel

    In this paper, we introduce the "Semi-opened Infrastructure Model (SopIM)" implemented to deploy Artificial Intelligence and Knowledge-based Systems within a large industrial company. This model illustrates what could be two of the operating elements of the Model for General Knowledge Management within the Enterprise (MGKME) that are essential to set up the organizational learning process that leads people to appropriate and use concepts, methods and tools of an innovative technology: the "Ad hoc Infrastructures" element, and the "Organizational Learning Processes" element.

  3. CACTUS: Command and Control Training Using Knowledge-Based Simulations

    ERIC Educational Resources Information Center

    Hartley, Roger; Ravenscroft, Andrew; Williams, R. J.

    2008-01-01

    The CACTUS project was concerned with command and control training of large incidents where public order may be at risk, such as large demonstrations and marches. The training requirements and objectives of the project are first summarized justifying the use of knowledge-based computer methods to support and extend conventional training…

  4. Northeast Artificial Intelligence Consortium (NAIC). Volume 12. Computer Architecture for Very Large Knowledge Bases

    DTIC Science & Technology

    1990-12-01

    data rate to the electronics would be much lower on the average and the data much "richer" in information. Intelligent use of...system bottleneck, a high data rate should be provided by I/O systems. 2. machines with intelligent storage management specially designed for logic...management information processing, surveillance sensors, intelligence data collection and handling, solid state sciences, electromagnetics, and propagation, and electronic reliability/maintainability and compatibility.

  5. System diagnostic builder: a rule-generation tool for expert systems that do intelligent data evaluation

    NASA Astrophysics Data System (ADS)

    Nieten, Joseph L.; Burke, Roger

    1993-03-01

    The system diagnostic builder (SDB) is an automated knowledge acquisition tool using state-of-the-art artificial intelligence (AI) technologies. The SDB uses an inductive machine learning technique to generate rules from data sets that are classified by a subject matter expert (SME). Thus, data are captured from the subject system, classified by an expert, and used to drive the rule generation process. These rule bases are used to represent the observable behavior of the subject system and to represent knowledge about this system. The rule bases can be used in any knowledge-based system that monitors or controls a physical system or simulation. The SDB has demonstrated the utility of using inductive machine learning technology to generate reliable knowledge bases. In fact, we have discovered that the knowledge captured by the SDB can be used in any number of applications. For example, the knowledge bases captured from the SMS can be used as black-box simulations by intelligent computer-aided training devices. We can also use the SDB to construct knowledge bases for the process control industry, such as chemical production or oil and gas production. These knowledge bases can be used in automated advisory systems to ensure safety, productivity, and consistency.
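
    In the same spirit as the SDB's inductive step, the sketch below learns monitoring rules from expert-classified samples with an off-the-shelf decision-tree learner; the sensor values and labels are synthetic, and the choice of tool is only illustrative.

        from sklearn.tree import DecisionTreeClassifier, export_text

        # Each row: [pressure, temperature]; labels supplied by an expert.
        X = [[14.2, 310], [14.5, 305], [9.8, 340], [10.1, 355], [14.0, 330]]
        y = ["nominal", "nominal", "fault", "fault", "nominal"]

        clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
        # The induced tree reads directly as a small rule base.
        print(export_text(clf, feature_names=["pressure", "temperature"]))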

  6. Knowledge Representation and Ontologies

    NASA Astrophysics Data System (ADS)

    Grimm, Stephan

    Knowledge representation and reasoning aims at designing computer systems that reason about a machine-interpretable representation of the world. Knowledge-based systems have a computational model of some domain of interest in which symbols serve as surrogates for real world domain artefacts, such as physical objects, events, relationships, etc. [1]. The domain of interest can cover any part of the real world or any hypothetical system about which one desires to represent knowledge for computational purposes. A knowledge-based system maintains a knowledge base, which stores the symbols of the computational model in the form of statements about the domain, and it performs reasoning by manipulating these symbols. Applications can base their decisions on answers to domain-relevant questions posed to a knowledge base.

  7. Defaults, context, and knowledge: alternatives for OWL-indexed knowledge bases.

    PubMed

    Rector, A

    2004-01-01

    The new Web Ontology Language (OWL) and its Description Logic compatible sublanguage (OWL-DL) explicitly exclude defaults and exceptions, as do all logic-based formalisms for ontologies. However, many biomedical applications appear to require default reasoning, at least if they are to be engineered in a maintainable way. Default reasoning has always been one of the great strengths of frame systems such as Protégé. Resolving this conflict requires analysis of the different uses for defaults and exceptions. In some cases, alternatives can be provided within the OWL framework; in others, it appears that hybrid reasoning about a knowledge base of contingent facts built around the core ontology is necessary. Trade-offs include both human factors and the scaling of computational performance. The analysis presented here is based on the OpenGALEN experience with large-scale ontologies using a formalism, GRAIL, which explicitly incorporates constructs for hybrid reasoning, numerous experiments with OWL, and initial work on combining OWL and Protégé.

  8. Distributed, cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    Some current research in the development and application of distributed, cooperating knowledge-based systems technology is addressed. The focus of the current research is the spacecraft ground operations environment. The underlying hypothesis is that, because of the increasing size, complexity, and cost of planned systems, conventional procedural approaches to the architecture of automated systems will give way to a more comprehensive knowledge-based approach. A hallmark of these future systems will be the integration of multiple knowledge-based agents which understand the operational goals of the system and cooperate with each other and the humans in the loop to attain the goals. The current work includes the development of a reference model for knowledge-base management, the development of a formal model of cooperating knowledge-based agents, the use of a testbed for prototyping and evaluating various knowledge-based concepts, and beginning work on the establishment of an object-oriented model of an intelligent end-to-end (spacecraft to user) system. An introductory discussion of these activities is presented, the major concepts and principles being investigated are highlighted, and their potential use in other application domains is indicated.

  9. Mental Spatial Transformations of Objects and Bodies: Different Developmental Trajectories in Children from 7 to 11 Years of Age

    ERIC Educational Resources Information Center

    Crescentini, Cristiano; Fabbro, Franco; Urgesi, Cosimo

    2014-01-01

    Despite the large body of knowledge on adults suggesting that 2 basic types of mental spatial transformation--namely, object-based and egocentric perspective transformations--are dissociable and specialized for different situations, there is much less research investigating the developmental aspects of such spatial transformation systems. Here, an…

  10. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  11. Increasing the automation of a 2D-3D registration system.

    PubMed

    Varnavas, Andreas; Carrell, Tom; Penney, Graeme

    2013-02-01

    Routine clinical use of 2D-3D registration algorithms for Image Guided Surgery remains limited. A key aspect for routine clinical use of this technology is its degree of automation, i.e., the amount of necessary knowledgeable interaction between the clinicians and the registration system. Current image-based registration approaches usually require knowledgeable manual interaction during two stages: for initial pose estimation and for verification of produced results. We propose four novel techniques, particularly suited to vertebra-based registration systems, which can significantly automate both of the above stages. Two of these techniques are based upon the intraoperative "insertion" of a virtual fiducial marker into the preoperative data. The remaining two techniques use the final registration similarity value between multiple CT vertebrae and a single fluoroscopy vertebra. The proposed methods were evaluated with data from 31 operations (31 CT scans, 419 fluoroscopy images). Results show these methods can remove the need for manual vertebra identification during initial pose estimation, and were also very effective for result verification, producing a combined true positive rate of 100% and false positive rate equal to zero. This large decrease in required knowledgeable interaction is an important contribution aiming to enable more widespread use of 2D-3D registration technology.

  12. Towards a knowledge-based system to assist the Brazilian data-collecting system operation

    NASA Technical Reports Server (NTRS)

    Rodrigues, Valter; Simoni, P. O.; Oliveira, P. P. B.; Oliveira, C. A.; Nogueira, C. A. M.

    1988-01-01

    A study is reported which was carried out to show how a knowledge-based approach would lead to a flexible tool to assist the operation task in a satellite-based environmental data collection system. Some characteristics of a hypothesized system comprised of a satellite and a network of Interrogable Data Collecting Platforms (IDCPs) are pointed out. The Knowledge-Based Planning Assistant System (KBPAS) and some aspects about how knowledge is organized in the IDCP's domain are briefly described.

  13. eClims: An Extensible and Dynamic Integration Framework for Biomedical Information Systems.

    PubMed

    Savonnet, Marinette; Leclercq, Eric; Naubourg, Pierre

    2016-11-01

    Biomedical information systems (BIS) require consideration of three types of variability: data variability induced by new high-throughput technologies, schema or model variability induced by large-scale studies or new fields of research, and knowledge variability resulting from new discoveries. Beyond data heterogeneity, managing variability in the context of BIS requires an extensible and dynamic integration process. In this paper, we focus on data and schema variability and propose an integration framework based on ontologies, master data, and semantic annotations. The framework addresses issues related to: 1) collaborative work through a dynamic integration process; 2) variability among studies, using an annotation mechanism; and 3) quality control over data and semantic annotations. Our approach relies on two levels of knowledge: BIS-related knowledge is modeled using an application ontology coupled with UML models that allow controlling data completeness and consistency, and domain knowledge is described by a domain ontology, which ensures data coherence. A system built with the eClims framework has been implemented and evaluated in the context of a proteomics platform.

  14. Online self-administered training for post-traumatic stress disorder treatment providers: design and methods for a randomized, prospective intervention study

    PubMed Central

    2012-01-01

    This paper presents the rationale and methods for a randomized controlled evaluation of web-based training in motivational interviewing, goal setting, and behavioral task assignment. Web-based training may be a practical and cost-effective way to address the need for large-scale mental health training in evidence-based practice; however, there is a dearth of well-controlled outcome studies of these approaches. For the current trial, 168 mental health providers treating post-traumatic stress disorder (PTSD) were assigned to web-based training plus supervision, web-based training, or training-as-usual (control). A novel standardized patient (SP) assessment was developed and implemented for objective measurement of changes in clinical skills, while on-line self-report measures were used for assessing changes in knowledge, perceived self-efficacy, and practice related to cognitive behavioral therapy (CBT) techniques. Eligible participants were all actively involved in mental health treatment of veterans with PTSD. Study methodology illustrates ways of developing training content, recruiting participants, and assessing knowledge, perceived self-efficacy, and competency-based outcomes, and demonstrates the feasibility of conducting prospective studies of training efficacy or effectiveness in large healthcare systems. PMID:22583520

  15. An expert system based software sizing tool, phase 2

    NASA Technical Reports Server (NTRS)

    Friedlander, David

    1990-01-01

    A software tool was developed for predicting the size of a future computer program at an early stage in its development. The system is intended to enable a user who is not an expert in software engineering to estimate software size in lines of source code, with an accuracy similar to that of an expert, based on the program's functional specifications. The project was planned as a knowledge-based system, with a field prototype as the goal of Phase 2 and a commercial system planned for Phase 3. The researchers used techniques from artificial intelligence, knowledge from human experts, and existing software from NASA's COSMIC database. They devised a classification scheme for the software specifications and a small set of generic software components that represent complexity and apply to large classes of programs. The specifications are converted to generic components by a set of rules, and the generic components are input to a nonlinear sizing function which makes the final prediction. The system developed for this project predicted code sizes from the database with a bias factor of 1.06 and a fluctuation factor of 1.77, an accuracy similar to that of human experts but without their significant optimistic bias.
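
    Only the overall shape of the final step is known from this description (generic components feeding a nonlinear sizing function), so the Python sketch below uses invented component names, weights, and an invented exponent purely to make that shape concrete.

        components = {"input_form": 3, "report": 2, "algorithm": 1}   # counts
        weights = {"input_form": 120, "report": 200, "algorithm": 450}

        def predict_sloc(components, weights, a=1.0, b=1.05):
            """Nonlinear sizing: size = a * (weighted component total) ** b."""
            total = sum(weights[kind] * count for kind, count in components.items())
            return a * total ** b

        print(f"Estimated size: {predict_sloc(components, weights):.0f} SLOC")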

  16. One to One Recommendation System in Apparel On-Line Shopping

    NASA Astrophysics Data System (ADS)

    Sekozawa, Teruji; Mitsuhashi, Hiroyuki; Ozawa, Yukio

    We propose an apparel online shopping site with a fashion adviser on the Internet. In a real shop, the fashion adviser, who has detailed knowledge about fashion, selects and coordinates clothes that suit the customer's preferences. A customer without such detailed fashion knowledge, however, has not been able to choose suitable clothes from among the large number of candidates on a conventional apparel shopping site. We therefore compose a system that analyzes the customer's preferences by the AHP technique, clusters clothes by their correlations, and analyzes the market basket. As a result, this system can coordinate clothes appropriate to an individual customer's taste. Moreover, the system can recommend other clothes based on past sales data.
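
    The AHP step can be sketched as deriving preference weights from a pairwise comparison matrix via its principal eigenvector; the criteria and judgments below are invented for illustration.

        criteria = ["color", "price", "style"]
        # pairwise[i][j]: how strongly criterion i is preferred over j (Saaty scale).
        pairwise = [[1.0, 3.0, 0.5],
                    [1 / 3, 1.0, 0.25],
                    [2.0, 4.0, 1.0]]

        def ahp_weights(matrix, iterations=50):
            """Approximate the principal eigenvector by power iteration."""
            w = [1.0 / len(matrix)] * len(matrix)
            for _ in range(iterations):
                w = [sum(row[j] * w[j] for j in range(len(matrix))) for row in matrix]
                total = sum(w)
                w = [x / total for x in w]
            return w

        for name, weight in zip(criteria, ahp_weights(pairwise)):
            print(f"{name}: {weight:.3f}")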

  17. Current Standardization and Cooperative Efforts Related to Industrial Information Infrastructures.

    DTIC Science & Technology

    1993-05-01

    Data Management Systems: Components used to store, manage, and retrieve data. Data management includes knowledge bases, database management... Application Development Tools and Methods: X/Open and POSIX APIs; Integrated Design Support System (IDS); Knowledge-Based Systems (KBS); Application... (IDEF1x); Yourdon; Jackson System Design (JSD); Knowledge-Based Systems (KBSs); Structured Systems Development (SSD); Semantic Unification Meta-Model

  18. Computation as the mechanistic bridge between precision medicine and systems therapeutics.

    PubMed

    Hansen, J; Iyengar, R

    2013-01-01

    Over the past 50 years, like molecular cell biology, medicine and pharmacology have been driven by a reductionist approach. The focus on individual genes and cellular components as disease loci and drug targets has been a necessary step in understanding the basic mechanisms underlying tissue/organ physiology and drug action. Recent progress in genomics and proteomics, as well as advances in other technologies that enable large-scale data gathering and computational approaches, is providing new knowledge of both normal and disease states. Systems-biology approaches enable integration of knowledge from different types of data for precision medicine and systems therapeutics. In this review, we describe recent studies that contribute to these emerging fields and discuss how together these fields can lead to a mechanism-based therapy for individual patients.

  19. Computation as the Mechanistic Bridge Between Precision Medicine and Systems Therapeutics

    PubMed Central

    Hansen, J; Iyengar, R

    2014-01-01

    Over the past 50 years, like molecular cell biology, medicine and pharmacology have been driven by a reductionist approach. The focus on individual genes and cellular components as disease loci and drug targets has been a necessary step in understanding the basic mechanisms underlying tissue/organ physiology and drug action. Recent progress in genomics and proteomics, as well as advances in other technologies that enable large-scale data gathering and computational approaches, is providing new knowledge of both normal and disease states. Systems-biology approaches enable integration of knowledge from different types of data for precision medicine and systems therapeutics. In this review, we describe recent studies that contribute to these emerging fields and discuss how together these fields can lead to a mechanism-based therapy for individual patients. PMID:23212109

  20. Preliminary Design of a Consultation Knowledge-Based System for the Minimization of Distortion in Welded Structures

    DTIC Science & Technology

    1989-02-01

    which capture the knowledge of such experts. These Expert Systems, or Knowledge-Based Systems, differ from the usual computer programming techniques... their applications in the fields of structural design and welding is reviewed. 5.1 Introduction: Expert Systems, or KBES, are computer programs using AI... procedurally constructed as conventional computer programs usually are; the knowledge base of such systems is executable, unlike databases.

  1. Knowledge-driven computational modeling in Alzheimer's disease research: Current state and future trends.

    PubMed

    Geerts, Hugo; Hofmann-Apitius, Martin; Anastasio, Thomas J

    2017-11-01

    Neurodegenerative diseases such as Alzheimer's disease (AD) follow a slowly progressing dysfunctional trajectory, with a large presymptomatic component and many comorbidities. Using preclinical models and large-scale omics studies ranging from genetics to imaging, a large number of processes that might be involved in AD pathology at different stages and levels have been identified. The sheer number of putative hypotheses makes it almost impossible to estimate their contribution to the clinical outcome and to develop a comprehensive view on the pathological processes driving the clinical phenotype. Traditionally, bioinformatics approaches have provided correlations and associations between processes and phenotypes. Focusing on causality, a new breed of advanced and more quantitative modeling approaches that use formalized domain expertise offer new opportunities to integrate these different modalities and outline possible paths toward new therapeutic interventions. This article reviews three different computational approaches and their possible complementarities. Process algebras, implemented using declarative programming languages such as Maude, facilitate simulation and analysis of complicated biological processes on a comprehensive but coarse-grained level. A model-driven Integration of Data and Knowledge, based on the OpenBEL platform and using reverse causative reasoning and network jump analysis, can generate mechanistic knowledge and a new, mechanism-based taxonomy of disease. Finally, Quantitative Systems Pharmacology is based on formalized implementation of domain expertise in a more fine-grained, mechanism-driven, quantitative, and predictive humanized computer model. We propose a strategy to combine the strengths of these individual approaches for developing powerful modeling methodologies that can provide actionable knowledge for rational development of preventive and therapeutic interventions. Development of these computational approaches is likely to be required for further progress in understanding and treating AD. Copyright © 2017 the Alzheimer's Association. Published by Elsevier Inc. All rights reserved.

  2. Predicate Oriented Pattern Analysis for Biomedical Knowledge Discovery

    PubMed Central

    Shen, Feichen; Liu, Hongfang; Sohn, Sunghwan; Larson, David W.; Lee, Yugyung

    2017-01-01

    In the current biomedical data movement, numerous efforts have been made to convert and normalize a large number of traditional structured and unstructured data sources (e.g., EHRs, reports) into semi-structured data (e.g., RDF, OWL). With the increasing amount of semi-structured data coming into the biomedical community, data integration and knowledge discovery from heterogeneous domains have become important research problems. At the application level, detection of related concepts among medical ontologies is an important goal of life science research: it is crucial to figure out how different concepts are related within a single ontology or across multiple ontologies by analysing the predicates in different knowledge bases. However, in today's world of information explosion, it is extremely difficult for biomedical researchers to find existing or potential predicates that link concepts across domains without support from schema pattern analysis. Therefore, there is a need for a mechanism that performs predicate-oriented pattern analysis to partition heterogeneous ontologies into smaller, closely related topics, and that generates queries to discover cross-domain knowledge from each topic. In this paper, we present such a model: it analyses predicate patterns based on their close relationships and generates a similarity matrix. Based on this similarity matrix, we apply a novel unsupervised learning algorithm to partition large data sets into smaller, closer topics and generate meaningful queries to fully discover knowledge over a set of interlinked data sources. We have implemented a prototype system named BmQGen and evaluated the proposed model with a colorectal surgical cohort from the Mayo Clinic. PMID:28983419
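
    The core idea (a predicate similarity matrix driving a partition into topics) can be sketched as follows; the predicates, their argument types, the Jaccard measure, and the threshold grouping are illustrative simplifications, not BmQGen's actual algorithm.

        predicates = {
            "treats": {"Drug", "Disease"},
            "prescribed_for": {"Drug", "Disease"},
            "located_in": {"Organ", "BodyRegion"},
            "part_of": {"Organ", "BodyRegion"},
        }
        names = list(predicates)

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        # Similarity matrix over predicate argument-type signatures.
        sim = [[jaccard(predicates[x], predicates[y]) for y in names] for x in names]

        # Greedy threshold grouping of predicates into topics.
        topics, assigned = [], set()
        for i, name in enumerate(names):
            if name in assigned:
                continue
            topic = {names[j] for j in range(len(names))
                     if sim[i][j] >= 0.8 and names[j] not in assigned}
            assigned |= topic
            topics.append(topic)

        print(topics)   # e.g. a treatment topic and an anatomy topic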

  3. A Knowledge-Base for a Personalized Infectious Disease Risk Prediction System.

    PubMed

    Vinarti, Retno; Hederman, Lucy

    2018-01-01

    We present a knowledge-base to represent collated infectious disease risk (IDR) knowledge. The knowledge concerns personal and contextual risk of contracting an infectious disease and is obtained from declarative sources (e.g., the Atlas of Human Infectious Diseases). Automated prediction requires encoding this knowledge in a form that can produce risk probabilities (e.g., a Bayesian network, BN). The knowledge-base presented in this paper feeds an algorithm that can auto-generate the BN. Knowledge from 234 infectious diseases was compiled. From this compilation, we designed an ontology and five rule types for modelling IDR knowledge in general. The evaluation assesses whether the knowledge-base structure, and its application to three disease-country contexts, meets the needs of a personalized IDR prediction system. The evaluation results show that the knowledge-base conforms to the system's purpose: personalization of infectious disease risk.
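
    As a sketch of how a declarative risk rule can be turned into the kind of conditional probability a generated BN encodes, consider the following; the disease, risk factors, prior, and multipliers are invented for illustration.

        base_risk = 0.02   # hypothetical prior P(disease) in a region
        risk_rules = [
            # (contextual factor, multiplier applied to the odds when present)
            ("rainy_season", 3.0),
            ("urban_residence", 1.5),
        ]

        def personalized_risk(present_factors):
            """Combine rule multipliers on the odds scale, then convert back."""
            odds = base_risk / (1 - base_risk)
            for factor, multiplier in risk_rules:
                if factor in present_factors:
                    odds *= multiplier
            return odds / (1 + odds)

        print(f"P(disease | rainy season, urban): "
              f"{personalized_risk({'rainy_season', 'urban_residence'}):.3f}")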

  4. Adapting forest health assessments to changing perspectives on threats--a case example from Sweden.

    PubMed

    Wulff, Sören; Lindelöw, Åke; Lundin, Lars; Hansson, Per; Axelsson, Anna-Lena; Barklund, Pia; Wijk, Sture; Ståhl, Göran

    2012-04-01

    A revised Swedish forest health assessment system is presented. The assessment system is composed of several interacting components which target information needs for strategic and operational decision making and accommodate a continuously expanding knowledge base. The main motivation for separating information for strategic and operational decision making is that major damage outbreaks are often scattered throughout the landscape. Generally, large-scale inventories (such as national forest inventories) cannot provide adequate information for mitigation measures. In addition to broad monitoring programs that provide time-series information on known damaging agents and their effects, there is also a need for local and regional inventories adapted to specific damage events. While information for decision making is the major focus of the health assessment system, the system also contributes to expanding the knowledge base of forest conditions. For example, the integrated monitoring programs provide a better understanding of ecological processes linked to forest health. The new health assessment system should be able to respond to the need for quick and reliable information and thus will be an important part of the future monitoring of Swedish forests.

  5. 3D exploitation of large urban photo archives

    NASA Astrophysics Data System (ADS)

    Cho, Peter; Snavely, Noah; Anderson, Ross

    2010-04-01

    Recent work in computer vision has demonstrated the potential to automatically recover camera and scene geometry from large collections of uncooperatively-collected photos. At the same time, aerial ladar and Geographic Information System (GIS) data are becoming more readily accessible. In this paper, we present a system for fusing these data sources in order to transfer 3D and GIS information into outdoor urban imagery. Applying this system to 1000+ pictures shot of the lower Manhattan skyline and the Statue of Liberty, we present two proof-of-concept examples of geometry-based photo enhancement which are difficult to perform via conventional image processing: feature annotation and image-based querying. In these examples, high-level knowledge projects from 3D world-space into georegistered 2D image planes and/or propagates between different photos. Such automatic capabilities lay the groundwork for future real-time labeling of imagery shot in complex city environments by mobile smart phones.

  6. Field size, length, and width distributions based on LACIE ground truth data. [large area crop inventory experiment

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Badhwar, G.

    1980-01-01

    The development of agricultural remote sensing systems requires knowledge of agricultural field size distributions so that the sensors, sampling frames, image interpretation schemes, registration systems, and classification systems can be properly designed. Malila et al. (1976) studied the field size distribution for wheat and all other crops in two Kansas LACIE (Large Area Crop Inventory Experiment) intensive test sites using ground observations of the crops and measurements of their field areas based on current year rectified aerial photomaps. The field area and size distributions reported in the present investigation are derived from a representative subset of a stratified random sample of LACIE sample segments. In contrast to previous work, the obtained results indicate that most field-size distributions are not log-normally distributed. The most common field size observed in this study was 10 acres for most crops studied.

  7. Computer-Assisted Search Of Large Textual Data Bases

    NASA Technical Reports Server (NTRS)

    Driscoll, James R.

    1995-01-01

    "QA" denotes high-speed computer system for searching diverse collections of documents including (but not limited to) technical reference manuals, legal documents, medical documents, news releases, and patents. Incorporates previously available and emerging information-retrieval technology to help user intelligently and rapidly locate information found in large textual data bases. Technology includes provision for inquiries in natural language; statistical ranking of retrieved information; artificial-intelligence implementation of semantics, in which "surface level" knowledge found in text used to improve ranking of retrieved information; and relevance feedback, in which user's judgements of relevance of some retrieved documents used automatically to modify search for further information.

  8. A Cognitive Neural Architecture Able to Learn and Communicate through Natural Language.

    PubMed

    Golosio, Bruno; Cangelosi, Angelo; Gamotina, Olesya; Masala, Giovanni Luca

    2015-01-01

    Communicative interactions involve a kind of procedural knowledge that is used by the human brain for processing verbal and nonverbal inputs and for language production. Although considerable work has been done on modeling human language abilities, it has been difficult to bring these efforts together into a comprehensive tabula rasa system compatible with current knowledge of how verbal information is processed in the brain. This work presents a cognitive system, entirely based on a large-scale neural architecture, which was developed to shed light on the procedural knowledge involved in language elaboration. The main component of this system is the central executive, a supervising system that coordinates the other components of the working memory. In our model, the central executive is a neural network that takes as input the neural activation states of the short-term memory and yields as output mental actions, which control the flow of information among the working memory components through neural gating mechanisms. The proposed system is capable of learning to communicate through natural language starting from tabula rasa, without any a priori knowledge of the structure of phrases, the meaning of words, or the role of the different classes of words, only by interacting with a human through a text-based interface, using an open-ended incremental learning process. It is able to learn nouns, verbs, adjectives, pronouns and other word classes, and to use them in expressive language. The model was validated on a corpus of 1587 input sentences based on the literature on early language assessment, at the level of a child of about 4 years, and produced 521 output sentences, expressing a broad range of language processing functionalities.

  9. Knowledge-based geographic information systems (KBGIS): New analytic and data management tools

    USGS Publications Warehouse

    Albert, T.M.

    1988-01-01

    In its simplest form, a geographic information system (GIS) may be viewed as a data base management system in which most of the data are spatially indexed, and upon which sets of procedures operate to answer queries about spatial entities represented in the data base. Utilization of artificial intelligence (AI) techniques can enhance greatly the capabilities of a GIS, particularly in handling very large, diverse data bases involved in the earth sciences. A KBGIS has been developed by the U.S. Geological Survey which incorporates AI techniques such as learning, expert systems, new data representation, and more. The system, which will be developed further and applied, is a prototype of the next generation of GIS's, an intelligent GIS, as well as an example of a general-purpose intelligent data handling system. The paper provides a description of KBGIS and its application, as well as the AI techniques involved. © 1988 International Association for Mathematical Geology.

  10. Knowledge-light adaptation approaches in case-based reasoning for radiotherapy treatment planning.

    PubMed

    Petrovic, Sanja; Khussainova, Gulmira; Jagannathan, Rupa

    2016-03-01

    Radiotherapy treatment planning aims at delivering a sufficient radiation dose to cancerous tumour cells while sparing healthy organs in the tumour-surrounding area. It is a time-consuming trial-and-error process that requires the expertise of a group of medical experts, including oncologists and medical physicists, and can take from 2-3 hours to a few days. Our objective is to improve the performance of our previously built case-based reasoning (CBR) system for brain tumour radiotherapy treatment planning. In this system, a treatment plan for a new patient is retrieved from a case base containing patient cases treated in the past and their treatment plans. However, this system does not perform any adaptation, which is needed to account for any differences between the new and retrieved cases. Generally, the adaptation phase is considered to be intrinsically knowledge-intensive and domain-dependent; adaptation therefore often requires a large amount of domain-specific knowledge, which can be difficult to acquire and often is not readily available. In this study, we investigate approaches to adaptation that do not require much domain knowledge, referred to as knowledge-light adaptation. We developed two adaptation approaches: adaptation based on machine-learning tools and adaptation-guided retrieval. They were used to adapt the beam number and beam angles suggested in the retrieved case. Two machine-learning tools, neural networks and a naive Bayes classifier, were used in the adaptation to learn how the difference in attribute values between the retrieved and new cases affects the output of these two cases. The adaptation-guided retrieval takes into consideration not only the similarity between the new and retrieved cases, but also how the retrieved case can be adapted. The research was carried out in collaboration with medical physicists at the Nottingham University Hospitals NHS Trust, City Hospital Campus, UK. All experiments were performed using real-world brain cancer patient cases treated with three-dimensional (3D) conformal radiotherapy. Neural-network-based adaptation improved the success rate of the CBR system with no adaptation by 12%. However, the naive Bayes classifier did not improve the current retrieval results, as it did not consider the interplay among attributes. The adaptation-guided retrieval of the case for beam number improved the success rate of the CBR system by 29%. However, it did not demonstrate good performance for the beam angle adaptation: its success rate was 29%, versus 39% when no adaptation was performed. The obtained empirical results demonstrate that the proposed adaptation methods improve the performance of the existing CBR system in recommending the number of beams to use. However, we also conclude that, to be effective, the proposed adaptation of beam angles requires a large number of relevant cases in the case base. Copyright © 2016 Elsevier B.V. All rights reserved.
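
    A knowledge-light adaptation step of the machine-learning kind described above can be sketched as follows: a small model is trained on attribute differences between past case pairs and the observed change in beam number, then used to correct the retrieved value. The features, data, and the use of a simple linear model (standing in for the neural network) are all illustrative assumptions.

        past_pairs = [
            # (attribute differences new - retrieved, observed beam-number change)
            ([0.2, -0.1], 0), ([0.8, 0.4], 1), ([-0.6, -0.5], -1), ([0.1, 0.0], 0),
        ]

        def fit_weights(pairs, lr=0.1, epochs=200):
            """Least-squares fit by stochastic gradient descent."""
            w = [0.0, 0.0]
            for _ in range(epochs):
                for x, target in pairs:
                    error = target - sum(wi * xi for wi, xi in zip(w, x))
                    w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            return w

        w = fit_weights(past_pairs)
        retrieved_beams = 3
        diff = [0.7, 0.3]   # new case minus retrieved case
        correction = round(sum(wi * xi for wi, xi in zip(w, diff)))
        print("Adapted beam number:", retrieved_beams + correction)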

  11. A diagnostic prototype of the potable water subsystem of the Space Station Freedom ECLSS

    NASA Technical Reports Server (NTRS)

    Lukefahr, Brenda D.; Rochowiak, Daniel M.; Benson, Brian L.; Rogers, John S.; Mckee, James W.

    1989-01-01

    In analyzing the baseline Environmental Control and Life Support System (ECLSS) command and control architecture, various processes were found that would be enhanced by knowledge-based methods of implementation. The processes most suitable for prototyping with rule-based methods are documented, and domain knowledge resources and other practical considerations are examined. Requirements for a prototype rule-based software system are documented. These requirements reflect Space Station Freedom ECLSS software and hardware development efforts as well as knowledge-based system requirements. A quick-prototype knowledge-based system environment is researched and developed.

  12. Knowledge Production within the Innovation System: A Case Study from the United Kingdom

    ERIC Educational Resources Information Center

    Wilson-Medhurst, Sarah

    2010-01-01

    This paper focuses on a key issue for university managers, educational developers and teaching practitioners: that of producing new operational knowledge in the innovation system. More specifically, it explores the knowledge required to guide individual and institutional styles of teaching and learning in a large multi-disciplinary faculty. The…

  13. KIPSE1: A Knowledge-based Interactive Problem Solving Environment for data estimation and pattern classification

    NASA Technical Reports Server (NTRS)

    Han, Chia Yung; Wan, Liqun; Wee, William G.

    1990-01-01

    A knowledge-based interactive problem solving environment called KIPSE1 is presented. KIPSE1 is a system built on a commercial expert system shell, the KEE system. This environment gives the user the capability to carry out exploratory data analysis and pattern classification tasks. A good solution often consists of a sequence of steps, with a set of methods used at each step. In KIPSE1, a solution is represented in the form of a decision tree, and each node of the solution tree represents a partial solution to the problem. Many methodologies are provided at each node so that the user can interactively select the method and data sets to test and subsequently examine the results. In addition, users can make decisions at various stages of problem solving to subdivide the problem into smaller subproblems, so that a large problem can be handled and a better solution found.
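
    The decision-tree representation of partial solutions described above can be sketched with a small data structure. The node fields and the example methods below are invented for illustration and are not KIPSE1's actual internals.

      # A minimal sketch of a solution tree whose nodes are partial
      # solutions; fields and method names are illustrative assumptions.
      from dataclasses import dataclass, field

      @dataclass
      class SolutionNode:
          step: str                   # analysis step this node represents
          method: str                 # method selected at this step
          results: dict = field(default_factory=dict)
          children: list = field(default_factory=list)

          def expand(self, step, method):
              """Attach a new partial solution tried from this node."""
              child = SolutionNode(step, method)
              self.children.append(child)
              return child

      # Two alternative classification branches explored from one
      # feature-selection step.
      root = SolutionNode("feature_selection", "pca")
      root.expand("classification", "k_nearest_neighbors")
      root.expand("classification", "linear_discriminant")
      print([c.method for c in root.children])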

  14. Organisation of biotechnological information into knowledge.

    PubMed

    Boh, B

    1996-09-01

    The success of biotechnological research, development and marketing depends to a large extent on the international transfer of information and on the ability to organise biotechnology information into knowledge. To increase the efficiency of information-based approaches, an information strategy has been developed and consists of the following stages: definition of the problem, its structure and sub-problems; acquisition of data by targeted processing of computer-supported bibliographic, numeric, textual and graphic databases; analysis of data and building of specialized in-house information systems; information processing for structuring data into systems, recognition of trends and patterns of knowledge, particularly by information synthesis using the concept of information density; design of research hypotheses; testing hypotheses in the laboratory and/or pilot plant; repeated evaluation and optimization of hypotheses by information methods and testing them by further laboratory work. The information approaches are illustrated by examples from the university-industry joint projects in biotechnology, biochemistry and agriculture.

  15. Web-video-mining-supported workflow modeling for laparoscopic surgeries.

    PubMed

    Liu, Rui; Zhang, Xiaoli; Zhang, Hao

    2016-11-01

    As quality assurance is of strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the knowledge of the surgical workflow model (SWM) to support their intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgery operations is often limited, and such data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A novel video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model based on the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy surgery. The generated workflow was evaluated using 4 web-retrieved videos and 4 operation-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. The satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Expert system for web based collaborative CAE

    NASA Astrophysics Data System (ADS)

    Hou, Liang; Lin, Zusheng

    2006-11-01

    An expert system for web-based collaborative CAE was developed based on knowledge engineering, a relational database, and commercial FEA (finite element analysis) software. The architecture of the system is illustrated. In this system, experts' experience, theories, typical examples, and other related knowledge used in the pre-processing stage of FEA were categorized into analysis-process knowledge and object knowledge. An integrated knowledge model based on object-oriented and rule-based methods is then described, along with an integrated reasoning process based on CBR (case-based reasoning) and rule-based reasoning. Finally, the analysis process of this expert system in a web-based CAE application is illustrated, and an analysis of a machine tool's column is presented to demonstrate the validity of the system.

  17. Fuzzy Linguistic Knowledge Based Behavior Extraction for Building Energy Management Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumidu Wijayasekara; Milos Manic

    2013-08-01

    A significant portion of world energy production is consumed by building Heating, Ventilation and Air Conditioning (HVAC) units. Thus, along with occupant comfort, energy efficiency is an important factor in HVAC control. Modern buildings use advanced Multiple Input Multiple Output (MIMO) control schemes to realize these goals. However, since the performance of HVAC units depends on many criteria, including uncertainties in weather, number of occupants, and thermal state, the performance of current state-of-the-art systems is sub-optimal. Furthermore, because of the large number of sensors in buildings and the high frequency of data collection, a large amount of information is available, and important building behavior that compromises energy efficiency or occupant comfort is difficult to identify. This paper presents an easy-to-use and understandable framework for identifying such behavior. The presented framework uses a human-understandable knowledge base to extract important behavior of buildings and present it to users via a graphical user interface. The framework was tested on a building in the Pacific Northwest and was shown to be able to identify important behavior relating to energy efficiency and occupant comfort.
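
    A fuzzy linguistic knowledge base of this kind can be sketched with a few membership functions and one rule. The variables, membership breakpoints, and the rule below are illustrative assumptions, not the paper's actual knowledge base.

      # A minimal sketch of fuzzy linguistic rule evaluation over hourly
      # building data; all values and the rule are invented placeholders.
      def tri(x, a, b, c):
          """Triangular membership: rises from a to peak b, falls to c."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      def high_temp(t_celsius):
          return tri(t_celsius, 24.0, 30.0, 36.0)

      def high_energy(kwh):
          return tri(kwh, 40.0, 80.0, 120.0)

      def rule_hot_and_wasteful(t, kwh):
          # Fuzzy AND as min: degree to which this hour is both too warm
          # (comfort issue) and energy-hungry (efficiency issue).
          return min(high_temp(t), high_energy(kwh))

      # Flag hours whose rule activation exceeds a threshold.
      hourly = [(22.0, 35.0), (29.0, 90.0), (33.0, 70.0)]
      for hour, (t, kwh) in enumerate(hourly):
          score = rule_hot_and_wasteful(t, kwh)
          if score > 0.5:
              print(f"hour {hour}: suspicious behavior (activation {score:.2f})")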

  18. An intelligent tool for activity data collection.

    PubMed

    Sarkar, A M Jehad

    2011-01-01

    Activity recognition systems using simple and ubiquitous sensors require a large variety of real-world sensor data, not only for evaluating their performance but also for training the systems to function better. However, a tremendous amount of effort is required to set up an environment for collecting such data. For example, expertise and resources are needed to design and install the sensors, controllers, network components, and middleware just to perform basic data collection. It is therefore desirable to have a data collection method that is inexpensive, flexible, user-friendly, and capable of providing large and diverse activity datasets. In this paper, we propose an intelligent activity data collection tool which can provide such datasets inexpensively without physically deploying testbeds. It can be used as an inexpensive alternative technique for collecting human activity data. The tool provides a set of web interfaces to create a web-based activity data collection environment, as well as a web-based experience sampling tool to take the user's activity input. The tool generates an activity log using its activity knowledge and the user-given inputs. The activity knowledge is mined from the web. We have performed two experiments to validate the tool's performance in producing reliable datasets.

  19. Feasibility of using a knowledge-based system concept for in-flight primary flight display research

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.

    1991-01-01

    A study was conducted to determine the feasibility of using knowledge-based systems architectures for inflight research of primary flight display information management issues. The feasibility relied on the ability to integrate knowledge-based systems with existing onboard aircraft systems. And, given the hardware and software platforms available, the feasibility also depended on the ability to use interpreted LISP software with the real time operation of the primary flight display. In addition to evaluating these feasibility issues, the study determined whether the software engineering advantages of knowledge-based systems found for this application in the earlier workstation study extended to the inflight research environment. To study these issues, two integrated knowledge-based systems were designed to control the primary flight display according to pre-existing specifications of an ongoing primary flight display information management research effort. These two systems were implemented to assess the feasibility and software engineering issues listed. Flight test results were successful in showing the feasibility of using knowledge-based systems inflight with actual aircraft data.

  20. Systems aspects of COBE science data compression

    NASA Technical Reports Server (NTRS)

    Freedman, I.; Boggess, E.; Seiler, E.

    1993-01-01

    A general approach to compression of diverse data from large scientific projects has been developed, and this paper addresses the appropriate system and scientific constraints together with the algorithm development and test strategy. This framework has been implemented for the COsmic Background Explorer spacecraft (COBE) by retrofitting the existing VAX-based data management system with high-performance compression software permitting random access to the data. Algorithms that incorporate scientific knowledge and consume relatively few system resources are preferred over ad hoc methods. COBE exceeded its planned storage by a large and growing factor, and the retrieval of data significantly affects the processing, delaying the availability of data for scientific usage and software testing. Embedded compression software is planned to make the project tractable by reducing the data storage volume to an acceptable level during normal processing.

  1. A reusable knowledge acquisition shell: KASH

    NASA Technical Reports Server (NTRS)

    Westphal, Christopher; Williams, Stephen; Keech, Virginia

    1991-01-01

    KASH (Knowledge Acquisition SHell) is proposed to assist a knowledge engineer by providing a set of utilities for constructing knowledge acquisition sessions based on interviewing techniques. The information elicited from domain experts during the sessions is guided by a question dependency graph (QDG). The QDG, defined by the knowledge engineer, consists of a series of control questions about the domain that are used to organize the knowledge of an expert. The content information supplied by the expert in response to the questions is represented in the form of a concept map. These maps can be constructed in a top-down or bottom-up manner from the QDG and used by KASH to generate the rules for a large class of expert system domains. Additionally, the concept maps can support the representation of temporal knowledge. The high degree of reusability in the QDG and concept maps can vastly reduce the development times and costs associated with producing intelligent decision aids, training programs, and process control functions.

  2. Conceptual model of knowledge base system

    NASA Astrophysics Data System (ADS)

    Naykhanova, L. V.; Naykhanova, I. V.

    2018-05-01

    This article presents a conceptual model of a knowledge-based system organized as a production system. The production system is intended for the automation of problems whose solution is strictly conditioned by legislation. The core component of the system is a knowledge base consisting of a set of facts, a set of rules, a cognitive map, and an ontology; the cognitive map implements the control strategy, and the ontology implements the explanation mechanism. Representing situation-recognition knowledge in the form of rules makes it possible to describe knowledge of the pension legislation. This approach provides flexibility, originality, and scalability: when the legislation changes, only the rules set needs to be changed, so a change in the legislation is not a big problem. The main advantage of the system is that it can easily be adapted to changes in the legislation.
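
    The fact/rule architecture described here can be sketched as a naive forward-chaining cycle. The pension-eligibility facts, thresholds, and rules below are invented placeholders, not the paper's rule set.

      # A minimal forward-chaining production-system sketch in the spirit
      # of the described architecture; all facts and rules are invented.
      facts = {"age": 61, "insurance_years": 16}
      rules = [
          # (name, condition over facts, fact asserted when it fires)
          ("age_ok",   lambda f: f.get("age", 0) >= 60,
                       ("age_requirement_met", True)),
          ("years_ok", lambda f: f.get("insurance_years", 0) >= 15,
                       ("service_requirement_met", True)),
          ("eligible", lambda f: f.get("age_requirement_met")
                       and f.get("service_requirement_met"),
                       ("pension_eligible", True)),
      ]

      # Fire rules until no rule adds a new fact (naive recognize-act cycle).
      changed = True
      while changed:
          changed = False
          for name, cond, (key, val) in rules:
              if cond(facts) and facts.get(key) != val:
                  facts[key] = val
                  changed = True
                  print(f"rule {name} fired -> {key} = {val}")

    Even at this scale the claimed advantage is visible: adapting the system to amended legislation means editing only the rules list, not the interpreter.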

  3. Building Better Decision-Support by Using Knowledge Discovery.

    ERIC Educational Resources Information Center

    Jurisica, Igor

    2000-01-01

    Discusses knowledge-based decision-support systems that use artificial intelligence approaches. Addresses the issue of how to create an effective case-based reasoning system for complex and evolving domains, focusing on automated methods for system optimization and domain knowledge evolution that can supplement knowledge acquired from domain…

  4. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  5. NASA/DOD Aerospace Knowledge Diffusion Research Project. Paper 19: Computer and information technology and aerospace knowledge diffusion

    NASA Technical Reports Server (NTRS)

    Pinelli, Thomas E.; Kennedy, John M.; Barclay, Rebecca O.; Bishop, Ann P.

    1992-01-01

    To remain a world leader in aerospace, the US must improve and maintain the professional competency of its engineers and scientists, increase the research and development (R&D) knowledge base, improve productivity, and maximize the integration of recent technological developments into the R&D process. How well these objectives are met, and at what cost, depends on a variety of factors, but largely on the ability of US aerospace engineers and scientists to acquire and process the results of federally funded R&D. The Federal Government's commitment to high speed computing and networking systems presupposes that computer and information technology will play a major role in the aerospace knowledge diffusion process. However, we know little about information technology needs, uses, and problems within the aerospace knowledge diffusion process. The use of computer and information technology by US aerospace engineers and scientists in academia, government, and industry is reported.

  6. Advanced techniques for the storage and use of very large, heterogeneous spatial databases. The representation of geographic knowledge: Toward a universal framework. [relations (mathematics)

    NASA Technical Reports Server (NTRS)

    Peuquet, Donna J.

    1987-01-01

    A new approach to building geographic data models that is based on the fundamental characteristics of the data is presented. An overall theoretical framework for representing geographic data is proposed. An example of utilizing this framework in a Geographic Information System (GIS) context by combining artificial intelligence techniques with recent developments in spatial data processing techniques is given. Elements of data representation discussed include hierarchical structure, separation of locational and conceptual views, and the ability to store knowledge at variable levels of completeness and precision.

  7. From Process Models to Decision Making: The Use of Data Mining Techniques for Developing Effective Decision Support Systems

    NASA Astrophysics Data System (ADS)

    Conrads, P. A.; Roehl, E. A.

    2010-12-01

    Natural-resource managers face the difficult problem of controlling the interactions between hydrologic and man-made systems in ways that preserve resources while optimally meeting the needs of disparate stakeholders. Finding success depends on obtaining and employing detailed scientific knowledge about the cause-effect relations that govern the physics of these hydrologic systems. This knowledge is most credible when derived from large field-based datasets that encompass the wide range of variability in the parameters of interest. The means of converting data into knowledge of the hydrologic system often involves developing computer models that predict the consequences of alternative management practices to guide resource managers towards the best path forward. Complex hydrologic systems are typically modeled using computer programs that implement traditional, generalized, physical equations, which are calibrated to match the field data as closely as possible. This type of model commonly is limited in terms of demonstrable predictive accuracy, development time, and cost. The science of data mining presents a powerful complement to physics-based models. Data mining is a relatively new science that assists in converting large databases into knowledge and is uniquely able to leverage the real-time, multivariate data now being collected for hydrologic systems. In side-by-side comparisons with state-of-the-art physics-based hydrologic models, the authors have found data-mining solutions have been substantially more accurate, less time consuming to develop, and embeddable into spreadsheets and sophisticated decision support systems (DSS), making them easy to use by regulators and stakeholders. Three data-mining applications will be presented that demonstrate how data-mining techniques can be applied to existing environmental databases to address regional concerns of long-term consequences. In each case, data were transformed into information, and ultimately, into knowledge. In each case, DSSs were developed that facilitated the use of simulation models and analysis of model output to a broad range of end users with various technical abilities. When compared to other modeling projects of comparable scope and complexity, these DSSs were able to pass through needed technical reviews much more quickly. Unlike programs such as finite-element flow models, DSSs are by design open systems that are easy to use and readily disseminated directly to decision makers. The DSSs provide direct coupling of predictive models with the real-time databases that drive them, graphical user interfaces for point-and-click program control, and streaming displays of numerical and graphical results so that users can monitor the progress of long-term simulations. Customizations for specific problems include numerical optimization loops that invert predictive models; integrations with a three-dimensional finite-element flow model, GIS packages, and a plant ecology model; and color contouring of simulation output data.

  8. Reducing the Conflict Factors Strategies in Question Answering System

    NASA Astrophysics Data System (ADS)

    Suwarningsih, W.; Purwarianti, A.; Supriana, I.

    2017-03-01

    A rule-based system is prone to conflict because new knowledge continually emerges and must be added to the knowledge base used by the system. A conflict between rules in the knowledge base can lead to reasoning errors or circular reasoning. Therefore, new rules must be checked for conflict with existing rules, so that only genuinely conflict-free rules are added to the knowledge base. From these conditions, this paper proposes a conflict resolution strategy for a medical debriefing system, analyzing scenarios at runtime to improve the efficiency and reliability of the system.
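
    The pre-admission conflict check argued for above can be sketched under a simplifying assumption: a conflict is two rules with identical conditions but different conclusions. The medical rules below are invented placeholders, not the paper's knowledge base.

      # A minimal sketch of checking a candidate rule for conflict before
      # it enters the knowledge base; rules are (condition set, conclusion).
      knowledge_base = [
          ({"fever", "cough"}, "suspect_flu"),
          ({"fever", "rash"},  "suspect_measles"),
      ]

      def conflicts(candidate, kb):
          cond, concl = candidate
          # Same condition, contradictory conclusion -> conflict.
          return [(c, k) for c, k in kb if c == cond and k != concl]

      candidate = ({"fever", "cough"}, "suspect_cold")
      clashes = conflicts(candidate, knowledge_base)
      if clashes:
          print(f"rejected: conflicts with {clashes}")
      else:
          knowledge_base.append(candidate)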

  9. Cooperative Knowledge Bases.

    DTIC Science & Technology

    1988-02-01

    …intelligent knowledge bases. The present state of our system for concurrent evaluation of a knowledge base of logic clauses using static allocation…

  10. From protein-protein interactions to protein co-expression networks: a new perspective to evaluate large-scale proteomic data.

    PubMed

    Vella, Danila; Zoppis, Italo; Mauri, Giancarlo; Mauri, Pierluigi; Di Silvestre, Dario

    2017-12-01

    The reductionist approach of dissecting biological systems into their constituents was successful in the first stage of molecular biology, elucidating the chemical basis of several biological processes. This knowledge helped biologists understand the complexity of biological systems, showing that most biological functions do not arise from individual molecules and that the emergent properties of biological systems cannot be explained or predicted by investigating individual molecules without taking their relations into consideration. Thanks to the improvement of current -omics technologies and the increasing understanding of molecular relationships, more and more studies are evaluating biological systems through approaches based on graph theory. Genomic and proteomic data are often combined with protein-protein interaction (PPI) networks, whose structure is routinely analyzed by algorithms and tools to characterize hubs/bottlenecks and topological, functional, and disease modules. Co-expression networks, on the other hand, represent a complementary procedure that offers the opportunity to evaluate systems at the network level, including organisms that lack information on PPIs. Based on these premises, we introduce the reader to PPI and co-expression networks, including aspects of reconstruction and analysis. In particular, the new idea of evaluating large-scale proteomic data by means of co-expression networks is discussed, with some examples of application. Their use to infer biological knowledge is shown, and special attention is devoted to topological and module analysis.
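
    As a minimal sketch of co-expression network reconstruction, the snippet below thresholds pairwise correlations of a synthetic protein-abundance matrix; the 0.8 cutoff and the data are illustrative assumptions, not a recommended pipeline.

      # A minimal sketch: build a protein co-expression network by keeping
      # protein pairs whose abundance profiles are strongly correlated.
      import itertools
      import numpy as np

      rng = np.random.default_rng(1)
      proteins = ["P1", "P2", "P3", "P4"]
      abundance = rng.normal(size=(4, 10))          # 4 proteins x 10 samples
      abundance[1] = abundance[0] + rng.normal(scale=0.1, size=10)  # P2 tracks P1

      corr = np.corrcoef(abundance)                 # pairwise Pearson correlations
      edges = [(proteins[i], proteins[j], round(corr[i, j], 2))
               for i, j in itertools.combinations(range(len(proteins)), 2)
               if abs(corr[i, j]) >= 0.8]
      print(edges)   # e.g. [('P1', 'P2', 0.99)]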

  11. Automated extraction of knowledge for model-based diagnostics

    NASA Technical Reports Server (NTRS)

    Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.

    1990-01-01

    The concept of accessing computer aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques, as well as an internal database of component descriptions, to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.

  12. PVDaCS - A prototype knowledge-based expert system for certification of spacecraft data

    NASA Technical Reports Server (NTRS)

    Wharton, Cathleen; Shiroma, Patricia J.; Simmons, Karen E.

    1989-01-01

    On-line data management techniques to certify spacecraft information are mandated by increasing telemetry rates. Knowledge-based expert systems offer the ability to certify data electronically without the need for time-consuming human interaction. Issues of automatic certification are explored by designing a knowledge-based expert system to certify data from a scientific instrument, the Orbiter Ultraviolet Spectrometer, on an operating NASA planetary spacecraft, Pioneer Venus. The resulting rule-based system, called PVDaCS (Pioneer Venus Data Certification System), is a functional prototype demonstrating the concepts of a larger system design. A key element of the system design is the representation of an expert's knowledge through the usage of well ordered sequences. PVDaCS produces a certification value derived from expert knowledge and an analysis of the instrument's operation. Results of system performance are presented.

  13. Knowledge-based diagnosis for aerospace systems

    NASA Technical Reports Server (NTRS)

    Atkinson, David J.

    1988-01-01

    The need for automated diagnosis in aerospace systems and the approach of using knowledge-based systems are examined. Research issues in knowledge-based diagnosis which are important for aerospace applications are treated along with a review of recent relevant research developments in Artificial Intelligence. The design and operation of some existing knowledge-based diagnosis systems are described. The systems described and compared include the LES expert system for liquid oxygen loading at NASA Kennedy Space Center, the FAITH diagnosis system developed at the Jet Propulsion Laboratory, the PES procedural expert system developed at SRI International, the CSRL approach developed at Ohio State University, the StarPlan system developed by Ford Aerospace, the IDM integrated diagnostic model, and the DRAPhys diagnostic system developed at NASA Langley Research Center.

  14. Conservation of design knowledge. [of large complex spaceborne systems

    NASA Technical Reports Server (NTRS)

    Sivard, Cecilia; Zweben, Monte; Cannon, David; Lakin, Fred; Leifer, Larry

    1989-01-01

    This paper presents an approach for acquiring knowledge about a design during the design process. The objective is to increase the efficiency of the lifecycle management of a space-borne system by providing operational models of the system's structure and behavior, as well as the design rationale, to human and automated operators. A design knowledge acquisition system is under development that compares how two alternative design versions meet the system requirements as a means for automatically capturing rationale for design changes.

  15. An expert system executive for automated assembly of large space truss structures

    NASA Technical Reports Server (NTRS)

    Allen, Cheryl L.

    1993-01-01

    Langley Research Center developed a unique test bed for investigating the practical problems associated with the assembly of large space truss structures using robotic manipulators. The test bed is the result of an interdisciplinary effort that encompasses the full spectrum of assembly problems - from the design of mechanisms to the development of software. The automated structures assembly test bed and its operation are described, the expert system executive and its development are detailed, and the planned system evolution is discussed. Emphasis is on the expert system implementation of the program executive. The executive program must direct and reliably perform complex assembly tasks with the flexibility to recover from realistic system errors. The employment of an expert system permits information that pertains to the operation of the system to be encapsulated concisely within a knowledge base. This consolidation substantially reduced code, increased flexibility, eased software upgrades, and realized a savings in software maintenance costs.

  16. Development of a Spacecraft Materials Selector Expert System

    NASA Technical Reports Server (NTRS)

    Pippin, G.; Kauffman, W. (Technical Monitor)

    2002-01-01

    This report contains a description of the knowledge base tool and examples of its use. A downloadable version of the Spacecraft Materials Selector (SMS) knowledge base is available through the NASA Space Environments and Effects Program. The "Spacecraft Materials Selector" knowledge base is part of an electronic expert system. The expert system consists of an inference engine that contains the "decision-making" code and the knowledge base that contains the selected body of information. The inference engine is a software package previously developed at Boeing, called the Boeing Expert System Tool (BEST) kit.

  17. Constructing Model of Relationship among Behaviors and Injuries to Products Based on Large Scale Text Data on Injuries

    NASA Astrophysics Data System (ADS)

    Nomori, Koji; Kitamura, Koji; Motomura, Yoichi; Nishida, Yoshifumi; Yamanaka, Tatsuhiro; Komatsubara, Akinori

    In Japan, childhood injury prevention is an urgent issue. Safety measures based on knowledge created from injury data are essential for preventing childhood injuries; the injury prevention approach of product modification is especially important. Risk assessment is one of the most fundamental methods for designing safe products, but conventional risk assessment has been carried out subjectively because product makers have little injury data. This paper deals with evidence-based risk assessment, in which artificial intelligence technologies are strongly needed. It describes a new method of foreseeing usage of products, which is the first step of evidence-based risk assessment, and presents a retrieval system for injury data. The system enables a product designer to foresee how children use a product and which types of injuries occur due to the product in the daily environment. The developed system consists of large-scale injury data, text mining technology, and probabilistic modeling technology. Large-scale text data on childhood injuries was collected from medical institutions through an injury surveillance system. Types of behaviors toward a product were derived from the injury text data using text mining technology. The relationships among products, types of behaviors, types of injuries, and characteristics of children were modeled by a Bayesian network. The fundamental functions of the developed system and examples of new findings obtained with it are reported in this paper.
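
    The kind of query such a Bayesian network supports can be sketched by marginalizing over an intermediate behavior variable. The product -> behavior -> injury structure and all probabilities below are invented placeholders, not the network learned from the surveillance data.

      # A minimal sketch of P(injury | product) obtained by summing over
      # behaviors; all probabilities are invented for illustration.
      p_behavior_given_product = {
          "chair": {"climbing": 0.6, "sitting": 0.4},
      }
      p_injury_given_behavior = {
          "climbing": {"fall": 0.7, "bruise": 0.3},
          "sitting":  {"fall": 0.1, "bruise": 0.9},
      }

      def p_injury_given_product(product):
          # P(injury | product) = sum_b P(injury | b) * P(b | product)
          out = {}
          for behavior, pb in p_behavior_given_product[product].items():
              for injury, pi in p_injury_given_behavior[behavior].items():
                  out[injury] = out.get(injury, 0.0) + pb * pi
          return out

      print(p_injury_given_product("chair"))  # {'fall': 0.46, 'bruise': 0.54}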

  18. Learning by teaching: undergraduate engineering students improving a community's response capability to an early warning system

    NASA Astrophysics Data System (ADS)

    Suvannatsiri, Ratchasak; Santichaianant, Kitidech; Murphy, Elizabeth

    2015-01-01

    This paper reports on a project in which students designed, constructed and tested a model of an existing early warning system with simulation of debris flow in a context of a landslide. Students also assessed rural community members' knowledge of this system and subsequently taught them to estimate the time needed for evacuation of the community in the event of a landslide. Participants were four undergraduate students in a civil engineering programme at a university in Thailand, as well as nine community members and three external evaluators. Results illustrate project and problem-based, experiential learning and highlight the real-world applications and development of knowledge and of hard and soft skills. The discussion raises issues of scalability and feasibility for implementation of these types of projects in large undergraduate engineering classes.

  19. Parallel processing for scientific computations

    NASA Technical Reports Server (NTRS)

    Alkhatib, Hasan S.

    1991-01-01

    The main contribution of the effort in the last two years is the introduction of the MOPPS system. After an extensive literature search, we introduced the system, which is described next. MOPPS employs a new solution to the problem of managing programs that solve scientific and engineering applications in a distributed processing environment. With this solution, autonomous computers cooperate efficiently in solving large scientific problems. MOPPS has the advantage of not assuming the presence of any particular network topology or configuration, computer architecture, or operating system. It imposes little overhead on network and processor resources while efficiently managing programs concurrently. The core of MOPPS is an intelligent program manager that builds a knowledge base of the execution performance of the parallel programs it is managing under various conditions. The manager applies this knowledge to improve the performance of future runs; the program manager learns from experience.

  20. Distributed data mining on grids: services, tools, and applications.

    PubMed

    Cannataro, Mario; Congiusta, Antonio; Pugliese, Andrea; Talia, Domenico; Trunfio, Paolo

    2004-12-01

    Data mining algorithms are widely used today for the analysis of large corporate and scientific datasets stored in databases and data archives. Industry, science, and commerce fields often need to analyze very large datasets maintained over geographically distributed sites by using the computational power of distributed and parallel systems. The grid can play a significant role in providing an effective computational support for distributed knowledge discovery applications. For the development of data mining applications on grids we designed a system called Knowledge Grid. This paper describes the Knowledge Grid framework and presents the toolset provided by the Knowledge Grid for implementing distributed knowledge discovery. The paper discusses how to design and implement data mining applications by using the Knowledge Grid tools starting from searching grid resources, composing software and data components, and executing the resulting data mining process on a grid. Some performance results are also discussed.

  1. An Expert System toward Building An Earth Science Knowledge Graph

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Duan, X.; Ramachandran, R.; Lee, T. J.; Bao, Q.; Gatlin, P. N.; Maskey, M.

    2017-12-01

    In this ongoing work, we aim to build foundations of Cognitive Computing for Earth Science research. The goal of our project is to develop an end-to-end automated methodology for incrementally constructing Knowledge Graphs for Earth Science (KG4ES). These knowledge graphs can then serve as the foundational components for building cognitive systems in Earth science, enabling researchers to uncover new patterns and hypotheses that are virtually impossible to identify today. In addition, this research focuses on developing mining algorithms needed to exploit these constructed knowledge graphs. As such, these graphs will free knowledge from publications that are generated in a very linear, deterministic manner, and structure knowledge in a way that users can both interact and connect with relevant pieces of information. Our major contributions are two-fold. First, we have developed an end-to-end methodology for constructing Knowledge Graphs for Earth Science (KG4ES) using existing corpus of journal papers and reports. One of the key challenges in any machine learning, especially deep learning applications, is the need for robust and large training datasets. We have developed techniques capable of automatically retraining models and incrementally building and updating KG4ES, based on ever evolving training data. We also adopt the evaluation instrument based on common research methodologies used in Earth science research, especially in Atmospheric Science. Second, we have developed an algorithm to infer new knowledge that can exploit the constructed KG4ES. In more detail, we have developed a network prediction algorithm aiming to explore and predict possible new connections in the KG4ES and aid in new knowledge discovery.

  2. Architectural Large Constructed Environment. Modeling and Interaction Using Dynamic Simulations

    NASA Astrophysics Data System (ADS)

    Fiamma, P.

    2011-09-01

    How can simulations derived from large data models be used in architectural design? The topic concerns the phase that usually comes after data acquisition, during the construction of the model, and especially afterwards, when designers must interact with the simulation in order to develop and verify their ideas. In this case study, the concept of interaction includes the concept of real-time "flows". The work develops content and results that can contribute to the current debate about the connection between "architecture" and "movement". The focus of the work is to realize a collaborative and participative virtual environment in which different specialist actors, the client, and final users can share knowledge, targets, and constraints to better achieve the intended result. The goal was to use a dynamic micro-simulation digital resource that allows all the actors to explore the model in a powerful and realistic way and to have a new type of interaction in a complex architectural scenario. On the one hand, the work represents a base of knowledge that can be extended further; on the other hand, it represents an attempt to understand the simulation of large constructed architecture as a way of life, a way of being in time and space. The architectural design before, and the architectural fact after, both happen in a sort of "Spatial Analysis System". The way is open to offer this "system" knowledge and theories that can support architectural design work at every application and scale. Architecture is a spatial configuration that can also be reconfigured through design.

  3. Approximate Degrees of Similarity between a User's Knowledge and the Tutorial Systems' Knowledge Base

    ERIC Educational Resources Information Center

    Mogharreban, Namdar

    2004-01-01

    A typical tutorial system functions by means of interaction between four components: the expert knowledge base component, the inference engine component, the learner's knowledge component and the user interface component. In typical tutorial systems the interaction and the sequence of presentation as well as the mode of evaluation are…

  4. In silico model-based inference: a contemporary approach for hypothesis testing in network biology

    PubMed Central

    Klinke, David J.

    2014-01-01

    Inductive inference plays a central role in the study of biological systems where one aims to increase their understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among components of the system. These causal relationships are postulated from prior knowledge as a hypothesis or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new model. Advances in technology impact how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed the ideas in the early 1900’s that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I will summarize conventional methods for model-based inference and suggest a contemporary approach to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims that integrates ideas drawn from high performance computing, Bayesian statistics, and chemical kinetics. PMID:25139179

  5. In silico model-based inference: a contemporary approach for hypothesis testing in network biology.

    PubMed

    Klinke, David J

    2014-01-01

    Inductive inference plays a central role in the study of biological systems where one aims to increase their understanding of the system by reasoning backwards from uncertain observations to identify causal relationships among components of the system. These causal relationships are postulated from prior knowledge as a hypothesis or simply a model. Experiments are designed to test the model. Inferential statistics are used to establish a level of confidence in how well our postulated model explains the acquired data. This iterative process, commonly referred to as the scientific method, either improves our confidence in a model or suggests that we revisit our prior knowledge to develop a new model. Advances in technology impact how we use prior knowledge and data to formulate models of biological networks and how we observe cellular behavior. However, the approach for model-based inference has remained largely unchanged since Fisher, Neyman and Pearson developed the ideas in the early 1900s that gave rise to what is now known as classical statistical hypothesis (model) testing. Here, I will summarize conventional methods for model-based inference and suggest a contemporary approach to aid in our quest to discover how cells dynamically interpret and transmit information for therapeutic aims that integrates ideas drawn from high performance computing, Bayesian statistics, and chemical kinetics. © 2014 American Institute of Chemical Engineers.

  6. An SQL query generator for CLIPS

    NASA Technical Reports Server (NTRS)

    Snyder, James; Chirica, Laurian

    1990-01-01

    As expert systems become more widely used, their access to large amounts of external information becomes increasingly important. This information exists in several forms, such as statistical and tabular data, knowledge gained by experts, and large databases of information maintained by companies. Because many expert systems, including CLIPS, do not provide access to this external information, much of the usefulness of expert systems is left untapped. The scope of this paper is to describe a database extension for the CLIPS expert system shell. The current industry-standard database language is SQL. Because of SQL standardization, large amounts of information stored on various computers, potentially at different locations, become more easily accessible. Expert systems should be able to directly access these existing databases rather than requiring information to be re-entered into the expert system environment. The ORACLE relational database management system (RDBMS) was used to provide a database connection within the CLIPS environment. To facilitate relational database access, a query generation system was developed as a CLIPS user function. Queries are entered in a CLIPS-like syntax and passed to the query generator, which constructs an SQL query and submits it for execution to the ORACLE RDBMS. The query results are asserted as CLIPS facts. The query generator was developed primarily for use within the ICADS project (Intelligent Computer Aided Design System) currently being developed by the CAD Research Unit at California Polytechnic State University (Cal Poly). In ICADS, several parallel or distributed expert systems access a common knowledge base of facts. Each expert system has a narrow domain of interest and therefore needs only certain portions of the information. The query generator provides a common method of accessing this information and allows the expert system to specify what data is needed without specifying how to retrieve it.
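
    The query-generation idea can be sketched as a translation from a small s-expression-style query to an SQL string. The accepted syntax below is an invented simplification, not the paper's actual CLIPS grammar, and a real system would bind parameters rather than interpolate values.

      # A minimal sketch of generating SQL from a declarative query tuple;
      # the query shape is an illustrative assumption.
      def to_sql(query):
          # Expected shape: ("select", [columns], table, [(col, op, value)])
          _, cols, table, conds = query
          where = " AND ".join(f"{c} {op} {v!r}" for c, op, v in conds)
          sql = f"SELECT {', '.join(cols)} FROM {table}"
          return f"{sql} WHERE {where}" if conds else sql

      q = ("select", ["name", "height"], "buildings", [("floors", ">", 10)])
      print(to_sql(q))   # SELECT name, height FROM buildings WHERE floors > 10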

  7. The Impact of Electronic Knowledge-Based Nursing Content and Decision-Support on Nursing-Sensitive Patient Outcomes

    DTIC Science & Technology

    2014-02-01

    …(Akre, et al., 2006) content and evidence-based clinical decision support (CDS) tools were embedded into the EHR of one large health care system. Since…

  8. Knowledge Management Systems: Linking Contribution, Refinement and Use

    ERIC Educational Resources Information Center

    Chung, Ting-ting

    2009-01-01

    Electronic knowledge repositories represent one of the fundamental tools for knowledge management (KM) initiatives. Existing research, however, has largely focused on supply-side driven research questions, such as employee motivation to contribute knowledge to a repository. This research turns attention to the dynamic relationship between the…

  9. System and method for knowledge based matching of users in a network

    DOEpatents

    Verspoor, Cornelia Maria [Santa Fe, NM; Sims, Benjamin Hayden [Los Alamos, NM; Ambrosiano, John Joseph [Los Alamos, NM; Cleland, Timothy James [Los Alamos, NM

    2011-04-26

    A knowledge-based system and methods for matchmaking and social network extension are disclosed. The system is configured to allow users to specify knowledge profiles, which are collections of concepts indicating a certain topic or area of interest selected from an underlying knowledge model. The system utilizes the knowledge model as the semantic space within which to compare similarities in user interests. The knowledge model is hierarchical, so that an indication of interest in a specific concept automatically implies interest in more general concepts. Similarity measures between profiles may then be calculated based on suitable distance formulas within this space.
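
    A minimal sketch of the hierarchical similarity idea: interest in a concept implies interest in its ancestors, and profiles are compared by the overlap of their implied concept sets. The hierarchy, profiles, and the Jaccard measure below are illustrative assumptions, not the patent's knowledge model.

      # A minimal sketch of profile similarity over a concept hierarchy;
      # the taxonomy and profiles are invented placeholders.
      parent = {"jazz": "music", "rock": "music", "music": "arts",
                "painting": "arts", "arts": None}

      def expand(concepts):
          """Close a profile under the 'implies ancestor' rule."""
          implied = set()
          for c in concepts:
              while c is not None:
                  implied.add(c)
                  c = parent[c]
          return implied

      def similarity(profile_a, profile_b):
          a, b = expand(profile_a), expand(profile_b)
          return len(a & b) / len(a | b)

      print(similarity({"jazz"}, {"rock"}))      # 0.5: share music, arts
      print(similarity({"jazz"}, {"painting"}))  # 0.25: share only arts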

  10. Development of an expert system prototype for determining software functional requirements for command management activities at NASA Goddard

    NASA Technical Reports Server (NTRS)

    Liebowitz, J.

    1986-01-01

    The development of an expert system prototype for software functional requirement determination for NASA Goddard's Command Management System, as part of its process of transforming general requests into specific near-earth satellite commands, is described. The present knowledge base was formulated through interactions with domain experts and was then linked to the existing Knowledge Engineering Systems (KES) expert system application generator. Steps in the knowledge-base development include problem-oriented attribute hierarchy development, knowledge management approach determination, and knowledge-base encoding. The KES Parser and Inspector, in addition to backcasting and analogical mapping, were used to validate the expert-system-derived requirements for one of the major functions of a spacecraft, the Solar Maximum Mission. Knowledge refinement, evaluation, and implementation procedures of the expert system were then accomplished.

  11. A decision support system for map projections of small scale data

    USGS Publications Warehouse

    Finn, Michael P.; Usery, E. Lynn; Posch, Stephan T.; Seong, Jeong Chang

    2004-01-01

    The use of commercial geographic information system software to process large raster datasets of terrain elevation, population, land cover, vegetation, soils, temperature, and rainfall requires both projection from spherical coordinates to plane coordinate systems and transformation from one plane system to another. Decision support systems deliver information resulting in knowledge that assists in policies, priorities, or processes. This paper presents an approach to handling the problems of raster dataset projection and transformation through the development of a Web-enabled decision support system to aid users of transformation processes with the selection of appropriate map projections based on data type, areal extent, location, and preservation properties.
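
    A decision support system of this kind might encode its projection advice as simple rules keyed on the preservation property, areal extent, and location. The rule table below follows common cartographic practice but is an illustrative assumption, not the paper's actual decision logic.

      # A minimal sketch of rule-based map projection advice; the rule
      # table is an invented placeholder.
      def recommend_projection(property_needed, extent, center_lat):
          if extent == "global":
              return {"area": "Mollweide",
                      "shape": "Robinson (compromise)",
                      "distance": "Plate Carree"}.get(property_needed, "Robinson")
          # Continental/regional extents: choose by latitude band.
          if property_needed == "area":
              return ("Albers Equal-Area Conic" if abs(center_lat) > 20
                      else "Lambert Azimuthal Equal-Area")
          if property_needed == "shape":
              return ("Lambert Conformal Conic" if abs(center_lat) > 20
                      else "Mercator")
          return "Azimuthal Equidistant"

      print(recommend_projection("area", "regional", center_lat=45))
      # -> Albers Equal-Area Conic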

  12. Results, Knowledge, and Attitudes Regarding an Incentive Compensation Plan in a Hospital-Based, Academic, Employed Physician Multispecialty Group.

    PubMed

    Dolan, Robert W; Nesto, Richard; Ellender, Stacey; Luccessi, Christopher

    Hospitals and healthcare systems are introducing incentive metrics into compensation plans that align with value-based payment methodologies. These incentive measures should be considered a practical application of the transition from volume to value and will likely replace traditional productivity-based compensation in the future. During the transition, there will be provider resistance and implementation challenges. This article examines a large multispecialty group's experience with a newly implemented incentive compensation plan including the structure of the plan, formulas for calculation of the payments, the mix of quality and productivity metrics, and metric threshold achievement. Three rounds of surveys with comments were collected to measure knowledge and attitudes regarding the plan. Lessons learned and specific recommendations for success are described. The participant's knowledge and attitudes regarding the plan are important considerations and affect morale and engagement. Significant provider dissatisfaction with the plan was found. Careful metric selection, design, and management are critical activities that will facilitate provider acceptance and support. Improvements in data collection and reporting will be needed to produce reliable metrics that can supplant traditional volume-based productivity measures.

  13. The Collision of the Adoption and Safe Families Act and Substance Abuse: Research-Based Education and Training Priorities for Child Welfare Professionals

    ERIC Educational Resources Information Center

    Schroeder, Julie; Lemieux, Catherine; Pogue, Rene

    2008-01-01

    A large body of descriptive literature demonstrates the problem of substance abuse in child welfare. The 1997 Adoption and Safe Families Act (ASFA) established time frames that make children's need for permanency the overriding priority in families involved with the child welfare system. Child welfare workers often lack proper knowledge and skill…

  14. A knowledge base browser using hypermedia

    NASA Technical Reports Server (NTRS)

    Pocklington, Tony; Wang, Lui

    1990-01-01

    A hypermedia system is being developed to browse CLIPS (C Language Integrated Production System) knowledge bases. This system will be used to help train flight controllers for the Mission Control Center. Browsing a knowledge base will be accomplished either by navigating through the various collection nodes that have already been defined or through query languages.

  15. Exploitation of biotechnology in a large company.

    PubMed

    Dart, E C

    1989-08-31

    Almost from the outset, most large companies saw the 'new biotechnology' not as a new business but as a set of very powerful techniques that, in time, would radically improve the understanding of biological systems. This new knowledge was generally seen by them as enhancing the process of invention and not as a substitute for tried and tested ways of meeting clearly identified targets. As the knowledge base grows, so the big-company response to biotechnology becomes more positive. Within ICI, biotechnology is now integrated into five bio-businesses (Pharmaceuticals, Agrochemicals, Seeds, Diagnostics and Biological Products). Within the Central Toxicology Laboratory it also contributes to the understanding of the mechanisms of toxic action of chemicals as part of assessing risk. ICI has entered two of these businesses (Seeds and Diagnostics) because it sees biotechnology making a major contribution to the profitability of each.

  16. Development and Evaluation of a Pharmacogenomics Educational Program for Pharmacists

    PubMed Central

    Formea, Christine M.; Nicholson, Wayne T.; McCullough, Kristen B.; Berg, Kevin D.; Berg, Melody L.; Cunningham, Julie L.; Merten, Julianna A.; Ou, Narith N.; Stollings, Joanna L.

    2013-01-01

    Objectives. To evaluate hospital and outpatient pharmacists’ pharmacogenomics knowledge before and 2 months after participating in a targeted, case-based pharmacogenomics continuing education program. Design. As part of a continuing education program accredited by the Accreditation Council for Pharmacy Education (ACPE), pharmacists were provided with a fundamental pharmacogenomics education program. Evaluation. An 11-question, multiple-choice, electronic survey instrument was distributed to 272 eligible pharmacists at a single campus of a large, academic healthcare system. Pharmacists improved their pharmacogenomics test scores by 0.7 questions (pretest average 46%; posttest average 53%, p=0.0003). Conclusions. Although pharmacists demonstrated improvement, overall retention of educational goals and objectives was marginal. These results suggest that the complex topic of pharmacogenomics requires a large educational effort in order to increase pharmacists’ knowledge and comfort level with this emerging therapeutic opportunity. PMID:23459098

  17. Development and evaluation of a pharmacogenomics educational program for pharmacists.

    PubMed

    Formea, Christine M; Nicholson, Wayne T; McCullough, Kristen B; Berg, Kevin D; Berg, Melody L; Cunningham, Julie L; Merten, Julianna A; Ou, Narith N; Stollings, Joanna L

    2013-02-12

    Objectives. To evaluate hospital and outpatient pharmacists' pharmacogenomics knowledge before and 2 months after participating in a targeted, case-based pharmacogenomics continuing education program. Design. As part of a continuing education program accredited by the Accreditation Council for Pharmacy Education (ACPE), pharmacists were provided with a fundamental pharmacogenomics education program. Evaluation. An 11-question, multiple-choice, electronic survey instrument was distributed to 272 eligible pharmacists at a single campus of a large, academic healthcare system. Pharmacists improved their pharmacogenomics test scores by 0.7 questions (pretest average 46%; posttest average 53%, p=0.0003). Conclusions. Although pharmacists demonstrated improvement, overall retention of educational goals and objectives was marginal. These results suggest that the complex topic of pharmacogenomics requires a large educational effort in order to increase pharmacists' knowledge and comfort level with this emerging therapeutic opportunity.

  18. Data and Model Integration Promoting Interdisciplinarity

    NASA Astrophysics Data System (ADS)

    Koike, T.

    2014-12-01

    It is very difficult to reflect accumulated subsystem knowledge into holistic knowledge. Knowledge about a whole system can rarely be introduced into a targeted subsystem. In many cases, knowledge in one discipline is inapplicable to other disciplines. We are far from resolving cross-disciplinary issues. It is critically important to establish interdisciplinarity so that scientific knowledge can transcend disciplines. We need to share information and develop knowledge interlinkages by building models and exchanging tools. We need to tackle a large increase in the volume and diversity of data from observing the Earth. The volume of data stored has exponentially increased. Previously, almost all of the large-volume data came from satellites, but model outputs occupy the largest volume in general. To address the large diversity of data, we should develop an ontology system for technical and geographical terms in coupling with a metadata design according to international standards. In collaboration between Earth environment scientists and IT group, we should accelerate data archiving by including data loading, quality checking and metadata registration, and enrich data-searching capability. DIAS also enables us to perform integrated research and realize interdisciplinarity. For example, climate change should be addressed in collaboration between the climate models, integrated assessment models including energy, economy, agriculture, health, and the models of adaptation, vulnerability, and human settlement and infrastructure. These models identify water as central to these systems. If a water expert can develop an interrelated system including each component, the integrated crisis can be addressed by collaboration with various disciplines. To realize this purpose, we are developing a water-related data- and model-integration system called a water cycle integrator (WCI).

  19. HERB: A production system for programming with hierarchical expert rule bases: User's manual, HERB Version 1. 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hummel, K.E.

    1987-12-01

    Expert systems are artificial intelligence programs that solve problems requiring large amounts of heuristic knowledge, based on years of experience and tradition. Production systems are domain-independent tools that support the development of rule-based expert systems. This document describes a general purpose production system known as HERB. This system was developed to support the programming of expert systems using hierarchically structured rule bases. HERB encourages the partitioning of rules into multiple rule bases and supports the use of multiple conflict resolution strategies. Multiple rule bases can also be placed on a system stack and simultaneously searched during each interpreter cycle. Both backward and forward chaining rules are supported by HERB. The condition portion of each rule can contain both patterns, which are matched with facts in a data base, and LISP expressions, which are explicitly evaluated in the LISP environment. Properties of objects can also be stored in the HERB data base and referenced within the scope of each rule. This document serves both as an introduction to the principles of LISP-based production systems and as a user's manual for the HERB system. 6 refs., 17 figs.
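
    A minimal sketch of the interpreter cycle described above, written in Python rather than HERB's LISP; the class and function names are illustrative assumptions, not HERB's actual interface.

      # Minimal forward-chaining production system with a stack of rule
      # bases that are all searched during each interpreter cycle, loosely
      # following the HERB description above (illustrative, not HERB's API).
      class Rule:
          def __init__(self, name, conditions, additions):
              self.name = name
              self.conditions = conditions  # facts that must all be present
              self.additions = additions    # facts asserted when the rule fires

      def run(rulebase_stack, facts):
          facts, fired, changed = set(facts), set(), True
          while changed:                    # one pass = one interpreter cycle
              changed = False
              for rulebase in rulebase_stack:
                  for rule in rulebase:
                      if rule.name in fired:
                          continue          # refraction: fire each rule once
                      if all(c in facts for c in rule.conditions):
                          facts |= set(rule.additions)
                          fired.add(rule.name)
                          changed = True
          return facts

      diagnosis = [Rule("overheat", ["temp-high", "fan-off"], ["alarm"])]
      control = [Rule("spin-up", ["alarm"], ["fan-command"])]
      print(run([diagnosis, control], ["temp-high", "fan-off"]))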

  20. A New Student Performance Analysing System Using Knowledge Discovery in Higher Educational Databases

    ERIC Educational Resources Information Center

    Guruler, Huseyin; Istanbullu, Ayhan; Karahasan, Mehmet

    2010-01-01

    Knowledge discovery is a wide-ranging process including data mining, which is used to find meaningful and useful patterns in large amounts of data. In order to explore the factors having an impact on the success of university students, knowledge discovery software, called MUSKUP, has been developed and tested on student data. In this system a…

  1. Addressing the translational dilemma: dynamic knowledge representation of inflammation using agent-based modeling.

    PubMed

    An, Gary; Christley, Scott

    2012-01-01

    Given the panoply of system-level diseases that result from disordered inflammation, such as sepsis, atherosclerosis, cancer, and autoimmune disorders, understanding and characterizing the inflammatory response is a key target of biomedical research. Untangling the complex behavioral configurations associated with a process as ubiquitous as inflammation represents a prototype of the translational dilemma: the ability to translate mechanistic knowledge into effective therapeutics. A critical failure point in the current research environment is a throughput bottleneck at the level of evaluating hypotheses of mechanistic causality; these hypotheses represent the key step toward the application of knowledge for therapy development and design. Addressing the translational dilemma will require utilizing the ever-increasing power of computers and computational modeling to increase the efficiency of the scientific method in the identification and evaluation of hypotheses of mechanistic causality. More specifically, development needs to focus on facilitating the ability of non-computer trained biomedical researchers to utilize and instantiate their knowledge in dynamic computational models. This is termed "dynamic knowledge representation." Agent-based modeling is an object-oriented, discrete-event, rule-based simulation method that is well suited for biomedical dynamic knowledge representation. Agent-based modeling has been used in the study of inflammation at multiple scales. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggest that this modeling framework is well suited for addressing the translational dilemma. This review describes agent-based modeling, gives examples of its applications in the study of inflammation, and introduces a proposed general expansion of the use of modeling and simulation to augment the generation and evaluation of knowledge by the biomedical research community at large.
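
    A toy sketch of the agent-based modeling style described above: agents follow local, rule-based behaviors on a grid (clear damage, recruit on contact, wander). The grid, rules, and parameters are invented to illustrate the method, not any published inflammation model.

      # Toy agent-based model: immune-cell agents on a grid follow local
      # rules. Sizes, probabilities, and rules are invented placeholders.
      import random

      SIZE, STEPS = 10, 50
      damaged = {(x, y) for x in range(SIZE) for y in range(SIZE)
                 if random.random() < 0.2}
      agents = [(random.randrange(SIZE), random.randrange(SIZE))
                for _ in range(5)]

      for _ in range(STEPS):
          next_agents = []
          for x, y in agents:
              if (x, y) in damaged:
                  damaged.discard((x, y))      # rule: clear local damage
                  next_agents.append((x, y))   # rule: recruit a new agent
              x = (x + random.choice((-1, 0, 1))) % SIZE
              y = (y + random.choice((-1, 0, 1))) % SIZE
              next_agents.append((x, y))       # rule: random walk
          agents = next_agents

      print("damaged sites remaining:", len(damaged))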

  2. Applications of artificial intelligence 1993: Knowledge-based systems in aerospace and industry; Proceedings of the Meeting, Orlando, FL, Apr. 13-15, 1993

    NASA Technical Reports Server (NTRS)

    Fayyad, Usama M. (Editor); Uthurusamy, Ramasamy (Editor)

    1993-01-01

    The present volume on applications of artificial intelligence with regard to knowledge-based systems in aerospace and industry discusses machine learning and clustering, expert systems and optimization techniques, monitoring and diagnosis, and automated design and expert systems. Attention is given to the integration of AI reasoning systems and hardware description languages, case-based reasoning, knowledge retrieval, and training systems, and scheduling and planning. Topics addressed include the preprocessing of remotely sensed data for efficient analysis and classification, autonomous agents as air combat simulation adversaries, intelligent data presentation for real-time spacecraft monitoring, and an integrated reasoner for diagnosis in satellite control. Also discussed are a knowledge-based system for the design of heat exchangers, reuse of design information for model-based diagnosis, automatic compilation of expert systems, and a case-based approach to handling aircraft malfunctions.

  3. Knowledge-based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.

    1993-01-01

    Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on a concept called the DataHub. With the DataHub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) within a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.

  4. Knowledge-based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.

    1992-01-01

    Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on a concept called the Data Hub. With the Data Hub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) within a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.

  5. The Visual Representation and Acquisition of Driving Knowledge for Autonomous Vehicle

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxia; Jiang, Qing; Li, Ping; Song, LiangTu; Wang, Rujing; Yu, Biao; Mei, Tao

    2017-09-01

    In this paper, the driving knowledge base of an autonomous vehicle is designed. Based on the driving knowledge modeling system, the driving knowledge of the autonomous vehicle is visually acquired, managed, stored, and maintained, which is of vital significance for creating a development platform for the intelligent decision-making and automatic-driving expert systems of autonomous vehicles.

  6. Capturing farm diversity with hypothesis-based typologies: An innovative methodological framework for farming system typology development

    PubMed Central

    Alvarez, Stéphanie; Timler, Carl J.; Michalscheck, Mirja; Paas, Wim; Descheemaeker, Katrien; Tittonell, Pablo; Andersson, Jens A.; Groot, Jeroen C. J.

    2018-01-01

    Creating typologies is a way to summarize the large heterogeneity of smallholder farming systems into a few farm types. Various methods exist, commonly using statistical analysis, to create these typologies. We demonstrate that the methodological decisions on data collection, variable selection, data-reduction and clustering techniques can bear a large impact on the typology results. We illustrate the effects of analysing the diversity from different angles, using different typology objectives and different hypotheses, on typology creation by using an example from Zambia’s Eastern Province. Five separate typologies were created with principal component analysis (PCA) and hierarchical clustering analysis (HCA), based on three different expert-informed hypotheses. The greatest overlap between typologies was observed for the larger, wealthier farm types but for the remainder of the farms there were no clear overlaps between typologies. Based on these results, we argue that the typology development should be guided by a hypothesis on the local agriculture features and the drivers and mechanisms of differentiation among farming systems, such as biophysical and socio-economic conditions. That hypothesis is based both on the typology objective and on prior expert knowledge and theories of the farm diversity in the study area. We present a methodological framework that aims to integrate participatory and statistical methods for hypothesis-based typology construction. This is an iterative process whereby the results of the statistical analysis are compared with the reality of the target population as hypothesized by the local experts. Using a well-defined hypothesis and the presented methodological framework, which consolidates the hypothesis through local expert knowledge for the creation of typologies, warrants development of less subjective and more contextualized quantitative farm typologies. PMID:29763422

  7. Capturing farm diversity with hypothesis-based typologies: An innovative methodological framework for farming system typology development.

    PubMed

    Alvarez, Stéphanie; Timler, Carl J; Michalscheck, Mirja; Paas, Wim; Descheemaeker, Katrien; Tittonell, Pablo; Andersson, Jens A; Groot, Jeroen C J

    2018-01-01

    Creating typologies is a way to summarize the large heterogeneity of smallholder farming systems into a few farm types. Various methods exist, commonly using statistical analysis, to create these typologies. We demonstrate that the methodological decisions on data collection, variable selection, data-reduction and clustering techniques can bear a large impact on the typology results. We illustrate the effects of analysing the diversity from different angles, using different typology objectives and different hypotheses, on typology creation by using an example from Zambia's Eastern Province. Five separate typologies were created with principal component analysis (PCA) and hierarchical clustering analysis (HCA), based on three different expert-informed hypotheses. The greatest overlap between typologies was observed for the larger, wealthier farm types but for the remainder of the farms there were no clear overlaps between typologies. Based on these results, we argue that the typology development should be guided by a hypothesis on the local agriculture features and the drivers and mechanisms of differentiation among farming systems, such as biophysical and socio-economic conditions. That hypothesis is based both on the typology objective and on prior expert knowledge and theories of the farm diversity in the study area. We present a methodological framework that aims to integrate participatory and statistical methods for hypothesis-based typology construction. This is an iterative process whereby the results of the statistical analysis are compared with the reality of the target population as hypothesized by the local experts. Using a well-defined hypothesis and the presented methodological framework, which consolidates the hypothesis through local expert knowledge for the creation of typologies, warrants development of less subjective and more contextualized quantitative farm typologies.
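
    The PCA-plus-HCA pipeline this record (and its duplicate above) describes is straightforward to sketch. The following Python fragment uses synthetic stand-in data; the variable count, number of components, and number of farm types are invented for illustration, not the study's choices.

      # Sketch of the PCA + hierarchical clustering (HCA) typology pipeline
      # on synthetic stand-in data (illustrative parameters only).
      import numpy as np
      from sklearn.decomposition import PCA
      from scipy.cluster.hierarchy import linkage, fcluster

      rng = np.random.default_rng(0)
      farms = rng.normal(size=(60, 6))        # 60 farms x 6 survey variables

      scores = PCA(n_components=3).fit_transform(farms)  # data reduction
      tree = linkage(scores, method="ward")              # hierarchical clustering
      types = fcluster(tree, t=4, criterion="maxclust")  # cut into 4 farm types

      print("farms per type:", np.bincount(types)[1:])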

  8. Facilitating access to information in large documents with an intelligent hypertext system

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie

    1993-01-01

    Retrieving specific information from large amounts of documentation is not an easy task. It could be facilitated if information relevant in the current problem solving context could be automatically supplied to the user. As a first step towards this goal, we have developed an intelligent hypertext system called CID (Computer Integrated Documentation) and tested it on the Space Station Freedom requirement documents. The CID system enables integration of various technical documents in a hypertext framework and includes an intelligent context-sensitive indexing and retrieval mechanism. This mechanism utilizes on-line user information requirements and relevance feedback either to reinforce current indexing in case of success or to generate new knowledge in case of failure. This allows the CID system to provide helpful responses, based on previous usage of the documentation, and to improve its performance over time.

  9. A prototype system based on visual interactive SDM called VGC

    NASA Astrophysics Data System (ADS)

    Jia, Zelu; Liu, Yaolin; Liu, Yanfang

    2009-10-01

    In many application domains, data is collected and referenced by its geo-spatial location. Spatial data mining, or the discovery of interesting patterns in such databases, is an important capability in the development of database systems. Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, and road traffic accident analysis. It demands efficient solutions for many new, expensive, and complicated problems. For spatial data mining of large data sets to be effective, it is also important to include humans in the data exploration process and combine their flexibility, creativity, and general knowledge with the enormous storage capacity and computational power of today's computers. Visual spatial data mining applies human visual perception to the exploration of large data sets. Presenting data in an interactive, graphical form often fosters new insights, encouraging the formation and validation of new hypotheses to the end of better problem-solving and gaining deeper domain knowledge. In this paper a visual interactive spatial data mining prototype system (visual geo-classify, VGC) based on VC++ 6.0 and MapObjects 2.0 is designed and developed. The basic spatial data mining algorithms used are decision trees and Bayesian networks, and data classification is realized through training, learning, and the integration of the two. The result indicates it is a practical and extensible visual interactive spatial data mining tool.
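
    A sketch of the two classifier families the abstract names (decision trees and Bayesian classification), trained on synthetic features. VGC itself is a VC++/MapObjects system, so this Python fragment illustrates the technique only, not the prototype's code.

      # Train the two classifier families named above on synthetic data,
      # then score them on a held-out split (all data is stand-in).
      from sklearn.datasets import make_classification
      from sklearn.naive_bayes import GaussianNB
      from sklearn.tree import DecisionTreeClassifier

      X, y = make_classification(n_samples=200, n_features=4, random_state=0)
      for model in (DecisionTreeClassifier(max_depth=3), GaussianNB()):
          model.fit(X[:150], y[:150])                  # training and learning
          print(type(model).__name__, model.score(X[150:], y[150:]))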

  10. Engineering Knowledge for Assistive Living

    NASA Astrophysics Data System (ADS)

    Chen, Liming; Nugent, Chris

    This paper introduces a knowledge based approach to assistive living in smart homes. It proposes a system architecture that makes use of knowledge in the lifecycle of assistive living. The paper describes ontology based knowledge engineering practices and discusses mechanisms for exploiting knowledge for activity recognition and assistance. It presents system implementation and experiments, and discusses initial results.

  11. Systematic technology transfer from biology to engineering.

    PubMed

    Vincent, Julian F V; Mann, Darrell L

    2002-02-15

    Solutions to problems move only very slowly between different disciplines. Transfer can be greatly speeded up with suitable abstraction and classification of problems. Russian researchers working on the TRIZ (Teoriya Resheniya Izobretatelskikh Zadatch) method for inventive problem solving have identified systematic means of transferring knowledge between different scientific and engineering disciplines. With over 1500 person years of effort behind it, TRIZ represents the biggest study of human creativity ever conducted, whose aim has been to establish a system into which all known solutions can be placed, classified in terms of function. At present, the functional classification structure covers nearly 3 000 000 of the world's successful patents and large proportions of the known physical, chemical and mathematical knowledge-base. Additional tools are the identification of factors which prevent the attainment of new technology, leading directly to a system of inventive principles which will resolve the impasse, a series of evolutionary trends of development, and to a system of methods for effecting change in a system (Su-fields). As yet, the database contains little biological knowledge despite early recognition by the instigator of TRIZ (Genrich Altshuller) that one day it should. This is illustrated by natural systems evolved for thermal stability and the maintenance of cleanliness.

  12. Intrusion Detection Systems with Live Knowledge System

    DTIC Science & Technology

    2016-05-31

    We propose a novel approach that uses Ripple-Down Rules (RDR) to maintain the knowledge obtained from human experts together with a knowledge base generated by Induct RDR, a machine-learning-based RDR method. We build a phishing detection model by applying the Induct RDR approach, which allows phishing detection knowledge to be acquired incrementally.
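
    A minimal sketch of the ripple-down-rule structure the record refers to: in single-classification RDR, an expert correction is stored as an exception branch, so existing knowledge is never rewritten. The conditions and labels below are invented for illustration.

      # Minimal single-classification RDR node; an exception ("if_true")
      # branch records an expert correction to a firing rule.
      class RDRNode:
          def __init__(self, cond, conclusion, if_true=None, if_false=None):
              self.cond, self.conclusion = cond, conclusion
              self.if_true, self.if_false = if_true, if_false

          def classify(self, case, default=None):
              if self.cond(case):
                  if self.if_true:           # a deeper exception may override
                      return self.if_true.classify(case, self.conclusion)
                  return self.conclusion
              if self.if_false:
                  return self.if_false.classify(case, default)
              return default

      rules = RDRNode(lambda c: "login" in c["url"], "phishing",
                      if_true=RDRNode(lambda c: c["domain_known"], "legitimate"))
      print(rules.classify({"url": "http://x/login", "domain_known": True}))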

  13. Object-oriented fault tree models applied to system diagnosis

    NASA Technical Reports Server (NTRS)

    Iverson, David L.; Patterson-Hine, F. A.

    1990-01-01

    When a diagnosis system is used in a dynamic environment, such as the distributed computer system planned for use on Space Station Freedom, it must execute quickly and its knowledge base must be easily updated. Representing system knowledge as object-oriented augmented fault trees provides both features. The diagnosis system described here is based on the failure cause identification process of the diagnostic system described by Narayanan and Viswanadham. Their system has been enhanced in this implementation by replacing the knowledge base of if-then rules with an object-oriented fault tree representation. This allows the system to perform its task much faster and facilitates dynamic updating of the knowledge base in a changing diagnosis environment. Accessing the information contained in the objects is more efficient than performing a lookup operation on an indexed rule base. Additionally, the object-oriented fault trees can be easily updated to represent current system status. This paper describes the fault tree representation, the diagnosis algorithm extensions, and an example application of this system. Comparisons are made between the object-oriented fault tree knowledge structure solution and one implementation of a rule-based solution. Plans for future work on this system are also discussed.
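
    A sketch of the object-oriented fault-tree idea: each node object carries its own gate, children, and status, so failure-cause identification is a traversal of the tree rather than an indexed rule-base lookup. All names and the example tree are illustrative, not the paper's implementation.

      # Object-oriented fault-tree node with OR/AND gates; diagnosis is a
      # recursive traversal over node objects (illustrative only).
      class FaultNode:
          def __init__(self, name, gate="OR", children=(), failed=False):
              self.name, self.gate = name, gate
              self.children, self.failed = list(children), failed

          def find_causes(self):
              if not self.children:                  # leaf: a basic event
                  return [self.name] if self.failed else []
              per_child = [c.find_causes() for c in self.children]
              if self.gate == "AND" and not all(per_child):
                  return []                          # AND needs every branch
              return [cause for causes in per_child for cause in causes]

      bus = FaultNode("bus-fault", "OR", [
          FaultNode("power-loss", failed=True),
          FaultNode("links-down", "AND",
                    [FaultNode("link-A", failed=True), FaultNode("link-B")]),
      ])
      print(bus.find_causes())                       # ['power-loss']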

  14. PRAIS: Distributed, real-time knowledge-based systems made easy

    NASA Technical Reports Server (NTRS)

    Goldstein, David G.

    1990-01-01

    This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives for transparently parallelizing production (rule-based) systems, even when under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.

  15. Techniques and implementation of the embedded rule-based expert system using Ada

    NASA Technical Reports Server (NTRS)

    Liberman, Eugene M.; Jones, Robert E.

    1991-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have also assumed a growing role in providing human-like reasoning capability and expertise for computer systems. The integration of expert system technology with the Ada programming language, specifically a rule-based expert system using an ART-Ada (Automated Reasoning Tool for Ada) system shell, is discussed. The NASA Lewis Research Center was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components, the rule-based expert system, a graphics user interface, and communications software, make up SMART-Ada (Systems fault Management with ART-Ada). The main objective, to conduct a beta test on the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

  16. Structure/activity relationships for biodegradability and their role in environmental assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boethling, R.S.

    1994-12-31

    Assessment of biodegradability is an important part of the review process for both new and existing chemicals under the Toxic Substances Control Act. It is often necessary to estimate biodegradability because experimental data are unavailable. Structure/biodegradability relationships (SBR) are a means to this end. Quantitative SBR have been developed, but this approach has not been very useful because they apply only to a few narrowly defined classes of chemicals. In response to the need for more widely applicable methods, multivariate analysis has been used to develop biodegradability classification models. For example, recent efforts have produced four new models. Two calculate the probability of rapid biodegradation and can be used for classification; the other two models allow semi-quantitative estimation of primary and ultimate biodegradation rates. All are based on multiple regressions against 36 preselected substructures plus molecular weight. Such efforts have been fairly successful by statistical criteria, but in general are hampered by a lack of large and consistent datasets. Knowledge-based expert systems may represent the next step in the evolution of SBR. In principle such systems need not be as severely limited by imperfect datasets. However, the codification of expert knowledge and reasoning is a critical prerequisite. Results of knowledge acquisition exercises and modeling based on them will also be described.
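
    A sketch of the model family described above: a multiple regression of a biodegradation score on substructure counts plus molecular weight. The five synthetic fragments stand in for the 36 preselected substructures; all data below is generated, not the paper's.

      # Regress a synthetic biodegradation score on fragment counts plus
      # molecular weight (stand-in for the 36-substructure models).
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(1)
      fragments = rng.integers(0, 3, size=(100, 5))     # substructure counts
      mol_weight = rng.uniform(50, 500, size=(100, 1))
      X = np.hstack([fragments, mol_weight])
      y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=100)

      model = LinearRegression().fit(X, y)
      print("training R^2:", round(model.score(X, y), 3))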

  17. Asynchronous Two-Level Checkpointing Scheme for Large-Scale Adjoints in the Spectral-Element Solver Nek5000

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schanen, Michel; Marin, Oana; Zhang, Hong

    Adjoints are an important computational tool for large-scale sensitivity evaluation, uncertainty quantification, and derivative-based optimization. An essential component of their performance is the storage/recomputation balance in which efficient checkpointing methods play a key role. We introduce a novel asynchronous two-level adjoint checkpointing scheme for multistep numerical time discretizations targeted at large-scale numerical simulations. The checkpointing scheme combines bandwidth-limited disk checkpointing and binomial memory checkpointing. Based on assumptions about the target petascale systems, which we later demonstrate to be realistic on the IBM Blue Gene/Q system Mira, we create a model of the expected performance of our checkpointing approach and validate it using the highly scalable Navier-Stokes spectral-element solver Nek5000 on small to moderate subsystems of the Mira supercomputer. In turn, this allows us to predict optimal algorithmic choices when using all of Mira. We also demonstrate that two-level checkpointing is significantly superior to single-level checkpointing when adjoining a large number of time integration steps. To our knowledge, this is the first time two-level checkpointing has been designed, implemented, tuned, and demonstrated on fluid dynamics codes at the large scale of 50k+ cores.
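
    A back-of-the-envelope model of the memory level of such a scheme, assuming a revolve-style binomial schedule: with s checkpoints and at most r recomputations per step, comb(s + r, s) time steps can be adjoined (Griewank's bound). This sketches the general principle only, not Nek5000's implementation.

      # Capacity of binomial (in-memory) checkpointing under a revolve-style
      # schedule; the disk level then spans windows of this size.
      from math import comb

      def max_adjoinable_steps(s_checkpoints, r_recomputations):
          return comb(s_checkpoints + r_recomputations, s_checkpoints)

      print(max_adjoinable_steps(10, 3))   # 286 steps with 10 checkpoints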

  18. RAVE: Rapid Visualization Environment

    NASA Technical Reports Server (NTRS)

    Klumpar, D. M.; Anderson, Kevin; Simoudis, Avangelos

    1994-01-01

    Visualization is used in the process of analyzing large, multidimensional data sets. However, the selection and creation of visualizations that are appropriate for the characteristics of a particular data set and the satisfaction of the analyst's goals are difficult. The process consists of three tasks that are performed iteratively: generate, test, and refine. The performance of these tasks requires the utilization of several types of domain knowledge that data analysts do not often have. Existing visualization systems and frameworks do not adequately support the performance of these tasks. In this paper we present the RApid Visualization Environment (RAVE), a knowledge-based system that interfaces with commercial visualization frameworks and assists a data analyst in quickly and easily generating, testing, and refining visualizations. RAVE was used for the visualization of in situ measurement data captured by spacecraft.

  19. M-and-C Domain Map Maker: an environment complementing MDE with M-and-C knowledge and ensuring solution completeness

    NASA Astrophysics Data System (ADS)

    Patwari, Puneet; Choudhury, Subhrojyoti R.; Banerjee, Amar; Swaminathan, N.; Pandey, Shreya

    2016-07-01

    Model Driven Engineering (MDE) as a key driver to reduce development cost of M&C systems is beginning to find acceptance across scientific instruments such as Radio Telescopes and Nuclear Reactors. Such projects are adopting it to reduce time to integrate, test and simulate their individual controllers and increase reusability and traceability in the process. The creation and maintenance of models is still a significant challenge to realizing MDE benefits. Creating domain-specific modelling environments reduces the barriers, and we have been working along these lines, creating a domain-specific language and environment based on an M&C knowledge model. However, large projects involve several such domains, and there is still a need to interconnect the domain models, in order to ensure modelling completeness. This paper presents a knowledge-centric approach to doing that, by creating a generic system model that underlies the individual domain knowledge models. We present our vision for M&C Domain Map Maker, a set of processes and tools that enables explication of domain knowledge in terms of domain models with mutual consistency relationships to aid MDE.

  20. Enhancing acronym/abbreviation knowledge bases with semantic information.

    PubMed

    Torii, Manabu; Liu, Hongfang

    2007-10-11

    In the biomedical domain, a terminology knowledge base that associates acronyms/abbreviations (denoted as SFs) with their definitions (denoted as LFs) is highly needed. For the construction of such a terminology knowledge base, we investigate the feasibility of building a system that automatically assigns semantic categories to LFs extracted from text. Given a collection of pairs (SF, LF) derived from text, we i) assess the coverage of LFs and pairs (SF, LF) in the UMLS and justify the need for a semantic category assignment system; and ii) automatically derive name phrases annotated with semantic categories and construct a system using machine learning. Utilizing ADAM, an existing collection of (SF, LF) pairs extracted from MEDLINE, our system achieved an f-measure of 87% when assigning eight UMLS-based semantic groups to LFs. The system has been incorporated into a web interface which integrates SF knowledge from multiple SF knowledge bases. Web site: http://gauss.dbb.georgetown.edu/liblab/SFThesurus.
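
    The reported f-measure is the harmonic mean of precision and recall; the following minimal sketch shows the evaluation arithmetic, with invented precision and recall values that happen to be consistent with a value near 87%.

      # F-measure as the harmonic mean of precision and recall
      # (the 0.88/0.86 inputs are illustrative, not the paper's numbers).
      def f_measure(precision, recall):
          return 2 * precision * recall / (precision + recall)

      print(round(f_measure(0.88, 0.86), 2))   # 0.87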

  1. Archetype-based conversion of EHR content models: pilot experience with a regional EHR system.

    PubMed

    Chen, Rong; Klein, Gunnar O; Sundvall, Erik; Karlsson, Daniel; Ahlfeldt, Hans

    2009-07-01

    Exchange of Electronic Health Record (EHR) data between systems from different suppliers is a major challenge. EHR communication based on archetype methodology has been developed by openEHR and CEN/ISO. The experience of using archetypes in deployed EHR systems is quite limited today. Currently deployed EHR systems with large user bases have their own proprietary way of representing clinical content using various models. This study was designed to investigate the feasibility of representing EHR content models from a regional EHR system as openEHR archetypes and inversely to convert archetypes to the proprietary format. The openEHR EHR Reference Model (RM) and Archetype Model (AM) specifications were used. The template model of the Cambio COSMIC, a regional EHR product from Sweden, was analyzed and compared to the openEHR RM and AM. This study was focused on the convertibility of the EHR semantic models. A semantic mapping between the openEHR RM/AM and the COSMIC template model was produced and used as the basis for developing prototype software that performs automated bi-directional conversion between openEHR archetypes and COSMIC templates. Automated bi-directional conversion between openEHR archetype format and COSMIC template format has been achieved. Several archetypes from the openEHR Clinical Knowledge Repository have been imported into COSMIC, preserving most of the structural and terminology related constraints. COSMIC templates from a large regional installation were successfully converted into the openEHR archetype format. The conversion from the COSMIC templates into archetype format preserves nearly all structural and semantic definitions of the original content models. A strategy of gradually adding archetype support to legacy EHR systems was formulated in order to allow sharing of clinical content models defined using different formats. The openEHR RM and AM are expressive enough to represent the existing clinical content models from the template based EHR system tested and legacy content models can automatically be converted to archetype format for sharing of knowledge. With some limitations, internationally available archetypes could be converted to the legacy EHR models. Archetype support can be added to legacy EHR systems in an incremental way allowing a migration path to interoperability based on standards.

  2. Expert system technology

    NASA Technical Reports Server (NTRS)

    Prince, Mary Ellen

    1987-01-01

    The expert system is a computer program which attempts to reproduce the problem-solving behavior of an expert, who is able to view problems from a broad perspective and arrive at conclusions rapidly, using intuition, shortcuts, and analogies to previous situations. Expert systems are a departure from the usual artificial intelligence approach to problem solving. Researchers have traditionally tried to develop general modes of human intelligence that could be applied to many different situations. Expert systems, on the other hand, tend to rely on large quantities of domain specific knowledge, much of it heuristic. The reasoning component of the system is relatively simple and straightforward. For this reason, expert systems are often called knowledge based systems. The report expands on the foregoing. Section 1 discusses the architecture of a typical expert system. Section 2 deals with the characteristics that make a problem a suitable candidate for expert system solution. Section 3 surveys current technology, describing some of the software aids available for expert system development. Section 4 discusses the limitations of the latter. The concluding section makes predictions of future trends.

  3. Knowledge-Based Entrepreneurship in a Boundless Research System

    ERIC Educational Resources Information Center

    Dell'Anno, Davide

    2008-01-01

    International entrepreneurship and knowledge-based entrepreneurship have recently generated considerable academic and non-academic attention. This paper explores the "new" field of knowledge-based entrepreneurship in a boundless research system. Cultural barriers to the development of business opportunities by researchers persist in some academic…

  4. Issues on the use of meta-knowledge in expert systems

    NASA Technical Reports Server (NTRS)

    Facemire, Jon; Chen, Imao

    1988-01-01

    Meta-knowledge is knowledge about knowledge: knowledge that is not domain specific but is concerned instead with its own internal structure. Several past systems have used meta-knowledge to improve the nature of the user interface, to maintain the knowledge base, and to control the inference engine. More extensive use of meta-knowledge is probable in the future as larger scale problems are considered. A proposed system architecture is presented and discussed in terms of meta-knowledge applications. The principal components of this system (the user support subsystem, the control structure, the knowledge base, the inference engine, and a learning facility) are all outlined and discussed in light of the use of meta-knowledge. Problems with meta-constructs are also mentioned, but it is concluded that the use of meta-knowledge is crucial for increasingly autonomous operations.

  5. Knowledge-based environment for optical system design

    NASA Astrophysics Data System (ADS)

    Johnson, R. Barry

    1991-01-01

    Optical systems are extensively utilized by industry, government, and military organizations. The conceptual design, engineering design, fabrication, and testing of these systems presently requires significant time, typically on the order of 3-5 years. The Knowledge-Based Environment for Optical System Design (KB-OSD) Program has as its principal objectives the development of a methodology and tool(s) that will make a notable reduction in the development time of optical system projects and reduce technical risk and overall cost. KB-OSD can be considered as a computer-based optical design associate for system engineers and design engineers. By utilizing artificial intelligence technology coupled with extensive design/evaluation computer application programs and knowledge bases, the KB-OSD will provide the user with assistance and guidance to accomplish such activities as (i) developing system-level and hardware-level requirements from mission requirements, (ii) formulating conceptual designs, (iii) constructing a statement of work for an RFP, (iv) developing engineering-level designs, (v) evaluating an existing design, and (vi) exploring the sensitivity of a system to changing scenarios. The KB-OSD comprises a variety of computer platforms, including a Stardent Titan supercomputer, numerous design programs (lens design, coating design, thermal, materials, structural, atmospherics, etc.), data bases, and heuristic knowledge bases. An important element of the KB-OSD Program is the inclusion of the knowledge of individual experts in various areas of optics and optical system engineering. This knowledge is obtained by KB-OSD knowledge engineers performing

  6. A knowledge-based flight status monitor for real-time application in digital avionics systems

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Disbrow, J. D.; Butler, G. F.

    1989-01-01

    The Dryden Flight Research Facility of the National Aeronautics and Space Administration (NASA) Ames Research Center (Ames-Dryden) is the principal NASA facility for the flight testing and evaluation of new and complex avionics systems. To aid in the interpretation of system health and status data, a knowledge-based flight status monitor was designed. The monitor was designed to use fault indicators from the onboard system which are telemetered to the ground and processed by a rule-based model of the aircraft failure management system to give timely advice and recommendations in the mission control room. One of the important constraints on the flight status monitor is the need to operate in real time, and to pursue this aspect, a joint research activity between NASA Ames-Dryden and the Royal Aerospace Establishment (RAE) on real-time knowledge-based systems was established. Under this agreement, the original LISP knowledge base for the flight status monitor was reimplemented using the intelligent knowledge-based system toolkit, MUSE, which was developed under RAE sponsorship. Details of the flight status monitor and the MUSE implementation are presented.

  7. PDA: A coupling of knowledge and memory for case-based reasoning

    NASA Technical Reports Server (NTRS)

    Bharwani, S.; Walls, J.; Blevins, E.

    1988-01-01

    Problem solving in most domains requires reference to past knowledge and experience, whether such knowledge is represented as rules, decision trees, networks, or any variant of attributed graphs. Regardless of the representational form employed, designers of expert systems rarely make a distinction between the static and dynamic aspects of the system's knowledge base. The current paper clearly distinguishes between knowledge-based and memory-based reasoning, where the former in its purest sense is characterized by a static knowledge base, resulting in a relatively brittle expert system, while the latter is dynamic and analogous to the functions of human memory, which learns from experience. The paper discusses the design of an advisory system which combines a knowledge base, consisting of domain vocabulary and default dependencies between concepts, with a dynamic conceptual memory which stores experiential knowledge in the form of cases. The case memory organizes past experience in the form of MOPs (memory organization packets) and sub-MOPs. Each MOP consists of a context frame and a set of indices. The context frame contains information about the features (norms) common to all the events and sub-MOPs indexed under it.
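
    A minimal sketch of the MOP organization described above: a context frame of shared norms plus discriminating indices to stored cases. The data layout, feature names, and example case are invented for illustration, not the system's actual design.

      # MOP sketch: a context frame of norms shared by everything stored
      # under it, plus indices from discriminating features to cases.
      class MOP:
          def __init__(self, norms):
              self.norms = norms       # features common to indexed events
              self.indices = {}        # (feature, value) -> stored cases

          def store(self, case):
              for key in case.items() - self.norms.items():
                  self.indices.setdefault(key, []).append(case)

          def retrieve(self, probe):
              hits = []
              for key in probe.items():
                  hits.extend(self.indices.get(key, []))
              return hits

      designs = MOP({"domain": "design"})
      designs.store({"domain": "design", "material": "steel", "outcome": "ok"})
      print(designs.retrieve({"material": "steel"}))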

  8. Why build a virtual brain? Large-scale neural simulations as jump start for cognitive computing

    NASA Astrophysics Data System (ADS)

    Colombo, Matteo

    2017-03-01

    Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, it is controversial what the pay-offs are of pursuing this project. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates this claim and argues that the main challenge this era is facing is not the lack of biological realism. The challenge lies in identifying general neurocomputational principles for the design of artificial systems, which could display the robust flexibility characteristic of biological intelligence.

  9. Eurogin 2016 Roadmap: how HPV knowledge is changing screening practice.

    PubMed

    Wentzensen, Nicolas; Arbyn, Marc; Berkhof, Johannes; Bower, Mark; Canfell, Karen; Einstein, Mark; Farley, Christopher; Monsonego, Joseph; Franceschi, Silvia

    2017-05-15

    Human papillomaviruses (HPVs) are the necessary cause of most cervical cancers, a large proportion of other anogenital cancers, and a subset of oropharyngeal cancers. The knowledge about HPV has led to development of novel HPV-based prevention strategies with important impact on clinical and public health practice. Two complementary reviews have been prepared following the 2015 Eurogin Conference to evaluate how knowledge about HPV is changing practice in HPV infection and disease control through vaccination and screening. This review focuses on screening for cervical and anal cancers in increasingly vaccinated populations. The introduction of HPV vaccines a decade ago has led to reductions in HPV infections and early cancer precursors in countries with wide vaccination coverage. Despite the high efficacy of HPV vaccines, cervical cancer screening will remain important for many decades. Many healthcare systems are considering switching to primary HPV screening, which has higher sensitivity for cervical precancers and allows extending screening intervals. We describe different approaches to implementing HPV-based screening efforts in different healthcare systems with a focus in high-income countries. While the population prevalence for other anogenital cancers is too low for population-based screening, anal cancer incidence is very high in HIV-infected men who have sex with men, warranting consideration of early detection approaches. We summarize the current evidence on HPV-based prevention of anal cancers and highlight important evidence gaps. © 2016 UICC.

  10. ISHM Implementation for Constellation Systems

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Holland, Randy; Schmalzel, John; Duncavage, Dan; Crocker, Alan; Alena, Rick

    2006-01-01

    Integrated System Health Management (ISHM) is a capability that focuses on determining the condition (health) of every element in a complex system (detecting anomalies, diagnosing causes, and predicting future anomalies), and on providing data, information, and knowledge (DIaK), not just data, to control systems for safe and effective operation. This capability is currently provided by large teams of people, primarily on the ground, but it needs to be embedded in on-board systems to a higher degree to enable NASA's new Exploration Mission (long-term travel and stay in space), while increasing safety and decreasing the life cycle costs of systems (vehicles; platforms; bases or outposts; and ground test, launch, and processing operations). This viewgraph presentation reviews the use of ISHM for the Constellation system.

  11. Intelligent E-Learning Systems: Automatic Construction of Ontologies

    NASA Astrophysics Data System (ADS)

    Peso, Jesús del; de Arriaga, Fernando

    2008-05-01

    During the last years a new generation of Intelligent E-Learning Systems (ILS) has emerged with enhanced functionality due, mainly, to influences from Distributed Artificial Intelligence, to the use of cognitive modelling, to the extensive use of the Internet, and to new educational ideas such as student-centered education and Knowledge Management. The automatic construction of ontologies provides a means of automatically updating the knowledge bases of the respective ILS and of increasing their interoperability and communication by sharing the same ontology. The paper presents a new approach, able to produce ontologies from a small number of documents such as those obtained from the Internet, without the assistance of large corpora, by using simple syntactic rules and some semantic information. The method is independent of the natural language used. The use of a multi-agent system increases the flexibility and capability of the method. Although the method can easily be improved, the results so far obtained are promising.

  12. The role of frontline RNs in the selection of an electronic medical record business partner.

    PubMed

    Wilhoit, Kathryn; Mustain, Jane; King, Marjorie

    2006-01-01

    Frontline RNs knowledgeable in the strategic objectives of their organization made a difference in the selection of an electronic medical record business partner for a large, complex healthcare system. Their impact was significant because of the chief nurse executive's personal articulation of the organization's strategic goals and of her investment in their education. These factors provided the frontline RNs with a foundational base of knowledge about a variety of electronic medical record systems. The preparation and exposure enabled the frontline RNs to make a valuable contribution to the selection of an electronic medical record business partner. The RNs were a major force in affecting philosophical change from the organization's original pursuit of "best-of-breed" interfaced systems to a fully integrated, "best-of-class" vendor business partner. The learning experiences of the frontline RNs are explored to answer the following question: Why must frontline RNs play a key role in this process?

  13. Meta-tools for software development and knowledge acquisition

    NASA Technical Reports Server (NTRS)

    Eriksson, Henrik; Musen, Mark A.

    1992-01-01

    The effectiveness of tools that provide support for software development is highly dependent on the match between the tools and their task. Knowledge-acquisition (KA) tools constitute a class of development tools targeted at knowledge-based systems. Generally, KA tools that are custom-tailored for particular application domains are more effective than are general KA tools that cover a large class of domains. The high cost of custom-tailoring KA tools manually has encouraged researchers to develop meta-tools for KA tools. Current research issues in meta-tools for knowledge acquisition are the specification styles, or meta-views, for target KA tools used, and the relationships between the specification entered in the meta-tool and other specifications for the target program under development. We examine different types of meta-views and meta-tools. Our current project is to provide meta-tools that produce KA tools from multiple specification sources--for instance, from a task analysis of the target application.

  14. Knowledge acquisition in the fuzzy knowledge representation framework of a medical consultation system.

    PubMed

    Boegl, Karl; Adlassnig, Klaus-Peter; Hayashi, Yoichi; Rothenfluh, Thomas E; Leitich, Harald

    2004-01-01

    This paper describes the fuzzy knowledge representation framework of the medical computer consultation system MedFrame/CADIAG-IV as well as the specific knowledge acquisition techniques that have been developed to support the definition of knowledge concepts and inference rules. As in its predecessor system CADIAG-II, fuzzy medical knowledge bases are used to model the uncertainty and the vagueness of medical concepts, and fuzzy logic reasoning mechanisms provide the basic inference processes. The elicitation and acquisition of medical knowledge from domain experts has often been described as the most difficult and time-consuming task in knowledge-based system development in medicine. It comes as no surprise that this is even more so when unfamiliar representations such as fuzzy membership functions are to be acquired. From previous projects we have learned that a user-centered approach is mandatory in complex and ill-defined knowledge domains such as internal medicine. This paper describes the knowledge acquisition framework that has been developed to make three main tasks easier and more accessible: (a) defining medical concepts; (b) providing appropriate interpretations for patient data; and (c) constructing inferential knowledge in a fuzzy knowledge representation framework. Special emphasis is laid on the motivations for some system design and data modeling decisions. The theoretical framework has been implemented in a software package, the Knowledge Base Builder Toolkit. The conception and the design of this system reflect the need for a user-centered, intuitive, and easy-to-handle tool. First results gained from pilot studies have shown that our approach can be successfully implemented in the context of a complex fuzzy theoretical framework. As a result, this critical aspect of knowledge-based system development can be accomplished more easily.
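
    A sketch of the kind of fuzzy data interpretation such a system acquires from experts: a trapezoidal membership function mapping a raw patient value to a degree of membership in a vague concept. The concept "elevated temperature" and its cut-points are invented, not CADIAG-IV's.

      # Trapezoidal membership function: rises on [a, b], equals 1 on
      # [b, c], falls on [c, d] (cut-points below are illustrative).
      def trapezoid(x, a, b, c, d):
          if x <= a or x >= d:
              return 0.0
          if b <= x <= c:
              return 1.0
          return (x - a) / (b - a) if x < b else (d - x) / (d - c)

      def elevated(temp_c):
          return trapezoid(temp_c, 37.0, 37.5, 41.0, 42.0)

      print(elevated(37.2), elevated(38.0))   # ~0.4 1.0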

  15. An expert system for the design of heating, ventilating, and air-conditioning systems

    NASA Astrophysics Data System (ADS)

    Camejo, Pedro Jose

    1989-12-01

    Expert systems are computer programs that seek to mimic human reason. An expert system shell, a software program commonly used for developing expert systems in a relatively short time, was used to develop a prototypical expert system for the design of heating, ventilating, and air-conditioning (HVAC) systems in buildings. Because HVAC design involves several related knowledge domains, developing an expert system for HVAC design requires the integration of several smaller expert systems known as knowledge bases. A menu program and several auxiliary programs for gathering data, completing calculations, printing project reports, and passing data between the knowledge bases are needed and have been developed to join the separate knowledge bases into one simple-to-use program unit.

  16. Development of the Internet-Based Customer-Oriented Ordering System Framework for Complicated Mechanical Product

    NASA Astrophysics Data System (ADS)

    Ong, Mingwei; Watanuki, Keiichi

    Recently, as consumers gradually prefer buying products that reflect their own personality, some consumers wish to be involved in the product design process. In parallel with the popularization of e-business, many manufacturers have utilized the Internet to promote their products, and some have even built websites that enable consumers to select their desired product specifications. Nevertheless, this method has not been applied to complicated mechanical products, because such products have a large number of specifications that inter-relate with one another. In such a case, ordinary consumers, who lack design knowledge, are not capable of determining these specifications. In this paper, a prototype framework called the Internet-based consumer-oriented product ordering system is developed, which gives ordinary consumers large freedom in determining complicated mechanical product specifications while ensuring that the manufacturing of the determined product is feasible.

  17. Integrative pathway knowledge bases as a tool for systems molecular medicine.

    PubMed

    Liang, Mingyu

    2007-08-20

    There exists a sense of urgency to begin to generate a cohesive assembly of biomedical knowledge as the pace of knowledge accumulation accelerates. The urgency is in part driven by the emergence of systems molecular medicine that emphasizes the combination of systems analysis and molecular dissection in the future of medical practice and research. A potentially powerful approach is to build integrative pathway knowledge bases that link organ systems function with molecules.

  18. Autonomous power expert fault diagnostic system for Space Station Freedom electrical power system testbed

    NASA Technical Reports Server (NTRS)

    Truong, Long V.; Walters, Jerry L.; Roth, Mary Ellen; Quinn, Todd M.; Krawczonek, Walter M.

    1990-01-01

    The goal of the Autonomous Power System (APS) program is to develop and apply intelligent problem solving and control to the Space Station Freedom Electrical Power System (SSF/EPS) testbed being developed and demonstrated at NASA Lewis Research Center. The objectives of the program are to establish artificial intelligence technology paths, to craft knowledge-based tools with advanced human-operator interfaces for power systems, and to interface and integrate knowledge-based systems with conventional controllers. The Autonomous Power EXpert (APEX) portion of the APS program will integrate a knowledge-based fault diagnostic system and a power resource planner-scheduler. Then APEX will interface on-line with the SSF/EPS testbed and its Power Management Controller (PMC). The key tasks include establishing knowledge bases for system diagnostics, fault detection and isolation analysis, on-line information accessing through PMC, enhanced data management, and multiple-level, object-oriented operator displays. The first prototype of the diagnostic expert system for fault detection and isolation has been developed. The knowledge bases and the rule-based model that were developed for the Power Distribution Control Unit subsystem of the SSF/EPS testbed are described. A corresponding troubleshooting technique is also described.

  19. Large-Scale Urban Localisation with a Pushbroom LIDAR

    DTIC Science & Technology

    2012-10-01

    the sole means of generating full 6-DOF poses from a previously-surveyed workspace. In this approach, Normalised Information Distance is used as the...vehicle to have good knowledge of its position at system-initialization. We now turn to an overview of the use of point-based registration methods for...combination of reference vertices, and provides a convenient representation for the intersection test. Using this parametric representation, the coordinates

  20. An Intelligent Tool for Activity Data Collection

    PubMed Central

    Jehad Sarkar, A. M.

    2011-01-01

    Activity recognition systems using simple and ubiquitous sensors require a large variety of real-world sensor data for not only evaluating their performance but also training the systems for better functioning. However, a tremendous amount of effort is required to set up an environment for collecting such data. For example, expertise and resources are needed to design and install the sensors, controllers, network components, and middleware just to perform basic data collections. It is therefore desirable to have a data collection method that is inexpensive, flexible, user-friendly, and capable of providing large and diverse activity datasets. In this paper, we propose an intelligent activity data collection tool which has the ability to provide such datasets inexpensively without physically deploying the testbeds. It can be used as an inexpensive and alternative technique to collect human activity data. The tool provides a set of web interfaces to create a web-based activity data collection environment. It also provides a web-based experience sampling tool to take the user's activity input. The tool generates an activity log using its activity knowledge and the user-given inputs. The activity knowledge is mined from the web. We have performed two experiments to validate the tool's performance in producing reliable datasets. PMID:22163832

  1. Use of metaknowledge in the verification of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Morell, Larry J.

    1989-01-01

    Knowledge-based systems are modeled as deductive systems. The model indicates that the two primary areas of concern in verification are demonstrating consistency and completeness. A system is inconsistent if it asserts something that is not true of the modeled domain. A system is incomplete if it lacks deductive capability. Two forms of consistency are discussed along with appropriate verification methods. Three forms of incompleteness are discussed. The use of metaknowledge, knowledge about knowledge, is explored in connection with each form of incompleteness.
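
    The consistency check can be made concrete with a small sketch: forward-chain a rule base and flag any derivation of mutually exclusive facts. The rules and exclusion pairs below are hypothetical, not taken from the paper.

        # One rule is deliberately wrong so the check has something to find.
        RULES = [({"pump_on"}, "flow_present"),
                 ({"valve_shut"}, "flow_absent"),
                 ({"pump_on"}, "valve_shut")]
        EXCLUSIVE = [("flow_present", "flow_absent")]

        def forward_chain(facts, rules):
            """Derive everything the rule base can conclude from the facts."""
            facts, changed = set(facts), True
            while changed:
                changed = False
                for premises, conclusion in rules:
                    if premises <= facts and conclusion not in facts:
                        facts.add(conclusion)
                        changed = True
            return facts

        derived = forward_chain({"pump_on"}, RULES)
        for a, b in EXCLUSIVE:
            if a in derived and b in derived:
                print(f"inconsistency: both {a!r} and {b!r} are derivable")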

  2. A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Chen, James

    1994-01-01

    Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to this user. However, most of this knowledge on contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval, and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
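
    One plausible shape for such a relevance network is sketched below: feedback is stored per (profile, query) and generalized to similar queries by token overlap. The class, the Jaccard similarity, and the threshold are illustrative assumptions, not the authors' formulation.

        from collections import defaultdict

        class RelevanceNetwork:
            def __init__(self):
                # (profile, frozenset of query terms) -> {reference: score}
                self.scores = defaultdict(lambda: defaultdict(float))

            def feedback(self, profile, query, reference, vote):
                terms = frozenset(query.lower().split())
                self.scores[(profile, terms)][reference] += vote  # +1 or -1

            def relevant(self, profile, query, threshold=0.25):
                terms = set(query.lower().split())
                ranked = defaultdict(float)
                for (prof, seen), refs in self.scores.items():
                    if prof != profile:
                        continue
                    sim = len(terms & seen) / len(terms | seen)  # Jaccard
                    for ref, score in refs.items():
                        ranked[ref] += sim * score
                return sorted((r for r, s in ranked.items() if s > threshold),
                              key=lambda r: -ranked[r])

        net = RelevanceNetwork()
        net.feedback("pilot", "engine restart procedure", "doc-42", +1)
        print(net.relevant("pilot", "engine restart checklist"))  # ['doc-42']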

  3. Creation of a master table for checking indication and contraindication of medicine from a knowledge base linked with a thesaurus.

    PubMed

    Ji, Shanmei; Matsumura, Yasushi; Kuwata, Shigeki; Nakano, Hirohiko; Chen, Yufeng; Teratani, Tadamasa; Zhang, Qiyan; Mineno, Takahiro; Takeda, Hiroshi

    2004-12-01

    To develop a system for checking indication and contraindication of medicines in a prescription order entry system, a master table consisting of the disease names corresponding to the medicines adopted in a hospital is needed. The creation of this table requires considerable manpower. We developed a Web-based system for constructing a medicine/disease thesaurus and a knowledge base. By managing user authority, this system enables many specialists to create the thesaurus collaboratively without confusion. It supports the creation of a knowledge base using concept names by referring to the thesaurus, which is automatically converted to the check master table. When a disease name or medicine name was added to the thesaurus, the check table was automatically updated. We constructed a thesaurus and a knowledge base in the field of circulatory system disease. The knowledge base linked with the thesaurus proved to be efficient for making the check master table for indication/contraindication of medicines.
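
    The flattening step can be suggested with a small sketch: concept-level knowledge is expanded through the thesaurus into a (medicine, disease name, flag) check table, and rebuilding after a thesaurus edit reproduces the automatic update described. All names below are hypothetical.

        thesaurus = {                  # concept -> registered disease names
            "heart_failure": ["heart failure", "cardiac failure", "CHF"],
            "asthma":        ["asthma", "bronchial asthma"],
        }
        knowledge_base = {             # medicine -> concept-level knowledge
            "carvedilol": {"indication": ["heart_failure"],
                           "contraindication": ["asthma"]},
        }

        def build_check_table(kb, thesaurus):
            """Expand concepts to every synonym, one row per disease name."""
            return [(medicine, name, flag)
                    for medicine, entry in kb.items()
                    for flag, concepts in entry.items()
                    for concept in concepts
                    for name in thesaurus[concept]]

        thesaurus["asthma"].append("asthma bronchiale")  # thesaurus edit
        for row in build_check_table(knowledge_base, thesaurus):
            print(row)                 # the new synonym appears automatically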

  4. Fundamentals of pulmonary drug delivery.

    PubMed

    Groneberg, D A; Witt, C; Wagner, U; Chung, K F; Fischer, A

    2003-04-01

    Aerosol administration of peptide-based drugs plays an important role in the treatment of pulmonary and systemic diseases, and the unique cellular properties of the airway epithelium offer great potential for delivering new compounds. As the relative contributions from the large airways to the alveolar space are important to local and systemic availability, the sites and mechanisms of uptake and transport of different target compounds have to be characterized. Among the different respiratory cells, the ciliated epithelial cells of the larger and smaller airways and the type I and type II pneumocytes are the key players in pulmonary drug transport. With their diverse cellular characteristics, each of these cell types displays a unique uptake possibility. Next to knowledge of these cellular aspects, the nature of aerosolized drugs, the characteristics of delivery systems, and the depositional and pulmonary clearance mechanisms are major targets for optimizing pulmonary drug delivery. Based on the growing knowledge of pulmonary cell biology and pathophysiology gained through modern methods of molecular biology, the future characterization of pulmonary drug transport pathways can lead to new strategies in aerosol drug therapy.

  5. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

    Procedural modeling systems, rule based modeling systems, and a method for converting a procedural model to a rule based model are described. Simulation models are used to represent real time engineering systems. A real time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language. Therefore, they must be enhanced with a reaction capability. Rule based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule based system can be generated by a knowledge acquisition tool or a source level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule based system. Neural models can provide the high capacity data manipulation required by the most complex real time models.
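
    The translation idea can be sketched in a few lines of Python: each procedural assignment becomes a rule that fires whenever its inputs are known, so the model reacts to data instead of running top to bottom. The equations below are hypothetical.

        rules = [
            # (output, inputs, function) -- one rule per decomposed calculation
            ("area", ("width", "height"), lambda w, h: w * h),
            ("load", ("area", "pressure"), lambda a, p: a * p),
        ]

        def react(known, rules):
            """Propagate values through the knowledge network until quiescent."""
            fired = True
            while fired:
                fired = False
                for out, ins, fn in rules:
                    if out not in known and all(i in known for i in ins):
                        known[out] = fn(*(known[i] for i in ins))
                        fired = True
            return known

        print(react({"width": 2.0, "height": 3.0, "pressure": 5.0}, rules))
        # {'width': 2.0, 'height': 3.0, 'pressure': 5.0, 'area': 6.0, 'load': 30.0}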

  6. Land management: data availability and process understanding for global change studies.

    PubMed

    Erb, Karl-Heinz; Luyssaert, Sebastiaan; Meyfroidt, Patrick; Pongratz, Julia; Don, Axel; Kloster, Silvia; Kuemmerle, Tobias; Fetzel, Tamara; Fuchs, Richard; Herold, Martin; Haberl, Helmut; Jones, Chris D; Marín-Spiotta, Erika; McCallum, Ian; Robertson, Eddy; Seufert, Verena; Fritz, Steffen; Valade, Aude; Wiltshire, Andrew; Dolman, Albertus J

    2017-02-01

    In the light of daunting global sustainability challenges such as climate change, biodiversity loss and food security, improving our understanding of the complex dynamics of the Earth system is crucial. However, large knowledge gaps related to the effects of land management persist, in particular those human-induced changes in terrestrial ecosystems that do not result in land-cover conversions. Here, we review the current state of knowledge of ten common land management activities for their biogeochemical and biophysical impacts, the level of process understanding and data availability. Our review shows that ca. one-tenth of the ice-free land surface is under intense human management, half under medium and one-fifth under extensive management. Based on our review, we cluster these ten management activities into three groups: (i) management activities for which data sets are available, and for which a good knowledge base exists (cropland harvest and irrigation); (ii) management activities for which sufficient knowledge on biogeochemical and biophysical effects exists but robust global data sets are lacking (forest harvest, tree species selection, grazing and mowing harvest, N fertilization); and (iii) land management practices with severe data gaps concomitant with an unsatisfactory level of process understanding (crop species selection, artificial wetland drainage, tillage and fire management and crop residue management, an element of crop harvest). Although we identify multiple impediments to progress, we conclude that the current status of process understanding and data availability is sufficient to advance with incorporating management in, for example, Earth system or dynamic vegetation models in order to provide a systematic assessment of their role in the Earth system. This review contributes to a strategic prioritization of research efforts across multiple disciplines, including land system research, ecological research and Earth system modelling. © 2016 John Wiley & Sons Ltd.

  7. Integration of a knowledge-based system and a clinical documentation system via a data dictionary.

    PubMed

    Eich, H P; Ohmann, C; Keim, E; Lang, K

    1997-01-01

    This paper describes the design and realisation of a knowledge-based system and a clinical documentation system linked via a data dictionary. The software was developed as a shell with object oriented methods and C++ for IBM-compatible PC's and WINDOWS 3.1/95. The data dictionary covers terminology and document objects with relations to external classifications. It controls the terminology in the documentation program with form-based entry of clinical documents and in the knowledge-based system with scores and rules. The software was applied to the clinical field of acute abdominal pain by implementing a data dictionary with 580 terminology objects, 501 document objects, and 2136 links; a documentation module with 8 clinical documents and a knowledge-based system with 10 scores and 7 sets of rules.

  8. STRUTEX: A prototype knowledge-based system for initially configuring a structure to support point loads in two dimensions

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Feyock, Stefan; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    The purpose of this research effort is to investigate the benefits that might be derived from applying artificial intelligence tools in the area of conceptual design. Therefore, the emphasis is on the artificial intelligence aspects of conceptual design rather than structural and optimization aspects. A prototype knowledge-based system, called STRUTEX, was developed to initially configure a structure to support point loads in two dimensions. This system combines numerical and symbolic processing by the computer with interactive problem solving aided by the vision of the user by integrating a knowledge base interface and inference engine, a data base interface, and graphics while keeping the knowledge base and data base files separate. The system writes a file which can be input into a structural synthesis system, which combines structural analysis and optimization.

  9. Biological standards for the Knowledge-Based BioEconomy: What is at stake.

    PubMed

    de Lorenzo, Víctor; Schmidt, Markus

    2018-01-25

    The contribution of life sciences to the Knowledge-Based Bioeconomy (KBBE) asks for the transition of contemporary, gene-based biotechnology from being a trial-and-error endeavour to becoming an authentic branch of engineering. One requisite to this end is the need for standards to measure and represent accurately biological functions, along with languages for data description and exchange. However, the inherent complexity of biological systems and the lack of quantitative tradition in the field have largely curbed this enterprise. Fortunately, the onset of systems and synthetic biology has emphasized the need for standards not only to manage omics data, but also to increase reproducibility and provide the means of engineering living systems in earnest. Some domains of biotechnology can be easily standardized (e.g. physical composition of DNA sequences, tools for genome editing, languages to encode workflows), while others might be standardized with some dedicated research (e.g. biological metrology, operative systems for bio-programming cells) and finally others will require a considerable effort, e.g. defining the rules that allow functional composition of biological activities. Despite difficulties, these are worthy attempts, as the history of technology shows that those who set/adopt standards gain a competitive advantage over those who do not. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. LISP as an Environment for Software Design: Powerful and Perspicuous

    PubMed Central

    Blum, Robert L.; Walker, Michael G.

    1986-01-01

    The LISP language provides a useful set of features for prototyping knowledge-intensive, clinical applications software that is not found in most other programming environments. Medical computer programs that need large medical knowledge bases, such as programs for diagnosis, therapeutic consultation, education, simulation, and peer review, are hard to design, evolve continually, and often require major revisions. They necessitate an efficient and flexible program development environment. The LISP language and programming environments built around it are well suited for program prototyping. The lingua franca of artificial intelligence researchers, LISP facilitates building complex systems because it is simple yet powerful. Because of its simplicity, LISP programs can read, execute, modify and even compose other LISP programs at run time. Hence, it has been easy for system developers to create programming tools that greatly speed the program development process, and that may be easily extended by users. This has resulted in the creation of many useful graphical interfaces, editors, and debuggers, which facilitate the development of knowledge-intensive medical applications.

  11. Automated knowledge-base refinement

    NASA Technical Reports Server (NTRS)

    Mooney, Raymond J.

    1994-01-01

    Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
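
    The flavor of example-driven revision can be shown with a toy sketch: a rule that wrongly covers a negative example is specialized by adding a condition the negative lacks. FORTE's first-order Horn-clause machinery is far more general; the rule and examples here are invented.

        rule = {"conditions": {"has_wings"}, "conclusion": "can_fly"}
        negative = {"has_wings", "is_penguin"}       # covered, but should not be
        positives = [{"has_wings", "has_feathers"}]  # must stay covered

        if rule["conditions"] <= negative:           # rule misfires on negative
            # add a condition true of all positives but false of the negative
            candidates = set.intersection(*positives) - negative
            rule["conditions"].add(candidates.pop())

        print(rule)  # conditions now include 'has_feathers'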

  12. Semantics-driven modelling of user preferences for information retrieval in the biomedical domain.

    PubMed

    Gladun, Anatoly; Rogushina, Julia; Valencia-García, Rafael; Béjar, Rodrigo Martínez

    2013-03-01

    A large amount of biomedical and genomic data are currently available on the Internet. However, data are distributed into heterogeneous biological information sources, with little or even no organization. Semantic technologies provide a consistent and reliable basis with which to confront the challenges involved in the organization, manipulation and visualization of data and knowledge. One of the knowledge representation techniques used in semantic processing is the ontology, which is commonly defined as a formal and explicit specification of a shared conceptualization of a domain of interest. The work presented here introduces a set of interoperable algorithms that can use domain and ontological information to improve information-retrieval processes. This work presents an ontology-based information-retrieval system for the biomedical domain. This system, with which some experiments have been carried out that are described in this paper, is based on the use of domain ontologies for the creation and normalization of lightweight ontologies that represent user preferences in a determined domain in order to improve information-retrieval processes.

  13. Knowledge retrieval as one type of knowledge-based decision support in medicine: results of an evaluation study.

    PubMed

    Haux, R; Grothe, W; Runkel, M; Schackert, H K; Windeler, H J; Winter, A; Wirtz, R; Herfarth, C; Kunze, S

    1996-04-01

    We report on a prospective, prolective observational study, supplying information on how physicians and other health care professionals retrieve medical knowledge on-line within the Heidelberg University Hospital information system. Within this hospital information system, on-line access to medical knowledge has been realised by installing a medical knowledge server in the range of about 24 GB and by providing access to it from health care professional workstations in wards, physicians' rooms, etc. During the study, we observed about 96 accesses per working day. The main group of health care professionals retrieving medical knowledge were physicians and medical students. Primary reasons for its utilisation were identified as support for the users' scientific work (50%), own clinical cases (19%), general medical problems (14%) and current clinical problems (13%). Health care professionals had access to medical knowledge bases such as MEDLINE (79%), drug bases ('Rote Liste', 6%), and to electronic text books and knowledge base systems as well. Sixty-five percent of accesses to medical knowledge were judged to be successful. In our opinion, medical knowledge retrieval can serve as a first step towards knowledge processing in medicine. We point out the consequences for the management of hospital information systems in order to provide the prerequisites for such a type of knowledge retrieval.

  14. What Is the Current Level of Asthma Knowledge in Elementary, Middle, and High School Teachers?

    ERIC Educational Resources Information Center

    Carey, Stephen

    2013-01-01

    This study examined teacher asthma knowledge based on three areas including (a) the level of teacher asthma knowledge in the Maury County Public School System, (b) the level of teacher asthma knowledge based on five demographic factors, and (c) the level of teacher asthma knowledge in the Maury County Public School System compared with teacher…

  15. PANDORA: keyword-based analysis of protein sets by integration of annotation sources.

    PubMed

    Kaplan, Noam; Vaaknin, Avishay; Linial, Michal

    2003-10-01

    Recent advances in high-throughput methods and the application of computational tools for automatic classification of proteins have made it possible to carry out large-scale proteomic analyses. Biological analysis and interpretation of sets of proteins is a time-consuming undertaking carried out manually by experts. We have developed PANDORA (Protein ANnotation Diagram ORiented Analysis), a web-based tool that provides an automatic representation of the biological knowledge associated with any set of proteins. PANDORA uses a unique approach of keyword-based graphical analysis that focuses on detecting subsets of proteins that share unique biological properties and the intersections of such sets. PANDORA currently supports SwissProt keywords, NCBI Taxonomy, InterPro entries and the hierarchical classification terms from ENZYME, SCOP and GO databases. The integrated study of several annotation sources simultaneously allows a representation of biological relations of structure, function, cellular location, taxonomy, domains and motifs. PANDORA is also integrated into the ProtoNet system, thus allowing testing thousands of automatically generated clusters. We illustrate how PANDORA enhances the biological understanding of large, non-uniform sets of proteins originating from experimental and computational sources, without the need for prior biological knowledge on individual proteins.
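
    At its core, the keyword-based set analysis reduces to building keyword-to-protein sets and intersecting them; a toy sketch with hypothetical proteins and keywords follows.

        annotations = {
            "P1": {"kinase", "membrane"},
            "P2": {"kinase", "nucleus"},
            "P3": {"membrane", "transporter"},
        }

        def keyword_sets(annotations):
            """Invert protein -> keywords into keyword -> set of proteins."""
            sets = {}
            for protein, keywords in annotations.items():
                for kw in keywords:
                    sets.setdefault(kw, set()).add(protein)
            return sets

        sets = keyword_sets(annotations)
        print(sets["kinase"] & sets["membrane"])  # shared subset -> {'P1'}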

  16. A Knowledge Portal and Collaboration Environment for the Earth Sciences

    NASA Astrophysics Data System (ADS)

    D'Agnese, F. A.

    2008-12-01

    Earth Knowledge is developing a web-based 'Knowledge Portal and Collaboration Environment' that will serve as the information-technology-based foundation of a modular Internet-based Earth-Systems Monitoring, Analysis, and Management Tool. This 'Knowledge Portal' is essentially a 'mash-up' of web-based and client-based tools and services that support on-line collaboration, community discussion, and broad public dissemination of earth and environmental science information in a wide-area distributed network. In contrast to specialized knowledge-management or geographic-information systems developed for long-term and incremental scientific analysis, this system will exploit familiar software tools using industry standard protocols, formats, and APIs to discover, process, fuse, and visualize existing environmental datasets using Google Earth and Google Maps. An early form of these tools and services is being used by Earth Knowledge to facilitate the investigations and conversations of scientists, resource managers, and citizen-stakeholders addressing water resource sustainability issues in the Great Basin region of the desert southwestern United States. These ongoing projects will serve as use cases for the further development of this information-technology infrastructure. This 'Knowledge Portal' will accelerate the deployment of Earth-system data and information into an operational knowledge management system that may be used by decision-makers concerned with stewardship of water resources in the American Desert Southwest.

  17. Mars Rover imaging systems and directional filtering

    NASA Technical Reports Server (NTRS)

    Wang, Paul P.

    1989-01-01

    Computer literature searches were carried out at Duke University and NASA Langley Research Center. The purpose is to enhance personal knowledge of the technical problems of pattern recognition and image understanding that must be solved for the Mars Rover and Sample Return Mission. An intensive study of a large collection of relevant literature resulted in a compilation of all important documents in one place. Furthermore, the documents are being classified into: Mars Rover; computer vision (theory); imaging systems; pattern recognition methodologies; and other smart techniques (AI, neural networks, fuzzy logic, etc.).

  18. CLIPS: A tool for corn disease diagnostic system and an aid to neural network for automated knowledge acquisition

    NASA Technical Reports Server (NTRS)

    Wu, Cathy; Taylor, Pam; Whitson, George; Smith, Cathy

    1990-01-01

    This paper describes the building of a corn disease diagnostic expert system using CLIPS, and the development of a neural expert system using the fact representation method of CLIPS for automated knowledge acquisition. The CLIPS corn expert system diagnoses 21 diseases from 52 symptoms and signs with certainty factors. CLIPS has several unique features. It allows the facts in rules to be broken down into object-attribute-value (OAV) triples, allows rule-grouping, and fires rules based on pattern-matching. These features, combined with the chained inference engine, result in a natural user query system and speedy execution. In order to develop a method for automated knowledge acquisition, an Artificial Neural Expert System (ANES) is developed by a direct mapping from the CLIPS system. The ANES corn expert system uses the same OAV triples in the CLIPS system for its facts. The LHS and RHS facts of the CLIPS rules are mapped into the input and output layers of the ANES, respectively, and the inference engine of the rules is embedded in the hidden layer. The fact representation by OAV triples gives a natural grouping of the rules. These features allow the ANES system to automate rule-generation, and make it efficient to execute and easy to expand for a large and complex domain.
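
    The rule-to-network mapping can be illustrated with a toy sketch: LHS facts become input units, each rule becomes a hidden unit that fires only when all of its LHS facts are present, and RHS facts become output units. The rule below is invented, not taken from the corn knowledge base.

        rules = [
            ({("leaf", "spot", "gray"), ("lesion", "shape", "cigar")},  # LHS OAVs
             ("disease", "is", "gray_leaf_spot")),                      # RHS OAV
        ]

        inputs = sorted({f for lhs, _ in rules for f in lhs})
        outputs = sorted({rhs for _, rhs in rules})

        def forward(present_facts):
            x = [1 if f in present_facts else 0 for f in inputs]
            fired = set()
            for lhs, rhs in rules:            # hidden layer: one unit per rule
                total = sum(x[inputs.index(f)] for f in lhs)
                if total >= len(lhs):         # threshold = number of LHS facts
                    fired.add(rhs)
            return [1 if o in fired else 0 for o in outputs]

        print(forward({("leaf", "spot", "gray"), ("lesion", "shape", "cigar")}))
        # -> [1]: the gray_leaf_spot output unit fires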

  19. Comparison of LISP and MUMPS as implementation languages for knowledge-based systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtis, A.C.

    1984-01-01

    Major components of knowledge-based systems are summarized, along with the programming language features generally useful in their implementation. LISP and MUMPS are briefly described and compared as vehicles for building knowledge-based systems. The paper concludes with suggestions for extensions to MUMPS which might increase its usefulness in artificial intelligence applications without affecting the essential nature of the language.

  20. Knowledge-based nursing diagnosis

    NASA Astrophysics Data System (ADS)

    Roy, Claudette; Hay, D. Robert

    1991-03-01

    Nursing diagnosis is an integral part of the nursing process and determines the interventions leading to outcomes for which the nurse is accountable. Diagnoses under the time constraints of modern nursing can benefit from a computer assist. A knowledge-based engineering approach was developed to address these problems. A number of problems addressed during system design to make the system practical extended beyond the capture of knowledge. The issues involved in implementing a professional knowledge base in a clinical setting are discussed. System functions, structure, interfaces, the health care environment, and terminology and taxonomy are discussed. An integrated system concept from assessment through intervention and evaluation is outlined.

  1. Development of a component centered fault monitoring and diagnosis knowledge based system for space power system

    NASA Technical Reports Server (NTRS)

    Lee, S. C.; Lollar, Louis F.

    1988-01-01

    The overall approach currently being taken in the development of AMPERES (Autonomously Managed Power System Extendable Real-time Expert System), a knowledge-based expert system for fault monitoring and diagnosis of space power systems, is discussed. The system architecture, knowledge representation, and fault monitoring and diagnosis strategy are examined. A 'component-centered' approach developed in this project is described. Critical issues requiring further study are identified.

  2. Network-based approaches to climate knowledge discovery

    NASA Astrophysics Data System (ADS)

    Budich, Reinhard; Nyberg, Per; Weigel, Tobias

    2011-11-01

    Climate Knowledge Discovery Workshop; Hamburg, Germany, 30 March to 1 April 2011 Do complex networks combined with semantic Web technologies offer the next generation of solutions in climate science? To address this question, a first Climate Knowledge Discovery (CKD) Workshop, hosted by the German Climate Computing Center (Deutsches Klimarechenzentrum (DKRZ)), brought together climate and computer scientists from major American and European laboratories, data centers, and universities, as well as representatives from industry, the broader academic community, and the semantic Web communities. The participants, representing six countries, were concerned with large-scale Earth system modeling and computational data analysis. The motivation for the meeting was the growing problem that climate scientists generate data faster than it can be interpreted and the need to prepare for further exponential data increases. Current analysis approaches are focused primarily on traditional methods, which are best suited for large-scale phenomena and coarse-resolution data sets. The workshop focused on the open discussion of ideas and technologies to provide the next generation of solutions to cope with the increasing data volumes in climate science.

  3. From science to action: Principles for undertaking environmental research that enables knowledge exchange and evidence-based decision-making.

    PubMed

    Cvitanovic, C; McDonald, J; Hobday, A J

    2016-12-01

    Effective conservation requires knowledge exchange among scientists and decision-makers to enable learning and support evidence-based decision-making. Efforts to improve knowledge exchange have been hindered by a paucity of empirically-grounded guidance to help scientists and practitioners design and implement research programs that actively facilitate knowledge exchange. To address this, we evaluated the Ningaloo Research Program (NRP), which was designed to generate new scientific knowledge to support evidence-based decisions about the management of the Ningaloo Marine Park in north-western Australia. Specifically, we evaluated (1) outcomes of the NRP, including the extent to which new knowledge informed management decisions; (2) the barriers that prevented knowledge exchange among scientists and managers; (3) the key requirements for improving knowledge exchange processes in the future; and (4) the core capacities that are required to support knowledge exchange processes. While the NRP generated expansive and multidisciplinary science outputs directly relevant to the management of the Ningaloo Marine Park, decision-makers are largely unaware of this knowledge and little has been integrated into decision-making processes. A range of barriers prevented efficient and effective knowledge exchange among scientists and decision-makers, including cultural differences among the groups, institutional barriers within decision-making agencies, scientific outputs that were not translated for decision-makers, and poor alignment between research design and actual knowledge needs. We identify a set of principles to be implemented routinely as part of any applied research program, including: (i) stakeholder mapping prior to the commencement of research programs to identify all stakeholders, (ii) research questions co-developed with stakeholders, (iii) implementation of participatory research approaches, (iv) use of a knowledge broker, and (v) tailored knowledge management systems. Finally, we articulate the individual, institutional and financial capacities that must be developed to underpin successful knowledge exchange strategies. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  4. Perspectives on knowledge in engineering design

    NASA Technical Reports Server (NTRS)

    Rasdorf, W. J.

    1985-01-01

    Various perspectives are given on the knowledge currently used in engineering design, specifically dealing with knowledge-based expert systems (KBES). Constructing an expert system often reveals inconsistencies in domain knowledge while formalizing it. The types of domain knowledge (facts, procedures, judgments, and control) differ from the classes of that knowledge (creative, innovative, and routine). The feasible tasks for expert systems can be determined based on these types and classes of knowledge. Interpretive tasks require reasoning about a task in light of the knowledge available, while generative tasks create potential solutions to be tested against constraints. Only after classifying the domain by type and level can the engineer select a knowledge-engineering tool for the domain being considered. The critical features to be weighed after classification are knowledge representation techniques, control strategies, interface requirements, compatibility with traditional systems, and economic considerations.

  5. A formal approach to validation and verification for knowledge-based control systems

    NASA Technical Reports Server (NTRS)

    Castore, Glen

    1987-01-01

    As control systems become more complex in response to desires for greater system flexibility, performance and reliability, the promise is held out that artificial intelligence might provide the means for building such systems. An obstacle to the use of symbolic processing constructs in this domain is the need for verification and validation (V and V) of the systems. Techniques currently in use do not seem appropriate for knowledge-based software. An outline of a formal approach to V and V for knowledge-based control systems is presented.

  6. Expert system for the design of heating, ventilating, and air-conditioning systems. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camejo, P.J.

    1989-12-01

    Expert systems are computer programs that seek to mimic human reason. An expert system shell, a software program commonly used for developing expert systems in a relatively short time, was used to develop a prototypical expert system for the design of heating, ventilating, and air-conditioning (HVAC) systems in buildings. Because HVAC design involves several related knowledge domains, developing an expert system for HVAC design requires the integration of several smaller expert systems known as knowledge bases. A menu program and several auxiliary programs for gathering data, completing calculations, printing project reports, and passing data between the knowledge bases are needed and have been developed to join the separate knowledge bases into one simple-to-use program unit.

  7. An expert system prototype for aiding in the development of software functional requirements for NASA Goddard's command management system: A case study and lessons learned

    NASA Technical Reports Server (NTRS)

    Liebowitz, Jay

    1986-01-01

    At NASA Goddard, the role of the command management system (CMS) is to transform general requests for spacecraft operations into detailed operational plans to be uplinked to the spacecraft. The CMS is part of the NASA Data System, which entails the downlink of science and engineering data from NASA near-earth satellites to the user, and the uplink of command and control data to the spacecraft. Presently, it takes one to three years, with meetings once or twice a week, to determine functional requirements for CMS software design. As an alternative approach to the present technique of developing CMS software functional requirements, an expert system prototype was developed to aid in this function. Specifically, the knowledge base was formulated through interactions with domain experts, and was then linked to an existing expert system application generator called 'Knowledge Engineering System (Version 1.3).' Knowledge base development focused on four major steps: (1) develop the problem-oriented attribute hierarchy; (2) determine the knowledge management approach; (3) encode the knowledge base; and (4) validate, test, certify, and evaluate the knowledge base and the expert system prototype as a whole. Backcasting was accomplished for validating and testing the expert system prototype. Knowledge refinement, evaluation, and implementation procedures of the expert system prototype were then carried out.

  8. GPCR ontology: development and application of a G protein-coupled receptor pharmacology knowledge framework.

    PubMed

    Przydzial, Magdalena J; Bhhatarai, Barun; Koleti, Amar; Vempati, Uma; Schürer, Stephan C

    2013-12-15

    Novel tools need to be developed to help scientists analyze large amounts of available screening data with the goal to identify entry points for the development of novel chemical probes and drugs. As the largest class of drug targets, G protein-coupled receptors (GPCRs) remain of particular interest and are pursued by numerous academic and industrial research projects. We report the first GPCR ontology to facilitate integration and aggregation of GPCR-targeting drugs and demonstrate its application to classify and analyze a large subset of the PubChem database. The GPCR ontology, based on previously reported BioAssay Ontology, depicts available pharmacological, biochemical and physiological profiles of GPCRs and their ligands. The novelty of the GPCR ontology lies in the use of diverse experimental datasets linked by a model to formally define these concepts. Using a reasoning system, GPCR ontology offers potential for knowledge-based classification of individuals (such as small molecules) as a function of the data. The GPCR ontology is available at http://www.bioassayontology.org/bao_gpcr and the National Center for Biomedical Ontologies Web site.

  9. The trigger system for the external target experiment in the HIRFL cooling storage ring

    NASA Astrophysics Data System (ADS)

    Li, Min; Zhao, Lei; Liu, Jin-Xin; Lu, Yi-Ming; Liu, Shu-Bin; An, Qi

    2016-08-01

    A trigger system was designed for the external target experiment in the Cooling Storage Ring (CSR) of the Heavy Ion Research Facility in Lanzhou (HIRFL). Considering that different detectors are scattered over a large area, the trigger system is designed based on a master-slave structure and fiber-based serial data transmission technique. The trigger logic is organized in hierarchies, and flexible reconfiguration of the trigger function is achieved based on command register access or overall field-programmable gate array (FPGA) logic on-line reconfiguration controlled by remote computers. We also conducted tests to confirm the function of the trigger electronics, and the results indicate that this trigger system works well. Supported by the National Natural Science Foundation of China (11079003), the Knowledge Innovation Program of the Chinese Academy of Sciences (KJCX2-YW-N27), and the CAS Center for Excellence in Particle Physics (CCEPP).

  10. Visualizing the Topical Structure of the Medical Sciences: A Self-Organizing Map Approach

    PubMed Central

    Skupin, André; Biberstine, Joseph R.; Börner, Katy

    2013-01-01

    Background We implement a high-resolution visualization of the medical knowledge domain using the self-organizing map (SOM) method, based on a corpus of over two million publications. While self-organizing maps have been used for document visualization for some time, (1) little is known about how to deal with truly large document collections in conjunction with a large number of SOM neurons, (2) post-training geometric and semiotic transformations of the SOM tend to be limited, and (3) no user studies have been conducted with domain experts to validate the utility and readability of the resulting visualizations. Our study makes key contributions to all of these issues. Methodology Documents extracted from Medline and Scopus are analyzed on the basis of indexer-assigned MeSH terms. Initial dimensionality is reduced to include only the top 10% most frequent terms and the resulting document vectors are then used to train a large SOM consisting of over 75,000 neurons. The resulting two-dimensional model of the high-dimensional input space is then transformed into a large-format map by using geographic information system (GIS) techniques and cartographic design principles. This map is then annotated and evaluated by ten experts stemming from the biomedical and other domains. Conclusions Study results demonstrate that it is possible to transform a very large document corpus into a map that is visually engaging and conceptually stimulating to subject experts from both inside and outside of the particular knowledge domain. The challenges of dealing with a truly large corpus come to the fore and require embracing parallelization and use of supercomputing resources to solve otherwise intractable computational tasks. Among the envisaged future efforts are the creation of a highly interactive interface and the elaboration of the notion of this map of medicine acting as a base map, onto which other knowledge artifacts could be overlaid. PMID:23554924
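
    For readers unfamiliar with the method, a toy numpy sketch of SOM training follows; random vectors stand in for the MeSH term vectors, and the grid, rates, and epochs are tiny illustrative choices, nothing like the 75,000-neuron map of the study.

        import numpy as np

        rng = np.random.default_rng(0)
        docs = rng.random((200, 10))                 # 200 documents, 10 "terms"
        grid_w, grid_h, dim = 8, 8, 10
        weights = rng.random((grid_w * grid_h, dim))
        coords = np.array([(i, j) for i in range(grid_w) for j in range(grid_h)])

        for epoch in range(20):
            lr = 0.5 * (1 - epoch / 20)              # decaying learning rate
            radius = max(1.0, 4.0 * (1 - epoch / 20))
            for x in docs:
                bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best match
                d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distance
                influence = np.exp(-d2 / (2 * radius ** 2))
                weights += lr * influence[:, None] * (x - weights)

        # After training, each document lands on its best-matching grid neuron.
        print(np.argmin(((weights - docs[0]) ** 2).sum(axis=1)))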

  11. A knowledge-based system with learning for computer communication network design

    NASA Technical Reports Server (NTRS)

    Pierre, Samuel; Hoang, Hai Hoc; Tropper-Hausen, Evelyne

    1990-01-01

    Computer communication network design is well known to be complex and hard. For that reason, the most effective methods used to solve it are heuristic. Weaknesses of these techniques are listed, and a new approach based on artificial intelligence for solving this problem is presented. This approach is particularly recommended for large packet-switched communication networks, in the sense that it permits a high degree of reliability and offers a very flexible environment dealing with many relevant design parameters such as link cost, link capacity, and message delay.

  12. Foundation: Transforming data bases into knowledge bases

    NASA Technical Reports Server (NTRS)

    Purves, R. B.; Carnes, James R.; Cutts, Dannie E.

    1987-01-01

    One approach to transforming information stored in relational data bases into knowledge based representations and back again is described. This system, called Foundation, allows knowledge bases to take advantage of vast amounts of pre-existing data. A benefit of this approach is inspection, and even population, of data bases through an intelligent knowledge-based front-end.
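
    One direction of the transformation can be suggested with a small sketch: each row of a relational table becomes a frame keyed by its primary key, with one slot per column. The table and schema are hypothetical; Foundation's actual mapping is not claimed to work exactly this way.

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE part (id TEXT PRIMARY KEY, mass REAL,"
                   " material TEXT)")
        db.execute("INSERT INTO part VALUES ('strut-1', 2.5, 'aluminum')")

        def table_to_frames(db, table, key):
            """Turn each row into a frame: {slot (column) -> filler (value)}."""
            cur = db.execute(f"SELECT * FROM {table}")
            cols = [c[0] for c in cur.description]
            return {row[cols.index(key)]: dict(zip(cols, row)) for row in cur}

        frames = table_to_frames(db, "part", "id")
        print(frames["strut-1"])
        # {'id': 'strut-1', 'mass': 2.5, 'material': 'aluminum'}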

  13. Institutional Memory Preservation at NASA Glenn Research Center

    NASA Technical Reports Server (NTRS)

    Coffey, J.; Moreman, Douglas; Dyer, J.; Hemminger, J. A.

    1999-01-01

    In this era of downsizing and deficit reduction, the preservation of institutional memory is a widespread concern for U.S. companies and governmental agencies. The National Aeronautics and Space Administration faces the pending retirement of many of the agency's long-term, senior engineers. NASA has a marvelous long-term history of success, but the agency faces a recurring problem caused by the loss of these engineers' unique knowledge and perspectives on NASA's role in aeronautics and space exploration. The current work describes a knowledge elicitation effort aimed at demonstrating the feasibility of preserving the more personal, heuristic knowledge accumulated over the years by NASA engineers, as contrasted with the "textbook" knowledge of launch vehicles. Work on this project was performed at NASA Glenn Research Center and elsewhere, and focused on launch vehicle systems integration. The initial effort was directed toward an historic view of the Centaur upper stage, which is powered by two RL-10 engines. Various experts were consulted, employing a variety of knowledge elicitation techniques, regarding the Centaur and RL-10. Their knowledge is represented in searchable Web-based multimedia presentations. This paper discusses the various approaches to knowledge elicitation and knowledge representation employed, and assesses successes and challenges in trying to perform large-scale knowledge preservation of institutional memory. It is anticipated that strategies for knowledge elicitation and representation developed in this grant will be utilized to elicit knowledge in a variety of domains, including the complex heuristics that underlie use of simulation software packages such as that being explored in the Expert System Architecture for Rocket Engine Numerical Simulators.

  14. Adding Learning to Knowledge-Based Systems: Taking the "Artificial" Out of AI

    Treesearch

    Daniel L. Schmoldt

    1997-01-01

    Both knowledge-based systems (KBS) development and maintenance require time-consuming analysis of domain knowledge. Where example cases exist, KBS can be built, and later updated, by incorporating learning capabilities into their architecture. This applies to both supervised and unsupervised learning scenarios. In this paper, the important issues for learning systems-...

  15. Collaborative Learning and Knowledge-Construction through a Knowledge-Based WWW Authoring Tool.

    ERIC Educational Resources Information Center

    Haugsjaa, Erik

    This paper outlines hurdles to using the World Wide Web for learning, specifically in a collaborative knowledge-construction environment. Theoretical solutions based directly on existing Web environments, as well as on research and system prototypes in the areas of Intelligent Tutoring Systems (ITS) and ITS authoring systems, are suggested. Topics…

  16. Web-Based Knowledge Exchange through Social Links in the Workplace

    ERIC Educational Resources Information Center

    Filipowski, Tomasz; Kazienko, Przemyslaw; Brodka, Piotr; Kajdanowicz, Tomasz

    2012-01-01

    Knowledge exchange between employees is an essential feature of recent commercial organisations on the competitive market. Based on the data gathered by various information technology (IT) systems, social links can be extracted and exploited in knowledge exchange systems of a new kind. Users of such a system ask their queries and the system…

  17. Multi-viewpoint clustering analysis

    NASA Technical Reports Server (NTRS)

    Mehrotra, Mala; Wild, Chris

    1993-01-01

    In this paper, we address the feasibility of partitioning rule-based systems into a number of meaningful units to enhance the comprehensibility, maintainability and reliability of expert systems software. Preliminary results have shown that no single structuring principle or abstraction hierarchy is sufficient to understand complex knowledge bases. We therefore propose the Multi View Point - Clustering Analysis (MVP-CA) methodology to provide multiple views of the same expert system. We present the results of using this approach to partition a deployed knowledge-based system that navigates the Space Shuttle's entry. We also discuss the impact of this approach on verification and validation of knowledge-based systems.

  18. Review of integrated digital systems: evolution and adoption

    NASA Astrophysics Data System (ADS)

    Fritz, Lawrence W.

    The factors that are influencing the evolution of photogrammetric and remote sensing technology to transition into fully integrated digital systems are reviewed. These factors include societal pressures for new, more timely digital products from the Spatial Information Sciences and the adoption of rapid technological advancements in digital processing hardware and software. Current major developments in leading government mapping agencies of the USA, such as the Digital Production System (DPS) modernization programme at the Defense Mapping Agency, and the Automated Nautical Charting System II (ANCS-II) programme and Integrated Digital Photogrammetric Facility (IDPF) at NOAA/National Ocean Service, illustrate the significant benefits to be realized. These programmes are examples of different levels of integrated systems that have been designed to produce digital products. They provide insights to the management complexities to be considered for very large integrated digital systems. In recognition of computer industry trends, a knowledge-based architecture for managing the complexity of the very large spatial information systems of the future is proposed.

  19. An object-oriented, knowledge-based system for cardiovascular rehabilitation--phase II.

    PubMed Central

    Ryder, R. M.; Inamdar, B.

    1995-01-01

    The Heart Monitor is an object-oriented, knowledge-based system designed to support the clinical activities of cardiovascular (CV) rehabilitation. The original concept was developed as part of graduate research completed in 1992. This paper describes the second generation system which is being implemented in collaboration with a local heart rehabilitation program. The PC UNIX-based system supports an extensive patient database organized by clinical areas. In addition, a knowledge base is employed to monitor patient status. Rule-based automated reasoning is employed to assess risk factors contraindicative to exercise therapy and to monitor administrative and statutory requirements. PMID:8563285

  20. A Model to Assess the Behavioral Impacts of Consultative Knowledge Based Systems.

    ERIC Educational Resources Information Center

    Mak, Brenda; Lyytinen, Kalle

    1997-01-01

    This research model studies the behavioral impacts of consultative knowledge based systems (KBS). A study of graduate students explored to what extent their decisions were affected by user participation in updating the knowledge base; ambiguity of decision setting; routinization of usage; and source credibility of the expertise embedded in the…

  1. Towards sustainable infrastructure management: knowledge-based service-oriented computing framework for visual analytics

    NASA Astrophysics Data System (ADS)

    Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd

    2009-05-01

    Infrastructure management (and its associated processes) is complex to understand and perform, which makes efficient, effective, and informed decisions hard to reach. The management involves a multi-faceted operation that requires the most robust data fusion, visualization and decision making. In order to protect and build sustainable critical assets, we present our on-going multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system with a focus on supporting bridge structure inspection and management. This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry, local and federal government agencies. IRSV is being designed to accommodate the essential needs from the following aspects: 1) Better understanding and enforcement of complex inspection processes that can bridge the gap between evidence gathering and decision making through the implementation of an ontological knowledge engineering system; 2) Aggregation, representation and fusion of complex multi-layered heterogeneous data (i.e. infrared imaging, aerial photos and ground-mounted LIDAR etc.) with domain application knowledge to support a machine-understandable recommendation system; 3) Robust visualization techniques with large-scale analytical and interactive visualizations that support users' decision making; and 4) Integration of these needs through a flexible Service-oriented Architecture (SOA) framework to compose and provide services on-demand. IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance and infrastructure monitoring, both periodically (annually, monthly, or even daily if needed) and after extreme events.

  2. Rule-based mechanisms of learning for intelligent adaptive flight control

    NASA Technical Reports Server (NTRS)

    Handelman, David A.; Stengel, Robert F.

    1990-01-01

    How certain aspects of human learning can be used to characterize learning in intelligent adaptive control systems is investigated. Reflexive and declarative memory and learning are described. It is shown that model-based systems-theoretic adaptive control methods exhibit attributes of reflexive learning, whereas the problem-solving capabilities of knowledge-based systems of artificial intelligence are naturally suited for implementing declarative learning. Issues related to learning in knowledge-based control systems are addressed, with particular attention given to rule-based systems. A mechanism for real-time rule-based knowledge acquisition is suggested, and utilization of this mechanism within the context of failure diagnosis for fault-tolerant flight control is demonstrated.

  3. Improving data workflow systems with cloud services and use of open data for bioinformatics research.

    PubMed

    Karim, Md Rezaul; Michel, Audrey; Zappa, Achille; Baranov, Pavel; Sahay, Ratnesh; Rebholz-Schuhmann, Dietrich

    2017-04-16

    Data workflow systems (DWFSs) enable bioinformatics researchers to combine components for data access and data analytics, and to share the final data analytics approach with their collaborators. Increasingly, such systems have to cope with large-scale data, such as full genomes (about 200 GB each), public fact repositories (about 100 TB of data) and 3D imaging data at even larger scales. As moving the data becomes cumbersome, the DWFS needs to embed its processes into a cloud infrastructure, where the data are already hosted. As the standardized public data play an increasingly important role, the DWFS needs to comply with Semantic Web technologies. This advancement to DWFS would reduce overhead costs and accelerate the progress in bioinformatics research based on large-scale data and public resources, as researchers would require less specialized IT knowledge for the implementation. Furthermore, the high data growth rates in bioinformatics research drive the demand for parallel and distributed computing, which then imposes a need for scalability and high-throughput capabilities onto the DWFS. As a result, requirements for data sharing and access to public knowledge bases suggest that compliance of the DWFS with Semantic Web standards is necessary. In this article, we will analyze the existing DWFS with regard to their capabilities toward public open data use as well as large-scale computational and human interface requirements. We untangle the parameters for selecting a preferable solution for bioinformatics research with particular consideration to using cloud services and Semantic Web technologies. Our analysis leads to research guidelines and recommendations toward the development of future DWFS for the bioinformatics research community. © The Author 2017. Published by Oxford University Press.

  4. Introduction to Radar Signal and Data Processing: The Opportunity

    DTIC Science & Technology

    2006-09-01

    (SpA) Director of Analysis of Integrated Systems Group Via Tiburtina Km. 12.400 00131 Rome ITALY e-mail: afarina@selex-si.com Key words: radar ... signal processing, data processing, adaptivity, space-time adaptive processing, knowledge-based systems, CFAR. 1. SUMMARY This paper introduces to ... the lecture series dedicated to the knowledge-based radar signal and data processing. Knowledge-based expert system (KBS) is in the realm of

  5. Study of Design Knowledge Capture (DKC) schemes implemented in magnetic bearing applications

    NASA Technical Reports Server (NTRS)

    1990-01-01

    A design knowledge capture (DKC) scheme was implemented using frame-based techniques. The objective of such a system is to capture not only the knowledge which describes a design, but also that which explains how the design decisions were reached. These knowledge types were labelled definitive and explanatory, respectively. Examination of the design process helped determine what knowledge to retain and at what stage that knowledge is used. A discussion of frames resulted in the recognition of their value to knowledge representation and organization. The FORMS frame system was used as a basis for further development, and for examples using magnetic bearing design. The specific contributions made by this research include: determination that frame-based systems provide a useful methodology for management and application of design knowledge; definition of specific user interface requirements (a window-based browser); specification of syntax for DKC commands; and demonstration of the feasibility of DKC by application to existing designs. It was determined that design knowledge capture could become an extremely valuable engineering tool for complicated, long-life systems, but that further work was needed, particularly the development of a graphic, window-based interface.
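
    The definitive/explanatory split can be illustrated with a tiny frame sketch in Python: each design slot pairs a value with the rationale behind it. Slot names and values are hypothetical and not drawn from the FORMS system.

        class DesignFrame:
            def __init__(self, name):
                self.name = name
                self.definitive = {}    # what the design is
                self.explanatory = {}   # why it is that way

            def set(self, slot, value, rationale):
                self.definitive[slot] = value
                self.explanatory[slot] = rationale

        bearing = DesignFrame("magnetic_bearing")
        bearing.set("air_gap_mm", 0.5,
                    "smallest gap the selected position sensor can regulate")
        print(bearing.definitive["air_gap_mm"], "-",
              bearing.explanatory["air_gap_mm"])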

  6. Introducing the Big Knowledge to Use (BK2U) challenge

    PubMed Central

    Perl, Yehoshua; Geller, James; Halper, Michael; Ochs, Christopher; Zheng, Ling; Kapusnik-Uner, Joan

    2016-01-01

    The purpose of the Big Data to Knowledge (BD2K) initiative is to develop methods for discovering new knowledge from large amounts of data. However, if the resulting knowledge is so large that it resists comprehension, referred to here as Big Knowledge (BK), how can it be used properly and creatively? We call this secondary challenge, Big Knowledge to Use (BK2U). Without a high-level mental representation of the kinds of knowledge in a BK knowledgebase, effective or innovative use of the knowledge may be limited. We describe summarization and visualization techniques that capture the big picture of a BK knowledgebase, possibly created from Big Data. In this research, we distinguish between assertion BK and rule-based BK and demonstrate the usefulness of summarization and visualization techniques of assertion BK for clinical phenotyping. As an example, we illustrate how a summary of many intracranial bleeding concepts can improve phenotyping, compared to the traditional approach. We also demonstrate the usefulness of summarization and visualization techniques of rule-based BK for drug–drug interaction discovery. PMID:27750400

  7. Pharmacogenomic knowledge representation, reasoning and genome-based clinical decision support based on OWL 2 DL ontologies.

    PubMed

    Samwald, Matthias; Miñarro Giménez, Jose Antonio; Boyce, Richard D; Freimuth, Robert R; Adlassnig, Klaus-Peter; Dumontier, Michel

    2015-02-22

    Every year, hundreds of thousands of patients experience treatment failure or adverse drug reactions (ADRs), many of which could be prevented by pharmacogenomic testing. However, the primary knowledge needed for clinical pharmacogenomics is currently dispersed over disparate data structures and captured in unstructured or semi-structured formalizations. This is a source of potential ambiguity and complexity, making it difficult to create reliable information technology systems for enabling clinical pharmacogenomics. We developed Web Ontology Language (OWL) ontologies and automated reasoning methodologies to meet the following goals: 1) provide a simple and concise formalism for representing pharmacogenomic knowledge, 2) find errors and insufficient definitions in pharmacogenomic knowledge bases, 3) automatically assign alleles and phenotypes to patients, 4) match patients to clinically appropriate pharmacogenomic guidelines and clinical decision support messages and 5) facilitate the detection of inconsistencies and overlaps between pharmacogenomic treatment guidelines from different sources. We evaluated different reasoning systems and tested our approach with a large collection of publicly available genetic profiles. Our methodology proved to be a novel and useful choice for representing, analyzing and using pharmacogenomic data. The Genomic Clinical Decision Support (Genomic CDS) ontology represents 336 SNPs with 707 variants; 665 haplotypes related to 43 genes; 22 rules related to drug-response phenotypes; and 308 clinical decision support rules. OWL reasoning identified CDS rules with overlapping target populations but differing treatment recommendations. Only a modest number of clinical decision support rules were triggered for a collection of 943 public genetic profiles. We found significant performance differences across available OWL reasoners. The ontology-based framework we developed can be used to represent, organize and reason over the growing wealth of pharmacogenomic knowledge, as well as to identify errors, inconsistencies and insufficient definitions in source data sets or individual patient data. Our study highlights both advantages and potential practical issues with such an ontology-based approach.
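
    The allele-assignment and guideline-matching steps (goals 3 and 4) can be illustrated compactly. The sketch below mimics that pipeline in plain Python rather than OWL 2 DL reasoning; the SNP identifier, star alleles, and the CDS rule are invented placeholders, not content from the Genomic CDS ontology.

    ```python
    # Toy stand-in for the OWL-based pipeline: assign a star allele from SNP
    # calls, map the diplotype to a phenotype, then match a CDS rule.
    # All identifiers here are hypothetical, not from Genomic CDS.
    ALLELE_DEFS = {"*2": {"rs0001:T"}, "*1": set()}      # *1 = reference allele
    PHENOTYPE_RULES = {("*2", "*2"): "poor metabolizer"}
    CDS_RULES = {("poor metabolizer", "drugX"):
                 "consider an alternative agent or adjusted dose"}

    def assign_allele(snp_calls):
        # Most specific allele definition consistent with the observed calls.
        for allele, required in sorted(ALLELE_DEFS.items(), key=lambda kv: -len(kv[1])):
            if required <= snp_calls:
                return allele

    def decision_support(calls_maternal, calls_paternal, drug):
        diplotype = tuple(sorted((assign_allele(calls_maternal),
                                  assign_allele(calls_paternal))))
        phenotype = PHENOTYPE_RULES.get(diplotype, "normal metabolizer")
        return CDS_RULES.get((phenotype, drug), "no guidance triggered")

    print(decision_support({"rs0001:T"}, {"rs0001:T"}, "drugX"))
    ```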

  8. Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model

    NASA Astrophysics Data System (ADS)

    Pathak, Jaideep; Wikner, Alexander; Fussell, Rebeckah; Chandra, Sarthak; Hunt, Brian R.; Girvan, Michelle; Ott, Edward

    2018-04-01

    A model-based approach to forecasting chaotic dynamical systems utilizes knowledge of the mechanistic processes governing the dynamics to build an approximate mathematical model of the system. In contrast, machine learning techniques have demonstrated promising results for forecasting chaotic systems purely from past time series measurements of system state variables (training data), without prior knowledge of the system dynamics. The motivation for this paper is the potential of machine learning for filling in the gaps in our underlying mechanistic knowledge that cause widely used knowledge-based models to be inaccurate. Thus, we here propose a general method that leverages the advantages of these two approaches by combining a knowledge-based model and a machine learning technique to build a hybrid forecasting scheme. Potential applications for such an approach are numerous (e.g., improving weather forecasting). We demonstrate and test the utility of this approach using a particular illustrative version of machine learning known as reservoir computing, and we apply the resulting hybrid forecaster to a low-dimensional chaotic system, as well as to a high-dimensional spatiotemporal chaotic system. These tests yield extremely promising results in that our hybrid technique is able to accurately predict for a much longer period of time than either its machine-learning component or its model-based component alone.
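
    A minimal sketch of such a hybrid scheme, under toy assumptions: the true system is a logistic map, the imperfect knowledge-based model is the same map with a slightly wrong parameter, and the two are fused by training a single ridge-regressed readout over the reservoir state plus the model's prediction. The reservoir size and regularization are arbitrary choices, not the paper's settings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # True system vs. imperfect knowledge-based model (wrong parameter).
    step_true  = lambda x: 3.9 * x * (1 - x)
    step_model = lambda x: 3.8 * x * (1 - x)

    T = 2000
    x = np.empty(T); x[0] = 0.4
    for t in range(T - 1):
        x[t + 1] = step_true(x[t])

    # Random fixed reservoir, rescaled to spectral radius 0.9.
    N = 300
    A = rng.normal(size=(N, N))
    A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))
    W_in = rng.uniform(-1, 1, size=N)

    r = np.zeros(N)
    feats, targets = [], []
    for t in range(T - 1):
        r = np.tanh(A @ r + W_in * x[t])
        feats.append(np.concatenate([r, [step_model(x[t])]]))  # hybrid feature
        targets.append(x[t + 1])
    S, y = np.array(feats), np.array(targets)

    # Ridge regression for the readout weights.
    beta = 1e-6
    W_out = np.linalg.solve(S.T @ S + beta * np.eye(S.shape[1]), S.T @ y)

    # Closed-loop (autonomous) forecasting from the last training point.
    x_hat = x[-1]
    for step in range(10):
        r = np.tanh(A @ r + W_in * x_hat)
        x_hat = np.concatenate([r, [step_model(x_hat)]]) @ W_out
        print(step, round(float(x_hat), 4))
    ```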

  9. Knowledge Guided Evolutionary Algorithms in Financial Investing

    ERIC Educational Resources Information Center

    Wimmer, Hayden

    2013-01-01

    A large body of literature exists on evolutionary computing, genetic algorithms, decision trees, codified knowledge, and knowledge management systems; however, the intersection of these computing topics has not been widely researched. Moving through the set of all possible solutions--or traversing the search space--at random exhibits no control…

  10. Students' Refinement of Knowledge during the Development of Knowledge Bases for Expert Systems.

    ERIC Educational Resources Information Center

    Lippert, Renate; Finley, Fred

    The refinement of the cognitive knowledge base was studied through exploration of the transition from novice to expert and the use of an instructional strategy called novice knowledge engineering. Six college freshmen, who were enrolled in an honors physics course, used an expert system to create questions, decisions, rules, and explanations…

  11. A knowledge-based object recognition system for applications in the space station

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching for the recognition of 3D objects in the two-dimensional (2D) image for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and the feature extraction are completed. The data structure of the primitive viewing knowledge-base (PVKB) is also completed. Algorithms and programs based on attribute-tree matching for decomposing the segmented data into valid primitives were developed. The frame-based structural and relational descriptions of some objects were created and stored in a knowledge-base. This knowledge-base of frame-based descriptions was developed on the MICROVAX-AI microcomputer in a LISP environment. The simulated 3D scene of simple non-overlapping objects as well as real camera data of images of 3D objects of low complexity have been successfully interpreted.

  12. Use of Knowledge Base Systems (EMDS) in Strategic and Tactical Forest Planning

    NASA Astrophysics Data System (ADS)

    Jensen, M. E.; Reynolds, K.; Stockmann, K.

    2008-12-01

    The USDA Forest Service 2008 Planning Rule requires Forest plans to provide a strategic vision for maintaining the sustainability of ecological, economic, and social systems across USFS lands through the identification of desired conditions and objectives. In this paper we show how knowledge-based systems can be efficiently used to evaluate disparate natural resource information to assess desired conditions and related objectives in Forest planning. We use the Ecosystem Management Decision Support (EMDS) system (http://www.institute.redlands.edu/emds/), which facilitates development of both logic-based models for evaluating ecosystem sustainability (desired conditions) and decision models to identify priority areas for integrated landscape restoration (objectives). The study area for our analysis spans 1,057 subwatersheds within western Montana and northern Idaho. Results of our study suggest that knowledge-based systems such as EMDS are well suited to both strategic and tactical planning and that the following points merit consideration in future National Forest (and other land management) planning efforts: 1) Logic models provide a consistent, transparent, and reproducible method for evaluating broad propositions about ecosystem sustainability such as: are watershed integrity, ecosystem and species diversity, social opportunities, and economic integrity in good shape across a planning area? The ability to evaluate such propositions in a formal logic framework also allows users the opportunity to evaluate statistical changes in outcomes over time, which could be very useful for regional and national reporting purposes and for addressing litigation; 2) The use of logic and decision models in strategic and tactical Forest planning provides a repository for expert knowledge (corporate memory) that is critical to the evaluation and management of ecosystem sustainability over time. This is especially true for the USFS and other federal resource agencies, which are likely to experience rapid turnover in tenured resource specialist positions within the next five years due to retirements; 3) Use of logic model output in decision models is an efficient method for synthesizing the typically large amounts of information needed to support integrated landscape restoration. Moreover, use of logic and decision models to design customized scenarios for integrated landscape restoration, as we have demonstrated with EMDS, offers substantial improvements to traditional GIS-based procedures such as suitability analysis. To our knowledge, this study represents the first attempt to link evaluations of desired conditions for ecosystem sustainability in strategic planning to tactical planning regarding the location of subwatersheds that best meet the objectives of integrated landscape restoration. The basic knowledge-based approach implemented in EMDS, with its logic (NetWeaver) and decision (Criterion Decision Plus) engines, is well suited both to multi-scale strategic planning and to multi-resource tactical planning.

  13. Intelligent fault management for the Space Station active thermal control system

    NASA Technical Reports Server (NTRS)

    Hill, Tim; Faltisco, Robert M.

    1992-01-01

    The Thermal Advanced Automation Project (TAAP) approach and architecture are described for automating the Space Station Freedom (SSF) Active Thermal Control System (ATCS). The baseline functionality and advanced automation techniques for Fault Detection, Isolation, and Recovery (FDIR) will be compared and contrasted. Advanced automation techniques such as rule-based systems and model-based reasoning should be utilized to efficiently control, monitor, and diagnose this extremely complex physical system. TAAP is developing advanced FDIR software for use on the SSF thermal control system. The goal of TAAP is to join Knowledge-Based System (KBS) technology, using a combination of rules and model-based reasoning, with conventional monitoring and control software in order to maximize autonomy of the ATCS. TAAP's predecessor was NASA's Thermal Expert System (TEXSYS) project, which was the first large real-time expert system to use both extensive rules and model-based reasoning to control and perform FDIR on a large, complex physical system. TEXSYS showed that a method is needed for safely and inexpensively testing all possible faults of the ATCS, particularly those potentially damaging to the hardware, in order to develop a fully capable FDIR system. TAAP therefore includes the development of a high-fidelity simulation of the thermal control system. The simulation provides realistic, dynamic ATCS behavior and fault insertion capability for software testing without hardware-related risks or expense. In addition, thermal engineers will gain greater confidence in the KBS FDIR software than was possible prior to this kind of simulation testing. The TAAP KBS will initially be a ground-based extension of the baseline ATCS monitoring and control software and could be migrated on-board as additional computation resources are made available.

  14. Course Ontology-Based User's Knowledge Requirement Acquisition from Behaviors within E-Learning Systems

    ERIC Educational Resources Information Center

    Zeng, Qingtian; Zhao, Zhongying; Liang, Yongquan

    2009-01-01

    User's knowledge requirement acquisition and analysis are very important for a personalized or user-adaptive learning system. Two approaches to capture user's knowledge requirement about course content within an e-learning system are proposed and implemented in this paper. The first approach is based on the historical data accumulated by an…

  15. SensorDB: a virtual laboratory for the integration, visualization and analysis of varied biological sensor data.

    PubMed

    Salehi, Ali; Jimenez-Berni, Jose; Deery, David M; Palmer, Doug; Holland, Edward; Rozas-Larraondo, Pablo; Chapman, Scott C; Georgakopoulos, Dimitrios; Furbank, Robert T

    2015-01-01

    To our knowledge, there is no software or database solution that supports large volumes of biological time series sensor data efficiently and enables data visualization and analysis in real time. Existing solutions for managing data typically use unstructured file systems or relational databases. These systems are not designed to provide instantaneous response to user queries. Furthermore, they do not support rapid data analysis and visualization to enable interactive experiments. In large-scale experiments, this behaviour slows research discovery and discourages the widespread sharing and reuse of data that could otherwise inform critical decisions in a timely manner and encourage effective collaboration between groups. In this paper we present SensorDB, a web-based virtual laboratory that can manage large volumes of biological time series sensor data while supporting rapid data queries and real-time user interaction. SensorDB is sensor agnostic and uses web-based, state-of-the-art cloud and storage technologies to efficiently gather, analyse and visualize data. Collaboration and data sharing between different agencies and groups is thereby facilitated. SensorDB is available online at http://sensordb.csiro.au.

  16. A knowledge-based support system for mechanical ventilation of the lungs. The KUSIVAR concept and prototype.

    PubMed

    Rudowski, R; Frostell, C; Gill, H

    1989-09-01

    KUSIVAR is an expert system for mechanical ventilation of adult patients suffering from respiratory insufficiency. Its main objective is to provide guidance in respirator management. The knowledge base includes both qualitative, rule-based knowledge and quantitative knowledge expressed in the form of mathematical models (expert control), which are used for prediction of arterial gas tensions and for optimization purposes. The system is data driven and uses a forward-chaining mechanism for rule invocation. The interaction with the user will be performed in advisory, critiquing, semi-automatic and automatic modes. The system is at present in an advanced prototype stage. Prototyping is performed using KEE (Knowledge Engineering Environment) on a Sperry Explorer workstation. For further development and clinical use the expert system will be downloaded to an advanced PC. The system is intended to support therapy with a Siemens-Elema Servoventilator 900 C.

  17. NSF's Perspective on Space Weather Research for Building Forecasting Capabilities

    NASA Astrophysics Data System (ADS)

    Bisi, M. M.; Pulkkinen, A. A.; Bisi, M. M.; Pulkkinen, A. A.; Webb, D. F.; Oughton, E. J.; Azeem, S. I.

    2017-12-01

    Space weather research at the National Science Foundation (NSF) is focused on scientific discovery and on deepening knowledge of the Sun-Geospace system. Maturation of the knowledge base is a prerequisite for the development of improved space weather forecast models and for the accurate assessment of potential mitigation strategies. Progress in space weather forecasting requires advancing in-depth understanding of the underlying physical processes, developing better instrumentation and measurement techniques, and capturing the advancements in understanding in large-scale physics-based models that span the entire chain of events from the Sun to the Earth. This presentation will provide an overview of current and planned programs pertaining to space weather research at NSF and discuss the recommendations of the Geospace Section portfolio review panel within the context of space weather forecasting capabilities.

  18. From knowledge presentation to knowledge representation to knowledge construction: Future directions for hypermedia

    NASA Technical Reports Server (NTRS)

    Palumbo, David B.

    1990-01-01

    Relationships between human memory systems and hypermedia systems are discussed, with particular emphasis on the underlying importance of associational memory. The distinctions between knowledge presentation, knowledge representation, and knowledge construction are addressed. Issues involved in actually developing individualizable hypermedia-based knowledge construction tools are presented.

  19. GUIDON-WATCH: A Graphic Interface for Viewing a Knowledge-Based System. Technical Report #14.

    ERIC Educational Resources Information Center

    Richer, Mark H.; Clancey, William J.

    This paper describes GUIDON-WATCH, a graphic interface that uses multiple windows and a mouse to allow a student to browse a knowledge base and view reasoning processes during diagnostic problem solving. The GUIDON project at Stanford University is investigating how knowledge-based systems can provide the basis for teaching programs, and this…

  20. GAMES II Project: a general architecture for medical knowledge-based systems.

    PubMed

    Bruno, F; Kindler, H; Leaning, M; Moustakis, V; Scherrer, J R; Schreiber, G; Stefanelli, M

    1994-10-01

    GAMES II aims at developing a comprehensive and commercially viable methodology to avoid problems ordinarily occurring in KBS development. The GAMES II methodology proposes to design a KBS starting from an epistemological model of medical reasoning (the Select and Test Model). The design is viewed as a process of adding symbol-level information to the epistemological model. The architectural framework provided by GAMES II integrates the use of different formalisms and techniques, providing a large set of tools. The user can select the most suitable one for representing a piece of knowledge after a careful analysis of its epistemological characteristics. Special attention is devoted to the tools dealing with knowledge acquisition (both manual and automatic). A panel of practicing physicians is assessing the medical value of such a framework and its related tools by using it in a practical application.

  1. Towards organizing health knowledge on community-based health services.

    PubMed

    Akbari, Mohammad; Hu, Xia; Nie, Liqiang; Chua, Tat-Seng

    2016-12-01

    Online community-based health services accumulate a huge amount of unstructured health question answering (QA) records at a continuously increasing pace. The ability to organize these health QA records has been found to be effective for data access. The existing approaches for organizing information are often not applicable to the health domain due to its nature, characterized by complex relations among entities, a large vocabulary gap, and heterogeneity of users. To tackle these challenges, we propose a top-down organization scheme, which can automatically assign the unstructured health-related records into a hierarchy with prior domain knowledge. Besides automatic hierarchy prototype generation, it also enables each data instance to be associated with multiple leaf nodes and profiles each node with terminologies. Based on this scheme, we design a hierarchy-based health information retrieval system. Experiments on a real-world dataset demonstrate the effectiveness of our scheme in organizing health QA into a topic hierarchy and retrieving health QA records from the topic hierarchy.

  2. Integration of object-oriented knowledge representation with the CLIPS rule based system

    NASA Technical Reports Server (NTRS)

    Logie, David S.; Kamil, Hasan

    1990-01-01

    The paper describes a portion of the work aimed at developing an integrated, knowledge-based environment for the development of engineering-oriented applications. An Object Representation Language (ORL) was implemented in C++ which is used to build and modify an object-oriented knowledge base. The ORL was designed in such a way as to be easily integrated with other representation schemes that could effectively reason with the object base. Specifically, the integration of the ORL with the rule-based system C Language Integrated Production System (CLIPS), developed at the NASA Johnson Space Center, will be discussed. The object-oriented knowledge representation provides a natural means of representing problem data as a collection of related objects. Objects are comprised of descriptive properties and interrelationships. The object-oriented model promotes efficient handling of the problem data by allowing knowledge to be encapsulated in objects. Data is inherited through an object network via the relationship links. Together, the two schemes complement each other in that the object-oriented approach efficiently handles problem data while the rule-based knowledge is used to simulate the reasoning process. Alone, the object-based knowledge is little more than an object-oriented data storage scheme; however, the CLIPS inference engine adds the mechanism to directly and automatically reason with that knowledge. In this hybrid scheme, the expert system dynamically queries for data and can modify the object base with complete access to all the functionality of the ORL from rules.
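
    The division of labor sketched here (objects hold and inherit the data, rules do the reasoning) can be caricatured in a few lines of Python. This is an illustrative stand-in, not the ORL or CLIPS code; the classes, properties, and the single rule are invented.

    ```python
    # Object base with inheritance through a relationship link, plus a naive
    # forward-chaining loop standing in for the rule engine.
    class Obj:
        def __init__(self, name, parent=None, **props):
            self.name, self.parent, self.props = name, parent, props

        def get(self, key):
            # Property lookup inherits through the parent link.
            if key in self.props:
                return self.props[key]
            return self.parent.get(key) if self.parent else None

    generic_beam = Obj("generic_beam", material="steel", safety_factor=1.5)
    beam_1 = Obj("beam_1", parent=generic_beam, load=1200.0, capacity=1500.0)

    def overload_rule(o):
        # IF load * inherited safety factor exceeds capacity THEN flag the object.
        if o.get("load") and o.get("capacity") and not o.get("flagged"):
            if o.get("load") * o.get("safety_factor") > o.get("capacity"):
                o.props["flagged"] = True
                print(f"{o.name}: overloaded")
                return True
        return False

    objects, changed = [generic_beam, beam_1], True
    while changed:  # fire rules until no rule changes the object base
        changed = any([overload_rule(o) for o in objects])
    ```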

  3. Building a case-based diet recommendation system without a knowledge engineer.

    PubMed

    Khan, Abdus Salam; Hoffmann, Achim

    2003-02-01

    We present a new approach to the effective development of menu construction systems that can automatically construct a menu strongly tailored to the individual requirements and food preferences of a client. In hospitals and other health care institutions, dietitians develop diets for clients who need to change their eating habits. Many clients have special needs with regard to their medical conditions, cultural backgrounds, or special levels of nutrient requirements for better recovery from diseases or surgery, etc. Existing computer support for this task is insufficient: many diets are not specifically tailored to the client's needs or require substantial time of a dietitian to be manually developed. Our approach is based on case-based reasoning, an artificial intelligence technique that finds increasing entry into industrial practice. Our approach goes beyond the traditional case-based reasoning (CBR) approach by allowing an incremental improvement of the system's competency during routine use of the system. The improvement of the system takes place through direct expert user-system interaction while the expert is accomplishing the task of constructing a diet for a given client. Whenever the system performs unsatisfactorily, the expert will need to modify the system-produced diet 'manually', i.e. by entering the desired modifications into the system. Our implemented system, menu construction using an incremental knowledge acquisition system (MIKAS), asks the expert for simple explanations for each of the manual actions he/she takes and incorporates the explanations automatically into its knowledge base (KB) so that the system will perform these manually conducted actions automatically on the next occasion. We present MIKAS and discuss the results of our case study. While it is still a prototype, the senior clinical dietitian involved in our evaluation studies judges the approach to have considerable potential to improve the daily routine of hospital dietitians as well as the average quality of the dietary advice given to patients within the limited time available for dietary consultations. Our approach opens up a new avenue towards building highly specialised CBR systems in a more cost-effective way. Hence, our approach promises to allow a significantly more widespread development and practical deployment of CBR systems in a large variety of application domains, including many medical applications.
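
    The incremental acquisition loop described above, where each manual correction and its explanation become a new rule, can be sketched briefly. This is a loose illustration of the idea rather than MIKAS itself; the default menu, condition, and action are invented.

    ```python
    # Each expert correction is stored as (condition, action); future menus for
    # matching clients get the action applied automatically.
    rules = []

    def construct_menu(client):
        menu = ["grilled chicken", "rice", "milk"]  # naive default plan
        for condition, action in rules:
            if condition(client):
                action(menu)
        return menu

    def expert_correction(condition, action):
        """The expert's explanation of a manual fix joins the knowledge base."""
        rules.append((condition, action))

    client = {"lactose_intolerant": True}
    expert_correction(condition=lambda c: c.get("lactose_intolerant"),
                      action=lambda m: m.remove("milk"))
    print(construct_menu(client))  # ['grilled chicken', 'rice']
    ```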

  4. Tracking "Large" or "Smal": Boundaries and their Consequences for Veterinary Students within the Tracking System

    NASA Astrophysics Data System (ADS)

    Vermilya, Jenny R.

    In this dissertation, I use 42 in-depth qualitative interviews with veterinary medical students to explore the experience of being in an educational program that tracks students based on the species of non-human animals that they wish to treat. Specifically, I examine how tracking produces multiple boundaries for veterinary students. The boundaries between different animal species produce consequences for the treatment of those animals; this has been well documented. Using a symbolic interactionist perspective, my research extends the body of knowledge on species boundaries by revealing other consequences of this boundary work. For example, I analyze the symbolic boundaries involved in the gendering of animals, practitioners, and professions. I also examine how boundaries influence the collective identity of students entering an occupation segmented into various specialties. The collective identity of veterinarian is one characterized by care, thus students have to construct different definitions of care to access and maintain the collective identity. The tracking system additionally produces consequences for the knowledge created and reproduced in different areas of animal medicine, creating a system of power and inequality based on whose knowledge is privileged, how, and why. Finally, socially constructed boundaries generated from tracking inevitably lead to cases that do not fit. In particular, horses serve as a "border species" for veterinary students who struggle to place them into the tracking system. I argue that border species, like other metaphorical borders, have the potential to challenge discourses and lead to social change.

  5. Public Health Platforms: An Emerging Informatics Approach to Health Professional Learning and Development

    PubMed Central

    Gray, Kathleen

    2016-01-01

    Health informatics has a major role to play in optimising the management and use of data, information and knowledge in health systems. As health systems undergo digital transformation, it is important to consider informatics approaches not only to curriculum content but also to the design of learning environments and learning activities for health professional learning and development. An example of such an informatics approach is the use of large-scale, integrated public health platforms on the Internet as part of health professional learning and development. This article describes selected examples of such platforms, with a focus on how they may influence the direction of health professional learning and development. Significance for public health: The landscape of healthcare systems, public health systems, health research systems and professional education systems is fragmented, with many gaps and silos. More sophistication in the management of health data, information, and knowledge, based on public health informatics expertise, is needed to tackle key issues of prevention, promotion and policy-making. Platform technologies represent an emerging large-scale, highly integrated informatics approach to public health, combining the technologies of the Internet, the web, the cloud, social technologies, remote sensing and/or mobile apps into an online infrastructure that can allow more synergies in work within and across these systems. Health professional curricula need updating so that the health workforce has a deep and critical understanding of the way that platform technologies are becoming the foundation of the health sector. PMID:27190977

  6. Ultra-Structure database design methodology for managing systems biology data and analyses

    PubMed Central

    Maier, Christopher W; Long, Jeffrey G; Hemminger, Bradley M; Giddings, Morgan C

    2009-01-01

    Background Modern, high-throughput biological experiments generate copious, heterogeneous, interconnected data sets. Research is dynamic, with frequently changing protocols, techniques, instruments, and file formats. Because of these factors, systems designed to manage and integrate modern biological data sets often end up as large, unwieldy databases that become difficult to maintain or evolve. The novel rule-based approach of the Ultra-Structure design methodology presents a potential solution to this problem. By representing both data and processes as formal rules within a database, an Ultra-Structure system constitutes a flexible framework that enables users to explicitly store domain knowledge in both a machine- and human-readable form. End users themselves can change the system's capabilities without programmer intervention, simply by altering database contents; no computer code or schemas need be modified. This provides flexibility in adapting to change, and allows integration of disparate, heterogeneous data sets within a small core set of database tables, facilitating joint analysis and visualization without becoming unwieldy. Here, we examine the application of Ultra-Structure to our ongoing research program for the integration of large proteomic and genomic data sets (proteogenomic mapping). Results We transitioned our proteogenomic mapping information system from a traditional entity-relationship design to one based on Ultra-Structure. Our system integrates tandem mass spectrum data, genomic annotation sets, and spectrum/peptide mappings, all within a small, general framework implemented within a standard relational database system. General software procedures driven by user-modifiable rules can perform tasks such as logical deduction and location-based computations. The system is not tied specifically to proteogenomic research, but is rather designed to accommodate virtually any kind of biological research. Conclusion We find Ultra-Structure offers substantial benefits for biological information systems, the largest being the integration of diverse information sources into a common framework. This facilitates systems biology research by integrating data from disparate high-throughput techniques. It also enables us to readily incorporate new data types, sources, and domain knowledge with no change to the database structure or associated computer code. Ultra-Structure may be a significant step towards solving the hard problem of data management and integration in the systems biology era. PMID:19691849
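
    The central claim, that behaviour lives in rule rows rather than in code, can be shown with a toy example. The table layouts below are invented for illustration and use an in-memory SQLite database; the actual Ultra-Structure "ruleforms" are considerably richer.

    ```python
    # Data and "process" both live in tables; a small generic engine interprets
    # the rule rows, so behaviour changes by editing rows, not code.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE peptides(seq TEXT, mass REAL);
    CREATE TABLE rules(attribute TEXT, op TEXT, value REAL, label TEXT);
    INSERT INTO peptides VALUES ('MKT', 378.5), ('AAAA', 302.3);
    INSERT INTO rules VALUES ('mass', '>',  350.0, 'heavy'),
                             ('mass', '<=', 350.0, 'light');
    """)

    OPS = {'>': lambda a, b: a > b, '<=': lambda a, b: a <= b}

    for seq, mass in db.execute("SELECT seq, mass FROM peptides"):
        for attr, op, val, label in db.execute("SELECT * FROM rules"):
            if attr == 'mass' and OPS[op](mass, val):
                print(seq, '->', label)
    ```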

  7. VIP: A knowledge-based design aid for the engineering of space systems

    NASA Technical Reports Server (NTRS)

    Lewis, Steven M.; Bellman, Kirstie L.

    1990-01-01

    The Vehicles Implementation Project (VIP), a knowledge-based design aid for the engineering of space systems is described. VIP combines qualitative knowledge in the form of rules, quantitative knowledge in the form of equations, and other mathematical modeling tools. The system allows users rapidly to develop and experiment with models of spacecraft system designs. As information becomes available to the system, appropriate equations are solved symbolically and the results are displayed. Users may browse through the system, observing dependencies and the effects of altering specific parameters. The system can also suggest approaches to the derivation of specific parameter values. In addition to providing a tool for the development of specific designs, VIP aims at increasing the user's understanding of the design process. Users may rapidly examine the sensitivity of a given parameter to others in the system and perform tradeoffs or optimizations of specific parameters. A second major goal of VIP is to integrate the existing corporate knowledge base of models and rules into a central, symbolic form.
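
    The pattern of solving equations symbolically as information becomes available is easy to sketch with sympy. The single-equation rocket model and the numbers below are illustrative assumptions, not VIP's actual knowledge base.

    ```python
    # When all but one symbol of an equation is known, derive the remaining value.
    import sympy as sp

    dv, isp, g0, m0, mf = sp.symbols('dv isp g0 m0 mf', positive=True)
    equations = [sp.Eq(dv, isp * g0 * sp.log(m0 / mf))]  # toy spacecraft model
    known = {isp: 450.0, g0: 9.81, m0: 2.0e5, mf: 5.0e4}

    for eq in equations:
        eq_num = eq.subs(known)
        unknowns = eq_num.free_symbols
        if len(unknowns) == 1:           # enough information has arrived
            (u,) = unknowns
            value = sp.solve(eq_num, u)[0]
            print(u, "=", float(value))  # dv is roughly 6120 m/s here
    ```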

  8. Methods and systems for detecting abnormal digital traffic

    DOEpatents

    Goranson, Craig A [Kennewick, WA; Burnette, John R [Kennewick, WA

    2011-03-22

    Aspects of the present invention encompass methods and systems for detecting abnormal digital traffic by assigning characterizations of network behaviors according to knowledge nodes and calculating a confidence value based on the characterizations from at least one knowledge node and on weighting factors associated with the knowledge nodes. The knowledge nodes include a characterization model based on prior network information. At least one of the knowledge nodes should not be based on fixed thresholds or signatures. The confidence value includes a quantification of the degree of confidence that the network behaviors constitute abnormal network traffic.
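
    A minimal sketch of the aggregation this record describes: each knowledge node scores a behaviour from its own model, and a weighted combination yields the confidence that traffic is abnormal. The nodes, normalizations, and weights here are hypothetical.

    ```python
    # Two knowledge nodes characterize a flow; neither is a fixed signature.
    def packet_rate_node(flow):
        return min(flow["pkts_per_s"] / 1000.0, 1.0)   # scale from prior traffic

    def fanout_node(flow):
        return min(flow["dst_count"] / 50.0, 1.0)      # address fan-out behaviour

    NODES = [(packet_rate_node, 0.6), (fanout_node, 0.4)]  # weighting factors

    def abnormal_confidence(flow):
        return sum(weight * node(flow) for node, weight in NODES)

    print(abnormal_confidence({"pkts_per_s": 800, "dst_count": 40}))  # 0.8
    ```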

  9. Structured product labeling improves detection of drug-intolerance issues.

    PubMed

    Schadow, Gunther

    2009-01-01

    This study sought to assess the value of the Health Level 7/U.S. Food and Drug Administration Structured Product Labeling (SPL) drug knowledge representation standard and its associated terminology sources for drug-intolerance (allergy) decision support in computerized provider order entry (CPOE) systems. The Regenstrief Institute CPOE drug-intolerance issue detection system and its knowledge base was compared with a method based on existing SPL label content enriched with knowledge sources used with SPL (NDF-RT/MeSH). Both methods were applied to a large set of drug-intolerance (allergy) records, drug orders, and medication dispensing records covering >50,000 patients over 30 years. The number of drug-intolerance issues detected by both methods was counted, as well as the number of patients with issues, number of distinct drugs, and number of distinct intolerances. The difference between drug-intolerance issues detected or missed by either method was qualitatively analyzed. Although <70% of terms were mapped to SPL, the new approach detected four times as many drug-intolerance issues on twice as many patients. The SPL-based approach is more sensitive and suggests that mapping local dictionaries to SPL, and enhancing the depth and breadth of coverage of SPL content are worth accelerating. The study also highlights specificity problems known to trouble drug-intolerance decision support and suggests how terminology and methods of recording drug intolerances could be improved.

  10. Structured Product Labeling Improves Detection of Drug-intolerance Issues

    PubMed Central

    Schadow, Gunther

    2009-01-01

    Objectives This study sought to assess the value of the Health Level 7/U.S. Food and Drug Administration Structured Product Labeling (SPL) drug knowledge representation standard and its associated terminology sources for drug-intolerance (allergy) decision support in computerized provider order entry (CPOE) systems. Design The Regenstrief Institute CPOE drug-intolerance issue detection system and its knowledge base was compared with a method based on existing SPL label content enriched with knowledge sources used with SPL (NDF-RT/MeSH). Both methods were applied to a large set of drug-intolerance (allergy) records, drug orders, and medication dispensing records covering >50,000 patients over 30 years. Measurements The number of drug-intolerance issues detected by both methods was counted, as well as the number of patients with issues, number of distinct drugs, and number of distinct intolerances. The difference between drug-intolerance issues detected or missed by either method was qualitatively analyzed. Results Although <70% of terms were mapped to SPL, the new approach detected four times as many drug-intolerance issues on twice as many patients. Conclusion The SPL-based approach is more sensitive and suggests that mapping local dictionaries to SPL, and enhancing the depth and breadth of coverage of SPL content are worth accelerating. The study also highlights specificity problems known to trouble drug-intolerance decision support and suggests how terminology and methods of recording drug intolerances could be improved. PMID:18952933

  11. The impact of innovation intermediary on knowledge transfer

    NASA Astrophysics Data System (ADS)

    Lin, Min; Wei, Jun

    2018-07-01

    Many firms have opened up their innovation process and actively transfer knowledge with external partners in the market of technology. To reduce some of the market inefficiencies, more and more firms collaborate with innovation intermediaries. In light of the increasing importance of intermediaries in the context of open innovation, in this paper we systematically investigate the effect of an innovation intermediary on knowledge transfer and the innovation process in networked systems. We find that the existence of an innovation intermediary is conducive to knowledge diffusion and facilitates knowledge growth at the system level. Interestingly, the scale of the innovation intermediary has little effect on the growth of knowledge. We further investigate the selection of intermediary members by comparing four selection strategies: random selection, initial knowledge level based selection, absorptive capability based selection, and innovative ability based selection. It is found that the selection strategy based on innovative ability outperforms all the other strategies in promoting system knowledge growth. Our study provides a theoretical understanding of the impact of innovation intermediaries on knowledge transfer and sheds light on the design and selection of innovation intermediaries in open innovation.

  12. BioTCM-SE: a semantic search engine for the information retrieval of modern biology and traditional Chinese medicine.

    PubMed

    Chen, Xi; Chen, Huajun; Bi, Xuan; Gu, Peiqin; Chen, Jiaoyan; Wu, Zhaohui

    2014-01-01

    Understanding the functional mechanisms of the complex biological system as a whole is drawing more and more attention in global health care management. Traditional Chinese Medicine (TCM), essentially different from Western Medicine (WM), is gaining increasing attention due to its emphasis on individual wellness and natural herbal medicine, which satisfies the goal of integrative medicine. However, with the explosive growth of biomedical data on the Web, biomedical researchers are now confronted with the problem of large-scale data analysis and data query. Besides that, biomedical data also has a wide coverage which usually comes from multiple heterogeneous data sources and has different taxonomies, making it hard to integrate and query the big biomedical data. Embedded with domain knowledge from different disciplines all regarding human biological systems, the heterogeneous data repositories are implicitly connected by human expert knowledge. Traditional search engines cannot provide accurate and comprehensive search results for the semantically associated knowledge since they only support keywords-based searches. In this paper, we present BioTCM-SE, a semantic search engine for the information retrieval of modern biology and TCM, which provides biologists with a comprehensive and accurate associated knowledge query platform to greatly facilitate the implicit knowledge discovery between WM and TCM.

  13. BioTCM-SE: A Semantic Search Engine for the Information Retrieval of Modern Biology and Traditional Chinese Medicine

    PubMed Central

    Chen, Xi; Chen, Huajun; Bi, Xuan; Gu, Peiqin; Chen, Jiaoyan; Wu, Zhaohui

    2014-01-01

    Understanding the functional mechanisms of the complex biological system as a whole is drawing more and more attention in global health care management. Traditional Chinese Medicine (TCM), essentially different from Western Medicine (WM), is gaining increasing attention due to its emphasis on individual wellness and natural herbal medicine, which satisfies the goal of integrative medicine. However, with the explosive growth of biomedical data on the Web, biomedical researchers are now confronted with the problem of large-scale data analysis and data query. Besides that, biomedical data also has a wide coverage which usually comes from multiple heterogeneous data sources and has different taxonomies, making it hard to integrate and query the big biomedical data. Embedded with domain knowledge from different disciplines all regarding human biological systems, the heterogeneous data repositories are implicitly connected by human expert knowledge. Traditional search engines cannot provide accurate and comprehensive search results for the semantically associated knowledge since they only support keywords-based searches. In this paper, we present BioTCM-SE, a semantic search engine for the information retrieval of modern biology and TCM, which provides biologists with a comprehensive and accurate associated knowledge query platform to greatly facilitate the implicit knowledge discovery between WM and TCM. PMID:24772189

  14. Design of Composite Structures Using Knowledge-Based and Case Based Reasoning

    NASA Technical Reports Server (NTRS)

    Lambright, Jonathan Paul

    1996-01-01

    A method of using knowledge-based and case-based reasoning to assist designers during conceptual design tasks of composite structures was proposed. The cooperative use of heuristics, procedural knowledge, and previous similar design cases suggests a potential reduction in design cycle time and ultimately product lead time. The hypothesis of this work is that the design process of composite structures can be improved by using Case-Based Reasoning (CBR) and Knowledge-Based (KB) reasoning in the early design stages. The technique of using knowledge-based and case-based reasoning facilitates the gathering of disparate information into one location that is easily and readily available. The method suggests that the inclusion of downstream life-cycle issues in the conceptual design phase reduces the potential for defective and sub-optimal composite structures. Three industry experts were interviewed extensively. The experts provided design rules, previous design cases, and test problems. A knowledge-based reasoning system was developed using the CLIPS (C Language Integrated Production System) environment, and a case-based reasoning system was developed using the Design Memory Utility For Sharing Experiences (MUSE) environment. A Design Characteristic State (DCS) was used to document the design specifications, constraints, and problem areas using attribute-value pair relationships. The DCS provided consistent design information between the knowledge base and case base. Results indicated that the use of knowledge-based and case-based reasoning provided a robust design environment for composite structures. The knowledge base provided design guidance from well-defined rules and procedural knowledge. The case base provided suggestions on design and manufacturing techniques based on previous similar designs, as well as warnings of potential problems and pitfalls. The case base complemented the knowledge base and extended the problem-solving capability beyond the existence of limited well-defined rules. The findings indicated that the technique is most effective when used as a design aid and not as a tool to totally automate the composites design process. Other areas of application and implications for future research are discussed.
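
    Retrieval over the attribute-value pairs a DCS records can be illustrated as a weighted nearest-case lookup. The attributes, weights, and the two stored cases below are hypothetical placeholders, not data from the study.

    ```python
    # Weighted similarity over attribute-value pairs; the best case supplies
    # advice and warnings from a previous similar design.
    CASES = [
        {"layup": "quasi-isotropic", "cure": "autoclave", "thickness_mm": 4.0,
         "advice": "watch for spring-back on tight radii"},
        {"layup": "unidirectional", "cure": "oven", "thickness_mm": 2.0,
         "advice": "add edge plies to limit delamination"},
    ]
    WEIGHTS = {"layup": 0.5, "cure": 0.3, "thickness_mm": 0.2}

    def similarity(query, case):
        score = 0.0
        for attr, w in WEIGHTS.items():
            if attr == "thickness_mm":  # numeric attribute: graded match
                score += w * max(0.0, 1.0 - abs(query[attr] - case[attr]) / 5.0)
            else:                       # symbolic attribute: exact match
                score += w * (query[attr] == case[attr])
        return score

    query = {"layup": "quasi-isotropic", "cure": "oven", "thickness_mm": 3.5}
    best = max(CASES, key=lambda case: similarity(query, case))
    print(best["advice"])
    ```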

  15. Satellite Remote Sensing and the Hydroclimate: Two Specific Examples of Improved Knowledge and Applications

    NASA Astrophysics Data System (ADS)

    Shepherd, M.; Santanello, J. A., Jr.

    2017-12-01

    When Explorer 1 launched nearly 60 years ago, it helped usher in a golden age of scientific understanding of arguably the most important planet in our solar system. From its inception, NASA and its partners were charged with leveraging the vantage point of space to advance knowledge outside and within Earth's atmosphere. Earth is a particularly complex natural system that is increasingly modified by human activities. The hydrological or water cycle is a critical circuit in the Earth system. Its complexity requires novel observations and simulation capability to fully understand it and predict changes. This talk will introduce some of the unique satellite-based observations used for hydroclimate studies. Two specific examples will be presented. The first example explores a relatively new thread of research examining the impact of soil moisture on landfalling and other types of tropical systems. Recent literature suggests that tropical cyclones or large rain-producing systems like the one that caused catastrophic flooding in Louisiana (2016) derive moisture from a "brown ocean" of wet soils or wetlands. The second example summarizes a decade of research on how urbanization has altered the precipitation and land surface hydrology components of the water cycle. In both cases, a multitude of satellite- or model-based datasets will be summarized (e.g., TRMM, GPM, SMAP, NLDAS).

  16. A Logical Framework for Service Migration Based Survivability

    DTIC Science & Technology

    2016-06-24

    platforms; Service Migration Strategy; Fuzzy Inference System; Knowledge Base. Fuzzy rules representing domain expert knowledge about implications of...service migration strategy. Our approach uses expert knowledge as linguistic reasoning rules and takes service program damage assessment, service...program complexity, and available network capability as input. The fuzzy inference system includes four components as shown in Figure 5: (1) a knowledge
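
    Although the snippet is truncated, it outlines a Mamdani-style fuzzy inference step: crisp inputs are fuzzified against linguistic terms, rules fire to a degree, and the aggregated output set is defuzzified. A tiny sketch under invented membership functions and rules (not the paper's actual Figure 5 components):

    ```python
    import numpy as np

    u = np.linspace(0.0, 1.0, 101)            # universe of discourse
    low  = np.clip(1.0 - 2.0 * u, 0.0, 1.0)   # membership of "low"
    high = np.clip(2.0 * u - 1.0, 0.0, 1.0)   # membership of "high"

    def urgency(damage, capability):
        # Rule 1: IF damage is high AND capability is high THEN urgency is high
        w1 = min(np.interp(damage, u, high), np.interp(capability, u, high))
        # Rule 2: IF damage is low THEN urgency is low
        w2 = np.interp(damage, u, low)
        aggregated = np.maximum(np.minimum(w1, high), np.minimum(w2, low))
        return float((aggregated * u).sum() / aggregated.sum())  # centroid

    print(round(urgency(0.9, 0.8), 2))        # high migration urgency
    ```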

  17. Construction of Expert Knowledge Monitoring and Assessment System Based on Integral Method of Knowledge Evaluation

    ERIC Educational Resources Information Center

    Golovachyova, Viktoriya N.; Menlibekova, Gulbakhyt Zh.; Abayeva, Nella F.; Ten, Tatyana L.; Kogaya, Galina D.

    2016-01-01

    Using computer-based monitoring systems that rely on tests could be the most effective way of knowledge evaluation. The problem of objective knowledge assessment by means of testing takes on a new dimension in the context of new paradigms in education. The analysis of the existing test methods enabled us to conclude that tests with selected…

  18. Transportable educational programs for scientific and technical professionals: More effective utilization of automated scientific and technical data base systems

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D.

    1987-01-01

    This grant final report executive summary documents a major, long-term program addressing innovative educational issues associated with the development, administration, evaluation, and widespread distribution of transportable educational programs for scientists and engineers to increase their knowledge of, and facilitate their utilization of, automated scientific and technical information storage and retrieval systems. This educational program is of very broad scope, being targeted at Colleges of Engineering and Colleges of Physical Sciences at a large number of colleges and universities throughout the United States. The educational program is designed to incorporate extensive hands-on, interactive usage of the NASA RECON system and is supported by a number of microcomputer-based software systems to facilitate the delivery and usage of the educational course materials developed as part of the program.

  19. Artificial intelligence applied to process signal analysis

    NASA Technical Reports Server (NTRS)

    Corsberg, Dan

    1988-01-01

    Many space station processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect of the human/machine interface is the analysis and display of process information. Human operators can be overwhelmed by large clusters of alarms that inhibit their ability to diagnose and respond to a disturbance. By applying artificial intelligence techniques and a knowledge-based approach to this problem, the power of the computer can be used to filter and analyze plant sensor data. This will provide operators with a better description of the process state. Once a process state is recognized, automatic action could be initiated and proper system response monitored.
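
    One common knowledge-based filtering tactic is to suppress consequence alarms whose known cause is already annunciated, so operators see the root disturbance first. A minimal sketch, with a hypothetical cause-consequence table:

    ```python
    # Plant knowledge: alarms that are expected consequences of a causal alarm.
    CONSEQUENCES = {"pump_trip": {"low_flow", "high_loop_temp"}}

    def filter_alarms(active):
        suppressed = set()
        for cause, effects in CONSEQUENCES.items():
            if cause in active:
                suppressed |= effects & set(active)
        return [a for a in active if a not in suppressed]

    print(filter_alarms(["pump_trip", "low_flow", "high_loop_temp"]))
    # ['pump_trip'] -- the root cause is shown, expected consequences filtered
    ```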

  20. Towards Methodologies for Building Knowledge-Based Instructional Systems.

    ERIC Educational Resources Information Center

    Duchastel, Philippe

    1992-01-01

    Examines the processes involved in building instructional systems that are based on artificial intelligence and hypermedia technologies. Traditional instructional systems design methodology is discussed; design issues including system architecture and learning strategies are addressed; and a new methodology for building knowledge-based…

  1. Expert and Knowledge Based Systems.

    ERIC Educational Resources Information Center

    Demaid, Adrian; Edwards, Lyndon

    1987-01-01

    Discusses the nature and current state of knowledge-based systems and expert systems. Describes an expert system from the viewpoints of a computer programmer and an applications expert. Addresses concerns related to materials selection and forecasts future developments in the teaching of materials engineering. (ML)

  2. Knowledge-Based Information Retrieval.

    ERIC Educational Resources Information Center

    Ford, Nigel

    1991-01-01

    Discussion of information retrieval focuses on theoretical and empirical advances in knowledge-based information retrieval. Topics discussed include the use of natural language for queries; the use of expert systems; intelligent tutoring systems; user modeling; the need for evaluation of system effectiveness; and examples of systems, including…

  3. Studying Professional Knowledge Use in Practice Using Multimedia Scenarios Delivered Online

    ERIC Educational Resources Information Center

    Herbst, Patricio; Chazan, Daniel

    2015-01-01

    We describe how multimedia scenarios delivered online can be used in instruments for the study of professional knowledge. Based on our work in the study of the knowledge and rationality involved in mathematics teaching, we describe how the study of professional knowledge writ large can benefit from the capacity to represent know-how using…

  4. Genomics, "Discovery Science," Systems Biology, and Causal Explanation: What Really Works?

    PubMed

    Davidson, Eric H

    2015-01-01

    Diverse and non-coherent sets of epistemological principles currently inform research in the general area of functional genomics. Here, from the personal point of view of a scientist with over half a century of immersion in hypothesis-driven scientific discovery, I compare and deconstruct the ideological bases of prominent recent alternatives, such as "discovery science," some productions of the ENCODE project, and aspects of large data set systems biology. The outputs of these types of scientific enterprise qualitatively reflect their radical definitions of scientific knowledge and of its logical requirements. Their properties emerge in high relief when contrasted (as an example) to a recent, system-wide, predictive analysis of a developmental regulatory apparatus that was instead based directly on hypothesis-driven experimental tests of mechanism.

  5. Knowledge based word-concept model estimation and refinement for biomedical text mining.

    PubMed

    Jimeno Yepes, Antonio; Berlanga, Rafael

    2015-02-01

    Text mining of scientific literature has been essential for setting up large public biomedical databases, which are widely used by the research community. In the biomedical domain, the existence of a large number of terminological resources and knowledge bases (KB) has enabled a myriad of machine learning methods for different text mining related tasks. Unfortunately, KBs have been devised not for text mining tasks but for human interpretation; thus, the performance of KB-based methods is usually lower than that of supervised machine learning methods. The disadvantage of supervised methods, though, is that they require labeled training data and are therefore not useful for large-scale biomedical text mining systems. KB-based methods do not have this limitation. In this paper, we describe a novel method to generate word-concept probabilities from a KB, which can serve as a basis for several text mining tasks. This method not only takes into account the underlying patterns within the descriptions contained in the KB but also those in texts available from large unlabeled corpora such as MEDLINE. The parameters of the model have been estimated without training data. Patterns from MEDLINE have been built using MetaMap for entity recognition and related using co-occurrences. The word-concept probabilities were evaluated on the task of word sense disambiguation (WSD). The results showed that our method obtained a higher degree of accuracy than other state-of-the-art approaches when evaluated on the MSH WSD data set. We also evaluated our method on the task of document ranking using MEDLINE citations. These results also showed an increase in performance over existing baseline retrieval approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
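
    A toy version of the idea, estimating smoothed word-concept probabilities from KB descriptions and scoring an ambiguous mention's context, is sketched below. The concept identifiers and descriptions are illustrative, and the full method additionally folds in MEDLINE co-occurrence patterns.

    ```python
    from collections import Counter
    import math

    # Two candidate concepts for the ambiguous word "cold" (IDs are invented).
    KB = {
        "C-infection":   "common cold acute viral infection nose throat",
        "C-temperature": "cold temperature low thermal sensation exposure",
    }
    profiles = {c: Counter(text.split()) for c, text in KB.items()}
    vocab = {w for p in profiles.values() for w in p}

    def p_word_given_concept(word, concept, alpha=0.1):
        counts = profiles[concept]  # additive smoothing over the KB vocabulary
        return (counts[word] + alpha) / (sum(counts.values()) + alpha * len(vocab))

    def disambiguate(context_words):
        return max(profiles, key=lambda c: sum(
            math.log(p_word_given_concept(w, c)) for w in context_words))

    print(disambiguate(["patient", "viral", "infection", "sore", "throat"]))
    # -> C-infection
    ```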

  6. Trust from the past: Bayesian Personalized Ranking based Link Prediction in Knowledge Graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Baichuan; Choudhury, Sutanay; Al-Hasan, Mohammad

    2016-02-01

    Estimating the confidence for a link is a critical task for Knowledge Graph construction. Link prediction, or predicting the likelihood of a link in a knowledge graph based on prior state, is a key research direction within this area. We propose a Latent Feature Embedding based link recommendation model for the prediction task and utilize a Bayesian Personalized Ranking based optimization technique for learning models for each predicate. Experimental results on large-scale knowledge bases such as YAGO2 show that our approach achieves substantially higher performance than several state-of-the-art approaches. Furthermore, we also study the performance of the link prediction algorithm in terms of topological properties of the Knowledge Graph and present a linear regression model to reason about its expected level of accuracy.
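
    The essentials, a latent-feature score per predicate trained with the BPR objective on observed-versus-corrupted triples, fit in a short sketch. The bilinear scoring function, dimensions, and toy triples below are assumptions for illustration, not the paper's exact model or the YAGO2 setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_ent, dim = 50, 8
    E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
    R = rng.normal(scale=0.1, size=(dim, dim))     # one predicate's matrix

    def score(h, t):
        return E[h] @ R @ E[t]

    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    triples = [(0, 1), (2, 3), (4, 5)]             # observed (head, tail) pairs
    lr, reg = 0.05, 1e-4
    for epoch in range(200):
        for h, t in triples:
            t_neg = int(rng.integers(n_ent))       # corrupted tail
            # BPR: ascend the gradient of log sigma(score_pos - score_neg).
            g = 1.0 - sigmoid(score(h, t) - score(h, t_neg))
            R += lr * (g * (np.outer(E[h], E[t]) - np.outer(E[h], E[t_neg])) - reg * R)
            E[h] += lr * (g * (R @ E[t] - R @ E[t_neg]) - reg * E[h])
            E[t] += lr * (g * (E[h] @ R) - reg * E[t])
            E[t_neg] -= lr * (g * (E[h] @ R) + reg * E[t_neg])

    rand = np.mean([score(0, j) for j in range(n_ent) if j != 1])
    print(score(0, 1) > rand)  # observed link should outrank an average random tail
    ```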

  7. Investigating potential transferability of place-based research in land system science

    NASA Astrophysics Data System (ADS)

    Václavík, Tomáš; Langerwisch, Fanny; Cotter, Marc; Fick, Johanna; Häuser, Inga; Hotes, Stefan; Kamp, Johannes; Settele, Josef; Spangenberg, Joachim H.; Seppelt, Ralf

    2016-09-01

    Much of our knowledge about land use and ecosystem services in interrelated social-ecological systems is derived from place-based research. While local and regional case studies provide valuable insights, it is often unclear how relevant this research is beyond the study areas. Drawing generalized conclusions about practical solutions to land management from local observations and formulating hypotheses applicable to other places in the world requires that we identify patterns of land systems that are similar to those represented by the case study. Here, we utilize the previously developed concept of land system archetypes to investigate potential transferability of research from twelve regional projects implemented in a large joint research framework that focus on issues of sustainable land management across four continents. For each project, we characterize its project archetype, i.e. the unique land system based on a synthesis of more than 30 datasets of land-use intensity, environmental conditions and socioeconomic indicators. We estimate the transferability potential of project research by calculating the statistical similarity of locations across the world to the project archetype, assuming higher transferability potentials in locations with similar land system characteristics. Results show that areas with high transferability potentials are typically clustered around project sites but for some case studies can be found in regions that are geographically distant, especially when values of considered variables are close to the global mean or where the project archetype is driven by large-scale environmental or socioeconomic conditions. Using specific examples from the local case studies, we highlight the merit of our approach and discuss the differences between local realities and information captured in global datasets. The proposed method provides a blueprint for large research programs to assess potential transferability of place-based studies to other geographical areas and to indicate possible gaps in research efforts.

  8. Knowledge-Driven Event Extraction in Russian: Corpus-Based Linguistic Resources

    PubMed Central

    Solovyev, Valery; Ivanov, Vladimir

    2016-01-01

    Automatic event extraction from text is an important step in knowledge acquisition and knowledge base population. Manual work in the development of an extraction system is indispensable, whether in corpus annotation or in the creation of vocabularies and patterns for a knowledge-based system. Recent works have focused on adapting existing systems (for extraction from English texts) to new domains. Event extraction in other languages has not been studied, due to the lack of resources and algorithms necessary for natural language processing. In this paper we define a set of linguistic resources that are necessary for the development of a knowledge-based event extraction system in Russian: a vocabulary of subordination models, a vocabulary of event triggers, and a vocabulary of Frame Elements that are the basic building blocks for semantic patterns. We propose a set of methods for creating such vocabularies in Russian and other languages using the Google Books NGram Corpus. The methods are evaluated in the development of an event extraction system for Russian. PMID:26955386
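
    A minimal, hypothetical sketch of how such vocabularies drive extraction: a trigger lexicon annotated with the Frame Elements each trigger expects is matched against text, with the trigger's context window retained for argument filling. The English entries are invented stand-ins for the Russian resources described above.

    ```python
    # Toy knowledge-based event extraction: trigger vocabulary entries carry
    # the frame and expected Frame Elements; entries here are invented.
    triggers = {
        "appointed": {"frame": "Appointment", "elements": ["Person", "Position"]},
        "resigned":  {"frame": "Resignation", "elements": ["Person"]},
    }

    def extract_events(sentence):
        events = []
        tokens = sentence.lower().replace(".", "").split()
        for i, tok in enumerate(tokens):
            entry = triggers.get(tok)
            if entry:
                events.append({"frame": entry["frame"], "trigger": tok,
                               "expects": entry["elements"],
                               "context": tokens[max(0, i - 3):i + 4]})
        return events

    print(extract_events("The board appointed Ivanov as director."))
    ```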

  9. Pairwise Maximum Entropy Models for Studying Large Biological Systems: When They Can Work and When They Can't

    PubMed Central

    Roudi, Yasser; Nirenberg, Sheila; Latham, Peter E.

    2009-01-01

    One of the most critical problems we face in the study of biological systems is building accurate statistical descriptions of them. This problem has been particularly challenging because biological systems typically contain large numbers of interacting elements, which precludes the use of standard brute force approaches. Recently, though, several groups have reported that there may be an alternate strategy. The reports show that reliable statistical models can be built without knowledge of all the interactions in a system; instead, pairwise interactions can suffice. These findings, however, are based on the analysis of small subsystems. Here, we ask whether the observations will generalize to systems of realistic size, that is, whether pairwise models will provide reliable descriptions of true biological systems. Our results show that, in most cases, they will not. The reason is that there is a crossover in the predictive power of pairwise models: If the size of the subsystem is below the crossover point, then the results have no predictive power for large systems. If the size is above the crossover point, then the results may have predictive power. This work thus provides a general framework for determining the extent to which pairwise models can be used to predict the behavior of large biological systems. Applied to neural data, the size of most systems studied so far is below the crossover point. PMID:19424487
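
    For concreteness, here is a small sketch of fitting a pairwise maximum entropy (Ising-like) model to binary data by exact moment matching. Enumerating all 2^n states is feasible only for small subsystems, which is precisely the regime whose extrapolation the paper questions; the data, learning rate, and iteration count are toy values.

    ```python
    import numpy as np
    from itertools import product

    # Fit fields h and couplings J so the model's first and second moments
    # match the data; exact enumeration limits this to small n.
    rng = np.random.default_rng(2)
    n = 5
    data = (rng.random((2000, n)) < 0.3).astype(float)    # toy binary responses

    states = np.array(list(product([0, 1], repeat=n)), float)
    h, J = np.zeros(n), np.zeros((n, n))                  # fields and couplings
    m_data, c_data = data.mean(0), data.T @ data / len(data)

    for _ in range(500):
        logp = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(logp - logp.max()); p /= p.sum()       # model distribution
        m_model = p @ states                              # model means
        c_model = states.T @ (states * p[:, None])        # model correlations
        h += 0.1 * (m_data - m_model)                     # match first moments
        J += 0.1 * (c_data - c_model)                     # match second moments
        np.fill_diagonal(J, 0)
    ```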

  10. A data structure and algorithm for fault diagnosis

    NASA Technical Reports Server (NTRS)

    Bosworth, Edward L., Jr.

    1987-01-01

    Results of preliminary research on the design of a knowledge-based fault diagnosis system for use with on-orbit spacecraft such as the Hubble Space Telescope are presented. A candidate data structure and an associated search algorithm, from which the knowledge-based system can evolve, are discussed. This algorithmic approach will then be examined in view of its inability to diagnose certain common faults. From that critique, a design for the corresponding knowledge-based system will be given.

  11. iSMART: Ontology-based Semantic Query of CDA Documents

    PubMed Central

    Liu, Shengping; Ni, Yuan; Mei, Jing; Li, Hanyu; Xie, Guotong; Hu, Gang; Liu, Haifeng; Hou, Xueqiao; Pan, Yue

    2009-01-01

    The Health Level 7 Clinical Document Architecture (CDA) is widely accepted as the format for electronic clinical documents. Given the rich ontological references in CDA documents, ontology-based semantic queries can be used to retrieve them. In this paper, we present iSMART (interactive Semantic MedicAl Record reTrieval), a prototype system designed for ontology-based semantic query of CDA documents. The clinical information in CDA documents is extracted into RDF triples by a declarative XML-to-RDF transformer. An ontology reasoner is developed to infer additional information by incorporating background knowledge from the SNOMED CT ontology. An RDF query engine is then leveraged to enable the semantic queries. The system has been evaluated using real clinical documents collected from a large hospital in southern China. PMID:20351883
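
    A minimal sketch of the retrieval stage, assuming rdflib; the namespace, predicate names, and SNOMED CT identifiers are illustrative assumptions. Facts extracted from CDA documents sit in an RDF graph, one subsumption inference is materialized by hand as a stand-in for the reasoner, and a SPARQL query retrieves the matching documents.

    ```python
    from rdflib import Graph, Namespace, URIRef

    # Toy RDF graph of extracted clinical facts; names are invented examples.
    EX = Namespace("http://example.org/clinical#")
    g = Graph()
    doc = URIRef("http://example.org/doc/123")
    g.add((doc, EX.hasDiagnosis, EX.snomed_22298006))            # myocardial infarction
    g.add((EX.snomed_22298006, EX.subClassOf, EX.heart_disease))

    # Stand-in for the ontology reasoner: lift diagnoses to their superclasses.
    inferred = [(s, EX.hasDiagnosis, parent)
                for s, _, o in g.triples((None, EX.hasDiagnosis, None))
                for _, _, parent in g.triples((o, EX.subClassOf, None))]
    for triple in inferred:
        g.add(triple)

    q = "SELECT ?doc WHERE { ?doc ex:hasDiagnosis ex:heart_disease }"
    for row in g.query(q, initNs={"ex": EX}):
        print(row.doc)    # http://example.org/doc/123
    ```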

  12. A NASA/RAE cooperation in the development of a real-time knowledge-based autopilot

    NASA Technical Reports Server (NTRS)

    Daysh, Colin; Corbin, Malcolm; Butler, Geoff; Duke, Eugene L.; Belle, Steven D.; Brumbaugh, Randal W.

    1991-01-01

    As part of a US/UK cooperative aeronautical research program, a joint activity between the NASA Dryden Flight Research Facility and the Royal Aerospace Establishment on knowledge-based systems was established. This joint activity is concerned with tools and techniques for the implementation and validation of real-time knowledge-based systems. The proposed next stage of this research is described, in which some of the problems of implementing and validating a knowledge-based autopilot for a generic high-performance aircraft are investigated.

  13. A Survey of Knowledge Management Research & Development at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This chapter catalogs knowledge management research and development activities at NASA Ames Research Center as of April 2002. A general categorization scheme for knowledge management systems is first introduced. This categorization scheme divides knowledge management capabilities into five broad categories: knowledge capture, knowledge preservation, knowledge augmentation, knowledge dissemination, and knowledge infrastructure. Each of nearly 30 knowledge management systems developed at Ames is then classified according to this system. Finally, a capsule description of each system is presented along with information on deployment status, funding sources, contact information, and both published and internet-based references.

  14. Knowledge Acquisition and Management for the NASA Earth Exchange (NEX)

    NASA Astrophysics Data System (ADS)

    Votava, P.; Michaelis, A.; Nemani, R. R.

    2013-12-01

    NASA Earth Exchange (NEX) is a data, computing and knowledge collaboratory that houses NASA satellite, climate and ancillary data, where a focused community can come together to share modeling and analysis codes, scientific results, knowledge and expertise on a centralized platform with access to large supercomputing resources. As more and more projects are executed on NEX, we are increasingly focusing on capturing the knowledge of NEX users and providing mechanisms for sharing it with the community in order to facilitate reuse and accelerate research. There are many possible knowledge contributions to NEX: a wiki entry on the NEX portal contributed by a developer, information extracted from a publication in an automated way, or a workflow captured during code execution on the supercomputing platform. The goal of the NEX knowledge platform is to capture and organize this information and make it easily accessible to the NEX community and beyond. The knowledge acquisition process consists of three main facets: data and metadata, workflows and processes, and web-based information. Once the knowledge is acquired, it is processed in a number of ways, ranging from custom metadata parsers to entity extraction using natural language processing techniques. The processed information is linked with existing taxonomies and aligned with an internal ontology (which heavily reuses a number of external ontologies). This forms a knowledge graph that can then be used to improve users' search query results as well as provide additional analytics capabilities to the NEX system. Such a knowledge graph will be an important building block in creating a dynamic knowledge base for the NEX community where knowledge is both generated and easily shared.
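
    A toy sketch of the acquisition-to-graph step, with an invented controlled vocabulary and relation name: terms recognized in wiki entries or workflow descriptions become nodes linked to their sources, yielding a graph that search and analytics can exploit.

    ```python
    import networkx as nx

    # Invented vocabulary mapping terms to entity kinds; real NEX taxonomies
    # and ontologies would supply these.
    vocabulary = {"MODIS": "Instrument", "NDVI": "DataProduct", "Terra": "Platform"}
    kg = nx.DiGraph()

    def ingest(source_id, text):
        """Link every vocabulary term found in `text` to its source node."""
        kg.add_node(source_id, kind="Source")
        for term, kind in vocabulary.items():
            if term.lower() in text.lower():
                kg.add_node(term, kind=kind)
                kg.add_edge(source_id, term, relation="mentions")

    ingest("wiki:ndvi-howto", "Computing NDVI from MODIS surface reflectance")
    ingest("workflow:42", "Terra MODIS ingest followed by NDVI compositing")

    # A search query can now be expanded: which sources mention NDVI?
    print([src for src, _ in kg.in_edges("NDVI")])
    ```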

  15. Development of an automated electrical power subsystem testbed for large spacecraft

    NASA Technical Reports Server (NTRS)

    Hall, David K.; Lollar, Louis F.

    1990-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed two autonomous electrical power system breadboards. The first breadboard, the autonomously managed power system (AMPS), is a two power channel system featuring energy generation and storage and 24-kW of switchable loads, all under computer control. The second breadboard, the space station module/power management and distribution (SSM/PMAD) testbed, is a two-bus 120-Vdc model of the Space Station power subsystem featuring smart switchgear and multiple knowledge-based control systems. NASA/MSFC is combining these two breadboards to form a complete autonomous source-to-load power system called the large autonomous spacecraft electrical power system (LASEPS). LASEPS is a high-power, intelligent, physical electrical power system testbed which can be used to derive and test new power system control techniques, new power switching components, and new energy storage elements in a more accurate and realistic fashion. LASEPS has the potential to be interfaced with other spacecraft subsystem breadboards in order to simulate an entire space vehicle. The two individual systems, the combined systems (hardware and software), and the current and future uses of LASEPS are described.

  16. The Nexus of Knowledge and Behavior for School-Aged Children: Implementation of Health Education Programs and a Nutritional Symbol System

    ERIC Educational Resources Information Center

    Miller, Judith; Graham, Lorraine; Pennington, Jim

    2013-01-01

    Health-related knowledge has been assumed to inform lifestyle choices for school-aged students. A "health-promoting school" provides the conceptual framework for this intervention. A large boarding school developed, implemented and refined a Nutritional Symbol System for their dining hall. The effectiveness of this social marketing…

  17. Robopedia: Leveraging Sensorpedia for Web-Enabled Robot Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Resseguie, David R

    There is a growing interest in building Internet-scale sensor networks that integrate sensors from around the world into a single unified system. In contrast, robotics application development has primarily focused on building specialized systems. These specialized systems take scalability and reliability into consideration, but generally neglect exploring the key components required to build a large scale system. Integrating robotic applications with Internet-scale sensor networks will unify specialized robotics applications and provide answers to large scale implementation concerns. We focus on utilizing Internet-scale sensor network technology to construct a framework for unifying robotic systems. Our framework web-enables a surveillance robot's sensor observations and provides a web interface to the robot's actuators. This lets robots seamlessly integrate into web applications. In addition, the framework eliminates most prerequisite robotics knowledge, allowing for the creation of general web-based robotics applications. The framework also provides mechanisms to create applications that can interface with any robot. Frameworks such as this one are key to solving large scale mobile robotics implementation problems. We provide an overview of previous Internet-scale sensor networks, Sensorpedia (an ad-hoc Internet-scale sensor network), our framework for integrating robots with Sensorpedia, two applications which illustrate our framework's ability to support general web-based robotic control, and offer experimental results that illustrate our framework's scalability, feasibility, and resource requirements.

  18. Knowledge Base Editor (SharpKBE)

    NASA Technical Reports Server (NTRS)

    Tikidjian, Raffi; James, Mark; Mackey, Ryan

    2007-01-01

    The SharpKBE software provides a graphical user interface environment for domain experts to build and manage knowledge base systems. Knowledge bases can be exported/translated to various target languages automatically, including customizable target languages.

  19. A Text Knowledge Base from the AI Handbook.

    ERIC Educational Resources Information Center

    Simmons, Robert F.

    1987-01-01

    Describes a prototype natural language text knowledge system (TKS) that was used to organize 50 pages of a handbook on artificial intelligence as an inferential knowledge base with natural language query and command capabilities. Representation of text, database navigation, query systems, discourse structuring, and future research needs are…

  20. U-Science (Invited)

    NASA Astrophysics Data System (ADS)

    Borne, K. D.

    2009-12-01

    The emergence of e-Science over the past decade as a paradigm for Internet-based science was an inevitable evolution of science that built upon the web protocols and access patterns that were prevalent at that time, including Web Services, XML-based information exchange, machine-to-machine communication, service registries, the Grid, and distributed data. We now see a major shift in web behavior patterns to social networks, user-provided content (e.g., tags and annotations), ubiquitous devices, user-centric experiences, and user-led activities. The inevitable accrual of these social networking patterns and protocols by scientists and science projects leads to U-Science as a new paradigm for online scientific research (i.e., ubiquitous, user-led, untethered, You-centered science). U-Science applications include components from semantic e-science (ontologies, taxonomies, folksonomies, tagging, annotations, and classification systems), which is much more than Web 2.0-based science (Wikis, blogs, and online environments like Second Life). Among the best examples of U-Science are Citizen Science projects, including Galaxy Zoo, Stardust@Home, Project Budburst, Volksdata, CoCoRaHS (the Community Collaborative Rain, Hail and Snow network), and projects utilizing Volunteer Geographic Information (VGI). There are also scientist-led projects for scientists that engage a wider community in building knowledge through user-provided content. Among the semantic-based U-Science projects for scientists are those that specifically enable user-based annotation of scientific results in databases. These include the Heliophysics Knowledgebase, BioDAS, WikiProteins, The Entity Describer, and eventually AstroDAS. Such collaborative tagging of scientific data addresses several petascale data challenges for scientists: how to find the most relevant data, how to reuse those data, how to integrate data from multiple sources, how to mine and discover new knowledge in large databases, how to represent and encode the new knowledge, and how to curate the discovered knowledge. This talk will address the emergence of U-Science as a type of Semantic e-Science, and will explore challenges, implementations, and results. Semantic e-Science and U-Science applications and concepts will be discussed within the context of one particular implementation (AstroDAS: Astronomy Distributed Annotation System) and its applicability to petascale science projects such as the LSST (Large Synoptic Survey Telescope), coming online within the next few years.

  1. Gathering and Exploring Scientific Knowledge in Pharmacovigilance

    PubMed Central

    Lopes, Pedro; Nunes, Tiago; Campos, David; Furlong, Laura Ines; Bauer-Mehren, Anna; Sanz, Ferran; Carrascosa, Maria Carmen; Mestres, Jordi; Kors, Jan; Singh, Bharat; van Mulligen, Erik; Van der Lei, Johan; Diallo, Gayo; Avillach, Paul; Ahlberg, Ernst; Boyer, Scott; Diaz, Carlos; Oliveira, José Luís

    2013-01-01

    Pharmacovigilance plays a key role in the healthcare domain through the assessment, monitoring and discovery of interactions amongst drugs and their effects in the human organism. However, technological advances in this field have been slowing down over the last decade due to miscellaneous legal, ethical and methodological constraints. Pharmaceutical companies started to realize that collaborative and integrative approaches boost current drug research and development processes. Hence, new strategies are required to connect researchers, datasets, biomedical knowledge and analysis algorithms, allowing them to fully exploit the true value behind state-of-the-art pharmacovigilance efforts. This manuscript introduces a new platform directed towards pharmacovigilance knowledge providers. This system, based on a service-oriented architecture, adopts a plugin-based approach to solve fundamental pharmacovigilance software challenges. With the wealth of collected clinical and pharmaceutical data, it is now possible to connect knowledge providers’ analysis and exploration algorithms with real data. As a result, new strategies allow a faster identification of high-risk interactions between marketed drugs and adverse events, and enable the automated uncovering of scientific evidence behind them. With this architecture, the pharmacovigilance field has a new platform to coordinate large-scale drug evaluation efforts in a unique ecosystem, publicly available at http://bioinformatics.ua.pt/euadr/. PMID:24349421

  2. Using Knowledge-Based Systems to Support Learning of Organizational Knowledge: A Case Study

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Nash, Rebecca L.; Phan, Tu-Anh T.; Bailey, Teresa R.

    2003-01-01

    This paper describes the deployment of a knowledge system to support learning of organizational knowledge at the Jet Propulsion Laboratory (JPL), a US national research laboratory whose mission is planetary exploration and to 'do what no one has done before.' Data collected over 19 weeks of operation were used to assess system performance with respect to design considerations, participation, effectiveness of communication mechanisms, and individual-based learning. These results are discussed in the context of organizational learning research and implications for practice.

  3. An expert system/ion trap mass spectrometry approach for life support systems monitoring

    NASA Technical Reports Server (NTRS)

    Palmer, Peter T.; Wong, Carla M.; Yost, Richard A.; Johnson, Jodie V.; Yates, Nathan A.; Story, Michael

    1992-01-01

    Efforts to develop sensor and control system technology to monitor air quality for life support have resulted in the development and preliminary testing of a concept based on expert systems and ion trap mass spectrometry (ITMS). An ITMS instrument provides the capability to identify and quantitate a large number of suspected contaminants at trace levels through the use of a variety of multidimensional experiments. An expert system provides specialized knowledge for control, analysis, and decision making. The system is intended for real-time, on-line, autonomous monitoring of air quality. The key characteristics of the system, performance data and analytical capabilities of the ITMS instrument, the design and operation of the expert system, and results from preliminary testing of the system for trace contaminant monitoring are described.

  4. Enrichment assessment of multiple virtual screening strategies for Toll-like receptor 8 agonists based on a maximal unbiased benchmarking data set.

    PubMed

    Pei, Fen; Jin, Hongwei; Zhou, Xin; Xia, Jie; Sun, Lidan; Liu, Zhenming; Zhang, Liangren

    2015-11-01

    Toll-like receptor 8 agonists, which activate adaptive immune responses by inducing robust production of T-helper 1-polarizing cytokines, are promising candidates for vaccine adjuvants. As the binding site of toll-like receptor 8 is large and highly flexible, virtual screening by any individual method has inevitable limitations; thus, a comprehensive comparison of different methods may provide insights for seeking an effective strategy for the discovery of novel toll-like receptor 8 agonists. In this study, the performance of knowledge-based pharmacophore, shape-based 3D screening, and combined strategies was assessed against a maximal unbiased benchmarking data set containing 13 actives and 1302 decoys specialized for toll-like receptor 8 agonists. Prior structure-activity relationship knowledge was incorporated into knowledge-based pharmacophore generation, and a set of antagonists was innovatively used to verify the selectivity of the selected knowledge-based pharmacophore. The benchmarking data set was generated from our recently developed 'mubd-decoymaker' protocol. The enrichment assessment demonstrated considerable performance for our selected three-layer virtual screening strategy: knowledge-based pharmacophore (Phar1) screening, a shape-based 3D similarity search (Q4_combo), and then a GOLD docking screening. This virtual screening strategy could be further employed to perform large-scale database screening and to discover novel toll-like receptor 8 agonists. © 2015 John Wiley & Sons A/S.

  5. A Knowledge-Based Approach to Automatic Detection of Equipment Alarm Sounds in a Neonatal Intensive Care Unit Environment.

    PubMed

    Raboshchuk, Ganna; Nadeu, Climent; Jancovic, Peter; Lilja, Alex Peiro; Kokuer, Munevver; Munoz Mahamud, Blanca; Riverola De Veciana, Ana

    2018-01-01

    A large number of alarm sounds triggered by biomedical equipment occur frequently in the noisy environment of a neonatal intensive care unit (NICU) and play a key role in providing healthcare. In this paper, our work on the development of an automatic system for detection of acoustic alarms in that difficult environment is presented. Such an automatic detection system is needed for investigating how a preterm infant reacts to auditory stimuli of the NICU environment and for improved real-time patient monitoring. The approach presented in this paper consists of using the available knowledge about each alarm class in the design of the detection system. The information about the frequency structure is used in the feature extraction stage, and the time structure knowledge is incorporated at the post-processing stage. Several alternative methods are compared for feature extraction, modeling, and post-processing. The detection performance is evaluated with real data recorded in the NICU of the hospital, using both frame-level and period-level metrics. The experimental results show that the inclusion of both spectral and temporal information improves the baseline detection performance by more than 60%.
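
    A small sketch of the frequency-structure idea, under the assumption that each alarm class is characterized by a known set of spectral peaks; the alarm names, frequencies, and frame parameters are invented. Temporal knowledge (the beep cadence of each alarm) would be enforced in a separate post-processing stage.

    ```python
    import numpy as np

    # Invented per-class spectral peaks; real values come from equipment specs
    # or measurement of each alarm type.
    ALARM_PEAKS = {"monitor_A": [960, 1920], "pump_B": [660, 1320, 2640]}  # Hz

    def alarm_features(frame, fs=16000):
        """Normalized spectral energy at each alarm's characteristic peaks."""
        spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
        freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
        feats = {}
        for alarm, peaks in ALARM_PEAKS.items():
            idx = [int(np.argmin(np.abs(freqs - f))) for f in peaks]
            feats[alarm] = float(sum(spec[i] for i in idx) / spec.sum())
        return feats

    frame = np.sin(2 * np.pi * 960 * np.arange(512) / 16000)  # synthetic beep
    print(alarm_features(frame))  # high energy on monitor_A's first peak
    ```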

  6. A Knowledge-Based Approach to Automatic Detection of Equipment Alarm Sounds in a Neonatal Intensive Care Unit Environment

    PubMed Central

    Nadeu, Climent; Jančovič, Peter; Lilja, Alex Peiró; Köküer, Münevver; Muñoz Mahamud, Blanca; Riverola De Veciana, Ana

    2018-01-01

    A large number of alarm sounds triggered by biomedical equipment occur frequently in the noisy environment of a neonatal intensive care unit (NICU) and play a key role in providing healthcare. In this paper, our work on the development of an automatic system for detection of acoustic alarms in that difficult environment is presented. Such an automatic detection system is needed for investigating how a preterm infant reacts to auditory stimuli of the NICU environment and for improved real-time patient monitoring. The approach presented in this paper consists of using the available knowledge about each alarm class in the design of the detection system. The information about the frequency structure is used in the feature extraction stage, and the time structure knowledge is incorporated at the post-processing stage. Several alternative methods are compared for feature extraction, modeling, and post-processing. The detection performance is evaluated with real data recorded in the NICU of the hospital, using both frame-level and period-level metrics. The experimental results show that the inclusion of both spectral and temporal information improves the baseline detection performance by more than 60%. PMID:29404227

  7. Knowledge Acquisition Using Linguistic-Based Knowledge Analysis

    Treesearch

    Daniel L. Schmoldt

    1998-01-01

    Most knowledge-based system development efforts include acquiring knowledge from one or more sources. Difficulties associated with this knowledge acquisition task are readily acknowledged by most researchers. While a variety of knowledge acquisition methods have been reported, little has been done to organize those different methods and to suggest how to apply them...

  8. Nano Mapper: an Internet knowledge mapping system for nanotechnology development

    NASA Astrophysics Data System (ADS)

    Li, Xin; Hu, Daning; Dang, Yan; Chen, Hsinchun; Roco, Mihail C.; Larson, Catherine A.; Chan, Joyce

    2009-04-01

    Nanotechnology research has experienced rapid growth in recent years. Advances in information technology enable efficient investigation of publications, their contents, and relationships for large sets of nanotechnology-related documents in order to assess the status of the field. This paper presents the development of a new knowledge mapping system, called Nano Mapper (http://nanomapper.eller.arizona.edu), which integrates the analysis of nanotechnology patents and research grants into a Web-based platform. The Nano Mapper system currently contains nanotechnology-related patents for 1976-2006 from the United States Patent and Trademark Office (USPTO), European Patent Office (EPO), and Japan Patent Office (JPO), as well as grant documents from the U.S. National Science Foundation (NSF) for the same time period. The system provides complex search functionalities, and makes available a set of analysis and visualization tools (statistics, trend graphs, citation networks, and content maps) that can be applied to different levels of analytical units (countries, institutions, technical fields) and for different time intervals. The paper shows important nanotechnology patenting activities at USPTO for 2005-2006 identified through the Nano Mapper system.

  9. Nano Mapper: an Internet knowledge mapping system for nanotechnology development

    PubMed Central

    Hu, Daning; Dang, Yan; Chen, Hsinchun; Roco, Mihail C.; Larson, Catherine A.; Chan, Joyce

    2008-01-01

    Nanotechnology research has experienced rapid growth in recent years. Advances in information technology enable efficient investigation of publications, their contents, and relationships for large sets of nanotechnology-related documents in order to assess the status of the field. This paper presents the development of a new knowledge mapping system, called Nano Mapper (http://nanomapper.eller.arizona.edu), which integrates the analysis of nanotechnology patents and research grants into a Web-based platform. The Nano Mapper system currently contains nanotechnology-related patents for 1976–2006 from the United States Patent and Trademark Office (USPTO), European Patent Office (EPO), and Japan Patent Office (JPO), as well as grant documents from the U.S. National Science Foundation (NSF) for the same time period. The system provides complex search functionalities, and makes available a set of analysis and visualization tools (statistics, trend graphs, citation networks, and content maps) that can be applied to different levels of analytical units (countries, institutions, technical fields) and for different time intervals. The paper shows important nanotechnology patenting activities at USPTO for 2005–2006 identified through the Nano Mapper system. PMID:21170121

  10. Proposed Conceptual Requirements for the CTBT Knowledge Base,

    DTIC Science & Technology

    1995-08-14

    ...knowledge available to automated processing routines and human analysts are significant, and solving these problems is an essential step in ensuring... knowledge storage in a CTBT system. In addition to providing regional knowledge to automated processing routines, the knowledge base will also address...

  11. A Knowledge-Based System For Analysis, Intervention Planning and Prevention of Defects in Immovable Cultural Heritage Objects and Monuments

    NASA Astrophysics Data System (ADS)

    Valach, J.; Cacciotti, R.; Kuneš, P.; Čerňanský, M.; Bláha, J.

    2012-04-01

    The paper presents a project aiming to develop a knowledge-based system for the documentation and analysis of defects in cultural heritage objects and monuments. The MONDIS information system concentrates knowledge on damage to immovable structures due to various causes, and on the preventive and remedial actions performed to protect or repair them, where possible. The system under construction is to provide an understanding of the causal relationships between a defect, the materials, the external loads, and the environment of the built object. The foundation of the knowledge-based system will be the systemized and formalized knowledge of defects and their mitigation, acquired through the analysis of a representative set of cases documented in the past. On the basis of comparability of design, technologies used, materials, and the nature of the external forces and surroundings, the developed software system has the capacity to indicate the most likely risks of new defects occurring or existing ones extending. The system will also allow a comparison of an actual failure with similar documented cases and will propose a suitable technical intervention plan. The system will provide conservationists, administrators and owners of historical objects with a toolkit for documenting defects in their objects. Advanced artificial intelligence methods will also offer the accumulated knowledge to users and enable them to get oriented in relevant techniques of preventive intervention and reconstruction, based on similarity with their case.

  12. Building Scalable Knowledge Graphs for Earth Science

    NASA Astrophysics Data System (ADS)

    Ramachandran, R.; Maskey, M.; Gatlin, P. N.; Zhang, J.; Duan, X.; Bugbee, K.; Christopher, S. A.; Miller, J. J.

    2017-12-01

    Estimates indicate that the world's information will grow by 800% in the next five years. In any given field, a single researcher or a team of researchers cannot keep up with this rate of knowledge expansion without the help of cognitive systems. Cognitive computing, defined as the use of information technology to augment human cognition, can help tackle large systemic problems. Knowledge graphs, one of the foundational components of cognitive systems, link key entities in a specific domain with other entities via relationships. Researchers can mine these graphs to make probabilistic recommendations and to infer new knowledge. At this point, however, there is a dearth of tools for generating scalable knowledge graphs from the existing corpus of scientific literature for Earth science research. Our project is currently developing an end-to-end automated methodology for incrementally constructing knowledge graphs for Earth science. Semantic Entity Recognition (SER) is one of the key steps in this methodology. SER for Earth science uses external resources (including metadata catalogs and controlled vocabularies) as references to guide entity extraction and recognition (i.e., labeling) from unstructured text, in order to build a large training set to seed the subsequent auto-learning component in our algorithm. Results from several SER experiments will be presented, as well as lessons learned.
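
    A minimal sketch of vocabulary-guided SER as described, with an invented controlled vocabulary: matching reference terms against unstructured text yields weakly labeled spans that can seed the auto-learning component.

    ```python
    import re

    # Invented controlled vocabulary; real entries would come from metadata
    # catalogs and Earth science vocabularies.
    controlled_vocab = {
        "sea surface temperature": "Variable",
        "MODIS": "Instrument",
        "Aqua": "Platform",
    }

    def weak_label(sentence):
        """Return (start, end, entity_type) spans for vocabulary matches."""
        spans = []
        for term, etype in controlled_vocab.items():
            for m in re.finditer(re.escape(term), sentence, re.IGNORECASE):
                spans.append((m.start(), m.end(), etype))
        return sorted(spans)

    sent = "We derive sea surface temperature from MODIS on the Aqua platform."
    print(weak_label(sent))   # spans usable as training examples for a tagger
    ```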

  13. The Genome-based Knowledge Management in Cycles model: a complex adaptive systems framework for implementation of genomic applications.

    PubMed

    Arar, Nedal; Knight, Sara J; Modell, Stephen M; Issa, Amalia M

    2011-03-01

    The main mission of the Genomic Applications in Practice and Prevention Network™ is to advance collaborative efforts involving partners from across the public health sector to realize the promise of genomics in healthcare and disease prevention. We introduce a new framework that supports the Genomic Applications in Practice and Prevention Network mission and leverages the characteristics of the complex adaptive systems approach. We call this framework the Genome-based Knowledge Management in Cycles model (G-KNOMIC). G-KNOMIC proposes that the collaborative work of multidisciplinary teams utilizing genome-based applications will enhance translating evidence-based genomic findings by creating ongoing knowledge management cycles. Each cycle consists of knowledge synthesis, knowledge evaluation, knowledge implementation and knowledge utilization. Our framework acknowledges that all the elements in the knowledge translation process are interconnected and continuously changing. It also recognizes the importance of feedback loops, and the ability of teams to self-organize within a dynamic system. We demonstrate how this framework can be used to improve the adoption of genomic technologies into practice using two case studies of genomic uptake.

  14. Marshall Space Flight Center Propulsion Systems Department (PSD) Knowledge Management (KM) Initiative

    NASA Technical Reports Server (NTRS)

    Caraccioli, Paul; Varnedoe, Tom; Smith, Randy; McCarter, Mike; Wilson, Barry; Porter, Richard

    2006-01-01

    NASA Marshall Space Flight Center's Propulsion Systems Department (PSD) is four months into a fifteen month Knowledge Management (KM) initiative to support enhanced engineering decision making and analyses, faster resolution of anomalies (near-term) and effective, efficient knowledge infused engineering processes, reduced knowledge attrition, and reduced anomaly occurrences (long-term). The near-term objective of this initiative is developing a KM Pilot project, within the context of a 3-5 year KM strategy, to introduce and evaluate the use of KM within PSD. An internal NASA/MSFC PSD KM team was established early in project formulation to maintain a practitioner, user-centric focus throughout the conceptual development, planning and deployment of KM technologies and capabilities within the PSD. The PSD internal team is supported by the University of Alabama's Aging Infrastructure Systems Center of Excellence (AISCE), Intergraph Corporation, and The Knowledge Institute. The principal product of the initial four month effort has been strategic planning of PSD KM implementation by first determining the "as is" state of KM capabilities and developing, planning and documenting the roadmap to achieve the desired "to be" state. Activities undertaken to support the planning phase have included data gathering: cultural surveys, group work-sessions, interviews, documentation review, and independent research. Assessments and analyses have been performed including industry benchmarking, related local and Agency initiatives, specific tools and techniques used and strategies for leveraging existing resources, people and technology to achieve common KM goals. Key findings captured in the PSD KM Strategic Plan include the system vision, purpose, stakeholders, prioritized strategic objectives mapped to the top ten practitioner needs and analysis of current resource usage. Opportunities identified from research, analyses, cultural/KM surveys and practitioner interviews include: executive and senior management sponsorship, KM awareness, promotion and training, cultural change management, process improvement, leveraging existing resources and new innovative technologies to align with other NASA KM initiatives (convergence: the big picture). To enable results based incremental implementation and future growth of the KM initiative, key performance measures have been identified including stakeholder value, system utility, learning and growth (knowledge capture, sharing, reduced anomaly recurrence), cultural change, process improvement and return-on-investment. The next steps for the initial implementation spiral (focused on SSME Turbomachinery) have been identified, largely based on the organization and compilation of summary level engineering process models, data capture matrices, functional models and conceptual-level systems architecture. Key elements include detailed KM requirements definition, KM technology architecture assessment, evaluation and selection, deployable KM Pilot design, development, implementation and evaluation, and justifying full implementation (estimated Return-on-Investment). Features identified for the notional system architecture include the knowledge presentation layer (and its components), knowledge network layer (and its components), knowledge storage layer (and its components), User Interface and capabilities.
This paper provides a snapshot of the progress to date, the near term planning for deploying the KM pilot project and a forward look at results based growth of KM capabilities with-in the MSFC PSD.

  15. Cooperative knowledge evolution: a construction-integration approach to knowledge discovery in medicine.

    PubMed

    Schmalhofer, F J; Tschaitschian, B

    1998-11-01

    In this paper, we perform a cognitive analysis of knowledge discovery processes. As a result of this analysis, the construction-integration theory is proposed as a general framework for developing cooperative knowledge evolution systems. We thus suggest that, for the acquisition of new domain knowledge in medicine, one should first construct pluralistic views on a given topic, which may contain inconsistencies as well as redundancies. Only thereafter is this knowledge consolidated into a situation-specific circumscription and the early inconsistencies eliminated. As proof of the viability of such knowledge acquisition processes in medicine, we present the IDEAS system, which can be used for the intelligent documentation of adverse events in clinical studies. This system provides better documentation of the side-effects of medical drugs. Knowledge evolution thereby occurs by achieving consistent explanations in increasingly larger contexts (i.e., more cases and more pharmaceutical substrates). Finally, it is shown how prototypes, model-based approaches and cooperative knowledge evolution systems can be distinguished as different classes of knowledge-based systems.

  16. Archetype-based conversion of EHR content models: pilot experience with a regional EHR system

    PubMed Central

    2009-01-01

    Background: Exchange of Electronic Health Record (EHR) data between systems from different suppliers is a major challenge. EHR communication based on archetype methodology has been developed by openEHR and CEN/ISO. The experience of using archetypes in deployed EHR systems is quite limited today. Currently deployed EHR systems with large user bases have their own proprietary way of representing clinical content using various models. This study was designed to investigate the feasibility of representing EHR content models from a regional EHR system as openEHR archetypes and inversely to convert archetypes to the proprietary format. Methods: The openEHR EHR Reference Model (RM) and Archetype Model (AM) specifications were used. The template model of the Cambio COSMIC, a regional EHR product from Sweden, was analyzed and compared to the openEHR RM and AM. This study was focused on the convertibility of the EHR semantic models. A semantic mapping between the openEHR RM/AM and the COSMIC template model was produced and used as the basis for developing prototype software that performs automated bi-directional conversion between openEHR archetypes and COSMIC templates. Results: Automated bi-directional conversion between openEHR archetype format and COSMIC template format has been achieved. Several archetypes from the openEHR Clinical Knowledge Repository have been imported into COSMIC, preserving most of the structural and terminology related constraints. COSMIC templates from a large regional installation were successfully converted into the openEHR archetype format. The conversion from the COSMIC templates into archetype format preserves nearly all structural and semantic definitions of the original content models. A strategy of gradually adding archetype support to legacy EHR systems was formulated in order to allow sharing of clinical content models defined using different formats. Conclusion: The openEHR RM and AM are expressive enough to represent the existing clinical content models from the template based EHR system tested, and legacy content models can automatically be converted to archetype format for sharing of knowledge. With some limitations, internationally available archetypes could be converted to the legacy EHR models. Archetype support can be added to legacy EHR systems in an incremental way, allowing a migration path to interoperability based on standards. PMID:19570196

  17. An Expert System for Automating Nuclear Strike Aircraft Replacement, Aircraft Beddown, and Logistics Movement for the Theater Warfare Exercise.

    DTIC Science & Technology

    1989-12-01

    ...that can be easily understood. (9) Parallelism. Several system components may need to execute in parallel. For example, the processing of sensor data... knowledge base are not accessible for processing by the database. Also, in the likely case that the expert system poses a series of related queries, the... Knowledge base for the automation of logistics movement; the directory containing the strike aircraft replacement knowledge base...

  18. A new intrusion prevention model using planning knowledge graph

    NASA Astrophysics Data System (ADS)

    Cai, Zengyu; Feng, Yuan; Liu, Shuru; Gan, Yong

    2013-03-01

    Intelligent planning is an important research area in artificial intelligence that has been applied to network security. This paper proposes a new intrusion prevention model based on a planning knowledge graph and discusses the system architecture and characteristics of the model. Intrusion prevention based on the planning knowledge graph is accomplished through plan recognition over the graph, while intrusion response strategies and actions are produced by a hierarchical task network (HTN) planner. The resulting intrusion prevention system gains the advantages of intelligent planning: knowledge sharing, focused response, learning autonomy, and protective ability.
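
    To make the HTN component concrete, a toy sketch of hierarchical task decomposition for intrusion response follows; the task names and methods are invented, not taken from the paper.

    ```python
    # Toy HTN-style decomposition: compound tasks expand via methods into
    # primitive actions; the first applicable method is used, no backtracking.
    methods = {
        "respond_to_intrusion": [["isolate_host", "collect_evidence", "restore"]],
        "isolate_host": [["block_ip", "disable_account"]],
    }
    primitive = {"block_ip", "disable_account", "collect_evidence", "restore"}

    def plan(task):
        """Recursively decompose a task into a sequence of primitive actions."""
        if task in primitive:
            return [task]
        steps = []
        for subtask in methods[task][0]:
            steps += plan(subtask)
        return steps

    print(plan("respond_to_intrusion"))
    # ['block_ip', 'disable_account', 'collect_evidence', 'restore']
    ```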

  19. A Discretization Algorithm for Meteorological Data and its Parallelization Based on Hadoop

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Jin, Wen; Yu, Yuting; Qiu, Taorong; Bai, Xiaoming; Zou, Shuilong

    2017-10-01

    Meteorological observation data are voluminous, have many attributes, and take continuous values, yet applications of these data need the correlations between the elements. This paper addresses how to better discretize large meteorological data sets so that the knowledge hidden in them can be mined more effectively, and investigates improvements to discretization algorithms for large-scale data. To provide a sound basis for subsequent knowledge extraction from large meteorological data, a discretization algorithm based on information entropy and the inconsistency of meteorological attributes is proposed, and the algorithm is parallelized on the Hadoop platform. Finally, comparison tests validate the effectiveness of the proposed algorithm for discretization of large meteorological data.
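
    A compact sketch of the entropy criterion behind such discretization (the attribute-inconsistency measure and the Hadoop parallelization are omitted): the cut point of a continuous attribute is chosen to minimize the weighted class entropy of the two resulting intervals. The data values and labels are invented.

    ```python
    import numpy as np

    def entropy(y):
        """Shannon entropy of a label array, in bits."""
        _, counts = np.unique(y, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    def best_cut(x, y):
        """Cut point minimizing the weighted entropy of the two intervals."""
        order = np.argsort(x)
        x, y = x[order], y[order]
        best, best_h = None, np.inf
        for i in range(1, len(x)):
            if x[i] == x[i - 1]:
                continue                   # no cut between equal values
            h = (i * entropy(y[:i]) + (len(x) - i) * entropy(y[i:])) / len(x)
            if h < best_h:
                best, best_h = (x[i] + x[i - 1]) / 2, h
        return best

    temps = np.array([12.1, 14.0, 15.2, 19.8, 22.5, 24.1])   # continuous attribute
    rain = np.array([0, 0, 0, 1, 1, 1])                      # class labels
    print(best_cut(temps, rain))                             # 17.5
    ```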

  20. Computational knowledge integration in biopharmaceutical research.

    PubMed

    Ficenec, David; Osborne, Mark; Pradines, Joel; Richards, Dan; Felciano, Ramon; Cho, Raymond J; Chen, Richard O; Liefeld, Ted; Owen, James; Ruttenberg, Alan; Reich, Christian; Horvath, Joseph; Clark, Tim

    2003-09-01

    An initiative to increase biopharmaceutical research productivity by capturing, sharing and computationally integrating proprietary scientific discoveries with public knowledge is described. This initiative involves both organisational process change and multiple interoperating software systems. The software components rely on mutually supporting integration techniques. These include a richly structured ontology, statistical analysis of experimental data against stored conclusions, natural language processing of public literature, secure document repositories with lightweight metadata, web services integration, enterprise web portals and relational databases. This approach has already begun to increase scientific productivity in our enterprise by creating an organisational memory (OM) of internal research findings, accessible on the web. Through bringing together these components it has also been possible to construct a very large and expanding repository of biological pathway information linked to this repository of findings which is extremely useful in analysis of DNA microarray data. This repository, in turn, enables our research paradigm to be shifted towards more comprehensive systems-based understandings of drug action.

  1. A knowledge-based system for prototypical reasoning

    NASA Astrophysics Data System (ADS)

    Lieto, Antonio; Minieri, Andrea; Piana, Alberto; Radicioni, Daniele P.

    2015-04-01

    In this work we present a knowledge-based system equipped with a hybrid, cognitively inspired architecture for the representation of conceptual information. The proposed system aims at extending the classical representational and reasoning capabilities of ontology-based frameworks towards the realm of prototype theory. It is based on a hybrid knowledge base, composed of a classical symbolic component (grounded on a formal ontology) and a typicality-based one (grounded on the conceptual spaces framework). The resulting system attempts to reconcile the heterogeneous approach to concepts in Cognitive Science with the dual process theories of reasoning and rationality. The system has been experimentally assessed in a conceptual categorisation task where common sense linguistic descriptions were given as input, and the corresponding target concepts had to be identified. The results show that the proposed solution substantially extends the representational and reasoning 'conceptual' capabilities of standard ontology-based systems.

  2. The importance of knowledge-based technology.

    PubMed

    Cipriano, Pamela F

    2012-01-01

    Nurse executives are responsible for a workforce that can provide safer and more efficient care in a complex sociotechnical environment. National quality priorities rely on technologies to provide data collection, share information, and leverage analytic capabilities to interpret findings and inform approaches to care that will achieve better outcomes. As a key steward for quality, the nurse executive exercises leadership to provide the infrastructure to build and manage nursing knowledge and instill accountability for following evidence-based practices. These actions contribute to a learning health system where new knowledge is captured as a by-product of care delivery enabled by knowledge-based electronic systems. The learning health system also relies on rigorous scientific evidence embedded into practice at the point of care. The nurse executive optimizes use of knowledge-based technologies, integrated throughout the organization, that have the capacity to help transform health care.

  3. k-neighborhood Decentralization: A Comprehensive Solution to Index the UMLS for Large Scale Knowledge Discovery

    PubMed Central

    Xiang, Yang; Lu, Kewei; James, Stephen L.; Borlawsky, Tara B.; Huang, Kun; Payne, Philip R.O.

    2011-01-01

    The Unified Medical Language System (UMLS) is the largest thesaurus in the biomedical informatics domain. Previous works have shown that knowledge constructs comprised of transitively-associated UMLS concepts are effective for discovering potentially novel biomedical hypotheses. However, the extremely large size of the UMLS becomes a major challenge for these applications. To address this problem, we designed a k-neighborhood Decentralization Labeling Scheme (kDLS) for the UMLS, and the corresponding method to effectively evaluate the kDLS indexing results. kDLS provides a comprehensive solution for indexing the UMLS for very efficient large scale knowledge discovery. We demonstrated that it is highly effective to use kDLS paths to prioritize disease-gene relations across the whole genome, with extremely high fold-enrichment values. To our knowledge, this is the first indexing scheme capable of supporting efficient large scale knowledge discovery on the UMLS as a whole. Our expectation is that kDLS will become a vital engine for retrieving information and generating hypotheses from the UMLS for future medical informatics applications. PMID:22154838
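
    A toy sketch of the k-neighborhood idea (not the kDLS labeling scheme itself): precomputing, for each concept, the set of concepts within k hops lets transitive association between two concepts be tested by intersecting their labels. The graph, k, and concept names are invented; the real scheme also decentralizes the labels to stay tractable on the full UMLS.

    ```python
    from collections import deque

    def k_neighborhood(graph, start, k):
        """BFS out to at most k hops; returns the set of reached concepts."""
        dist, queue = {start: 0}, deque([start])
        while queue:
            node = queue.popleft()
            if dist[node] == k:
                continue
            for nbr in graph.get(node, ()):
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        return set(dist)

    graph = {"diseaseX": ["geneA"], "geneA": ["diseaseX", "pathwayP"],
             "pathwayP": ["geneA", "geneB"], "geneB": ["pathwayP"]}
    labels = {v: k_neighborhood(graph, v, 2) for v in graph}

    # Non-empty intersection => a connecting path of length <= 2k exists.
    print(bool(labels["diseaseX"] & labels["geneB"]))   # True (via pathwayP)
    ```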

  4. k-Neighborhood decentralization: a comprehensive solution to index the UMLS for large scale knowledge discovery.

    PubMed

    Xiang, Yang; Lu, Kewei; James, Stephen L; Borlawsky, Tara B; Huang, Kun; Payne, Philip R O

    2012-04-01

    The Unified Medical Language System (UMLS) is the largest thesaurus in the biomedical informatics domain. Previous works have shown that knowledge constructs comprised of transitively-associated UMLS concepts are effective for discovering potentially novel biomedical hypotheses. However, the extremely large size of the UMLS becomes a major challenge for these applications. To address this problem, we designed a k-neighborhood Decentralization Labeling Scheme (kDLS) for the UMLS, and the corresponding method to effectively evaluate the kDLS indexing results. kDLS provides a comprehensive solution for indexing the UMLS for very efficient large scale knowledge discovery. We demonstrated that it is highly effective to use kDLS paths to prioritize disease-gene relations across the whole genome, with extremely high fold-enrichment values. To our knowledge, this is the first indexing scheme capable of supporting efficient large scale knowledge discovery on the UMLS as a whole. Our expectation is that kDLS will become a vital engine for retrieving information and generating hypotheses from the UMLS for future medical informatics applications. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Famine Early Warning Systems Network (FEWS NET) Agro-climatology Analysis Tools and Knowledge Base Products for Food Security Applications

    NASA Astrophysics Data System (ADS)

    Budde, M. E.; Rowland, J.; Anthony, M.; Palka, S.; Martinez, J.; Hussain, R.

    2017-12-01

    The U.S. Geological Survey (USGS) supports the use of Earth observation data for food security monitoring through its role as an implementing partner of the Famine Early Warning Systems Network (FEWS NET). The USGS Earth Resources Observation and Science (EROS) Center has developed tools designed to aid food security analysts in developing assumptions of agro-climatological outcomes. There are four primary steps in developing agro-climatology assumptions: 1) understanding the climatology, 2) evaluating current climate modes, 3) interpreting forecast information, and 4) incorporating monitoring data. Analysts routinely forecast outcomes well in advance of the growing season, which relies on knowledge of climatology. A few months prior to the growing season, analysts can assess large-scale climate modes that might influence seasonal outcomes. Within two months of the growing season, analysts can evaluate seasonal forecast information as indicators. Once the growing season begins, monitoring data, based on remote sensing and field information, can characterize the start of season and remain integral monitoring tools throughout the duration of the season. Each subsequent step in the process can lead to modifications of the original climatology assumption. To support such analyses, we have created an agro-climatology analysis tool that characterizes each step in the assumption building process. Satellite-based rainfall and normalized difference vegetation index (NDVI)-based products support both the climatology and monitoring steps; sea-surface temperature data and knowledge of the global climate system inform the climate modes; and precipitation forecasts at multiple scales support the interpretation of forecast information. Organizing these data for a user-specified area provides a valuable tool for food security analysts to better formulate agro-climatology assumptions that feed into food security assessments. We have also developed a knowledge base for over 80 countries that provides rainfall and NDVI-based products, including annual and seasonal summaries, historical anomalies, coefficients of variation, and the number of years below 70% of the annual or seasonal average. These products provide a quick look for analysts to assess the agro-climatology of a country.

  6. VHBuild.com: A Web-Based System for Managing Knowledge in Projects.

    ERIC Educational Resources Information Center

    Li, Heng; Tang, Sandy; Man, K. F.; Love, Peter E. D.

    2002-01-01

    Describes an intelligent Web-based construction project management system called VHBuild.com which integrates project management, knowledge management, and artificial intelligence technologies. Highlights include an information flow model; time-cost optimization based on genetic algorithms; rule-based drawing interpretation; and a case-based…

  7. PLAN-IT: Knowledge-Based Mission Sequencing

    NASA Astrophysics Data System (ADS)

    Biefeld, Eric W.

    1987-02-01

    Mission sequencing consumes a large amount of time and manpower during a space exploration effort. Plan-It is a knowledge-based approach to assist in mission sequencing. Plan-It uses a combined frame and blackboard architecture. This paper reports on the approach implemented by Plan-It and the current applications of Plan-It for sequencing at NASA.

  8. Automatic acquisition of domain and procedural knowledge

    NASA Technical Reports Server (NTRS)

    Ferber, H. J.; Ali, M.

    1988-01-01

    The design concept and performance of AKAS, an automated knowledge-acquisition system for the development of expert systems, are discussed. AKAS was developed using the FLES knowledge base for the electrical system of the B-737 aircraft and employs a 'learn by being told' strategy. The system comprises four basic modules, a system administration module, a natural-language concept-comprehension module, a knowledge-classification/extraction module, and a knowledge-incorporation module; details of the module architectures are explored.

  9. Medical Knowledge Bases.

    ERIC Educational Resources Information Center

    Miller, Randolph A.; Giuse, Nunzia B.

    1991-01-01

    Few commonly available, successful computer-based tools exist in medical informatics. Faculty expertise can be included in computer-based medical information systems. Computers allow dynamic recombination of knowledge to answer questions unanswerable with print textbooks. Such systems can also create stronger ties between academic and clinical…

  10. System Diagnostic Builder - A rule generation tool for expert systems that do intelligent data evaluation. [applied to Shuttle Mission Simulator

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph; Burke, Roger

    1993-01-01

    Consideration is given to the System Diagnostic Builder (SDB), an automated knowledge acquisition tool using state-of-the-art AI technologies. The SDB employs an inductive machine learning technique to generate rules from data sets that are classified by a subject matter expert. Thus, data are captured from the subject system, classified, and used to drive the rule generation process. These rule bases are used to represent the observable behavior of the subject system, and to represent knowledge about this system. The knowledge bases captured from the Shuttle Mission Simulator can be used as black box simulations by the Intelligent Computer Aided Training devices. The SDB can also be used to construct knowledge bases for the process control industry, such as chemical production or oil and gas production.
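
    A minimal stand-in for the inductive step, assuming scikit-learn and invented telemetry features: expert-classified data train a decision tree whose branches read directly as IF/THEN rules for the knowledge base. The SDB's actual learning technique is not specified here; a decision tree is only one plausible choice.

    ```python
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Invented telemetry samples classified by a subject matter expert.
    X = [[210, 0.1], [215, 0.0], [350, 0.9], [360, 0.8]]    # [temp, vibration]
    y = ["nominal", "nominal", "pump_fault", "pump_fault"]  # expert's labels

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(tree, feature_names=["temp", "vibration"]))
    # The printed tree reads as rules for the knowledge base, e.g.
    # IF temp <= 282.5 THEN nominal ELSE pump_fault.
    ```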

  11. SWAN: An expert system with natural language interface for tactical air capability assessment

    NASA Technical Reports Server (NTRS)

    Simmons, Robert M.

    1987-01-01

    SWAN is an expert system and natural language interface for assessing the war fighting capability of Air Force units in Europe. The expert system is an object oriented knowledge based simulation with an alternate worlds facility for performing what-if excursions. Responses from the system take the form of generated text, tables, or graphs. The natural language interface is an expert system in its own right, with a knowledge base and rules which understand how to access external databases, models, or expert systems. The distinguishing feature of the Air Force expert system is its use of meta-knowledge to generate explanations in the frame and procedure based environment.

  12. Marshall Space Flight Center Propulsion Systems Department (PSD) KM Initiative

    NASA Technical Reports Server (NTRS)

    Caraccioli, Paul; Varnadoe, Tom; McCarter, Mike

    2006-01-01

    NASA Marshall Space Flight Center's Propulsion Systems Department (PSD) is four months into a fifteen-month Knowledge Management (KM) initiative to support enhanced engineering decision making and analyses, faster resolution of anomalies (near-term), and effective, efficient knowledge-infused engineering processes, reduced knowledge attrition, and reduced anomaly occurrences (long-term). The near-term objective of this initiative is developing a KM Pilot project, within the context of a 3-5 year KM strategy, to introduce and evaluate the use of KM within PSD. An internal NASA/MSFC PSD KM team was established early in project formulation to maintain a practitioner, user-centric focus throughout the conceptual development, planning and deployment of KM technologies and capabilities within the PSD. The PSD internal team is supported by the University of Alabama's Aging Infrastructure Systems Center of Excellence (AISCE), Intergraph Corporation, and The Knowledge Institute. The principal product of the initial four-month effort has been strategic planning of PSD KM implementation: first determining the "as is" state of KM capabilities, then developing, planning and documenting the roadmap to achieve the desired "to be" state. Activities undertaken to support the planning phase have included data gathering (cultural surveys, group work sessions, interviews, documentation review, and independent research). Assessments and analyses have been performed, including industry benchmarking, related local and Agency initiatives, specific tools and techniques used, and strategies for leveraging existing resources, people and technology to achieve common KM goals. Key findings captured in the PSD KM Strategic Plan include the system vision, purpose, stakeholders, prioritized strategic objectives mapped to the top ten practitioner needs, and an analysis of current resource usage. Opportunities identified from research, analyses, cultural/KM surveys and practitioner interviews include: executive and senior management sponsorship; KM awareness, promotion and training; cultural change management; process improvement; and leveraging existing resources and new innovative technologies to align with other NASA KM initiatives (convergence: the big picture). To enable results-based incremental implementation and future growth of the KM initiative, key performance measures have been identified, including stakeholder value, system utility, learning and growth (knowledge capture, sharing, reduced anomaly recurrence), cultural change, process improvement and return on investment. The next steps for the initial implementation spiral (focused on SSME turbomachinery) have been identified, based largely on the organization and compilation of summary-level engineering process models, data capture matrices, functional models and a conceptual-level systems architecture. Key elements include detailed KM requirements definition; KM technology architecture assessment, evaluation and selection; deployable KM Pilot design, development, implementation and evaluation; and justification of full implementation (estimated return on investment). Features identified for the notional system architecture include the knowledge presentation layer, the knowledge network layer, the knowledge storage layer (each with its components), and the user interface and its capabilities.
    This paper provides a snapshot of the progress to date, the near-term planning for deploying the KM pilot project, and a forward look at results-based growth of KM capabilities within the MSFC PSD.

  13. EHR based Genetic Testing Knowledge Base (iGTKB) Development

    PubMed Central

    2015-01-01

    Background The gap between a large and growing number of genetic tests and a suboptimal clinical workflow for incorporating these tests into regular clinical practice poses barriers to effective reliance on advanced genetic technologies to improve the quality of healthcare. A promising solution to fill this gap is to develop an intelligent genetic test recommendation system that not only can provide a comprehensive view of genetic tests as education resources, but also can recommend the most appropriate genetic tests to patients based on clinical evidence. In this study, we developed an EHR-based Genetic Testing Knowledge Base for Individualized Medicine (iGTKB). Methods We extracted genetic testing information and patient medical records from EHR systems at Mayo Clinic. Clinical features were semi-automatically annotated from the clinical notes by applying a Natural Language Processing (NLP) tool, the MedTagger suite. To prioritize clinical features for each genetic test, we compared odds ratios across four population groups. Genetic tests, genetic disorders and clinical features with their odds ratios were used to establish iGTKB, which is to be integrated into the Genetic Testing Ontology (GTO). Results Overall, five genetic tests were operated with a sample size greater than 100 in 2013 at Mayo Clinic. A total of 1,450 patients who were tested by one of the five genetic tests were selected. We assembled 243 clinical features from the Human Phenotype Ontology (HPO) for these five genetic tests. Sixty clinical features had at least one mention in the clinical notes of patients taking the test. Twenty-eight clinical features with high odds ratios (greater than 1) were selected as dominant features and deposited into iGTKB with their associated information about genetic tests and genetic disorders. Conclusions In this study, we developed an EHR-based genetic testing knowledge base, iGTKB. iGTKB will be integrated into the GTO by providing relevant clinical evidence, ultimately supporting the development of a genetic testing recommendation system, iGenetics. PMID:26606281

  14. EHR based Genetic Testing Knowledge Base (iGTKB) Development.

    PubMed

    Zhu, Qian; Liu, Hongfang; Chute, Christopher G; Ferber, Matthew

    2015-01-01

    The gap between a large and growing number of genetic tests and a suboptimal clinical workflow for incorporating these tests into regular clinical practice poses barriers to effective reliance on advanced genetic technologies to improve the quality of healthcare. A promising solution to fill this gap is to develop an intelligent genetic test recommendation system that not only can provide a comprehensive view of genetic tests as education resources, but also can recommend the most appropriate genetic tests to patients based on clinical evidence. In this study, we developed an EHR-based Genetic Testing Knowledge Base for Individualized Medicine (iGTKB). We extracted genetic testing information and patient medical records from EHR systems at Mayo Clinic. Clinical features were semi-automatically annotated from the clinical notes by applying a Natural Language Processing (NLP) tool, the MedTagger suite. To prioritize clinical features for each genetic test, we compared odds ratios across four population groups. Genetic tests, genetic disorders and clinical features with their odds ratios were used to establish iGTKB, which is to be integrated into the Genetic Testing Ontology (GTO). Overall, five genetic tests were operated with a sample size greater than 100 in 2013 at Mayo Clinic. A total of 1,450 patients who were tested by one of the five genetic tests were selected. We assembled 243 clinical features from the Human Phenotype Ontology (HPO) for these five genetic tests. Sixty clinical features had at least one mention in the clinical notes of patients taking the test. Twenty-eight clinical features with high odds ratios (greater than 1) were selected as dominant features and deposited into iGTKB with their associated information about genetic tests and genetic disorders. In this study, we developed an EHR-based genetic testing knowledge base, iGTKB. iGTKB will be integrated into the GTO by providing relevant clinical evidence, ultimately supporting the development of a genetic testing recommendation system, iGenetics.
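
    The prioritization step described above rests on a standard 2x2 odds ratio per clinical feature. Below is a worked sketch with invented counts; the Haldane-Anscombe 0.5 correction is a common convention added here as an assumption, since the abstract does not say which correction, if any, was used.

    ```python
    # Feature prioritization by odds ratio, as in iGTKB (counts invented).

    def odds_ratio(a, b, c, d):
        """a: feature+/tested, b: feature-/tested,
        c: feature+/untested, d: feature-/untested.
        A 0.5 continuity correction guards against zero cells (our assumption)."""
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
        return (a / b) / (c / d)

    counts = {"hearing loss": (30, 70, 10, 390),   # OR ~ 16.1 -> dominant
              "ataxia":       (5, 95, 40, 360)}    # OR ~ 0.51 -> excluded

    dominant = {f: round(odds_ratio(*k), 2) for f, k in counts.items()
                if odds_ratio(*k) > 1.0}
    print(dominant)  # features deposited into the knowledge base
    ```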

  15. Mapping the Human Toxome by Systems Toxicology

    PubMed Central

    Bouhifd, Mounir; Hogberg, Helena T.; Kleensang, Andre; Maertens, Alexandra; Zhao, Liang; Hartung, Thomas

    2014-01-01

    Toxicity testing typically involves studying adverse health outcomes in animals subjected to high doses of toxicants, with subsequent extrapolation to expected human responses at lower doses. The low throughput of current toxicity testing approaches (which are largely the same for industrial chemicals, pesticides and drugs) has led to a backlog of more than 80,000 chemicals to which human beings are potentially exposed and whose potential toxicity remains largely unknown. By employing new testing strategies that use predictive, high-throughput cell-based assays (of human origin) to evaluate perturbations in key pathways, referred to as pathways of toxicity, and to conduct targeted testing against those pathways, we can begin to greatly accelerate our ability to test the vast "storehouses" of chemical compounds using a rational, risk-based approach to chemical prioritization, and to provide test results that are more predictive of human toxicity than current methods. The NIH Transformative Research Grant project Mapping the Human Toxome by Systems Toxicology aims at developing the tools for pathway mapping, annotation and validation, as well as the respective knowledge base to share this information. PMID:24443875

  16. Biomedical discovery acceleration, with applications to craniofacial development.

    PubMed

    Leach, Sonia M; Tipney, Hannah; Feng, Weiguo; Baumgartner, William A; Kasliwal, Priyanka; Schuyler, Ronald P; Williams, Trevor; Spritz, Richard A; Hunter, Lawrence

    2009-03-01

    The profusion of high-throughput instruments and the explosion of new results in the scientific literature, particularly in molecular biomedicine, is both a blessing and a curse to the bench researcher. Even knowledgeable and experienced scientists can benefit from computational tools that help navigate this vast and rapidly evolving terrain. In this paper, we describe a novel computational approach to this challenge, a knowledge-based system that combines reading, reasoning, and reporting methods to facilitate analysis of experimental data. Reading methods extract information from external resources, either by parsing structured data or using biomedical language processing to extract information from unstructured data, and track knowledge provenance. Reasoning methods enrich the knowledge that results from reading by, for example, noting two genes that are annotated to the same ontology term or database entry. Reasoning is also used to combine all sources into a knowledge network that represents the integration of all sorts of relationships between a pair of genes, and to calculate a combined reliability score. Reporting methods combine the knowledge network with a congruent network constructed from experimental data and visualize the combined network in a tool that facilitates the knowledge-based analysis of that data. An implementation of this approach, called the Hanalyzer, is demonstrated on a large-scale gene expression array dataset relevant to craniofacial development. The use of the tool was critical in the creation of hypotheses regarding the roles of four genes never previously characterized as involved in craniofacial development; each of these hypotheses was validated by further experimental work.
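
    The "reasoning" step — folding relationships from multiple reading sources into one network with a combined reliability score — can be illustrated with a noisy-OR combination over edge evidence. This is a hypothetical sketch using networkx; the gene pairs, source names, reliabilities, and the noisy-OR rule itself are assumptions for illustration, not the Hanalyzer's published scoring.

    ```python
    # Merging multi-source evidence into a knowledge network with a
    # combined reliability score (noisy-OR; all values invented).
    import networkx as nx

    evidence = [  # (gene1, gene2, source, per-source reliability)
        ("TFAP2A", "MSX1", "coannotation", 0.60),
        ("TFAP2A", "MSX1", "literature", 0.80),
        ("TFAP2A", "BMP4", "interaction-db", 0.70),
    ]

    net = nx.Graph()
    for g1, g2, source, r in evidence:
        if net.has_edge(g1, g2):
            net[g1][g2]["sources"].append(source)
            net[g1][g2]["fail"] *= (1.0 - r)   # prob. that every source is wrong
        else:
            net.add_edge(g1, g2, sources=[source], fail=1.0 - r)

    for g1, g2, data in net.edges(data=True):
        print(g1, g2, data["sources"], "combined score:", round(1 - data["fail"], 3))
    ```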

  17. EXPECT: Explicit Representations for Flexible Acquisition

    NASA Technical Reports Server (NTRS)

    Swartout, Bill; Gil, Yolanda

    1995-01-01

    To create more powerful knowledge acquisition systems, we not only need better acquisition tools, but we also need to change the architecture of the knowledge-based systems we create so that their structure provides better support for acquisition. Current acquisition tools permit users to modify factual knowledge, but they provide limited support for modifying problem-solving knowledge. In this paper, we argue that this limitation (and others) stems from the use of incomplete models of problem-solving knowledge and inflexible specification of the interdependencies between problem-solving and factual knowledge. We describe the EXPECT architecture, which addresses these problems by providing an explicit representation of problem-solving knowledge and intent. Using this more explicit representation, EXPECT can automatically derive the interdependencies between problem-solving and factual knowledge. By deriving these interdependencies from the structure of the knowledge-based system itself, EXPECT supports more flexible and powerful knowledge acquisition.

  18. Towards a resilience management framework for complex enterprise systems upgrade implementation

    NASA Astrophysics Data System (ADS)

    Teoh, Say Yen; Yeoh, William; Zadeh, Hossein Seif

    2017-05-01

    The lack of knowledge of how resilience management supports enterprise system (ES) projects accounts for the failure of firms to leverage their investments in costly ES implementations. Using a structured-pragmatic-situational (SPS) case study research approach, this paper reports on an investigation into the resilience management of a large utility company as it implemented an ES upgrade. Drawing on the literature and on the case study findings, we developed a process-based resilience management framework that involves three strategies (developing situation awareness, demystifying threats, and executing restoration plans) and four organisational capabilities that transform resilience management concepts into practices. We identified the crucial phases of ES upgrade implementation and developed indicators for how different strategies and capabilities of resilience management can assist managers at different stages of an ES upgrade. This research advances the state of existing knowledge by providing specific and verifiable propositions for attaining a state of resilience, the knowledge being grounded in the empirical reality of a case study. Moreover, the framework offers ES practitioners a roadmap to better identify appropriate responses and levels of preparedness.

  19. Progress and challenges in the application of artificial intelligence to computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1987-01-01

    An approach to analyzing CFD knowledge-based systems is proposed which is based, in part, on the concept of knowledge-level analysis. Consideration is given to the expert cooling fan design system, the PAN AIR knowledge system, grid adaptation, and expert zonal grid generation. These AI/CFD systems demonstrate that current AI technology can be successfully applied to well-formulated problems that are solved by means of classification or selection of preenumerated solutions.

  20. An assessment of maintainability of elevator system to improve facilities management knowledge-base

    NASA Astrophysics Data System (ADS)

    Siti, N. A.; Asmone, A. S.; Chew, M. Y. L.

    2018-02-01

    An elevator system is highly specialized machinery whose safe and reliable maintenance requires technicians with a wide array of knowledge. Because attaining reliable data on elevator malfunctions remains a challenge, this study fills the gap by gathering the management-maintenance issues and operational defects of elevator systems. Forty-three types of operational defects were found, and the consequent defects and their possible causes of occurrence are discussed. To respond to the prime challenges of maintaining elevator systems identified from the industry players' perspective, a theoretical framework is established as a recommendation for improving the knowledge base of defects in elevator systems, comprising good practices and solutions to rectify each defect found. Hence, this research paper theoretically improves the knowledge base on the maintainability of elevator systems and provides meaningful practical guidelines to industry professionals.

  1. Operator assistant systems - An experimental approach using a telerobotics application

    NASA Technical Reports Server (NTRS)

    Boy, Guy A.; Mathe, Nathalie

    1993-01-01

    This article presents a knowledge-based system methodology for developing operator assistant (OA) systems in dynamic and interactive environments. Developing such systems is a problem of both training and design; design here includes both the design of the system to be controlled and the design of procedures for operating it. A specific knowledge representation is proposed for representing the corresponding system and operational knowledge. This representation is based on the situation recognition and analytical reasoning paradigm. It tries to make explicit common factors involved in both human and machine intelligence, including perception and reasoning. An OA system based on this representation has been developed for space telerobotics. Simulations have been carried out with astronauts, and the resulting protocols have been analyzed. The results show the relevance of the approach and have been used to improve the knowledge representation and the OA architecture.

  2. Informatics in radiology: radiology gamuts ontology: differential diagnosis for the Semantic Web.

    PubMed

    Budovec, Joseph J; Lam, Cesar A; Kahn, Charles E

    2014-01-01

    The Semantic Web is an effort to add semantics, or "meaning," to empower automated searching and processing of Web-based information. The overarching goal of the Semantic Web is to enable users to more easily find, share, and combine information. Critical to this vision are knowledge models called ontologies, which define a set of concepts and formalize the relations between them. Ontologies have been developed to manage and exploit the large and rapidly growing volume of information in biomedical domains. In diagnostic radiology, lists of differential diagnoses of imaging observations, called gamuts, provide an important source of knowledge. The Radiology Gamuts Ontology (RGO) is a formal knowledge model of differential diagnoses in radiology that includes 1,674 differential diagnoses, 19,017 terms, and 52,976 links between terms. Its knowledge is used to provide an interactive, freely available online reference of radiology gamuts (www.gamuts.net). A Web service allows its content to be discovered and consumed by other information systems. The RGO integrates radiologic knowledge with other biomedical ontologies as part of the Semantic Web. © RSNA, 2014.
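
    At its simplest, a gamut is a mapping from an imaging observation to its differential diagnosis, and the ontology's links let that index be traversed in either direction. The toy sketch below illustrates the idea; the entries are invented and are not drawn from the RGO's actual gamuts.

    ```python
    # Gamut-style lookups in both directions (entries invented).
    from collections import defaultdict

    gamuts = {
        "pulmonary cavitation": ["tuberculosis", "squamous cell carcinoma", "abscess"],
        "miliary nodules": ["tuberculosis", "histoplasmosis", "metastases"],
    }

    # Invert the index: which observations can a given disorder produce?
    causes = defaultdict(list)
    for finding, ddx in gamuts.items():
        for disorder in ddx:
            causes[disorder].append(finding)

    print(gamuts["miliary nodules"])  # differential diagnosis for a finding
    print(causes["tuberculosis"])     # findings a disorder can produce
    ```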

  3. Large Scale Data Mining to Improve Usability of Data: An Intelligent Archive Testbed

    NASA Technical Reports Server (NTRS)

    Ramapriyan, Hampapuram; Isaac, David; Yang, Wenli; Morse, Steve

    2005-01-01

    Research in certain scientific disciplines - including Earth science, particle physics, and astrophysics - continually faces the challenge that the volume of data needed to perform valid scientific research can at times overwhelm even a sizable research community. The desire to improve utilization of this data gave rise to the Intelligent Archives project, which seeks to make data archives active participants in a knowledge building system capable of discovering events or patterns that represent new information or knowledge. Data mining can automatically discover patterns and events, but it is generally viewed as unsuited for large-scale use in disciplines like Earth science that routinely involve very high data volumes. Dozens of research projects have shown promising uses of data mining in Earth science, but all of these are based on experiments with data subsets of a few gigabytes or less, rather than the terabytes or petabytes typically encountered in operational systems. To bridge this gap, the Intelligent Archives project is establishing a testbed with the goal of demonstrating the use of data mining techniques in an operationally-relevant environment. This paper discusses the goals of the testbed and the design choices surrounding critical issues that arose during testbed implementation.

  4. Designing Interaction as a Learning Process: Supporting Users' Domain Knowledge Development in Interaction

    ERIC Educational Resources Information Center

    Choi, Jung-Min

    2010-01-01

    The primary concern in current interaction design is how to help users solve problems and achieve goals more easily and efficiently. While users' acquisition of sufficient knowledge for operating a product or system is considered important, their acquisition of problem-solving knowledge in the task domain has largely been disregarded. As a…

  5. Knowledge acquisition is governed by striatal prediction errors.

    PubMed

    Pine, Alex; Sadeh, Noa; Ben-Yakov, Aya; Dudai, Yadin; Mendelsohn, Avi

    2018-04-26

    Discrepancies between expectations and outcomes, or prediction errors, are central to trial-and-error learning based on reward and punishment, and their neurobiological basis is well characterized. It is not known, however, whether the same principles apply to declarative memory systems, such as those supporting semantic learning. Here, we demonstrate with fMRI that the brain parametrically encodes the degree to which new factual information violates expectations based on prior knowledge and beliefs-most prominently in the ventral striatum, and cortical regions supporting declarative memory encoding. These semantic prediction errors determine the extent to which information is incorporated into long-term memory, such that learning is superior when incoming information counters strong incorrect recollections, thereby eliciting large prediction errors. Paradoxically, by the same account, strong accurate recollections are more amenable to being supplanted by misinformation, engendering false memories. These findings highlight a commonality in brain mechanisms and computational rules that govern declarative and nondeclarative learning, traditionally deemed dissociable.
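
    The computational rule the authors invoke is the classic delta rule from reinforcement learning; the hedged formalization below (notation ours, not the paper's) makes the reported paradox explicit.

    ```latex
    % Delta-rule formalization of a prediction error (our notation):
    \[
      \delta_t = \lambda_t - V_t, \qquad V_{t+1} = V_t + \alpha\,\delta_t
    \]
    % Here $V_t$ is the strength of the prior belief, $\lambda_t$ the observed
    % (factual) outcome, and $\alpha$ a learning rate. A strong incorrect
    % recollection makes $|\delta_t|$ large, so the update is large --
    % which aids correction, but by the same rule makes strong accurate
    % recollections vulnerable to vivid misinformation.
    ```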

  6. Drug knowledge bases and their applications in biomedical informatics research.

    PubMed

    Zhu, Yongjun; Elemento, Olivier; Pathak, Jyotishman; Wang, Fei

    2018-01-03

    Recent advances in biomedical research have generated a large volume of drug-related data. To effectively handle this flood of data, many initiatives have been taken to help researchers make good use of them. As a result of these initiatives, many drug knowledge bases have been constructed. They range from simple ones with specific focuses to comprehensive ones that contain information on almost every aspect of a drug. These curated drug knowledge bases have made significant contributions to the development of efficient and effective health information technologies for better health-care service delivery. Understanding and comparing existing drug knowledge bases and how they are applied in various biomedical studies will help us recognize the state of the art and design better knowledge bases in the future. In addition, researchers can gain insights into novel applications of drug knowledge bases through a review of successful use cases. In this study, we provide a review of existing popular drug knowledge bases and their applications in drug-related studies. We discuss challenges in constructing and using drug knowledge bases, as well as future research directions toward a better ecosystem of drug knowledge bases. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  7. ERP and Four Dimensions of Absorptive Capacity: Lessons from a Developing Country

    NASA Astrophysics Data System (ADS)

    Gil, María José Álvarez; Aksoy, Dilan; Kulcsar, Borbala

    Enterprise resource planning systems can grant crucial strategic, operational and information-based benefits to adopting firms when implemented successfully. However, a failed implementation can often result in financial losses rather than profits. Until now, research on failures and successes has focused on implementations in large manufacturing and service firms located in Western countries, particularly the USA. Nevertheless, IT has diffused intensely into developing countries through declining hardware costs and increasing benefits, and this merits as much attention as it receives in developed countries. The aim of this study is to examine the implications of knowledge transfer in a developing country, Turkey, as a paradigm in the knowledge society, with a focus on the implementation activities that foster successful installations. We suggest that absorptive capacity is an important characteristic of a firm that explains the success level of such a knowledge transfer.

  8. Proper name retrieval in temporal lobe epilepsy: naming of famous faces and landmarks.

    PubMed

    Benke, Thomas; Kuen, Eva; Schwarz, Michael; Walser, Gerald

    2013-05-01

    The objective of this study was to further explore proper name (PN) retrieval and conceptual knowledge in patients with left and right temporal lobe epilepsy (69 patients with LTLE and 62 patients with RTLE) using a refined assessment procedure. Based on the performance of a large group of age- and education-matched normals, a new test of famous faces and famous landmarks was designed. Recognition, naming, and semantic knowledge were assessed consecutively, allowing for a better characterization of deficient levels in the naming system. Impairment in PN retrieval was common in the cohort with TLE. Furthermore, side of seizure onset impaired stages of name retrieval differently: LTLE impaired the lexico-phonological processing, whereas RTLE mainly impaired the perceptual-semantic stage of object recognition. In addition to deficient PN retrieval, patients with TLE had reduced conceptual knowledge regarding famous persons and landmarks. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Planting molecular functions in an ecological context with Arabidopsis thaliana.

    PubMed

    Krämer, Ute

    2015-03-25

    The vascular plant Arabidopsis thaliana is a central genetic model and universal reference organism in plant and crop science. The successful integration of different fields of research in the study of A. thaliana has made a large contribution to our molecular understanding of key concepts in biology. The availability and active development of experimental tools and resources, in combination with the accessibility of a wealth of cumulatively acquired knowledge about this plant, support the most advanced systems biology approaches among all land plants. Research in molecular ecology and evolution has also brought the natural history of A. thaliana into the limelight. This article showcases our current knowledge of the natural history of A. thaliana from the perspective of the most closely related plant species, providing an evolutionary framework for interpreting novel findings and for developing new hypotheses based on our knowledge of this plant.

  10. Breastfeeding knowledge, attitudes and training amongst Australian community pharmacists.

    PubMed

    Ryan, Morgan; Smith, Julie

    2016-07-01

    Pharmacists are one of the most accessible and trusted professionals in the Australian health care system and can have a large impact in supporting and encouraging breastfeeding. This study aimed to research the knowledge, attitudes and training satisfaction of Australian pharmacists in the area of infant nutrition and breastfeeding. The mixed method study involved quantitative data collection via an online survey and qualitative data collected via separate semi-structured interviews. All registered pharmacists in the Australian Capital Territory and surrounding regional areas were eligible. Participants were recruited via emailed information sheets and individual onsite recruitment. Positive attitudes towards and a desire to support and advocate for breastfeeding by pharmacists were hampered by a lack of knowledge, confidence, training and education. Government or other non-profit organisations can enhance community-based support for breastfeeding, including developing new education and training programs for pharmacy students and pharmacists.

  11. Development of the Spacecraft Materials Selector Expert System

    NASA Technical Reports Server (NTRS)

    Pippin, H. G.

    2000-01-01

    A specific knowledge base to evaluate the on-orbit performance of selected materials on spacecraft is being developed under contract to the NASA SEE program. An artificial intelligence software package, the Boeing Expert System Tool (BEST), contains an inference engine used to operate knowledge bases constructed to selectively recall and distribute information about materials performance in space applications. This same system is used to make estimates of the environmental exposures expected for a given space flight. The performance capabilities of the Spacecraft Materials Selector (SMS) knowledge base are described in this paper. A case history for a planned flight experiment on ISS is shown as an example of the use of the SMS, and capabilities and limitations of the knowledge base are discussed.

  12. Integrating knowledge and control into hypermedia-based training environments: Experiments with HyperCLIPS

    NASA Technical Reports Server (NTRS)

    Hill, Randall W., Jr.

    1990-01-01

    The issues of knowledge representation and control in hypermedia-based training environments are discussed. The main objective is to integrate the flexible presentation capability of hypermedia with a knowledge-based approach to lesson discourse management. The instructional goals and their associated concepts are represented in a knowledge representation structure called a 'concept network'. Its functional usages are many: it is used to control the navigation through a presentation space, generate tests for student evaluation, and model the student. This architecture was implemented in HyperCLIPS, a hybrid system that creates a bridge between HyperCard, a popular hypertext-like system used for building user interfaces to data bases and other applications, and CLIPS, a highly portable government-owned expert system shell.

  13. Assessing an AI knowledge-base for asymptomatic liver diseases.

    PubMed

    Babic, A; Mathiesen, U; Hedin, K; Bodemar, G; Wigertz, O

    1998-01-01

    Discovering previously unseen knowledge in clinical data is important in the field of asymptomatic liver diseases. Avoiding liver biopsy, which is used as the ultimate confirmation of diagnosis, by making the decision based on relevant laboratory findings alone would be considered essential support. The system, based on Quinlan's ID3 algorithm, was simple and efficient in extracting the sought knowledge. Basic principles of applying such AI systems are therefore described and complemented with a medical evaluation. Some of the diagnostic rules were found to be useful as decision algorithms, i.e., they could be directly applied in clinical work and made part of the knowledge base of the Liver Guide, an automated decision support system.
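
    Quinlan's ID3 chooses, at each node, the attribute whose split yields the greatest information gain over the classified cases. A self-contained sketch of that criterion follows; the laboratory findings and labels are invented stand-ins for the study's actual liver panel.

    ```python
    # Core of ID3: entropy and information gain over labeled cases (data invented).
    from math import log2
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

    def information_gain(rows, labels, attr):
        """Expected entropy reduction from splitting on attribute `attr`."""
        remainder = 0.0
        for value in {r[attr] for r in rows}:
            subset = [l for r, l in zip(rows, labels) if r[attr] == value]
            remainder += len(subset) / len(labels) * entropy(subset)
        return entropy(labels) - remainder

    rows = [{"GGT": "high", "ALT": "high"}, {"GGT": "high", "ALT": "normal"},
            {"GGT": "normal", "ALT": "normal"}, {"GGT": "normal", "ALT": "normal"}]
    labels = ["biopsy", "biopsy", "no-biopsy", "no-biopsy"]

    best = max(["GGT", "ALT"], key=lambda a: information_gain(rows, labels, a))
    print(best)  # ID3 places this attribute at the root of the decision tree
    ```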

  14. The smooth (tractor) operator: insights of knowledge engineering.

    PubMed

    Cullen, Ralph H; Smarr, Cory-Ann; Serrano-Baquero, Daniel; McBride, Sara E; Beer, Jenay M; Rogers, Wendy A

    2012-11-01

    The design of and training for complex systems requires in-depth understanding of task demands imposed on users. In this project, we used the knowledge engineering approach (Bowles et al., 2004) to assess the task of mowing in a citrus grove. Knowledge engineering is divided into four phases: (1) Establish goals. We defined specific goals based on the stakeholders involved. The main goal was to identify operator demands to support improvement of the system. (2) Create a working model of the system. We reviewed product literature, analyzed the system, and conducted expert interviews. (3) Extract knowledge. We interviewed tractor operators to understand their knowledge base. (4) Structure knowledge. We analyzed and organized operator knowledge to inform project goals. We categorized the information and developed diagrams to display the knowledge effectively. This project illustrates the benefits of knowledge engineering as a qualitative research method to inform technology design and training. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. Research and development for Onboard Navigation (ONAV) ground based expert/trainer system: Preliminary ascent knowledge requirements

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.

    1988-01-01

    The preliminary version of expert knowledge for the Onboard Navigation (ONAV) Ground Based Expert Trainer Ascent system for the space shuttle is presented. Included is brief background information along with a description of the knowledge the system will contain. Information is given on rules and heuristics, telemetry status, landing sites, inertial measurement units, and a high-speed trajectory determinator (HSTD) state vector.

  16. Knowledge Acquisition and Job Training for Advanced Technical Skills Using Immersive Virtual Environment

    NASA Astrophysics Data System (ADS)

    Watanuki, Keiichi; Kojima, Kazuyuki

    The environment in which Japanese industry has achieved great respect is changing tremendously due to the globalization of world economies, while Asian countries are undergoing economic and technical development and benefiting from advances in information technology. For example, in the design of custom-made casting products, a designer who lacks knowledge of casting may not be able to produce a good design. In order to obtain a good design and manufacturing result, it is necessary to equip the designer and manufacturer with a support system for casting design, a so-called knowledge transfer and creation system. This paper proposes a new virtual-reality-based knowledge acquisition and job training system for casting design, composed of explicit and tacit knowledge transfer systems using synchronized multimedia and a knowledge internalization system using a portable virtual environment. In the proposed system, the education content is displayed in an immersive virtual environment, whereby a trainee may experience work at a virtual operation site. Provided that the trainee has gained explicit and tacit knowledge of casting through the multimedia-based knowledge transfer system, the immersive virtual environment catalyzes the internalization of knowledge and also enables the trainee to gain tacit knowledge before undergoing on-the-job training at a real operation site.

  17. A systems-based partnership learning model for strengthening primary healthcare

    PubMed Central

    2013-01-01

    Background Strengthening primary healthcare systems is vital to improving health outcomes and reducing inequity. However, there are few tools and models available in published literature showing how primary care system strengthening can be achieved on a large scale. Challenges to strengthening primary healthcare (PHC) systems include the dispersion, diversity and relative independence of primary care providers; the scope and complexity of PHC; limited infrastructure available to support population health approaches; and the generally poor and fragmented state of PHC information systems. Drawing on concepts of comprehensive PHC, integrated quality improvement (IQI) methods, system-based research networks, and system-based participatory action research, we describe a learning model for strengthening PHC that addresses these challenges. We describe the evolution of this model within the Australian Aboriginal and Torres Strait Islander primary healthcare context, successes and challenges in its application, and key issues for further research. Discussion IQI approaches combined with system-based participatory action research and system-based research networks offer potential to support program implementation and ongoing learning across a wide scope of primary healthcare practice and on a large scale. The Partnership Learning Model (PLM) can be seen as an integrated model for large-scale knowledge translation across the scope of priority aspects of PHC. With appropriate engagement of relevant stakeholders, the model may be applicable to a wide range of settings. In IQI, and in the PLM specifically, there is a clear role for research in contributing to refining and evaluating existing tools and processes, and in developing and trialling innovations. Achieving an appropriate balance between funding IQI activity as part of routine service delivery and funding IQI related research will be vital to developing and sustaining this type of PLM. Summary This paper draws together several different previously described concepts and extends the understanding of how PHC systems can be strengthened through systematic and partnership-based approaches. We describe a model developed from these concepts and its application in the Australian Indigenous primary healthcare context, and raise questions about sustainability and wider relevance of the model. PMID:24344640

  18. A review on computational systems biology of pathogen–host interactions

    PubMed Central

    Durmuş, Saliha; Çakır, Tunahan; Özgür, Arzucan; Guthke, Reinhard

    2015-01-01

    Pathogens manipulate the cellular mechanisms of host organisms via pathogen–host interactions (PHIs) in order to take advantage of the capabilities of host cells, leading to infections. The crucial role of these interspecies molecular interactions in initiating and sustaining infections necessitates a thorough understanding of the corresponding mechanisms. Unlike the traditional approach of considering the host or pathogen separately, a systems-level approach, considering the PHI system as a whole is indispensable to elucidate the mechanisms of infection. Following the technological advances in the post-genomic era, PHI data have been produced in large-scale within the last decade. Systems biology-based methods for the inference and analysis of PHI regulatory, metabolic, and protein–protein networks to shed light on infection mechanisms are gaining increasing demand thanks to the availability of omics data. The knowledge derived from the PHIs may largely contribute to the identification of new and more efficient therapeutics to prevent or cure infections. There are recent efforts for the detailed documentation of these experimentally verified PHI data through Web-based databases. Despite these advances in data archiving, there are still large amounts of PHI data in the biomedical literature yet to be discovered, and novel text mining methods are in development to unearth such hidden data. Here, we review a collection of recent studies on computational systems biology of PHIs with a special focus on the methods for the inference and analysis of PHI networks, covering also the Web-based databases and text-mining efforts to unravel the data hidden in the literature. PMID:25914674

  19. Using multimodal information for the segmentation of fluorescent micrographs with application to virology and microbiology.

    PubMed

    Held, Christian; Wenzel, Jens; Webel, Rike; Marschall, Manfred; Lang, Roland; Palmisano, Ralf; Wittenberg, Thomas

    2011-01-01

    In order to improve the reproducibility and objectivity of fluorescence microscopy based experiments and to enable the evaluation of large datasets, flexible segmentation methods are required which are able to adapt to different stainings and cell types. This adaptation is usually achieved by manual adjustment of the segmentation methods' parameters, which is time-consuming and challenging for biologists with no knowledge of image processing. To avoid this, the parameters of the presented methods automatically adapt to user-generated ground truth to determine the best method and the optimal parameter setup. These settings can then be used for segmentation of the remaining images. As robust segmentation methods form the core of such a system, the previously used watershed-transform-based segmentation routine is replaced by a fast-marching level-set-based segmentation routine which incorporates knowledge of the cell nuclei. Our evaluations reveal that the incorporation of multimodal information improves segmentation quality for the presented fluorescent datasets.
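
    The auto-adaptation idea is essentially a search over parameter settings scored against user-drawn ground truth. The sketch below uses a toy threshold segmenter and the Dice coefficient as the quality measure; both are stand-ins for the paper's fast-marching level-set routine and its actual evaluation, and all data are synthetic.

    ```python
    # Parameter adaptation against ground truth (toy segmenter; data synthetic).
    import numpy as np

    def segment(image, threshold):
        """Stand-in for the level-set routine: simple intensity threshold."""
        return image > threshold

    def dice(a, b):
        """Overlap score between two binary masks (2*|A∩B| / (|A|+|B|))."""
        inter = np.logical_and(a, b).sum()
        return 2 * inter / (a.sum() + b.sum())

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))
    ground_truth = image > 0.7          # pretend a biologist annotated this

    best = max((t / 10 for t in range(3, 10)),
               key=lambda t: dice(segment(image, t), ground_truth))
    print("selected threshold:", best)  # reused on the remaining images
    ```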

  20. Intrusive and Non-Intrusive Instruction in Dynamic Skill Training.

    DTIC Science & Technology

    1981-10-01

    …less sensitive to the processing load imposed by the dynamic task together with instructional feedback processing than were the decision-making and… between computer-based instruction of knowledge systems and computer-based instruction of dynamic skills. There is reason to expect that the findings of research on knowledge systems…

  1. Research on knowledge representation, machine learning, and knowledge acquisition

    NASA Technical Reports Server (NTRS)

    Buchanan, Bruce G.

    1987-01-01

    Research in knowledge representation, machine learning, and knowledge acquisition performed at the Knowledge Systems Laboratory is summarized. The major goal of the research was to develop flexible, effective methods for representing the qualitative knowledge necessary for solving large problems that require symbolic reasoning as well as numerical computation. The research focused on integrating different representation methods to describe different kinds of knowledge more effectively than any one method can alone. In particular, emphasis was placed on representing and using spatial information about three-dimensional objects and constraints on the arrangement of these objects in space. Another major theme was the development of robust machine learning programs that can be integrated with a variety of intelligent systems. To achieve this goal, learning methods were designed, implemented, and experimented with in several different problem-solving environments.

  2. Region Evolution eXplorer - A tool for discovering evolution trends in ontology regions.

    PubMed

    Christen, Victor; Hartung, Michael; Groß, Anika

    2015-01-01

    A large number of life science ontologies have been developed to support different application scenarios such as gene annotation or functional analysis. The continuous accumulation of new insights and knowledge affects specific portions of ontologies and thus leads to their adaptation. Therefore, it is valuable to study which ontology parts have been extensively modified or have remained unchanged. Users can monitor the evolution of an ontology to improve its further development or apply the knowledge in their applications. Here we present REX (Region Evolution eXplorer), a web-based system for exploring the evolution of ontology parts (regions). REX provides an analysis platform for currently about 1,000 versions of 16 well-known life science ontologies. Interactive workflows allow an explorative analysis of changing ontology regions and can be used to study evolution trends over long-term periods. REX is a web application providing an interactive and user-friendly interface to identify (un)stable regions in large life science ontologies. It is available at http://www.izbi.de/rex.

  3. A knowledge-base generating hierarchical fuzzy-neural controller.

    PubMed

    Kandadai, R M; Tien, J M

    1997-01-01

    We present an innovative fuzzy-neural architecture that is able to automatically generate a knowledge base, in an extractable form, for use in hierarchical knowledge-based controllers. The knowledge base is in the form of a linguistic rule base appropriate for a fuzzy inference system. First, we modify Berenji and Khedkar's (1992) GARIC architecture to enable it to automatically generate a knowledge base; a pseudosupervised learning scheme using reinforcement learning and error backpropagation is employed. Next, we further extend this architecture to a hierarchical controller that is able to generate its own knowledge base. Example applications are provided to underscore its viability.
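
    The extractable knowledge base takes the form of linguistic fuzzy rules. The sketch below shows how one such rule can be represented and fired; triangular memberships and min-intersection are common fuzzy-inference conventions, and the rule, linguistic terms, and parameters here are invented rather than generated by the architecture described above.

    ```python
    # One linguistic rule from a fuzzy rule base, evaluated by min-intersection
    # (rule and membership parameters invented).

    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def rule_fire(error, d_error):
        # RULE: IF error IS positive-small AND d_error IS negative-small
        #       THEN control IS small-decrease
        # Firing strength = min of the antecedent memberships.
        return min(tri(error, 0.0, 0.5, 1.0), tri(d_error, -1.0, -0.5, 0.0))

    print(rule_fire(0.4, -0.3))  # degree to which the consequent applies
    ```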

  4. Globalisation, Transnational Academic Mobility and the Chinese Knowledge Diaspora: An Australian Case Study

    ERIC Educational Resources Information Center

    Yang, Rui; Welch, Anthony R.

    2010-01-01

    The master discourses of economic globalisation and the knowledge economy each cite knowledge diasporas as vital "trans-national human capital". Based on a case study of a major Australian university, this article examines the potential to deploy China's large and highly-skilled diaspora in the service of Chinese and Australian…

  5. Extracting semantically enriched events from biomedical literature

    PubMed Central

    2012-01-01

    Background Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Results Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP’09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP’09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. Conclusions We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare. PMID:22621266

  6. Extracting semantically enriched events from biomedical literature.

    PubMed

    Miwa, Makoto; Thompson, Paul; McNaught, John; Kell, Douglas B; Ananiadou, Sophia

    2012-05-23

    Research into event-based text mining from the biomedical literature has been growing in popularity to facilitate the development of advanced biomedical text mining systems. Such technology permits advanced search, which goes beyond document or sentence-based retrieval. However, existing event-based systems typically ignore additional information within the textual context of events that can determine, amongst other things, whether an event represents a fact, hypothesis, experimental result or analysis of results, whether it describes new or previously reported knowledge, and whether it is speculated or negated. We refer to such contextual information as meta-knowledge. The automatic recognition of such information can permit the training of systems allowing finer-grained searching of events according to the meta-knowledge that is associated with them. Based on a corpus of 1,000 MEDLINE abstracts, fully manually annotated with both events and associated meta-knowledge, we have constructed a machine learning-based system that automatically assigns meta-knowledge information to events. This system has been integrated into EventMine, a state-of-the-art event extraction system, in order to create a more advanced system (EventMine-MK) that not only extracts events from text automatically, but also assigns five different types of meta-knowledge to these events. The meta-knowledge assignment module of EventMine-MK performs with macro-averaged F-scores in the range of 57-87% on the BioNLP'09 Shared Task corpus. EventMine-MK has been evaluated on the BioNLP'09 Shared Task subtask of detecting negated and speculated events. Our results show that EventMine-MK can outperform other state-of-the-art systems that participated in this task. We have constructed the first practical system that extracts both events and associated, detailed meta-knowledge information from biomedical literature. The automatically assigned meta-knowledge information can be used to refine search systems, in order to provide an extra search layer beyond entities and assertions, dealing with phenomena such as rhetorical intent, speculations, contradictions and negations. This finer grained search functionality can assist in several important tasks, e.g., database curation (by locating new experimental knowledge) and pathway enrichment (by providing information for inference). To allow easy integration into text mining systems, EventMine-MK is provided as a UIMA component that can be used in the interoperable text mining infrastructure, U-Compare.
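
    The reported metric, a macro-averaged F-score, is the unweighted mean of per-class F1 values, so rare meta-knowledge categories count as much as common ones. A short sketch with invented labels standing in for the meta-knowledge categories:

    ```python
    # Macro-averaged F-score: per-class F1, then an unweighted mean
    # (labels invented).
    from sklearn.metrics import f1_score

    gold = ["fact", "analysis", "fact", "speculation", "analysis"]
    pred = ["fact", "fact",     "fact", "speculation", "analysis"]
    print(f1_score(gold, pred, average="macro"))  # ~0.822 for this toy case
    ```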

  7. Advanced software development workstation project ACCESS user's guide

    NASA Technical Reports Server (NTRS)

    1990-01-01

    ACCESS is a knowledge based software information system designed to assist the user in modifying retrieved software to satisfy user specifications. A user's guide is presented for the knowledge engineer who wishes to create for ACCESS a knowledge base consisting of representations of objects in some software system. This knowledge is accessible to an end user who wishes to use the catalogued software objects to create a new application program or an input stream for an existing system. The application specific portion of an ACCESS knowledge base consists of a taxonomy of object classes, as well as instances of these classes. All objects in the knowledge base are stored in an associative memory. ACCESS provides a standard interface for the end user to browse and modify objects. In addition, the interface can be customized by the addition of application specific data entry forms and by specification of display order for the taxonomy and object attributes. These customization options are described.

  8. Knowledge-based support for the participatory design and implementation of shift systems.

    PubMed

    Gissel, A; Knauth, P

    1998-01-01

    This study developed a knowledge-based software system to support the participatory design and implementation of shift systems as a joint planning process including shift workers, the workers' committee, and management. The system was developed using a model-based approach. During the 1st phase, group discussions were repeatedly conducted with 2 experts. Thereafter a structure model of the process was generated and subsequently refined by the experts in additional semistructured interviews. Next, a factual knowledge base of 1713 relevant studies was collected on the effects of shift work. Finally, a prototype of the knowledge-based system was tested on 12 case studies. During the first 2 phases of the system, important basic information about the tasks to be carried out is provided for the user. During the 3rd phase this approach uses the problem-solving method of case-based reasoning to determine a shift rota which has already proved successful in other applications. It can then be modified in the 4th phase according to the shift workers' preferences. The last 2 phases support the final testing and evaluation of the system. The application of this system has shown that it is possible to obtain shift rotas suitable to actual problems and representative of good ergonomic solutions. A knowledge-based approach seems to provide valuable support for the complex task of designing and implementing a new shift system. The separation of the task into several phases, the provision of information at all stages, and the integration of all parties concerned seem to be essential factors for the success of the application.
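
    The third phase's case-based reasoning step can be pictured as nearest-case retrieval over plant profiles. The features, distance measure, and case base below are invented for illustration; the deployed system's actual similarity model is not described in the abstract.

    ```python
    # Case-based reasoning sketch: retrieve the stored shift rota whose plant
    # profile is nearest to the new problem (features and cases invented).

    case_base = [
        {"crew_size": 4, "night_share": 0.25, "weekend_work": 1,
         "rota": "2-2-2 fast-forward rotation"},
        {"crew_size": 5, "night_share": 0.20, "weekend_work": 0,
         "rota": "weekday three-shift system"},
    ]

    def distance(query, case):
        """Squared-difference distance over the numeric profile features."""
        keys = ("crew_size", "night_share", "weekend_work")
        return sum((query[k] - case[k]) ** 2 for k in keys)

    query = {"crew_size": 4, "night_share": 0.3, "weekend_work": 1}
    best = min(case_base, key=lambda c: distance(query, c))
    print(best["rota"])  # starting rota, then adapted to worker preferences
    ```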

  9. Distributed HUC-based modeling with SUMMA for ensemble streamflow forecasting over large regional domains.

    NASA Astrophysics Data System (ADS)

    Saharia, M.; Wood, A.; Clark, M. P.; Bennett, A.; Nijssen, B.; Clark, E.; Newman, A. J.

    2017-12-01

    Most operational streamflow forecasting systems rely on a forecaster-in-the-loop approach in which some parts of the forecast workflow require an experienced human forecaster. But this approach faces challenges surrounding process reproducibility, hindcasting capability, and extension to large domains. The operational hydrologic community is increasingly moving towards 'over-the-loop' (completely automated) large-domain simulations, yet recent developments indicate a widespread lack of community knowledge about the strengths and weaknesses of such systems for forecasting. A realistic representation of land surface hydrologic processes is a critical element for improving forecasts, but it often comes at substantial cost to forecast system agility and efficiency. While popular grid-based models support the distributed representation of land surface processes, intermediate-scale Hydrologic Unit Code (HUC)-based modeling could provide a more efficient and process-aligned spatial discretization, reducing the need for tradeoffs between model complexity and critical forecasting requirements such as ensemble methods and comprehensive model calibration. The National Center for Atmospheric Research is collaborating with the University of Washington, the Bureau of Reclamation and the USACE to implement, assess, and demonstrate real-time, over-the-loop distributed streamflow forecasting for several large western US river basins and regions. In this presentation, we present early results from short- to medium-range hydrologic and streamflow forecasts for the Pacific Northwest (PNW). We employ real-time 1/16th-degree daily ensemble model forcings as well as downscaled Global Ensemble Forecast System (GEFS) meteorological forecasts. These datasets drive an intermediate-scale configuration of the Structure for Unifying Multiple Modeling Alternatives (SUMMA) model, which represents the PNW using over 11,700 HUCs. The system produces not only streamflow forecasts (using the mizuRoute channel routing tool) but also distributed model states such as soil moisture and snow water equivalent. We also describe challenges in distributed model-based forecasting, including the application and early results of real-time hydrologic data assimilation.

  10. A web-based knowledge management system integrating Western and Traditional Chinese Medicine for relational medical diagnosis.

    PubMed

    Herrera-Hernandez, Maria C; Lai-Yuen, Susana K; Piegl, Les A; Zhang, Xiao

    2016-10-26

    This article presents the design of a web-based knowledge management system as a training and research tool for exploring key relationships between Western and Traditional Chinese Medicine, in order to facilitate relational medical diagnosis integrating these mainstream healing modalities. The main goal of this system is to facilitate decision-making processes while developing skills and creating new medical knowledge. Traditional Chinese Medicine can be considered an ancient relational knowledge-based approach, focusing on balancing interrelated human functions to reach a healthy state. Western Medicine focuses on specialties and body systems and has achieved advanced methods to evaluate the impact of a health disorder on the body's functions. Identifying key relationships between Traditional Chinese and Western Medicine opens new approaches for health care practices and can increase the understanding of human medical conditions. Our knowledge management system was designed from initial datasets of symptoms, known diagnoses and treatments collected from both medicines. The datasets were subjected to process-oriented analysis, hierarchical knowledge representation and relational database interconnection. Web technology was implemented to develop a user-friendly interface for easy navigation, training and research. Our system was prototyped with a case study on chronic prostatitis. This trial demonstrated the system's capability to let users learn the correlation approach, connecting knowledge in Western and Traditional Chinese Medicine by querying the database, mapping validated medical information, accessing complementary information from official sites, and creating new knowledge as part of the learning process. By addressing the challenging tasks of data acquisition and modeling, organization, storage and transfer, the proposed web-based knowledge management system is presented as a tool for users in medical training and research to explore, learn and update relational information for the practice of integrated medical diagnosis. This proposal in education has the potential to enable further creation of medical knowledge from both Traditional Chinese and Western Medicine for improved care provision. The presented system positively improves information visualization, the learning process and knowledge sharing, for training and development of new skills for diagnosis and treatment, and a better understanding of medical diseases. © IMechE 2016.

  11. Northeast Artificial Intelligence Consortium Annual Report - 1988. Volume 12. Computer Architectures for Very Large Knowledge Bases

    DTIC Science & Technology

    1989-10-01

    Kellog [31] takes a similar approach to ILEX in the sense that it uses existing systems rather than developing specialized hardware (the Xerox 1100...). [31] C. Kellog. From data management to...

  12. Advanced Artificial Intelligence Technology Testbed

    NASA Technical Reports Server (NTRS)

    Anken, Craig S.

    1993-01-01

    The Advanced Artificial Intelligence Technology Testbed (AAITT) is a laboratory testbed for the design, analysis, integration, evaluation, and exercising of large-scale, complex software systems composed of both knowledge-based and conventional components. The AAITT assists its users in the following ways: configuring various problem-solving application suites; observing and measuring the behavior of these applications and the interactions between their constituent modules; gathering and analyzing statistics about the occurrence of key events; and flexibly and quickly altering the interaction of modules within the applications for further study.

  13. Towards Personalized Medicine Mediated by in Vitro Virus-Based Interactome Approaches

    PubMed Central

    Ohashi, Hiroyuki; Miyamoto-Sato, Etsuko

    2014-01-01

    We have developed a simple in vitro virus (IVV) selection system based on cell-free co-translation, using a highly stable and efficient mRNA display method. The IVV system is applicable to the high-throughput and comprehensive analysis of proteins and protein–ligand interactions. Huge amounts of genomic sequence data have been generated over the last decade. The accumulated genetic alterations and the interactome networks identified within cells represent a universal feature of a disease, and knowledge of these aspects can help to determine the optimal therapy for the disease. The concept of the “integrome” has been developed as a means of integrating large amounts of data. We have developed an interactome analysis method aimed at providing individually-targeted health care. We also consider future prospects for this system. PMID:24756093

  14. Segmentation of medical images using explicit anatomical knowledge

    NASA Astrophysics Data System (ADS)

    Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee

    1999-07-01

    Knowledge-based image segmentation is defined in terms of the separation of image-analysis procedures from the representation of knowledge. Such an architecture is particularly suitable for medical image segmentation because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the model specification. This method has been applied to three separate problems: 3D thoracic CT, chest X-rays, and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods incorporating a representation of anatomical knowledge.
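
    The role of fuzzy logic for anatomical variation can be pictured as scoring candidate image regions against expected anatomical attributes and combining the memberships with a fuzzy AND. A minimal sketch; the membership ranges and attribute names are invented for illustration and are not taken from the paper.

      def trapezoid(x, a, b, c, d):
          # Trapezoidal fuzzy membership: rises over a..b, flat over b..c, falls over c..d.
          if x <= a or x >= d:
              return 0.0
          if b <= x <= c:
              return 1.0
          return (x - a) / (b - a) if x < b else (d - x) / (d - c)

      def score_region(region):
          # Fuzzy AND (minimum) of the attribute memberships.
          area = trapezoid(region["area_mm2"], 500, 800, 1500, 2000)
          density = trapezoid(region["mean_hu"], -950, -900, -750, -650)  # lung-like CT values
          return min(area, density)

      candidates = [
          {"name": "region A", "area_mm2": 1200, "mean_hu": -820},
          {"name": "region B", "area_mm2": 300, "mean_hu": -100},
      ]
      best = max(candidates, key=score_region)
      print(best["name"], score_region(best))   # region A 1.0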

  15. Design of customer knowledge management system for Aglaonema Nursery in South Tangerang, Indonesia

    NASA Astrophysics Data System (ADS)

    Sugiarto, D.; Mardianto, I.; Dewayana, TS; Khadafi, M.

    2017-12-01

    The purpose of this paper is to describe the design of a customer knowledge management system to support customer relationship management activities for an aglaonema nursery in South Tangerang, Indonesia. The steps were knowledge identification (knowledge about customers, knowledge from customers, knowledge for customers), knowledge capture, codification, analysis of system requirements, and creation of use case and activity diagrams. The results showed that some key knowledge concerned supporting customers in plant care (know-how) and the types of aglaonema together with their prices (know-what). That knowledge for customers was then codified and shared on a knowledge portal website integrated with social media. Knowledge about customers concerned customers and their behaviour in purchasing aglaonema. Knowledge from customers concerned feedback, favourites, and customer experience. Codified knowledge was placed and shared using a content management system based on WordPress.

  16. SBexpert users guide (version 1.0): a knowledge-based decision-support system for spruce beetle management.

    Treesearch

    Keith M. Reynolds; Edward H. Holsten; Richard A. Werner

    1994-01-01

    SBexpert version 1.0 is a knowledge-based decision-support system for spruce beetle (Dendroctonus rufipennis (Kby.)) management developed for use in Microsoft Windows with the KnowledgePro Windows development language. The SBexpert users guide provides detailed instructions on the use of all SBexpert features. SBexpert has four main topics (...

  17. A veterinary anatomy tutoring system.

    PubMed

    Theodoropoulos, G; Loumos, V; Antonopoulos, J

    1994-02-14

    A veterinary anatomy tutoring system was developed by using Knowledge Pro, an object-oriented software development tool with hypermedia capabilities, and MS Access, a relational database. Communication between them is facilitated by using the Structured Query Language (SQL). The architecture of the system is based on knowledge sets, each of which covers four different descriptions of an organ, namely gross anatomy (general description), gross anatomy (comparative features), histology, and embryology; these constitute the knowledge units. The knowledge units are linked with three global variables that define the animals, the topographies, and the system to which the organ belongs, creating three databases. These three databases are interrelated through the organ field in order to establish a relational model. The system allows versatility in the student's navigation through the information space by offering different modes for information location and presentation: course mode, review mode, reference mode, dissection mode, and comparison mode. In addition, the system provides a self-evaluation mode.
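
    The relational linkage described above, three tables interrelated through a shared organ field and queried over SQL, can be sketched with an in-memory database. The table and column names here are guesses for illustration, not the system's actual schema.

      import sqlite3

      con = sqlite3.connect(":memory:")
      con.executescript("""
      CREATE TABLE animals     (organ TEXT, animal TEXT);
      CREATE TABLE topography  (organ TEXT, region TEXT);
      CREATE TABLE body_system (organ TEXT, system TEXT);
      INSERT INTO animals     VALUES ('liver', 'dog'), ('liver', 'sheep');
      INSERT INTO topography  VALUES ('liver', 'abdomen');
      INSERT INTO body_system VALUES ('liver', 'digestive');
      """)

      # Interrelate the three tables through the shared organ field.
      rows = con.execute("""
          SELECT a.animal, t.region, s.system
          FROM animals a
          JOIN topography t  ON t.organ = a.organ
          JOIN body_system s ON s.organ = a.organ
          WHERE a.organ = 'liver'
      """).fetchall()
      print(rows)  # [('dog', 'abdomen', 'digestive'), ('sheep', 'abdomen', 'digestive')]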

  18. The computer integrated documentation project: A merge of hypermedia and AI techniques

    NASA Technical Reports Server (NTRS)

    Mathe, Nathalie; Boy, Guy

    1993-01-01

    To generate intelligent indexing that allows context-sensitive information retrieval, a system must be able to acquire knowledge directly through interaction with users. In this paper, we present the architecture for CID (Computer Integrated Documentation). CID is a system that enables integration of various technical documents in a hypertext framework and includes an intelligent browsing system that incorporates indexing in context. CID's knowledge-based indexing mechanism allows case-based knowledge acquisition by experimentation. It utilizes on-line user information requirements and suggestions either to reinforce current indexing in case of success or to generate new knowledge in case of failure. This allows CID's intelligent interface system to provide helpful responses based on previous experience (user feedback). We describe CID's current capabilities and provide an overview of our plans for extending the system.

  19. Decision Support Systems for Launch and Range Operations Using Jess

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar

    2007-01-01

    The virtual test bed for launch and range operations developed at NASA Ames Research Center consists of various independent expert systems advising on weather effects, toxic gas dispersions, and human health risk assessment during space-flight operations. A dedicated server supports each expert system, and a master system gathers information from the dedicated servers to support the launch decision-making process. Since the test bed is web-based, reducing network traffic and optimizing the knowledge base are critical to the success of its real-time or near-real-time operations. Jess, a fast rule engine and powerful scripting environment developed at Sandia National Laboratories, has been adopted to build the expert systems, providing robustness and scalability. Jess also supports an XML representation of the knowledge base with forward- and backward-chaining inference mechanisms. Facts added to working memory during run-time operations facilitate analyses of multiple scenarios. The knowledge base can be distributed, with one inference engine performing the inference process. This paper discusses details of the knowledge base and inference engine using Jess for a launch and range virtual test bed.
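
    Jess rules are written in a CLIPS-like syntax; to keep all sketches in this document in one language, here is a rough Python analogue of forward chaining over working-memory facts. The facts and rule content are invented, not taken from the test bed.

      # Fire rules until no new facts appear, mimicking how a Jess-style
      # engine matches rule conditions against working memory.
      facts = {("wind_speed", "high"), ("toxic_plume", "possible")}

      rules = [
          (lambda f: ("wind_speed", "high") in f, ("launch_weather", "red")),
          (lambda f: ("toxic_plume", "possible") in f, ("evac_plan", "review")),
          (lambda f: ("launch_weather", "red") in f and ("evac_plan", "review") in f,
           ("launch_decision", "hold")),
      ]

      changed = True
      while changed:
          changed = False
          for condition, conclusion in rules:
              if condition(facts) and conclusion not in facts:
                  facts.add(conclusion)          # assert a new fact
                  changed = True

      print(("launch_decision", "hold") in facts)  # True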

  20. Software For Fault-Tree Diagnosis Of A System

    NASA Technical Reports Server (NTRS)

    Iverson, Dave; Patterson-Hine, Ann; Liao, Jack

    1993-01-01

    The Fault Tree Diagnosis System (FTDS) computer program is an automated diagnostic program that identifies likely causes of a specified failure on the basis of information represented in system-reliability mathematical models known as fault trees. It is a modified implementation of the failure-cause-identification phase of Narayanan's and Viswanadham's methodology for the acquisition of knowledge and reasoning in analyzing failures of systems. The knowledge base of if/then rules is replaced with an object-oriented fault-tree representation. This enhancement yields more efficient identification of causes of failures and enables dynamic updating of the knowledge base. Written in C, C++, and Common LISP.
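
    The object-oriented fault-tree representation mentioned above can be pictured as AND/OR gate objects whose leaves are basic failure events; walking the tree enumerates candidate causes of the specified failure. A minimal sketch with invented events, not the FTDS classes themselves:

      class Event:
          def __init__(self, name):
              self.name = name
          def causes(self):
              return [[self.name]]               # a basic event explains itself

      class OrGate:
          def __init__(self, *children):
              self.children = children
          def causes(self):
              # Any one child's cause set explains the failure.
              return [c for child in self.children for c in child.causes()]

      class AndGate:
          def __init__(self, *children):
              self.children = children
          def causes(self):
              # All children must fail together: combine one cause set from each.
              combos = [[]]
              for child in self.children:
                  combos = [a + b for a in combos for b in child.causes()]
              return combos

      # Top event fails if the pump seizes, or both power feeds are lost.
      tree = OrGate(Event("pump seizure"),
                    AndGate(Event("feed A loss"), Event("feed B loss")))
      print(tree.causes())  # [['pump seizure'], ['feed A loss', 'feed B loss']]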

  1. Survey of Knowledge Representation and Reasoning Systems

    DTIC Science & Technology

    2009-07-01

    processing large volumes of unstructured information such as natural language documents, email, audio, images and video [Ferrucci et al. 2006]. Using this... information we hope to obtain improved estimation and prediction, data-mining, social network analysis, and semantic search and visualisation.

  2. Research and development for Onboard Navigation (ONAV) ground based expert/trainer system: ONAV entry knowledge requirements specification update

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.

    1988-01-01

    A revised version of the expert knowledge for the onboard navigation (ONAV) entry system is given. Included is brief background information together with a description of the knowledge that the system contains.

  3. Knowledge based system verification and validation as related to automation of space station subsystems: Rationale for a knowledge based system lifecycle

    NASA Technical Reports Server (NTRS)

    Richardson, Keith; Wong, Carla

    1988-01-01

    The role of verification and validation (V and V) in software has been to support and strengthen the software lifecycle and to ensure that the resultant code meets the standards of the requirements documents. Knowledge Based System (KBS) V and V should serve the same role, but the KBS lifecycle is ill-defined. The rationale for a simple form of the KBS lifecycle is presented, including accommodations for certain critical differences between KBS and conventional software development.

  4. a Study on Satellite Diagnostic Expert Systems Using Case-Based Approach

    NASA Astrophysics Data System (ADS)

    Park, Young-Tack; Kim, Jae-Hoon; Park, Hyun-Soo

    1997-06-01

    Many research efforts are ongoing to monitor and diagnose diverse malfunctions of satellite systems as the complexity and number of satellites increase. Currently, much monitoring and diagnosis work is carried out by human experts, but there is a need to automate the routine parts of it. Hence, it is worth studying expert systems that can perform routine work automatically, thereby allowing human experts to devote their expertise to the more critical and important areas of monitoring and diagnosis. In this paper, we employ artificial intelligence techniques to model human experts' knowledge and to perform inference over the constructed knowledge. In particular, case-based approaches are used to construct a knowledge base that models human expert capabilities using previous typical exemplars. We have designed and implemented a prototype case-based system for diagnosing satellite malfunctions using cases. Our system remembers typical failure cases and diagnoses a current malfunction by indexing the case base. Diverse methods are used to build a more user-friendly interface that allows human experts to build a knowledge base in an easy way.
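
    Retrieval by indexing the case base, matching a current malfunction against stored exemplars, can be sketched as a nearest-neighbour lookup over symptom attributes. The telemetry symptoms and diagnoses below are invented for illustration.

      # Each stored case maps observed telemetry symptoms to a past diagnosis.
      case_base = [
          ({"battery_temp": "high", "bus_voltage": "low", "solar_input": "normal"},
           "battery cell short"),
          ({"battery_temp": "normal", "bus_voltage": "low", "solar_input": "low"},
           "solar array degradation"),
      ]

      def similarity(query, case):
          # Crude similarity index: count matching symptom attributes.
          return sum(1 for k in query if case.get(k) == query[k])

      def diagnose(symptoms):
          matched, diagnosis = max(case_base, key=lambda c: similarity(symptoms, c[0]))
          return diagnosis

      print(diagnose({"battery_temp": "high", "bus_voltage": "low",
                      "solar_input": "normal"}))   # battery cell short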

  5. Autonomous power expert system

    NASA Technical Reports Server (NTRS)

    Ringer, Mark J.; Quinn, Todd M.

    1990-01-01

    The goal of the Autonomous Power System (APS) program is to develop and apply intelligent problem solving and control technologies to the Space Station Freedom Electrical Power Systems (SSF/EPS). The objectives of the program are to establish artificial intelligence/expert system technology paths, to create knowledge-based tools with advanced human-operator interfaces, and to integrate and interface knowledge-based and conventional control schemes. This program is being developed at NASA Lewis Research Center. The APS Brassboard represents a subset of a 20 kHz Space Station Power Management And Distribution (PMAD) testbed. A distributed control scheme is used to manage multiple levels of computers and switchgear. The brassboard is comprised of a set of intelligent switchgear used to effectively switch power from the sources to the loads. The Autonomous Power Expert System (APEX) portion of the APS program integrates a knowledge-based fault diagnostic system, a power resource scheduler, and an interface to the APS Brassboard. The system includes knowledge bases for system diagnostics, fault detection and isolation, and recommended actions. The scheduler autonomously assigns start times to the attached loads based on temporal and power constraints, and is able to work in a near-real-time environment for both scheduling and dynamic replanning.
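
    The scheduler's task, assigning start times so the combined draw of the attached loads never exceeds what the sources can supply, can be pictured with a greedy placement sketch. The loads, horizon, and 15 kW limit are invented numbers, not brassboard values.

      # Place each load at the earliest start hour whose whole duration keeps
      # the total draw under the power limit (temporal + power constraints).
      HORIZON, POWER_LIMIT = 24, 15.0            # hours, kW (illustrative)
      profile = [0.0] * HORIZON                  # committed draw per hour

      loads = [("heater", 6.0, 4), ("pump", 8.0, 3), ("experiment", 5.0, 6)]

      schedule = {}
      for name, kw, hours in sorted(loads, key=lambda l: -l[1]):  # biggest first
          for start in range(HORIZON - hours + 1):
              if all(p + kw <= POWER_LIMIT for p in profile[start:start + hours]):
                  for t in range(start, start + hours):
                      profile[t] += kw
                  schedule[name] = start
                  break

      print(schedule)  # {'pump': 0, 'heater': 0, 'experiment': 3}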

  6. Applied geodesy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, S.

    1987-01-01

    This volume is based on the proceedings of the CERN Accelerator School's course on Applied Geodesy for Particle Accelerators held in April 1986. The purpose was to record and disseminate the knowledge gained in recent years on the geodesy of accelerators and other large systems. The latest methods for positioning equipment to sub-millimetric accuracy in deep underground tunnels several tens of kilometers long are described, as well as such sophisticated techniques as the Navstar Global Positioning System and the Terrameter. Automation of better known instruments such as the gyroscope and Distinvar is also treated, along with the highly evolved treatment of components in a modern accelerator. Use of the methods described can be of great benefit in many areas of research and industrial geodesy such as surveying, nautical and aeronautical engineering, astronomical radio-interferometry, metrology of large components, deformation studies, etc.

  7. Cultural adaptation, compounding vulnerabilities and conjunctures in Norse Greenland.

    PubMed

    Dugmore, Andrew J; McGovern, Thomas H; Vésteinsson, Orri; Arneborg, Jette; Streeter, Richard; Keller, Christian

    2012-03-06

    Norse Greenland has been seen as a classic case of maladaptation by an inflexible temperate zone society extending into the arctic and collapse driven by climate change. This paper, however, recognizes the successful arctic adaptation achieved in Norse Greenland and argues that, although climate change had impacts, the end of Norse settlement can only be truly understood as a complex socioenvironmental system that includes local and interregional interactions operating at different geographic and temporal scales and recognizes the cultural limits to adaptation of traditional ecological knowledge. This paper is not focused on a single discovery and its implications, an approach that can encourage monocausal and environmentally deterministic emphasis to explanation, but it is the product of sustained international interdisciplinary investigations in Greenland and the rest of the North Atlantic. It is based on data acquisitions, reinterpretation of established knowledge, and a somewhat different philosophical approach to the question of collapse. We argue that the Norse Greenlanders created a flexible and successful subsistence system that responded effectively to major environmental challenges but probably fell victim to a combination of conjunctures of large-scale historic processes and vulnerabilities created by their successful prior response to climate change. Their failure was an inability to anticipate an unknowable future, an inability to broaden their traditional ecological knowledge base, and a case of being too specialized, too small, and too isolated to be able to capitalize on and compete in the new protoworld system extending into the North Atlantic in the early 15th century.

  8. A Spacelab Expert System for Remote Engineering and Science

    NASA Technical Reports Server (NTRS)

    Groleau, Nick; Colombano, Silvano; Friedland, Peter (Technical Monitor)

    1994-01-01

    NASA's space science program is based on strictly pre-planned activities. This approach does not always result in the best science. We describe an existing computer system that enables space science to be conducted in a more reactive manner through advanced automation techniques, recently used on the SLS-2 space shuttle flight in October 1993. Advanced computing techniques, usually developed in the field of Artificial Intelligence, allow large portions of the scientific investigator's knowledge to be "packaged" in a portable computer to present advice to the astronaut operator. We strongly believe that this technology has wide applicability to other forms of remote science/engineering. In this brief article, we present the technology of remote science/engineering assistance as implemented for the SLS-2 space shuttle flight. We begin with a logical overview of the system (paying particular attention to the implementation details relevant to the use of the embedded knowledge for system reasoning), then describe its use and success in space, and conclude with ideas about possible earth uses of the technology in the life and medical sciences.

  9. Big data and ophthalmic research.

    PubMed

    Clark, Antony; Ng, Jonathon Q; Morlet, Nigel; Semmens, James B

    2016-01-01

    Large population-based health administrative databases, clinical registries, and data linkage systems are a rapidly expanding resource for health research. Ophthalmic research has benefited from the use of these databases in expanding the breadth of knowledge in areas such as disease surveillance, disease etiology, health services utilization, and health outcomes. Furthermore, the quantity of data available for research has increased exponentially in recent times, particularly as e-health initiatives come online in health systems across the globe. We review some big data concepts, the databases and data linkage systems used in eye research-including their advantages and limitations, the types of studies previously undertaken, and the future direction for big data in eye research. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. The Design Manager's Aid for Intelligent Decomposition (DeMAID)

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1994-01-01

    Before the design of new complex systems such as large space platforms can begin, the possible interactions among subsystems and their parts must be determined. Once this is completed, the proposed system can be decomposed to identify its hierarchical structure. The design manager's aid for intelligent decomposition (DeMAID) is a knowledge-based system for ordering the sequence of modules and identifying a possible multilevel structure for design. Although DeMAID requires an investment of time to generate and refine the list of modules for input, it could save considerable money and time in the total design process, particularly in new design problems where the ordering of the modules has not been defined.
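
    The sequencing step at the heart of such a tool, ordering modules so each runs after the modules whose outputs it needs, can be sketched with a topological sort; where DeMAID applies knowledge-based heuristics to break circular couplings, the toy below merely reports them. The module names and couplings are invented.

      from graphlib import CycleError, TopologicalSorter

      # module -> modules whose outputs it needs (a tiny design-coupling graph)
      needs = {
          "geometry": set(),
          "aerodynamics": {"geometry"},
          "structures": {"geometry", "aerodynamics"},
          "weights": {"structures"},
          "performance": {"aerodynamics", "weights"},
      }

      try:
          print("execution order:", list(TopologicalSorter(needs).static_order()))
      except CycleError as err:
          # A real tool must break such feedback loops with heuristics.
          print("circular coupling detected:", err.args[1])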

  11. Evolution equation for quantum entanglement

    NASA Astrophysics Data System (ADS)

    Konrad, Thomas; de Melo, Fernando; Tiersch, Markus; Kasztelan, Christian; Aragão, Adriano; Buchleitner, Andreas

    2008-02-01

    Quantum information technology largely relies on a precious and fragile resource, quantum entanglement, a highly non-trivial manifestation of the coherent superposition of states of composite quantum systems. However, our knowledge of the time evolution of this resource under realistic conditions, that is, when corrupted by environment-induced decoherence, is so far limited, and general statements on entanglement dynamics in open systems are scarce. Here we prove a simple and general factorization law for quantum systems shared by two parties, which describes the time evolution of entanglement on passage of either component through an arbitrary noisy channel. The robustness of entanglement-based quantum information processing protocols is thus easily and fully characterized by a single quantity.
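
    For two qubits, the factorization law is commonly stated in terms of the concurrence $C$: sending one half of an arbitrary pure state $|\chi\rangle$ through a channel $\$$ degrades its entanglement by the same factor the channel inflicts on a maximally entangled state $|\phi^{+}\rangle$,

      C\bigl[(\mathbb{1}\otimes\$)\,|\chi\rangle\langle\chi|\bigr]
        = C\bigl[(\mathbb{1}\otimes\$)\,|\phi^{+}\rangle\langle\phi^{+}|\bigr]\,
          C\bigl[|\chi\rangle\langle\chi|\bigr].

    This is the standard form of the published result, restated here for convenience rather than derived.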

  12. Leveraging Event Reporting Through Knowledge Support: A Knowledge-Based Approach to Promoting Patient Fall Prevention.

    PubMed

    Yao, Bin; Kang, Hong; Miao, Qi; Zhou, Sicheng; Liang, Chen; Gong, Yang

    2017-01-01

    Patient falls are a common type of safety event that impairs healthcare quality. Strategies including solution tools and reporting systems for preventing patient falls have been developed and implemented in the U.S. However, the current strategies do not include timely knowledge support, which is greatly needed to bridge the gap between reporting and learning. In this study, we constructed a knowledge base of fall events by compiling expert-reviewed fall prevention solutions and integrating them into a reporting system. The knowledge base enables timely and tailored knowledge support and thus will serve as a prevailing fall prevention tool. This effort holds promise in making knowledge acquisition and management a routine process for enhancing the reporting and understanding of patient safety events.

  13. On-the-spot lung cancer differential diagnosis by label-free, molecular vibrational imaging and knowledge-based classification

    NASA Astrophysics Data System (ADS)

    Gao, Liang; Li, Fuhai; Thrall, Michael J.; Yang, Yaliang; Xing, Jiong; Hammoudi, Ahmad A.; Zhao, Hong; Massoud, Yehia; Cagle, Philip T.; Fan, Yubo; Wong, Kelvin K.; Wang, Zhiyong; Wong, Stephen T. C.

    2011-09-01

    We report the development and application of a knowledge-based coherent anti-Stokes Raman scattering (CARS) microscopy system for label-free imaging, pattern recognition, and classification of cells and tissue structures for differentiating lung cancer from non-neoplastic lung tissues and identifying lung cancer subtypes. A total of 1014 CARS images were acquired from 92 fresh frozen lung tissue samples. The established pathological workup and diagnostic cellular features were used as prior knowledge for the establishment of a knowledge-based CARS system using a machine learning approach. This system separates normal tissue, non-neoplastic tissue, and subtypes of lung cancer based on extracted quantitative features describing fibrils and cell morphology. The knowledge-based CARS system showed the ability to distinguish lung cancer from normal and non-neoplastic lung tissue with 91% sensitivity and 92% specificity. Small cell carcinomas were distinguished from non-small cell carcinomas with 100% sensitivity and specificity. As an adjunct to submitting tissue samples to routine pathology, our novel system recognizes the patterns of fibril and cell morphology, enabling medical practitioners to perform differential diagnosis of lung lesions in mere minutes. The demonstration of the strategy is also a necessary step toward in vivo point-of-care diagnosis of precancerous and cancerous lung lesions with a fiber-based CARS microendoscope.
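
    The classification step, mapping quantitative fibril and cell-morphology features to tissue classes, can be sketched generically; the synthetic features, labels, and choice of a support vector machine below are placeholders, not the authors' actual pipeline.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      # Stand-ins for per-image features, e.g. fibril density, mean cell area.
      X = rng.normal(size=(300, 3))
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic class labels

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = SVC(kernel="rbf").fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))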

  14. Organizational Influences on Interdisciplinary Interactions during Research and Design of Large-Scale Complex Engineered Systems

    NASA Technical Reports Server (NTRS)

    McGowan, Anna-Maria R.; Seifert, Colleen M.; Papalambros, Panos Y.

    2012-01-01

    The design of large-scale complex engineered systems (LaCES) such as an aircraft is inherently interdisciplinary. Multiple engineering disciplines, drawing from a team of hundreds to thousands of engineers and scientists, are woven together throughout the research, development, and systems engineering processes to realize one system. Though research and development (R&D) is typically focused in single disciplines, the interdependencies involved in LaCES require interdisciplinary R&D efforts. This study investigates the interdisciplinary interactions that take place during the R&D and early conceptual design phases in the design of LaCES. Our theoretical framework is informed by both engineering practices and social science research on complex organizations. This paper provides a preliminary perspective on some of the organizational influences on interdisciplinary interactions based on organization theory (specifically sensemaking), data from a survey of LaCES experts, and the authors' experience in research and design. The analysis reveals couplings between the engineered system and the organization that creates it. Survey respondents noted the importance of interdisciplinary interactions and their significant benefit to the engineered system, such as innovation and problem mitigation. Substantial obstacles to interdisciplinarity beyond engineering are uncovered, including communication and organizational challenges. Addressing these challenges may ultimately foster greater efficiencies in the design and development of LaCES and improve system performance by assisting with the collective integration of interdependent knowledge bases early in the R&D effort. This research suggests that organizational and human dynamics heavily influence and even constrain the engineering effort for large-scale complex systems.

  15. Scheduling and rescheduling with iterative repair

    NASA Technical Reports Server (NTRS)

    Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael

    1992-01-01

    This paper describes the GERRY scheduling and rescheduling system being applied to coordinate Space Shuttle Ground Processing. The system uses constraint-based iterative repair, a technique that starts with a complete but possibly flawed schedule and iteratively improves it by using constraint knowledge within repair heuristics. In this paper we explore the tradeoff between the informedness and the computational cost of several repair heuristics. We show empirically that some knowledge can greatly improve the convergence speed of a repair-based system, but that too much knowledge, such as the knowledge embodied within the MIN-CONFLICTS lookahead heuristic, can overwhelm a system and result in degraded performance.
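
    Constraint-based iterative repair of the kind described, starting from a complete but flawed schedule and repeatedly moving a conflicted task to its least-conflicted slot, looks roughly like the textbook min-conflicts sketch below; GERRY's actual repair heuristics encode far richer constraint knowledge.

      import random

      random.seed(1)
      TASKS, SLOTS = 8, 5
      # Two tasks conflict when they share a resource and occupy the same slot.
      shared = {(i, j) for i in range(TASKS) for j in range(TASKS)
                if i < j and (i + j) % 3 == 0}   # arbitrary toy resource map

      def conflicts(assign, t):
          return sum(1 for (i, j) in shared if t in (i, j) and assign[i] == assign[j])

      assign = [random.randrange(SLOTS) for _ in range(TASKS)]  # complete, flawed
      for _ in range(1000):
          conflicted = [t for t in range(TASKS) if conflicts(assign, t)]
          if not conflicted:
              break
          t = random.choice(conflicted)
          # Repair: move the task to the slot that produces the fewest conflicts.
          assign[t] = min(range(SLOTS),
                          key=lambda s: conflicts(assign[:t] + [s] + assign[t + 1:], t))

      print(assign, "tasks still in conflict:",
            sum(1 for t in range(TASKS) if conflicts(assign, t)))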

  16. An integrated knowledge system for the Space Shuttle hazardous gas detection system

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.; Shi, George Z.; Bangasser, Carl; Fensky, Connie; Cegielski, Eric; Overbey, Glenn

    1993-01-01

    A computer-based integrated Knowledge-Based System, the Intelligent Hypertext Manual (IHM), was developed for the Space Shuttle Hazardous Gas Detection System (HGDS) at NASA Marshall Space Flight Center (MSFC). The IHM stores HGDS-related knowledge and presents it in an interactive and intuitive manner. The manual is a combination of hypertext and an expert system, which together store experts' knowledge and experience in hazardous gas detection and analysis. The IHM's purpose is to provide HGDS personnel with the capabilities of: locating applicable documentation related to procedures, constraints, and previous fault histories; assisting in the training of personnel; enhancing the interpretation of real-time data; and recognizing and identifying possible faults in the Space Shuttle sub-systems related to hazardous gas detection.

  17. COM3/369: Knowledge-based Information Systems: A new approach for the representation and retrieval of medical information

    PubMed Central

    Mann, G; Birkmann, C; Schmidt, T; Schaeffler, V

    1999-01-01

    Introduction: Present solutions for the representation and retrieval of medical information from online sources are not very satisfying. Either the retrieval process lacks precision and completeness, or the representation does not support the update and maintenance of the represented information. Most efforts currently go into improving the combination of search engines and HTML-based documents. However, due to the current shortcomings of methods for natural language understanding, there are clear limitations to this approach. Furthermore, it does not solve the maintenance problem. At least medical information exceeding a certain complexity seems to call for approaches that rely on structured knowledge representation and corresponding retrieval mechanisms. Methods: Knowledge-based information systems are based on the following fundamental ideas. The representation of information is based on ontologies that define the structure of the domain's concepts and their relations. Views on domain models are defined and represented as retrieval schemata. Retrieval schemata can be interpreted as canonical query types focussing on specific aspects of the provided information (e.g. diagnosis- or therapy-centred views). Based on these retrieval schemata, it can be decided which parts of the information in the domain model must be represented explicitly and formalised to support the retrieval process. Propositional logic is used as the representation language. All other information can be represented in a structured but informal way using text, images, etc. Layout schemata are used to assign layout information to retrieved domain concepts. Depending on the target environment, HTML or XML can be used. Results: Based on this approach, two knowledge-based information systems have been developed. The 'Ophthalmologic Knowledge-based Information System for Diabetic Retinopathy' (OKIS-DR) provides information on diagnoses, findings, examinations, guidelines, and reference images related to diabetic retinopathy. OKIS-DR uses combinations of findings to specify the information that must be retrieved. The second system focuses on nutrition-related allergies and intolerances. Information on a patient's allergies and intolerances is used to retrieve general information on the specified combination of allergies and intolerances. As a special feature, the system generates tables showing food types and products that are or are not tolerated by patients. Evaluation by external experts and user groups showed that the described approach of knowledge-based information systems increases the precision and completeness of knowledge retrieval. Due to the structured and non-redundant representation of information, maintenance and updating of the information can be simplified. Both systems are available as WWW-based online knowledge bases and CD-ROMs (cf. http://mta.gsf.de topic: products).
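
    The retrieval-schema idea, formalised conditions attached to domain concepts that decide what a given query view returns, can be sketched as below, with propositional conditions reduced to simple conjunctions of findings. Concept names and conditions are invented, not taken from OKIS-DR.

      # Each concept carries a view (retrieval schema) and a propositional
      # condition, here a pure conjunction encoded as a set of findings.
      knowledge = [
          {"concept": "laser coagulation guideline",
           "view": "therapy", "requires": {"diabetic_retinopathy", "macular_edema"}},
          {"concept": "fundus photography protocol",
           "view": "examination", "requires": {"diabetic_retinopathy"}},
      ]

      def retrieve(view, findings):
          # Return concepts of the requested view whose conditions hold.
          return [k["concept"] for k in knowledge
                  if k["view"] == view and k["requires"] <= findings]

      print(retrieve("therapy", {"diabetic_retinopathy", "macular_edema"}))
      # ['laser coagulation guideline']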

  18. Conceptual information processing: A robust approach to KBS-DBMS integration

    NASA Technical Reports Server (NTRS)

    Lazzara, Allen V.; Tepfenhart, William; White, Richard C.; Liuzzi, Raymond

    1987-01-01

    Integrating the respective functionality and architectural features of knowledge base and data base management systems is a topic of considerable interest. Several aspects of this topic and associated issues are addressed. The significance of integration and the problems associated with accomplishing that integration are discussed. The shortcomings of current approaches to integration and the need to fuse the capabilities of both knowledge base and data base management systems motivate the investigation of information processing paradigms. One such paradigm is concept-based processing, i.e., processing based on concepts and conceptual relations. An approach to robust knowledge and data base system integration is discussed by addressing progress made in the development of an experimental model for conceptual information processing.

  19. Delivering spacecraft control centers with embedded knowledge-based systems: The methodology issue

    NASA Technical Reports Server (NTRS)

    Ayache, S.; Haziza, M.; Cayrac, D.

    1994-01-01

    Matra Marconi Space (MMS) occupies a leading place in Europe in the domain of satellite and space data processing systems. The maturity of knowledge-based system (KBS) technology and the theoretical and practical experience acquired in the development of prototype, pre-operational, and operational applications make it possible today to consider the wide operational deployment of KBSs in space applications. In this perspective, MMS has to prepare the introduction of the new methods and support tools that will form the basis of the development of such systems. This paper introduces elements of the MMS methodology initiatives in the domain and the main rationale that motivated the approach. These initiatives develop along two main axes: knowledge engineering methods and tools, and a hybrid-method approach for coexisting knowledge-based and conventional developments.
