Science.gov

Sample records for lcg mcdb-a knowledgebase

  1. LCG MCDB—a knowledgebase of Monte-Carlo simulated events

    NASA Astrophysics Data System (ADS)

    Belov, S.; Dudko, L.; Galkin, E.; Gusev, A.; Pokorski, W.; Sherstnev, A.

    2008-02-01

    In this paper we report on the LCG Monte-Carlo Data Base (MCDB) and the software developed to operate it. The main purpose of the LCG MCDB project is to provide a storage and documentation system for sophisticated event samples simulated for the LHC collaborations by experts. In many cases, modern Monte-Carlo simulation of physical processes requires expert knowledge of Monte-Carlo generators or a significant amount of CPU time to produce the events. MCDB is a knowledgebase dedicated mainly to accumulating simulated events of this type. The main motivation behind LCG MCDB is to make sophisticated MC event samples available to various physics groups. All the data in MCDB are accessible in several convenient ways. LCG MCDB is being developed within the CERN LCG Application Area Simulation project.
    Program summary:
    Program title: LCG Monte-Carlo Data Base
    Catalogue identifier: ADZX_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZX_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public Licence
    No. of lines in distributed program, including test data, etc.: 30 129
    No. of bytes in distributed program, including test data, etc.: 216 943
    Distribution format: tar.gz
    Programming language: Perl
    Computer: CPU: Intel Pentium 4, RAM: 1 Gb, HDD: 100 Gb
    Operating system: Scientific Linux CERN 3/4
    RAM: 1 073 741 824 bytes (1 Gb)
    Classification: 9
    External routines: perl >= 5.8.5; Perl modules DBD-mysql >= 2.9004, File::Basename, GD::SecurityImage, GD::SecurityImage::AC, Linux::Statistics, XML::LibXML > 1.6, XML::SAX, XML::NamespaceSupport; Apache HTTP Server >= 2.0.59; mod_auth_external >= 2.2.9; edg-utils-system RPM package; gd >= 2.0.28; rpm package CASTOR-client >= 2.1.2-4; arc-server (optional)
    Nature of problem: Often, different groups of experimentalists prepare similar samples of particle collision events or turn to the same group of authors of Monte-Carlo (MC

  2. LcgCAF: CDF access method to LCG resources

    NASA Astrophysics Data System (ADS)

    Compostella, Gabriele; Bauce, Matteo; Pagan Griso, Simone; Lucchesi, Donatella; Sgaravatto, Massimo; Cecchi, Marco

    2011-12-01

    Up to early 2011, the CDF collaboration had collected more than 8 fb-1 of data from pbar-p collisions at a center-of-mass energy of 1.96 TeV delivered by the Tevatron collider at Fermilab. Second-generation physics measurements, like precision determinations of top-quark properties or searches for the Standard Model Higgs boson, require increasing computing power for data analysis and event simulation. Instead of expanding its set of dedicated Condor-based analysis farms, CDF moved to Grid resources. While in the context of OSG this transition was performed using Condor glideins, keeping CDF's custom middleware software almost intact, in LCG a complete rewrite of the experiment's submission and monitoring tools was carried out, taking full advantage of the features offered by the gLite Workload Management System (WMS). This led to the development of a new computing facility called LcgCAF that CDF collaborators are using to exploit Grid resources in Europe in a transparent way. Given the opportunistic usage of the available resources, it is of crucial importance for CDF to maximize job efficiency from submission to output retrieval. This work describes how an experimental resubmission feature implemented in the WMS was tested in LcgCAF with the aim of lowering the overall execution time of a typical CDF job.

  3. WHALE, a management tool for Tier-2 LCG sites

    NASA Astrophysics Data System (ADS)

    Barone, L. M.; Organtini, G.; Talamo, I. G.

    2012-12-01

    The LCG (Worldwide LHC Computing Grid) is a grid-based, hierarchical, distributed computing facility composed of more than 140 computing centers, organized in 4 tiers by size and offer of services. Every site, although independent in many technical choices, has to provide services with a well-defined set of interfaces. For this reason, different LCG sites frequently need to manage very similar situations, like job behaviour on the batch system, dataset transfers between sites, operating system and experiment software installation and configuration, and monitoring of services. In this context we created WHALE (WHALE Handles Administration in an LCG Environment), a tool currently used at the T2_IT_Rome site, an LCG Tier-2 for the CMS experiment. WHALE is a generic, site-independent tool written in Python: it allows administrators to interact in a uniform and coherent way with several subsystems using a high-level syntax which hides specific commands. The architecture of WHALE is based on the plugin concept and on the possibility of connecting the output of a plugin to the input of the next one, in a pipe-like system, giving the administrator the possibility of building complex functions by combining simpler ones. The core of WHALE just handles the plugin orchestration, while even the basic functions (e.g. the WHALE activity logging) are performed by plugins, giving the capability to tune and possibly modify every component of the system. WHALE already provides many plugins useful for an LCG site and some more for a Tier-2 of the CMS experiment, especially in the fields of job management, dataset transfer and analysis of performance results and availability tests (e.g. Nagios tests, SAM tests). Thanks to its architecture and the provided plugins, WHALE makes it easy to perform tasks that, even if logically simple, are technically complex or tedious, like closing all the worker nodes with a job-failure rate greater than a given threshold. Finally, thanks to the
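    The plugin-pipeline idea described in this abstract — each plugin's output feeding the next one's input, with the orchestrator merely chaining them — can be sketched in a few lines of Python. This is an illustrative sketch only, not WHALE's actual API: the class names, record fields, and the worker-node example are invented.

```python
class Plugin:
    """Base class: a plugin maps a list of records to a list of records."""
    def run(self, records):
        raise NotImplementedError

class ListWorkerNodes(Plugin):
    def run(self, records):
        # A real site plugin would query the batch system here.
        return [{"node": "wn01", "fail_rate": 0.35},
                {"node": "wn02", "fail_rate": 0.05}]

class FilterByFailRate(Plugin):
    def __init__(self, threshold):
        self.threshold = threshold
    def run(self, records):
        # Keep only nodes whose job-failure rate exceeds the threshold.
        return [r for r in records if r["fail_rate"] > self.threshold]

class CloseNodes(Plugin):
    def run(self, records):
        # A real plugin would invoke the batch system's admin command per node.
        return [r["node"] for r in records]

def run_pipeline(plugins, records=None):
    """Core orchestration: feed each plugin's output to the next, pipe-style."""
    for plugin in plugins:
        records = plugin.run(records)
    return records

# "Close all worker nodes with a job-failure rate above 20%", as a pipeline:
closed = run_pipeline([ListWorkerNodes(), FilterByFailRate(0.2), CloseNodes()])
print(closed)  # the nodes selected for closing
```

    The point of the design is that the orchestrator knows nothing about batch systems or transfers; complex administrative actions emerge purely from composing small, single-purpose plugins.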

  4. The Knowledgebase Kibbutz

    ERIC Educational Resources Information Center

    Singer, Ross

    2008-01-01

    As libraries' collections increasingly go digital, so too does their dependence on knowledgebases to access and maintain these electronic holdings. Somewhat different from other library-based knowledge management systems (catalogs, institutional repositories, etc.), the data found in the knowledgebases of link resolvers or electronic resource…

  5. Space Environmental Effects Knowledgebase

    NASA Technical Reports Server (NTRS)

    Wood, B. E.

    2007-01-01

    This report describes the results of a program entitled Space Environmental Effects Knowledgebase, funded through a NASA NRA (NRA8-31) and monitored by personnel in the NASA Space Environmental Effects (SEE) Program. The NASA project number was 02029. The Satellite Contamination and Materials Outgassing Knowledgebase (SCMOK) was created as a part of the earlier NRA8-20. One of the previous tasks, and part of the previously developed knowledgebase, was to accumulate data from facilities using QCMs to measure outgassing data for satellite materials. The main objective of the current program was to increase the number of material outgassing datasets from 250 to approximately 500. As part of this effort, a round-robin materials outgassing measurement program was also executed, which allowed comparison of the results for the same materials tested in 10 different test facilities. Other program tasks included obtaining datasets or information packages for 1) optical effects of contaminants on optical surfaces, thermal radiators, and sensor systems and 2) space environmental effects data, and incorporating these data into the already existing NASA/SEE Knowledgebase.

  6. BESC knowledgebase public portal

    PubMed Central

    Syed, Mustafa H.; Karpinets, Tatiana V.; Parang, Morey; Leuze, Michael R.; Park, Byung H.; Hyatt, Doug; Brown, Steven D.; Moulton, Steve; Galloway, Michael D.; Uberbacher, Edward C.

    2012-01-01

    The BioEnergy Science Center (BESC) is undertaking large experimental campaigns to understand the biosynthesis and biodegradation of biomass and to develop biofuel solutions. BESC is generating large volumes of diverse data, including genome sequences, omics data and assay results. The purpose of the BESC Knowledgebase is to serve as a centralized repository for experimentally generated data and to provide an integrated, interactive and user-friendly analysis framework. The Portal makes available tools for visualization, integration and analysis of data either produced by BESC or obtained from external resources.
    Availability: http://besckb.ornl.gov
    Contact: syedmh@ornl.gov
    PMID: 22238270

  7. The Reactome pathway Knowledgebase

    PubMed Central

    Fabregat, Antonio; Sidiropoulos, Konstantinos; Garapati, Phani; Gillespie, Marc; Hausmann, Kerstin; Haw, Robin; Jassal, Bijay; Jupe, Steven; Korninger, Florian; McKay, Sheldon; Matthews, Lisa; May, Bruce; Milacic, Marija; Rothfels, Karen; Shamovsky, Veronica; Webber, Marissa; Weiser, Joel; Williams, Mark; Wu, Guanming; Stein, Lincoln; Hermjakob, Henning; D'Eustachio, Peter

    2016-01-01

    The Reactome Knowledgebase (www.reactome.org) provides molecular details of signal transduction, transport, DNA replication, metabolism and other cellular processes as an ordered network of molecular transformations—an extended version of a classic metabolic map, in a single consistent data model. Reactome functions both as an archive of biological processes and as a tool for discovering unexpected functional relationships in data such as gene expression pattern surveys or somatic mutation catalogues from tumour cells. Over the last two years we redeveloped major components of the Reactome web interface to improve usability, responsiveness and data visualization. A new pathway diagram viewer provides a faster, clearer interface and smooth zooming from the entire reaction network to the details of individual reactions. Tool performance for analysis of user datasets has been substantially improved, now generating detailed results for genome-wide expression datasets within seconds. The analysis module can now be accessed through a RESTful interface, facilitating its inclusion in third party applications. A new overview module allows the visualization of analysis results on a genome-wide Reactome pathway hierarchy using a single screen page. The search interface now provides auto-completion as well as a faceted search to narrow result lists efficiently. PMID: 26656494

  8. ECOTOX knowledgebase: Search features and customized reports

    EPA Science Inventory

    The ECOTOXicology knowledgebase (ECOTOX) is a comprehensive, publicly available knowledgebase developed and maintained by ORD/NHEERL. It is used for environmental toxicity data on aquatic life, terrestrial plants and wildlife. ECOTOX has the capability to refine and filter search...

  9. LCG Persistency Framework (CORAL, COOL, POOL): Status and Outlook in 2012

    NASA Astrophysics Data System (ADS)

    Trentadue, R.; Clemencic, M.; Dykstra, D.; Frank, M.; Front, D.; Kalkhof, A.; Loth, A.; Nowak, M.; Salnikov, A.; Valassi, A.; Wache, M.

    2012-12-01

    The LCG Persistency Framework consists of three software packages (CORAL, COOL and POOL) that address the data access requirements of the LHC experiments in several different areas. The project is the result of the collaboration between the CERN IT Department and the three experiments (ATLAS, CMS and LHCb) that are using some or all of the Persistency Framework components to access their data. POOL is a hybrid technology store for C++ objects, using a mixture of streaming and relational technologies to implement both object persistency and object metadata catalogs and collections. CORAL is an abstraction layer with an SQL-free API for accessing data stored using relational database technologies. COOL provides specific software components and tools for the handling of the time variation and versioning of the experiment conditions data. This presentation reports on the status and outlook in each of the three sub-projects at the time of the CHEP2012 conference, reviewing the usage of each package in the three LHC experiments.
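    The CORAL idea of an "SQL-free API" — queries built from typed objects, with a backend rendering them for a concrete database — can be illustrated with a minimal sketch. CORAL itself is a C++ library; the Python below is not its interface, and every class and method name here is invented for illustration.

```python
class Query:
    """Abstract query: no SQL text crosses the API boundary."""
    def __init__(self, table):
        self.table = table
        self.columns = []
        self.conditions = []

    def add_column(self, name):
        self.columns.append(name)
        return self

    def add_condition(self, column, op, value):
        self.conditions.append((column, op, value))
        return self

class SQLBackend:
    """One possible backend plugin: renders the abstract query as
    parameterized SQL. Another backend could target a different
    database technology without changing client code."""
    def render(self, q):
        cols = ", ".join(q.columns) or "*"
        sql = f"SELECT {cols} FROM {q.table}"
        if q.conditions:
            where = " AND ".join(f"{c} {op} ?" for c, op, _ in q.conditions)
            sql += f" WHERE {where}"
        params = [v for _, _, v in q.conditions]
        return sql, params

# Client code never writes SQL; it composes query objects.
q = Query("conditions_data").add_column("payload").add_condition("run", ">=", 4000)
sql, params = SQLBackend().render(q)
print(sql)
print(params)
```

    The benefit mirrored here is the one CORAL provides to the experiments: the backend (Oracle, MySQL, SQLite, Frontier in the real framework) can change without touching the experiment's data-access code.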

  10. Knowledge-based nursing diagnosis

    NASA Astrophysics Data System (ADS)

    Roy, Claudette; Hay, D. Robert

    1991-03-01

    Nursing diagnosis is an integral part of the nursing process and determines the interventions leading to outcomes for which the nurse is accountable. Diagnosis under the time constraints of modern nursing can benefit from computer assistance. A knowledge-based engineering approach was developed to address these problems. The problems addressed during system design to make the system practical extended beyond the capture of knowledge. The issues involved in implementing a professional knowledge base in a clinical setting are discussed, including system functions, structure, interfaces, the health care environment, and terminology and taxonomy. An integrated system concept from assessment through intervention and evaluation is outlined.

  11. Cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward A.; Buchanan, Bruce G.

    1988-01-01

    This final report covers work performed under Contract NCC2-220 between NASA Ames Research Center and the Knowledge Systems Laboratory, Stanford University. The period of research was from March 1, 1987 to February 29, 1988. Topics covered were as follows: (1) concurrent architectures for knowledge-based systems; (2) methods for the solution of geometric constraint satisfaction problems, and (3) reasoning under uncertainty. The research in concurrent architectures was co-funded by DARPA, as part of that agency's Strategic Computing Program. The research has been in progress since 1985, under DARPA and NASA sponsorship. The research in geometric constraint satisfaction has been done in the context of a particular application, that of determining the 3-D structure of complex protein molecules, using the constraints inferred from NMR measurements.

  12. The Coming of Knowledge-Based Business.

    ERIC Educational Resources Information Center

    Davis, Stan; Botkin, Jim

    1994-01-01

    Economic growth will come from knowledge-based businesses whose "smart" products filter and interpret information. Businesses will come to think of themselves as educators and their customers as learners. (SK)

  13. Distributed, cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    Some current research in the development and application of distributed, cooperating knowledge-based systems technology is addressed. The focus of the current research is the spacecraft ground operations environment. The underlying hypothesis is that, because of the increasing size, complexity, and cost of planned systems, conventional procedural approaches to the architecture of automated systems will give way to a more comprehensive knowledge-based approach. A hallmark of these future systems will be the integration of multiple knowledge-based agents which understand the operational goals of the system and cooperate with each other and the humans in the loop to attain the goals. The current work includes the development of a reference model for knowledge-base management, the development of a formal model of cooperating knowledge-based agents, the use of a testbed for prototyping and evaluating various knowledge-based concepts, and beginning work on the establishment of an object-oriented model of an intelligent end-to-end (spacecraft to user) system. An introductory discussion of these activities is presented, the major concepts and principles being investigated are highlighted, and their potential use in other application domains is indicated.

  14. Knowledge-Based Network Operations

    NASA Astrophysics Data System (ADS)

    Wu, Chuan-lin; Hung, Chaw-Kwei; Stedry, Steven P.; McClure, James P.; Yeh, Show-Way

    1988-03-01

    An expert system is being implemented to enhance the operability of the Ground Communication Facility (GCF) of the Jet Propulsion Laboratory's (JPL) Deep Space Network (DSN). The DSN is a tracking network for all of JPL's spacecraft plus a subset of spacecraft launched by other NASA centers. A GCF upgrade task is set to replace the current aging GCF system with modern equipment capable of supporting a knowledge-based monitor-and-control approach. The expert system, implemented with KEE on a SUN workstation, performs network fault management, configuration management, and performance management in real time. Monitor data are collected from each processor and DSCC every five seconds. In addition to serving as input parameters for the expert system, extracted management information is used to update a management information database. For monitor and control purposes, the software of each processor is divided into layers following the OSI standard. Each layer is modeled as a finite state machine. A System Management Application Process (SMAP) is implemented at the application layer; it coordinates the layer managers of the same processor and communicates with peer SMAPs on other processors. The expert system will be tuned by augmenting the production rules as operations proceed, and its performance will be measured.
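    The "each layer is modeled as a finite state machine" design can be sketched as a transition table plus a per-layer manager object. The states and events below are illustrative inventions, not the actual GCF protocol.

```python
# (state, event) -> next state; any pair not listed leaves the state unchanged.
TRANSITIONS = {
    ("down", "initialize"): "initializing",
    ("initializing", "init_ok"): "up",
    ("initializing", "init_fail"): "down",
    ("up", "fault_detected"): "degraded",
    ("degraded", "fault_cleared"): "up",
    ("up", "shutdown"): "down",
}

class LayerManager:
    """One manager per OSI-style layer; an SMAP-like coordinator would
    hold one of these per layer and relay events between processors."""
    def __init__(self, layer_name, state="down"):
        self.layer_name = layer_name
        self.state = state

    def handle(self, event):
        """Apply one monitor event and return the resulting state."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

mgr = LayerManager("transport")
for ev in ["initialize", "init_ok", "fault_detected", "fault_cleared"]:
    mgr.handle(ev)
print(mgr.state)  # back to "up" after the fault clears
```

    Modeling each layer this way makes the monitor data stream directly consumable: every five-second report is just a sequence of events driving well-defined state changes, which the expert system's rules can then inspect.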

  15. Protective Effects of the Launch/Entry Suit (LES) and the Liquid Cooling Garment(LCG) During Re-entry and Landing After Spaceflight

    NASA Technical Reports Server (NTRS)

    Perez, Sondra A.; Charles, John B.; Fortner, G. William; Hurst, Victor, IV; Meck, Janice V.

    2002-01-01

    Heart rate and arterial pressure were measured during shuttle re-entry, landing and initial standing in crewmembers with and without inflated anti-g suits and with and without liquid cooling garments (LCG). Preflight, three measurements were obtained seated, then standing. Prior to and during re-entry, arterial pressure and heart rate were measured every five minutes until wheels stop (WS). Then crewmembers initiated three seated and three standing measurements. In subjects without inflated anti-g suits, SBP and DBP were significantly lower during preflight standing (P = 0.006; P = 0.001, respectively) and at touchdown (TD) (P = 0.001; P = 0.003, respectively); standing SBP was significantly lower after WS. Non-LCG users developed significantly higher heart rates during re-entry (P = 0.029, maxG; P = 0.05, TD; P = 0.02, post-WS seated; P = 0.01, post-WS standing) than LCG users. Our data suggest that the anti-g suit is effective, but the combination of anti-g suit and LCG is more effective.

  16. Effective knowledge-based potentials.

    PubMed

    Ferrada, Evandro; Melo, Francisco

    2009-07-01

    Empirical or knowledge-based potentials have many applications in structural biology, such as the prediction of protein structure and of protein-protein and protein-ligand interactions, the evaluation of the stability of mutant proteins, the assessment of errors in experimentally solved structures, and the design of new proteins. Here, we describe a simple procedure to derive and use pairwise distance-dependent potentials that rely on the definition of effective atomic interactions, which attempt to capture interactions that are more likely to be physically relevant. Based on a difficult benchmark test composed of proteins with different secondary structure composition and representing many different folds, we show that the use of effective atomic interactions significantly improves the performance of potentials at discriminating between native and near-native conformations. We also found that, in agreement with previous reports, the potentials derived from the observed effective atomic interactions in native protein structures contain a larger amount of mutual information. A detailed analysis of the effective energy functions shows that atom connectivity effects, which mostly arise when deriving the potential by incorporating indirect atomic interactions occurring beyond the first atomic shell, are clearly filtered out. The shape of the energy functions for direct atomic interactions representing hydrogen bonding and the formation of disulfide and salt bridges is almost unaffected when effective interactions are taken into account. On the contrary, the shape of the energy functions for indirect atomic interactions (i.e., those describing the interaction between two atoms bound to a directly interacting pair) is clearly different when effective interactions are considered. Effective energy functions for indirectly interacting atom pairs are not influenced by the shape or the energy minimum observed for the corresponding directly interacting atom pair. Our results
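    The standard construction behind distance-dependent knowledge-based potentials of this kind is the inverse-Boltzmann relation E(d) = -kT ln(f_obs(d) / f_ref(d)): distance bins where an atom pair is observed more often than a reference state expects get negative (favourable) pseudo-energies. The sketch below shows only this generic construction with made-up counts; it does not reproduce the paper's specific "effective interaction" filtering.

```python
import math

def potential(obs_counts, ref_counts, kT=0.582):
    """Per-bin pseudo-energies E(d) = -kT * ln(f_obs / f_ref).
    kT defaults to ~0.582 kcal/mol (RT near 293 K)."""
    n_obs, n_ref = sum(obs_counts), sum(ref_counts)
    energies = []
    for o, r in zip(obs_counts, ref_counts):
        f_obs = o / n_obs   # observed frequency in this distance bin
        f_ref = r / n_ref   # reference-state frequency in the same bin
        energies.append(-kT * math.log(f_obs / f_ref))
    return energies

# Hypothetical pair counts per distance bin vs. a uniform reference state:
obs = [40, 30, 20, 10]
ref = [25, 25, 25, 25]
E = potential(obs, ref)
print([round(e, 3) for e in E])  # favourable (negative) at short range here
```

    Over-represented bins (the first two) come out negative and under-represented bins positive, which is the sign convention that lets the summed pseudo-energy discriminate native-like from non-native conformations.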

  17. Knowledge-based diagnosis for aerospace systems

    NASA Technical Reports Server (NTRS)

    Atkinson, David J.

    1988-01-01

    The need for automated diagnosis in aerospace systems and the approach of using knowledge-based systems are examined. Research issues in knowledge-based diagnosis which are important for aerospace applications are treated along with a review of recent relevant research developments in Artificial Intelligence. The design and operation of some existing knowledge-based diagnosis systems are described. The systems described and compared include the LES expert system for liquid oxygen loading at NASA Kennedy Space Center, the FAITH diagnosis system developed at the Jet Propulsion Laboratory, the PES procedural expert system developed at SRI International, the CSRL approach developed at Ohio State University, the StarPlan system developed by Ford Aerospace, the IDM integrated diagnostic model, and the DRAPhys diagnostic system developed at NASA Langley Research Center.

  18. Knowledge-based commodity distribution planning

    NASA Technical Reports Server (NTRS)

    Saks, Victor; Johnson, Ivan

    1994-01-01

    This paper presents an overview of a Decision Support System (DSS) that incorporates knowledge-based (KB) and commercial off-the-shelf (COTS) technology components. The Knowledge-Based Logistics Planning Shell (KBLPS) is a state-of-the-art DSS with an interactive, map-oriented graphical user interface and powerful underlying planning algorithms. KBLPS was designed and implemented to support skilled Army logisticians in rapidly preparing and evaluating logistics plans, in order to support corps-level battle scenarios. KBLPS represents a substantial advance in graphical interactive planning tools, with the inclusion of intelligent planning algorithms that provide a powerful adjunct to the planning skills of commodity distribution planners.

  19. Decision Support and Knowledge-Based Systems.

    ERIC Educational Resources Information Center

    Konsynski, Benn R.; And Others

    1988-01-01

    A series of articles addresses issues concerning decision support and knowledge based systems. Topics covered include knowledge-based systems for information centers; object oriented systems; strategic information systems case studies; user perception; manipulation of certainty factors by individuals and expert systems; spreadsheet program use;…

  20. The importance of knowledge-based technology.

    PubMed

    Cipriano, Pamela F

    2012-01-01

    Nurse executives are responsible for a workforce that can provide safer and more efficient care in a complex sociotechnical environment. National quality priorities rely on technologies to provide data collection, share information, and leverage analytic capabilities to interpret findings and inform approaches to care that will achieve better outcomes. As a key steward for quality, the nurse executive exercises leadership to provide the infrastructure to build and manage nursing knowledge and instill accountability for following evidence-based practices. These actions contribute to a learning health system where new knowledge is captured as a by-product of care delivery enabled by knowledge-based electronic systems. The learning health system also relies on rigorous scientific evidence embedded into practice at the point of care. The nurse executive optimizes use of knowledge-based technologies, integrated throughout the organization, that have the capacity to help transform health care.

  1. Knowledge-based system for computer security

    SciTech Connect

    Hunteman, W.J.

    1988-01-01

    The rapid expansion of computer security information and technology has provided little support for the security officer to identify and implement the safeguards needed to secure a computing system. The Department of Energy Center for Computer Security is developing a knowledge-based computer security system to provide expert knowledge to the security officer. The system is policy-based and incorporates a comprehensive list of system attack scenarios and safeguards that implement the required policy while defending against the attacks. 10 figs.

  2. Knowledge-Based Entrepreneurship in a Boundless Research System

    ERIC Educational Resources Information Center

    Dell'Anno, Davide

    2008-01-01

    International entrepreneurship and knowledge-based entrepreneurship have recently generated considerable academic and non-academic attention. This paper explores the "new" field of knowledge-based entrepreneurship in a boundless research system. Cultural barriers to the development of business opportunities by researchers persist in some academic…

  3. Applying Knowledge-Based Techniques to Software Development.

    ERIC Educational Resources Information Center

    Harandi, Mehdi T.

    1986-01-01

    Reviews overall structure and design principles of a knowledge-based programming support tool, the Knowledge-Based Programming Assistant, which is being developed at University of Illinois Urbana-Champaign. The system's major units (program design program coding, and intelligent debugging) and additional functions are described. (MBR)

  4. Knowledge-based public health situation awareness

    NASA Astrophysics Data System (ADS)

    Mirhaji, Parsa; Zhang, Jiajie; Srinivasan, Arunkumar; Richesson, Rachel L.; Smith, Jack W.

    2004-09-01

    There have been numerous efforts to create comprehensive databases from multiple sources to monitor the dynamics of public health and, most specifically, to detect potential bioterrorism threats before widespread dissemination. But there is little evidence for the assertion that these systems are timely and dependable, or that they can reliably distinguish man-made from natural incidents. One must weigh the value of so-called 'syndromic surveillance systems' against the costs involved in the design, development, implementation and maintenance of such systems and in the investigation of the inevitable false alarms [1]. In this article we introduce a new perspective on the problem domain, with a shift in paradigm from 'surveillance' toward 'awareness'. As we conceptualize a rather different approach to the problem, we introduce a different methodology for applying concepts from information science, computer science, cognitive science and human-computer interaction to the design and development of so-called 'public health situation awareness systems'. We share some of our design and implementation concepts for the prototype system under development in the Center for Biosecurity and Public Health Informatics Research at the University of Texas Health Science Center at Houston. The system is based on a knowledgebase containing ontologies, with different layers of abstraction and from multiple domains, that provide the context for information integration, knowledge discovery, interactive data mining, information visualization, information sharing and communications. The modular design of the knowledgebase and its knowledge representation formalism enable incremental evolution of the system from a partial system to a comprehensive knowledgebase of 'public health situation awareness' as it acquires new knowledge through interactions with domain experts or automatic discovery of new knowledge.

  5. Systems Biology Knowledgebase (GSC8 Meeting)

    ScienceCinema

    Cottingham, Robert W. [ORNL]

    2016-07-12

    The Genomic Standards Consortium was formed in September 2005. It is an international, open-membership working body which promotes standardization in the description of genomes and the exchange and integration of genomic data. The 2009 meeting was an activity of a five-year "Research Coordination Network" grant from the National Science Foundation and was held at the DOE Joint Genome Institute, with organizational support provided by the JGI and by the University of California, San Diego. Robert W. Cottingham of Oak Ridge National Laboratory discusses the DOE KnowledgeBase at the Genomic Standards Consortium's 8th meeting at the DOE JGI in Walnut Creek, Calif. on Sept. 9, 2009.

  6. Knowledge-based systems in Japan

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward; Engelmore, Robert S.; Friedland, Peter E.; Johnson, Bruce B.; Nii, H. Penny; Schorr, Herbert; Shrobe, Howard

    1994-01-01

    This report summarizes a study of the state-of-the-art in knowledge-based systems technology in Japan, organized by the Japanese Technology Evaluation Center (JTEC) under the sponsorship of the National Science Foundation and the Advanced Research Projects Agency. The panel visited 19 Japanese sites in March 1992. Based on these site visits plus other interactions with Japanese organizations, both before and after the site visits, the panel prepared a draft final report. JTEC sent the draft to the host organizations for their review. The final report was published in May 1993.

  7. Knowledge-based Autonomous Test Engineer (KATE)

    NASA Technical Reports Server (NTRS)

    Parrish, Carrie L.; Brown, Barbara L.

    1991-01-01

    Mathematical models of system components have long been used to allow simulators to predict system behavior to various stimuli. Recent efforts to monitor, diagnose, and control real-time systems using component models have experienced similar success. NASA Kennedy is continuing the development of a tool for implementing real-time knowledge-based diagnostic and control systems called KATE (Knowledge based Autonomous Test Engineer). KATE is a model-based reasoning shell designed to provide autonomous control, monitoring, fault detection, and diagnostics for complex engineering systems by applying its reasoning techniques to an exchangeable quantitative model describing the structure and function of the various system components and their systemic behavior.

  8. An Introduction to the Heliophysics Event Knowledgebase

    NASA Astrophysics Data System (ADS)

    Hurlburt, Neal E.; Cheung, M.; Schrijver, C.; Chang, L.; Freeland, S.; Green, S.; Heck, C.; Jaffey, A.; Kobashi, A.; Schiff, D.; Serafin, J.; Seguin, R.; Slater, G.; Somani, A.; Timmons, R.

    2010-05-01

    The immense volume of data generated by the suite of instruments on SDO requires new tools for efficiently identifying and accessing data that are most relevant to research investigations. We have developed the Heliophysics Events Knowledgebase (HEK) to fill this need. The system developed to support the HEK combines automated datamining using feature detection methods; high-performance visualization systems for data markup; and web-services and clients for searching the resulting metadata, reviewing results and efficient access to the data. We will review these components and present examples of their use with SDO data.

  9. Tools for constructing knowledge-based systems

    SciTech Connect

    Cross, G.R.

    1986-03-01

    The original expert systems for the most part were handcrafted directly, using various dialects of the LISP programming language. The inference and knowledge representation components of these systems can be separated from the domain-specific portion of the expert system and can be used again for an entirely different task. Some of these tools, generically called shells, are discussed. Although these shells provide help in building knowledge-based systems, considerable skill in artificial intelligence programming is still necessary to create an expert system that accomplishes a nontrivial task.

  10. Analysis of Unit-Level Changes in Operations with Increased SPP Wind from EPRI/LCG Balancing Study

    SciTech Connect

    Hadley, Stanton W

    2012-01-01

    Wind power development in the United States is outpacing previous estimates for many regions, particularly those with good wind resources. The pace of wind power deployment may soon outstrip regional capabilities to provide transmission and integration services to achieve the most economic power system operation. Conversely, regions such as the Southeastern United States do not have good wind resources and will have difficulty meeting proposed federal Renewable Portfolio Standards with local supply. There is a growing need to explore innovative solutions for collaboration between regions to achieve the least cost solution for meeting such a renewable energy mandate. The Department of Energy funded the project 'Integrating Midwest Wind Energy into Southeast Electricity Markets' to be led by EPRI in coordination with the main authorities for the regions: SPP, Entergy, TVA, Southern Company and OPC. EPRI utilized several subcontractors for the project including LCG, the developers of the model UPLAN. The study aims to evaluate the operating cost benefits of coordination of scheduling and balancing for Southwest Power Pool (SPP) wind transfers to Southeastern Electric Reliability Council (SERC) Balancing Authorities (BAs). The primary objective of this project is to analyze the benefits of regional cooperation for integrating mid-western wind energy into southeast electricity markets. Scenarios were defined, modeled and investigated to address production variability and uncertainty and the associated balancing of large quantities of wind power in SPP and delivery to energy markets in the southern regions of the SERC. DOE funded Oak Ridge National Laboratory to provide additional support to the project, including a review of results and any side analysis that may provide additional insight. This report is a unit-by-unit analysis of changes in operations due to the different scenarios used in the overall study. It focuses on the change in capacity factors and the number

  11. Bioenergy Science Center KnowledgeBase

    DOE Data Explorer

    Syed, M. H.; Karpinets, T. V.; Parang, M.; Leuze, M. R.; Park, B. H.; Hyatt, D.; Brown, S. D.; Moulton, S.; Galloway, M. D.; Uberbacher, E. C.

    The challenge of converting cellulosic biomass to sugars is the dominant obstacle to cost-effective production of biofuels in quantities significant enough to displace U.S. consumption of fossil transportation fuels. The BioEnergy Science Center (BESC) tackles this challenge of biomass recalcitrance by closely linking (1) plant research to make cell walls easier to deconstruct, and (2) microbial research to develop multi-talented biocatalysts tailor-made to produce biofuels in a single step. [from the 2011 BESC factsheet] The BioEnergy Science Center (BESC) is a multi-institutional, multidisciplinary research (biological, chemical, physical and computational sciences, mathematics and engineering) organization focused on the fundamental understanding and elimination of biomass recalcitrance. The BESC Knowledgebase and its associated tools are a discovery platform for bioenergy research. It consists of a collection of metadata, data, and computational tools for data analysis, integration, comparison and visualization for plants and microbes in the center. The BESC Knowledgebase (KB) and BESC Laboratory Information Management System (LIMS) enable bioenergy researchers to perform systemic research. [http://bobcat.ornl.gov/besc/index.jsp]

  12. Knowledge-based expert system configurator

    SciTech Connect

    Wakefield, K.A.; Gould, S.S.

    1990-01-01

    The term "knowledge-based expert system" usually brings to mind a rather extensive list of commercially available expert system shells, with the associated complexity of implementing the given inferencing strategies to drive a rule base of knowledge for solving particular classes of problems. A significant amount of learning time is required to understand all of the intricacies of these systems in order to effectively utilize their salient features while working around the "canned" constraints. The amount of effort required to prototype a "first attempt" is therefore substantial and can quickly lead to the unfortunate effect of reticence toward applying expert systems. This paper describes an alternative to the use of specialized shells in developing or prototyping first-attempt knowledge-based expert systems, using Lotus 123, a commonly used spreadsheet software package. The advantages of using this approach are discussed. The working example presented makes use of the forward-chaining capabilities available to determine automatically the hardware jumper and switch configuration for a distributed process control system. Hardware configuration control documentation is generated for use by field engineers and maintenance technicians. 4 refs., 4 figs.

  13. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  14. Knowledge-based programming support tool

    SciTech Connect

    Harandi, M.T.

    1983-01-01

    This paper presents an overview of a knowledge-based programming support tool. Although the system would not synthesize programs automatically, it has the capability of aiding programmers in various phases of program production, such as design, coding, debugging, and testing. The underlying design principles of this system are similar to those governing the implementation of knowledge-based expertise in other domains of human mental skill. The system is composed of several major units, each an expert system for a sub-domain of the program development process. It implements various elements of programming expertise as an interactive system equipped with provisions by which the domain specialist can easily and effectively transfer to the system the knowledge it needs for its decision making. 19 references.

  15. Knowledge-based scheduling of arrival aircraft

    NASA Technical Reports Server (NTRS)

    Krzeczowski, K.; Davis, T.; Erzberger, H.; Lev-Ram, I.; Bergh, C.

    1995-01-01

    A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examine both performance criteria, such as delay reduction, and workload reduction criteria, such as conflict avoidance. The objective of the algorithms is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper will describe the scheduling algorithms, give examples of their use, and present data regarding their potential benefits to the air traffic system.
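    The abstract above describes sequencing by arrival time, landing-time assignment, and runway assignment driven by rules. As an illustrative sketch only (not the NASA algorithm; the separation value, runway names, and flight data are invented), the interplay of those three rule types can be shown as:

    ```python
    # Illustrative sketch of rule-driven arrival scheduling: aircraft are
    # sequenced by estimated arrival time, then assigned the earliest-available
    # runway, with landing times kept a minimum separation apart per runway.
    MIN_SEPARATION = 90  # seconds between landings on one runway (assumed value)

    def schedule(arrivals, runways=("27L", "27R")):
        """arrivals: list of (flight, estimated_arrival_time_in_seconds)."""
        next_free = {r: 0 for r in runways}
        plan = []
        for flight, eta in sorted(arrivals, key=lambda a: a[1]):   # sequencing rule
            runway = min(runways, key=lambda r: max(next_free[r], eta))  # runway rule
            landing = max(next_free[runway], eta)                  # delay rule
            next_free[runway] = landing + MIN_SEPARATION           # separation rule
            plan.append((flight, runway, landing))
        return plan

    plan = schedule([("AA1", 0), ("UA2", 10), ("DL3", 20)])
    for flight, runway, t in plan:
        print(flight, runway, t)
    ```

    A real scheduler would layer many more rules (conflict avoidance, controller workload) on top of this skeleton; the point is only that each decision is a separate, inspectable rule.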

  16. Knowledge-based generalization of metabolic models.

    PubMed

    Zhukova, Anna; Sherman, David James

    2014-07-01

    Genome-scale metabolic model reconstruction is a complicated process beginning with (semi-)automatic inference of the reactions participating in the organism's metabolism, followed by many iterations of network analysis and improvement. Despite advances in automatic model inference and analysis tools, reconstruction may still miss some reactions or add erroneous ones. Consequently, a human expert's analysis of the model will continue to play an important role in all the iterations of the reconstruction process. This analysis is hampered by the size of the genome-scale models (typically thousands of reactions), which makes it hard for a human to understand them. To aid human experts in curating and analyzing metabolic models, we have developed a method for knowledge-based generalization that provides a higher-level view of a metabolic model, masking its inessential details while presenting its essential structure. The method groups biochemical species in the model into semantically equivalent classes based on the ChEBI ontology, identifies reactions that become equivalent with respect to the generalized species, and factors those reactions into generalized reactions. Generalization allows curators to quickly identify divergences from the expected structure of the model, such as alternative paths or missing reactions, that are the priority targets for further curation. We have applied our method to genome-scale yeast metabolic models and shown that it improves understanding by helping to identify both specificities and potential errors. PMID:24766276
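    The generalization idea described above can be sketched in a few lines: species are mapped to ontology-based equivalence classes, and reactions that coincide after that mapping are factored into one generalized reaction. The class map below stands in for ChEBI lookups and is invented for illustration; it is not the authors' implementation.

    ```python
    # Hypothetical species-to-class map, standing in for a ChEBI ontology lookup.
    SPECIES_CLASS = {
        "palmitate": "fatty acid",
        "stearate": "fatty acid",
        "palmitoyl-CoA": "fatty acyl-CoA",
        "stearoyl-CoA": "fatty acyl-CoA",
        "CoA": "CoA",
        "ATP": "ATP",
    }

    def generalize_reaction(reaction):
        """Replace each species in (substrates, products) with its class."""
        subs, prods = reaction
        to_classes = lambda names: frozenset(SPECIES_CLASS.get(n, n) for n in names)
        return (to_classes(subs), to_classes(prods))

    def generalize_model(reactions):
        """Group reactions that become identical after species generalization."""
        groups = {}
        for r in reactions:
            groups.setdefault(generalize_reaction(r), []).append(r)
        return groups

    reactions = [
        (("palmitate", "CoA", "ATP"), ("palmitoyl-CoA",)),
        (("stearate", "CoA", "ATP"), ("stearoyl-CoA",)),
    ]
    generalized = generalize_model(reactions)
    # Both acyl-CoA synthesis reactions collapse into one generalized reaction.
    print(len(generalized))
    ```

    A curator reviewing the generalized view sees one "fatty acid activation" reaction instead of one per chain length, which is what makes missing or divergent paths stand out.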

  17. Knowledge-based reusable software synthesis system

    NASA Technical Reports Server (NTRS)

    Donaldson, Cammie

    1989-01-01

    The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.

  18. A Knowledge-Based Imagery Exploitation System

    NASA Astrophysics Data System (ADS)

    Smyrniotis, Chuck; Payton, Paul M.; Barrett, Eamon B.

    1989-03-01

    Automation of major portions of the imagery exploitation process is becoming a necessity for meeting current and future imagery exploitation needs. In this paper we describe a prototype Automated Exploitation System (AES) which addresses requirements for monitoring objects of interest and situation assessment in large geographic areas. The purpose of AES is to aid the image analyst in performing routine, commonplace tasks more effectively. AES consists of four main subsystems: Cue Extractor (CE), Knowledge-Based Exploitation (KBE), Interactive Work-Station (IWS), and a database subsystem. The CE processes raw image data and identifies objects and target cues based on pixel- and object-model data. Cues and image registration coefficients are passed to KBE for screening and verification, situation assessment, and planning. KBE combines the cues with ground-truth and doctrinal knowledge in screening the cues to determine their importance. KBE generates reports on image analysis which are passed on to the IWS, from which an image analyst can monitor, observe, and evaluate system functionality as well as respond to critical items identified by KBE. The database subsystem stores and shares reference imagery, collateral information, and digital terrain data to support both automated and interactive processing. This partitioning of functions to subsystems facilitates hierarchical application of knowledge in image interpretation. The current AES prototype helps in identification, capture, representation, and refinement of knowledge. The KBE subsystem, which is the primary focus of the present paper, runs on a Symbolics 3675 computer and its software is written in the ART expert system and the LISP language.

  19. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS for the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBSs cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBSs, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBSs will be developed, and modification of the SSF to support these tools and methods will be addressed.

  20. A Knowledge-Based System Developer for aerospace applications

    NASA Technical Reports Server (NTRS)

    Shi, George Z.; Wu, Kewei; Fensky, Connie S.; Lo, Ching F.

    1993-01-01

    A prototype Knowledge-Based System Developer (KBSD) has been developed for aerospace applications by utilizing artificial intelligence technology. The KBSD directly acquires knowledge from domain experts through a graphical interface and then builds expert systems from that knowledge. This raises the state of the art of knowledge acquisition/expert system technology to a new level by lessening the need for skilled knowledge engineers. The feasibility, applicability, and efficiency of the proposed concept were established, justifying a continuation that would develop the prototype into a full-scale, general-purpose knowledge-based system developer. The KBSD has great commercial potential. It will provide a marketable software shell which alleviates the need for knowledge engineers and increases productivity in the workplace. The KBSD will therefore make knowledge-based systems available to a large portion of industry.

  1. The Knowledge-Based Economy and E-Learning: Critical Considerations for Workplace Democracy

    ERIC Educational Resources Information Center

    Remtulla, Karim A.

    2007-01-01

    The ideological shift by nation-states to "a knowledge-based economy" (also referred to as "knowledge-based society") is causing changes in the workplace. Brought about by the forces of globalisation and technological innovation, the ideologies of the "knowledge-based economy" are not limited to influencing the production, consumption and economic…

  2. Online Knowledge-Based Model for Big Data Topic Extraction

    PubMed Central

    Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan

    2016-01-01

    Lifelong machine learning (LML) models learn with experience, maintaining a knowledge-base without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge-base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while reducing the processing cost to half. PMID:27195004

  3. Big Data Analytics in Immunology: A Knowledge-Based Approach

    PubMed Central

    Zhang, Guang Lan

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow. PMID:25045677

  4. Dynamic Strategic Planning in a Professional Knowledge-Based Organization

    ERIC Educational Resources Information Center

    Olivarius, Niels de Fine; Kousgaard, Marius Brostrom; Reventlow, Susanne; Quelle, Dan Grevelund; Tulinius, Charlotte

    2010-01-01

    Professional, knowledge-based institutions have a particular form of organization and culture that makes special demands on the strategic planning supervised by research administrators and managers. A model for dynamic strategic planning based on a pragmatic utilization of the multitude of strategy models was used in a small university-affiliated…

  5. CACTUS: Command and Control Training Using Knowledge-Based Simulations

    ERIC Educational Resources Information Center

    Hartley, Roger; Ravenscroft, Andrew; Williams, R. J.

    2008-01-01

    The CACTUS project was concerned with command and control training of large incidents where public order may be at risk, such as large demonstrations and marches. The training requirements and objectives of the project are first summarized justifying the use of knowledge-based computer methods to support and extend conventional training…

  6. PLAN-IT - Knowledge-based mission sequencing

    NASA Technical Reports Server (NTRS)

    Biefeld, Eric W.

    1987-01-01

    PLAN-IT (Plan-Integrated Timelines), a knowledge-based approach to assist in mission sequencing, is discussed. PLAN-IT uses a large set of scheduling techniques known as strategies to develop and maintain a mission sequence. The approach implemented by PLAN-IT and the current applications of PLAN-IT for sequencing at NASA are reported.

  7. Big data analytics in immunology: a knowledge-based approach.

    PubMed

    Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow. PMID:25045677

  9. Value Creation in the Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Liu, Fang-Chun

    2013-01-01

    Effective investment strategies help companies form dynamic core organizational capabilities allowing them to adapt and survive in today's rapidly changing knowledge-based economy. This dissertation investigates three valuation issues that challenge managers with respect to developing business-critical investment strategies that can have…

  10. Multicultural Education Knowledgebase, Attitudes and Preparedness for Diversity

    ERIC Educational Resources Information Center

    Wasonga, Teresa A.

    2005-01-01

    Purpose: The paper aims to investigate the effect of multicultural knowledgebase on attitudes and feelings of preparedness to teach children from diverse backgrounds among pre-service teachers. Currently issues of multicultural education have been heightened by the academic achievement gap and emphasis on standardized test-scores as the indicator…

  11. Knowledge-Based Hierarchies: Using Organizations to Understand the Economy

    ERIC Educational Resources Information Center

    Garicano, Luis; Rossi-Hansberg, Esteban

    2015-01-01

    Incorporating the decision of how to organize the acquisition, use, and communication of knowledge into economic models is essential to understand a wide variety of economic phenomena. We survey the literature that has used knowledge-based hierarchies to study issues such as the evolution of wage inequality, the growth and productivity of firms,…

  12. Knowledge-Based Aid: A Four Agency Comparative Study

    ERIC Educational Resources Information Center

    McGrath, Simon; King, Kenneth

    2004-01-01

    Part of the response of many development cooperation agencies to the challenges of globalisation, ICTs and the knowledge economy is to emphasise the importance of knowledge for development. This paper looks at the discourses and practices of ''knowledge-based aid'' through an exploration of four agencies: the World Bank, DFID, Sida and JICA. It…

  13. Malaysia Transitions toward a Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Mustapha, Ramlee; Abdullah, Abu

    2004-01-01

    The emergence of a knowledge-based economy (k-economy) has spawned a "new" notion of workplace literacy, changing the relationship between employers and employees. The traditional covenant where employees expect a stable or lifelong employment will no longer apply. The retention of employees will most probably be based on their skills and…

  14. The Knowledge-Based Technology Applications Center (KBTAC) seminar series. Volume 1, Introduction to knowledge-based systems

    SciTech Connect

    Meyer, W.; Scherer, J.; DeLuke, R.; Wood, R.M.

    1992-12-01

    Knowledge-based systems are a means of capturing and productively and efficiently using a utility's accumulated knowledge and expertise. The first step in this process is to identify what types of problems and applications can benefit from the use of expert systems. Once potential applications have been identified, it is necessary to involve management in supporting the use and development of the expert system. To do that, management must be made aware of the costs and benefits associated with the development, routine use, and maintenance of these systems. To truly understand how knowledge-based systems differ from conventional programming, the manager and potential user need to become familiar with the concept of symbolic reasoning or programming, where knowledge is manipulated, not just data as in conventional programming. Knowledge-based systems use all the information manipulation that is found in conventional programming but add knowledge-based programming to it. How does a program use knowledge? That is accomplished in a knowledge-based system by the inferencing process. Rules allow reasoning to flow backward from a conclusion or a result to circumstances or causes. Alternatively, certain data or information can lead to a conclusion or a result. The reader will be led through this process of symbolic reasoning or programming, including the presentation of several examples. The software available to develop expert systems is discussed, as is the hardware on which that software is operable. Costs and other features of the hardware are presented in detail. Finally, the many different ways in which KBTAC can assist in developing expert systems are discussed. This assistance ranges from phone calls to assistance at KBTAC's site or at your utility.
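    The inferencing process described above, where data leads forward to conclusions via rules, can be sketched minimally as forward chaining. The rules and facts below are invented for illustration; a real shell adds conflict resolution, certainty factors, and backward chaining on top of this loop.

    ```python
    # Minimal forward-chaining sketch: each rule is (set of conditions, conclusion).
    # Rules fire whenever all their conditions are in the fact set, adding the
    # conclusion, until no new facts can be derived.
    rules = [
        ({"pump A tripped", "valve B open"}, "line 3 overpressure"),
        ({"line 3 overpressure"}, "open relief valve"),
    ]

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)  # the rule "fires"
                    changed = True
        return facts

    derived = forward_chain({"pump A tripped", "valve B open"}, rules)
    print(sorted(derived))
    ```

    Note how the second rule fires only because the first one added "line 3 overpressure"; chaining conclusions through intermediate facts is what distinguishes this from a flat lookup table.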

  15. Network fingerprint: a knowledge-based characterization of biomedical networks.

    PubMed

    Cui, Xiuliang; He, Haochen; He, Fuchu; Wang, Shengqi; Li, Fei; Bo, Xiaochen

    2015-08-26

    It can be difficult for biomedical researchers to understand complex molecular networks due to their unfamiliarity with the mathematical concepts employed. To represent molecular networks with clear meanings and familiar forms for biomedical researchers, we introduce a knowledge-based computational framework to decipher biomedical networks by making systematic comparisons to well-studied "basic networks". A biomedical network is characterized as a spectrum-like vector called "network fingerprint", which contains similarities to basic networks. This knowledge-based multidimensional characterization provides a more intuitive way to decipher molecular networks, especially for large-scale network comparisons and clustering analyses. As an example, we extracted network fingerprints of 44 disease networks in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The comparisons among the network fingerprints of disease networks revealed informative disease-disease and disease-signaling pathway associations, illustrating that the network fingerprinting framework will lead to new approaches for better understanding of biomedical networks.
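    The fingerprint idea above, describing a network as a vector of similarities to well-studied basic networks, can be sketched as follows. The similarity measure here (overlap of degree distributions) and the choice of basic networks (chain, star, clique) are stand-ins chosen for illustration, not the measure used by the authors.

    ```python
    # Sketch of a "network fingerprint": similarities of a graph to basic
    # networks of the same size, using degree-distribution overlap in [0, 1].
    from collections import Counter

    def degrees(edges):
        c = Counter()
        for u, v in edges:
            c[u] += 1
            c[v] += 1
        return c

    def similarity(edges_a, edges_b):
        """Overlap between the two degree-value multisets, in [0, 1]."""
        da = Counter(degrees(edges_a).values())
        db = Counter(degrees(edges_b).values())
        shared = sum((da & db).values())        # multiset intersection
        total = max(sum(da.values()), sum(db.values()))
        return shared / total

    def basic_networks(n):
        return {
            "chain": [(i, i + 1) for i in range(n - 1)],
            "star": [(0, i) for i in range(1, n)],
            "clique": [(i, j) for i in range(n) for j in range(i + 1, n)],
        }

    def fingerprint(edges, n):
        return {name: similarity(edges, basic)
                for name, basic in basic_networks(n).items()}

    # A 4-node path is identical to the 4-node chain, so that entry is 1.0.
    fp = fingerprint([(0, 1), (1, 2), (2, 3)], 4)
    print(fp)
    ```

    The resulting vector ("how chain-like, star-like, clique-like is this network?") is what makes large-scale comparison and clustering of networks a matter of comparing short vectors.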

  16. TVS: An Environment For Building Knowledge-Based Vision Systems

    NASA Astrophysics Data System (ADS)

    Weymouth, Terry E.; Amini, Amir A.; Tehrani, Saeid

    1989-03-01

    Advances in the field of knowledge-guided computer vision require the development of large scale projects and experimentation with them. One factor which impedes such development is the lack of software environments which combine standard image processing and graphics abilities with the ability to perform symbolic processing. In this paper, we describe a software environment that assists in the development of knowledge-based computer vision projects. We have built, upon Common LISP and C, a software development environment which combines standard image processing tools and a standard blackboard-based system, with the flexibility of the LISP programming environment. This environment has been used to develop research projects in knowledge-based computer vision and dynamic vision for robot navigation.

  17. Knowledge-based zonal grid generation for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automation of flow field zoning in two dimensions is an important step towards reducing the difficulty of three-dimensional grid generation in computational fluid dynamics. Using a knowledge-based approach makes sense, but problems arise which are caused by aspects of zoning involving perception, lack of expert consensus, and design processes. These obstacles are overcome by means of a simple shape and configuration language, a tunable zoning archetype, and a method of assembling plans from selected, predefined subplans. A demonstration system for knowledge-based two-dimensional flow field zoning has been successfully implemented and tested on representative aerodynamic configurations. The results show that this approach can produce flow field zonings that are acceptable to experts with differing evaluation criteria.

  18. A knowledgebase system to enhance scientific discovery: Telemakus.

    PubMed

    Fuller, Sherrilynne S; Revere, Debra; Bugni, Paul F; Martin, George M

    2004-09-21

    BACKGROUND: With the rapid expansion of scientific research, the ability to effectively find or integrate new domain knowledge in the sciences is proving increasingly difficult. Efforts to improve and speed up scientific discovery are being explored on a number of fronts. However, much of this work is based on traditional search and retrieval approaches and the bibliographic citation presentation format remains unchanged. METHODS: Case study. RESULTS: The Telemakus KnowledgeBase System provides flexible new tools for creating knowledgebases to facilitate retrieval and review of scientific research reports. In formalizing the representation of the research methods and results of scientific reports, Telemakus offers a potential strategy to enhance the scientific discovery process. While other research has demonstrated that aggregating and analyzing research findings across domains augments knowledge discovery, the Telemakus system is unique in combining document surrogates with interactive concept maps of linked relationships across groups of research reports. CONCLUSION: Based on how scientists conduct research and read the literature, the Telemakus KnowledgeBase System brings together three innovations in analyzing, displaying and summarizing research reports across a domain: (1) research report schema, a document surrogate of extracted research methods and findings presented in a consistent and structured schema format which mimics the research process itself and provides a high-level surrogate to facilitate searching and rapid review of retrieved documents; (2) research findings, used to index the documents, allowing searchers to request, for example, research studies which have studied the relationship between neoplasms and vitamin E; and (3) visual exploration interface of linked relationships for interactive querying of research findings across the knowledgebase and graphical displays of what is known as well as, through gaps in the map, what is yet to be tested.

  19. A knowledgebase system to enhance scientific discovery: Telemakus

    PubMed Central

    Fuller, Sherrilynne S; Revere, Debra; Bugni, Paul F; Martin, George M

    2004-01-01

    Background With the rapid expansion of scientific research, the ability to effectively find or integrate new domain knowledge in the sciences is proving increasingly difficult. Efforts to improve and speed up scientific discovery are being explored on a number of fronts. However, much of this work is based on traditional search and retrieval approaches and the bibliographic citation presentation format remains unchanged. Methods Case study. Results The Telemakus KnowledgeBase System provides flexible new tools for creating knowledgebases to facilitate retrieval and review of scientific research reports. In formalizing the representation of the research methods and results of scientific reports, Telemakus offers a potential strategy to enhance the scientific discovery process. While other research has demonstrated that aggregating and analyzing research findings across domains augments knowledge discovery, the Telemakus system is unique in combining document surrogates with interactive concept maps of linked relationships across groups of research reports. 
Conclusion Based on how scientists conduct research and read the literature, the Telemakus KnowledgeBase System brings together three innovations in analyzing, displaying and summarizing research reports across a domain: (1) research report schema, a document surrogate of extracted research methods and findings presented in a consistent and structured schema format which mimics the research process itself and provides a high-level surrogate to facilitate searching and rapid review of retrieved documents; (2) research findings, used to index the documents, allowing searchers to request, for example, research studies which have studied the relationship between neoplasms and vitamin E; and (3) visual exploration interface of linked relationships for interactive querying of research findings across the knowledgebase and graphical displays of what is known as well as, through gaps in the map, what is yet to be tested.

  20. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul

    1994-01-01

    This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called Declarative and Hybrid. Software architectures were sought, employing the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common over a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered that particular processing methods from the stochastic and knowledge-based worlds are duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.

  1. Systems, methods and apparatus for verification of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Rash, James L. (Inventor); Erickson, John D. (Inventor); Gracinin, Denis (Inventor); Rouff, Christopher A. (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments, domain knowledge is translated into a knowledge-based system. In some embodiments, a formal specification is derived from rules of a knowledge-based system, the formal specification is analyzed, and flaws in the formal specification are used to identify and correct errors in the domain knowledge, from which a knowledge-based system is translated.

  2. A comparison of LISP and MUMPS as implementation languages for knowledge-based systems.

    PubMed

    Curtis, A C

    1984-10-01

    Major components of knowledge-based systems are summarized, along with the programming language features generally useful in their implementation. LISP and MUMPS are briefly described and compared as vehicles for building knowledge-based systems. The paper concludes with suggestions for extensions to MUMPS that might increase its usefulness in artificial intelligence applications without affecting the essential nature of the language.

  3. Comparison of LISP and MUMPS as implementation languages for knowledge-based systems

    SciTech Connect

    Curtis, A.C.

    1984-01-01

    Major components of knowledge-based systems are summarized, along with the programming language features generally useful in their implementation. LISP and MUMPS are briefly described and compared as vehicles for building knowledge-based systems. The paper concludes with suggestions for extensions to MUMPS which might increase its usefulness in artificial intelligence applications without affecting the essential nature of the language. 8 references.

  4. Construction of dynamic stochastic simulation models using knowledge-based techniques

    NASA Technical Reports Server (NTRS)

    Williams, M. Douglas; Shiva, Sajjan G.

    1990-01-01

    Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).

  6. Manned spaceflight activity planning with knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Mogilensky, J.; Dalton, R. E.; Scarl, E. A.

    1983-01-01

    An on-board expert system, capable of assisting with crew-activity planning and platform-status monitoring, could provide unprecedented autonomy to the crew of a permanently manned space station. To demonstrate this concept's feasibility, an existing knowledge-based system is adapted to support Space Shuttle crew-activity timeline planning. Proposed timeline changes are to be checked for compliance with crew capabilities and mission operating guidelines, so that a nonexpert can be guided through a successful plan modification. Early lessons that have been learned about the scope of the adaptation needed to achieve this objective are presented.

  7. Building validation tools for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.

    1987-01-01

    The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (higher-order language) is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.

  8. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  9. A prototype knowledge-based simulation support system

    SciTech Connect

    Hill, T.R.; Roberts, S.D.

    1987-04-01

    As a preliminary step toward the goal of an intelligent automated system for simulation modeling support, we explore the feasibility of the overall concept by generating and testing a prototypical framework. A prototype knowledge-based computer system was developed to support a senior-level course in industrial engineering so that the overall feasibility of an expert simulation support system could be studied in a controlled and observable setting. The system behavior mimics the diagnostic (intelligent) process performed by the course instructor and teaching assistants, finding logical errors in INSIGHT simulation models and recommending appropriate corrective measures. The system was programmed in a non-procedural language (PROLOG) and designed to run interactively with students working on course homework and projects. The knowledge-based structure supports intelligent behavior, providing its users with access to an evolving accumulation of expert diagnostic knowledge. The non-procedural approach facilitates the maintenance of the system and helps merge the roles of expert and knowledge engineer by allowing new knowledge to be easily incorporated without regard to the existing flow of control. The background, features and design of the system are described, and preliminary results are reported. Initial success is judged to demonstrate the utility of the reported approach and support the ultimate goal of an intelligent modeling system which can support simulation modelers outside the classroom environment. Finally, future extensions are suggested.
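
A minimal sketch of the diagnostic idea, assuming an invented model format and rules (the real system was written in PROLOG and checked INSIGHT models):

```python
# Rule-based model diagnosis: each rule inspects the model description and,
# when it finds a logical error, pairs it with corrective advice.
def check_model(model):
    """Apply diagnostic rules; return (error, advice) pairs."""
    findings = []
    declared = set(model["queues"])
    for station in model["stations"]:
        q = station["queue"]
        if q not in declared:
            findings.append(("undeclared queue " + q,
                             "declare the queue before referencing it"))
        if station["servers"] <= 0:
            findings.append(("station " + station["name"] + " has no servers",
                             "set servers to a positive integer"))
    return findings

model = {
    "queues": ["q1"],
    "stations": [{"name": "mill", "queue": "q2", "servers": 0}],
}
diagnoses = check_model(model)
```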

  10. Hospital nurses' use of knowledge-based information resources.

    PubMed

    Tannery, Nancy Hrinya; Wessel, Charles B; Epstein, Barbara A; Gadd, Cynthia S

    2007-01-01

    The purpose of this study was to evaluate the information-seeking practices of nurses before and after access to a library's electronic collection of information resources. This is a pre/post intervention study of nurses at a rural community hospital. The hospital contracted with an academic health sciences library for access to a collection of online knowledge-based resources. Self-report surveys were used to obtain information about nurses' computer use and how they locate and access information to answer questions related to their patient care activities. In 2001, self-report surveys were sent to the hospital's 573 nurses during implementation of access to online resources with a post-implementation survey sent 1 year later. At the initiation of access to the library's electronic resources, nurses turned to colleagues and print textbooks or journals to satisfy their information needs. After 1 year of access, 20% of the nurses had begun to use the library's electronic resources. The study outcome suggests ready access to knowledge-based electronic information resources can lead to changes in behavior among some nurses.

  11. A proven knowledge-based approach to prioritizing process information

    NASA Technical Reports Server (NTRS)

    Corsberg, Daniel R.

    1991-01-01

    Many space-related processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect is rapid analysis of the changing process information. During a disturbance, this task can overwhelm humans as well as computers. Humans deal with this by applying heuristics in determining significant information. A simple, knowledge-based approach to prioritizing information is described. The approach models those heuristics that humans would use in similar circumstances. The approach described has received two patents and was implemented in the Alarm Filtering System (AFS) at the Idaho National Engineering Laboratory (INEL). AFS was first developed for application in a nuclear reactor control room. It has since been used in chemical processing applications, where it has had a significant impact on control room environments. The approach uses knowledge-based heuristics to analyze data from process instrumentation and respond to that data according to knowledge encapsulated in objects and rules. While AFS cannot perform the complete diagnosis and control task, it has proven to be extremely effective at filtering and prioritizing information. AFS was used for over two years as a first level of analysis for human diagnosticians. Given the approach's proven track record in a wide variety of practical applications, it should be useful in both ground- and space-based systems.
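
The heuristic can be sketched as follows; the alarm names, priorities, and consequence table are invented for illustration, not AFS's actual knowledge base:

```python
# Suppress alarms that are expected consequences of an active higher-level
# alarm, then rank what remains by priority.
PRIORITY = {"reactor_trip": 1, "coolant_low": 2, "pump_off": 3, "valve_open": 4}
CONSEQUENCE_OF = {  # child alarm -> parent alarm that explains it
    "pump_off": "reactor_trip",
    "valve_open": "coolant_low",
}

def filter_alarms(active):
    """Drop alarms explained by an active parent, then sort by priority."""
    active = set(active)
    significant = [a for a in active if CONSEQUENCE_OF.get(a) not in active]
    return sorted(significant, key=lambda a: PRIORITY.get(a, 99))

# pump_off is explained by the active reactor_trip, so it is filtered out;
# valve_open survives because its parent (coolant_low) is not active.
alarms = filter_alarms(["valve_open", "pump_off", "reactor_trip"])
```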

  12. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  13. Knowledge-based imaging-sensor fusion system

    NASA Technical Reports Server (NTRS)

    Westrom, George

    1989-01-01

    An imaging system which applies knowledge-based technology to supervise and control both sensor hardware and computation in the imaging system is described. It includes the development of an imaging system breadboard which brings together into one system work that we and others have pursued for LaRC for several years. The goal is to combine Digital Signal Processing (DSP) with Knowledge-Based Processing and also include Neural Net processing. The system is considered a smart camera. Imagine that there is a microgravity experiment on-board Space Station Freedom with a high-frame-rate, high-resolution camera. It is impossible for a laboratory on Earth to acquire all of that data; in fact, only a small fraction of the data will be received. Again, imagine being responsible for some experiments on Mars with the Mars Rover: the data rate is a few kilobits per second for data from several sensors and instruments. Would it not be preferable to have a smart system that has some human knowledge, follows instructions, and attempts to make the best use of the limited transmission bandwidth? The system concept, current status of the breadboard system and some recent experiments at the Mars-like Amboy Lava Fields in California are discussed.

  14. Knowledge-based simulation using object-oriented programming

    NASA Technical Reports Server (NTRS)

    Sidoran, Karen M.

    1993-01-01

    Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.
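
The factual/behavioral split described above can be sketched with a toy discrete-event entity (the Tank class, its rule, and the scheduler are invented for illustration):

```python
import heapq

class Tank:
    """An entity: attributes are factual knowledge, methods behavioral."""
    def __init__(self):
        self.fuel = 100.0          # factual knowledge: an attribute
    def on_tick(self, sim, t):     # behavioral knowledge: an event rule
        self.fuel -= 10.0
        if self.fuel > 0:
            sim.schedule(t + 1.0, self.on_tick)

class Simulation:
    """Minimal discrete-event engine with a time-ordered event queue."""
    def __init__(self):
        self.events = []
    def schedule(self, t, handler):
        heapq.heappush(self.events, (t, id(handler), handler))
    def run(self, until):
        while self.events and self.events[0][0] <= until:
            t, _, handler = heapq.heappop(self.events)
            handler(self, t)

sim = Simulation()
tank = Tank()
sim.schedule(0.0, tank.on_tick)
sim.run(until=4.0)  # fires at t = 0, 1, 2, 3, 4 -> fuel drops by 50
```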

  15. A knowledge-based care protocol system for ICU.

    PubMed

    Lau, F; Vincent, D D

    1995-01-01

    There is a growing interest in using care maps in ICU. So far, the emphasis has been on developing the critical path, problem/outcome, and variance reporting for specific diagnoses. This paper presents a conceptual knowledge-based care protocol system design for the ICU. It is based on the manual care map currently in use for managing myocardial infarction in the ICU of the Sturgeon General Hospital in Alberta. The proposed design uses expert rules, object schemas, case-based reasoning, and quantitative models as sources of its knowledge. Also being developed is a decision model with explicit linkages for outcome-process-measure from the care map. The resulting system is intended as a bedside charting and decision-support tool for caregivers. Proposed usage includes charting by acknowledgment, generation of alerts, and critiques on variances/events recorded, recommendations for planned interventions, and comparison with historical cases. Currently, a prototype is being developed on a PC-based network with Visual Basic, Level-Expert Object, and xBase. A clinical trial is also planned to evaluate whether this knowledge-based care protocol can reduce the length of stay of patients with myocardial infarction in the ICU. PMID:8591604

  16. Portable Knowledge-Based Diagnostic And Maintenance Systems

    NASA Astrophysics Data System (ADS)

    Darvish, John; Olson, Noreen S.

    1989-03-01

    It is difficult to diagnose faults and maintain weapon systems because (1) they are highly complex pieces of equipment composed of multiple mechanical, electrical, and hydraulic assemblies, and (2) talented maintenance personnel are continuously being lost through the attrition process. To solve this problem, we developed a portable diagnostic and maintenance aid that uses a knowledge-based expert system. This aid incorporates diagnostics, operational procedures, repair and replacement procedures, and regularly scheduled maintenance into one compact, 18-pound graphics workstation. Drawings and schematics can be pulled up from the CD-ROM to assist the operator in answering the expert system's questions. Work for this aid began with the development of the initial knowledge-based expert system in a fast prototyping environment using a LISP machine. The second phase saw the development of a personal computer-based system that used videodisc technology to pictorially assist the operator. The current version of the aid eliminates the high expenses associated with videodisc preparation by scanning in the art work already in the manuals. A number of generic software tools have been developed that streamlined the construction of each iteration of the aid; these tools will be applied to the development of future systems.

  17. A knowledge-based approach to software development

    SciTech Connect

    White, D.A.

    1995-09-01

    Traditional software development consists of many knowledge intensive and intellectual activities related to understanding a problem to be solved and designing a solution to that problem. These activities are informal, subjective, and undocumented and are the same for original development and subsequent support. Since 1982, the USAF Rome Laboratory has been developing the Knowledge-Based Software Assistant (KBSA), a revolutionary new paradigm for software development that will achieve orders of magnitude improvement in productivity and quality. KBSA does not pursue the improvement of traditional technologies or methodologies such as new programming languages and management procedures to fulfill this objective, but has instead adopted a revolutionary new approach. KBSA is a knowledge-based, computer-mediated paradigm for the evolutionary definition, specification, development, and long-term support of software. The computer becomes an "intelligent partner" and "corporate memory" in this paradigm, formally capturing the appropriate knowledge and actively using this knowledge to provide assistance and automation. The productivity of developers will dramatically improve because of the increased assistance, automation and re-utilization of domain and programming knowledge. The quality of software, both correctness and satisfying requirements, will also improve because the development process is formal and easier to use.

  18. A knowledgebase of the human Alu repetitive elements.

    PubMed

    Mallona, Izaskun; Jordà, Mireia; Peinado, Miguel A

    2016-04-01

    Alu elements are the most abundant retrotransposons in the human genome with more than one million copies. Alu repeats have been reported to participate in multiple processes related with genome regulation and compartmentalization. Moreover, they have been involved in the facilitation of pathological mutations in many diseases, including cancer. The contribution of Alus and other repeats in genomic regulation is often overlooked because their study poses technical and analytical challenges hardly attainable with conventional strategies. Here we propose the integration of ontology-based semantic methods to query a knowledgebase for the human Alus. The knowledgebase for the human Alus leverages the Sequence (SO) and Gene Ontologies (GO) and is devoted to addressing functional and genetic information in the genomic context of the Alus. For each Alu element, the closest gene and transcript are stored, as well as their functional annotation according to GO, the state of the chromatin, and the transcription factor binding sites inside the Alu. The model uses Web Ontology Language (OWL) and Semantic Web Rule Language (SWRL). As a use case, and to illustrate the utility of the tool, we have evaluated the epigenetic states of Alu repeats associated with gene promoters according to their transcriptional activity. The ontology is easily extendable, offering a scaffold for the inclusion of new experimental data. The RDF/XML formalization is freely available at http://aluontology.sourceforge.net/.
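
The kind of pattern query such a knowledgebase answers can be mimicked with a toy triple store (the predicates, identifiers, and data below are illustrative stand-ins, not the ontology's actual SO/GO terms):

```python
# Subject-predicate-object facts, in the spirit of an RDF knowledgebase.
triples = {
    ("Alu_1", "closestGene", "TP53"),
    ("Alu_1", "chromatinState", "active_promoter"),
    ("Alu_2", "closestGene", "BRCA1"),
    ("Alu_2", "chromatinState", "heterochromatin"),
    ("TP53", "hasGOAnnotation", "GO:0006915"),  # apoptotic process
}

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern (None = wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Which Alus sit in active-promoter chromatin?
active = [s for s, _, _ in query(predicate="chromatinState", obj="active_promoter")]
```

In the real knowledgebase the same pattern matching is expressed over OWL classes and SWRL rules rather than a Python set.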

  20. Developing a Physiologically-Based Pharmacokinetic Model Knowledgebase in Support of Provisional Model Construction

    PubMed Central

    Grulke, Christopher M.; Chang, Daniel T.; Brooks, Raina D.; Leonard, Jeremy A.; Phillips, Martin B.; Hypes, Ethan D.; Fair, Matthew J.; Tornero-Velez, Rogelio; Johnson, Jeffre; Dary, Curtis C.; Tan, Yu-Mei

    2016-01-01

    Developing physiologically-based pharmacokinetic (PBPK) models for chemicals can be resource-intensive, as neither chemical-specific parameters nor in vivo pharmacokinetic data are easily available for model construction. Previously developed, well-parameterized, and thoroughly-vetted models can be a great resource for the construction of models pertaining to new chemicals. A PBPK knowledgebase was compiled and developed from existing PBPK-related articles and used to develop new models. From 2,039 PBPK-related articles published between 1977 and 2013, 307 unique chemicals were identified for use as the basis of our knowledgebase. Keywords related to species, gender, developmental stages, and organs were analyzed from the articles within the PBPK knowledgebase. A correlation matrix of the 307 chemicals in the PBPK knowledgebase was calculated based on pharmacokinetic-relevant molecular descriptors. Chemicals in the PBPK knowledgebase were ranked based on their correlation toward ethylbenzene and gefitinib. Next, multiple chemicals were selected to represent exact matches, close analogues, or non-analogues of the target case study chemicals. Parameters, equations, or experimental data relevant to existing models for these chemicals and their analogues were used to construct new models, and model predictions were compared to observed values. This compiled knowledgebase provides a chemical structure-based approach for identifying PBPK models relevant to other chemical entities. Using suitable correlation metrics, we demonstrated that models of chemical analogues in the PBPK knowledgebase can guide the construction of PBPK models for other chemicals. PMID:26871706
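
The analogue-ranking step can be sketched as a descriptor correlation (the descriptor values below are invented for illustration; the knowledgebase uses its own set of pharmacokinetic-relevant molecular descriptors):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two descriptor vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x)) * sqrt(sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

# Hypothetical descriptor vectors for chemicals already in the knowledgebase.
descriptors = {
    "chem_A": [3.1, 1.06, 0.2],
    "chem_B": [2.0, 2.0, 0.5],
    "chem_C": [-0.5, 4.47, 0.9],
}

def rank_analogues(target, kb):
    """Rank knowledgebase chemicals by descriptor correlation to the target."""
    return sorted(kb, key=lambda c: pearson(kb[c], target), reverse=True)

target = [3.2, 1.00, 0.18]  # descriptors of the new chemical to be modeled
ranked = rank_analogues(target, descriptors)
```

The top-ranked analogues are the candidates whose existing PBPK models would guide construction of a model for the new chemical.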

  1. Knowledge-Based Systems Approach to Wilderness Fire Management.

    NASA Astrophysics Data System (ADS)

    Saveland, James M.

    The 1988 and 1989 forest fire seasons in the Intermountain West highlight the shortcomings of current fire policy. To fully implement an optimization policy that minimizes the costs and net value change of resources affected by fire, long-range fire severity information is essential, yet lacking. This information is necessary for total mobility of suppression forces, implementing contain and confine suppression strategies, effectively dealing with multiple fire situations, scheduling summer prescribed burning, and wilderness fire management. A knowledge-based system, Delphi, was developed to help provide long-range information. Delphi provides: (1) a narrative of advice on where a fire might spread, if allowed to burn, (2) a summary of recent weather and fire danger information, and (3) a Bayesian analysis of long-range fire danger potential. Uncertainty is inherent in long-range information. Decision theory and judgment research can be used to help understand the heuristics experts use to make decisions under uncertainty, heuristics responsible both for expert performance and bias. Judgment heuristics and resulting bias are examined from a fire management perspective. Signal detection theory and receiver operating characteristic (ROC) analysis can be used to develop a long-range forecast to improve decisions. ROC analysis mimics some of the heuristics and compensates for some of the bias. Most importantly, ROC analysis displays a continuum of bias from which an optimum operating point can be selected. ROC analysis is especially appropriate for long-range forecasting since (1) the occurrence of possible future events is stated in terms of probability, (2) skill prediction is displayed, (3) inherent trade-offs are displayed, and (4) fire danger is explicitly defined. Statements on the probability of the energy release component of the National Fire Danger Rating System exceeding a critical value later in the fire season can be made in early July in the Intermountain West.
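
The ROC trade-off described above can be sketched numerically (the forecast scores and outcomes are invented): each threshold yields one (false-alarm rate, hit rate) operating point, and the analyst picks the point whose balance of misses and false alarms best fits the costs at hand.

```python
def roc_points(scores, labels, thresholds):
    """One (false-alarm rate, hit rate) point per decision threshold."""
    pts = []
    pos = sum(labels)
    neg = len(labels) - pos
    for th in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= th and y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= th and not y)
        pts.append((fp / neg, tp / pos))
    return pts

# Forecast scores for past seasons and whether severe fire danger occurred.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [True, True, False, True, False, False]
# A stricter threshold (0.5) misses more events but raises fewer false alarms
# than a looser one (0.2) -- the continuum of bias the abstract refers to.
points = roc_points(scores, labels, thresholds=[0.5, 0.2])
```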

  2. Knowledge-based navigation of complex information spaces

    SciTech Connect

    Burke, R.D.; Hammond, K.J.; Young, B.C.

    1996-12-31

    While the explosion of on-line information has brought new opportunities for finding and using electronic data, it has also brought to the forefront the problem of isolating useful information and making sense of large multi-dimensional information spaces. We have developed an approach to building data "tour guides," called FINDME systems. These programs know enough about an information space to be able to help a user navigate through it. The user not only comes away with items of useful information but also with insights into the structure of the information space itself. In these systems, we have combined ideas of instance-based browsing, structuring retrieval around the critiquing of previously-retrieved examples, and retrieval strategies: knowledge-based heuristics for finding relevant information. We illustrate these techniques with several examples, concentrating especially on the RENTME system, a FINDME system for helping users find suitable rental apartments in the Chicago metropolitan area.
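    The critiquing style of retrieval described above, in which the user navigates by tweaking a previously retrieved example, can be sketched as follows. The apartment records, critique names, and similarity measure are invented for illustration and are not taken from RENTME:

```python
# Sketch of example-critiquing retrieval in the FINDME style: starting
# from a retrieved example, the user applies a critique ("cheaper",
# "bigger") and gets similar items satisfying it. Data is invented.

APARTMENTS = [
    {"id": 1, "rent": 900,  "bedrooms": 1, "neighborhood": "Lakeview"},
    {"id": 2, "rent": 1200, "bedrooms": 2, "neighborhood": "Lakeview"},
    {"id": 3, "rent": 700,  "bedrooms": 1, "neighborhood": "Rogers Park"},
    {"id": 4, "rent": 1500, "bedrooms": 3, "neighborhood": "Lincoln Park"},
]

CRITIQUES = {
    "cheaper": lambda cand, cur: cand["rent"] < cur["rent"],
    "bigger":  lambda cand, cur: cand["bedrooms"] > cur["bedrooms"],
}

def critique(current, name, items=APARTMENTS):
    """Return candidates satisfying the critique, most similar first
    (here, smallest rent difference stands in for similarity)."""
    test = CRITIQUES[name]
    matches = [c for c in items if c["id"] != current["id"] and test(c, current)]
    return sorted(matches, key=lambda c: abs(c["rent"] - current["rent"]))

start = APARTMENTS[1]                       # currently displayed example
cheaper = critique(start, "cheaper")        # user clicks "cheaper"
```

    Each critique both retrieves new items and teaches the user something about the structure of the space (for example, that cheaper units cluster in certain neighborhoods).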

  3. Autonomous Cryogenics Loading Operations Simulation Software: Knowledgebase Autonomous Test Engineer

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2012-01-01

    The simulation software KATE (Knowledgebase Autonomous Test Engineer) is used to demonstrate the automatic identification of faults in a system. The ACLO (Autonomous Cryogenics Loading Operation) project uses KATE to monitor and find faults in the loading of cryogenics into a vehicle fuel tank. The KATE software interfaces with the IHM (Integrated Health Management) systems bus to communicate with other systems that are part of ACLO. One system that KATE uses the IHM bus to communicate with is the AIS (Advanced Inspection System). KATE sends messages to AIS when an anomaly is detected. These messages include requests for visual inspection of specific valves and pressure gauges, and control messages to have AIS open or close manual valves. My goals include implementing the connection to the IHM bus within KATE and for the AIS project. I will also be working on implementing changes to KATE's UI and implementing the physics objects in KATE that will model portions of the cryogenics loading operation.

  4. Autonomous Cryogenics Loading Operations Simulation Software: Knowledgebase Autonomous Test Engineer

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S., Jr.

    2013-01-01

    Working on the ACLO (Autonomous Cryogenics Loading Operations) project I have had the opportunity to add functionality to the physics simulation software known as KATE (Knowledgebase Autonomous Test Engineer), create a new application allowing WYSIWYG (what-you-see-is-what-you-get) creation of KATE schematic files, and begin a preliminary design and implementation of a new subsystem that will provide vision services on the IHM (Integrated Health Management) bus. The functionality I added to KATE over the past few months includes a dynamic visual representation of the fluid height in a pipe based on the number of gallons of fluid in the pipe, and the implementation of the IHM bus connection within KATE. I also fixed a broken feature in the system called the Browser Display, implemented many bug fixes and made changes to the GUI (Graphical User Interface).

  5. Knowledge-based system for flight information management. Thesis

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.

    1990-01-01

    The use of knowledge-based system (KBS) architectures to manage information on the primary flight display (PFD) of commercial aircraft is described. The PFD information management strategy used tailored the information on the PFD to the tasks the pilot performed. The KBS design and implementation of the task-tailored PFD information management application is described. The knowledge acquisition and subsequent system design of a flight-phase-detection KBS is also described. The flight-phase output of this KBS was used as input to the task-tailored PFD information management KBS. The implementation and integration of this KBS with existing aircraft systems and the other KBS is described. Flight tests of both KBSs, collectively called the Task-Tailored Flight Information Manager (TTFIM), are examined; the tests verified their implementation and integration, and validated the software engineering advantages of the KBS approach in an operational environment.

  6. A knowledge-based system for prototypical reasoning

    NASA Astrophysics Data System (ADS)

    Lieto, Antonio; Minieri, Andrea; Piana, Alberto; Radicioni, Daniele P.

    2015-04-01

    In this work we present a knowledge-based system equipped with a hybrid, cognitively inspired architecture for the representation of conceptual information. The proposed system aims at extending the classical representational and reasoning capabilities of ontology-based frameworks towards the realm of prototype theory. It is based on a hybrid knowledge base that combines a classical symbolic component (grounded on a formal ontology) with a typicality-based one (grounded on the conceptual spaces framework). The resulting system attempts to reconcile the heterogeneous approach to concepts in Cognitive Science with the dual process theories of reasoning and rationality. The system has been experimentally assessed in a conceptual categorisation task where common-sense linguistic descriptions were given as input and the corresponding target concepts had to be identified. The results show that the proposed solution substantially extends the representational and reasoning 'conceptual' capabilities of standard ontology-based systems.

  7. Knowledge-based fault diagnosis system for refuse collection vehicle

    SciTech Connect

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y.

    2015-05-15

    The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of the waste compactor truck to the local authority as well as to waste management companies. The company faces difficulty in acquiring knowledge from the expert when the expert is absent. To solve this problem, the knowledge from the expert can be stored in an expert system, which is able to provide the necessary support to the company when the expert is not available. The implementation of the process and tool can then be more standardized and more accurate. The knowledge input to the expert system is based on design guidelines and experience from the expert. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  8. Knowledge-based generalization of metabolic networks: a practical study.

    PubMed

    Zhukova, Anna; Sherman, David J

    2014-04-01

    The complex process of genome-scale metabolic network reconstruction involves semi-automatic reaction inference, analysis, and refinement through curation by human experts. Unfortunately, decisions by experts are hampered by the complexity of the network, which can mask errors in the inferred network. In order to aid an expert in making sense out of the thousands of reactions in the organism's metabolism, we developed a method for knowledge-based generalization that provides a higher-level view of the network, highlighting the particularities and essential structure, while hiding the details. In this study, we show the application of this generalization method to 1,286 metabolic networks of organisms in Path2Models that describe fatty acid metabolism. We compare the generalised networks and show that we successfully highlight the aspects that are important for their curation and comparison. PMID:24712528

  9. Data integration and analysis using the Heliophysics Event Knowledgebase

    NASA Astrophysics Data System (ADS)

    Hurlburt, Neal; Reardon, Kevin

    The Heliophysics Event Knowledgebase (HEK) system provides an integrated framework for automated data mining using a variety of feature-detection methods; high-performance data systems to cope with over 1TB/day of multi-mission data; and web services and clients for searching the resulting metadata, reviewing results, and efficiently accessing the data products. We have recently enhanced the capabilities of the HEK to support the complex datasets being produced by the Interface Region Imaging Spectrograph (IRIS). We are also developing the mechanisms to incorporate descriptions of coordinated observations from ground-based facilities, including the NSO's Dunn Solar Telescope (DST). We will discuss the system and its recent evolution and demonstrate its ability to support coordinated science investigations.

  10. Knowledge-based fault diagnosis system for refuse collection vehicle

    NASA Astrophysics Data System (ADS)

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y.

    2015-05-01

    The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of the waste compactor truck to the local authority as well as to waste management companies. The company faces difficulty in acquiring knowledge from the expert when the expert is absent. To solve this problem, the knowledge from the expert can be stored in an expert system, which is able to provide the necessary support to the company when the expert is not available. The implementation of the process and tool can then be more standardized and more accurate. The knowledge input to the expert system is based on design guidelines and experience from the expert. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  11. Knowledge-based assistance in costing the space station DMS

    NASA Technical Reports Server (NTRS)

    Henson, Troy; Rone, Kyle

    1988-01-01

    The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.

  12. The Knowledge-Based Software Assistant: Beyond CASE

    NASA Technical Reports Server (NTRS)

    Carozzoni, Joseph A.

    1993-01-01

    This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.

  13. Knowledge-based generalization of metabolic networks: a practical study.

    PubMed

    Zhukova, Anna; Sherman, David J

    2014-04-01

    The complex process of genome-scale metabolic network reconstruction involves semi-automatic reaction inference, analysis, and refinement through curation by human experts. Unfortunately, decisions by experts are hampered by the complexity of the network, which can mask errors in the inferred network. In order to aid an expert in making sense out of the thousands of reactions in the organism's metabolism, we developed a method for knowledge-based generalization that provides a higher-level view of the network, highlighting the particularities and essential structure, while hiding the details. In this study, we show the application of this generalization method to 1,286 metabolic networks of organisms in Path2Models that describe fatty acid metabolism. We compare the generalised networks and show that we successfully highlight the aspects that are important for their curation and comparison.

  14. Knowledge-based decision support for patient monitoring in cardioanesthesia.

    PubMed

    Schecke, T; Langen, M; Popp, H J; Rau, G; Käsmacher, H; Kalff, G

    1992-01-01

    An approach to generating 'intelligent alarms' is presented that aggregates many information items, i.e. measured vital signs, recent medications, etc., into state variables that more directly reflect the patient's physiological state. Based on these state variables the described decision support system AES-2 also provides therapy recommendations. The assessment of the state variables and the generation of therapeutic advice follow a knowledge-based approach. Aspects of uncertainty, e.g. a gradual transition between 'normal' and 'below normal', are considered applying a fuzzy set approach. Special emphasis is laid on the ergonomic design of the user interface, which is based on color graphics and finger touch input on the screen. Certain simulation techniques considerably support the design process of AES-2 as is demonstrated with a typical example from cardioanesthesia. PMID:1402299
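    The fuzzy-set treatment of gradual transitions such as 'normal' and 'below normal' mentioned above can be illustrated with a minimal sketch. The vital-sign ranges below are invented for illustration and are not the thresholds used in AES-2:

```python
# Illustrative fuzzy membership functions for a vital sign, sketching the
# gradual 'normal'/'below normal' transition described for AES-2.
# The numeric ranges (mean arterial pressure, mmHg) are invented.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def classify_map(map_mmhg):
    """Degree of membership of mean arterial pressure in each linguistic state."""
    return {
        "below_normal": trapezoid(map_mmhg, 0, 0, 60, 75),
        "normal": trapezoid(map_mmhg, 60, 75, 95, 110),
        "above_normal": trapezoid(map_mmhg, 95, 110, 200, 200),
    }

m = classify_map(70)   # partly 'below normal', partly 'normal'
```

    A value of 70 belongs partially to two states at once, so an alarm rule conditioned on 'below normal' can fire gradually rather than flipping at a hard threshold.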

  15. The Heliophysics Event Knowledgebase for the Solar Dynamics Observatory

    NASA Astrophysics Data System (ADS)

    Hurlburt, Neal E.; Cheung, M.; Schrijver, K.; HEK development Team

    2009-05-01

    The Solar Dynamics Observatory will generate over 2 petabytes of imagery in its 5-year mission. In order to improve scientific productivity and to reduce system requirements, we have developed a system for data markup to identify "interesting" datasets and direct scientists to them through an event-based querying system. The SDO Heliophysics Event Knowledgebase (HEK) will enable caching of commonly accessed datasets within the Joint Science Operations Center (JSOC) and reduce the (human) time spent searching for and downloading relevant data. We present an overview of our HEK, including the ingestion of images, automated and manual tools for identifying and annotating features within the images, and interfaces and web tools for querying and accessing events and their associated data.

  16. Knowledge-based architecture for airborne mine and minefield detection

    NASA Astrophysics Data System (ADS)

    Agarwal, Sanjeev; Menon, Deepak; Swonger, C. W.

    2004-09-01

    One of the primary lessons learned from airborne mid-wave infrared (MWIR) based mine and minefield detection research and development over the last few years has been the fact that no single algorithm or static detection architecture is able to meet mine and minefield detection performance specifications. This is true not only because of the highly varied environmental and operational conditions under which an airborne sensor is expected to perform but also due to the highly data dependent nature of sensors and algorithms employed for detection. Attempts to make the algorithms themselves more robust to varying operating conditions have only been partially successful. In this paper, we present a knowledge-based architecture to tackle this challenging problem. The detailed algorithm architecture is discussed for such a mine/minefield detection system, with a description of each functional block and data interface. This dynamic and knowledge-driven architecture will provide more robust mine and minefield detection for a highly multi-modal operating environment. The acquisition of the knowledge for this system is predominantly data driven, incorporating not only the analysis of historical airborne mine and minefield imagery data collection, but also other "all source data" that may be available such as terrain information and time of day. This "all source data" is extremely important and embodies causal information that drives the detection performance. This information is not being used by current detection architectures. Data analysis for knowledge acquisition will facilitate better understanding of the factors that affect the detection performance and will provide insight into areas for improvement for both sensors and algorithms. Important aspects of this knowledge-based architecture, its motivations and the potential gains from its implementation are discussed, and some preliminary results are presented.

  17. Risk Management of New Microelectronics for NASA: Radiation Knowledge-base

    NASA Technical Reports Server (NTRS)

    LaBel, Kenneth A.

    2004-01-01

    Contents include the following: NASA missions and their implications for reliability and radiation constraints; approach to insertion of new technologies; technology knowledge-base development; technology model/tool development and validation; and summary comments.

  18. Towards a knowledge-based system to assist the Brazilian data-collecting system operation

    NASA Technical Reports Server (NTRS)

    Rodrigues, Valter; Simoni, P. O.; Oliveira, P. P. B.; Oliveira, C. A.; Nogueira, C. A. M.

    1988-01-01

    A study is reported which was carried out to show how a knowledge-based approach would lead to a flexible tool to assist the operation task in a satellite-based environmental data collection system. Some characteristics of a hypothesized system comprised of a satellite and a network of Interrogable Data Collecting Platforms (IDCPs) are pointed out. The Knowledge-Based Planning Assistant System (KBPAS) and some aspects about how knowledge is organized in the IDCP's domain are briefly described.

  19. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  20. A clinical trial of a knowledge-based medical record.

    PubMed

    Safran, C; Rind, D M; Davis, R B; Sands, D Z; Caraballo, E; Rippel, K; Wang, Q; Rury, C; Makadon, H J; Cotton, D J

    1995-01-01

    To meet the needs of primary care physicians caring for patients with HIV infection, we developed a knowledge-based medical record to allow the on-line patient record to play an active role in the care process. These programs integrate the on-line patient record, rule-based decision support, and full-text information retrieval into a clinical workstation for the practicing clinician. To determine whether use of a knowledge-based medical record was associated with more rapid and complete adherence to practice guidelines and improved quality of care, we performed a controlled clinical trial among physicians and nurse practitioners caring for 349 patients infected with the human immuno-deficiency virus (HIV); 191 patients were treated by 65 physicians and nurse practitioners assigned to the intervention group, and 158 patients were treated by 61 physicians and nurse practitioners assigned to the control group. During the 18-month study period, the computer generated 303 alerts in the intervention group and 388 in the control group. The median response time of clinicians to these alerts was 11 days in the intervention group and 52 days in the control group (P < 0.0001, log-rank test). During the study, the computer generated 432 primary care reminders for the intervention group and 360 reminders for the control group. The median response time of clinicians to these alerts was 114 days in the intervention group and more than 500 days in the control group (P < 0.0001, log-rank test). Of the 191 patients in the intervention group, 67 (35%) had one or more hospitalizations, compared with 70 (44%) of the 158 patients in the control group (P = 0.04, Wilcoxon test stratified for initial CD4 count). There was no difference in survival between the intervention and control groups (P = 0.18, log-rank test). We conclude that our clinical workstation significantly changed physicians' behavior in terms of their response to alerts regarding primary care interventions and that these

  1. KBGIS-2: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, T.; Peuquet, D.; Menon, S.; Agarwal, P.

    1986-01-01

    The architecture and working of a recently implemented knowledge-based geographic information system (KBGIS-2) that was designed to satisfy several general criteria for the geographic information system are described. The system has four major functions that include query-answering, learning, and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial objects language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is currently performing all its designated tasks successfully, although currently implemented on inadequate hardware. Future reports will detail the performance characteristics of the system, and various new extensions are planned in order to enhance the power of KBGIS-2.
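    The quadtree representation that makes KBGIS-2's spatial search efficient can be illustrated with a toy point quadtree supporting rectangular range queries. The real system stores multilayered spatial data bases, so this is only a sketch of the quadrant-pruning idea, with invented points:

```python
# Minimal point quadtree: each node holds a few points, then splits into
# four quadrants. Range queries prune whole quadrants that cannot
# intersect the query rectangle, which is the source of the average-case
# savings the abstract describes.

class QuadTree:
    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = []
        self.children = None

    def insert(self, x, y):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= x < x1 and y0 <= y < y1):
            return False                       # point outside this quadrant
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append((x, y))
                return True
            self._split()
        return any(c.insert(x, y) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my), QuadTree(mx, y0, x1, my),
                         QuadTree(x0, my, mx, y1), QuadTree(mx, my, x1, y1)]
        for p in self.points:                  # push points down to children
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1):
        """Return points inside the query rectangle, pruning quadrants."""
        x0, y0, x1, y1 = self.bounds
        if qx1 <= x0 or qx0 >= x1 or qy1 <= y0 or qy0 >= y1:
            return []                          # disjoint quadrant: prune
        found = [(x, y) for (x, y) in self.points
                 if qx0 <= x < qx1 and qy0 <= y < qy1]
        if self.children:
            for c in self.children:
                found.extend(c.query(qx0, qy0, qx1, qy1))
        return found

qt = QuadTree(0, 0, 100, 100)
for p in [(10, 10), (50, 50), (90, 90), (20, 80), (60, 10)]:
    qt.insert(*p)
hits = qt.query(0, 0, 55, 55)
```

    A constraint-satisfaction search for a complex spatial object would issue many such constrained range queries, so the per-query pruning compounds.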

  2. KBGIS-II: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, Terence; Peuquet, Donna; Menon, Sudhakar; Agarwal, Pankaj

    1986-01-01

    The architecture and working of a recently implemented Knowledge-Based Geographic Information System (KBGIS-II), designed to satisfy several general criteria for the GIS, is described. The system has four major functions including query-answering, learning and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial object language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is performing all its designated tasks successfully. Future reports will relate performance characteristics of the system.

  3. A Generalized Knowledge-Based Discriminatory Function for Biomolecular Interactions

    PubMed Central

    Bernard, Brady; Samudrala, Ram

    2010-01-01

    Several novel and established knowledge-based discriminatory function formulations and reference state derivations have been evaluated to identify parameter sets capable of distinguishing native and near-native biomolecular interactions from incorrect ones. We developed the r·m·r function, a novel atomic level radial distribution function with mean reference state that averages over all pairwise atom types from a reduced atom type composition, using experimentally determined intermolecular complexes in the Cambridge Structural Database (CSD) and the Protein Data Bank (PDB) as the information sources. We demonstrate that r·m·r had the best discriminatory accuracy and power for protein-small molecule and protein-DNA interactions, regardless of whether the native complex was included or excluded from the test set. The superior performance of the r·m·r discriminatory function compared to seventeen alternative functions evaluated on publicly available test sets for protein-small molecule and protein-DNA interactions indicated that the function was not over optimized through back testing on a single class of biomolecular interactions. The initial success of the reduced composition and superior performance with the CSD as the distribution set over the PDB implies that further improvements and generality of the function are possible by deriving probabilities from subsets of the CSD, using structures that consist of only the atom types to be considered for given biomolecular interactions. The method is available as a web server module at http://protinfo.compbio.washington.edu. PMID:19127590
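    The flavor of a distance-dependent knowledge-based score with a mean reference state can be sketched as follows. The atom-pair types, distance bins, and counts here are invented for illustration; the actual r·m·r function tallies its distributions from CSD and PDB structures and uses a reduced atom-type composition:

```python
# Toy knowledge-based radial-distribution score with a mean reference
# state: a pair/bin combination seen more often than the all-pairs
# average gets a favorable (negative) score. Counts are invented.
import math

def build_potential(observed):
    """observed: {pair_type: [count per distance bin]}.
    Returns {pair_type: [-log(p_pair / p_mean) per bin]}, where the mean
    reference state averages over all pairwise atom types."""
    pairs = list(observed)
    n_bins = len(observed[pairs[0]])
    # per-pair distance distributions, with pseudocounts to avoid log(0)
    dist = {p: [(c + 1) / (sum(observed[p]) + n_bins) for c in observed[p]]
            for p in pairs}
    mean = [sum(dist[p][b] for p in pairs) / len(pairs) for b in range(n_bins)]
    return {p: [-math.log(dist[p][b] / mean[b]) for b in range(n_bins)]
            for p in pairs}

def score_pose(potential, contacts, bin_width=1.0):
    """Sum scores over (pair_type, distance) contacts in a candidate pose;
    lower totals should indicate more native-like interactions."""
    total = 0.0
    for pair, distance in contacts:
        b = int(distance / bin_width)
        bins = potential[pair]
        if b < len(bins):
            total += bins[b]
    return total

observed = {"C-O": [8, 2], "C-N": [2, 8]}   # invented bin counts
pot = build_potential(observed)
native = score_pose(pot, [("C-O", 0.5), ("C-N", 1.5)])  # contacts at preferred distances
decoy = score_pose(pot, [("C-O", 1.5), ("C-N", 0.5)])   # contacts at disfavored distances
```

    Discrimination in the sense of the abstract then amounts to checking that near-native poses score below incorrect ones.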

  4. Framework Support For Knowledge-Based Software Development

    NASA Astrophysics Data System (ADS)

    Huseth, Steve

    1988-03-01

    The advent of personal engineering workstations has brought substantial information processing power to the individual programmer. Advanced tools and environment capabilities supporting the software lifecycle are just beginning to become generally available. However, many of these tools are addressing only part of the software development problem by focusing on rapid construction of self-contained programs by a small group of talented engineers. Additional capabilities are required to support the development of large programming systems where a high degree of coordination and communication is required among large numbers of software engineers, hardware engineers, and managers. A major player in realizing these capabilities is the framework supporting the software development environment. In this paper we discuss our research toward a Knowledge-Based Software Assistant (KBSA) framework. We propose the development of an advanced framework containing a distributed knowledge base that can support the data representation needs of tools, provide environmental support for the formalization and control of the software development process, and offer a highly interactive and consistent user interface.

  5. Knowledge-based graphical interfaces for presenting technical information

    NASA Technical Reports Server (NTRS)

    Feiner, Steven

    1988-01-01

    Designing effective presentations of technical information is extremely difficult and time-consuming. Moreover, the combination of increasing task complexity and declining job skills makes the need for high-quality technical presentations especially urgent. We believe that this need can ultimately be met through the development of knowledge-based graphical interfaces that can design and present technical information. Since much material is most naturally communicated through pictures, our work has stressed the importance of well-designed graphics, concentrating on generating pictures and laying out displays containing them. We describe APEX, a testbed picture generation system that creates sequences of pictures that depict the performance of simple actions in a world of 3D objects. Our system supports rules for determining automatically the objects to be shown in a picture, the style and level of detail with which they should be rendered, the method by which the action itself should be indicated, and the picture's camera specification. We then describe work on GRIDS, an experimental display layout system that addresses some of the problems in designing displays containing these pictures, determining the position and size of the material to be presented.

  6. Knowledge-based control of an adaptive interface

    NASA Technical Reports Server (NTRS)

    Lachman, Roy

    1989-01-01

    The analysis, development strategy, and preliminary design for an intelligent, adaptive interface is reported. The design philosophy couples knowledge-based system technology with standard human factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent interface prototype because text-processing software can be developed initially as fully functional but limited with a small set of commands. The program's complexity then can be increased incrementally. The operator's behavior and three types of instructions to the underlying application software are included in the rule base. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.
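    The inference cycle described above, in which the engine matches rule antecedents against facts and emits the consequents of fired rules as commands to the underlying application, can be sketched as a minimal forward-matching loop. The rules and facts below are invented for illustration and are not from the described system:

```python
# Minimal sketch of a rule-based adaptive interface: antecedents are
# facts about the operator's current state; consequents are commands
# sent to the underlying application. Rule contents are invented.

def run_rules(facts, rules):
    """facts: set of fact strings; rules: list of (antecedents, command).
    Fires every rule whose antecedents all hold and returns the
    consequent commands in rule order."""
    commands = []
    for antecedents, command in rules:
        if all(a in facts for a in antecedents):
            commands.append(command)
    return commands

rules = [
    ({"mode:editing", "selection:word"}, "enable-format-menu"),
    ({"mode:editing", "idle>60s"}, "autosave"),
    ({"novice-user", "error:unknown-command"}, "show-help-panel"),
]
facts = {"mode:editing", "selection:word", "novice-user"}
commands = run_rules(facts, rules)
```

    A production inference engine would also handle conflict resolution and assert new facts from fired rules; this sketch only shows the match-and-fire step that turns operator state into application commands.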

  7. A knowledge-based system design/information tool

    NASA Technical Reports Server (NTRS)

    Allen, James G.; Sikora, Scott E.

    1990-01-01

    The objective of this effort was to develop a Knowledge Capture System (KCS) for the Integrated Test Facility (ITF) at the Dryden Flight Research Facility (DFRF). The DFRF is a NASA Ames Research Center (ARC) facility. This system was used to capture the design and implementation information for NASA's high angle-of-attack research vehicle (HARV), a modified F/A-18A. In particular, the KCS was used to capture specific characteristics of the design of the HARV fly-by-wire (FBW) flight control system (FCS). The KCS utilizes artificial intelligence (AI) knowledge-based system (KBS) technology. The KCS enables the user to capture the following characteristics of automated systems: the system design; the hardware (H/W) design and implementation; the software (S/W) design and implementation; and the utilities (electrical and hydraulic) design and implementation. A generic version of the KCS was developed which can be used to capture the design information for any automated system. The deliverable items for this project consist of the prototype generic KCS and an application, which captures selected design characteristics of the HARV FCS.

  8. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The benefits provided by the system favor a better performance of construction projects. PMID:24453925

  9. Verification of Legal Knowledge-base with Conflictive Concept

    NASA Astrophysics Data System (ADS)

    Hagiwara, Shingo; Tojo, Satoshi

    In this paper, we propose a verification methodology for large-scale legal knowledge. With a revision of a legal code, we are forced to revise other affected code as well, to keep the law consistent. Thus, our task is to revise the affected area properly and to investigate its adequacy. In this study, we extend the notion of inconsistency beyond ordinary logical inconsistency to include conceptual conflicts. We obtain these conflicts from taxonomy data, and thus we can avoid tedious manual declaration of opposing words. In the verification process, we adopt extended disjunctive logic programming (EDLP) to tolerate multiple consequences for a given set of antecedents. In addition, we employ abductive logic programming (ALP), regarding the situations to which the rules are applied as premises. Also, we restrict the legal knowledge-base to an acyclic program to avoid circular definitions and to justify the relevance of verdicts. Therefore, detecting cyclic parts of the legal knowledge is one of our objectives. The system is composed of two subsystems: a preprocessor implemented in Ruby to facilitate string manipulation, and a verifier implemented in Prolog to perform the logical inference. We employ an XML format in the system to retain readability. In this study, we verify actual ordinances of Toyama prefecture and show the experimental results.
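    The acyclicity requirement on the rule base amounts to cycle detection on a dependency graph from each rule head to the predicates in its body. A minimal sketch, with a hypothetical dependency graph rather than actual legal code:

```python
# Detecting cyclic definitions in a rule base, sketched as directed-cycle
# detection (iterative-coloring DFS) on a head -> body-predicates graph.
# The example graphs are invented illustrations, not real ordinances.

def has_cycle(graph):
    """Return True if the directed dependency graph contains a cycle."""
    color = {}  # absent = unvisited, "gray" = on stack, "black" = done

    def visit(node):
        color[node] = "gray"
        for succ in graph.get(node, ()):
            c = color.get(succ)
            if c == "gray":            # back edge: definition cycle
                return True
            if c is None and visit(succ):
                return True
        color[node] = "black"
        return False

    return any(color.get(n) is None and visit(n) for n in graph)

acyclic = {"resident": ["registered"], "registered": []}
cyclic = {"taxable": ["exempt"], "exempt": ["taxable"]}
print(has_cycle(acyclic), has_cycle(cyclic))   # → False True
```

    A production verifier would also report which predicates form the cycle, but the coloring idea is the same.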

  10. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
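    MYCIN's inexact reasoning combines certainty factors from independently fired rules with a standard pairwise formula; a minimal sketch (the example CF values are invented, not from the paper):

```python
# MYCIN-style certainty-factor combination for merging evidence from
# independently fired rules. Certainty factors range over [-1, 1].

def combine_cf(x, y):
    """Combine two certainty factors using MYCIN's standard formula."""
    if x >= 0 and y >= 0:
        return x + y - x * y          # reinforcing positive evidence
    if x <= 0 and y <= 0:
        return x + y + x * y          # reinforcing negative evidence
    return (x + y) / (1 - min(abs(x), abs(y)))   # conflicting evidence

# Two rules supporting the "news" class with CFs 0.6 and 0.4:
print(combine_cf(0.6, 0.4))           # → 0.76
# Conflicting evidence partially cancels:
print(round(combine_cf(0.6, -0.4), 4))
```

    The formula is commutative and associative for same-sign evidence, so rules may fire in any order.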

  11. Knowledge-based inference engine for online video dissemination

    NASA Astrophysics Data System (ADS)

    Zhou, Wensheng; Kuo, C.-C. Jay

    2000-10-01

    To facilitate easy access to the rich information of multimedia over the Internet, we develop a knowledge-based classification system that supports automatic indexing and filtering based on semantic concepts for the dissemination of online real-time media. Automatic segmentation, annotation, and summarization of media for fast information browsing and updating are achieved at the same time. In the proposed system, a real-time scene-change detection proxy performs an initial video structuring process by splitting a video clip into scenes. Motion and visual features are extracted in real time for every detected scene by online feature-extraction proxies. Higher semantics are then derived through a joint use of low-level features and inference rules in the knowledge base. Inference rules are derived through a supervised learning process based on representative samples. Online media filtering based on semantic concepts becomes possible with the proposed video inference engine. Video streams are either blocked or sent to certain channels depending on whether the video stream matches the user's profile. The proposed system is extensively evaluated by applying the engine to videos of basketball games.

  12. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2000-12-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  13. ISPE: A knowledge-based system for fluidization studies

    SciTech Connect

    Reddy, S.

    1991-01-01

    Chemical engineers use mathematical simulators to design, model, optimize, and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all specified goals are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant that performs a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information), and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER, and the C language is shown in Figure 1.
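    The prepare-execute-analyze loop that IPSE automates can be sketched generically. The simulator call, goal predicate, and adjustment rule below are hypothetical stand-ins, not the actual ASPEN interface:

```python
# Sketch of the iterative prepare -> simulate -> analyze loop automated by
# a simulation assistant. run_simulation, goals_met, and adjust are
# hypothetical callables standing in for the real simulator interface.

def refine_until_goals_met(params, run_simulation, goals_met, adjust, max_iter=20):
    """Repeat the simulation, adjusting inputs, until all goals hold."""
    for iteration in range(max_iter):
        results = run_simulation(params)
        if goals_met(results):
            return params, results, iteration
        params = adjust(params, results)
    raise RuntimeError("goals not satisfied within iteration budget")

# Toy stand-in: raise a flow rate until the simulated yield exceeds 0.9.
sim = lambda p: {"yield": 1 - 1 / (1 + p["flow"])}
final, res, n = refine_until_goals_met(
    {"flow": 1.0}, sim,
    goals_met=lambda r: r["yield"] > 0.9,
    adjust=lambda p, r: {"flow": p["flow"] * 2},
)
print(n, round(res["yield"], 3))   # → 4 0.941
```

    The knowledge-based part of a real assistant lives in `adjust`: rules and heuristics decide *how* to modify the input file, rather than blind doubling as in this toy.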

  14. Designing the Cloud-based DOE Systems Biology Knowledgebase

    SciTech Connect

    Lansing, Carina S.; Liu, Yan; Yin, Jian; Corrigan, Abigail L.; Guillen, Zoe C.; Kleese van Dam, Kerstin; Gorton, Ian

    2011-09-01

    Systems Biology research, even more than many other scientific domains, is becoming increasingly data-intensive. Not only have advances in experimental and computational technologies led to an exponential increase in scientific data volumes and their complexity, but such databases are themselves increasingly providing the basis for new scientific discoveries. To engage effectively with these community resources, software for integrated analysis, synthesis, and simulation is needed, regularly supported by scientific workflows. In order to provide a more collaborative, community-driven research environment for this heterogeneous setting, the Department of Energy (DOE) has decided to develop a federated, cloud-based cyberinfrastructure: the Systems Biology Knowledgebase (Kbase). Pacific Northwest National Laboratory (PNNL), with its long tradition in data-intensive science, led two of the five initial pilot projects, which focused on defining and testing the basic federated cloud-based system architecture and developing a prototype implementation. Community-wide accessibility of biological data and the capability to integrate and analyze these data within their changing research context were seen as key technical functionalities the Kbase needed to enable. In this paper we describe the results of our investigations into the design of a cloud-based federated infrastructure for: (1) semantics-driven data discovery, access, and integration; (2) data annotation, publication, and sharing; (3) workflow-enabled data analysis; and (4) project-based collaborative working. We describe our approach, exemplary use cases, and a prototype implementation that demonstrates the feasibility of this approach.

  15. Selection of construction methods: a knowledge-based approach.

    PubMed

    Ferrada, Ximena; Serpell, Alfredo; Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The benefits provided by the system favor a better performance of construction projects.

  16. Knowledge-based environment for hierarchical modeling and simulation

    SciTech Connect

    Kim, Taggon.

    1988-01-01

    This dissertation develops a knowledge-based environment for hierarchical modeling and simulation of discrete-event systems as the major part of a longer, ongoing research project in artificial intelligence and distributed simulation. In developing the environment, a knowledge representation framework for modeling and simulation, which unifies structural and behavioral knowledge of simulation models, is proposed by incorporating knowledge-representation schemes from artificial intelligence within simulation models. The knowledge base created using the framework is composed of a structural knowledge base called the entity structure base and a behavioral knowledge base called the model base. The DEVS-Scheme, a realization of the DEVS (Discrete Event System Specification) formalism in a LISP-based, object-oriented environment, is extended to facilitate the specification of behavioral knowledge of models, especially for kernel models that are suited to modeling massively parallel computer architectures. The ESP Scheme, a realization of the entity structure formalism in a frame-theoretic representation, is extended to represent structural knowledge of models and to manage it in the structural knowledge base.
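    A DEVS atomic model couples a state with time-advance, output, and internal-transition functions. The sketch below illustrates the formalism with a hypothetical pulse-generator model and internal events only; the dissertation's DEVS-Scheme is LISP-based, so this Python version is purely illustrative:

```python
# Minimal sketch of a DEVS atomic model: state, time advance ta(s),
# output lambda(s), and internal transition delta_int(s). The pulse
# generator is a hypothetical example; external events are omitted.

class PulseGenerator:
    def __init__(self, period):
        self.period = period
        self.count = 0              # state: number of pulses emitted so far

    def time_advance(self):         # ta(s): time until the next internal event
        return self.period

    def output(self):               # lambda(s): output emitted at event time
        return ("pulse", self.count)

    def internal_transition(self):  # delta_int(s): state after the event
        self.count += 1

def simulate(model, until):
    """Drive internal events of a single atomic model up to time `until`."""
    t, trace = 0.0, []
    while t + model.time_advance() <= until:
        t += model.time_advance()
        trace.append((t, model.output()))
        model.internal_transition()
    return trace

print(simulate(PulseGenerator(2.0), until=6.0))
# → [(2.0, ('pulse', 0)), (4.0, ('pulse', 1)), (6.0, ('pulse', 2))]
```

    A full DEVS realization adds an external transition function and couples atomic models hierarchically, which is exactly where the entity structure base comes in.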

  17. Compiling knowledge-based systems from KEE to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications - most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Here we describe the first efforts to develop a system for compiling KBS developed in KEE to Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights into Ada as an artificial intelligence programming language, potential solutions to some of the engineering difficulties encountered in early work, and inspiration for future system development.

  18. Embedded knowledge-based system for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Aboutalib, A. O.

    1990-10-01

    The development of a reliable Automatic Target Recognition (ATR) system is considered a very critical and challenging problem. Existing ATR systems have inherent limitations in terms of recognition performance and the ability to learn and adapt. Artificial intelligence techniques have the potential to improve the performance of ATR systems. In this paper, we present a novel knowledge-engineering tool, termed the Automatic Reasoning Process (ARP), that can be used to automatically develop and maintain a knowledge base (K-B) for ATR systems. In its learning mode, the ARP utilizes learning samples to automatically develop the ATR K-B, which consists of minimum-size sets of necessary and sufficient conditions for each target class. In its operational mode, the ARP infers the target class from sensor data using the ATR K-B system. The ARP also has the capability to reason under uncertainty, and can support both statistical and model-based approaches to ATR development. The capabilities of the ARP are compared and contrasted with those of another knowledge-engineering tool, termed the Automatic Rule Induction (ARI), which is based on maximizing mutual information. The ARP has been implemented in LISP on a VAX-GPX workstation.
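    The mutual-information criterion attributed to the ARI tool can be sketched by scoring a candidate binary condition against class labels. The data and class names below are invented placeholders:

```python
# Sketch of a mutual-information scoring criterion for rule induction:
# rank a binary condition by its estimated mutual information with the
# target class. The (feature, class) samples are hypothetical data.

from math import log2
from collections import Counter

def mutual_information(pairs):
    """I(F;C) in bits, estimated from (feature_value, class) pairs."""
    n = len(pairs)
    joint = Counter(pairs)                    # counts of (f, c)
    pf = Counter(f for f, _ in pairs)         # marginal counts of f
    pc = Counter(c for _, c in pairs)         # marginal counts of c
    return sum(
        (nfc / n) * log2((nfc / n) / ((pf[f] / n) * (pc[c] / n)))
        for (f, c), nfc in joint.items()
    )

# A condition perfectly aligned with a balanced two-class label carries 1 bit:
perfect = [(1, "tank")] * 5 + [(0, "truck")] * 5
print(round(mutual_information(perfect), 3))   # → 1.0
# An uninformative condition carries 0 bits:
useless = [(1, "tank"), (0, "tank"), (1, "truck"), (0, "truck")]
print(round(mutual_information(useless), 3))   # → 0.0
```

    An induction loop would evaluate every candidate condition this way and greedily keep the highest-scoring ones as rule antecedents.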

  19. A knowledge-based modeling for plantar pressure image reconstruction.

    PubMed

    Ostadabbas, Sarah; Nourani, Mehrdad; Saeed, Adnan; Yousefi, Rasoul; Pompeo, Matthew

    2014-10-01

    It is known that prolonged pressure on the plantar area is one of the main factors in developing foot ulcers. With current technology, electronic pressure monitoring systems can be placed as an insole into regular shoes to continuously monitor the plantar area, providing evidence on the ulcer formation process as well as insight for proper orthotic footwear design. The reliability of these systems heavily depends on the spatial resolution of their sensor platforms. However, due to cost and energy constraints, practical wireless in-shoe pressure monitoring systems have a limited number of sensors, i.e., typically K < 10. In this paper, we present a knowledge-based regression model (SCPM) to reconstruct a spatially continuous plantar pressure image from a small number of pressure sensors. This model makes use of high-resolution pressure data collected clinically to train a per-subject regression function. SCPM is shown to outperform all other tested interpolation methods for K < 60 sensors, with less than one-third of the error for K = 10 sensors. SCPM bridges the gap between technological capability and medical need and can play an important role in the adoption of sensing insoles for a wide range of medical applications.
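    The core idea, a per-subject regression from K sensor readings to a full-resolution image learned from high-resolution training frames, can be sketched with ordinary least squares. The dimensions and synthetic data below are placeholders, not clinical values, and the linear map is a simplification of the paper's model:

```python
# Sketch of knowledge-based reconstruction in the spirit of SCPM: fit a
# per-subject linear regression from K sensor readings to the full pressure
# image, using high-resolution training frames. All data are synthetic.

import numpy as np

rng = np.random.default_rng(0)
n_frames, n_pixels, k = 200, 64, 8        # training frames, image size, sensors
sensor_idx = rng.choice(n_pixels, size=k, replace=False)

# Synthetic high-resolution training frames with low-rank (smooth) structure.
basis = rng.normal(size=(3, n_pixels))
frames = rng.normal(size=(n_frames, 3)) @ basis

X = frames[:, sensor_idx]                 # what the K sensors would read
W, *_ = np.linalg.lstsq(X, frames, rcond=None)   # (k, n_pixels) regression map

# Reconstruct a held-out frame from its K sensor readings alone.
truth = rng.normal(size=3) @ basis
recon = truth[sensor_idx] @ W
err = np.linalg.norm(recon - truth) / np.linalg.norm(truth)
print(err < 0.05)   # → True
```

    The reconstruction succeeds here because the synthetic frames live in a low-dimensional subspace; real plantar images are approximately low-rank in the same spirit, which is what makes K sensors sufficient.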

  20. Knowledge-Based Query Construction Using the CDSS Knowledge Base for Efficient Evidence Retrieval.

    PubMed

    Afzal, Muhammad; Hussain, Maqbool; Ali, Taqdir; Hussain, Jamil; Khan, Wajahat Ali; Lee, Sungyoung; Kang, Byeong Ho

    2015-08-28

    Finding appropriate evidence to support clinical practices is always challenging, and the construction of a query to retrieve such evidence is a fundamental step. Typically, evidence is found using manual or semi-automatic methods, which are time-consuming and sometimes make it difficult to construct knowledge-based complex queries. To overcome this difficulty, we utilized the knowledge base (KB) of the clinical decision support system (CDSS), which has the potential to provide sufficient contextual information. To automatically construct knowledge-based complex queries, we designed methods to parse the rule structure in the KB of the CDSS in order to determine an executable path, and to extract the terms by parsing the control structures and logic connectives used in the logic. The automatically constructed knowledge-based complex queries were executed on the PubMed search service to evaluate the reduction in retrieved citations while maintaining high relevance. With the knowledge-based query construction approach, the average number of retrieved citations was reduced from 56,249 to 330, and queries grew from 1 term to 6 terms on average. Based on feedback collected from clinicians, the ability to automatically retrieve relevant evidence maximizes their time efficiency. This approach is generally useful in evidence-based medicine, especially in ambient assisted living environments where automation is highly important.
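    The term-extraction and query-construction step can be sketched as a walk over a rule's condition tree that joins leaf terms with boolean connectives. The rule below is an invented illustration, not a real CDSS knowledge base or the authors' parser:

```python
# Hypothetical sketch of knowledge-based query construction: recursively
# render a CDSS rule's AND/OR condition tree as a boolean, PubMed-style
# query string. The example rule is invented, not from a real KB.

def to_query(node):
    """Render a condition tree; leaves are terms, inner nodes are (op, children)."""
    if isinstance(node, str):
        return f'"{node}"'                    # quote multi-word terms
    op, children = node
    return "(" + f" {op} ".join(to_query(c) for c in children) + ")"

rule = ("AND", [
    "type 2 diabetes",
    ("OR", ["metformin", "sulfonylurea"]),
    "renal impairment",
])
print(to_query(rule))
# → ("type 2 diabetes" AND ("metformin" OR "sulfonylurea") AND "renal impairment")
```

    Moving from a single hand-typed term to a connective-rich query like this is precisely what narrows tens of thousands of citations down to a few hundred relevant ones.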

  1. Knowledge-Based Query Construction Using the CDSS Knowledge Base for Efficient Evidence Retrieval

    PubMed Central

    Afzal, Muhammad; Hussain, Maqbool; Ali, Taqdir; Hussain, Jamil; Khan, Wajahat Ali; Lee, Sungyoung; Kang, Byeong Ho

    2015-01-01

    Finding appropriate evidence to support clinical practices is always challenging, and the construction of a query to retrieve such evidence is a fundamental step. Typically, evidence is found using manual or semi-automatic methods, which are time-consuming and sometimes make it difficult to construct knowledge-based complex queries. To overcome this difficulty, we utilized the knowledge base (KB) of the clinical decision support system (CDSS), which has the potential to provide sufficient contextual information. To automatically construct knowledge-based complex queries, we designed methods to parse the rule structure in the KB of the CDSS in order to determine an executable path, and to extract the terms by parsing the control structures and logic connectives used in the logic. The automatically constructed knowledge-based complex queries were executed on the PubMed search service to evaluate the reduction in retrieved citations while maintaining high relevance. With the knowledge-based query construction approach, the average number of retrieved citations was reduced from 56,249 to 330, and queries grew from 1 term to 6 terms on average. Based on feedback collected from clinicians, the ability to automatically retrieve relevant evidence maximizes their time efficiency. This approach is generally useful in evidence-based medicine, especially in ambient assisted living environments where automation is highly important. PMID:26343669

  2. Detailed Design of the Heliophysics Event Knowledgebase (HEK)

    NASA Astrophysics Data System (ADS)

    Somani, Ankur; Seguin, R.; Timmons, R.; Freeland, S.; Hurlburt, N.; Kobashi, A.; Jaffey, A.

    2010-05-01

    We present the Heliophysics Event Registry (HER) and the Heliophysics Coverage Registry (HCR), which serve as two components of the Heliophysics Event Knowledgebase (HEK). Using standardized XML formats built upon the IVOA VOEvent specification, events can be ingested, stored, and later searched upon. Various web services and SolarSoft routines are available to aid in these functions. One source of events for the HEK is an automated Event Detection System (EDS) that continuously runs feature finding modules on SDO data. Modules are primarily supplied by the Smithsonian Astrophysical Observatory-led Feature Finding Team. The distributed system will keep up with SDO's data rate and issue space weather alerts in near-real time. Some modules will be run on all data while others are run in response to certain solar phenomena found by other modules in the system. Panorama is a software tool used for rapid visualization of large volumes of solar image data in multiple channels/wavelengths. With the EVACS front-end GUI tool, Panorama allows the user to, in real-time, change channel pixel scaling, weights, alignment, blending and colorization of the data. The user can also easily create WYSIWYG movies and launch the Annotator tool to describe events and features the user observes in the data. Panorama can also be used to drive clustered HiperSpace walls using the CGLX toolkit. The Event Viewer and Control Software (EVACS) provides a GUI that the user can search both the HER and HCR with. By specifying a start and end time and selecting the types of events and instruments that are of interest, EVACS will display the events on a full disk image of the sun while displaying more detailed information for the events. As mentioned, the user can also launch Panorama via EVACS.

  3. Autonomous Cryogenic Load Operations: Knowledge-Based Autonomous Test Engineer

    NASA Technical Reports Server (NTRS)

    Schrading, J. Nicolas

    2013-01-01

    The Knowledge-Based Autonomous Test Engineer (KATE) program has a long history at KSC. Now a part of the Autonomous Cryogenic Load Operations (ACLO) mission, this software system has been sporadically developed over the past 20 years. Originally designed to provide health and status monitoring for a simple water-based fluid system, it was proven to be a capable autonomous test engineer for determining sources of failure in the system. As part of a new goal to provide this same anomaly-detection capability for a complicated cryogenic fluid system, software engineers, physicists, interns and KATE experts are working to upgrade the software capabilities and graphical user interface. Much progress was made during this effort to improve KATE. A display of the entire cryogenic system's graph, with nodes for components and edges for their connections, was added to the KATE software. A searching functionality was added to the new graph display, so that users could easily center their screen on specific components. The GUI was also modified so that it displayed information relevant to the new project goals. In addition, work began on adding new pneumatic and electronic subsystems into the KATE knowledge base, so that it could provide health and status monitoring for those systems. Finally, many fixes for bugs, memory leaks, and memory errors were implemented and the system was moved into a state in which it could be presented to stakeholders. Overall, the KATE system was improved and necessary additional features were added so that a presentation of the program and its functionality in the next few months would be a success.

  4. MetaShare: Enabling Knowledge-Based Data Management

    NASA Astrophysics Data System (ADS)

    Pennington, D. D.; Salayandia, L.; Gates, A.; Osuna, F.

    2013-12-01

    MetaShare is a free and open-source knowledge-based system for supporting data management planning, now required by some agencies and publishers. MetaShare supports users as they describe the types of data they will collect, expected standards, and expected policies for sharing. MetaShare's semantic model captures relationships between disciplines, tools, data types, data formats, and metadata standards. As users plan their data management activities, MetaShare recommends choices based on practices and decisions from a community that has used the system for similar purposes, and extends the knowledge base to capture new relationships. The MetaShare knowledge base is being seeded with information for the geoscience and environmental science domains, and is currently undergoing testing at the University of Texas at El Paso. Through time and usage, it is expected to grow to support a variety of research domains, enabling community-based learning of data management practices. Knowledge of a user's choices during the planning phase can be used to support other tasks in the data life cycle, e.g., collecting, disseminating, and archiving data. A key barrier to scientific data sharing is the lack of sufficient metadata providing the context under which data were collected. The next phase of MetaShare development will automatically generate data collection instruments with embedded metadata and semantic annotations based on the information provided during the planning phase. While not comprehensive, this metadata will be sufficient for discovery and will enable users to focus on more detailed descriptions of their data. Details are available at: Salayandia, L., Pennington, D., Gates, A., and Osuna, F. (accepted). MetaShare: From data management plans to knowledge base systems. AAAI Fall Symposium Series Workshop on Discovery Informatics, November 15-17, 2013, Arlington, VA.

  5. Installing a Local Copy of the Reactome Web Site and Knowledgebase

    PubMed Central

    McKay, Sheldon J; Weiser, Joel

    2015-01-01

    The Reactome project builds, maintains, and publishes a knowledgebase of biological pathways. The information in the knowledgebase is gathered from the experts in the field, peer reviewed, and edited by Reactome editorial staff and then published to the Reactome Web site, http://www.reactome.org (see UNIT 8.7; Croft et al., 2013). The Reactome software is open source and builds on top of other open-source or freely available software. Reactome data and code can be freely downloaded in its entirety and the Web site installed locally. This allows for more flexible interrogation of the data and also makes it possible to add one’s own information to the knowledgebase. PMID:26087747

  6. A knowledge-based approach to automated flow-field zoning for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1989-01-01

    An automated three-dimensional zonal grid generation capability for computational fluid dynamics is shown through the development of a demonstration computer program capable of automatically zoning the flow field of representative two-dimensional (2-D) aerodynamic configurations. The applicability of a knowledge-based programming approach to the domain of flow-field zoning is examined. Several aspects of flow-field zoning make the application of knowledge-based techniques challenging: the need for perceptual information, the role of individual bias in the design and evaluation of zonings, and the fact that the zoning process is modeled as a constructive, design-type task (for which there are relatively few examples of successful knowledge-based systems in any domain). Engineering solutions to the problems arising from these aspects are developed, and a demonstration system is implemented which can design, generate, and output flow-field zonings for representative 2-D aerodynamic configurations.

  7. County level population estimation using knowledge-based image classification and regression models

    NASA Astrophysics Data System (ADS)

    Nepali, Anjeev

    This paper presents methods and results of county-level population estimation using Landsat Thematic Mapper (TM) images of Denton County and Collin County in Texas. Landsat TM images acquired in March 2000 were classified into residential and non-residential classes using maximum likelihood classification and knowledge-based classification methods. Accuracy assessment results from the classified image produced using knowledge-based classification and traditional supervised classification (maximum likelihood classification) methods suggest that knowledge-based classification is more effective than traditional supervised classification methods. Furthermore, using randomly selected samples of census block groups, ordinary least squares (OLS) and geographically weighted regression (GWR) models were created for total population estimation. The overall accuracy of the models is over 96% at the county level. The results also suggest that underestimation normally occurs in block groups with high population density, whereas overestimation occurs in block groups with low population density.
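    The OLS step relating classified residential pixels to census population can be sketched in closed form. The pixel and population figures below are invented placeholders, not the Denton or Collin County data:

```python
# Hedged sketch of the regression step: simple ordinary least squares
# relating residential pixel counts (from a classified image) to census
# block-group population. All numbers are invented placeholders.

def ols_fit(x, y):
    """Closed-form simple OLS: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx, slope

# Residential pixels per sampled block group vs. census population.
pixels = [120, 340, 90, 560, 410]
people = [600, 1700, 450, 2800, 2050]
b0, b1 = ols_fit(pixels, people)
estimate = b0 + b1 * 250          # estimate for an unsampled block group
print(round(b1, 2), round(estimate))   # → 5.0 1250
```

    A geographically weighted regression (GWR) refines this by fitting locally, weighting each sample by its distance to the location being estimated, which is what lets it adapt to the density differences the paper reports.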

  8. PRAIS: Distributed, real-time knowledge-based systems made easy

    NASA Technical Reports Server (NTRS)

    Goldstein, David G.

    1990-01-01

    This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives to transparently parallelize production (rule-based) systems, even under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating-system extensions for fact handling, and message passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.

  9. A national knowledge-based crop recognition in Mediterranean environment

    NASA Astrophysics Data System (ADS)

    Cohen, Yafit; Shoshany, Maxim

    2002-08-01

Population growth, urban expansion, land degradation, civil strife and war may place plant natural resources for food and agriculture at risk. Crop and yield monitoring is basic information necessary for wise management of these resources. Satellite remote sensing techniques have proven to be cost-effective in widespread agricultural lands in Africa, America, Europe and Australia. However, they have had limited success in Mediterranean regions that are characterized by a high rate of spatio-temporal ecological heterogeneity and high fragmentation of farming lands. An integrative knowledge-based approach is needed for this purpose, which combines imagery and geographical data within the framework of an intelligent recognition system. This paper describes the development of such a crop recognition methodology and its application to an area that comprises approximately 40% of the cropland in Israel. This area contains eight crop types that represent 70% of Israeli agricultural production. Multi-date Landsat TM images representing seasonal vegetation cover variations were converted to normalized difference vegetation index (NDVI) layers. Field boundaries were delineated by merging Landsat data with SPOT-panchromatic images. Crop recognition was then achieved in two phases: clustering multi-temporal NDVI layers using unsupervised classification, and then applying 'split-and-merge' rules to these clusters. These rules were formalized through comprehensive learning of relationships between crop types, imagery properties (spectral and NDVI) and auxiliary data including agricultural knowledge, precipitation and soil types. Assessment of the recognition results using ground data from the Israeli Agriculture Ministry indicated an average recognition accuracy exceeding 85% which accounts for both omission and commission errors. The two-phase strategy implemented in this study is apparently successful for heterogeneous regions. This is due to the fact that it allows

  10. Extending the Learning Experience Using the Web and a Knowledge-Based Virtual Environment.

    ERIC Educational Resources Information Center

    Parkinson, B.; Hudson, P.

    2002-01-01

    Identifies problems associated with teaching and learning a complex subject such as engineering design within a restrictive educational environment. Describes the development of a Web-based computer aid in the United Kingdom which employs a multimedia virtual environment incorporating domain-specific knowledge-based systems to emulate a range of…

  11. A Comparison of Books and Hypermedia for Knowledge-based Sports Coaching.

    ERIC Educational Resources Information Center

    Vickers, Joan N.; Gaines, Brian R.

    1988-01-01

    Summarizes and illustrates the knowledge-based approach to instructional material design. A series of sports coaching handbooks and hypermedia presentations of the same material are described and the different instantiations of the knowledge and training structures are compared. Figures show knowledge structures for badminton and the architecture…

  12. Limitations of Levels, Learning Outcomes and Qualifications as Drivers Towards a More Knowledge-Based Society

    ERIC Educational Resources Information Center

    Brown, Alan

    2008-01-01

    National (and European) qualifications frameworks, the specification of learning outcomes and grand targets like the Lisbon goals of increasing the supply of graduates in Europe in order to achieve a more knowledge-based society are all predicated upon the idea of moving people through to higher and well-defined levels of skills, knowledge and…

  13. GUIDON-WATCH: A Graphic Interface for Viewing a Knowledge-Based System. Technical Report #14.

    ERIC Educational Resources Information Center

    Richer, Mark H.; Clancey, William J.

    This paper describes GUIDON-WATCH, a graphic interface that uses multiple windows and a mouse to allow a student to browse a knowledge base and view reasoning processes during diagnostic problem solving. The GUIDON project at Stanford University is investigating how knowledge-based systems can provide the basis for teaching programs, and this…

  14. New knowledge-based genetic algorithm for excavator boom structural optimization

    NASA Astrophysics Data System (ADS)

    Hua, Haiyan; Lin, Shuwen

    2014-03-01

Because existing genetic algorithms make insufficient use of knowledge to guide the complex optimal search, they fail to effectively solve the excavator boom structural optimization problem. To improve optimization efficiency and quality, a new knowledge-based real-coded genetic algorithm is proposed. A dual evolution mechanism combining knowledge evolution with the genetic algorithm is established to extract, handle and utilize the shallow and deep implicit constraint knowledge to guide the optimal search of the genetic algorithm. Based on this dual evolution mechanism, knowledge evolution and population evolution can be connected by knowledge influence operators to improve the configurability of knowledge and genetic operators. New knowledge-based selection, crossover and mutation operators are then proposed to integrate optimal process knowledge and domain culture to guide the excavator boom structural optimization. Eight testing algorithms, which include different genetic operators, are taken as examples to solve the structural optimization of a medium-sized excavator boom. A comparison of the optimization results shows that the algorithm including all the new knowledge-based genetic operators improves the evolutionary rate and searching ability more markedly than the other testing algorithms, which demonstrates the effectiveness of knowledge for guiding the optimal search. The proposed knowledge-based genetic algorithm, combining multi-level knowledge evolution with numerical optimization, provides a new effective method for solving complex engineering optimization problems.
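The idea of letting constraint knowledge steer genetic operators can be caricatured in a short sketch. The toy objective, the "stress" rule, and all constants below are illustrative assumptions, not the paper's excavator-boom model: a violated constraint triggers a directed mutation instead of a blind one.

```python
import random

random.seed(0)

def fitness(x):
    # toy "mass" objective with a penalty when "stress" 1/x exceeds the limit 2
    stress = 1.0 / x
    return x + 1000.0 * max(0.0, stress - 2.0)

def knowledge_mutate(x):
    # shallow constraint knowledge: a violated stress limit triggers a
    # directed mutation (thicken the member) instead of a random one
    if 1.0 / x > 2.0:
        return x + random.uniform(0.0, 0.2)
    return max(0.1, x + random.uniform(-0.1, 0.1))

pop = [random.uniform(0.1, 5.0) for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness)
    parents = pop[:10]                            # truncation selection
    children = [knowledge_mutate(0.5 * (a + b))   # arithmetic crossover
                for a, b in zip(parents, reversed(parents))]
    pop = parents + children

best = min(pop, key=fitness)  # expected near the feasibility boundary x = 0.5
```

The knowledge-based mutation repairs infeasible individuals in a directed way, which is the one-variable analogue of the paper's claim that knowledge-guided operators improve the evolutionary rate over blind ones.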

  15. Universities and the Knowledge-Based Economy: Perceptions from a Developing Country

    ERIC Educational Resources Information Center

    Bano, Shah; Taylor, John

    2015-01-01

    This paper considers the role of universities in the creation of a knowledge-based economy (KBE) in a developing country, Pakistan. Some developing countries have moved quickly to develop a KBE, but progress in Pakistan is much slower. Higher education plays a crucial role as part of the triple helix model for innovation. Based on the perceptions…

  16. Learning and Innovation in the Knowledge-Based Economy: Beyond Clusters and Qualifications

    ERIC Educational Resources Information Center

    James, Laura; Guile, David; Unwin, Lorna

    2013-01-01

    For over a decade policy-makers have claimed that advanced industrial societies should develop a knowledge-based economy (KBE) in response to economic globalisation and the transfer of manufacturing jobs to lower cost countries. In the UK, this vision shaped New Labour's policies for vocational education and training (VET), higher education…

  17. Metadata-based generation and management of knowledgebases from molecular biological databases.

    PubMed

    Eccles, J R; Saldanha, J W

    1990-06-01

    Present-day knowledge-based systems (or expert systems) and databases constitute 'islands of computing' with little or no connection to each other. The use of software to provide a communication channel between the two, and to integrate their separate functions, is particularly attractive in certain data-rich domains where there are already pre-existing database systems containing the data required by the relevant knowledge-based system. Our evolving program, GENPRO, provides such a communication channel. The original methodology has been extended to provide interactive Prolog clause input with syntactic and semantic verification. This enables automatic generation of clauses from the source database, together with complete management of subsequent interfacing to the specified knowledge-based system. The particular data-rich domain used in this paper is protein structure, where processes which require reasoning (modelled by knowledge-based systems), such as the inference of protein topology, protein model-building and protein structure prediction, often require large amounts of raw data (i.e., facts about particular proteins) in the form of logic programming ground clauses. These are generated in the proper format by use of the concept of metadata. PMID:2397635
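The clause-generation step described above can be sketched as follows. The predicate name, columns, and rows are invented for illustration; they are not taken from GENPRO or a real protein database. A metadata record names the target predicate and the column order, and each database row becomes one Prolog ground clause.

```python
# Hypothetical sketch of metadata-driven fact generation in the spirit of
# GENPRO: rows from a mocked protein table are turned into Prolog ground
# clauses using a metadata record naming the predicate and column order.
metadata = {"predicate": "helix", "columns": ["protein", "start", "end"]}
rows = [("crambin", 7, 17), ("crambin", 23, 30)]

def to_clause(meta, row):
    # quote atoms, leave integers bare, terminate with a full stop
    args = ", ".join(str(v) if isinstance(v, int) else f"'{v}'" for v in row)
    return f"{meta['predicate']}({args})."

clauses = [to_clause(metadata, r) for r in rows]
# clauses == ["helix('crambin', 7, 17).", "helix('crambin', 23, 30)."]
```

The metadata record plays the role of the communication channel: the knowledge-based system consumes the emitted ground clauses without knowing anything about the source database schema.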

  18. A knowledge-based object recognition system for applications in the space station

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching for the recognition of 3D objects in the two-dimensional (2D) image for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and the feature extraction are completed. The data structure of the primitive viewing knowledge-base (PVKB) is also completed. Algorithms and programs based on attribute-tree matching for decomposing the segmented data into valid primitives were developed. The frame-based structural and relational descriptions of some objects were created and stored in a knowledge-base. This knowledge-base of frame-based descriptions was developed on the MICROVAX-AI microcomputer in a LISP environment. The simulated 3D scene of simple non-overlapping objects as well as real camera data of images of 3D objects of low complexity have been successfully interpreted.

  19. Knowledge-Based Indexing of the Medical Literature: The Indexing Aid Project.

    ERIC Educational Resources Information Center

    Humphrey, Suzanne; Miller, Nancy E.

    1987-01-01

    Describes the National Library of Medicine's (NLM) Indexing Aid Project for conducting research in knowledge representation and indexing for information retrieval, whose goal is to develop interactive knowledge-based systems for computer-assisted indexing of the periodical medical literature. Appendices include background information on NLM…

  20. Learning Spaces: An ICT-Enabled Model of Future Learning in the Knowledge-Based Society

    ERIC Educational Resources Information Center

    Punie, Yves

    2007-01-01

    This article presents elements of a future vision of learning in the knowledge-based society which is enabled by ICT. It is not only based on extrapolations from trends and drivers that are shaping learning in Europe but also consists of a holistic attempt to envisage and anticipate future learning needs and requirements in the KBS. The "learning…

  1. The Spread of Contingent Work in the Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Szabo, Katalin; Negyesi, Aron

    2005-01-01

    Permanent employment, typical of industrial societies and bolstered by numerous social guaranties, has been declining in the past 2 decades. There has been a steady expansion of various forms of contingent work. The decomposition of traditional work is a logical consequence of the characteristic patterns of the knowledge-based economy. According…

  2. A NASA/RAE cooperation in the development of a real-time knowledge-based autopilot

    NASA Technical Reports Server (NTRS)

    Daysh, Colin; Corbin, Malcolm; Butler, Geoff; Duke, Eugene L.; Belle, Steven D.; Brumbaugh, Randal W.

    1991-01-01

    As part of a US/UK cooperative aeronautical research program, a joint activity between the NASA Dryden Flight Research Facility and the Royal Aerospace Establishment on knowledge-based systems was established. This joint activity is concerned with tools and techniques for the implementation and validation of real-time knowledge-based systems. The proposed next stage of this research is described, in which some of the problems of implementing and validating a knowledge-based autopilot for a generic high-performance aircraft are investigated.

  3. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium...

  4. Knowledge-Based Reinforcement Learning for Data Mining

    NASA Astrophysics Data System (ADS)

    Kudenko, Daniel; Grzes, Marek

experts have developed heuristics that help them in planning and scheduling resources in their work place. However, this domain knowledge is often rough and incomplete. When the domain knowledge is used directly by an automated expert system, the solutions are often sub-optimal, due to the incompleteness of the knowledge, the uncertainty of environments, and the possibility of encountering unexpected situations. RL, on the other hand, can overcome the weaknesses of the heuristic domain knowledge and produce optimal solutions. In the talk we propose two techniques, which represent first steps in the area of knowledge-based RL (KBRL). The first technique [1] uses high-level STRIPS operator knowledge in reward shaping to focus the search for the optimal policy. Empirical results show that the plan-based reward shaping approach outperforms other RL techniques, including alternative manual and MDP-based reward shaping when it is used in its basic form. We showed that MDP-based reward shaping may fail and successful experiments with STRIPS-based shaping suggest modifications which can overcome the encountered problems. The STRIPS-based method we propose allows the same domain knowledge to be expressed in a different way, and the domain expert can choose whether to define an MDP or a STRIPS planning task. We also evaluated the robustness of the proposed STRIPS-based technique to errors in the plan knowledge. When STRIPS knowledge is not available, we propose a second technique [2] that shapes the reward with hierarchical tile coding. Where the Q-function is represented with low-level tile coding, a V-function with coarser tile coding can be learned in parallel and used to approximate the potential for ground states. In the context of data mining, our KBRL approaches can also be used for any data collection task where the acquisition of data may incur considerable cost.
In addition, observing the data collection agent in specific scenarios may lead to new insights into optimal data
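The mechanism common to both techniques described above is potential-based reward shaping, which can be sketched on a toy task. The 1-D chain, the potential function, and all constants below are illustrative assumptions, not from the talk: the potential stands in for the plan or V-function knowledge, and the shaping term F(s, s') = γΦ(s') − Φ(s) is added to the environment reward.

```python
import random

random.seed(1)
N, GOAL, GAMMA, ALPHA, EPS = 10, 9, 0.95, 0.5, 0.1

def potential(s):
    # "domain knowledge": states nearer the goal get a higher potential
    return s / GOAL

Q = [[0.0, 0.0] for _ in range(N)]        # two actions: 0 = left, 1 = right
for _ in range(200):                      # episodes of epsilon-greedy Q-learning
    s = 0
    while s != GOAL:
        if random.random() < EPS:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda act: Q[s][act])
        s2 = min(GOAL, s + 1) if a == 1 else max(0, s - 1)
        r = 1.0 if s2 == GOAL else 0.0
        r += GAMMA * potential(s2) - potential(s)   # shaping term F(s, s')
        target = r if s2 == GOAL else r + GAMMA * max(Q[s2])
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda act: Q[s][act]) for s in range(N)]
```

Without shaping, every step before the goal returns zero reward and the search is blind; the potential difference makes progress toward the goal immediately rewarding while (by the standard potential-based shaping result) leaving the optimal policy unchanged.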

  5. Mathematics/Arithmetic Knowledge-Based Way of Thinking and Its Maintenance Needed for Engineers

    NASA Astrophysics Data System (ADS)

    Harada, Shoji

Examining curricula among universities revealed no significant difference in math classes or related subjects. However, the amount and depth of those studies generally differed depending on the content of the curriculum and the level of achievement at entrance to the individual university. The universalization of higher education shows that students have many problems in learning higher-level traditional math, and that the memory of the math they learned quickly fades away after passing the exam. This implies that engineers need to keep developing their higher-math knowledge base after graduating from university. Under these circumstances, the present author, as a fan of math, proposes how to maintain the way of thinking generated by math knowledge. What is necessary for an engineer is to pay attention to common books dealing with elementary mathematics or arithmetic-related matters. This surely leads engineers to nourish a math/arithmetic knowledge-based way of thinking.

  6. A knowledge-based framework for image enhancement in aviation security.

    PubMed

    Singh, Maneesha; Singh, Sameer; Partridge, Derek

    2004-12-01

The main aim of this paper is to present a knowledge-based framework for automatically selecting the best image enhancement algorithm from several available, on a per-image basis, in the context of X-ray images of airport luggage. The approach detailed involves a system that learns to map image features that represent its viewability to one or more chosen enhancement algorithms. Viewability measures have been developed to provide an automatic check on the quality of the enhanced image, i.e., is it really enhanced? The choice is based on ground-truth information generated by human X-ray screening experts. For a new image, such a system predicts the best-suited enhancement algorithm. Our research details the various characteristics of the knowledge-based system and shows extensive results on real images.
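The select-the-best-enhancer idea can be sketched minimally. The feature (global contrast), the two candidate algorithms, the training features, and the "expert" labels below are illustrative stand-ins for the paper's learned mapping and viewability measures, not its actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

def contrast(img):
    return float(img.std())          # a crude stand-in "viewability" feature

def stretch(img):                    # candidate 1: linear contrast stretch
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-9)

def equalize(img):                   # candidate 2: rank-based equalization
    flat = img.ravel()
    ranks = flat.argsort().argsort()
    return (ranks / (flat.size - 1)).reshape(img.shape)

# mock "expert" ground truth: low-contrast images -> equalize, else stretch
train_feats = np.array([0.02, 0.05, 0.20, 0.30])
train_label = np.array([1, 1, 0, 0])          # 1 = equalize, 0 = stretch

def select_algorithm(img):
    # 1-nearest-neighbour over the single viewability feature
    label = train_label[np.abs(train_feats - contrast(img)).argmin()]
    return equalize if label == 1 else stretch

flat_img = rng.uniform(0.45, 0.55, size=(8, 8))   # a low-contrast image
chosen = select_algorithm(flat_img)
enhanced = chosen(flat_img)
```

A real system would use many features and a trained classifier rather than one feature and nearest-neighbour lookup, but the structure (features in, algorithm out, labels from expert screeners) is the same.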

  7. Increasing levels of assistance in refinement of knowledge-based retrieval systems

    NASA Technical Reports Server (NTRS)

    Baudin, Catherine; Kedar, Smadar; Pell, Barney

    1994-01-01

    The task of incrementally acquiring and refining the knowledge and algorithms of a knowledge-based system in order to improve its performance over time is discussed. In particular, the design of DE-KART, a tool whose goal is to provide increasing levels of assistance in acquiring and refining indexing and retrieval knowledge for a knowledge-based retrieval system, is presented. DE-KART starts with knowledge that was entered manually, and increases its level of assistance in acquiring and refining that knowledge, both in terms of the increased level of automation in interacting with users, and in terms of the increased generality of the knowledge. DE-KART is at the intersection of machine learning and knowledge acquisition: it is a first step towards a system which moves along a continuum from interactive knowledge acquisition to increasingly automated machine learning as it acquires more knowledge and experience.

  8. PVDaCS - A prototype knowledge-based expert system for certification of spacecraft data

    NASA Technical Reports Server (NTRS)

    Wharton, Cathleen; Shiroma, Patricia J.; Simmons, Karen E.

    1989-01-01

On-line data management techniques to certify spacecraft information are mandated by increasing telemetry rates. Knowledge-based expert systems offer the ability to certify data electronically without the need for time-consuming human interaction. Issues of automatic certification are explored by designing a knowledge-based expert system to certify data from a scientific instrument, the Orbiter Ultraviolet Spectrometer, on an operating NASA planetary spacecraft, Pioneer Venus. The resulting rule-based system, called PVDaCS (Pioneer Venus Data Certification System), is a functional prototype demonstrating the concepts of a larger system design. A key element of the system design is the representation of an expert's knowledge through the use of well-ordered sequences. PVDaCS produces a certification value derived from expert knowledge and an analysis of the instrument's operation. Results of system performance are presented.

  9. Knowledge-based vision for space station object motion detection, recognition, and tracking

    NASA Technical Reports Server (NTRS)

    Symosek, P.; Panda, D.; Yalamanchili, S.; Wehner, W., III

    1987-01-01

    Computer vision, especially color image analysis and understanding, has much to offer in the area of the automation of Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management and training. Knowledge-based techniques improve the performance of vision algorithms for unstructured environments because of their ability to deal with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional techniques using statistical and purely model-based approaches lack flexibility in dealing with the variabilities anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications to demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP) are presented. Approaches to the enhancement of the performance of these algorithms with knowledge-based techniques and the potential for deployment of highly-parallel multi-processor systems for these algorithms are discussed.

  10. Interpreting Segmented Laser Radar Images Using a Knowledge-Based System

    NASA Astrophysics Data System (ADS)

    Chu, Chen-Chau; Nandhakumar, Nagaraj; Aggarwal, Jake K.

    1990-03-01

This paper presents a knowledge-based system (KBS) for man-made object recognition and image interpretation using laser radar (ladar) images. The objective is to recognize military vehicles in rural scenes. The knowledge-based system is constructed using KEE rules and Lisp functions, and uses results from pre-processing modules for image segmentation and integration of segmentation maps. Low-level attributes of segments are computed and converted to KEE format as part of the data bases. The interpretation modules detect man-made objects from the background using low-level attributes. Segments are grouped into objects, and then man-made objects and background segments are classified into pre-defined categories (tanks, ground, etc.). A concurrent server program is used to enhance the performance of the KBS by serving numerical and graphics-oriented tasks for the interpretation modules. Experimental results using real ladar data are presented.

  11. Delivering spacecraft control centers with embedded knowledge-based systems: The methodology issue

    NASA Technical Reports Server (NTRS)

    Ayache, S.; Haziza, M.; Cayrac, D.

    1994-01-01

Matra Marconi Space (MMS) occupies a leading place in Europe in the domain of satellite and space data processing systems. The maturity of the knowledge-based systems (KBS) technology and the theoretical and practical experience acquired in the development of prototype, pre-operational and operational applications make it possible today to consider the wide operational deployment of KBSs in space applications. In this perspective, MMS has to prepare the introduction of the new methods and support tools that will form the basis of the development of such systems. This paper introduces elements of the MMS methodology initiatives in the domain and the main rationale that motivated the approach. These initiatives develop along two main axes: knowledge engineering methods and tools, and a hybrid method approach for coexisting knowledge-based and conventional developments.

  12. Strategic Concept of Competition Model in Knowledge-Based Logistics in Machinebuilding

    NASA Astrophysics Data System (ADS)

    Medvedeva, O. V.

    2015-09-01

A competitive labor market needs serious change, and machinebuilding is one of the main problem domains. The current drive to promote competition in human capital demands modernization. It is therefore necessary to develop a strategy for the social and economic promotion of competition under a knowledge-based economy, in particular in machinebuilding. The necessity of such a strategy is demonstrated, as well as the basic difficulties it faces in machinebuilding.

  13. Towards knowledge-based retrieval of medical images. The role of semantic indexing, image content representation and knowledge-based retrieval.

    PubMed

    Lowe, H J; Antipov, I; Hersh, W; Smith, C A

    1998-01-01

Medicine is increasingly image-intensive. The central importance of imaging technologies such as computerized tomography and magnetic resonance imaging in clinical decision making, combined with the trend to store many "traditional" clinical images such as conventional radiographs, microscopic pathology and dermatology images in digital format, presents both challenges and opportunities for the designers of clinical information systems. The emergence of Multimedia Electronic Medical Record Systems (MEMRS), architectures that integrate medical images with text-based clinical data, will further hasten this trend. The development of these systems, storing a large and diverse set of medical images, suggests that in the future MEMRS will become important digital libraries supporting patient care, research and education. The representation and retrieval of clinical images within these systems is problematic as conventional database architectures and information retrieval models have, until recently, focused largely on text-based data. Medical imaging data differs in many ways from text-based medical data but perhaps the most important difference is that the information contained within imaging data is fundamentally knowledge-based. New representational and retrieval models for clinical images will be required to address this issue. Within the Image Engine multimedia medical record system project at the University of Pittsburgh we are evolving an approach to representation and retrieval of medical images which combines semantic indexing using the UMLS Metathesaurus, image content-based representation and knowledge-based image analysis. PMID:9929345

  15. Knowledge-based approach to fault diagnosis and control in distributed process environments

    NASA Astrophysics Data System (ADS)

    Chung, Kwangsue; Tou, Julius T.

    1991-03-01

    This paper presents a new design approach to knowledge-based decision support systems for fault diagnosis and control for quality assurance and productivity improvement in automated manufacturing environments. Based on the observed manifestations, the knowledge-based diagnostic system hypothesizes a set of the most plausible disorders by mimicking the reasoning process of a human diagnostician. The data integration technique is designed to generate error-free hierarchical category files. A novel approach to diagnostic problem solving has been proposed by integrating the PADIKS (Pattern-Directed Knowledge-Based System) concept and the symbolic model of diagnostic reasoning based on the categorical causal model. The combination of symbolic causal reasoning and pattern-directed reasoning produces a highly efficient diagnostic procedure and generates a more realistic expert behavior. In addition, three distinctive constraints are designed to further reduce the computational complexity and to eliminate non-plausible hypotheses involved in the multiple disorders problem. The proposed diagnostic mechanism, which consists of three different levels of reasoning operations, significantly reduces the computational complexity in the diagnostic problem with uncertainty by systematically shrinking the hypotheses space. This approach is applied to the test and inspection data collected from a PCB manufacturing operation.

  16. A knowledge-based flight status monitor for real-time application in digital avionics systems

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Disbrow, J. D.; Butler, G. F.

    1989-01-01

    The Dryden Flight Research Facility of the National Aeronautics and Space Administration (NASA) Ames Research Center (Ames-Dryden) is the principal NASA facility for the flight testing and evaluation of new and complex avionics systems. To aid in the interpretation of system health and status data, a knowledge-based flight status monitor was designed. The monitor was designed to use fault indicators from the onboard system which are telemetered to the ground and processed by a rule-based model of the aircraft failure management system to give timely advice and recommendations in the mission control room. One of the important constraints on the flight status monitor is the need to operate in real time, and to pursue this aspect, a joint research activity between NASA Ames-Dryden and the Royal Aerospace Establishment (RAE) on real-time knowledge-based systems was established. Under this agreement, the original LISP knowledge base for the flight status monitor was reimplemented using the intelligent knowledge-based system toolkit, MUSE, which was developed under RAE sponsorship. Details of the flight status monitor and the MUSE implementation are presented.

  17. Using CLIPS in the domain of knowledge-based massively parallel programming

    NASA Technical Reports Server (NTRS)

    Dvorak, Jiri J.

    1994-01-01

The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency with respect to parallelism exploitation. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about the application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering is discussed.

  18. The Network of Excellence ``Knowledge-based Multicomponent Materials for Durable and Safe Performance''

    NASA Astrophysics Data System (ADS)

    Moreno, Arnaldo

    2008-02-01

    The Network of Excellence "Knowledge-based Multicomponent Materials for Durable and Safe Performance" (KMM-NoE) consists of 36 institutional partners from 10 countries representing leading European research institutes and university departments (25), small and medium enterprises, SMEs (5) and large industry (7) in the field of knowledge-based multicomponent materials (KMM), more specifically in intermetallics, metal-ceramic composites, functionally graded materials and thin layers. The main goal of the KMM-NoE (currently funded by the European Commission) is to mobilise and concentrate the fragmented scientific potential in the KMM field to create a durable and efficient organism capable of developing leading-edge research while spreading the accumulated knowledge outside the Network and enhancing the technological skills of the related industries. The long-term strategic goal of the KMM-NoE is to establish a self-supporting pan-European institution in the field of knowledge-based multicomponent materials—KMM Virtual Institute (KMM-VIN). It will combine industry oriented research with educational and training activities. The KMM Virtual Institute will be founded on three main pillars: KMM European Competence Centre, KMM Integrated Post-Graduate School, KMM Mobility Programme. The KMM-NoE is coordinated by the Institute of Fundamental Technological Research (IPPT) of the Polish Academy of Sciences, Warsaw, Poland.

  19. Can Croatia join Europe as competitive knowledge-based society by 2010?

    PubMed

    Petrovecki, Mladen; Paar, Vladimir; Primorac, Dragan

    2006-12-01

    The 21st century has brought important changes in the paradigms of economic development, one of them being a shift toward recognizing knowledge and information as the most valuable commodities of today. The European Union (EU) has been working hard to become the most competitive knowledge-based society in the world, and Croatia, an EU candidate country, faces a similar task. To establish itself as one of the leading knowledge-based countries in the Eastern European region over the next 4 years, Croatia realized it has to create an education and science system consistent with European standards and responsive to labor market needs. For that purpose, the Croatian Ministry of Science, Education, and Sports (MSES) has created and started implementing a complex strategy consisting of the following key components: reform of the education system in accordance with the Bologna Declaration; stimulation of scientific production by supporting national and international research projects; reversing the "brain drain" into "brain gain" and strengthening the links between science and technology; and informatization of the whole education and science system. In this comprehensive report, we describe the implementation of these measures, whose coordination with the EU goals presents a challenge, as well as an opportunity, for Croatia to become a knowledge-based society by 2010.

  20. The Network of Excellence 'Knowledge-based Multicomponent Materials for Durable and Safe Performance'

    SciTech Connect

    Moreno, Arnaldo

    2008-02-15

    The Network of Excellence 'Knowledge-based Multicomponent Materials for Durable and Safe Performance' (KMM-NoE) consists of 36 institutional partners from 10 countries representing leading European research institutes and university departments (25), small and medium enterprises, SMEs (5) and large industry (7) in the field of knowledge-based multicomponent materials (KMM), more specifically in intermetallics, metal-ceramic composites, functionally graded materials and thin layers. The main goal of the KMM-NoE (currently funded by the European Commission) is to mobilise and concentrate the fragmented scientific potential in the KMM field to create a durable and efficient organism capable of developing leading-edge research while spreading the accumulated knowledge outside the Network and enhancing the technological skills of the related industries. The long-term strategic goal of the KMM-NoE is to establish a self-supporting pan-European institution in the field of knowledge-based multicomponent materials--KMM Virtual Institute (KMM-VIN). It will combine industry oriented research with educational and training activities. The KMM Virtual Institute will be founded on three main pillars: KMM European Competence Centre, KMM Integrated Post-Graduate School, KMM Mobility Programme. The KMM-NoE is coordinated by the Institute of Fundamental Technological Research (IPPT) of the Polish Academy of Sciences, Warsaw, Poland.

  1. A new collaborative knowledge-based approach for wireless sensor networks.

    PubMed

    Canada-Bago, Joaquin; Fernandez-Prieto, Jose Angel; Gadeo-Martos, Manuel Angel; Velasco, Juan Ramón

    2010-01-01

    This work presents a new approach for collaboration among sensors in Wireless Sensor Networks. These networks are composed of a large number of sensor nodes with constrained resources: limited computational capability, memory, power sources, etc. There is growing interest in integrating Soft Computing technologies into Wireless Sensor Networks, but little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks. The objective of this work is to design a collaborative knowledge-based network in which each sensor executes an adapted Fuzzy Rule-Based System. This design offers significant advantages: experts can define interpretable knowledge with uncertainty and imprecision; collaborative knowledge can be separated from control or modeling knowledge; and the collaborative approach can tolerate neighbor sensor failures and communication errors. As a real-world application of this approach, we demonstrate a collaborative modeling system for pests, in which an alarm about the development of the olive tree fly is inferred. The results show that knowledge-based sensors are suitable for a wide range of applications and that the behavior of a knowledge-based sensor may be modified by inferences and knowledge of neighbor sensors in order to obtain a more accurate and reliable output.
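    A node-level Fuzzy Rule-Based System of the kind described above can be sketched in a few lines of Python. This is a minimal Mamdani-style illustration, not the authors' implementation; the rule base, membership parameters, and the olive-fly alarm variables below are hypothetical.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical rule base for a pest alarm: each rule maps fuzzy conditions
# on temperature and humidity to a crisp alarm level in [0, 1].
RULES = [
    # (temp membership params, humidity membership params, alarm level)
    ((20, 25, 30), (60, 75, 90), 1.0),   # warm and humid -> high alarm
    ((10, 15, 20), (60, 75, 90), 0.5),   # cool and humid -> medium alarm
    ((20, 25, 30), (20, 35, 50), 0.3),   # warm and dry   -> low alarm
]

def infer(temp, humidity):
    """Mamdani-style inference: AND = min, defuzzify by weighted average."""
    num = den = 0.0
    for t_mf, h_mf, level in RULES:
        w = min(tri(temp, *t_mf), tri(humidity, *h_mf))  # firing strength
        num += w * level
        den += w
    return num / den if den else 0.0
```

    A collaborative variant could feed the inferred alarm of neighbor nodes back in as an extra rule input, which is the kind of knowledge separation the abstract describes.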

  2. Can Croatia Join Europe as Competitive Knowledge-based Society by 2010?

    PubMed Central

    Petrovečki, Mladen; Paar, Vladimir; Primorac, Dragan

    2006-01-01

    The 21st century has brought important changes in the paradigms of economic development, one of them being a shift toward recognizing knowledge and information as the most important factors of today. The European Union (EU) has been working hard to become the most competitive knowledge-based society in the world, and Croatia, an EU candidate country, faces a similar task. To establish itself as one of the leading knowledge-based countries in the Eastern European region over the next four years, Croatia realized it has to create an education and science system consistent with European standards and responsive to labor market needs. For that purpose, the Croatian Ministry of Science, Education, and Sports (MSES) has created and started implementing a complex strategy consisting of the following key components: reform of the education system in accordance with the Bologna Declaration; stimulation of scientific production by supporting national and international research projects; reversing the “brain drain” into “brain gain” and strengthening the links between science and technology; and informatization of the whole education and science system. In this comprehensive report, we describe the implementation of these measures, whose coordination with the EU goals presents a challenge, as well as an opportunity, for Croatia to become a knowledge-based society by 2010. PMID:17167853

  3. A knowledge-based design for assemble system for vehicle seat

    NASA Astrophysics Data System (ADS)

    Wahidin, L. S.; Tan, CheeFai; Khalil, S. N.; Juffrizal, K.; Nidzamuddin, M. Y.

    2015-05-01

    Companies worldwide are striving to reduce the costs of their products to improve bottom-line profitability. When it comes to improving profits, there are two choices: sell more or cut the cost of what is currently being sold. Given the depressed economy of the last several years, the "sell more" option has in many cases been taken off the table. As a result, cost cutting is often the most effective path. One of the industrial challenges is to shorten product development and lower manufacturing cost, especially in the early stage of designing the product. A knowledge-based system is used to assist the industry when the expert is not available and to keep the expertise within the company. The application of a knowledge-based system enables standardization and accuracy of the assembly process. For this purpose, a knowledge-based design-for-assembly system is developed to assist the industry in planning the assembly process of the vehicle seat.

  4. Knowledge-based indexing of the medical literature: the Indexing Aid Project.

    PubMed

    Humphrey, S M; Miller, N E

    1987-05-01

    This article describes the Indexing Aid Project for conducting research in the areas of knowledge representation and indexing for information retrieval in order to develop interactive knowledge-based systems for computer-assisted indexing of the periodical medical literature. The system uses an experimental frame-based knowledge representation language, FrameKit, implemented in Franz Lisp. The initial prototype is designed to interact with trained MEDLINE indexers who will be prompted to enter subject terms as slot values in filling in document-specific frame data structures that are derived from the knowledge-base frames. In addition, the automatic application of rules associated with the knowledge-base frames produces a set of Medical Subject Heading (MeSH) keyword indices to the document. Important features of the system are representation of explicit relationships through slots which express the relations; slot values, restrictions, and rules made available by inheritance through "is-a" hierarchies; slot values denoted by functions that retrieve values from other slots; and restrictions on slot values displayable during data entry. PMID:10301519
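    The slot-and-inheritance machinery that frame languages such as FrameKit provide can be illustrated with a toy Python class. This is a hedged sketch: the frame names and slot values below are invented for illustration and are not the project's actual knowledge base or the FrameKit API.

```python
class Frame:
    """Minimal frame: named slots plus inheritance through an 'is-a' parent."""
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        """Look up a slot value, inheriting through the is-a hierarchy."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        raise KeyError(slot)

# Hypothetical fragment of a medical-indexing hierarchy (illustrative names,
# not the Indexing Aid Project's actual frames).
disease = Frame("disease", mesh_tree="C")
infection = Frame("infection", parent=disease, agent=None)
hepatitis = Frame("hepatitis", parent=infection, site="liver")
```

    Here `hepatitis.get("mesh_tree")` returns `"C"` by walking two is-a links, which mirrors how slot values, restrictions, and rules become available by inheritance in the system described above.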

  5. Novel nonlinear knowledge-based mean force potentials based on machine learning.

    PubMed

    Dong, Qiwen; Zhou, Shuigeng

    2011-01-01

    The prediction of 3D structures of proteins from amino acid sequences is one of the most challenging problems in molecular biology. An essential task for solving this problem with coarse-grained models is to deduce effective interaction potentials. The development and evaluation of new energy functions is critical to accurately modeling the properties of biological macromolecules. Knowledge-based mean force potentials are derived from statistical analysis of proteins of known structures. Current knowledge-based potentials are almost in the form of weighted linear sum of interaction pairs. In this study, a class of novel nonlinear knowledge-based mean force potentials is presented. The potential parameters are obtained by nonlinear classifiers, instead of relative frequencies of interaction pairs against a reference state or linear classifiers. The support vector machine is used to derive the potential parameters on data sets that contain both native structures and decoy structures. Five knowledge-based mean force Boltzmann-based or linear potentials are introduced and their corresponding nonlinear potentials are implemented. They are the DIH potential (single-body residue-level Boltzmann-based potential), the DFIRE-SCM potential (two-body residue-level Boltzmann-based potential), the FS potential (two-body atom-level Boltzmann-based potential), the HR potential (two-body residue-level linear potential), and the T32S3 potential (two-body atom-level linear potential). Experiments are performed on well-established decoy sets, including the LKF data set, the CASP7 data set, and the Decoys “R”Us data set. The evaluation metrics include the energy Z score and the ability of each potential to discriminate native structures from a set of decoy structures. 
Experimental results show that all nonlinear potentials significantly outperform the corresponding Boltzmann-based or linear potentials, and the proposed discriminative framework is effective in developing knowledge-based
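    For concreteness, the weighted-linear-sum form that these nonlinear potentials generalize, together with the energy Z score used in the evaluation, can be sketched as follows. All weights and contact counts here are synthetic illustrations, not fitted potential parameters.

```python
import math

def energy(pair_counts, weights):
    """Linear knowledge-based potential: weighted sum over interaction pairs."""
    return sum(weights.get(pair, 0.0) * n for pair, n in pair_counts.items())

def z_score(native_e, decoy_es):
    """Energy Z score of the native structure relative to its decoy set."""
    mean = sum(decoy_es) / len(decoy_es)
    var = sum((e - mean) ** 2 for e in decoy_es) / len(decoy_es)
    return (native_e - mean) / math.sqrt(var)

# Synthetic example: a favourable (negative-weight) contact type and an
# unfavourable one; the native structure has more favourable contacts.
weights = {("A", "L"): -1.0, ("A", "K"): 0.5}
native = {("A", "L"): 10, ("A", "K"): 1}
decoys = [{("A", "L"): 4, ("A", "K"): 5},
          {("A", "L"): 2, ("A", "K"): 6},
          {("A", "L"): 6, ("A", "K"): 4}]
native_e = energy(native, weights)
decoy_es = [energy(d, weights) for d in decoys]
```

    The nonlinear potentials of the paper replace the fixed weighted sum with a classifier (e.g. a support vector machine) trained to separate native from decoy feature vectors.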

  6. Feasibility of using a knowledge-based system concept for in-flight primary flight display research

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.

    1991-01-01

    A study was conducted to determine the feasibility of using knowledge-based systems architectures for in-flight research of primary flight display information management issues. The feasibility relied on the ability to integrate knowledge-based systems with existing onboard aircraft systems and, given the hardware and software platforms available, on the ability to use interpreted LISP software with the real-time operation of the primary flight display. In addition to evaluating these feasibility issues, the study determined whether the software engineering advantages of knowledge-based systems found for this application in an earlier workstation study extended to the in-flight research environment. To study these issues, two integrated knowledge-based systems were designed to control the primary flight display according to pre-existing specifications of an ongoing primary flight display information management research effort. These two systems were implemented to assess the feasibility and software engineering issues listed above. Flight test results successfully demonstrated the feasibility of using knowledge-based systems in flight with actual aircraft data.

  7. Evaluating Social and National Education Textbooks Based on the Criteria of Knowledge-Based Economy from the Perspectives of Elementary Teachers in Jordan

    ERIC Educational Resources Information Center

    Al-Edwan, Zaid Suleiman; Hamaidi, Diala Abdul Hadi

    2011-01-01

    Knowledge-based economy is a new implemented trend in the field of education in Jordan. The ministry of education in Jordan attempts to implement this trend's philosophy in its textbooks. This study examined the extent to which the (1st-3rd grade) social and national textbooks reflect knowledge-based economy criteria from the perspective of…

  8. SU-E-J-71: Spatially Preserving Prior Knowledge-Based Treatment Planning

    SciTech Connect

    Wang, H; Xing, L

    2015-06-15

    Purpose: Prior knowledge-based treatment planning is impeded by the use of a single dose volume histogram (DVH) curve. Critical spatial information is lost when the dose distribution is collapsed into a histogram. Even similar patients possess geometric variations that become inaccessible in the form of a single DVH. We propose a simple prior knowledge-based planning scheme that extracts features from a prior dose distribution while still preserving the spatial information. Methods: A prior patient plan is not used as a mere starting point for a new patient; rather, stopping criteria are constructed from it. Each structure from the prior patient is partitioned into multiple shells. For instance, the PTV is partitioned into an inner, middle, and outer shell. Prior dose statistics are then extracted for each shell and translated into the appropriate Dmin and Dmax parameters for the new patient. Results: The partitioned dose information from a prior case was applied to 14 2-D prostate cases. Using the prior case yielded final DVHs comparable to manual planning, even though the DVH for the prior case differed from the DVHs for the 14 cases. Using a single DVH for the entire organ was also tested for comparison but showed much poorer performance. Different ways of translating the prior dose statistics into parameters for the new patient were also tested. Conclusion: Prior knowledge-based treatment planning needs to salvage the spatial information without transforming the patients on a voxel-to-voxel basis. An efficient balance between the anatomy and dose domains is gained by partitioning the organs into multiple shells. The prior knowledge not only serves as a starting point for a new case; the information extracted from the partitioned shells is also translated into stopping criteria for the optimization problem at hand.
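    The shell-partitioning step can be sketched in plain Python. The equal-count partition by distance rank and the (position, dose) voxel representation are simplifying assumptions for illustration, not the authors' exact scheme.

```python
def partition_into_shells(voxels, center, n_shells=3):
    """Split (position, dose) voxels into concentric shells by distance rank."""
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(p, center)) ** 0.5
    ordered = sorted(voxels, key=lambda v: dist(v[0]))
    size = -(-len(ordered) // n_shells)            # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

def shell_constraints(shells):
    """Per-shell (Dmin, Dmax) extracted from a prior plan's dose distribution."""
    return [(min(d for _, d in s), max(d for _, d in s)) for s in shells]
```

    Running `shell_constraints` on a prior plan's PTV voxels yields one (Dmin, Dmax) pair per shell, which would then serve as the stopping criteria for the new patient's optimization.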

  9. Knowledge-based computer-aided detection of masses on digitized mammograms: a preliminary assessment.

    PubMed

    Chang, Y H; Hardesty, L A; Hakim, C M; Chang, T S; Zheng, B; Good, W F; Gur, D

    2001-04-01

    The purpose of this work was to develop and evaluate a computer-aided detection (CAD) scheme for the improvement of mass identification on digitized mammograms using a knowledge-based approach. Three hundred pathologically verified masses and 300 negative, but suspicious, regions, as initially identified by a rule-based CAD scheme, were randomly selected from a large clinical database for development purposes. In addition, 500 different positive and 500 negative regions were used to test the scheme. This suspicious region pruning scheme includes a learning process to establish a knowledge base that is then used to determine whether a previously identified suspicious region is likely to depict a true mass. This is accomplished by quantitatively characterizing the set of known masses, measuring "similarity" between a suspicious region and a "known" mass, then deriving a composite "likelihood" measure based on all "known" masses to determine the state of the suspicious region. To assess the performance of this method, receiver-operating characteristic (ROC) analyses were employed. Using a leave-one-out validation method with the development set of 600 regions, the knowledge-based CAD scheme achieved an area under the ROC curve of 0.83. Fifty-one percent of the previously identified false-positive regions were eliminated, while maintaining 90% sensitivity. During testing of the 1,000 independent regions, an area under the ROC curve as high as 0.80 was achieved. Knowledge-based approaches can yield a significant reduction in false-positive detections while maintaining reasonable sensitivity. This approach has the potential of improving the performance of other rule-based CAD schemes.
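    The "similarity to known masses" idea can be sketched with a toy composite-likelihood function. The feature vectors and the inverse-distance similarity below are hypothetical stand-ins for the scheme's actual quantitative characterization of masses.

```python
def similarity(region, known):
    """Similarity between two feature vectors (here: inverse Euclidean distance)."""
    d = sum((a - b) ** 2 for a, b in zip(region, known)) ** 0.5
    return 1.0 / (1.0 + d)

def mass_likelihood(region, known_masses):
    """Composite likelihood: mean similarity to all known true masses."""
    return sum(similarity(region, m) for m in known_masses) / len(known_masses)

# Synthetic feature vectors (e.g. contrast, size), scaled to [0, 1].
known = [(0.8, 0.6), (0.7, 0.5)]
```

    A suspicious region whose features lie near the known masses scores a higher likelihood and is retained; thresholding this score is what prunes the false-positive regions.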

  10. Knowledge-based monitoring of the pointing control system on the Hubble space telescope

    NASA Technical Reports Server (NTRS)

    Dunham, Larry L.; Laffey, Thomas J.; Kao, Simon M.; Schmidt, James L.; Read, Jackson Y.

    1987-01-01

    A knowledge-based system for the real time monitoring of telemetry data from the Pointing and Control System (PCS) of the Hubble Space Telescope (HST) that enables the retention of design expertise throughout the three decade project lifespan by means other than personnel and documentation is described. The system will monitor performance, vehicle status, success or failure of various maneuvers, and in some cases diagnose problems and recommend corrective actions using a knowledge base built using mission scenarios and the more than 4,500 telemetry monitors from the HST.

  11. Knowledge-based approach for generating target system specifications from a domain model

    NASA Technical Reports Server (NTRS)

    Gomaa, Hassan; Kerschberg, Larry; Sugumaran, Vijayan

    1992-01-01

    Several institutions in industry and academia are pursuing research efforts in domain modeling to address unresolved issues in software reuse. To demonstrate the concepts of domain modeling and software reuse, a prototype software engineering environment is being developed at George Mason University to support the creation of domain models and the generation of target system specifications. This prototype environment, which is application domain independent, consists of an integrated set of commercial off-the-shelf software tools and custom-developed software tools. This paper describes the knowledge-based tool that was developed as part of the environment to generate target system specifications from a domain model.

  12. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improving mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints, and parameters. The scheme is implemented by integrating symbolic computation of rules, derived from operator and planner experience, with generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, including correction of given violations. The method was demonstrated on a large-scale power system discussed in the paper.

  13. Knowledge-based control of grasping in robot hands using heuristics from human motor skills

    SciTech Connect

    Bekey, G.A. (Computer Science Dept.); Liu, H. (Artificial Intelligence Systems Section); Tomovic, R. (Dept. of Electrical Engineering); Karplus, W.J. (Computer Science Dept.)

    1993-12-01

    The development of a grasp planner for multifingered robot hands is described. The planner is knowledge-based, selecting grasp postures by reasoning from symbolic information on target object geometry and the nature of the task. The ability of the planner to utilize task information is based on an attempt to mimic human grasping behavior. Several task attributes and a set of heuristics derived from observation of human motor skills are included in the system. The paper gives several examples of the reasoning of the system in selecting the appropriate grasp mode for spherical and cylindrical objects for different tasks.
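    A rule table like the one such a planner reasons over can be sketched as a symbolic lookup. The specific geometry/task pairs and grasp names below are illustrative, not the paper's actual heuristics.

```python
# Hypothetical heuristic table: (object geometry, task) -> grasp posture,
# loosely mirroring the human-derived heuristics the planner encodes.
GRASP_RULES = {
    ("cylinder", "pour"):  "wrap grasp",
    ("cylinder", "place"): "precision grasp",
    ("sphere", "throw"):   "power grasp",
    ("sphere", "place"):   "tripod grasp",
}

def select_grasp(geometry, task):
    """Pick a grasp mode from symbolic object and task information."""
    return GRASP_RULES.get((geometry, task), "default precision grasp")
```

    The point of the table form is that task attributes, not just geometry, drive the choice: the same cylinder gets a different posture for pouring than for placing.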

  14. Application of flight systems methodologies to the validation of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.

    1988-01-01

    Flight and mission-critical systems are verified, qualified for flight, and validated using well-known and well-established techniques. These techniques define the validation methodology used for such systems. In order to verify, qualify, and validate knowledge-based systems (KBS's), the methodology used for conventional systems must be addressed, and the applicability and limitations of that methodology to KBS's must be identified. An outline of how this approach to the validation of KBS's is being developed and used is presented.

  15. Application of flight systems methodologies to the validation of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.

    1988-01-01

    Flight and mission-critical systems are verified, qualified for flight, and validated using well-known and well-established techniques. These techniques define the validation methodology used for such systems. In order to verify, qualify, and validate knowledge-based systems (KBS's), the methodology used for conventional systems must be addressed, and the applicability and limitations of that methodology to KBS's must be identified. The author presents an outline of how this approach to the validation of KBS's is being developed and used at the Dryden Flight Research Facility of the NASA Ames Research Center.

  16. Studies in knowledge-based diagnosis of failures in robotic assembly

    NASA Technical Reports Server (NTRS)

    Lam, Raymond K.; Pollard, Nancy S.; Desai, Rajiv S.

    1990-01-01

    The telerobot diagnostic system (TDS) is a knowledge-based system that is being developed for identification and diagnosis of failures in the space robotic domain. The system is able to isolate the symptoms of the failure, generate failure hypotheses based on these symptoms, and test their validity at various levels by interpreting or simulating the effects of the hypotheses on results of plan execution. The implementation of the TDS is outlined. The classification of failures and the types of system models used by the TDS are discussed. A detailed example of the TDS approach to failure diagnosis is provided.

  17. A knowledge-based expert system for scheduling of airborne astronomical observations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, P. R.; Gevarter, W. B.; Stutz, J. C.; Banda, C. P.

    1985-01-01

    The Kuiper Airborne Observatory Scheduler (KAOS) is a knowledge-based expert system developed at NASA Ames Research Center to assist in route planning of a C-141 flying astronomical observatory. This program determines a sequence of flight legs that enables sequential observations of a set of heavenly bodies derived from a list of desirable objects. The possible flight legs are constrained by problems of observability, avoiding flyovers of warning and restricted military zones, and running out of fuel. A significant contribution of the KAOS program is that it couples computational capability with a reasoning system.

  18. Facilitating superior chronic disease management through a knowledge-based systems development model.

    PubMed

    Wickramasinghe, Nilmini S; Goldberg, Steve

    2008-01-01

    To date, the adoption and diffusion of technology-enabled solutions to deliver better healthcare has been slow, for many reasons. One of the most significant is that the methodologies normally used for general Information and Communications Technology (ICT) implementations tend to be less successful in a healthcare context. This paper describes a knowledge-based adaptive mapping-to-realisation methodology for moving from idea to realisation rapidly and without compromising rigour, so that success ensues. It is discussed in connection with implementing superior ICT-enabled approaches to facilitate superior Chronic Disease Management (CDM).

  19. Improvements on transient characteristics of transverse flux homopolar linear machines using artificial knowledge-based strategy

    SciTech Connect

    Liu, C.T.; Kuo, J.L.

    1995-06-01

    This paper, which continues the preceding works, provides further detailed discussion of both parasitic hunting-effect alleviation in the transverse flux homopolar linear induction machine (TFLIM) and improvement of the closed-loop transient characteristics of the transverse flux homopolar linear oscillating machine (TFLOM). Novel artificial knowledge-based compensators are proposed to solve these problems for such time-varying and highly nonlinear machine systems. It is shown that not only is this approach easy to implement in practice, but the design tasks involved in such compensators are also applicable to other linear machine control objectives. Illustrations and verifications are supplied to confirm the graceful features of this intelligent strategy.

  20. Knowledge-Based Motion Control of AN Intelligent Mobile Autonomous System

    NASA Astrophysics Data System (ADS)

    Isik, Can

    An Intelligent Mobile Autonomous System (IMAS), which is equipped with vision and low level sensors to cope with unknown obstacles, is modeled as a hierarchy of path planning and motion control. This dissertation concentrates on the lower level of this hierarchy (Pilot) with a knowledge-based controller. The basis of a theory of knowledge-based controllers is established, using the example of the Pilot level motion control of IMAS. In this context, the knowledge-based controller with a linguistic world concept is shown to be adequate for the minimum time control of an autonomous mobile robot motion. The Pilot level motion control of IMAS is approached in the framework of production systems. The three major components of the knowledge-based control that are included here are the hierarchies of the database, the rule base and the rule evaluator. The database, which is the representation of the state of the world, is organized as a semantic network, using a concept of minimal admissible vocabulary. The hierarchy of rule base is derived from the analytical formulation of minimum-time control of IMAS motion. The procedure introduced for rule derivation, which is called analytical model verbalization, utilizes the concept of causalities to describe the system behavior. A realistic analytical system model is developed and the minimum-time motion control in an obstacle strewn environment is decomposed to a hierarchy of motion planning and control. The conditions for the validity of the hierarchical problem decomposition are established, and the consistency of operation is maintained by detecting the long term conflicting decisions of the levels of the hierarchy. The imprecision in the world description is modeled using the theory of fuzzy sets. The method developed for the choice of the rule that prescribes the minimum-time motion control among the redundant set of applicable rules is explained and the usage of fuzzy set operators is justified. Also included in the

  1. Use of metaknowledge in the verification of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Morell, Larry J.

    1989-01-01

    Knowledge-based systems are modeled as deductive systems. The model indicates that the two primary areas of concern in verification are demonstrating consistency and completeness. A system is inconsistent if it asserts something that is not true of the modeled domain. A system is incomplete if it lacks deductive capability. Two forms of consistency are discussed along with appropriate verification methods. Three forms of incompleteness are discussed. The use of metaknowledge, knowledge about knowledge, is explored in connection to each form of incompleteness.

  2. The Heliophysics Event Knowledgebase for the Solar Dynamics Observatory - A User's Perspective

    NASA Astrophysics Data System (ADS)

    Slater, Gregory L.; Cheung, M.; Hurlburt, N.; Schrijver, C.; Somani, A.; Freeland, S. L.; Timmons, R.; Kobashi, A.; Serafin, J.; Schiff, D.; Seguin, R.

    2010-05-01

    The recently launched Solar Dynamics Observatory (SDO) will generate over 2 petabytes of imagery in its 5-year mission. The Heliophysics Events Knowledgebase (HEK) system has been developed to continuously build a database of solar features and events, contributed by a combination of machine recognition algorithms run on every single image and human interactive data exploration. Access to this growing database is provided through a set of existing tools as well as an open source API. We present an overview of the user interface tools, including illustrative examples of their use.

  3. Heliophysics Event Knowledgebase for the Solar Dynamics Observatory (SDO) and Beyond

    NASA Astrophysics Data System (ADS)

    Hurlburt, N.; Cheung, M.; Schrijver, C.; Chang, L.; Freeland, S.; Green, S.; Heck, C.; Jaffey, A.; Kobashi, A.; Schiff, D.; Serafin, J.; Seguin, R.; Slater, G.; Somani, A.; Timmons, R.

    2012-01-01

    The immense volume of data generated by the suite of instruments on the Solar Dynamics Observatory (SDO) requires new tools for efficiently identifying and accessing the data most relevant for research. We have developed the Heliophysics Events Knowledgebase (HEK) to fill this need. The HEK system combines automated data mining using feature-detection methods with high-performance visualization systems for data markup. In addition, web services and clients are provided for searching the resulting metadata, reviewing results, and efficiently accessing the data. We review these components and present examples of their use with SDO data.
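    The metadata-search side of such a system can be illustrated with a purely local sketch. The record fields and example events below are invented for illustration; this is not the HEK's actual web-service interface or schema.

```python
from datetime import datetime

# Hypothetical local event records standing in for the metadata a
# feature-detection pipeline would contribute (field names illustrative).
EVENTS = [
    {"type": "flare",    "start": datetime(2011, 2, 15, 1, 44), "region": "A"},
    {"type": "flare",    "start": datetime(2011, 3, 9, 23, 13), "region": "B"},
    {"type": "filament", "start": datetime(2011, 2, 20, 8, 0),  "region": None},
]

def search_events(event_type, t0, t1):
    """Filter event metadata by type and half-open time window [t0, t1)."""
    return [e for e in EVENTS
            if e["type"] == event_type and t0 <= e["start"] < t1]
```

    A real client would issue the equivalent query to the HEK web services and then use the returned metadata to fetch only the matching SDO data.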

  4. Workflow analysis and evidence-based medicine: towards integration of knowledge-based functions in hospital information systems.

    PubMed Central

    Mueller, M. L.; Ganslandt, T.; Frankewitsch, T.; Krieglstein, C. F.; Senninger, N.; Prokosch, H. U.

    1999-01-01

    The large extent and complexity of scientific evidence described in the concept of evidence-based medicine often overwhelms clinicians who want to apply best external evidence. Hospital Information Systems usually do not provide knowledge-based functions to support context-sensitive linking to external information sources. Knowledge-based components need specific data, which must be entered manually and should be well adapted to clinical environment to be accepted by clinicians. This paper describes a workflow-based approach to understand and visualize clinical reality as a preliminary to designing software applications, and possible starting points for further software development. PMID:10566375

  5. Feasibility of using a knowledge-based system concept for in-flight primary-flight-display research

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.

    1991-01-01

    Flight test results have been obtained which demonstrate the feasibility and desirability of using knowledge-based system architectures for flight test investigations of issues related to primary flight display information management. LISP-based software was used for real-time operation of the primary flight display. The two integrated knowledge-based systems designed to control the primary flight displays were implemented aboard a NASA-Langley B-737. It is noted that a programmer can develop initial systems more easily with the present method than with more conventional techniques.

  6. Norine, the knowledgebase dedicated to non-ribosomal peptides, is now open to crowdsourcing.

    PubMed

    Flissi, Areski; Dufresne, Yoann; Michalik, Juraj; Tonon, Laurie; Janot, Stéphane; Noé, Laurent; Jacques, Philippe; Leclère, Valérie; Pupin, Maude

    2016-01-01

    Since its creation in 2006, Norine remains the unique knowledgebase dedicated to non-ribosomal peptides (NRPs). These secondary metabolites, produced by bacteria and fungi, harbor diverse interesting biological activities (such as antibiotic, antitumor, siderophore or surfactant) directly related to the diversity of their structures. The Norine team's goal is to collect NRPs and provide tools to analyze them efficiently. We have developed a user-friendly interface and dedicated tools to provide a complete bioinformatics platform. The knowledgebase gathers abundant and valuable annotations on more than 1100 NRPs. To increase the quantity of described NRPs and improve the quality of associated annotations, we are now opening Norine to crowdsourcing. We believe that contributors from the scientific community are the best experts to annotate the NRPs they work on. We have developed MyNorine to facilitate the submission of new NRPs or modifications of stored ones. This article presents MyNorine and other novelties of the Norine interface released since the first publication. Norine is freely accessible from the following URL: http://bioinfo.lifl.fr/NRP.

  7. Knowledge-based video compression for search and rescue robots and multiple sensor networks

    NASA Astrophysics Data System (ADS)

    Williams, Chris; Murphy, Robin R.

    2006-05-01

    Robot and sensor networks are needed for safety, security, and rescue applications such as port security and reconnaissance during a disaster. These applications rely on real-time transmission of images, which generally saturate the available wireless network infrastructure. Knowledge-based compression is a method for reducing the video frame transmission rate between robots or sensors and remote operators. Because images may need to be archived as evidence and/or distributed to multiple applications with different post processing needs, lossy compression schemes, such as MPEG, H.26x, etc., are not acceptable. This work proposes a lossless video server system consisting of three classes of filters (redundancy, task, and priority) which use different levels of knowledge (local sensed environment, human factors associated with a local task, and relative global priority of a task) at the application layer of the network. It demonstrates the redundancy and task filters for a realistic robot search scenario. The redundancy filter is shown to reduce the overall transmission bandwidth by 24.07% to 33.42%, and, when combined with the task filter, reduces overall transmission bandwidth by 59.08% to 67.83%. By itself, the task filter has the capability to reduce transmission bandwidth by 32.95% to 33.78%. While knowledge-based compression generally does not reach the same levels of reduction as MPEG, there are instances where the system outperforms MPEG encoding.
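    The redundancy-filter behaviour this abstract describes can be sketched in a few lines: a frame is transmitted only when it differs from the last transmitted frame, which is lossless because the receiver can reuse its copy of the previous frame. This is a minimal sketch; the frame representation and stream below are invented for illustration and are not the paper's system.

    ```python
    def redundancy_filter(frames):
        """Yield only frames that differ from the last transmitted one."""
        last = None
        for frame in frames:
            if frame != last:          # redundant (unchanged) frames are dropped
                last = frame
                yield frame

    # A static scene produces long runs of identical frames.
    stream = ["A", "A", "A", "B", "B", "A", "A", "A"]
    sent = list(redundancy_filter(stream))
    print(sent)                         # ['A', 'B', 'A']
    print(1 - len(sent) / len(stream))  # 0.625 = fraction of frames suppressed
    ```

    The task and priority filters would act as further stages on `sent`, using task and mission knowledge rather than frame-to-frame redundancy.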

  8. Dealing with difficult deformations: construction of a knowledge-based deformation atlas

    NASA Astrophysics Data System (ADS)

    Thorup, S. S.; Darvann, T. A.; Hermann, N. V.; Larsen, P.; Ólafsdóttir, H.; Paulsen, R. R.; Kane, A. A.; Govier, D.; Lo, L.-J.; Kreiborg, S.; Larsen, R.

    2010-03-01

    Twenty-three Taiwanese infants with unilateral cleft lip and palate (UCLP) were CT-scanned before lip repair at the age of 3 months, and again after lip repair at the age of 12 months. In order to evaluate the surgical result, detailed point correspondence between pre- and post-surgical images was needed. We have previously demonstrated that non-rigid registration using B-splines is able to provide automated determination of point correspondences in populations of infants without cleft lip. However, this type of registration fails when applied to the task of determining the complex deformation from before to after lip closure in infants with UCLP. The purpose of the present work was to show that use of prior information about typical deformations due to lip closure, through the construction of a knowledge-based atlas of deformations, could overcome the problem. Initially, mean volumes (atlases) for the pre- and post-surgical populations, respectively, were automatically constructed by non-rigid registration. An expert placed corresponding landmarks in the cleft area in the two atlases; this provided prior information used to build a knowledge-based deformation atlas. We model the change from pre- to post-surgery using thin-plate spline warping. The registration results are convincing and represent a first move towards an automatic registration method for dealing with difficult deformations due to this type of surgery.

  9. A knowledge-based control system for air-scour optimisation in membrane bioreactors.

    PubMed

    Ferrero, G; Monclús, H; Sancho, L; Garrido, J M; Comas, J; Rodríguez-Roda, I

    2011-01-01

    Although membrane bioreactor (MBR) technology is still a growing sector, its progressive implementation all over the world, together with great technical achievements, has allowed it to reach a degree of maturity comparable to other, more conventional wastewater treatment technologies. With current energy requirements around 0.6-1.1 kWh/m³ of treated wastewater and investment costs similar to conventional treatment plants, the main market niche for MBRs lies in areas with very restrictive discharge limits, where treatment plants have to be compact or where water reuse is necessary. Operational costs are higher than for conventional treatments; consequently, there is still a need, and room, for energy saving and optimisation. This paper presents the development of a knowledge-based decision support system (DSS) for the integrated operation and remote control of the biological and physical (filtration and backwashing or relaxation) processes in MBRs. The core of the DSS is a knowledge-based control module for air-scour consumption automation and energy consumption minimisation.
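    A knowledge-based control rule for air-scour automation of the kind this abstract describes might raise aeration when fouling indicators climb and lower it when filtration is stable, to save energy. The sketch below is an invented illustration: the thresholds, step sizes, and the use of a transmembrane-pressure (TMP) slope as the fouling indicator are assumptions, not the published DSS.

    ```python
    def air_scour_setpoint(current, tmp_slope_mbar_per_min):
        """Rule-based adjustment of air-scour intensity (% of maximum)."""
        if tmp_slope_mbar_per_min > 0.5:     # rapid fouling detected
            return min(current + 10, 100)    # intensify scouring
        if tmp_slope_mbar_per_min < 0.1:     # stable filtration
            return max(current - 5, 20)      # back off to save energy
        return current                       # otherwise hold the setpoint

    print(air_scour_setpoint(60, 0.8))   # 70
    print(air_scour_setpoint(60, 0.05))  # 55
    print(air_scour_setpoint(60, 0.3))   # 60
    ```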

  10. Hybrid hill-climbing and knowledge-based methods for intelligent news filtering

    SciTech Connect

    Mock, K.J.

    1996-12-31

    As the size of the Internet increases, the amount of data available to users has dramatically risen, resulting in an information overload for users. This work involved the creation of an intelligent information news filtering system named INFOS (Intelligent News Filtering Organizational System) to reduce the user's search burden by automatically eliminating Usenet news articles predicted to be irrelevant. These predictions are learned automatically by adapting an internal user model that is based upon features taken from articles and collaborative features derived from other users. The features are manipulated through keyword-based techniques and knowledge-based techniques to perform the actual filtering. Knowledge-based systems have the advantage of analyzing input text in detail, but at the cost of computational complexity and the difficulty of scaling up to large domains. In contrast, statistical and keyword approaches scale up readily but result in a shallower understanding of the input. A hybrid system integrating both approaches improves accuracy over keyword approaches, supports domain knowledge, and retains scalability. The system would be enhanced by more robust word disambiguation.
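    The hybrid idea can be sketched as a fast keyword score that handles scale, with a small knowledge-rule layer adjusting the score where domain knowledge applies. The profile weights, rule, and threshold below are invented for illustration and are not INFOS itself.

    ```python
    # Hypothetical keyword profile learned from a user's reading history.
    profile = {"robotics": 2.0, "sensor": 1.0, "celebrity": -2.0}

    def keyword_score(text):
        """Cheap, scalable relevance score: sum of known keyword weights."""
        return sum(profile.get(w, 0.0) for w in text.lower().split())

    def knowledge_adjust(text, score):
        """Domain rule: 'for sale' posts are irrelevant regardless of keywords."""
        if "for sale" in text.lower():
            return score - 5.0
        return score

    def relevant(text, threshold=1.0):
        return knowledge_adjust(text, keyword_score(text)) >= threshold

    print(relevant("new robotics sensor survey"))    # True
    print(relevant("robotics sensor kit for sale"))  # False: rule overrides keywords
    ```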

  11. Short term load forecasting of Taiwan power system using a knowledge-based expert system

    SciTech Connect

    Ho, K.L.; Hsu, Y.Y.; Chen, C.F.; Lee, T.E.; Liang, C.C.; Lai, T.S.; Chen, K.K.

    1990-11-01

    A knowledge-based expert system is proposed for the short term load forecasting of the Taiwan power system. The developed expert system, which was implemented on a personal computer, was written in PROLOG using a 5-year data base. To benefit from the expert knowledge and experience of the system operator, eleven different load shapes, each with different means of load calculations, are established. With these load shapes at hand, some peculiar load characteristics pertaining to the Taiwan Power Company can be taken into account. The special load types considered by the expert system include the extremely low load levels during the week of the Chinese New Year, the special load characteristics of the days following a tropical storm or a typhoon, the partial shutdown of certain factories on Saturdays, and the special event caused by a holiday on Friday or on Tuesday, etc. A characteristic feature of the proposed knowledge-based expert system is that it is easy to add new information and new rules to the knowledge base. To illustrate the effectiveness of the presented expert system, short-term load forecasting is performed on the Taiwan power system by using both the developed algorithm and the conventional Box-Jenkins statistical method. It is found that a mean absolute error of 2.52% for a year is achieved by the expert system approach as compared to an error of 3.86% by the statistical method.
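    The rule layer of such a forecaster amounts to classifying each day into one of the operator-derived load shapes before applying that shape's load calculation. A minimal sketch of that classification step follows; the day attributes, rule order, and labels are invented for illustration (the original system was written in PROLOG and used eleven shapes).

    ```python
    def classify_day(day):
        """Map day attributes to a load-shape label, most specific rule first."""
        if day.get("cny_week"):
            return "chinese-new-year"          # extremely low load levels
        if day.get("post_typhoon"):
            return "post-typhoon"              # recovery load pattern
        if day.get("weekday") == "Sat":
            return "saturday-partial-shutdown" # factories partly shut down
        if day.get("holiday_adjacent"):
            return "bridge-holiday"            # holiday on Friday or Tuesday
        return "normal-" + day.get("weekday", "weekday")

    print(classify_day({"weekday": "Sat"}))                        # saturday-partial-shutdown
    print(classify_day({"weekday": "Wed", "post_typhoon": True}))  # post-typhoon
    ```

    Adding a new rule means adding one more guarded clause, which is the ease-of-extension property the abstract highlights.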

  12. Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris

    2010-01-01

    Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.
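    One concrete form of the analysis described above is to enumerate the input space of a small propositional rule set and flag inputs matched by no rule (underspecified) or by several rules with conflicting actions (overspecified). The rules below are invented for illustration; a real KBS tool would do this symbolically rather than by enumeration.

    ```python
    from itertools import product

    # Each rule: (condition on state variables, action fired).
    rules = [
        ({"hot": True},               "open_vent"),
        ({"hot": False, "wet": True}, "run_drier"),
        ({"hot": True, "wet": True},  "alarm"),     # overlaps with the first rule
    ]

    def matches(cond, state):
        return all(state[k] == v for k, v in cond.items())

    underspecified, conflicts = [], []
    for hot, wet in product([True, False], repeat=2):
        state = {"hot": hot, "wet": wet}
        fired = {action for cond, action in rules if matches(cond, state)}
        if not fired:
            underspecified.append(state)          # no rule covers this input
        elif len(fired) > 1:
            conflicts.append((state, sorted(fired)))  # several actions compete

    print(underspecified)  # [{'hot': False, 'wet': False}]
    print(conflicts)       # [({'hot': True, 'wet': True}, ['alarm', 'open_vent'])]
    ```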

  13. An architecture for integrating distributed and cooperating knowledge-based Air Force decision aids

    NASA Technical Reports Server (NTRS)

    Nugent, Richard O.; Tucker, Richard W.

    1988-01-01

    MITRE has been developing a Knowledge-Based Battle Management Testbed for evaluating the viability of integrating independently-developed knowledge-based decision aids in the Air Force tactical domain. The primary goal for the testbed architecture is to permit a new system to be added to a testbed with little change to the system's software. Each system that connects to the testbed network declares that it can provide a number of services to other systems. When a system wants to use another system's service, it does not address the server system by name, but instead transmits a request to the testbed network asking for a particular service to be performed. A key component of the testbed architecture is a common database which uses a relational database management system (RDBMS). The RDBMS provides a database update notification service to requesting systems. Normally, each system is expected to monitor data relations of interest to it. Alternatively, a system may broadcast an announcement message to inform other systems that an event of potential interest has occurred. Current research is aimed at dealing with issues resulting from integration efforts, such as dealing with potential mismatches of each system's assumptions about the common database, decentralizing network control, and coordinating multiple agents.
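    The service-addressing idea, in which a request names a service rather than a server so new systems can join without changing existing software, can be sketched with a tiny broker. All system and service names below are illustrative, not the testbed's actual interfaces.

    ```python
    class Broker:
        """Routes service requests to whichever system declared the service."""

        def __init__(self):
            self._services = {}

        def register(self, system, service, handler):
            self._services[service] = (system, handler)

        def request(self, service, *args):
            if service not in self._services:
                raise LookupError(f"no provider for {service!r}")
            _system, handler = self._services[service]
            return handler(*args)

    broker = Broker()
    broker.register("weather-aid", "forecast", lambda region: f"clear over {region}")
    broker.register("target-aid", "threat-rank", lambda targets: sorted(targets))

    # The caller never names 'weather-aid', only the service it needs.
    print(broker.request("forecast", "sector-7"))  # clear over sector-7
    ```

    A database-update notification service, as in the testbed's RDBMS component, would be one more registered service that other systems subscribe to.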

  14. Integrated knowledge-based tools for documenting and monitoring damages to built heritage

    NASA Astrophysics Data System (ADS)

    Cacciotti, R.

    2015-08-01

    The advancements of information technologies as applied to the most diverse fields of science define a breakthrough in the accessibility and processing of data for both expert and non-expert users. Nowadays it is possible to evidence an increasingly relevant research effort in those domains, such as cultural heritage protection, in which knowledge mapping and sharing constitute critical prerequisites for accomplishing complex professional tasks. The aim of this paper is to outline the main results and outputs of the MONDIS research project. This project focusses on the development of integrated knowledge-based tools grounded on an ontological representation of the field of heritage conservation. The aim is to overcome the limitations of earlier databases by the application of modern semantic technologies able to integrate, organize and process useful information concerning damages to built heritage objects. In particular, MONDIS addresses the need for supporting a diverse range of stakeholders (e.g. administrators, owners and professionals) in the documentation and monitoring of damages to historical constructions and in finding related remedies. The paper concentrates on the presentation of the following integrated knowledge-based components developed within the project: (I) MONDIS mobile application (plus desktop version), (II) MONDIS record explorer, (III) Ontomind profiles, (IV) knowledge matrix and (V) terminology editor. An example of practical application of the MONDIS integrated system is also provided and finally discussed.

  15. Norine, the knowledgebase dedicated to non-ribosomal peptides, is now open to crowdsourcing.

    PubMed

    Flissi, Areski; Dufresne, Yoann; Michalik, Juraj; Tonon, Laurie; Janot, Stéphane; Noé, Laurent; Jacques, Philippe; Leclère, Valérie; Pupin, Maude

    2016-01-01

    Since its creation in 2006, Norine remains the unique knowledgebase dedicated to non-ribosomal peptides (NRPs). These secondary metabolites, produced by bacteria and fungi, harbor diverse interesting biological activities (such as antibiotic, antitumor, siderophore or surfactant) directly related to the diversity of their structures. The Norine team's goal is to collect NRPs and provide tools to analyze them efficiently. We have developed a user-friendly interface and dedicated tools to provide a complete bioinformatics platform. The knowledgebase gathers abundant and valuable annotations on more than 1100 NRPs. To increase the quantity of described NRPs and improve the quality of associated annotations, we are now opening Norine to crowdsourcing. We believe that contributors from the scientific community are the best experts to annotate the NRPs they work on. We have developed MyNorine to facilitate the submission of new NRPs or modifications of stored ones. This article presents MyNorine and other novelties of the Norine interface released since the first publication. Norine is freely accessible from the following URL: http://bioinfo.lifl.fr/NRP. PMID:26527733

  16. Norine, the knowledgebase dedicated to non-ribosomal peptides, is now open to crowdsourcing

    PubMed Central

    Flissi, Areski; Dufresne, Yoann; Michalik, Juraj; Tonon, Laurie; Janot, Stéphane; Noé, Laurent; Jacques, Philippe; Leclère, Valérie; Pupin, Maude

    2016-01-01

    Since its creation in 2006, Norine remains the unique knowledgebase dedicated to non-ribosomal peptides (NRPs). These secondary metabolites, produced by bacteria and fungi, harbor diverse interesting biological activities (such as antibiotic, antitumor, siderophore or surfactant) directly related to the diversity of their structures. The Norine team's goal is to collect NRPs and provide tools to analyze them efficiently. We have developed a user-friendly interface and dedicated tools to provide a complete bioinformatics platform. The knowledgebase gathers abundant and valuable annotations on more than 1100 NRPs. To increase the quantity of described NRPs and improve the quality of associated annotations, we are now opening Norine to crowdsourcing. We believe that contributors from the scientific community are the best experts to annotate the NRPs they work on. We have developed MyNorine to facilitate the submission of new NRPs or modifications of stored ones. This article presents MyNorine and other novelties of the Norine interface released since the first publication. Norine is freely accessible from the following URL: http://bioinfo.lifl.fr/NRP. PMID:26527733

  17. Predicting Mycobacterium tuberculosis Complex Clades Using Knowledge-Based Bayesian Networks

    PubMed Central

    Bennett, Kristin P.

    2014-01-01

    We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC) clades. The proposed knowledge-based Bayesian network (KBBN) treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes), since these are routinely gathered from MTBC isolates of tuberculosis (TB) patients. Results show that incorporating rules into problems can drastically increase classification accuracy if data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web. PMID:24864238

  18. A knowledge-based machine vision system for space station automation

    NASA Technical Reports Server (NTRS)

    Chipman, Laure J.; Ranganath, H. S.

    1989-01-01

    A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.

  19. Predicting Mycobacterium tuberculosis complex clades using knowledge-based Bayesian networks.

    PubMed

    Aminian, Minoo; Couvin, David; Shabbeer, Amina; Hadley, Kane; Vandenberg, Scott; Rastogi, Nalin; Bennett, Kristin P

    2014-01-01

    We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC) clades. The proposed knowledge-based Bayesian network (KBBN) treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes), since these are routinely gathered from MTBC isolates of tuberculosis (TB) patients. Results show that incorporating rules into problems can drastically increase classification accuracy if data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web.
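    The core idea, expert rules acting as a prior over classes which the data likelihood then refines via Bayes' rule, can be sketched in miniature. The classes, the single rule, and all numbers below are invented for illustration; the published KBBN covers 69 clades with real spoligotype rule sets.

    ```python
    def rule_prior(spoligotype, classes):
        """Hypothetical expert rule: a leading '1' favours class 'A' 3:1."""
        weights = {"A": 3.0, "B": 1.0} if spoligotype.startswith("1") \
                  else {"A": 1.0, "B": 3.0}
        total = sum(weights[c] for c in classes)
        return {c: weights[c] / total for c in classes}

    def posterior(spoligotype, likelihood):
        """Bayes' rule: rule-based prior times data likelihood, normalised."""
        classes = list(likelihood)
        prior = rule_prior(spoligotype, classes)
        unnorm = {c: prior[c] * likelihood[c] for c in classes}
        z = sum(unnorm.values())
        return {c: p / z for c, p in unnorm.items()}

    # The fingerprint data slightly favour B, but the rule-based prior
    # keeps A ahead for this spoligotype pattern.
    post = posterior("1101", {"A": 0.4, "B": 0.6})
    print(post)  # A ≈ 0.667, B ≈ 0.333
    ```

    When the data are decisive, the likelihood dominates; when they are sparse or ambiguous, the rule prior carries the classification, which is the accuracy gain the abstract reports.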

  20. Autonomous Knowledge-Based Navigation In An Unknown Two-Dimensional Environment With Convex Polygon Obstacles

    NASA Astrophysics Data System (ADS)

    Cheng, Linfu; McKendrick, John D.

    1989-03-01

    Navigation of autonomous vehicles in environments where the exact locations of obstacles are known has been the focus of research for two decades. More recently, algorithms for controlling progress through unknown environments have been proposed. The use of knowledge-based systems for studying the behavior of an autonomous vehicle has not received much study. A knowledge-driven autonomous system simulation was developed which enabled an autonomous mobile system to move in a two-dimensional environment and to use a simulated ranging/vision sensor to test whether a selected goal position was visible or whether the goal was obscured by one of multiple polygon obstacles. As the mobile system gains information about the location of obstacles, that information is added to the system's knowledge base. Considerable attention was given to the computation of which vertices were mutually visible in the multi-obstacle environment, and that computation was carried out in Lisp. The study relied on a program implemented in a generalized decision-making paradigm, OPS5.
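    The mutual-visibility computation the abstract mentions (done in Lisp in the original work) reduces to a standard geometric test: two points are mutually visible if the segment between them properly crosses no obstacle edge. The sketch below uses the usual counter-clockwise orientation test; the square obstacle and coordinates are invented for illustration.

    ```python
    def ccw(a, b, c):
        """Signed area: >0 if a,b,c turn counter-clockwise."""
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def segments_cross(p1, p2, q1, q2):
        """True if segments p1p2 and q1q2 properly intersect."""
        d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
        d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
        return (d1 * d2 < 0) and (d3 * d4 < 0)

    def visible(p, goal, edges):
        """p sees goal iff the sight line crosses no obstacle edge."""
        return not any(segments_cross(p, goal, a, b) for a, b in edges)

    # A unit-square obstacle between the robot and the goal.
    square = [((2, 1), (3, 1)), ((3, 1), (3, 2)), ((3, 2), (2, 2)), ((2, 2), (2, 1))]
    print(visible((0, 1.5), (5, 1.5), square))  # False: sight line blocked
    print(visible((0, 3.0), (5, 3.0), square))  # True: passes above the obstacle
    ```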

  1. Young People's Management of the Transition from Education to Employment in the Knowledge-Based Sector in Shanghai

    ERIC Educational Resources Information Center

    Wang, Qi; Lowe, John

    2011-01-01

    This paper reports on a study of the transition from university to work by students/employees in the complex and rapidly changing socio-economic context of contemporary Shanghai. It aims at understanding how highly educated young people perceive the nature and mode of operation of the newly emerging labour market for knowledge-based jobs, and how…

  2. Development of the Knowledge-Based Standard for the Written Certification Examination of the American Board of Anesthesiology.

    ERIC Educational Resources Information Center

    Slogoff, Stephen; And Others

    1992-01-01

    Application of a knowledge-based standard in evaluating a written certification examination developed by the American Board of Anesthesiology established a standard of 57 percent correct over two years' examinations. This process is recommended for developing mastery-based (rather than normative-based) success criteria for evaluation of medical…

  3. A Comparative Analysis of New Governance Instruments in the Transnational Educational Space: A Shift to Knowledge-Based Instruments?

    ERIC Educational Resources Information Center

    Ioannidou, Alexandra

    2007-01-01

    In recent years, the ongoing development towards a knowledge-based society--associated with globalization, an aging population, new technologies and organizational changes--has led to a more intensive analysis of education and learning throughout life with regard to quantitative, qualitative and financial aspects. In this framework, education…

  4. Enhancing Student Learning in Knowledge-Based Courses: Integrating Team-Based Learning in Mass Communication Theory Classes

    ERIC Educational Resources Information Center

    Han, Gang; Newell, Jay

    2014-01-01

    This study explores the adoption of the team-based learning (TBL) method in knowledge-based and theory-oriented journalism and mass communication (J&MC) courses. It first reviews the origin and concept of TBL, the relevant theories, and then introduces the TBL method and implementation, including procedures and assessments, employed in an…

  5. Transformative Pedagogy, Leadership and School Organisation for the Twenty-First-Century Knowledge-Based Economy: The Case of Singapore

    ERIC Educational Resources Information Center

    Dimmock, Clive; Goh, Jonathan W. P.

    2011-01-01

    Singapore has a high performing school system; its students top international tests in maths and science. Yet while the Singapore government cherishes its world class "brand", it realises that in a globally competitive world, its schools need to prepare students for the twenty-first-century knowledge-based economy (KBE). Accordingly, over the past…

  6. Delivering Electronic Information in a Knowledge-Based Democracy. Summary of Proceedings (Washington, DC, July 14, 1993).

    ERIC Educational Resources Information Center

    Library of Congress, Washington, DC.

    The Library of Congress hosted a 1-day conference, "Delivering Electronic Information in a Knowledge-Based Democracy" to explore the public policy framework essential to creating electronic information resources and making them broadly available. Participants from a variety of sectors contributed to wide-ranging discussions on issues related to…

  7. Enabling the use of hereditary information from pedigree tools in medical knowledge-based systems.

    PubMed

    Gay, Pablo; López, Beatriz; Plà, Albert; Saperas, Jordi; Pous, Carles

    2013-08-01

    The use of family information is a key issue in dealing with inherited illnesses. This kind of information usually comes in the form of pedigree files, which contain structured information, as trees or graphs, explaining the family relationships. Knowledge-based systems should incorporate the information gathered by pedigree tools to support medical decision making. In this paper, we propose a method to achieve such a goal, which consists of the definition of new indicators, together with methods and rules to compute them from family trees. The method is illustrated with several case studies. We provide information about its implementation and integration in a case-based reasoning tool. The method has been experimentally tested with breast cancer diagnosis data. The results show the feasibility of our methodology.
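    One example of an indicator computable from a pedigree structure is the count of affected first-degree relatives (parents, siblings, children), a quantity commonly used in hereditary cancer risk assessment. The pedigree encoding and the indicator below are invented for illustration; the paper's actual indicators are not reproduced here.

    ```python
    # Toy pedigree: each person lists their parents and affected status.
    pedigree = {
        "proband": {"parents": ["mother", "father"], "affected": False},
        "mother":  {"parents": [], "affected": True},
        "father":  {"parents": [], "affected": False},
        "sister":  {"parents": ["mother", "father"], "affected": True},
        "son":     {"parents": ["proband", "partner"], "affected": False},
    }

    def first_degree(person, ped):
        """Parents, full siblings, and children of `person`."""
        parents = set(ped[person]["parents"])
        siblings = {p for p, rec in ped.items()
                    if p != person and parents and set(rec["parents"]) == parents}
        children = {p for p, rec in ped.items() if person in rec["parents"]}
        return parents | siblings | children

    def affected_first_degree(person, ped):
        """Indicator: number of affected first-degree relatives in the file."""
        return sum(ped[r]["affected"] for r in first_degree(person, ped) if r in ped)

    print(affected_first_degree("proband", pedigree))  # 2 (mother and sister)
    ```

    A rule in the knowledge-based system can then fire on the indicator's value rather than on the raw tree.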

  8. Collaborative development of knowledge-based support systems: a case study.

    PubMed

    Lindgren, Helena; Winnberg, Patrik J; Yan, Chunli

    2012-01-01

    We investigate a user-driven collaborative knowledge engineering and interaction design process. The outcome is a knowledge-based support application tailored to physicians in the local dementia care community. The activity is organized as part of a collaborative effort between different organizations to develop their local clinical practice. Six local practitioners used the generic decision-support prototype system DMSS-R, developed for the dementia domain, over a period of time and participated in evaluations and re-design. An additional two local domain experts and a domain expert external to the local community modeled the content and design of DMSS-R using the modeling system ACKTUS. Obstacles and success factors that arise when enabling end-users to design their own tools are identified and interpreted using a proposed framework for improving care through the use of clinical guidelines. The results are discussed.

  9. Constructing Clinical Decision Support Systems for Adverse Drug Event Prevention: A Knowledge-based Approach.

    PubMed

    Koutkias, Vassilis; Kilintzis, Vassilis; Stalidis, George; Lazou, Katerina; Collyda, Chrysa; Chazard, Emmanuel; McNair, Peter; Beuscart, Regis; Maglaveras, Nicos

    2010-11-13

    A knowledge-based approach is proposed that is employed for the construction of a framework suitable for the management and effective use of knowledge on Adverse Drug Event (ADE) prevention. The framework has as its core part a Knowledge Base (KB) comprised of rule-based knowledge sources, that is accompanied by the necessary inference and query mechanisms to provide healthcare professionals and patients with decision support services in clinical practice, in terms of alerts and recommendations on preventable ADEs. The relevant Knowledge Based System (KBS) is developed in the context of the EU-funded research project PSIP (Patient Safety through Intelligent Procedures in Medication). In the current paper, we present the foundations of the framework, its knowledge model and KB structure, as well as recent progress as regards the population of the KB, the implementation of the KBS, and results on the KBS verification in decision support operation.
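    The shape of such a rule-based KB with an inference mechanism can be sketched as a list of rules, each inspecting the patient record and emitting an alert when its condition holds. The two rules below are textbook interaction patterns chosen for illustration only; they are not taken from the PSIP knowledge base.

    ```python
    def rule_warfarin_nsaid(rec):
        if {"warfarin", "ibuprofen"} <= set(rec["drugs"]):
            return "alert: warfarin + NSAID increases bleeding risk"

    def rule_hyperkalemia(rec):
        if "spironolactone" in rec["drugs"] and rec["labs"].get("K", 0) > 5.0:
            return "alert: hyperkalemia risk with spironolactone"

    # The knowledge base is simply the collection of rules; the inference
    # step evaluates each rule against the record and collects the alerts.
    KB = [rule_warfarin_nsaid, rule_hyperkalemia]

    def evaluate(record):
        return [msg for rule in KB if (msg := rule(record))]

    patient = {"drugs": ["warfarin", "ibuprofen", "spironolactone"],
               "labs": {"K": 5.4}}
    for alert in evaluate(patient):
        print(alert)  # both rules fire for this record
    ```

    Verification of such a KB, as mentioned in the abstract, amounts to checking the rule set against reference cases with known expected alerts.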

  10. KIPSE1: A Knowledge-based Interactive Problem Solving Environment for data estimation and pattern classification

    NASA Technical Reports Server (NTRS)

    Han, Chia Yung; Wan, Liqun; Wee, William G.

    1990-01-01

    A knowledge-based interactive problem solving environment called KIPSE1 is presented. KIPSE1 is a system built on a commercial expert system shell, the KEE system. This environment gives the user the capability to carry out exploratory data analysis and pattern classification tasks. A good solution often consists of a sequence of steps, with a set of methods used at each step. In KIPSE1, a solution is represented in the form of a decision tree, and each node of the solution tree represents a partial solution to the problem. Many methodologies are provided at each node, such that the user can interactively select the method and data sets to test and subsequently examine the results. In addition, users are allowed to make decisions at various stages of problem solving to subdivide the problem into smaller subproblems, such that a large problem can be handled and a better solution can be found.

  11. Building organisational cyber resilience: A strategic knowledge-based view of cyber security management.

    PubMed

    Ferdinand, Jason

    The concept of cyber resilience has emerged in recent years in response to the recognition that cyber security is more than just risk management. Cyber resilience is the goal of organisations, institutions and governments across the world and yet the emerging literature is somewhat fragmented due to the lack of a common approach to the subject. This limits the possibility of effective collaboration across public, private and governmental actors in their efforts to build and maintain cyber resilience. In response to this limitation, and to calls for a more strategically focused approach, this paper offers a knowledge-based view of cyber security management that explains how an organisation can build, assess, and maintain cyber resilience. PMID:26642176

  12. Knowledge-based potentials in bioinformatics: From a physicist’s viewpoint

    NASA Astrophysics Data System (ADS)

    Zheng, Wei-Mou

    2015-12-01

    Biological raw data are growing exponentially, providing a large amount of information on what life is. It is believed that potential functions and the rules governing protein behaviors can be revealed from analysis of known native structures of proteins. Many knowledge-based potentials for proteins have been proposed. Contrary to most existing review articles, which mainly describe technical details and applications of various potential models, the main foci of the discussion here are the ideas and concepts involved in the construction of potentials, including the relation between free energy and energy, the additivity of potentials of mean force, and some key issues in potential construction. Sequence analysis is briefly viewed from an energetic viewpoint. Project supported in part by the National Natural Science Foundation of China (Grant Nos. 11175224 and 11121403).
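    The inverse-Boltzmann construction that such potentials rest on can be sketched in a few lines. This is a minimal illustration; the feature counts, the reduced-units kT, and the function name are invented here, not taken from the article:

```python
import math

def knowledge_based_potential(observed_counts, reference_counts, kT=1.0):
    """Inverse-Boltzmann potential: E_i = -kT * ln(p_obs_i / p_ref_i)."""
    total_obs = sum(observed_counts)
    total_ref = sum(reference_counts)
    energies = []
    for n_obs, n_ref in zip(observed_counts, reference_counts):
        p_obs = n_obs / total_obs      # frequency in native structures
        p_ref = n_ref / total_ref      # frequency in the reference state
        energies.append(-kT * math.log(p_obs / p_ref))
    return energies

# Features seen more often than the reference state expects come out with
# negative (favourable) energies; rarer features come out positive.
E = knowledge_based_potential([60, 30, 10], [40, 40, 20])
```

    The choice of reference state, which the review discusses, is exactly the `reference_counts` distribution here: a different reference changes every energy.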

  13. Application of knowledge-based network management techniques for packet radio networks

    NASA Astrophysics Data System (ADS)

    Doyle, R. J.; Sastry, A. R. K.

    The authors developed a preliminary version of a knowledge-based model for network management and reconfiguration using blackboard techniques and applied it to packet radio networks. The analysis is concerned with developing procedures for the evaluation of candidate recovery/reconfiguration methodologies, and with techniques for fault isolation and related monitoring functions. As an initial step, the generic blackboard was chosen as the artificial intelligence environment in which to develop the management tools and to interlink them with a packet radio network simulator, which served as the testbed network to be controlled and monitored. The details of the interaction between the management environment and the packet radio simulator, as implemented in the model so far, are described, and numerical results obtained through the execution of some preliminary rules are presented.

  14. FTDD973: A multimedia knowledge-based system and methodology for operator training and diagnostics

    NASA Technical Reports Server (NTRS)

    Hekmatpour, Amir; Brown, Gary; Brault, Randy; Bowen, Greg

    1993-01-01

    FTDD973 (973 Fabricator Training, Documentation, and Diagnostics) is an interactive multimedia knowledge-based system and methodology for computer-aided training and certification of operators, as well as tool and process diagnostics, in IBM's CMOS SGP fabrication line (building 973). FTDD973 is an example of what can be achieved with modern multimedia workstations. Knowledge-based systems, hypertext, hypergraphics, high-resolution images, audio, motion video, and animation are technologies that in synergy can be far more useful than each by itself. FTDD973's modular and object-oriented architecture is also an example of how improvements in software engineering are finally making it possible to combine many software modules into one application. FTDD973 is developed in ExperMedia/2, an OS/2 multimedia expert system shell for domain experts.

  15. Building organisational cyber resilience: A strategic knowledge-based view of cyber security management.

    PubMed

    Ferdinand, Jason

    The concept of cyber resilience has emerged in recent years in response to the recognition that cyber security is more than just risk management. Cyber resilience is the goal of organisations, institutions and governments across the world and yet the emerging literature is somewhat fragmented due to the lack of a common approach to the subject. This limits the possibility of effective collaboration across public, private and governmental actors in their efforts to build and maintain cyber resilience. In response to this limitation, and to calls for a more strategically focused approach, this paper offers a knowledge-based view of cyber security management that explains how an organisation can build, assess, and maintain cyber resilience.

  16. Knowledge-Based Manufacturing and Structural Design for a High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Marx, William J.; Mavris, Dimitri N.; Schrage, Daniel P.

    1994-01-01

    The aerospace industry is currently addressing the problem of integrating manufacturing and design. To address the difficulties associated with using many conventional procedural techniques and algorithms, one feasible way to integrate the two concepts is through the development of an appropriate Knowledge-Based System (KBS). The authors present their reasons for selecting a KBS to integrate design and manufacturing. A methodology for aircraft producibility assessment is proposed, utilizing a KBS for manufacturing process selection, that addresses both procedural and heuristic aspects of the design and manufacture of a High Speed Civil Transport (HSCT) wing. A cost model is discussed that would allow system-level trades utilizing information describing the material characteristics as well as the manufacturing process selections. Statements of future work conclude the paper.

  17. GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies.

    PubMed

    Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain

    2015-01-01

    Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/.

  18. Optimization of knowledge-based systems and expert system building tools

    NASA Technical Reports Server (NTRS)

    Yasuda, Phyllis; Mckellar, Donald

    1993-01-01

    The objectives of the NASA-AMES Cooperative Agreement were to investigate, develop, and evaluate, via test cases, the system parameters and processing algorithms that constrain the overall performance of the Information Sciences Division's Artificial Intelligence Research Facility. Written reports covering various aspects of the grant were submitted to the co-investigators for the grant. Research studies concentrated on the field of artificial intelligence knowledge-based systems technology. Activities included the following areas: (1) AI training classes; (2) merging optical and digital processing; (3) science experiment remote coaching; (4) SSF data management system tests; (5) computer integrated documentation project; (6) conservation of design knowledge project; (7) project management calendar and reporting system; (8) automation and robotics technology assessment; (9) advanced computer architectures and operating systems; and (10) honors program.

  19. GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies

    PubMed Central

    Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain

    2015-01-01

    Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/. PMID:26313379

  20. A Knowledge-Based System for the Computer Assisted Diagnosis of Endoscopic Images

    NASA Astrophysics Data System (ADS)

    Kage, Andreas; Münzenmayer, Christian; Wittenberg, Thomas

    Due to current demographic developments, the use of Computer-Assisted Diagnosis (CAD) systems is becoming a more important part of clinical workflows and clinical decision making. Because changes in the mucosa of the esophagus can indicate the first stage of cancerous developments, there is great interest in detecting and correctly diagnosing any such lesion. We present a knowledge-based system that is able to support a physician in the interpretation and diagnosis of endoscopic images of the esophagus. Our system is designed to support the physician directly during the examination of the patient, thus providing diagnostic assistance at the point of care (POC). Based on an interactively marked region in an endoscopic image of interest, the system provides a diagnostic suggestion drawn from an annotated reference image database. Furthermore, using relevance feedback mechanisms, the results can be refined interactively.

  1. Using fuzzy logic to integrate neural networks and knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Yen, John

    1991-01-01

    Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.

  2. Knowledge-based reasoning in the Paladin tactical decision generation system

    NASA Technical Reports Server (NTRS)

    Chappell, Alan R.

    1993-01-01

    A real-time tactical decision generation system for air combat engagements, Paladin, has been developed. A pilot's job in air combat includes tasks that are largely symbolic. These symbolic tasks are generally performed through the application of experience and training (i.e. knowledge) gathered over years of flying a fighter aircraft. Two such tasks, situation assessment and throttle control, are identified and broken out in Paladin to be handled by specialized knowledge-based systems. Knowledge pertaining to these tasks is encoded into rule-bases to provide the foundation for decisions. Paladin uses a custom-built inference engine and a partitioned rule-base structure to produce these symbolic results in real time. This paper provides an overview of knowledge-based reasoning systems as a subset of rule-based systems. The knowledge used by Paladin in generating results, as well as the system design for real-time execution, is discussed.
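    A partitioned rule-base of the kind described above can be illustrated with a toy forward-chaining sketch. The partitions echo the two tasks named in the abstract, but the rules, thresholds, and fact names are invented, not Paladin's:

```python
# Each partition holds (condition, conclusion) pairs; only the partition
# relevant to the current task is evaluated, which is the real-time benefit
# of partitioning: fewer rules to match per cycle.
rules = {
    "situation_assessment": [
        (lambda f: f["range_nm"] < 2.0, ("threat", "high")),
        (lambda f: f["range_nm"] >= 2.0, ("threat", "low")),
    ],
    "throttle_control": [
        (lambda f: f.get("threat") == "high", ("throttle", "max")),
        (lambda f: f.get("threat") == "low", ("throttle", "cruise")),
    ],
}

def run_partition(name, facts):
    """Fire every rule in one partition whose condition holds."""
    for condition, (key, value) in rules[name]:
        if condition(facts):
            facts[key] = value
    return facts

facts = {"range_nm": 1.5}
run_partition("situation_assessment", facts)  # asserts threat = "high"
run_partition("throttle_control", facts)      # chains off the new fact
```

    Conclusions asserted by one partition become the facts the next partition matches against, which is the forward-chaining part.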

  3. Knowledge-Based Parallel Performance Technology for Scientific Application Competitiveness Final Report

    SciTech Connect

    Malony, Allen D; Shende, Sameer

    2011-08-15

    The primary goal of the University of Oregon's DOE "competitiveness" project was to create performance technology that embodies and supports knowledge of performance data, analysis, and diagnosis in parallel performance problem solving. The target of our development activities was the TAU Performance System and the technology accomplishments reported in this and prior reports have all been incorporated in the TAU open software distribution. In addition, the project has been committed to maintaining strong interactions with the DOE SciDAC Performance Engineering Research Institute (PERI) and Center for Technology for Advanced Scientific Component Software (TASCS). This collaboration has proved valuable for translation of our knowledge-based performance techniques to parallel application development and performance engineering practice. Our outreach has also extended to the DOE Advanced CompuTational Software (ACTS) collection and project. Throughout the project we have participated in the PERI and TASCS meetings, as well as the ACTS annual workshops.

  4. Design of knowledge-based image retrieval system: implications from radiologists' cognitive processes

    NASA Astrophysics Data System (ADS)

    Liu Sheng, Olivia R.; Wei, Chih-Ping; Ozeki, Takeshi; Ovitt, Theron W.; Ishida, Jiro

    1992-07-01

    In a radiological examination reading, radiologists usually compare a newly generated examination with previous examinations of the same patient. For this reason, the retrieval of old images is a critical design requirement of totally digital radiology using Picture Archiving and Communication Systems (PACS). To achieve the required performance in a PACS with a hierarchical and possibly distributed image archival system, pre-fetching of images from slower or remote storage devices to the local buffers of workstations is proposed. Image Retrieval Expert System (IRES) is a knowledge-based image retrieval system which will predict and then pre-fetch relevant old images. Previous work on IRES design focused on the knowledge acquisition phase and the development of an efficient modeling methodology and architecture. The goal of this paper is to evaluate the effectiveness of the current IRES design and to identify appropriate directions for exploring other design features and alternatives by means of a cognitive study and an associated survey study.

  5. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

    In this study, an expert knowledge-based automatic sleep stage determination system working on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for the various sleep stages. Sleep stages are determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with visual inspection for the sleep stages of awake, REM (rapid eye movement), light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters and can be adapted to the variable sleep data encountered in hospitals. The developed automatic determination technique, based on expert knowledge from visual inspection, can be an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
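    The determination scheme described above, per-stage probability density functions with the stage chosen by conditional probability, resembles a naive-Bayes classifier. A minimal sketch with invented Gaussian parameters, feature names, and stages (the study's actual parameters and stages differ):

```python
import math

# Hypothetical per-stage Gaussian models (mean, sd) for two features.
stage_models = {
    "awake":      {"emg_power": (0.9, 0.2), "delta_ratio": (0.1, 0.05)},
    "deep_sleep": {"emg_power": (0.2, 0.1), "delta_ratio": (0.7, 0.1)},
}

def gaussian_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def classify_epoch(features):
    """Return the stage maximizing the product of per-feature densities
    (i.e. assuming the features are conditionally independent)."""
    best_stage, best_p = None, -1.0
    for stage, params in stage_models.items():
        p = 1.0
        for name, (mean, sd) in params.items():
            p *= gaussian_pdf(features[name], mean, sd)
        if p > best_p:
            best_stage, best_p = stage, p
    return best_stage

stage = classify_epoch({"emg_power": 0.25, "delta_ratio": 0.65})
```

    In the study the densities come from clinician-scored data, which is what makes the database adaptable: re-estimating the densities re-tunes the classifier without changing the decision rule.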

  6. A knowledge-based system for optimization of fuel reload configurations

    SciTech Connect

    Galperin, A.; Kimhi, S.; Segev, M. )

    1989-05-01

    The authors discuss a knowledge-based production system developed for generating optimal fuel reload configurations. The system was based on a heuristic search method and implemented in the Common Lisp programming language. The knowledge base embodied reactor physics, reactor operations, and a general approach to fuel management strategy. The database included a description of the physical system involved, i.e., the core geometry and fuel storage. The fifth cycle of the Three Mile Island Unit 1 pressurized water reactor was chosen as a test case. Application of the system to the test case revealed a self-learning process by which a relatively large number of near-optimal configurations were discovered. Several selected solutions were subjected to detailed analysis and demonstrated excellent performance. In summary, the applicability of the proposed heuristic search method in the domain of nuclear fuel management was demonstrated unequivocally.
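    Heuristic search over reload configurations can be illustrated with a toy pairwise-swap local search. The objective function, burnup values, and weights below are invented and vastly simpler than real core physics; this only shows the shape of the search loop, not the paper's method:

```python
import itertools

def peaking(config, weights):
    """Fictitious objective: ratio of the hottest position's load to the
    mean load. Lower is flatter, i.e. better."""
    loads = [b * w for b, w in zip(config, weights)]
    return max(loads) / (sum(loads) / len(loads))

def improve(config, weights):
    """Greedy local search: keep swapping assembly pairs while the
    objective improves; stop at a local optimum."""
    config = list(config)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(config)), 2):
            trial = config[:]
            trial[i], trial[j] = trial[j], trial[i]
            if peaking(trial, weights) < peaking(config, weights):
                config, improved = trial, True
    return config

# Pairing the high-burnup assembly with the low-importance position
# flattens the toy power distribution.
best = improve([3.0, 1.0, 2.0], [1.0, 0.5, 0.8])
```

    Every accepted swap strictly decreases the objective, so the loop always terminates; knowledge-based systems like the one described inject physics and operations rules to prune which swaps are even considered.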

  7. Programming constructs for real-time distributed knowledge-based systems

    SciTech Connect

    Cromarty, A.S.

    1988-01-01

    This study presents a set of mechanisms for use in the construction of distributed knowledge-based systems (DKBSs) that must meet practical real-time performance constraints. These mechanisms take the form of programming-language constructs designed to support the development of DKBSs that implement a multiplicity of alternative policies and abstract architectures. The proposed constructs are studied using two techniques: implementation in an experimental testbed environment, and performance measurement in controlled experiments that assess the time cost of interagent communications among loosely coupled LISP processes. The author's empirical data on interagent communications should prove valuable to DKBS implementors who must meet real-time constraints using contemporary hardware and software technology. It is concluded that it is possible to construct interagent communications cost models that are useful and have a high degree of predictive value. Experimental results indicate that the cost of message-based interagent communications between symbolic processing agents is very high.
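    A predictive communication cost model of the kind the study argues for is often a linear fit of transfer time against message size, t = a + b·size, where a is fixed per-message overhead and b the per-byte cost. A minimal least-squares sketch with invented timing data (the study's measurements are not reproduced here):

```python
def fit_linear_cost(sizes, times):
    """Ordinary least squares for t = a + b * size."""
    n = len(sizes)
    mean_s = sum(sizes) / n
    mean_t = sum(times) / n
    b = sum((s - mean_s) * (t - mean_t) for s, t in zip(sizes, times)) \
        / sum((s - mean_s) ** 2 for s in sizes)
    a = mean_t - b * mean_s
    return a, b   # fixed overhead, per-unit-size cost

# Invented measurements: message sizes (bytes) vs. round-trip times (ms).
a, b = fit_linear_cost([100, 200, 400], [12.0, 17.0, 27.0])

def predict(size):
    return a + b * size
```

    A large fitted `a` relative to `b` is the usual signature of the "very high" message-based cost the abstract reports: overhead dominates unless messages are batched.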

  8. A development environment for knowledge-based medical applications on the World-Wide Web.

    PubMed

    Riva, A; Bellazzi, R; Lanzola, G; Stefanelli, M

    1998-11-01

    The World-Wide Web (WWW) is increasingly being used as a platform to develop distributed applications, particularly in contexts, such as medical ones, where high usability and availability are required. In this paper we propose a methodology for the development of knowledge-based medical applications on the web, based on the use of an explicit domain ontology to automatically generate parts of the system. We describe a development environment, centred on the LISPWEB Common Lisp HTTP server, that supports this methodology, and we show how it facilitates the creation of complex web-based applications, by overcoming the limitations that normally affect the adequacy of the web for this purpose. Finally, we present an outline of a system for the management of diabetic patients built using the LISPWEB environment. PMID:9821518

  9. Comparative development of knowledge-based bioeconomy in the European Union and Turkey.

    PubMed

    Celikkanat Ozan, Didem; Baran, Yusuf

    2014-09-01

    Biotechnology, defined as the technological application that uses biological systems and living organisms, or their derivatives, to create or modify diverse products or processes, is widely used for healthcare, agricultural and environmental applications. The continuity of industrial applications of biotechnology has enabled the rise and development of the bioeconomy concept. Bioeconomy, encompassing all applications of biotechnology, is defined as the translation of knowledge from the life sciences into new, sustainable, environmentally friendly and competitive products. Advanced research and eco-efficient processes within the scope of bioeconomy promise a healthier and more sustainable life. Knowledge-based bioeconomy, with its economic, social and environmental potential, has already been brought onto the research agendas of European Union (EU) countries. The aim of this study is to summarize the development of knowledge-based bioeconomy in EU countries and to evaluate Turkey's current situation in comparison. EU-funded biotechnology research projects under FP6 and FP7, and nationally funded biotechnology projects under The Scientific and Technological Research Council of Turkey (TUBITAK) Academic Research Funding Program Directorate (ARDEB) and Technology and Innovation Funding Programs Directorate (TEYDEB), were examined. In the context of this study, the main research areas and subfields that have been funded, the budget spent and the number of projects funded since 2003, both nationally and EU-wide, as well as the gaps and overlapping topics, were analyzed. In consideration of the results, detailed suggestions for Turkey are proposed. The research results are expected to be used as a roadmap for coordinating the stakeholders of bioeconomy and integrating Turkish Research Areas into European Research Areas.

  10. Knowledge-based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.

    1993-01-01

    Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on a concept called the DataHub. With the DataHub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) with a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.

  11. Knowledge-based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.

    1992-01-01

    Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on the concept called the Data Hub. With the Data Hub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) with a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.

  12. Knowledge-based medical image analysis and representation for integrating content definition with the radiological report.

    PubMed

    Kulikowski, C A; Gong, L; Mezrich, R S

    1995-03-01

    Technology breakthroughs in high-speed, high-capacity, and high-performance desktop computers and workstations make the possibility of integrating multimedia medical data to better support clinical decision making, computer-aided education, and research not only attractive, but feasible. To systematically evaluate results from increasingly automated image segmentation, it is necessary to correlate them with the expert judgments of radiologists and other clinical specialists interpreting the images. These judgments are contained in increasingly computerized radiological reports and other related clinical records. But to make automated comparison feasible, it is necessary first to ensure compatibility of the knowledge content of images with the descriptions contained in these records. Enough common vocabulary, language, and knowledge representation components must be represented on the computer, followed by automated extraction of image-content descriptions from the text, which can then be matched to the results of automated image segmentation. A knowledge-based approach to image segmentation is essential to obtain the structured image descriptions needed for matching against the expert's descriptions. We have developed a new approach to medical image analysis which helps generate such descriptions: a knowledge-based, object-centered hierarchical planning method for automatically composing the image analysis processes. The problem-solving steps of specialists are represented at the knowledge level in terms of goals, tasks, and domain objects and concepts, separately from the implementation level for specific representations of different image types and generic analysis methods. This system can serve as a major functional component in incrementally building and updating a structured and integrated hybrid information system of patient data.(ABSTRACT TRUNCATED AT 250 WORDS)

  13. The role of textual semantic constraints in knowledge-based inference generation during reading comprehension: A computational approach.

    PubMed

    Yeari, Menahem; van den Broek, Paul

    2015-01-01

    The present research adopted a computational approach to explore the extent to which the semantic content of texts constrains the activation of knowledge-based inferences. Specifically, we examined whether textual semantic constraints (TSC) can explain (1) the activation of predictive inferences, (2) the activation of bridging inferences and (3) the higher prevalence of the activation of bridging inferences compared to predictive inferences. To examine these hypotheses, we computed the strength of semantic associations between texts and probe items as presented to human readers in previous behavioural studies, using the Latent Semantic Analysis (LSA) algorithm. We tested whether stronger semantic associations are observed for inferred items compared to control items. Our results show that in 15 out of 17 planned comparisons, the computed strength of semantic associations successfully simulated the activation of inferences. These findings suggest that TSC play a central role in the activation of knowledge-based inferences.
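    The comparison of probe items against text, which LSA performs in a latent semantic space, bottoms out in a vector association score. This sketch shows only the cosine-similarity step on raw term counts; real LSA first projects the vectors through an SVD of a large corpus matrix, and the example sentences are invented:

```python
import math
from collections import Counter

def cosine_association(text, probe):
    """Cosine similarity between bag-of-words term-count vectors."""
    v1, v2 = Counter(text.lower().split()), Counter(probe.lower().split())
    dot = sum(v1[w] * v2[w] for w in v1)
    norm = math.sqrt(sum(c * c for c in v1.values())) * \
           math.sqrt(sum(c * c for c in v2.values()))
    return dot / norm if norm else 0.0

story = "the glass fell from the table onto the hard floor"
inferred, control = "glass break floor", "glass fill water"
# The inferred probe shares more of the story's context terms, so it
# scores higher than the control probe, simulating inference activation.
```

    The studies' planned comparisons correspond to exactly this kind of inferred-versus-control contrast, computed in the LSA space rather than on raw counts.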

  14. Automated knowledge-based fuzzy models generation for weaning of patients receiving ventricular assist device (VAD) therapy.

    PubMed

    Tsipouras, Markos G; Karvounis, Evaggelos C; Tzallas, Alexandros T; Goletsis, Yorgos; Fotiadis, Dimitrios I; Adamopoulos, Stamatis; Trivella, Maria G

    2012-01-01

    The SensorART project focuses on the management of heart failure (HF) patients who are treated with implantable ventricular assist devices (VADs). This work presents the way that crisp models are transformed into fuzzy ones in the weaning module, which is one of the core modules of the specialist's decision support system (DSS) in SensorART. The weaning module is a DSS that supports the medical expert in the decision to wean the patient and remove the VAD. The weaning module has been developed following a "mixture of experts" philosophy, with the experts being fuzzy knowledge-based models, automatically generated from an initial crisp knowledge-based set of rules and criteria for weaning. PMID:23366361

  15. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    PubMed

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

    Human musculoskeletal system resources of the human body are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine was based on a client-server, multi-layer, multi-agent architecture and the principle of semantic web services, to acquire dynamically accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score, with related mathematical formulas, was also defined and implemented. As a result, semantic web service descriptions were presented in OWL, WSDL and OWL-S formats. Operational scenarios, with related web-based interfaces for personal computers and mobile devices, were presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and the robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to remotely access useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. PMID:23149160
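    The semantic-based PageRank mentioned above builds on the standard power-iteration score. As a baseline, here is plain PageRank on a tiny invented link graph; the paper's semantic weighting of the score is not reproduced here:

```python
def pagerank(links, damping=0.85, iters=50):
    """Power iteration for PageRank on a dict {node: [outlinks]}.
    Assumes every node has at least one outlink (no dangling nodes)."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {}
        for n in nodes:
            incoming = sum(rank[m] / len(links[m]) for m in nodes if n in links[m])
            new[n] = (1 - damping) / len(nodes) + damping * incoming
        rank = new
    return rank

# Invented three-page graph: "c" is linked from both other pages.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
rank = pagerank(links)
```

    A semantic variant typically replaces the uniform `rank[m] / len(links[m])` share with a similarity-weighted share, so links from semantically related pages count for more.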
  17. On the Importance of the Distance Measures Used to Train and Test Knowledge-Based Potentials for Proteins

    PubMed Central

    Carlsen, Martin; Koehl, Patrice; Røgen, Peter

    2014-01-01

    Knowledge-based potentials are energy functions derived from the analysis of databases of protein structures and sequences. They can be divided into two classes. Potentials from the first class are based on a direct conversion of the distributions of some geometric properties observed in native protein structures into energy values, while potentials from the second class are trained to mimic quantitatively the geometric differences between incorrectly folded models and native structures. In this paper, we focus on the relationship between energy and geometry when training the second class of knowledge-based potentials. We assume that the difference in energy between a decoy structure and the corresponding native structure is linearly related to the distance between the two structures. We trained two distance-based knowledge-based potentials accordingly: one based on all inter-residue distances (PPD), and one for which the set of all distances was filtered to reflect consistency in an ensemble of decoys (PPE). We tested four types of metric to characterize the distance between the decoy and the native structure, two based on extrinsic geometry (RMSD and GDT-TS*), and two based on intrinsic geometry (Q* and MT). The corresponding eight potentials were tested on a large collection of decoy sets. We found that it is usually better to train a potential using an intrinsic distance measure. We also found that PPE outperforms PPD, emphasizing the benefits of capturing consistent information in an ensemble. The relevance of these results for the design of knowledge-based potentials is discussed. PMID:25411785
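
The linear energy-distance assumption above lends itself to a least-squares fit: choose potential weights w so that the energy gap w · (f_decoy − f_native) tracks the structural distance between decoy and native. The sketch below uses synthetic feature vectors; the paper's actual inter-residue features and training protocol are not reproduced here.

```python
import numpy as np

# Sketch of training a linear knowledge-based potential so that the
# decoy-native energy gap tracks structural distance. Features and
# "distance" labels are synthetic, purely for illustration.

rng = np.random.default_rng(0)
n_pairs, n_features = 200, 10

# Feature-count differences between each decoy and its native structure.
dF = rng.normal(size=(n_pairs, n_features))

# Ground-truth weights plus small label noise stand in for real distances.
w_true = rng.normal(size=n_features)
dist = dF @ w_true + 0.01 * rng.normal(size=n_pairs)

# Least-squares fit of the linear energy-distance relation.
w_fit, *_ = np.linalg.lstsq(dF, dist, rcond=None)
print(np.allclose(w_fit, w_true, atol=0.05))
```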

  18. Evaluation of a Knowledge-Based Planning Solution for Head and Neck Cancer

    SciTech Connect

    Tol, Jim P.; Delaney, Alexander R.; Dahele, Max; Slotman, Ben J.; Verbakel, Wilko F.A.R.

    2015-03-01

    Purpose: Automated and knowledge-based planning techniques aim to reduce variations in plan quality. RapidPlan uses a library consisting of different patient plans to make a model that can predict achievable dose-volume histograms (DVHs) for new patients and uses those models for setting optimization objectives. We benchmarked RapidPlan versus clinical plans for 2 patient groups, using 3 different libraries. Methods and Materials: Volumetric modulated arc therapy plans of 60 recent head and neck cancer patients that included sparing of the salivary glands, swallowing muscles, and oral cavity were evenly divided between 2 models, Model_30A and Model_30B, and were combined in a third model, Model_60. Knowledge-based plans were created for 2 evaluation groups: evaluation group 1 (EG1), consisting of 15 recent patients, and evaluation group 2 (EG2), consisting of 15 older patients in whom only the salivary glands were spared. RapidPlan results were compared with clinical plans (CP) for boost and/or elective planning target volume homogeneity index, using HI_B/HI_E = 100 × (D2% − D98%)/D50%, and mean dose to composite salivary glands, swallowing muscles, and oral cavity (D_sal, D_swal, and D_oc, respectively). Results: For EG1, RapidPlan improved HI_B and HI_E values compared with CP by 1.0% to 1.3% and 1.0% to 0.6%, respectively. Comparable D_sal and D_swal values were seen in Model_30A, Model_30B, and Model_60, decreasing by an average of 0.1, 1.0, and 0.8 Gy and 4.8, 3.7, and 4.4 Gy, respectively. However, differences were noted between individual organs at risk (OARs), with Model_30B increasing D_oc by 0.1, 3.2, and 2.8 Gy compared with CP, Model_30A, and Model_60. Plan quality was less consistent when the patient was flagged as an outlier. For EG2, RapidPlan decreased D_sal by 4.1 to 4.9 Gy on average, whereas HI_B and HI_E decreased by 1.1% to
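
The homogeneity index defined in the abstract, HI = 100 × (D2% − D98%)/D50%, can be computed directly from a sampled dose distribution. The dose values below are illustrative, not patient data.

```python
import numpy as np

# Homogeneity index from the abstract: HI = 100 * (D2% - D98%) / D50%.
# Dx% is the minimum dose received by the hottest x% of the target volume,
# i.e. the (100 - x)th percentile of the voxel-dose distribution.

def homogeneity_index(doses):
    """doses: 1-D array of voxel doses in the target volume (Gy)."""
    d2 = np.percentile(doses, 98)   # D2%
    d98 = np.percentile(doses, 2)   # D98%
    d50 = np.percentile(doses, 50)  # D50%
    return 100.0 * (d2 - d98) / d50

# Illustrative ~70 Gy target with 1 Gy spread; lower HI = more homogeneous.
doses = np.random.default_rng(1).normal(70.0, 1.0, 10_000)
print(round(homogeneity_index(doses), 2))
```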

  19. An intelligent knowledge-based and customizable home care system framework with ubiquitous patient monitoring and alerting techniques.

    PubMed

    Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur

    2012-01-01

    This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining, efficiently and rapidly, the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for further improving and extending the system to meet new patient and caregiver monitoring demands by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. Based on the experimental results, the proposed intelligent homecare system demonstrates that it can meet the extensibility, customizability, and configurability demands of ubiquitous healthcare systems, serving the different needs of patients and caregivers under various rehabilitation and nursing conditions.
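
The core of a rule-based reasoning model of the kind described above can be sketched as a tiny forward-chaining loop over condition-action rules. The rule contents below are invented for illustration, not taken from the homecare system.

```python
# Minimal forward-chaining sketch of rule-based reasoning. Rules map a set
# of required facts (sensor-derived conditions) to an action; all rule
# contents here are hypothetical examples.

RULES = [
    ({"heart_rate_high", "motion_none"}, "alert_caregiver"),
    ({"spo2_low"}, "alert_caregiver"),
    ({"medication_due"}, "remind_patient"),
]

def infer(facts):
    """Fire every rule whose conditions are all present in `facts`."""
    actions = set()
    for conditions, action in RULES:
        if conditions <= facts:   # subset test: all conditions satisfied
            actions.add(action)
    return actions

print(infer({"heart_rate_high", "motion_none", "medication_due"}))
```

Because the rules live in data rather than code, extending the system to new monitoring demands amounts to editing the rule list, which is the flexibility the abstract emphasizes.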

  20. Shewanella knowledgebase: integration of the experimental data and computational predictions suggests a biological role for transcription of intergenic regions

    SciTech Connect

    Karpinets, Tatiana V; Romine, Margaret; Schmoyer, Denise D; Kora, Guruprasad H; Syed, Mustafa H; Leuze, Michael Rex; Serres, Margrethe H.; Park, Byung; Uberbacher, Edward C

    2010-01-01

    Shewanellae are facultative gamma-proteobacteria whose remarkable respiratory versatility has resulted in interest in their utility for bioremediation of heavy metals and radionuclides and for energy generation in microbial fuel cells. Extensive experimental efforts over the last several years and the availability of 21 sequenced Shewanella genomes made it possible to collect and integrate a wealth of information on the genus into one public resource providing new avenues for making biological discoveries and for developing a system level understanding of the cellular processes. The Shewanella knowledgebase was established in 2005 to provide a framework for integrated genome-based studies on Shewanella ecophysiology. The present version of the knowledgebase provides access to a diverse set of experimental and genomic data along with tools for curation of genome annotations and visualization and integration of genomic data with experimental data. As a demonstration of the utility of this resource, we examined a single microarray data set from Shewanella oneidensis MR-1 for new insights into regulatory processes. The integrated analysis of the data predicted a new type of bacterial transcriptional regulation involving co-transcription of the intergenic region with the downstream gene and suggested a biological role for co-transcription that likely prevents the binding of a regulator of the upstream gene to the regulator binding site located in the intergenic region. Database URL: http://shewanella-knowledgebase.org:8080/Shewanella/ or http://spruce.ornl.gov:8080/Shewanella/

  1. Design of Composite Structures Using Knowledge-Based and Case Based Reasoning

    NASA Technical Reports Server (NTRS)

    Lambright, Jonathan Paul

    1996-01-01

    A method of using knowledge-based and case-based reasoning to assist designers during conceptual design tasks of composite structures was proposed. The cooperative use of heuristics, procedural knowledge, and previous similar design cases suggests a potential reduction in design cycle time and ultimately product lead time. The hypothesis of this work is that the design process of composite structures can be improved by using Case-Based Reasoning (CBR) and Knowledge-Based (KB) reasoning in the early design stages. The technique of using knowledge-based and case-based reasoning facilitates the gathering of disparate information into one location that is easily and readily available. The method suggests that the inclusion of downstream life-cycle issues into the conceptual design phase reduces the potential for defective and sub-optimal composite structures. Three industry experts were interviewed extensively. The experts provided design rules, previous design cases, and test problems. A knowledge-based reasoning system was developed using the CLIPS (C Language Integrated Production System) environment, and a case-based reasoning system was developed using the Design Memory Utility For Sharing Experiences (MUSE) environment. A Design Characteristic State (DCS) was used to document the design specifications, constraints, and problem areas using attribute-value pair relationships. The DCS provided consistent design information between the knowledge base and case base. Results indicated that the use of knowledge-based and case-based reasoning provided a robust design environment for composite structures. The knowledge base provided design guidance from well-defined rules and procedural knowledge. The case base provided suggestions on design and manufacturing techniques based on previous similar designs, along with warnings of potential problems and pitfalls. The case base complemented the knowledge base and extended the problem solving capability beyond the existence of

  2. An expert knowledge-based approach to landslide susceptibility mapping using GIS and fuzzy logic

    NASA Astrophysics Data System (ADS)

    Zhu, A.-Xing; Wang, Rongxun; Qiao, Jianping; Qin, Cheng-Zhi; Chen, Yongbo; Liu, Jing; Du, Fei; Lin, Yang; Zhu, Tongxin

    2014-06-01

    This paper presents an expert knowledge-based approach to landslide susceptibility mapping in an effort to overcome the deficiencies of data-driven approaches. The proposed approach consists of three generic steps: (1) extraction of knowledge on the relationship between landslide susceptibility and predisposing factors from domain experts, (2) characterization of predisposing factors using GIS techniques, and (3) prediction of landslide susceptibility under fuzzy logic. The approach was tested in two study areas in China - the Kaixian study area (about 250 km2) and the Three Gorges study area (about 4600 km2). The Kaixian study area was used to develop the approach and to evaluate its validity. The Three Gorges study area was used to test both the portability and the applicability of the developed approach for mapping landslide susceptibility over large study areas. Performance was evaluated by examining whether the mean of the computed susceptibility values at landslide sites was statistically different from that of the entire study area. A z-score test was used to examine the statistical significance of the difference. The computed z for the Kaixian area was 3.70 and the corresponding p-value was less than 0.001. This suggests that the computed landslide susceptibility values are good indicators of landslide occurrences. In the Three Gorges study area, the computed z was 10.75 and the corresponding p-value was less than 0.001. In addition, we divided the susceptibility value into four levels: low (0.0-0.25), moderate (0.25-0.5), high (0.5-0.75) and very high (0.75-1.0). No landslides were found in areas of low susceptibility. Landslide density was about three times higher in areas of very high susceptibility than in the moderate susceptibility areas, and more than twice as high as in the high susceptibility areas. The results from the Three Gorges study area suggest that the extracted expert knowledge can be extrapolated to another study area and the
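
The z-score test used above compares the mean susceptibility at the n landslide sites against the mean and standard deviation over the whole study area, z = (x̄ − μ)/(σ/√n). The numbers below are illustrative, not the paper's data.

```python
import math

# One-sample z-score: how far the mean susceptibility at landslide sites
# lies from the study-area mean, in standard errors. Input values below
# are illustrative, not taken from the Kaixian or Three Gorges data.

def z_score(sample_mean, pop_mean, pop_std, n):
    return (sample_mean - pop_mean) / (pop_std / math.sqrt(n))

z = z_score(sample_mean=0.62, pop_mean=0.48, pop_std=0.21, n=100)
print(round(z, 2))  # a large z (p < 0.001 for z > ~3.3) indicates a real difference
```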

  3. A Knowledge-Based Approach to Improving and Homogenizing Intensity Modulated Radiation Therapy Planning Quality Among Treatment Centers: An Example Application to Prostate Cancer Planning

    SciTech Connect

    Good, David; Lo, Joseph; Lee, W. Robert; Wu, Q. Jackie; Yin, Fang-Fang; Das, Shiva K.

    2013-09-01

    Purpose: Intensity modulated radiation therapy (IMRT) treatment planning can have wide variation among different treatment centers. We propose a system to leverage the IMRT planning experience of larger institutions to automatically create high-quality plans for outside clinics. We explore feasibility by generating plans for patient datasets from an outside institution by adapting plans from our institution. Methods and Materials: A knowledge database was created from 132 IMRT treatment plans for prostate cancer at our institution. The outside institution, a community hospital, provided the datasets for 55 prostate cancer cases, including their original treatment plans. For each “query” case from the outside institution, a similar “match” case was identified in the knowledge database, and the match case’s plan parameters were then adapted and optimized to the query case by use of a semiautomated approach that required no expert planning knowledge. The plans generated with this knowledge-based approach were compared with the original treatment plans at several dose cutpoints. Results: Compared with the original plan, the knowledge-based plan had a significantly more homogeneous dose to the planning target volume and a significantly lower maximum dose. The volumes of the rectum, bladder, and femoral heads above all cutpoints were nominally lower for the knowledge-based plan; the reductions were statistically significant for the rectum. In 40% of cases, the knowledge-based plan had overall superior (lower) dose–volume histograms for rectum and bladder; in 54% of cases, the comparison was equivocal; in 6% of cases, the knowledge-based plan was inferior for both bladder and rectum. Conclusions: Knowledge-based planning was superior or equivalent to the original plan in 95% of cases. The knowledge-based approach shows promise for homogenizing plan quality by transferring planning expertise from more experienced to less experienced institutions.
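
The query-to-match retrieval step described above is, at its core, a nearest-neighbor lookup over case features. The feature choice (e.g. prescription dose, target-OAR overlap) and the numbers below are invented for illustration; the paper's actual similarity measure is not reproduced here.

```python
import numpy as np

# Sketch of "query -> match" case retrieval: pick the stored case closest
# to the query in a small feature space. Features and values are
# hypothetical examples, not the paper's similarity measure.

def best_match(query, library):
    """library: (n_cases, n_features) array; returns index of nearest case."""
    d = np.linalg.norm(library - query, axis=1)
    return int(np.argmin(d))

# Columns: prescription dose (Gy), target-OAR overlap fraction (illustrative).
library = np.array([[60.0, 0.30], [74.0, 0.45], [78.0, 0.50]])
query = np.array([77.0, 0.48])
print(best_match(query, library))
```

The matched case's plan parameters would then be adapted and re-optimized for the query anatomy, as the abstract describes.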

  4. Development of a Knowledgebase to Integrate, Analyze, Distribute, and Visualize Microbial Community Systems Biology Data

    SciTech Connect

    Banfield, Jillian; Thomas, Brian

    2015-01-15

    We have developed a flexible knowledgebase system, ggKbase (http://gg.berkeley.edu), to enable effective data analysis and knowledge generation from samples from which metagenomic and other ‘omics’ data are obtained. Within ggKbase, data can be interpreted, integrated and linked to other databases and services. Sequence information from complex metagenomic samples can be quickly and effectively resolved into genomes, and biologically meaningful investigations of an organism’s metabolic potential can then be conducted. Critical features make analyses efficient, allowing analysis of hundreds of genomes at a time. The system is being used to support research in multiple DOE-relevant systems, including the LBNL SFA subsurface science biogeochemical cycling research at Rifle, Colorado. ggKbase is supporting the research of a rapidly growing group of users. It has enabled studies of carbon cycling in acid mine drainage ecosystems and of biologically mediated transformations in deep subsurface biomes sampled from mines and the North Slope of Alaska, as well as studies of the human microbiome and laboratory bioreactor-based remediation investigations.

  5. Diagnosis by integrating model-based reasoning with knowledge-based reasoning

    NASA Technical Reports Server (NTRS)

    Bylander, Tom

    1988-01-01

    Our research investigates how observations can be categorized by integrating a qualitative physical model with experiential knowledge. Our domain is diagnosis of pathologic gait in humans, in which the observations are the gait motions, muscle activity during gait, and physical exam data, and the diagnostic hypotheses are the potential muscle weaknesses, muscle mistimings, and joint restrictions. Patients with underlying neurological disorders typically have several malfunctions. Among the problems that need to be faced are: the ambiguity of the observations, the ambiguity of the qualitative physical model, correspondence of the observations and hypotheses to the qualitative physical model, the inherent uncertainty of experiential knowledge, and the combinatorics involved in forming composite hypotheses. Our system divides the work so that the knowledge-based reasoning suggests which hypotheses appear more likely than others, the qualitative physical model is used to determine which hypotheses explain which observations, and another process combines these functionalities to construct a composite hypothesis based on explanatory power and plausibility. We speculate that the reasoning architecture of our system is generally applicable to complex domains in which a less-than-perfect physical model and less-than-perfect experiential knowledge need to be combined to perform diagnosis.

  6. Human Disease Insight: An integrated knowledge-based platform for disease-gene-drug information.

    PubMed

    Tasleem, Munazzah; Ishrat, Romana; Islam, Asimul; Ahmad, Faizan; Hassan, Md Imtaiyaz

    2016-01-01

    The scope of the Human Disease Insight (HDI) database is not limited to researchers or physicians as it also provides basic information to non-professionals and creates disease awareness, thereby reducing the chances of patient suffering due to ignorance. HDI is a knowledge-based resource providing information on human diseases to both scientists and the general public. Here, our mission is to provide a comprehensive human disease database containing most of the available useful information, with extensive cross-referencing. HDI is a knowledge management system that acts as a central hub to access information about human diseases and associated drugs and genes. In addition, HDI contains well-classified bioinformatics tools with helpful descriptions. These integrated bioinformatics tools enable researchers to annotate disease-specific genes and perform protein analysis, search for biomarkers and identify potential vaccine candidates. Eventually, these tools will facilitate the analysis of disease-associated data. The HDI provides two types of search capabilities and includes provisions for downloading, uploading and searching disease/gene/drug-related information. The logistical design of the HDI allows for regular updating. The database is designed to work best with Mozilla Firefox and Google Chrome and is freely accessible at http://humandiseaseinsight.com. PMID:26631432

  7. T2D@ZJU: a knowledgebase integrating heterogeneous connections associated with type 2 diabetes mellitus.

    PubMed

    Yang, Zhenzhong; Yang, Jihong; Liu, Wei; Wu, Leihong; Xing, Li; Wang, Yi; Fan, Xiaohui; Cheng, Yiyu

    2013-01-01

    Type 2 diabetes mellitus (T2D), affecting >90% of diabetic patients, is one of the major threats to human health. A comprehensive understanding of the mechanisms of T2D at the molecular level is essential to facilitate the related translational research. Here, we introduce a comprehensive and up-to-date knowledgebase for T2D, i.e. T2D@ZJU. T2D@ZJU contains three levels of heterogeneous connections associated with T2D, which are retrieved from pathway databases, protein-protein interaction databases and the literature, respectively. In the current release, T2D@ZJU contains 1078 T2D-related entities such as proteins, protein complexes, drugs and others, together with their corresponding relationships, which include 3069 manually curated connections, 14,893 protein-protein interactions and 26,716 relationships identified by text-mining technology. Moreover, T2D@ZJU provides a user-friendly web interface for users to browse and search data. A Cytoscape Web-based interactive network browser is available to visualize the corresponding network relationships between T2D-related entities. The functionality of T2D@ZJU is shown by means of several case studies. Database URL: http://tcm.zju.edu.cn/t2d.

  8. A methodology for evaluating potential KBS (Knowledge-Based Systems) applications

    SciTech Connect

    Melton, R.B.; DeVaney, D.M.; Whiting, M.A.; Laufmann, S.C.

    1989-06-01

    It is often difficult to assess how well Knowledge-Based Systems (KBS) techniques and paradigms may be applied to automating various tasks. This report describes the approach and organization of an assessment procedure that involves two levels of analysis. Level One can be performed by individuals with little technical expertise relative to KBS development, while Level Two is intended to be used by experienced KBS developers. The two levels review four groups of issues: goals, appropriateness, resources, and non-technical considerations. The criteria important at each step in the assessment are identified. A qualitative methodology for scoring the task relative to the assessment criteria is provided to allow analysts to make better informed decisions with regard to the potential effectiveness of applying KBS technology. In addition to this documentation, the assessment methodology has been implemented for personal computer use using the HyperCard™ software on a Macintosh™ computer. This interactive mode facilitates small-group analysis of potential KBS applications and permits a non-sequential appraisal with provisions for automated note-keeping and question scoring. The results provide a useful tool for assessing the feasibility of using KBS techniques in performing tasks in support of treaty verification or IC functions. 13 refs., 3 figs.

  9. Knowledge-based discovery for designing CRISPR-CAS systems against invading mobilomes in thermophiles.

    PubMed

    Chellapandi, P; Ranjani, J

    2015-09-01

    Clustered regularly interspaced short palindromic repeats (CRISPRs) are direct features of prokaryotic genomes involved in resistance to bacterial viruses and phages. Herein, we have identified CRISPR loci together with CRISPR-associated (CAS) genes to reveal their immunity against genome invaders in thermophilic archaea and bacteria. The genomic survey of this study implied that the genomic distribution of CRISPR-CAS systems varied from strain to strain, determined by the degree of invading mobilomes. Direct repeats were found to be similar to some extent in many thermophiles, but their spacers differed in each strain. Phylogenetic analyses of the CAS superfamily revealed that the genes cmr, csh, csx11, HD domain and devR belong to subtypes of the cas gene family. The members of the cas gene family of thermophiles are functionally diverged within closely related genomes and may contribute to the development of several defense strategies. Nevertheless, genome dynamics, geological variation and host defense mechanisms contributed to sharing their molecular functions across the thermophiles. A thermophilic archaeon, Thermococcus gammatolerans, and the thermophilic bacteria Petrotoga mobilis and Thermotoga lettingae show a superoperon-like arrangement of clustered cas genes, typically evolved for their defense pathways. A cmr operon was identified with a specific promoter in a thermophilic archaeon, Caldivirga maquilingensis. Overall, we conclude that this knowledge-based genomic survey and phylogeny-based functional assignment suggest a route to designing reliable genetic regulatory circuits from naturally occurring CRISPR-CAS systems and acquired defense pathways in thermophiles for future synthetic biology.

  10. The Application of Integrated Knowledge-based Systems for the Biomedical Risk Assessment Intelligent Network (BRAIN)

    NASA Technical Reports Server (NTRS)

    Loftin, Karin C.; Ly, Bebe; Webster, Laurie; Verlander, James; Taylor, Gerald R.; Riley, Gary; Culbert, Chris; Holden, Tina; Rudisill, Marianne

    1993-01-01

    One of NASA's goals for long duration space flight is to maintain acceptable levels of crew health, safety, and performance. One way of meeting this goal is through the Biomedical Risk Assessment Intelligent Network (BRAIN), an integrated network of both human and computer elements. The BRAIN will function as an advisor to flight surgeons by assessing the risk of in-flight biomedical problems and recommending appropriate countermeasures. This paper describes the joint effort among various NASA elements to develop BRAIN and an Infectious Disease Risk Assessment (IDRA) prototype. The implementation of this effort addresses the technological aspects of the following: (1) knowledge acquisition; (2) integration of IDRA components; (3) use of expert systems to automate the biomedical prediction process; (4) development of a user-friendly interface; and (5) integration of the IDRA prototype and Exercise Countermeasures Intelligent System (ExerCISys). Because the C Language, CLIPS (the C Language Integrated Production System), and the X-Window System were portable and easily integrated, they were chosen as the tools for the initial IDRA prototype. The feasibility was tested by developing an IDRA prototype that predicts the individual risk of influenza. The application of knowledge-based systems to risk assessment is of great market value to the medical technology industry.

  11. Designing optimal transportation networks: a knowledge-based computer-aided multicriteria approach

    SciTech Connect

    Tung, S.I.

    1986-01-01

    The dissertation investigates the applicability of a knowledge-based expert systems (KBES) approach to solving the single-mode (automobile), fixed-demand, discrete, multicriteria, equilibrium transportation-network-design problem. Previous work on this problem found that mathematical programming methods perform well on small networks with only one objective. What is needed is a solution technique that can be used on large networks having multiple, conflicting criteria with different relative importance weights. The KBES approach developed in this dissertation represents a new way to solve network design problems. The development of an expert system involves three major tasks: knowledge acquisition, knowledge representation, and testing. For knowledge acquisition, a computer-aided network design/evaluation model (UFOS) was developed to explore the design space. This study is limited to the problem of designing an optimal transportation network by adding and deleting capacity increments to/from any link in the network. Three weighted criteria were adopted for use in evaluating each design alternative: cost, average V/C ratio, and average travel time.
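
Evaluating a design alternative against weighted criteria, as described above, reduces to a weighted-sum score. The weights and criterion values below are illustrative (normalized so that lower is better for all three criteria), not taken from the dissertation.

```python
# Weighted-sum evaluation over the three criteria from the abstract
# (cost, average V/C ratio, average travel time). Values are normalized
# to [0, 1] with lower = better; weights and numbers are hypothetical.

def weighted_score(values, weights):
    assert len(values) == len(weights)
    return sum(v * w for v, w in zip(values, weights))

design_a = (0.6, 0.4, 0.5)   # cost, V/C ratio, travel time
design_b = (0.7, 0.3, 0.3)
weights = (0.5, 0.3, 0.2)    # relative importance of each criterion

best = min((design_a, design_b), key=lambda d: weighted_score(d, weights))
print(best)
```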

  12. Weaning Patients From Mechanical Ventilation: A Knowledge-Based System Approach

    PubMed Central

    Tong, David A.

    1990-01-01

    The WEANing PROtocol (WEANPRO) knowledge-based system assists respiratory therapists and nurses in weaning post-operative cardiovascular patients from mechanical ventilation in the intensive care unit. The knowledge contained in WEANPRO is represented by rules and is implemented in M.1® by Teknowledge, Inc. WEANPRO will run on any IBM® compatible microcomputer. WEANPRO's performance in weaning patients in the intensive care unit was evaluated in three ways: (1) a statistical comparison between the mean number of arterial blood gases required to wean patients to a T-piece with and without the use of WEANPRO, (2) a critique of the suggestions offered by the system by clinicians not involved in the system development, and (3) an inspection of the user's acceptance of WEANPRO in the intensive care unit. The results of the evaluations revealed that using WEANPRO significantly decreases the number of arterial blood gas analyses needed to wean patients from total dependence on mechanical ventilation to independent breathing using a T-piece. In doing so, WEANPRO's suggestions are accurate and its use is accepted by the clinicians. Currently, WEANPRO is being used in the intensive care unit at the East Unit of Baptist Memorial Hospital in Memphis, Tennessee.

  13. Improving Loop Modeling of the Antibody Complementarity-Determining Region 3 Using Knowledge-Based Restraints.

    PubMed

    Finn, Jessica A; Koehler Leman, Julia; Willis, Jordan R; Cisneros, Alberto; Crowe, James E; Meiler, Jens

    2016-01-01

    Structural restrictions are present even in the most sequence-diverse portions of antibodies, the complementarity-determining region (CDR) loops. Previous studies identified robust rules that define canonical structures for five of the six CDR loops; however, the heavy-chain CDR 3 (HCDR3) defies standard classification attempts. The HCDR3 loop can be subdivided into two domains referred to as the "torso" and the "head" domains, and two major families of canonical torso structures have been identified: the more prevalent "bulged" and less frequent "non-bulged" torsos. In the present study, we found that Rosetta loop modeling of 28 benchmark bulged HCDR3 loops is improved with knowledge-based structural restraints developed from available antibody crystal structures in the PDB. These restraints restrict the sampling space Rosetta searches in the torso domain, limiting the φ and ψ angles of these residues to conformations that have been experimentally observed. The application of these restraints in Rosetta results in more native-like structure sampling and improved score-based differentiation of native-like HCDR3 models, significantly improving our ability to model antibody HCDR3 loops. PMID:27182833

  14. A Knowledge-Based and Model-Driven Requirements Engineering Approach to Conceptual Satellite Design

    NASA Astrophysics Data System (ADS)

    Dos Santos, Walter A.; Leonor, Bruno B. F.; Stephany, Stephan

    Satellite systems are becoming even more complex, making technical issues a significant cost driver. The increasing complexity of these systems makes requirements engineering activities both more important and difficult. Additionally, today's competitive pressures and other market forces drive manufacturing companies to improve the efficiency with which they design and manufacture space products and systems. This imposes a heavy burden on systems-of-systems engineering skills and particularly on requirements engineering which is an important phase in a system's life cycle. When this is poorly performed, various problems may occur, such as failures, cost overruns and delays. One solution is to underpin the preliminary conceptual satellite design with computer-based information reuse and integration to deal with the interdisciplinary nature of this problem domain. This can be attained by taking a model-driven engineering approach (MDE), in which models are the main artifacts during system development. MDE is an emergent approach that tries to address system complexity by the intense use of models. This work outlines the use of SysML (Systems Modeling Language) and a novel knowledge-based software tool, named SatBudgets, to deal with these and other challenges confronted during the conceptual phase of a university satellite system, called ITASAT, currently being developed by INPE and some Brazilian universities.

  15. The fault monitoring and diagnosis knowledge-based system for space power systems: AMPERES, phase 1

    NASA Technical Reports Server (NTRS)

    Lee, S. C.

    1989-01-01

    The objective is to develop a real time fault monitoring and diagnosis knowledge-based system (KBS) for space power systems which can save costly operational manpower and can achieve more reliable space power system operation. The proposed KBS was developed using the Autonomously Managed Power System (AMPS) test facility currently installed at NASA Marshall Space Flight Center (MSFC), but the basic approach taken for this project could be applicable to other space power systems. The proposed KBS is entitled Autonomously Managed Power-System Extendible Real-time Expert System (AMPERES). In Phase 1 the emphasis was put on the design of the overall KBS, the identification of the basic research required, the initial performance of the research, and the development of a prototype KBS. In Phase 2, emphasis is put on the completion of the research initiated in Phase 1, and the enhancement of the prototype KBS developed in Phase 1. This enhancement is intended to achieve a working real time KBS incorporated with the NASA space power system test facilities. Three major research areas were identified and progress was made in each: real time data acquisition and its supporting data structure; sensor value validation; and the development of an inference scheme for effective fault monitoring and diagnosis, together with its supporting knowledge representation scheme.

  16. Human Disease Insight: An integrated knowledge-based platform for disease-gene-drug information.

    PubMed

    Tasleem, Munazzah; Ishrat, Romana; Islam, Asimul; Ahmad, Faizan; Hassan, Md Imtaiyaz

    2016-01-01

    The scope of the Human Disease Insight (HDI) database is not limited to researchers or physicians as it also provides basic information to non-professionals and creates disease awareness, thereby reducing the chances of patient suffering due to ignorance. HDI is a knowledge-based resource providing information on human diseases to both scientists and the general public. Here, our mission is to provide a comprehensive human disease database containing most of the available useful information, with extensive cross-referencing. HDI is a knowledge management system that acts as a central hub to access information about human diseases and associated drugs and genes. In addition, HDI contains well-classified bioinformatics tools with helpful descriptions. These integrated bioinformatics tools enable researchers to annotate disease-specific genes and perform protein analysis, search for biomarkers and identify potential vaccine candidates. Eventually, these tools will facilitate the analysis of disease-associated data. The HDI provides two types of search capabilities and includes provisions for downloading, uploading and searching disease/gene/drug-related information. The logistical design of the HDI allows for regular updating. The database is designed to work best with Mozilla Firefox and Google Chrome and is freely accessible at http://humandiseaseinsight.com. PMID:26631432

  17. A Knowledge-Based System For The Delineation Of The Coronary Arteries

    NASA Astrophysics Data System (ADS)

    Smets, Carl; Suetens, Paul; Oosterlinck, Andre J.; van de Werf, Frans

    1989-05-01

    In this article we present work in progress on a knowledge-based system for the labeling of the coronary arteries on single projections. The approach is based on a gradual refinement of the interpretation results, starting from the detection of blood vessel center lines, the extraction of bar-like primitives and their connection into blood vessel segments. In this paper we focus on the final stage, the labeling of the delineated blood vessel segments. In contrast to most existing approaches, which are mainly based on a sequential labeling of the vessels starting from the most important segment, our system uses a constraint satisfaction technique, mainly because most anatomical knowledge can easily be formalized as constraints on local attributes such as position, greyvalue, thickness and orientation, and as constraints on relations between blood vessel segments such as "left of" or "in same direction". Anatomical models are developed for the Left Coronary Artery in standard RAO and LAO views. In general, only 1-2 interpretations are left, which is an encouraging result given that for some projections there is considerable overlap between vessel segments.
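
    The constraint-satisfaction labeling described above can be sketched as a toy search over label assignments, pruned by unary constraints on attributes and binary constraints on relations. The segments, attributes and constraints below are invented for illustration; they are not the paper's anatomical model.

```python
from itertools import permutations

# Toy constraint-satisfaction labelling: assign anatomical labels to
# delineated segments so that unary constraints (on attributes) and
# binary constraints (on relations) all hold.

segments = {
    "s1": {"x": 10, "thickness": 5},
    "s2": {"x": 40, "thickness": 3},
    "s3": {"x": 70, "thickness": 2},
}
labels = ["LMain", "LAD", "LCx"]

def consistent(assign):
    # Unary: the main stem (LMain) must be the thickest segment.
    main = next(s for s, l in assign.items() if l == "LMain")
    if any(segments[s]["thickness"] > segments[main]["thickness"] for s in segments):
        return False
    # Binary: LAD lies to the left of LCx in this projection.
    lad = next(s for s, l in assign.items() if l == "LAD")
    lcx = next(s for s, l in assign.items() if l == "LCx")
    return segments[lad]["x"] < segments[lcx]["x"]

interpretations = [dict(zip(segments, p)) for p in permutations(labels)
                   if consistent(dict(zip(segments, p)))]
print(interpretations)  # the few label assignments that survive
```

    Exhaustive search is viable here only because the toy instance is tiny; the point is that each anatomical fact prunes the space of interpretations, which is why only one or two survive in practice.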

  18. VIP: A knowledge-based design aid for the engineering of space systems

    NASA Technical Reports Server (NTRS)

    Lewis, Steven M.; Bellman, Kirstie L.

    1990-01-01

    The Vehicles Implementation Project (VIP), a knowledge-based design aid for the engineering of space systems is described. VIP combines qualitative knowledge in the form of rules, quantitative knowledge in the form of equations, and other mathematical modeling tools. The system allows users rapidly to develop and experiment with models of spacecraft system designs. As information becomes available to the system, appropriate equations are solved symbolically and the results are displayed. Users may browse through the system, observing dependencies and the effects of altering specific parameters. The system can also suggest approaches to the derivation of specific parameter values. In addition to providing a tool for the development of specific designs, VIP aims at increasing the user's understanding of the design process. Users may rapidly examine the sensitivity of a given parameter to others in the system and perform tradeoffs or optimizations of specific parameters. A second major goal of VIP is to integrate the existing corporate knowledge base of models and rules into a central, symbolic form.

  19. lncRNome: a comprehensive knowledgebase of human long noncoding RNAs

    PubMed Central

    Bhartiya, Deeksha; Pal, Koustav; Ghosh, Sourav; Kapoor, Shruti; Jalali, Saakshi; Panwar, Bharat; Jain, Sakshi; Sati, Satish; Sengupta, Shantanu; Sachidanandan, Chetana; Raghava, Gajendra Pal Singh; Sivasubbu, Sridhar; Scaria, Vinod

    2013-01-01

    The advent of high-throughput genome-scale technologies has enabled us to uncover a large number of previously unknown transcriptionally active regions of the genome. Recent genome-wide studies have provided annotations of a large repertoire of various classes of noncoding transcripts. Long noncoding RNAs (lncRNAs) form a major proportion of these novel annotated noncoding transcripts, and are presently known to be involved in a number of functionally distinct biological processes. Over 18 000 transcripts are presently annotated as lncRNAs, and encompass previously annotated classes of noncoding transcripts including large intergenic noncoding RNAs, antisense RNAs and processed pseudogenes. There is a significant gap in the resources providing stable annotation, cross-referencing and biologically relevant information. lncRNome has been envisioned with the aim of filling this gap by integrating annotations on a wide variety of biologically significant information into a comprehensive knowledgebase. To the best of our knowledge, lncRNome is one of the largest and most comprehensive resources for lncRNAs. Database URL: http://genome.igib.res.in/lncRNome PMID:23846593

  20. MHCWeb: converting a WWW database into a knowledge-based collaborative environment.

    PubMed Central

    Hon, L.; Abernethy, N. F.; Brusic, V.; Chai, J.; Altman, R. B.

    1998-01-01

    The World Wide Web (WWW) is useful for distributing scientific data. Most existing web data resources organize their information either in structured flat files or relational databases with basic retrieval capabilities. For databases with one or a few simple relations, these approaches are successful, but they can be cumbersome when there is a data model involving multiple relations between complex data. We believe that knowledge-based resources offer a solution in these cases. Knowledge bases have explicit declarations of the concepts in the domain, along with the relations between them. They are usually organized hierarchically, and provide a global data model with a controlled vocabulary. We have created the OWEB architecture for building online scientific data resources using knowledge bases. OWEB provides a shell for structuring data, providing secure and shared access, and creating computational modules for processing and displaying data. In this paper, we describe the translation of the online immunological database MHCPEP into an OWEB system called MHCWeb. This effort involved building a conceptual model for the data, creating a controlled terminology for the legal values for different types of data, and then translating the original data into the new structure. The OWEB environment allows for flexible access to the data by both users and computer programs. PMID:9929358

  1. TSGene 2.0: an updated literature-based knowledgebase for tumor suppressor genes.

    PubMed

    Zhao, Min; Kim, Pora; Mitra, Ramkrishna; Zhao, Junfei; Zhao, Zhongming

    2016-01-01

    Tumor suppressor genes (TSGs) are a major type of gatekeeper genes in cell growth. A knowledgebase with a systematic collection and curation of TSGs in multiple cancer types is critically important for further studying their biological functions as well as for developing therapeutic strategies. Since its development in 2012, the Tumor Suppressor Gene database (TSGene) has become a popular resource in the cancer research community. Here, we report TSGene version 2.0, which has substantial updates of content (e.g. up-to-date literature and pan-cancer genomic data collection and curation), data types (noncoding RNAs and protein-coding genes) and content accessibility. Specifically, the current TSGene 2.0 contains 1217 human TSGs (1018 protein-coding and 199 non-coding genes) curated from over 9000 articles. Additionally, TSGene 2.0 provides thousands of expression and mutation patterns derived from pan-cancer data of The Cancer Genome Atlas. A new web interface is available at http://bioinfo.mc.vanderbilt.edu/TSGene/. Systematic analyses of the 199 non-coding TSGs provide numerous cancer-specific non-coding mutational events for further screening and clinical use. Intriguingly, we identified 49 protein-coding TSGs that were consistently down-regulated in 11 cancer types. In summary, TSGene 2.0, which is the only available database for TSGs, provides the most up-to-date TSGs and their features in pan-cancer.

  2. Human Resource Development for Knowledge-based Society and Challenges of Nagoya University

    NASA Astrophysics Data System (ADS)

    Miyata, Takashi

    Innovation in the previous century produced useful products ranging from automobiles and aircraft to cellular phones. However, innovation and the development of science and technology have also changed society and brought about negative issues, and the issues that emerged in the previous century persist in aggravated forms in the 21st century. The 21st century is seeing the rise of the knowledge-based society, and a paradigm shift is now under way. Universities' human resources for the creation of innovation are being called on to contribute to solving these issues: young people who complete a doctoral program must be able to play a role as innovators who can promote the paradigm shift. However, the higher education system of Japanese universities must now change to resolve the mismatch between doctoral programs and industry, government and students. This paper introduces the discussion in the Business-University Forum of Japan on innovation of the education system, together with a few challenges being undertaken at Nagoya University.

  3. Knowledge-based analysis of microarrays for the discovery of transcriptional regulation relationships

    PubMed Central

    2010-01-01

    Background The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. Results In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. Conclusion High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data. PMID:20122245

  4. XROUTE: A knowledge-based routing system using neural networks and genetic algorithms

    SciTech Connect

    Kadaba, N.

    1990-01-01

    This dissertation is concerned with applying alternative methods of artificial intelligence (AI) in conjunction with mathematical methods to Vehicle Routing Problems. The combination of good mathematical models, knowledge-based systems, artificial neural networks, and adaptive genetic algorithms (GA) - which are shown to be synergistic - produces near-optimal results, which none of the individual methods can produce on its own. A significant problem associated with application of the Back Propagation learning paradigm for pattern classification with neural networks is the lack of high accuracy in generalization when the domain is large. In this work, a multiple neural network system is employed, using two self-organizing neural networks that work as feature extractors, producing information that is used to train a generalization neural network. The technique was successfully applied to the selection of control rules for a Traveling Salesman Problem heuristic, thus making it adaptive to the input problem instance. XROUTE provides an interactive visualization system, using state-of-the-art vehicle routing models and AI tools, yet allows an interactive environment for human expertise to be utilized in powerful ways. XROUTE provides an experimental, exploratory framework that allows many variations, and alternatives to problems with different characteristics. XROUTE is dynamic, expandable, and adaptive, and typically outperforms alternative methods in computer-aided vehicle routing.

  5. SIGMA: A Knowledge-Based Simulation Tool Applied to Ecosystem Modeling

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Keller, Richard; Lawless, James G. (Technical Monitor)

    1994-01-01

    The need for better technology to facilitate building, sharing and reusing models is generally recognized within the ecosystem modeling community. The Scientists' Intelligent Graphical Modelling Assistant (SIGMA) creates an environment for model building, sharing and reuse which provides an alternative to more conventional approaches, which too often yield poorly documented, awkwardly structured model code. The SIGMA interface presents the user a list of model quantities which can be selected for computation. Equations to calculate the model quantities may be chosen from an existing library of ecosystem modeling equations, or built using a specialized equation editor. Inputs for the equations may be supplied by data or by calculation from other equations. Each variable and equation is expressed using ecological terminology and scientific units, and is documented with explanatory descriptions and optional literature citations. Automatic scientific unit conversion is supported and only physically-consistent equations are accepted by the system. The system uses knowledge-based semantic conditions to decide which equations in its library make sense to apply in a given situation, and supplies these to the user for selection. The equations and variables are graphically represented as a flow diagram which provides a complete summary of the model. Forest-BGC, a stand-level model that simulates photosynthesis and evapo-transpiration for conifer canopies, was originally implemented in Fortran and subsequently re-implemented using SIGMA. The SIGMA version reproduces daily results and also provides a knowledge base which greatly facilitates inspection, modification and extension of Forest-BGC.
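
    The "only physically-consistent equations are accepted" rule suggests a simple dimensional check: represent each quantity's dimension as a vector of exponents and require both sides of an equation to match. This is a minimal sketch of the general technique, not SIGMA's implementation; the quantities and example equation are illustrative.

```python
# Minimal dimensional-consistency check. Dimensions are vectors of
# exponents over (length, mass, time); an equation is accepted only if
# both sides carry the same dimension vector.

METRE    = (1, 0, 0)
KILOGRAM = (0, 1, 0)
SECOND   = (0, 0, 1)

def mul(a, b):  # dimensions multiply by adding exponents
    return tuple(x + y for x, y in zip(a, b))

def div(a, b):  # dimensions divide by subtracting exponents
    return tuple(x - y for x, y in zip(a, b))

def consistent(lhs, rhs):
    """An equation is accepted only if both sides share one dimension."""
    return lhs == rhs

# velocity = distance / time has dimension L T^-1
velocity = div(METRE, SECOND)

print(consistent(velocity, div(METRE, SECOND)))     # True: v = d / t
print(consistent(velocity, mul(KILOGRAM, SECOND)))  # False: rejected
```

    Unit conversion within a dimension (e.g. metres to feet) is then just a scale factor; the exponent vector is what guards physical consistency.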

  6. Knowledge-based deformable surface model with application to segmentation of brain structures in MRI

    NASA Astrophysics Data System (ADS)

    Ghanei, Amir; Soltanian-Zadeh, Hamid; Elisevich, Kost; Fessler, Jeffrey A.

    2001-07-01

    We have developed a knowledge-based deformable surface for the segmentation of medical images. This work has been done in the context of segmentation of the hippocampus from brain MRI, due to its challenge and clinical importance. The model has a polyhedral discrete structure and is initialized automatically by analyzing brain MRI slice by slice and finding a few landmark features in each slice using an expert system. The expert system decides on the presence of the hippocampus and its general location in each slice. The landmarks found are connected by a triangulation method to generate a closed initial surface. The surface then deforms under defined internal and external force terms to generate an accurate and reproducible boundary for the hippocampus. The anterior and posterior (AP) limits of the hippocampus are estimated by automatic analysis of the location of the brain stem and some of the features extracted in the initialization process. These data are combined with a priori knowledge using Bayes' method to estimate a probability density function (pdf) for the length of the structure in the sagittal direction. The hippocampus AP limits are found by optimizing this pdf. The model is tested on real clinical data and the results show very good model performance.

  7. Structure of a protein (H2AX): a comparative study with knowledge-based interactions

    NASA Astrophysics Data System (ADS)

    Fritsche, Miriam; Heermann, Dieter; Farmer, Barry; Pandey, Ras

    2013-03-01

    The structural and conformational properties of the histone protein H2AX (with 143 residues) are studied with a coarse-grained model as a function of temperature (T). Three knowledge-based phenomenological interactions (MJ, BT, and BFKV) are used as input to a generalized Lennard-Jones potential for residue-residue interactions. Large-scale Monte Carlo simulations are performed to identify similarities and differences in the equilibrium structures obtained with these potentials. Multi-scale structures of the protein are examined by a detailed analysis of their structure functions. We find that the radius of gyration (Rg) of H2AX depends non-monotonically on temperature, with a maximum at a characteristic value Tc, a feature common to all three interactions. The characteristic temperature, the range of the non-monotonic thermal response and the decay pattern are, however, sensitive to the choice of interaction. A comparison of the structural properties emerging from the three potentials will be presented in this talk. This work is supported by the Air Force Research Laboratory.
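
    The ingredients of such a coarse-grained model can be illustrated with a short sketch: a 12-6 Lennard-Jones pair energy whose depth is drawn from a knowledge-based residue-residue matrix, plus the radius of gyration used to track the thermal response. The matrix entries and coordinates below are made up for illustration; they are not MJ, BT or BFKV values.

```python
import math

# Generalized Lennard-Jones pair energy whose depth eps[(i, j)] comes
# from a (here, invented) knowledge-based residue-residue contact matrix.

eps = {("ALA", "ALA"): 0.2, ("ALA", "LEU"): 0.5, ("LEU", "LEU"): 0.8}

def lj(r, e, sigma=1.0):
    """12-6 Lennard-Jones energy for two residues at distance r."""
    return 4.0 * e * ((sigma / r) ** 12 - (sigma / r) ** 6)

def pair_energy(res_i, res_j, r):
    e = eps.get((res_i, res_j), eps.get((res_j, res_i)))
    return lj(r, e)

def radius_of_gyration(coords):
    """Root-mean-square distance of residue positions from the centroid."""
    n = len(coords)
    c = [sum(p[k] for p in coords) / n for k in range(3)]
    return math.sqrt(sum(sum((p[k] - c[k]) ** 2 for k in range(3))
                         for p in coords) / n)

chain = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 1.0, 0.0)]
print(round(radius_of_gyration(chain), 3))    # 0.866 for this toy chain
print(round(pair_energy("ALA", "LEU", 1.2), 3))
```

    In the actual study, a Monte Carlo sampler would propose residue moves, accept or reject them by the Metropolis criterion on this energy, and record Rg as a function of T.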

  8. Knowledge-based model of hydrogen-bonding propensity in organic crystals.

    PubMed

    Galek, Peter T A; Fábián, László; Motherwell, W D Samuel; Allen, Frank H; Feeder, Neil

    2007-10-01

    A new method is presented to predict which donors and acceptors form hydrogen bonds in a crystal structure, based on the statistical analysis of hydrogen bonds in the Cambridge Structural Database (CSD). The method is named the logit hydrogen-bonding propensity (LHP) model. The approach has a potential application in identifying both likely and unusual hydrogen bonding, which can help to rationalize stable and metastable crystalline forms, of relevance to drug development in the pharmaceutical industry. Whilst polymorph prediction techniques are widely used, the LHP model is knowledge-based and is not restricted by the computational issues of polymorph prediction, and as such may form a valuable precursor to polymorph screening. Model construction applies logistic regression, using training data obtained with a new survey method based on the CSD system. The survey categorizes the hydrogen bonds and extracts model parameter values using descriptive structural and chemical properties from three-dimensional organic crystal structures. LHP predictions from a fitted model are made using two-dimensional observables alone. In the initial cases analysed, the model is highly accurate, achieving approximately 90% correct classification of both observed hydrogen bonds and non-interacting donor-acceptor pairs. Extensive statistical validation shows the LHP model to be robust across a range of small-molecule organic crystal structures. PMID:17873446
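
    A logit propensity model of the kind described can be sketched in a few lines: the hydrogen-bonding probability of a donor-acceptor pair is the logistic function of a linear combination of two-dimensional descriptors, thresholded for classification. The coefficients and descriptors below are invented placeholders; the real LHP model is fitted to CSD survey data.

```python
import math

# Toy logit hydrogen-bonding propensity: P(bond) = sigmoid(b0 + sum b_i x_i)
# over 2-D observable descriptors. Coefficients are illustrative only.

coef = {"intercept": -2.0, "donor_polarity": 1.5, "acceptor_lone_pairs": 0.8}

def propensity(x):
    z = coef["intercept"] + sum(coef[k] * v for k, v in x.items())
    return 1.0 / (1.0 + math.exp(-z))

def predicted_bonded(x, threshold=0.5):
    """Classify a donor-acceptor pair as hydrogen-bonded or not."""
    return propensity(x) >= threshold

strong = {"donor_polarity": 2.0, "acceptor_lone_pairs": 1.0}
weak   = {"donor_polarity": 0.2, "acceptor_lone_pairs": 0.5}
print(predicted_bonded(strong))  # True:  high propensity pair
print(predicted_bonded(weak))    # False: non-interacting pair
```

    The appeal of the approach is visible even in the sketch: predictions need only two-dimensional observables, so no crystal structure or polymorph-prediction computation is required.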

  9. Socio-cultural and Knowledge-Based Barriers to Tuberculosis Diagnosis for Women in Bhopal, India

    PubMed Central

    McArthur, Evonne; Bali, Surya; Khan, Azim A.

    2016-01-01

    Background: In India, only one woman is diagnosed with tuberculosis (TB) for every 2.4 men. Previous studies have indicated gender disparities in care-seeking behavior and TB diagnosis; however, little is known about the specific barriers women face. Objectives: This study aimed to characterize socio-cultural and knowledge-based barriers that affected TB diagnosis for women in Bhopal, India. Materials and Methods: In-depth interviews were conducted with 13 affected women and 6 health-care workers. The Bhopal Diagnostic Microscopy Laboratory Register (n = 121) and the Bhopal district report (n = 261) were examined for diagnostic and care-seeking trends. Results: Women, especially younger women, faced socio-cultural barriers and stigma, causing many to hide their symptoms. Older women had little awareness about TB. Women often sought treatment from private practitioners, resulting in delayed diagnosis. Conclusions: Understanding these diagnostic and help-seeking behaviors barriers for women is critical for development of a gender-sensitive TB control program. PMID:26917876

  10. IPSE: A knowledge-based system for fluidization studies. 1990 Annual report

    SciTech Connect

    Reddy, S.

    1991-01-01

    Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparing an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine whether all "specified goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator for the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
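
    The prepare-execute-analyze-revise cycle that such a system automates can be sketched as a simple loop. The simulator call and the revision rule below are toy stand-ins, not ASPEN or IPSE's rule base, chosen only so the loop is runnable.

```python
# Sketch of the iterative simulation loop: run, check goals, revise
# inputs, repeat. run_simulation stands in for writing the input file
# and invoking an external simulator.

def run_simulation(params):
    # Toy model: yield grows with temperature (placeholder physics).
    return {"yield": 0.5 + 0.1 * params["temperature"] / 100.0}

def goals_met(result):
    return result["yield"] >= 0.8

def revise(params, result):
    # A knowledge-based system would pick which inputs to adjust and by
    # how much; here we simply raise the temperature by a fixed step.
    return {**params, "temperature": params["temperature"] + 50}

params = {"temperature": 100}
for iteration in range(20):
    result = run_simulation(params)
    if goals_met(result):
        break
    params = revise(params, result)
print(iteration, params["temperature"], round(result["yield"], 2))
```

    The value of automating the loop is exactly what the abstract describes: the engineer specifies the goals once, and the assistant handles the repeated edit-run-analyze drudgery.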

  11. Capturing district nursing through a knowledge-based electronic caseload analysis tool (eCAT).

    PubMed

    Kane, Kay

    2014-03-01

    The Electronic Caseload Analysis Tool (eCAT) is a knowledge-based software tool to assist the caseload analysis process. The tool provides a wide range of graphical reports, along with an integrated clinical advisor, to assist district nurses, team leaders, operational and strategic managers with caseload analysis by describing, comparing and benchmarking district nursing practice in the context of population need, staff resources, and service structure. District nurses and clinical lead nurses in Northern Ireland developed the tool, along with academic colleagues from the University of Ulster, working in partnership with a leading software company. The aim was to use the eCAT tool to identify the nursing need of local populations, along with the variances in district nursing practice, and match the workforce accordingly. This article reviews the literature, describes the eCAT solution and discusses the impact of eCAT on nursing practice, staff allocation, service delivery and workforce planning, using fictitious exemplars and a post-implementation evaluation from the trusts.

  12. ESPRE: a knowledge-based system to support platelet transfusion decisions.

    PubMed

    Sielaff, B H; Connelly, D P; Scott, E P

    1989-05-01

    ESPRE is a knowledge-based system which aids in the review of requests for platelet transfusions in the hospital blood bank. It is a microcomputer-based decision support system written in LISP and utilizes a hybrid frame and rule architecture. By automatically obtaining most of the required patient data directly from the hospital's main laboratory computers via a direct link, very little keyboard entry is required. Assessment of time trends computed from the data constitutes an important aspect of this system. To aid the blood bank personnel in deciding on the appropriateness of the requested transfusion, the system provides an explanatory report which includes a list of patient-specific data, a list of the conditions for which a transfusion would be appropriate for the particular patient (given the clinical condition), and the conclusions drawn by the system. In an early clinical evaluation of ESPRE, out of a random sample of 75 platelet transfusion requests, there were only three disagreements between ESPRE and blood bank personnel. PMID:2656504
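
    The review logic described above can be sketched as a small rule base over patient data, including a simple time-trend check, whose satisfied rules become the explanatory report. The thresholds and rules are illustrative only and are not ESPRE's clinical criteria.

```python
# Toy rule-based transfusion review: each rule inspects patient data
# (including a simple time trend on platelet counts) and the report
# lists every satisfied indication. Illustrative thresholds only.

def falling_trend(counts):
    """True if each successive count is strictly lower than the last."""
    return all(a > b for a, b in zip(counts, counts[1:]))

RULES = [
    ("severe thrombocytopenia", lambda p: p["platelet_count"] < 20),
    ("falling count before surgery",
     lambda p: p["surgery_planned"] and falling_trend(p["count_history"])),
]

def review(patient):
    indications = [name for name, rule in RULES if rule(patient)]
    return {"indications": indications, "appropriate": bool(indications)}

patient = {"platelet_count": 15, "surgery_planned": True,
           "count_history": [60, 40, 15]}
report = review(patient)
print(report["appropriate"], report["indications"])
```

    Listing the fired rules, rather than only the verdict, mirrors ESPRE's explanatory report: blood bank staff can see which patient-specific conditions justified the conclusion.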

  13. The Protein Structure Initiative Structural Biology Knowledgebase Technology Portal: a structural biology web resource.

    PubMed

    Gifford, Lida K; Carter, Lester G; Gabanyi, Margaret J; Berman, Helen M; Adams, Paul D

    2012-06-01

    The Technology Portal of the Protein Structure Initiative Structural Biology Knowledgebase (PSI SBKB; http://technology.sbkb.org/portal/ ) is a web resource providing information about methods and tools that can be used to relieve bottlenecks in many areas of protein production and structural biology research. Several useful features are available on the web site, including multiple ways to search the database of over 250 technological advances, a link to videos of methods on YouTube, and access to a technology forum where scientists can connect, ask questions, get news, and develop collaborations. The Technology Portal is a component of the PSI SBKB ( http://sbkb.org ), which presents integrated genomic, structural, and functional information for all protein sequence targets selected by the Protein Structure Initiative. Created in collaboration with the Nature Publishing Group, the SBKB offers an array of resources for structural biologists, such as a research library, editorials about new research advances, a featured biological system each month, and a functional sleuth for searching protein structures of unknown function. An overview of the various features and examples of user searches highlight the information, tools, and avenues for scientific interaction available through the Technology Portal.

  14. Structural semantic interconnections: a knowledge-based approach to word sense disambiguation.

    PubMed

    Navigli, Roberto; Velardi, Paola

    2005-07-01

    Word Sense Disambiguation (WSD) is traditionally considered an AI-hard problem. A breakthrough in this field would have a significant impact on many relevant Web-based applications, such as Web information retrieval, improved access to Web services, information extraction, etc. Early approaches to WSD, based on knowledge representation techniques, have been replaced in the past few years by more robust machine learning and statistical techniques. The results of recent comparative evaluations of WSD systems, however, show that these methods have inherent limitations. On the other hand, the increasing availability of large-scale, rich lexical knowledge resources seems to provide new challenges to knowledge-based approaches. In this paper, we present a method, called structural semantic interconnections (SSI), which creates structural specifications of the possible senses for each word in a context and selects the best hypothesis according to a grammar G, describing relations between sense specifications. Sense specifications are created from several available lexical resources that we integrated in part manually, in part with the help of automatic procedures. The SSI algorithm has been applied to different semantic disambiguation problems, such as automatic ontology population, disambiguation of sentences in generic texts, and disambiguation of words in glossary definitions. Evaluation experiments have been performed on specific knowledge domains (e.g., tourism, computer networks, enterprise interoperability), as well as on standard disambiguation test sets.

  15. A knowledge-based decision support system in bioinformatics: an application to protein complex extraction

    PubMed Central

    2013-01-01

    Background We introduce a Knowledge-based Decision Support System (KDSS) to address the problem of protein complex extraction. Using a Knowledge Base (KB) coding the expertise about the proposed scenario, our KDSS is able to suggest both strategies and tools, according to the features of the input dataset. Our system provides a navigable workflow for the current experiment and furthermore offers support in the configuration and running of every processing component of that workflow. This last feature makes our system a crossover between classical DSSs and Workflow Management Systems. Results We briefly present the KDSS' architecture and the basic concepts used in the design of the knowledge base and the reasoning component. The system is then tested using a subset of the Saccharomyces cerevisiae protein-protein interaction dataset. We used this subset because it has been well studied in the literature by several research groups in the field of complex extraction: in this way we could easily compare the results obtained through our KDSS with theirs. Our system suggests both a preprocessing and a clustering strategy, and for each of them it proposes and eventually runs suitable algorithms. Our system's final results are then composed of a workflow of tasks, which can be reused for other experiments, and the specific numerical results for that particular trial. Conclusions The proposed approach, using the KDSS' knowledge base, provides a novel workflow that gives the best results with regard to the other workflows produced by the system. This workflow and its numeric results have been compared with other approaches to PPI network analysis found in the literature, yielding comparable results. PMID:23368995

  16. MetRxn: a knowledgebase of metabolites and reactions spanning metabolic models and databases

    PubMed Central

    2012-01-01

    Background Increasingly, metabolite and reaction information is organized in the form of genome-scale metabolic reconstructions that describe the reaction stoichiometry, directionality, and gene to protein to reaction associations. A key bottleneck in the pace of reconstruction of new, high-quality metabolic models is the inability to directly make use of metabolite/reaction information from biological databases or other models due to incompatibilities in content representation (i.e., metabolites with multiple names across databases and models), stoichiometric errors such as elemental or charge imbalances, and incomplete atomistic detail (e.g., use of generic R-group or non-explicit specification of stereo-specificity). Description MetRxn is a knowledgebase that includes standardized metabolite and reaction descriptions by integrating information from BRENDA, KEGG, MetaCyc, Reactome.org and 44 metabolic models into a single unified data set. All metabolite entries have matched synonyms, resolved protonation states, and are linked to unique structures. All reaction entries are elementally and charge balanced. This is accomplished through the use of a workflow of lexicographic, phonetic, and structural comparison algorithms. MetRxn allows for the download of standardized versions of existing genome-scale metabolic models and the use of metabolic information for the rapid reconstruction of new ones. Conclusions The standardization in description allows for the direct comparison of the metabolite and reaction content between metabolic models and databases and the exhaustive prospecting of pathways for biotechnological production. This ever-growing dataset currently consists of over 76,000 metabolites participating in more than 72,000 reactions (including unresolved entries). MetRxn is hosted on a web-based platform that uses relational database models (MySQL). PMID:22233419
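
The lexicographic stage of such a synonym-matching workflow can be sketched in a few lines. This is a toy illustration under stated assumptions, not MetRxn's pipeline: the normalization rules, the edit-distance threshold of 1, and the `same_metabolite` helper are invented for demonstration, and the real system combines this with phonetic and structure-based comparisons.

```python
import re

def normalize(name):
    """Lowercase a metabolite synonym and drop whitespace and punctuation.
    (Illustrative rule set; a real pipeline needs far more care.)"""
    return re.sub(r"[\s\-_,'()\[\]]", "", name.lower())

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def same_metabolite(name_a, name_b, max_dist=1):
    """Treat two synonyms as the same species if their normalized
    forms are within max_dist edits of each other."""
    return edit_distance(normalize(name_a), normalize(name_b)) <= max_dist
```

For example, `same_metabolite("D-Glucose", "d glucose")` matches because both normalize to the same string, while `"glucose"` and `"fructose"` remain distinct.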

  17. Ab Initio Protein Structure Assembly Using Continuous Structure Fragments and Optimized Knowledge-based Force Field

    PubMed Central

    Xu, Dong; Zhang, Yang

    2012-01-01

    Ab initio protein folding is one of the major unsolved problems in computational biology due to the difficulties in force field design and conformational search. We developed a novel program, QUARK, for template-free protein structure prediction. Query sequences are first broken into fragments of 1–20 residues, and multiple fragment structures are retrieved at each position from unrelated experimental structures. Full-length structure models are then assembled from fragments using replica-exchange Monte Carlo simulations, which are guided by a composite knowledge-based force field. A number of novel energy terms and Monte Carlo movements are introduced, and their particular contributions to enhancing the efficiency of both the force field and the search engine are analyzed in detail. The QUARK prediction procedure is depicted and tested on the structure modeling of 145 non-homologous proteins. Although no global templates are used and all fragments from experimental structures with template modeling score (TM-score) >0.5 are excluded, QUARK can successfully construct 3D models of correct folds in one-third of cases for short proteins of up to 100 residues. In the ninth community-wide Critical Assessment of protein Structure Prediction (CASP9) experiment, the QUARK server outperformed the second and third best servers by 18% and 47% based on the cumulative Z-score of global distance test-total (GDT-TS) scores in the free modeling (FM) category. Although ab initio protein folding remains a significant challenge, these data demonstrate new progress towards the solution of the most important problem in the field. PMID:22411565
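
The accept/reject core of such a fragment-assembly search can be sketched as a single-temperature Metropolis loop. This is a schematic under stated assumptions, not QUARK itself: the `energy` callable, the per-position fragment library, and the move set are placeholders, and QUARK couples many such chains in a replica-exchange scheme guided by its composite force field.

```python
import math
import random

def metropolis_assemble(energy, init_model, fragment_lib, temp, steps, rng=random):
    """Assemble a model by repeatedly swapping one fragment and accepting
    the move with the Metropolis criterion exp(-dE / temp).

    energy(model) -> float; fragment_lib[pos] -> candidate fragments for pos.
    """
    model = list(init_model)
    e = energy(model)
    for _ in range(steps):
        pos = rng.randrange(len(model))            # pick a position to mutate
        trial = list(model)
        trial[pos] = rng.choice(fragment_lib[pos])
        e_trial = energy(trial)
        # Always accept downhill moves; accept uphill moves stochastically.
        if e_trial <= e or rng.random() < math.exp((e - e_trial) / temp):
            model, e = trial, e_trial
    return model, e
```

At low temperature the loop behaves like a stochastic descent; replica exchange adds high-temperature copies that escape local minima and periodically swap states with the cold chains.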

  18. MetazSecKB: the human and animal secretome and subcellular proteome knowledgebase

    PubMed Central

    Meinken, John; Walker, Gary; Cooper, Chester R.; Min, Xiang Jia

    2015-01-01

    The subcellular location of a protein is a key factor in determining its molecular function in an organism. MetazSecKB is a secretome and subcellular proteome knowledgebase specifically designed for metazoa, i.e. humans and animals. The protein sequence data, consisting of over 4 million entries with 121 species having a complete proteome, were retrieved from UniProtKB. Protein subcellular locations, including secreted and 15 other subcellular locations, were assigned based on either curated experimental evidence or prediction using seven computational tools. The protein or subcellular proteome data can be searched and downloaded using several different types of identifiers, gene name or keyword(s), and species. BLAST search and community annotation of subcellular locations are also supported. Our primary analysis revealed that the proteome sizes, secretome sizes and other subcellular proteome sizes vary tremendously among animal species. The proportions of secretomes vary from 3 to 22% (average 8%) in metazoan species. The proportions of other major subcellular proteomes range approximately 21–43% (average 31%) in cytoplasm, 20–37% (average 30%) in nucleus, 3–19% (average 12%) as plasma membrane proteins and 3–9% (average 6%) in mitochondria. We also compared the protein families in secretomes of different primates. The Gene Ontology and protein family domain analysis of human secreted proteins revealed that these proteins play important roles in the regulation of human structure development, signal transduction, immune systems and many other biological processes. Database URL: http://proteomics.ysu.edu/secretomes/animal/index.php PMID:26255309

  19. Chemogenomics knowledgebased polypharmacology analyses of drug abuse related G-protein coupled receptors and their ligands.

    PubMed

    Xie, Xiang-Qun; Wang, Lirong; Liu, Haibin; Ouyang, Qin; Fang, Cheng; Su, Weiwei

    2014-01-01

    Drug abuse (DA) and addiction is a complex illness, broadly viewed as a neurobiological impairment with genetic and environmental factors that influence its development and manifestation. Abused substances can disrupt the activity of neurons by interacting with many proteins, particularly G-protein coupled receptors (GPCRs). A few medicines that target the central nervous system (CNS) can also modulate DA-related proteins, such as GPCRs, which can act in conjunction with the controlled psychoactive substance(s) and increase side effects. To fully explore the molecular interaction networks that underlie DA and to effectively modulate the GPCRs in these networks with small molecules for DA treatment, we built a drug-abuse domain-specific chemogenomics knowledgebase (DA-KB) to centralize the reported chemogenomics research information related to DA and CNS disorders in an effort to benefit researchers across a broad range of disciplines. We then focus on the analysis of GPCRs, as many of them are closely related to DA. Their distribution in human tissues was also analyzed for the study of side effects caused by abused drugs. We further implement our computational algorithms/tools to explore DA targets, DA mechanisms and pathways involved in polydrug addiction and to explore polypharmacological effects of the GPCR ligands. Finally, the polypharmacology effects of GPCR-targeted medicines for DA treatment were investigated, and such effects can be exploited for the development of drugs with polypharmacophore for DA intervention. The chemogenomics database and the analysis tools will help us better understand the mechanisms of drug abuse and facilitate the design of new medications for system pharmacotherapy of DA. PMID:24567719

  20. Chemogenomics knowledgebased polypharmacology analyses of drug abuse related G-protein coupled receptors and their ligands

    PubMed Central

    Xie, Xiang-Qun; Wang, Lirong; Liu, Haibin; Ouyang, Qin; Fang, Cheng; Su, Weiwei

    2013-01-01

    Drug abuse (DA) and addiction is a complex illness, broadly viewed as a neurobiological impairment with genetic and environmental factors that influence its development and manifestation. Abused substances can disrupt the activity of neurons by interacting with many proteins, particularly G-protein coupled receptors (GPCRs). A few medicines that target the central nervous system (CNS) can also modulate DA-related proteins, such as GPCRs, which can act in conjunction with the controlled psychoactive substance(s) and increase side effects. To fully explore the molecular interaction networks that underlie DA and to effectively modulate the GPCRs in these networks with small molecules for DA treatment, we built a drug-abuse domain-specific chemogenomics knowledgebase (DA-KB) to centralize the reported chemogenomics research information related to DA and CNS disorders in an effort to benefit researchers across a broad range of disciplines. We then focus on the analysis of GPCRs, as many of them are closely related to DA. Their distribution in human tissues was also analyzed for the study of side effects caused by abused drugs. We further implement our computational algorithms/tools to explore DA targets, DA mechanisms and pathways involved in polydrug addiction and to explore polypharmacological effects of the GPCR ligands. Finally, the polypharmacology effects of GPCR-targeted medicines for DA treatment were investigated, and such effects can be exploited for the development of drugs with polypharmacophore for DA intervention. The chemogenomics database and the analysis tools will help us better understand the mechanisms of drug abuse and facilitate the design of new medications for system pharmacotherapy of DA. PMID:24567719

  1. Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles

    PubMed Central

    2016-01-01

    Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource-intensive method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
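
The O(N^2) recursive (greedy) variant can be sketched directly: starting from an empty ensemble, repeatedly add whichever conformation most improves the objective. The receptor and ligand names, the min-score fusion rule, and the rank-sum AUC implementation below are illustrative assumptions rather than details from the paper.

```python
def auc(active_scores, decoy_scores):
    """ROC AUC via the rank-sum statistic; lower score = better (docking)."""
    wins = 0.0
    for a in active_scores:
        for d in decoy_scores:
            if a < d:
                wins += 1.0
            elif a == d:
                wins += 0.5
    return wins / (len(active_scores) * len(decoy_scores))

def ensemble_scores(receptor_scores, ensemble, ligands):
    """Fuse by taking each ligand's best (lowest) score over the ensemble."""
    return [min(receptor_scores[r][lig] for r in ensemble) for lig in ligands]

def greedy_select(receptor_scores, actives, decoys, max_size):
    """Greedily grow the ensemble, adding the conformation that most
    increases AUC; stop when nothing improves or max_size is reached."""
    ensemble, best_auc = [], 0.0
    candidates = list(receptor_scores)
    while candidates and len(ensemble) < max_size:
        best_r, best_val = None, best_auc
        for r in candidates:
            trial = ensemble + [r]
            val = auc(ensemble_scores(receptor_scores, trial, actives),
                      ensemble_scores(receptor_scores, trial, decoys))
            if val > best_val:
                best_r, best_val = r, val
        if best_r is None:
            break                       # no candidate improves the objective
        ensemble.append(best_r)
        candidates.remove(best_r)
        best_auc = best_val
    return ensemble, best_auc
```

Each pass over the N candidates costs one AUC evaluation per candidate, and at most N passes are made, giving the O(N^2) scaling noted above; exhaustively scoring every subset would instead cost O(2^N).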

  2. A knowledge-based approach to estimating the magnitude and spatial patterns of potential threats to soil biodiversity.

    PubMed

    Orgiazzi, Alberto; Panagos, Panos; Yigini, Yusuf; Dunbar, Martha B; Gardi, Ciro; Montanarella, Luca; Ballabio, Cristiano

    2016-03-01

    Because of the increasing pressures exerted on soil, below-ground life is under threat. Knowledge-based rankings of potential threats to different components of soil biodiversity were developed in order to assess the spatial distribution of threats on a European scale. A list of 13 potential threats to soil biodiversity was proposed to experts with different backgrounds in order to assess the potential for three major components of soil biodiversity: soil microorganisms, fauna, and biological functions. This approach allowed us to obtain knowledge-based rankings of threats. These classifications formed the basis for the development of indices through an additive aggregation model that, along with ad-hoc proxies for each pressure, allowed us to preliminarily assess the spatial patterns of potential threats. Intensive exploitation was identified as the highest pressure. In contrast, the use of genetically modified organisms in agriculture was considered as the threat with least potential. The potential impact of climate change showed the highest uncertainty. Fourteen out of the 27 considered countries have more than 40% of their soils with moderate-high to high potential risk for all three components of soil biodiversity. Arable soils are the most exposed to pressures. Soils within the boreal biogeographic region showed the lowest risk potential. The majority of soils at risk are outside the boundaries of protected areas. First maps of risks to three components of soil biodiversity based on the current scientific knowledge were developed. Despite the intrinsic limits of knowledge-based assessments, a remarkable potential risk to soil biodiversity was observed. Guidelines to preliminarily identify and circumscribe soils potentially at risk are provided. This approach may be used in future research to assess threat at both local and global scale and identify areas of possible risk and, subsequently, design appropriate strategies for monitoring and protection of soil

  3. Database decomposition of a knowledge-based CAD system in mammography: an ensemble approach to improve detection

    NASA Astrophysics Data System (ADS)

    Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.

    2008-03-01

    Although ensemble techniques have been investigated in supervised machine learning, their potential with knowledge-based systems is unexplored. The purpose of this study is to investigate the ensemble approach with a knowledge-based (KB) CAD system for the detection of masses in screening mammograms. The system is designed to determine the presence of a mass in a query mammographic region of interest (ROI) based on its similarity with previously acquired examples of mass and normal cases. Similarity between images is assessed using normalized mutual information. Two different approaches of knowledge database decomposition were investigated to create the ensemble. The first approach was random division of the knowledge database into a pre-specified number of equal-size, separate groups. The second approach was based on k-means clustering of the knowledge cases according to common texture features extracted from the ROIs. The ensemble components were fused using a linear classifier. Based on a database of 1820 ROIs (901 masses and 919 normal) and the leave-one-out cross-validation scheme, the ensemble techniques improved the performance of the original KB-CAD system (Az = 0.86 ± 0.01). Specifically, random division resulted in an ROC area index of Az = 0.90 ± 0.01 while k-means clustering provided further improvement (Az = 0.91 ± 0.01). Although marginally better, the improvement was statistically significant. The superiority of the k-means clustering scheme was robust regardless of the number of clusters. This study supports the idea of incorporation of ensemble techniques with knowledge-based systems in mammography.
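
Normalized mutual information between two grey-level images can be computed from their individual and joint histograms. The sketch below is a minimal stdlib version using the common 2·I(X;Y)/(H(X)+H(Y)) normalization; the study may use a different convention, and real ROIs would first be quantized to a fixed number of grey-level bins.

```python
from collections import Counter
from math import log2

def entropy(counts, total):
    """Shannon entropy (bits) of a histogram given as raw counts."""
    return -sum((c / total) * log2(c / total) for c in counts if c)

def normalized_mutual_information(img_a, img_b):
    """NMI of two equal-length flat sequences of integer grey levels:
    2 * I(A;B) / (H(A) + H(B)), in [0, 1]."""
    assert len(img_a) == len(img_b)
    n = len(img_a)
    h_a = entropy(Counter(img_a).values(), n)
    h_b = entropy(Counter(img_b).values(), n)
    h_ab = entropy(Counter(zip(img_a, img_b)).values(), n)  # joint histogram
    mi = h_a + h_b - h_ab
    return 2.0 * mi / (h_a + h_b) if (h_a + h_b) else 1.0
```

Identical images score 1.0 and statistically independent ones score 0.0, which is what makes NMI usable as a similarity measure between a query ROI and knowledge-base cases.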

  4. A knowledge-based approach to estimating the magnitude and spatial patterns of potential threats to soil biodiversity.

    PubMed

    Orgiazzi, Alberto; Panagos, Panos; Yigini, Yusuf; Dunbar, Martha B; Gardi, Ciro; Montanarella, Luca; Ballabio, Cristiano

    2016-03-01

    Because of the increasing pressures exerted on soil, below-ground life is under threat. Knowledge-based rankings of potential threats to different components of soil biodiversity were developed in order to assess the spatial distribution of threats on a European scale. A list of 13 potential threats to soil biodiversity was proposed to experts with different backgrounds in order to assess the potential for three major components of soil biodiversity: soil microorganisms, fauna, and biological functions. This approach allowed us to obtain knowledge-based rankings of threats. These classifications formed the basis for the development of indices through an additive aggregation model that, along with ad-hoc proxies for each pressure, allowed us to preliminarily assess the spatial patterns of potential threats. Intensive exploitation was identified as the highest pressure. In contrast, the use of genetically modified organisms in agriculture was considered as the threat with least potential. The potential impact of climate change showed the highest uncertainty. Fourteen out of the 27 considered countries have more than 40% of their soils with moderate-high to high potential risk for all three components of soil biodiversity. Arable soils are the most exposed to pressures. Soils within the boreal biogeographic region showed the lowest risk potential. The majority of soils at risk are outside the boundaries of protected areas. First maps of risks to three components of soil biodiversity based on the current scientific knowledge were developed. Despite the intrinsic limits of knowledge-based assessments, a remarkable potential risk to soil biodiversity was observed. Guidelines to preliminarily identify and circumscribe soils potentially at risk are provided. This approach may be used in future research to assess threat at both local and global scale and identify areas of possible risk and, subsequently, design appropriate strategies for monitoring and protection of soil

  5. Integrating knowledge-based systems into operations at the McMaster University FN tandem accelerator laboratory

    SciTech Connect

    Poehlman, W.F.S.; Stark, J.W. (Tandem Accelerator Lab.)

    1989-10-01

    The introduction of computer-based expertise in accelerator operations has resulted in the development of an Accelerator Operators' Companion which incorporates a knowledge-based front-end that is tuned to user operational expertise. The front-end also provides connections to traditional software packages such as database and spreadsheet programs. During work on the back-end, that is, real-time expert system control development, the knowledge engineering phase has revealed the importance of modifying expert procedures when a multitasking environment is involved.

  6. Wearable real-time and adaptive feedback device to face the stuttering: a knowledge-based telehealthcare proposal.

    PubMed

    Prado, Manuel; Roa, Laura M

    2007-01-01

    Although the first written references to permanent developmental stuttering occurred more than 2500 years ago, the mechanisms underlying this disorder are still unknown. This paper briefly reviews stuttering causal hypotheses and treatments, and presents the requirements that a new stuttering therapeutic device should meet. As a result of the analysis, an adaptive altered auditory feedback device based on a multimodal intelligent monitor, within the framework of a knowledge-based telehealthcare system, is presented. The subsequent discussion, based partly on the successful outcomes of a similar intelligent monitor, suggests that this novel device is feasible and could help to fill the gap between research and clinic.

  7. Wearable real-time and adaptive feedback device to face the stuttering: a knowledge-based telehealthcare proposal.

    PubMed

    Prado, Manuel; Roa, Laura M

    2007-01-01

    Although the first written references to permanent developmental stuttering occurred more than 2500 years ago, the mechanisms underlying this disorder are still unknown. This paper briefly reviews stuttering causal hypotheses and treatments, and presents the requirements that a new stuttering therapeutic device should meet. As a result of the analysis, an adaptive altered auditory feedback device based on a multimodal intelligent monitor, within the framework of a knowledge-based telehealthcare system, is presented. The subsequent discussion, based partly on the successful outcomes of a similar intelligent monitor, suggests that this novel device is feasible and could help to fill the gap between research and clinic. PMID:17901608

  8. Ontology-Driven Knowledge-Based Health-Care System, An Emerging Area - Challenges And Opportunities - Indian Scenario

    NASA Astrophysics Data System (ADS)

    Sunitha, A.; Babu, G. Suresh

    2014-11-01

    Recent studies of decision making in public healthcare systems have been strongly inspired and influenced by the entry of ontology. Ontology-driven systems result in the effective implementation of healthcare strategies for policy makers. The central source of knowledge is the ontology containing all the relevant domain concepts, such as locations, diseases and environments, and their domain-sensitive inter-relationships, which are the prime objective, concern and motivation behind this paper. The paper further focuses on the development of a semantic knowledge-base for a public healthcare system. It describes the approach and methodologies used in bringing out a novel conceptual theme by establishing a firm linkage between three different ontologies related to diseases, places and environments in one integrated platform. This platform correlates the real-time mechanisms prevailing within the semantic knowledgebase and establishes their inter-relationships for the first time in India. It is hoped that this will form a strong foundation for a much-awaited, meaningful healthcare decision-making system in the country. Introduction through a wide range of best practices facilitates the adoption of this approach for better appreciation, understanding and long-term outcomes in the area. The methods and approach illustrated in the paper relate to health mapping methods, reusability of health applications, and interoperability issues based on mapping of the data attributes with ontology concepts in generating semantic integrated data driving an inference engine for user-interfaced semantic queries.

  9. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    SciTech Connect

    Thiele, Ines; Hyduke, Daniel R.; Steeb, Benjamin; Fankam, Guy; Allen, Douglas K.; Bazzani, Susanna; Charusanti, Pep; Chen, Feng-Chi; Fleming, Ronan MT; Hsiung, Chao A.; De Keersmaecker, Sigrid CJ; Liao, Yu-Chieh; Marchal, Kathleen; Mo, Monica L.; Özdemir, Emre; Raghunathan, Anu; Reed, Jennifer L.; Shin, Sook-Il; Sigurbjörnsdóttir, Sara; Steinmann, Jonas; Sudarsan, Suresh; Swainston, Neil; Thijs, Inge M.; Zengler, Karsten; Palsson, Bernhard O.; Adkins, Joshua N.; Bumann, Dirk

    2011-01-01

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium is a human pathogen that causes various diseases, and its increasing antibiotic resistance poses a public health problem. Here, we describe a community-driven effort, in which more than 20 experts in S. Typhimurium biology and systems biology collaborated to reconcile and expand the S. Typhimurium BiGG knowledge-base. The consensus MR was obtained starting from two independently developed MRs for S. Typhimurium. Key results of this reconstruction jamboree include i) development and implementation of a community-based workflow for MR annotation and reconciliation; ii) incorporation of thermodynamic information; and iii) use of the consensus MR to identify potential multi-target drug therapy approaches. Finally, with the growing number of parallel MRs, a structured, community-driven approach will be necessary to maximize quality while increasing the adoption of MRs in experimental design and interpretation.

  10. Development of the Knowledge-based & Empirical Combined Scoring Algorithm (KECSA) to Score Protein-Ligand Interactions

    PubMed Central

    Zheng, Zheng

    2013-01-01

    We describe a novel knowledge-based protein-ligand scoring function that employs a new definition for the reference state, allowing us to relate a statistical potential to a Lennard-Jones (LJ) potential. In this way, the LJ potential parameters were generated from protein-ligand complex structural data contained in the PDB. Forty-nine types of atomic pairwise interactions were derived using this method, which we call the knowledge-based and empirical combined scoring algorithm (KECSA). Two validation benchmarks were introduced to test the performance of KECSA. The first validation benchmark included two test sets that address the training-set and enthalpy/entropy aspects of KECSA. The second validation benchmark suite included two large-scale and five small-scale test sets to compare the reproducibility of KECSA with respect to two empirical score functions previously developed in our laboratory (LISA and LISA+), as well as to other well-known scoring methods. Validation results illustrate that KECSA shows improved performance in all test sets when compared with other scoring methods, especially in its ability to minimize the RMSE. LISA and LISA+ displayed similar performance using the correlation coefficient and Kendall τ as the metric of quality for some of the small test sets. Further pathways for improvement are discussed which would make KECSA more sensitive to subtle changes in ligand structure. PMID:23560465
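
The inverse-Boltzmann step behind any such statistical potential can be illustrated in a few lines: observed atom-pair distance counts are turned into a potential of mean force u(r) = -kT·ln(p_obs(r)/p_ref(r)). The bin edges, the uniform-shell reference state, and kT ≈ 0.593 kcal/mol are illustrative assumptions; KECSA's contribution is precisely a different reference-state definition that lets the result map onto a Lennard-Jones form.

```python
from math import log, pi

KT = 0.593  # kcal/mol at ~298 K (assumed)

def pmf(counts, edges, kt=KT):
    """Potential of mean force per distance bin.

    counts[i] = observed pair count in [edges[i], edges[i+1]);
    reference state = pairs spread uniformly over 3D shell volumes.
    """
    total = sum(counts)
    vols = [4.0 / 3.0 * pi * (edges[i + 1] ** 3 - edges[i] ** 3)
            for i in range(len(counts))]
    vtot = sum(vols)
    u = []
    for c, v in zip(counts, vols):
        if c == 0:
            u.append(float("inf"))   # never observed: treated as repulsive
        else:
            u.append(-kt * log((c / total) / (v / vtot)))
    return u
```

Bins that are over-populated relative to the reference come out with negative (favorable) energies and under-populated ones positive; fitting an LJ functional form to such a profile is one way to obtain pairwise parameters.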

  11. From protein sequences to 3D-structures and beyond: the example of the UniProt knowledgebase.

    PubMed

    Hinz, Ursula

    2010-04-01

    With the dramatic increase in the volume of experimental results in every domain of life sciences, assembling pertinent data and combining information from different fields has become a challenge. Information is dispersed over numerous specialized databases and is presented in many different formats. Rapid access to experiment-based information about well-characterized proteins helps predict the function of uncharacterized proteins identified by large-scale sequencing. In this context, universal knowledgebases play essential roles in providing access to data from complementary types of experiments and serving as hubs with cross-references to many specialized databases. This review outlines how the value of experimental data is optimized by combining high-quality protein sequences with complementary experimental results, including information derived from protein 3D-structures, using as an example the UniProt knowledgebase (UniProtKB) and the tools and links provided on its website ( http://www.uniprot.org/ ). It also evokes precautions that are necessary for successful predictions and extrapolations.

  12. Development of an intelligent interface for adding spatial objects to a knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Goettsche, Craig

    1989-01-01

    Earth scientists lack adequate tools for quantifying complex relationships between existing data layers and for studying and modeling the dynamic interactions of these data layers. There is a need for an earth-systems tool to manipulate multi-layered, heterogeneous data sets that are spatially indexed, such as sensor imagery and maps, easily and intelligently in a single system. The system can access and manipulate data from multiple sensor sources, maps, and from a learned object hierarchy using an advanced knowledge-based geographic information system. A prototype Knowledge-Based Geographic Information System (KBGIS) was recently constructed. Many of the system internals are well developed, but the system lacks an adequate user interface. A methodology is described for developing an intelligent user interface and extending KBGIS to interconnect with existing NASA systems, such as imagery from the Land Analysis System (LAS), atmospheric data in Common Data Format (CDF), and visualization of complex data with the National Space Science Data Center Graphics System. This would allow NASA to quickly explore the utility of such a system, given the ability to transfer data in and out of KBGIS easily. The use and maintenance of the object hierarchies as polymorphic data types brings, to data management, a whole new set of problems and issues, few of which have been explored above the prototype level.

  13. Personal profile of medical students selected through a knowledge-based exam only: are we missing suitable students?

    PubMed Central

    Abbiati, Milena; Baroffio, Anne; Gerbase, Margaret W.

    2016-01-01

    Introduction A consistent body of literature highlights the importance of a broader approach to selecting medical school candidates, assessing both cognitive capacity and individual characteristics. However, selection in a great number of medical schools worldwide is still based on knowledge exams, a procedure that might overlook students with the personal characteristics needed for future medical practice. We investigated whether the personal profile of students selected through a knowledge-based exam differed from that of those not selected. Methods Students applying for medical school (N=311) completed questionnaires assessing motivations for becoming a doctor, learning approaches, personality traits, empathy, and coping styles. Selection was based on the results of MCQ tests. Principal component analysis was used to draw a profile of the students. Differences between selected and non-selected students were examined by multivariate ANOVAs, and their impact on selection by logistic regression analysis. Results Students demonstrating a profile of diligence, with higher conscientiousness, a deep learning approach, and task-focused coping, were more frequently selected (p=0.01). Other personal characteristics such as motivation, sociability, and empathy did not differ significantly between selected and non-selected students. Conclusion Selection through a knowledge-based exam privileged diligent students. It neither advantaged nor precluded candidates with a more humane profile. PMID:27079886

  14. Knowledge-based annotation of small molecule binding sites in proteins

    PubMed Central

    2010-01-01

    developed to predict binding sites with high accuracy in terms of their biological validity. It also provides a common platform for function prediction, knowledge-based docking and for small molecule virtual screening. The method can be applied even for a query sequence without structure. The method is available at http://www.ncbi.nlm.nih.gov/Structure/ibis/ibis.cgi. PMID:20594344

  15. Architecture for Knowledge-Based and Federated Search of Online Clinical Evidence

    PubMed Central

    Walther, Martin; Nguyen, Ken; Lovell, Nigel H

    2005-01-01

    Background It is increasingly difficult for clinicians to keep up-to-date with the rapidly growing biomedical literature. Online evidence retrieval methods are now seen as a core tool to support evidence-based health practice. However, standard search engine technology is not designed to manage the many different types of evidence sources that are available or to handle the very different information needs of various clinical groups, who often work in widely different settings. Objectives The objectives of this paper are (1) to describe the design considerations and system architecture of a wrapper-mediator approach to federated search system design, including the use of knowledge-based meta-search filters, and (2) to analyze the implications of system design choices on performance measurements. Methods A trial was performed to evaluate the technical performance of a federated evidence retrieval system, which provided access to eight distinct online resources, including e-journals, PubMed, and electronic guidelines. The Quick Clinical system architecture utilized a universal query language to reformulate queries internally and meta-search filters to optimize search strategies across resources. We recruited 227 family physicians from across Australia who used the system to retrieve evidence in a routine clinical setting over a 4-week period. The total search time for a query was recorded, along with the duration of individual queries sent to different online resources. Results Clinicians performed 1662 searches over the trial. The average search duration was 4.9 ± 3.2 s (N = 1662 searches). Mean search duration to the individual sources was between 0.05 s and 4.55 s. Average system time (ie, system overhead) was 0.12 s. Conclusions The relatively small system overhead compared to the average time it takes to perform a search for an individual source shows that the system achieves a good trade-off between performance and reliability. Furthermore, despite

  16. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    SciTech Connect

    Shiraishi, Satomi; Moore, Kevin L.; Tan, Jun; Olsen, Lindsey A.

    2015-02-15

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V{sub 10Gy} (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QM{sub clin} − QM{sub pred}, and a coefficient of determination, R{sup 2}. For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are

  17. Considering Human Capital Theory in Assessment and Training: Mapping the Gap between Current Skills and the Needs of a Knowledge-Based Economy in Northeast Iowa

    ERIC Educational Resources Information Center

    Mihm-Herold, Wendy

    2010-01-01

    In light of the current economic downturn, thousands of Iowans are unemployed and this is the ideal time to build the skills of the workforce to compete in the knowledge-based economy so businesses and entrepreneurs can compete in a global economy. A tool for assessing the skills and knowledge of dislocated workers and students as well as…

  18. Reflexive Professionalism as a Second Generation of Evidence-Based Practice: Some Considerations on the Special Issue "What Works? Modernizing the Knowledge-Base of Social Work"

    ERIC Educational Resources Information Center

    Otto, Hans-Uwe; Polutta, Andreas; Ziegler, Holger

    2009-01-01

    This article refers sympathetically to the thoughtful debates and positions in the "Research on Social Work Practice" ("RSWP"; Special Issue, July, 2008 issue) on "What Works? Modernizing the Knowledge-Base of Social Work." It highlights the need for empirical efficacy and effectiveness research in social work and appreciates empirical rigor…

  19. An Emerging Knowledge-Based Economy in China? Indicators from OECD Databases. OECD Science, Technology and Industry Working Papers, 2004/4

    ERIC Educational Resources Information Center

    Criscuolo, Chiara; Martin, Ralf

    2004-01-01

    The main objective of this Working Paper is to show a set of indicators on the knowledge-based economy for China, mainly compiled from databases within EAS, although data from databases maintained by other parts of the OECD are included as well. These indicators are put in context by comparison with data for the United States, Japan and the EU (or…

  20. Creating a Knowledge-Based Economy in the United Arab Emirates: Realising the Unfulfilled Potential of Women in the Science, Technology and Engineering Fields

    ERIC Educational Resources Information Center

    Aswad, Noor Ghazal; Vidican, Georgeta; Samulewicz, Diana

    2011-01-01

    As the United Arab Emirates (UAE) moves towards a knowledge-based economy, maximising the participation of the national workforce, especially women, in the transformation process is crucial. Using survey methods and semi-structured interviews, this paper examines the factors that influence women's decisions regarding their degree programme and…

  1. D0 data processing within EDG/LCG

    SciTech Connect

    Harenberg, Torsten; Bos, Kors; Byrom, Rob; Fisher, Steve; Groep, David; van Leeuwen, Willem; Mattig, Peter; Templon, Jeff; /NIKHEF, Amsterdam

    2004-12-01

    In September 2003, the D0 experiment at TEvatron has launched a reprocessing effort. In total 519,212,822 of the experiment's events have been reprocessed to use the new perceptions of the detector's behavior. Out of these events 97,619,114 have been reprocessed at remote sites. For the first time, the European DataGRID has been used to re-process a part of these events as an evaluation of the EDG application testbed. They used EDG's own R-GMA database for monitoring and bookkeeping and constructed four tables: (1) submission table--records the submission of jobs to the Resource Broker; (2) job start table--holds the time the job started on a Worker Node together with process ID and many more; (3) job end table--information is published immediately before the job stops; and (4) command table--a command list table for debugging purposes. As D0 has its own data management system called ''SAM'', some sort of channel between SAM and the EDG data management system is required. The approach used is shown to the left, were a certain storage area, physically present on a back-end server machine, is visible both from a SAM-enabled machine (''SAM station'') and from EDG machines at the same site. This has bene achieved at NIKHEF. The D0 software has been adapted to run in the EDG framework. Only a few changes has to be made. Missing libraries were included and some extra packages were shipped (pyxml and Python 2.1). However, arounding wrapper scripts were written to handle the in- and output and put/get it from/to EDG's Data Management System (DMS).

  2. LCG Persistency Framework (CORAL, COOL, POOL): Status and Outlook

    NASA Astrophysics Data System (ADS)

    Valassi, A.; Clemencic, M.; Dykstra, D.; Frank, M.; Front, D.; Govi, G.; Kalkhof, A.; Loth, A.; Nowak, M.; Pokorski, W.; Salnikov, A.; Schmidt, S. A.; Trentadue, R.; Wache, M.; Xie, Z.

    2011-12-01

    The Persistency Framework consists of three software packages (CORAL, COOL and POOL) addressing the data access requirements of the LHC experiments in different areas. It is the result of the collaboration between the CERN IT Department and the three experiments (ATLAS, CMS and LHCb) that use this software to access their data. POOL is a hybrid technology store for C++ objects, metadata catalogs and collections. CORAL is a relational database abstraction layer with an SQL-free API. COOL provides specific software tools and components for the handling of conditions data. This paper reports on the status and outlook of the project and reviews in detail the usage of each package in the three experiments.

  3. LCG Persistency Framework (CORAL, COOL, POOL): Status and Outlook

    SciTech Connect

    Valassi, A.; Clemencic, M.; Dykstra, D.; Frank, M.; Front, D.; Govi, G.; Kalkhof, A.; Loth, A.; Nowak, M.; Pokorski, W.; Salnikov, A.; Schmidt, S.A.; Trentadue, R.; Wache, M.; Xie, Z.; /Princeton U.

    2012-04-19

    The Persistency Framework consists of three software packages (CORAL, COOL and POOL) addressing the data access requirements of the LHC experiments in different areas. It is the result of the collaboration between the CERN IT Department and the three experiments (ATLAS, CMS and LHCb) that use this software to access their data. POOL is a hybrid technology store for C++ objects, metadata catalogs and collections. CORAL is a relational database abstraction layer with an SQL-free API. COOL provides specific software tools and components for the handling of conditions data. This paper reports on the status and outlook of the project and reviews in detail the usage of each package in the three experiments.

  4. Computational and human observer image quality evaluation of low dose, knowledge-based CT iterative reconstruction

    SciTech Connect

    Eck, Brendan L.; Fahmi, Rachid; Miao, Jun; Brown, Kevin M.; Zabic, Stanislav; Raihani, Nilgoun; Wilson, David L.

    2015-10-15

    Purpose: Aims in this study are to (1) develop a computational model observer which reliably tracks the detectability of human observers in low dose computed tomography (CT) images reconstructed with knowledge-based iterative reconstruction (IMR™, Philips Healthcare) and filtered back projection (FBP) across a range of independent variables, (2) use the model to evaluate detectability trends across reconstructions and make predictions of human observer detectability, and (3) perform human observer studies based on model predictions to demonstrate applications of the model in CT imaging. Methods: Detectability (d′) was evaluated in phantom studies across a range of conditions. Images were generated using a numerical CT simulator. Trained observers performed 4-alternative forced choice (4-AFC) experiments across dose (1.3, 2.7, 4.0 mGy), pin size (4, 6, 8 mm), contrast (0.3%, 0.5%, 1.0%), and reconstruction (FBP, IMR), at fixed display window. A five-channel Laguerre–Gauss channelized Hotelling observer (CHO) was developed with internal noise added to the decision variable and/or to channel outputs, creating six different internal noise models. Semianalytic internal noise computation was tested against Monte Carlo and used to accelerate internal noise parameter optimization. Model parameters were estimated from all experiments at once using maximum likelihood on the probability correct, P{sub C}. Akaike information criterion (AIC) was used to compare models of different orders. The best model was selected according to AIC and used to predict detectability in blended FBP-IMR images, analyze trends in IMR detectability improvements, and predict dose savings with IMR. Predicted dose savings were compared against 4-AFC study results using physical CT phantom images. Results: Detection in IMR was greater than FBP in all tested conditions. The CHO with internal noise proportional to channel output standard deviations, Model-k4, showed the best trade-off between fit

  5. An intelligent, knowledge-based multiple criteria decision making advisor for systems design

    NASA Astrophysics Data System (ADS)

    Li, Yongchang

    of an appropriate decision making method. Furthermore, some DMs may be exclusively using one or two specific methods which they are familiar with or trust and not realizing that they may be inappropriate to handle certain classes of the problems, thus yielding erroneous results. These issues reveal that in order to ensure a good decision a suitable decision method should be chosen before the decision making process proceeds. The first part of this dissertation proposes an MCDM process supported by an intelligent, knowledge-based advisor system referred to as Multi-Criteria Interactive Decision-Making Advisor and Synthesis process (MIDAS), which is able to facilitate the selection of the most appropriate decision making method and which provides insight to the user for fulfilling different preferences. The second part of this dissertation presents an autonomous decision making advisor which is capable of dealing with ever-evolving real time information and making autonomous decisions under uncertain conditions. The advisor encompasses a Markov Decision Process (MDP) formulation which takes uncertainty into account when determines the best action for each system state. (Abstract shortened by UMI.)

  6. Computational and human observer image quality evaluation of low dose, knowledge-based CT iterative reconstruction

    PubMed Central

    Eck, Brendan L.; Fahmi, Rachid; Brown, Kevin M.; Zabic, Stanislav; Raihani, Nilgoun; Miao, Jun; Wilson, David L.

    2015-01-01

    Purpose: Aims in this study are to (1) develop a computational model observer which reliably tracks the detectability of human observers in low dose computed tomography (CT) images reconstructed with knowledge-based iterative reconstruction (IMR™, Philips Healthcare) and filtered back projection (FBP) across a range of independent variables, (2) use the model to evaluate detectability trends across reconstructions and make predictions of human observer detectability, and (3) perform human observer studies based on model predictions to demonstrate applications of the model in CT imaging. Methods: Detectability (d′) was evaluated in phantom studies across a range of conditions. Images were generated using a numerical CT simulator. Trained observers performed 4-alternative forced choice (4-AFC) experiments across dose (1.3, 2.7, 4.0 mGy), pin size (4, 6, 8 mm), contrast (0.3%, 0.5%, 1.0%), and reconstruction (FBP, IMR), at fixed display window. A five-channel Laguerre–Gauss channelized Hotelling observer (CHO) was developed with internal noise added to the decision variable and/or to channel outputs, creating six different internal noise models. Semianalytic internal noise computation was tested against Monte Carlo and used to accelerate internal noise parameter optimization. Model parameters were estimated from all experiments at once using maximum likelihood on the probability correct, PC. Akaike information criterion (AIC) was used to compare models of different orders. The best model was selected according to AIC and used to predict detectability in blended FBP-IMR images, analyze trends in IMR detectability improvements, and predict dose savings with IMR. Predicted dose savings were compared against 4-AFC study results using physical CT phantom images. Results: Detection in IMR was greater than FBP in all tested conditions. The CHO with internal noise proportional to channel output standard deviations, Model-k4, showed the best trade-off between fit and

  7. Intelligent personal navigator supported by knowledge-based systems for estimating dead reckoning navigation parameters

    NASA Astrophysics Data System (ADS)

    Moafipoor, Shahram

    Personal navigators (PN) have been studied for about a decade in different fields and applications, such as safety and rescue operations, security and emergency services, and police and military applications. The common goal of all these applications is to provide precise and reliable position, velocity, and heading information of each individual in various environments. In the PN system developed in this dissertation, the underlying assumption is that the system does not require pre-existing infrastructure to enable pedestrian navigation. To facilitate this capability, a multisensor system concept, based on the Global Positioning System (GPS), inertial navigation, barometer, magnetometer, and a human pedometry model has been developed. An important aspect of this design is to use the human body as navigation sensor to facilitate Dead Reckoning (DR) navigation in GPS-challenged environments. The system is designed predominantly for outdoor environments, where occasional loss of GPS lock may happen; however, testing and performance demonstration have been extended to indoor environments. DR navigation is based on a relative-measurement approach, with the key idea of integrating the incremental motion information in the form of step direction (SD) and step length (SL) over time. The foundation of the intelligent navigation system concept proposed here rests in exploiting the human locomotion pattern, as well as change of locomotion in varying environments. In this context, the term intelligent navigation represents the transition from the conventional point-to-point DR to dynamic navigation using the knowledge about the mechanism of the moving person. This approach increasingly relies on integrating knowledge-based systems (KBS) and artificial intelligence (AI) methodologies, including artificial neural networks (ANN) and fuzzy logic (FL). 
In addition, a general framework of the quality control for the real-time validation of the DR processing is proposed, based on a

  8. Ontology Language to Support Description of Experiment Control System Semantics, Collaborative Knowledge-Base Design and Ontology Reuse

    SciTech Connect

    Vardan Gyurjyan, D Abbott, G Heyes, E Jastrzembski, B Moffit, C Timmer, E Wolin

    2009-10-01

    In this paper we discuss the control domain specific ontology that is built on top of the domain-neutral Resource Definition Framework (RDF). Specifically, we will discuss the relevant set of ontology concepts along with the relationships among them in order to describe experiment control components and generic event-based state machines. Control Oriented Ontology Language (COOL) is a meta-data modeling language that provides generic means for representation of physics experiment control processes and components, and their relationships, rules and axioms. It provides a semantic reference frame that is useful for automating the communication of information for configuration, deployment and operation. COOL has been successfully used to develop a complete and dynamic knowledge-base for experiment control systems, developed using the AFECS framework.

  9. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks.

    PubMed

    Khan, Taimoor; De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results. PMID:27382616

  10. Northeast Artificial Intelligence Consortium (NAIC). Volume 14. Knowledge-base retrieval using plausible inference. Final report, Sep 84-Dec 89

    SciTech Connect

    Croft, W.B.; Cohen, P.R.

    1990-12-01

    The Northeast Artificial Intelligence Consortium (NAIC) was created by the Air Force Systems Command, Rome Air Development Center, and the Office of Scientific Research. Its purpose was conduct pertinent research artificial intelligence and to perform activities to this research. This report describes progress during the existence of the NAIC on the technical research tasks undertaken at the member universities. The topics covered in general are: versatile expert system for equipment maintenance, distributed AI for communications system control, automatic photointerpretation, time-oriented problem solving, speech understanding systems, knowledge base maintenance, hardware architectures for very large systems, knowledge-based reasoning and planning, and a knowledge acquisition, assistance, and explanation system. The specific topic for this volume is plausible inference as an effective computational framework for the retrieval of complex objects.

  11. Knowledge-based method for determining the meaning of ambiguous biomedical terms using information content measures of similarity.

    PubMed

    McInnes, Bridget T; Pedersen, Ted; Liu, Ying; Melton, Genevieve B; Pakhomov, Serguei V

    2011-01-01

    In this paper, we introduce a novel knowledge-based word sense disambiguation method that determines the sense of an ambiguous word in biomedical text using semantic similarity or relatedness measures. These measures quantify the degree of similarity between concepts in the Unified Medical Language System (UMLS). The objective of this work was to develop a method that can disambiguate terms in biomedical text by exploiting similarity information extracted from the UMLS and to evaluate the efficacy of information content-based semantic similarity measures, which augment path-based information with probabilities derived from biomedical corpora. We show that information content-based measures obtain a higher disambiguation accuracy than path-based measures because they weight the path based on where it exists in the taxonomy coupled with the probability of the concepts occurring in a corpus of text.

  12. STRUTEX: A prototype knowledge-based system for initially configuring a structure to support point loads in two dimensions

    NASA Technical Reports Server (NTRS)

    Robers, James L.; Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    Only recently have engineers begun making use of Artificial Intelligence (AI) tools in the area of conceptual design. To continue filling this void in the design process, a prototype knowledge-based system, called STRUTEX has been developed to initially configure a structure to support point loads in two dimensions. This prototype was developed for testing the application of AI tools to conceptual design as opposed to being a testbed for new methods for improving structural analysis and optimization. This system combines numerical and symbolic processing by the computer with interactive problem solving aided by the vision of the user. How the system is constructed to interact with the user is described. Of special interest is the information flow between the knowledge base and the data base under control of the algorithmic main program. Examples of computed and refined structures are presented during the explanation of the system.

  13. STRUTEX: A prototype knowledge-based system for initially configuring a structure to support point loads in two dimensions

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Feyock, Stefan; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    The purpose of this research effort is to investigate the benefits that might be derived from applying artificial intelligence tools in the area of conceptual design. Therefore, the emphasis is on the artificial intelligence aspects of conceptual design rather than structural and optimization aspects. A prototype knowledge-based system, called STRUTEX, was developed to initially configure a structure to support point loads in two dimensions. This system combines numerical and symbolic processing by the computer with interactive problem solving aided by the vision of the user by integrating a knowledge base interface and inference engine, a data base interface, and graphics while keeping the knowledge base and data base files separate. The system writes a file which can be input into a structural synthesis system, which combines structural analysis and optimization.

  14. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks

    PubMed Central

    De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results. PMID:27382616

  15. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks.

    PubMed

    Khan, Taimoor; De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results.

  16. An approach to knowledge engineering to support knowledge-based simulation of payload ground processing at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Mcmanus, Shawn; Mcdaniel, Michael

    1989-01-01

    Planning for processing payloads was always difficult and time-consuming. With the advent of Space Station Freedom and its capability to support a myriad of complex payloads, the planning to support this ground processing maze involves thousands of man-hours of often tedious data manipulation. To provide the capability to analyze various processing schedules, an object oriented knowledge-based simulation environment called the Advanced Generic Accomodations Planning Environment (AGAPE) is being developed. Having nearly completed the baseline system, the emphasis in this paper is directed toward rule definition and its relation to model development and simulation. The focus is specifically on the methodologies implemented during knowledge acquisition, analysis, and representation within the AGAPE rule structure. A model is provided to illustrate the concepts presented. The approach demonstrates a framework for AGAPE rule development to assist expert system development.

  17. Experiences in improving the state of the practice in verification and validation of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Culbert, Chris; French, Scott W.; Hamilton, David

    1994-01-01

    Knowledge-based systems (KBS's) are in general use in a wide variety of domains, both commercial and government. As reliance on these types of systems grows, the need to assess their quality and validity reaches critical importance. As with any software, the reliability of a KBS can be directly attributed to the application of disciplined programming and testing practices throughout the development life-cycle. However, there are some essential differences between conventional software and KBSs, both in construction and use. The identification of these differences affect the verification and validation (V&V) process and the development of techniques to handle them. The recognition of these differences is the basis of considerable on-going research in this field. For the past three years IBM (Federal Systems Company - Houston) and the Software Technology Branch (STB) of NASA/Johnson Space Center have been working to improve the 'state of the practice' in V&V of Knowledge-based systems. This work was motivated by the need to maintain NASA's ability to produce high quality software while taking advantage of new KBS technology. To date, the primary accomplishment has been the development and teaching of a four-day workshop on KBS V&V. With the hope of improving the impact of these workshops, we also worked directly with NASA KBS projects to employ concepts taught in the workshop. This paper describes two projects that were part of this effort. In addition to describing each project, this paper describes problems encountered and solutions proposed in each case, with particular emphasis on implications for transferring KBS V&V technology beyond the NASA domain.

  18. Applications of artificial intelligence 1993: Knowledge-based systems in aerospace and industry; Proceedings of the Meeting, Orlando, FL, Apr. 13-15, 1993

    NASA Technical Reports Server (NTRS)

    Fayyad, Usama M. (Editor); Uthurusamy, Ramasamy (Editor)

    1993-01-01

    The present volume on applications of artificial intelligence with regard to knowledge-based systems in aerospace and industry discusses machine learning and clustering, expert systems and optimization techniques, monitoring and diagnosis, and automated design and expert systems. Attention is given to the integration of AI reasoning systems and hardware description languages, care-based reasoning, knowledge, retrieval, and training systems, and scheduling and planning. Topics addressed include the preprocessing of remotely sensed data for efficient analysis and classification, autonomous agents as air combat simulation adversaries, intelligent data presentation for real-time spacecraft monitoring, and an integrated reasoner for diagnosis in satellite control. Also discussed are a knowledge-based system for the design of heat exchangers, reuse of design information for model-based diagnosis, automatic compilation of expert systems, and a case-based approach to handling aircraft malfunctions.

  19. XML-based data model and architecture for a knowledge-based grid-enabled problem-solving environment for high-throughput biological imaging.

    PubMed

    Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif

    2008-03-01

    High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services. The first level provides tools for extracting spatiotemporal knowledge from image sets and the second level provides high-level knowledge management and reasoning services. We then present cellular imaging markup language, an extensible markup language-based language for modeling of biological images and representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.

  20. A knowledge-based taxonomy of critical factors for adopting electronic health record systems by physicians: a systematic literature review

    PubMed Central

    2010-01-01

    Background The health care sector is an area of social and economic interest in several countries; therefore, there have been many efforts to introduce electronic health records. Nevertheless, there is evidence suggesting that these systems have not been adopted as widely as expected, and although there are some proposals to support their adoption, that support is not provided by means of information and communication technology, which could offer automatic support tools. The aim of this study is to identify the critical factors in the adoption of electronic health records by physicians and to use them as a guide for supporting the adoption process automatically. Methods This paper presents, following the PRISMA statement, a systematic literature review, over electronic databases, of adoption studies of electronic health records published in English. Software applications that manage and process the data in the electronic health record were considered, i.e., computerized physician prescription, electronic medical records, and electronic capture of clinical data. Our review was conducted to obtain a taxonomy of physicians' main barriers to adopting electronic health records that can be addressed by means of information and communication technology, in particular through the information technology roles of the knowledge management processes. This leads to the question addressed in this work: "What are the critical adoption factors of electronic health records that can be supported by information and communication technology?". Reports from eight databases covering electronic health records adoption studies in the medical domain, in particular those focused on physicians, were analyzed. Results The review identifies two main issues: 1) a knowledge-based classification of critical factors for adopting electronic health records by physicians; and 2) the definition of a base for the design of a conceptual framework for supporting the

  1. Development of a Knowledge-based Application Utilizing Ontologies for the Continuing Site-specific JJ1017 Master Maintenance.

    PubMed

    Kobayashi, Tatsuaki; Tsuji, Shintaro; Yagahara, Ayako; Tanikawa, Takumi; Umeda, Tokuo

    2015-07-01

    The purpose of this study was to develop the JJ1017 Knowledge-based Application (JKA) to support the continuing maintenance of a site-specific JJ1017 master, defined by the JJ1017 guideline as a standard radiologic procedure master for the medical information systems being adopted by some medical facilities in Japan. The method consisted of the following three steps: (1) construction of the JJ1017 Ontology (JJOnt) as a knowledge base using Hozo (an environment for building/using ontologies); (2) development of the modules (operation, I/O, and graph modules) required to continue the maintenance of a site-specific JJ1017 master; and (3) unit testing of the JKA, which consists of the JJOnt and the modules. As a result, the JJOnt contained 21,697 classes. Among the radiologic procedure classes, JJ1017 master codes for external beam radiotherapy accounted for the largest proportion (51%). In unit testing of the JKA, we checked the main operations (e.g., keyword search of a JJ1017 master code/code meaning, editing the description of classes, etc.). The JJOnt is a knowledge base for implementing features that medical technologists find necessary in medical information systems. To enable medical technologists to exchange and retrieve semantically accurate information while using medical information systems in the future, we expect the JKA to support the maintenance and improvement of the site-specific JJ1017 master.

  2. Knowledge-based expert systems and a proof-of-concept case study for multiple sequence alignment construction and analysis.

    PubMed

    Aniba, Mohamed Radhouene; Siguenza, Sophie; Friedrich, Anne; Plewniak, Frédéric; Poch, Olivier; Marchler-Bauer, Aron; Thompson, Julie Dawn

    2009-01-01

    The traditional approach to bioinformatics analyses relies on independent task-specific services and applications, using different input and output formats, often idiosyncratic, and frequently not designed to inter-operate. In general, such analyses were performed by experts who manually verified the results obtained at each step in the process. Today, the amount of bioinformatics information continuously being produced means that handling the various applications used to study this information presents a major data management and analysis challenge to researchers. It is now impossible to manually analyse all this information and new approaches are needed that are capable of processing the large-scale heterogeneous data in order to extract the pertinent information. We review the recent use of integrated expert systems aimed at providing more efficient knowledge extraction for bioinformatics research. A general methodology for building knowledge-based expert systems is described, focusing on the unstructured information management architecture, UIMA, which provides facilities for both data and process management. A case study involving a multiple alignment expert system prototype called AlexSys is also presented.
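    The UIMA-style architecture described above chains independent analysis engines over a shared common analysis structure. The toy pipeline below sketches that pattern in plain Python; the annotator names and the dict-based structure are illustrative assumptions, not UIMA's actual API.

```python
# Each annotator reads the shared analysis structure (a dict here)
# and adds its own annotations, mirroring UIMA analysis engines.

def tokenizer(cas):
    cas["tokens"] = cas["text"].split()
    return cas

def uppercase_gene_tagger(cas):
    # naive stand-in for a real named-entity annotator
    cas["genes"] = [t for t in cas["tokens"] if t.isupper() and len(t) > 1]
    return cas

def run_pipeline(text, annotators):
    """Apply annotators in order, accumulating annotations in one structure."""
    cas = {"text": text}
    for annotate in annotators:
        cas = annotate(cas)
    return cas

cas = run_pipeline("alignment of BRCA1 and TP53 homologs",
                   [tokenizer, uppercase_gene_tagger])
```

    The design point is that each stage only depends on the shared structure, so annotators can be developed, swapped, and reordered independently.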

  4. Data acquisition for a real time fault monitoring and diagnosis knowledge-based system for space power system

    NASA Technical Reports Server (NTRS)

    Wilhite, Larry D.; Lee, S. C.; Lollar, Louis F.

    1989-01-01

    The design and implementation of the real-time data acquisition and processing system employed in the AMPERES project is described, including effective data structures for efficient storage and flexible manipulation of the data by the knowledge-based system (KBS), the interprocess communication mechanism required between the data acquisition system and the KBS, and the appropriate data acquisition protocols for collecting data from the sensors. Sensor data are categorized as critical or noncritical data on the basis of the inherent frequencies of the signals and the diagnostic requirements reflected in their values. The critical data set contains 30 analog values and 42 digital values and is collected every 10 ms. The noncritical data set contains 240 analog values and is collected every second. The collected critical and noncritical data are stored in separate circular buffers. Buffers are created in shared memory to enable other processes, i.e., the fault monitoring and diagnosis process and the user interface process, to freely access the data sets.
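    The circular-buffer scheme described above can be sketched with a fixed-capacity ring buffer; the capacity chosen below is an illustrative assumption, while the sample shape (30 analog plus 42 digital values every 10 ms) follows the abstract.

```python
from collections import deque

class CircularBuffer:
    """Fixed-capacity ring buffer: once full, each push silently
    overwrites the oldest sample, as in the AMPERES-style buffers."""
    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)

    def push(self, sample):
        self._buf.append(sample)  # deque drops the oldest entry when full

    def latest(self, n):
        """Return up to the n most recent samples, oldest first."""
        return list(self._buf)[-n:]

# critical data: one sample of 30 analog + 42 digital values every 10 ms
critical = CircularBuffer(capacity=1000)   # roughly 10 s of history
for t in range(1500):                      # more pushes than capacity
    critical.push({"t_ms": t * 10, "analog": [0.0] * 30, "digital": [0] * 42})
```

    In the real system the buffers live in shared memory so the diagnosis and user-interface processes can read them concurrently; this sketch only shows the overwrite semantics.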

  5. Classification and comparison of ligand-binding sites derived from grid-mapped knowledge-based potentials.

    PubMed

    Hoppe, Christian; Steinbeck, Christoph; Wohlfahrt, Gerd

    2006-03-01

    We describe the application of knowledge-based potentials implemented in the MOE program to compare the ligand-binding sites of several proteins. The binding probabilities for a polar and a hydrophobic probe are calculated on a grid to allow easy comparison of binding sites of superimposed related proteins. The method is fast and simple enough to simultaneously use structural information of multiple proteins of a target family. The method can be used to rapidly cluster proteins into subfamilies according to the similarity of hydrophobic and polar fields of their ligand-binding sites. Regions of the binding site that are common within a protein family can be identified and analysed for the design of family-targeted libraries, while regions that differ can be exploited to improve ligand selectivity. The field-based hierarchical clustering is demonstrated for three protein families: the ligand-binding domains of nuclear receptors, the ATP-binding sites of protein kinases and the substrate binding sites of proteases. More detailed comparisons are presented for serine proteases of the chymotrypsin family, for the peroxisome proliferator-activated receptor subfamily of nuclear receptors and for progesterone and androgen receptor. The results are in good accordance with structure-based analysis and highlight important differences between the binding sites, which have also been described in the literature.
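    The field-based hierarchical clustering described above can be sketched with a naive single-linkage agglomeration over per-site field vectors. The toy vectors (sampled polar/hydrophobic probabilities) and the linkage choice are illustrative assumptions, not the paper's exact protocol.

```python
import math

def dist(a, b):
    """Euclidean distance between two field vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def single_linkage(points, n_clusters):
    """Naive agglomerative clustering: repeatedly merge the two clusters
    whose closest members are nearest, until n_clusters remain."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters

# toy "field vectors": (polar, hydrophobic) probe probabilities per site
fields = [(0.9, 0.1), (0.85, 0.15), (0.1, 0.9), (0.15, 0.95)]
groups = single_linkage(fields, n_clusters=2)
```

    Real grid-mapped fields would be much longer vectors (one entry per grid point per probe), but the clustering step is the same.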

  6. Structure for a knowledge-based system to estimate Soviet tactics in the air-land battle. Master's thesis

    SciTech Connect

    Fletcher, A.M.

    1988-03-01

    The purpose of this thesis was to build a prototype decision aid that can use knowledge about Soviet military doctrine and tactics to infer when, where, and how the Soviet Army plans to attack NATO defenses given intelligence data about Soviet (Red) military units, terrain data, and the positions of the NATO (Blue) defenses. Issues are raised that must be resolved before such a decision aid, which is part of the Rapid Application of Air Power concept, can become operational. First examined is the need to shorten the C2 decision cycle in order for the ATOC staff to keep pace with the tempo of modern warfare. The Rapid Application of Air Power is a concept that includes automating various steps in the decision cycle to allow air power to be applied proactively to stop Soviet forces before they obtain critical objectives. A structure is presented for automating the second step in the decision cycle, assessing and clarifying the situation, through a knowledge-based decision aid for interpreting intelligence data from the perspective of Soviet (Red) doctrine and estimating future Red tactical objectives and maneuvers.

  7. PCOSKB: A KnowledgeBase on genes, diseases, ontology terms and biochemical pathways associated with PolyCystic Ovary Syndrome

    PubMed Central

    Joseph, Shaini; Barai, Ram Shankar; Bhujbalrao, Rasika; Idicula-Thomas, Susan

    2016-01-01

    Polycystic ovary syndrome (PCOS) is one of the major causes of female subfertility worldwide, and ≈7–10% of women of reproductive age are affected by it. The affected individuals exhibit varying types and levels of comorbid conditions, along with the classical PCOS symptoms. Extensive studies on PCOS across diverse ethnic populations have resulted in a plethora of information on dysregulated genes, gene polymorphisms and diseases linked to PCOS. However, efforts have not been taken to collate and link these data. Our group, for the first time, has compiled PCOS-related information available through scientific literature; cross-linked it with molecular, biochemical and clinical databases; and presented it as a user-friendly, web-based online knowledgebase for the benefit of the scientific and clinical community. Manually curated information on associated genes, single nucleotide polymorphisms, diseases, gene ontology terms and pathways, along with supporting reference literature, has been collated and included in PCOSKB (http://pcoskb.bicnirrh.res.in). PMID:26578565

  8. Practical Guidelines for Incorporating Knowledge-Based and Data-Driven Strategies into the Inference of Gene Regulatory Networks.

    PubMed

    Hsiao, Yu-Ting; Lee, Wei-Po; Yang, Wei; Müller, Stefan; Flamm, Christoph; Hofacker, Ivo; Kügler, Philipp

    2016-01-01

    Modeling gene regulatory networks (GRNs) is essential for conceptualizing how genes are expressed and how they influence each other. Typically, a reverse engineering approach is employed; this strategy is effective in reproducing possible fitting models of GRNs. To use this strategy, however, two daunting tasks must be undertaken: one task is to optimize the accuracy of inferred network behaviors; and the other task is to designate valid biological topologies for target networks. Although existing studies have addressed these two tasks for years, few of the studies can satisfy both of the requirements simultaneously. To address these difficulties, we propose an integrative modeling framework that combines knowledge-based and data-driven input sources to construct biological topologies with their corresponding network behaviors. To validate the proposed approach, a real dataset collected from the cell cycle of the yeast S. cerevisiae is used. The results show that the proposed framework can successfully infer solutions that meet the requirements of both the network behaviors and biological structures. Therefore, the outcomes are exploitable for future in vivo experimental design.
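    The integrative idea above, blending curated prior knowledge with data-driven evidence when ranking candidate regulatory edges, can be sketched as follows. The weighting scheme and the toy expression profiles are illustrative assumptions, not the paper's actual framework.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def edge_score(expr, prior, g1, g2, w_prior=0.4):
    """Blend |correlation| of expression profiles with a curated prior."""
    data_term = abs(pearson(expr[g1], expr[g2]))
    return w_prior * prior.get((g1, g2), 0.0) + (1 - w_prior) * data_term

expr = {"A": [1, 2, 3, 4], "B": [2, 4, 6, 8], "C": [4, 1, 3, 2]}
prior = {("A", "B"): 1.0}   # edge documented in the literature
score_ab = edge_score(expr, prior, "A", "B")
score_ac = edge_score(expr, prior, "A", "C")
```

    An edge supported by both the knowledge base and the data (A to B here) outranks one supported by neither, which is the core of the combined strategy.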

  9. AlzPlatform: An Alzheimer’s Disease Domain-Specific Chemogenomics Knowledgebase for Polypharmacology and Target Identification Research

    PubMed Central

    2015-01-01

    Alzheimer’s disease (AD) is one of the most complicated progressive neurodegeneration diseases that involve many genes, proteins, and their complex interactions. No effective medicines or treatments are available yet to stop or reverse the progression of the disease due to its polygenic nature. To facilitate discovery of new AD drugs and better understand the AD neurosignaling pathways involved, we have constructed an Alzheimer’s disease domain-specific chemogenomics knowledgebase, AlzPlatform (www.cbligand.org/AD/) with cloud computing and sourcing functions. AlzPlatform is implemented with powerful computational algorithms, including our established TargetHunter, HTDocking, and BBB Predictor for target identification and polypharmacology analysis for AD research. The platform has assembled various AD-related chemogenomics data records, including 928 genes and 320 proteins related to AD, 194 AD drugs approved or in clinical trials, and 405 188 chemicals associated with 1 023 137 records of reported bioactivities from 38 284 corresponding bioassays and 10 050 references. Furthermore, we have demonstrated the application of the AlzPlatform in three case studies for identification of multitargets and polypharmacology analysis of FDA-approved drugs and also for screening and prediction of new AD active small chemical molecules and potential novel AD drug targets by our established TargetHunter and/or HTDocking programs. The predictions were confirmed by reported bioactivity data and our in vitro experimental validation. Overall, AlzPlatform will enrich our knowledge for AD target identification, drug discovery, and polypharmacology analyses and, also, facilitate the chemogenomics data sharing and information exchange/communications in aid of new anti-AD drug discovery and development. PMID:24597646

  10. Prediction of conformational epitopes with the use of a knowledge-based energy function and geometrically related neighboring residue characteristics

    PubMed Central

    2013-01-01

    Background A conformational epitope (CE) in an antigenic protein is composed of amino acid residues that are spatially near each other on the antigen's surface but are separated in sequence; CEs bind their complementary paratopes in B-cell receptors and/or antibodies. CE prediction is used during vaccine design and in immuno-biological experiments. Here, we develop a novel system, CE-KEG, which predicts CEs based on knowledge-based energy and geometrical neighboring residue contents. The workflow applied grid-based mathematical morphological algorithms to efficiently detect the surface atoms of the antigens. After extracting surface residues, we ranked CE candidate residues first according to their local average energy distributions. Then, the frequencies at which geometrically related neighboring residue combinations in the potential CEs occurred were incorporated into our workflow, and the weighted combinations of the average energies and neighboring residue frequencies were used to assess the sensitivity, accuracy, and efficiency of our prediction workflow. Results We prepared a database containing 247 antigen structures and a second database containing the 163 non-redundant antigen structures in the first database to test our workflow. Our predictive workflow performed better than did algorithms found in the literature in terms of accuracy and efficiency. For the non-redundant dataset tested, our workflow achieved an average of 47.8% sensitivity, 84.3% specificity, and 80.7% accuracy according to a 10-fold cross-validation mechanism, with performance evaluated on the top three predicted CE candidates for each antigen. Conclusions Our method combines an energy profile for surface residues with the frequency that each geometrically related amino acid residue pair occurs to identify possible CEs in antigens. The combination of these features facilitates improved identification for immuno-biological studies and synthetic vaccine design.
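    The weighted-combination step described above can be sketched as a simple scoring function over surface residues. The weight, the residue names, and the toy numbers are assumptions for illustration; CE-KEG's actual energy model is more involved.

```python
def rank_ce_candidates(residues, w_energy=0.5, top_k=3):
    """residues: list of (name, avg_energy, neighbour_freq), with both
    quantities already normalised to [0, 1]. Lower energy is more
    favourable, so it enters the score as (1 - energy)."""
    scored = [(w_energy * (1.0 - energy) + (1 - w_energy) * freq, name)
              for name, energy, freq in residues]
    return [name for _, name in sorted(scored, reverse=True)[:top_k]]

# hypothetical surface residues with normalised energy and neighbour frequency
surface = [("ASP12", 0.2, 0.9), ("GLY33", 0.8, 0.1),
           ("LYS47", 0.3, 0.7), ("SER90", 0.9, 0.2)]
top3 = rank_ce_candidates(surface)
```

    Returning the top three candidates per antigen matches the evaluation protocol reported in the abstract.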

  11. A knowledge-based reactive-transport approach for the modeling of biogeochemical cycles at the continent-ocean interface

    NASA Astrophysics Data System (ADS)

    Regnier, P.; Aguilera, D.; Jourabchi, P.; Meile, C.; van Cappellen, P.; Vanderborght, J.-P.

    2003-04-01

    Reactive-transport models (RTMs) are traditionally developed and used to investigate the fate and transport of a selected set of chemical constituents within a given compartment of the earth, mainly at the local or subregional scale. As a result, existing RTMs tend to be environment and application specific. For instance, at the continent-ocean interface, RTMs have been used to simulate, among others, biogeochemical dynamics in rivers, estuaries, coastal areas, aquifers, and sediments. The development of upscaling protocols, where RTMs of interconnected environments are progressively aggregated into larger system units, is critical for merging marine and continental approaches to biogeochemical cycles. However, one of the major challenges in achieving this goal is the realistic and consistent representation of the highly complex reaction networks that characterize the chemical dynamics of the natural environments present along the continent-ocean continuum (rivers, estuaries, coastal areas, sediments). The expanding knowledge about (bio)geochemical transformation processes, gained via field- and laboratory-based experiments, also needs to be made available and integrated consistently (i.e. at comparable levels of complexity) across traditional disciplinary barriers, by utilizing the unifying conceptual and mathematical principles underlying all RTMs. Our modeling approach, based on a modular concept, offers the necessary flexibility for the implementation of new theoretical and experimental information on the rates and pathways of biogeochemical reactions. A key component of our reaction network simulator is the "Knowledge Base", which acts as a single evolving repository of up-to-date information on biogeochemical processes. The development of self-consistent, "Knowledge-Based" biogeochemical reaction network modules, which can be merged with existing transport models of the various compartments of the hydrosphere along the continent-ocean continuum, creates a

  12. Materials Characterization at Utah State University: Facilities and Knowledge-base of Electronic Properties of Materials Applicable to Spacecraft Charging

    NASA Technical Reports Server (NTRS)

    Dennison, J. R.; Thomson, C. D.; Kite, J.; Zavyalov, V.; Corbridge, Jodie

    2004-01-01

    In an effort to improve the reliability and versatility of spacecraft charging models designed to assist spacecraft designers in accommodating and mitigating the harmful effects of charging on spacecraft, the NASA Space Environments and Effects (SEE) Program has funded development of facilities at Utah State University for the measurement of the electronic properties of both conducting and insulating spacecraft materials. We present here an overview of our instrumentation and capabilities, which are particularly well suited to study electron emission as related to spacecraft charging. These measurements include electron-induced secondary and backscattered yields, spectra, and angular resolved measurements as a function of incident energy, species and angle, plus investigations of ion-induced electron yields, photoelectron yields, sample charging and dielectric breakdown. Extensive surface science characterization capabilities are also available to fully characterize the samples in situ. Our measurements for a wide array of conducting and insulating spacecraft materials have been incorporated into the SEE Charge Collector Knowledge-base as a Database of Electronic Properties of Materials Applicable to Spacecraft Charging. This Database provides an extensive compilation of electronic properties, together with parameterization of these properties in a format that can be easily used with existing spacecraft charging engineering tools and with next generation plasma, charging, and radiation models. Tabulated properties in the Database include: electron-induced secondary electron yield, backscattered yield and emitted electron spectra; He, Ar and Xe ion-induced electron yields and emitted electron spectra; photoyield and solar emittance spectra; and materials characterization including reflectivity, dielectric constant, resistivity, arcing, optical microscopy images, scanning electron micrographs, scanning tunneling microscopy images, and Auger electron spectra. 

  13. Towards knowledge-based systems in clinical practice: development of an integrated clinical information and knowledge management support system.

    PubMed

    Kalogeropoulos, Dimitris A; Carson, Ewart R; Collinson, Paul O

    2003-09-01

    Given that clinicians presented with identical clinical information will act in different ways, there is a need to introduce into routine clinical practice methods and tools to support the scientific homogeneity and accountability of healthcare decisions and actions. The benefits expected from such action include an overall reduction in cost, improved quality of care, and greater patient and public satisfaction. Computer-based medical data processing has yielded methods and tools for managing the task away from the hospital management level and closer to the desired disease and patient management level. To this end, advanced applications of information and disease process modelling technologies have already demonstrated an ability to significantly augment clinical decision making as a by-product. The widespread acceptance of evidence-based medicine as the basis of cost-conscious and concurrently quality-wise accountable clinical practice suffices as evidence supporting this claim. Electronic libraries are one step towards an online status of this key health-care delivery quality control environment. Nonetheless, to date, the underlying information and knowledge management technologies have failed to be integrated into any form of pragmatic or marketable online and real-time clinical decision making tool. One of the main obstacles that needs to be overcome is the development of systems that treat both information and knowledge as clinical objects with the same modelling requirements. This paper describes the development of such a system in the form of an intelligent clinical information management system: a system which at the most fundamental level of clinical decision support facilitates both the organised acquisition of clinical information and knowledge and provides a test-bed for the development and evaluation of knowledge-based decision support functions.

  14. MitProNet: A knowledgebase and analysis platform of proteome, interactome and diseases for mammalian mitochondria.

    PubMed

    Wang, Jiabin; Yang, Jian; Mao, Song; Chai, Xiaoqiang; Hu, Yuling; Hou, Xugang; Tang, Yiheng; Bi, Cheng; Li, Xiao

    2014-01-01

    The mitochondrion plays a central role in diverse biological processes in most eukaryotes, and its dysfunctions are critically involved in a large number of diseases and the aging process. A systematic identification of mitochondrial proteomes and characterization of functional linkages among mitochondrial proteins are fundamental to understanding the mechanisms underlying biological functions and human diseases associated with mitochondria. Here we present a database, MitProNet, which provides a comprehensive knowledgebase for the mitochondrial proteome, interactome and human diseases. First, an inventory of mammalian mitochondrial proteins was compiled by widely collecting proteomic datasets, and the proteins were classified by machine learning to achieve a high-confidence list of mitochondrial proteins. The current version of MitProNet covers 1124 high-confidence proteins, and the remainder were further classified as middle- or low-confidence. An organelle-specific network of functional linkages among mitochondrial proteins was then generated by integrating genomic features encoded by a wide range of datasets including genomic context, gene expression profiles, protein-protein interactions, functional similarity and metabolic pathways. The functional-linkage network should be a valuable resource for the study of biological functions of mitochondrial proteins and human mitochondrial diseases. Furthermore, we utilized the network to predict candidate genes for mitochondrial diseases using prioritization algorithms. All proteins, functional linkages and disease candidate genes in MitProNet were annotated according to the information collected from their original sources, including GO, GEO, OMIM, KEGG, MIPS, HPRD and so on. MitProNet features a user-friendly graphic visualization interface to present functional analysis of linkage networks. As an up-to-date database and analysis platform, MitProNet should be particularly helpful in comprehensive studies of complicated

  15. plantMirP: an efficient computational program for the prediction of plant pre-miRNA by incorporating knowledge-based energy features.

    PubMed

    Yao, Yuangen; Ma, Chengzhang; Deng, Haiyou; Liu, Quan; Zhang, Jiying; Yi, Ming

    2016-10-20

    MicroRNAs are a predominant type of small non-coding RNAs, approximately 21 nucleotides in length, that play an essential role at the post-transcriptional level through RNA degradation, translational repression, or both, via an RNA-induced silencing complex. Identification of these molecules can aid the dissecting of their regulatory functions. The secondary structures of plant pre-miRNAs are much more complex than those of animal pre-miRNAs. In contrast to prediction tools for animal pre-miRNAs, much less effort has been devoted to plant pre-miRNAs. In this study, a set of novel knowledge-based energy features with very high discriminatory power is proposed and combined with existing features for specifically distinguishing the hairpins of real and pseudo plant pre-miRNAs. An area under the receiver operating characteristic curve of 0.9444 indicates that the 5 knowledge-based energy features have very high discriminatory power. The 10-fold cross-validation result demonstrates that plantMirP with full features achieves a promising sensitivity of 92.61% and a specificity of 98.88%. Across various datasets, plantMirP outperformed miPlantPreMat, PlantMiRNAPred, triplet-SVM, and microPred. Meanwhile, plantMirP can effectively balance sensitivity and specificity for real and pseudo plant pre-miRNAs. Taken together, we developed a promising SVM-based program, plantMirP, for predicting plant pre-miRNAs by incorporating knowledge-based energy features. This study shows it to be a valuable tool for miRNA-related studies. PMID:27472470
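    The reported sensitivity (92.61%) and specificity (98.88%) are computed from predictions on real (positive) versus pseudo (negative) pre-miRNA hairpins; the helper below shows the calculation on made-up sample labels.

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP).
    Labels: 1 = real pre-miRNA, 0 = pseudo hairpin."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# illustrative labels, not plantMirP's actual test set
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)  # 0.75, 0.75
```

    Balancing these two quantities, rather than maximizing either alone, is the criterion the abstract highlights.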

  16. Sbexpert users guide (version 1.0): A knowledge-based decision-support system for spruce beetle management. Forest Service general technical report

    SciTech Connect

    Reynolds, K.M.; Holsten, E.H.; Werner, R.A.

    1995-03-01

    SBexpert version 1.0 is a knowledge-based decision-support system for spruce beetle management, developed for use in Microsoft Windows. The user's guide provides detailed instructions on the use of all SBexpert features. SBexpert has four main subprograms: introduction, analysis, textbook, and literature. The introduction is the first of the five subtopics in the SBexpert help system. The analysis topic is an advisory system for spruce beetle management that provides recommendations for reducing spruce beetle hazard and risk to spruce stands, and is the main analytical topic in SBexpert. The textbook and literature topics provide complementary decision support for analysis.

  17. Application of knowledge-based classification techniques and geographic information systems (GIS) on satellite imagery for stormwater management

    NASA Astrophysics Data System (ADS)

    Abellera, Lourdes Villanueva

    Stormwater management is concerned with runoff control and water quality optimization. A stormwater model is a tool applied to reach this goal. Hydrologic variables required to run this model are usually obtained from field surveys and aerial photo-interpretation. However, these procedures are slow and difficult. An alternative is the automated processing of satellite imagery. We examined various studies that utilized satellite data to provide inputs to stormwater models. The overall results of the modeling effort are acceptable even if the outputs of satellite data processing are used instead of those obtained from standard techniques. One important model input parameter is land use, because it is associated with the amounts of runoff and pollutants generated in a parcel of land. Hence, we also explored new ways that land use can be identified from satellite imagery. Next, we demonstrated how the combined technologies of satellite remote sensing, knowledge-based systems, and geographic information systems (GIS) are used to delineate impervious surfaces from Landsat ETM+ data. Imperviousness is a critical model input parameter because it is proportional to runoff rates and volumes. We found that the raw satellite image, a normalized difference vegetation image, and ancillary data can provide rules to distinguish impervious surfaces satisfactorily. We also identified different levels of pollutant loadings (high, medium, low) from the same satellite imagery using similar techniques. It is useful to identify areas with high stormwater pollutant emissions so that they can be prioritized for the implementation of best management practices. The contaminants studied were total suspended solids, biochemical oxygen demand, total phosphorus, total Kjeldahl nitrogen, copper, and oil and grease. We observed that raw data, tasseled cap transformed images, and ancillary data can be utilized to make rules for mapping pollution levels. Finally, we devised a method to compute weights
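    The knowledge-based rules described above combine raw bands with a normalized difference vegetation index, NDVI = (NIR − Red)/(NIR + Red). The sketch below applies that standard formula with coarse, illustrative thresholds; the dissertation's calibrated rule set would differ.

```python
def ndvi(nir, red):
    """Normalized difference vegetation index from reflectance values."""
    return (nir - red) / (nir + red)

def classify_pixel(nir, red):
    """Very coarse rule set: vegetated, pervious bare soil, or impervious.
    Thresholds are assumptions for illustration only."""
    v = ndvi(nir, red)
    if v > 0.4:
        return "vegetation"
    if v > 0.1:
        return "pervious"
    return "impervious"

# three hypothetical pixels as (NIR, Red) reflectance pairs
labels = [classify_pixel(nir, red)
          for nir, red in [(0.60, 0.10), (0.30, 0.20), (0.25, 0.24)]]
```

    A real workflow would add ancillary GIS layers (parcel boundaries, zoning) as further rule conditions before mapping imperviousness or pollutant-loading classes.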

  18. A knowledge-based method for reducing attenuation artefacts caused by cardiac appliances in myocardial PET/CT

    NASA Astrophysics Data System (ADS)

    Hamill, James J.; Brunken, Richard C.; Bybel, Bohdan; Di Filippo, Frank P.; Faul, David D.

    2006-06-01

    Attenuation artefacts due to implanted cardiac defibrillator leads have previously been shown to adversely impact cardiac PET/CT imaging. In this study, the severity of the problem is characterized, and an image-based method is described which reduces the resulting artefact in PET. Automatic implantable cardioverter defibrillator (AICD) leads cause a moving-metal artefact in the CT sections from which the PET attenuation correction factors (ACFs) are derived. Fluoroscopic cine images were measured to demonstrate that the defibrillator's highly attenuating distal shocking coil moves rhythmically across distances on the order of 1 cm. Rhythmic motion of this magnitude was created in a phantom with a moving defibrillator lead. A CT study of the phantom showed that the artefact contained regions of incorrect, very high CT values and adjacent regions of incorrect, very low CT values. The study also showed that motion made the artefact more severe. A knowledge-based metal artefact reduction method (MAR) is described that reduces the magnitude of the error in the CT images, without use of the corrupted sinograms. The method modifies the corrupted image through a sequence of artefact detection procedures, morphological operations, adjustments of CT values and three-dimensional filtering. The method treats bone the same as metal. The artefact reduction method is shown to run in a few seconds, and is validated by applying it to a series of phantom studies in which reconstructed PET tracer distribution values are wrong by as much as 60% in regions near the CT artefact when MAR is not applied, but the errors are reduced to about 10% of expected values when MAR is applied. MAR changes PET image values by a few per cent in regions not close to the artefact. The changes can be larger in the vicinity of bone. In patient studies, the PET reconstruction without MAR sometimes results in anomalously high values in the infero-septal wall. Clinical performance of MAR is assessed by two
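
    The core loop of an image-based artefact reduction of this kind — detect implausible CT values, grow the flagged region morphologically, overwrite it with a plausible tissue value — can be sketched roughly as below. The thresholds and the soft-tissue replacement value are assumptions for illustration; the published method additionally applies three-dimensional filtering and handles bone separately.

```python
import numpy as np

SOFT_TISSUE_HU = 40.0   # replacement value -- an assumption, not the paper's

def dilate(mask, iterations=2):
    """Grow a 2D boolean mask by 4-connected dilation (a simple stand-in for
    the paper's morphological operations)."""
    m = mask.copy()
    for _ in range(iterations):
        g = m.copy()
        g[1:, :] |= m[:-1, :]; g[:-1, :] |= m[1:, :]
        g[:, 1:] |= m[:, :-1]; g[:, :-1] |= m[:, 1:]
        m = g
    return m

def reduce_metal_artefact(ct, hi=1500.0, lo=-600.0):
    """Flag implausibly high/low CT values near the lead, grow the flagged
    region, and overwrite it with a soft-tissue value; the corrected image
    would then feed the PET attenuation-correction step."""
    artefact = dilate((ct > hi) | (ct < lo))
    out = ct.copy()
    out[artefact] = SOFT_TISSUE_HU
    return out

# toy slice: uniform soft tissue with a bright metal voxel and a dark streak
ct = np.full((7, 7), 40.0)
ct[3, 3] = 3000.0
ct[3, 4] = -800.0
corrected = reduce_metal_artefact(ct)
```
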

  19. The use of knowledge-based Genetic Algorithm for starting time optimisation in a lot-bucket MRP

    NASA Astrophysics Data System (ADS)

    Ridwan, Muhammad; Purnomo, Andi

    2016-01-01

    In production planning, Material Requirement Planning (MRP) is usually developed on a time-bucket basis, where each period in the MRP represents a unit of time, usually a week. MRP has been implemented successfully in Make To Stock (MTS) manufacturing, where production must start before customer demand is received. To be implemented successfully in Make To Order (MTO) manufacturing, however, the conventional MRP must be modified to bring it in line with the real situation. In MTO manufacturing, the delivery schedule to customers is strictly defined and must be met in order to increase customer satisfaction. On the other hand, a company prefers to keep a constant number of workers, so the production lot size should be constant as well. Since a bucket in the conventional MRP system represents time, usually a week, a strict delivery schedule cannot be accommodated. Fortunately, there is a modified time-bucket MRP system, called the lot-bucket MRP system, proposed by Casimir in 1999. In the lot-bucket MRP system, a bucket represents a lot, and the lot size is preferably constant. The time to finish each lot can vary depending on the due date of the lot. The starting time of a lot must be determined so that every lot has a reasonable production time. So far there is no formal method to determine the optimum starting times in the lot-bucket MRP system. Trial and error is usually used, but it sometimes leaves several lots with very short production times, making the lot-bucket MRP infeasible to execute. This paper presents the use of a Genetic Algorithm (GA) for optimisation of the starting times in a lot-bucket MRP system. Although GA is well known as a powerful search algorithm, improvement is still required to increase the likelihood of the GA finding the optimum solution in a shorter time. A knowledge-based system has been embedded in the proposed GA as the improvement effort, and it is proven that the
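
    The optimisation described above can be sketched as a small GA. The chromosome encoding (a list of integer starting times), the fitness (maximise the tightest production window subject to due dates), and the knowledge-based seeding heuristic are all our assumptions for illustration; the abstract does not specify the paper's exact operators.

```python
import random

def windows(starts, due_dates):
    """Production window of each lot: from its start to the next lot's start
    (the last lot runs until its own due date)."""
    ends = starts[1:] + [due_dates[-1]]
    return [e - s for s, e in zip(starts, ends)]

def fitness(starts, due_dates):
    """Maximise the tightest window; schedules where a lot finishes after its
    due date, or gets a non-positive window, score -inf (infeasible)."""
    ends = starts[1:] + [due_dates[-1]]
    if any(e > d for e, d in zip(ends, due_dates)):
        return float("-inf")
    w = windows(starts, due_dates)
    return min(w) if min(w) > 0 else float("-inf")

def seeded_population(due_dates, size):
    """Knowledge-based seeding (our stand-in for the paper's embedded expert
    rules): one individual with evenly spread starts, the rest random."""
    n, horizon = len(due_dates), due_dates[-1]
    seed = [0]
    for d in due_dates[:-1]:
        seed.append(min(d, seed[-1] + horizon // n))
    pop = [seed]
    while len(pop) < size:
        pop.append(sorted(random.randint(0, horizon - 1) for _ in range(n)))
    return pop

def evolve(due_dates, generations=150, size=30, rng_seed=1):
    random.seed(rng_seed)
    pop = seeded_population(due_dates, size)
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, due_dates), reverse=True)
        parents = pop[: size // 2]             # truncation selection + elitism
        children = []
        while len(children) < size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(due_dates))
            child = a[:cut] + b[cut:]
            i = random.randrange(len(child))   # point mutation
            child[i] = random.randint(0, due_dates[-1] - 1)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda c: fitness(c, due_dates))

due = [10, 14, 18, 25]
best = evolve(due)
```

    Because the seeded individual is already feasible and elitism never discards the best chromosome, the GA can only match or improve on the knowledge-based starting point.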

  20. TH-A-9A-08: Knowledge-Based Quality Control of Clinical Stereotactic Radiosurgery Treatment Plans

    SciTech Connect

    Shiraishi, S; Moore, K L; Tan, J; Olsen, L

    2014-06-15

    Purpose: To develop a quality control tool to reduce stereotactic radiosurgery (SRS) planning variability using models that predict achievable plan quality metrics (QMs) based on individual patient anatomy. Methods: Using a knowledge-based methodology that quantitatively correlates anatomical geometric features to resultant organ-at-risk (OAR) dosimetry, we developed models for predicting achievable OAR dose-volume histograms (DVHs) by training with a cohort of previously treated SRS patients. The DVH-based QMs used in this work are the gradient measure, GM = (3/(4π))^(1/3) × [V(50%)^(1/3) − V(100%)^(1/3)], and the V10Gy of normal brain. As GM quantifies the total rate of dose fall-off around the planning target volume (PTV), all voxels inside the patient's body contour were treated as OAR for DVH prediction. 35 previously treated SRS plans from our institution were collected; all were planned with non-coplanar volumetric-modulated arc therapy to prescription doses of 12–25 Gy. Of the 35-patient cohort, 15 were used for model training and 20 for model validation. The accuracy of the predictions was quantified by the mean and the standard deviation of the difference between clinical and predicted QMs, δQM = QM-clin − QM-pred. Results: The best agreement between predicted and clinical QMs was obtained when models were built separately for V-PTV < 2.5 cc and V-PTV > 2.5 cc. Eight patients trained the V-PTV < 2.5 cc model and seven patients trained the V-PTV > 2.5 cc model. The mean and the standard deviation of δGM were 0.3±0.4 mm for the training sets and −0.1±0.6 mm for the validation sets, demonstrating highly accurate GM predictions. V10Gy predictions were also highly accurate, with δV10Gy = 0.8±0.7 cc for the training sets and δV10Gy = 0.7±1.4 cc for the validation sets. Conclusion: The accuracy of the models in predicting two key SRS quality metrics highlights the potential of this technique for quality control of SRS treatments. Future investigations will seek to determine
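
    The gradient measure above is simply the difference between the equivalent-sphere radii of the 50% and 100% prescription isodose volumes. A short sketch (the volumes in the example are illustrative, not from the study):

```python
import math

def gradient_measure(v50_cc, v100_cc):
    """GM = r50 - r100: the difference between the equivalent-sphere radii
    (in cm) of the 50% and 100% isodose volumes (in cc = cm^3), since a
    sphere of volume V has radius (3V / 4*pi)^(1/3)."""
    radius = lambda v: (3.0 * v / (4.0 * math.pi)) ** (1.0 / 3.0)
    return radius(v50_cc) - radius(v100_cc)

# illustrative volumes: 8 cc at the 50% isodose, 1 cc at the 100% isodose
gm = gradient_measure(8.0, 1.0)
```

    A small GM means dose falls off steeply outside the target, which is why it serves as a compact plan-quality metric for SRS.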

  1. Producing Qualified Graduates and Assuring Education Quality in the Knowledge-Based Society: Roles and Issues of Graduate Education. Report of the International Workshop on Graduate Education, 2009. RIHE International Seminar Reports. No.14

    ERIC Educational Resources Information Center

    Research Institute for Higher Education, Hiroshima University (NJ3), 2010

    2010-01-01

    With special funding from the Ministry of Education and Science in 2008, the Research Institute for Higher Education (RIHE) at Hiroshima University has been able to implement a new research project on the reform of higher education in the knowledge-based society of the 21st century. RIHE thus hosted the second International Workshop on…

  2. It Ain't the Heat, It's the Humanity: Evidence and Implications of a Knowledge-Based Consensus on Man-Made Global Warming

    NASA Astrophysics Data System (ADS)

    Jacobs, P.; Cook, J.; Nuccitelli, D. A.

    2013-12-01

    One of the most worrisome misconceptions among the general public about climate change is a belief that scientists disagree not only about the cause of the present climate change, but also whether or not the planet is currently warming. Recent surveys have demonstrated that an overwhelming consensus exists, both within the scientific literature and among scientists with climate expertise, that the planet is warming and humans are driving this climatic change. This disconnect, or 'consensus gap', between scientific agreement and public belief has significant consequences for public understanding of the reality and cause of climate change, as well as support for potential solutions. Ensuring that the consensus message is not simply broadcast but is also accepted as legitimate by the public appears to be a primary education and communications opportunity. While the existence of a consensus is not itself evidence of a position's truth, according to Miller (2013) scientific consensus can be taken as evidence that a position is true if it is 'knowledge-based', satisfying the conditions of social calibration, apparent consilience of evidence, and social diversity. We demonstrate that the scientific consensus on anthropogenic climate change is knowledge-based, satisfying Miller's criteria. In so doing, we hope to increase confidence in its use as an education and communications tool, and assure the public of its validity. We show the consensus is socially calibrated, based on common evidential standards, ontological schemes, and shared formalism. We establish that consilience of evidence points overwhelmingly to the reality of anthropogenic climate change by examining the evidence from several perspectives. We identify unique fingerprints expected as a result of increased greenhouse forcing, eliminate potential natural drivers of climate change as the cause of the present change, and demonstrate the consistency of the observed climate response with known changes in natural

  3. Knowledge-based systems as decision support tools in an ecosystem approach to fisheries: Comparing a fuzzy-logic and a rule-based approach

    NASA Astrophysics Data System (ADS)

    Jarre, Astrid; Paterson, Barbara; Moloney, Coleen L.; Miller, David C. M.; Field, John G.; Starfield, Anthony M.

    2008-10-01

    In an ecosystem approach to fisheries (EAF), management must draw on information of widely different types, and information addressing various scales. Knowledge-based systems assist in the decision-making process by summarising this information in a logical, transparent and reproducible way. Both rule-based Boolean and fuzzy-logic models have been used successfully as knowledge-based decision support tools. This study compares two such systems relevant to fisheries management in an EAF developed for the southern Benguela. The first is a rule-based system for the prediction of anchovy recruitment and the second is a fuzzy-logic tool to monitor implementation of an EAF in the sardine fishery. We construct a fuzzy-logic counterpart to the rule-based model, and a rule-based counterpart to the fuzzy-logic model, compare their results, and include feedback from potential users of these two decision support tools in our evaluation of the two approaches. With respect to the model objectives, neither method clearly outperformed the other. The advantages of numerically processing continuous variables, and interpreting the final output, as in fuzzy-logic models, can be weighed up against the advantages of using a few, qualitative, easy-to-understand categories as in rule-based models. The natural language used in rule-based implementations is easily understood by, and communicated among, users of these systems. Users unfamiliar with fuzzy-set theory must “trust” the logic of the model. Graphical visualization of intermediate and end results is an important advantage of any system. Applying the two approaches in parallel improved our understanding of the model as well as of the underlying problems. Even for complex problems, small knowledge-based systems such as the ones explored here are worth developing and using. Their strengths lie in (i) synthesis of the problem in a logical and transparent framework, (ii) helping scientists to deliberate how to apply their science to
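
    The contrast between the two approaches can be illustrated in a few lines: a Boolean rule-base switches abruptly between a few qualitative categories, while a fuzzy counterpart replaces the hard category boundary with a membership ramp so nearby inputs give nearby outputs. The rules, categories, and membership edges below are hypothetical, not the published Benguela rule-bases.

```python
def rule_based_recruitment(sst, upwelling):
    """Boolean rule-base: categorical inputs map to one of a few categories
    (hypothetical rules for illustration)."""
    if sst == "warm" and upwelling == "weak":
        return "poor"
    if sst == "cool" and upwelling == "strong":
        return "good"
    return "average"

def warm_membership(sst_anomaly_c):
    """Fuzzy counterpart: a ramp membership function between 0.5 and 1.5 deg C
    (illustrative edges) replaces the hard 'warm' category."""
    lo_edge, hi_edge = 0.5, 1.5
    m = (sst_anomaly_c - lo_edge) / (hi_edge - lo_edge)
    return min(1.0, max(0.0, m))

def fuzzy_recruitment_score(sst_anomaly_c):
    """Blend category scores (good = 1, poor = 0) by membership instead of
    switching abruptly at a category boundary."""
    w = warm_membership(sst_anomaly_c)
    return (1.0 - w) * 1.0 + w * 0.0
```

    The rule-based version is easy to read aloud to stakeholders; the fuzzy version degrades gracefully near category boundaries — exactly the trade-off the study discusses.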

  4. Creating a knowledge-based economy in the United Arab Emirates: realising the unfulfilled potential of women in the science, technology and engineering fields

    NASA Astrophysics Data System (ADS)

    Ghazal Aswad, Noor; Vidican, Georgeta; Samulewicz, Diana

    2011-12-01

    As the United Arab Emirates (UAE) moves towards a knowledge-based economy, maximising the participation of the national workforce, especially women, in the transformation process is crucial. Using survey methods and semi-structured interviews, this paper examines the factors that influence women's decisions regarding their degree programme and their attitudes towards science, technology and engineering (STE). The findings point to the importance of adapting mainstream policies to the local context and the need to better understand the effect of culture and society on the individual and the economy. There is a need to increase interest in STE by raising awareness of what the fields entail, potential careers and their suitability with existing cultural beliefs. Also suggested is the need to overcome negative stereotypes of engineering, implement initiatives for further family involvement at the higher education level, as well as the need to ensure a greater availability of STE university programmes across the UAE.

  5. The feasibility of sub-millisievert coronary CT angiography with low tube voltage, prospective ECG gating, and a knowledge-based iterative model reconstruction algorithm.

    PubMed

    Park, Chul Hwan; Lee, Joohee; Oh, Chisuk; Han, Kyung Hwa; Kim, Tae Hoon

    2015-12-01

    We evaluated the feasibility of sub-millisievert (mSv) coronary CT angiography (CCTA) using low tube voltage, prospective ECG gating, and a knowledge-based iterative model reconstruction algorithm. Twenty-four non-obese healthy subjects (M:F 13:11; mean age 50.2 ± 7.8 years) were enrolled. Three sets of CT images were reconstructed using three different reconstruction methods: filtered back projection (FBP), iterative reconstruction (IR), and knowledge-based iterative model reconstruction (IMR). The scanning parameters were as follows: step-and-shoot axial scanning, 80 kVp, and 200 mAs. On the three sets of CT images, the attenuation and image noise values were measured at the aortic root. The signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) were calculated at the proximal right coronary artery and the left main coronary artery. The qualitative image quality of the CCTA with IMR was assessed using a 4-point grading scale (grade 1, poor; grade 4, excellent). The mean radiation dose of the CCTA was 0.89 ± 0.09 mSv. The attenuation values with IMR were not different from those of other reconstruction methods. The image noise with IMR was significantly lower than with IR and FBP. Compared to FBP, the noise reduction rate of IMR was 69 %. The SNR and CNR of CCTA with IMR were significantly higher than with FBP or IR. On the qualitative analysis with IMR, all included segments were diagnostic (grades 2, 3, and 4), and the mean image quality score was 3.6 ± 0.6. In conclusion, CCTA with low tube voltage, prospective ECG gating, and an IMR algorithm might be a feasible method that allows for sub-millisievert radiation doses and good image quality when used with non-obese subjects. PMID:26521066
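
    The SNR and CNR figures of merit used above follow the standard CT definitions: mean ROI attenuation over image noise, and contrast between two ROIs over image noise. A minimal sketch (the ROI values are illustrative, not the study's measurements):

```python
def snr(mean_roi_hu, noise_sd_hu):
    """Signal-to-noise ratio: mean ROI attenuation over image noise (SD of HU)."""
    return mean_roi_hu / noise_sd_hu

def cnr(mean_vessel_hu, mean_background_hu, noise_sd_hu):
    """Contrast-to-noise ratio: vessel-to-background contrast over noise."""
    return (mean_vessel_hu - mean_background_hu) / noise_sd_hu

# illustrative ROI measurements at a proximal coronary artery
snr_fbp = snr(450.0, 45.0)           # noisy FBP reconstruction
snr_imr = snr(450.0, 45.0 * 0.31)    # IMR at the reported ~69% noise reduction
```

    Because the attenuation values were unchanged across reconstruction methods, the 69% noise reduction translates directly into roughly threefold higher SNR and CNR, as the study observed.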

  6. Reification of abstract concepts to improve comprehension using interactive virtual environments and a knowledge-based design: a renal physiology model.

    PubMed

    Alverson, Dale C; Saiki, Stanley M; Caudell, Thomas P; Goldsmith, Timothy; Stevens, Susan; Saland, Linda; Colleran, Kathleen; Brandt, John; Danielson, Lee; Cerilli, Lisa; Harris, Alexis; Gregory, Martin C; Stewart, Randall; Norenberg, Jeffery; Shuster, George; Panaoitis; Holten, James; Vergera, Victor M; Sherstyuk, Andrei; Kihmm, Kathleen; Lui, Jack; Wang, Kin Lik

    2006-01-01

    Several abstract concepts in medical education are difficult to teach and comprehend. In order to address this challenge, we have been applying the approach of reification of abstract concepts using interactive virtual environments and a knowledge-based design. Reification is the process of making abstract concepts and events, beyond the realm of direct human experience, concrete and accessible to teachers and learners. Entering virtual worlds and simulations not otherwise easily accessible provides an opportunity to create, study, and evaluate the emergence of knowledge and comprehension from the direct interaction of learners with otherwise complex abstract ideas and principles by bringing them to life. Using a knowledge-based design process and appropriate subject matter experts, knowledge structure methods are applied in order to prioritize, characterize important relationships, and create a concept map that can be integrated into the reified models that are subsequently developed. Applying these principles, our interdisciplinary team has been developing a reified model of the nephron into which important physiologic functions can be integrated and rendered into a three-dimensional virtual environment using Flatland, a virtual environment development software tool, within which learners can interact using off-the-shelf hardware. The nephron model can be driven dynamically by a rules-based artificial intelligence engine, applying the rules and concepts developed in conjunction with the subject matter experts. In the future, the nephron model can be used to interactively demonstrate a number of physiologic principles or a variety of pathological processes that may be difficult to teach and understand. In addition, this approach to reification can be applied to a host of other physiologic and pathological concepts in other systems. These methods will require further evaluation to determine their impact and role in learning.

  7. An Innovative Approach to Addressing Childhood Obesity: A Knowledge-Based Infrastructure for Supporting Multi-Stakeholder Partnership Decision-Making in Quebec, Canada

    PubMed Central

    Addy, Nii Antiaye; Shaban-Nejad, Arash; Buckeridge, David L.; Dubé, Laurette

    2015-01-01

    Multi-stakeholder partnerships (MSPs) have become a widespread means for deploying policies in a whole of society strategy to address the complex problem of childhood obesity. However, decision-making in MSPs is fraught with challenges, as decision-makers are faced with complexity, and have to reconcile disparate conceptualizations of knowledge across multiple sectors with diverse sets of indicators and data. These challenges can be addressed by supporting MSPs with innovative tools for obtaining, organizing and using data to inform decision-making. The purpose of this paper is to describe and analyze the development of a knowledge-based infrastructure to support MSP decision-making processes. The paper emerged from a study to define specifications for a knowledge-based infrastructure to provide decision support for community-level MSPs in the Canadian province of Quebec. As part of the study, a process assessment was conducted to understand the needs of communities as they collect, organize, and analyze data to make decisions about their priorities. The result of this process is a “portrait”, which is an epidemiological profile of health and nutrition in their community. Portraits inform strategic planning and development of interventions, and are used to assess the impact of interventions. Our key findings indicate ambiguities and disagreement among MSP decision-makers regarding causal relationships between actions and outcomes, and the relevant data needed for making decisions. MSP decision-makers expressed a desire for easy-to-use tools that facilitate the collection, organization, synthesis, and analysis of data, to enable decision-making in a timely manner. Findings inform conceptual modeling and ontological analysis to capture the domain knowledge and specify relationships between actions and outcomes. This modeling and analysis provide the foundation for an ontology, encoded using OWL 2 Web Ontology Language. The ontology is developed to provide

  8. An innovative approach to addressing childhood obesity: a knowledge-based infrastructure for supporting multi-stakeholder partnership decision-making in Quebec, Canada.

    PubMed

    Addy, Nii Antiaye; Shaban-Nejad, Arash; Buckeridge, David L; Dubé, Laurette

    2015-02-01

    Multi-stakeholder partnerships (MSPs) have become a widespread means for deploying policies in a whole of society strategy to address the complex problem of childhood obesity. However, decision-making in MSPs is fraught with challenges, as decision-makers are faced with complexity, and have to reconcile disparate conceptualizations of knowledge across multiple sectors with diverse sets of indicators and data. These challenges can be addressed by supporting MSPs with innovative tools for obtaining, organizing and using data to inform decision-making. The purpose of this paper is to describe and analyze the development of a knowledge-based infrastructure to support MSP decision-making processes. The paper emerged from a study to define specifications for a knowledge-based infrastructure to provide decision support for community-level MSPs in the Canadian province of Quebec. As part of the study, a process assessment was conducted to understand the needs of communities as they collect, organize, and analyze data to make decisions about their priorities. The result of this process is a "portrait", which is an epidemiological profile of health and nutrition in their community. Portraits inform strategic planning and development of interventions, and are used to assess the impact of interventions. Our key findings indicate ambiguities and disagreement among MSP decision-makers regarding causal relationships between actions and outcomes, and the relevant data needed for making decisions. MSP decision-makers expressed a desire for easy-to-use tools that facilitate the collection, organization, synthesis, and analysis of data, to enable decision-making in a timely manner. Findings inform conceptual modeling and ontological analysis to capture the domain knowledge and specify relationships between actions and outcomes. This modeling and analysis provide the foundation for an ontology, encoded using OWL 2 Web Ontology Language. The ontology is developed to provide semantic

  9. An innovative approach to addressing childhood obesity: a knowledge-based infrastructure for supporting multi-stakeholder partnership decision-making in Quebec, Canada.

    PubMed

    Addy, Nii Antiaye; Shaban-Nejad, Arash; Buckeridge, David L; Dubé, Laurette

    2015-01-23

    Multi-stakeholder partnerships (MSPs) have become a widespread means for deploying policies in a whole of society strategy to address the complex problem of childhood obesity. However, decision-making in MSPs is fraught with challenges, as decision-makers are faced with complexity, and have to reconcile disparate conceptualizations of knowledge across multiple sectors with diverse sets of indicators and data. These challenges can be addressed by supporting MSPs with innovative tools for obtaining, organizing and using data to inform decision-making. The purpose of this paper is to describe and analyze the development of a knowledge-based infrastructure to support MSP decision-making processes. The paper emerged from a study to define specifications for a knowledge-based infrastructure to provide decision support for community-level MSPs in the Canadian province of Quebec. As part of the study, a process assessment was conducted to understand the needs of communities as they collect, organize, and analyze data to make decisions about their priorities. The result of this process is a "portrait", which is an epidemiological profile of health and nutrition in their community. Portraits inform strategic planning and development of interventions, and are used to assess the impact of interventions. Our key findings indicate ambiguities and disagreement among MSP decision-makers regarding causal relationships between actions and outcomes, and the relevant data needed for making decisions. MSP decision-makers expressed a desire for easy-to-use tools that facilitate the collection, organization, synthesis, and analysis of data, to enable decision-making in a timely manner. Findings inform conceptual modeling and ontological analysis to capture the domain knowledge and specify relationships between actions and outcomes. This modeling and analysis provide the foundation for an ontology, encoded using OWL 2 Web Ontology Language. The ontology is developed to provide semantic

  10. Development and Assessment of a Geographic Knowledge-Based Model for Mapping Suitable Areas for Rift Valley Fever Transmission in Eastern Africa.

    PubMed

    Tran, Annelise; Trevennec, Carlène; Lutwama, Julius; Sserugga, Joseph; Gély, Marie; Pittiglio, Claudia; Pinto, Julio; Chevalier, Véronique

    2016-09-01

    Rift Valley fever (RVF), a mosquito-borne disease affecting ruminants and humans, is one of the most important viral zoonoses in Africa. The objective of the present study was to develop a geographic knowledge-based method to map the areas suitable for RVF amplification and RVF spread in four East African countries, namely, Kenya, Tanzania, Uganda and Ethiopia, and to assess the predictive accuracy of the model using livestock outbreak data from Kenya and Tanzania. Risk factors and their relative importance regarding RVF amplification and spread were identified from a literature review. A numerical weight was calculated for each risk factor using an analytical hierarchy process. The corresponding geographic data were collected, standardized and combined based on a weighted linear combination to produce maps of the suitability for RVF transmission. The accuracy of the resulting maps was assessed using RVF outbreak locations in livestock reported in Kenya and Tanzania between 1998 and 2012 and ROC curve analysis. Our results confirmed the capacity of the geographic information system-based multi-criteria evaluation method to synthesize available scientific knowledge and to accurately map (AUC = 0.786; 95% CI [0.730-0.842]) the spatial heterogeneity of RVF suitability in East Africa. This approach provides users with a straightforward and easy way to update the maps as data availability or scientific knowledge develops. PMID:27631374
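
    The two numerical steps named above — deriving risk-factor weights with an analytic hierarchy process (AHP) and combining standardized layers with a weighted linear combination — can be sketched as follows. The three factors, the pairwise judgements, and the layer values are hypothetical; only the mechanics follow the standard AHP/WLC recipe.

```python
import numpy as np

def ahp_weights(pairwise, iters=100):
    """Risk-factor weights as the principal eigenvector of a Saaty
    pairwise-comparison matrix, computed by power iteration and
    normalized to sum to 1."""
    a = np.asarray(pairwise, dtype=float)
    w = np.ones(a.shape[0]) / a.shape[0]
    for _ in range(iters):
        w = a @ w
        w /= w.sum()
    return w

def suitability(layers, weights):
    """Weighted linear combination of standardized (0-1) raster layers."""
    return sum(wi * li for wi, li in zip(weights, layers))

# hypothetical 3-factor judgement: rainfall > host density > soil type
pairwise = [[1.0, 3.0, 5.0],
            [1/3, 1.0, 3.0],
            [1/5, 1/3, 1.0]]
w = ahp_weights(pairwise)

layers = [np.array([[0.9, 0.1]]),   # standardized rainfall
          np.array([[0.8, 0.2]]),   # standardized host density
          np.array([[0.5, 0.5]])]   # standardized soil suitability
risk = suitability(layers, w)
```

    Updating the map when new knowledge arrives only requires revising the pairwise matrix or swapping a layer — the "straightforward and easy update" the authors highlight.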

  11. Numerical, Analytical, Experimental Study of Fluid Dynamic Forces in Seals. Volume 1; Executive Summary and Description of Knowledge-Based System

    NASA Technical Reports Server (NTRS)

    Liang, Anita D. (Technical Monitor); Shapiro, Wilbur; Aggarwal, Bharat

    2004-01-01

    The objectives of the program were to develop computational fluid dynamics (CFD) codes and simpler industrial codes for analyzing and designing advanced seals for air-breathing and space propulsion engines. The CFD code SCISEAL is capable of producing full three-dimensional flow field information for a variety of cylindrical configurations. An implicit multidomain capability allows the division of complex flow domains to allow optimum use of computational cells. SCISEAL also has the unique capability to produce cross-coupled stiffness and damping coefficients for rotordynamic computations. The industrial codes consist of a series of separate stand-alone modules designed for expeditious parametric analyses and optimization of a wide variety of cylindrical and face seals. Coupled through a Knowledge-Based System (KBS) that provides a user-friendly Graphical User Interface (GUI), the industrial codes are PC-based, running under the OS/2 operating system. These codes were designed to treat film seals where a clearance exists between the rotating and stationary components. Leakage is inhibited by surface roughness, small but stiff clearance films, and viscous pumping devices. The codes have proven to be a valuable resource for seal development for future air-breathing and space propulsion engines.

  12. Performance of a Knowledge-Based Model for Optimization of Volumetric Modulated Arc Therapy Plans for Single and Bilateral Breast Irradiation

    PubMed Central

    Fogliata, Antonella; Nicolini, Giorgia; Bourgier, Celine; Clivio, Alessandro; De Rose, Fiorenza; Fenoglietto, Pascal; Lobefalo, Francesca; Mancosu, Pietro; Tomatis, Stefano; Vanetti, Eugenio; Scorsetti, Marta; Cozzi, Luca

    2015-01-01

    Purpose To evaluate the performance of a model-based optimisation process for volumetric modulated arc therapy, VMAT, applied to whole breast irradiation. Methods and Materials A set of 150 VMAT dose plans with simultaneous integrated boost were selected to train a model for the prediction of dose-volume constraints. The dosimetric validation was done on different groups of patients from three institutes for single (50 cases) and bilateral (20 cases) breast irradiation. Results Quantitative improvements were observed between the model-based and the reference plans, particularly for heart dose. Of 460 analysed dose-volume objectives, 13% of the clinical plans failed to meet the constraints while the respective model-based plans succeeded. Only in 5 cases did the reference plans pass while the respective model-based plans failed the criteria. For the bilateral breast analysis, the model-based plans resulted in superior or equivalent dose distributions to the reference plans in 96% of the cases. Conclusions Plans optimised using a knowledge-based model to determine the dose-volume constraints showed dosimetric improvements when compared to earlier approved clinical plans. The model was applicable to patients from different centres for both single and bilateral breast irradiation. The data suggest that the dose-volume constraint optimisation can be effectively automated with the new engine and could encourage its application to clinical practice. PMID:26691687

  13. Definition of the applicability domains of knowledge-based predictive toxicology expert systems by using a structural fragment-based approach.

    PubMed

    Ellison, Claire M; Enoch, Steven J; Cronin, Mark Td; Madden, Judith C; Judson, Philip

    2009-11-01

    The applicability domain of a (quantitative) structure-activity relationship ([Q]SAR) must be defined, if a model is to be used successfully for toxicity prediction, particularly for regulatory purposes. Previous efforts to set guidelines on the definition of applicability domains have often been biased toward quantitative, rather than qualitative, models. As a result, novel techniques are still required to define the applicability domains of structural alert models and knowledge-based systems. By using Derek for Windows as an example, this study defined the domain for the skin sensitisation structural alert rule-base. This was achieved by fragmenting the molecules within a training set of compounds, then searching the fragments for those created from a test compound. This novel method was able to highlight test chemicals which differed from those in the training set. The information was then used to designate chemicals as being either within or outside the domain of applicability for the structural alert on which that training set was based.
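
    The fragment-based domain check described above reduces to a simple set operation once molecules are represented as fragment sets: a test compound is flagged as outside the applicability domain when it contains fragments never seen in the alert's training set. The fragment strings below are hypothetical stand-ins for the output of a real molecular fragmentation algorithm.

```python
def in_domain(test_fragments, training_fragment_sets):
    """A test compound lies inside the applicability domain when every one of
    its fragments also occurs somewhere in the training set; otherwise the
    unmatched fragments flag it as out of domain."""
    known = set().union(*training_fragment_sets)
    unknown = set(test_fragments) - known
    return not unknown, unknown

# hypothetical fragment sets (stand-ins for real molecular fragmentation,
# e.g. SMILES substructures produced by a fragmentation algorithm)
training = [{"c1ccccc1", "C(=O)O"},
            {"c1ccccc1", "N"}]
ok, missing = in_domain({"c1ccccc1", "N"}, training)
ok_out, missing_out = in_domain({"c1ccccc1", "S(=O)(=O)"}, training)
```

    Returning the unmatched fragments, not just a yes/no answer, is what lets the method "highlight test chemicals which differed from those in the training set".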

  14. FluKB: A Knowledge-Based System for Influenza Vaccine Target Discovery and Analysis of the Immunological Properties of Influenza Viruses.

    PubMed

    Simon, Christian; Kudahl, Ulrich J; Sun, Jing; Olsen, Lars Rønn; Zhang, Guang Lan; Reinherz, Ellis L; Brusic, Vladimir

    2015-01-01

    FluKB is a knowledge-based system focusing on data and analytical tools for influenza vaccine discovery. The main goal of FluKB is to provide access to curated influenza sequence and epitope data and enhance the analysis of influenza sequence diversity and the analysis of targets of immune responses. FluKB consists of more than 400,000 influenza protein sequences, known epitope data (357 verified T-cell epitopes, 685 HLA binders, and 16 naturally processed MHC ligands), and a collection of 28 influenza antibodies and their structurally defined B-cell epitopes. FluKB was built using a modular framework allowing the implementation of analytical workflows and includes standard search tools, such as keyword search and sequence similarity queries, as well as advanced tools for the analysis of sequence variability. The advanced analytical tools for vaccine discovery include visual mapping of T- and B-cell vaccine targets and assessment of neutralizing antibody coverage. FluKB supports the discovery of vaccine targets and the analysis of viral diversity and its implications for vaccine discovery, as well as potential T-cell breadth and antibody cross-neutralization involving multiple strains. FluKB is representative of a new generation of databases that integrates data, analytical tools, and analytical workflows that enable comprehensive analysis and automatic generation of analysis reports.
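
    A sequence-similarity query of the kind such a database front end offers can be sketched with an alignment-free k-mer comparison. This is not FluKB's actual search implementation (which is not described in the abstract), and the sequences are synthetic; it only illustrates ranking database entries by similarity to a query.

```python
def kmers(seq, k=3):
    """All overlapping k-length subsequences of a protein sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b, k=3):
    """k-mer Jaccard similarity: a cheap, alignment-free stand-in for a
    sequence-similarity query."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb)

# toy 'database' of synthetic hemagglutinin-like fragments
db = {"strain_A": "MKAILVVLLYTFAT",
      "strain_B": "MKTIIALSYIFCLA"}
query = "MKAILVVLLYTFAS"
best_hit = max(db, key=lambda name: jaccard(db[name], query))
```
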

  15. Validation of an enhanced knowledge-based method for segmentation and quantitative analysis of intrathoracic airway trees from three-dimensional CT images

    SciTech Connect

    Sonka, M.; Park, W.; Hoffman, E.A.

    1995-12-31

    Accurate assessment of airway physiology, evaluated in terms of geometric changes, is critically dependent upon the accurate imaging and image segmentation of the three-dimensional airway tree structure. The authors have previously reported a knowledge-based method for three-dimensional airway tree segmentation from high resolution CT (HRCT) images. Here, they report a substantially improved version of the method. In the current implementation, the method consists of several stages. First, the lung borders are automatically determined in the three-dimensional set of HRCT data. The primary airway tree is semi-automatically identified. In the next stage, potential airways are determined in individual CT slices using a rule-based system that uses contextual information and a priori knowledge about pulmonary anatomy. Using three-dimensional connectivity properties of the pulmonary airway tree, the three-dimensional tree is constructed from the set of adjacent slices. The method's performance and accuracy were assessed in five 3D HRCT canine images. Computer-identified airways matched 226/258 observer-defined airways (87.6%); the computer method failed to detect the airways in the remaining 32 locations. By visual assessment of rendered airway trees, the experienced observers judged the computer-detected airway trees as highly realistic.

  16. Prediction and validation of protein–protein interactors from genome-wide DNA-binding data using a knowledge-based machine-learning approach

    PubMed Central

    Homan, Bernou; Mohamed, Stephanie; Harvey, Richard P.; Bouveret, Romaric

    2016-01-01

    The ability to accurately predict the DNA targets and interacting cofactors of transcriptional regulators from genome-wide data can significantly advance our understanding of gene regulatory networks. NKX2-5 is a homeodomain transcription factor that sits high in the cardiac gene regulatory network and is essential for normal heart development. We previously identified genomic targets for NKX2-5 in mouse HL-1 atrial cardiomyocytes using DNA-adenine methyltransferase identification (DamID). Here, we apply machine learning algorithms and propose a knowledge-based feature selection method for predicting NKX2-5 protein–protein interactions based on motif grammar in genome-wide DNA-binding data. We assessed model performance using leave-one-out cross-validation and a completely independent DamID experiment performed with replicates. In addition to identifying previously described NKX2-5-interacting proteins, including GATA, HAND and TBX family members, a number of novel interactors were identified, with direct protein–protein interactions between NKX2-5 and retinoid X receptor (RXR), paired-related homeobox (PRRX) and Ikaros zinc fingers (IKZF) validated using the yeast two-hybrid assay. We also found that the interaction of RXRα with NKX2-5 mutations found in congenital heart disease (Q187H, R189G and R190H) was altered. These findings highlight an intuitive approach to accessing protein–protein interaction information of transcription factors in DNA-binding experiments. PMID:27683156
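
    The leave-one-out cross-validation used above for performance assessment can be sketched as follows. The 1-nearest-neighbour classifier and the toy data are stand-ins for illustration, not the paper's models or motif-grammar features.

```python
# Leave-one-out cross-validation sketch (stdlib only): hold out each
# sample in turn, fit on the rest, and score the held-out prediction.
# A 1-nearest-neighbour rule serves as a hypothetical classifier.

def loo_accuracy(points, labels):
    correct = 0
    for i in range(len(points)):
        # hold out sample i, "train" on the remaining samples
        train = [(p, l) for j, (p, l) in enumerate(zip(points, labels)) if j != i]
        # 1-NN prediction by squared Euclidean distance to the held-out point
        pred = min(train,
                   key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], points[i])))[1]
        correct += (pred == labels[i])
    return correct / len(points)

pts = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.1)]
lab = ["neg", "neg", "pos", "pos"]
acc = loo_accuracy(pts, lab)  # each held-out point's nearest neighbour shares its label
```

    Every sample is used exactly once for testing, which makes the estimate well suited to the small labelled sets typical of validated interaction data.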

  17. Development and Assessment of a Geographic Knowledge-Based Model for Mapping Suitable Areas for Rift Valley Fever Transmission in Eastern Africa

    PubMed Central

    Tran, Annelise; Trevennec, Carlène; Lutwama, Julius; Sserugga, Joseph; Gély, Marie; Pittiglio, Claudia; Pinto, Julio; Chevalier, Véronique

    2016-01-01

    Rift Valley fever (RVF), a mosquito-borne disease affecting ruminants and humans, is one of the most important viral zoonoses in Africa. The objective of the present study was to develop a geographic knowledge-based method to map the areas suitable for RVF amplification and RVF spread in four East African countries, namely, Kenya, Tanzania, Uganda and Ethiopia, and to assess the predictive accuracy of the model using livestock outbreak data from Kenya and Tanzania. Risk factors and their relative importance regarding RVF amplification and spread were identified from a literature review. A numerical weight was calculated for each risk factor using an analytical hierarchy process. The corresponding geographic data were collected, standardized and combined based on a weighted linear combination to produce maps of the suitability for RVF transmission. The accuracy of the resulting maps was assessed using RVF outbreak locations in livestock reported in Kenya and Tanzania between 1998 and 2012 and the ROC curve analysis. Our results confirmed the capacity of the geographic information system-based multi-criteria evaluation method to synthesize available scientific knowledge and to accurately map (AUC = 0.786; 95% CI [0.730–0.842]) the spatial heterogeneity of RVF suitability in East Africa. This approach provides users with a straightforward and easy update of the maps according to data availability or the further development of scientific knowledge. PMID:27631374
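
    The weighted linear combination at the core of the mapping method can be sketched as below; the factor names and weights are invented for illustration and are not the study's AHP-derived values.

```python
# GIS multi-criteria evaluation sketch: a suitability score per grid
# cell is the weighted sum of standardized risk-factor layers.
# Factors and weights below are hypothetical.

def suitability(cell, weights):
    """cell: factor -> standardized score in [0, 1];
    weights: factor -> relative weight, summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[f] * cell[f] for f in weights)

weights = {"rainfall": 0.5, "livestock_density": 0.3, "soil_type": 0.2}
cell = {"rainfall": 0.8, "livestock_density": 0.6, "soil_type": 1.0}
score = suitability(cell, weights)  # 0.5*0.8 + 0.3*0.6 + 0.2*1.0 = 0.78
```

    Applying this per cell over standardized geographic layers yields the suitability map, and updating a weight or a layer simply re-runs the combination, which is the easy-update property noted above.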

  18. Individual 3D region-of-interest atlas of the human brain: knowledge-based class image analysis for extraction of anatomical objects

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Sabri, Osama; Buell, Udalrich

    2000-06-01

    After neural network-based classification of tissue types, the second step of atlas extraction is knowledge-based class image analysis to get anatomically meaningful objects. Basic algorithms are region growing, mathematical morphology operations, and template matching. A special algorithm was designed for each object. The class label of each voxel and the knowledge about the relative position of anatomical objects to each other and to the sagittal midplane of the brain can be utilized for object extraction. User interaction is only necessary to define starting, mid- and end planes for most object extractions and to determine the number of iterations for erosion and dilation operations. Extraction can be done for the following anatomical brain regions: cerebrum; cerebral hemispheres; cerebellum; brain stem; white matter (e.g., centrum semiovale); gray matter [cortex, frontal, parietal, occipital, temporal lobes, cingulum, insula, basal ganglia (nuclei caudati, putamen, thalami)]. For atlas-based quantification of functional data, anatomical objects can be convoluted with the point spread function of functional data to take into account the different resolutions of morphological and functional modalities. This method allows individual atlas extraction from MRI image data of a patient without the need of warping individual data to an anatomical or statistical MRI brain atlas.
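
    One of the basic morphology operations mentioned above, binary erosion, can be sketched on a 2D grid; a 3x3 cross (4-neighbour) structuring element is assumed here purely for illustration.

```python
# Minimal binary erosion sketch (stdlib only): a pixel survives only
# if it and all of its 4-neighbours are set. Iterating erosion and
# its dual, dilation, is the kind of step used in object extraction.

def erode(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nb = [(y, x), (y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
            out[y][x] = int(all(0 <= j < h and 0 <= i < w and img[j][i]
                                for j, i in nb))
    return out

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
core = erode(img)  # only the centre pixel of the 3x3 block survives
```

    In practice a library routine (e.g. scipy.ndimage.binary_erosion) would be used on the 3D voxel data, with the number of iterations chosen interactively as described above.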

  19. FluKB: A Knowledge-Based System for Influenza Vaccine Target Discovery and Analysis of the Immunological Properties of Influenza Viruses

    PubMed Central

    Simon, Christian; Kudahl, Ulrich J.; Sun, Jing; Olsen, Lars Rønn; Zhang, Guang Lan; Reinherz, Ellis L.; Brusic, Vladimir

    2015-01-01

    FluKB is a knowledge-based system focusing on data and analytical tools for influenza vaccine discovery. The main goal of FluKB is to provide access to curated influenza sequence and epitope data and enhance the analysis of influenza sequence diversity and the analysis of targets of immune responses. FluKB consists of more than 400,000 influenza protein sequences, known epitope data (357 verified T-cell epitopes, 685 HLA binders, and 16 naturally processed MHC ligands), and a collection of 28 influenza antibodies and their structurally defined B-cell epitopes. FluKB was built using a modular framework allowing the implementation of analytical workflows and includes standard search tools, such as keyword search and sequence similarity queries, as well as advanced tools for the analysis of sequence variability. The advanced analytical tools for vaccine discovery include visual mapping of T- and B-cell vaccine targets and assessment of neutralizing antibody coverage. FluKB supports the discovery of vaccine targets and the analysis of viral diversity and its implications for vaccine discovery as well as potential T-cell breadth and antibody cross neutralization involving multiple strains. FluKB is representative of a new generation of databases that integrate data, analytical tools, and analytical workflows, enabling comprehensive analysis and automatic generation of analysis reports. PMID:26504853

  20. SU-E-P-58: Dosimetric Study of Conventional Intensity-Modulated Radiotherapy and Knowledge-Based Radiation Therapy for Postoperation of Cervix Cancer

    SciTech Connect

    Ma, C; Yin, Y

    2015-06-15

    Purpose: To compare the dosimetric differences in the target volume and organs at risk (OARs) between conventional intensity-modulated radiotherapy (C-IMRT) and knowledge-based radiation therapy (KBRT) plans for cervix cancer. Methods: 39 patients with cervical cancer after surgery were randomly selected; 20 patient plans were used to create the model, and the other 19 cases were used for comparative evaluation. All plans were designed in the Eclipse system. The prescription dose was 30.6 Gy in 17 fractions, with OAR doses satisfying the clinical requirements. A paired t test was used to evaluate the differences in dose-volume histograms (DVH). Results: Compared to the C-IMRT plans, the KBRT plans achieved similar target dose coverage; D98, D95, D2, HI and CI showed no significant differences (P≥0.05). The doses to the rectum, bladder and femoral heads also showed no significant differences (P≥0.05). The time needed to design a treatment plan was significantly reduced. Conclusion: This study shows that, for postoperative radiotherapy of cervical cancer, KBRT plans can achieve similar target and OAR doses with shorter planning time.
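
    The paired t test used above to compare per-plan DVH metrics can be sketched with invented dose values (in practice one would call scipy.stats.ttest_rel):

```python
# Hand-rolled paired t statistic on matched plan pairs (stdlib only).
# The D95-like values below are invented for illustration.
import math

def paired_t(a, b):
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean / math.sqrt(var / n)

cimrt = [30.5, 30.7, 30.6, 30.4, 30.8]  # hypothetical C-IMRT metric per patient
kbrt  = [30.6, 30.6, 30.7, 30.3, 30.9]  # same metric for the matched KBRT plan
t = paired_t(cimrt, kbrt)  # small |t|: no significant paired difference
```

    Comparing |t| against the t distribution with n-1 degrees of freedom gives the P value; values like those above would fall well short of significance, matching the "no difference" findings.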

  1. Development and Assessment of a Geographic Knowledge-Based Model for Mapping Suitable Areas for Rift Valley Fever Transmission in Eastern Africa.

    PubMed

    Tran, Annelise; Trevennec, Carlène; Lutwama, Julius; Sserugga, Joseph; Gély, Marie; Pittiglio, Claudia; Pinto, Julio; Chevalier, Véronique

    2016-09-01

    Rift Valley fever (RVF), a mosquito-borne disease affecting ruminants and humans, is one of the most important viral zoonoses in Africa. The objective of the present study was to develop a geographic knowledge-based method to map the areas suitable for RVF amplification and RVF spread in four East African countries, namely, Kenya, Tanzania, Uganda and Ethiopia, and to assess the predictive accuracy of the model using livestock outbreak data from Kenya and Tanzania. Risk factors and their relative importance regarding RVF amplification and spread were identified from a literature review. A numerical weight was calculated for each risk factor using an analytical hierarchy process. The corresponding geographic data were collected, standardized and combined based on a weighted linear combination to produce maps of the suitability for RVF transmission. The accuracy of the resulting maps was assessed using RVF outbreak locations in livestock reported in Kenya and Tanzania between 1998 and 2012 and the ROC curve analysis. Our results confirmed the capacity of the geographic information system-based multi-criteria evaluation method to synthesize available scientific knowledge and to accurately map (AUC = 0.786; 95% CI [0.730-0.842]) the spatial heterogeneity of RVF suitability in East Africa. This approach provides users with a straightforward and easy update of the maps according to data availability or the further development of scientific knowledge.

  2. Knowledge engineering for adverse drug event prevention: on the design and development of a uniform, contextualized and sustainable knowledge-based framework.

    PubMed

    Koutkias, Vassilis; Kilintzis, Vassilis; Stalidis, George; Lazou, Katerina; Niès, Julie; Durand-Texte, Ludovic; McNair, Peter; Beuscart, Régis; Maglaveras, Nicos

    2012-06-01

    The primary aim of this work was the development of a uniform, contextualized and sustainable knowledge-based framework to support adverse drug event (ADE) prevention via Clinical Decision Support Systems (CDSSs). In this regard, the employed methodology involved first the systematic analysis and formalization of the knowledge sources elaborated in the scope of this work, through which an application-specific knowledge model has been defined. The entire framework architecture has then been specified and implemented by adopting Computer Interpretable Guidelines (CIGs) as the knowledge engineering formalism for its construction. The framework integrates diverse and dynamic knowledge sources in the form of rule-based ADE signals, all under a uniform Knowledge Base (KB) structure, according to the defined knowledge model. Equally important, it employs the means to contextualize the encapsulated knowledge, in order to provide appropriate support considering the specific local environment (hospital, medical department, language, etc.), as well as the mechanisms for knowledge querying, inference, sharing, and management. In this paper, we describe in detail the establishment of the proposed knowledge framework, presenting the employed methodology and the results obtained regarding implementation, performance and validation, which highlight its applicability and value in medication safety.
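
    A toy version of a contextualized, rule-based ADE signal might look like the following; the rule structure, field names, drug, and threshold are all hypothetical stand-ins, not the paper's CIG formalism.

```python
# Hypothetical contextualized ADE rule: it only applies in its local
# context (e.g. a department), and fires when all conditions hold.

def fire(rule, patient, context):
    if rule["context"] and rule["context"] != context:
        return None  # rule not applicable in this local environment
    if all(cond(patient) for cond in rule["conditions"]):
        return rule["signal"]
    return None

rule = {
    "context": "cardiology",
    "conditions": [lambda p: "warfarin" in p["drugs"],
                   lambda p: p["inr"] > 4.0],
    "signal": "ADE risk: over-anticoagulation",
}
alert = fire(rule, {"drugs": ["warfarin"], "inr": 5.2}, "cardiology")
```

    The same rule evaluated under a different context returns nothing, which is the contextualization idea: one uniform KB, locally filtered support.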

  3. The peptide agonist-binding site of the glucagon-like peptide-1 (GLP-1) receptor based on site-directed mutagenesis and knowledge-based modelling

    PubMed Central

    Dods, Rachel L.; Donnelly, Dan

    2015-01-01

    Glucagon-like peptide-1 (7–36)amide (GLP-1) plays a central role in regulating blood sugar levels and its receptor, GLP-1R, is a target for anti-diabetic agents such as the peptide agonist drugs exenatide and liraglutide. In order to understand the molecular nature of the peptide–receptor interaction, we used site-directed mutagenesis and pharmacological profiling to highlight nine sites as being important for peptide agonist binding and/or activation. Using a knowledge-based approach, we constructed a 3D model of agonist-bound GLP-1R, basing the conformation of the N-terminal region on that of the receptor-bound NMR structure of the related peptide pituitary adenylate cyclase-activating protein (PACAP21). The relative position of the extracellular to the transmembrane (TM) domain, as well as the molecular details of the agonist-binding site itself, were found to be different from the model that was published alongside the crystal structure of the TM domain of the glucagon receptor, but were nevertheless more compatible with published mutagenesis data. Furthermore, the NMR-determined structure of a high-potency cyclic conformationally-constrained 11-residue analogue of GLP-1 was also docked into the receptor-binding site. Despite having a different main chain conformation to that seen in the PACAP21 structure, four conserved residues (equivalent to His-7, Glu-9, Ser-14 and Asp-15 in GLP-1) could be structurally aligned and made similar interactions with the receptor as their equivalents in the GLP-1-docked model, suggesting the basis of a pharmacophore for GLP-1R peptide agonists. In this way, the model not only explains current mutagenesis and molecular pharmacological data but also provides a basis for further experimental design. PMID:26598711

  4. kPROT: a knowledge-based scale for the propensity of residue orientation in transmembrane segments. Application to membrane protein structure prediction.

    PubMed

    Pilpel, Y; Ben-Tal, N; Lancet, D

    1999-12-10

    Modeling of integral membrane proteins and the prediction of their functional sites requires the identification of transmembrane (TM) segments and the determination of their angular orientations. Hydrophobicity scales predict accurately the location of TM helices, but are less accurate in computing angular disposition. Estimating lipid-exposure propensities of the residues from statistics of solved membrane protein structures has the disadvantage of relying on relatively few proteins. As an alternative, we propose here a scale of knowledge-based Propensities for Residue Orientation in Transmembrane segments (kPROT), derived from the analysis of more than 5000 non-redundant protein sequences. We assume that residues that tend to be exposed to the membrane are more frequent in TM segments of single-span proteins, while residues that prefer to be buried in the transmembrane bundle interior are present mainly in multi-span TMs. The kPROT value for each residue is thus defined as the logarithm of the ratio of its proportions in single and multiple TM spans. The scale is refined further by defining it for three discrete sections of the TM segment; namely, extracellular, central, and intracellular. The capacity of the kPROT scale to predict angular helical orientation was compared to that of alternative methods in a benchmark test, using a diversity of multi-span alpha-helical transmembrane proteins with a solved 3D structure. kPROT yielded an average angular error of 41 degrees, significantly lower than that of alternative scales (62 to 68 degrees). The new scale thus provides a useful general tool for modeling and prediction of functional residues in membrane proteins. A WWW server (http://bioinfo.weizmann.ac.il/kPROT) is available for automatic helix orientation prediction with kPROT. PMID:10588897
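
    The kPROT definition, the logarithm of the ratio of a residue's proportions in single-span and multi-span TM segments, can be sketched with toy counts (these are not the published scale values):

```python
# kPROT-style propensity sketch: log(p_single / p_multi).
# Positive values suggest a lipid-exposed tendency (more frequent in
# single-span TMs); negative values suggest a buried tendency.
import math

def kprot(count_single, total_single, count_multi, total_multi):
    p_single = count_single / total_single
    p_multi = count_multi / total_multi
    return math.log(p_single / p_multi)

# Toy counts: a residue type twice as frequent in single-span TMs
score = kprot(80, 1000, 40, 1000)  # log(0.08 / 0.04) = log 2 > 0
```

    Summing such per-residue propensities around a helix then lets one score candidate angular orientations, which is how a scale of this kind is applied.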

  5. Determining the impact on the professional learning of graduates of a science and pedagogical content knowledge-based graduate degree program

    NASA Astrophysics Data System (ADS)

    Mike, Alyson Mary

    This study examined the professional learning of participants in a science and pedagogical content knowledge-based graduate degree program, specifically the Master of Science in Science Education (MSSE) at Montana State University. The program's blended learning model includes distance learning coursework and laboratory, field and seminar experiences. Three-quarters of the faculty are scientists. The study sought to identify program components that contribute to a graduate course of study that is coherent, has academic rigor, and contributes to educator's professional growth and learning. The study examined the program from three perspectives: recommendations for teachers' professional learning through professional development, components of a quality graduate program, and a framework for distance learning. No large-scale studies on comprehensive models of teacher professional learning leading to change in practice have been conducted in the United States. The literature on teachers' professional learning is small. Beginning with a comprehensive review of the literature, this study sought to identify components of professional learning through professional development for teachers. The MSSE professional learning survey was designed for students and faculty, and 349 students and 24 faculty responded. The student survey explored how course experiences fostered professional learning. Open-ended responses on the student survey provided insight regarding specific program experiences influencing key categories of professional learning. A parallel faculty survey was designed to elicit faculty perspectives on the extent to which their courses fostered science content knowledge and other aspects of professional learning. Case study data and portfolios from MSSE students were used to provide deeper insights into the influential aspects of the program. The study provided evidence of significant professional learning among science teacher participants. 
This growth occurred in

  6. Automated knowledge-base refinement

    NASA Technical Reports Server (NTRS)

    Mooney, Raymond J.

    1994-01-01

    Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.

  7. Knowledge-based robotic grasping

    SciTech Connect

    Stansfield, S.A.

    1989-01-01

    In this paper, we describe a general-purpose robotic grasping system for use in unstructured environments. Using computer vision and a compact set of heuristics, the system automatically generates the robot arm and hand motions required for grasping an unmodeled object. The utility of such a system is most evident in environments where the robot will have to grasp and manipulate a variety of unknown objects, but where many of the manipulation tasks may be relatively simple. Examples of such domains are planetary exploration and astronaut assistance, undersea salvage and rescue, and nuclear waste site clean-up. This work implements a two-stage model of grasping: stage one is an orientation of the hand and wrist and a ballistic reach toward the object; stage two is hand preshaping and adjustment. Visual features are first extracted from the unmodeled object. These features and their relations are used by an expert system to generate a set of valid reach/grasps for the object. These grasps are then used in driving the robot hand and arm to bring the fingers into contact with the object in the desired configuration. 16 refs., 14 figs.

  8. Application of knowledge-based decision tree classification method to monitoring ecological environment in mining areas based on the multi-temporal Landsat TM(ETM) images: a case study at Daye, Hubei, China

    NASA Astrophysics Data System (ADS)

    Yu, Shiyong

    2008-11-01

    This paper presents a case study of Daye, Hubei, China, to trace mining activities and related environmental changes during the past 10 years, with an emphasis on land cover changes. Two sets of satellite data have been used: TM and ETM+ image data. A multi-temporal dataset consisting of two Landsat 5 Thematic Mapper (TM) images and one Enhanced Thematic Mapper Plus (ETM+) image, acquired in 1986, 1994 and 2002, has been used to compare land cover changes in the Daye area, Hubei Province, China. A combined-bands method, an iron oxide index and the NDVI have been used to investigate the spectral and spatial characteristics of the different ground objects. A knowledge-based decision tree classification method has been used to obtain highly accurate classification results from the TM and ETM+ image data. The change detection results show that the quality of the whole water body remained poor, although the water quality improved in some areas. Vegetation shows a degradation trend, especially in areas close to the mining sites: large areas of woodland and plantations have been reduced, bare areas are increasing, and the reclamation percentage of abandoned mines was only 20% from 1986 to 2002. The ecological environment in the study area may deteriorate further unless efficient mining management and effective eco-environment protection are implemented promptly.
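
    The NDVI used as a decision-tree input above is the standard normalized band ratio; a minimal per-pixel sketch, with invented reflectance values:

```python
# NDVI = (NIR - Red) / (NIR + Red), in [-1, 1].
# High values indicate dense vegetation; low values indicate bare or
# mined ground. Reflectances below are illustrative, not image data.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

vegetated = ndvi(0.50, 0.10)  # dense vegetation -> high NDVI
bare      = ndvi(0.30, 0.25)  # bare/mined ground -> low NDVI
```

    A knowledge-based decision tree then thresholds indices like this (together with band combinations and the iron oxide index) to assign each pixel a land-cover class.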

  9. Cerebrospinal fluid data compilation and knowledge-based interpretation of bacterial, viral, parasitic, oncological, chronic inflammatory and demyelinating diseases. Diagnostic patterns not to be missed in neurology and psychiatry.

    PubMed

    Reiber, Hansotto

    2016-04-01

    The analysis of intrathecal IgG, IgA and IgM synthesis in cerebrospinal fluid (CSF) and evaluation in combined quotient diagrams provides disease-related patterns. The compilation with complementary parameters (barrier function, i.e., CSF flow rate, cytology, lactate, antibodies) in a cumulative CSF data report allows a knowledge-based interpretation and provides analytical and medical plausibility for the quality assessment in CSF laboratories. The diagnostic relevance is described for neurological and psychiatric diseases, for which CSF analysis cannot be replaced by other diagnostic methods without loss of information. Dominance of intrathecal IgM, IgA or three-class immune responses gives a systematic approach for facial nerve palsy, neurotrypanosomiasis, opportunistic diseases, lymphoma, neurotuberculosis, adrenoleucodystrophy or tumor metastases. Particular applications consider the diagnostic power of the polyspecific antibody response (MRZ antibodies) in multiple sclerosis, a CSF-related systematic view on the differential diagnosis of psychiatric diseases, and the dynamics of brain-derived compared to blood-derived molecules in CSF for the localization of parasites. PMID:27097008

  10. Coronary Computed Tomographic Angiography at 80 kVp and Knowledge-Based Iterative Model Reconstruction Is Non-Inferior to that at 100 kVp with Iterative Reconstruction

    PubMed Central

    Lee, Joohee; Park, Chul Hwan; Oh, Chi Suk; Han, Kyunghwa; Kim, Tae Hoon

    2016-01-01

    The aims of this study were to compare the image noise and quality of coronary computed tomographic angiography (CCTA) at 80 kVp with knowledge-based iterative model reconstruction (IMR) to those of CCTA at 100 kVp with hybrid iterative reconstruction (IR), and to evaluate the feasibility of a low-dose radiation protocol with IMR. Thirty subjects who underwent prospective electrocardiogram-gated CCTA at 80 kVp, 150 mAs, and IMR (Group A), and 30 subjects with 100 kVp, 150 mAs, and hybrid IR (Group B) were retrospectively enrolled after sample-size calculation. A BMI of less than 25 kg/m2 was required for inclusion. The attenuation value and image noise of CCTA were measured and the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated at the proximal right coronary artery and left main coronary artery. The image noise was analyzed using a non-inferiority test. The CCTA images were qualitatively evaluated using a four-point scale. The radiation dose was significantly lower in Group A than Group B (0.69 ± 0.08 mSv vs. 1.39 ± 0.15 mSv, p < 0.001). The attenuation values were higher in Group A than Group B (p < 0.001). The SNR and CNR in Group A were higher than those of Group B. The image noise of Group A was non-inferior to that of Group B. Qualitative image quality of Group A was better than that of Group B (3.6 vs. 3.4, p = 0.017). CCTA at 80 kVp with IMR could reduce the radiation dose by about 50%, with image noise and image quality non-inferior to those of CCTA at 100 kVp with hybrid IR. PMID:27658197
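
    The SNR and CNR compared above are simple figures of merit; a sketch with invented HU and noise values (not the study's measurements):

```python
# SNR: mean vessel attenuation over image noise.
# CNR: vessel-to-background contrast over image noise.
# All numbers below are hypothetical illustrations.

def snr(mean_hu, noise_sd):
    return mean_hu / noise_sd

def cnr(vessel_hu, background_hu, noise_sd):
    return (vessel_hu - background_hu) / noise_sd

# Lower kVp raises attenuation; if IMR keeps noise down, SNR/CNR rise:
snr_80 = snr(550.0, 30.0)
cnr_80 = cnr(550.0, 80.0, 30.0)
```

    This is why the 80 kVp + IMR protocol can show higher SNR and CNR despite the halved dose: attenuation in the numerator increases while reconstruction holds the noise in the denominator steady.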

  11. PROUST: Knowledge-Based Program Understanding.

    ERIC Educational Resources Information Center

    Johnson, W. Lewis; Soloway, Elliot

    This report describes PROUST, a computer-based system for online analyses and understanding of PASCAL programs written by novice programmers, which takes as input a program and a non-algorithmic description of the program requirements and finds the most likely mapping between the requirements and the code. Both the theory and processing techniques…

  12. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information from the bench chemist to biologist, medical practitioner to pharmaceutical scientist in a structured format. The advent of information technology and computational power enhanced the ability to access large volumes of data in the form of a database where one could do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference centric or compound centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery.

  13. Knowledge-based systems for power management

    NASA Technical Reports Server (NTRS)

    Lollar, L. F.

    1992-01-01

    NASA-Marshall's Electrical Power Branch has undertaken the development of expert systems in support of further advancements in electrical power system automation. Attention is given to the features (1) of the Fault Recovery and Management Expert System, (2) a resource scheduler or Master of Automated Expert Scheduling Through Resource Orchestration, and (3) an adaptive load-priority manager, or Load Priority List Management System. The characteristics of an advisory battery manager for the Hubble Space Telescope, designated the 'nickel-hydrogen expert system', are also noted.

  14. HMDB: a knowledgebase for the human metabolome

    PubMed Central

    Wishart, David S.; Knox, Craig; Guo, An Chi; Eisner, Roman; Young, Nelson; Gautam, Bijaya; Hau, David D.; Psychogios, Nick; Dong, Edison; Bouatra, Souhaila; Mandal, Rupasri; Sinelnikov, Igor; Xia, Jianguo; Jia, Leslie; Cruz, Joseph A.; Lim, Emilia; Sobsey, Constance A.; Shrivastava, Savita; Huang, Paul; Liu, Philip; Fang, Lydia; Peng, Jun; Fradette, Ryan; Cheng, Dean; Tzur, Dan; Clements, Melisa; Lewis, Avalyn; De Souza, Andrea; Zuniga, Azaret; Dawe, Margot; Xiong, Yeping; Clive, Derrick; Greiner, Russ; Nazyrova, Alsu; Shaykhutdinov, Rustem; Li, Liang; Vogel, Hans J.; Forsythe, Ian

    2009-01-01

    The Human Metabolome Database (HMDB, http://www.hmdb.ca) is a richly annotated resource that is designed to address the broad needs of biochemists, clinical chemists, physicians, medical geneticists, nutritionists and members of the metabolomics community. Since its first release in 2007, the HMDB has been used to facilitate the research for nearly 100 published studies in metabolomics, clinical biochemistry and systems biology. The most recent release of HMDB (version 2.0) has been significantly expanded and enhanced over the previous release (version 1.0). In particular, the number of fully annotated metabolite entries has grown from 2180 to more than 6800 (a 300% increase), while the number of metabolites with biofluid or tissue concentration data has grown by a factor of five (from 883 to 4413). Similarly, the number of purified compounds with reference NMR, LC-MS and GC-MS spectra has more than doubled (from 380 to more than 790 compounds). In addition to this significant expansion in database size, many new database searching tools and much new data content have been added or enhanced. These include better algorithms for spectral searching and matching, more powerful chemical substructure searches, faster text searching software, as well as dedicated pathway searching tools and customized, clickable metabolic maps. Changes to the user-interface have also been implemented to accommodate future expansion and to make database navigation much easier. These improvements should make the HMDB much more useful to a much wider community of users. PMID:18953024

  15. DAPD: A Knowledgebase for Diabetes Associated Proteins.

    PubMed

    Gopinath, Krishnasamy; Jayakumararaj, Ramaraj; Karthikeyan, Muthusamy

    2015-01-01

    Recent advancements in genomics and proteomics provide a solid foundation for understanding the pathogenesis of diabetes. Proteomics of diabetes-associated pathways helps to identify the most potent targets for the management of diabetes. The relevant datasets are scattered across various prominent sources, which makes it time-consuming to select a therapeutic target for the clinical management of diabetes. However, additional information about target proteins is needed for validation. This lacuna may be resolved by linking diabetes-associated genes, pathways and proteins, providing a strong base for treatment and management planning for diabetes. Thus, a web resource, the Diabetes Associated Proteins Database (DAPD), has been developed to link diabetes-associated genes, pathways and proteins using PHP and MySQL. The current version of DAPD has been built with proteins associated with different types of diabetes. In addition, DAPD has been linked to external sources to give access to more participatory proteins and their pathway networks. DAPD will reduce search time and is expected to pave the way for the discovery of novel anti-diabetic leads through computational drug design for diabetes management. DAPD is openly accessible at the following URL: www.mkarthikeyan.bioinfoau.org/dapd. PMID:26357271

  16. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information, in a structured format, to users ranging from bench chemists and biologists to medical practitioners and pharmaceutical scientists. The advent of information technology and computational power has enhanced the ability to access large volumes of data in the form of a database, where one can do compilation, searching, archiving, analysis and, finally, knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas such as medicinal chemistry, clinical research, and mechanism-based toxicity, so that these structured databases containing vast data can be used in several areas of research. These databases are classified as reference-centric or compound-centric, depending on how the database systems were designed. Integration of these databases with knowledge-derivation tools would enhance their value toward better drug design and discovery. PMID:19727614

  17. Knowledge-based flow field zoning

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automating flow field zoning in two dimensions is an important step towards easing the three-dimensional grid generation bottleneck in computational fluid dynamics. A knowledge-based approach works well, but certain aspects of flow field zoning make the use of such an approach challenging. A knowledge-based flow field zoner, called EZGrid, was implemented and tested on representative two-dimensional aerodynamic configurations. Results are shown which illustrate the way in which EZGrid incorporates the effects of physics, shape description, position, and user bias in flow field zoning.

  18. NASDA knowledge-based network planning system

    NASA Technical Reports Server (NTRS)

    Yamaya, K.; Fujiwara, M.; Kosugi, S.; Yambe, M.; Ohmori, M.

    1993-01-01

    One of the SODS (space operation and data system) sub-systems, NP (network planning), was the first expert system used by NASDA (National Space Development Agency of Japan) for tracking and control of satellites. The major responsibilities of the NP system are, first, the allocation of network and satellite control resources and, second, the generation of the network operation plan data (NOP) used in automated control of the stations and control center facilities. Until now, the first task, network resource scheduling, was done by network operators. The NP system automatically generates schedules using its knowledge base, which contains information on satellite orbits, station availability, which computer is dedicated to which satellite, and how many stations must be available for a particular satellite pass or a certain time period. The NP system is introduced.

  19. Reuse: A knowledge-based approach

    NASA Technical Reports Server (NTRS)

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui

    1992-01-01

    This paper describes our research in automating the reuse process through the use of application domain models. Application domain models are explicit formal representations of the application knowledge necessary to understand, specify, and generate application programs. Furthermore, they provide a unified repository for the operational structure, rules, policies, and constraints of a specific application area. In our approach, domain models are expressed in terms of a transaction-based meta-modeling language. The paper describes in detail the creation and maintenance of hierarchical structures. These structures are created through a process that includes reverse engineering of data models with supplementary enhancement from application experts. Source code is also reverse engineered but is not a major source of domain model instantiation at this time. In the second phase of the software synthesis process, program specifications are interactively synthesized from an instantiated domain model. These specifications are currently integrated into a manual programming process but will eventually be used to derive executable code with mechanically assisted transformations. This research is performed within the context of programming-in-the-large types of systems. Although our goals are ambitious, we are implementing the synthesis system in an incremental manner through which we can realize tangible results. The client/server architecture is capable of supporting 16 simultaneous X/Motif users and tens of thousands of attributes and classes. Domain models have been partially synthesized from five different application areas. As additional domain models are synthesized and additional knowledge is gathered, we will inevitably add to and modify our representation. However, our current experience indicates that it will scale and expand to meet our modeling needs.

  20. Digitizing legacy documents: A knowledge-base preservation project

    SciTech Connect

    Anderson, E.; Atkinson, R.; Crego, C.; Slisz, J.; Tompson, S.

    1998-09-01

    As more library customers and staff throughout the world come to rely upon rapid electronic access to full-text documents, there is increasing demand to also make older documents electronically accessible. Illinois State Library grant funds allowed us to purchase the hardware and software necessary to answer this demand. We created a production system to scan our legacy documents, convert them into Portable Document Format (PDF), save them to a server for World Wide Web access, and write them to CDs for distribution.

  1. THINK Back: KNowledge-based Interpretation of High Throughput data.

    PubMed

    Farfán, Fernando; Ma, Jun; Sartor, Maureen A; Michailidis, George; Jagadish, Hosagrahar V

    2012-01-01

    Results of high throughput experiments can be challenging to interpret. Current approaches have relied on bulk processing the set of expression levels, in conjunction with easily obtained external evidence, such as co-occurrence. While such techniques can be used to reason probabilistically, they are not designed to shed light on what any individual gene, or a network of genes acting together, may be doing. Our belief is that today we have the information extraction ability and the computational power to perform more sophisticated analyses that consider the individual situation of each gene. The use of such techniques should lead to qualitatively superior results. The specific aim of this project is to develop computational techniques to generate a small number of biologically meaningful hypotheses based on observed results from high throughput microarray experiments, gene sequences, and next-generation sequences. Through the use of relevant known biomedical knowledge, as represented in published literature and public databases, we can generate meaningful hypotheses that will aid biologists to interpret their experimental data. We are currently developing novel approaches that exploit the rich information encapsulated in biological pathway graphs. Our methods perform a thorough and rigorous analysis of biological pathways, using complex factors such as the topology of the pathway graph and the frequency with which genes appear on different pathways, to provide more meaningful hypotheses to describe the biological phenomena captured by high throughput experiments, when compared to other existing methods that only consider partial information captured by biological pathways. PMID:22536867

  2. THINK Back: KNowledge-based Interpretation of High Throughput data

    PubMed Central

    2012-01-01

    Results of high throughput experiments can be challenging to interpret. Current approaches have relied on bulk processing the set of expression levels, in conjunction with easily obtained external evidence, such as co-occurrence. While such techniques can be used to reason probabilistically, they are not designed to shed light on what any individual gene, or a network of genes acting together, may be doing. Our belief is that today we have the information extraction ability and the computational power to perform more sophisticated analyses that consider the individual situation of each gene. The use of such techniques should lead to qualitatively superior results. The specific aim of this project is to develop computational techniques to generate a small number of biologically meaningful hypotheses based on observed results from high throughput microarray experiments, gene sequences, and next-generation sequences. Through the use of relevant known biomedical knowledge, as represented in published literature and public databases, we can generate meaningful hypotheses that will aid biologists to interpret their experimental data. We are currently developing novel approaches that exploit the rich information encapsulated in biological pathway graphs. Our methods perform a thorough and rigorous analysis of biological pathways, using complex factors such as the topology of the pathway graph and the frequency with which genes appear on different pathways, to provide more meaningful hypotheses to describe the biological phenomena captured by high throughput experiments, when compared to other existing methods that only consider partial information captured by biological pathways. PMID:22536867

  3. Knowledge-based data analysis comes of age

    PubMed Central

    2010-01-01

    The emergence of high-throughput technologies for measuring biological systems has introduced problems for data interpretation that must be addressed for proper inference. First, analysis techniques need to be matched to the biological system, reflecting in their mathematical structure the underlying behavior being studied. When this is not done, mathematical techniques will generate answers, but the values and reliability estimates may not accurately reflect the biology. Second, analysis approaches must address the vast excess in variables measured (e.g. transcript levels of genes) over the number of samples (e.g. tumors, time points), known as the ‘large-p, small-n’ problem. In large-p, small-n paradigms, standard statistical techniques generally fail, and computational learning algorithms are prone to overfit the data. Here we review the emergence of techniques that match mathematical structure to the biology, the use of integrated data and prior knowledge to guide statistical analysis, and the recent emergence of analysis approaches utilizing simple biological models. We show that novel biological insights have been gained using these techniques. PMID:19854753

  4. Organizational Learning and the Case for Knowledge-Based Systems.

    ERIC Educational Resources Information Center

    Petrides, Lisa A.

    2002-01-01

    Discusses organizational learning and the required reassessment and redesign of internal structures and procedures related to the flow of information throughout an organization. Provides a framework for the integration of institutional research within the larger context of organizational learning and the creation and maintenance of a research…

  5. Knowledge-based system for the design of heat exchangers

    NASA Astrophysics Data System (ADS)

    Cochran, W. J.; Hainley, Don; Khartabil, Loay

    1993-03-01

    A knowledge-based system has been developed to assist engineers in the design of compact heat exchangers. The main objectives of this project were to: (1) automate aspects of heat exchanger design; (2) produce multiple successful designs quickly; and (3) optimize these designs based on specific constraints or criteria. Productivity improvements from use of this system have been as much as two orders of magnitude. The design of heat exchangers is a time-consuming, iterative process. For a given set of requirements a design engineer uses his knowledge and experience to pick an initial design point and then calculates (with a large Fortran program) the performance for that design. If performance data do not meet requirements, various design parameters are modified and performance is calculated again. An expert system now embodies design expertise (rules for design decisions), allowing automation of this iterative process and substantial time savings for engineers. In addition, optimizing successful designs is now practical, whereas in the past it was generally infeasible due to the amount of labor involved. A configuration system was also developed that serves as a `front-end' for the design system. The configuration system matches design requirements to existing products and offers suggestions for initial design points. Both were developed with the KAPPA knowledge-based system shell. The two KAPPA programs and the Fortran program for numerical calculations are integrated within a Windows 3.1 environment on a 486 PC.
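    The pick-a-point, evaluate, adjust cycle described above can be sketched as a rule-driven loop. This is an illustrative sketch, not the paper's KAPPA implementation; `evaluate`, the rule tuples, and all parameter names are hypothetical stand-ins (in the real system, evaluation is performed by a large Fortran performance code).

    ```python
    def design_loop(initial, evaluate, rules, requirement, max_iter=50):
        """Iterate a design until it meets requirements, guided by expert rules.

        `rules` is a list of (condition, adjustment) pairs: the first rule
        whose condition matches the current design and performance is applied.
        All names here are illustrative, not from the paper's system.
        """
        design = dict(initial)
        for _ in range(max_iter):
            perf = evaluate(design)          # stand-in for the Fortran code
            if requirement(perf):
                return design, perf          # requirements met
            for condition, adjust in rules:  # fire the first applicable rule
                if condition(design, perf):
                    design = adjust(design)
                    break
            else:
                raise RuntimeError("no rule applies; design infeasible")
        raise RuntimeError("max iterations reached")
    ```

    A toy run: with one rule "if duty is too low, enlarge the heat-transfer area by 50%", the loop converges in a handful of iterations instead of requiring an engineer to resubmit the performance calculation by hand each time.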

  6. Building a Knowledge-Based Economy and Society.

    ERIC Educational Resources Information Center

    Bryson, Jo

    This paper provides an overview of the forces shaping the future of the knowledge economy and society, including: the speed and type of change that is occurring; the technologies that are propelling it; the technology and information choices that competitors are making; which organizations are in the lead; who has the most to gain and to lose; the…

  7. Knowledge-Based Economies and Education: A Grand Canyon Analogy

    ERIC Educational Resources Information Center

    Mahy, Colleen; Krimmel, Tyler

    2008-01-01

    Expeditions inspire people to reach beyond themselves. Today, post-secondary education requires as much planning as any expedition. However, there has been a trend that has seen just over half of all high school students in Ontario going on to post-secondary education. While some people have barely noticed this statistic, the OECD has released a…

  8. A Knowledge-Based Representation Scheme for Environmental Science Models

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Dungan, Jennifer L.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    One of the primary methods available for studying environmental phenomena is the construction and analysis of computational models. We have been studying how artificial intelligence techniques can be applied to assist in the development and use of environmental science models within the context of NASA-sponsored activities. We have identified several high-utility areas as potential targets for research and development: model development; data visualization, analysis, and interpretation; model publishing and reuse; training and education; and framing, posing, and answering questions. Central to progress on any of the above areas is a representation for environmental models that contains a great deal more information than is present in a traditional software implementation. In particular, a traditional software implementation is devoid of any semantic information that connects the code with the environmental context that forms the background for the modeling activity. Before we can build AI systems to assist in model development and usage, we must develop a representation for environmental models that adequately describes a model's semantics and explicitly represents the relationship between the code and the modeling task at hand. We have developed one such representation in conjunction with our work on the SIGMA (Scientists' Intelligent Graphical Modeling Assistant) environment. The key feature of the representation is that it provides a semantic grounding for the symbols in a set of modeling equations by linking those symbols to an explicit representation of the underlying environmental scenario.

  9. Strategic Positioning of HRM in Knowledge-Based Organizations

    ERIC Educational Resources Information Center

    Thite, Mohan

    2004-01-01

    With knowledge management as the strategic intent and learning to learn as the strategic weapon, the current management focus is on how to leverage knowledge faster and better than competitors. Research demonstrates that it is the cultural mindset of the people in the organisation that primarily defines success in knowledge intensive…

  10. THINK Back: KNowledge-based Interpretation of High Throughput data.

    PubMed

    Farfán, Fernando; Ma, Jun; Sartor, Maureen A; Michailidis, George; Jagadish, Hosagrahar V

    2012-03-13

    Results of high throughput experiments can be challenging to interpret. Current approaches have relied on bulk processing the set of expression levels, in conjunction with easily obtained external evidence, such as co-occurrence. While such techniques can be used to reason probabilistically, they are not designed to shed light on what any individual gene, or a network of genes acting together, may be doing. Our belief is that today we have the information extraction ability and the computational power to perform more sophisticated analyses that consider the individual situation of each gene. The use of such techniques should lead to qualitatively superior results. The specific aim of this project is to develop computational techniques to generate a small number of biologically meaningful hypotheses based on observed results from high throughput microarray experiments, gene sequences, and next-generation sequences. Through the use of relevant known biomedical knowledge, as represented in published literature and public databases, we can generate meaningful hypotheses that will aid biologists to interpret their experimental data. We are currently developing novel approaches that exploit the rich information encapsulated in biological pathway graphs. Our methods perform a thorough and rigorous analysis of biological pathways, using complex factors such as the topology of the pathway graph and the frequency with which genes appear on different pathways, to provide more meaningful hypotheses to describe the biological phenomena captured by high throughput experiments, when compared to other existing methods that only consider partial information captured by biological pathways.

  11. Knowledge-based operation and management of communications systems

    NASA Technical Reports Server (NTRS)

    Heggestad, Harold M.

    1988-01-01

    Expert systems techniques are being applied in operation and control of the Defense Communications System (DCS), which has the mission of providing reliable worldwide voice, data and message services for U.S. forces and commands. Thousands of personnel operate DCS facilities, and many of their functions match the classical expert system scenario: complex, skill-intensive environments with a full spectrum of problems in training and retention, cost containment, modernization, and so on. Two of these functions are: (1) fault isolation and restoral of dedicated circuits at Tech Control Centers, and (2) network management for the Defense Switched Network (the modernized dial-up voice system currently replacing AUTOVON). An expert system for the first of these is deployed for evaluation purposes at Andrews Air Force Base, and plans are being made for procurement of operational systems. In the second area, knowledge obtained with a sophisticated simulator is being embedded in an expert system. The background, design and status of both projects are described.

  12. Satellite Contamination and Materials Outgassing Knowledgebase - An Interactive Database Reference

    NASA Technical Reports Server (NTRS)

    Green, D. B.; Burns, Dewitt (Technical Monitor)

    2001-01-01

    The goal of this program is to collect at one site much of the knowledge accumulated about the outgassing properties of aerospace materials based on ground testing, the effects of this outgassing observed on spacecraft in flight, and the broader contamination environment measured by instruments on-orbit. We believe that this Web site will help move contamination a step forward, away from anecdotal folklore toward engineering discipline. Our hope is that once operational, this site will form a nucleus for information exchange, that users will not only take information from our knowledge base, but also provide new information from ground testing and space missions, expanding and increasing the value of this site to all. We urge Government and industry users to endorse this approach that will reduce redundant testing, reduce unnecessary delays, permit uniform comparisons, and permit informed decisions.

  13. Utilitarian Model of Measuring Confidence within Knowledge-Based Societies

    ERIC Educational Resources Information Center

    Jack, Brady Michael; Hung, Kuan-Ming; Liu, Chia Ju; Chiu, Houn Lin

    2009-01-01

    This paper introduces a utilitarian confidence testing statistic called Risk Inclination Model (RIM) which indexes all possible confidence wagering combinations within the confines of a defined symmetrically point-balanced test environment. This paper presents the theoretical underpinnings, a formal derivation, a hypothetical application, and…

  14. Logo Programming, Problem Solving, and Knowledge-Based Instruction.

    ERIC Educational Resources Information Center

    Swan, Karen; Black, John B.

    The research reported in this paper was designed to investigate the hypothesis that computer programming may support the teaching and learning of problem solving, but that to do so, problem solving must be explicitly taught. Three studies involved students in several grades: 4th, 6th, 8th, 11th, and 12th. Findings collectively show that five…

  15. Knowledge-based control for robot self-localization

    NASA Technical Reports Server (NTRS)

    Bennett, Bonnie Kathleen Holte

    1993-01-01

    Autonomous robot systems are being proposed for a variety of missions including the Mars rover/sample return mission. Prior to any other mission objectives being met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called localization control and logic expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to high-level control aspects of the localization problem.

  16. Towards a HPV Vaccine Knowledgebase for Patient Education Content.

    PubMed

    Wang, Dennis; Cunningham, Rachel; Boom, Julie; Amith, Muhammad; Tao, Cui

    2016-01-01

    Human papillomavirus is a widespread sexually transmitted infection that can be prevented with vaccination. However, HPV vaccination rates in the United States are disappointingly low. This paper introduces a patient-oriented web ontology intended to provide an interactive way to educate patients about HPV and the HPV vaccine and to empower patients to make the right vaccination decision. The information gathered for this initial draft of the ontology was primarily taken from the Centers for Disease Control and Prevention's Vaccine Information Statements. The ontology currently consists of 160 triples, 141 classes, 52 properties and 55 individuals. For future iterations, we aim to incorporate more information as well as obtain subject matter expert feedback to improve the overall quality of the ontology. PMID:27332237

  17. Dynamic reasoning in a knowledge-based system

    NASA Technical Reports Server (NTRS)

    Rao, Anand S.; Foo, Norman Y.

    1988-01-01

    Any space-based system, whether it is a robot arm assembling parts in space or an onboard system monitoring the space station, has to react to changes which cannot be foreseen. As a result, apart from having domain-specific knowledge as in current expert systems, a space-based AI system should also have general principles of change. This paper presents a modal logic which can not only represent change but also reason with it. Three primitive operations, expansion, contraction and revision, are introduced, and axioms which specify how the knowledge base should change when the external world changes are also given. Accordingly, the notion of dynamic reasoning is introduced which, unlike existing forms of reasoning, provides general principles of change. Dynamic reasoning is based on two main principles, namely minimize change and maximize coherence. A possible-worlds semantics which incorporates the above two principles is also discussed. The paper concludes by discussing how the dynamic reasoning system can be used to specify actions and hence form an integral part of an autonomous reasoning and planning system.
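    The three primitive operations can be illustrated with a toy belief base of propositional literals, a deliberate simplification: the paper works in a modal logic with possible-worlds semantics, not over literal sets. Revision below follows the standard Levi-identity construction (contract the negated belief, then expand), which respects the "minimize change" principle in this toy setting.

    ```python
    # Beliefs are literals: "p" or its negation "-p".
    def negate(lit):
        return lit[1:] if lit.startswith("-") else "-" + lit

    def expand(base, lit):
        """Expansion: add new information, without a consistency check."""
        return base | {lit}

    def contract(base, lit):
        """Contraction: retract a belief (minimal change: drop only it)."""
        return base - {lit}

    def revise(base, lit):
        """Revision via the Levi identity: contract the negation, then expand."""
        return expand(contract(base, negate(lit)), lit)
    ```

    For example, a robot arm believing `{"arm_free", "-holding_part"}` that observes it has grasped a part revises with `"holding_part"`: the contradictory `"-holding_part"` is contracted first, so the base stays consistent while changing as little as possible.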

  18. CACTUS: Command and Control Training Using Knowledge-Based Simulations.

    ERIC Educational Resources Information Center

    Hartley, J. R.; And Others

    1992-01-01

    Describes a computer simulation, CACTUS, that was developed in the United Kingdom to help police with command and control training for large crowd control incidents. Use of the simulation for pre-event planning and decision making is discussed, debriefing is described, and the role of the trainer is considered. (LRW)

  19. PAHdb 2003: what a locus-specific knowledgebase can do.

    PubMed

    Scriver, Charles R; Hurtubise, Mélanie; Konecki, David; Phommarinh, Manyphong; Prevost, Lynne; Erlandsen, Heidi; Stevens, Ray; Waters, Paula J; Ryan, Shannon; McDonald, David; Sarkissian, Christineh

    2003-04-01

    PAHdb, a legacy of and resource in genetics, is a relational locus-specific database (http://www.pahdb.mcgill.ca). It records and annotates both pathogenic alleles (n = 439, putative disease-causing) and benign alleles (n = 41, putative untranslated polymorphisms) at the human phenylalanine hydroxylase locus (symbol PAH). Human alleles named by nucleotide number (systematic names) and their trivial names receive unique identifier numbers. The annotated gDNA sequence for PAH is typical for mammalian genes. An annotated gDNA sequence is numbered so that cDNA and gDNA sites are interconvertible. A site map for PAHdb leads to a large array of secondary data (attributes): source of the allele (submitter, publication, or population); polymorphic haplotype background; and effect of the allele as predicted by molecular modeling on the phenylalanine hydroxylase enzyme (EC 1.14.16.1) or by in vitro expression analysis. The majority (63%) of the putative pathogenic PAH alleles are point mutations causing missense in translation, of which few have a primary effect on PAH enzyme kinetics. Most apparently have a secondary effect on its function through misfolding, aggregation, and intracellular degradation of the protein. Some point mutations create new splice sites. A subset of primary PAH mutations that are tetrahydrobiopterin-responsive is highlighted on a Curators' Page. A clinical module describes the corresponding human clinical disorders (hyperphenylalaninemia [HPA] and phenylketonuria [PKU]), their inheritance, and their treatment. PAHdb contains data on the mouse gene (Pah) and on four orthologous mutant mouse models and their use (for example, in research on oral treatment of PKU with the enzyme phenylalanine ammonia lyase [EC 4.3.1.5]).
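    The cDNA/gDNA interconversion enabled by a numbered annotated sequence can be sketched as a simple exon-offset calculation. The exon coordinates below are invented for illustration only; PAHdb's actual numbering is defined by its own annotated PAH sequence.

    ```python
    def cdna_to_gdna(cdna_pos, exons):
        """Map a 1-based cDNA coordinate onto gDNA via an exon table.

        `exons` is a list of (gdna_start, gdna_end) pairs, 1-based and
        inclusive, in transcript order. Purely illustrative coordinates.
        """
        offset = 0  # cDNA bases consumed by earlier exons
        for start, end in exons:
            length = end - start + 1
            if cdna_pos <= offset + length:
                # Position falls inside this exon: shift into gDNA space.
                return start + (cdna_pos - offset - 1)
            offset += length
        raise ValueError("cDNA position beyond transcript length")
    ```

    With two hypothetical exons at gDNA 101-150 and 301-350, cDNA position 50 maps to gDNA 150 (last base of exon 1) and cDNA 51 to gDNA 301 (first base of exon 2), which is exactly the bookkeeping a numbered annotated sequence makes routine.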

  20. Knowledge-based segmentation of SAR data with learned priors.

    PubMed

    Haker, S; Sapiro, G; Tannenbaum, A

    2000-01-01

    An approach for the segmentation of still and video synthetic aperture radar (SAR) images is described. A priori knowledge about the objects present in the image, e.g., target, shadow and background terrain, is introduced via Bayes' rule. Posterior probabilities obtained in this way are then anisotropically smoothed, and the image segmentation is obtained via MAP classifications of the smoothed data. When segmenting sequences of images, the smoothed posterior probabilities of past frames are used to learn the prior distributions in the succeeding frame. We show with examples from public data sets that this method provides an efficient and fast technique for addressing the segmentation of SAR data. PMID:18255401
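    The pipeline this abstract describes (Bayes' rule posteriors, smoothing of the posteriors, then MAP classification) can be sketched in a few lines. This is a minimal illustration assuming Gaussian class likelihoods and plain isotropic neighborhood averaging in place of the paper's anisotropic smoothing; all class parameters are hypothetical.

    ```python
    import numpy as np

    def map_segment(image, class_means, class_sigmas, priors, smooth_iters=10):
        """Segment an image: Bayes' rule, posterior smoothing, MAP labels."""
        # Gaussian likelihood p(x | class) for each class (e.g. target,
        # shadow, background terrain).
        lik = np.stack([
            np.exp(-0.5 * ((image - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
            for m, s in zip(class_means, class_sigmas)
        ])
        # Bayes' rule: posterior proportional to likelihood times prior.
        post = lik * np.asarray(priors)[:, None, None]
        post /= post.sum(axis=0, keepdims=True)
        # Crude isotropic smoothing (stand-in for anisotropic diffusion):
        # average each pixel's posterior with its 4-neighborhood.
        for _ in range(smooth_iters):
            padded = np.pad(post, ((0, 0), (1, 1), (1, 1)), mode="edge")
            post = (padded[:, :-2, 1:-1] + padded[:, 2:, 1:-1] +
                    padded[:, 1:-1, :-2] + padded[:, 1:-1, 2:] + post) / 5.0
        # MAP classification: pick the class with the largest posterior.
        return post.argmax(axis=0)
    ```

    For image sequences, the abstract's scheme would feed the smoothed posteriors of past frames back in as the `priors` for the next frame, letting the prior distributions be learned rather than fixed.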

  1. Knowledge-based model building of proteins: concepts and examples.

    PubMed Central

    Bajorath, J.; Stenkamp, R.; Aruffo, A.

    1993-01-01

    We describe how to build protein models from structural templates. Methods to identify structural similarities between proteins in cases of significant, moderate to low, or virtually absent sequence similarity are discussed. The detection and evaluation of structural relationships is emphasized as a central aspect of protein modeling, distinct from the more technical aspects of model building. Computational techniques to generate and complement comparative protein models are also reviewed. Two examples, P-selectin and gp39, are presented to illustrate the derivation of protein model structures and their use in experimental studies. PMID:7505680

  2. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.

    1991-01-01

    The purpose is to develop algorithms and architectures for embedding artificial intelligence in aircraft guidance and control systems. With the approach adopted, AI-computing is used to create an outer guidance loop for driving the usual aircraft autopilot. That is, a symbolic processor monitors the operation and performance of the aircraft. Then, based on rules and other stored knowledge, commands are automatically formulated for driving the autopilot so as to accomplish desired flight operations. The focus is on developing a software system which can respond to linguistic instructions, input in a standard format, so as to formulate a sequence of simple commands to the autopilot. The instructions might be a fairly complex flight clearance, input either manually or by data-link. Emphasis is on a software system which responds much like a pilot would, employing not only precise computations, but, also, knowledge which is less precise, but more like common-sense. The approach is based on prior work to develop a generic 'shell' architecture for an AI-processor, which may be tailored to many applications by describing the application in appropriate processor data bases (libraries). Such descriptions include numerical models of the aircraft and flight control system, as well as symbolic (linguistic) descriptions of flight operations, rules, and tactics.

  3. The AI Bus architecture for distributed knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain

    1991-01-01

    The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes or demons provide an event-driven means of giving active objects shared access to resources, and to each other, without violating their security.
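
    The blackboard-and-knowledge-source pattern the abstract describes can be sketched in a few lines. Class names and the calibration rule are illustrative, not the AI Bus API.

```python
# Minimal blackboard sketch in the spirit of the AI Bus description:
# knowledge sources watch a shared blackboard and post contributions.
# Class and field names are illustrative, not the AI Bus library.

class Blackboard:
    def __init__(self):
        self.entries = {}
        self.subscribers = []

    def post(self, key, value):
        self.entries[key] = value
        for ks in self.subscribers:          # event-driven: notify the "demons"
            ks.notify(self, key)

class KnowledgeSource:
    def __init__(self, trigger_key, action):
        self.trigger_key, self.action = trigger_key, action

    def notify(self, board, key):
        if key == self.trigger_key:          # fire only on the watched key
            self.action(board)

board = Blackboard()
# hypothetical knowledge source: calibrate a raw sensor reading when it appears
board.subscribers.append(
    KnowledgeSource("raw_reading",
                    lambda b: b.post("calibrated", b.entries["raw_reading"] * 2.0))
)
board.post("raw_reading", 21.0)
```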

  4. The SwissLipids knowledgebase for lipid biology

    PubMed Central

    Liechti, Robin; Hyka-Nouspikel, Nevila; Niknejad, Anne; Gleizes, Anne; Götz, Lou; Kuznetsov, Dmitry; David, Fabrice P.A.; van der Goot, F. Gisou; Riezman, Howard; Bougueleret, Lydie; Xenarios, Ioannis; Bridge, Alan

    2015-01-01

    Motivation: Lipids are a large and diverse group of biological molecules with roles in membrane formation, energy storage and signaling. Cellular lipidomes may contain tens of thousands of structures, a staggering degree of complexity whose significance is not yet fully understood. High-throughput mass spectrometry-based platforms provide a means to study this complexity, but the interpretation of lipidomic data and its integration with prior knowledge of lipid biology suffers from a lack of appropriate tools to manage the data and extract knowledge from it. Results: To facilitate the description and exploration of lipidomic data and its integration with prior biological knowledge, we have developed a knowledge resource for lipids and their biology—SwissLipids. SwissLipids provides curated knowledge of lipid structures and metabolism which is used to generate an in silico library of feasible lipid structures. These are arranged in a hierarchical classification that links mass spectrometry analytical outputs to all possible lipid structures, metabolic reactions and enzymes. SwissLipids provides a reference namespace for lipidomic data publication, data exploration and hypothesis generation. The current version of SwissLipids includes over 244 000 known and theoretically possible lipid structures, over 800 proteins, and curated links to published knowledge from over 620 peer-reviewed publications. We are continually updating the SwissLipids hierarchy with new lipid categories and new expert curated knowledge. Availability: SwissLipids is freely available at http://www.swisslipids.org/. Contact: alan.bridge@isb-sib.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25943471

  5. Designer: A Knowledge-Based Graphic Design Assistant.

    ERIC Educational Resources Information Center

    Weitzman, Louis

    This report describes Designer, an interactive tool for assisting with the design of two-dimensional graphic interfaces for instructional systems. The system, which consists of a color graphics interface to a mathematical simulation, provides enhancements to the Graphics Editor component of Steamer (a computer-based training system designed to aid…

  6. Enabling a systems biology knowledgebase with gaggle and firegoose

    SciTech Connect

    Baliga, Nitin S.

    2014-12-12

    The overall goal of this project was to extend the existing Gaggle and Firegoose systems to develop an open-source technology that runs over the web and links desktop applications with many databases and software applications. This technology would enable researchers to incorporate workflows for data analysis that can be executed from this interface to other online applications. The four specific aims were to (1) provide one-click mapping of genes, proteins, and complexes across databases and species; (2) enable multiple simultaneous workflows; (3) expand sophisticated data analysis for online resources; and (4) enhance open-source development of the Gaggle-Firegoose infrastructure. Gaggle is an open-source Java software system that integrates existing bioinformatics programs and data sources into a user-friendly, extensible environment to allow interactive exploration, visualization, and analysis of systems biology data. Firegoose is an extension to the Mozilla Firefox web browser that enables data transfer between websites and desktop tools including Gaggle. In the last phase of this funding period, we have made substantial progress on development and application of the Gaggle integration framework. We implemented the workspace in the Network Portal. Users can capture data from Firegoose and save them to the workspace. Users can create workflows to start multiple software components programmatically and pass data between them. Results of analysis can be saved to the cloud so that they can be easily restored on any machine. We also developed the Gaggle Chrome Goose, a plugin for the Google Chrome browser, in tandem with an OpenCPU server in the Amazon EC2 cloud. This allows users to interactively perform data analysis on a single web page using the R packages deployed on the OpenCPU server. The cloud-based framework facilitates collaboration between researchers from multiple organizations.
We have made a number of enhancements to the cmonkey2 application to enable and improve the integration within different environments, and we have created a new tools pipeline for generating EGRIN2 models in a largely automated way.

  7. Knowledge-based classification of neuronal fibers in entire brain.

    PubMed

    Xia, Yan; Turken, U; Whitfield-Gabrieli, Susan L; Gabrieli, John D

    2005-01-01

    This work presents a framework driven by parcellation of brain gray matter in standard normalized space to classify the neuronal fibers obtained from diffusion tensor imaging (DTI) in entire human brain. Classification of fiber bundles into groups is an important step for the interpretation of DTI data in terms of functional correlates of white matter structures. Connections between anatomically delineated brain regions that are considered to form functional units, such as a short-term memory network, are identified by first clustering fibers based on their terminations in anatomically defined zones of gray matter according to Talairach Atlas, and then refining these groups based on geometric similarity criteria. Fiber groups identified this way can then be interpreted in terms of their functional properties using knowledge of functional neuroanatomy of individual brain regions specified in standard anatomical space, as provided by functional neuroimaging and brain lesion studies. PMID:16685847
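
    The grouping strategy in this abstract, first clustering by termination zones and then refining geometrically, can be sketched as follows. The zone labels, the choice of fiber length as the geometric criterion, and the bin width are illustrative assumptions, not the paper's actual parameters.

```python
# Sketch of the two-step grouping the abstract describes: cluster fibers by
# the atlas zones of their endpoints, then refine by a geometric criterion
# (here, simply fiber length). Zone labels and bin width are illustrative.
from collections import defaultdict
import math

def fiber_length(points):
    """Polyline length of a fiber given as a list of 3-D points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def group_fibers(fibers, zone_of):
    """fibers: list of point lists; zone_of: maps a 3-D point to an atlas label."""
    groups = defaultdict(list)
    for f in fibers:
        key = tuple(sorted((zone_of(f[0]), zone_of(f[-1]))))  # termination zones
        groups[key].append(f)
    # refine: split each termination group into coarse length bins (10 mm wide)
    refined = defaultdict(list)
    for key, members in groups.items():
        for f in members:
            refined[key + (round(fiber_length(f) / 10.0),)].append(f)
    return refined

# toy atlas: a plane splits "frontal" from "parietal"
zone_of = lambda p: "frontal" if p[1] > 0 else "parietal"
fibers = [[(0, 1, 0), (0, 5, 0)], [(0, 2, 0), (0, -3, 0)]]
groups = group_fibers(fibers, zone_of)
```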

  8. Knowledge-based design of a soluble bacteriorhodopsin.

    PubMed

    Gibas, C; Subramaniam, S

    1997-10-01

    Much knowledge has been accrued from high resolution protein structures. This knowledge provides rules and guidelines for the rational design of soluble proteins. We have extracted these rules and applied them to redesigning the structure of bacteriorhodopsin and to creating blueprints for a monomeric, soluble seven-helix bundle protein. Such a protein is likely to have desirable properties, such as ready crystallization, which membrane proteins lack and an internal structure similar to that of the native protein. While preserving residues shown to be necessary for protein function, we made modifications to the rest of the sequence, distributing polar and charged residues over the surface of the protein to achieve an amino acid composition as akin to that of soluble helical proteins as possible. A secondary goal was to increase apolar contacts in the helix intercalation regions of the protein. The scheme used to design the model sequences requires knowledge of the number and orientation of helices and some information about interior contacts, but detailed structural knowledge is not required to use a scheme of this type.

  9. Robotic grasping of unknown objects: A knowledge-based approach

    SciTech Connect

    Stansfield, S.A.

    1991-08-01

    In this article the author describes a general-purpose robotic grasping system for use in unstructured environments. Using computer vision and a compact set of heuristics, the system automatically generates the robot arm and hand motions required for grasping an unmodeled object. The utility of such a system is most evident in environments where the robot will have to grasp and manipulate a variety of unknown objects, but where many of the manipulation tasks may be relatively simple. Examples of such domains are planetary exploration and astronaut assistance, undersea salvage and rescue, and nuclear waste site clean-up. This work implements a two-stage model of grasping: stage one is an orientation of the hand and wrist and a ballistic reach toward the object; stage two is hand preshaping and adjustment. Visual features are first extracted from the unmodeled object. These features and their relations are used by an expert system to generate a set of valid reach/grasps for the object. These grasps are then used in driving the robot hand and arm to bring the fingers into contact with the object in the desired configuration. Experimental results are presented to illustrate the functioning of the system.

  10. Robotic grasping of unknown objects: A knowledge-based approach

    SciTech Connect

    Stansfield, S.A.

    1989-06-01

    In this paper, we demonstrate a general-purpose robotic grasping system for use in unstructured environments. Using computer vision and a compact set of heuristics, the system automatically generates the robot arm and hand motions required for grasping an unmodeled object. The utility of such a system is most evident in environments where the robot will have to grasp and manipulate a variety of unknown objects, but where many of the manipulation tasks may be relatively simple. Examples of such domains are planetary exploration and astronaut assistance, undersea salvage and rescue, and nuclear waste site clean-up. This work implements a two-stage model of grasping: stage one is an orientation of the hand and wrist and a ballistic reach toward the object; stage two is hand preshaping and adjustment. Visual features are first extracted from the unmodeled object. These features and their relations are used by an expert system to generate a set of valid reach/grasps for the object. These grasps are then used in driving the robot hand and arm to bring the fingers into contact with the object in the desired configuration. Experimental results are presented to illustrate the functioning of the system. 27 refs., 38 figs.

  11. Robotic grasping of unknown objects: A knowledge-based approach

    SciTech Connect

    Stansfield, S.A.

    1990-11-01

    In this paper, the authors demonstrate a general-purpose robotic grasping system for use in unstructured environments. Using computer vision and a compact set of heuristics, the system automatically generates the robot arm and hand motions required for grasping an unmodeled object. The utility of such a system is most evident in environments where the robot will have to grasp and manipulate a variety of unknown objects, but where many of the manipulation tasks may be relatively simple. Examples of such domains are planetary exploration and astronaut assistance, undersea salvage and rescue, and nuclear waste site clean-up. This work implements a two-stage model of grasping: stage one is an orientation of the hand and wrist and a ballistic reach toward the object; stage two is hand preshaping and adjustment. Visual features are first extracted from the unmodeled object. These features and their relations are used by an expert system to generate a set of valid reach/grasps for the object. These grasps are then used in driving the robot hand and arm to bring the fingers into contact with the object in the desired configuration. Experimental results are presented to illustrate the functioning of the system.
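
    The two-stage model common to these three records can be illustrated with a toy version of the expert-system step. All feature names, thresholds, and grasp labels below are hypothetical, not Stansfield's actual rule set.

```python
# Sketch of the expert-system stage from the grasping abstracts: visual
# features of an unmodeled object are matched against heuristic rules to
# propose valid reach/grasp configurations. Names and values are illustrative.

GRASP_RULES = [
    # (predicate over extracted features, proposed grasp)
    (lambda f: f["shape"] == "cylinder" and f["diameter_cm"] <= 8,
     {"preshape": "wrap", "approach": "side"}),
    (lambda f: f["shape"] == "cylinder" and f["diameter_cm"] > 8,
     {"preshape": "two_hand", "approach": "side"}),
    (lambda f: f["shape"] == "box" and f["height_cm"] < 4,
     {"preshape": "pinch", "approach": "top"}),
]

def valid_grasps(features):
    """Return every grasp whose rule fires for these visual features."""
    return [grasp for pred, grasp in GRASP_RULES if pred(features)]

# a narrow cylinder admits a single-hand wrap grasp from the side
grasps = valid_grasps({"shape": "cylinder", "diameter_cm": 6})
```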

  12. Compiling knowledge-based systems specified in KEE to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Feldman, Roy D.

    1991-01-01

    The first year of the PrKAda project is recounted. The primary goal was to develop a system for delivering Artificial Intelligence applications developed in the ProKappa system in a pure-Ada environment. The following areas are discussed: the ProKappa core and ProTalk programming language; the current status of the implementation; the limitations and restrictions of the current system; and the development of Ada-language message handlers in the ProKappa environment.

  13. A knowledge-based system for controlling automobile traffic

    NASA Technical Reports Server (NTRS)

    Maravas, Alexander; Stengel, Robert F.

    1994-01-01

    Transportation network capacity variations arising from accidents, roadway maintenance activity, and special events as well as fluctuations in commuters' travel demands complicate traffic management. Artificial intelligence concepts and expert systems can be useful in framing policies for incident detection, congestion anticipation, and optimal traffic management. This paper examines the applicability of intelligent route guidance and control as decision aids for traffic management. Basic requirements for managing traffic are reviewed, concepts for studying traffic flow are introduced, and mathematical models for modeling traffic flow are examined. Measures for quantifying transportation network performance levels are chosen, and surveillance and control strategies are evaluated. It can be concluded that automated decision support holds great promise for aiding the efficient flow of automobile traffic over limited-access roadways, bridges, and tunnels.

  14. Fuzzy logic controllers: A knowledge-based system perspective

    NASA Technical Reports Server (NTRS)

    Bonissone, Piero P.

    1993-01-01

    Over the last few years we have seen an increasing number of applications of Fuzzy Logic Controllers. These applications range from the development of auto-focus cameras, to the control of subway trains, cranes, automobile subsystems (automatic transmissions), domestic appliances, and various consumer electronic products. In summary, we consider a Fuzzy Logic Controller to be a high level language with its local semantics, interpreter, and compiler, which enables us to quickly synthesize non-linear controllers for dynamic systems.
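
    A minimal fuzzy logic controller of the kind the abstract surveys can be written in a few lines. The linguistic terms, membership shapes, and rule outputs below are illustrative, not taken from the paper.

```python
# A minimal fuzzy logic controller sketch: triangular membership functions,
# three rules, and centroid-style defuzzification over the rule outputs.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    # IF error is NEGATIVE THEN output -1
    # IF error is ZERO     THEN output  0
    # IF error is POSITIVE THEN output +1
    mu_neg = tri(error, -2.0, -1.0, 0.0)
    mu_zero = tri(error, -1.0, 0.0, 1.0)
    mu_pos = tri(error, 0.0, 1.0, 2.0)
    num = -1.0 * mu_neg + 0.0 * mu_zero + 1.0 * mu_pos
    den = mu_neg + mu_zero + mu_pos
    return num / den if den else 0.0

u = fuzzy_control(0.5)   # interpolates smoothly between ZERO and POSITIVE
```

The overlapping membership functions are what make the controller's output a smooth nonlinear function of its input, which is the sense in which the abstract calls it a high-level language for synthesizing nonlinear controllers.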

  15. ECGene: A Literature‐Based Knowledgebase of Endometrial Cancer Genes

    PubMed Central

    Liu, Yining; O'Mara, Tracy A

    2016-01-01

    ABSTRACT Endometrial cancer (EC) ranks as the sixth most common cancer among women worldwide. To better distinguish cancer subtypes and identify effective early diagnostic biomarkers, we need improved understanding of the biological mechanisms associated with EC dysregulated genes. Although there is a wealth of clinical and molecular information relevant to EC in the literature, there has been no systematic summary of EC‐implicated genes. In this study, we developed a literature‐based database ECGene (Endometrial Cancer Gene database) with comprehensive annotations. ECGene features manual curation of 414 genes from thousands of publications, results from eight EC gene expression datasets, precomputation of coexpressed long noncoding RNAs, and an EC‐implicated gene interactome. In the current release, we generated and comprehensively annotated a list of 458 EC‐implicated genes. We found the top‐ranked EC‐implicated genes are frequently mutated in The Cancer Genome Atlas (TCGA) tumor samples. Furthermore, systematic analysis of coexpressed lncRNAs provided insight into the important roles of lncRNA in EC development. ECGene has a user‐friendly Web interface and is freely available at http://ecgene.bioinfo‐minzhao.org/. As the first literature‐based online resource for EC, ECGene serves as a useful gateway for researchers to explore EC genetics. PMID:26699919

  16. Knowledge-based optical coatings design and manufacturing

    NASA Astrophysics Data System (ADS)

    Guenther, Karl H.; Gonzalez, Avelino J.; Yoo, Hoi J.

    1990-12-01

    The theory of thin film optics is well developed for the spectral analysis of a given optical coating. The inverse problem of synthesis, designing an optical coating for a specified spectral performance, is more complicated. Usually a multitude of theoretical designs is feasible because most design problems are under-determined, given the number of possible layers with three variables each (n, k, t). The expertise of a good thin film designer comes in at this point with a mostly intuitive selection of certain designs based on previous experience and current manufacturing capabilities. Manufacturing a designed coating poses yet another subset of multiple solutions, as thin film deposition technology has evolved over the years with a vast variety of different processes. The abundance of published literature may often be more confusing than helpful to the practicing thin film engineer, even if he has time and opportunity to read it. The choice of the right process is also severely limited by the given manufacturing hardware and cost considerations, which may not easily allow for the adoption of a new manufacturing approach, even if it promises to be better technically (it ought to be also cheaper). On the user end of the thin film coating business, the typical optical designer or engineer who needs an optical coating may have limited or no knowledge at all about the theoretical and manufacturing criteria for the optimum selection of what he needs. This can be sensed frequently in overly tight tolerances and requirements for optical performance which sometimes stretch the limits of mother nature. We introduce here a knowledge-based system (KBS) intended to assist expert designers and manufacturers in their task of maximizing results and minimizing errors, trial runs, and unproductive time. It will help the experts to manipulate parameters which are largely determined through heuristic reasoning by employing artificial intelligence techniques. In a later stage, the KBS will include a module allowing the layman user of coatings to make the right choice.
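
    The forward spectral analysis the abstract calls well developed is the characteristic-matrix method. A minimal normal-incidence sketch follows; the layer values are illustrative, and absorption (the k of the abstract's n, k, t) is ignored for simplicity.

```python
# Characteristic-matrix computation of reflectance for a thin-film stack at
# normal incidence. Layers are (refractive index, physical thickness in nm).
import cmath, math

def reflectance(layers, n_inc, n_sub, wavelength_nm):
    """Reflectance of a lossless stack between incident medium and substrate."""
    B, C = 1.0 + 0j, n_sub + 0j              # admittance vector at the substrate
    for n, d in reversed(layers):            # multiply characteristic matrices
        delta = 2 * math.pi * n * d / wavelength_nm   # phase thickness
        m11 = cmath.cos(delta)
        m12 = 1j * cmath.sin(delta) / n
        m21 = 1j * n * cmath.sin(delta)
        B, C = m11 * B + m12 * C, m21 * B + m11 * C
    r = (n_inc * B - C) / (n_inc * B + C)    # amplitude reflection coefficient
    return abs(r) ** 2

# quarter-wave MgF2-like single layer on glass, evaluated at 550 nm
R_coated = reflectance([(1.38, 550 / (4 * 1.38))], n_inc=1.0, n_sub=1.52,
                       wavelength_nm=550.0)
R_bare = reflectance([], n_inc=1.0, n_sub=1.52, wavelength_nm=550.0)
```

For a single quarter-wave layer this reproduces the textbook result R = ((n0·ns − n1²)/(n0·ns + n1²))², about 1.3% against roughly 4.3% for bare glass.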

  17. Design of a knowledge-based welding advisor

    SciTech Connect

    Kleban, S.D.

    1996-06-01

    Expert system implementation can take numerous forms, ranging from traditional declarative rule-based systems with if-then syntax to imperative programming languages that capture expertise in procedural code. The artificial intelligence community generally thinks of expert systems as rules or rule-bases and an inference engine to process the knowledge. The welding advisor developed at Sandia National Laboratories and described in this paper deviates from this by codifying expertise using object representation and methods. Objects allow computer scientists to model the world as humans perceive it, giving us a very natural way to encode expert knowledge. The design of the welding advisor, which generates and evaluates solutions, will be compared and contrasted to a traditional rule-based system.

  18. Data Discovery and Access via the Heliophysics Events Knowledgebase (HEK)

    NASA Astrophysics Data System (ADS)

    Somani, A.; Hurlburt, N. E.; Schrijver, C. J.; Cheung, M.; Freeland, S.; Slater, G. L.; Seguin, R.; Timmons, R.; Green, S.; Chang, L.; Kobashi, A.; Jaffey, A.

    2011-12-01

    The HEK is an integrated system which helps direct scientists to solar events and data from a variety of providers. The system is fully operational and adoption of HEK has been growing since the launch of NASA's SDO mission. In this presentation we describe the different components that comprise HEK. The Heliophysics Events Registry (HER) and Heliophysics Coverage Registry (HCR) form the two major databases behind the system. The HCR allows the user to search on coverage event metadata for a variety of instruments. The HER allows the user to search on annotated event metadata for a variety of instruments. Both the HCR and HER are accessible via a web API which can return search results in machine readable formats (e.g. XML and JSON). A variety of SolarSoft services are also provided to allow users to search the HEK as well as obtain and manipulate data. Other components include:
    - the Event Detection System (EDS), which continually runs feature-finding algorithms on SDO data to populate the HER with relevant events;
    - a web form for users to request SDO data cutouts for multiple AIA channels as well as HMI line-of-sight magnetograms;
    - iSolSearch, which allows a user to browse events in the HER and search for specific events over a specific time interval, all within a graphical web page;
    - Panorama, the software tool used for rapid visualization of large volumes of solar image data in multiple channels/wavelengths, in which the user can also easily create WYSIWYG movies and launch the Annotator tool to describe events and features;
    - EVACS, which provides a JOGL-powered client for the HER and HCR, displaying the searched-for events on a full-disk magnetogram of the sun while showing more detailed information for events.

  19. KBSIM: a system for interactive knowledge-based simulation.

    PubMed

    Hakman, M; Groth, T

    1991-01-01

    The KBSIM system integrates quantitative simulation with symbolic reasoning techniques, under the control of a user interface management system, using a relational database management system for data storage and interprocess communication. The system stores and processes knowledge from three distinct knowledge domains, viz. (i) knowledge about the processes of the system under investigation, expressed in terms of a Continuous System Simulation Language (CSSL); (ii) heuristic knowledge on how to reach the goals of the simulation experiment, expressed in terms of a Rule Description Language (RDL); and (iii) knowledge about the requirements of the intended users, expressed in terms of a User Interface Description Language (UIDL). The user works in an interactive environment controlling the simulation course with use of a mouse and a large screen containing a set of 'live' charts and forms. The user is assisted by an embedded 'expert system' module continuously watching both the system's behavior and the user's actions, and producing alerts, alarms, comments and advice. The system was developed on a Hewlett-Packard 9000/350 workstation under the HP-Unix and HP-Windows operating systems, using the MIMER database management system, and Fortran, Prolog/Lisp and C as implementation languages. The KBSIM system has great potential for supporting problem solving, design of working procedures and teaching related to management of highly dynamic systems. PMID:2060297
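
    The embedded watcher module the abstract describes can be caricatured as rules over simulation state. The rule names, predicates, and severities below are invented for illustration and do not reflect KBSIM's actual Rule Description Language.

```python
# Sketch of an embedded "expert system" watcher: rules over simulation state
# fire alerts and alarms during a run. Rule syntax is illustrative only.

ALERT_RULES = [
    # (rule name, predicate over state, severity)
    ("heart_rate_high", lambda s: s["heart_rate"] > 120, "alarm"),
    ("pressure_drift", lambda s: abs(s["pressure"] - 1.0) > 0.2, "alert"),
]

def watch(state):
    """Return the (rule, severity) pairs that fire for this simulation state."""
    return [(name, sev) for name, pred, sev in ALERT_RULES if pred(state)]

# one simulation step: heart rate out of range, pressure within tolerance
fired = watch({"heart_rate": 130, "pressure": 1.05})
```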

  20. Knowledge-Based Interpretation Of Scanned Business Letters

    NASA Astrophysics Data System (ADS)

    Kreich, Joachim; Luhn, Achim; Maderlechner, Gerd

    1989-07-01

    Office Automation by electronic text processing has not reduced the amount of paper used for communication and storage. The present boom of FAX-Systems proves this tendency. With this growing degree of office automation the paper-computer interface becomes increasingly important. To be useful, this interface must be able to handle documents containing text as well as graphics, and convert them into an electronic representation that not only captures content (like in current OCR readers), but also the layout and logic structure. We describe a system for the analysis of business letters which is able to extract the key elements of a letter like its sender, the date, etc. The letter can thus for instance be stored in electronic archival systems, edited by structure editors, or forwarded via electronic mail services. This system was implemented on a Symbolics Lisp machine for the high level part of the analysis and on a VAX for the low and medium level processing stages. Some practical results are presented and discussed. Apart from this application our system is a useful testbed to implement and test sophisticated control structures and model representations for image understanding.

  1. A model for a knowledge-based system's life cycle

    NASA Technical Reports Server (NTRS)

    Kiss, Peter A.

    1990-01-01

    The American Institute of Aeronautics and Astronautics has initiated a Committee on Standards for Artificial Intelligence. Presented here are the initial efforts of one of the working groups of that committee. The purpose here is to present a candidate model for the development life cycle of Knowledge Based Systems (KBS). The intent is for the model to be used by the Aerospace Community and eventually be evolved into a standard. The model is rooted in the evolutionary model, borrows from the spiral model, and is embedded in the standard Waterfall model for software development. Its intent is to satisfy the development of both stand-alone and embedded KBSs. The phases of the life cycle are detailed, as are the review points that constitute the key milestones throughout the development process. The applicability and strengths of the model are discussed along with areas needing further development and refinement by the aerospace community.

  2. Multilingual Knowledge-Based Concept Recognition in Textual Data

    NASA Astrophysics Data System (ADS)

    Schierle, Martin; Trabold, Daniel

    With respect to the increasing volume of textual data which is available through digital resources today, the identification of the main concepts in those texts becomes increasingly important and can be seen as a vital step in the analysis of unstructured information.

  3. Interference and economic threshold level of little seed canary grass in wheat under different sowing times.

    PubMed

    Hussain, Saddam; Khaliq, Abdul; Matloob, Amar; Fahad, Shah; Tanveer, Asif

    2015-01-01

    Little seed canary grass (LCG) is a pernicious weed of wheat crop causing enormous yield losses. Information on the interference and economic threshold (ET) level of LCG is of prime significance to rationalize the use of herbicide for its effective management in wheat fields. The present study was conducted to quantify interference and ET density of LCG in mid-sown (20 November) and late-sown (10 December) wheat. The experiment was laid out in triplicate in a randomized split-plot design with sowing dates as the main plots and LCG densities (10, 20, 30, and 40 plants m(-2)) as the subplots. Plots with natural weed infestations, one including and one excluding LCG, were maintained for comparing its interference in pure stands with the designated densities. A season-long weed-free treatment was also run. Results indicated that the composite stand of weeds including LCG, and the density of 40 LCG plants m(-2), were more competitive with wheat, especially when the crop was sown late in the season. Maximum weed dry biomass was attained by the composite stand of weeds including LCG, followed by 40 LCG plants m(-2), under both sowing dates. Significant variations in wheat growth and yield were observed under the influence of different LCG densities as well as sowing dates. Presence of 40 LCG plants m(-2) reduced wheat yield by 28 and 34% in mid- and late-sown wheat crop, respectively. These losses were much greater than those for infestation of all weeds excluding LCG. A linear regression model was effective in simulating wheat yield losses over a wide range of LCG densities, and the regression equations showed good fit to the observed data. The ET levels of LCG were 6-7 and 2.2-3.3 plants m(-2) in mid- and late-sown wheat crop, respectively. Herbicide should be applied when LCG density exceeds these levels under the respective sowing dates.
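
    The economic-threshold arithmetic behind figures like the abstract's 6-7 plants m(-2) can be sketched with an ordinary least-squares fit of yield loss against weed density. The data points, crop value, and control cost below are made up for illustration.

```python
# Sketch of an economic-threshold (ET) calculation: fit a linear yield-loss
# model to weed density, then find the density at which the value of the
# avoided loss equals the cost of control. All numbers are illustrative.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (pure Python)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# hypothetical yield loss (%) observed at LCG densities (plants per m^2)
densities = [0, 10, 20, 30, 40]
losses = [0.0, 7.0, 14.5, 21.0, 28.0]
a, b = fit_line(densities, losses)          # b: loss (%) per additional plant

def economic_threshold(crop_value, control_cost, loss_per_plant_pct):
    """Weed density at which the value of the yield loss equals control cost."""
    return control_cost / (crop_value * loss_per_plant_pct / 100.0)

et = economic_threshold(crop_value=1000.0, control_cost=50.0,
                        loss_per_plant_pct=b)   # ~7.1 plants per m^2
```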

  4. Thermal Performance Testing of EMU and CSAFE Liquid Cooling Garments

    NASA Technical Reports Server (NTRS)

    Rhodes, Richard; Bue, Grant; Meginnis, Ian; Hakam, Mary; Radford, Tamara

    2013-01-01

    Future exploration missions require the development of a new liquid cooling garment (LCG) to support the next generation extravehicular activity (EVA) suit system. The new LCG must offer greater system reliability, optimal thermal performance as required by mission directive, and meet other design requirements including improved tactile comfort. To advance the development of a future LCG, a thermal performance test was conducted to evaluate: (1) the comparable thermal performance of the EMU LCG and the CSAFE developed engineering evaluation unit (EEU) LCG, (2) the effect of the thermal comfort undergarment (TCU) on the EMU LCG tactile and thermal comfort, and (3) the performance of a torso or upper body only LCG shirt to evaluate a proposed auxiliary loop. To evaluate the thermal performance of each configuration, a metabolic test was conducted using the Demonstrator Spacesuit to create a relevant test environment. Three (3) male test subjects of similar height and weight walked on a treadmill at various speeds to produce three different metabolic loads - resting (300-600 BTU/hr), walking at a slow pace (1200 BTU/hr), and walking at a brisk pace (2200 BTU/hr). Each subject participated in five tests - two wearing the CSAFE full LCG, one wearing the EMU LCG without TCUs, one wearing the EMU LCG with TCUs, and one with the CSAFE shirt-only. During the test, performance data for the breathing air and cooling water systems and subject specific data was collected to define the thermal performance of the configurations. The test results show that the CSAFE EEU LCG and EMU LCG with TCU had comparable performance. The testing also showed that an auxiliary loop LCG, sized similarly to the shirt-only configuration, should provide adequate cooling for contingency scenarios. Finally, the testing showed that the TCU did not significantly hinder LCG heat transfer, and may prove to be acceptable for future suit use with additional analysis and testing.

  5. Segmentation of the lumbar spine with knowledge-based shape models

    NASA Astrophysics Data System (ADS)

    Kohnen, Michael; Mahnken, Andreas H.; Brandt, Alexander S.; Steinberg, Stephan; Guenther, Rolf W.; Wein, Berthold B.

    2002-05-01

    A shape model for fully automatic segmentation and recognition of lateral lumbar spine radiographs has been developed. The shape model is able to learn the shape variations from a training dataset by a principal component analysis of the shape information. Furthermore, specific image features at each contour point are added into models of gray value profiles. These models were computed from a training dataset consisting of 25 manually segmented lumbar spines. The application of the model containing both shape and image information is optimized on unknown images using a simulated annealing search to first acquire a coarse localization of the model. The shape points are then iteratively moved towards image structures matching the gray value models. During optimization the shape information of the model assures that the segmented object boundary stays plausible. The shape model was tested on 65 unknown images, achieving a mean segmentation accuracy of 88%, measured as the percentage overlap between the resulting and the manually drawn contours.
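
    The shape-model training step, a PCA of landmark coordinates from a training set, can be sketched as a point-distribution model. The synthetic training shapes and the single-mode setting are illustrative, not the paper's 25-spine dataset.

```python
# Point-distribution-model sketch: landmarks from a training set are
# mean-centred and a PCA (via SVD) yields the main modes of shape variation;
# new plausible shapes are synthesized as x = mean + P @ b.
import numpy as np

def train_shape_model(shapes, n_modes=1):
    """shapes: (n_samples, 2*n_landmarks) flattened landmark coordinates."""
    X = np.asarray(shapes, dtype=float)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    P = Vt[:n_modes].T                       # principal shape modes, columns
    var = (S[:n_modes] ** 2) / (len(X) - 1)  # variance explained per mode
    return mean, P, var

def synthesize(mean, P, b):
    """Generate a shape from mode weights b; constraining b keeps it plausible."""
    return mean + P @ np.asarray(b, dtype=float)

# three synthetic two-landmark shapes varying along a single direction
shapes = [[0, 0, 1.0, 1.0], [0, 0, 1.2, 1.2], [0, 0, 0.8, 0.8]]
mean, P, var = train_shape_model(shapes, n_modes=1)
new_shape = synthesize(mean, P, [0.1])
```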

  6. Teachers in Digital Knowledge-Based Society: New Roles and Vision.

    ERIC Educational Resources Information Center

    Kim, Chong Yang

    2002-01-01

    Proposes a vision of future education influenced by the uniqueness of Asian culture, the building of an Asia-Pacific network of teachers, and ways to effectively utilize the potential of digital society, all from the basis of Asian culture and tradition. (Contains five references.) (AUTHOR)

  7. The discovery of bioisoster compound for plumbagin using the knowledge-based rational method

    NASA Astrophysics Data System (ADS)

    Jeong, Seo Hee; Choi, Jung Sup; Ko, Young Kwan; Kang, Nam Sook

    2015-04-01

    Arabidopsis thaliana 7-Keto-8-AminoPelargonic Acid Synthase (AtKAPAS) is a crucial herbicide target, and AtKAPAS inhibitors are widely available in the agrochemical market. The herbicide plumbagin is known to be a potent inhibitor of AtKAPAS, but it is extremely toxic. In this study, we identified the metabolic site of plumbagin and also performed a similarity-based library analysis using 2D fingerprints and a docking study. Four virtual hit compounds were derived from plumbagin. Treatment of Digitaria ciliaris with compound 2, one of the four hit compounds, stunted the growth of leaves, and the leaf tissue was desiccated or burned within three days. Thus, we expect that compound 2 will be developed as a new herbicide, and additionally our strategy will provide helpful information for optimizing lead compounds.

  8. Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)

    NASA Technical Reports Server (NTRS)

    Dalton, Shelly D.; Daley, Philip C.

    1988-01-01

    As the hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.

  9. Applications of the automatic change detection for disaster monitoring by the knowledge-based framework

    NASA Astrophysics Data System (ADS)

    Tadono, T.; Hashimoto, S.; Onosato, M.; Hori, M.

    2012-11-01

    Change detection is a fundamental approach in the utilization of satellite remote sensing imagery, especially in multi-temporal analysis such as extracting areas damaged by a natural disaster. Recently, the amount of data obtained by Earth observation satellites has increased significantly owing to the increasing number and types of observing sensors, the enhancement of their spatial resolution, and improvements in their data processing systems. In applications for disaster monitoring, in particular, fast and accurate analysis of broad geographical areas is required to facilitate efficient rescue efforts, so robust automatic image interpretation is needed. Several algorithms for automatic change detection have been proposed in the past; however, they still lack robustness across multiple purposes, independence from the observing instrument, and accuracy better than manual interpretation. We are developing a framework for automatic image interpretation using ontology-based knowledge representation. This framework permits the description, accumulation, and use of knowledge drawn from image interpretation. Local relationships among certain concepts defined in the ontology are described as knowledge modules and are collected in the knowledge base. The knowledge representation uses a Bayesian network as a tool to describe various types of knowledge in a uniform manner. Knowledge modules are synthesized and used for target-specified inference. This paper presents the results of applying the framework, without any modification or tuning, to two types of disasters.

  10. Expert knowledge-based assessment of farming practices for different biotic indicators using fuzzy logic.

    PubMed

    Sattler, Claudia; Stachow, Ulrich; Berger, Gert

    2012-03-01

    The study presented here describes a modeling approach for the ex-ante assessment of farming practices with respect to their risk for several single-species biodiversity indicators. The approach is based on fuzzy-logic techniques and, thus, is tolerant to the inclusion of sources of uncertain knowledge, such as expert judgment into the assessment. The result of the assessment is a so-called Index of Suitability (IS) for the five selected biotic indicators calculated per farming practice. Results of IS values are presented for the comparison of crops and for the comparison of several production alternatives per crop (e.g., organic vs. integrated farming, mineral vs. organic fertilization, and reduced vs. plow tillage). Altogether, the modeled results show that the different farming practices can greatly differ in terms of their suitability for the different biotic indicators and that the farmer has a certain scope of flexibility in opting for a farming practice that is more in favor of biodiversity conservation. Thus, the approach is apt to identify farming practices that contribute to biodiversity conservation and, moreover, enables the identification of farming practices that are suitable with respect to more than one biotic indicator.
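
    A minimal sketch of fuzzy-logic scoring of a farming practice into an Index of Suitability for one biotic indicator; the two inputs, the membership functions, and the rules below are invented placeholders, not the authors' actual expert rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def index_of_suitability(tillage_intensity, pesticide_load):
    """Toy Mamdani-style inference: two practice attributes on a 0-1 scale
    are mapped to an Index of Suitability (IS) between 0 (unsuitable) and 1."""
    low_till = tri(tillage_intensity, -0.5, 0.0, 0.6)
    high_till = tri(tillage_intensity, 0.4, 1.0, 1.5)
    low_pest = tri(pesticide_load, -0.5, 0.0, 0.6)
    high_pest = tri(pesticide_load, 0.4, 1.0, 1.5)

    # Rules: min() plays the role of AND; each rule votes for an IS level.
    rules = [
        (min(low_till, low_pest), 1.0),    # gentle practice -> suitable
        (min(high_till, high_pest), 0.0),  # intensive practice -> unsuitable
        (max(min(low_till, high_pest), min(high_till, low_pest)), 0.5),
    ]
    # Weighted-average (centroid-like) defuzzification.
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5
```

    The appeal of the fuzzy formulation, as the abstract notes, is that vague expert judgments ("reduced tillage is somewhat better for ground-nesting birds") translate directly into such graded rules.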

  11. A knowledge-based design framework for airplane conceptual and preliminary design

    NASA Astrophysics Data System (ADS)

    Anemaat, Wilhelmus A. J.

    The goal of work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure which can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment, with the same AAA methods (i.e. the knowledge), is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This will lead to the following benefits: (1) Reduced design time: computer aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: due to less training and fewer calculation errors, substantial savings in design time and related cost can be obtained. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge based design have been developed for detailed design, currently no such integrated knowledge based conceptual and preliminary airplane design system exists. The third-generation AAA methods have been tested over a ten-year period on many different airplane designs and have demonstrated significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single engine propeller, business jets, airliners, and UAVs to fighters. Data for the varied sizing methods will be compared with AAA results, to validate these methods.
One new design, a Light Sport Aircraft (LSA), will be developed as an exercise to use the tool for designing a new airplane. Using these tools will show an improvement in efficiency over using separate programs due to the automatic recalculation with any change of input data. The direct visual feedback of 3D geometry in the AAA-AML, will lead to quicker resolving of problems as opposed to conventional methods.

  12. Exploring Architecture Options for a Federated, Cloud-based System Biology Knowledgebase

    SciTech Connect

    Gorton, Ian; Liu, Yan; Yin, Jian

    2010-12-02

    This paper evaluates various cloud computing technologies and resources for building a systems biology knowledgebase. This system will host a huge amount of data and contain a flexible set of workflows to operate on these data. It will enable systems biologists to share their data and algorithms so that research results can be reproduced, shared, and reused across the systems biology community.

  13. Predicting hydrologic response through a hierarchical catchment knowledgebase: A Bayes empirical Bayes approach

    NASA Astrophysics Data System (ADS)

    Smith, Tyler; Marshall, Lucy; Sharma, Ashish

    2014-02-01

    Making useful predictions in ungauged basins is a difficult task given the limitations of hydrologic models in representing physical processes appropriately across the heterogeneity within and among different catchments. Here, we introduce a new method for this challenge, Bayes empirical Bayes, that allows for the statistical pooling of information from multiple donor catchments and provides the ability to transfer parametric distributions rather than single parameter sets to the ungauged catchment. Further, the methodology provides an efficient framework with which to formally assess predictive uncertainty at the ungauged catchment. We investigated the utility of the methodology under both synthetic and real data conditions, and with respect to its sensitivity to the number and quality of the donor catchments used. This study highlighted the ability of the hierarchical Bayes empirical Bayes approach to produce expected outcomes in both the synthetic and real data applications. The method was found to be sensitive to the quality (hydrologic similarity) of the donor catchments used. Results were less sensitive to the number of donor catchments, but indicated that predictive uncertainty was best constrained with larger numbers of donor catchments (but still adequate with fewer donors).
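
    The core pooling idea, transferring a distribution assembled from donor catchments rather than a single calibrated parameter set, can be sketched as follows; the mixture-of-donors form and all names are assumptions for illustration, not the paper's actual hierarchical formulation.

```python
import random

def pooled_prior_sample(donor_samples, weights=None, rng=random):
    """Draw one parameter set for the ungauged catchment from a mixture of
    donor-catchment posterior samples. Donors are weighted equally unless
    similarity-based weights are supplied.

    donor_samples: list of lists, one list of parameter tuples per donor.
    """
    if weights is None:
        weights = [1.0] * len(donor_samples)
    donor = rng.choices(range(len(donor_samples)), weights=weights, k=1)[0]
    return rng.choice(donor_samples[donor])

def predictive_ensemble(donor_samples, n=1000, weights=None, rng=random):
    """An ensemble of parameter sets that carries the donors' parametric
    uncertainty to the ungauged site, instead of one transferred set."""
    return [pooled_prior_sample(donor_samples, weights, rng) for _ in range(n)]
```

    Running the hydrologic model over such an ensemble is what yields the formal predictive-uncertainty bounds the abstract refers to, and down-weighting hydrologically dissimilar donors reflects the reported sensitivity to donor quality.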

  14. A Knowledge-Based Approach to Retrieving Teaching Materials for Context-Aware Learning

    ERIC Educational Resources Information Center

    Shih, Wen-Chung; Tseng, Shian-Shyong

    2009-01-01

    With the rapid development of wireless communication and sensor technologies, ubiquitous learning has become a promising solution to educational problems. In context-aware ubiquitous learning environments, it is required that learning content is retrieved according to environmental contexts, such as learners' location. Also, a learning content…

  15. Knowledge-based object recognition for different morphological classes of plants

    NASA Astrophysics Data System (ADS)

    Brendel, Thorsten; Schwanke, Joerg; Jensch, Peter F.; Megnet, Roland

    1995-01-01

    Micropropagation of plants is done by cutting juvenile plants and placing them into special container boxes with nutrient solution, where the pieces can grow and be cut again several times. To produce large amounts of biomass, it is necessary to perform plant micropropagation with a robotic system. In this paper we describe parts of the vision system that recognizes plants and their particular cutting points. To do so, it is necessary to extract elements of the plants (for example root, shoot, leaf) and relations between these elements. Different species vary in their morphological appearance, and variation is also inherent in plants of the same species. Therefore, we introduce several morphological classes of plants for which we expect the same recognition methods to apply. As a result of our work, we present rules which help users to create specific algorithms for object recognition of plant species.

  16. Knowledge-Based CAI: CINS for Individualized Curriculum Sequencing. Final Technical Report No. 290.

    ERIC Educational Resources Information Center

    Wescourt, Keith T.; And Others

    This report describes research on the Curriculum Information Network (CIN) paradigm for computer-assisted instruction (CAI) in technical subjects. The CIN concept was first conceived and implemented in the BASIC Instructional Program (BIP). The primary objective of CIN-based CAI and the BIP project has been to develop procedures for providing each…

  17. Initial Validation of a Knowledge-Based Measure of Social Information Processing and Anger Management

    ERIC Educational Resources Information Center

    Leff, Stephen S.; Cassano, Michael; MacEvoy, Julie Paquette; Costigan, Tracy

    2010-01-01

    Over the past fifteen years many schools have utilized aggression prevention programs. Despite these apparent advances, many programs are not examined systematically to determine the areas in which they are most effective. One reason for this is that many programs, especially those in urban under-resourced areas, do not utilize outcome measures…

  18. Knowledge-Based Inferences across the Hemispheres: Domain Makes a Difference

    ERIC Educational Resources Information Center

    Shears, Connie; Hawkins, Amanda; Varner, Andria; Lewis, Lindsey; Heatley, Jennifer; Twachtmann, Lisa

    2008-01-01

    Language comprehension occurs when the left-hemisphere (LH) and the right-hemisphere (RH) share information derived from discourse [Beeman, M. J., Bowden, E. M., & Gernsbacher, M. A. (2000). Right and left hemisphere cooperation for drawing predictive and coherence inferences during normal story comprehension. "Brain and Language, 71", 310-336].…

  19. Developing a Physiologically-Based Pharmacokinetic Model Knowledgebase in Support of Provisional Model Construction

    EPA Science Inventory

    Developing physiologically-based pharmacokinetic (PBPK) models for chemicals can be resource-intensive, as neither chemical-specific parameters nor in vivo pharmacokinetic data are easily available for model construction. Previously developed, well-parameterized, and thoroughly-v...

  20. Preparing Oral Examinations of Mathematical Domains with the Help of a Knowledge-Based Dialogue System.

    ERIC Educational Resources Information Center

    Schmidt, Peter

    A conception of discussing mathematical material in the domain of calculus is outlined. Applications include that university students work at their knowledge and prepare for their oral examinations by utilizing the dialog system. The conception is based upon three pillars. One central pillar is a knowledge base containing the collections of…

  1. miRegulome: a knowledge-base of miRNA regulomics and analysis

    PubMed Central

    Barh, Debmalya; Kamapantula, Bhanu; Jain, Neha; Nalluri, Joseph; Bhattacharya, Antaripa; Juneja, Lucky; Barve, Neha; Tiwari, Sandeep; Miyoshi, Anderson; Azevedo, Vasco; Blum, Kenneth; Kumar, Anil; Silva, Artur; Ghosh, Preetam

    2015-01-01

    miRNAs regulate gene expression post-transcriptionally by targeting multiple mRNAs and hence can modulate multiple signalling pathways, biological processes, and patho-physiologies. Therefore, understanding of miRNA regulatory networks is essential in order to modulate the functions of a miRNA. The focus of several existing databases is to provide information on specific aspects of miRNA regulation. However, an integrated resource on the miRNA regulome is currently not available to facilitate the exploration and understanding of miRNA regulomics. miRegulome attempts to bridge this gap. The current version of miRegulome v1.0 provides details on the entire regulatory modules of miRNAs altered in response to chemical treatments and transcription factors, based on validated data manually curated from published literature. Modules of miRegulome (upstream regulators, downstream targets, miRNA-regulated pathways, functions, diseases, etc.) are hyperlinked to an appropriate external resource and are displayed visually to provide a comprehensive understanding. Four analysis tools are incorporated to identify relationships among different modules based on user-specified datasets. miRegulome and its tools are helpful in understanding the biology of miRNAs and will also facilitate the discovery of biomarkers and therapeutics. With added features in upcoming releases, miRegulome will be an essential resource to the scientific community. Availability: http://bnet.egr.vcu.edu/miRegulome. PMID:26243198

  2. Knowledge-based image data management - An expert front-end for the BROWSE facility

    NASA Technical Reports Server (NTRS)

    Stoms, David M.; Star, Jeffrey L.; Estes, John E.

    1988-01-01

    An intelligent user interface being added to the NASA-sponsored BROWSE testbed facility is described. BROWSE is a prototype system designed to explore issues involved in locating image data in distributed archives and displaying low-resolution versions of that imagery at a local terminal. For prototyping, the initial application is the remote sensing of forest and range land.

  3. HRM in the Knowledge-based Economy: Is There an Afterlife?

    ERIC Educational Resources Information Center

    Raich, Mario

    2002-01-01

    Explains changes in the workplace attributed to the knowledge economy and poses questions for businesses, workers, and the human resources function. Outlines new expectations of and a new framework for human resource management. (SK)

  4. Knowledge-based decision support for Space Station assembly sequence planning

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A complete Personal Analysis Assistant (PAA) for Space Station Freedom (SSF) assembly sequence planning consists of three software components: the system infrastructure, intra-flight value added, and inter-flight value added. The system infrastructure is the substrate on which software elements providing inter-flight and intra-flight value-added functionality are built. It provides the capability for building representations of assembly sequence plans and specification of constraints and analysis options. Intra-flight value-added provides functionality that will, given the manifest for each flight, define cargo elements, place them in the National Space Transportation System (NSTS) cargo bay, compute performance measure values, and identify violated constraints. Inter-flight value-added provides functionality that will, given major milestone dates and capability requirements, determine the number and dates of required flights and develop a manifest for each flight. The current project is Phase 1 of a projected two phase program and delivers the system infrastructure. Intra- and inter-flight value-added were to be developed in Phase 2, which has not been funded. Based on experience derived from hundreds of projects conducted over the past seven years, ISX developed an Intelligent Systems Engineering (ISE) methodology that combines the methods of systems engineering and knowledge engineering to meet the special systems development requirements posed by intelligent systems, systems that blend artificial intelligence and other advanced technologies with more conventional computing technologies. The ISE methodology defines a phased program process that begins with an application assessment designed to provide a preliminary determination of the relative technical risks and payoffs associated with a potential application, and then moves through requirements analysis, system design, and development.

  5. An “ADME Module” in the Adverse Outcome Pathway Knowledgebase

    EPA Science Inventory

    The Adverse Outcome Pathway (AOP) framework has generated intense interest for its utility to organize knowledge on the toxicity mechanisms, starting from a molecular initiating event (MIE) to an adverse outcome across various levels of biological organization. While the AOP fra...

  6. Knowledge-Based, Central Nervous System (CNS) Lead Selection and Lead Optimization for CNS Drug Discovery.

    PubMed

    Ghose, Arup K; Herbertz, Torsten; Hudkins, Robert L; Dorsey, Bruce D; Mallamo, John P

    2012-01-18

    The central nervous system (CNS) is the major area affected by aging. Alzheimer's disease (AD), Parkinson's disease (PD), brain cancer, and stroke are CNS diseases whose treatment will cost trillions of dollars. Achievement of appropriate blood-brain barrier (BBB) penetration is often considered a significant hurdle in the CNS drug discovery process. On the other hand, BBB penetration may be a liability for many of the non-CNS drug targets, and a clear understanding of the physicochemical and structural differences between CNS and non-CNS drugs may assist both research areas. Because of the numerous and challenging issues in CNS drug discovery and the low success rates, pharmaceutical companies are beginning to deprioritize their drug discovery efforts in the CNS arena. Prompted by these challenges and to aid in the design of high-quality, efficacious CNS compounds, we analyzed the physicochemical property and chemical structural profiles of 317 CNS and 626 non-CNS oral drugs. The conclusions derived provide an ideal property profile for lead selection and a property modification strategy for the lead optimization process. A list of substructural units that may be useful for CNS drug design is also provided. A classification tree was also developed to differentiate between CNS drugs and non-CNS oral drugs. The combined analysis provided the following guidelines for designing high-quality CNS drugs: (i) topological molecular polar surface area of <76 Å² (25-60 Å²), (ii) at least one (one or two, including one aliphatic amine) nitrogen, (iii) fewer than seven (two to four) linear chains outside of rings, (iv) fewer than three (zero or one) polar hydrogen atoms, (v) volume of 740-970 Å³, (vi) solvent-accessible surface area of 460-580 Å², and (vii) a positive QikProp CNS parameter. The ranges within parentheses may be used during lead optimization. One violation of this proposed profile may be acceptable.
The chemoinformatics approaches for graphically analyzing multiple properties efficiently are presented.
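
    The seven guidelines above can be read as a simple rule-based filter. The sketch below encodes them with the stated one-violation tolerance; the property values are assumed to arrive precomputed (e.g. from a modeling package) as a plain dictionary, and the key names are invented for illustration.

```python
# The seven lead-selection ranges from the abstract, as (property, test) pairs.
CNS_RULES = [
    ("tpsa", lambda v: v < 76),             # topological PSA < 76 A^2
    ("n_nitrogen", lambda v: v >= 1),       # at least one nitrogen atom
    ("linear_chains", lambda v: v < 7),     # < 7 linear chains outside rings
    ("polar_h", lambda v: v < 3),           # fewer than 3 polar hydrogens
    ("volume", lambda v: 740 <= v <= 970),  # molecular volume in A^3
    ("sasa", lambda v: 460 <= v <= 580),    # solvent-accessible surface, A^2
    ("qikprop_cns", lambda v: v > 0),       # positive QikProp CNS parameter
]

def passes_cns_profile(props, max_violations=1):
    """Return True if a compound's computed properties fit the CNS lead
    profile, tolerating at most one violation as the abstract suggests."""
    violations = sum(0 if test(props[name]) else 1 for name, test in CNS_RULES)
    return violations <= max_violations
```

    The narrower parenthesized ranges from the abstract could be substituted for the lead-optimization stage by tightening each test.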

  7. Discovery of new [Formula: see text] proteasome inhibitors using a knowledge-based computational screening approach.

    PubMed

    Mehra, Rukmankesh; Chib, Reena; Munagala, Gurunadham; Yempalla, Kushalava Reddy; Khan, Inshad Ali; Singh, Parvinder Pal; Khan, Farrah Gul; Nargotra, Amit

    2015-11-01

    Mycobacterium tuberculosis bacteria cause deadly infections in patients. The rise of multidrug resistance further complicates treatment of the disease. The M. tuberculosis proteasome is necessary for the pathogenesis of the bacterium and has been validated as an anti-tubercular target, making it an attractive enzyme for designing Mtb inhibitors. In this study, a computational screening approach was applied to identify new proteasome inhibitor candidates from a library of 50,000 compounds. This chemical library was procured from the ChemBridge (20,000 compounds) and the ChemDiv (30,000 compounds) databases. After a detailed analysis of the computational screening results, 50 in silico hits were retrieved and tested in vitro, yielding 15 compounds with [Formula: see text] values ranging from 35.32 to 64.15 [Formula: see text]M on lysate. A structural analysis of these hits revealed that 14 of these compounds probably have a non-covalent mode of binding to the target and have not been reported for anti-tubercular or anti-proteasome activity. The binding interactions of all 14 protein-inhibitor complexes were analyzed using molecular docking studies. Further, molecular dynamics simulations of the protein in complex with the two most promising hits were carried out so as to identify the key interactions and validate the structural stability.

  8. Knowledge-based reasoning to annotate noncoding RNA using multi-agent system.

    PubMed

    Arruda, Wosley C; Souza, Daniel S; Ralha, Célia G; Walter, Maria Emilia M T; Raiol, Tainá; Brigido, Marcelo M; Stadler, Peter F

    2015-12-01

    Noncoding RNAs (ncRNAs) have been the focus of intense research over the last few years. Since the characteristics and signals of ncRNAs are not entirely known, researchers use different computational tools together with their biological knowledge to predict putative ncRNAs. In this context, this work presents ncRNA-Agents, a multi-agent system to annotate ncRNAs based on the output of different tools, using inference rules to simulate biologists' reasoning. Experiments with data from the fungus Saccharomyces cerevisiae allowed us to measure the performance of ncRNA-Agents, which showed better sensitivity compared to Infernal, a widely used tool for annotating ncRNAs. In addition, data from the fungi Schizosaccharomyces pombe and Paracoccidioides brasiliensis yielded novel putative ncRNAs, demonstrating the usefulness of our approach. NcRNA-Agents can be found at: http://www.biomol.unb.br/ncrna-agents.

  9. A Knowledge-Based Approach for Item Exposure Control in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Doong, Shing H.

    2009-01-01

    The purpose of this study is to investigate a functional relation between item exposure parameters (IEPs) and item parameters (IPs) over parallel pools. This functional relation is approximated by a well-known tool in machine learning. Let P and Q be parallel item pools and suppose IEPs for P have been obtained via a Sympson and Hetter-type…

  10. Issues in implementing a knowledge-based ECG analyzer for personal mobile health monitoring.

    PubMed

    Goh, K W; Kim, E; Lavanya, J; Kim, Y; Soh, C B

    2006-01-01

    Advances in sensor technology, personal mobile devices, and wireless broadband communications are enabling the development of an integrated personal mobile health monitoring system that can provide patients with a useful tool to assess their own health and manage their personal health information anytime and anywhere. Personal mobile devices, such as PDAs and mobile phones, are becoming more powerful integrated information management tools and play a major role in many people's lives. We focus on designing a health-monitoring system for people who suffer from cardiac arrhythmias. We have developed computer simulation models to evaluate the performance of appropriate electrocardiogram (ECG) analysis techniques that can be implemented on personal mobile devices. This paper describes an ECG analyzer to perform ECG beat and episode detection and classification. We have obtained promising preliminary results from our study. Also, we discuss several key considerations when implementing a mobile health monitoring solution. The mobile ECG analyzer would become a front-end patient health data acquisition module, which is connected to the Personal Health Information Management System (PHIMS) for data repository. PMID:17947185

  11. REGene: a literature-based knowledgebase of animal regeneration that bridge tissue regeneration and cancer

    PubMed Central

    Zhao, Min; Rotgans, Bronwyn; Wang, Tianfang; Cummins, S. F.

    2016-01-01

    Regeneration is a common phenomenon across multiple animal phyla. Regeneration-related genes (REGs) are critical for fundamental cellular processes such as proliferation and differentiation. Identification of REGs and elucidating their functions may help to further develop effective treatment strategies in regenerative medicine. So far, REGs have been largely identified by small-scale experimental studies and a comprehensive characterization of the diverse biological processes regulated by REGs is lacking. Therefore, there is an ever-growing need to integrate REGs at the genomics, epigenetics, and transcriptome level to provide a reference list of REGs for regeneration and regenerative medicine research. Towards achieving this, we developed the first literature-based database called REGene (REgeneration Gene database). In the current release, REGene contains 948 human (929 protein-coding and 19 non-coding genes) and 8445 homologous genes curated from gene ontology and extensive literature examination. Additionally, the REGene database provides detailed annotations for each REG, including: gene expression, methylation sites, upstream transcription factors, and protein-protein interactions. An analysis of the collected REGs reveals strong links to a variety of cancers in terms of genetic mutation, protein domains, and cellular pathways. We have prepared a web interface to share these regeneration genes, supported by refined browsing and searching functions at http://REGene.bioinfo-minzhao.org/. PMID:26975833

  12. Knowledge-based machine indexing from natural language text: Knowledge base design, development, and maintenance

    NASA Technical Reports Server (NTRS)

    Genuardi, Michael T.

    1993-01-01

    One strategy for machine-aided indexing (MAI) is to provide a concept-level analysis of the textual elements of documents or document abstracts. In such systems, natural-language phrases are analyzed in order to identify and classify concepts related to a particular subject domain. The overall performance of these MAI systems is largely dependent on the quality and comprehensiveness of their knowledge bases. These knowledge bases function to (1) define the relations between a controlled indexing vocabulary and natural language expressions; (2) provide a simple mechanism for disambiguation and the determination of relevancy; and (3) allow the extension of concept-hierarchical structure to all elements of the knowledge file. After a brief description of the NASA Machine-Aided Indexing system, concerns related to the development and maintenance of MAI knowledge bases are discussed. Particular emphasis is given to statistically-based text analysis tools designed to aid the knowledge base developer. One such tool, the Knowledge Base Building (KBB) program, presents the domain expert with a well-filtered list of synonyms and conceptually-related phrases for each thesaurus concept. Another tool, the Knowledge Base Maintenance (KBM) program, functions to identify areas of the knowledge base affected by changes in the conceptual domain (for example, the addition of a new thesaurus term). An alternate use of the KBM as an aid in thesaurus construction is also discussed.

  13. Fast QRS Detection with an Optimized Knowledge-Based Method: Evaluation on 11 Standard ECG Databases

    PubMed Central

    Elgendi, Mohamed

    2013-01-01

    The current state of the art in automatic QRS detection methods shows high robustness and almost negligible error rates. In return, however, these methods are usually based on machine-learning approaches that require substantial computational resources. Yet simple, fast methods can also achieve high detection rates. There is a need to develop numerically efficient algorithms to accommodate the new trend towards battery-driven ECG devices and to analyze long-term recorded signals in a time-efficient manner. Here, a typical QRS detection method has been reduced to a basic approach consisting of two moving averages that are calibrated by a knowledge base using only two parameters. In contrast to high-accuracy methods, the proposed method can be easily implemented in a digital filter design. PMID:24066054
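
    A hedged sketch of the two-moving-average idea: a short (QRS-width) average and a long (beat-width) average of the squared signal are compared, and samples where the short average dominates are flagged as blocks of interest. The window lengths and threshold offset below are illustrative choices, and a real detector would also band-pass filter the ECG first.

```python
import numpy as np

def moving_average(x, w):
    """Centered moving average with window length w (in samples)."""
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

def detect_qrs_blocks(ecg, fs=360, w_qrs=0.097, w_beat=0.611):
    """Knowledge-based QRS detection with two moving averages: blocks where
    the short (QRS-width) average of the squared signal exceeds the long
    (beat-width) average are flagged as candidate QRS regions. The two
    window lengths, in seconds, reflect typical QRS and beat durations."""
    squared = ecg.astype(float) ** 2
    ma_qrs = moving_average(squared, max(1, int(w_qrs * fs)))
    ma_beat = moving_average(squared, max(1, int(w_beat * fs)))
    threshold = ma_beat + 0.08 * squared.mean()  # small offset rejects noise
    return ma_qrs > threshold  # boolean mask of blocks of interest
```

    Within each contiguous True block, the sample with maximal amplitude would then be taken as the R peak; only the two window durations need calibrating, which is what makes the approach attractive for battery-driven devices.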

  14. A knowledge-based weighting approach to ligand-based virtual screening.

    PubMed

    Stiefl, Nikolaus; Zaliani, Andrea

    2006-01-01

    On the basis of the recently introduced reduced graph concept of ErG (extending reduced graphs), a straightforward weighting approach to include additional (e.g., structural or SAR) knowledge into similarity searching procedures for virtual screening (wErG) is proposed. This simple procedure is exemplified with three data sets, for which interaction patterns available from X-ray structures of native or peptidomimetic ligands with their target protein are used to significantly improve retrieval rates of known actives from the MDL Drug Report database. The results are compared to those of other virtual screening techniques such as Daylight fingerprints, FTrees, UNITY, and various FlexX docking protocols. Here, it is shown that wErG exhibits a very good and stable performance independent of the target structure. On the basis of this (and the fact that ErG retrieves structurally more dissimilar compounds due to its potential to perform scaffold-hopping), the combination of wErG and FlexX is successfully explored. Overall, wErG is not only an easily applicable weighting procedure that efficiently identifies actives in large data sets but it is also straightforward to understand for both medicinal and computational chemists and can, therefore, be driven by several aspects of project-related knowledge (e.g., X-ray, NMR, SAR, and site-directed mutagenesis) in a very early stage of the hit identification process.

  15. Knowledge-based changes to health systems: the Thai experience in policy development.

    PubMed Central

    Tangcharoensathien, Viroj; Wibulpholprasert, Suwit; Nitayaramphong, Sanguan

    2004-01-01

    Over the past two decades the government in Thailand has adopted an incremental approach to extending health-care coverage to the population. It first offered coverage to government employees and their dependents, and then introduced a scheme under which low-income people were exempt from charges for health care. This scheme was later extended to include elderly people, children younger than 12 years of age and disabled people. A voluntary public insurance scheme was implemented to cover those who could afford to pay for their own care. Private sector employees were covered by the Social Health Insurance scheme, which was implemented in 1991. Despite these efforts, 30% of the population remained uninsured in 2001. In October of that year, the new government decided to embark on a programme to provide universal health-care coverage. This paper describes how research into health systems and health policy contributed to the move towards universal coverage. Data on health systems financing and functioning had been gathered before and after the founding of the Health Systems Research Institute in early 1990. In 1991, a contract capitation model had been used to launch the Social Health Insurance scheme. The advantages of using a capitation model are that it contains costs and provides an acceptable quality of service as opposed to the cost escalation and inefficiency that occur under fee-for-service reimbursement models, such as the one used to provide medical benefits to civil servants. An analysis of the implementation of universal coverage found that politics moved universal coverage onto the policy agenda during the general election campaign in January 2001. The capacity for research on health systems and policy to generate evidence guided the development of the policy and the design of the system at a later stage. 
Because the reformists who sought to bring about universal coverage (who were mostly civil servants in the Ministry of Public Health and members of nongovernmental organizations) were able to bridge the gap between researchers and politicians, an evidence-based political decision was made. Additionally, the media played a part in shaping the societal consensus on universal coverage. PMID:15643796

  16. Final Report - Phylogenomic tools and web resources for the Systems Biology Knowledgebase

    SciTech Connect

    Sjolander, Kimmen

    2014-12-08

    The major advance during this last reporting period (8/15/12 to present) is our release of data on the PhyloFacts website: phylogenetic trees, multiple sequence alignments and other data for protein families are now available for download from http://phylogenomics.berkeley.edu/data/. This project as a whole aimed to develop high-throughput functional annotation systems that exploit information from protein 3D structure and evolution to provide highly precise inferences of various aspects of gene function, including molecular function, biological process, pathway association, Pfam domains, cellular localization and so on. We accomplished these aims by developing and testing different systems on a database of protein family trees: the PhyloFacts Phylogenomic Encyclopedia (at http://phylogenomics.berkeley.edu/phylofacts/).

  17. SLS-PLAN-IT: A knowledge-based blackboard scheduling system for Spacelab life sciences missions

    NASA Technical Reports Server (NTRS)

    Kao, Cheng-Yan; Lee, Seok-Hua

    1992-01-01

    The primary scheduling tool in use during the Spacelab Life Science (SLS-1) planning phase was the operations research (OR) based, tabular form Experiment Scheduling System (ESS) developed by NASA Marshall. PLAN-IT is an artificial intelligence based interactive graphic timeline editor for ESS developed by JPL. The PLAN-IT software was enhanced for use in the scheduling of Spacelab experiments to support the SLS missions. The enhanced software, the SLS-PLAN-IT system, was used to support the real-time reactive scheduling task during the SLS-1 mission. SLS-PLAN-IT is a frame-based blackboard scheduling shell which, from scheduling input, creates resource-requiring event duration objects and resource-usage duration objects. The blackboard structure keeps track of the effects of event duration objects on the resource usage objects. Various scheduling heuristics are coded in procedural form and can be invoked at any time at the user's request. The system architecture is described along with what has been learned from the SLS-PLAN-IT project.

  18. Knowledge-based probabilistic representations of branching ratios in chemical networks: The case of dissociative recombinations

    SciTech Connect

    Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal

    2010-10-07

    Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the unique information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbons ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.
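The probabilistic encoding described here can be illustrated with a minimal sketch. The channel list, nominal branching ratios, concentration parameter, and rate coefficient below are all hypothetical, chosen only to show how Dirichlet sampling always yields complete, normalized branching-ratio vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dissociative-recombination channels for a hydrocarbon ion,
# with illustrative nominal branching ratios summing to 1.
channels = ["CH3 + H", "CH2 + H2", "CH + H2 + H"]
nominal = np.array([0.5, 0.3, 0.2])

# Encode the measurement as a Dirichlet distribution: a larger
# concentration parameter means a better-constrained measurement
# (the value 50 is an assumed precision, not from the paper).
precision = 50.0
samples = rng.dirichlet(precision * nominal, size=10000)

# Every sample is a valid branching-ratio vector (non-negative, sums
# to 1), so the uncertainty propagates consistently to any derived
# quantity, e.g. per-channel radical production rates k_total * b_i.
k_total = 7.0e-7  # cm^3 s^-1, illustrative total rate coefficient
rates = k_total * samples
mean_rates = rates.mean(axis=0)
```

Because every sample is normalized by construction, incomplete measurements can be encoded by adjusting the concentration parameters rather than by discarding the data.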

  19. Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Dungan, Jennifer L.

    1997-01-01

    In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes, meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA), a new model construction environment. As the user constructs a model using SIGMA, the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model, it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.

  20. A knowledge-based tool for multilevel decomposition of a complex design problem

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1989-01-01

    Although much work has been done in applying artificial intelligence (AI) tools and techniques to problems in different engineering disciplines, only recently has the application of these tools begun to spread to the decomposition of complex design problems. A new tool based on AI techniques has been developed to implement a decomposition scheme suitable for multilevel optimization and display of data in an N x N matrix format.

  1. Knowledge-based image processing for on-off type DNA microarray

    NASA Astrophysics Data System (ADS)

    Kim, Jong D.; Kim, Seo K.; Cho, Jeong S.; Kim, Jongwon

    2002-06-01

    This paper addresses an image processing technique for discriminating whether the probes are hybridized with target DNA in the Human Papilloma Virus (HPV) DNA chip designed for genotyping HPV. In addition to the probes, the HPV DNA chip has markers that always react with the sample DNA. The positions of probe dots in the final scanned image are fixed relative to the marker-dot locations, with a small variation according to the accuracy of the dotter and the scanner. The probes are duplicated four times for diagnostic stability. Prior knowledge, such as the relative marker distances and the duplication of the probes, is integrated into a template matching technique with the normalized correlation measure. Results show that employing both pieces of prior knowledge amounts to simply averaging the template matching measures over the positions of the markers and probes. The proposed scheme yields stable marker location and probe classification.
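The averaging result can be sketched with a hand-rolled normalized correlation measure; the function names, template, and dot geometry below are ours, not the paper's:

```python
import numpy as np

def ncc(patch, template):
    """Normalized correlation between an image patch and a dot template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def dot_score(image, template, positions):
    """Average the correlation measure over known replicate positions
    (markers, or the duplicated probe dots), which is what the paper's
    use of prior knowledge reduces to."""
    h, w = template.shape
    scores = [ncc(image[r:r + h, c:c + w], template) for r, c in positions]
    return sum(scores) / len(scores)
```

A hybridized probe, replicated at its known positions, then yields a high averaged score, while empty positions score near zero.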

  2. Development of a knowledge-based system for the design of composite automotive components

    NASA Astrophysics Data System (ADS)

    Moynihan, Gary P.; Stephens, J. Paul

    1997-01-01

    Composite materials are composed of two or more constituents possessing significantly different physical properties. Due to their high strength and light weight, there is an emerging trend to utilize composites in the automotive industry. There is an inherent link between component design and the manufacturing processes necessary for fabrication. To many designers, this situation may be intimidating, since there is frequently little available understanding of composites and their processes. A direct result is high rates of product scrap and rework. Thus, there is a need to implement a systematic approach to composite material design. One such approach is quality function deployment (QFD). By translating customer requirements into design parameters through the use of heuristics, QFD supports the improvement of product quality during the planning stages prior to actual production. The purpose of this research is to automate the use of knowledge pertaining to the design and application of composite materials within the automobile industry. This is being accomplished through the development of a prototype expert system incorporating a QFD approach. It will provide industry designers with access to knowledge of composite materials that might not otherwise be available.

  3. The "Digital Friend": A knowledge-based decision support system for space crews

    NASA Astrophysics Data System (ADS)

    Hoermann, Hans-Juergen; Johannes, Bernd; Petrovich Salnitski, Vyacheslav

    Space travel over long distances places exceptional strain on the medical and psychological well-being of the astronauts who undertake such missions. An intelligent knowledge management system, called the "Digital Friend", has been developed to assist space crews on long-duration missions as an autonomous decision support system. This system will become available upon request for the purpose of coaching group processes and individual performance levels as well as aiding in tactical decision processes by taking crew condition parameters into account. In its initial stage, the "Digital Friend" utilizes interconnected layers of knowledge, which encompass relevant models of operational, situational, individual psycho-physiological as well as group processes. An example is the human life science model that contains historic, diagnostic, and prognostic knowledge about the habitual, actual, and anticipated patterns of physiological, cognitive, and group psychology parameters of the crew members. Depending on the available data derived from pre-mission screening, regular check-ups, or non-intrusive onboard monitoring, the "Digital Friend" can generate a situational analysis and diagnose potential problems. When coping with the effects of foreseeable and unforeseen stressors encountered during the mission, the system can provide feedback and support the crew with a recommended course of action. The first prototype of the "Digital Friend" employs the Neurolab/Healthlab platform developed in cooperation between DLR and IBMP. The prototype contains psycho-physiological sensors with multiple Heally Satellites that relay data to the intelligent Heally Masters and a telemetric Host station. The analysis of data from a long-term simulation study illustrates how the system can be used to estimate the operators' current level of skill reliability based on Salnitski's model [V. Salnitski, A. Bobrov, A. Dudukin, B. Johannes, Reanalysis of operators reliability in professional skills under simulated and real space flight conditions, Proceedings of the 55th IAC Congress, 4-8 October 2004, Vancouver, Canada], which combines time-series information on work performance and work effort. In further collaborative projects, the subsequent system development will pursue a series of studies in related terrestrial environments to validate and fabricate different sensor systems, as well as to tune the data-processing engine and to test suitable user interfaces.

  4. Predicting Large RNA-Like Topologies by a Knowledge-Based Clustering Approach.

    PubMed

    Baba, Naoto; Elmetwaly, Shereef; Kim, Namhee; Schlick, Tamar

    2016-02-27

    An analysis and expansion of our resource for classifying, predicting, and designing RNA structures, RAG (RNA-As-Graphs), is presented, with the goal of understanding features of RNA-like and non-RNA-like motifs and exploiting this information for RNA design. RAG was first reported in 2004 for cataloging RNA secondary structure motifs using graph representations. In 2011, the RAG resource was updated with the increased availability of RNA structures and was improved by utilities for analyzing RNA structures, including substructuring and search tools. We also classified RNA structures as graphs up to 10 vertices (~200 nucleotides) into three classes: existing, RNA-like, and non-RNA-like using clustering approaches. Here, we focus on the tree graphs and evaluate the RNAs newly determined since 2011, which also support our refined predictions of RNA-like motifs. We expand the RAG resource for large tree graphs up to 13 vertices (~260 nucleotides), thereby cataloging more than 10 times as many secondary structures. We apply clustering algorithms based on features of RNA secondary structures translated from known tertiary structures to suggest which hypothetical large RNA motifs can be considered "RNA-like". The results of the PAM (Partitioning Around Medoids) approach, in particular, reveal good accuracy, with small error for the largest cases. The RAG update here up to 13 vertices offers a useful graph-based tool for exploring RNA motifs and suggesting large RNA motifs for design. PMID:26478223

  5. From Cues to Nudge: A Knowledge-Based Framework for Surveillance of Healthcare-Associated Infections.

    PubMed

    Shaban-Nejad, Arash; Mamiya, Hiroshi; Riazanov, Alexandre; Forster, Alan J; Baker, Christopher J O; Tamblyn, Robyn; Buckeridge, David L

    2016-01-01

    We propose an integrated semantic web framework consisting of formal ontologies, web services, a reasoner and a rule engine that together recommend an appropriate level of patient care based on the defined semantic rules and guidelines. The classification of healthcare-associated infections within the HAIKU (Hospital Acquired Infections - Knowledge in Use) framework enables hospitals to consistently follow the standards along with their routine clinical practice and diagnosis coding to improve quality of care and patient safety. The HAI ontology (HAIO) groups thousands of codes into a consistent hierarchy of concepts, along with relationships and axioms to capture knowledge on hospital-associated infections and complications, with a focus on the big four types: surgical site infections (SSIs), catheter-associated urinary tract infections (CAUTI), hospital-acquired pneumonia, and bloodstream infections. By employing statistical inferencing in our study, we use a set of heuristics to define the rule axioms to improve SSI case detection. We also demonstrate how the occurrence of an SSI is identified using semantic e-triggers. The e-triggers will be used to improve our risk assessment of post-operative surgical site infections (SSIs) for patients undergoing certain types of surgery (e.g., coronary artery bypass graft surgery (CABG)). PMID:26537131

  6. The Contribution of University Business Incubators to New Knowledge-based Ventures: Evidence from Italy.

    ERIC Educational Resources Information Center

    Grimaldi, Rosa; Grandi, Alessandro

    2001-01-01

    University business incubators give businesses access to labs and equipment, scientific-technical knowledge, networks, and reputation. A study of incubators in Italy shows they do not resolve inadequate funding or lack of management and financial skills. However, the networking capacity can offset these problems. (Contains 25 notes/references.)…

  7. A knowledge-based system for monitoring the electrical power system of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Eddy, Pat

    1987-01-01

    The design and the prototype for the expert system for the Hubble Space Telescope's electrical power system are discussed. This prototype demonstrated the capability to use real time data from a 32k telemetry stream and to perform operational health and safety status monitoring, detect trends such as battery degradation, and detect anomalies such as solar array failures. This prototype, along with the pointing control system and data management system expert systems, forms the initial Telemetry Analysis for Lockheed Operated Spacecraft (TALOS) capability.

  8. Knowledge-based design of reagentless fluorescent biosensors from a designed ankyrin repeat protein.

    PubMed

    Brient-Litzler, Elodie; Plückthun, Andreas; Bedouelle, Hugues

    2010-04-01

    Designed ankyrin repeat proteins (DARPins) can be selected from combinatorial libraries to bind any target antigen. They show high levels of recombinant expression, solubility and stability, and contain no cysteine residue. The possibility of obtaining, from any DARPin and at high yields, fluorescent conjugates which respond to the binding of the antigen by a variation of fluorescence, would have numerous applications in micro- and nano-analytical sciences. This possibility was explored with Off7, a DARPin directed against the maltose binding protein (MalE) from Escherichia coli, with known crystal structure of the complex. Eight residues of Off7, whose solvent accessible surface area varies on association with the antigen but which are not in direct contact with the antigen, were individually mutated into cysteine and then chemically coupled with a fluorophore. The conjugates were ranked according to their relative sensitivities. All of them showed an increase in their fluorescence intensity on antigen binding by >1.7-fold. The best conjugate retained the same affinity as the parental DARPin. Its signal increased linearly and specifically with the concentration of antigen, up to 15-fold in buffer and 3-fold in serum when fully saturated, the difference being mainly due to the absorption of light by serum. Its lower limit of detection was equal to 0.3 nM with a standard spectrofluorometer. Titrations with potassium iodide indicated that the fluorescence variation was due to a shielding of the fluorescent group from the solvent by the antigen. These results suggest rules for the design of reagentless fluorescent biosensors from any DARPin. PMID:19945965

  9. Sensor explication: knowledge-based robotic plan execution through logical objects.

    PubMed

    Budenske, J; Gini, M

    1997-01-01

    Complex robot tasks are usually described as high level goals, with no details on how to achieve them. However, details must be provided to generate primitive commands to control a real robot. A sensor explication concept that makes details explicit from general commands is presented. We show how the transformation from high-level goals to primitive commands can be performed at execution time and we propose an architecture based on reconfigurable objects that contain domain knowledge and knowledge about the sensors and actuators available. Our approach is based on two premises: 1) plan execution is an information gathering process where determining what information is relevant is a great part of the process; and 2) plan execution requires that many details are made explicit. We show how our approach is used in solving the task of moving a robot to and through an unknown, and possibly narrow, doorway; where sonic range data is used to find the doorway, walls, and obstacles. We illustrate the difficulty of such a task using data from a large number of experiments we conducted with a real mobile robot. The laboratory results illustrate how the proper application of knowledge in the integration and utilization of sensors and actuators increases the robustness of plan execution.

  10. Knowledge-based image processing for proton therapy planning of ocular tumors

    NASA Astrophysics Data System (ADS)

    Noeh, Sebastian; Haarbeck, Klaus; Bornfeld, Norbert; Tolxdorff, Thomas

    1998-06-01

    Our project is concerned with the improvement of radiation treatment procedures for ocular tumors. In this context, the application of proton beams offers new possibilities to considerably enhance the precision and reliability of current radiation treatment systems. A precise model of the patient's eye and the tumor is essential for determining the necessary treatment plan. Current treatment systems base their irradiation plan calculations mainly on schematic eye models (e.g., Gullstrand's schematic eye). The adjustment of the model to the patient's anatomy is done by distorting the model according to information from ultrasound and/or CT images. In our project, a precise model of the orbita is determined from CT, high resolution MRT, ultrasound (A-mode depth images and/or 2D B-mode images) and photographs of the fundus. The results from various segmentation and image analysis steps performed on all the data are combined to achieve an eye model of improved precision. By using a proton cannon for the therapy execution, the high precision of the model can be exploited, thus achieving a basic improvement of the therapy. Control over the destruction of the tumor can be increased by maximizing the dose distributions within the target volume while keeping the damage in the surrounding tissue to a minimum. This article is concerned with the image processing to generate an eye model on which treatment planning is based.

  11. Constructing a knowledge-based database for dermatological integrative medical information.

    PubMed

    Shin, Jeeyoung; Jo, Yunju; Bae, Hyunsu; Hong, Moochang; Shin, Minkyu; Kim, Yangseok

    2013-01-01

    Recently, overuse of steroids and immunosuppressive drugs has produced incurable dermatological health problems. Traditional medical approaches have been studied for alternative solutions. However, accessing relevant information is difficult given the differences in information for western medicine (WM) and traditional medicine (TM). Therefore, an integrated medical information infrastructure must be utilized to bridge western and traditional treatments. In this study, WM and TM information was collected based on literature searches and information from internet databases on dermatological issues. Additionally, definitions for unified terminology and disease categorization based on individual cases were generated. A searchable database system was also established that may serve as a model system for integrating both WM and TM medical information on dermatological conditions. Such a system will yield benefits for researchers and facilitate the best possible medical solutions for patients. The DIMI is freely available online.

  12. KLIFS: a knowledge-based structural database to navigate kinase-ligand interaction space.

    PubMed

    van Linden, Oscar P J; Kooistra, Albert J; Leurs, Rob; de Esch, Iwan J P; de Graaf, Chris

    2014-01-23

    Protein kinases regulate the majority of signal transduction pathways in cells and have become important targets for the development of designer drugs. We present a systematic analysis of kinase-ligand interactions in all regions of the catalytic cleft of all 1252 human kinase-ligand cocrystal structures present in the Protein Data Bank (PDB). The kinase-ligand interaction fingerprints and structure database (KLIFS) contains a consistent alignment of 85 kinase ligand binding site residues that enables the identification of family specific interaction features and classification of ligands according to their binding modes. We illustrate how systematic mining of kinase-ligand interaction space gives new insights into how conserved and selective kinase interaction hot spots can accommodate the large diversity of chemical scaffolds in kinase ligands. These analyses lead to an improved understanding of the structural requirements of kinase binding that will be useful in ligand discovery and design studies.

  13. Knowledge-Based, Central Nervous System (CNS) Lead Selection and Lead Optimization for CNS Drug Discovery.

    PubMed

    Ghose, Arup K; Herbertz, Torsten; Hudkins, Robert L; Dorsey, Bruce D; Mallamo, John P

    2012-01-18

    The central nervous system (CNS) is the major area affected by aging. Alzheimer's disease (AD), Parkinson's disease (PD), brain cancer, and stroke are CNS diseases whose treatment will cost trillions of dollars. Achievement of appropriate blood-brain barrier (BBB) penetration is often considered a significant hurdle in the CNS drug discovery process. On the other hand, BBB penetration may be a liability for many of the non-CNS drug targets, and a clear understanding of the physicochemical and structural differences between CNS and non-CNS drugs may assist both research areas. Because of the numerous and challenging issues in CNS drug discovery and the low success rates, pharmaceutical companies are beginning to deprioritize their drug discovery efforts in the CNS arena. Prompted by these challenges and to aid in the design of high-quality, efficacious CNS compounds, we analyzed the physicochemical property and the chemical structural profiles of 317 CNS and 626 non-CNS oral drugs. The conclusions derived provide an ideal property profile for lead selection and the property modification strategy during the lead optimization process. A list of substructural units that may be useful for CNS drug design was also provided here. A classification tree was also developed to differentiate between CNS drugs and non-CNS oral drugs. The combined analysis provided the following guidelines for designing high-quality CNS drugs: (i) topological molecular polar surface area of <76 Å² (25-60 Å²), (ii) at least one (one or two, including one aliphatic amine) nitrogen, (iii) fewer than seven (two to four) linear chains outside of rings, (iv) fewer than three (zero or one) polar hydrogen atoms, (v) volume of 740-970 Å³, (vi) solvent accessible surface area of 460-580 Å², and (vii) positive QikProp parameter CNS. The ranges within parentheses may be used during lead optimization. One violation of this proposed profile may be acceptable. 
The chemoinformatics approaches for graphically analyzing multiple properties efficiently are presented. PMID:22267984
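The seven-point profile and the one-violation allowance translate directly into a simple filter. The dictionary keys below are our own labels for property values that would come from a cheminformatics toolkit; the thresholds are taken from the abstract:

```python
def cns_profile_violations(p):
    """Count violations of the seven-point CNS property profile.

    `p` is a dict of precomputed molecular properties (key names are
    our own, hypothetical labels).
    """
    checks = [
        p["tpsa"] < 76,                 # (i) topological PSA < 76 Å²
        p["n_nitrogen"] >= 1,           # (ii) at least one nitrogen
        p["linear_chains"] < 7,         # (iii) chains outside rings < 7
        p["polar_h"] < 3,               # (iv) fewer than 3 polar hydrogens
        740 <= p["volume"] <= 970,      # (v) volume 740-970 Å³
        460 <= p["sasa"] <= 580,        # (vi) solvent-accessible area Å²
        p["qikprop_cns"] > 0,           # (vii) positive QikProp CNS value
    ]
    return sum(not ok for ok in checks)

def is_cns_like(p):
    # Per the paper, one violation of the profile may be acceptable.
    return cns_profile_violations(p) <= 1
```

The tighter ranges in parentheses in the abstract would be applied analogously during lead optimization.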

  14. A Very Large Area Network (VLAN) knowledge-base applied to space communication problems

    NASA Technical Reports Server (NTRS)

    Zander, Carol S.

    1988-01-01

    This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit by the model are discussed and then an enhanced version of this model incorporating the knowledge needed for the missile detection-destruction problem is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way so as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.

  15. A Very Large Area Network (VLAN) knowledge-base applied to space communication problems

    NASA Astrophysics Data System (ADS)

    Zander, Carol S.

    1988-10-01

    This paper first describes a hierarchical model for very large area networks (VLAN). Space communication problems whose solution could profit by the model are discussed and then an enhanced version of this model incorporating the knowledge needed for the missile detection-destruction problem is presented. A satellite network or VLAN is a network which includes at least one satellite. Due to the complexity, a compromise between fully centralized and fully distributed network management has been adopted. Network nodes are assigned to a physically localized group, called a partition. Partitions consist of groups of cell nodes with one cell node acting as the organizer or master, called the Group Master (GM). Coordinating the group masters is a Partition Master (PM). Knowledge is also distributed hierarchically existing in at least two nodes. Each satellite node has a back-up earth node. Knowledge must be distributed in such a way so as to minimize information loss when a node fails. Thus the model is hierarchical both physically and informationally.

  16. Using Knowledge-Based Systems to Support Learning of Organizational Knowledge: A Case Study

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Nash, Rebecca L.; Phan, Tu-Anh T.; Bailey, Teresa R.

    2003-01-01

    This paper describes the deployment of a knowledge system to support learning of organizational knowledge at the Jet Propulsion Laboratory (JPL), a US national research laboratory whose mission is planetary exploration and to 'do what no one has done before.' Data collected over 19 weeks of operation were used to assess system performance with respect to design considerations, participation, effectiveness of communication mechanisms, and individual-based learning. These results are discussed in the context of organizational learning research and implications for practice.

  17. Constructing a Knowledge-Based Database for Dermatological Integrative Medical Information

    PubMed Central

    Jo, Yunju; Bae, Hyunsu; Hong, Moochang; Shin, Minkyu; Kim, Yangseok

    2013-01-01

    Recently, overuse of steroids and immunosuppressive drugs has produced incurable dermatological health problems. Traditional medical approaches have been studied for alternative solutions. However, accessing relevant information is difficult given the differences between information for western medicine (WM) and traditional medicine (TM). Therefore, an integrated medical information infrastructure must be utilized to bridge western and traditional treatments. In this study, WM and TM information was collected based on literature searches and internet databases on dermatological issues. Additionally, definitions for unified terminology and disease categorization based on individual cases were generated. A searchable database system was also established that may serve as a model system for integrating WM and TM medical information on dermatological conditions. Such a system will benefit researchers and facilitate the best possible medical solutions for patients. The DIMI (Dermatological Integrative Medical Information database) is freely available online. PMID:24386003

  18. From Cues to Nudge: A Knowledge-Based Framework for Surveillance of Healthcare-Associated Infections.

    PubMed

    Shaban-Nejad, Arash; Mamiya, Hiroshi; Riazanov, Alexandre; Forster, Alan J; Baker, Christopher J O; Tamblyn, Robyn; Buckeridge, David L

    2016-01-01

    We propose an integrated semantic web framework consisting of formal ontologies, web services, a reasoner, and a rule engine that together recommend an appropriate level of patient care based on defined semantic rules and guidelines. The classification of healthcare-associated infections within the HAIKU (Hospital Acquired Infections - Knowledge in Use) framework enables hospitals to consistently follow the standards in their routine clinical practice and diagnosis coding, improving quality of care and patient safety. The HAI ontology (HAIO) groups thousands of codes into a consistent hierarchy of concepts, along with relationships and axioms that capture knowledge on hospital-associated infections and complications, with a focus on the four major types: surgical site infections (SSIs), catheter-associated urinary tract infections (CAUTI), hospital-acquired pneumonia, and bloodstream infections. By employing statistical inference, we use a set of heuristics to define the rule axioms that improve SSI case detection. We also demonstrate how the occurrence of an SSI is identified using semantic e-triggers. The e-triggers will be used to improve our risk assessment of post-operative SSIs for patients undergoing certain types of surgery (e.g., coronary artery bypass graft (CABG) surgery).

  19. Design of a novel knowledge-based fault detection and isolation scheme.

    PubMed

    Zhao, Qing; Xu, Zhihan

    2004-04-01

    In this paper, a real-time fault detection and isolation (FDI) scheme for dynamical systems is developed by integrating signal processing techniques with neural network design. Wavelet analysis is applied to capture the fault-induced transients of the measured signals in real time, and the decomposed signals are pre-processed to extract details about a fault. A Regional Self-Organizing feature Map (R-SOM) neural network is synthesized to classify the fault types. The R-SOM neural network adopts a two-region adjustment in its learning algorithm, so it achieves high precision in clustering and matching, especially when noise, disturbances, and other uncertainties exist in the system. As a result, the proposed FDI scheme is robust and accurate. The design is implemented on a stirred tank system and satisfactory online testing results are obtained.
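    The wavelet pre-processing step can be illustrated with a minimal single-level Haar decomposition: the detail coefficients isolate abrupt, fault-induced transients from the slowly varying part of the signal. This is an illustrative sketch only; the paper does not specify its wavelet basis or thresholds, so the function names and threshold logic here are assumptions.

```python
def haar_decompose(signal):
    """Single-level Haar split of an even-length signal into a smooth
    approximation and a detail part that exposes abrupt transients."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def transient_indices(signal, threshold):
    """Positions (at the downsampled scale) where the detail coefficients
    exceed the threshold, i.e. candidate fault-induced transients."""
    _, detail = haar_decompose(signal)
    return [i for i, d in enumerate(detail) if abs(d) > threshold]

# A flat signal with a step change: the step shows up as one large
# detail coefficient, while the smooth regions contribute nothing.
sig = [0.0] * 7 + [5.0] * 9
print(transient_indices(sig, 1.0))
```

    In a full FDI scheme these detail features would then be fed to the classifier (here, the paper's R-SOM network) rather than inspected by a fixed threshold.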

  20. Membrane transporters in a human genome-scale metabolic knowledgebase and their implications for disease

    PubMed Central

    Sahoo, Swagatika; Aurich, Maike K.; Jonsson, Jon J.; Thiele, Ines

    2014-01-01

    Membrane transporters enable efficient cellular metabolism, aid in nutrient sensing, and have been associated with various diseases, such as obesity and cancer. Genome-scale metabolic network reconstructions capture genomic, physiological, and biochemical knowledge of a target organism, along with a detailed representation of the cellular metabolite transport mechanisms. Since the first reconstruction of human metabolism, Recon 1, published in 2007, progress has been made in the field of metabolite transport. Recently, we published an updated reconstruction, Recon 2, which significantly improved the metabolic coverage and functionality. Human metabolic reconstructions have been used to investigate the role of metabolism in disease and to predict biomarkers and drug targets. Given the importance of cellular transport systems in understanding human metabolism in health and disease, we analyzed the coverage of transport systems for various metabolite classes in Recon 2. We will review the current knowledge on transporters (i.e., their preferred substrates, transport mechanisms, metabolic relevance, and disease association for each metabolite class). We will assess missing coverage and propose modifications and additions through a transport module that is functional when combined with Recon 2. This information will be valuable for further refinements. These data will also provide starting points for further experiments by highlighting areas of incomplete knowledge. This review represents the first comprehensive overview of the transporters involved in central metabolism and their transport mechanisms, thus serving as a compendium of metabolite transporters specific for human metabolic reconstructions. PMID:24653705

  1. Data- and knowledge-based modeling of gene regulatory networks: an update

    PubMed Central

    Linde, Jörg; Schulze, Sylvie; Henkel, Sebastian G.; Guthke, Reinhard

    2015-01-01

    Gene regulatory network inference is a systems biology approach which predicts interactions between genes with the help of high-throughput data. In this review, we present current and updated network inference methods, focusing on novel techniques for data acquisition, network inference assessment, network inference for interacting species, and the integration of prior knowledge. Following the advance of next-generation sequencing of cDNAs derived from RNA samples (RNA-Seq), we discuss its application to network inference in detail. Furthermore, we present progress on large-scale and even full-genomic network inference, as well as on small-scale condensed network inference, and review advances in the evaluation of network inference methods by crowdsourcing. Finally, we reflect on the current availability of data and prior knowledge sources and give an outlook for the inference of gene regulatory networks that reflect interacting species, in particular pathogen-host interactions. PMID:27047314

  2. Knowledge-based real-space explorations for low-resolution structure determination.

    PubMed

    Furnham, Nicholas; Doré, Andrew S; Chirgadze, Dimitri Y; de Bakker, Paul I W; Depristo, Mark A; Blundell, Tom L

    2006-08-01

    The accurate and effective interpretation of low-resolution data in X-ray crystallography is becoming increasingly important as structural initiatives turn toward large multiprotein complexes. Substantial challenges remain due to the poor information content and ambiguity in the interpretation of electron density maps at low resolution. Here, we describe a semiautomated procedure that employs a restraint-based conformational search algorithm, RAPPER, to produce a starting model for the structure determination of ligase interacting factor 1 in complex with a fragment of DNA ligase IV at low resolution. The combined use of experimental data and a priori knowledge of protein structure enabled us not only to generate an all-atom model but also to reaffirm the inferred sequence registry. This approach provides a means to quickly extract useful information from experimental data that would otherwise be discarded, and to take into account the uncertainty in the interpretation, an overriding issue for low-resolution data.

  3. Knowledge-based analysis of microarray gene expression data by using support vector machines

    SciTech Connect

    William Grundy; Manuel Ares, Jr.; David Haussler

    2001-06-18

    The authors introduce a method of functionally classifying genes by using gene expression data from DNA microarray hybridization experiments. The method is based on the theory of support vector machines (SVMs). SVMs are considered a supervised computer learning method because they exploit prior knowledge of gene function to identify unknown genes of similar function from expression data. SVMs avoid several problems associated with unsupervised clustering methods, such as hierarchical clustering and self-organizing maps. SVMs have many mathematical features that make them attractive for gene expression analysis, including their flexibility in choosing a similarity function, sparseness of solution when dealing with large data sets, the ability to handle large feature spaces, and the ability to identify outliers. They test several SVMs that use different similarity metrics, as well as some other supervised learning methods, and find that the SVMs best identify sets of genes with a common function using expression data. Finally, they use SVMs to predict functional roles for uncharacterized yeast ORFs based on their expression data.
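    The abstract's central idea, a classifier whose notion of similarity is a pluggable kernel function, can be sketched without the full SVM machinery. The following uses a kernel perceptron as a lightweight stand-in (deliberately not the authors' SVM implementation) to show how a similarity function such as an RBF kernel slots into training and prediction over expression vectors; all names and data are illustrative.

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF similarity between two expression vectors (a swappable kernel)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def train_kernel_perceptron(X, y, kernel, epochs=20):
    """Learn one multiplier per training gene; update on misclassification."""
    alphas = [0.0] * len(X)
    for _ in range(epochs):
        for i, xi in enumerate(X):
            pred = sum(a * yi * kernel(xj, xi) for a, yi, xj in zip(alphas, y, X))
            if y[i] * pred <= 0:
                alphas[i] += 1.0
    return alphas

def predict(x, X, y, alphas, kernel):
    """Classify a new vector by its kernel similarity to the training set."""
    s = sum(a * yi * kernel(xj, x) for a, yi, xj in zip(alphas, y, X))
    return 1 if s >= 0 else -1

# Toy labels (+1 = in functional class, -1 = not); XOR-like pattern
# that a linear rule cannot fit but an RBF kernel can.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [-1, 1, 1, -1]
alphas = train_kernel_perceptron(X, y, rbf)
```

    The point mirrored from the abstract is the flexibility in choosing the similarity function: swapping `rbf` for another kernel changes the geometry of the classifier without touching the training loop.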

  4. ELM-PE: A Knowledge-based Programming Environment for Learning LISP.

    ERIC Educational Resources Information Center

    Weber, Gerhard; Mollenberg, Antje

    Novices in programming face many problems affecting their learning process and programming success. Learning to program includes using the programming environment, learning a programming language's syntax and semantics, understanding a problem and translating it into an executable plan, developing algorithms and programs, and testing and debugging…

  5. Perspectives of UK Vice-Chancellors on Leading Universities in a Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Bosetti, Lynn; Walker, Keith

    2010-01-01

    This paper draws upon the experiences and perceptions of ten university vice-chancellors in the United Kingdom on the challenges they face in providing leadership and strategic direction for their institutions into the twenty-first century. The paper reveals the perceptions and spoken words of these leaders as they identify the key challenges…

  6. Excellence in the Knowledge-Based Economy: From Scientific to Research Excellence

    ERIC Educational Resources Information Center

    Sørensen, Mads P.; Bloch, Carter; Young, Mitchell

    2016-01-01

    In 2013, the European Union (EU) unveiled its new "Composite Indicator for Scientific and Technological Research Excellence." This is not an isolated occurrence; policy-based interest in excellence is growing all over the world. The heightened focus on excellence and, in particular, attempts to define it through quantitative indicators…

  7. Knowledge-based system V and V in the Space Station Freedom program

    NASA Technical Reports Server (NTRS)

    Kelley, Keith; Hamilton, David; Culbert, Chris

    1992-01-01

    Knowledge Based Systems (KBS's) are expected to be heavily used in the Space Station Freedom Program (SSFP). Although SSFP Verification and Validation (V&V) requirements are based on the latest state-of-the-practice in software engineering technology, they may be insufficient for KBS's; it is widely stated that there are differences in both approach and execution between KBS V&V and conventional software V&V. In order to better understand this issue, we have surveyed and/or interviewed developers from sixty expert system projects to understand the differences and difficulties in KBS V&V. We have used these survey results to analyze the SSFP V&V requirements for conventional software in order to determine which specific requirements are inappropriate for KBS V&V and why. Further work will result in a set of recommendations that can be used either as guidelines for applying conventional software V&V requirements to KBS's or as modifications to extend the existing SSFP conventional software V&V requirements to include KBS requirements. The results of this work are significant to many projects beyond SSFP that will involve KBS's.

  8. Knowledge-based geographic information systems (KBGIS): new analytic and data management tools

    SciTech Connect

    Albert, T.M.

    1988-11-01

    In its simplest form, a geographic information system (GIS) may be viewed as a data base management system in which most of the data are spatially indexed, and upon which sets of procedures operate to answer queries about spatial entities represented in the data base. Utilization of artificial intelligence (AI) techniques can greatly enhance the capabilities of a GIS, particularly in handling the very large, diverse data bases involved in the earth sciences. A KBGIS has been developed by the US Geological Survey which incorporates AI techniques such as learning, expert systems, and new data representations. The system, which will be developed further and applied, is a prototype of the next generation of GIS's, an intelligent GIS, as well as an example of a general-purpose intelligent data handling system. The paper provides a description of KBGIS and its application, as well as the AI techniques involved.

  9. Intent and error recognition as part of a knowledge-based cockpit assistant

    NASA Astrophysics Data System (ADS)

    Strohal, Michael; Onken, Reiner

    1998-03-01

    With the Crew Assistant Military Aircraft (CAMA), a knowledge-based cockpit assistant system for future military transport aircraft is developed and tested to enhance situation awareness. Human-centered automation was the central principle for the development of CAMA, an approach to achieve advanced man-machine interaction, mainly by enhancing situation awareness. The CAMA module Pilot Intent and Error Recognition (PIER) evaluates the pilot's activities and mission events in order to interpret and understand the pilot's actions in the context of the flight situation. Expected crew actions based on the flight plan are compared with the actual behavior shown by the crew. If discrepancies are detected, the PIER module tries to figure out whether the deviation was caused erroneously or by a sensible intent. By monitoring pilot actions as well as the mission context, the system is able to compare the pilot's actions with a set of behavioral hypotheses. In case of an intentional deviation from the flight plan, the module checks whether the behavior matches the given set of behavior patterns of the pilot. Intent recognition can increase man-machine synergy by anticipating a need for assistance pertinent to the pilot's intent without an explicit pilot request. The interpretation of all possible situations with respect to intent recognition, in terms of a reasoning process, is based on a set of decision rules. To cope with the need for inferencing under uncertainty, a fuzzy-logic approach is used. A weakness of the fuzzy-logic approach lies in the possibly ill-defined boundaries of the fuzzy sets. Self-Organizing Maps (SOM), as introduced and elaborated on by T. Kohonen, are applied to improve the fuzzy set data and rule base so that they comply with observed pilot behavior. Hierarchical cluster analysis is used to locate clusters of similar patterns in the maps. As introduced by Pedrycz, every feature is evaluated using fuzzy sets for each designated cluster. This approach allows fuzzy sets and rules to be generated using a user-friendly and easily adjustable environment of development tools for data interpretation.

  10. A knowledge-based system for finding cutsets and performing diagnostics

    SciTech Connect

    Mikaili, R.; Danofsky, R.A.; Spinrad, B.I.

    1989-01-01

    In performing a probabilistic risk assessment (PRA), fault trees are constructed and evaluated; this is called fault-tree analysis. The end products of fault-tree analysis are cutsets. Cutsets are defined as lists of components whose failure causes the failure of the system. Fault-tree analysis is error-prone and time-consuming. The Expert System for Analyzing Systems (ESAS) has been developed, which implements a method that bypasses fault-tree analysis for finding cutsets. This expert system then uses these cutsets for diagnostic purposes. Given an anomaly, ESAS finds the corresponding cutsets that contain the probable causes. Several thermal-hydraulic and electrical systems were analyzed by ESAS, and the cutsets found were identical to those obtained by performing fault-tree analysis. To further test ESAS, it is hoped to analyze systems in the Duane Arnold Energy Center nuclear power plant, located in Palo, Iowa, and operated by the Iowa Electric and Light Company.
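    Independently of ESAS's knowledge-based shortcut, the notion of minimal cutsets it works with can be computed by direct enumeration over a small AND/OR fault tree. The sketch below is illustrative only; the tree encoding and component names are assumptions, not ESAS's internal representation.

```python
from itertools import product

def cut_sets(node):
    """Cut sets of a fault tree given as nested gates.
    Leaves are component names; gates are ('and', [...]) or ('or', [...])."""
    if isinstance(node, str):
        return [frozenset([node])]
    gate, children = node
    child_sets = [cut_sets(c) for c in children]
    if gate == "or":
        # Any child's cut set fails an OR gate on its own.
        return [cs for sets in child_sets for cs in sets]
    # AND gate: one cut set per child must occur together.
    return [frozenset().union(*combo) for combo in product(*child_sets)]

def minimal_cut_sets(node):
    """Keep only cut sets that contain no smaller cut set."""
    sets = set(cut_sets(node))
    return {s for s in sets if not any(t < s for t in sets)}

# Hypothetical two-train system with a shared valve: the top event
# occurs if both trains fail, and the valve fails both trains at once.
top = ("and", [("or", ["pump_A", "valve_V"]), ("or", ["pump_B", "valve_V"])])
print(minimal_cut_sets(top))
```

    The shared valve emerges as a single-component minimal cutset, the kind of common-cause result a diagnostic system would flag first.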

  11. A self-consistent knowledge-based approach to protein design.

    PubMed Central

    Rossi, A; Micheletti, C; Seno, F; Maritan, A

    2001-01-01

    A simple and very efficient protein design strategy is proposed by developing some recently introduced theoretical tools which have been successfully applied to exactly solvable protein models. The design approach is implemented by using three amino acid classes and it is based on the minimization of an appropriate energy function. For a given native state the results of the design procedure are compared, through a statistical analysis, with the properties of an ensemble of sequences folding in the same conformation. If the success rate is computed on those sites designed with high confidence, it can be as high as 80%. The method is also able to identify key sites for the folding process: results for 2ci2 and barnase are in very good agreement with experimental results. PMID:11159418

  12. An approach to state recognition and knowledge-based diagnosis for engines

    NASA Astrophysics Data System (ADS)

    Hong, Ding; Xiuwen, Gui; Shuzi, Yang

    1991-07-01

    Several studies have been performed in order to recognise operating states and diagnose faults in an automotive engine via the acceleration signal of whole-engine block vibration. How to extract time-domain and frequency-domain features which describe the engine's operating states from the acceleration signal is discussed in detail. New concepts of sensitive feature and sensitive distance are defined for the purpose of evaluating the recognition performance. On the basis of these concepts, a new kind of distance function, the quotient distance, is presented, which takes the sensitive feature as its major discriminating basis. A diagnostic strategy is proposed on the basis of the method of state recognition and an expert system architecture.

  13. Knowledge-Based Strategies in Canadian Workplaces: Is There a Role for Continuing Education?

    ERIC Educational Resources Information Center

    Willment, Jo-Anne

    2004-01-01

    A faculty researcher and six graduate students from the Master of Continuing Education program at the University of Calgary completed a small study of knowledge practices within government, postsecondary, and corporate workplaces across Canada. Interview results include an overview of findings and three narrative descriptions. Analysis produced a…

  14. The Weather Lab: An Instruction-Based Assessment Tool Built from a Knowledge-Based System.

    ERIC Educational Resources Information Center

    Mioduser, David; Venezky, Richard L.; Gong, Brian

    1998-01-01

    Presents the Weather Lab, a computer-based tool for assessing student knowledge and understanding of weather phenomena by involving students in generating weather forecasts or manipulating weather components affecting the final formulation of a forecast. Contains 37 references. (Author/ASK)

  15. A Hybrid Knowledge-Based and Data-Driven Approach to Identifying Semantically Similar Concepts

    PubMed Central

    Pivovarov, Rimma; Elhadad, Noémie

    2012-01-01

    An open research question when leveraging ontological knowledge is when to treat different concepts separately from each other and when to aggregate them. For instance, concepts for the terms "paroxysmal cough" and "nocturnal cough" might be aggregated in a kidney disease study, but should be left separate in a pneumonia study. Determining whether two concepts are similar enough to be aggregated can help build better datasets for data mining purposes and avoid signal dilution. Quantifying the similarity among concepts is a difficult task, however, in part because such similarity is context-dependent. We propose a comprehensive method, which computes a similarity score for a concept pair by combining data-driven and ontology-driven knowledge. We demonstrate our method on concepts from SNOMED-CT and on a corpus of clinical notes of patients with chronic kidney disease. By combining information from usage patterns in clinical notes and from ontological structure, the method can prune out concepts that are simply related from those which are semantically similar. When evaluated against a list of concept pairs annotated for similarity, our method reaches an AUC (area under the curve) of 92%. PMID:22289420
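    The idea of blending ontology-driven and data-driven evidence into one similarity score can be sketched as a weighted combination. This is an illustrative toy, not the authors' method: the ancestor-overlap measure, the co-occurrence vectors, and the blending weight are all assumptions.

```python
import math

def ancestors(concept, parent):
    """All ancestors of a concept in an is-a hierarchy given as a parent map."""
    out = []
    while concept in parent:
        concept = parent[concept]
        out.append(concept)
    return out

def ontology_sim(a, b, parent):
    """Ontology-driven score: overlap of the concepts' ancestor sets."""
    aa = set(ancestors(a, parent) + [a])
    ab = set(ancestors(b, parent) + [b])
    return len(aa & ab) / len(aa | ab)

def cosine(u, v):
    """Data-driven score: cosine over sparse note co-occurrence counts."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def hybrid_sim(a, b, parent, vectors, w=0.5):
    """Weighted blend of the ontology-driven and data-driven scores."""
    return w * ontology_sim(a, b, parent) + (1 - w) * cosine(vectors[a], vectors[b])

# Hypothetical mini-hierarchy and note counts for the cough example.
parent = {"paroxysmal cough": "cough", "nocturnal cough": "cough", "cough": "symptom"}
vectors = {"paroxysmal cough": {"note1": 2, "note2": 1},
           "nocturnal cough": {"note1": 1, "note2": 2}}
score = hybrid_sim("paroxysmal cough", "nocturnal cough", parent, vectors)
```

    A study-specific threshold on `score` would then decide whether the two concepts are aggregated or kept separate, matching the context-dependence the abstract emphasizes.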

  16. Knowledge-Based, Central Nervous System (CNS) Lead Selection and Lead Optimization for CNS Drug Discovery

    PubMed Central

    2011-01-01

    The central nervous system (CNS) is the major area affected by aging. Alzheimer’s disease (AD), Parkinson’s disease (PD), brain cancer, and stroke are CNS diseases whose treatment will cost trillions of dollars. Achieving appropriate blood–brain barrier (BBB) penetration is often considered a significant hurdle in the CNS drug discovery process. On the other hand, BBB penetration may be a liability for many non-CNS drug targets, and a clear understanding of the physicochemical and structural differences between CNS and non-CNS drugs may assist both research areas. Because of the numerous and challenging issues in CNS drug discovery and the low success rates, pharmaceutical companies are beginning to deprioritize their drug discovery efforts in the CNS arena. Prompted by these challenges, and to aid the design of high-quality, efficacious CNS compounds, we analyzed the physicochemical property and chemical structural profiles of 317 CNS and 626 non-CNS oral drugs. The conclusions derived provide an ideal property profile for lead selection and a property modification strategy for the lead optimization process. A list of substructural units that may be useful for CNS drug design is also provided. A classification tree was also developed to differentiate between CNS drugs and non-CNS oral drugs. The combined analysis provided the following guidelines for designing high-quality CNS drugs: (i) topological molecular polar surface area of <76 Å2 (25–60 Å2), (ii) at least one (one or two, including one aliphatic amine) nitrogen, (iii) fewer than seven (two to four) linear chains outside of rings, (iv) fewer than three (zero or one) polar hydrogen atoms, (v) volume of 740–970 Å3, (vi) solvent accessible surface area of 460–580 Å2, and (vii) positive QikProp parameter CNS. The ranges within parentheses may be used during lead optimization. One violation of this proposed profile may be acceptable. The chemoinformatics approaches for graphically analyzing multiple properties efficiently are presented. PMID:22267984
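    The numbered guidelines lend themselves to a simple rule checker. The sketch below is hypothetical: the property field names are assumptions, the thresholds follow guidelines (i)-(vi) of the abstract, and one violation is tolerated as the authors suggest (guideline (vii), a proprietary QikProp score, is omitted).

```python
# Hypothetical rule checker for the CNS-drug property profile above.
CNS_RULES = [
    ("tpsa", lambda v: v < 76),             # (i) topological PSA < 76 A^2
    ("n_nitrogens", lambda v: v >= 1),      # (ii) at least one nitrogen
    ("n_linear_chains", lambda v: v < 7),   # (iii) < 7 linear chains outside rings
    ("n_polar_h", lambda v: v < 3),         # (iv) fewer than 3 polar hydrogens
    ("volume", lambda v: 740 <= v <= 970),  # (v) volume 740-970 A^3
    ("sasa", lambda v: 460 <= v <= 580),    # (vi) solvent accessible surface area
]

def cns_profile_violations(props):
    """Names of the rules a candidate molecule's properties violate."""
    return [name for name, ok in CNS_RULES if not ok(props[name])]

def passes_cns_profile(props, max_violations=1):
    """The abstract suggests one violation of the profile may be acceptable."""
    return len(cns_profile_violations(props)) <= max_violations

# Invented property values for a hypothetical candidate molecule.
candidate = dict(tpsa=50, n_nitrogens=2, n_linear_chains=3,
                 n_polar_h=1, volume=800, sasa=500)
print(passes_cns_profile(candidate))
```

    In practice the property values would come from a chemoinformatics toolkit rather than being entered by hand; the checker only encodes the published cutoffs.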

  17. The Digital Anatomist Distributed Framework and Its Applications to Knowledge-based Medical Imaging

    PubMed Central

    Brinkley, James F.; Rosse, Cornelius

    1997-01-01

    The domain of medical imaging is anatomy. Therefore, anatomic knowledge should be a rational basis for organizing and analyzing images. The goals of the Digital Anatomist Program at the University of Washington include the development of an anatomically based software framework for organizing, analyzing, visualizing and utilizing biomedical information. The framework is based on representations for both spatial and symbolic anatomic knowledge, and is being implemented in a distributed architecture in which multiple client programs on the Internet are used to update and access an expanding set of anatomical information resources. The development of this framework is driven by several practical applications, including symbolic anatomic reasoning, knowledge based image segmentation, anatomy information retrieval, and functional brain mapping. Since each of these areas involves many difficult image processing issues, our research strategy is an evolutionary one, in which applications are developed somewhat independently, and partial solutions are integrated in a piecemeal fashion, using the network as the substrate. This approach assumes that networks of interacting components can synergistically work together to solve problems larger than either could solve on its own. Each of the individual projects is described, along with evaluations that show that the individual components are solving the problems they were designed for, and are beginning to interact with each other in a synergistic manner. We argue that this synergy will increase, not only within our own group, but also among groups as the Internet matures, and that an anatomic knowledge base will be a useful means for fostering these interactions. PMID:9147337

  18. Knowledge-base for interpretation of cerebrospinal fluid data patterns. Essentials in neurology and psychiatry.

    PubMed

    Reiber, Hansotto

    2016-06-01

    The physiological and biophysical knowledge base for the interpretation of cerebrospinal fluid (CSF) data, and the corresponding reference ranges, are essential for the clinical pathologist and neurochemist. With its accessible description of the CSF-flow-dependent barrier function, of the dynamics and concentration gradients of blood-derived, brain-derived, and leptomeningeal proteins in CSF, and of the specificity-independent functions of B-lymphocytes in the brain, the neurologist, psychiatrist, neurosurgeon, and neuropharmacologist may also find essentials for diagnosis, research, or the development of therapies. This review may help to replace outdated ideas such as "leakage" models of the barriers, linear immunoglobulin index interpretations, or CSF electrophoresis. Calculations, interpretations, and analytical pitfalls are described for albumin quotients, the quantitation of immunoglobulin synthesis in Reibergrams, oligoclonal IgG, IgM analysis, the polyspecific (MRZ-) antibody reaction, the statistical treatment of CSF data, and general quality assessment in the CSF laboratory. The diagnostic relevance is documented in an accompanying review. PMID:27332077
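    The quotient-based quantities at the heart of such interpretations can be sketched numerically: the albumin CSF/serum quotient as the barrier-function measure, and a hyperbolic upper discrimination line above which intrathecal IgG synthesis is inferred. The constants below are those commonly quoted for Reiber's IgG formula; treat the exact values and unit conventions here as illustrative rather than authoritative.

```python
import math

def albumin_quotient(csf_albumin_mg_l, serum_albumin_g_l):
    """Q_Alb = albumin(CSF) / albumin(serum), converted to the same units."""
    return csf_albumin_mg_l / (serum_albumin_g_l * 1000.0)

def q_lim_igg(q_alb):
    """Upper discrimination line of the IgG Reibergram:
    Q_lim = 0.93 * sqrt(Q_Alb^2 + 6e-6) - 1.7e-3
    (hyperbolic function; constants quoted from the literature,
    shown here for illustration only)."""
    return 0.93 * math.sqrt(q_alb ** 2 + 6e-6) - 1.7e-3

def intrathecal_igg_fraction(q_igg, q_alb):
    """Fraction of CSF IgG attributable to intrathecal synthesis;
    zero when Q_IgG lies below the discrimination line."""
    lim = q_lim_igg(q_alb)
    return max(0.0, 1.0 - lim / q_igg) if q_igg > 0 else 0.0

# Illustrative values: CSF albumin 200 mg/L, serum albumin 40 g/L.
q_alb = albumin_quotient(200, 40)
print(q_alb, intrathecal_igg_fraction(0.007, q_alb))
```

    The nonlinearity of `q_lim_igg` is the point of the Reibergram approach: a fixed linear IgG index, which this review calls outdated, misclassifies samples at high albumin quotients.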

  19. Knowledge-based decision support for Space Station assembly sequence planning

    NASA Astrophysics Data System (ADS)

    1991-04-01

    A complete Personal Analysis Assistant (PAA) for Space Station Freedom (SSF) assembly sequence planning consists of three software components: the system infrastructure, intra-flight value added, and inter-flight value added. The system infrastructure is the substrate on which software elements providing inter-flight and intra-flight value-added functionality are built. It provides the capability for building representations of assembly sequence plans and specification of constraints and analysis options. Intra-flight value-added provides functionality that will, given the manifest for each flight, define cargo elements, place them in the National Space Transportation System (NSTS) cargo bay, compute performance measure values, and identify violated constraints. Inter-flight value-added provides functionality that will, given major milestone dates and capability requirements, determine the number and dates of required flights and develop a manifest for each flight. The current project is Phase 1 of a projected two phase program and delivers the system infrastructure. Intra- and inter-flight value-added were to be developed in Phase 2, which has not been funded. Based on experience derived from hundreds of projects conducted over the past seven years, ISX developed an Intelligent Systems Engineering (ISE) methodology that combines the methods of systems engineering and knowledge engineering to meet the special systems development requirements posed by intelligent systems, systems that blend artificial intelligence and other advanced technologies with more conventional computing technologies. The ISE methodology defines a phased program process that begins with an application assessment designed to provide a preliminary determination of the relative technical risks and payoffs associated with a potential application, and then moves through requirements analysis, system design, and development.

  20. Learning and plan refinement in a knowledge-based system for automatic speech recognition

    SciTech Connect

    De Mori, R.; Lam, L.; Gilloux, M.

    1987-03-01

    This paper shows how a semiautomatic design of a speech recognition system can be done as a planning activity. Recognition performances are used for deciding plan refinement. Inductive learning is performed for setting action preconditions. Experimental results in the recognition of connected letters spoken by 100 speakers are presented.