Science.gov

Sample records for prior biological knowledge-based

  1. SU-E-J-71: Spatially Preserving Prior Knowledge-Based Treatment Planning

    SciTech Connect

    Wang, H; Xing, L

    2015-06-15

    Purpose: Prior knowledge-based treatment planning is impeded by the use of a single dose volume histogram (DVH) curve. Critical spatial information is lost when the dose distribution is collapsed into a histogram, and even similar patients possess geometric variations that become inaccessible in the form of a single DVH. We propose a simple prior knowledge-based planning scheme that extracts features from a prior dose distribution while still preserving the spatial information. Methods: A prior patient plan is not used as a mere starting point for a new patient; rather, stopping criteria are constructed from it. Each structure from the prior patient is partitioned into multiple shells. For instance, the PTV is partitioned into an inner, middle, and outer shell. Prior dose statistics are then extracted for each shell and translated into the appropriate Dmin and Dmax parameters for the new patient. Results: The partitioned dose information from a prior case was applied to 14 2-D prostate cases. Using the prior case yielded final DVHs comparable to manual planning, even though the DVH for the prior case differed from the DVHs for the 14 cases. Using a single DVH for the entire organ was also tested for comparison but showed much poorer performance. Different ways of translating the prior dose statistics into parameters for the new patient were also tested. Conclusion: Prior knowledge-based treatment planning needs to salvage the spatial information without transforming patients on a voxel-to-voxel basis. An efficient balance between the anatomy and dose domains is gained by partitioning the organs into multiple shells. The prior knowledge not only serves as a starting point for a new case; the information extracted from the partitioned shells is also translated into stopping criteria for the optimization problem at hand.
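
    The sketch below illustrates the shell-partitioning idea in Python. It is a minimal illustration under assumed inputs (per-voxel prior doses and normalized distances to the structure boundary); the function names, shell count, and percentile choices are hypothetical and are not taken from the abstract.

    # Minimal sketch (hypothetical names and parameters): split a prior structure's
    # voxels into inner/middle/outer shells by normalized distance to the boundary,
    # extract per-shell dose statistics, and report them as Dmin/Dmax-style bounds
    # that could serve as stopping criteria for a new patient's optimization.
    import numpy as np

    def partition_into_shells(distances, n_shells=3):
        """Assign each voxel a shell index (0 = innermost) from its normalized
        distance-to-boundary value in [0, 1]."""
        edges = np.linspace(0.0, 1.0, n_shells + 1)
        return np.clip(np.digitize(distances, edges[1:-1]), 0, n_shells - 1)

    def shell_dose_bounds(prior_doses, prior_distances, n_shells=3, q_lo=5, q_hi=95):
        """Per-shell dose percentiles from the prior plan, returned as
        (Dmin, Dmax) pairs for the corresponding shells of a new case."""
        shells = partition_into_shells(prior_distances, n_shells)
        bounds = []
        for s in range(n_shells):
            d = prior_doses[shells == s]
            bounds.append((np.percentile(d, q_lo), np.percentile(d, q_hi)))
        return bounds

    # Toy usage: 1000 prior voxels with random doses and boundary distances.
    rng = np.random.default_rng(0)
    doses = rng.normal(70.0, 3.0, 1000)      # Gy, hypothetical prior PTV doses
    dists = rng.uniform(0.0, 1.0, 1000)      # normalized distance to boundary
    for i, (dmin, dmax) in enumerate(shell_dose_bounds(doses, dists)):
        print(f"shell {i}: Dmin ~ {dmin:.1f} Gy, Dmax ~ {dmax:.1f} Gy")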

  2. Case-based reasoning for space applications: Utilization of prior experience in knowledge-based systems

    NASA Technical Reports Server (NTRS)

    King, James A.

    1987-01-01

    The goal is to explain Case-Based Reasoning as a vehicle to establish knowledge-based systems based on experiential reasoning for possible space applications. This goal will be accomplished through an examination of reasoning based on prior experience in a sample domain, and also through a presentation of proposed space applications which could utilize Case-Based Reasoning techniques.

  3. Prior-knowledge-based spectral mixture analysis for impervious surface mapping

    SciTech Connect

    Zhang, Jinshui; He, Chunyang; Zhou, Yuyu; Zhu, Shuang; Shuai, Guanyuan

    2014-01-03

    In this study, we developed a prior-knowledge-based spectral mixture analysis (PKSMA) to map impervious surfaces by using endmembers derived separately for high- and low-density urban regions. First, an urban area was categorized into high- and low-density urban areas using a multi-step classification method. Next, in high-density urban areas, which were assumed to contain only vegetation and impervious surfaces (ISs), the Vegetation-Impervious (V-I) model was used in a spectral mixture analysis (SMA) with three endmembers: vegetation, high albedo, and low albedo. In low-density urban areas, the Vegetation-Impervious-Soil (V-I-S) model was used in an SMA with four endmembers: high albedo, low albedo, soil, and vegetation. The fractions of IS with high and low albedo in each pixel were combined to produce the final IS map. The root mean-square error (RMSE) of the IS map produced using PKSMA was about 11.0%, compared to 14.52% using four-endmember SMA. Particularly in high-density urban areas, PKSMA (RMSE = 6.47%) performed better than four-endmember SMA (RMSE = 15.91%). The results indicate that PKSMA can improve IS mapping compared to traditional SMA by using appropriately selected endmembers, and that it is particularly strong in high-density urban areas.
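
    A minimal sketch of the density-dependent unmixing idea follows: high-density pixels are unmixed with a vegetation/high-albedo/low-albedo endmember set (V-I model) and low-density pixels with an additional soil endmember (V-I-S model). The endmember spectra, band count, and the sum-to-one augmentation trick are illustrative assumptions, not the authors' exact setup.

    # Non-negative, approximately sum-to-one unmixing via an augmented NNLS system,
    # applied with two different endmember sets depending on urban density.
    import numpy as np
    from scipy.optimize import nnls

    def unmix(pixel, endmembers, weight=1e3):
        """Abundance estimate: rows of `endmembers` are spectra; the extra
        heavily weighted row softly enforces that fractions sum to one."""
        E = np.vstack([endmembers.T, weight * np.ones(endmembers.shape[0])])
        y = np.append(pixel, weight)
        fractions, _ = nnls(E, y)
        return fractions

    # Hypothetical 4-band endmember spectra (rows: vegetation, high albedo, low albedo, soil).
    E_all = np.array([[0.05, 0.08, 0.45, 0.50],
                      [0.30, 0.35, 0.40, 0.45],
                      [0.05, 0.06, 0.07, 0.08],
                      [0.12, 0.18, 0.25, 0.30]])

    pixel = np.array([0.15, 0.19, 0.30, 0.34])
    f_high = unmix(pixel, E_all[:3])      # V-I model for high-density urban pixels
    f_low = unmix(pixel, E_all)           # V-I-S model for low-density urban pixels
    is_frac_high = f_high[1] + f_high[2]  # impervious = high + low albedo fractions
    print(is_frac_high, f_low)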

  4. CONCEPTUAL FRAMEWORK FOR THE CHEMICAL EFFECTS IN BIOLOGICAL SYSTEMS (CEBS) TOXICOGENOMICS KNOWLEDGE BASE

    EPA Science Inventory

    Conceptual Framework for the Chemical Effects in Biological Systems (CEBS) Toxicogenomics Knowledge Base

    Abstract
    Toxicogenomics studies how the genome is involved in responses to environmental stressors or toxicants. It combines genetics, genome-scale mRNA expressio...

  5. A prior-knowledge-based threshold segmentation method of forward-looking sonar images for underwater linear object detection

    NASA Astrophysics Data System (ADS)

    Liu, Lixin; Bian, Hongyu; Yagi, Shin-ichi; Yang, Xiaodong

    2016-07-01

    Raw sonar images may not be usable directly for underwater detection or recognition, because disturbances such as grating lobes and multi-path propagation affect the gray-level distribution of sonar images and cause phantom echoes. To search for a more robust segmentation method with a reasonable computational cost, a prior-knowledge-based threshold segmentation method for underwater linear object detection is discussed. The possibility of guiding the segmentation threshold evolution of forward-looking sonar images using prior knowledge is verified by experiment. During the threshold evolution, the collinear relation of the two lines that correspond to double peaks in the voting space of the edged image is used as the termination criterion. The two stages interact: the Hough transform provides the basis for testing the collinear relation of the lines, while the binary image generated at the current threshold provides the input to the Hough transform. The experimental results show that the proposed method maintains a good tradeoff between segmentation quality and computational time in comparison with conventional segmentation methods. The proposed method also lends itself to further processing for unsupervised underwater visual understanding.
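
    A minimal sketch of the prior-knowledge-guided threshold evolution follows: the image is binarized at successive thresholds and the loop stops when the two strongest peaks in a Hough voting space describe nearly collinear lines (the linear-object prior). The tiny Hough transform, the collinearity tolerances, and the synthetic image are illustrative assumptions.

    import numpy as np

    def hough_peaks(binary, n_theta=180):
        """Return the (rho, theta) cells with the two strongest votes."""
        ys, xs = np.nonzero(binary)
        thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
        diag = int(np.hypot(*binary.shape)) + 1
        acc = np.zeros((2 * diag, n_theta), dtype=int)
        for x, y in zip(xs, ys):
            rhos = (x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
            acc[rhos, np.arange(n_theta)] += 1
        flat = np.argsort(acc, axis=None)[-2:]
        r, t = np.unravel_index(flat, acc.shape)
        return list(zip(r - diag, thetas[t]))

    def collinear(peaks, d_rho=3, d_theta=np.deg2rad(3)):
        """Prior-knowledge criterion: the two peak lines nearly coincide."""
        (r1, t1), (r2, t2) = peaks
        return abs(r1 - r2) <= d_rho and abs(t1 - t2) <= d_theta

    def evolve_threshold(image, levels):
        """Sweep thresholds and stop when the collinearity criterion is met."""
        for th in levels:
            binary = image > th
            if binary.sum() and collinear(hough_peaks(binary)):
                return th, binary
        return None, None

    # Toy usage: a synthetic image with one bright horizontal line plus noise.
    rng = np.random.default_rng(1)
    img = rng.uniform(0, 0.4, (64, 64))
    img[32, :] = 1.0
    th, seg = evolve_threshold(img, np.linspace(0.9, 0.3, 13))
    print("selected threshold:", th)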

  6. RegenBase: a knowledge base of spinal cord injury biology for translational research

    PubMed Central

    Callahan, Alison; Abeyruwan, Saminda W.; Al-Ali, Hassan; Sakurai, Kunie; Ferguson, Adam R.; Popovich, Phillip G.; Shah, Nigam H.; Visser, Ubbo; Bixby, John L.; Lemmon, Vance P.

    2016-01-01

    Spinal cord injury (SCI) research is a data-rich field that aims to identify the biological mechanisms resulting in loss of function and mobility after SCI, as well as develop therapies that promote recovery after injury. SCI experimental methods, data and domain knowledge are locked in the largely unstructured text of scientific publications, making large scale integration with existing bioinformatics resources and subsequent analysis infeasible. The lack of standard reporting for experiment variables and results also makes experiment replicability a significant challenge. To address these challenges, we have developed RegenBase, a knowledge base of SCI biology. RegenBase integrates curated literature-sourced facts and experimental details, raw assay data profiling the effect of compounds on enzyme activity and cell growth, and structured SCI domain knowledge in the form of the first ontology for SCI, using Semantic Web representation languages and frameworks. RegenBase uses consistent identifier schemes and data representations that enable automated linking among RegenBase statements and also to other biological databases and electronic resources. By querying RegenBase, we have identified novel biological hypotheses linking the effects of perturbagens to observed behavioral outcomes after SCI. RegenBase is publicly available for browsing, querying and download. Database URL: http://regenbase.org PMID:27055827

  7. BioBIKE: a Web-based, programmable, integrated biological knowledge base.

    PubMed

    Elhai, Jeff; Taton, Arnaud; Massar, J P; Myers, John K; Travers, Mike; Casey, Johnny; Slupesky, Mark; Shrager, Jeff

    2009-07-01

    BioBIKE (biobike.csbc.vcu.edu) is a web-based environment enabling biologists with little programming expertise to combine tools, data, and knowledge in novel and possibly complex ways, as demanded by the biological problem at hand. BioBIKE is composed of three integrated components: a biological knowledge base, a graphical programming interface and an extensible set of tools. Each of the five current BioBIKE instances provides all available information (genomic, metabolic, experimental) appropriate to a given research community. The BioBIKE programming language and graphical programming interface employ familiar operations to help users combine functions and information to conduct biologically meaningful analyses. Many commonly used tools, such as Blast and PHYLIP, are built-in, allowing users to access them within the same interface and to pass results from one to another. Users may also invent their own tools, packaging complex expressions under a single name, which is immediately made accessible through the graphical interface. BioBIKE represents a partial solution to the difficult question of how to enable those with no background in computer programming to work directly and creatively with mass biological information. BioBIKE is distributed under the MIT Open Source license. A description of the underlying language and other technical matters is available at www.Biobike.org. PMID:19433511

  8. RegenBase: a knowledge base of spinal cord injury biology for translational research.

    PubMed

    Callahan, Alison; Abeyruwan, Saminda W; Al-Ali, Hassan; Sakurai, Kunie; Ferguson, Adam R; Popovich, Phillip G; Shah, Nigam H; Visser, Ubbo; Bixby, John L; Lemmon, Vance P

    2016-01-01

    Spinal cord injury (SCI) research is a data-rich field that aims to identify the biological mechanisms resulting in loss of function and mobility after SCI, as well as develop therapies that promote recovery after injury. SCI experimental methods, data and domain knowledge are locked in the largely unstructured text of scientific publications, making large scale integration with existing bioinformatics resources and subsequent analysis infeasible. The lack of standard reporting for experiment variables and results also makes experiment replicability a significant challenge. To address these challenges, we have developed RegenBase, a knowledge base of SCI biology. RegenBase integrates curated literature-sourced facts and experimental details, raw assay data profiling the effect of compounds on enzyme activity and cell growth, and structured SCI domain knowledge in the form of the first ontology for SCI, using Semantic Web representation languages and frameworks. RegenBase uses consistent identifier schemes and data representations that enable automated linking among RegenBase statements and also to other biological databases and electronic resources. By querying RegenBase, we have identified novel biological hypotheses linking the effects of perturbagens to observed behavioral outcomes after SCI. RegenBase is publicly available for browsing, querying and download. Database URL: http://regenbase.org. PMID:27055827

  9. Integrating biological knowledge based on functional annotations for biclustering of gene expression data.

    PubMed

    Nepomuceno, Juan A; Troncoso, Alicia; Nepomuceno-Chamorro, Isabel A; Aguilar-Ruiz, Jesús S

    2015-05-01

    Gene expression data analysis is based on the assumption that co-expressed genes imply co-regulated genes. This assumption is being reformulated because the co-expression of a group of genes may be the result of independent activation with respect to the same experimental condition rather than of the same regulatory regime. For this reason, traditional techniques are now being improved with the use of prior biological knowledge from open-access repositories together with gene expression data. Biclustering is an unsupervised machine learning technique that searches for patterns in gene expression data matrices. A scatter search-based biclustering algorithm that integrates biological information is proposed in this paper. In addition to the gene expression data matrix, the only input of the algorithm is a direct annotation file that relates each gene to a set of terms from a biological repository where genes are annotated. Two different biological measures, FracGO and SimNTO, are proposed to integrate this information by adding them to the to-be-optimized fitness function in the scatter search scheme. FracGO is based on biological enrichment, and SimNTO is based on the overlap among the GO annotations of pairs of genes. Experimental results evaluate the proposed algorithm on two datasets and show that the algorithm performs better when biological knowledge is integrated. Moreover, an analysis and comparison of the two biological measures is presented, and it is concluded that the differences depend both on the data source and on how the annotation file has been built in the case where GO is used. It is also shown that the proposed algorithm obtains a greater number of enriched biclusters than other classical biclustering algorithms typically used as benchmarks, and an analysis of the overlap among biclusters reveals that the obtained biclusters overlap little. The proposed methodology is a general-purpose algorithm which allows
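
    The sketch below shows one way a knowledge-informed fitness term can sit next to an expression-coherence term. The exact FracGO and SimNTO definitions and the scatter search machinery belong to the paper; here a Cheng-and-Church mean squared residue is combined with a FracGO-like fraction (genes sharing their most frequent annotation term) purely as an assumption-labelled approximation.

    import numpy as np
    from collections import Counter

    def mean_squared_residue(X):
        """Cheng-and-Church-style coherence of a bicluster submatrix (lower = more coherent)."""
        row_m = X.mean(axis=1, keepdims=True)
        col_m = X.mean(axis=0, keepdims=True)
        return np.mean((X - row_m - col_m + X.mean()) ** 2)

    def frac_go_like(genes, annotations):
        """Fraction of bicluster genes annotated with the term most frequent among them."""
        terms = [t for g in genes for t in annotations.get(g, [])]
        if not terms:
            return 0.0
        return Counter(terms).most_common(1)[0][1] / len(genes)

    def fitness(sub_matrix, genes, annotations, lam=0.5):
        """Lower is better: coherent expression and shared annotations are rewarded."""
        return mean_squared_residue(sub_matrix) - lam * frac_go_like(genes, annotations)

    # Toy usage: a 3x3 bicluster submatrix and a tiny gene-to-GO annotation file.
    expr_sub = np.array([[1.0, 2.0, 3.0], [1.1, 2.1, 3.2], [0.9, 1.8, 2.9]])
    ann = {"g1": ["GO:0006915"], "g2": ["GO:0006915"], "g3": ["GO:0008150"]}
    print(fitness(expr_sub, ["g1", "g2", "g3"], ann))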

  10. A Knowledge Base for Teaching Biology Situated in the Context of Genetic Testing

    ERIC Educational Resources Information Center

    van der Zande, Paul; Waarlo, Arend Jan; Brekelmans, Mieke; Akkerman, Sanne F.; Vermunt, Jan D.

    2011-01-01

    Recent developments in the field of genomics will impact the daily practice of biology teachers who teach genetics in secondary education. This study reports on the first results of a research project aimed at enhancing biology teacher knowledge for teaching genetics in the context of genetic testing. The increasing body of scientific knowledge…

  11. A Knowledge Base for Teaching Biology Situated in the Context of Genetic Testing

    NASA Astrophysics Data System (ADS)

    van der Zande, Paul; Waarlo, Arend Jan; Brekelmans, Mieke; Akkerman, Sanne F.; Vermunt, Jan D.

    2011-10-01

    Recent developments in the field of genomics will impact the daily practice of biology teachers who teach genetics in secondary education. This study reports on the first results of a research project aimed at enhancing biology teacher knowledge for teaching genetics in the context of genetic testing. The increasing body of scientific knowledge concerning genetic testing and the related consequences for decision-making indicate the societal relevance of such a situated learning approach. What content knowledge do biology teachers need for teaching genetics in the personal health context of genetic testing? This study describes the required content knowledge by exploring the educational practice and clinical genetic practices. Nine experienced teachers and 12 respondents representing the clinical genetic practices (clients, medical professionals, and medical ethicists) were interviewed about the biological concepts and ethical, legal, and social aspects (ELSA) of testing they considered relevant to empowering students as future health care clients. The ELSA suggested by the respondents were complemented by suggestions found in the literature on genetic counselling. The findings revealed that the required teacher knowledge consists of multiple layers that are embedded in specific genetic test situations: on the one hand, the knowledge of concepts represented by the curricular framework and some additional concepts (e.g. multifactorial and polygenic disorder) and, on the other hand, more knowledge of ELSA and generic characteristics of genetic test practice (uncertainty, complexity, probability, and morality). Suggestions regarding how to translate these characteristics, concepts, and ELSA into context-based genetics education are discussed.

  12. Enhancing Interpretability of Gene Signatures with Prior Biological Knowledge

    PubMed Central

    Squillario, Margherita; Barbieri, Matteo; Verri, Alessandro; Barla, Annalisa

    2016-01-01

    Biological interpretability is a key requirement for the output of microarray data analysis pipelines. The most used pipeline first identifies a gene signature from the acquired measurements and then uses gene enrichment analysis as a tool for functionally characterizing the obtained results. Recently Knowledge Driven Variable Selection (KDVS), an alternative approach which performs both steps at the same time, has been proposed. In this paper, we assess the effectiveness of KDVS against standard approaches on a Parkinson’s Disease (PD) dataset. The presented quantitative analysis is made possible by the construction of a reference list of genes and gene groups associated to PD. Our work shows that KDVS is much more effective than the standard approach in enhancing the interpretability of the obtained results. PMID:27600081

  13. Enhancing Interpretability of Gene Signatures with Prior Biological Knowledge.

    PubMed

    Squillario, Margherita; Barbieri, Matteo; Verri, Alessandro; Barla, Annalisa

    2016-01-01

    Biological interpretability is a key requirement for the output of microarray data analysis pipelines. The most used pipeline first identifies a gene signature from the acquired measurements and then uses gene enrichment analysis as a tool for functionally characterizing the obtained results. Recently Knowledge Driven Variable Selection (KDVS), an alternative approach which performs both steps at the same time, has been proposed. In this paper, we assess the effectiveness of KDVS against standard approaches on a Parkinson's Disease (PD) dataset. The presented quantitative analysis is made possible by the construction of a reference list of genes and gene groups associated to PD. Our work shows that KDVS is much more effective than the standard approach in enhancing the interpretability of the obtained results. PMID:27600081

  14. Knowledge-Based Abstracting.

    ERIC Educational Resources Information Center

    Black, William J.

    1990-01-01

    Discussion of automatic abstracting of technical papers focuses on a knowledge-based method that uses two sets of rules. Topics discussed include anaphora; text structure and discourse; abstracting techniques, including the keyword method and the indicator phrase method; and tools for text skimming. (27 references) (LRW)

  15. Knowledge based programming at KSC

    NASA Technical Reports Server (NTRS)

    Tulley, J. H., Jr.; Delaune, C. I.

    1986-01-01

    Various KSC knowledge-based systems projects are discussed. The objectives of the knowledge-based automatic test equipment and Shuttle connector analysis network projects are described. It is observed that knowledge-based programs must handle factual and expert knowledge; the characteristics of these two types of knowledge are examined. Applications for the knowledge-based programming technique are considered.

  16. Filtering genetic variants and placing informative priors based on putative biological function.

    PubMed

    Friedrichs, Stefanie; Malzahn, Dörthe; Pugh, Elizabeth W; Almeida, Marcio; Liu, Xiao Qing; Bailey, Julia N

    2016-01-01

    High-density genetic marker data, especially sequence data, imply an immense multiple testing burden. This can be ameliorated by filtering genetic variants, exploiting or accounting for correlations between variants, jointly testing variants, and by incorporating informative priors. Priors can be based on biological knowledge or predicted variant function, or even be used to integrate gene expression or other omics data. Based on Genetic Analysis Workshop (GAW) 19 data, this article discusses the diversity and usefulness of functional variant scores provided, for example, by PolyPhen2, SIFT, or RegulomeDB annotations. Incorporating functional scores into variant filters or weights and adjusting the significance level for correlations between variants yielded significant associations with blood pressure traits in a large family study of Mexican Americans (GAW19 data set). Marker rs218966 in gene PHF14 and rs9836027 in MAP4 were significantly associated with hypertension; additionally, rare variants in SNUPN were significantly associated with systolic blood pressure. Variant weights strongly influenced the power of kernel methods and burden tests. Apart from variant weights in test statistics, prior weights may also be used when combining test statistics or to informatively weight p values while controlling the false discovery rate (FDR). Indeed, power improved when gene expression data were used for FDR-controlled informative weighting of the association test p values of genes. Finally, approaches exploiting variant correlations included identity-by-descent mapping and the optimal strategy for joint testing rare and common variants, which was observed to depend on linkage disequilibrium structure. PMID:26866982
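
    The sketch below shows one standard recipe for informatively weighted p values under FDR control (a weighted Benjamini-Hochberg step: divide each p value by a prior weight with mean one, then apply ordinary BH). It illustrates the general idea the abstract describes, not the authors' exact GAW19 workflow, and the toy weights are hypothetical.

    import numpy as np

    def weighted_bh(pvals, weights, alpha=0.05):
        """Boolean rejection mask for weighted Benjamini-Hochberg."""
        w = np.asarray(weights, dtype=float)
        w = w / w.mean()                      # prior weights must average to 1
        p = np.asarray(pvals) / w             # informative re-weighting
        order = np.argsort(p)
        m = len(p)
        thresh = alpha * np.arange(1, m + 1) / m
        passed = p[order] <= thresh
        k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
        reject = np.zeros(m, dtype=bool)
        reject[order[:k]] = True
        return reject

    # Toy usage: variants up-weighted or down-weighted by functional evidence.
    pvals = np.array([1e-6, 0.004, 0.03, 0.20, 0.60])
    weights = np.array([2.0, 2.0, 0.5, 0.5, 1.0])   # e.g. scaled functional scores
    print(weighted_bh(pvals, weights))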

  17. A collaborative knowledge base for cognitive phenomics

    PubMed Central

    Sabb, FW; Bearden, CE; Glahn, DC; Parker, DS; Freimer, N; Bilder, RM

    2014-01-01

    The human genome project has stimulated development of impressive repositories of biological knowledge at the genomic level and new knowledge bases are rapidly being developed in a ‘bottom-up’ fashion. In contrast, higher-level phenomics knowledge bases are underdeveloped, particularly with respect to the complex neuropsychiatric syndrome, symptom, cognitive, and neural systems phenotypes widely acknowledged as critical to advance molecular psychiatry research. This gap limits informatics strategies that could improve both the mining and representation of relevant knowledge, and help prioritize phenotypes for new research. Most existing structured knowledge bases also engage a limited set of contributors, and thus fail to leverage recent developments in social collaborative knowledge-building. We developed a collaborative annotation database to enable representation and sharing of empirical information about phenotypes important to neuropsychiatric research (www.Phenowiki.org). As a proof of concept, we focused on findings relevant to ‘cognitive control’, a neurocognitive construct considered important to multiple neuropsychiatric syndromes. Currently this knowledge base tabulates empirical findings about heritabilities and measurement properties of specific cognitive task and rating scale indicators (n = 449 observations). It is hoped that this new open resource can serve as a starting point that enables broadly collaborative knowledge-building, and help investigators select and prioritize endophenotypes for translational research. PMID:18180765

  18. Knowledge-based nursing diagnosis

    NASA Astrophysics Data System (ADS)

    Roy, Claudette; Hay, D. Robert

    1991-03-01

    Nursing diagnosis is an integral part of the nursing process and determines the interventions leading to outcomes for which the nurse is accountable. Diagnoses under the time constraints of modern nursing can benefit from a computer assist. A knowledge-based engineering approach was developed to address these problems. A number of problems addressed during system design to make the system practical extended beyond the capture of knowledge. The issues involved in implementing a professional knowledge base in a clinical setting are discussed. System functions, structure, interfaces, the health care environment, and terminology and taxonomy are covered. An integrated system concept, from assessment through intervention and evaluation, is outlined.

  19. Expert and Knowledge Based Systems.

    ERIC Educational Resources Information Center

    Demaid, Adrian; Edwards, Lyndon

    1987-01-01

    Discusses the nature and current state of knowledge-based systems and expert systems. Describes an expert system from the viewpoints of a computer programmer and an applications expert. Addresses concerns related to materials selection and forecasts future developments in the teaching of materials engineering. (ML)

  20. Population Education: A Knowledge Base.

    ERIC Educational Resources Information Center

    Jacobson, Willard J.

    To aid junior high and high school educators and curriculum planners as they develop population education programs, the book provides an overview of the population education knowledge base. In addition, it suggests learning activities, discussion questions, and background information which can be integrated into courses dealing with population,…

  1. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing, including spectral filtering, CFAR and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-association out of N-scan rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and a VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single scan performance with a nominal real-time delay of less than one second between illumination and display.
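
    A minimal sketch of the M-association-out-of-N-scan rule follows: each scan's threshold crossings are gated against a tentative track template in range/Doppler/azimuth, and a detection is declared when at least M of the last N scans produce an association. The gate sizes and measurement values are hypothetical.

    def associates(track, detection, gates=(150.0, 5.0, 1.0)):
        """True if a detection falls inside the (range m, Doppler m/s, azimuth deg)
        gate around the tentative track state."""
        return all(abs(t - d) <= g for t, d, g in zip(track, detection, gates))

    def m_of_n_detect(track, scans, m=3, n=5):
        """Declare a target if at least m of the last n scans contain an associating hit."""
        hits = [any(associates(track, det) for det in scan) for scan in scans[-n:]]
        return sum(hits) >= m

    # Toy usage: a tentative track and five scans of (range, Doppler, azimuth) hits.
    track = (12000.0, 60.0, 45.0)
    scans = [
        [(12010.0, 59.0, 45.2)],            # associates
        [(30000.0, -5.0, 120.0)],           # false alarm only
        [(11980.0, 61.0, 44.9)],            # associates
        [(12005.0, 60.5, 45.1)],            # associates
        [],                                  # missed detection
    ]
    print(m_of_n_detect(track, scans))       # True: 3 of the last 5 scans associate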

  2. Automated knowledge-base refinement

    NASA Technical Reports Server (NTRS)

    Mooney, Raymond J.

    1994-01-01

    Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.

  3. Knowledge based SAR images exploitations

    NASA Astrophysics Data System (ADS)

    Wang, David L.

    1987-01-01

    One of the basic functions of a SAR image exploitation system is the detection of man-made objects. The performance of object detection is strongly limited by the performance of the segmentation modules. This paper presents a detection paradigm composed of an adaptive segmentation algorithm based on a priori knowledge of objects, followed by a top-down hierarchical detection process that generates and evaluates object hypotheses. Shadow information and inter-object relationships can be added to the knowledge base to improve performance over that of a statistical detector based only on the attributes of individual objects.

  4. Knowledge based jet engine diagnostics

    NASA Technical Reports Server (NTRS)

    Jellison, Timothy G.; Dehoff, Ronald L.

    1987-01-01

    A fielded expert system automates equipment fault isolation and recommends corrective maintenance action for Air Force jet engines. The knowledge based diagnostics tool was developed as an expert system interface to the Comprehensive Engine Management System, Increment IV (CEMS IV), the standard Air Force base level maintenance decision support system. XMAM (trademark), the Expert Maintenance Tool, automates procedures for troubleshooting equipment faults, provides a facility for interactive user training, and fits within a diagnostics information feedback loop to improve the troubleshooting and equipment maintenance processes. The application of expert diagnostics to the Air Force A-10A aircraft TF-34 engine equipped with the Turbine Engine Monitoring System (TEMS) is presented.

  5. Does Teaching Experience Matter? Examining Biology Teachers' Prior Knowledge for Teaching in an Alternative Certification Program

    ERIC Educational Resources Information Center

    Friedrichsen, Patricia J.; Abell, Sandra K.; Pareja, Enrique M.; Brown, Patrick L.; Lankford, Deanna M.; Volkmann, Mark J.

    2009-01-01

    Alternative certification programs (ACPs) have been proposed as a viable way to address teacher shortages, yet we know little about how teacher knowledge develops within such programs. The purpose of this study was to investigate prior knowledge for teaching among students entering an ACP, comparing individuals with teaching experience to those…

  6. Cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward A.; Buchanan, Bruce G.

    1988-01-01

    This final report covers work performed under Contract NCC2-220 between NASA Ames Research Center and the Knowledge Systems Laboratory, Stanford University. The period of research was from March 1, 1987 to February 29, 1988. Topics covered were as follows: (1) concurrent architectures for knowledge-based systems; (2) methods for the solution of geometric constraint satisfaction problems, and (3) reasoning under uncertainty. The research in concurrent architectures was co-funded by DARPA, as part of that agency's Strategic Computing Program. The research has been in progress since 1985, under DARPA and NASA sponsorship. The research in geometric constraint satisfaction has been done in the context of a particular application, that of determining the 3-D structure of complex protein molecules, using the constraints inferred from NMR measurements.

  7. Knowledge Base Editor (SharpKBE)

    NASA Technical Reports Server (NTRS)

    Tikidjian, Raffi; James, Mark; Mackey, Ryan

    2007-01-01

    The SharpKBE software provides a graphical user interface environment for domain experts to build and manage knowledge base systems. Knowledge bases can be exported/translated to various target languages automatically, including customizable target languages.

  8. Using Conceptual Analysis To Build Knowledge Bases.

    ERIC Educational Resources Information Center

    Shinghal, Rajjan; Le Xuan, Albert

    This paper describes the methods and techniques called Conceptual Analysis (CA), a rigorous procedure to generate (without involuntary omissions and repetitions) knowledge bases for the development of knowledge-based systems. An introduction is given of CA and how it can be used to produce knowledge bases. A discussion is presented on what is…

  9. Foundation: Transforming data bases into knowledge bases

    NASA Technical Reports Server (NTRS)

    Purves, R. B.; Carnes, James R.; Cutts, Dannie E.

    1987-01-01

    One approach to transforming information stored in relational data bases into knowledge based representations and back again is described. This system, called Foundation, allows knowledge bases to take advantage of vast amounts of pre-existing data. A benefit of this approach is inspection, and even population, of data bases through an intelligent knowledge-based front-end.

  10. Knowledge-based landmarking of cephalograms.

    PubMed

    Lévy-Mandel, A D; Venetsanopoulos, A N; Tsotsos, J K

    1986-06-01

    Orthodontists have defined a certain number of characteristic points, or landmarks, on X-ray images of the human skull which are used to study growth or as a diagnostic aid. This work presents the first step toward an automatic extraction of these points. They are defined with respect to particular lines which are retrieved first. The original image is preprocessed with a prefiltering operator (median filter) followed by an edge detector (Mero-Vassy operator). A knowledge-based line-following algorithm is subsequently applied, involving a production system with organized sets of rules and a simple interpreter. The a priori knowledge implemented in the algorithm must take into account the fact that the lines represent biological shapes and can vary considerably from one patient to the next. The performance of the algorithm is judged with the help of objective quality criteria. Determination of the exact shapes of the lines allows the computation of the positions of the landmarks. PMID:3519070
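
    The sketch below illustrates only the preprocessing chain described above: a median prefilter followed by an edge detector. A Sobel gradient magnitude stands in for the Mero-Vassy operator used in the paper, and the knowledge-based line-following rules and landmark geometry are not reproduced; thresholds and the synthetic image are assumptions.

    import numpy as np
    from scipy import ndimage

    def preprocess_cephalogram(image, median_size=5, edge_fraction=0.2):
        """Return a binary edge map suitable as input to a line-following stage."""
        smoothed = ndimage.median_filter(image, size=median_size)
        gx = ndimage.sobel(smoothed, axis=1)
        gy = ndimage.sobel(smoothed, axis=0)
        magnitude = np.hypot(gx, gy)
        threshold = magnitude.max() * edge_fraction
        return magnitude > threshold

    # Toy usage: a synthetic image with a bright disc standing in for a bone contour.
    yy, xx = np.mgrid[:128, :128]
    img = ((xx - 64) ** 2 + (yy - 64) ** 2 < 40 ** 2).astype(float)
    edges = preprocess_cephalogram(img)
    print("edge pixels:", int(edges.sum()))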

  11. Genetic characterization for intraspecific hybridization of an exotic parasitoid prior to its introduction for classical biological control

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The successful establishment of an exotic parasitoid in the context of classical biological control of insect pests depends upon its adaptability to the new environment. In theory, intraspecific hybridization may improve the success of the establishment as a result of an increase in the available ge...

  12. Knowledge-Based Network Operations

    NASA Astrophysics Data System (ADS)

    Wu, Chuan-lin; Hung, Chaw-Kwei; Stedry, Steven P.; McClure, James P.; Yeh, Show-Way

    1988-03-01

    An expert system is being implemented for enhancing operability of the Ground Communication Facility (GCF) of Jet Propulsion Laboratory's (JPL) Deep Space Network (DSN). The DSN is a tracking network for all of JPL's spacecraft plus a subset of spacecraft launched by other NASA centers. A GCF upgrade task is set to replace the aging GCF system with new, modern equipment capable of supporting a knowledge-based monitor and control approach. The expert system, implemented with KEE on a SUN workstation, is used for performing network fault management, configuration management, and performance management in real time. Monitor data are collected from each processor and DSCC every five seconds. In addition to serving as input parameters of the expert system, extracted management information is used to update a management information database. For monitor and control purposes, the software of each processor is divided into layers following the OSI standard. Each layer is modeled as a finite state machine. A System Management Application Process (SMAP) is implemented at the application layer, which coordinates the layer managers of the same processor and communicates with peer SMAPs of other processors. The expert system will be tuned by augmenting the production rules as operation proceeds, and its performance will be measured.

  13. A Natural Language Interface Concordant with a Knowledge Base

    PubMed Central

    Han, Yong-Jin; Park, Seong-Bae; Park, Se-Young

    2016-01-01

    The discordance between expressions interpretable by a natural language interface (NLI) system and those answerable by a knowledge base is a critical problem in the field of NLIs. In order to solve this discordance problem, this paper proposes a method to translate natural language questions into formal queries that can be generated from a graph-based knowledge base. The proposed method considers a subgraph of a knowledge base as a formal query. Thus, all formal queries corresponding to a concept or a predicate in the knowledge base can be generated prior to query time and all possible natural language expressions corresponding to each formal query can also be collected in advance. A natural language expression has a one-to-one mapping with a formal query. Hence, a natural language question is translated into a formal query by matching the question with the most appropriate natural language expression. If the confidence of this matching is not sufficiently high the proposed method rejects the question and does not answer it. Multipredicate queries are processed by regarding them as a set of collected expressions. The experimental results show that the proposed method thoroughly handles answerable questions from the knowledge base and rejects unanswerable ones effectively. PMID:26904105
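
    A minimal sketch of the matching-with-rejection idea follows: every formal query (a knowledge-base subgraph) has pre-collected natural language expressions, an incoming question is matched to its closest expression, and questions whose best match falls below a confidence threshold are rejected. The Jaccard similarity, the threshold, and the example query syntax are placeholders, not the paper's model.

    def jaccard(a, b):
        """Token-overlap similarity between two phrases."""
        a, b = set(a.lower().split()), set(b.lower().split())
        return len(a & b) / len(a | b) if a | b else 0.0

    def answer(question, expression_to_query, min_confidence=0.5):
        """Translate a question into the formal query of its best-matching expression,
        or return None when the match confidence is too low."""
        best_expr, best_sim = None, 0.0
        for expr in expression_to_query:
            sim = jaccard(question, expr)
            if sim > best_sim:
                best_expr, best_sim = expr, sim
        if best_sim < min_confidence:
            return None                      # reject unanswerable question
        return expression_to_query[best_expr]

    # Toy usage with two pre-generated formal queries (hypothetical syntax).
    catalog = {
        "which city is the capital of france": "SELECT ?c WHERE { ?c capitalOf France }",
        "who wrote hamlet": "SELECT ?a WHERE { ?a wrote Hamlet }",
    }
    print(answer("what city is the capital of france", catalog))
    print(answer("how tall is the eiffel tower", catalog))   # rejected -> None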

  14. A Discussion of Knowledge Based Design

    NASA Technical Reports Server (NTRS)

    Wood, Richard M.; Bauer, Steven X. S.

    1999-01-01

    A discussion of knowledge and Knowledge-Based design as related to the design of aircraft is presented. The paper discusses the perceived problem with existing design studies and introduces the concepts of design and knowledge for a Knowledge-Based design system. A review of several Knowledge-Based design activities is provided. A Virtual Reality, Knowledge-Based system is proposed and reviewed. The feasibility of Virtual Reality to improve the efficiency and effectiveness of aerodynamic and multidisciplinary design, evaluation, and analysis of aircraft through the coupling of virtual reality technology and a Knowledge-Based design system is also reviewed. The final section of the paper discusses future directions for design and the role of Knowledge-Based design.

  15. Resolving intermediates in biological proton-coupled electron transfer: a tyrosyl radical prior to proton movement.

    PubMed

    Faller, Peter; Goussias, Charilaos; Rutherford, A William; Un, Sun

    2003-07-22

    The coupling of proton chemistry with redox reactions is important in many enzymes and is central to energy transduction in biology. However, the mechanistic details are poorly understood. Here, we have studied tyrosine oxidation, a reaction in which the removal of one electron from the amino acid is linked to the release of its phenolic proton. Using the unique photochemical properties of photosystem II, it was possible to oxidize the tyrosine at 1.8 K, a temperature at which proton and protein motions are limited. The state formed was detected by high magnetic field EPR as a high-energy radical intermediate trapped in an unprecedentedly electropositive environment. Warming of the protein allows this state to convert to a relaxed, stable form of the radical. The relaxation event occurs at 77 K and seems to involve proton migration and only a very limited movement of the protein. These reactions represent a stabilization process that prevents the back-reaction and determines the reactivity of the radical. PMID:12855767

  16. Resolving intermediates in biological proton-coupled electron transfer: A tyrosyl radical prior to proton movement

    PubMed Central

    Faller, Peter; Goussias, Charilaos; Rutherford, A. William; Un, Sun

    2003-01-01

    The coupling of proton chemistry with redox reactions is important in many enzymes and is central to energy transduction in biology. However, the mechanistic details are poorly understood. Here, we have studied tyrosine oxidation, a reaction in which the removal of one electron from the amino acid is linked to the release of its phenolic proton. Using the unique photochemical properties of photosystem II, it was possible to oxidize the tyrosine at 1.8 K, a temperature at which proton and protein motions are limited. The state formed was detected by high magnetic field EPR as a high-energy radical intermediate trapped in an unprecedentedly electropositive environment. Warming of the protein allows this state to convert to a relaxed, stable form of the radical. The relaxation event occurs at 77 K and seems to involve proton migration and only a very limited movement of the protein. These reactions represent a stabilization process that prevents the back-reaction and determines the reactivity of the radical. PMID:12855767

  17. IGENPRO knowledge-based operator support system.

    SciTech Connect

    Morman, J. A.

    1998-07-01

    Research and development is being performed on the knowledge-based IGENPRO operator support package for plant transient diagnostics and management to provide operator assistance during off-normal plant transient conditions. A generic thermal-hydraulic (T-H) first-principles approach is being implemented using automated reasoning, artificial neural networks and fuzzy logic to produce a generic T-H system-independent/plant-independent package. The IGENPRO package has a modular structure composed of three modules: the transient trend analysis module PROTREN, the process diagnostics module PRODIAG and the process management module PROMANA. Cooperative research and development work has focused on the PRODIAG diagnostic module of the IGENPRO package and the operator training matrix of transients used at the Braidwood Pressurized Water Reactor station. Promising simulator testing results with PRODIAG have been obtained for the Braidwood Chemical and Volume Control System (CVCS) and the Component Cooling Water System. Initial CVCS test results have also been obtained for the PROTREN module. The PROMANA effort also involves the CVCS. Future work will be focused on long-term, slow and mild degradation transients, where diagnosis of incipient T-H component failure prior to forced outage events is required. This will enhance the capability of the IGENPRO system as a predictive maintenance tool for plant staff and operator support.

  18. A knowledge base browser using hypermedia

    NASA Technical Reports Server (NTRS)

    Pocklington, Tony; Wang, Lui

    1990-01-01

    A hypermedia system is being developed to browse CLIPS (C Language Integrated Production System) knowledge bases. This system will be used to help train flight controllers for the Mission Control Center. Browsing this knowledge base is accomplished either by navigating through the various collection nodes that have already been defined or through the query languages.

  19. Novel joint TOA/RSSI-based WCE location tracking method without prior knowledge of biological human body tissues.

    PubMed

    Ito, Takahiro; Anzai, Daisuke; Jianqing Wang

    2014-01-01

    This paper proposes a novel joint time of arrival (TOA)/received signal strength indicator (RSSI)-based wireless capsule endoscope (WCE) location tracking method without prior knowledge of biological human tissues. Generally, TOA-based localization can achieve much higher localization accuracy than other radio frequency-based localization techniques, but wireless signals transmitted from a WCE pass through various kinds of human body tissues; as a result, the propagation velocity inside a human body differs from that in free space. Because the variation of propagation velocity is mainly affected by the relative permittivity of human body tissues, instead of measuring the relative permittivity in advance, we simultaneously estimate not only the WCE location but also the relative permittivity. For this purpose, this paper first derives the relative permittivity estimation model from measured RSSI information. Then, we apply a particle filter algorithm that combines TOA-based localization with RSSI-based relative permittivity estimation. Our computer simulation results demonstrate that the proposed tracking method with the particle filter can accomplish an excellent localization accuracy of around 2 mm without prior information on the relative permittivity of the human body tissues. PMID:25571605
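
    A minimal sketch of the joint estimation idea follows: each particle carries both a WCE position and a relative-permittivity value, TOA measurements are weighted with an in-body propagation velocity c/sqrt(eps_r), and resampling concentrates particles on consistent (position, eps_r) pairs. The sensor geometry, noise levels, and the omitted RSSI-based eps_r update are placeholders, not the paper's model.

    import numpy as np

    C0 = 3e8                                   # free-space speed of light (m/s)
    SENSORS = np.array([[0.0, 0.0, 0.0], [0.3, 0.0, 0.0],
                        [0.0, 0.3, 0.0], [0.0, 0.0, 0.3]])   # on-body sensors (m)

    def toa_likelihood(particles, toa_meas, sigma=0.5e-9):
        """Gaussian TOA likelihood; propagation velocity depends on each particle's eps_r."""
        pos, eps = particles[:, :3], particles[:, 3]
        v = C0 / np.sqrt(eps)                                  # in-body velocity
        dists = np.linalg.norm(pos[:, None, :] - SENSORS[None], axis=2)
        toa_pred = dists / v[:, None]
        return np.exp(-0.5 * np.sum((toa_pred - toa_meas) ** 2, axis=1) / sigma ** 2)

    def particle_filter_step(particles, toa_meas, rng):
        """Diffuse, weight by the TOA likelihood, and resample."""
        particles = particles + rng.normal(0, [0.005, 0.005, 0.005, 0.5], particles.shape)
        particles[:, 3] = np.clip(particles[:, 3], 1.0, 80.0)  # keep eps_r physical
        w = toa_likelihood(particles, toa_meas)
        w /= w.sum()
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx]

    # Toy usage: true WCE at (0.1, 0.1, 0.1) m inside tissue with eps_r = 50.
    rng = np.random.default_rng(2)
    true_pos, true_eps = np.array([0.1, 0.1, 0.1]), 50.0
    toa = np.linalg.norm(true_pos - SENSORS, axis=1) / (C0 / np.sqrt(true_eps))
    particles = np.column_stack([rng.uniform(0, 0.3, (2000, 3)), rng.uniform(1, 80, 2000)])
    for _ in range(30):
        particles = particle_filter_step(particles, toa, rng)
    print("estimated position and eps_r:", particles.mean(axis=0))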

  20. A knowledge-based clustering algorithm driven by Gene Ontology.

    PubMed

    Cheng, Jill; Cline, Melissa; Martin, John; Finkelstein, David; Awad, Tarif; Kulp, David; Siani-Rose, Michael A

    2004-08-01

    We have developed an algorithm for inferring the degree of similarity between genes by using the graph-based structure of Gene Ontology (GO). We applied this knowledge-based similarity metric to a clique-finding algorithm for detecting sets of related genes with biological classifications. We also combined it with an expression-based distance metric to produce a co-cluster analysis, which accentuates genes with both similar expression profiles and similar biological characteristics and identifies gene clusters that are more stable and biologically meaningful. These algorithms are demonstrated in the analysis of MPRO cell differentiation time series experiments. PMID:15468759
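
    A minimal sketch of the co-cluster idea follows: an expression-based distance is blended with a knowledge-based similarity so that genes with both similar profiles and similar annotations end up close. A Jaccard overlap of GO term sets stands in for the paper's graph-based GO metric, and the blending weight and toy data are assumptions.

    import numpy as np
    from scipy.spatial.distance import correlation   # 1 - Pearson correlation
    from scipy.cluster.hierarchy import linkage, fcluster

    def go_similarity(terms_a, terms_b):
        """Annotation-set overlap as a stand-in for a graph-based GO similarity."""
        a, b = set(terms_a), set(terms_b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def co_cluster_distance(expr_a, expr_b, terms_a, terms_b, alpha=0.5):
        """Blend expression distance (in [0, 2]) with GO distance (in [0, 1])."""
        d_expr = correlation(expr_a, expr_b)
        d_go = 1.0 - go_similarity(terms_a, terms_b)
        return alpha * d_expr + (1 - alpha) * d_go

    # Toy usage: three genes, two of them co-expressed and co-annotated.
    expr = {"g1": [1, 2, 3, 4], "g2": [1.1, 2.2, 2.9, 4.1], "g3": [4, 1, 3, 2]}
    go = {"g1": {"GO:0006915", "GO:0008219"}, "g2": {"GO:0006915"}, "g3": {"GO:0007049"}}
    genes = list(expr)
    dists = [co_cluster_distance(expr[a], expr[b], go[a], go[b])
             for i, a in enumerate(genes) for b in genes[i + 1:]]
    labels = fcluster(linkage(np.asarray(dists), method="average"), t=2, criterion="maxclust")
    print(dict(zip(genes, labels)))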

  1. The Coming of Knowledge-Based Business.

    ERIC Educational Resources Information Center

    Davis, Stan; Botkin, Jim

    1994-01-01

    Economic growth will come from knowledge-based businesses whose "smart" products filter and interpret information. Businesses will come to think of themselves as educators and their customers as learners. (SK)

  2. Updating knowledge bases with disjunctive information

    SciTech Connect

    Zhang, Yan; Foo, Norman Y.

    1996-12-31

    It is well known that the minimal change principle has been widely used in knowledge base updates. However, recent research has shown that conventional minimal change methods, e.g., the PMA, are generally problematic for updating knowledge bases with disjunctive information. In this paper, we propose two different approaches to deal with this problem: one is called minimal change with exceptions (MCE), the other minimal change with maximal disjunctive inclusions (MCD). The first method is syntax-based, while the second is model-theoretic. We show that these two approaches are equivalent for propositional knowledge base updates, and that the second method is also appropriate for first-order knowledge base updates. We then prove that our new update approaches still satisfy the standard Katsuno and Mendelzon update postulates.
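
    For orientation, the sketch below implements only the baseline minimal-change (PMA-style) update that the abstract takes as its starting point: for every model of the knowledge base, keep the models of the update formula whose set of changed atoms is subset-minimal. The proposed MCE/MCD refinements for disjunctive updates are not reproduced, and the two-atom language is a toy assumption.

    from itertools import combinations

    ATOMS = ["a", "b"]

    def models(formula):
        """All truth assignments (frozensets of true atoms) satisfying `formula`,
        given as a Python predicate over a set of true atoms."""
        result = []
        for r in range(len(ATOMS) + 1):
            for true_atoms in combinations(ATOMS, r):
                m = frozenset(true_atoms)
                if formula(m):
                    result.append(m)
        return result

    def pma_update(kb_models, update_formula):
        """For each KB model, keep update models with subset-minimal change sets."""
        result = set()
        update_models = models(update_formula)
        for m in kb_models:
            diffs = {u: m.symmetric_difference(u) for u in update_models}
            minimal = [u for u, d in diffs.items()
                       if not any(other < d for other in diffs.values())]
            result.update(minimal)
        return result

    # Toy usage: KB = {a, not b}; update with the disjunction (a or b).
    kb = models(lambda m: "a" in m and "b" not in m)
    print(pma_update(kb, lambda m: "a" in m or "b" in m))
    # Only the original model {a} survives, illustrating how strict minimal
    # change handles disjunctive updates conservatively.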

  3. Knowledge Based Systems and Metacognition in Radar

    NASA Astrophysics Data System (ADS)

    Capraro, Gerard T.; Wicks, Michael C.

    An airborne ground looking radar sensor's performance may be enhanced by selecting algorithms adaptively as the environment changes. A short description of an airborne intelligent radar system (AIRS) is presented with a description of the knowledge based filter and detection portions. A second level of artificial intelligence (AI) processing is presented that monitors, tests, and learns how to improve and control the first level. This approach is based upon metacognition, a way forward for developing knowledge based systems.

  4. Methodology for testing and validating knowledge bases

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  5. Distributed, cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    Some current research in the development and application of distributed, cooperating knowledge-based systems technology is addressed. The focus of the current research is the spacecraft ground operations environment. The underlying hypothesis is that, because of the increasing size, complexity, and cost of planned systems, conventional procedural approaches to the architecture of automated systems will give way to a more comprehensive knowledge-based approach. A hallmark of these future systems will be the integration of multiple knowledge-based agents which understand the operational goals of the system and cooperate with each other and the humans in the loop to attain the goals. The current work includes the development of a reference model for knowledge-base management, the development of a formal model of cooperating knowledge-based agents, the use of a testbed for prototyping and evaluating various knowledge-based concepts, and beginning work on the establishment of an object-oriented model of an intelligent end-to-end (spacecraft to user) system. An introductory discussion of these activities is presented, the major concepts and principles being investigated are highlighted, and their potential use in other application domains is indicated.

  6. Patient Dependency Knowledge-Based Systems.

    PubMed

    Soliman, F

    1998-10-01

    The ability of Patient Dependency Systems to provide information for staffing decisions and budgetary development has been demonstrated. In addition, they have become powerful tools in modern hospital management. This growing interest in Patient Dependency Systems has renewed calls for their automation. As advances in Information Technology and in particular Knowledge-Based Engineering reach new heights, hospitals can no longer afford to ignore the potential benefits obtainable from developing and implementing Patient Dependency Knowledge-Based Systems. Experience has shown that the vast majority of decisions and rules used in the Patient Dependency method are too complex to capture in the form of a traditional programming language. Furthermore, the conventional Patient Dependency Information System automates the simple and rigid bookkeeping functions. On the other hand Knowledge-Based Systems automate complex decision making and judgmental processes and therefore are the appropriate technology for automating the Patient Dependency method. In this paper a new technique to automate Patient Dependency Systems using knowledge processing is presented. In this approach all Patient Dependency factors have been translated into a set of Decision Rules suitable for use in a Knowledge-Based System. The system is capable of providing the decision-maker with a number of scenarios and their possible outcomes. This paper also presents the development of Patient Dependency Knowledge-Based Systems, which can be used in allocating and evaluating resources and nursing staff in hospitals on the basis of patients' needs. PMID:9809275

  7. Decision support using causation knowledge base

    SciTech Connect

    Nakamura, K.; Iwai, S.; Sawaragi, T.

    1982-11-01

    A decision support system using a knowledge base of documentary data is presented. Causal assertions in documents are extracted and organized into cognitive maps, which are networks of causal relations, by the methodology of documentary coding. The knowledge base is constructed by joining the cognitive maps of several documents concerned with a complex societal problem. The knowledge base is an integration of the several expert views described in the documents; although it is concerned only with the causal structure of the problem, it includes both overall and detailed information about the problem. Decisionmakers concerned with the problem interactively retrieve relevant information from the knowledge base in the process of decisionmaking and form their overall and detailed understanding of the complex problem based on the expertise stored in the knowledge base. Three retrieval modes are proposed according to the types of the decisionmakers' requests: 1) skeleton maps indicate the overall causal structure of the problem, 2) hierarchical graphs give detailed information about parts of the causal structure, and 3) sources of causal relations are presented when necessary, for example when the decisionmaker wants to browse the causal assertions in the documents. 10 references.
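
    A minimal sketch of such a causal-assertion knowledge base follows: causal relations extracted from documents become edges of a directed graph (a cognitive map), each edge keeping its sign and source document, and retrieval returns the causal chains linking a policy variable to an outcome. The example relations, signs, and document identifiers are hypothetical.

    import networkx as nx

    cmap = nx.DiGraph()
    # add_edge(cause, effect, sign=+1 promoting / -1 inhibiting, source=document id)
    cmap.add_edge("fuel price", "car use", sign=-1, source="doc_12")
    cmap.add_edge("car use", "air pollution", sign=+1, source="doc_12")
    cmap.add_edge("public transport investment", "car use", sign=-1, source="doc_07")
    cmap.add_edge("air pollution", "respiratory illness", sign=+1, source="doc_31")

    def causal_chains(graph, cause, effect):
        """Yield each causal chain with its net sign (product of edge signs) and the
        supporting documents, covering both overview and detailed retrieval."""
        for path in nx.all_simple_paths(graph, cause, effect):
            edges = list(zip(path, path[1:]))
            sign = 1
            for u, v in edges:
                sign *= graph[u][v]["sign"]
            sources = {graph[u][v]["source"] for u, v in edges}
            yield path, sign, sorted(sources)

    for chain in causal_chains(cmap, "fuel price", "respiratory illness"):
        print(chain)   # (['fuel price', 'car use', ...], -1, ['doc_12', 'doc_31'])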

  8. Knowledge-based diagnosis for aerospace systems

    NASA Technical Reports Server (NTRS)

    Atkinson, David J.

    1988-01-01

    The need for automated diagnosis in aerospace systems and the approach of using knowledge-based systems are examined. Research issues in knowledge-based diagnosis which are important for aerospace applications are treated along with a review of recent relevant research developments in Artificial Intelligence. The design and operation of some existing knowledge-based diagnosis systems are described. The systems described and compared include the LES expert system for liquid oxygen loading at NASA Kennedy Space Center, the FAITH diagnosis system developed at the Jet Propulsion Laboratory, the PES procedural expert system developed at SRI International, the CSRL approach developed at Ohio State University, the StarPlan system developed by Ford Aerospace, the IDM integrated diagnostic model, and the DRAPhys diagnostic system developed at NASA Langley Research Center.

  9. Knowledge-based Autonomous Test Engineer (KATE)

    NASA Technical Reports Server (NTRS)

    Parrish, Carrie L.; Brown, Barbara L.

    1991-01-01

    Mathematical models of system components have long been used to allow simulators to predict system behavior to various stimuli. Recent efforts to monitor, diagnose, and control real-time systems using component models have experienced similar success. NASA Kennedy is continuing the development of a tool for implementing real-time knowledge-based diagnostic and control systems called KATE (Knowledge based Autonomous Test Engineer). KATE is a model-based reasoning shell designed to provide autonomous control, monitoring, fault detection, and diagnostics for complex engineering systems by applying its reasoning techniques to an exchangeable quantitative model describing the structure and function of the various system components and their systemic behavior.

  10. Knowledge-based commodity distribution planning

    NASA Technical Reports Server (NTRS)

    Saks, Victor; Johnson, Ivan

    1994-01-01

    This paper presents an overview of a Decision Support System (DSS) that incorporates Knowledge-Based (KB) and commercial off the shelf (COTS) technology components. The Knowledge-Based Logistics Planning Shell (KBLPS) is a state-of-the-art DSS with an interactive map-oriented graphics user interface and powerful underlying planning algorithms. KBLPS was designed and implemented to support skilled Army logisticians to prepare and evaluate logistics plans rapidly, in order to support corps-level battle scenarios. KBLPS represents a substantial advance in graphical interactive planning tools, with the inclusion of intelligent planning algorithms that provide a powerful adjunct to the planning skills of commodity distribution planners.

  11. Knowledge-based flow field zoning

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automating flow field zoning in two dimensions is an important step toward easing the three-dimensional grid generation bottleneck in computational fluid dynamics. A knowledge-based approach works well, but certain aspects of flow field zoning make the use of such an approach challenging. A knowledge-based flow field zoner, called EZGrid, was implemented and tested on representative two-dimensional aerodynamic configurations. Results are shown which illustrate the way in which EZGrid incorporates the effects of physics, shape description, position, and user bias in flow field zoning.

  12. Knowledge-Based Learning: Integration of Deductive and Inductive Learning for Knowledge Base Completion.

    ERIC Educational Resources Information Center

    Whitehall, Bradley Lane

    In constructing a knowledge-based system, the knowledge engineer must convert rules of thumb provided by the domain expert and previously solved examples into a working system. Research in machine learning has produced algorithms that create rules for knowledge-based systems, but these algorithms require either many examples or a complete domain…

  13. Ethics, Inclusiveness, and the UCEA Knowledge Base.

    ERIC Educational Resources Information Center

    Strike, Kenneth A.

    1995-01-01

    Accepts most of Bull and McCarthy's rejection of the ethical boundary thesis in this same "EAQ" issue. Reinterprets their argument, using a three-part model of administrative knowledge. Any project for constructing an educational administration knowledge base is suspect, since little "pure" empirical and instrumental knowledge will be confirmed by…

  14. The adverse outcome pathway knowledge base

    EPA Science Inventory

    The rapid advancement of the Adverse Outcome Pathway (AOP) framework has been paralleled by the development of tools to store, analyse, and explore AOPs. The AOP Knowledge Base (AOP-KB) project has brought three independently developed platforms (Effectopedia, AOP-Wiki, and AOP-X...

  15. The Knowledge Bases of the Expert Teacher.

    ERIC Educational Resources Information Center

    Turner-Bisset, Rosie

    1999-01-01

    Presents a model for knowledge bases for teaching that will act as a mental map for understanding the complexity of teachers' professional knowledge. Describes the sources and evolution of the model, explains how the model functions in practice, and provides an illustration using an example of teaching in history. (CMK)

  16. Improving the Knowledge Base in Teacher Education.

    ERIC Educational Resources Information Center

    Rockler, Michael J.

    Education in the United States for most of the last 50 years has built its knowledge base on a single dominating foundation--behavioral psychology. This paper analyzes the history of behaviorism. Syntheses are presented of the theories of Ivan P. Pavlov, J. B. Watson, and B. F. Skinner, all of whom contributed to the body of works on behaviorism.…

  17. Knowledge-based machine indexing from natural language text: Knowledge base design, development, and maintenance

    NASA Technical Reports Server (NTRS)

    Genuardi, Michael T.

    1993-01-01

    One strategy for machine-aided indexing (MAI) is to provide a concept-level analysis of the textual elements of documents or document abstracts. In such systems, natural-language phrases are analyzed in order to identify and classify concepts related to a particular subject domain. The overall performance of these MAI systems is largely dependent on the quality and comprehensiveness of their knowledge bases. These knowledge bases function to (1) define the relations between a controlled indexing vocabulary and natural language expressions; (2) provide a simple mechanism for disambiguation and the determination of relevancy; and (3) allow the extension of concept-hierarchical structure to all elements of the knowledge file. After a brief description of the NASA Machine-Aided Indexing system, concerns related to the development and maintenance of MAI knowledge bases are discussed. Particular emphasis is given to statistically-based text analysis tools designed to aid the knowledge base developer. One such tool, the Knowledge Base Building (KBB) program, presents the domain expert with a well-filtered list of synonyms and conceptually-related phrases for each thesaurus concept. Another tool, the Knowledge Base Maintenance (KBM) program, functions to identify areas of the knowledge base affected by changes in the conceptual domain (for example, the addition of a new thesaurus term). An alternate use of the KBM as an aid in thesaurus construction is also discussed.
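    As a small, hypothetical illustration of the knowledge-base role described above, the sketch below maps natural-language phrases to controlled indexing terms through a synonym table; the phrases and concepts are invented and are not drawn from the NASA thesaurus or the KBB/KBM tools.

        # Hypothetical synonym knowledge base: natural-language phrase -> controlled concept.
        KNOWLEDGE_BASE = {
            "liquid oxygen": "OXYGEN, LIQUID",
            "lox": "OXYGEN, LIQUID",
            "grid generation": "COMPUTATIONAL GRIDS",
            "flow field": "FLOW DISTRIBUTION",
        }

        def index_text(text):
            """Return controlled indexing terms whose synonyms occur in the text."""
            lowered = text.lower()
            return sorted({concept for phrase, concept in KNOWLEDGE_BASE.items()
                           if phrase in lowered})

        print(index_text("LOX loading procedures and flow field analysis"))
        # ['FLOW DISTRIBUTION', 'OXYGEN, LIQUID']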

  18. Bridging the gap: simulations meet knowledge bases

    NASA Astrophysics Data System (ADS)

    King, Gary W.; Morrison, Clayton T.; Westbrook, David L.; Cohen, Paul R.

    2003-09-01

    Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Action (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge engineers, making them an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.

  19. The importance of knowledge-based technology.

    PubMed

    Cipriano, Pamela F

    2012-01-01

    Nurse executives are responsible for a workforce that can provide safer and more efficient care in a complex sociotechnical environment. National quality priorities rely on technologies to provide data collection, share information, and leverage analytic capabilities to interpret findings and inform approaches to care that will achieve better outcomes. As a key steward for quality, the nurse executive exercises leadership to provide the infrastructure to build and manage nursing knowledge and instill accountability for following evidence-based practices. These actions contribute to a learning health system where new knowledge is captured as a by-product of care delivery enabled by knowledge-based electronic systems. The learning health system also relies on rigorous scientific evidence embedded into practice at the point of care. The nurse executive optimizes use of knowledge-based technologies, integrated throughout the organization, that have the capacity to help transform health care. PMID:22407206

  20. Knowledge-based system for computer security

    SciTech Connect

    Hunteman, W.J.

    1988-01-01

    The rapid expansion of computer security information and technology has provided little support for the security officer to identify and implement the safeguards needed to secure a computing system. The Department of Energy Center for Computer Security is developing a knowledge-based computer security system to provide expert knowledge to the security officer. The system is policy-based and incorporates a comprehensive list of system attack scenarios and safeguards that implement the required policy while defending against the attacks. 10 figs.

  1. Clips as a knowledge based language

    NASA Technical Reports Server (NTRS)

    Harrington, James B.

    1987-01-01

    CLIPS is a language for writing expert systems applications on a personal or small computer. Here, the CLIPS programming language is described and compared to three other artificial intelligence (AI) languages (LISP, Prolog, and OPS5) with regard to the processing they provide for the implementation of a knowledge based system (KBS). A discussion is given on how CLIPS would be used in a control system.
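    For readers unfamiliar with rule-based shells such as CLIPS, the sketch below shows, in Python rather than CLIPS syntax, the forward-chaining pattern such a shell applies in a control setting: rules fire when their conditions match the fact base and assert new facts. The facts and rules are invented for illustration only.

        # Toy forward-chaining engine illustrating the rule/fact processing that a
        # shell like CLIPS provides (illustrative only; not CLIPS syntax or semantics).
        facts = {"tank_pressure_high", "vent_valve_closed"}

        rules = [
            # (name, condition facts, fact asserted when the rule fires)
            ("open-vent",  {"tank_pressure_high", "vent_valve_closed"}, "command_open_vent"),
            ("log-action", {"command_open_vent"}, "event_logged"),
        ]

        changed = True
        while changed:              # keep firing until no rule adds a new fact
            changed = False
            for name, conditions, conclusion in rules:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    print(f"rule {name} fired -> {conclusion}")
                    changed = True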

  2. Presentation planning using an integrated knowledge base

    NASA Technical Reports Server (NTRS)

    Arens, Yigal; Miller, Lawrence; Sondheimer, Norman

    1988-01-01

    A description is given of user interface research aimed at bringing together multiple input and output modes in a way that handles mixed mode input (commands, menus, forms, natural language), interacts with a diverse collection of underlying software utilities in a uniform way, and presents the results through a combination of output modes including natural language text, maps, charts and graphs. The system, Integrated Interfaces, derives much of its ability to interact uniformly with the user and the underlying services and to build its presentations, from the information present in a central knowledge base. This knowledge base integrates models of the application domain (Navy ships in the Pacific region, in the current demonstration version); the structure of visual displays and their graphical features; the underlying services (data bases and expert systems); and interface functions. The emphasis is on a presentation planner that uses the knowledge base to produce multi-modal output. There has been a flurry of recent work in user interface management systems. (Several recent examples are listed in the references). Existing work is characterized by an attempt to relieve the software designer of the burden of handcrafting an interface for each application. The work has generally focused on intelligently handling input. This paper deals with the other end of the pipeline - presentations.

  3. Satellite Contamination and Materials Outgassing Knowledge base

    NASA Technical Reports Server (NTRS)

    Minor, Jody L.; Kauffman, William J. (Technical Monitor)

    2001-01-01

    Satellite contamination continues to be a design problem that engineers must take into account when developing new satellites. To help with this issue, NASA's Space Environments and Effects (SEE) Program funded the development of the Satellite Contamination and Materials Outgassing Knowledge base. This engineering tool brings together in one location information about the outgassing properties of aerospace materials based upon ground-testing data, the effects of outgassing that have been observed during flight, and measurements of the contamination environment by on-orbit instruments. The knowledge base contains information using the ASTM Standard E-1559 and also consolidates data from missions using quartz-crystal microbalances (QCMs). The data contained in the knowledge base were shared with NASA by government agencies and industry in the US, as well as by international space agencies. The term 'knowledgebase' was used because so much information and capability was brought together in one comprehensive engineering design tool. It is the SEE Program's intent to continually add additional material contamination data as it becomes available - creating a dynamic tool whose value to the user is ever increasing. The SEE Program firmly believes that NASA, and ultimately the entire contamination user community, will greatly benefit from this new engineering tool and highly encourages the community to not only use the tool but add data to it as well.

  4. Knowledge-Based Query Construction Using the CDSS Knowledge Base for Efficient Evidence Retrieval.

    PubMed

    Afzal, Muhammad; Hussain, Maqbool; Ali, Taqdir; Hussain, Jamil; Khan, Wajahat Ali; Lee, Sungyoung; Kang, Byeong Ho

    2015-01-01

    Finding appropriate evidence to support clinical practices is always challenging, and the construction of a query to retrieve such evidence is a fundamental step. Typically, evidence is found using manual or semi-automatic methods, which are time-consuming and sometimes make it difficult to construct knowledge-based complex queries. To overcome the difficulty in constructing knowledge-based complex queries, we utilized the knowledge base (KB) of the clinical decision support system (CDSS), which has the potential to provide sufficient contextual information. To automatically construct knowledge-based complex queries, we designed methods to parse the rule structure in the KB of the CDSS in order to determine an executable path and to extract terms by parsing the control structures and logical connectives used in the logic. The automatically constructed knowledge-based complex queries were executed on the PubMed search service to evaluate the resulting reduction in retrieved citations with high relevance. The average number of citations was reduced from 56,249 to 330 citations with the knowledge-based query construction approach, and relevance increased from 1 term to 6 terms on average. The ability to automatically retrieve relevant evidence maximizes efficiency for clinicians in terms of time, based on feedback collected from clinicians. This approach is generally useful in evidence-based medicine, especially in ambient assisted living environments where automation is highly important. PMID:26343669
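    A minimal sketch of the general idea of turning CDSS rule content into a literature query: terms are taken from a rule's conditions and recommendation and joined with Boolean connectives into a PubMed-style query string. The rule structure, terms, and threshold handling below are hypothetical and are not drawn from the authors' knowledge base or parser.

        # Hypothetical CDSS rule: IF conditions THEN recommendation.
        rule = {
            "conditions": ["type 2 diabetes", "HbA1c > 7%"],
            "recommendation": "metformin",
        }

        def build_query(rule):
            """Join rule terms into a Boolean query string for a literature search."""
            terms = [c.split(">")[0].strip() for c in rule["conditions"]]  # drop numeric thresholds
            terms.append(rule["recommendation"])
            return " AND ".join(f'"{t}"' for t in terms)

        print(build_query(rule))
        # "type 2 diabetes" AND "HbA1c" AND "metformin"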

  5. Knowledge-Based Query Construction Using the CDSS Knowledge Base for Efficient Evidence Retrieval

    PubMed Central

    Afzal, Muhammad; Hussain, Maqbool; Ali, Taqdir; Hussain, Jamil; Khan, Wajahat Ali; Lee, Sungyoung; Kang, Byeong Ho

    2015-01-01

    Finding appropriate evidence to support clinical practices is always challenging, and the construction of a query to retrieve such evidence is a fundamental step. Typically, evidence is found using manual or semi-automatic methods, which are time-consuming and sometimes make it difficult to construct knowledge-based complex queries. To overcome the difficulty in constructing knowledge-based complex queries, we utilized the knowledge base (KB) of the clinical decision support system (CDSS), which has the potential to provide sufficient contextual information. To automatically construct knowledge-based complex queries, we designed methods to parse the rule structure in the KB of the CDSS in order to determine an executable path and to extract terms by parsing the control structures and logical connectives used in the logic. The automatically constructed knowledge-based complex queries were executed on the PubMed search service to evaluate the resulting reduction in retrieved citations with high relevance. The average number of citations was reduced from 56,249 to 330 citations with the knowledge-based query construction approach, and relevance increased from 1 term to 6 terms on average. The ability to automatically retrieve relevant evidence maximizes efficiency for clinicians in terms of time, based on feedback collected from clinicians. This approach is generally useful in evidence-based medicine, especially in ambient assisted living environments where automation is highly important. PMID:26343669

  6. Empirical Analysis and Refinement of Expert System Knowledge Bases

    PubMed Central

    Weiss, Sholom M.; Politakis, Peter; Ginsberg, Allen

    1986-01-01

    Recent progress in knowledge base refinement for expert systems is reviewed. Knowledge base refinement is characterized by the constrained modification of rule-components in an existing knowledge base. The goals are to localize specific weaknesses in a knowledge base and to improve an expert system's performance. Systems that automate some aspects of knowledge base refinement can have a significant impact on the related problems of knowledge base acquisition, maintenance, verification, and learning from experience. The SEEK empirical analysis and refinement system is reviewed and its successor system, SEEK2, is introduced. Important areas for future research in knowledge base refinement are described.

  7. Formative Assessment Pre-Test to Identify College Students' Prior Knowledge, Misconceptions and Learning Difficulties in Biology

    ERIC Educational Resources Information Center

    Lazarowitz, Reuven; Lieb, Carl

    2006-01-01

    A formative assessment pretest was administered to undergraduate students at the beginning of a science course in order to find out their prior knowledge, misconceptions and learning difficulties on the topic of the human respiratory system and energy issues. Those findings could provide their instructors with the valuable information required in…

  8. Developing a kidney and urinary pathway knowledge base

    PubMed Central

    2011-01-01

    Background Chronic renal disease is a global health problem. The identification of suitable biomarkers could facilitate early detection and diagnosis and allow better understanding of the underlying pathology. One of the challenges in meeting this goal is the necessary integration of experimental results from multiple biological levels for further analysis by data mining. Data integration in the life sciences is still a struggle, and many groups are looking to the benefits promised by the Semantic Web for data integration. Results We present a Semantic Web approach to developing a knowledge base that integrates data from high-throughput experiments on kidney and urine. A specialised KUP ontology is used to tie the various layers together, whilst background knowledge from external databases is incorporated by conversion into RDF. Using SPARQL as a query mechanism, we are able to query for proteins expressed in urine and place these back into the context of genes expressed in regions of the kidney. Conclusions The KUPKB gives KUP biologists the means to ask queries across many resources in order to aggregate knowledge that is necessary for answering biological questions. The Semantic Web technologies we use, together with the background knowledge from the domain’s ontologies, allow both rapid conversion and integration of this knowledge base. The KUPKB is still relatively small, but questions remain about scalability, maintenance and availability of the knowledge itself. Availability The KUPKB may be accessed via http://www.e-lico.eu/kupkb. PMID:21624162
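    Questions of the kind described above can be posed programmatically against a SPARQL endpoint. The sketch below uses the SPARQLWrapper Python library with a placeholder endpoint URL and invented predicate names, since the actual KUPKB endpoint address and vocabulary are not reproduced here; a reachable endpoint is assumed.

        from SPARQLWrapper import SPARQLWrapper, JSON

        # Placeholder endpoint and vocabulary; the real KUPKB IRIs will differ.
        endpoint = SPARQLWrapper("http://example.org/kupkb/sparql")
        endpoint.setQuery("""
            PREFIX kup: <http://example.org/kupkb/vocab#>
            SELECT ?protein ?region WHERE {
                ?protein kup:expressedIn kup:Urine .
                ?gene    kup:encodes     ?protein ;
                         kup:expressedIn ?region .
                ?region  kup:partOf      kup:Kidney .
            } LIMIT 10
        """)
        endpoint.setReturnFormat(JSON)
        results = endpoint.query().convert()

        for row in results["results"]["bindings"]:
            print(row["protein"]["value"], "->", row["region"]["value"])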

  9. PharmGKB: the Pharmacogenomics Knowledge Base.

    PubMed

    Thorn, Caroline F; Klein, Teri E; Altman, Russ B

    2013-01-01

    The Pharmacogenomics Knowledge Base, PharmGKB, is an interactive tool for researchers investigating how genetic variation affects drug response. The PharmGKB Web site, http://www.pharmgkb.org , displays genotype, molecular, and clinical knowledge integrated into pathway representations and Very Important Pharmacogene (VIP) summaries with links to additional external resources. Users can search and browse the knowledgebase by genes, variants, drugs, diseases, and pathways. Registration is free to the entire research community, but subject to agreement to use for research purposes only and not to redistribute. Registered users can access and download data to aid in the design of future pharmacogenetics and pharmacogenomics studies. PMID:23824865

  10. Knowledge Based Understanding of Radiology Text

    PubMed Central

    Ranum, David L.

    1988-01-01

    A data acquisition tool which will extract pertinent diagnostic information from radiology reports has been designed and implemented. Pertinent diagnostic information is defined as that clinical data which is used by the HELP medical expert system. The program uses a memory based semantic parsing technique to “understand” the text. Moreover, the memory structures and lexicon necessary to perform this action are automatically generated from the diagnostic knowledge base by using a special purpose compiler. The result is a system where data extraction from free text is directed by an expert system whose goal is diagnosis.

  11. Knowledge-based systems in Japan

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward; Engelmore, Robert S.; Friedland, Peter E.; Johnson, Bruce B.; Nii, H. Penny; Schorr, Herbert; Shrobe, Howard

    1994-01-01

    This report summarizes a study of the state-of-the-art in knowledge-based systems technology in Japan, organized by the Japanese Technology Evaluation Center (JTEC) under the sponsorship of the National Science Foundation and the Advanced Research Projects Agency. The panel visited 19 Japanese sites in March 1992. Based on these site visits plus other interactions with Japanese organizations, both before and after the site visits, the panel prepared a draft final report. JTEC sent the draft to the host organizations for their review. The final report was published in May 1993.

  12. PharmGKB: The Pharmacogenomics Knowledge Base

    PubMed Central

    Thorn, Caroline F.; Klein, Teri E.; Altman, Russ B.

    2014-01-01

    The Pharmacogenomics Knowledge Base, PharmGKB, is an interactive tool for researchers investigating how genetic variation affects drug response. The PharmGKB Web site, http://www.pharmgkb.org, displays genotype, molecular, and clinical knowledge integrated into pathway representations and Very Important Pharmacogene (VIP) summaries with links to additional external resources. Users can search and browse the knowledgebase by genes, variants, drugs, diseases, and pathways. Registration is free to the entire research community, but subject to agreement to use for research purposes only and not to redistribute. Registered users can access and download data to aid in the design of future pharmacogenetics and pharmacogenomics studies. PMID:23824865

  13. Knowledge-based simulation for aerospace systems

    NASA Technical Reports Server (NTRS)

    Will, Ralph W.; Sliwa, Nancy E.; Harrison, F. Wallace, Jr.

    1988-01-01

    Knowledge-based techniques, which offer many features that are desirable in the simulation and development of aerospace vehicle operations, exhibit many similarities to traditional simulation packages. The eventual solution of these systems' current symbolic processing/numeric processing interface problem will lead to continuous and discrete-event simulation capabilities in a single language, such as TS-PROLOG. Qualitative, totally-symbolic simulation methods are noted to possess several intrinsic characteristics that are especially revelatory of the system being simulated, and capable of ensuring that all possible behaviors are considered.

  14. Removal of Review and Reclassification Procedures for Biological Products Licensed Prior to July 1, 1972. Final rule.

    PubMed

    2016-02-12

    The Food and Drug Administration (FDA, the Agency, or we) is removing two regulations that prescribe procedures for FDA's review and classification of biological products licensed before July 1, 1972. FDA is taking this action because the two regulations are obsolete and no longer necessary in light of other statutory and regulatory authorities established since 1972, which allow FDA to evaluate and monitor the safety and effectiveness of all biological products. In addition, other statutory and regulatory authorities authorize FDA to revoke a license for biological products because they are not safe and effective, or are misbranded. FDA is taking this action as part of its retrospective review of its regulations to promote improvement and innovation. PMID:26878738

  15. Knowledge-based fragment binding prediction.

    PubMed

    Tang, Grace W; Altman, Russ B

    2014-04-01

    Target-based drug discovery must assess many drug-like compounds for potential activity. Focusing on low-molecular-weight compounds (fragments) can dramatically reduce the chemical search space. However, approaches for determining protein-fragment interactions have limitations. Experimental assays are time-consuming, expensive, and not always applicable. At the same time, computational approaches using physics-based methods have limited accuracy. With increasing high-resolution structural data for protein-ligand complexes, there is now an opportunity for data-driven approaches to fragment binding prediction. We present FragFEATURE, a machine learning approach to predict small molecule fragments preferred by a target protein structure. We first create a knowledge base of protein structural environments annotated with the small molecule substructures they bind. These substructures have low-molecular weight and serve as a proxy for fragments. FragFEATURE then compares the structural environments within a target protein to those in the knowledge base to retrieve statistically preferred fragments. It merges information across diverse ligands with shared substructures to generate predictions. Our results demonstrate FragFEATURE's ability to rediscover fragments corresponding to the ligand bound with 74% precision and 82% recall on average. For many protein targets, it identifies high scoring fragments that are substructures of known inhibitors. FragFEATURE thus predicts fragments that can serve as inputs to fragment-based drug design or serve as refinement criteria for creating target-specific compound libraries for experimental or computational screening. PMID:24762971
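    A much-reduced sketch of the knowledge-base lookup idea: target protein micro-environments are described by feature vectors, compared with stored annotated environments, and the fragments attached to the most similar matches are tallied. The feature vectors, fragment names, and cosine-similarity scoring are fabricated for illustration; FragFEATURE's actual descriptors and statistics are richer.

        import math

        # Hypothetical knowledge base: (environment feature vector, bound fragment).
        knowledge_base = [
            ((0.9, 0.1, 0.3), "benzamidine"),
            ((0.8, 0.2, 0.4), "benzamidine"),
            ((0.1, 0.9, 0.7), "ribose"),
        ]

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

        def preferred_fragments(query_env, k=2):
            """Rank fragments bound by the k most similar stored environments."""
            hits = sorted(knowledge_base, key=lambda kb: cosine(query_env, kb[0]), reverse=True)[:k]
            counts = {}
            for _, fragment in hits:
                counts[fragment] = counts.get(fragment, 0) + 1
            return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

        print(preferred_fragments((0.85, 0.15, 0.35)))   # [('benzamidine', 2)]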

  16. A knowledge based software engineering environment testbed

    NASA Technical Reports Server (NTRS)

    Gill, C.; Reedy, A.; Baker, L.

    1985-01-01

    The Carnegie Group Incorporated and Boeing Computer Services Company are developing a testbed which will provide a framework for integrating conventional software engineering tools with Artificial Intelligence (AI) tools to promote automation and productivity. The emphasis is on the transfer of AI technology to the software development process. Experiments relate to AI issues such as scaling up, inference, and knowledge representation. In its first year, the project has created a model of software development by representing software activities; developed a module representation formalism to specify the behavior and structure of software objects; integrated the model with the formalism to identify shared representation and inheritance mechanisms; demonstrated object programming by writing procedures and applying them to software objects; used data-directed and goal-directed reasoning to, respectively, infer the cause of bugs and evaluate the appropriateness of a configuration; and demonstrated knowledge-based graphics. Future plans include introduction of knowledge-based systems for rapid prototyping or rescheduling; natural language interfaces; blackboard architecture; and distributed processing.

  17. Coastal habitats of the Elwha River, Washington- Biological and physical patterns and processes prior to dam removal

    USGS Publications Warehouse

    Duda, Jeffrey J.; Warrick, Jonathan A.; Magirl, Christopher S.

    2011-01-01

    Together, these different scientific perspectives form a basis for understanding the Elwha River ecosystem, an environment that has and will undergo substantial change. A century of change began with the start of dam construction in 1910; additional major change will result from dam removal scheduled to begin in September 2011. This report provides a scientific snapshot of the lower Elwha River, its estuary, and adjacent nearshore ecosystems prior to dam removal that can be used to evaluate the responses and dynamics of various system components following dam removal.

  18. Modeling Guru: Knowledge Base for NASA Modelers

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  19. Coastal and lower Elwha River, Washington, prior to dam removal--history, status, and defining characteristics: Chapter 1 in Coastal habitats of the Elwha River, Washington--biological and physical patterns and processes prior to dam removal

    USGS Publications Warehouse

    Duda, Jeffrey J.; Warrick, Jonathan A.; Magirl, Christopher S.

    2011-01-01

    Characterizing the physical and biological characteristics of the lower Elwha River, its estuary, and adjacent nearshore habitats prior to dam removal is essential to monitor changes to these areas during and following the historic dam-removal project set to begin in September 2011. Based on the size of the two hydroelectric projects and the amount of sediment that will be released, the Elwha River in Washington State will be home to the largest river restoration through dam removal attempted in the United States. Built in 1912 and 1927, respectively, the Elwha and Glines Canyon Dams have altered key physical and biological characteristics of the Elwha River. Once abundant salmon populations, consisting of all five species of Pacific salmon, are restricted to the lower 7.8 river kilometers downstream of Elwha Dam and are currently in low numbers. Dam removal will reopen access to more than 140 km of mainstem, flood plain, and tributary habitat, most of which is protected within Olympic National Park. The high capture rate of river-borne sediments by the two reservoirs has changed the geomorphology of the riverbed downstream of the dams. Mobilization and downstream transport of these accumulated reservoir sediments during and following dam removal will significantly change downstream river reaches, the estuary complex, and the nearshore environment. To introduce the more detailed studies that follow in this report, we summarize many of the key aspects of the Elwha River ecosystem including a regional and historical context for this unprecedented project.

  20. Is pharmacy a knowledge-based profession?

    PubMed

    Waterfield, Jon

    2010-04-12

    An increasingly important question for the pharmacy educator is the relationship between pharmacy knowledge and professionalism. There is a substantial body of literature on the theory of knowledge and it is useful to apply this to the profession of pharmacy. This review examines the types of knowledge and skill used by the pharmacist, with particular reference to tacit knowledge which cannot be codified. This leads into a discussion of practice-based pharmacy knowledge and the link between pharmaceutical science and practice. The final section of the paper considers the challenge of making knowledge work in practice. This includes a discussion of the production of knowledge within the context of application. The theoretical question posed by this review, "Is pharmacy a knowledge-based profession?" highlights challenging areas of debate for the pharmacy educator. PMID:20498743

  1. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  2. NASDA knowledge-based network planning system

    NASA Technical Reports Server (NTRS)

    Yamaya, K.; Fujiwara, M.; Kosugi, S.; Yambe, M.; Ohmori, M.

    1993-01-01

    One of the SODS (space operation and data system) sub-systems, NP (network planning), was the first expert system used by NASDA (National Space Development Agency of Japan) for tracking and control of satellites. The major responsibilities of the NP system are: first, the allocation of network and satellite control resources and, second, the generation of the network operation plan data (NOP) used in automated control of the stations and control center facilities. Until now, the first task of network resource scheduling was done by network operators. The NP system automatically generates schedules using its knowledge base, which contains information on satellite orbits, station availability, which computer is dedicated to which satellite, and how many stations must be available for a particular satellite pass or a certain time period. The NP system is introduced.
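    The resource-allocation step mentioned above can be caricatured as a constraint-checking loop: each requested satellite pass is assigned to a ground station only if the station has no overlapping booking. The satellites, stations, pass times, and greedy first-fit strategy below are hypothetical illustrations of the scheduling problem, not the NP system's actual algorithm.

        # Hypothetical pass requests: (satellite, start_hour, end_hour) within one day.
        requests = [("ETS-V", 1, 2), ("MOS-1", 1, 3), ("ETS-V", 5, 6)]
        stations = {"Katsuura": [], "Masuda": []}   # booked (start, end) intervals per station

        def overlaps(a, b):
            return a[0] < b[1] and b[0] < a[1]

        schedule = []
        for sat, start, end in requests:            # greedy first-fit allocation
            for name, booked in stations.items():
                if all(not overlaps((start, end), slot) for slot in booked):
                    booked.append((start, end))
                    schedule.append((sat, name, start, end))
                    break
            else:
                print(f"unschedulable pass: {sat} {start}-{end}")

        print(schedule)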

  3. Compilation for critically constrained knowledge bases

    SciTech Connect

    Schrag, R.

    1996-12-31

    We show that many "critically constrained" Random 3SAT knowledge bases (KBs) can be compiled into disjunctive normal form easily by using a variant of the "Davis-Putnam" proof procedure. From these compiled KBs we can answer all queries about entailment of conjunctive normal formulas, also easily - compared to a "brute-force" approach to approximate knowledge compilation into unit clauses for the same KBs. We exploit this fact to develop an aggressive hybrid approach which attempts to compile a KB exactly until a given resource limit is reached, then falls back to approximate compilation into unit clauses. The resulting approach handles all of the critically constrained Random 3SAT KBs with average savings of an order of magnitude over the brute-force approach.
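    The compilation idea can be illustrated with a small Davis-Putnam-style enumerator: unit propagation plus splitting yields the satisfying assignments of a CNF knowledge base, which together form a DNF representation. This is a textbook sketch of that family of procedures, not the authors' implementation, and the example knowledge base is invented.

        # Each clause is a set of literals; a positive int is a variable, negative its negation.
        def dpll_models(clauses, assignment=()):
            """Enumerate satisfying assignments (a crude DNF compilation) via DPLL."""
            clauses = list(clauses)
            while True:                                  # unit propagation
                units = [next(iter(c)) for c in clauses if len(c) == 1]
                if not units:
                    break
                lit = units[0]
                assignment = assignment + (lit,)
                clauses = [c - {-lit} for c in clauses if lit not in c]
            if any(len(c) == 0 for c in clauses):
                return                                   # conflict: dead branch
            if not clauses:
                yield assignment                         # all clauses satisfied
                return
            lit = next(iter(clauses[0]))                 # split on some literal
            yield from dpll_models([c - {-lit} for c in clauses if lit not in c], assignment + (lit,))
            yield from dpll_models([c - {lit} for c in clauses if -lit not in c], assignment + (-lit,))

        kb = [{1, 2}, {-1, 3}, {-2, -3}]                 # (x1 v x2) & (~x1 v x3) & (~x2 v ~x3)
        print(list(dpll_models(kb)))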

  4. Knowledge base rule partitioning design for CLIPS

    NASA Technical Reports Server (NTRS)

    Mainardi, Joseph D.; Szatkowski, G. P.

    1990-01-01

    This paper describes a knowledge base (KB) partitioning approach to solve the problem of real-time performance using the CLIPS AI shell when it contains large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The main objective of the Expert System advanced development project (ADP-2302) is to provide robust systems responding to new data frames at 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance without undue impact during load and unload cycles. The second objective is to produce highly reliable intelligent systems. This requires simple and automated approaches to the KB verification & validation task. Partitioning the KB reduces rule interaction complexity overall. Reduced interaction simplifies the V&V testing necessary by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading rules that directly apply for that condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests for real-time performance for specific machines under R/T operating systems have not been completed but are planned as part of the analysis process to validate the design.
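    The 'block' mechanism described above can be sketched as a registry of rule groups that are loaded into the active rule set when their phase applies and stripped out afterwards. The phases and rule names are hypothetical, and a real CLIPS implementation would load and unload constructs in the shell rather than manipulate Python sets.

        # Hypothetical rule blocks keyed by operational phase.
        RULE_BLOCKS = {
            "pre-launch":  ["check-tank-pressure", "verify-valve-positions"],
            "ascent":      ["monitor-engine-thrust", "watch-guidance-errors"],
            "post-flight": ["safe-ordnance", "archive-telemetry"],
        }

        active_rules = set()

        def enter_phase(phase):
            """Load only the block for the current phase, stripping all other rules."""
            active_rules.clear()
            active_rules.update(RULE_BLOCKS[phase])
            print(f"{phase}: {len(active_rules)} rules active -> {sorted(active_rules)}")

        enter_phase("pre-launch")
        enter_phase("ascent")     # pre-launch rules are unloaded before ascent rules load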

  5. Knowledge-based systems and NASA's software support environment

    NASA Technical Reports Server (NTRS)

    Dugan, Tim; Carmody, Cora; Lennington, Kent; Nelson, Bob

    1990-01-01

    A proposed role for knowledge-based systems within NASA's Software Support Environment (SSE) is described. The SSE is chartered to support all software development for the Space Station Freedom Program (SSFP). This includes support for development of knowledge-based systems and the integration of these systems with conventional software systems. In addition to the support of development of knowledge-based systems, various software development functions provided by the SSE will utilize knowledge-based systems technology.

  6. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium...

  7. Knowledge-based optical system design

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik

    1992-03-01

    This work is a new approach to the design of start optical systems and represents a new contribution of artificial intelligence techniques to the optical design field. A knowledge-based optical-systems design (KBOSD), based on artificial intelligence algorithms, first-order logic, knowledge representation, rules, and heuristics on lens design, is realized. This KBOSD is equipped with optical knowledge in the domain of centered dioptrical optical systems used at low aperture and small field angles. It generates centered dioptrical, on-axis and low-aperture optical systems, which are used as start systems for the subsequent optimization by existing lens design programs. This KBOSD produces monochromatic or polychromatic optical systems, such as singlet lens, doublet lens, triplet lens, reversed singlet lens, reversed doublet lens, reversed triplet lens, and telescopes. In the design of optical systems, the KBOSD takes into account many user constraints such as cost, resistance of the optical material (glass) to chemical, thermal, and mechanical effects, as well as optical quality requirements such as minimal aberrations and chromatic aberration correction. This KBOSD is developed in the programming language Prolog and has knowledge on optical design principles and optical properties. It is composed of more than 3000 clauses. The inference engine and interconnections in the cognitive world of optical systems are described. The system uses neither a lens library nor a lens data base; it is completely based on optical design knowledge.

  8. Knowledge-based approach to system integration

    NASA Technical Reports Server (NTRS)

    Blokland, W.; Krishnamurthy, C.; Biegl, C.; Sztipanovits, J.

    1988-01-01

    To solve complex problems one can often use the decomposition principle. However, a problem is seldom decomposable into completely independent subproblems. System integration deals with the problem of resolving the interdependencies and the integration of the subsolutions. A natural method of decomposition is the hierarchical one. High-level specifications are broken down into lower level specifications until they can be transformed into solutions relatively easily. By automating the hierarchical decomposition and solution generation, an integrated system is obtained in which the declaration of high-level specifications is enough to solve the problem. We offer a knowledge-based approach to integrate the development and building of control systems. The process modeling is supported by using graphic editors. The user selects and connects icons that represent subprocesses and might refer to prewritten programs. The graphical editor assists the user in selecting parameters for each subprocess and allows the testing of a specific configuration. Next, from the definitions created by the graphical editor, the actual control program is built. Fault-diagnosis routines are generated automatically as well. Since the user is not required to write program code and knowledge about the process is present in the development system, the user is not required to have expertise in many fields.

  9. Irrelevance Reasoning in Knowledge Based Systems

    NASA Technical Reports Server (NTRS)

    Levy, A. Y.

    1993-01-01

    This dissertation considers the problem of reasoning about irrelevance of knowledge in a principled and efficient manner. Specifically, it is concerned with two key problems: (1) developing algorithms for automatically deciding what parts of a knowledge base are irrelevant to a query and (2) the utility of relevance reasoning. The dissertation describes a novel tool, the query-tree, for reasoning about irrelevance. Based on the query-tree, we develop several algorithms for deciding what formulas are irrelevant to a query. Our general framework sheds new light on the problem of detecting independence of queries from updates. We present new results that significantly extend previous work in this area. The framework also provides a setting in which to investigate the connection between the notion of irrelevance and the creation of abstractions. We propose a new approach to research on reasoning with abstractions, in which we investigate the properties of an abstraction by considering the irrelevance claims on which it is based. We demonstrate the potential of the approach for the cases of abstraction of predicates and projection of predicate arguments. Finally, we describe an application of relevance reasoning to the domain of modeling physical devices.
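    One very simple way to decide that parts of a knowledge base are irrelevant to a query, in the spirit of (though far short of) the query-tree analysis described above, is reachability over a predicate dependency graph: rules whose head predicates cannot be reached from the query predicate are ignored. The rules and predicates below are invented for illustration.

        # Hypothetical Horn rules: (head predicate, body predicates).
        rules = [
            ("ancestor", ["parent", "ancestor"]),
            ("ancestor", ["parent"]),
            ("sibling",  ["parent"]),
            ("price",    ["catalog"]),          # unrelated to a genealogy query
        ]

        def relevant_rules(query_predicate):
            """Keep rules whose heads are reachable from the query via rule bodies."""
            needed, frontier = set(), {query_predicate}
            while frontier:
                pred = frontier.pop()
                needed.add(pred)
                for head, body in rules:
                    if head == pred:
                        frontier.update(p for p in body if p not in needed)
            return [(h, b) for h, b in rules if h in needed]

        print(relevant_rules("ancestor"))   # the 'sibling' and 'price' rules are dropped as irrelevant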

  10. Knowledge-based reusable software synthesis system

    NASA Technical Reports Server (NTRS)

    Donaldson, Cammie

    1989-01-01

    The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.

  11. An Ebola virus-centered knowledge base.

    PubMed

    Kamdar, Maulik R; Dumontier, Michel

    2015-01-01

    Ebola virus (EBOV), of the family Filoviridae, is a NIAID Category A lethal human pathogen. It is responsible for causing Ebola virus disease (EVD), a severe hemorrhagic fever with a cumulative death rate of 41% in the ongoing epidemic in West Africa. There is an ever-increasing need to consolidate and make available all the knowledge that we possess on EBOV, even if it is conflicting or incomplete. This would enable biomedical researchers to understand the molecular mechanisms underlying this disease and help develop tools for efficient diagnosis and effective treatment. In this article, we present our approach for the development of an Ebola virus-centered Knowledge Base (Ebola-KB) using Linked Data and Semantic Web Technologies. We retrieve and aggregate knowledge from several open data sources, web services and biomedical ontologies. This knowledge is transformed to RDF, linked to the Bio2RDF datasets and made available through a SPARQL 1.1 Endpoint. Ebola-KB can also be explored using an interactive Dashboard visualizing the different perspectives of this integrated knowledge. We showcase how different competency questions, asked by domain users researching the druggability of EBOV, can be formulated as SPARQL Queries or answered using the Ebola-KB Dashboard. PMID:26055098

  12. An Ebola virus-centered knowledge base

    PubMed Central

    Kamdar, Maulik R.; Dumontier, Michel

    2015-01-01

    Ebola virus (EBOV), of the family Filoviridae, is a NIAID Category A lethal human pathogen. It is responsible for causing Ebola virus disease (EVD), a severe hemorrhagic fever with a cumulative death rate of 41% in the ongoing epidemic in West Africa. There is an ever-increasing need to consolidate and make available all the knowledge that we possess on EBOV, even if it is conflicting or incomplete. This would enable biomedical researchers to understand the molecular mechanisms underlying this disease and help develop tools for efficient diagnosis and effective treatment. In this article, we present our approach for the development of an Ebola virus-centered Knowledge Base (Ebola-KB) using Linked Data and Semantic Web Technologies. We retrieve and aggregate knowledge from several open data sources, web services and biomedical ontologies. This knowledge is transformed to RDF, linked to the Bio2RDF datasets and made available through a SPARQL 1.1 Endpoint. Ebola-KB can also be explored using an interactive Dashboard visualizing the different perspectives of this integrated knowledge. We showcase how different competency questions, asked by domain users researching the druggability of EBOV, can be formulated as SPARQL Queries or answered using the Ebola-KB Dashboard. Database URL: http://ebola.semanticscience.org. PMID:26055098

  13. Sensitive Determination of Terazosin in Pharmaceutical Formulations and Biological Samples by Ionic-Liquid Microextraction Prior to Spectrofluorimetry

    PubMed Central

    Zeeb, Mohsen; Sadeghi, Mahdi

    2012-01-01

    An efficient and environmentally friendly sample preparation method based on the application of hydrophobic 1-Hexylpyridinium hexafluorophosphate [Hpy][PF6] ionic liquid (IL) as a microextraction solvent was proposed to preconcentrate terazosin. The performance of the microextraction method was improved by introducing a common ion of the pyridinium IL into the sample solution. Due to the presence of the common ion, the solubility of the IL significantly decreased. As a result, the phase separation successfully occurred even at high ionic strength, and the volume of the settled IL-phase was not influenced by variations in the ionic strength (up to 30% w/v). After the preconcentration step, the enriched phase was introduced to the spectrofluorimeter for the determination of terazosin. The obtained results revealed that this system did not suffer from the limitations of conventional ionic-liquid microextraction. Under optimum experimental conditions, the proposed method provided a limit of detection (LOD) of 0.027 μg L−1 and a relative standard deviation (R.S.D.) of 2.4%. The present method was successfully applied to terazosin determination in actual pharmaceutical formulations and biological samples. Considering the large variety of ionic liquids, the proposed microextraction method offers many advantages and should find wide application in the future. PMID:22505920

  14. Sensitive determination of terazosin in pharmaceutical formulations and biological samples by ionic-liquid microextraction prior to spectrofluorimetry.

    PubMed

    Zeeb, Mohsen; Sadeghi, Mahdi

    2012-01-01

    An efficient and environmentally friendly sample preparation method based on the application of hydrophobic 1-Hexylpyridinium hexafluorophosphate [Hpy][PF(6)] ionic liquid (IL) as a microextraction solvent was proposed to preconcentrate terazosin. The performance of the microextraction method was improved by introducing a common ion of the pyridinium IL into the sample solution. Due to the presence of the common ion, the solubility of the IL significantly decreased. As a result, the phase separation successfully occurred even at high ionic strength, and the volume of the settled IL-phase was not influenced by variations in the ionic strength (up to 30% w/v). After the preconcentration step, the enriched phase was introduced to the spectrofluorimeter for the determination of terazosin. The obtained results revealed that this system did not suffer from the limitations of conventional ionic-liquid microextraction. Under optimum experimental conditions, the proposed method provided a limit of detection (LOD) of 0.027 μg L(-1) and a relative standard deviation (R.S.D.) of 2.4%. The present method was successfully applied to terazosin determination in actual pharmaceutical formulations and biological samples. Considering the large variety of ionic liquids, the proposed microextraction method offers many advantages and should find wide application in the future. PMID:22505920

  15. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS of the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBSs cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBSs, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBSs will be developed, and modification of the SSF to support these tools and methods will be addressed.

  16. A knowledge base for Vitis vinifera functional analysis

    PubMed Central

    2015-01-01

    Background Vitis vinifera (Grapevine) is the most important fruit species in the modern world. Wine and table grapes sales contribute significantly to the economy of major wine producing countries. The most relevant goals in wine production concern quality and safety. In order to significantly improve the achievement of these objectives and to gain biological knowledge about cultivars, a genomic approach is the most reliable strategy. The recent grapevine genome sequencing offers the opportunity to study the potential roles of genes and microRNAs in fruit maturation and other physiological and pathological processes. Although several systems allowing the analysis of plant genomes have been reported, none of them has been designed specifically for the functional analysis of grapevine genomes of cultivars under environmental stress in connection with microRNA data. Description Here we introduce a novel knowledge base, called BIOWINE, designed for the functional analysis of Vitis vinifera genomes of cultivars present in Sicily. The system allows the analysis of RNA-seq experiments of two different cultivars, namely Nero d'Avola and Nerello Mascalese. Samples were taken under different climatic conditions of phenological phases, diseases, and geographic locations. The BIOWINE web interface is equipped with data analysis modules for grapevine genomes. In particular users may analyze the current genome assembly together with the RNA-seq data through a customized version of GBrowse. The web interface allows users to perform gene set enrichment by exploiting third-party databases. Conclusions BIOWINE is a knowledge base implementing a set of bioinformatics tools for the analysis of grapevine genomes. The system aims to increase our understanding of the grapevine varieties and species of Sicilian products focusing on adaptability to different climatic conditions, phenological phases, diseases, and geographic locations. PMID:26050794

  17. Weather, knowledge base and life-style

    NASA Astrophysics Data System (ADS)

    Bohle, Martin

    2015-04-01

    Why main-stream curiosity for earth-science topics, and thus appraise these topics as being of public interest? Namely, to influence how humankind's activities intersect the geosphere. How to main-stream that curiosity for earth-science topics? Namely, by weaving diverse concerns into common threads that draw on a wide range of perspectives: be it the beauty or particularity of ordinary or special phenomena, evaluating hazards for or from mundane environments, or connecting scholarly investigation with the concerns of citizens at large; and by using traditional or modern media, arts, or story-telling for this threading. Three examples: First, "weather"; weather is a topic of primordial interest for most people: weather impacts human lives, be it for settlement, food, mobility, hunting, fishing, or battle. It is the single earth-science topic that went "prime-time" when the broadcasting of weather forecasts started in the early 1950s, and meteorologists present their work to the public daily. Second, "knowledge base"; the earth sciences are relevant to modern societies' economies and value setting: they provide insights into the evolution of life-bearing planets, the functioning of Earth's systems, and the impact of humankind's activities on biogeochemical systems on Earth. These insights bear on the production of goods, living conditions, and individual well-being. Third, "life-style"; citizens' urban culture prejudices their experiential connections: earth-science-related phenomena are witnessed rarely, even most weather phenomena. In the past, traditional rural communities mediated their rich experiences through earth-centric story-telling. In the course of the global urbanisation process, this culture has given place to society-centric story-telling. Only recently has anthropogenic global change triggered discussions on geoengineering, hazard mitigation, and demographics, which, interwoven with arts, linguistics, and cultural histories, offer a rich narrative

  18. Model-based knowledge-based optical processors

    NASA Astrophysics Data System (ADS)

    Casasent, David; Liebowitz, Suzanne A.

    1987-05-01

    An efficient 3-D object-centered knowledge base is described. The ability to on-line generate a 2-D image projection or range image for any object/viewer orientation from this knowledge base is addressed. Applications of this knowledge base in associative processors and symbolic correlators are then discussed. Initial test results are presented for a multiple degree of freedom object recognition problem. These include new techniques to achieve object orientation information and two new associative memory matrix formulations.

  19. Advanced software development workstation. Knowledge base design: Design of knowledge base for flight planning application

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    1992-01-01

    The development process of the knowledge base for the generation of Test Libraries for Mission Operations Computer (MOC) Command Support focused on a series of information gathering interviews. These knowledge capture sessions are supporting the development of a prototype for evaluating the capabilities of INTUIT on such an application. The prototype includes functions related to POCC (Payload Operation Control Center) processing. It prompts the end-users for input through a series of panels and then generates the Meds associated with the initialization and the update of hazardous command tables for a POCC Processing TLIB.

  20. Automated knowledge base development from CAD/CAE databases

    NASA Technical Reports Server (NTRS)

    Wright, R. Glenn; Blanchard, Mary

    1988-01-01

    Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development process through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include the development of a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge based systems that are inherently free of error.

  1. System Engineering for the NNSA Knowledge Base

    NASA Astrophysics Data System (ADS)

    Young, C.; Ballard, S.; Hipp, J.

    2006-05-01

    To improve ground-based nuclear explosion monitoring capability, GNEM R&E (Ground-based Nuclear Explosion Monitoring Research & Engineering) researchers at the national laboratories have collected an extensive set of raw data products. These raw data are used to develop higher level products (e.g. 2D and 3D travel time models) to better characterize the Earth at regional scales. The processed products and selected portions of the raw data are stored in an archiving and access system known as the NNSA (National Nuclear Security Administration) Knowledge Base (KB), which is engineered to meet the requirements of operational monitoring authorities. At its core, the KB is a data archive, and the effectiveness of the KB is ultimately determined by the quality of the data content, but access to that content is completely controlled by the information system in which that content is embedded. Developing this system has been the task of Sandia National Laboratories (SNL), and in this paper we discuss some of the significant challenges we have faced and the solutions we have engineered. One of the biggest system challenges with raw data has been integrating database content from the various sources to yield an overall KB product that is comprehensive, thorough and validated, yet minimizes the amount of disk storage required. Researchers at different facilities often use the same data to develop their products, and this redundancy must be removed in the delivered KB, ideally without requiring any additional effort on the part of the researchers. Further, related data content must be grouped together for KB user convenience. Initially SNL used whatever tools were already available for these tasks, and did the other tasks manually. The ever-growing volume of KB data to be merged, as well as a need for more control of merging utilities, led SNL to develop our own Java software package, consisting of a low-level database utility library upon which we have built several
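    The merging and de-duplication task described above is not specified in detail in the abstract; the following toy sketch only illustrates the general idea of keeping a single copy of records contributed by several facilities, keyed on hypothetical identifying fields.

```python
# Toy de-duplication sketch (assumed key fields, not Sandia's merging utilities):
# keep one copy of each record when integrating contributions from several labs.
from typing import Dict, Iterable, List, Tuple

Record = Dict[str, str]

def merge_contributions(sources: Iterable[List[Record]]) -> List[Record]:
    """Merge record lists, keeping the first copy of each (station, product_id) pair."""
    merged: Dict[Tuple[str, str], Record] = {}
    for source in sources:
        for rec in source:
            key = (rec["station"], rec["product_id"])  # hypothetical identifying fields
            merged.setdefault(key, rec)
    return list(merged.values())

lab_a = [{"station": "MK31", "product_id": "tt2d-001", "path": "/a/tt2d-001"}]
lab_b = [{"station": "MK31", "product_id": "tt2d-001", "path": "/b/tt2d-001"},
         {"station": "WRA", "product_id": "tt3d-007", "path": "/b/tt3d-007"}]
print(merge_contributions([lab_a, lab_b]))  # the duplicate MK31 record appears once
```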

  2. Use of high-intensity sonication for pre-treatment of biological tissues prior to multielemental analysis by total reflection X-ray fluorescence spectrometry

    NASA Astrophysics Data System (ADS)

    La Calle, Inmaculada De; Costas, Marta; Cabaleiro, Noelia; Lavilla, Isela; Bendicho, Carlos

    2012-01-01

    In this work, two ultrasound-based procedures are developed for sample preparation prior to determination of P, K, Ca, Cr, Mn, Fe, Ni, Cu, Zn, As, Se and Sr in biological tissues by total reflection X-ray fluorescence spectrometry. Ultrasound-assisted extraction by means of a cup-horn sonoreactor and ultrasonic-probe slurry sampling were compared with a well-established procedure such as magnetic agitation slurry sampling. For that purpose, seven certified reference materials and different real samples of animal tissue were used. Similar accuracy and precision are obtained with the three sample preparation approaches tried. Limits of detection were dependent on both the sample matrix and the sample pre-treatment used, the best values being achieved with ultrasound-assisted extraction. Advantages of ultrasound-assisted extraction include reduced sample handling, decreased contamination risks (neither addition of surfactants nor use of foreign objects inside the extraction vial), a simpler background (no solid particles on the sample carrier) and improved recovery for some elements such as P. A mixture of 10% v/v HNO3 + 20-40% v/v HCl was suitable for extraction from biological tissues.

  3. Knowledge-based analysis of microarrays for the discovery of transcriptional regulation relationships

    PubMed Central

    2010-01-01

    Background The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. Results In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. Conclusion High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data. PMID:20122245
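    As a rough, hedged illustration of combining expression evidence with knowledge-base support (not the authors' algorithm), the sketch below blends an expression correlation with a simple prior that rewards transcription factor-target pairs already present in a knowledge base; the gene names and interaction set are invented.

```python
# Hedged sketch: blend expression correlation with a knowledge-base prior.
# Gene names and the interaction set are invented; this is not the paper's method.
import numpy as np

def regulation_score(tf_profile, target_profile, kb_supported, alpha=0.3):
    """Weighted blend of |correlation| and a binary knowledge-base prior."""
    r = abs(np.corrcoef(tf_profile, target_profile)[0, 1])
    prior = 1.0 if kb_supported else 0.0
    return (1 - alpha) * r + alpha * prior

rng = np.random.default_rng(0)
tf = rng.normal(size=20)                      # toy expression profile of a TF
target = tf + rng.normal(scale=0.5, size=20)  # co-expressed putative target
known_pairs = {("GAL4", "GAL1")}              # toy knowledge-base interactions
print(regulation_score(tf, target, ("GAL4", "GAL1") in known_pairs))
```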

  4. Knowledge-Based Entrepreneurship in a Boundless Research System

    ERIC Educational Resources Information Center

    Dell'Anno, Davide

    2008-01-01

    International entrepreneurship and knowledge-based entrepreneurship have recently generated considerable academic and non-academic attention. This paper explores the "new" field of knowledge-based entrepreneurship in a boundless research system. Cultural barriers to the development of business opportunities by researchers persist in some academic…

  5. Applying Knowledge-Based Techniques to Software Development.

    ERIC Educational Resources Information Center

    Harandi, Mehdi T.

    1986-01-01

    Reviews overall structure and design principles of a knowledge-based programming support tool, the Knowledge-Based Programming Assistant, which is being developed at University of Illinois Urbana-Champaign. The system's major units (program design, program coding, and intelligent debugging) and additional functions are described. (MBR)

  6. Knowledge base for expert system process control/optimization

    NASA Astrophysics Data System (ADS)

    Lee, C. W.; Abrams, Frances L.

    An expert system based on the philosophy of qualitative process automation has been developed for the autonomous cure cycle development and control of the autoclave curing process. The system's knowledge base in the form of declarative rules is based on the qualitative understanding of the curing process. The knowledge base and examples of the resulting cure cycle are presented.

  7. Verification of knowledge bases based on containment checking

    SciTech Connect

    Levy, A.Y.; Rousset, M.C.

    1996-12-31

    Building complex knowledge based applications requires encoding large amounts of domain knowledge. After acquiring knowledge from domain experts, much of the effort in building a knowledge base goes into verifying that the knowledge is encoded correctly. We consider the problem of verifying hybrid knowledge bases that contain both Horn rules and a terminology in a description logic. Our approach to the verification problem is based on showing a close relationship to the problem of query containment. Our first contribution, based on this relationship, is presenting a thorough analysis of the decidability and complexity of the verification problem, for knowledge bases containing recursive rules and the interpreted predicates =, ≤, < and ≠. Second, we show that important new classes of constraints on correct inputs and outputs can be expressed in a hybrid setting, in which a description logic class hierarchy is also considered, and we present the first complete algorithm for verifying such hybrid knowledge bases.

  8. Hyperincursion and the Globalization of the Knowledge-Based Economy

    NASA Astrophysics Data System (ADS)

    Leydesdorff, Loet

    2006-06-01

    In biological systems, the capacity of anticipation—that is, entertaining a model of the system within the system—can be considered as naturally given. Human languages enable psychological systems to construct and exchange mental models of themselves and their environments reflexively, that is, provide meaning to the events. At the level of the social system expectations can further be codified. When these codifications are functionally differentiated—like between market mechanisms and scientific research programs—the potential asynchronicity in the update among the subsystems provides room for a second anticipatory mechanism at the level of the transversal information exchange among differently codified meaning-processing subsystems. Interactions between the two different anticipatory mechanisms (the transversal one and the one along the time axis in each subsystem) may lead to co-evolutions and stabilization of expectations along trajectories. The wider horizon of knowledgeable expectations can be expected to meta-stabilize and also globalize a previously stabilized configuration of expectations against the axis of time. While stabilization can be considered as consequences of interaction and aggregation among incursive formulations of the logistic equation, globalization can be modeled using the hyperincursive formulation of this equation. The knowledge-based subdynamic at the global level which thus emerges, enables historical agents to inform the reconstruction of previous states and to co-construct future states of the social system, for example, in a techno-economic co-evolution.

  9. Bioenergy Science Center KnowledgeBase

    DOE Data Explorer

    Syed, M. H.; Karpinets, T. V.; Parang, M.; Leuze, M. R.; Park, B. H.; Hyatt, D.; Brown, S. D.; Moulton, S. Galloway, M.D.; Uberbacher, E. C.

    The challenge of converting cellulosic biomass to sugars is the dominant obstacle to cost-effective production of biofuels in quantities significant enough to displace U.S. consumption of fossil transportation fuels. The BioEnergy Science Center (BESC) tackles this challenge of biomass recalcitrance by closely linking (1) plant research to make cell walls easier to deconstruct, and (2) microbial research to develop multi-talented biocatalysts tailor-made to produce biofuels in a single step. [from the 2011 BESC factsheet] The BioEnergy Science Center (BESC) is a multi-institutional, multidisciplinary research (biological, chemical, physical and computational sciences, mathematics and engineering) organization focused on the fundamental understanding and elimination of biomass recalcitrance. The BESC Knowledgebase and its associated tools is a discovery platform for bioenergy research. It consists of a collection of metadata, data, and computational tools for data analysis, integration, comparison and visualization for plants and microbes in the center. The BESC Knowledgebase (KB) and BESC Laboratory Information Management System (LIMS) enable bioenergy researchers to perform systemic research. [http://bobcat.ornl.gov/besc/index.jsp]

  10. The data dictionary: A view into the CTBT knowledge base

    SciTech Connect

    Shepherd, E.R.; Keyser, R.G.; Armstrong, H.M.

    1997-08-01

    The data dictionary for the Comprehensive Test Ban Treaty (CTBT) knowledge base provides a comprehensive, current catalog of the projected contents of the knowledge base. It is written from a data definition view of the knowledge base and therefore organizes information in a fashion that allows logical storage within the computer. The data dictionary introduces two organization categories of data: the datatype, which is a broad, high-level category of data, and the dataset, which is a specific instance of a datatype. The knowledge base, and thus the data dictionary, consist of a fixed, relatively small number of datatypes, but new datasets are expected to be added on a regular basis. The data dictionary is a tangible result of the design effort for the knowledge base and is intended to be used by anyone who accesses the knowledge base for any purpose, such as populating the knowledge base with data, or accessing the data for use with automatic data processing (ADP) routines, or browsing through the data for verification purposes. For these two reasons, it is important to discuss the development of the data dictionary as well as to describe its contents to better understand its usefulness; that is the purpose of this paper.

  11. Contribution of brain or biological reserve and cognitive or neural reserve to outcome after TBI: A meta-analysis (prior to 2015).

    PubMed

    Mathias, Jane L; Wheaton, Patricia

    2015-08-01

    Brain/biological (BR) and cognitive/neural reserve (CR) have increasingly been used to explain some of the variability that occurs as a consequence of normal ageing and neurological injuries or disease. However, research evaluating the impact of reserve on outcomes after adult traumatic brain injury (TBI) has yet to be quantitatively reviewed. This meta-analysis consolidated data from 90 studies (published prior to 2015) that either examined the relationship between measures of BR (genetics, age, sex) or CR (education, premorbid IQ) and outcomes after TBI or compared the outcomes of groups with high and low reserve. The evidence for genetic sources of reserve was limited and often contrary to prediction. APOE ε4 status has been studied most, but did not have a consistent or sizeable impact on outcomes. The majority of studies found that younger age was associated with better outcomes, however most failed to adjust for normal age-related changes in cognitive performance that are independent of a TBI. This finding was reversed (older adults had better outcomes) in the small number of studies that provided age-adjusted scores; although it remains unclear whether differences in the cause and severity of injuries that are sustained by younger and older adults contributed to this finding. Despite being more likely to sustain a TBI, males have comparable outcomes to females. Overall, as is the case in the general population, higher levels of education and pre-morbid IQ are both associated with better outcomes. PMID:26054792

  12. Clinical knowledge-based inverse treatment planning

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Xing, Lei

    2004-11-01

    Clinical IMRT treatment plans are currently made using dose-based optimization algorithms, which do not consider the nonlinear dose-volume effects for tumours and normal structures. The choice of structure specific importance factors represents an additional degree of freedom of the system and makes rigorous optimization intractable. The purpose of this work is to circumvent the two problems by developing a biologically more sensible yet clinically practical inverse planning framework. To implement this, the dose-volume status of a structure was characterized by using the effective volume in the voxel domain. A new objective function was constructed with the incorporation of the volumetric information of the system so that the figure of merit of a given IMRT plan depends not only on the dose deviation from the desired distribution but also the dose-volume status of the involved organs. The conventional importance factor of an organ was written into a product of two components: (i) a generic importance that parametrizes the relative importance of the organs in the ideal situation when the goals for all the organs are met; (ii) a dose-dependent factor that quantifies our level of clinical/dosimetric satisfaction for a given plan. The generic importance can be determined a priori, and in most circumstances, does not need adjustment, whereas the second one, which is responsible for the intractable behaviour of the trade-off seen in conventional inverse planning, was determined automatically. An inverse planning module based on the proposed formalism was implemented and applied to a prostate case and a head-neck case. A comparison with the conventional inverse planning technique indicated that, for the same target dose coverage, the critical structure sparing was substantially improved for both cases. The incorporation of clinical knowledge allows us to obtain better IMRT plans and makes it possible to auto-select the importance factors, greatly facilitating the inverse
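    The decomposition of each organ's importance factor into a generic component and a dose-dependent component can be illustrated with a small numerical sketch. The formula below is an assumption chosen only to show the structure of the idea, not the objective function actually used in the paper.

```python
# Illustrative only: quadratic dose objective weighted by a generic importance
# times an assumed dose-dependent satisfaction factor (not the paper's formula).
import numpy as np

def organ_objective(dose, goal, generic_importance):
    """Mean squared dose deviation, re-weighted when the organ is far from its goal."""
    deviation = dose - goal
    dose_dependent = 1.0 + np.mean(np.abs(deviation)) / goal.mean()  # grows with dissatisfaction
    return generic_importance * dose_dependent * np.mean(deviation ** 2)

ptv_dose = np.array([72.0, 70.5, 69.8])   # toy voxel doses in Gy
ptv_goal = np.full(3, 74.0)
oar_dose = np.array([35.0, 42.0, 50.0])
oar_goal = np.full(3, 30.0)
# Generic importances are fixed a priori; the dose-dependent part adapts automatically.
print(organ_objective(ptv_dose, ptv_goal, 1.0) + organ_objective(oar_dose, oar_goal, 0.4))
```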

  13. Knowledge-Based Systems (KBS) development standards: A maintenance perspective

    NASA Technical Reports Server (NTRS)

    Brill, John

    1990-01-01

    Information on knowledge-based systems (KBS) is given in viewgraph form. Information is given on KBS standardization needs, the knowledge engineering process, program management, software and hardware issues, and chronic problem areas.

  14. The process for integrating the NNSA knowledge base.

    SciTech Connect

    Wilkening, Lisa K.; Carr, Dorthe Bame; Young, Christopher John; Hampton, Jeff; Martinez, Elaine

    2009-03-01

    From 2002 through 2006, the Ground Based Nuclear Explosion Monitoring Research & Engineering (GNEMRE) program at Sandia National Laboratories defined and modified a process for merging different types of integrated research products (IRPs) from various researchers into a cohesive, well-organized collection known as the NNSA Knowledge Base, to support operational treaty monitoring. This process includes defining the KB structure, systematically and logically aggregating IRPs into a complete set, and verifying and validating that the integrated Knowledge Base works as expected.

  15. Maintaining a Knowledge Base Using the MEDAS Knowledge Engineering Tools

    PubMed Central

    Naeymi-Rad, Frank; Evens, Martha; Koschmann, Timothy; Lee, Chui-Mei; Gudipati, Rao Y.C.; Kepic, Theresa; Rackow, Eric; Weil, Max Harry

    1985-01-01

    This paper describes the process by which a medical expert creates a new knowledge base for MEDAS, the Medical Emergency Decision Assistance System. It follows the expert physician step by step as a new disorder is entered along with its relevant symptoms. As the expanded knowledge base is tested, inconsistencies are detected, and corrections are made, showing at each step the available tools and giving an example of their use.

  16. Conflict Resolution of Chinese Chess Endgame Knowledge Base

    NASA Astrophysics Data System (ADS)

    Chen, Bo-Nian; Liu, Pangfang; Hsu, Shun-Chin; Hsu, Tsan-Sheng

    Endgame heuristics are often incorporated as part of the evaluation function used in Chinese Chess programs. In our program, Contemplation, we have proposed an automatic strategy to construct a large set of endgame heuristics. In this paper, we propose a conflict resolution strategy to eliminate the conflicts among the constructed heuristic databases, which is called the endgame knowledge base. In our experiment, the correctness of the constructed endgame knowledge base is sufficiently high for practical usage.

  17. XML-Based SHINE Knowledge Base Interchange Language

    NASA Technical Reports Server (NTRS)

    James, Mark; Mackey, Ryan; Tikidjian, Raffi

    2008-01-01

    The SHINE Knowledge Base Interchange Language software has been designed to more efficiently send new knowledge bases to spacecraft that have been embedded with the Spacecraft Health Inference Engine (SHINE) tool. The intention of the behavioral model is to capture most of the information generally associated with a spacecraft functional model, while specifically addressing the needs of execution within SHINE and Livingstone. As such, it has some constructs that are based on one or the other.

  18. Analysis of molecular expression patterns and integration with other knowledge bases using probabilistic Bayesian network models

    SciTech Connect

    Moler, Edward J.; Mian, I.S.

    2000-03-01

    How can molecular expression experiments be interpreted with greater than ten to the fourth measurements per chip? How can one get the most quantitative information possible from the experimental data with good confidence? These are important questions whose solutions require an interdisciplinary combination of molecular and cellular biology, computer science, statistics, and complex systems analysis. The explosion of data from microarray techniques presents the problem of interpreting the experiments. The availability of large-scale knowledge bases provides the opportunity to maximize the information extracted from these experiments. We have developed new methods of discovering biological function, metabolic pathways, and regulatory networks from these data and knowledge bases. These techniques are applicable to analyses for biomedical engineering, clinical, and fundamental cell and molecular biology studies. Our approach uses probabilistic, computational methods that give quantitative interpretations of data in a biological context. We have selected Bayesian statistical models with graphical network representations as a framework for our methods. As a first step, we use a naive Bayesian classifier to identify statistically significant patterns in gene expression data. We have developed methods which allow us to (a) characterize which genes or experiments distinguish each class from the others, (b) cross-index the resulting classes with other databases to assess the biological meaning of the classes, and (c) display a gross overview of cellular dynamics. We have developed a number of visualization tools to convey the results. We report here our methods of classification and our first attempts at integrating the data and other knowledge bases together with new visualization tools. We demonstrate the utility of these methods and tools by analysis of a series of yeast cDNA microarray data and of a set of cancerous/normal sample data from colon cancer patients. We discuss
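    The first analysis step mentioned above, a naive Bayesian classifier applied to expression profiles, can be sketched as follows with synthetic data standing in for the yeast and colon microarray sets; this is not the authors' implementation.

```python
# Synthetic stand-in for a microarray classification step using naive Bayes;
# not the authors' implementation, just the generic technique they name.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 100))        # 40 samples x 100 genes
y = np.repeat([0, 1], 20)             # two sample classes
X[y == 1, :10] += 1.5                 # the first 10 genes separate the classes

clf = GaussianNB().fit(X, y)
# Genes whose class-conditional means differ most act as candidate class markers
marker_rank = np.argsort(-np.abs(clf.theta_[1] - clf.theta_[0]))
print(marker_rank[:10])
```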

  19. Case-Based Tutoring from a Medical Knowledge Base

    PubMed Central

    Chin, Homer L.

    1988-01-01

    The past decade has seen the emergence of programs that make use of large knowledge bases to assist physicians in diagnosis within the general field of internal medicine. One such program, Internist-I, contains knowledge about over 600 diseases, covering a significant proportion of internal medicine. This paper describes the process of converting a subset of this knowledge base--in the area of cardiovascular diseases--into a probabilistic format, and the use of this resulting knowledge base to teach medical diagnostic knowledge. The system (called KBSimulator--for Knowledge-Based patient Simulator) generates simulated patient cases and uses these cases as a focal point from which to teach medical knowledge. It interacts with the student in a mixed-initiative fashion, presenting patients for the student to diagnose, and allowing the student to obtain further information on his/her own initiative in the context of that patient case. The system scores the student, and uses these scores to form a rudimentary model of the student. This resulting model of the student is then used to direct the generation of subsequent patient cases. This project demonstrates the feasibility of building an intelligent, flexible instructional system that uses a knowledge base constructed primarily for medical diagnosis.

  20. Evaluation of database technologies for the CTBT Knowledge Base prototype

    SciTech Connect

    Keyser, R.; Shepard-Dombroski, E.; Baur, D.; Hipp, J.; Moore, S.; Young, C.; Chael, E.

    1996-11-01

    This document examines a number of different software technologies in the rapidly changing field of database management systems, evaluates these systems in light of the expected needs of the Comprehensive Test Ban Treaty (CTBT) Knowledge Base, and makes some recommendations for the initial prototypes of the Knowledge Base. The Knowledge Base requirements are examined and then used as criteria for evaluation of the database management options. A mock-up of the data expected in the Knowledge Base is used as a basis for examining how four different database technologies deal with the problems of storing and retrieving the data. Based on these requirements and the results of the evaluation, the recommendation is that the Illustra database be considered for the initial prototype of the Knowledge Base. Illustra offers a unique blend of performance, flexibility, and features that will aid in the implementation of the prototype. At the same time, Illustra provides a high level of compatibility with the hardware and software environments present at the US NDC (National Data Center) and the PIDC (Prototype International Data Center).

  1. A Knowledge-Based System Developer for aerospace applications

    NASA Technical Reports Server (NTRS)

    Shi, George Z.; Wu, Kewei; Fensky, Connie S.; Lo, Ching F.

    1993-01-01

    A prototype Knowledge-Based System Developer (KBSD) has been developed for aerospace applications by utilizing artificial intelligence technology. The KBSD directly acquires knowledge from domain experts through a graphical interface and then builds expert systems from that knowledge. This raises the state of the art of knowledge acquisition/expert system technology to a new level by lessening the need for skilled knowledge engineers. The feasibility, applicability, and efficiency of the proposed concept were established, justifying a continuation that would develop the prototype into a full-scale, general-purpose knowledge-based system developer. The KBSD has great commercial potential. It will provide a marketable software shell which alleviates the need for knowledge engineers and increases productivity in the workplace. The KBSD will therefore make knowledge-based systems available to a large portion of industry.

  2. Online Knowledge-Based Model for Big Data Topic Extraction

    PubMed Central

    Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan

    2016-01-01

    Lifelong machine learning (LML) models learn with experience, maintaining a knowledge base without user intervention. Unlike traditional single-domain models they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while reducing the processing cost to half. PMID:27195004

  3. Knowledge based systems: From process control to policy analysis

    SciTech Connect

    Marinuzzi, J.G.

    1993-01-01

    Los Alamos has been pursuing the use of Knowledge Based Systems for many years. These systems are currently being used to support projects that range across many production and operations areas. By investing time and money in people and equipment, Los Alamos has developed one of the strongest knowledge based systems capabilities within the DOE. Staff of Los Alamos' Mechanical Electronic Engineering Division are using these knowledge systems to increase capability, productivity and competitiveness in areas of manufacturing quality control, robotics, process control, plant design and management decision support. This paper describes some of these projects and associated technical program approaches, accomplishments, benefits and future goals.

  4. Knowledge based systems: From process control to policy analysis

    SciTech Connect

    Marinuzzi, J.G.

    1993-06-01

    Los Alamos has been pursuing the use of Knowledge Based Systems for many years. These systems are currently being used to support projects that range across many production and operations areas. By investing time and money in people and equipment, Los Alamos has developed one of the strongest knowledge based systems capabilities within the DOE. Staff of Los Alamos' Mechanical & Electronic Engineering Division are using these knowledge systems to increase capability, productivity and competitiveness in areas of manufacturing quality control, robotics, process control, plant design and management decision support. This paper describes some of these projects and associated technical program approaches, accomplishments, benefits and future goals.

  5. Arranging ISO 13606 archetypes into a knowledge base.

    PubMed

    Kopanitsa, Georgy

    2014-01-01

    To enable the efficient reuse of standard-based medical data we propose to develop a higher-level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analyzed for their ability to be applied in the implementation of a higher-level model that will establish relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. Presently it supports several relationships and will be extended in the future. PMID:25160140

  6. Aquatic ecology of the Elwha River estuary prior to dam removal: Chapter 7 in Coastal habitats of the Elwha River, Washington--biological and physical patterns and processes prior to dam removal

    USGS Publications Warehouse

    Duda, Jeffrey J.; Beirne, Matthew M.; Larsen, Kimberly; Barry, Dwight; Stenberg, Karl; McHenry, Michael L.

    2011-01-01

    The removal of two long-standing dams on the Elwha River in Washington State will initiate a suite of biological and physical changes to the estuary at the river mouth. Estuaries represent a transition between freshwater and saltwater, have unique assemblages of plants and animals, and are a critical habitat for some salmon species as they migrate to the ocean. This chapter summarizes a number of studies in the Elwha River estuary, and focuses on physical and biological aspects of the ecosystem that are expected to change following dam removal. Included are data sets that summarize (1) water chemistry samples collected over a 16 month period; (2) beach seining activities targeted toward describing the fish assemblage of the estuary and migratory patterns of juvenile salmon; (3) descriptions of the aquatic and terrestrial invertebrate communities in the estuary, which represent an important food source for juvenile fish and are important water quality indicators; and (4) the diet and growth patterns of juvenile Chinook salmon in the lower Elwha River and estuary. These data represent baseline conditions of the ecosystem after nearly a century of changes due to the dams and will be useful in monitoring the changes to the river and estuary following dam removal.

  7. PLAN-IT - Knowledge-based mission sequencing

    NASA Technical Reports Server (NTRS)

    Biefeld, Eric W.

    1987-01-01

    PLAN-IT (Plan-Integrated Timelines), a knowledge-based approach to assist in mission sequencing, is discussed. PLAN-IT uses a large set of scheduling techniques known as strategies to develop and maintain a mission sequence. The approach implemented by PLAN-IT and the current applications of PLAN-IT for sequencing at NASA are reported.

  8. Dynamic Strategic Planning in a Professional Knowledge-Based Organization

    ERIC Educational Resources Information Center

    Olivarius, Niels de Fine; Kousgaard, Marius Brostrom; Reventlow, Susanne; Quelle, Dan Grevelund; Tulinius, Charlotte

    2010-01-01

    Professional, knowledge-based institutions have a particular form of organization and culture that makes special demands on the strategic planning supervised by research administrators and managers. A model for dynamic strategic planning based on a pragmatic utilization of the multitude of strategy models was used in a small university-affiliated…

  9. A Text Knowledge Base from the AI Handbook.

    ERIC Educational Resources Information Center

    Simmons, Robert F.

    1987-01-01

    Describes a prototype natural language text knowledge system (TKS) that was used to organize 50 pages of a handbook on artificial intelligence as an inferential knowledge base with natural language query and command capabilities. Representation of text, database navigation, query systems, discourse structuring, and future research needs are…

  10. Knowledge Based Engineering for Spatial Database Management and Use

    NASA Technical Reports Server (NTRS)

    Peuquet, D. (Principal Investigator)

    1984-01-01

    The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) are examined. Questions involving the performance and modification to the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.

  11. Planning and Implementing a High Performance Knowledge Base.

    ERIC Educational Resources Information Center

    Cortez, Edwin M.

    1999-01-01

    Discusses the conceptual framework for developing a rapid-prototype high-performance knowledge base for the four mission agencies of the United States Department of Agriculture and their university partners. Describes the background of the project and methods used for establishing the requirements; examines issues and problems surrounding semantic…

  12. CACTUS: Command and Control Training Using Knowledge-Based Simulations

    ERIC Educational Resources Information Center

    Hartley, Roger; Ravenscroft, Andrew; Williams, R. J.

    2008-01-01

    The CACTUS project was concerned with command and control training of large incidents where public order may be at risk, such as large demonstrations and marches. The training requirements and objectives of the project are first summarized justifying the use of knowledge-based computer methods to support and extend conventional training…

  13. Cataloging and Expert Systems: AACR2 as a Knowledge Base.

    ERIC Educational Resources Information Center

    Hjerppe, Roland; Olander, Birgitta

    1989-01-01

    Describes a project that developed two expert systems for library cataloging using the second edition of the Anglo American Cataloging Rules (AACR2) as a knowledge base. The discussion covers cataloging as interpretation, the structure of AACR2, and the feasibility of using expert systems for cataloging in traditional library settings. (26…

  14. Development to Release of CTBT Knowledge Base Datasets

    SciTech Connect

    Moore, S.G.; Shepherd, E.R.

    1998-10-20

    For the CTBT Knowledge Base to be useful as a tool for improving U.S. monitoring capabilities, the contents of the Knowledge Base must be subjected to a well-defined set of procedures to ensure the integrity and relevance of the constituent datasets. This paper proposes a possible set of procedures for datasets that are delivered to Sandia National Laboratories (SNL) for inclusion in the Knowledge Base. The proposed procedures include defining preliminary acceptance criteria, performing verification and validation activities, and subjecting the datasets to approval by domain experts. Preliminary acceptance criteria include receipt of the data, its metadata, and a proposal for its usability for U.S. National Data Center operations. Verification activities establish the correctness and completeness of the data, while validation activities establish the relevance of the data to its proposed use. Results from these activities are presented to domain experts, such as analysts and peers, for final approval of the datasets for release to the Knowledge Base. Formats and functionality will vary across datasets, so the procedures proposed herein define an overall plan for establishing the integrity and relevance of the dataset. Specific procedures for verification, validation, and approval will be defined for each dataset, or for each type of dataset, as appropriate. Potential dataset sources including Los Alamos National Laboratories and Lawrence Livermore National Laboratories have contributed significantly to the development of this process.

  15. Computer Assisted Multi-Center Creation of Medical Knowledge Bases

    PubMed Central

    Giuse, Nunzia Bettinsoli; Giuse, Dario A.; Miller, Randolph A.

    1988-01-01

    Computer programs which support different aspects of medical care have been developed in recent years. Their capabilities range from diagnosis to medical imaging, and include hospital management systems and therapy prescription. In spite of their diversity these systems have one commonality: their reliance on a large body of medical knowledge in computer-readable form. This knowledge enables such programs to draw inferences, validate hypotheses, and in general to perform their intended task. As has been clear to developers of such systems, however, the creation and maintenance of medical knowledge bases are very expensive. Practical and economical difficulties encountered during this long-term process have discouraged most attempts. This paper discusses knowledge base creation and maintenance, with special emphasis on medical applications. We first describe the methods currently used and their limitations. We then present our recent work on developing tools and methodologies which will assist in the process of creating a medical knowledge base. We focus, in particular, on the possibility of multi-center creation of the knowledge base.

  16. Knowledge-Based Hierarchies: Using Organizations to Understand the Economy

    ERIC Educational Resources Information Center

    Garicano, Luis; Rossi-Hansberg, Esteban

    2015-01-01

    Incorporating the decision of how to organize the acquisition, use, and communication of knowledge into economic models is essential to understand a wide variety of economic phenomena. We survey the literature that has used knowledge-based hierarchies to study issues such as the evolution of wage inequality, the growth and productivity of firms,…

  17. Designing a Knowledge Base for Automatic Book Classification.

    ERIC Educational Resources Information Center

    Kim, Jeong-Hyen; Lee, Kyung-Ho

    2002-01-01

    Reports on the design of a knowledge base for an automatic classification in the library science field by using the facet classification principles of colon classification. Discusses inputting titles or key words into the computer to create class numbers through automatic subject recognition and processing title key words. (Author/LRW)

  18. Knowledge-Based Aid: A Four Agency Comparative Study

    ERIC Educational Resources Information Center

    McGrath, Simon; King, Kenneth

    2004-01-01

    Part of the response of many development cooperation agencies to the challenges of globalisation, ICTs and the knowledge economy is to emphasise the importance of knowledge for development. This paper looks at the discourses and practices of ''knowledge-based aid'' through an exploration of four agencies: the World Bank, DFID, Sida and JICA. It…

  19. Value Creation in the Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Liu, Fang-Chun

    2013-01-01

    Effective investment strategies help companies form dynamic core organizational capabilities allowing them to adapt and survive in today's rapidly changing knowledge-based economy. This dissertation investigates three valuation issues that challenge managers with respect to developing business-critical investment strategies that can have…

  20. KBGIS-II: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, Terence; Peuquet, Donna; Menon, Sudhakar; Agarwal, Pankaj

    1986-01-01

    The architecture and working of a recently implemented Knowledge-Based Geographic Information System (KBGIS-II), designed to satisfy several general criteria for the GIS, is described. The system has four major functions including query-answering, learning and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial object language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is performing all its designated tasks successfully. Future reports will relate performance characteristics of the system.
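    The quadtree representation underlying KBGIS-II's spatial data bases can be illustrated with a toy point quadtree and a rectangular range query; KBGIS-II's actual structures and constraint-satisfaction search are far richer, so the sketch below only conveys the indexing idea.

```python
# Toy point quadtree with a rectangular range query (illustrative only;
# not KBGIS-II's actual data structures or search procedures).
class QuadTree:
    def __init__(self, x0, y0, x1, y1, capacity=4):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = []
        self.children = None

    def insert(self, p):
        x0, y0, x1, y1 = self.bounds
        if not (x0 <= p[0] < x1 and y0 <= p[1] < y1):
            return False                      # point lies outside this node
        if self.children is None and len(self.points) < self.capacity:
            self.points.append(p)
            return True
        if self.children is None:
            self._split()
        return any(c.insert(p) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my), QuadTree(mx, y0, x1, my),
                         QuadTree(x0, my, mx, y1), QuadTree(mx, my, x1, y1)]
        for p in self.points:                 # push existing points into the quadrants
            any(c.insert(p) for c in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1):
        x0, y0, x1, y1 = self.bounds
        if qx1 < x0 or qx0 >= x1 or qy1 < y0 or qy0 >= y1:
            return []                         # query window misses this node
        hits = [p for p in self.points if qx0 <= p[0] <= qx1 and qy0 <= p[1] <= qy1]
        if self.children:
            for c in self.children:
                hits.extend(c.query(qx0, qy0, qx1, qy1))
        return hits

tree = QuadTree(0, 0, 100, 100)
for p in [(10, 10), (12, 15), (80, 75), (55, 40), (11, 12)]:
    tree.insert(p)
print(tree.query(5, 5, 20, 20))   # points near the lower-left corner
```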

  1. KBGIS-2: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, T.; Peuquet, D.; Menon, S.; Agarwal, P.

    1986-01-01

    The architecture and working of a recently implemented knowledge-based geographic information system (KBGIS-2) that was designed to satisfy several general criteria for the geographic information system are described. The system has four major functions that include query-answering, learning, and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial objects language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is currently performing all its designated tasks successfully, although currently implemented on inadequate hardware. Future reports will detail the performance characteristics of the system, and various new extensions are planned in order to enhance the power of KBGIS-2.

  2. The Ignorance of the Knowledge-Based Economy. The Iconoclast.

    ERIC Educational Resources Information Center

    McMurtry, John

    1996-01-01

    Castigates the supposed "knowledge-based economy" as simply a public relations smokescreen covering up the free market exploitation of people and resources serving corporate interests. Discusses the many ways that private industry, often with government collusion, has controlled or denied dissemination of information to serve its own interests.…

  3. Conventional and Knowledge-Based Information Retrieval with Prolog.

    ERIC Educational Resources Information Center

    Leigh, William; Paz, Noemi

    1988-01-01

    Describes the use of PROLOG to program knowledge-based information retrieval systems, in which the knowledge contained in a document is translated into machine processable logic. Several examples of the resulting search process, and the program rules supporting the process, are given. (10 references) (CLB)

  4. Malaysia Transitions toward a Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Mustapha, Ramlee; Abdullah, Abu

    2004-01-01

    The emergence of a knowledge-based economy (k-economy) has spawned a "new" notion of workplace literacy, changing the relationship between employers and employees. The traditional covenant where employees expect a stable or lifelong employment will no longer apply. The retention of employees will most probably be based on their skills and…

  5. Intelligent Tools for Planning Knowledge base Development and Verification

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems.

  6. Grey Documentation as a Knowledge Base in Social Work.

    ERIC Educational Resources Information Center

    Berman, Yitzhak

    1994-01-01

    Defines grey documentation as documents issued informally and not available through normal channels and discusses the role that grey documentation can play in the social work knowledge base. Topics addressed include grey documentation and science; social work and the empirical approach in knowledge development; and dissemination of grey…

  7. Ada as an implementation language for knowledge based systems

    NASA Technical Reports Server (NTRS)

    Rochowiak, Daniel

    1990-01-01

    Debates about the selection of programming languages often produce cultural collisions that are not easily resolved. This is especially true in the case of Ada and knowledge based programming. The construction of programming tools provides a desirable alternative for resolving the conflict.

  8. Special Issue: Decision Support and Knowledge-Based Systems.

    ERIC Educational Resources Information Center

    Stohr, Edward A.; And Others

    1987-01-01

    Six papers dealing with decision support and knowledge based systems are presented. Five of the papers are concerned in some way with the use of artificial intelligence techniques in individual or group decision support. The sixth paper presents empirical results from the use of a group decision support system. (CLB)

  9. National Nuclear Security Administration Knowledge Base Core Table Schema Document

    SciTech Connect

    CARR,DORTHE B.

    2002-09-01

    The National Nuclear Security Administration is creating a Knowledge Base to store technical information to support the United States nuclear explosion monitoring mission. This document defines the core database tables that are used in the Knowledge Base. The purpose of this document is to present the ORACLE database tables in the NNSA Knowledge Base that are based on modifications to the CSS3.0 Database Schema developed in 1990 (Anderson et al., 1990). These modifications include additional columns to the affiliation table, an increase in the internal ORACLE format from 8 integers to 9 integers for thirteen IDs, and new primary and unique key definitions for six tables. It is intended to be used as a reference by researchers inside and outside of NNSA/DOE as they compile information to submit to the NNSA Knowledge Base. These "core" tables are separated into two groups. The Primary tables are dynamic and consist of information that can be used in automatic and interactive processing (e.g. arrivals, locations). The Lookup tables change infrequently and are used for auxiliary information used by the processing. In general, the information stored in the core tables consists of: arrivals; events, origins, associations of arrivals; magnitude information; station information (networks, site descriptions, instrument responses); pointers to waveform data; and comments pertaining to the information. This document is divided into four sections, the first being this introduction. Section two defines the sixteen tables that make up the core tables of the NNSA Knowledge Base database. Both internal (ORACLE) and external formats for the attributes are defined, along with a short description of each attribute. In addition, the primary, unique and foreign keys are defined. Section three of the document shows the relationships between the different tables by using entity-relationship diagrams. The last section defines the columns or attributes of the various tables. Information that is

  10. Openness to and preference for attributes of biologic therapy prior to initiation among patients with rheumatoid arthritis: patient and rheumatologist perspectives and implications for decision making

    PubMed Central

    Bolge, Susan C; Goren, Amir; Brown, Duncan; Ginsberg, Seth; Allen, Isabel

    2016-01-01

    Purpose Despite American College of Rheumatology recommendations, appropriate and timely initiation of biologic therapies does not always occur. This study examined openness to and preference for attributes of biologic therapies among patients with rheumatoid arthritis (RA), differences in patients’ and rheumatologists’ perceptions, and discussions around biologic therapy initiation. Patients and methods A self-administered online survey was completed by 243 adult patients with RA in the US who were taking disease-modifying antirheumatic drugs (DMARDs) and had never taken, but had discussed biologic therapy with a rheumatologist. Patients were recruited from a consumer panel (n=142) and patient advocacy organization (n=101). A separate survey was completed by 103 rheumatologists who treated at least 25 patients with RA per month with biologic therapy. Descriptive and bivariate analyses were conducted separately for patients and rheumatologists. Attributes of biologic therapy included route of administration (intravenous infusion or subcutaneous injection), frequency of injections/infusions, and duration of infusion. Results Over half of patients (53.1%) were open to both intravenous infusion and subcutaneous injection, whereas rheumatologists reported 40.7% of patients would be open to both. Only 26.3% of patients strongly preferred subcutaneous injection, whereas rheumatologists reported 35.2%. Discrepancies were even more pronounced among specific patient types (eg, older vs younger patients and Medicare recipients). Among patients, 23% reported initiating discussion about biologics and 54% reported their rheumatologist initiated the discussion. A majority of rheumatologists reported discussing in detail several key aspects of biologics, whereas a minority of patients reported the same. Conclusion Preferences differed among patients with RA from rheumatologists’ perceptions of these preferences for biologic therapy, including greater openness to intravenous

  11. Predicting Mycobacterium tuberculosis Complex Clades Using Knowledge-Based Bayesian Networks

    PubMed Central

    Bennett, Kristin P.

    2014-01-01

    We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC) clades. The proposed knowledge-based Bayesian network (KBBN) treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes), since these are routinely gathered from MTBC isolates of tuberculosis (TB) patients. Results show that incorporating rules into problems can drastically increase classification accuracy if data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web. PMID:24864238
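    The core idea of treating expert rules as priors on the classes can be sketched with a toy spoligotype example: a rule selects a prior over clades, and the data then refine it through a likelihood. The clade names, the rule, and the probability tables below are illustrative assumptions, not the published KBBN model.

```python
# Toy sketch of expert rules acting as class priors that data then refine.
# Clade names, the rule, and the probability tables are illustrative assumptions.
import numpy as np

clades = ["Beijing", "EAI", "LAM"]

def rule_prior(spoligotype):
    """Illustrative expert rule: absence of spacers 1-34 favours the Beijing clade."""
    if not spoligotype[:34].any():
        return np.array([0.8, 0.1, 0.1])
    return np.array([1 / 3, 1 / 3, 1 / 3])

def posterior(spoligotype, p_spacer_given_clade):
    """Combine the rule-based prior with an independent-spacer likelihood."""
    prior = rule_prior(spoligotype)
    likelihood = np.prod(
        np.where(spoligotype, p_spacer_given_clade, 1 - p_spacer_given_clade), axis=1)
    unnormalized = prior * likelihood
    return unnormalized / unnormalized.sum()

rng = np.random.default_rng(2)
p_tables = rng.uniform(0.1, 0.9, size=(3, 43))  # toy P(spacer present | clade)
spoligo = np.zeros(43, dtype=bool)
spoligo[34:] = True                             # Beijing-like spacer pattern
print(dict(zip(clades, posterior(spoligo, p_tables).round(3))))
```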

  12. Predicting Mycobacterium tuberculosis complex clades using knowledge-based Bayesian networks.

    PubMed

    Aminian, Minoo; Couvin, David; Shabbeer, Amina; Hadley, Kane; Vandenberg, Scott; Rastogi, Nalin; Bennett, Kristin P

    2014-01-01

    We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC) clades. The proposed knowledge-based Bayesian network (KBBN) treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes), since these are routinely gathered from MTBC isolates of tuberculosis (TB) patients. Results show that incorporating rules into problems can drastically increase classification accuracy if data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web. PMID:24864238

  13. Dealing with difficult deformations: construction of a knowledge-based deformation atlas

    NASA Astrophysics Data System (ADS)

    Thorup, S. S.; Darvann, T. A.; Hermann, N. V.; Larsen, P.; Ólafsdóttir, H.; Paulsen, R. R.; Kane, A. A.; Govier, D.; Lo, L.-J.; Kreiborg, S.; Larsen, R.

    2010-03-01

    Twenty-three Taiwanese infants with unilateral cleft lip and palate (UCLP) were CT-scanned before lip repair at the age of 3 months, and again after lip repair at the age of 12 months. In order to evaluate the surgical result, detailed point correspondence between pre- and post-surgical images was needed. We have previously demonstrated that non-rigid registration using B-splines is able to provide automated determination of point correspondences in populations of infants without cleft lip. However, this type of registration fails when applied to the task of determining the complex deformation from before to after lip closure in infants with UCLP. The purpose of the present work was to show that use of prior information about typical deformations due to lip closure, through the construction of a knowledge-based atlas of deformations, could overcome the problem. Initially, mean volumes (atlases) for the pre- and post-surgical populations, respectively, were automatically constructed by non-rigid registration. An expert placed corresponding landmarks in the cleft area in the two atlases; this provided prior information used to build a knowledge-based deformation atlas. We model the change from pre- to post-surgery using thin-plate spline warping. The registration results are convincing and represent a first move towards an automatic registration method for dealing with difficult deformations due to this type of surgery.
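    The thin-plate spline warping used to model the change from pre- to post-surgery can be illustrated with a standard landmark-driven TPS interpolation; the landmark coordinates below are invented and this is not the authors' pipeline.

```python
# Standard landmark-driven thin-plate spline warp (invented landmarks, not the
# authors' data); requires scipy >= 1.7 for RBFInterpolator.
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding landmarks placed in the pre- and post-surgical atlases (toy 3-D points)
pre = np.array([[10.0, 5.0, 2.0], [12.0, 7.0, 2.5], [8.0, 6.0, 1.5],
                [11.0, 4.0, 3.0], [9.0, 8.0, 2.0]])
post = pre + np.array([[1.0, -0.5, 0.0], [0.8, -0.4, 0.1], [1.2, -0.6, 0.0],
                       [0.9, -0.3, 0.2], [1.1, -0.5, 0.1]])

# Thin-plate spline interpolant of the displacement field defined by the landmark pairs
tps = RBFInterpolator(pre, post - pre, kernel="thin_plate_spline")

# Warp an arbitrary point of the pre-surgical atlas into post-surgical space
query = np.array([[10.5, 6.0, 2.2]])
print(query + tps(query))
```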

  14. Scaling up explanation generation: Large-scale knowledge bases and empirical studies

    SciTech Connect

    Lester, J.C.; Porter, B.W.

    1996-12-31

    To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited. This paper reports on a seven-year effort to empirically study explanation generation from semantically rich, large-scale knowledge bases. We first describe Knight, a robust explanation system that constructs multi-sentential and multi-paragraph explanations from the Biology Knowledge Base, a large-scale knowledge base in the domain of botanical anatomy, physiology, and development. We then introduce the Two Panel evaluation methodology and describe how Knight's performance was assessed with this methodology in the most extensive empirical evaluation conducted on an explanation system. In this evaluation, Knight scored within "half a grade" of domain experts, and its performance exceeded that of one of the domain experts.

  15. RIBOWEB: linking structural computations to a knowledge base of published experimental data.

    PubMed

    Chen, R O; Felciano, R; Altman, R B

    1997-01-01

    The world wide web (WWW) has become critical for storing and disseminating biological data. It offers an additional opportunity, however, to support distributed computation and sharing of results. Currently, computational analysis tools are often separated from the data in a manner that makes iterative hypothesis testing cumbersome. We hypothesize that the cycle of scientific reasoning (using data to build models, and evaluating models in light of data) can be facilitated with resources that link computations with semantic models of the data. Riboweb is an on-line knowledge-based resource that supports the creation of three-dimensional models of the 30S ribosomal subunit. It has three components: (I) a knowledge base containing representations of the essential physical components and published structural data, (II) computational modules that use the knowledge base to build or analyze structural models, and (III) a web-based user interface that supports multiple users, sessions and computations. We have built a prototype of Riboweb, and have used it to refine a rough model of the central domain of the 30S subunit from E. coli. Our results suggest that sophisticated and integrated computational capabilities can be delivered to biologists using this simple three-component architecture. PMID:9322019

  16. Automated annual cropland mapping using knowledge-based temporal features

    NASA Astrophysics Data System (ADS)

    Waldner, François; Canto, Guadalupe Sepulcre; Defourny, Pierre

    2015-12-01

    Global, timely, accurate and cost-effective cropland mapping is a prerequisite for reliable crop condition monitoring. This article presented a simple and comprehensive methodology capable of meeting the requirements of operational cropland mapping by proposing (1) five knowledge-based temporal features that remain stable over time, (2) a cleaning method that discards misleading pixels from a baseline land cover map and (3) a classifier that delivers high accuracy cropland maps (> 80%). This was demonstrated over four contrasted agrosystems in Argentina, Belgium, China and Ukraine. It was found that the quality and accuracy of the baseline affect the certainty of the classification more than the classification output itself. In addition, it was shown that interpolation of the knowledge-based features increases the stability of the classifier, allowing for its re-use from year to year without recalibration. Hence, the method shows potential for application at larger scale as well as for delivering cropland maps in near real time.
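
    A minimal sketch of the general recipe (temporal features summarising a vegetation-index profile, fed to a supervised classifier trained on labels from a cleaned baseline map) is shown below. The five features and the random forest are illustrative stand-ins, not the published feature set or classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def temporal_features(ndvi_series):
    """Illustrative knowledge-based temporal features from an NDVI profile:
    maximum, minimum, amplitude, mean, and a simple green-up rate."""
    s = np.asarray(ndvi_series, dtype=float)
    green_up = np.max(np.diff(s)) if s.size > 1 else 0.0
    return [s.max(), s.min(), s.max() - s.min(), s.mean(), green_up]

# X_time: (n_pixels, n_dates) NDVI profiles; y: cropland labels taken from a
# cleaned baseline land cover map (both randomly generated here).
rng = np.random.default_rng(0)
X_time = rng.random((200, 23))
y = rng.integers(0, 2, 200)

X = np.array([temporal_features(row) for row in X_time])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```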

  17. Managing Project Landscapes in Knowledge-Based Enterprises

    NASA Astrophysics Data System (ADS)

    Stantchev, Vladimir; Franke, Marc Roman

    Knowledge-based enterprises are typically conducting a large number of research and development projects simultaneously. This is a particularly challenging task in complex and diverse project landscapes. Project Portfolio Management (PPM) can be a viable framework for knowledge and innovation management in such landscapes. A standardized process with defined functions such as project data repository, project assessment, selection, reporting, and portfolio reevaluation can serve as a starting point. In this work we discuss the benefits a multidimensional evaluation framework can provide for knowledge-based enterprises. Furthermore, we describe a knowledge and learning strategy and process in the context of PPM and evaluate their practical applicability at different stages of the PPM process.

  18. A specialized framework for Medical Diagnostic Knowledge Based Systems.

    PubMed Central

    Lanzola, G.; Stefanelli, M.

    1991-01-01

    For a knowledge based system (KBS) to exhibit intelligent behavior, it must be endowed not only with domain knowledge but also with knowledge representing the expert's strategies. Eliciting strategic knowledge is inherently difficult, because strategy is often tacit and, even when it has been made explicit, it is not easy to describe it in a form that can be directly translated and implemented in a program. This paper describes a Specialized Framework for Medical Diagnostic Knowledge Based Systems able to help an expert in the process of building KBSs in a medical domain. The framework is based on an epistemological model of diagnostic reasoning which has proved helpful in describing the diagnostic process in terms of the tasks of which it is composed. PMID:1807566

  19. A knowledge-based system for prototypical reasoning

    NASA Astrophysics Data System (ADS)

    Lieto, Antonio; Minieri, Andrea; Piana, Alberto; Radicioni, Daniele P.

    2015-04-01

    In this work we present a knowledge-based system equipped with a hybrid, cognitively inspired architecture for the representation of conceptual information. The proposed system aims at extending the classical representational and reasoning capabilities of ontology-based frameworks towards the realm of prototype theory. It is based on a hybrid knowledge base, composed of a classical symbolic component (grounded on a formal ontology) and a typicality-based one (grounded on the conceptual spaces framework). The resulting system attempts to reconcile the heterogeneous approach to concepts in Cognitive Science with the dual process theories of reasoning and rationality. The system has been experimentally assessed in a conceptual categorisation task where common sense linguistic descriptions were given as input, and the corresponding target concepts had to be identified. The results show that the proposed solution substantially extends the representational and reasoning 'conceptual' capabilities of standard ontology-based systems.
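
    A toy example of the typicality-based half of such a system: concepts are prototypes in a low-dimensional conceptual space, and a linguistic description, once mapped to coordinates, is categorised by proximity. The dimensions, prototypes and query below are invented for illustration.

```python
import numpy as np

# Hypothetical conceptual space: each dimension is a perceivable quality,
# and each concept is summarised by a prototype point.
DIMENSIONS = ["size", "ferocity", "speed"]
PROTOTYPES = {
    "cat":   np.array([0.2, 0.3, 0.5]),
    "tiger": np.array([0.7, 0.9, 0.7]),
    "horse": np.array([0.8, 0.2, 0.8]),
}

def categorise(description):
    """Typicality-based categorisation: rank concepts by the distance of the
    described exemplar to each prototype in the conceptual space."""
    point = np.array([description[d] for d in DIMENSIONS])
    return sorted(PROTOTYPES, key=lambda c: np.linalg.norm(point - PROTOTYPES[c]))

# "A big, fast, fairly docile animal" rendered as coordinates by hand.
print(categorise({"size": 0.85, "ferocity": 0.25, "speed": 0.75}))  # horse first
```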

  20. Knowledge-based zonal grid generation for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automation of flow field zoning in two dimensions is an important step towards reducing the difficulty of three-dimensional grid generation in computational fluid dynamics. Using a knowledge-based approach makes sense, but problems arise which are caused by aspects of zoning involving perception, lack of expert consensus, and design processes. These obstacles are overcome by means of a simple shape and configuration language, a tunable zoning archetype, and a method of assembling plans from selected, predefined subplans. A demonstration system for knowledge-based two-dimensional flow field zoning has been successfully implemented and tested on representative aerodynamic configurations. The results show that this approach can produce flow field zonings that are acceptable to experts with differing evaluation criteria.

  1. Knowledge-based interpretation of outdoor natural color scenes

    SciTech Connect

    Ohta, Y.

    1985-01-01

    One of the major targets in vision research is to develop a total vision system starting from images to a symbolic description, utilizing various knowledge sources. This book demonstrates a knowledge-based image interpretation system that analyzes natural color scenes. Topics covered include color information for region segmentation, preliminary segmentation of color images, and a bottom-up and top-down region analyzer.

  2. A knowledge based model of electric utility operations. Final report

    SciTech Connect

    1993-08-11

    This report consists of an appendix to provide a documentation and help capability for an analyst using the developed expert system of electric utility operations running in CLIPS. This capability is provided through a separate package running under the WINDOWS Operating System and keyed to provide displays of text, graphics and mixed text and graphics that explain and elaborate on the specific decisions being made within the knowledge based expert system.

  3. MetaShare: Enabling Knowledge-Based Data Management

    NASA Astrophysics Data System (ADS)

    Pennington, D. D.; Salayandia, L.; Gates, A.; Osuna, F.

    2013-12-01

    MetaShare is a free and open source knowledge-based system for supporting data management planning, now required by some agencies and publishers. MetaShare supports users as they describe the types of data they will collect, expected standards, and expected policies for sharing. MetaShare's semantic model captures relationships between disciplines, tools, data types, data formats, and metadata standards. As the user plans their data management activities, MetaShare recommends choices based on practices and decisions from a community that has used the system for similar purposes, and extends the knowledge base to capture new relationships. The MetaShare knowledge base is being seeded with information for geoscience and environmental science domains, and is currently undergoing testing at the University of Texas at El Paso. Through time and usage, it is expected to grow to support a variety of research domains, enabling community-based learning of data management practices. Knowledge of a user's choices during the planning phase can be used to support other tasks in the data life cycle, e.g., collecting, disseminating, and archiving data. A key barrier to scientific data sharing is the lack of sufficient metadata that provides context under which data were collected. The next phase of MetaShare development will automatically generate data collection instruments with embedded metadata and semantic annotations based on the information provided during the planning phase. While not comprehensive, this metadata will be sufficient for discovery and will enable users to focus on more detailed descriptions of their data. Details are available at: Salayandia, L., Pennington, D., Gates, A., and Osuna, F. (accepted). MetaShare: From data management plans to knowledge base systems. AAAI Fall Symposium Series Workshop on Discovery Informatics, November 15-17, 2013, Arlington, VA.

  4. A knowledge-based expert system for inferring vegetation characteristics

    NASA Technical Reports Server (NTRS)

    Kimes, Daniel S.; Harrison, Patrick R.; Ratcliffe, P. A.

    1991-01-01

    A prototype knowledge-based expert system VEG is presented that focuses on extracting spectral hemispherical reflectance using any combination of nadir and/or directional reflectance data as input. The system is designed to facilitate expansion to handle other inferences regarding vegetation properties such as total hemispherical reflectance, leaf area index, percent ground cover, photosynthetic capacity, and biomass. This approach is more robust and accurate than conventional extraction techniques previously developed.

  5. Current and future trends in metagenomics : Development of knowledge bases

    NASA Astrophysics Data System (ADS)

    Mori, Hiroshi; Yamada, Takuji; Kurokawa, Ken

    Microbes are essential for every part of life on Earth. Numerous microbes inhabit the biosphere, many of which are uncharacterized or uncultivable. They form complex microbial communities that deeply affect their surrounding environments. Metagenome analysis provides a radically new way of examining such complex microbial communities without isolation or cultivation of individual bacterial community members. In this article, we present a brief discussion of metagenomics and the development of knowledge bases, and also discuss future trends in metagenomics.

  6. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul

    1994-01-01

    This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called Declarative and Hybrid. Software architectures were sought, employing the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common over a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered that particular processing methods from the stochastic and knowledge-based worlds are duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.

  7. ISPE: A knowledge-based system for fluidization studies

    SciTech Connect

    Reddy, S.

    1991-01-01

    Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all specified "goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
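
    The prepare-run-analyze cycle that such an assistant automates can be sketched generically as a loop over user-supplied callables. How an ASPEN input deck is actually written and executed is site-specific, so the "simulator" below is a toy stand-in rather than the real interface.

```python
def run_until_goals_met(write_input, run_simulator, goals_met, adjust,
                        initial_params, max_iter=10):
    """Generic prepare-run-analyze loop.

    The four callables stand in for the site-specific steps: formatting the
    simulator input file, executing the simulation, checking the user's
    goals, and revising the input when the goals are not met.
    """
    params = dict(initial_params)
    for _ in range(max_iter):
        deck = write_input(params)
        results = run_simulator(deck)
        if goals_met(results):
            return params, results
        params = adjust(params, results)
    raise RuntimeError("goals not satisfied within the iteration budget")

# Toy stand-in "simulator": product purity rises with reflux ratio.
if __name__ == "__main__":
    params, results = run_until_goals_met(
        write_input=lambda p: f"REFLUX={p['reflux']:.2f}",
        run_simulator=lambda deck: {"purity": 0.80 + 0.05 * float(deck.split("=")[1])},
        goals_met=lambda r: r["purity"] >= 0.95,
        adjust=lambda p, r: {"reflux": p["reflux"] + 0.5},
        initial_params={"reflux": 1.0},
    )
    print(params, results)
```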

  8. The Knowledge Base in Education Administration: Did NCATE Open a Pandora's Box?

    ERIC Educational Resources Information Center

    Achilles, C. M.; DuVall, L.

    The controversial nature of the knowledge base of educational administration is discussed in this paper. Included are a definition of professionalism, a discussion of how to build and develop a knowledge base, and a review of the obstacles to knowledge base development. Elements of a consensual knowledge base include theory, practice, and other…

  9. Building an organized knowledge base: Concept mapping and achievement in secondary school physics

    NASA Astrophysics Data System (ADS)

    Pankratius, William J.

    Direct teaching of problem-solving methods to high school physics students met with little success. Expert problem solving depended upon an organized knowledge base. Concept mapping was found to be a key to organizing an effective knowledge base. The investigation of the effect of the degree of concept mapping on achievement was the purpose of this study. Six intact high school physics classes, taught by this investigator, took part in the study. Two classes were control groups and received standard instruction. Four classes received six weeks of concept-mapping instruction prior to the unit under study. Two of these four classes were the low-level treatment group and were required to submit concept maps at the conclusion of the instruction. The other two classes were the high-level treatment group and were required to submit concept maps at the beginning and at the conclusion of the unit under study. One class from each treatment group took a pretest prior to instruction. An analysis of the posttest results revealed no pretest sensitization. A one-way analysis of covariance indicated a significant main effect for the treatment level at the p < 0.05 level. A pair of single-df comparisons of the adjusted treatment means resulted in significant differences (p < 0.05) between the control group and the average of the treatment means as well as between the two experimental groups. It can be concluded that for this sample (upper-middle-class high school physics students) mapping concepts prior to, during, and subsequent to instruction led to greater achievement as measured by posttest scores.

  10. Using the DOE Knowledge Base for Special Event Analysis

    SciTech Connect

    Armstrong, H.M.; Harris, J.M.; Young, C.J.

    1998-10-20

    The DOE Knowledge Base is a library of detailed information whose purpose is to support the United States National Data Center (USNDC) in its mission to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). One of the important tasks which the USNDC must accomplish is to periodically perform detailed analysis of events of high interest, so-called "Special Events", to provide the national authority with information needed to make policy decisions. In this paper we investigate some possible uses of the Knowledge Base for Special Event Analysis (SEA), and make recommendations for improving Knowledge Base support for SEA. To analyze an event in detail, there are two basic types of data which must be used: sensor-derived data (waveforms, arrivals, events, etc.) and regionalized contextual data (known sources, geological characteristics, etc.). Currently there is no single package which can provide full access to both types of data, so for our study we use a separate package for each: MatSeis, the Sandia Labs-developed MATLAB-based seismic analysis package, for waveform data analysis, and ArcView, an ESRI product, for contextual data analysis. Both packages are well-suited to prototyping because they provide a rich set of currently available functionality and yet are also flexible and easily extensible. Using these tools and Phase I Knowledge Base data sets, we show how the Knowledge Base can improve both the speed and the quality of SEA. Empirically-derived interpolated correction information can be accessed to improve both location estimates and associated error estimates. This information can in turn be used to identify any known nearby sources (e.g., mines, volcanoes), which may then trigger specialized processing of the sensor data. Based on the location estimate, preferred magnitude formulas and discriminants can be retrieved, and any known blockages can be identified to prevent miscalculations. Relevant historic events can be identified either by

  11. Three forms of assessment of prior knowledge, and improved performance following an enrichment programme, of English second language biology students within the context of a marine theme

    NASA Astrophysics Data System (ADS)

    Feltham, Nicola F.; Downs, Colleen T.

    2002-02-01

    The Science Foundation Programme (SFP) was launched in 1991 at the University of Natal, Pietermaritzburg, South Africa in an attempt to equip a selected number of matriculants from historically disadvantaged schools with the skills, resources and self-confidence needed to embark on their tertiary studies. Previous research within the SFP biology component suggests that a major contributor to poor achievement and low retention rates among English second language (ESL) students in the Life Sciences is the inadequate background knowledge in natural history. In this study, SFP student background knowledge was assessed along a continuum of language dependency using a set of three probes. Improved student performance in each of the respective assessments examined the extent to which a sound natural history background facilitated meaningful learning relative to ESL proficiency. Student profiles and attitudes to biology were also examined. Results indicated that students did not perceive language to be a problem in biology. However, analysis of the student performance in the assessment probes indicated that, although the marine course provided the students with the background knowledge that they were initially lacking, they continued to perform better in the drawing and MCQ tools in the post-tests, suggesting that it is their inability to express themselves in the written form that hampers their development. These results have implications for curriculum development within the constructivist framework of the SFP.

  12. Knowledge-based GIS techniques applied to geological engineering

    USGS Publications Warehouse

    Usery, E. Lynn; Altheide, Phyllis; Deister, Robin R.P.; Barr, David J.

    1988-01-01

    A knowledge-based geographic information system (KBGIS) approach which requires development of a rule base for both GIS processing and for the geological engineering application has been implemented. The rule bases are implemented in the Goldworks expert system development shell interfaced to the Earth Resources Data Analysis System (ERDAS) raster-based GIS for input and output. GIS analysis procedures including recoding, intersection, and union are controlled by the rule base, and the geological engineering map product is generated by the expert system. The KBGIS has been used to generate a geological engineering map of Creve Coeur, Missouri.

  13. Modeling materials failures for knowledge based system applications

    SciTech Connect

    Roberge, P.R.

    1996-12-31

    The evaluation of the probability of given premises to play a role in a final outcome can only be done when the parameters involved and their interactions are properly elucidated. But, for complex engineering situations, this often appears as an insurmountable task. The prediction of failures for the optimization of inspection and maintenance is such an example of complexity. After reviewing the models of expertise commonly used by knowledge engineers, this paper presents an object-oriented framework to guide the elicitation and organization of lifetime information for knowledge based system applications.

  14. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  15. Knowledge-based potential functions in protein design.

    PubMed

    Russ, William P; Ranganathan, Rama

    2002-08-01

    Predicting protein sequences that fold into specific native three-dimensional structures is a problem of great potential complexity. Although the complete solution is ultimately rooted in understanding the physical chemistry underlying the complex interactions between amino acid residues that determine protein stability, recent work shows that empirical information about these first principles is embedded in the statistics of protein sequence and structure databases. This review focuses on the use of 'knowledge-based' potentials derived from these databases in designing proteins. In addition, the data suggest how the study of these empirical potentials might impact our fundamental understanding of the energetic principles of protein structure. PMID:12163066
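
    A common form of such a knowledge-based potential is the inverse-Boltzmann contact potential, sketched below with made-up counts; real potentials are derived from large structure databases and use more careful reference states.

```python
import numpy as np

def contact_potential(observed_counts, kT=0.593):
    """Knowledge-based pair potential via the inverse Boltzmann relation:
    E(a, b) = -kT * ln( f_obs(a, b) / f_exp(a, b) ),
    where f_exp assumes residue types pair independently of identity.
    kT is in kcal/mol at ~298 K; the counts here are illustrative only.
    """
    counts = np.asarray(observed_counts, dtype=float)
    total = counts.sum()
    f_obs = counts / total
    marginal = counts.sum(axis=1) / total        # frequency of each residue type
    f_exp = np.outer(marginal, marginal)
    return -kT * np.log(f_obs / f_exp)

# Toy 3-type contact count matrix (e.g. hydrophobic, polar, charged).
counts = np.array([[400., 120.,  80.],
                   [120., 150.,  90.],
                   [ 80.,  90.,  60.]])
print(np.round(contact_potential(counts), 2))
```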

  16. Building validation tools for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.

    1987-01-01

    The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (higher-order language), is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.

  17. SAFOD Brittle Microstructure and Mechanics Knowledge Base (BM2KB)

    NASA Astrophysics Data System (ADS)

    Babaie, Hassan A.; Broda Cindi, M.; Hadizadeh, Jafar; Kumar, Anuj

    2013-07-01

    Scientific drilling near Parkfield, California has established the San Andreas Fault Observatory at Depth (SAFOD), which provides the solid earth community with short range geophysical and fault zone material data. The BM2KB ontology was developed in order to formalize the knowledge about brittle microstructures in the fault rocks sampled from the SAFOD cores. A knowledge base, instantiated from this domain ontology, stores and presents the observed microstructural and analytical data with respect to implications for brittle deformation and mechanics of faulting. These data can be searched on the knowledge base's Web interface by selecting a set of terms (classes, properties) from different drop-down lists that are dynamically populated from the ontology. In addition to this general search, a query can also be conducted to view data contributed by a specific investigator. A search by sample is done using the EarthScope SAFOD Core Viewer that allows a user to locate samples on high resolution images of core sections belonging to different runs and holes. The class hierarchy of the BM2KB ontology was initially designed using the Unified Modeling Language (UML), which was used as a visual guide to develop the ontology in OWL applying the Protégé ontology editor. Various Semantic Web technologies such as the RDF, RDFS, and OWL ontology languages, SPARQL query language, and Pellet reasoning engine, were used to develop the ontology. An interactive Web application interface was developed through Jena, a java based framework, with AJAX technology, jsp pages, and java servlets, and deployed via an Apache tomcat server. The interface allows the registered user to submit data related to their research on a sample of the SAFOD core. The submitted data, after initial review by the knowledge base administrator, are added to the extensible knowledge base and become available in subsequent queries to all types of users. The interface facilitates inference capabilities in the
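
    On the query side, a knowledge base of this kind can be interrogated with SPARQL. The snippet below uses Python's rdflib rather than the Jena stack described above, and the file name, namespace, and class/property names are hypothetical placeholders, not the actual BM2KB vocabulary.

```python
from rdflib import Graph

# Hypothetical local copy of the ontology/knowledge base.
g = Graph()
g.parse("bm2kb.owl", format="xml")

QUERY = """
PREFIX bm: <http://example.org/bm2kb#>
SELECT ?sample ?microstructure
WHERE {
  ?obs a bm:Observation ;
       bm:onSample ?sample ;
       bm:showsMicrostructure ?microstructure .
}
"""

# Each row pairs a core sample with a reported brittle microstructure.
for sample, microstructure in g.query(QUERY):
    print(sample, microstructure)
```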

  18. Building a knowledge based economy in Russia using guided entrepreneurship

    NASA Astrophysics Data System (ADS)

    Reznik, Boris N.; Daniels, Marc; Ichim, Thomas E.; Reznik, David L.

    2005-06-01

    Despite advanced scientific and technological (S&T) expertise, the Russian economy is presently based upon manufacturing and raw material exports. Currently, governmental incentives are attempting to leverage the existing scientific infrastructure through the concept of building a Knowledge Based Economy. However, socio-economic changes do not occur solely by decree, but by alteration of approach to the market. Here we describe the "Guided Entrepreneurship" plan, a series of steps needed for generation of an army of entrepreneurs, which initiate a chain reaction of S&T-driven growth. The situation in Russia is placed in the framework of other areas where Guided Entrepreneurship has been successful.

  19. Knowledge-Based Parallel Performance Technology for Scientific Application Competitiveness Final Report

    SciTech Connect

    Malony, Allen D; Shende, Sameer

    2011-08-15

    The primary goal of the University of Oregon's DOE "competitiveness" project was to create performance technology that embodies and supports knowledge of performance data, analysis, and diagnosis in parallel performance problem solving. The target of our development activities was the TAU Performance System and the technology accomplishments reported in this and prior reports have all been incorporated in the TAU open software distribution. In addition, the project has been committed to maintaining strong interactions with the DOE SciDAC Performance Engineering Research Institute (PERI) and Center for Technology for Advanced Scientific Component Software (TASCS). This collaboration has proved valuable for translation of our knowledge-based performance techniques to parallel application development and performance engineering practice. Our outreach has also extended to the DOE Advanced CompuTational Software (ACTS) collection and project. Throughout the project we have participated in the PERI and TASCS meetings, as well as the ACTS annual workshops.

  20. A prototype knowledge-based simulation support system

    SciTech Connect

    Hill, T.R.; Roberts, S.D.

    1987-04-01

    As a preliminary step toward the goal of an intelligent automated system for simulation modeling support, we explore the feasibility of the overall concept by generating and testing a prototypical framework. A prototype knowledge-based computer system was developed to support a senior level course in industrial engineering so that the overall feasibility of an expert simulation support system could be studied in a controlled and observable setting. The system behavior mimics the diagnostic (intelligent) process performed by the course instructor and teaching assistants, finding logical errors in INSIGHT simulation models and recommending appropriate corrective measures. The system was programmed in a non-procedural language (PROLOG) and designed to run interactively with students working on course homework and projects. The knowledge-based structure supports intelligent behavior, providing its users with access to an evolving accumulation of expert diagnostic knowledge. The non-procedural approach facilitates the maintenance of the system and helps merge the roles of expert and knowledge engineer by allowing new knowledge to be easily incorporated without regard to the existing flow of control. The background, features and design of the system are described and preliminary results are reported. Initial success is judged to demonstrate the utility of the reported approach and support the ultimate goal of an intelligent modeling system which can support simulation modelers outside the classroom environment. Finally, future extensions are suggested.

  1. Big data analytics in immunology: a knowledge-based approach.

    PubMed

    Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow. PMID:25045677

  2. Big Data Analytics in Immunology: A Knowledge-Based Approach

    PubMed Central

    Zhang, Guang Lan

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow. PMID:25045677

  3. Knowledge-based simulation using object-oriented programming

    NASA Technical Reports Server (NTRS)

    Sidoran, Karen M.

    1993-01-01

    Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.
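
    The object-oriented framing of factual versus behavioral knowledge can be shown with a minimal discrete-event skeleton; the entity, event names and scenario are invented for illustration, not taken from RASE.

```python
import heapq

class Entity:
    """Simulation entity holding factual knowledge (attributes) and
    behavioral knowledge (methods fired when events are processed)."""
    def __init__(self, name, speed):
        self.name, self.speed = name, speed      # factual knowledge

    def on_event(self, sim, event):              # behavioral knowledge
        if event == "detected":
            sim.schedule(sim.now + 10.0 / self.speed, self, "intercept")

class Simulation:
    def __init__(self):
        self.now, self._queue, self._n = 0.0, [], 0

    def schedule(self, time, entity, event):
        self._n += 1                              # tie-breaker for the heap
        heapq.heappush(self._queue, (time, self._n, entity, event))

    def run(self):
        while self._queue:
            self.now, _, entity, event = heapq.heappop(self._queue)
            print(f"t={self.now:5.2f}  {entity.name}: {event}")
            entity.on_event(self, event)

# Hypothetical single-entity scenario.
sim = Simulation()
sim.schedule(1.0, Entity("interceptor-1", speed=2.0), "detected")
sim.run()
```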

  4. Knowledge-based imaging-sensor fusion system

    NASA Technical Reports Server (NTRS)

    Westrom, George

    1989-01-01

    An imaging system which applies knowledge-based technology to supervise and control both sensor hardware and computation in the imaging system is described. It includes the development of an imaging system breadboard which brings together into one system work that we and others have pursued for LaRC for several years. The goal is to combine Digital Signal Processing (DSP) with Knowledge-Based Processing and also include Neural Net processing. The system is considered a smart camera. Imagine that there is a microgravity experiment on-board Space Station Freedom with a high frame rate, high resolution camera. All the data cannot possibly be acquired from a laboratory on Earth. In fact, only a small fraction of the data will be received. Again, imagine being responsible for some experiments on Mars with the Mars Rover: the data rate is a few kilobits per second for data from several sensors and instruments. Would it not be preferable to have a smart system which would have some human knowledge and yet follow some instructions and attempt to make the best use of the limited bandwidth for transmission. The system concept, current status of the breadboard system and some recent experiments at the Mars-like Amboy Lava Fields in California are discussed.

  5. Portable Knowledge-Based Diagnostic And Maintenance Systems

    NASA Astrophysics Data System (ADS)

    Darvish, John; Olson, Noreen S.

    1989-03-01

    It is difficult to diagnose faults and maintain weapon systems because (1) they are highly complex pieces of equipment composed of multiple mechanical, electrical, and hydraulic assemblies, and (2) talented maintenance personnel are continuously being lost through the attrition process. To solve this problem, we developed a portable diagnostic and maintenance aid that uses a knowledge-based expert system. This aid incorporates diagnostics, operational procedures, repair and replacement procedures, and regularly scheduled maintenance into one compact, 18-pound graphics workstation. Drawings and schematics can be pulled up from the CD-ROM to assist the operator in answering the expert system's questions. Work for this aid began with the development of the initial knowledge-based expert system in a fast prototyping environment using a LISP machine. The second phase saw the development of a personal computer-based system that used videodisc technology to pictorially assist the operator. The current version of the aid eliminates the high expenses associated with videodisc preparation by scanning in the art work already in the manuals. A number of generic software tools have been developed that streamlined the construction of each iteration of the aid; these tools will be applied to the development of future systems.

  6. Framework Support For Knowledge-Based Software Development

    NASA Astrophysics Data System (ADS)

    Huseth, Steve

    1988-03-01

    The advent of personal engineering workstations has brought substantial information processing power to the individual programmer. Advanced tools and environment capabilities supporting the software lifecycle are just beginning to become generally available. However, many of these tools are addressing only part of the software development problem by focusing on rapid construction of self-contained programs by a small group of talented engineers. Additional capabilities are required to support the development of large programming systems where a high degree of coordination and communication is required among large numbers of software engineers, hardware engineers, and managers. A major player in realizing these capabilities is the framework supporting the software development environment. In this paper we discuss our research toward a Knowledge-Based Software Assistant (KBSA) framework. We propose the development of an advanced framework containing a distributed knowledge base that can support the data representation needs of tools, provide environmental support for the formalization and control of the software development process, and offer a highly interactive and consistent user interface.

  7. Utilizing knowledge-base semantics in graph-based algorithms

    SciTech Connect

    Darwiche, A.

    1996-12-31

    Graph-based algorithms convert a knowledge base with a graph structure into one with a tree structure (a join-tree) and then apply tree-inference on the result. Nodes in the join-tree are cliques of variables and tree-inference is exponential in w*, the size of the maximal clique in the join-tree. A central property of join-trees that validates tree-inference is the running-intersection property: the intersection of any two cliques must belong to every clique on the path between them. We present two key results in connection to graph-based algorithms. First, we show that the running-intersection property, although sufficient, is not necessary for validating tree-inference. We present a weaker property for this purpose, called running-interaction, that depends on non-structural (semantical) properties of a knowledge base. We also present a linear algorithm that may reduce w* of a join-tree, possibly destroying its running-intersection property, while maintaining its running-interaction property and, hence, its validity for tree-inference. Second, we develop a simple algorithm for generating trees satisfying the running-interaction property. The algorithm bypasses triangulation (the standard technique for constructing join-trees) and does not construct a join-tree first. We show that the proposed algorithm may in some cases generate trees that are more efficient than those generated by modifying a join-tree.
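
    The running-intersection property itself is easy to state in code: for every pair of cliques, their shared variables must appear in every clique on the connecting path. A small checker over an invented three-clique tree is sketched below; the weaker running-interaction property additionally depends on the knowledge base's semantics and is not captured here.

```python
from itertools import combinations

import networkx as nx

def has_running_intersection(tree, cliques):
    """Check the running-intersection property of a join-tree.

    `tree` is an undirected networkx tree whose nodes index `cliques`
    (each clique a set of variables): the intersection of any two cliques
    must be contained in every clique on the path between them.
    """
    for a, b in combinations(tree.nodes, 2):
        shared = cliques[a] & cliques[b]
        path = nx.shortest_path(tree, a, b)
        if any(not shared <= cliques[n] for n in path[1:-1]):
            return False
    return True

# Small illustrative join-tree over variables {A, B, C, D}.
cliques = {0: {"A", "B"}, 1: {"B", "C"}, 2: {"C", "D"}}
tree = nx.Graph([(0, 1), (1, 2)])
print(has_running_intersection(tree, cliques))   # True

cliques[2] = {"A", "D"}                          # "A" now skips the middle clique
print(has_running_intersection(tree, cliques))   # False
```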

  8. Interactive classification: A technique for acquiring and maintaining knowledge bases

    SciTech Connect

    Finin, T.W.

    1986-10-01

    The practical application of knowledge-based systems, such as in expert systems, often requires the maintenance of large amounts of declarative knowledge. As a knowledge base (KB) grows in size and complexity, it becomes more difficult to maintain and extend. Even someone who is familiar with the knowledge domain, how it is represented in the KB, and the actual contents of the current KB may have severe difficulties in updating it. Even if the difficulties can be tolerated, there is a very real danger that inconsistencies and errors may be introduced into the KB through the modification. This paper describes an approach to this problem based on a tool called an interactive classifier. An interactive classifier uses the contents of the existing KB and knowledge about its representation to help the maintainer describe new KB objects. The interactive classifier will identify the appropriate taxonomic location for the newly described object and add it to the KB. The new object is allowed to be a generalization of existing KB objects, enabling the system to learn more about existing objects.

  9. Network fingerprint: a knowledge-based characterization of biomedical networks

    PubMed Central

    Cui, Xiuliang; He, Haochen; He, Fuchu; Wang, Shengqi; Li, Fei; Bo, Xiaochen

    2015-01-01

    It can be difficult for biomedical researchers to understand complex molecular networks due to their unfamiliarity with the mathematical concepts employed. To represent molecular networks with clear meanings and familiar forms for biomedical researchers, we introduce a knowledge-based computational framework to decipher biomedical networks by making systematic comparisons to well-studied “basic networks”. A biomedical network is characterized as a spectrum-like vector called “network fingerprint”, which contains similarities to basic networks. This knowledge-based multidimensional characterization provides a more intuitive way to decipher molecular networks, especially for large-scale network comparisons and clustering analyses. As an example, we extracted network fingerprints of 44 disease networks in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The comparisons among the network fingerprints of disease networks revealed informative disease-disease and disease-signaling pathway associations, illustrating that the network fingerprinting framework will lead to new approaches for better understanding of biomedical networks. PMID:26307246
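
    A toy version of the idea follows, with a few global graph statistics standing in for the paper's similarity measure and randomly generated graphs standing in for the KEGG disease networks and the curated basic networks.

```python
import numpy as np
import networkx as nx

def fingerprint(net, basic_networks):
    """Toy 'network fingerprint': similarity of `net` to each basic network,
    computed from a few global graph statistics (a stand-in for the
    published similarity measure)."""
    def stats(g):
        degrees = [d for _, d in g.degree()]
        return np.array([np.mean(degrees), nx.density(g),
                         nx.average_clustering(g)])
    s = stats(net)
    return {name: 1.0 / (1.0 + np.linalg.norm(s - stats(b)))
            for name, b in basic_networks.items()}

basics = {
    "random":      nx.gnp_random_graph(50, 0.1, seed=1),
    "scale-free":  nx.barabasi_albert_graph(50, 2, seed=1),
    "small-world": nx.watts_strogatz_graph(50, 4, 0.1, seed=1),
}
disease_net = nx.gnp_random_graph(50, 0.12, seed=2)   # placeholder network
print(fingerprint(disease_net, basics))
```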

  10. Hospital nurses' use of knowledge-based information resources.

    PubMed

    Tannery, Nancy Hrinya; Wessel, Charles B; Epstein, Barbara A; Gadd, Cynthia S

    2007-01-01

    The purpose of this study was to evaluate the information-seeking practices of nurses before and after access to a library's electronic collection of information resources. This is a pre/post intervention study of nurses at a rural community hospital. The hospital contracted with an academic health sciences library for access to a collection of online knowledge-based resources. Self-report surveys were used to obtain information about nurses' computer use and how they locate and access information to answer questions related to their patient care activities. In 2001, self-report surveys were sent to the hospital's 573 nurses during implementation of access to online resources with a post-implementation survey sent 1 year later. At the initiation of access to the library's electronic resources, nurses turned to colleagues and print textbooks or journals to satisfy their information needs. After 1 year of access, 20% of the nurses had begun to use the library's electronic resources. The study outcome suggests ready access to knowledge-based electronic information resources can lead to changes in behavior among some nurses. PMID:17289463

  11. A proven knowledge-based approach to prioritizing process information

    NASA Technical Reports Server (NTRS)

    Corsberg, Daniel R.

    1991-01-01

    Many space-related processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect is rapid analysis of the changing process information. During a disturbance, this task can overwhelm humans as well as computers. Humans deal with this by applying heuristics in determining significant information. A simple, knowledge-based approach to prioritizing information is described. The approach models those heuristics that humans would use in similar circumstances. The approach described has received two patents and was implemented in the Alarm Filtering System (AFS) at the Idaho National Engineering Laboratory (INEL). AFS was first developed for application in a nuclear reactor control room. It has since been used in chemical processing applications, where it has had a significant impact on control room environments. The approach uses knowledge-based heuristics to analyze data from process instrumentation and respond to that data according to knowledge encapsulated in objects and rules. While AFS cannot perform the complete diagnosis and control task, it has proven to be extremely effective at filtering and prioritizing information. AFS was used for over two years as a first level of analysis for human diagnosticians. Given the approach's proven track record in a wide variety of practical applications, it should be useful in both ground- and space-based systems.
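
    A minimal sketch of this heuristic style of filtering is given below: rules examine each alarm in the context of the currently active set and adjust its priority. The alarm fields and rules are invented for illustration, not those encoded in AFS.

```python
# Invented heuristic rules: (description, predicate, priority adjustment).
RULES = [
    ("direct consequence of an earlier alarm",
     lambda a, active: a.get("caused_by") in active, -2),
    ("safety-related system",
     lambda a, active: a.get("system") in {"reactor", "coolant"}, +3),
    ("sensor flagged as faulty",
     lambda a, active: a.get("sensor_ok") is False, -3),
]

def prioritise(alarms):
    """Return alarms sorted from most to least significant."""
    active = {a["id"] for a in alarms}
    scored = []
    for a in alarms:
        score = 1
        for _, predicate, delta in RULES:
            if predicate(a, active):
                score += delta
        scored.append((score, a))
    return [a for score, a in sorted(scored, key=lambda x: -x[0])]

alarms = [
    {"id": "P-101", "system": "coolant", "sensor_ok": True},
    {"id": "T-220", "system": "hvac", "caused_by": "P-101", "sensor_ok": True},
]
print([a["id"] for a in prioritise(alarms)])   # coolant alarm first
```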

  12. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  13. A knowledge-based care protocol system for ICU.

    PubMed

    Lau, F; Vincent, D D

    1995-01-01

    There is a growing interest in using care maps in ICU. So far, the emphasis has been on developing the critical path, problem/outcome, and variance reporting for specific diagnoses. This paper presents a conceptual knowledge-based care protocol system design for the ICU. It is based on the manual care map currently in use for managing myocardial infarction in the ICU of the Sturgeon General Hospital in Alberta. The proposed design uses expert rules, object schemas, case-based reasoning, and quantitative models as sources of its knowledge. Also being developed is a decision model with explicit linkages for outcome-process-measure from the care map. The resulting system is intended as a bedside charting and decision-support tool for caregivers. Proposed usage includes charting by acknowledgment, generation of alerts, and critiques on variances/events recorded, recommendations for planned interventions, and comparison with historical cases. Currently, a prototype is being developed on a PC-based network with Visual Basic, Level-Expert Object, and xBase. A clinical trial is also planned to evaluate whether this knowledge-based care protocol can reduce the length of stay of patients with myocardial infarction in the ICU. PMID:8591604

  14. Mindtagger: A Demonstration of Data Labeling in Knowledge Base Construction

    PubMed Central

    Shin, Jaeho; Ré, Christopher; Cafarella, Michael

    2016-01-01

    End-to-end knowledge base construction systems using statistical inference are enabling more people to automatically extract high-quality domain-specific information from unstructured data. As a result of deploying DeepDive framework across several domains, we found new challenges in debugging and improving such end-to-end systems to construct high-quality knowledge bases. DeepDive has an iterative development cycle in which users improve the data. To help our users, we needed to develop principles for analyzing the system's error as well as provide tooling for inspecting and labeling various data products of the system. We created guidelines for error analysis modeled after our colleagues' best practices, in which data labeling plays a critical role in every step of the analysis. To enable more productive and systematic data labeling, we created Mindtagger, a versatile tool that can be configured to support a wide range of tasks. In this demonstration, we show in detail what data labeling tasks are modeled in our error analysis guidelines and how each of them is performed using Mindtagger. PMID:27144082

  15. Knowledge-based inference engine for online video dissemination

    NASA Astrophysics Data System (ADS)

    Zhou, Wensheng; Kuo, C.-C. Jay

    2000-10-01

    To facilitate easy access to rich multimedia information over the Internet, we develop a knowledge-based classification system that supports automatic indexing and filtering based on semantic concepts for the dissemination of on-line real-time media. Automatic segmentation, annotation and summarization of media for fast information browsing and updating are achieved at the same time. In the proposed system, a real-time scene-change detection proxy performs an initial video structuring process by splitting a video clip into scenes. Motion and visual features are extracted in real time for every detected scene by using online feature extraction proxies. Higher semantics are then derived through a joint use of low-level features along with inference rules in the knowledge base. Inference rules are derived through a supervised learning process based on representative samples. On-line media filtering based on semantic concepts becomes possible by using the proposed video inference engine. Video streams are either blocked or sent to certain channels depending on whether or not the video stream is matched with the user's profile. The proposed system is extensively evaluated by applying the engine to video of basketball games.

  16. Knowledge-based assistant for ultrasonic inspection in metals

    NASA Astrophysics Data System (ADS)

    Franklin, Reynold; Halabe, Udaya B.

    1997-12-01

    Ultrasonic inspection is a popular nondestructive testing technique for detecting flaws in metals, composites and other materials. A major limitation of this technique for successful field implementation is the need for skilled labor to identify an appropriate testing methodology and conduct the inspection. A knowledge-based assistant that can help the inspector in choosing the suitable testing methodology would greatly reduce the cost of inspection while maintaining reliability. Therefore a rule-based decision logic that can incorporate the expertise of a skilled operator for choosing a suitable ultrasonic configuration and testing procedure for a given application is explored and reported in this paper. A personal computer (PC) based expert system shell, VP Expert, is used to encode the rules and assemble the knowledge to address the different methods in ultrasonic inspection for metals. The expert system will be configured in a question-answer format. Since several factors (such as frequency, couplant, sensors, etc.) influence the inspection, appropriate decisions have to be made about each factor depending on the type of inspection method and the intended use of the metal. This knowledge base will help in identifying the methodology for detecting flaws and cracks, making thickness measurements, etc., which will lead to increased safety.

  17. Strong earthquakes knowledge base for calibrating fast damage assessment systems

    NASA Astrophysics Data System (ADS)

    Frolova, N.; Kozlov, M.; Larionov, V.; Nikolaev, A.; Suchshev, S.; Ugarov, A.

    2003-04-01

    At present, systems for fast damage and loss assessment due to strong earthquakes may use as input data: (1) information about event parameters (magnitude, depth and coordinates) issued by Alert Seismological Surveys; (2) waveform data obtained by strong-motion seismograph networks; (3) high resolution space images of the affected area obtained before and after the event. When data about the magnitude, depth and location of an event are used to simulate possible consequences, the reliability of the estimates depends on the completeness and reliability of the databases of elements at risk (population and built environment), the reliability of the vulnerability functions of the elements at risk, and the errors in the determination of strong earthquake parameters by Alert Seismological Surveys. Some of these factors may be taken into account by calibrating the system against well documented past strong earthquakes. The paper describes the structure and content of a knowledge base of well documented strong events which occurred in the last century. It contains descriptions of more than 1000 events. The data are distributed almost homogeneously as far as losses due to earthquakes are concerned; most events are in the magnitude range 6.5-7.9. Software was created to accumulate and analyze information about these events' source parameters and social consequences. The resulting knowledge base is used to calibrate the Fast Damage Assessment Tool, which is at present on duty within the framework of the EDRIM Program. It is also used as additional information by experts who analyze the results of computations.

  18. Automatic in-syringe dispersive liquid-liquid microextraction of ⁹⁹Tc from biological samples and hospital residues prior to liquid scintillation counting.

    PubMed

    Villar, Marina; Avivar, Jessica; Ferrer, Laura; Borràs, Antoni; Vega, Fernando; Cerdà, Víctor

    2015-07-01

    A new approach exploiting in-syringe dispersive liquid-liquid microextraction (DLLME) for (99)Tc extraction and preconcentration from biological samples, i.e., urine and saliva, and from liquid residues from treated patients is presented. (99)Tc is a beta emitter with a long half-life (2.111 × 10(5) years) and is mobile in the different environmental compartments. One source of this radionuclide is the use of its parent (99m)Tc in medical diagnosis. For the first time, a critical comparison between extractants and disperser solvents for (99)Tc DLLME is presented, e.g., tributyl phosphate (TBP), trioctylmethylammonium chloride (Aliquat®336) and triisooctylamine (TiOA) as extractants in apolar solvents such as xylene and dodecane, and disperser solvents such as acetone, acetonitrile, ethanol, methanol, 1-propanol, and 2-propanol. The system was optimized by experimental design, and 22.5% Aliquat®336 in acetone was selected as the extractant and disperser. Off-line detection was performed using a liquid scintillation counter. The present method has a (99)Tc minimum detectable activity (MDA) of 0.075 Bq with a high extraction/preconcentration frequency (8 h(-1)). Urine, saliva, and hospital residues were satisfactorily analyzed, with recoveries of 82-119%. Thus, the proposed system is a powerful automatic tool to monitor the entry of (99)Tc into the environment. Graphical Abstract (99m)Tc is widely used in nuclear medicine for diagnosis. Its daughter (99)Tc is automatically monitored in biological samples from treated patients by in-syringe dispersive liquid-liquid microextraction. PMID:26007698

  19. Baseline levels of bioaerosols and volatile organic compounds around a municipal waste incinerator prior to the construction of a mechanical-biological treatment plant

    SciTech Connect

    Vilavert, Lolita; Nadal, Marti; Inza, Isabel; Figueras, Maria J.; Domingo, Jose L.

    2009-09-15

    New waste management programs are currently aimed at developing alternative treatment technologies such as mechanical-biological treatment (MBT) and composting plants. However, there is still a high uncertainty concerning the chemical and microbiological risks for human health, not only for workers of these facilities, but also for the population living in the neighborhood. A new MBT plant is planned to be constructed adjacent to a municipal solid waste incinerator (MSWI) in Tarragona (Catalonia, Spain). In order to evaluate its potential impact and to differentiate the impacts of MSWI from those of the MBT when the latter is operative, a pre-operational survey was initiated by determining the concentrations of 20 volatile organic compounds (VOCs) and bioaerosols (total bacteria, Gram-negative bacteria, fungi and Aspergillus fumigatus) in airborne samples around the MSWI. The results indicated that the current concentrations of bioaerosols (ranges: 382-3882, 18-790, 44-926, and <1-7 CFU/m³ for fungi at 25 °C, fungi at 37 °C, total bacteria, and Gram-negative bacteria, respectively) and VOCs (ranging from 0.9 to 121.2 µg/m³) are very low in comparison to reported levels in indoor and outdoor air in composting and MBT plants, as well as in urban and industrial zones. With the exception of total bacteria, no correlations were observed between the environmental concentrations of biological agents and the direction/distance from the facility. However, total bacteria presented significantly higher levels downwind. Moreover, a non-significant increase of VOCs was detected in sites closer to the incinerator, which means that the MSWI could have a very minor impact on the surrounding environment.

  20. Baseline levels of bioaerosols and volatile organic compounds around a municipal waste incinerator prior to the construction of a mechanical-biological treatment plant.

    PubMed

    Vilavert, Lolita; Nadal, Martí; Inza, Isabel; Figueras, María J; Domingo, José L

    2009-09-01

    New waste management programs are currently aimed at developing alternative treatment technologies such as mechanical-biological treatment (MBT) and composting plants. However, there is still a high uncertainty concerning the chemical and microbiological risks for human health, not only for workers of these facilities, but also for the population living in the neighborhood. A new MBT plant is planned to be constructed adjacent to a municipal solid waste incinerator (MSWI) in Tarragona (Catalonia, Spain). In order to evaluate its potential impact and to differentiate the impacts of MSWI from those of the MBT when the latter is operative, a pre-operational survey was initiated by determining the concentrations of 20 volatile organic compounds (VOCs) and bioaerosols (total bacteria, gram-negative bacteria, fungi and Aspergillus fumigatus) in airborne samples around the MSWI. The results indicated that the current concentrations of bioaerosols (ranges: 382-3882, 18-790, 44-926, and <1-7 CFU/m(3) for fungi at 25 degrees C, fungi at 37 degrees C, total bacteria, and gram-negative bacteria, respectively) and VOCs (ranging from 0.9 to 121.2 microg/m(3)) are very low in comparison to reported levels in indoor and outdoor air in composting and MBT plants, as well as in urban and industrial zones. With the exception of total bacteria, no correlations were observed between the environmental concentrations of biological agents and the direction/distance from the facility. However, total bacteria presented significantly higher levels downwind. Moreover, a non-significant increase of VOCs was detected in sites closer to the incinerator, which means that the MSWI could have a very minor impact on the surrounding environment. PMID:19346120

  1. A knowledge-based control system for air-scour optimisation in membrane bioreactors.

    PubMed

    Ferrero, G; Monclús, H; Sancho, L; Garrido, J M; Comas, J; Rodríguez-Roda, I

    2011-01-01

    Although membrane bioreactor (MBR) technology is still a growing sector, its progressive implementation all over the world, together with great technical achievements, has allowed it to reach a degree of maturity comparable to other, more conventional wastewater treatment technologies. With current energy requirements around 0.6-1.1 kWh/m3 of treated wastewater and investment costs similar to conventional treatment plants, the main market niche for MBRs is areas with very restrictive discharge limits, where treatment plants have to be compact, or where water reuse is necessary. Operational costs are higher than for conventional treatments; consequently, there is still a need and scope for energy saving and optimisation. This paper presents the development of a knowledge-based decision support system (DSS) for the integrated operation and remote control of the biological and physical (filtration and backwashing or relaxation) processes in MBRs. The core of the DSS is a knowledge-based control module for the automation of air-scour consumption and the minimisation of energy consumption. PMID:21902045
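
    As an illustration of how a knowledge-based control rule might trade air-scour energy against fouling risk (the thresholds and variable names below are hypothetical, not taken from the DSS described above), consider the following sketch.

        # A minimal sketch, not the authors' DSS: a rule that steps membrane
        # air-scour intensity up or down from trans-membrane pressure (TMP) trends.
        # Thresholds are hypothetical.
        def adjust_air_scour(tmp_slope_mbar_per_h, current_flow_nm3_per_h,
                             min_flow=20.0, max_flow=80.0, step=5.0):
            """Return a new air-scour set point based on the fouling-rate rule."""
            if tmp_slope_mbar_per_h > 1.0:          # fouling accelerating -> more scouring
                return min(current_flow_nm3_per_h + step, max_flow)
            if tmp_slope_mbar_per_h < 0.2:          # stable operation -> save energy
                return max(current_flow_nm3_per_h - step, min_flow)
            return current_flow_nm3_per_h           # otherwise keep the current set point

        if __name__ == "__main__":
            print(adjust_air_scour(1.4, 50.0))  # 55.0: more air when TMP rises quickly
            print(adjust_air_scour(0.1, 50.0))  # 45.0: less air when TMP is stable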

  2. ASExpert: an integrated knowledge-based system for activated sludge plants.

    PubMed

    Sorour, M T; Bahgat, L M F; El, Iskandarani M A; Horan, N J

    2002-08-01

    The activated sludge process is commonly used for secondary wastewater treatment worldwide. This process is capable of achieving high-quality effluent. However, it has the reputation of being difficult to operate because of its poorly understood biological behaviour, the variability of input flows and the need to incorporate qualitative data. To augment this incomplete knowledge with experience, knowledge-based systems were introduced in the 1980s; however, they did not gain much popularity. This paper presents the Activated Sludge Expert system (ASExpert), a rule-based expert system combined with a complete database tool proposed for use in activated sludge plants. The paper focuses on presenting the system's main features and capabilities in order to revive interest in knowledge-based systems as a reliable means of monitoring plants. It then presents the methodology adopted for ASExpert validation, along with an assessment of the testing results. Finally, it concludes that expert systems technology has proved its value for enhancing performance, especially if it is integrated with a modern control system in the future. PMID:12211453

  3. Knowledge-based system for flight information management. Thesis

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.

    1990-01-01

    The use of knowledge-based system (KBS) architectures to manage information on the primary flight display (PFD) of commercial aircraft is described. The PFD information management strategy tailored the information on the PFD to the tasks the pilot performed. The KBS design and implementation of the task-tailored PFD information management application is described. The knowledge acquisition and subsequent system design of a flight-phase-detection KBS is also described. The flight-phase output of this KBS was used as input to the task-tailored PFD information management KBS. The implementation and integration of this KBS with existing aircraft systems and the other KBS is described. Flight tests of both KBSs, collectively called the Task-Tailored Flight Information Manager (TTFIM), are examined; they verified the implementation and integration of the KBSs and validated the software engineering advantages of the KBS approach in an operational environment.

  4. The Knowledge-Based Software Assistant: Beyond CASE

    NASA Technical Reports Server (NTRS)

    Carozzoni, Joseph A.

    1993-01-01

    This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.

  5. The 2004 knowledge base parametric grid data software suite.

    SciTech Connect

    Wilkening, Lisa K.; Simons, Randall W.; Ballard, Sandy; Jensen, Lee A.; Chang, Marcus C.; Hipp, James Richard

    2004-08-01

    One of the most important types of data in the National Nuclear Security Administration (NNSA) Ground-Based Nuclear Explosion Monitoring Research and Engineering (GNEM R&E) Knowledge Base (KB) is parametric grid (PG) data. PG data can be used to improve signal detection, signal association, and event discrimination, but so far their greatest use has been for improving event location by providing ground-truth-based corrections to travel-time base models. In this presentation we discuss the latest versions of the complete suite of Knowledge Base PG tools developed by NNSA to create, access, manage, and view PG data. The primary PG population tool is the Knowledge Base calibration integration tool (KBCIT). KBCIT is an interactive computer application to produce interpolated calibration-based information that can be used to improve monitoring performance by improving precision of model predictions and by providing proper characterizations of uncertainty. It is used to analyze raw data and produce kriged correction surfaces that can be included in the Knowledge Base. KBCIT not only produces the surfaces but also records all steps in the analysis for later review and possible revision. New features in KBCIT include a new variogram autofit algorithm; the storage of database identifiers with a surface; the ability to merge surfaces; and improved surface-smoothing algorithms. The Parametric Grid Library (PGL) provides the interface to access the data and models stored in a PGL file database. The PGL represents the core software library used by all the GNEM R&E tools that read or write PGL data (e.g., KBCIT and LocOO). The library provides data representations and software models to support accurate and efficient seismic phase association and event location. Recent improvements include conversion of the flat-file database (FDB) to an Oracle database representation; automatic access of station/phase tagged models from the FDB during location; modification of the core
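
    The abstract mentions a variogram auto-fit step feeding the kriged correction surfaces. As a generic illustration (standard geostatistics, not KBCIT code), the sketch below computes the empirical semivariogram that such an auto-fit step would typically start from.

        # Sketch of a standard empirical semivariogram for station residuals/corrections;
        # generic geostatistics, not the KBCIT implementation.
        import numpy as np

        def empirical_semivariogram(coords, values, n_bins=10):
            """coords: (n, 2) station locations; values: (n,) residuals or corrections."""
            coords, values = np.asarray(coords, float), np.asarray(values, float)
            diff = coords[:, None, :] - coords[None, :, :]
            dist = np.sqrt((diff ** 2).sum(-1))
            gamma = 0.5 * (values[:, None] - values[None, :]) ** 2
            iu = np.triu_indices(len(values), k=1)          # each pair counted once
            d, g = dist[iu], gamma[iu]
            edges = np.linspace(0.0, d.max(), n_bins + 1)
            idx = np.clip(np.digitize(d, edges) - 1, 0, n_bins - 1)
            lags = 0.5 * (edges[:-1] + edges[1:])
            sv = np.array([g[idx == b].mean() if np.any(idx == b) else np.nan
                           for b in range(n_bins)])
            return lags, sv

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            pts = rng.uniform(0, 100, size=(50, 2))
            vals = np.sin(pts[:, 0] / 20.0) + 0.1 * rng.standard_normal(50)
            print(empirical_semivariogram(pts, vals, n_bins=5))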

  6. Knowledge-based assistance in costing the space station DMS

    NASA Technical Reports Server (NTRS)

    Henson, Troy; Rone, Kyle

    1988-01-01

    The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.

  7. Knowledge-based fault diagnosis system for refuse collection vehicle

    NASA Astrophysics Data System (ADS)

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y.

    2015-05-01

    The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of waste compactor truck to the local authority as well as to waste management companies. The company faces difficulty in acquiring knowledge from its expert when the expert is absent. To solve the problem, the knowledge from the expert can be stored in an expert system. The expert system is able to provide the necessary support to the company when the expert is not available, and the implementation of the process and tool becomes more standardized and accurate. The knowledge input to the expert system is based on design guidelines and the expert's experience. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  8. Knowledge-based requirements analysis for automating software development

    NASA Technical Reports Server (NTRS)

    Markosian, Lawrence Z.

    1988-01-01

    We present a new software development paradigm that automates the derivation of implementations from requirements. In this paradigm, informally stated requirements are expressed in a domain-specific requirements specification language. This language is machine-understandable, and requirements expressed in it are captured in a knowledge base. Once the requirements are captured, more detailed specifications and eventually implementations are derived by the system using transformational synthesis. A key characteristic of the process is that the required human intervention is in the form of providing problem- and domain-specific engineering knowledge, not in writing detailed implementations. We describe a prototype system that applies the paradigm in the realm of communication engineering: the prototype automatically generates implementations of buffers following analysis of the requirements on each buffer.

  9. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
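
    A minimal, single-process sketch of the tuple-space communication idea (LINDA-style out/rd/in operations) follows; the names and the in-memory list are illustrative only, whereas the system described above layers objects over a database-backed tuple space.

        # Toy tuple space with blocking read and take; None in a pattern is a wildcard.
        import threading

        class TupleSpace:
            def __init__(self):
                self._tuples = []
                self._cond = threading.Condition()

            def out(self, tup):                     # write a tuple into the space
                with self._cond:
                    self._tuples.append(tuple(tup))
                    self._cond.notify_all()

            def _match(self, pattern):
                for t in self._tuples:
                    if len(t) == len(pattern) and all(p is None or p == v
                                                      for p, v in zip(pattern, t)):
                        return t
                return None

            def rd(self, pattern):                  # blocking read; the tuple stays
                with self._cond:
                    while (t := self._match(pattern)) is None:
                        self._cond.wait()
                    return t

            def take(self, pattern):                # blocking read-and-remove ("in")
                with self._cond:
                    while (t := self._match(pattern)) is None:
                        self._cond.wait()
                    self._tuples.remove(t)
                    return t

        if __name__ == "__main__":
            ts = TupleSpace()
            ts.out(("load", "bus_7", 42.0))
            print(ts.rd(("load", "bus_7", None)))   # ('load', 'bus_7', 42.0)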

  10. Knowledge-Based Framework: its specification and new related discussions

    NASA Astrophysics Data System (ADS)

    Rodrigues, Douglas; Zaniolo, Rodrigo R.; Branco, Kalinka R. L. J. C.

    2015-09-01

    The Unmanned Aerial Vehicle (UAV) is a common application of critical embedded systems. The heterogeneity prevalent in these vehicles in terms of avionics services is particularly relevant to the elaboration of multi-application missions. Moreover, this heterogeneity in UAV services is often manifested in the form of characteristics such as reliability, security and performance. Different service implementations typically offer different guarantees in terms of these characteristics and in terms of the associated costs. In particular, we explore the notion of Service-Oriented Architecture (SOA) in the context of UAVs as safety-critical embedded systems for the composition of services to fulfil application-specified performance and dependability guarantees. We therefore propose a framework for the deployment of these services and their variants. This framework is called the Knowledge-Based Framework for Dynamically Changing Applications (KBF); we specify its services module and discuss the related issues.

  11. Knowledge-based decision support for patient monitoring in cardioanesthesia.

    PubMed

    Schecke, T; Langen, M; Popp, H J; Rau, G; Käsmacher, H; Kalff, G

    1992-01-01

    An approach to generating 'intelligent alarms' is presented that aggregates many information items, i.e. measured vital signs, recent medications, etc., into state variables that more directly reflect the patient's physiological state. Based on these state variables the described decision support system AES-2 also provides therapy recommendations. The assessment of the state variables and the generation of therapeutic advice follow a knowledge-based approach. Aspects of uncertainty, e.g. a gradual transition between 'normal' and 'below normal', are considered applying a fuzzy set approach. Special emphasis is laid on the ergonomic design of the user interface, which is based on color graphics and finger touch input on the screen. Certain simulation techniques considerably support the design process of AES-2 as is demonstrated with a typical example from cardioanesthesia. PMID:1402299
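
    The fuzzy-set idea of a gradual transition between 'normal' and 'below normal' can be illustrated with a small sketch; the breakpoints below are hypothetical and are not those used in AES-2.

        # Hedged sketch of fuzzy grading of a vital sign: graded memberships rather
        # than a hard cut-off. Breakpoints are illustrative only.
        def trapezoid(x, a, b, c, d):
            """Trapezoidal membership: rises on [a, b], is 1 on [b, c], falls on [c, d]."""
            if x <= a or x >= d:
                return 0.0
            if b <= x <= c:
                return 1.0
            return (x - a) / (b - a) if x < b else (d - x) / (d - c)

        def grade_map(map_mmhg):
            """Fuzzy grading of mean arterial pressure (MAP)."""
            return {
                "below_normal": trapezoid(map_mmhg, 0, 0, 60, 75),
                "normal":       trapezoid(map_mmhg, 60, 75, 95, 110),
                "above_normal": trapezoid(map_mmhg, 95, 110, 300, 300),
            }

        if __name__ == "__main__":
            print(grade_map(70))   # partly 'below normal', partly 'normal'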

  12. Knowledge-based visualization of time-oriented clinical data.

    PubMed Central

    Shahar, Y.; Cheng, C.

    1998-01-01

    We describe a domain-independent framework (KNAVE) specific to the task of interpretation, summarization, visualization, explanation, and interactive exploration in a context-sensitive manner through time-oriented raw clinical data and the multiple levels of higher-level, interval-based concepts that can be abstracted from these data. The KNAVE exploration operators, which are independent of any particular clinical domain, access a knowledge base of temporal properties of measured data and interventions that is specific to the clinical domain. Thus, domain-specific knowledge underlies the domain-independent semantics of the interpretation, visualization, and exploration processes. Initial evaluation of the KNAVE prototype by a small number of users with variable clinical and informatics training has been encouraging. PMID:9929201

  13. Knowledge-based fault diagnosis system for refuse collection vehicle

    SciTech Connect

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y.

    2015-05-15

    The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of waste compactor truck to the local authority as well as to waste management companies. The company faces difficulty in acquiring knowledge from its expert when the expert is absent. To solve the problem, the knowledge from the expert can be stored in an expert system. The expert system is able to provide the necessary support to the company when the expert is not available, and the implementation of the process and tool becomes more standardized and accurate. The knowledge input to the expert system is based on design guidelines and the expert's experience. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  14. TMS for Instantiating a Knowledge Base With Incomplete Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A computer program that belongs to the class known among software experts as output truth-maintenance-systems (output TMSs) has been devised as one of a number of software tools for reducing the size of the knowledge base that must be searched during execution of artificial- intelligence software of the rule-based inference-engine type in a case in which data are missing. This program determines whether the consequences of activation of two or more rules can be combined without causing a logical inconsistency. For example, in a case involving hypothetical scenarios that could lead to turning a given device on or off, the program determines whether a scenario involving a given combination of rules could lead to turning the device both on and off at the same time, in which case that combination of rules would not be included in the scenario.
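
    The consistency test described above can be illustrated with a toy sketch: given the consequences of several candidate rules, check whether combining them would assert a proposition and its negation (for example, a device both on and off). This is only an illustration of the idea, not the program itself.

        # Illustrative sketch: can a set of rule consequences be combined consistently?
        def consistent(rule_consequences):
            """rule_consequences: iterable of sets of literals like ('valve_3', True)."""
            combined = {}
            for consequences in rule_consequences:
                for proposition, value in consequences:
                    if combined.get(proposition, value) != value:
                        return False                      # contradiction found
                    combined[proposition] = value
            return True

        if __name__ == "__main__":
            rule_a = {("heater", True), ("fan", True)}
            rule_b = {("heater", False)}
            rule_c = {("fan", True), ("pump", False)}
            print(consistent([rule_a, rule_c]))  # True: no conflicting consequences
            print(consistent([rule_a, rule_b]))  # False: heater on and off at once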

  15. A model for a knowledge-based system's life cycle

    NASA Technical Reports Server (NTRS)

    Kiss, Peter A.

    1990-01-01

    The American Institute of Aeronautics and Astronautics has initiated a Committee on Standards for Artificial Intelligence. Presented here are the initial efforts of one of the working groups of that committee. The purpose is to present a candidate model for the development life cycle of Knowledge Based Systems (KBS). The intent is for the model to be used by the aerospace community and eventually be evolved into a standard. The model is rooted in the evolutionary model, borrows from the spiral model, and is embedded in the standard waterfall model for software development. Its intent is to satisfy the development of both stand-alone and embedded KBSs. The phases of the life cycle are detailed, as are the review points that constitute the key milestones throughout the development process. The applicability and strengths of the model are discussed, along with areas needing further development and refinement by the aerospace community.

  16. Structure of the knowledge base for an expert labeling system

    NASA Technical Reports Server (NTRS)

    Rajaram, N. S.

    1981-01-01

    One of the principal objectives of the NASA AgRISTARS program is the inventory of global crop resources using remotely sensed data gathered by Land Satellites (LANDSAT). A central problem in any such crop inventory procedure is the interpretation of LANDSAT images and identification of parts of each image which are covered by a particular crop of interest. This task of labeling is largely a manual one done by trained human analysts and consequently presents obstacles to the development of totally automated crop inventory systems. However, development in knowledge engineering as well as widespread availability of inexpensive hardware and software for artificial intelligence work offers possibilities for developing expert systems for labeling of crops. Such a knowledge based approach to labeling is presented.

  17. ProbOnto: ontology and knowledge base of probability distributions

    PubMed Central

    Swat, Maciej J.; Grenon, Pierre; Wimalaratne, Sarala

    2016-01-01

    Motivation: Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and complex ontology exists, nor any database allowing programmatic access. Results: ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Availability and Implementation: http://probonto.org Contact: mjswat@ebi.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153608
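
    As an example of the kind of re-parameterization formula such a knowledge base records (the exact ProbOnto entries and naming may differ), the sketch below converts a normal distribution between standard-deviation, variance and precision parameterizations.

        # Hypothetical helper functions illustrating distribution re-parameterization.
        import math

        def normal_sd_to_variance(mu, sigma):
            return mu, sigma ** 2

        def normal_variance_to_precision(mu, var):
            return mu, 1.0 / var

        def normal_precision_to_sd(mu, tau):
            return mu, math.sqrt(1.0 / tau)

        if __name__ == "__main__":
            mu, sigma = 1.5, 2.0
            mu, var = normal_sd_to_variance(mu, sigma)       # (1.5, 4.0)
            mu, tau = normal_variance_to_precision(mu, var)  # (1.5, 0.25)
            print(normal_precision_to_sd(mu, tau))           # (1.5, 2.0) round trip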

  18. Knowledge-based navigation of complex information spaces

    SciTech Connect

    Burke, R.D.; Hammond, K.J.; Young, B.C.

    1996-12-31

    While the explosion of on-line information has brought new opportunities for finding and using electronic data, it has also brought to the forefront the problem of isolating useful information and making sense of large, multi-dimensional information spaces. We have developed an approach to building data "tour guides," called FINDME systems. These programs know enough about an information space to be able to help a user navigate through it. The user not only comes away with items of useful information but also with insights into the structure of the information space itself. In these systems, we have combined ideas of instance-based browsing, structuring retrieval around the critiquing of previously retrieved examples, and retrieval strategies, i.e., knowledge-based heuristics for finding relevant information. We illustrate these techniques with several examples, concentrating especially on the RENTME system, a FINDME system for helping users find suitable rental apartments in the Chicago metropolitan area.

  19. The Knowledge Base Interface for Parametric Grid Information

    SciTech Connect

    Hipp, James R.; Simons, Randall W.; Young, Chris J.

    1999-08-03

    The parametric grid capability of the Knowledge Base (KBase) provides an efficient, robust way to store and access interpolatable information that is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use an approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation. The method involves three basic steps: data preparation, data storage, and data access. In past presentations we have discussed the first step in detail. In this paper we focus on the latter two, describing in detail the type of information which must be stored and the interface used to retrieve parametric grid data from the Knowledge Base. Once data have been properly prepared, the information (tessellation and associated value surfaces) needed to support the interface functionality can be entered into the KBase. The primary types of parametric grid data that must be stored include (1) generic header information; (2) base model, station, and phase names and associated IDs used to construct surface identifiers; (3) surface accounting information; (4) tessellation accounting information; (5) mesh data for each tessellation; (6) correction data defined for each surface at each node of the surface's owning tessellation; (7) mesh refinement calculation set-up and flag information; and (8) kriging calculation set-up and flag information. The eight data components not only represent the results of the data preparation process but also include all required input information for several population tools that would enable the complete regeneration of the data results if that should be necessary.
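
    For readers unfamiliar with kriging, the sketch below is a compact, generic ordinary-kriging example showing how an interpolated correction and its error estimate can be computed; it is not the KBase implementation, which combines kriging error estimation with Natural Neighbor Interpolation.

        # Generic ordinary kriging with an exponential covariance model (illustrative only).
        import numpy as np

        def exp_cov(h, sill=1.0, rng=50.0):
            """Exponential covariance model."""
            return sill * np.exp(-h / rng)

        def ordinary_krige(coords, values, query, sill=1.0, rng=50.0):
            coords = np.asarray(coords, float)
            values = np.asarray(values, float)
            n = len(values)
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            # Augmented system enforcing weights that sum to one (Lagrange multiplier).
            a = np.ones((n + 1, n + 1))
            a[:n, :n] = exp_cov(d, sill, rng)
            a[n, n] = 0.0
            b = np.ones(n + 1)
            b[:n] = exp_cov(np.linalg.norm(coords - np.asarray(query, float), axis=1), sill, rng)
            sol = np.linalg.solve(a, b)
            w, mu = sol[:n], sol[n]
            estimate = float(w @ values)
            variance = float(sill - w @ b[:n] - mu)   # kriging error estimate
            return estimate, variance

        if __name__ == "__main__":
            pts = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]
            vals = [1.0, 2.0, 3.0, 4.0]
            print(ordinary_krige(pts, vals, (50.0, 50.0)))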

  20. Autonomous Cryogenic Load Operations: Knowledge-Based Autonomous Test Engineer

    NASA Technical Reports Server (NTRS)

    Schrading, J. Nicolas

    2013-01-01

    The Knowledge-Based Autonomous Test Engineer (KATE) program has a long history at KSC. Now a part of the Autonomous Cryogenic Load Operations (ACLO) mission, this software system has been sporadically developed over the past 20 years. Originally designed to provide health and status monitoring for a simple water-based fluid system, it was proven to be a capable autonomous test engineer for determining sources of failure in the system. As part of a new goal to provide this same anomaly-detection capability for a complicated cryogenic fluid system, software engineers, physicists, interns and KATE experts are working to upgrade the software capabilities and graphical user interface. Much progress was made during this effort to improve KATE. A display of the entire cryogenic system's graph, with nodes for components and edges for their connections, was added to the KATE software. A searching functionality was added to the new graph display, so that users could easily center their screen on specific components. The GUI was also modified so that it displayed information relevant to the new project goals. In addition, work began on adding new pneumatic and electronic subsystems into the KATE knowledge base, so that it could provide health and status monitoring for those systems. Finally, many fixes for bugs, memory leaks, and memory errors were implemented and the system was moved into a state in which it could be presented to stakeholders. Overall, the KATE system was improved and necessary additional features were added so that a presentation of the program and its functionality in the next few months would be a success.

  1. Knowledge-based architecture for airborne mine and minefield detection

    NASA Astrophysics Data System (ADS)

    Agarwal, Sanjeev; Menon, Deepak; Swonger, C. W.

    2004-09-01

    One of the primary lessons learned from airborne mid-wave infrared (MWIR) based mine and minefield detection research and development over the last few years has been the fact that no single algorithm or static detection architecture is able to meet mine and minefield detection performance specifications. This is true not only because of the highly varied environmental and operational conditions under which an airborne sensor is expected to perform but also due to the highly data dependent nature of sensors and algorithms employed for detection. Attempts to make the algorithms themselves more robust to varying operating conditions have only been partially successful. In this paper, we present a knowledge-based architecture to tackle this challenging problem. The detailed algorithm architecture is discussed for such a mine/minefield detection system, with a description of each functional block and data interface. This dynamic and knowledge-driven architecture will provide more robust mine and minefield detection for a highly multi-modal operating environment. The acquisition of the knowledge for this system is predominantly data driven, incorporating not only the analysis of historical airborne mine and minefield imagery data collection, but also other "all source data" that may be available such as terrain information and time of day. This "all source data" is extremely important and embodies causal information that drives the detection performance. This information is not being used by current detection architectures. Data analysis for knowledge acquisition will facilitate better understanding of the factors that affect the detection performance and will provide insight into areas for improvement for both sensors and algorithms. Important aspects of this knowledge-based architecture, its motivations and the potential gains from its implementation are discussed, and some preliminary results are presented.

  2. On the optimal design of molecular sensing interfaces with lipid bilayer assemblies - A knowledge based approach

    NASA Astrophysics Data System (ADS)

    Siontorou, Christina G.

    2012-12-01

    Biosensors are analytic devices that incorporate a biochemical recognition system (biological, biologically derived or biomimic: enzyme, antibody, DNA, receptor, etc.) in close contact with a physicochemical transducer (electrochemical, optical, piezoelectric, conductimetric, etc.) that converts the biochemical information, produced by the specific biological recognition reaction (analyte-biomolecule binding), into a chemical or physical output signal, related to the concentration of the analyte in the measuring sample. The biosensing concept is based on natural chemoreception mechanisms, which are feasible over/within/by means of a biological membrane, i.e., a structured lipid bilayer, incorporating or attached to proteinaceous moieties that regulate molecular recognition events which trigger ion flux changes (facilitated or passive) through the bilayer. The creation of functional structures that are similar to natural signal transduction systems, correlating and interrelating compatibly and successfully the physicochemical transducer with the lipid film that is self-assembled on its surface while embedding the reconstituted biological recognition system, while at the same time satisfying the basic conditions for measuring device development (simplicity, easy handling, ease of fabrication), is far from trivial. The aim of the present work is to present a methodological framework for designing such molecular sensing interfaces, functioning within a knowledge-based system built on an ontological platform for supplying sub-systems options, compatibilities, and optimization parameters.

  3. The Knowledge-Based Economy and E-Learning: Critical Considerations for Workplace Democracy

    ERIC Educational Resources Information Center

    Remtulla, Karim A.

    2007-01-01

    The ideological shift by nation-states to "a knowledge-based economy" (also referred to as "knowledge-based society") is causing changes in the workplace. Brought about by the forces of globalisation and technological innovation, the ideologies of the "knowledge-based economy" are not limited to influencing the production, consumption and economic…

  4. Knowledge-based potentials in bioinformatics: From a physicist’s viewpoint

    NASA Astrophysics Data System (ADS)

    Zheng, Wei-Mou

    2015-12-01

    Biological raw data are growing exponentially, providing a large amount of information on what life is. It is believed that potential functions and the rules governing protein behaviors can be revealed from analysis on known native structures of proteins. Many knowledge-based potentials for proteins have been proposed. Contrary to most existing review articles which mainly describe technical details and applications of various potential models, the main foci for the discussion here are ideas and concepts involving the construction of potentials, including the relation between free energy and energy, the additivity of potentials of mean force and some key issues in potential construction. Sequence analysis is briefly viewed from an energetic viewpoint. Project supported in part by the National Natural Science Foundation of China (Grant Nos. 11175224 and 11121403).
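
    As a concrete illustration of the inverse-Boltzmann idea underlying most knowledge-based potentials (a standard textbook form, not a formula quoted from this article), the potential of mean force for a pair of residue or atom types i and j at separation r is commonly written as

        % Inverse-Boltzmann form of a knowledge-based potential of mean force
        \begin{equation}
          \Delta W_{ij}(r) = -k_{\mathrm{B}} T \,\ln\frac{P^{\mathrm{obs}}_{ij}(r)}{P^{\mathrm{ref}}_{ij}(r)},
        \end{equation}

    where P^obs_ij(r) is the pair frequency observed in the database of native structures and P^ref_ij(r) is the frequency expected in a chosen reference state; the choice of reference state and the assumed additivity of these pairwise terms are exactly the conceptual issues the article discusses.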

  5. Dithizone modified magnetic nanoparticles for fast and selective solid phase extraction of trace elements in environmental and biological samples prior to their determination by ICP-OES.

    PubMed

    Cheng, Guihong; He, Man; Peng, Hanyong; Hu, Bin

    2012-01-15

    A fast and simple method for analysis of trace amounts of Cr(III), Cu(II), Pb(II) and Zn(II) in environmental and biological samples was developed by combining magnetic solid phase extraction (MSPE) with inductively coupled plasma-optical emission spectrometry (ICP-OES) detection. Dithizone modified silica-coated magnetic Fe(3)O(4) nanoparticles (H(2)Dz-SCMNPs) were prepared and used for MSPE of trace amounts of Cr(III), Cu(II), Pb(II) and Zn(II). The prepared magnetic nanoparticles were characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray powder diffraction (XRD), and Fourier transform infrared spectroscopy (FT-IR). The factors affecting the extraction of the target metal ions, such as pH, sample volume, eluent, and interfering ions, were investigated, and the adsorption mechanism of the target metals on the self-prepared H(2)Dz-SCMNPs was investigated by FT-IR and X-ray photoelectron spectroscopy (XPS). Under the optimized conditions, the detection limits of the developed method for Cr(III), Cu(II), Pb(II) and Zn(II) were 35, 11, 62, and 8 ng L(-1), respectively, with an enrichment factor of 100. The relative standard deviations (RSDs, c = 10 μg L(-1), n = 7) were in the range of 1.7-3.1% and the linear range was 0.1-100 μg L(-1). The proposed method was validated by two certified reference materials (GSBZ50009-88 environmental water and GBW07601 human hair), and the determined values were in good agreement with the certified values. The method was also applied for the determination of trace metals in real water and human hair samples, with recoveries in the range of 85-110% for the spiked samples. The developed MSPE-ICP-OES method has the advantages of simplicity, rapidity, selectivity and high extraction efficiency, and is suitable for the analysis of samples with large volume and complex matrix. PMID:22265534

  6. Use of thermal analysis techniques (TG-DSC) for the characterization of diverse organic municipal waste streams to predict biological stability prior to land application

    SciTech Connect

    Fernandez, Jose M.; Plaza, Cesar; Polo, Alfredo; Plante, Alain F.

    2012-01-15

    ) techniques. Total amounts of CO₂ respired indicated that the organic matter in the TS was the least stable, while that in the CS was the most stable. This was confirmed by changes detected with the spectroscopic methods in the composition of the organic wastes due to C mineralization. Differences were especially pronounced for TS, which showed a remarkable loss of aliphatic and proteinaceous compounds during the incubation process. TG, and especially DSC analysis, clearly reflected these differences between the three organic wastes before and after the incubation. Furthermore, the calculated energy density, which represents the energy available per unit of organic matter, showed a strong correlation with cumulative respiration. Results obtained support the hypothesis of a potential link between the thermal and biological stability of the studied organic materials, and consequently the ability of thermal analysis to characterize the maturity of municipal organic wastes and composts.

  7. LLNL Middle East, North Africa and Western Eurasia Knowledge Base

    SciTech Connect

    O'Boyle, J; Ruppert, S D; Hauk, T F; Dodge, D A; Ryall, F; Firpo, M A

    2001-07-12

    The Lawrence Livermore National Laboratory (LLNL) Ground-Based Nuclear Event Monitoring (GNEM) program has made significant progress populating a comprehensive Seismic Research Knowledge Base (SRKB) and deriving calibration parameters for the Middle East, North Africa and Western Eurasia (ME/NA/WE) regions. The LLNL SRKB provides not only a coherent framework in which to store and organize very large volumes of collected seismic waveforms, associated event parameter information, and spatial contextual data, but also provides an efficient data processing/research environment for deriving location and discrimination correction surfaces. The SRKB is a flexible and extensible framework consisting of a relational database (RDB), Geographical Information System (GIS), and associated product/data visualization and data management tools. This SRKB framework is designed to accommodate large volumes of data (almost 3 million waveforms from 57,000 events) in diverse formats from many sources (both LLNL derived research and integrated contractor products), in addition to maintaining detailed quality control and metadata. We have developed expanded look-up tables for critical station parameter information (including location and response) and an integrated and reconciled event catalog data set (including specification of preferred origin solutions and associated phase arrivals) for the PDE, CMT, ISC, REB and selected regional catalogs. Using the SRKB framework, we are combining traveltime observations, event characterization studies, and regional tectonic models to assemble a library of ground truth information and phenomenology (e.g. travel-time and amplitude) correction surfaces required for support of the ME/NA/WE regionalization program. We also use the SRKB to integrate data and research products from a variety of sources, such as contractors and universities, to merge and maintain quality control of the data sets. Corrections and parameters distilled from the LLNL SRKB

  8. A clinical trial of a knowledge-based medical record.

    PubMed

    Safran, C; Rind, D M; Davis, R B; Sands, D Z; Caraballo, E; Rippel, K; Wang, Q; Rury, C; Makadon, H J; Cotton, D J

    1995-01-01

    To meet the needs of primary care physicians caring for patients with HIV infection, we developed a knowledge-based medical record to allow the on-line patient record to play an active role in the care process. These programs integrate the on-line patient record, rule-based decision support, and full-text information retrieval into a clinical workstation for the practicing clinician. To determine whether use of a knowledge-based medical record was associated with more rapid and complete adherence to practice guidelines and improved quality of care, we performed a controlled clinical trial among physicians and nurse practitioners caring for 349 patients infected with the human immunodeficiency virus (HIV); 191 patients were treated by 65 physicians and nurse practitioners assigned to the intervention group, and 158 patients were treated by 61 physicians and nurse practitioners assigned to the control group. During the 18-month study period, the computer generated 303 alerts in the intervention group and 388 in the control group. The median response time of clinicians to these alerts was 11 days in the intervention group and 52 days in the control group (P < 0.0001, log-rank test). During the study, the computer generated 432 primary care reminders for the intervention group and 360 reminders for the control group. The median response time of clinicians to these reminders was 114 days in the intervention group and more than 500 days in the control group (P < 0.0001, log-rank test). Of the 191 patients in the intervention group, 67 (35%) had one or more hospitalizations, compared with 70 (44%) of the 158 patients in the control group (P = 0.04, Wilcoxon test stratified for initial CD4 count). There was no difference in survival between the intervention and control groups (P = 0.18, log-rank test). We conclude that our clinical workstation significantly changed physicians' behavior in terms of their response to alerts regarding primary care interventions and that these

  9. EHR based Genetic Testing Knowledge Base (iGTKB) Development

    PubMed Central

    2015-01-01

    Background: The gap between the large and growing number of genetic tests and a suboptimal clinical workflow for incorporating these tests into regular clinical practice poses barriers to effective reliance on advanced genetic technologies to improve the quality of healthcare. A promising solution to fill this gap is to develop an intelligent genetic test recommendation system that not only can provide a comprehensive view of genetic tests as education resources, but also can recommend the most appropriate genetic tests to patients based on clinical evidence. In this study, we developed an EHR based Genetic Testing Knowledge Base for Individualized Medicine (iGTKB). Methods: We extracted genetic testing information and patient medical records from EHR systems at Mayo Clinic. Clinical features were semi-automatically annotated from the clinical notes by applying a Natural Language Processing (NLP) tool, the MedTagger suite. To prioritize clinical features for each genetic test, we compared odds ratios across four population groups. Genetic tests, genetic disorders and clinical features with their odds ratios were used to establish iGTKB, which is to be integrated into the Genetic Testing Ontology (GTO). Results: Overall, five genetic tests were performed with sample sizes greater than 100 at Mayo Clinic in 2013. A total of 1,450 patients who were tested with one of the five genetic tests were selected. We assembled 243 clinical features from the Human Phenotype Ontology (HPO) for these five genetic tests. There were 60 clinical features with at least one mention in the clinical notes of patients taking a test. Twenty-eight clinical features with high odds ratios (greater than 1) were selected as dominant features and deposited into iGTKB with their associated information about genetic tests and genetic disorders. Conclusions: In this study, we developed an EHR based genetic testing knowledge base, iGTKB. iGTKB will be integrated into the GTO by providing relevant

  10. The Role of Causal Knowledge in Knowledge-Based Patient Simulation

    PubMed Central

    Chin, Homer L.; Cooper, Gregory F.

    1987-01-01

    We have investigated the ability to simulate a patient from a knowledge base. Specifically, we have examined the use of knowledge bases that associate findings with diseases through the use of probability measures, and their ability to generate realistic patient cases that can be used for teaching purposes. Many of these knowledge bases encode neither the interdependence among findings, nor intermediate disease states. Because of this, the use of these knowledge bases results in the generation of inconsistent or nonsensical patients. This paper describes an approach for the addition of causal structure to these knowledge bases which can overcome many of these limitations and improve the explanatory capability of such systems.

  11. Prospector II: Towards a knowledge base for mineral deposits

    USGS Publications Warehouse

    McCammon, R.B.

    1994-01-01

    What began in the mid-seventies as a research effort in designing an expert system to aid geologists in exploring for hidden mineral deposits has, in the late eighties, become a full-sized knowledge-based system to aid geologists in conducting regional mineral resource assessments. Prospector II, the successor to Prospector, is interactive-graphics oriented, flexible in its representation of mineral deposit models, and suited to regional mineral resource assessment. In Prospector II, the geologist enters the findings for an area, selects the deposit models or examples of mineral deposits for consideration, and the program compares the findings with the models or the examples selected, noting the similarities, differences, and missing information. The models or examples selected are ranked according to scores that are based on the comparisons with the findings. Findings can be reassessed and the process repeated if necessary. The results provide the geologist with a rationale for identifying those mineral deposit types that the geology of an area permits. In the future, Prospector II can assist in the creation of new models used in regional mineral resource assessment and in striving toward an ultimate classification of mineral deposits. © 1994 International Association for Mathematical Geology.

  12. Knowledge-acquisition tools for medical knowledge-based systems.

    PubMed

    Lanzola, G; Quaglini, S; Stefanelli, M

    1995-03-01

    Knowledge-based systems (KBS) have been proposed to solve a large variety of medical problems. A strategic issue for KBS development and maintenance is the effort required from both knowledge engineers and domain experts. The proposed solution is building efficient knowledge acquisition (KA) tools. This paper presents a set of KA tools we are developing within a European project called GAMES II. They have been designed after the formulation of an epistemological model of medical reasoning. The main goal is that of developing a computational framework which allows knowledge engineers and domain experts to interact cooperatively in developing a medical KBS. To this aim, a set of reusable software components is highly recommended. Their design was facilitated by the development of a methodology for KBS construction. It views this process as comprising two activities: the tailoring of the epistemological model to the specific medical task to be executed, and the subsequent translation of this model into a computational architecture such that the connections between computational structures and their knowledge-level counterparts are maintained. The KA tools we developed are illustrated with examples from the behavior of a KBS we are building for the management of children with acute myeloid leukemia. PMID:9082135

  13. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL), used for formulating the knowledge base and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists' knowledge of the scene and algorithmic processing.

  14. Knowledge-based system for design of signalized intersections

    SciTech Connect

    Linkenheld, J.S. ); Benekohal, R.F. ); Garrett, J.H. Jr. )

    1992-03-01

    For an efficient traffic operation in intelligent highway systems, traffic signals need to respond to the changes in roadway and traffic demand. The phasing and timing of traffic signals requires the use of heuristic rules of thumb to determine what phases are needed and how the green time should be assigned to them. Because of the need for judgmental knowledge in solving this problem, this study has used knowledge-based expert-system technology to develop a system for the phasing and signal timing (PHAST) of an isolated intersection. PHAST takes intersection geometry and traffic volume as input and generates appropriate phase plan, cycle length, and green time for each phase. The phase plan and signal timing change when intersection geometry or traffic demand changes. This paper describes the intended system functionality, the system architecture, the knowledge used to phase and time an intersection, the implementation of the system, and system verification. PHAST's performance was validated using phase plans and timings of several intersections.
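
    The abstract does not give PHAST's internal formulas; as one illustration of the kind of timing computation such a system automates, the sketch below uses Webster's classic method (a standard result, not necessarily the one implemented in PHAST) to derive a cycle length and green splits from critical flow ratios.

        # Illustration only: Webster's optimal cycle length C0 = (1.5 * L + 5) / (1 - Y),
        # with green time split in proportion to each phase's critical flow ratio.
        def webster_timing(critical_flow_ratios, lost_time_per_phase_s=4.0):
            """critical_flow_ratios: y_i = critical lane volume / saturation flow, per phase."""
            total_lost = lost_time_per_phase_s * len(critical_flow_ratios)
            y_sum = sum(critical_flow_ratios)
            if y_sum >= 1.0:
                raise ValueError("Intersection is over-saturated; a fixed cycle cannot serve demand.")
            cycle = (1.5 * total_lost + 5.0) / (1.0 - y_sum)
            effective_green = cycle - total_lost
            greens = [effective_green * y / y_sum for y in critical_flow_ratios]
            return round(cycle, 1), [round(g, 1) for g in greens]

        if __name__ == "__main__":
            # Two-phase example: y = 0.30 for the north-south phase, 0.25 for east-west.
            print(webster_timing([0.30, 0.25]))   # cycle ~ 37.8 s, greens ~ (16.2, 13.5)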

  15. A knowledge-based system design/information tool

    NASA Technical Reports Server (NTRS)

    Allen, James G.; Sikora, Scott E.

    1990-01-01

    The objective of this effort was to develop a Knowledge Capture System (KCS) for the Integrated Test Facility (ITF) at the Dryden Flight Research Facility (DFRF). The DFRF is a NASA Ames Research Center (ARC) facility. This system was used to capture the design and implementation information for NASA's high angle-of-attack research vehicle (HARV), a modified F/A-18A. In particular, the KCS was used to capture specific characteristics of the design of the HARV fly-by-wire (FBW) flight control system (FCS). The KCS utilizes artificial intelligence (AI) knowledge-based system (KBS) technology. The KCS enables the user to capture the following characteristics of automated systems: the system design; the hardware (H/W) design and implementation; the software (S/W) design and implementation; and the utilities (electrical and hydraulic) design and implementation. A generic version of the KCS was developed which can be used to capture the design information for any automated system. The deliverable items for this project consist of the prototype generic KCS and an application, which captures selected design characteristics of the HARV FCS.

  16. A knowledge based expert system for condition monitoring

    SciTech Connect

    Selkirk, C.G.; Roberge, P.R.; Fisher, G.F.; Yeung, K.K.

    1994-12-31

    Condition monitoring (CM) is the focus of many maintenance philosophies around the world today. In the Canadian Forces (CF), CM has played an important role in the maintenance of aircraft systems since the introduction of spectrometric oil analysis (SOAP) over twenty years ago. Other techniques in use in the CF today include vibration analysis (VA), ferrography, and filter debris analysis (FDA). To improve the usefulness and utility gained from these CM techniques, work is currently underway to incorporate expert systems into them. An expert system for FDA is being developed which will aid filter debris analysts in identifying wear debris and wear level trends, and which will provide the analyst with reference examples in an attempt to standardize results. Once completed, this knowledge based expert system will provide a blueprint from which other CM expert systems can be created. Amalgamating these specific systems into a broad based global system will provide the CM analyst with a tool that will be able to correlate data and results from each of the techniques, thereby increasing the utility of each individual method of analysis. This paper will introduce FDA and then outline the development of the FDA expert system and future applications.

  17. Knowledge-based design of complex mechanical systems

    SciTech Connect

    Ishii, K.

    1988-01-01

    The recent development of Artificial Intelligence (AI) techniques allows the incorporation of qualitative aspects of design into computer aids. This thesis presents a framework for applying AI techniques to the design of complex mechanical systems. A complex, yet well-understood, design example is used as a vehicle for the effort. The author first reviews how experienced designers use knowledge at various stages of system design. He then proposes a knowledge-based model of the design process and develops frameworks for applying knowledge engineering in order to construct a consultation system for designers. He proposes four such frameworks for use at different stages of design: (1) Design Compatibility Analysis (DCA) analyzes the compatibility of the designer's design alternatives with the design specification; (2) Initial Design Suggestion (IDS) provides the designer with reasonable initial estimates of the design variables; (3) Rule-based Sensitivity Analysis (RSA) guides the user through redesign; and (4) Active Constraint Deduction (ACD) identifies the bottlenecks of the design using heuristic knowledge. These frameworks eliminate unnecessary iterations and allow the user to obtain a satisfactory solution rapidly.

  18. How Quality Improvement Practice Evidence Can Advance the Knowledge Base.

    PubMed

    O'Rourke, Hannah M; Fraser, Kimberly D

    2016-01-01

    Recommendations for the evaluation of quality improvement interventions have been made in order to improve the evidence base of whether, to what extent, and why quality improvement interventions affect chosen outcomes. The purpose of this article is to articulate why these recommendations are appropriate to improve the rigor of quality improvement intervention evaluation as a research endeavor, but inappropriate for the purposes of everyday quality improvement practice. To support our claim, we describe the differences between quality improvement interventions that occur for the purpose of practice as compared to research. We then carefully consider how feasibility, ethics, and the aims of evaluation each impact how quality improvement interventions that occur in practice, as opposed to research, can or should be evaluated. Recommendations that fit the evaluative goals of practice-based quality improvement interventions are needed to support fair appraisal of the distinct evidence they produce. We describe a current debate on the nature of evidence to assist in reenvisioning how quality improvement evidence generated from practice might complement that generated from research, and contribute in a value-added way to the knowledge base. PMID:27584696

  19. Knowledge base navigator facilitating regional analysis inter-tool communication.

    SciTech Connect

    Hampton, Jeffery Wade; Chael, Eric Paul; Hart, Darren M.; Merchant, Bion John; Chown, Matthew N.

    2004-08-01

    To make use of some portions of the National Nuclear Security Administration (NNSA) Knowledge Base (KB) for which no current operational monitoring applications were available, Sandia National Laboratories have developed a set of prototype regional analysis tools (MatSeis, EventID Tool, CodaMag Tool, PhaseMatch Tool, Dendro Tool, Infra Tool, etc.), and we continue to maintain and improve these. Individually, these tools have proven effective in addressing specific monitoring tasks, but collectively their number and variety tend to overwhelm KB users, so we developed another application - the KB Navigator - to launch the tools and facilitate their use for real monitoring tasks. The KB Navigator is a flexible, extensible java application that includes a browser for KB data content, as well as support to launch any of the regional analysis tools. In this paper, we will discuss the latest versions of KB Navigator and the regional analysis tools, with special emphasis on the new overarching inter-tool communication methodology that we have developed to make the KB Navigator and the tools function together seamlessly. We use a peer-to-peer communication model, which allows any tool to communicate with any other. The messages themselves are passed as serialized XML, and the conversion from Java to XML (and vice versa) is done using Java Architecture for XML Binding (JAXB).
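
    The inter-tool messaging described, serialized XML passed peer-to-peer and converted to and from objects, can be illustrated in miniature. The Python sketch below uses the standard xml.etree module as a stand-in for the Java/JAXB binding described above; the message fields and payload are invented.

        import xml.etree.ElementTree as ET

        def to_xml(tool, command, payload):
            """Serialize a simple inter-tool message to an XML string."""
            msg = ET.Element("message", attrib={"tool": tool, "command": command})
            ET.SubElement(msg, "payload").text = payload
            return ET.tostring(msg, encoding="unicode")

        def from_xml(text):
            """Parse the XML string back into a plain dict."""
            msg = ET.fromstring(text)
            return {"tool": msg.get("tool"),
                    "command": msg.get("command"),
                    "payload": msg.findtext("payload")}

        # Invented example message from one tool to another
        wire = to_xml("EventID Tool", "display_event", "evid=12345")
        print(wire)
        print(from_xml(wire))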

  20. Dynamic reasoning in a knowledge-based system

    NASA Technical Reports Server (NTRS)

    Rao, Anand S.; Foo, Norman Y.

    1988-01-01

    Any space based system, whether it is a robot arm assembling parts in space or an onboard system monitoring the space station, has to react to changes which cannot be foreseen. As a result, apart from having domain-specific knowledge as in current expert systems, a space based AI system should also have general principles of change. This paper presents a modal logic which can not only represent change but also reason with it. Three primitive operations, expansion, contraction and revision are introduced and axioms which specify how the knowledge base should change when the external world changes are also specified. Accordingly the notion of dynamic reasoning is introduced, which unlike the existing forms of reasoning, provide general principles of change. Dynamic reasoning is based on two main principles, namely minimize change and maximize coherence. A possible-world semantics which incorporates the above two principles is also discussed. The paper concludes by discussing how the dynamic reasoning system can be used to specify actions and hence form an integral part of an autonomous reasoning and planning system.
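
    The three primitive operations described map naturally onto a belief-base abstraction. The following minimal Python sketch is only an illustrative analogue of that idea, not the paper's modal-logic formalism: beliefs are plain propositional literals, and the principle of minimal change is reduced to removing just the directly contradicting literal.

        # Minimal sketch of expansion, contraction and revision over a propositional
        # belief base. Beliefs are plain literals ("p" or "~p"); "minimize change"
        # here simply means removing only the directly contradicting literal.

        class BeliefBase:
            def __init__(self, beliefs=None):
                self.beliefs = set(beliefs or [])

            @staticmethod
            def negate(p):
                return p[1:] if p.startswith("~") else "~" + p

            def expand(self, p):
                """Add p without checking consistency."""
                self.beliefs.add(p)

            def contract(self, p):
                """Remove p so that it is no longer believed."""
                self.beliefs.discard(p)

            def revise(self, p):
                """Levi identity: contract the negation of p, then expand with p."""
                self.contract(self.negate(p))
                self.expand(p)

        if __name__ == "__main__":
            kb = BeliefBase({"arm_holding_part", "~station_alarm"})
            kb.revise("station_alarm")      # the external world has changed
            print(sorted(kb.beliefs))       # ['arm_holding_part', 'station_alarm']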

  1. SmartWeld: A knowledge-based approach to welding

    SciTech Connect

    Mitchiner, J.L.; Kleban, S.D.; Hess, B.V.; Mahin, K.W.; Messink, D.

    1996-07-01

    SmartWeld is a concurrent engineering system that integrates product design and processing decisions within an electronic desktop engineering environment. It is being developed to provide designers, process engineers, researchers and manufacturing technologists with transparent access to the right process information, process models, process experience and process experts, to realize "right the first time" manufacturing. Empirical understanding, along with process models, is synthesized within a knowledge-based system to identify robust fabrication procedures based on cost, schedule, and performance. Integration of process simulation tools with design tools enables the designer to assess a number of design and process options on the computer rather than on the manufacturing floor. Task models and generic process models are being embedded within user-friendly GUIs to more readily enable the customer to use the SmartWeld system and its software tool set without extensive training. The integrated system architecture under development provides interactive communications and shared application capabilities across a variety of workstation and PC-type platforms either locally or at remote sites.

  2. Knowledge-based graphical interfaces for presenting technical information

    NASA Technical Reports Server (NTRS)

    Feiner, Steven

    1988-01-01

    Designing effective presentations of technical information is extremely difficult and time-consuming. Moreover, the combination of increasing task complexity and declining job skills makes the need for high-quality technical presentations especially urgent. We believe that this need can ultimately be met through the development of knowledge-based graphical interfaces that can design and present technical information. Since much material is most naturally communicated through pictures, our work has stressed the importance of well-designed graphics, concentrating on generating pictures and laying out displays containing them. We describe APEX, a testbed picture generation system that creates sequences of pictures that depict the performance of simple actions in a world of 3D objects. Our system supports rules for determining automatically the objects to be shown in a picture, the style and level of detail with which they should be rendered, the method by which the action itself should be indicated, and the picture's camera specification. We then describe work on GRIDS, an experimental display layout system that addresses some of the problems in designing displays containing these pictures, determining the position and size of the material to be presented.

  3. Real-time application of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Brumbaugh, Randal W.; Duke, Eugene L.

    1989-01-01

    The Rapid Prototyping Facility (RPF) was developed to meet a need for a facility which allows flight systems concepts to be prototyped in a manner which allows for real-time flight test experience with a prototype system. This need was focused during the development and demonstration of the expert system flight status monitor (ESFSM). The ESFSM was a prototype system developed on a LISP machine, but lack of a method for progressive testing and problem identification led to an impractical system. The RPF concept was developed, and the ATMS designed to exercise its capabilities. The ATMS Phase 1 demonstration provided a practical vehicle for testing the RPF, as well as a useful tool. ATMS Phase 2 development continues. A dedicated F-18 is expected to be assigned for facility use in late 1988, with RAV modifications. A knowledge-based autopilot is being developed using the RPF. This is a system which provides elementary autopilot functions and is intended as a vehicle for testing expert system verification and validation methods. An expert system propulsion monitor is being prototyped. This system provides real-time assistance to an engineer monitoring a propulsion system during a flight.

  4. A knowledge based system for scientific data visualization

    NASA Technical Reports Server (NTRS)

    Senay, Hikmet; Ignatius, Eve

    1992-01-01

    A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design, and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques having applicability in earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis and medical imaging.

  5. What is a necessary knowledge base for sleep professionals?

    PubMed

    Harding, S M; Hawkins, J W

    2001-09-01

    Sleep medicine is multidisciplinary, and sleep medicine professionals should be trained to evaluate and treat all 88 sleep disorders. Sleep medicine specialists require a fund of knowledge that goes beyond what is obtained during a pulmonary fellowship. The knowledge required of a pulmonary sleep professional spans sleep medicine, neurobiology, psychiatry, neuropsychology, neurology, and pediatrics, with limited exposure to otolaryngology, oral maxillofacial surgery, and dentistry. There is a paucity of published information concerning curricular requirements. Required skills for a sleep professional include proficiency in the clinical skills of sleep medicine as well as the technical skills of polysomnography. There is a very large knowledge content area requirement in both the basic sciences of sleep and the clinical aspects of sleep medicine. There are also important clinical skills content areas. As with all medical professionals, sleep professionals should have the highest ethical standards and a strong sense of responsibility toward their patients. A sleep medicine professional also has to be knowledgeable about administrative and legal aspects specific to sleep medicine. This essay reviews a sleep professional knowledge base model with emphasis on the requirements for a pulmonary sleep professional. PMID:11868148

  6. Knowledge based system for Satellite data product selection

    NASA Astrophysics Data System (ADS)

    Goyal, R.; Jayasudha, T.; Pandey, P.; Rama Devi, D.; Rebecca, A.; Manju Sarma, M.; Lakshmi, B.

    2014-11-01

    In recent years, the use of satellite data for geospatial applications has multiplied and contributed significantly to the development of society. Satellite data requirements, in terms of spatial and spectral resolution, periodicity of data, level of correction and other parameters, vary across applications. For major applications, remote sensing data alone may not suffice and may need to be supplemented with additional data such as field data. An application user, however versatile in his or her own application, may not know which satellite data are best suited for it, how to use the data, and what information can be derived from them. Remote sensing domain experts, by contrast, are proficient in selecting appropriate data for remote sensing applications. Embedding this domain expertise into the system and building a knowledge-based system for satellite data product selection is therefore vital. Non-specialist data users need user-friendly software that guides them to the most suitable satellite data product on the basis of their application. Such a tool will promote the use of appropriate remotely sensed data across different sectors of application users. Additionally, consumers will be less concerned with the technical particulars of the platforms that provide satellite data and can instead focus on the content and value of the data product, the timeliness of delivery, and the ease of access. Embedding knowledge is a popular and effective means of increasing the power of a system. This paper describes a system, driven by the built-in knowledge of domain experts, for satellite data product selection for geospatial applications.

  7. Incremental Knowledge Base Construction Using DeepDive

    PubMed Central

    Shin, Jaeho; Wu, Sen; Wang, Feiran; De Sa, Christopher; Zhang, Ce; Ré, Christopher

    2016-01-01

    Populating a database with unstructured information is a long-standing problem in industry and research that encompasses problems of extraction, cleaning, and integration. Recent names used for this problem include dealing with dark data and knowledge base construction (KBC). In this work, we describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems, and we present techniques to make the KBC process more efficient. We observe that the KBC process is iterative, and we develop techniques to incrementally produce inference results for KBC systems. We propose two methods for incremental inference, based respectively on sampling and variational techniques. We also study the tradeoff space of these methods and develop a simple rule-based optimizer. DeepDive includes all of these contributions, and we evaluate DeepDive on five KBC systems, showing that it can speed up KBC inference tasks by up to two orders of magnitude with negligible impact on quality. PMID:27144081

  8. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method-selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The benefits provided by the system favor better performance of construction projects. PMID:24453925

  9. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic contents, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand-sides of rules contain high level and low level features, while the right-hand-sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
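
    The MYCIN-style combination of evidence mentioned above can be stated compactly. The Python sketch below shows the standard certainty-factor combination rule applied to several pieces of evidence for the same class; the class name and certainty values are invented for illustration and are not taken from the paper's rule base.

        # Sketch of MYCIN-style certainty-factor combination for merging evidence
        # from several fired rules about the same video class.

        from functools import reduce

        def combine_cf(cf1, cf2):
            """Standard MYCIN combination of two certainty factors in [-1, 1]."""
            if cf1 >= 0 and cf2 >= 0:
                return cf1 + cf2 * (1 - cf1)
            if cf1 < 0 and cf2 < 0:
                return cf1 + cf2 * (1 + cf1)
            return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

        def combine_all(cfs):
            return reduce(combine_cf, cfs)

        if __name__ == "__main__":
            # e.g. motion evidence (0.6) and caption-text evidence (0.5) for "news"
            print(round(combine_all([0.6, 0.5]), 2))        # 0.8
            # conflicting colour evidence (-0.3) lowers the combined belief
            print(round(combine_all([0.6, 0.5, -0.3]), 2))  # ~0.71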

  10. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2000-12-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic contents, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand-sides of rules contain high level and low level features, while the right-hand-sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  11. RKB: a Semantic Web knowledge base for RNA

    PubMed Central

    2010-01-01

    Increasingly sophisticated knowledge about RNA structure and function requires an inclusive knowledge representation that facilitates the integration of independently generated information arising from such efforts as genome sequencing projects, microarray analyses, structure determination and RNA SELEX experiments. While RNAML, an XML-based representation, has been proposed as an exchange format for a select subset of information, it lacks domain-specific semantics that are essential for answering questions that require expert knowledge. Here, we describe an RNA knowledge base (RKB) for structure-based knowledge using RDF/OWL Semantic Web technologies. RKB extends a number of ontologies and contains basic terminology for nucleic acid composition along with context/model-specific structural features such as sugar conformations, base pairings and base stackings. RKB (available at http://semanticscience.org/projects/rkb) is populated with PDB entries and MC-Annotate structural annotation. We show queries to the RKB using description logic reasoning, thus opening the door to question answering over independently-published RNA knowledge using Semantic Web technologies. PMID:20626922
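
    To give a flavour of how such an RDF knowledge base can be queried, the sketch below uses the rdflib Python library to run a SPARQL query. The file name, namespace, and predicate are placeholders and do not reflect RKB's actual ontology terms or its description-logic reasoning.

        # Sketch of querying an RDF knowledge base with SPARQL via rdflib.
        # The graph file, namespace and predicate are hypothetical placeholders.

        from rdflib import Graph

        g = Graph()
        g.parse("rkb_subset.rdf")   # hypothetical local dump of part of the KB

        query = """
        PREFIX ex: <http://example.org/rna#>
        SELECT ?residue ?conformation
        WHERE {
            ?residue ex:hasSugarConformation ?conformation .
        }
        LIMIT 10
        """

        for residue, conformation in g.query(query):
            print(residue, conformation)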

  12. Knowledge-based control of an adaptive interface

    NASA Technical Reports Server (NTRS)

    Lachman, Roy

    1989-01-01

    The analysis, development strategy, and preliminary design for an intelligent, adaptive interface are reported. The design philosophy couples knowledge-based system technology with standard human factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent interface prototype because text-processing software can be developed initially as fully functional but limited with a small set of commands. The program's complexity then can be increased incrementally. A model of the operator's behavior and three types of instructions to the underlying application software are included in the rule base. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.
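
    The loop described above, in which rule antecedents are matched against facts about the operator and the consequents of fired rules are emitted as commands to the underlying software, is essentially forward chaining. The toy Python sketch below illustrates the pattern; the rule contents and command names are invented and are not the system's actual rule base.

        # Toy forward-chaining loop: rules whose antecedents are all present in the
        # current facts fire, and their consequents are emitted as commands.
        # Rule contents and command names are illustrative only.

        rules = [
            ({"task:editing", "operator:novice"}, "enable_verbose_prompts"),
            ({"task:editing", "errors:frequent"}, "suggest_spell_check"),
        ]

        def fire(facts):
            commands = []
            for antecedents, consequent in rules:
                if antecedents <= facts:          # all antecedents satisfied
                    commands.append(consequent)
            return commands

        print(fire({"task:editing", "operator:novice", "errors:frequent"}))
        # ['enable_verbose_prompts', 'suggest_spell_check']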

  13. The AI Bus architecture for distributed knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain

    1991-01-01

    The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources and to each other without violating their security.

  14. FunSecKB: the Fungal Secretome KnowledgeBase

    PubMed Central

    Lum, Gengkon; Min, Xiang Jia

    2011-01-01

    The Fungal Secretome KnowledgeBase (FunSecKB) provides a resource of secreted fungal proteins, i.e. secretomes, identified from all available fungal protein data in the NCBI RefSeq database. The secreted proteins were identified using a well evaluated computational protocol which includes SignalP, WolfPsort and Phobius for signal peptide or subcellular location prediction, TMHMM for identifying membrane proteins, and PS-Scan for identifying endoplasmic reticulum (ER) target proteins. The entries were mapped to the UniProt database and any annotations of subcellular locations that were either manually curated or computationally predicted were included in FunSecKB. Using a web-based user interface, the database is searchable, browsable and downloadable by using NCBI’s RefSeq accession or gi number, UniProt accession number, keyword or by species. A BLAST utility was integrated to allow users to query the database by sequence similarity. A user submission tool was implemented to support community annotation of subcellular locations of fungal proteins. With the complete fungal data from RefSeq and associated web-based tools, FunSecKB will be a valuable resource for exploring the potential applications of fungal secreted proteins. Database URL: http://proteomics.ysu.edu/secretomes/fungi.php PMID:21300622

  15. Automatic tumor segmentation using knowledge-based techniques.

    PubMed

    Clark, M C; Hall, L O; Goldgof, D B; Velthuizen, R; Murtagh, F R; Silbiger, M S

    1998-04-01

    A system that automatically segments and labels glioblastoma-multiforme tumors in magnetic resonance images (MRI's) of the human brain is presented. The MRI's consist of T1-weighted, proton density, and T2-weighted feature images and are processed by a system which integrates knowledge-based (KB) techniques with multispectral analysis. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with cluster centers for each class are provided to a rule-based expert system which extracts the intracranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intracranial region, with region analysis used in performing the final tumor labeling. This system has been trained on three volume data sets and tested on thirteen unseen volume data sets acquired from a single MRI system. The KB tumor segmentation was compared with supervised, radiologist-labeled "ground truth" tumor volumes and supervised k-nearest neighbors tumor segmentations. The results of this system generally correspond well to ground truth, both on a per slice basis and more importantly in tracking total tumor volume during treatment over time. PMID:9688151
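
    The first two stages of the pipeline, unsupervised clustering of multispectral voxels followed by simple rules over the resulting clusters, can be sketched as below. The feature layout, number of clusters, and T2-brightness rule are illustrative assumptions, not the published system's parameters.

        # Sketch of the first stages only: cluster multispectral MRI voxels, then
        # apply a simple rule over the cluster centers. Feature layout, number of
        # clusters and the T2-intensity rule are illustrative assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def initial_segmentation(t1, pd, t2, n_clusters=8, t2_suspicion=0.7):
            """t1, pd, t2: 2-D slices of equal shape, intensities scaled to [0, 1]."""
            features = np.stack([t1.ravel(), pd.ravel(), t2.ravel()], axis=1)
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
            labels = km.labels_.reshape(t1.shape)
            # Rule: clusters whose center is bright on T2 are flagged as suspected tumor.
            suspect = {k for k, center in enumerate(km.cluster_centers_)
                       if center[2] > t2_suspicion}
            return labels, np.isin(labels, list(suspect))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            t1, pd, t2 = (rng.random((64, 64)) for _ in range(3))
            labels, suspected = initial_segmentation(t1, pd, t2)
            print(labels.shape, int(suspected.sum()))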

  16. Compiling knowledge-based systems from KEE to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications - most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Here we describe the first efforts to develop a system for compiling KBS developed in KEE to Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights into Ada as an artificial intelligence programming language, potential solutions to some of the engineering difficulties encountered in this early work, and inspiration for future system development.

  17. Effective domain-dependent reuse in medical knowledge bases.

    PubMed

    Dojat, M; Pachet, F

    1995-12-01

    Knowledge reuse is now a critical issue for most developers of medical knowledge-based systems. As a rule, reuse is addressed from an ambitious, knowledge-engineering perspective that focuses on reusable general purpose knowledge modules, concepts, and methods. However, such a general goal fails to take into account the specific aspects of medical practice. From the point of view of the knowledge engineer, whose goal is to capture the specific features and intricacies of a given domain, this approach addresses the wrong level of generality. In this paper, we adopt a more pragmatic viewpoint, introducing the less ambitious goal of "domain-dependent limited reuse" and suggesting effective means of achieving it in practice. In a knowledge representation framework combining objects and production rules, we propose three mechanisms emerging from the combination of object-oriented programming and rule-based programming. We show that these mechanisms contribute to achieving limited reuse and to introducing useful, limited variations in medical expertise. PMID:8770532

  18. Does the use of health care and special school services, prior to admission for psychiatric inpatient treatment, differ between adolescents housed by child welfare services and those living with their biological parent(s)?

    PubMed

    Laukkanen, Matti; Hakko, Helinä; Räsänen, Pirkko; Riala, Kaisa

    2013-10-01

    We examined whether the use of health care and special school services, prior to admission for psychiatric inpatient treatment, differed between adolescents from child welfare units and those living at their parental home. 208 boys and 300 girls aged 12-17 years were admitted to a psychiatric hospital between 2001 and 2006. Child welfare adolescents had used more health services/treatments prior to psychiatric hospital admission than adolescents living with their biological family. The best discriminating factors between study groups for both genders were previous psychiatric hospitalizations, unemployed parents, use of special school services and self-perceived serious anxiety/tension or trouble controlling violent behavior. Repeated school grades and previous use of psychotropic medications were discriminating factors only in girls. Adolescents in child welfare deserve adequate mental health evaluations at an early stage, with referral to appropriate adolescent psychiatric services if required. Appropriate service provision and properly planned treatments may reduce the need for intensive and sometimes unnecessary psychiatric inpatient treatment. PMID:23392732

  19. Development of a Prototype Model-Form Uncertainty Knowledge Base

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2016-01-01

    Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty. The term model-form means that among the choices to be made during a design process within an analysis, there are different forms of the analysis process, which each give different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structures analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions, so as to enable the possible reduction of uncertainties in the design process through resource investment. An extensive literature search has been conducted to identify and quantify typical model-form uncertainties present within aerospace design. An initial attempt has been made to assemble the results of this literature search into a searchable KB, usable in real time during probabilistic design sessions. A concept of operations and the basic structure of a model-form uncertainty KB are described. Key operations within the KB are illustrated. Current limitations in the KB, and possible workarounds are explained.

  20. Peyronie's disease: urologist's knowledge base and practice patterns.

    PubMed

    Sullivan, J; Moskovic, D; Nelson, C; Levine, L; Mulhall, J

    2015-03-01

    Peyronie's disease (PD) is a poorly understood clinical entity. We performed an in-depth analysis of the knowledge base and current practice patterns of urologists in the United States. A 46-question instrument was created by two experienced PD practitioners and emailed to current American Urology Association members nationally. Questions were either multiple-choice or used a visual analogue scale. Responses regarding treatment options were answered by ranking a list of utilized therapies by preference. Data were aggregated and mean values for each category compiled. Responses were received from 639 urologists (67% in private practice). Almost all (98%) reported seeing PD patients with regularity. Twenty-six percent believed PD prevalence is ≤1%, a small fraction (5%) reporting prevalence as ≥10%. Only 3% referred patients to a subspecialist in PD. Twenty-six percent believed PD is a condition that does not warrant any treatment. The preferred initial management was with oral agents (81%). Of those who used intralesional injections as first line, verapamil was most commonly selected (67%). Seventy-nine percent perform surgery for PD with 86% reporting the optimal timing at ≥12 months after onset of symptoms. Seventy percent perform penile plication, most commonly the Nesbit technique (54%), 61% perform implant surgery and 37% reported performing plaque incision/excision and grafting. Although PD is now a more recognized condition, there are still large variances in knowledge and management strategies. Prospective clinical studies are needed to elucidate standardized management guidelines and a more cohesive strategy to manage this common disease. PMID:25331235

  1. Temporal reasoning for diagnosis in a causal probabilistic knowledge base.

    PubMed

    Long, W

    1996-07-01

    We have added temporal reasoning to the Heart Disease Program (HDP) to take advantage of the temporal constraints inherent in cardiovascular reasoning. Some processes take place over minutes while others take place over months or years and a strictly probabilistic formalism can generate hypotheses that are impossible given the temporal relationships involved. The HDP has temporal constraints on the causal relations specified in the knowledge base and temporal properties on the patient input provided by the user. These are used in two ways. First, they are used to constrain the generation of the pre-computed causal pathways through the model that speed the generation of hypotheses. Second, they are used to generate time intervals for the instantiated nodes in the hypotheses, which are matched and adjusted as nodes are added to each evolving hypothesis. This domain offers a number of challenges for temporal reasoning. Since the nature of diagnostic reasoning is inferring a causal explanation from the evidence, many of the temporal intervals have few constraints and the reasoning has to make maximum use of those that exist. Thus, the HDP uses a temporal interval representation that includes the earliest and latest beginning and ending specified by the constraints. Some of the disease states can be corrected but some of the manifestations may remain. For example, a valve disease such as aortic stenosis produces hypertrophy that remains long after the valve has been replaced. This requires multiple time intervals to account for the existing findings. This paper discusses the issues and solutions that have been developed for temporal reasoning integrated with a pseudo-Bayesian probabilistic network in this challenging domain for diagnosis. PMID:8830922
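
    The interval representation described, with earliest and latest beginning and ending, supports constraint tightening by intersection as nodes are added to a hypothesis. The small Python sketch below illustrates this; the attribute names and time units are assumptions for illustration.

        # Sketch of the earliest/latest begin/end interval representation and the
        # tightening (intersection) of two constraints on the same hypothesis node.
        # Attribute names and units (arbitrary time units) are assumptions.

        from dataclasses import dataclass

        @dataclass
        class TemporalInterval:
            earliest_begin: float
            latest_begin: float
            earliest_end: float
            latest_end: float

            def tighten(self, other):
                """Intersect two sets of bounds; returns None if inconsistent."""
                t = TemporalInterval(
                    max(self.earliest_begin, other.earliest_begin),
                    min(self.latest_begin, other.latest_begin),
                    max(self.earliest_end, other.earliest_end),
                    min(self.latest_end, other.latest_end),
                )
                if t.earliest_begin > t.latest_begin or t.earliest_end > t.latest_end:
                    return None
                return t

        # e.g. hypertrophy began well before presentation but persists after surgery
        a = TemporalInterval(-20, -1, 0, 100)
        b = TemporalInterval(-10, -2, 0, 50)
        print(a.tighten(b))   # TemporalInterval(earliest_begin=-10, latest_begin=-2, ...)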

  2. Comparative development of knowledge-based bioeconomy in the European Union and Turkey.

    PubMed

    Celikkanat Ozan, Didem; Baran, Yusuf

    2014-09-01

    Biotechnology, defined as the technological application that uses biological systems and living organisms, or their derivatives, to create or modify diverse products or processes, is widely used for healthcare, agricultural and environmental applications. The continuity in industrial applications of biotechnology enables the rise and development of the bioeconomy concept. Bioeconomy, including all applications of biotechnology, is defined as the translation of knowledge received from life sciences into new, sustainable, environment-friendly and competitive products. Advanced research and eco-efficient processes within the scope of the bioeconomy promise a healthier and more sustainable life. Knowledge-based bioeconomy, with its economic, social and environmental potential, has already been brought to the research agendas of European Union (EU) countries. The aim of this study is to summarize the development of the knowledge-based bioeconomy in EU countries and to evaluate Turkey's current situation in comparison. EU-funded biotechnology research projects under FP6 and FP7 and nationally-funded biotechnology projects under The Scientific and Technological Research Council of Turkey (TUBITAK) Academic Research Funding Program Directorate (ARDEB) and Technology and Innovation Funding Programs Directorate (TEYDEB) were examined. In the context of this study, the main research areas and subfields which have been funded, the budget spent, the number of projects funded since 2003 both nationally and EU-wide, and the gaps and overlapping topics were analyzed. Based on these results, detailed suggestions for Turkey are proposed. The research results are expected to be used as a roadmap for coordinating the stakeholders of the bioeconomy and integrating Turkish Research Areas into European Research Areas. PMID:23815559

  3. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks

    PubMed Central

    Khan, Taimoor; De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving the desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in the neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting the response in the extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results. PMID:27382616
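
    One common way to embed prior knowledge in a neural model of this kind is the so-called difference (hybrid) method: a coarse prior formula predicts the response and a small network learns only the residual, which reduces the number of training patterns needed. The sketch below illustrates that general scheme with scikit-learn's MLPRegressor; the prior formula, feature, and data are invented and this is not the authors' model.

        # Sketch of the "difference" knowledge-based neural modelling scheme: a coarse
        # prior formula predicts the response and a small network learns the residual.
        # The prior formula, feature and synthetic data are invented for illustration.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        def coarse_prior(slot_length_mm):
            # Invented placeholder for an approximate analytical relation
            return 2.4 + 0.05 * slot_length_mm

        rng = np.random.default_rng(0)
        X = rng.uniform(2.0, 12.0, size=(40, 1))                # slot length (mm)
        y_true = 2.4 + 0.05 * X[:, 0] + 0.1 * np.sin(X[:, 0])   # "measured" resonance (GHz)

        residual = y_true - coarse_prior(X[:, 0])
        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                           random_state=0).fit(X, residual)

        def predict(slot_length_mm):
            x = np.array([[slot_length_mm]])
            return coarse_prior(slot_length_mm) + net.predict(x)[0]

        print(round(predict(7.0), 3))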

  4. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks.

    PubMed

    Khan, Taimoor; De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving the desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in the neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting the response in the extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results. PMID:27382616

  5. Knowledge-based image bandwidth compression and enhancement

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Tescher, Andrew G.

    1987-01-01

    Techniques for incorporating a priori knowledge in the digital coding and bandwidth compression of image data are described and demonstrated. An algorithm for identifying and highlighting thin lines and point objects prior to coding is presented, and the precoding enhancement of a slightly smoothed version of the image is shown to be more effective than enhancement of the original image. Also considered are readjustment of the local distortion parameter and variable-block-size coding. The line-segment criteria employed in the classification are listed in a table, and sample images demonstrating the effectiveness of the enhancement techniques are presented.
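
    The precoding enhancement described, smoothing the image slightly and then boosting thin lines and point objects, resembles unsharp masking. The sketch below illustrates that idea with numpy and scipy; the smoothing width and gain are arbitrary choices, not the paper's parameters.

        # Sketch of precoding enhancement in the spirit described: smooth slightly,
        # treat the difference from the smoothed image as thin/high-frequency
        # structure, and boost it before coding. Kernel width and gain are arbitrary.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def precoding_enhance(image, sigma=1.0, gain=1.5):
            smoothed = gaussian_filter(image.astype(float), sigma=sigma)
            detail = image - smoothed              # thin lines and point objects
            return np.clip(smoothed + gain * detail, 0, 255)

        if __name__ == "__main__":
            img = np.zeros((32, 32))
            img[16, :] = 200.0                     # a thin bright line
            out = precoding_enhance(img)
            print(img.max(), round(out.max(), 1))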

  6. Knowledge-based biomedical word sense disambiguation: comparison of approaches

    PubMed Central

    2010-01-01

    Background Word sense disambiguation (WSD) algorithms attempt to select the proper sense of ambiguous terms in text. Resources like the UMLS provide a reference thesaurus to be used to annotate the biomedical literature. Statistical learning approaches have produced good results, but the size of the UMLS makes the production of training data infeasible to cover all the domain. Methods We present research on existing WSD approaches based on knowledge bases, which complement the studies performed on statistical learning. We compare four approaches which rely on the UMLS Metathesaurus as the source of knowledge. The first approach compares the overlap of the context of the ambiguous word to the candidate senses based on a representation built out of the definitions, synonyms and related terms. The second approach collects training data for each of the candidate senses to perform WSD based on queries built using monosemous synonyms and related terms. These queries are used to retrieve MEDLINE citations. Then, a machine learning approach is trained on this corpus. The third approach is a graph-based method which exploits the structure of the Metathesaurus network of relations to perform unsupervised WSD. This approach ranks nodes in the graph according to their relative structural importance. The last approach uses the semantic types assigned to the concepts in the Metathesaurus to perform WSD. The context of the ambiguous word and semantic types of the candidate concepts are mapped to Journal Descriptors. These mappings are compared to decide among the candidate concepts. Results are provided estimating accuracy of the different methods on the WSD test collection available from the NLM. Conclusions We have found that the last approach achieves better results compared to the other methods. The graph-based approach, using the structure of the Metathesaurus network to estimate the relevance of the Metathesaurus concepts, does not perform well compared to the first two
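
    The first approach above, scoring each candidate sense by the overlap between the ambiguous word's context and a bag of words built from the sense's definitions, synonyms, and related terms, is Lesk-like. The toy Python sketch below illustrates the scoring step; the sense profiles are invented and are not UMLS Metathesaurus content.

        # Toy sketch of overlap-based WSD: each candidate sense carries a bag of
        # words built from its definition, synonyms and related terms, and the sense
        # with the largest overlap with the context wins. Sense profiles are invented.

        def disambiguate(context_words, sense_profiles):
            context = set(w.lower() for w in context_words)
            scores = {sense: len(context & profile)
                      for sense, profile in sense_profiles.items()}
            return max(scores, key=scores.get), scores

        senses = {
            "cold_temperature": {"weather", "winter", "temperature", "freezing"},
            "common_cold":      {"virus", "cough", "fever", "infection", "symptom"},
        }
        context = "patient presents with cough and low fever".split()
        print(disambiguate(context, senses))
        # ('common_cold', {'cold_temperature': 0, 'common_cold': 2})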

  7. Soil water repellency: the knowledge base, advances and challenges

    NASA Astrophysics Data System (ADS)

    Doerr, S. H.

    2012-04-01

    The topic of soil water repellency (SWR or soil hydrophobicity) has moved from being perhaps a little-known curiosity a few decades ago to a well-established sub-discipline of soil physics and soil hydrology. In terms of the number of journal publications, SWR is comparable with other physical soil properties or processes such as crusting, aggregation or preferential flow. SWR refers to a condition in which soil does not wet readily when in contact with water. This may be evident at the soil surface, when SWR leads to prolonged ponding on soils despite the presence of sufficient pore openings, or in the soil matrix, as manifested by enhanced uneven wetting and preferential flow that is not caused by structural inhomogeneity. Amongst major milestones advancing the knowledge base of SWR has been the recognition that: (1) many, if not most, soils can exhibit SWR when the soil moisture content falls below a critical threshold, (2) it can be induced (and destroyed) during vegetation fires, but many soils exhibit SWR irrespective of burning, (3) it can be caused, in principle, by a large variety of naturally-abundant chemical compounds, (4) it is typically highly variable in space, time and its degree (severity and persistence), and (5) its impacts on, for example, soil hydrology, erosion and plant growth have the potential to be very substantial, but also that impacts are often minor for naturally vegetated and undisturbed soils. Amongst the key challenges that remain are: (a) predicting accurately the conditions when soils prone to SWR actually develop this property, (b) unravelling, for fire-affected environments, to what degree any presence or absence of SWR is due to fire and post-fire recovery, (c) the exact nature and origin of the material causing SWR at the molecular level in different environments, (d) understanding the implications of the spatial and temporal variability at different scales, (e) the capability to model and predict under which environmental conditions

  8. A national knowledge-based crop recognition in Mediterranean environment

    NASA Astrophysics Data System (ADS)

    Cohen, Yafit; Shoshany, Maxim

    2002-08-01

    Population growth, urban expansion, land degradation, civil strife and war may place plant natural resources for food and agriculture at risk. Crop and yield monitoring provides basic information necessary for wise management of these resources. Satellite remote sensing techniques have proven to be cost-effective in widespread agricultural lands in Africa, America, Europe and Australia. However, they have had limited success in Mediterranean regions that are characterized by a high rate of spatio-temporal ecological heterogeneity and high fragmentation of farming lands. An integrative knowledge-based approach is needed for this purpose, which combines imagery and geographical data within the framework of an intelligent recognition system. This paper describes the development of such a crop recognition methodology and its application to an area that comprises approximately 40% of the cropland in Israel. This area contains eight crop types that represent 70% of Israeli agricultural production. Multi-date Landsat TM images representing seasonal vegetation cover variations were converted to normalized difference vegetation index (NDVI) layers. Field boundaries were delineated by merging Landsat data with SPOT-panchromatic images. Crop recognition was then achieved in two phases: by clustering multi-temporal NDVI layers using unsupervised classification and then applying 'split-and-merge' rules to these clusters. These rules were formalized through comprehensive learning of relationships between crop types, imagery properties (spectral and NDVI) and auxiliary data including agricultural knowledge, precipitation and soil types. Assessment of the recognition results using ground data from the Israeli Agriculture Ministry indicated an average recognition accuracy exceeding 85%, which accounts for both omission and commission errors. The two-phase strategy implemented in this study is apparently successful for heterogeneous regions. This is due to the fact that it allows
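
    The first recognition phase, converting multi-date imagery to NDVI and clustering the resulting temporal profiles, could look like the following Python sketch. The band inputs, number of dates, and number of clusters are assumptions for illustration.

        # Sketch of the first phase only: compute NDVI per date, stack the dates into
        # a temporal profile per pixel, and cluster the profiles. Band order, number
        # of dates and number of clusters are assumptions.

        import numpy as np
        from sklearn.cluster import KMeans

        def ndvi(red, nir, eps=1e-6):
            return (nir - red) / (nir + red + eps)

        def cluster_ndvi_profiles(red_stack, nir_stack, n_clusters=8):
            """red_stack, nir_stack: arrays of shape (n_dates, rows, cols)."""
            profiles = ndvi(red_stack, nir_stack)                # (n_dates, rows, cols)
            n_dates, rows, cols = profiles.shape
            flat = profiles.reshape(n_dates, -1).T               # one profile per pixel
            labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=0).fit_predict(flat)
            return labels.reshape(rows, cols)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            red = rng.random((4, 32, 32))
            nir = rng.random((4, 32, 32))
            print(cluster_ndvi_profiles(red, nir).shape)         # (32, 32)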

  9. Systems, methods and apparatus for verification of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Rash, James L. (Inventor); Erickson, John D. (Inventor); Gracinin, Denis (Inventor); Rouff, Christopher A. (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments, domain knowledge is translated into a knowledge-based system. In some embodiments, a formal specification is derived from rules of a knowledge-based system, the formal specification is analyzed, and flaws in the formal specification are used to identify and correct errors in the domain knowledge, from which a knowledge-based system is translated.

  10. Knowledge-based factor analysis of multidimensional nuclear medicine image sequences

    NASA Astrophysics Data System (ADS)

    Yap, Jeffrey T.; Chen, Chin-Tu; Cooper, Malcolm; Treffert, Jon D.

    1994-05-01

    We have developed a knowledge-based approach to analyzing dynamic nuclear medicine data sets using factor analysis. Prior knowledge is used as constraints to produce factor images and their associated time functions which are physically and physiologically realistic. These methods have been applied to both planar and tomographic image sequences acquired using various single-photon emitting and positron emitting radiotracers. Computer-simulated data, non-human primate studies, and human clinical studies have been used to develop and evaluate the methodology. The organ systems studied include the kidneys, heart, brain, liver, and bone. The factors generated represent various isolated aspects of physiologic function, such as tissue perfusion and clearance. In some clinical studies, the factors have indicated the potential to isolate diseased tissue from normally functioning tissue. In addition, the factor analysis of data acquired using newly developed radioligands has shown the ability to differentiate the specific binding of the radioligand to the targeted receptors from the non-specific binding. This suggests the potential use of factor analysis in the development and evaluation of radiolabeled compounds as well as in the investigation of specific receptor systems and their role in diagnosing disease.
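
    One simple way to impose physical constraints of the kind mentioned, non-negative factor images and non-negative time curves, is non-negative matrix factorization. The sketch below applies scikit-learn's NMF to a synthetic dynamic sequence; it is only an analogue of constrained factor analysis, not the authors' method, and the data and factor count are invented.

        # Sketch of constrained factor analysis of a dynamic image sequence using
        # non-negative matrix factorization: factors (time curves) and factor images
        # are kept non-negative. The synthetic data and factor count are illustrative.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        n_frames, n_pixels, n_factors = 30, 16 * 16, 2

        # Synthetic ground truth: a fast-clearing and a slow-accumulating tissue curve.
        t = np.linspace(0, 1, n_frames)
        curves = np.stack([np.exp(-5 * t), 1 - np.exp(-3 * t)])       # (2, n_frames)
        images = rng.random((2, n_pixels))                            # (2, n_pixels)
        sequence = curves.T @ images + 0.01 * rng.random((n_frames, n_pixels))

        model = NMF(n_components=n_factors, init="nndsvda", max_iter=500, random_state=0)
        time_curves = model.fit_transform(sequence)    # (n_frames, n_factors)
        factor_images = model.components_              # (n_factors, n_pixels)
        print(time_curves.shape, factor_images.shape)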

  11. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    SciTech Connect

    Thiele, Ines; Hyduke, Daniel R.; Steeb, Benjamin; Fankam, Guy; Allen, Douglas K.; Bazzani, Susanna; Charusanti, Pep; Chen, Feng-Chi; Fleming, Ronan MT; Hsiung, Chao A.; De Keersmaecker, Sigrid CJ; Liao, Yu-Chieh; Marchal, Kathleen; Mo, Monica L.; Özdemir, Emre; Raghunathan, Anu; Reed, Jennifer L.; Shin, Sook-Il; Sigurbjörnsdóttir, Sara; Steinmann, Jonas; Sudarsan, Suresh; Swainston, Neil; Thijs, Inge M.; Zengler, Karsten; Palsson, Bernhard O.; Adkins, Joshua N.; Bumann, Dirk

    2011-01-01

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium is a human pathogen that causes various diseases, and its increasing antibiotic resistance poses a public health problem. Here, we describe a community-driven effort, in which more than 20 experts in S. Typhimurium biology and systems biology collaborated to reconcile and expand the S. Typhimurium BiGG knowledge-base. The consensus MR was obtained starting from two independently developed MRs for S. Typhimurium. Key results of this reconstruction jamboree include i) development and implementation of a community-based workflow for MR annotation and reconciliation; ii) incorporation of thermodynamic information; and iii) use of the consensus MR to identify potential multi-target drug therapy approaches. Finally, with the growing number of parallel MRs, a structured, community-driven approach will be necessary to maximize quality while increasing the adoption of MRs in experimental design and interpretation.

  12. Creating illusions of knowledge: learning errors that contradict prior knowledge.

    PubMed

    Fazio, Lisa K; Barber, Sarah J; Rajaram, Suparna; Ornstein, Peter A; Marsh, Elizabeth J

    2013-02-01

    Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks before they read stories that contained errors (e.g., "Franklin invented the light bulb"). On a later general-knowledge test, participants reproduced story errors despite previously answering the questions correctly. This misinformation effect was found even for questions that were answered correctly on the initial test with the highest level of confidence. Furthermore, prior knowledge offered no protection against errors entering the knowledge base; the misinformation effect was equivalent for previously known and unknown facts. Errors can enter the knowledge base even when learners have the knowledge necessary to catch the errors. PMID:22612770

  13. Knowledge-based critiquing of graphical user interfaces with CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jianping; Murphy, Elizabeth D.; Carter, Leslie E.; Truszkowski, Walter F.

    1994-01-01

    CHIMES is a critiquing tool that automates the process of checking graphical user interface (GUI) designs for compliance with human factors design guidelines and toolkit style guides. The current prototype identifies instances of non-compliance and presents problem statements, advice, and tips to the GUI designer. Changes requested by the designer are made automatically, and the revised GUI is re-evaluated. A case study conducted at NASA-Goddard showed that CHIMES has the potential for dramatically reducing the time formerly spent in hands-on consistency checking. Capabilities recently added to CHIMES include exception handling and rule building. CHIMES is intended for use prior to usability testing as a means, for example, of catching and correcting syntactic inconsistencies in a larger user interface.

  14. ANAP: An Integrated Knowledge Base for Arabidopsis Protein Interaction Network Analysis

    PubMed Central

    Wang, Congmao; Marshall, Alex; Zhang, Dabing; Wilson, Zoe A.

    2012-01-01

    Protein interactions are fundamental to the molecular processes occurring within an organism and can be utilized in network biology to help organize, simplify, and understand biological complexity. Currently, there are more than 10 publicly available Arabidopsis (Arabidopsis thaliana) protein interaction databases. However, there are limitations with these databases, including different types of interaction evidence, a lack of defined standards for protein identifiers, differing levels of information, and, critically, a lack of integration between them. In this paper, we present an interactive bioinformatics Web tool, ANAP (Arabidopsis Network Analysis Pipeline), which serves to effectively integrate the different data sets and maximize access to available data. ANAP has been developed for Arabidopsis protein interaction integration and network-based study to facilitate functional protein network analysis. ANAP integrates 11 Arabidopsis protein interaction databases, comprising 201,699 unique protein interaction pairs, 15,208 identifiers (including 11,931 The Arabidopsis Information Resource Arabidopsis Genome Initiative codes), 89 interaction detection methods, 73 species that interact with Arabidopsis, and 6,161 references. ANAP can be used as a knowledge base for constructing protein interaction networks based on user input and supports both direct and indirect interaction analysis. It has an intuitive graphical interface allowing easy network visualization and provides extensive detailed evidence for each interaction. In addition, ANAP displays the gene and protein annotation in the generated interactive network with links to The Arabidopsis Information Resource, the AtGenExpress Visualization Tool, the Arabidopsis 1,001 Genomes GBrowse, the Protein Knowledgebase, the Kyoto Encyclopedia of Genes and Genomes, and the Ensembl Genome Browser to significantly aid functional network analysis. The tool is available open access at http://gmdd.shgmo.org/Computational-Biology

  15. Using the Gene Ontology to Enrich Biological Pathways

    SciTech Connect

    Sanfilippo, Antonio P.; Baddeley, Robert L.; Beagley, Nathaniel; McDermott, Jason E.; Riensche, Roderick M.; Taylor, Ronald C.; Gopalan, Banu

    2009-12-10

    Most current approaches to automatic pathway generation are based on a reverse engineering approach in which pathway plausibility is solely derived from microarray gene expression data. These approaches tend to lack in generality and offer no independent validation as they are too reliant on the pathway observables that guide pathway generation. By contrast, alternative approaches that use prior biological knowledge to validate pathways inferred from gene expression data may err in the opposite direction as the prior knowledge is usually not sufficiently tuned to the pathology of focus. In this paper, we present a novel pathway generation approach that combines insights from the reverse engineering and knowledge-based approaches to increase the biological plausibility of automatically generated regulatory networks and describe an application of this approach to transcriptional data from a mouse model of neuroprotection during stroke.
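
    The combination described, candidate edges inferred from expression correlation and then filtered by prior biological knowledge such as shared Gene Ontology annotations, can be sketched as follows. The gene names, GO terms, expression values, and correlation threshold are invented for illustration.

        # Sketch of combining reverse engineering with prior knowledge: candidate
        # edges come from expression correlation, and an edge is kept only if the two
        # genes also share at least one Gene Ontology annotation. All data invented.

        import numpy as np

        expression = {           # gene -> expression profile across samples
            "GeneA": np.array([1.0, 2.1, 3.0, 4.2]),
            "GeneB": np.array([1.1, 2.0, 3.2, 4.0]),
            "GeneC": np.array([4.0, 3.1, 2.0, 1.2]),
        }
        go_terms = {             # gene -> set of GO annotations (invented)
            "GeneA": {"GO:0006915", "GO:0008219"},
            "GeneB": {"GO:0006915"},
            "GeneC": {"GO:0007049"},
        }

        def plausible_edges(corr_threshold=0.9):
            genes = sorted(expression)
            edges = []
            for i, g1 in enumerate(genes):
                for g2 in genes[i + 1:]:
                    corr = abs(np.corrcoef(expression[g1], expression[g2])[0, 1])
                    if corr >= corr_threshold and (go_terms[g1] & go_terms[g2]):
                        edges.append((g1, g2, round(corr, 3)))
            return edges

        print(plausible_edges())   # [('GeneA', 'GeneB', ...)] -- GeneC is excluded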

  16. Guidelines for the verification and validation of expert system software and conventional software: Evaluation of knowledge base certification methods. Volume 4

    SciTech Connect

    Miller, L.A.; Hayes, J.E.; Mirsky, S.M.

    1995-03-01

    This report presents the results of the Knowledge Base Certification activity of the expert systems verification and validation (V&V) guideline development project which is jointly funded by the US Nuclear Regulatory Commission and the Electric Power Research Institute. The ultimate objective is the formulation of guidelines for the V&V of expert systems for use in nuclear power applications. This activity is concerned with the development and testing of various methods for assuring the quality of knowledge bases. The testing procedure used was that of a behavioral experiment, the first known such evaluation of any type of V&V activity. The value of such experimentation is its capability to provide empirical evidence for -- or against -- the effectiveness of plausible methods in helping people find problems in knowledge bases. The three-day experiment included 20 participants from three nuclear utilities, the Nuclear Regulatory Commission's Technical Training Center, the University of Maryland, EG&G Idaho, and SAIC. The study used two real nuclear expert systems: a boiling water reactor emergency operating procedures tracking system and a pressurized water reactor safety assessment system. Ten participants were assigned to each of the expert systems. All participants were trained in and then used a sequence of four different V&V methods selected as being the best and most appropriate for study on the basis of prior evaluation activities. These methods either involved the analysis and tracing of requirements to elements in the knowledge base (requirements grouping and requirements tracing) or else involved direct inspection of the knowledge base for various kinds of errors. Half of the subjects within each system group used the best manual variant of the V&V methods (the control group), while the other half were supported by the results of applying real or simulated automated tools to the knowledge bases (the experimental group).

  17. End-user oriented language to develop knowledge-based expert systems

    SciTech Connect

    Ueno, H.

    1983-01-01

    A description is given of the COMEX (compact knowledge based expert system) expert system language for application-domain users who want to develop a knowledge-based expert system by themselves. The COMEX system was written in FORTRAN and works on a microcomputer. COMEX is being used in several application domains such as medicine, education, and industry. 7 references.

  18. The Knowledge Base as an Extension of Distance Learning Reference Service

    ERIC Educational Resources Information Center

    Casey, Anne Marie

    2012-01-01

    This study explores knowledge bases as extension of reference services for distance learners. Through a survey and follow-up interviews with distance learning librarians, this paper discusses their interest in creating and maintaining a knowledge base as a resource for reference services to distance learners. It also investigates their perceptions…

  19. Construction of dynamic stochastic simulation models using knowledge-based techniques

    NASA Technical Reports Server (NTRS)

    Williams, M. Douglas; Shiva, Sajjan G.

    1990-01-01

    Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).

  20. Comparison of LISP and MUMPS as implementation languages for knowledge-based systems

    SciTech Connect

    Curtis, A.C.

    1984-01-01

    Major components of knowledge-based systems are summarized, along with the programming language features generally useful in their implementation. LISP and MUMPS are briefly described and compared as vehicles for building knowledge-based systems. The paper concludes with suggestions for extensions to MUMPS which might increase its usefulness in artificial intelligence applications without affecting the essential nature of the language. 8 references.

  1. The Knowledge Base: Issues for Liberal Arts Colleges. AILACTE Occasional Paper No. 7.

    ERIC Educational Resources Information Center

    Diez, Mary E.

    The phrase "the knowledge base" conveys the sense that there is one, monolithic set of information waiting to be codified and promulgated, probably available only through someone else's research. This paper argues that the knowledge base for teaching is continually being created and interpreted, especially by practitioners. The facts frequently…

  2. Working Memory, Intelligence and Knowledge Base in Adult Persons with Intellectual Disability.

    ERIC Educational Resources Information Center

    Numminen, H.; Service, E.; Ruoppila, I.

    2002-01-01

    A study explored working memory (WM) capacity, WM task requirements, as well as effects between WM, skills, knowledge base, and intelligence in adults with mental retardation and children aged 3-6 years. Adults were better on measures reflecting skills and knowledge base. Children performed better in phonological and visuo-spatial WM tasks.…

  3. A Model of Knowledge Based Information Retrieval with Hierarchical Concept Graph.

    ERIC Educational Resources Information Center

    Kim, Young Whan; Kim, Jin H.

    1990-01-01

    Proposes a model of knowledge-based information retrieval (KBIR) that is based on a hierarchical concept graph (HCG) which shows relationships between index terms and constitutes a hierarchical thesaurus as a knowledge base. Conceptual distance between a query and an object is discussed and the use of Boolean operators is described. (25…
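    One simple reading of conceptual distance in an HCG is shortest-path length through the broader/narrower-term links of the thesaurus. The sketch below illustrates that reading only; the hierarchy and terms are hypothetical, and the paper's actual measure may differ.

    ```python
    # Conceptual distance as shortest-path length in a tiny hierarchical thesaurus.
    from collections import deque

    hierarchy = {  # hypothetical broader-term links: child -> parent
        "neural network": "machine learning",
        "machine learning": "artificial intelligence",
        "expert system": "artificial intelligence",
    }

    def neighbors(term):
        adj = set()
        if term in hierarchy:
            adj.add(hierarchy[term])                       # broader term
        adj.update(c for c, p in hierarchy.items() if p == term)  # narrower terms
        return adj

    def conceptual_distance(a, b):
        seen, queue = {a}, deque([(a, 0)])
        while queue:
            term, dist = queue.popleft()
            if term == b:
                return dist
            for nxt in neighbors(term):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))
        return None  # no path in the thesaurus

    print(conceptual_distance("neural network", "expert system"))  # -> 3
    ```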

  4. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.

    1991-01-01

    The purpose is to develop algorithms and architectures for embedding artificial intelligence in aircraft guidance and control systems. With the approach adopted, AI-computing is used to create an outer guidance loop for driving the usual aircraft autopilot. That is, a symbolic processor monitors the operation and performance of the aircraft. Then, based on rules and other stored knowledge, commands are automatically formulated for driving the autopilot so as to accomplish desired flight operations. The focus is on developing a software system which can respond to linguistic instructions, input in a standard format, so as to formulate a sequence of simple commands to the autopilot. The instructions might be a fairly complex flight clearance, input either manually or by data-link. Emphasis is on a software system which responds much like a pilot would, employing not only precise computations, but, also, knowledge which is less precise, but more like common-sense. The approach is based on prior work to develop a generic 'shell' architecture for an AI-processor, which may be tailored to many applications by describing the application in appropriate processor data bases (libraries). Such descriptions include numerical models of the aircraft and flight control system, as well as symbolic (linguistic) descriptions of flight operations, rules, and tactics.

  5. Knowledge-based control for robot self-localization

    NASA Technical Reports Server (NTRS)

    Bennett, Bonnie Kathleen Holte

    1993-01-01

    Autonomous robot systems are being proposed for a variety of missions including the Mars rover/sample return mission. Prior to any other mission objectives being met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called localization control and logic expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to high-level control aspects of the localization problem.

  6. Knowledge-Based Reinforcement Learning for Data Mining

    NASA Astrophysics Data System (ADS)

    Kudenko, Daniel; Grzes, Marek

    experts have developed heuristics that help them in planning and scheduling resources in their work place. However, this domain knowledge is often rough and incomplete. When the domain knowledge is used directly by an automated expert system, the solutions are often sub-optimal, due to the incompleteness of the knowledge, the uncertainty of environments, and the possibility of encountering unexpected situations. RL, on the other hand, can overcome the weaknesses of the heuristic domain knowledge and produce optimal solutions. In the talk we propose two techniques, which represent first steps in the area of knowledge-based RL (KBRL). The first technique [1] uses high-level STRIPS operator knowledge in reward shaping to focus the search for the optimal policy. Empirical results show that the plan-based reward shaping approach outperforms other RL techniques, including alternative manual and MDP-based reward shaping when it is used in its basic form. We showed that MDP-based reward shaping may fail, and successful experiments with STRIPS-based shaping suggest modifications which can overcome the encountered problems. The STRIPS-based method we propose allows expressing the same domain knowledge in a different way, and the domain expert can choose whether to define an MDP or a STRIPS planning task. We also evaluated the robustness of the proposed STRIPS-based technique to errors in the plan knowledge. In case STRIPS knowledge is not available, we propose a second technique [2] that shapes the reward with hierarchical tile coding. Where the Q-function is represented with low-level tile coding, a V-function with coarser tile coding can be learned in parallel and used to approximate the potential for ground states. In the context of data mining, our KBRL approaches can also be used for any data collection task where the acquisition of data may incur considerable cost. In addition, observing the data collection agent in specific scenarios may lead to new insights into optimal data
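    The plan-based shaping described above builds on potential-based reward shaping, in which a potential function derived from domain knowledge is added to the environment reward without changing the optimal policy. The sketch below shows that general mechanism only; the states, potential values, and discount factor are assumptions, not the STRIPS-derived potentials from the talk.

    ```python
    # Potential-based reward shaping: F(s, s') = gamma * phi(s') - phi(s).
    GAMMA = 0.95

    # Potential assigns higher value to states that the (possibly rough) domain
    # knowledge believes are closer to the goal.
    potential = {"start": 0.0, "mid": 5.0, "goal": 10.0}

    def shaped_reward(state, next_state, env_reward):
        # Adding this term preserves the optimal policy of the underlying MDP.
        return env_reward + GAMMA * potential[next_state] - potential[state]

    print(shaped_reward("start", "mid", 0.0))   # shaping encourages progress
    print(shaped_reward("mid", "start", 0.0))   # and discourages regress
    ```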

  7. Soybean Knowledge Base (SoyKB): a Web Resource for Soybean Translational Genomics

    SciTech Connect

    Joshi, Trupti; Patil, Kapil; Fitzpatrick, Michael R.; Franklin, Levi D.; Yao, Qiuming; Cook, Jeffrey R.; Wang, Zhem; Libault, Marc; Brechenmacher, Laurent; Valliyodan, Babu; Wu, Xiaolei; Cheng, Jianlin; Stacey, Gary; Nguyen, Henry T.; Xu, Dong

    2012-01-17

    Background: Soybean Knowledge Base (SoyKB) is a comprehensive all-inclusive web resource for soybean translational genomics. SoyKB is designed to handle the management and integration of soybean genomics, transcriptomics, proteomics and metabolomics data along with annotation of gene function and biological pathway. It contains information on four entities, namely genes, microRNAs, metabolites and single nucleotide polymorphisms (SNPs). Methods: SoyKB has many useful tools such as Affymetrix probe ID search, gene family search, multiple gene/metabolite search supporting co-expression analysis, and protein 3D structure viewer as well as download and upload capacity for experimental data and annotations. It has four tiers of registration, which control different levels of access to public and private data. It allows users of certain levels to share their expertise by adding comments to the data. It has a user-friendly web interface together with genome browser and pathway viewer, which display data in an intuitive manner to the soybean researchers, producers and consumers. Conclusions: SoyKB addresses the increasing need of the soybean research community to have a one-stop-shop functional and translational omics web resource for information retrieval and analysis in a user-friendly way. SoyKB can be publicly accessed at http://soykb.org/.

  8. Knowledge-based discovery for designing CRISPR-CAS systems against invading mobilomes in thermophiles.

    PubMed

    Chellapandi, P; Ranjani, J

    2015-09-01

    Clustered regularly interspaced short palindromic repeats (CRISPRs) are features of prokaryotic genomes involved in resistance to invading viruses and phages. Herein, we identified CRISPR loci together with CRISPR-associated (CAS) genes to reveal how thermophilic archaea and bacteria defend themselves against genome invaders. The genomic survey in this study implied that the distribution of CRISPR-CAS systems varied from strain to strain and was determined by the degree of invading mobilomes. Direct repeats were found to be similar to some extent across many thermophiles, but the spacers differed in each strain. Phylogenetic analyses of the CAS superfamily revealed that the cmr, csh, csx11, HD domain, and devR genes belong to subtypes of the cas gene family. Members of the cas gene family in thermophiles were functionally diverged within closely related genomes and may contribute to the development of several defense strategies. Genome dynamics, geological variation, and host defense mechanisms appear to have contributed to the sharing of these molecular functions across thermophiles. The thermophilic archaeon Thermococcus gammotolerans and the thermophilic bacteria Petrotoga mobilis and Thermotoga lettingae show superoperon-like clustering of cas genes, which apparently evolved for their defense pathways. A cmr operon with a specific promoter was identified in the thermophilic archaeon Caldivirga maquilingensis. Overall, we conclude that this knowledge-based genomic survey and phylogeny-based functional assignment suggest how a reliable genetic regulatory circuit could be designed from the naturally occurring CRISPR-CAS systems and acquired defense pathways of thermophiles in future synthetic biology. PMID:26279704

  9. Annotation of post-translational modifications in the Swiss-Prot knowledge base.

    PubMed

    Farriol-Mathis, Nathalie; Garavelli, John S; Boeckmann, Brigitte; Duvaud, Séverine; Gasteiger, Elisabeth; Gateau, Alain; Veuthey, Anne-Lise; Bairoch, Amos

    2004-06-01

    High-throughput proteomic studies produce a wealth of new information regarding post-translational modifications (PTMs). The Swiss-Prot knowledge base is faced with the challenge of including this information in a consistent and structured way, in order to facilitate easy retrieval and promote understanding by biologist expert users as well as computer programs. We are therefore standardizing the annotation of PTM features represented in Swiss-Prot. Indeed, a controlled vocabulary has been associated with every described PTM. In this paper, we present the major update of the feature annotation, and, by showing a few examples, explain how the annotation is implemented and what it means. Mod-Prot, a future companion database of Swiss-Prot, devoted to the biological aspects of PTMs (i.e., general description of the process, identity of the modification enzyme(s), taxonomic range, mass modification) is briefly described. Finally we encourage once again the scientific community (i.e., both individual researchers and database maintainers) to interact with us, so that we can continuously enhance the quality and swiftness of our services. PMID:15174124

  10. Data- and knowledge-based modeling of gene regulatory networks: an update

    PubMed Central

    Linde, Jörg; Schulze, Sylvie; Henkel, Sebastian G.; Guthke, Reinhard

    2015-01-01

    Gene regulatory network inference is a systems biology approach which predicts interactions between genes with the help of high-throughput data. In this review, we present current and updated network inference methods, focusing on novel techniques for data acquisition, network inference assessment, network inference for interacting species, and the integration of prior knowledge. After the advance of Next-Generation Sequencing of cDNAs derived from RNA samples (RNA-Seq), we discuss in detail its application to network inference. Furthermore, we present progress for large-scale or even full-genomic network inference as well as for small-scale condensed network inference and review advances in the evaluation of network inference methods by crowdsourcing. Finally, we reflect on the current availability of data and prior knowledge sources and give an outlook for the inference of gene regulatory networks that reflect interacting species, in particular pathogen-host interactions. PMID:27047314

  11. GEDA: new knowledge base of gene expression in drug addiction.

    PubMed

    Suh, Young Ju; Yang, Moon Hee; Yoon, Suk Joon; Park, Jong Hoon

    2006-07-31

    Abuse of drugs can elicit compulsive drug-seeking behaviors upon repeated administration and ultimately leads to the phenomenon of addiction. We developed a procedure for the standardization of microarray gene expression data of rat brain in drug addiction and stored them in a single integrated database system, focusing on more effective data processing and interpretation. Another characteristic of the present database is that it has a systematic flexibility for statistical analysis and linking with other databases. Basically, we adopt an intelligent SQL querying system as the foundation of our DB, in order to set up an interactive module which can automatically read the raw gene expression data in the standardized format. We maximize the usability of this DB, helping users study significant gene expression and identify the biological function of the genes through integrated up-to-date gene information such as GO annotation and metabolic pathways. For collecting the latest information on selected genes from the database, we also set up a local BLAST search engine and a nonredundant sequence database updated from the NCBI server on a daily basis. We find that the present database is a useful query interface and data-mining tool, specifically for finding the genes related to drug addiction. We apply this system to the identification and characterization of the behavior of methamphetamine-induced genes in rat brain. PMID:16889689

  12. CastorDB: a comprehensive knowledge base for Ricinus communis

    PubMed Central

    2011-01-01

    Background Ricinus communis is an industrially important non-edible oil seed crop, native to tropical and subtropical regions of the world. Although the R. communis genome was assembled as a 4X draft by JCVI and is predicted to contain 31,221 proteins, the function of most of the genes remains to be elucidated. A large amount of information on different aspects of the biology of R. communis is available, but most of the data are scattered and not easily accessible. Therefore a comprehensive resource on castor, CastorDB, is required to facilitate research on this important plant. Findings CastorDB is a specialized and comprehensive database for the oil seed plant R. communis, integrating information from several diverse resources. CastorDB contains information on gene and protein sequences, gene expression and gene ontology annotation of protein sequences obtained from a variety of repositories, as primary data. In addition, computational analysis was used to predict cellular localization, domains, pathways, protein-protein interactions, sumoylation sites and biochemical properties, and this has been included as derived data. This database has an intuitive user interface that prompts the user to explore various possible information resources available on a given gene or a protein. Conclusion CastorDB provides a user friendly comprehensive resource on castor with particular emphasis on its genome, transcriptome, and proteome and on protein domains, pathways, protein localization, presence of sumoylation sites, expression data and protein interacting partners. PMID:21914200

  13. Validation of a Crowdsourcing Methodology for Developing a Knowledge Base of Related Problem-Medication Pairs

    PubMed Central

    Wright, A.; Krousel-Wood, M.; Thomas, E. J.; McCoy, J. A.; Sittig, D. F.

    2015-01-01

    Summary Background Clinical knowledge bases of problem-medication pairs are necessary for many informatics solutions that improve patient safety, such as clinical summarization. However, developing these knowledge bases can be challenging. Objective We sought to validate a previously developed crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large, non-university health care system with a widely used, commercially available electronic health record. Methods We first retrieved medications and problems entered in the electronic health record by clinicians during routine care over a six-month study period. Following the previously published approach, we calculated the link frequency and link ratio for each pair and then identified a threshold cutoff for estimated problem-medication pair appropriateness through clinician review; problem-medication pairs meeting the threshold were included in the resulting knowledge base. We selected 50 medications and their gold standard indications to compare the resulting knowledge base to the pilot knowledge base developed previously and determine its recall and precision. Results The resulting knowledge base contained 26,912 pairs, had a recall of 62.3% and a precision of 87.5%, and outperformed the pilot knowledge base containing 11,167 pairs from the previous study, which had a recall of 46.9% and a precision of 83.3%. Conclusions We validated the crowdsourcing approach for generating a knowledge base of problem-medication pairs in a large non-university health care system with a widely used, commercially available electronic health record, indicating that the approach may be generalizable across healthcare settings and clinical systems. Further research is necessary to better evaluate the knowledge base, to compare crowdsourcing with other approaches, and to evaluate whether incorporating the knowledge into electronic health records improves patient outcomes. PMID:26171079
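    A hedged sketch of the pair statistics described in the Methods: link frequency is taken here as the number of records on which a problem and a medication co-occur, and link ratio as that count divided by the medication's overall count. These working definitions, the toy records, and the threshold value are assumptions for illustration rather than the published ones.

    ```python
    # Build a small problem-medication knowledge base from co-occurrence statistics.
    from collections import Counter
    from itertools import product

    records = [  # hypothetical per-patient (problems, medications) lists
        ({"hypertension"}, {"lisinopril"}),
        ({"hypertension", "diabetes"}, {"lisinopril", "metformin"}),
        ({"diabetes"}, {"metformin"}),
    ]

    link_freq, med_freq = Counter(), Counter()
    for problems, meds in records:
        for med in meds:
            med_freq[med] += 1
        for prob, med in product(problems, meds):
            link_freq[(prob, med)] += 1       # "link frequency"

    THRESHOLD = 0.5  # assumed cutoff of the kind chosen by clinician review
    knowledge_base = {
        pair: link_freq[pair] / med_freq[pair[1]]   # "link ratio"
        for pair in link_freq
        if link_freq[pair] / med_freq[pair[1]] >= THRESHOLD
    }
    print(knowledge_base)
    ```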

  14. Constructing priors in synesthesia.

    PubMed

    van Leeuwen, Tessa M

    2014-01-01

    A new theoretical framework (PPSMC) applicable to synesthesia has been proposed, in which the discrepancy between the perceptual reality of (some) synesthetic concurrents and their subjective non-veridicality is explained. The PPSMC framework stresses the relevance of the phenomenology of synesthesia for synesthesia research and beyond. When describing the emergence and persistence of synesthetic concurrents under PPSMC, it is proposed that precise, high-confidence priors are crucial in synesthesia. I discuss the construction of priors in synesthesia. PMID:24702569

  15. Automated knowledge acquisition for second generation knowledge base systems: A conceptual analysis and taxonomy

    SciTech Connect

    Williams, K.E.; Kotnour, T.

    1991-12-31

    In this paper, we present a conceptual analysis of knowledge-base development methodologies. The purpose of this research is to help overcome the high cost and lack of efficiency in developing knowledge base representations for artificial intelligence applications. To accomplish this purpose, we analyzed the available methodologies and developed a knowledge-base development methodology taxonomy. We review manual, machine-aided, and machine-learning methodologies. A set of developed characteristics allows description and comparison among the methodologies. We present the results of this conceptual analysis of methodologies and recommendations for development of more efficient and effective tools.

  16. A knowledge-based approach to estimating the magnitude and spatial patterns of potential threats to soil biodiversity.

    PubMed

    Orgiazzi, Alberto; Panagos, Panos; Yigini, Yusuf; Dunbar, Martha B; Gardi, Ciro; Montanarella, Luca; Ballabio, Cristiano

    2016-03-01

    Because of the increasing pressures exerted on soil, below-ground life is under threat. Knowledge-based rankings of potential threats to different components of soil biodiversity were developed in order to assess the spatial distribution of threats on a European scale. A list of 13 potential threats to soil biodiversity was proposed to experts with different backgrounds in order to assess their potential impact on three major components of soil biodiversity: soil microorganisms, fauna, and biological functions. This approach allowed us to obtain knowledge-based rankings of threats. These classifications formed the basis for the development of indices through an additive aggregation model that, along with ad-hoc proxies for each pressure, allowed us to preliminarily assess the spatial patterns of potential threats. Intensive exploitation was identified as the highest pressure. In contrast, the use of genetically modified organisms in agriculture was considered the threat with the least potential. The potential impact of climate change showed the highest uncertainty. Fourteen out of the 27 considered countries have more than 40% of their soils with moderate-high to high potential risk for all three components of soil biodiversity. Arable soils are the most exposed to pressures. Soils within the boreal biogeographic region showed the lowest risk potential. The majority of soils at risk are outside the boundaries of protected areas. The first maps of risks to three components of soil biodiversity based on current scientific knowledge were developed. Despite the intrinsic limits of knowledge-based assessments, a remarkable potential risk to soil biodiversity was observed. Guidelines to preliminarily identify and circumscribe soils potentially at risk are provided. This approach may be used in future research to assess threat at both local and global scale and identify areas of possible risk and, subsequently, design appropriate strategies for monitoring and protection of soil
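    The additive aggregation model described above can be pictured with a minimal sketch in which expert-derived threat weights are combined with per-cell proxy scores into a single risk index. The threat names, weights, and scores below are illustrative assumptions, not the study's values.

    ```python
    # Additive aggregation of weighted threat proxies into one risk index per map cell.
    threat_weights = {          # from a knowledge-based ranking (higher = worse)
        "intensive exploitation": 0.30,
        "soil sealing": 0.25,
        "climate change": 0.25,
        "GMO use": 0.20,
    }

    def risk_index(proxy_scores):
        """proxy_scores: threat -> normalized pressure in [0, 1] for one map cell."""
        return sum(threat_weights[t] * proxy_scores.get(t, 0.0) for t in threat_weights)

    cell = {"intensive exploitation": 0.9, "soil sealing": 0.4, "climate change": 0.6}
    print(round(risk_index(cell), 3))  # 0.3*0.9 + 0.25*0.4 + 0.25*0.6 = 0.52
    ```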

  17. Risk Management of New Microelectronics for NASA: Radiation Knowledge-base

    NASA Technical Reports Server (NTRS)

    LaBel, Kenneth A.

    2004-01-01

    Contents include the following: NASA missions and their implications for reliability and radiation constraints; approach to insertion of new technologies; technology knowledge-base development; technology model/tool development and validation; and summary comments.

  18. Knowledge-base browsing: an application of hybrid distributed/local connectionist networks

    NASA Astrophysics Data System (ADS)

    Samad, Tariq; Israel, Peggy

    1990-08-01

    We describe a knowledge base browser based on a connectionist (or neural network) architecture that employs both distributed and local representations. The distributed representations are used for input and output, thereby enabling associative, noise-tolerant interaction with the environment. Internally, all representations are fully local. This simplifies weight assignment and facilitates network configuration for specific applications. In our browser, concepts and relations in a knowledge base are represented using "microfeatures." The microfeatures can encode semantic attributes, structural features, contextual information, etc. Desired portions of the knowledge base can then be associatively retrieved based on a structured cue. An ordered list of partial matches is presented to the user for selection. Microfeatures can also be used as "bookmarks": they can be placed dynamically at appropriate points in the knowledge base and subsequently used as retrieval cues. A proof-of-concept system has been implemented for an internally developed Honeywell-proprietary knowledge acquisition tool.
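    The retrieval behaviour described above, associative matching of a structured cue against concept microfeatures with an ordered list of partial matches, can be sketched at a purely symbolic level as below; the concepts and feature names are hypothetical, and the actual system computes the match with a connectionist network rather than set overlap.

    ```python
    # Retrieve knowledge-base concepts by microfeature overlap with a structured cue.
    knowledge_base = {
        "PID controller": {"control", "feedback", "continuous"},
        "rule-based alarm handler": {"control", "symbolic", "discrete"},
        "sensor fusion module": {"estimation", "continuous", "feedback"},
    }

    def retrieve(cue_features, top_n=3):
        scored = [
            (len(cue_features & feats) / len(cue_features | feats), name)
            for name, feats in knowledge_base.items()
        ]
        return sorted(scored, reverse=True)[:top_n]  # ordered list of partial matches

    for score, name in retrieve({"control", "feedback"}):
        print(f"{score:.2f}  {name}")
    ```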

  19. A NASA/RAE cooperation in the development of a real-time knowledge based autopilot

    NASA Technical Reports Server (NTRS)

    Daysh, Colin; Corbin, Malcolm; Butler, Geoff; Duke, Eugene L.; Belle, Steven D.; Brumbaugh, Randal W.

    1991-01-01

    As part of a US/UK cooperative aeronautical research program, a joint activity between NASA-Ames and the Royal Aerospace Establishment on Knowledge Based Systems (KBS) was established. This joint activity is concerned with tools and techniques for the implementation and validation of real-time KBS. The proposed next stage of the research is described, in which some of the problems of implementing and validating a Knowledge Based Autopilot (KBAP) for a generic high performance aircraft will be studied.

  1. Towards a knowledge-based system to assist the Brazilian data-collecting system operation

    NASA Technical Reports Server (NTRS)

    Rodrigues, Valter; Simoni, P. O.; Oliveira, P. P. B.; Oliveira, C. A.; Nogueira, C. A. M.

    1988-01-01

    A study is reported which was carried out to show how a knowledge-based approach would lead to a flexible tool to assist the operation task in a satellite-based environmental data collection system. Some characteristics of a hypothesized system comprised of a satellite and a network of Interrogable Data Collecting Platforms (IDCPs) are pointed out. The Knowledge-Based Planning Assistant System (KBPAS) and some aspects about how knowledge is organized in the IDCP's domain are briefly described.

  2. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  3. Ab Initio Protein Structure Assembly Using Continuous Structure Fragments and Optimized Knowledge-based Force Field

    PubMed Central

    Xu, Dong; Zhang, Yang

    2012-01-01

    Ab initio protein folding is one of the major unsolved problems in computational biology due to the difficulties in force field design and conformational search. We developed a novel program, QUARK, for template-free protein structure prediction. Query sequences are first broken into fragments of 1–20 residues where multiple fragment structures are retrieved at each position from unrelated experimental structures. Full-length structure models are then assembled from fragments using replica-exchange Monte Carlo simulations, which are guided by a composite knowledge-based force field. A number of novel energy terms and Monte Carlo movements are introduced, and their particular contributions to enhancing the efficiency of both the force field and the search engine are analyzed in detail. The QUARK prediction procedure is depicted and tested on the structure modeling of 145 non-homologous proteins. Although no global templates are used and all fragments from experimental structures with template modeling score (TM-score) >0.5 are excluded, QUARK can successfully construct 3D models of correct folds in 1/3 of cases for short proteins up to 100 residues. In the ninth community-wide Critical Assessment of protein Structure Prediction (CASP9) experiment, the QUARK server outperformed the second and third best servers by 18% and 47% based on the cumulative Z-score of global distance test-total (GDT-TS) scores in the free modeling (FM) category. Although ab initio protein folding remains a significant challenge, these data demonstrate new progress towards the solution of the most important problem in the field. PMID:22411565
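    Two moves at the heart of replica-exchange Monte Carlo assembly are the within-replica Metropolis acceptance test and the temperature swap between neighbouring replicas. The sketch below shows those standard criteria only, with placeholder energies standing in for the knowledge-based force field; it is not QUARK's actual move set.

    ```python
    # Standard acceptance criteria used in replica-exchange Monte Carlo sampling.
    import math
    import random

    def metropolis_accept(e_old, e_new, temperature):
        # Within one replica: always accept downhill moves, accept uphill moves
        # with Boltzmann probability.
        if e_new <= e_old:
            return True
        return random.random() < math.exp(-(e_new - e_old) / temperature)

    def swap_accept(e_i, e_j, t_i, t_j):
        # Exchange criterion between replicas i and j at temperatures t_i and t_j.
        delta = (1.0 / t_i - 1.0 / t_j) * (e_j - e_i)
        return random.random() < min(1.0, math.exp(-delta))

    print(metropolis_accept(e_old=10.0, e_new=12.0, temperature=2.0))
    print(swap_accept(e_i=12.0, e_j=9.0, t_i=1.0, t_j=2.0))
    ```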

  4. Consistent Refinement of Submitted Models at CASP using a Knowledge-based Potential

    PubMed Central

    Chopra, Gaurav; Kalisman, Nir; Levitt, Michael

    2010-01-01

    Protein structure refinement is an important but unsolved problem; it must be solved if we are to predict biological function that is very sensitive to structural details. Specifically, Critical Assessment of Techniques for Protein Structure Prediction (CASP) shows that the accuracy of predictions in the comparative modeling category is often worse than that of the template on which the homology model is based. Here we describe a refinement protocol that is able to consistently refine submitted predictions for all categories at CASP7. The protocol uses direct energy minimization of the knowledge-based potential of mean force that is based on the interaction statistics of 167 atom types (Summa and Levitt, Proc Natl Acad Sci USA 2007; 104:3177–3182). Our protocol is thus computationally very efficient; it only takes a few minutes of CPU time to run typical protein models (300 residues). We observe an average structural improvement of 1% in GDT_TS, for predictions that have low and medium homology to known PDB structures (Global Distance Test score or GDT_TS between 50 and 80%). We also observe a marked improvement in the stereochemistry of the models. The level of improvement varies amongst the various participants at CASP, but we see large improvements (>10% increase in GDT_TS) even for models predicted by the best performing groups at CASP7. In addition, our protocol consistently improved the best predicted models in the refinement category at CASP7 and CASP8. These improvements in structure and stereochemistry prove the usefulness of our computationally inexpensive, powerful and automatic refinement protocol. PMID:20589633
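    Direct energy minimization against a knowledge-based pair potential amounts to moving atoms downhill on a tabulated potential of mean force. The one-dimensional, two-particle toy below only illustrates that idea with a hypothetical smooth well; it is not the 167-atom-type potential used in the protocol.

    ```python
    # Gradient-descent minimization of a toy pairwise potential of mean force.
    def pair_energy(r, r_opt=3.8, depth=1.0):
        # Hypothetical smooth well with a minimum at r_opt (angstroms).
        return depth * ((r_opt / r) ** 12 - 2 * (r_opt / r) ** 6)

    def minimize(r0, step=0.01, iters=500, h=1e-5):
        r = r0
        for _ in range(iters):
            # Numerical gradient of the potential with respect to the distance.
            grad = (pair_energy(r + h) - pair_energy(r - h)) / (2 * h)
            r -= step * grad
        return r

    print(round(minimize(5.0), 3))  # converges toward the well minimum near 3.8
    ```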

  5. A knowledge based approach to matching human neurodegenerative disease and animal models

    PubMed Central

    Maynard, Sarah M.; Mungall, Christopher J.; Lewis, Suzanna E.; Imam, Fahim T.; Martone, Maryann E.

    2013-01-01

    Neurodegenerative diseases present a wide and complex range of biological and clinical features. Animal models are key to translational research, yet typically only exhibit a subset of disease features rather than being precise replicas of the disease. Consequently, connecting animal to human conditions using direct data-mining strategies has proven challenging, particularly for diseases of the nervous system, with its complicated anatomy and physiology. To address this challenge we have explored the use of ontologies to create formal descriptions of structural phenotypes across scales that are machine processable and amenable to logical inference. As proof of concept, we built a Neurodegenerative Disease Phenotype Ontology (NDPO) and an associated Phenotype Knowledge Base (PKB) using an entity-quality model that incorporates descriptions for both human disease phenotypes and those of animal models. Entities are drawn from community ontologies made available through the Neuroscience Information Framework (NIF) and qualities are drawn from the Phenotype and Trait Ontology (PATO). We generated ~1200 structured phenotype statements describing structural alterations at the subcellular, cellular and gross anatomical levels observed in 11 human neurodegenerative conditions and associated animal models. PhenoSim, an open source tool for comparing phenotypes, was used to issue a series of competency questions to compare individual phenotypes among organisms and to determine which animal models recapitulate phenotypic aspects of the human disease in aggregate. Overall, the system was able to use relationships within the ontology to bridge phenotypes across scales, returning non-trivial matches based on common subsumers that were meaningful to a neuroscientist with an advanced knowledge of neuroanatomy. The system can be used both to compare individual phenotypes and also phenotypes in aggregate. This proof of concept suggests that expressing complex phenotypes using formal

  6. An Integrative Framework for Bayesian Variable Selection with Informative Priors for Identifying Genes and Pathways

    PubMed Central

    Ander, Bradley P.; Zhang, Xiaoshuai; Xue, Fuzhong; Sharp, Frank R.; Yang, Xiaowei

    2013-01-01

    The discovery of genetic or genomic markers plays a central role in the development of personalized medicine. A notable challenge exists when dealing with the high dimensionality of the data sets, as thousands of genes or millions of genetic variants are collected on a relatively small number of subjects. Traditional gene-wise selection methods using univariate analyses face difficulty to incorporate correlational, structural, or functional structures amongst the molecular measures. For microarray gene expression data, we first summarize solutions in dealing with ‘large p, small n’ problems, and then propose an integrative Bayesian variable selection (iBVS) framework for simultaneously identifying causal or marker genes and regulatory pathways. A novel partial least squares (PLS) g-prior for iBVS is developed to allow the incorporation of prior knowledge on gene-gene interactions or functional relationships. From the point view of systems biology, iBVS enables user to directly target the joint effects of multiple genes and pathways in a hierarchical modeling diagram to predict disease status or phenotype. The estimated posterior selection probabilities offer probabilitic and biological interpretations. Both simulated data and a set of microarray data in predicting stroke status are used in validating the performance of iBVS in a Probit model with binary outcomes. iBVS offers a general framework for effective discovery of various molecular biomarkers by combining data-based statistics and knowledge-based priors. Guidelines on making posterior inferences, determining Bayesian significance levels, and improving computational efficiencies are also discussed. PMID:23844055

  7. Knowledge-based extraction of adverse drug events from biomedical text

    PubMed Central

    2014-01-01

    Background Many biomedical relation extraction systems are machine-learning based and have to be trained on large annotated corpora that are expensive and cumbersome to construct. We developed a knowledge-based relation extraction system that requires minimal training data, and applied the system for the extraction of adverse drug events from biomedical text. The system consists of a concept recognition module that identifies drugs and adverse effects in sentences, and a knowledge-base module that establishes whether a relation exists between the recognized concepts. The knowledge base was filled with information from the Unified Medical Language System. The performance of the system was evaluated on the ADE corpus, consisting of 1644 abstracts with manually annotated adverse drug events. Fifty abstracts were used for training; the remaining abstracts were used for testing. Results The knowledge-based system obtained an F-score of 50.5%, which was 34.4 percentage points better than the co-occurrence baseline. Increasing the training set to 400 abstracts improved the F-score to 54.3%. When the system was compared with a machine-learning system, jSRE, on a subset of the sentences in the ADE corpus, our knowledge-based system achieved an F-score that is 7 percentage points higher than the F-score of jSRE trained on 50 abstracts, and still 2 percentage points higher than jSRE trained on 90% of the corpus. Conclusion A knowledge-based approach can be successfully used to extract adverse drug events from biomedical text without the need for a large training set. Whether use of a knowledge base is equally advantageous for other biomedical relation-extraction tasks remains to be investigated. PMID:24593054
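    The two-stage idea described above, recognizing drug and adverse-effect concepts and then accepting a relation only if the knowledge base links them, can be sketched as below. The tiny dictionaries stand in for the UMLS-derived resources and are purely illustrative.

    ```python
    # Concept recognition followed by a knowledge-base relation check.
    drugs = {"amoxicillin", "warfarin"}
    effects = {"rash", "bleeding"}
    knowledge_base = {("amoxicillin", "rash"), ("warfarin", "bleeding")}

    def extract_ades(sentence):
        tokens = {t.strip(".,").lower() for t in sentence.split()}
        found_drugs = tokens & drugs
        found_effects = tokens & effects
        # A co-occurrence baseline would return every drug-effect pair found here;
        # the knowledge-base module keeps only pairs with a known relation.
        return [(d, e) for d in found_drugs for e in found_effects
                if (d, e) in knowledge_base]

    print(extract_ades("Patient developed a rash after starting amoxicillin."))
    ```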

  8. Medical Knowledge Base Acquisition: The Role of the Expert Review Process in Disease Profile Construction

    PubMed Central

    Giuse, Nunzia Bettinsoli; Bankowitz, Richard A.; Giuse, Dario A.; Parker, Ronnie C.; Miller, Randolph A.

    1989-01-01

    In order to better understand the knowledge acquisition process, we studied the changes which a newly developed “preliminary” QMR disease profile undergoes during the expert review process. Changes in the ten most recently created disease profiles from the INTERNIST-1/QMR knowledge base were analyzed. We classified the changes which occurred during knowledge base construction by the type of change and the reason for the change. Observed changes to proposed findings could be grouped according to whether a change was needed to maintain consistency with the existing knowledge base, or because of disagreement over knowledge content with the domain expert. Out of 987 total proposed findings in the ten profiles, 233 findings underwent 274 changes, approximately one change for each three proposed findings. A total of 43% of the changes were additions or deletions of findings or links compared to the preliminary disease profile, and 33% of the changes were alterations in the numerical value of the evoking strength or frequency. A total of 126 (46%) of changes were required to maintain consistency of the knowledge base, whereas the remaining 148 (54%) changes were altered based on suggestions made by the domain expert based on domain content. The type of change (consistency vs. domain knowledge) was found to correlate both with the class of finding (newly constructed vs. previously used) and with the experience of the profiler (novice vs. experienced). These differences suggest that some but not all aspects of the disease profiling process can be improved upon with experience. Since it is generally agreed that the construction of a knowledge base depends heavily upon the knowledge acquisition process, this study provides some insight into areas of investigation for others interested in the construction of automated tools to aid the process of knowledge base construction. It also provides support for the observation that knowledge base construction has at least some

  9. Knowledge based system verification and validation as related to automation of Space Station subsystems - Rationale for a knowledge based system lifecycle

    NASA Technical Reports Server (NTRS)

    Richardson, Keith; Wong, Carla

    1988-01-01

    The role of verification and validation (V and V) in software has been to support and strengthen the software lifecycle and to ensure that the resultant code meets the standards of the requirements documents. Knowledge-based system (KBS) V and V should serve the same role, but the KBS lifecycle is ill-defined. The rationale of a simple form of the KBS lifecycle is presented, including accommodation to certain critical KBS differences from software development.

  10. Knowledge based system verification and validation as related to automation of space station subsystems: Rationale for a knowledge based system lifecycle

    NASA Technical Reports Server (NTRS)

    Richardson, Keith; Wong, Carla

    1988-01-01

    The role of verification and validation (V and V) in software has been to support and strengthen the software lifecycle and to ensure that the resultant code meets the standards of the requirements documents. Knowledge Based System (KBS) V and V should serve the same role, but the KBS lifecycle is ill-defined. The rationale of a simple form of the KBS lifecycle is presented, including accommodation to certain critical KBS differences from software development.

  11. Knowledge base interpolation of path-dependent data using irregularly spaced natural neighbors

    SciTech Connect

    Hipp, J.; Keyser, R.; Young, C.; Shepard-Dombroski, E.; Chael, E.

    1996-08-01

    This paper summarizes the requirements for the interpolation scheme needed for the CTBT Knowledge Base and discusses interpolation issues relative to the requirements. Based on these requirements, a methodology for providing an accurate and robust interpolation scheme for the CTBT Knowledge Base is proposed. The method utilizes a Delaunay triangle tessellation to mesh the Earth's surface and employs the natural-neighbor interpolation technique to provide accurate evaluation of geophysical data that is important for CTBT verification. The natural-neighbor interpolation method is a local weighted average technique capable of modeling sparse, irregular data sets such as are commonly found in the geophysical sciences. This is particularly true of the data to be contained in the CTBT Knowledge Base. Furthermore, natural-neighbor interpolation is first order continuous everywhere except at the data points. The non-linear form of the natural-neighbor interpolation method can provide continuous first and second order derivatives throughout the entire data domain. Since one of the primary support functions of the Knowledge Base is to provide event location capabilities, and seismic event location algorithms typically require first and second order continuity, this is a prime requirement of any interpolation methodology chosen for use by the CTBT Knowledge Base.
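    A hedged workflow sketch: SciPy's griddata interpolates scattered points over a Delaunay triangulation, which illustrates the tessellation-based setup discussed above, although its 'linear' method uses barycentric rather than true natural-neighbor (Sibson) weights. The station coordinates and values below are invented.

    ```python
    # Delaunay-based interpolation of sparse, irregular station data (stand-in for
    # natural-neighbor interpolation, which would need a dedicated implementation).
    import numpy as np
    from scipy.interpolate import griddata

    # Hypothetical station locations (lon, lat) and a path-dependent correction value.
    points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.4, 0.6]])
    values = np.array([1.2, 0.8, 1.5, 0.9, 1.1])

    query = np.array([[0.5, 0.5], [0.2, 0.8]])
    print(griddata(points, values, query, method="linear"))
    ```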

  12. Geomorphological feature extraction from a digital elevation model through fuzzy knowledge-based classification

    NASA Astrophysics Data System (ADS)

    Argialas, Demetre P.; Tzotsos, Angelos

    2003-03-01

    The objective of this research was the investigation of advanced image analysis methods for geomorphological mapping. Methods employed included multiresolution segmentation of the Digital Elevation Model (DEM) GTOPO30 and fuzzy knowledge-based classification of the segmented DEM into three geomorphological classes: mountain ranges, piedmonts, and basins. The study area was a segment of the Basin and Range Physiographic Province in Nevada, USA. The implementation was made in eCognition. In particular, the segmentation of GTOPO30 resulted in primitive objects. The knowledge-based classification of the primitive objects, based on their elevation and shape parameters, resulted in the extraction of the geomorphological features. The resulting boundaries were found satisfactory in comparison to those from previous studies. It is concluded that geomorphological feature extraction can be carried out through fuzzy knowledge-based classification as implemented in eCognition.
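    A toy sketch of fuzzy knowledge-based classification of elevation into the three geomorphological classes named above: each class gets a trapezoidal membership function and a pixel is assigned to the class with the highest membership. The breakpoints are invented for illustration and are not the rules used in the eCognition study, which also uses shape parameters.

    ```python
    # Fuzzy membership classification of a single elevation value.
    def trapezoid(x, a, b, c, d):
        """Standard trapezoidal membership function rising on [a, b], falling on [c, d]."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    def classify(elevation_m):
        memberships = {
            "basin":    trapezoid(elevation_m, -100, -100, 1200, 1500),
            "piedmont": trapezoid(elevation_m, 1200, 1500, 1800, 2100),
            "mountain": trapezoid(elevation_m, 1800, 2100, 5000, 5000),
        }
        return max(memberships, key=memberships.get), memberships

    print(classify(1400))  # piedmont wins, with partial basin membership
    ```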

  13. Demonstration knowledge base to aid building operators in responding to real-time-pricing electricity rates

    SciTech Connect

    Norford, L.K. |; Englander, S.L.; Wiseley, B.J.

    1998-10-01

    The objective of ASHRAE Research Project 833, the results of which are summarized in this paper, was to develop a knowledge base, tested in demonstration software, that would assist building operators in assessing the benefits of controlling electrical equipment in response to electricity rates that vary hourly. The software combines a knowledge base with computations, both of which are controlled by the user via a graphical interface. Major electrical end uses of commercial buildings are considered. The knowledge base is used to assess the trade-off of service and cost that is implicit in establishing a threshold price, above which lighting is reduced or space temperatures are allowed to deviate from setpoint. The software also evaluates thermal storage systems and on-site generation, in which occupant comfort is not affected and the systems are operated to minimize operating costs. The thermal storage and generator control algorithms have been shown to be optimal in limiting cases by comparison with mixed-integer programming.

  14. A situational approach to the design of a patient-oriented disease-specific knowledge base.

    PubMed Central

    Kim, Matthew I.; Ladenson, Paul; Johnson, Kevin B.

    2002-01-01

    We have developed a situational approach to the organization of disease-specific information that seeks to provide patients with targeted access to content in a knowledge base. Our approach focuses on dividing a defined knowledge base into sections corresponding to discrete clinical events associated with the evaluation and treatment of a specific disorder. Common reasons for subspecialty referral are used to generate situational statements that serve as entry points into the knowledge base. Each section includes defining questions generated using keywords associated with specific topics. Defining questions are linked to patient-focused answers. Evaluation of a thyroid cancer web site designed using this approach has identified high ratings for usability, relevance, and comprehension of retrieved information. This approach may be particularly useful in the development of resources for newly diagnosed patients. PMID:12463852

  15. A knowledge-based approach to automated flow-field zoning for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1989-01-01

    An automated three-dimensional zonal grid generation capability for computational fluid dynamics is shown through the development of a demonstration computer program capable of automatically zoning the flow field of representative two-dimensional (2-D) aerodynamic configurations. The applicability of a knowledge-based programming approach to the domain of flow-field zoning is examined. Several aspects of flow-field zoning make the application of knowledge-based techniques challenging: the need for perceptual information, the role of individual bias in the design and evaluation of zonings, and the fact that the zoning process is modeled as a constructive, design-type task (for which there are relatively few examples of successful knowledge-based systems in any domain). Engineering solutions to the problems arising from these aspects are developed, and a demonstration system is implemented which can design, generate, and output flow-field zonings for representative 2-D aerodynamic configurations.

  16. Pragmatically-Structured, Lexical-Semantic Knowledge Bases for Unified Medical Language Systems

    PubMed Central

    Evans, David A.

    1988-01-01

    Unified medical language systems must accommodate expressions ranging from fixed-form standardised vocabularies to the free-text, natural language of medical charts. Such ability will depend on the identification, representation, and organisation of the concepts that form the useful core of the biomedical conceptual domain. The MedSORT-II and UMLS Projects at Carnegie Mellon University have established a feasible design for the development of lexicons and knowledge bases to support the automated processing of varieties of expressions (in the subdomain of clinical findings) into uniform representations. The essential principle involves incorporating lexical-semantic typing restrictions in a pragmatically-structured knowledge base. The approach does not depend on exhaustive knowledge representation, but rather takes advantage of selective, limited relations among concepts. In particular, the projects have demonstrated that practical, comprehensive, and accurate processing of natural-language expressions is attainable with partial knowledge bases, which can be rapidly prototyped.

  17. Enhancing Automatic Biological Pathway Generation with GO-based Gene Similarity

    SciTech Connect

    Sanfilippo, Antonio P.; Baddeley, Robert L.; Beagley, Nathaniel; Riensche, Roderick M.; Gopalan, Banu

    2009-08-03

    One of the greatest challenges in today’s analysis of microarray gene expression data is to identify pathways across regulated genes that underlie structural and functional changes of living cells in specific pathologies. Most current approaches to pathway generation are based on a reverse engineering approach in which pathway plausibility is solely induced from observed pathway data. These approaches tend to lack in generality as they are too dependent on the pathway observables from which they are induced. By contrast, alternative approaches that rely on prior biological knowledge may err in the opposite direction as the prior knowledge is usually not sufficiently tuned to the pathology of focus. In this paper, we present a novel pathway generation approach which combines insights from the reverse engineering and knowledge-based approaches to increase the biological plausibility and specificity of induced regulatory networks.
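    One simple way to score GO-based similarity between two genes is the Jaccard overlap of their GO term sets, sketched below. The annotations are hypothetical, and the paper's actual similarity measure may be more sophisticated (for example, information-content based).

    ```python
    # GO-based gene similarity as Jaccard overlap of annotation sets.
    go_annotations = {
        "GeneA": {"GO:0006915", "GO:0042981", "GO:0008219"},
        "GeneB": {"GO:0006915", "GO:0008219", "GO:0007165"},
        "GeneC": {"GO:0006412"},
    }

    def go_similarity(g1, g2):
        a, b = go_annotations[g1], go_annotations[g2]
        return len(a & b) / len(a | b) if a | b else 0.0

    print(go_similarity("GeneA", "GeneB"))  # 2 shared terms out of 4 -> 0.5
    print(go_similarity("GeneA", "GeneC"))  # no overlap -> 0.0
    ```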

  18. A formal approach to validation and verification for knowledge-based control systems

    NASA Technical Reports Server (NTRS)

    Castore, Glen

    1987-01-01

    As control systems become more complex in response to desires for greater system flexibility, performance and reliability, the promise is held out that artificial intelligence might provide the means for building such systems. An obstacle to the use of symbolic processing constructs in this domain is the need for verification and validation (V and V) of the systems. Techniques currently in use do not seem appropriate for knowledge-based software. An outline of a formal approach to V and V for knowledge-based control systems is presented.

  19. PRAIS: Distributed, real-time knowledge-based systems made easy

    NASA Technical Reports Server (NTRS)

    Goldstein, David G.

    1990-01-01

    This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives for transparently parallelizing production (rule-based) systems, even when under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.

  20. Generating MEDLINE search strategies using a librarian knowledge-based system.

    PubMed Central

    Peng, P.; Aguirre, A.; Johnson, S. B.; Cimino, J. J.

    1993-01-01

    We describe a librarian knowledge-based system that generates a search strategy from a query representation based on a user's information need. Together with the natural language parser AQUA, the system functions as a human/computer interface, which translates a user query from free text into a BRS Onsite search formulation, for searching the MEDLINE bibliographic database. In the system, conceptual graphs are used to represent the user's information need. The UMLS Metathesaurus and Semantic Net are used as the key knowledge sources in building the knowledge base. PMID:8130544

  1. The implementation of a knowledge-based Pathology Hypertext under HyperCard.

    PubMed

    Levy, A H; Thursh, D R

    1989-12-01

    A knowledge-based Hypertext of Pathology integrating videodisc-based images and computer-generated graphics with the textual cognitive information of an undergraduate pathology curriculum has been developed. The system described in this paper was implemented under HyperCard during 1988 and 1989. Three earlier versions of the system that were developed on different platforms are contrasted with the present system. Strengths, weaknesses, and future extensions of the system are enumerated. The conceptual basis and organizational principles of the knowledge base are also briefly discussed. PMID:2636967

  2. Design of Composite Structures Using Knowledge-Based and Case Based Reasoning

    NASA Technical Reports Server (NTRS)

    Lambright, Jonathan Paul

    1996-01-01

    A method of using knowledge based and case based reasoning to assist designers during conceptual design tasks of composite structures was proposed. The cooperative use of heuristics, procedural knowledge, and previous similar design cases suggests a potential reduction in design cycle time and ultimately product lead time. The hypothesis of this work is that the design process of composite structures can be improved by using Case-Based Reasoning (CBR) and Knowledge-Based (KB) reasoning in the early design stages. The technique of using knowledge-based and case-based reasoning facilitates the gathering of disparate information into one location that is easily and readily available. The method suggests that the inclusion of downstream life-cycle issues into the conceptual design phase reduces the potential for defective and sub-optimal composite structures. Three industry experts were interviewed extensively. The experts provided design rules, previous design cases, and test problems. A Knowledge Based Reasoning system was developed using the CLIPS (C Language Integrated Production System) environment and a Case Based Reasoning system was developed using the Design Memory Utility For Sharing Experiences (MUSE) environment. A Design Characteristic State (DCS) was used to document the design specifications, constraints, and problem areas using attribute-value pair relationships. The DCS provided consistent design information between the knowledge base and case base. Results indicated that the use of knowledge based and case based reasoning provided a robust design environment for composite structures. The knowledge base provided design guidance from well defined rules and procedural knowledge. The case base provided suggestions on design and manufacturing techniques based on previous similar designs and warnings of potential problems and pitfalls. The case base complemented the knowledge base and extended the problem solving capability beyond the existence of…

  3. Arranging ISO 13606 archetypes into a knowledge base using UML connectors.

    PubMed

    Kopanitsa, Georgy

    2014-01-01

    To enable the efficient reuse of standard-based medical data we propose to develop a higher-level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analysed for their ability to be applied in the implementation of a higher-level model that will establish relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. Presently it supports several relationships and will be advanced in the future. PMID:24743069

  4. Space shuttle main engine anomaly data and inductive knowledge based systems: Automated corporate expertise

    NASA Technical Reports Server (NTRS)

    Modesitt, Kenneth L.

    1987-01-01

    Progress is reported on the development of SCOTTY, an expert knowledge-based system to automate the analysis procedure following test firings of the Space Shuttle Main Engine (SSME). The integration of a large-scale relational data base system, a computer graphics interface for experts and end-user engineers, potential extension of the system to flight engines, application of the system for training of newly-hired engineers, technology transfer to other engines, and the essential qualities of good software engineering practices for building expert knowledge-based systems are among the topics discussed.

  5. Control of inconsistency and redundancy in PROLOG-type knowledge bases

    SciTech Connect

    Murray, T.J.; Tanniru, M.R. )

    1991-01-01

    Because of the incremental and piecemeal nature of its construction, logical inconsistency and redundancy can be built inadvertently into a knowledge base. This paper discusses a methodology for analyzing the contents of a PROLOG-type knowledge base and for eliminating inconsistent and redundant logical elements. It introduces a graphical representation, the goal-fact network, of the logic required to infer a goal and describes the identification of inconsistency in that network. Three increasingly general alternatives, Boolean algebra, the Karnaugh map, and the Quine-McCluskey algorithm, are presented as tools to identify and to eliminate redundancy. 23 refs.
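
    To make the redundancy and inconsistency notions above concrete, here is a minimal Python sketch of two much weaker checks than the Boolean-minimisation methods the paper describes: a rule whose antecedent requires both a fact and its negation can never fire (inconsistent), and a rule subsumed by another rule with the same conclusion and a smaller antecedent is redundant. The rule set and the "~" negation convention are illustrative assumptions.

    ```python
    # Illustrative sketch (far weaker than the paper's Karnaugh-map or
    # Quine-McCluskey treatment): flag self-contradictory and subsumed rules.
    # A leading "~" marks a negated literal; the rules are hypothetical.

    rules = [
        ("grant_loan", frozenset({"good_credit", "employed"})),
        ("grant_loan", frozenset({"good_credit", "employed", "homeowner"})),  # subsumed
        ("deny_loan",  frozenset({"bankrupt", "~bankrupt"})),                 # inconsistent
    ]

    def negate(lit):
        """Return the complementary literal."""
        return lit[1:] if lit.startswith("~") else "~" + lit

    def contradictory(antecedent):
        """An antecedent requiring both a fact and its negation can never fire."""
        return any(negate(lit) in antecedent for lit in antecedent)

    def redundant(i, rules):
        """Rule i is redundant if a rule with the same head has a strict-subset antecedent."""
        head_i, body_i = rules[i]
        return any(j != i and head_j == head_i and body_j < body_i
                   for j, (head_j, body_j) in enumerate(rules))

    for i, (head, body) in enumerate(rules):
        flags = []
        if contradictory(body):
            flags.append("inconsistent")
        if redundant(i, rules):
            flags.append("redundant")
        print(head, sorted(body), flags or ["ok"])
    ```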

  6. Headaches prior to earthquakes

    NASA Astrophysics Data System (ADS)

    Morton, L. L.

    1988-06-01

    In two surveys of headaches, it was noted that their incidence increased significantly within 48 h prior to earthquakes: from 17% to 58% in the first survey, which used correlated samples, and from 20.4% to 44% in the second survey, which used independent samples. It is suggested that an increase in positive air ions from rock compression may trigger head pain via a decrease in brain levels of the neurotransmitter serotonin. The findings are presented as preliminary, with the hope of generating further research efforts in areas more prone to earthquakes.

  7. Limitations of Levels, Learning Outcomes and Qualifications as Drivers Towards a More Knowledge-Based Society

    ERIC Educational Resources Information Center

    Brown, Alan

    2008-01-01

    National (and European) qualifications frameworks, the specification of learning outcomes and grand targets like the Lisbon goals of increasing the supply of graduates in Europe in order to achieve a more knowledge-based society are all predicated upon the idea of moving people through to higher and well-defined levels of skills, knowledge and…

  8. Expanding the Socio-Cultural Knowledge Base of TESOL Teacher Education

    ERIC Educational Resources Information Center

    Dogancay-Aktuna, Seran

    2006-01-01

    This paper argues for the expansion of the knowledge base of TESOL teacher education to integrate greater awareness of the sociocultural and political context of teaching English to speakers of other languages. It is argued that the changing roles of teachers, insights gained from classroom research and recent developments in critical applied…

  9. The Educational Media and Technology Profession: An Agenda for Research and Assessment of the Knowledge Base.

    ERIC Educational Resources Information Center

    Molenda, Michael; Olive, J. Fred III

    This report is the first effort to stake out the territory to be included in research on the profession of educational media and technology (em/t), and explore the existing knowledge base within that territory. It comprises a set of questions, the answers to which cast a light on who is in the profession, where it is going, and what useful…

  10. Hospital Bioethics: A Beginning Knowledge Base for the Neonatal Social Worker.

    ERIC Educational Resources Information Center

    Silverman, Ed

    1992-01-01

    Notes that life-saving advances in medicine have created difficult ethical and legal dilemmas for health care professionals. Presents beginning knowledge base for bioethical practice, especially in hospital neonatal units. Outlines key elements of bioethical decision making and examines potential social work role from clinical and organizational…

  11. The Knowledge Base of Non-Native English-Speaking Teachers: Perspectives of Teachers and Administrators

    ERIC Educational Resources Information Center

    Zhang, Fengjuan; Zhan, Ju

    2014-01-01

    This study explores the knowledge base of non-native English-speaking teachers (NNESTs) working in the Canadian English as a second language (ESL) context. By examining NNESTs' experiences in seeking employment and teaching ESL in Canada, and investigating ESL program administrators' perceptions and hiring practices in relation to…

  12. Implementing a Knowledge-Based Library Information System with Typed Horn Logic.

    ERIC Educational Resources Information Center

    Ait-Kaci, Hassan; And Others

    1990-01-01

    Describes a prototype library expert system called BABEL which uses a new programing language, LOGIN, that combines the idea of attribute inheritance with logic programing. Use of hierarchical classification of library objects to build a knowledge base for a library information system is explained, and further research is suggested. (11…

  13. A Comparison of Books and Hypermedia for Knowledge-based Sports Coaching.

    ERIC Educational Resources Information Center

    Vickers, Joan N.; Gaines, Brian R.

    1988-01-01

    Summarizes and illustrates the knowledge-based approach to instructional material design. A series of sports coaching handbooks and hypermedia presentations of the same material are described and the different instantiations of the knowledge and training structures are compared. Figures show knowledge structures for badminton and the architecture…

  14. Learning and Innovation in the Knowledge-Based Economy: Beyond Clusters and Qualifications

    ERIC Educational Resources Information Center

    James, Laura; Guile, David; Unwin, Lorna

    2013-01-01

    For over a decade policy-makers have claimed that advanced industrial societies should develop a knowledge-based economy (KBE) in response to economic globalisation and the transfer of manufacturing jobs to lower cost countries. In the UK, this vision shaped New Labour's policies for vocational education and training (VET), higher education…

  15. Learning Spaces: An ICT-Enabled Model of Future Learning in the Knowledge-Based Society

    ERIC Educational Resources Information Center

    Punie, Yves

    2007-01-01

    This article presents elements of a future vision of learning in the knowledge-based society which is enabled by ICT. It is not only based on extrapolations from trends and drivers that are shaping learning in Europe but also consists of a holistic attempt to envisage and anticipate future learning needs and requirements in the KBS. The "learning…

  16. The Feasibility and Effectiveness of a Pilot Resident-Organized and -Led Knowledge Base Review

    ERIC Educational Resources Information Center

    Vautrot, Victor J.; Festin, Fe E.; Bauer, Mark S.

    2010-01-01

    Objective: The Accreditation Council for Graduate Medical Education (ACGME) requires a sufficient medical knowledge base as one of the six core competencies in residency training. The authors judged that an annual "short-course" review of medical knowledge would be a useful adjunct to standard seminar and rotation teaching, and that a…

  17. Testing of a Natural Language Retrieval System for a Full Text Knowledge Base.

    ERIC Educational Resources Information Center

    Bernstein, Lionel M.; Williamson, Robert E.

    1984-01-01

    The Hepatitis Knowledge Base (text of prototype information system) was used for modifying and testing "A Navigator of Natural Language Organized (Textual) Data" (ANNOD), a retrieval system which combines probabilistic, linguistic, and empirical means to rank individual paragraphs of full text for similarity to natural language queries proposed by…

  18. A Transactional Approach to Children's Learning in a Knowledge-Based Society.

    ERIC Educational Resources Information Center

    Seng, Seok-Hoon

    The 21st century promises to make very different demands on our children and schools in a knowledge-based society. A slow but dynamic shift has been occurring in the Singapore educational system toward a learning nation and thinking school ethos. In the midst of this change, children will need to acquire a new set of skills. They will need to be…

  19. The Spread of Contingent Work in the Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Szabo, Katalin; Negyesi, Aron

    2005-01-01

    Permanent employment, typical of industrial societies and bolstered by numerous social guaranties, has been declining in the past 2 decades. There has been a steady expansion of various forms of contingent work. The decomposition of traditional work is a logical consequence of the characteristic patterns of the knowledge-based economy. According…

  20. Elements of Creative Social Science: Part I--Towards Greater Authority for the Knowledge Base.

    ERIC Educational Resources Information Center

    Lengyel, Peter

    1989-01-01

    Presents an argument defending the cognitive knowledge base in the social sciences. Contends that the findings related to the sociosphere are as important as those findings in the technosphere or biosphere. Suggests that the creation of a social science equivalency to research and development which would be called operationalizing and assembly.…

  1. Hidden Knowledge: Working-Class Capacity in the "Knowledge-Based Economy"

    ERIC Educational Resources Information Center

    Livingstone, David W.; Sawchuck, Peter H.

    2005-01-01

    The research reported in this paper attempts to document the actual learning practices of working-class people in the context of the much heralded "knowledge-based economy." Our primary thesis is that working-class peoples' indigenous learning capacities have been denied, suppressed, degraded or diverted within most capitalist schooling, adult…

  2. A knowledge-based object recognition system for applications in the space station

    NASA Astrophysics Data System (ADS)

    Dhawan, Atam P.

    1988-02-01

    A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching for the recognition of 3D objects in the two-dimensional (2D) image for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and the feature extraction are completed. The data structure of the primitive viewing knowledge-base (PVKB) is also completed. Algorithms and programs based on attribute-trees matching for decomposing the segmented data into valid primitives were developed. The frame-based structural and relational descriptions of some objects were created and stored in a knowledge-base. This knowledge-base of frame-based descriptions was developed on the MICROVAX-AI microcomputer in a LISP environment. Both a simulated 3D scene of simple non-overlapping objects and real camera images of low-complexity 3D objects have been successfully interpreted.

  3. Elaborating the Grounding of the Knowledge Base on Language and Learning for Preservice Literacy Teachers

    ERIC Educational Resources Information Center

    Piazza, Carolyn L.; Wallat, Cynthia

    2006-01-01

    The purpose of this article is to present a qualitative inquiry into the genesis of sociolinguistics and the contributions of eight sociolinguistic pioneers. This inquiry, based on an historical interpretation of events, reformulates the concept of validation as the social construction of a scientific knowledge base, and explicates three themes…

  4. Simultaneous Mapping of Interactions between Scientific and Technological Knowledge Bases: The Case of Space Communications.

    ERIC Educational Resources Information Center

    Hassan, E.

    2003-01-01

    Examines the knowledge structure of the field of space communications using bibliometric mapping techniques based on textual analysis. Presents a new approach with the aim of visualizing simultaneously the configuration of the scientific and technological knowledge bases at a worldwide level, and discusses results that show different…

  5. A knowledge-based flight status monitor for real-time application in digital avionics systems

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Disbrow, J. D.; Butler, G. F.

    1989-01-01

    The Dryden Flight Research Facility of the National Aeronautics and Space Administration (NASA) Ames Research Center (Ames-Dryden) is the principal NASA facility for the flight testing and evaluation of new and complex avionics systems. To aid in the interpretation of system health and status data, a knowledge-based flight status monitor was designed. The monitor was designed to use fault indicators from the onboard system which are telemetered to the ground and processed by a rule-based model of the aircraft failure management system to give timely advice and recommendations in the mission control room. One of the important constraints on the flight status monitor is the need to operate in real time, and to pursue this aspect, a joint research activity between NASA Ames-Dryden and the Royal Aerospace Establishment (RAE) on real-time knowledge-based systems was established. Under this agreement, the original LISP knowledge base for the flight status monitor was reimplemented using the intelligent knowledge-based system toolkit, MUSE, which was developed under RAE sponsorship. Details of the flight status monitor and the MUSE implementation are presented.

  6. Clear as Glass: A Combined List of Print and Electronic Journals in the Knowledge Base

    ERIC Educational Resources Information Center

    Lowe, M. Sara

    2008-01-01

    The non-standard practice at Cowles Library at Drake University has been to display electronic journals and some print journals in the Knowledge Base while simultaneously listing print journals and some electronic journals in the online public access catalog (OPAC). The result was a system that made it difficult for patrons to determine our…

  7. English Language Teacher Educators' Pedagogical Knowledge Base: The Macro and Micro Categories

    ERIC Educational Resources Information Center

    Moradkhani, Shahab; Akbari, Ramin; Samar, Reza Ghafar; Kiany, Gholam Reza

    2013-01-01

    The aim of this study was to determine the major categories of English language teacher educators' pedagogical knowledge base. To this end, semi-structured interviews were conducted with 5 teachers, teacher educators, and university professors (15 participants in total). The results of data analysis indicated that teacher educators'…

  8. Appropriating Professionalism: Restructuring the Official Knowledge Base of England's "Modernised" Teaching Profession

    ERIC Educational Resources Information Center

    Beck, John

    2009-01-01

    The present paper examines efforts by government and government agencies in England to prescribe and control the knowledge base of a teaching profession that has, under successive New Labour administrations since 1997, been subjected to "modernisation". A theoretical framework drawn from aspects of the work of Basil Bernstein, and of Rob Moore and…

  9. Using CLIPS in the domain of knowledge-based massively parallel programming

    NASA Technical Reports Server (NTRS)

    Dvorak, Jiri J.

    1994-01-01

    The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency with respect to parallelism exploitation. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about the application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering is discussed.

  10. Construction of geographical names knowledge base with ontology and production rule

    NASA Astrophysics Data System (ADS)

    Cheng, Gang; Du, Qingyun

    2009-10-01

    With the rapid development of gazetteers, more and more geographical names databases have been established. Because geographical names exist in the form of records that provide little qualitative description beyond quantitative information, they are difficult to share and make interoperable. To solve this problem, a knowledge base for geographical names is needed that provides qualitative knowledge describing the essence of these elements. We therefore use ontology and production rules to build a geographical names knowledge base, in which the geographical names ontology serves as the foundation for reuse and sharing of geographical names information, and production rules are used to enhance the expressivity of the ontology. First, we analyzed geographical name concepts and their semantics, together with the concepts of space and time and their relationships in geographical names, to describe the knowledge structure of this field; used the Web Ontology Language (OWL) to provide formal descriptions giving them explicit semantics; and proposed a unified semantic framework for description. Second, we established common-sense rules and spatial-relation inference rules coded in the Semantic Web Rule Language (SWRL), which laid the foundation for geographical names knowledge discovery and automatic reasoning. Finally, we built a geographical names knowledge base combining the geographical names ontology and the rules established above. Analysis of examples showed that, based on this knowledge base, geographical names information can be effectively shared and reused.
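
    As an illustration of the production-rule side of such a knowledge base, the sketch below forward-chains a single spatial-relation rule (transitivity of locatedIn) over a toy set of geographical-name facts, in the spirit of the SWRL rules mentioned above. The place names and the single rule are assumptions for demonstration, not the authors' rule base.

    ```python
    # Illustrative sketch: forward-chaining one spatial-relation production rule
    # over geographical-name facts until no new facts can be derived.

    facts = {("locatedIn", "Wuhan", "Hubei"),
             ("locatedIn", "Hubei", "China")}

    def apply_transitivity(facts):
        """locatedIn(x, y) AND locatedIn(y, z) => locatedIn(x, z), to fixpoint."""
        changed = True
        while changed:
            changed = False
            for (_, x, y) in list(facts):
                for (_, y2, z) in list(facts):
                    if y == y2 and ("locatedIn", x, z) not in facts:
                        facts.add(("locatedIn", x, z))
                        changed = True
        return facts

    print(sorted(apply_transitivity(facts)))
    # includes ("locatedIn", "Wuhan", "China"), inferred rather than stored
    ```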

  11. Preparing Oral Examinations of Mathematical Domains with the Help of a Knowledge-Based Dialogue System.

    ERIC Educational Resources Information Center

    Schmidt, Peter

    A conception of discussing mathematical material in the domain of calculus is outlined. Applications include that university students work at their knowledge and prepare for their oral examinations by utilizing the dialog system. The conception is based upon three pillars. One central pillar is a knowledge base containing the collections of…

  12. DataHub knowledge based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.

    1991-01-01

    Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.

  13. L2 Teachers' Pedagogic Knowledge Base: A Comparison between Experienced and Less Experienced Practitioners

    ERIC Educational Resources Information Center

    Akbari, Ramin; Tajik, Leila

    2009-01-01

    Second language teacher education community has become increasingly interested in the pedagogical knowledge base of teachers as a window into practitioners' mental lives. The present study was conducted to document likely differences between the pedagogic thoughts of experienced and less experienced teachers. Eight teachers participated in the…

  14. Longitudinal Assessment of Progress in Reasoning Capacity and Relation with Self-Estimation of Knowledge Base

    ERIC Educational Resources Information Center

    Collard, Anne; Mélot, France; Bourguignon, Jean-Pierre

    2015-01-01

    The aim of the study was to investigate progress in reasoning capacity and knowledge base appraisal in a longitudinal analysis of data from summative evaluation throughout a medical problem-based learning curriculum. The scores in multidisciplinary discussion of a clinical case and multiple choice questionnaires (MCQs) were studied longitudinally…

  15. The Impact of the Shifting Knowledge Base, from Development to Achievement, on Early Childhood Education Programs

    ERIC Educational Resources Information Center

    Tyler, Kathleen P.

    2012-01-01

    Interest in child development as a knowledge base for early childhood education programs flourished in the 1970s as a result of the theories and philosophies of Jean Piaget and other cognitive developmentalists. During subsequent decades in America, reform movements emphasizing accountability and achievement became a political and social…

  16. Small Knowledge-Based Systems in Education and Training: Something New Under the Sun.

    ERIC Educational Resources Information Center

    Wilson, Brent G.; Welsh, Jack R.

    1986-01-01

    Discusses artificial intelligence, robotics, natural language processing, and expert or knowledge-based systems research; examines two large expert systems, MYCIN and XCON; and reviews the resources required to build large expert systems and affordable smaller systems (intelligent job aids) for training. Expert system vendors and products are…

  17. Approximate Degrees of Similarity between a User's Knowledge and the Tutorial Systems' Knowledge Base

    ERIC Educational Resources Information Center

    Mogharreban, Namdar

    2004-01-01

    A typical tutorial system functions by means of interaction between four components: the expert knowledge base component, the inference engine component, the learner's knowledge component and the user interface component. In typical tutorial systems the interaction and the sequence of presentation as well as the mode of evaluation are…

  18. Universities and the Knowledge-Based Economy: Perceptions from a Developing Country

    ERIC Educational Resources Information Center

    Bano, Shah; Taylor, John

    2015-01-01

    This paper considers the role of universities in the creation of a knowledge-based economy (KBE) in a developing country, Pakistan. Some developing countries have moved quickly to develop a KBE, but progress in Pakistan is much slower. Higher education plays a crucial role as part of the triple helix model for innovation. Based on the perceptions…

  19. The Unintended Consequences of a Standardized Knowledge Base in Advancing Educational Leadership Preparation

    ERIC Educational Resources Information Center

    English, Fenwick W.

    2006-01-01

    Background: The quest for a "knowledge base" in educational administration resulting in the construction of national standards for preparing school leaders has brought with it an unexpected downside. Purpose: It is argued that instead of raising the bar for preparing educational leaders, the standards have lowered them, first by embracing only a…

  20. Promoting women's health: redefining the knowledge base and strategies for change.

    PubMed

    Ruzek, S; Hill, J

    1986-01-01

    Promoting women's health involves undertaking a critical gender-based analysis of women's health status and health needs and the knowledge bases which underlie health promotion action. The authors argue that professional and lay definitions of health problems often differ and that these differences stem from a differential emphasis on existing knowledge bases. Here the authors explore the focus of epidemiological, clinical, and experiential knowledge and suggest ways in which each does or does not address many key health issues which women themselves identify as important. Attention is also directed towards women's own suppressed and devalued knowledge as embodied in traditional folk practices and alternative care forms. Recommendations are made to improve existing knowledge bases by transforming some of the value orientations, priorities, methods and the social organization of research. The authors suggest that positive health promotion strategies must be based on an improved knowledge base and must incorporate three key concepts which women emphasize as central--self determination, women-centred values, and a gender-based political analysis. Strategies and methods to achieve these ends are suggested for health educators and policy-makers who wish to develop more positive approaches to promoting women's health. PMID:10286863

  1. Static and Completion Analysis for Planning Knowledge Base Development and Verification

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems.

  2. A knowledge-based object recognition system for applications in the space station

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching for the recognition of 3D objects in the two-dimensional (2D) image for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and the feature extraction are completed. The data structure of the primitive viewing knowledge-base (PVKB) is also completed. Algorithms and programs based on attribute-trees matching for decomposing the segmented data into valid primitives were developed. The frame-based structural and relational descriptions of some objects were created and stored in a knowledge-base. This knowledge-base of the frame-based descriptions were developed on the MICROVAX-AI microcomputer in LISP environment. The simulated 3D scene of simple non-overlapping objects as well as real camera data of images of 3D objects of low-complexity have been successfully interpreted.

  3. Organizational Communication Research: An Exploratory Application of a Conceptual Model for an Organized Knowledge Base.

    ERIC Educational Resources Information Center

    Greenbaum, Howard H.; Falcione, Raymond L.

    Organizational communication research needs a conceptual model or taxonomy of variables for developing a knowledge base for past and future findings and information access and retrieval. The proposed Outcome-Determinant-Interface (ODI) model distinguishes three major groups of variables, each divided into classes and subclasses. The outcome…

  4. Artificial intelligence in process control: Knowledge base for the shuttle ECS model

    NASA Technical Reports Server (NTRS)

    Stiffler, A. Kent

    1989-01-01

    The general operation of KATE, an artificial intelligence controller, is outlined. A shuttle environmental control system (ECS) demonstration system for KATE is explained. The knowledge base model for this system is derived. An experimental test procedure is given to verify parameters in the model.

  5. Developing a Knowledge Base for Educational Leadership and Management in East Asia

    ERIC Educational Resources Information Center

    Hallinger, Philip

    2011-01-01

    The role of school leadership in educational reform has reached the status of a truism, and led to major changes in school leader recruitment, selection, training and appraisal. While similar policy trends are evident in East Asia, the empirical knowledge base underlying these measures is distorted and lacking in validation. This paper begins by…

  6. Knowledge-Based Indexing of the Medical Literature: The Indexing Aid Project.

    ERIC Educational Resources Information Center

    Humphrey, Suzanne; Miller, Nancy E.

    1987-01-01

    Describes the National Library of Medicine's (NLM) Indexing Aid Project for conducting research in knowledge representation and indexing for information retrieval, whose goal is to develop interactive knowledge-based systems for computer-assisted indexing of the periodical medical literature. Appendices include background information on NLM…

  7. Knowledge Bases for Effective Teaching: Beginning Teachers' Development as Teachers of Primary Geography

    ERIC Educational Resources Information Center

    Martin, Fran

    2008-01-01

    This paper reports the findings of a research project into beginning teacher development conducted in the United Kingdom. A model for beginning teacher development in the field of primary geography is proposed which looks at the relative knowledge bases needed for effective geography teaching. The model is used to aid analysis of data gathered…

  8. Towards knowledge-based retrieval of medical images. The role of semantic indexing, image content representation and knowledge-based retrieval.

    PubMed

    Lowe, H J; Antipov, I; Hersh, W; Smith, C A

    1998-01-01

    Medicine is increasingly image-intensive. The central importance of imaging technologies such as computerized tomography and magnetic resonance imaging in clinical decision making, combined with the trend to store many "traditional" clinical images such as conventional radiographs, microscopic pathology and dermatology images in digital format, presents both challenges and opportunities for the designers of clinical information systems. The emergence of Multimedia Electronic Medical Record Systems (MEMRS), architectures that integrate medical images with text-based clinical data, will further hasten this trend. The development of these systems, storing a large and diverse set of medical images, suggests that in the future MEMRS will become important digital libraries supporting patient care, research and education. The representation and retrieval of clinical images within these systems is problematic as conventional database architectures and information retrieval models have, until recently, focused largely on text-based data. Medical imaging data differs in many ways from text-based medical data but perhaps the most important difference is that the information contained within imaging data is fundamentally knowledge-based. New representational and retrieval models for clinical images will be required to address this issue. Within the Image Engine multimedia medical record system project at the University of Pittsburgh we are evolving an approach to representation and retrieval of medical images which combines semantic indexing using the UMLS Metathesaurus, image content-based representation and knowledge-based image analysis. PMID:9929345

  9. Unconsciously elicited perceptual prior

    PubMed Central

    Chang, Raymond; Baria, Alexis T.; Flounders, Matthew W.; He, Biyu J.

    2016-01-01

    Increasing evidence over the past decade suggests that vision is not simply a passive, feed-forward process in which cortical areas relay progressively more abstract information to those higher up in the visual hierarchy, but rather an inferential process with top-down processes actively guiding and shaping perception. However, one major question that persists is whether such processes can be influenced by unconsciously perceived stimuli. Recent psychophysics and neuroimaging studies have revealed that while consciously perceived stimuli elicit stronger responses in higher visual and frontoparietal areas than those that fail to reach conscious awareness, the latter can still drive high-level brain and behavioral responses. We investigated whether unconscious processing of a masked natural image could facilitate subsequent conscious recognition of its degraded counterpart (a black-and-white “Mooney” image) presented many seconds later. We found that this is indeed the case, suggesting that conscious vision may be influenced by priors established by unconscious processing of a fleeting image.

  10. A scalable, knowledge-based analysis framework for genetic association studies

    PubMed Central

    2013-01-01

    Background: Testing for marginal associations between numerous genetic variants and disease may miss complex relationships among variables (e.g., gene-gene interactions). Bayesian approaches can model multiple variables together and offer advantages over conventional model building strategies, including using existing biological evidence as modeling priors and acknowledging that many models may fit the data well. With many candidate variables, Bayesian approaches to variable selection rely on algorithms to approximate the posterior distribution of models, such as Markov-Chain Monte Carlo (MCMC). Unfortunately, MCMC is difficult to parallelize and requires many iterations to adequately sample the posterior. We introduce a scalable algorithm called PEAK that improves the efficiency of MCMC by dividing a large set of variables into related groups using a rooted graph that resembles a mountain peak. Our algorithm takes advantage of parallel computing and existing biological databases when available. Results: By using graphs to manage a model space with more than 500,000 candidate variables, we were able to improve MCMC efficiency and uncover the true simulated causal variables, including a gene-gene interaction. We applied PEAK to a case-control study of childhood asthma with 2,521 genetic variants. We used an informative graph for oxidative stress derived from Gene Ontology and identified several variants in ERBB4, OXR1, and BCL2 with strong evidence for associations with childhood asthma. Conclusions: We introduced an extremely flexible analysis framework capable of efficiently performing Bayesian variable selection on many candidate variables. The PEAK algorithm can be provided with an informative graph, which can be advantageous when considering gene-gene interactions, or a symmetric graph, which simply divides the model space into manageable regions. The PEAK framework is compatible with various model forms, allowing for the algorithm to be configured for different
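
    A very rough sketch of the grouping idea described above: candidate variables are partitioned into related groups using a prior-knowledge graph so that each group's model space can be explored separately, for example by its own MCMC run. The toy SNP graph and the use of plain connected components are assumptions; PEAK itself uses a rooted, peak-shaped graph rather than this simplification.

    ```python
    # Illustrative sketch: group candidate variables by connected components of a
    # hypothetical prior-knowledge graph, so each group can be sampled in parallel.
    from collections import defaultdict

    edges = [("SNP1", "SNP2"), ("SNP2", "SNP3"), ("SNP4", "SNP5")]  # assumed prior links
    variables = {"SNP1", "SNP2", "SNP3", "SNP4", "SNP5", "SNP6"}

    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def connected_groups(variables, adj):
        """Group variables by connected components of the knowledge graph."""
        seen, groups = set(), []
        for v in variables:
            if v in seen:
                continue
            stack, group = [v], set()
            while stack:
                u = stack.pop()
                if u in seen:
                    continue
                seen.add(u)
                group.add(u)
                stack.extend(adj[u] - seen)
            groups.append(group)
        return groups

    print(connected_groups(variables, adj))
    # e.g. [{'SNP1','SNP2','SNP3'}, {'SNP4','SNP5'}, {'SNP6'}] -- each group could
    # then be handed to its own variable-selection sampler.
    ```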

  11. Knowledge-based automated road network extraction system using multispectral images

    NASA Astrophysics Data System (ADS)

    Sun, Weihua; Messinger, David W.

    2013-04-01

    A novel approach for automated road network extraction from multispectral WorldView-2 imagery using a knowledge-based system is presented. This approach uses a multispectral flood-fill technique to extract asphalt pixels from satellite images; it then identifies prominent curvilinear structures using template matching. The extracted curvilinear structures provide an initial estimate of the road network, which is refined by the knowledge-based system. This system breaks the curvilinear structures into small segments and then groups them using a set of well-defined rules; a saliency check is then performed to prune the road segments. As a final step, these segments, carrying road width and orientation information, can be reconstructed to generate a proper road map. The approach is shown to perform well with various urban and suburban scenes. It can also be deployed to extract the road network in large-scale scenes.
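
    The first step described above, extracting asphalt-like pixels, can be illustrated with a simple spectral flood fill: starting from a seed pixel assumed to be asphalt, a 4-connected region grows over pixels whose spectra stay within a distance threshold of the seed. The three-band toy image, the Euclidean distance, and the threshold are illustrative assumptions, not the paper's parameters.

    ```python
    # Illustrative sketch: flood fill over pixels spectrally similar to a seed.
    import math

    image = [  # rows of (band1, band2, band3) pixel spectra
        [(40, 42, 45), (41, 43, 44), (200, 180, 90)],
        [(39, 41, 46), (42, 44, 45), (198, 182, 88)],
        [(120, 130, 60), (43, 42, 47), (41, 40, 44)],
    ]

    def spectral_flood_fill(image, seed, threshold=10.0):
        """Grow a 4-connected region of pixels within `threshold` of the seed spectrum."""
        rows, cols = len(image), len(image[0])
        seed_spec = image[seed[0]][seed[1]]
        region, stack = set(), [seed]
        while stack:
            r, c = stack.pop()
            if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
                continue
            if math.dist(image[r][c], seed_spec) > threshold:
                continue
            region.add((r, c))
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
        return region

    print(sorted(spectral_flood_fill(image, seed=(0, 0))))  # the asphalt-like pixels
    ```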

  12. Knowledge-based simulation of DNA metabolism: prediction of enzyme action.

    PubMed

    Brutlag, D L; Galper, A R; Millis, D H

    1991-01-01

    We have developed a knowledge-based simulation of DNA metabolism that accurately predicts the actions of enzymes on DNA under a large number of environmental conditions. Previous simulations of enzyme systems rely predominantly on mathematical models. We use a frame-based representation to model enzymes, substrates and conditions. Interactions between these objects are expressed using production rules and an underlying truth maintenance system. The system performs rapid inference and can explain its reasoning. A graphical interface provides access to all elements of the simulation, including object representations and explanation graphs. Predicting enzyme action is the first step in the development of a large knowledge base to envision the metabolic pathways of DNA replication and repair. PMID:2004281

  13. Knowledge-based vision for space station object motion detection, recognition, and tracking

    NASA Technical Reports Server (NTRS)

    Symosek, P.; Panda, D.; Yalamanchili, S.; Wehner, W., III

    1987-01-01

    Computer vision, especially color image analysis and understanding, has much to offer in the area of the automation of Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management and training. Knowledge-based techniques improve the performance of vision algorithms for unstructured environments because of their ability to deal with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional techniques using statistical and purely model-based approaches lack flexibility in dealing with the variabilities anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications to demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP) are presented. Approaches to the enhancement of the performance of these algorithms with knowledge-based techniques and the potential for deployment of highly-parallel multi-processor systems for these algorithms are discussed.

  14. Knowledge-based reasoning in the Paladin tactical decision generation system

    NASA Technical Reports Server (NTRS)

    Chappell, Alan R.

    1993-01-01

    A real-time tactical decision generation system for air combat engagements, Paladin, has been developed. A pilot's job in air combat includes tasks that are largely symbolic. These symbolic tasks are generally performed through the application of experience and training (i.e. knowledge) gathered over years of flying a fighter aircraft. Two such tasks, situation assessment and throttle control, are identified and broken out in Paladin to be handled by specialized knowledge based systems. Knowledge pertaining to these tasks is encoded into rule-bases to provide the foundation for decisions. Paladin uses a custom built inference engine and a partitioned rule-base structure to give these symbolic results in real-time. This paper provides an overview of knowledge-based reasoning systems as a subset of rule-based systems. The knowledge used by Paladin in generating results as well as the system design for real-time execution is discussed.

  15. SAFOD Brittle Microstructure and Mechanics Knowledge Base (SAFOD BM2KB)

    NASA Astrophysics Data System (ADS)

    Babaie, H. A.; Hadizadeh, J.; di Toro, G.; Mair, K.; Kumar, A.

    2008-12-01

    We have developed a knowledge base to store and present the data collected by a group of investigators studying the microstructures and mechanics of brittle faulting using core samples from the SAFOD (San Andreas Fault Observatory at Depth) project. The investigations are carried out with a variety of analytical and experimental methods primarily to better understand the physics of strain localization in fault gouge. The knowledge base instantiates a specially designed brittle rock deformation ontology developed at Georgia State University. The inference rules embedded in the semantic web languages, such as OWL, RDF, and RDFS, which are used in our ontology, allow the Pellet reasoner used in this application to derive additional truths about the ontology and knowledge of this domain. Access to the knowledge base is via a public website, which is designed to provide the knowledge acquired by all the investigators involved in the project. The stored data will be products of studies such as: experiments (e.g., high-velocity friction experiment), analyses (e.g., microstructural, chemical, mass transfer, mineralogical, surface, image, texture), microscopy (optical, HRSEM, FESEM, HRTEM), tomography, porosity measurement, microprobe, and cathodoluminescence. Data about laboratories, experimental conditions, methods, assumptions, equipment, and mechanical properties and lithology of the studied samples will also be presented on the website per investigation. The ontology was modeled applying the UML (Unified Modeling Language) in Rational Rose, and implemented in OWL-DL (Ontology Web Language) using the Protégé ontology editor. The UML model was converted to OWL-DL by first mapping it to Ecore (.ecore) and Generator model (.genmodel) with the help of the EMF (Eclipse Modeling Framework) plugin in Eclipse. The Ecore model was then mapped to a .uml file, which later was converted into an .owl file and subsequently imported into the Protégé ontology editing environment.

  16. Delivering spacecraft control centers with embedded knowledge-based systems: The methodology issue

    NASA Technical Reports Server (NTRS)

    Ayache, S.; Haziza, M.; Cayrac, D.

    1994-01-01

    Matra Marconi Space (MMS) occupies a leading place in Europe in the domain of satellite and space data processing systems. The maturity of the knowledge-based systems (KBS) technology, the theoretical and practical experience acquired in the development of prototype, pre-operational and operational applications, make it possible today to consider the wide operational deployment of KBS's in space applications. In this perspective, MMS has to prepare the introduction of the new methods and support tools that will form the basis of the development of such systems. This paper introduces elements of the MMS methodology initiatives in the domain and the main rationale that motivated the approach. These initiatives develop along two main axes: knowledge engineering methods and tools, and a hybrid method approach for coexisting knowledge-based and conventional developments.

  17. FTDD973: A multimedia knowledge-based system and methodology for operator training and diagnostics

    NASA Technical Reports Server (NTRS)

    Hekmatpour, Amir; Brown, Gary; Brault, Randy; Bowen, Greg

    1993-01-01

    FTDD973 (973 Fabricator Training, Documentation, and Diagnostics) is an interactive multimedia knowledge-based system and methodology for computer-aided training and certification of operators, as well as tool and process diagnostics, in IBM's CMOS SGP fabrication line (building 973). FTDD973 is an example of what can be achieved with modern multimedia workstations. Knowledge-based systems, hypertext, hypergraphics, high resolution images, audio, motion video, and animation are technologies that in synergy can be far more useful than each by itself. FTDD973's modular and object-oriented architecture is also an example of how improvements in software engineering are finally making it possible to combine many software modules into one application. FTDD973 is developed in ExperMedia/2, an OS/2 multimedia expert system shell for domain experts.

  18. Increasing levels of assistance in refinement of knowledge-based retrieval systems

    NASA Technical Reports Server (NTRS)

    Baudin, Catherine; Kedar, Smadar; Pell, Barney

    1994-01-01

    The task of incrementally acquiring and refining the knowledge and algorithms of a knowledge-based system in order to improve its performance over time is discussed. In particular, the design of DE-KART, a tool whose goal is to provide increasing levels of assistance in acquiring and refining indexing and retrieval knowledge for a knowledge-based retrieval system, is presented. DE-KART starts with knowledge that was entered manually, and increases its level of assistance in acquiring and refining that knowledge, both in terms of the increased level of automation in interacting with users, and in terms of the increased generality of the knowledge. DE-KART is at the intersection of machine learning and knowledge acquisition: it is a first step towards a system which moves along a continuum from interactive knowledge acquisition to increasingly automated machine learning as it acquires more knowledge and experience.

  19. AI/COAG, A Knowledge-Based System for Consultation about Human Hemostasis Disorders: Progress Report

    PubMed Central

    Lindberg, Donald A. B.; Gaston, Lamont W.; Kingsland, Lawrence C.; Vanker, Anthony D.

    1981-01-01

    AI/Coag is an interactive, knowledge based computer system which reports, analyzes and interprets clinical blood coagulation laboratory studies. The laboratory subsystem presently deals with six tests: platelet count, bleeding time, prothrombin time, activated partial thromboplastin time, thrombin time, and urea clot solubility. This subsystem returns a detailed analysis and interpretation of the test results. It also offers further information on specific aspects of the interpretation in the form of “Tell-Me-More” (TMM) items. Individual TMM items may be requested by the user during each interpretive session. Also available are the literature sources underlying the knowledge base, stored by the subsystem as “Tell-Me-Reference” (TMR) items. A second subsystem, under development, contains a detailed hemostasis history questionnaire, the answers from which will be analyzed in conjunction with results from the laboratory subsystem to provide a definitive interpretation or to suggest further laboratory tests.
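
    For readers unfamiliar with the tests listed above, the sketch below shows the flavour of rule-based interpretation for two of them, prothrombin time (PT) and activated partial thromboplastin time (aPTT). The reference ranges and the coarse pathway logic are textbook simplifications assumed for illustration, not the AI/Coag knowledge base itself.

    ```python
    # Illustrative sketch: rule-based interpretation of PT and aPTT results.

    PT_UPPER_SEC = 13.5    # assumed upper limit of normal for PT
    APTT_UPPER_SEC = 35.0  # assumed upper limit of normal for aPTT

    def interpret(pt_sec, aptt_sec):
        """Return a coarse interpretation of the PT/aPTT pattern."""
        pt_long = pt_sec > PT_UPPER_SEC
        aptt_long = aptt_sec > APTT_UPPER_SEC
        if pt_long and aptt_long:
            return "Both PT and aPTT prolonged: consider common-pathway or multiple-factor defects."
        if pt_long:
            return "Isolated PT prolongation: consider extrinsic-pathway (factor VII) defects."
        if aptt_long:
            return "Isolated aPTT prolongation: consider intrinsic-pathway factor defects."
        return "PT and aPTT within the assumed reference ranges."

    print(interpret(pt_sec=18.0, aptt_sec=30.0))
    ```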

  20. PVDaCS - A prototype knowledge-based expert system for certification of spacecraft data

    NASA Technical Reports Server (NTRS)

    Wharton, Cathleen; Shiroma, Patricia J.; Simmons, Karen E.

    1989-01-01

    On-line data management techniques to certify spacecraft information are mandated by increasing telemetry rates. Knowledge-based expert systems offer the ability to certify data electronically without the need for time-consuming human interaction. Issues of automatic certification are explored by designing a knowledge-based expert system to certify data from a scientific instrument, the Orbiter Ultraviolet Spectrometer, on an operating NASA planetary spacecraft, Pioneer Venus. The resulting rule-based system, called PVDaCS (Pioneer Venus Data Certification System), is a functional prototype demonstrating the concepts of a larger system design. A key element of the system design is the representation of an expert's knowledge through the usage of well ordered sequences. PVDaCS produces a certification value derived from expert knowledge and an analysis of the instrument's operation. Results of system performance are presented.

  1. A multimedia Anatomy Browser incorporating a knowledge base and 3D images.

    PubMed Central

    Eno, K.; Sundsten, J. W.; Brinkley, J. F.

    1991-01-01

    We describe a multimedia program for teaching anatomy. The program, called the Anatomy Browser, displays cross-sectional and topographical images, with outlines around structures and regions of interest. The user may point to these structures and retrieve text descriptions, view symbolic relationships between structures, or view spatial relationships by accessing 3-D graphics animations from videodiscs produced specifically for this program. The software also helps students exercise what they have learned by asking them to identify structures by name and location. The program is implemented in a client-server architecture, with the user interface residing on a Macintosh, while images, data, and a growing symbolic knowledge base of anatomy are stored on a fileserver. This architecture allows us to develop practical tutorial modules that are in current use, while at the same time developing the knowledge base that will lead to more intelligent tutorial systems. PMID:1807699

  2. Research on Interactive Knowledge-Based Indexing: The MedIndEx Prototype

    PubMed Central

    Humphrey, Susanne M.

    1989-01-01

    The general purpose of the MedIndEx (Medical Indexing Expert) Project at the National Library of Medicine (NLM) is to design, develop, and test interactive knowledge-based systems for computer-assisted indexing of literature in the MEDLINE® database using terms from the MeSH® (Medical Subject Headings) thesaurus. In conventional MEDLINE indexing, although indexers enter MeSH descriptors at computer terminals, they consult the thesaurus, indexing manual, and other tools in published form. In the MedIndEx research prototype, the thesaurus and indexing rules are incorporated into a computerized knowledge base (KB) which provides specific assistance not possible in the conventional indexing system. We expect such a system, which combines principles and methods of artificial intelligence and information retrieval, will facilitate expert indexing that takes place at NLM.

  3. Designing Information Systems for Nursing Practice: Data Base and Knowledge Base Requirements of Different Organizational Technologies

    PubMed Central

    Ozbolt, Judy G.

    1985-01-01

    A prerequisite to designing computer-aided information systems to support nurse decision making is to identify the kinds of decisions nurses make and to specify the data and the knowledge required to make those decisions. Perrow's (1970) models of organizational technologies, which consider the variability of stimuli and the nature of search procedures for deciding what to do about the stimuli, offer a useful approach to analyzing nurse decision making. Different assumptions about stimuli and search procedures result in different models of nursing, each with its own requirements for a knowledge base and a data base. Professional standards of nursing practice generally assume that clients are unique and therefore treat the stimuli with which nurses deal as highly varied. Existing nursing information systems, however, have been designed as though the stimuli had little variability. Nurses involved in developing the next generation of computer systems will need to identify appropriate models of nursing and to specify the data bases and knowledge bases accordingly.

  4. Orbital transfer vehicle launch operations study: Automated technology knowledge base, volume 4

    NASA Technical Reports Server (NTRS)

    1986-01-01

    A simplified retrieval strategy for compiling automation-related bibliographies from NASA/RECON is presented. Two subsets of NASA Thesaurus subject terms were extracted: a primary list, which is used to obtain an initial set of citations; and a secondary list, which is used to limit or further specify a large initial set of citations. These subject term lists are presented in Appendix A as the Automated Technology Knowledge Base (ATKB) Thesaurus.

  5. Strategic Concept of Competition Model in Knowledge-Based Logistics in Machinebuilding

    NASA Astrophysics Data System (ADS)

    Medvedeva, O. V.

    2015-09-01

    The competitive labor market needs serious change, and machinebuilding is one of the main problem domains. The current drive to promote human capital competition demands modernization. It is therefore necessary to develop a strategy for the social and economic promotion of competition under the conditions of a knowledge-based economy, particularly in machinebuilding. The necessity of such a strategy is demonstrated, as are the basic difficulties it faces in machinebuilding.

  6. Fuzzy knowledge base construction through belief networks based on Lukasiewicz logic

    NASA Technical Reports Server (NTRS)

    Lara-Rosano, Felipe

    1992-01-01

    In this paper, a procedure is proposed to build a fuzzy knowledge base founded on fuzzy belief networks and Lukasiewicz logic. Fuzzy procedures are developed to do the following: to assess the belief values of a consequent, in terms of the belief values of its logical antecedents and the belief value of the corresponding logical function; and to update belief values when new evidence is available.
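
    The abstract does not reproduce the update formulas, but the combination it describes maps naturally onto the Lukasiewicz operators. The sketch below is a minimal illustration under that assumption; the function names and the specific way the rule's own belief value is folded in are editorial choices, not taken from the paper.

    ```python
    # Hedged sketch: combining belief values with Lukasiewicz operators.

    def luka_and(a, b):
        """Lukasiewicz t-norm (strong conjunction)."""
        return max(0.0, a + b - 1.0)

    def luka_or(a, b):
        """Lukasiewicz t-conorm (strong disjunction)."""
        return min(1.0, a + b)

    def consequent_belief(antecedent_beliefs, rule_belief):
        """Belief in a consequent from the beliefs of its antecedents and of the rule itself."""
        combined = 1.0
        for b in antecedent_beliefs:
            combined = luka_and(combined, b)
        return luka_and(combined, rule_belief)

    def update_belief(prior_belief, new_evidence_belief):
        """Revise a node's belief when new evidence arrives (disjunctive accumulation)."""
        return luka_or(prior_belief, new_evidence_belief)

    # Example: two antecedents believed at 0.8 and 0.9, the rule believed at 0.95.
    b = consequent_belief([0.8, 0.9], 0.95)   # -> 0.65
    b = update_belief(b, 0.2)                 # -> 0.85
    print(b)
    ```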

  7. A knowledge-based approach of satellite image classification for urban wetland detection

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofan

    It has been a technical challenge to accurately detect urban wetlands with remotely sensed data by means of pixel-based image classification. This is mainly caused by inadequate spatial resolutions of satellite imagery, spectral similarities between urban wetlands and adjacent land covers, and the spatial complexity of wetlands in human-transformed, heterogeneous urban landscapes. Knowledge-based classification, with great potential to overcome or reduce these technical impediments, has been applied to various image classifications focusing on urban land use/land cover and forest wetlands, but rarely to mapping the wetlands in urban landscapes. This study aims to improve the mapping accuracy of urban wetlands by integrating the pixel-based classification with the knowledge-based approach. The study area is the metropolitan area of Kansas City, USA. SPOT satellite images of 1992, 2008, and 2010 were classified into four classes - wetland, farmland, built-up land, and forestland - using the pixel-based supervised maximum likelihood classification method. The products of the supervised classification are used as the comparative base maps. For our new classification approach, a knowledge base is developed to improve urban wetland detection, which includes a set of decision rules for identifying wetland cover in relation to its elevation, spatial adjacencies, habitat conditions, hydro-geomorphological characteristics, and relevant geostatistics. Using ERDAS Imagine software's knowledge classifier tool, the decision rules are applied to the base maps in order to identify wetlands that cannot be detected by the pixel-based classification alone. The results suggest that the knowledge-based image classification approach can enhance the urban wetland detection capabilities and classification accuracies with remotely sensed satellite imagery.
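
    A minimal sketch of what one such post-classification decision rule might look like. The attribute names, thresholds, and the rule itself are illustrative assumptions; the study's actual rules were encoded with ERDAS Imagine's knowledge classifier rather than in code.

    ```python
    # Hedged sketch of a post-classification decision rule for wetland detection.
    # Attribute names and thresholds are placeholders, not the study's values.

    def refine_pixel(pixel):
        """Reassign a pixel's class using ancillary knowledge, not just spectra."""
        spectral_class = pixel["spectral_class"]            # from the supervised base map
        elevation = pixel["elevation_m"]                     # from a DEM
        dist_to_stream = pixel["dist_to_stream_m"]           # hydro-geomorphological layer
        adjacent_to_wetland = pixel["adjacent_to_wetland"]   # spatial adjacency flag

        # Rule: low-lying pixels near streams that border mapped wetlands are
        # likely wetlands even if the spectral classifier labelled them otherwise.
        if spectral_class in ("farmland", "forestland"):
            if elevation < 230.0 and dist_to_stream < 100.0 and adjacent_to_wetland:
                return "wetland"
        return spectral_class

    pixel = {"spectral_class": "farmland", "elevation_m": 225.0,
             "dist_to_stream_m": 40.0, "adjacent_to_wetland": True}
    print(refine_pixel(pixel))  # -> "wetland"
    ```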

  8. Computerized design of removable partial dentures: a knowledge-based system for the future.

    PubMed

    Davenport, J C; Hammond, P; Fitzpatrick, F J

    1993-06-01

    Dentists frequently fail to provide dental technicians with the design information necessary for the construction of removable partial dentures. The computerization of dental practices and the development of appropriate knowledge-based systems could provide a powerful tool for improving this aspect of dental care. This article describes one such system currently under development which is an example of the kind of additional facility that will become available for those practices with the necessary hardware. PMID:8299844

  9. GENEX: A knowledge-based expert assistant for Genbank data analysis

    NASA Technical Reports Server (NTRS)

    Batra, Sajeev; Macinnes, Mark A.

    1990-01-01

    We describe a knowledge-based expert assistant, GENEX (Gene Explorer), that simplifies some analysis of Genbank data. GENEX is written in CLIPS (C Language Integrated Production System), an expert system tool developed at the NASA Johnson Space Center. The main purpose of the system is to look for gene start site annotations, unusual DNA sequence composition, and regulatory protein patterns, an application in which determinations are made via a decision tree.

  10. Expert verification of the knowledge base of FEED--a feedback expert system for EMS documentation.

    PubMed

    Saini, Devashish; Orthner, Helmuth F; Berner, Eta S; Mirza, Muzna; Godwin, Charles J; Brown, Todd B

    2008-01-01

    Feedback Expert System for Emergency Medical Services (EMS) Documentation (FEED) has a rule-based knowledge base (KB) that was verified against specifications in a focus group consisting of six experts. The focus group suggested changes in almost all rules discussed, indicating that the KB did not meet specifications at that stage of development. However, enough information was gathered to address these issues in the next iteration of development. PMID:18999259

  11. A Knowledge Based Expert-System For Synthetic Aperture Radar Target Recognition

    NASA Astrophysics Data System (ADS)

    Rogers, Steven K.; Kabrisky, Matthew; Anderson, Steven; Mills, James P.

    1988-03-01

    This paper describes a knowledge-based expert system that uses return features, provided by image analysts, to identify an object as a specific instance or class of object, such as a tank or truck. Partial feature sets allow the expert system to classify occluded and unfamiliar or falsified object data returns to the most likely class with a specified reasoning path. The rule-based system was developed using the Prolog version of M.1.

  12. A knowledge-based decision support system in bioinformatics: an application to protein complex extraction

    PubMed Central

    2013-01-01

    Background We introduce a Knowledge-based Decision Support System (KDSS) to address the protein complex extraction problem. Using a Knowledge Base (KB) coding the expertise about the proposed scenario, our KDSS is able to suggest both strategies and tools, according to the features of the input dataset. Our system provides a navigable workflow for the current experiment and furthermore offers support in the configuration and running of every processing component of that workflow. This last feature makes our system a crossover between classical DSS and Workflow Management Systems. Results We briefly present the KDSS's architecture and the basic concepts used in the design of the knowledge base and the reasoning component. The system is then tested using a subset of the Saccharomyces cerevisiae protein-protein interaction dataset. We used this subset because it has been well studied in the literature by several research groups in the field of complex extraction: in this way we could easily compare the results obtained through our KDSS with theirs. Our system suggests both a preprocessing and a clustering strategy, and for each of them it proposes and eventually runs suitable algorithms. Our system's final results are then composed of a workflow of tasks, which can be reused for other experiments, and the specific numerical results for that particular trial. Conclusions The proposed approach, using the KDSS's knowledge base, provides a novel workflow that gives the best results compared with the other workflows produced by the system. This workflow and its numeric results have been compared with other approaches to PPI network analysis found in the literature, offering similar results. PMID:23368995
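
    A rough sketch of the kind of knowledge-driven suggestion step described above, assuming a simple rule base that maps dataset features to preprocessing and clustering choices. The rules, feature names, and algorithm choices are placeholders, not the paper's knowledge base.

    ```python
    # Hedged sketch of knowledge-driven workflow suggestion for PPI complex extraction.
    # The rules and algorithm names are illustrative assumptions only.

    def suggest_workflow(dataset):
        workflow = []
        # Preprocessing suggestions driven by dataset features.
        if dataset.get("has_duplicate_interactions"):
            workflow.append(("preprocess", "remove duplicate edges"))
        if dataset.get("weighted"):
            workflow.append(("preprocess", "filter edges below a confidence threshold"))
        # Clustering suggestion driven by network size.
        if dataset.get("num_proteins", 0) > 5000:
            workflow.append(("cluster", "MCL (scales to large networks)"))
        else:
            workflow.append(("cluster", "clique percolation"))
        return workflow

    yeast_subset = {"weighted": True, "has_duplicate_interactions": True, "num_proteins": 1600}
    for step, action in suggest_workflow(yeast_subset):
        print(step, "->", action)
    ```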

  13. Evaluation of long-term maintenance of a large medical knowledge base.

    PubMed Central

    Giuse, D A; Giuse, N B; Miller, R A

    1995-01-01

    OBJECTIVE: Evaluate the effects of long-term maintenance activities on existing portions of a large internal medicine knowledge base. DESIGN: Five physicians who were not among the original developers of the knowledge base independently updated a total of 15 QMR disease profiles; each updated submission was modified by a review group serving as the "gold standard," and the pre- and post-study versions of each updated disease profile were compared. MEASUREMENTS: Numbers and types of changes, defined as any difference between the original version and the final version of a disease profile; reason for each change; and bibliographic references cited by the physicians as supporting evidence. RESULTS: A total of 16% of all entries were modified by the updating process; up to 95% of the entries in a disease profile were affected. The two most common modifications were changes to the frequency of an entry and creation of a new entry. Laboratory findings were affected much more often than were history, symptom, or physical exam findings. The dominant reason for changes was the appearance of new evidence in the medical literature. The literature cited ranged from 1944 to the present. CONCLUSIONS: This study provides an evaluation of the rate of change within the QMR medical knowledge base due to long-term maintenance. The results show that this is a demanding activity that may profoundly affect certain portions of a knowledge base, and that different types of knowledge (e.g., simple laboratory vs expensive or invasive laboratory findings) are affected by the process in different ways. PMID:7496879

  14. Development of a component centered fault monitoring and diagnosis knowledge based system for space power system

    NASA Technical Reports Server (NTRS)

    Lee, S. C.; Lollar, Louis F.

    1988-01-01

    The overall approach currently being taken in the development of AMPERES (Autonomously Managed Power System Extendable Real-time Expert System), a knowledge-based expert system for fault monitoring and diagnosis of space power systems, is discussed. The system architecture, knowledge representation, and fault monitoring and diagnosis strategy are examined. A 'component-centered' approach developed in this project is described. Critical issues requiring further study are identified.

  15. A knowledge-based design for assemble system for vehicle seat

    NASA Astrophysics Data System (ADS)

    Wahidin, L. S.; Tan, CheeFai; Khalil, S. N.; Juffrizal, K.; Nidzamuddin, M. Y.

    2015-05-01

    Companies worldwide are striving to reduce the costs of their products to improve their bottom-line profitability. When it comes to improving profits, there are two choices: sell more or cut the cost of what is currently being sold. Given the depressed economy of the last several years, the "sell more" option has, in many cases, been taken off the table. As a result, cost cutting is often the most effective path. One of the industrial challenges is to shorten product development and lower manufacturing cost, especially in the early stage of product design. A knowledge-based system is used to assist the industry when the expert is not available and to keep the expertise within the company. The application of a knowledge-based system will enable the standardization and accuracy of the assembly process. For this purpose, a knowledge-based design for assembly system is developed to assist the industry in planning the assembly process of the vehicle seat.

  16. The Network of Excellence ``Knowledge-based Multicomponent Materials for Durable and Safe Performance''

    NASA Astrophysics Data System (ADS)

    Moreno, Arnaldo

    2008-02-01

    The Network of Excellence "Knowledge-based Multicomponent Materials for Durable and Safe Performance" (KMM-NoE) consists of 36 institutional partners from 10 countries representing leading European research institutes and university departments (25), small and medium enterprises, SMEs (5) and large industry (7) in the field of knowledge-based multicomponent materials (KMM), more specifically in intermetallics, metal-ceramic composites, functionally graded materials and thin layers. The main goal of the KMM-NoE (currently funded by the European Commission) is to mobilise and concentrate the fragmented scientific potential in the KMM field to create a durable and efficient organism capable of developing leading-edge research while spreading the accumulated knowledge outside the Network and enhancing the technological skills of the related industries. The long-term strategic goal of the KMM-NoE is to establish a self-supporting pan-European institution in the field of knowledge-based multicomponent materials—KMM Virtual Institute (KMM-VIN). It will combine industry oriented research with educational and training activities. The KMM Virtual Institute will be founded on three main pillars: KMM European Competence Centre, KMM Integrated Post-Graduate School, KMM Mobility Programme. The KMM-NoE is coordinated by the Institute of Fundamental Technological Research (IPPT) of the Polish Academy of Sciences, Warsaw, Poland.

  17. Knowledge-based approach to fault diagnosis and control in distributed process environments

    NASA Astrophysics Data System (ADS)

    Chung, Kwangsue; Tou, Julius T.

    1991-03-01

    This paper presents a new design approach to knowledge-based decision support systems for fault diagnosis and control for quality assurance and productivity improvement in automated manufacturing environments. Based on the observed manifestations, the knowledge-based diagnostic system hypothesizes a set of the most plausible disorders by mimicking the reasoning process of a human diagnostician. The data integration technique is designed to generate error-free hierarchical category files. A novel approach to diagnostic problem solving has been proposed by integrating the PADIKS (Pattern-Directed Knowledge-Based System) concept and the symbolic model of diagnostic reasoning based on the categorical causal model. The combination of symbolic causal reasoning and pattern-directed reasoning produces a highly efficient diagnostic procedure and generates a more realistic expert behavior. In addition, three distinctive constraints are designed to further reduce the computational complexity and to eliminate non-plausible hypotheses involved in the multiple disorders problem. The proposed diagnostic mechanism, which consists of three different levels of reasoning operations, significantly reduces the computational complexity in the diagnostic problem with uncertainty by systematically shrinking the hypotheses space. This approach is applied to the test and inspection data collected from a PCB manufacturing operation.

  18. HYPLOSP: a knowledge-based approach to protein local structure prediction.

    PubMed

    Chen, Ching-Tai; Lin, Hsin-Nan; Sung, Ting-Yi; Hsu, Wen-Lian

    2006-12-01

    Local structure prediction can facilitate ab initio structure prediction, protein threading, and remote homology detection. However, the accuracy of existing methods is limited. In this paper, we propose a knowledge-based prediction method that assigns a measure called the local match rate to each position of an amino acid sequence to estimate the confidence of our method. Empirically, the accuracy of the method correlates positively with the local match rate; therefore, we employ it to predict the local structures of positions with a high local match rate. For positions with a low local match rate, we propose a neural network prediction method. To better utilize the knowledge-based and neural network methods, we design a hybrid prediction method, HYPLOSP (HYbrid method to Protein LOcal Structure Prediction) that combines both methods. To evaluate the performance of the proposed methods, we first perform cross-validation experiments by applying our knowledge-based method, a neural network method, and HYPLOSP to a large dataset of 3,925 protein chains. We test our methods extensively on three different structural alphabets and evaluate their performance by two widely used criteria, Maximum Deviation of backbone torsion Angle (MDA) and Q(N), which is similar to Q(3) in secondary structure prediction. We then compare HYPLOSP with three previous studies using a dataset of 56 new protein chains. HYPLOSP shows promising results in terms of MDA and Q(N) accuracy and demonstrates its alphabet-independent capability. PMID:17245815
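
    A minimal sketch of the hybrid dispatch implied by the abstract: use the knowledge-based predictor where the local match rate is high and the neural network elsewhere. The threshold value and the stand-in predictors are assumptions; the paper does not state a cut-off here.

    ```python
    # Hedged sketch of the hybrid dispatch in HYPLOSP-style local structure prediction.
    # The threshold and the two predictors are placeholders.

    MATCH_RATE_THRESHOLD = 0.6   # assumed cut-off, not taken from the paper

    def predict_local_structure(position, local_match_rate,
                                knowledge_based_predict, neural_net_predict):
        """Use the knowledge-based predictor where its confidence proxy is high,
        fall back to the neural network elsewhere."""
        if local_match_rate >= MATCH_RATE_THRESHOLD:
            return knowledge_based_predict(position)
        return neural_net_predict(position)

    # Usage with trivial stand-in predictors:
    kb = lambda pos: "helix-like fragment (KB)"
    nn = lambda pos: "coil-like fragment (NN)"
    print(predict_local_structure(42, 0.81, kb, nn))  # knowledge-based branch
    print(predict_local_structure(43, 0.35, kb, nn))  # neural network branch
    ```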

  19. Integrating Multiple On-line Knowledge Bases for Disease-Lab Test Relation Extraction.

    PubMed

    Zhang, Yaoyun; Soysal, Ergin; Moon, Sungrim; Wang, Jingqi; Tao, Cui; Xu, Hua

    2015-01-01

    A computable knowledge base containing relations between diseases and lab tests would be a great resource for many biomedical informatics applications. This paper describes our initial step towards establishing a comprehensive knowledge base of disease and lab tests relations utilizing three public on-line resources. LabTestsOnline, MedlinePlus and Wikipedia are integrated to create a freely available, computable disease-lab test knowledgebase. Disease and lab test concepts are identified using MetaMap and relations between diseases and lab tests are determined based on source-specific rules. Experimental results demonstrate a high precision for relation extraction, with Wikipedia achieving the highest precision of 87%. Combining the three sources reached a recall of 51.40%, when compared with a subset of disease-lab test relations extracted from a reference book. Moreover, we found additional disease-lab test relations from on-line resources, indicating they are complementary to existing reference books for building a comprehensive disease and lab test relation knowledge base. PMID:26306271

  20. A New Collaborative Knowledge-Based Approach for Wireless Sensor Networks

    PubMed Central

    Canada-Bago, Joaquin; Fernandez-Prieto, Jose Angel; Gadeo-Martos, Manuel Angel; Velasco, Juan Ramón

    2010-01-01

    This work presents a new approach for collaboration among sensors in Wireless Sensor Networks. These networks are composed of a large number of sensor nodes with constrained resources: limited computational capability, memory, power sources, etc. Nowadays, there is a growing interest in the integration of Soft Computing technologies into Wireless Sensor Networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks. The objective of this work is to design a collaborative knowledge-based network, in which each sensor executes an adapted Fuzzy Rule-Based System. This design presents significant advantages: experts can define interpretable knowledge with uncertainty and imprecision, collaborative knowledge can be separated from control or modeling knowledge, and the collaborative approach can tolerate neighbor sensor failures and communication errors. As a real-world application of this approach, we demonstrate a collaborative modeling system for pests, in which an alarm about the development of the olive tree fly is inferred. The results show that knowledge-based sensors are suitable for a wide range of applications and that the behavior of a knowledge-based sensor may be modified by inferences and knowledge of neighbor sensors in order to obtain a more accurate and reliable output. PMID:22219701

  1. A knowledge-based machine vision system for space station automation

    NASA Astrophysics Data System (ADS)

    Chipman, Laure J.; Ranganath, H. S.

    1989-11-01

    A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.

  2. Knowledge base and sensor bus messaging service architecture for critical tsunami warning and decision-support

    NASA Astrophysics Data System (ADS)

    Sabeur, Z. A.; Wächter, J.; Middleton, S. E.; Zlatev, Z.; Häner, R.; Hammitzsch, M.; Loewe, P.

    2012-04-01

    The intelligent management of large volumes of environmental monitoring data for early tsunami warning requires the deployment of a robust and scalable service-oriented infrastructure that is supported by an agile knowledge base for critical decision support. In the TRIDEC project (TRIDEC 2010-2013), a sensor observation service bus of the TRIDEC system is being developed for the advancement of complex tsunami event processing and management. Further, a dedicated TRIDEC system knowledge base is being implemented to enable on-demand access to semantically rich OGC SWE compliant hydrodynamic observations and operationally oriented meta-information for multiple subscribers. TRIDEC decision support requires a scalable and agile real-time processing architecture which enables fast response to evolving subscribers' requirements as the tsunami crisis develops. This is also achieved with the support of intelligent processing services which specialise in multi-level fusion methods with relevance feedback and deep learning. The TRIDEC knowledge base development work, coupled with that of the generic sensor bus platform, will be presented to demonstrate advanced decision support with situation awareness in the context of tsunami early warning and crisis management.

  3. Can Croatia join Europe as competitive knowledge-based society by 2010?

    PubMed

    Petrovecki, Mladen; Paar, Vladimir; Primorac, Dragan

    2006-12-01

    The 21st century has brought important changes in the paradigms of economic development, one of them being a shift toward recognizing knowledge and information as the most valuable commodities of today. The European Union (EU) has been working hard to become the most competitive knowledge-based society in the world, and Croatia, an EU candidate country, has been faced with a similar task. To establish itself as one of the best knowledge-based countries in the Eastern European region over the next 4 years, Croatia realized it has to create an education and science system consistent with European standards and sensitive to labor market needs. For that purpose, the Croatian Ministry of Science, Education, and Sports (MSES) has created and started implementing a complex strategy, consisting of the following key components: the reform of the education system in accordance with the Bologna Declaration; stimulation of scientific production by supporting national and international research projects; reversing the "brain drain" into "brain gain" and strengthening the links between science and technology; and informatization of the whole education and science system. In this comprehensive report, we describe the implementation of these measures, whose coordination with the EU goals presents a challenge, as well as an opportunity, for Croatia to become a knowledge-based society by 2010. PMID:17167853

  4. Knowledge-based algorithm for satellite image classification of urban wetlands

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofan; Ji, Wei

    2014-10-01

    It has been a challenge to accurately detect urban wetlands with remotely sensed data by means of pixel-based image classification. This technical difficulty results mainly from inadequate spatial resolutions of satellite imagery, spectral similarities between urban wetlands and adjacent land covers, and the spatial complexity of wetlands in human-transformed, heterogeneous urban landscapes. To address this issue, an image classification approach has been developed to improve the mapping accuracy of urban wetlands by integrating the pixel-based classification with a knowledge-based algorithm. The algorithm includes a set of decision rules for identifying wetland cover in relation to elevation, spatial adjacencies, habitat conditions, hydro-geomorphological characteristics, and relevant geo-statistics. ERDAS Imagine software was used to develop the knowledge base and implement the classification. The study area is the metropolitan region of Kansas City, USA. SPOT satellite images of 1992, 2008, and 2010 were classified into four classes - wetland, farmland, built-up land, and forestland. The results suggest that the knowledge-based image classification approach can enhance urban wetland detection capabilities and classification accuracies with remotely sensed satellite imagery.

  5. Integrating Multiple On-line Knowledge Bases for Disease-Lab Test Relation Extraction

    PubMed Central

    Zhang, Yaoyun; Soysal, Ergin; Moon, Sungrim; Wang, Jingqi; Tao, Cui; Xu, Hua

    2015-01-01

    A computable knowledge base containing relations between diseases and lab tests would be a great resource for many biomedical informatics applications. This paper describes our initial step towards establishing a comprehensive knowledge base of disease and lab tests relations utilizing three public on-line resources. LabTestsOnline, MedlinePlus and Wikipedia are integrated to create a freely available, computable disease-lab test knowledgebase. Disease and lab test concepts are identified using MetaMap and relations between diseases and lab tests are determined based on source-specific rules. Experimental results demonstrate a high precision for relation extraction, with Wikipedia achieving the highest precision of 87%. Combining the three sources reached a recall of 51.40%, when compared with a subset of disease-lab test relations extracted from a reference book. Moreover, we found additional disease-lab test relations from on-line resources, indicating they are complementary to existing reference books for building a comprehensive disease and lab test relation knowledge base. PMID:26306271

  6. Can Croatia Join Europe as Competitive Knowledge-based Society by 2010?

    PubMed Central

    Petrovečki, Mladen; Paar, Vladimir; Primorac, Dragan

    2006-01-01

    The 21st century has brought important changes in the paradigms of economic development, one of them being a shift toward recognizing knowledge and information as the most important factors of today. The European Union (EU) has been working hard to become the most competitive knowledge-based society in the world, and Croatia, an EU candidate country, has been faced with a similar task. To establish itself as one of the best knowledge-based countries in the Eastern European region over the next four years, Croatia realized it has to create an education and science system consistent with European standards and sensitive to labor market needs. For that purpose, the Croatian Ministry of Science, Education, and Sports (MSES) has created and started implementing a complex strategy, consisting of the following key components: the reform of the education system in accordance with the Bologna Declaration; stimulation of scientific production by supporting national and international research projects; reversing the “brain drain” into “brain gain” and strengthening the links between science and technology; and informatization of the whole education and science system. In this comprehensive report, we describe the implementation of these measures, whose coordination with the EU goals presents a challenge, as well as an opportunity, for Croatia to become a knowledge-based society by 2010. PMID:17167853

  7. Virk: an active learning-based system for bootstrapping knowledge base development in the neurosciences.

    PubMed

    Ambert, Kyle H; Cohen, Aaron M; Burns, Gully A P C; Boudreau, Eilis; Sonmez, Kemal

    2013-01-01

    The frequency and volume of newly-published scientific literature is quickly making manual maintenance of publicly-available databases of primary data unrealistic and costly. Although machine learning (ML) can be useful for developing automated approaches to identifying scientific publications containing relevant information for a database, developing such tools necessitates manually annotating an unrealistic number of documents. One approach to this problem, active learning (AL), builds classification models by iteratively identifying documents that provide the most information to a classifier. Although this approach has been shown to be effective for related problems, in the context of scientific database curation, it falls short. We present Virk, an AL system that, while being trained, simultaneously learns a classification model and identifies documents having information of interest for a knowledge base. Our approach uses a support vector machine (SVM) classifier with input features derived from neuroscience-related publications from the primary literature. Using our approach, we were able to increase the size of the Neuron Registry, a knowledge base of neuron-related information, by 90% in 3 months. Using standard biocuration methods, it would have taken between 1 and 2 years to make the same number of contributions to the Neuron Registry. Here, we describe the system pipeline in detail, and evaluate its performance against other approaches to sampling in AL. PMID:24399964
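
    A hedged sketch of the uncertainty-sampling loop that typically underlies such an AL curation assistant, using scikit-learn's LinearSVC as a stand-in for the SVM. It illustrates the idea only; it is not the Virk pipeline, and the feature vectors and batch size are placeholders.

    ```python
    # Hedged sketch of SVM-based uncertainty sampling for literature curation.
    import numpy as np
    from sklearn.svm import LinearSVC

    def active_learning_round(X_labeled, y_labeled, X_pool, batch_size=10):
        """Train an SVM on the labeled set and pick the pool documents closest to
        the decision boundary for the curator to annotate next."""
        clf = LinearSVC()
        clf.fit(X_labeled, y_labeled)
        scores = clf.decision_function(X_pool)
        query_idx = np.argsort(np.abs(scores))[:batch_size]   # most uncertain documents
        relevant_idx = np.where(scores > 0)[0]                # candidate knowledge-base entries
        return query_idx, relevant_idx

    # Toy usage with random features standing in for document vectors:
    rng = np.random.default_rng(0)
    X_lab, y_lab = rng.normal(size=(40, 5)), rng.integers(0, 2, size=40)
    X_pool = rng.normal(size=(200, 5))
    to_annotate, to_review = active_learning_round(X_lab, y_lab, X_pool)
    print(len(to_annotate), "documents queued for annotation")
    ```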

  8. Reducing a Knowledge-Base Search Space When Data Are Missing

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    This software addresses the problem of how to efficiently execute a knowledge base in the presence of missing data. Computationally, this is an exponentially expensive operation that, without heuristics, generates a search space of 1 + 2^n possible scenarios, where n is the number of rules in the knowledge base. Even for a knowledge base of the most modest size, say 16 rules, it would produce 65,537 possible scenarios. The purpose of this software is to reduce the complexity of this operation to a more manageable size. The problem that this system solves is to develop an automated approach that can reason in the presence of missing data. This is a meta-reasoning capability that repeatedly calls a diagnostic engine/model to provide prognoses and prognosis tracking. In the big picture, the scenario generator takes as its input the current state of a system, including probabilistic information from Data Forecasting. Using model-based reasoning techniques, it returns an ordered list of fault scenarios that could be generated from the current state, i.e., the plausible future failure modes of the system as it presently stands. The scenario generator models a Potential Fault Scenario (PFS) as a black box, the input of which is a set of states tagged with priorities and the output of which is one or more potential fault scenarios tagged by a confidence factor. The results from the system are used by a model-based diagnostician to predict the future health of the monitored system.
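
    A small sketch of why unguided scenario generation is exponential and how a priority-driven cut-off can shrink it. The pruning heuristic, priority values, and parameters below are assumptions for illustration, not the actual NASA algorithm.

    ```python
    # Hedged sketch: exponential scenario count vs. a priority-based cut-off.
    from itertools import combinations

    def all_scenarios(n_rules):
        """Unguided enumeration: the empty scenario plus every subset of rules."""
        return 1 + 2 ** n_rules            # 16 rules -> 65,537 scenarios

    def pruned_scenarios(rule_priorities, k=3, max_size=2):
        """Only consider the k highest-priority rules and small combinations of them."""
        top = sorted(rule_priorities, key=rule_priorities.get, reverse=True)[:k]
        scenarios = [()]
        for size in range(1, max_size + 1):
            scenarios.extend(combinations(top, size))
        return scenarios

    priorities = {f"rule_{i}": p for i, p in enumerate(
        [0.9, 0.2, 0.7, 0.4, 0.85, 0.1, 0.3, 0.6, 0.5, 0.05, 0.8, 0.15, 0.25, 0.35, 0.45, 0.55])}
    print(all_scenarios(16))                   # 65537
    print(len(pruned_scenarios(priorities)))   # 1 + 3 + 3 = 7 scenarios
    ```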

  9. Ada and knowledge-based systems: A prototype combining the best of both worlds

    NASA Technical Reports Server (NTRS)

    Brauer, David C.

    1986-01-01

    A software architecture is described which facilitates the construction of distributed expert systems using Ada and selected knowledge based systems. This architecture was utilized in the development of a Knowledge-based Maintenance Expert System (KNOMES) prototype for the Space Station Mobile Service Center (MSC). The KNOMES prototype monitors a simulated data stream from MSC sensors and built-in test equipment. It detects anomalies in the data and performs diagnosis to determine the cause. The software architecture which supports the KNOMES prototype allows for the monitoring and diagnosis tasks to be performed concurrently. The basic concept of this software architecture is named ACTOR (Ada Cognitive Task ORganization Scheme). An individual ACTOR is a modular software unit which contains both standard data processing and artificial intelligence components. A generic ACTOR module contains Ada packages for communicating with other ACTORs and accessing various data sources. The knowledge based component of an ACTOR determines the role it will play in a system. In this prototype, an ACTOR will monitor the MSC data stream.

  10. A knowledge-based machine vision system for space station automation

    NASA Technical Reports Server (NTRS)

    Chipman, Laure J.; Ranganath, H. S.

    1989-01-01

    A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.

  11. ISPE: A knowledge-based system for fluidization studies. 1990 Annual report

    SciTech Connect

    Reddy, S.

    1991-01-01

    Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all "specified goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
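
    A minimal sketch of the simulate-analyze-modify loop that such an assistant automates around a simulator. The helper functions run_simulation, goals_met, and revise_inputs are hypothetical placeholders for the ASPEN-specific steps described above.

    ```python
    # Hedged sketch of the iterative simulation loop described in the abstract.

    def iterative_simulation(initial_inputs, run_simulation, goals_met, revise_inputs,
                             max_iterations=20):
        """Repeat: execute the simulator, analyze the results, and revise the
        input data until all specified goals are satisfied."""
        inputs = initial_inputs
        for iteration in range(1, max_iterations + 1):
            results = run_simulation(inputs)          # step 2: execute the simulation
            if goals_met(results):                    # step 3: analyze the results
                return inputs, results, iteration
            inputs = revise_inputs(inputs, results)   # modify the input data file
        raise RuntimeError("specified goals not met within the iteration budget")
    ```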

  12. Travel-time correction surface generation for the DOE Knowledge Base

    SciTech Connect

    Hipp, J.; Young, C.; Keyser, R.

    1997-08-01

    The DOE Knowledge Base data storage and access model consists of three parts: raw data processing, intermediate surface generation, and final output surface interpolation. The paper concentrates on the second step, surface generation, specifically applied to travel-time correction data. The surface generation for the intermediate step is accomplished using a modified kriging solution that provides robust error estimates for each interpolated point and satisfies many important physical requirements, including differing quality data points, a user-definable range of influence for each point, blending to background values for both interpolated values and error estimates beyond those ranges, and the ability to account for the effects of geologic region boundaries. These requirements are outlined and discussed and are linked to requirements specified for the final output model in the DOE Knowledge Base. Future work will focus on testing the entire Knowledge Base model using the regional calibration data sets which are being gathered by researchers at Los Alamos and Lawrence Livermore National Laboratories.

  13. Virk: an active learning-based system for bootstrapping knowledge base development in the neurosciences

    PubMed Central

    Ambert, Kyle H.; Cohen, Aaron M.; Burns, Gully A. P. C.; Boudreau, Eilis; Sonmez, Kemal

    2013-01-01

    The frequency and volume of newly-published scientific literature is quickly making manual maintenance of publicly-available databases of primary data unrealistic and costly. Although machine learning (ML) can be useful for developing automated approaches to identifying scientific publications containing relevant information for a database, developing such tools necessitates manually annotating an unrealistic number of documents. One approach to this problem, active learning (AL), builds classification models by iteratively identifying documents that provide the most information to a classifier. Although this approach has been shown to be effective for related problems, in the context of scientific database curation, it falls short. We present Virk, an AL system that, while being trained, simultaneously learns a classification model and identifies documents having information of interest for a knowledge base. Our approach uses a support vector machine (SVM) classifier with input features derived from neuroscience-related publications from the primary literature. Using our approach, we were able to increase the size of the Neuron Registry, a knowledge base of neuron-related information, by 90% in 3 months. Using standard biocuration methods, it would have taken between 1 and 2 years to make the same number of contributions to the Neuron Registry. Here, we describe the system pipeline in detail, and evaluate its performance against other approaches to sampling in AL. PMID:24399964

  14. Prior Knowledge Base of Constellations and Bright Stars among Non-Science Majoring Undergraduates and 14-15 Year Old Students

    ERIC Educational Resources Information Center

    Hintz, Eric G.; Hintz, Maureen L.; Lawler, M. Jeannette

    2015-01-01

    As part of an effort to improve students' knowledge of constellations and bright stars in an introductory level descriptive astronomy survey course, we measured the baseline knowledge that students bring to the class and how their score evolve over the course of the semester. This baseline is needed by the broader astronomy education research…

  15. The Knowledge Dictionary: A KBMS architecture for the many-to-many coupling of knowledge based-systems to databases

    SciTech Connect

    Davis, J.P.

    1989-01-01

    The effective management and leveraging of organizational knowledge has become the focus of much research in the computer industry. One specific area is the creation of information systems that combine the ability to manage large stores of data, making it available to many users, with the ability to reason and make inferences over bodies of knowledge capturing specific expertise in some problem domain. A Knowledge Base Management System (KBMS) is a system providing management of a large shared knowledge base for (potentially) many knowledge-based systems (KBS). A KBMS architecture for coupling knowledge-based systems to databases has been developed. The architecture is built around a repository known as the Knowledge Dictionary, a multi-level self-describing framework that facilitates the KBS-DBMS integration. The Knowledge Dictionary architecture allows the following enhancements to be made to the KBS environment: knowledge sharing among multiple KBS applications; knowledge management of semantic integrity over large-scale (declarative) knowledge bases; and knowledge maintenance as the declarative portion of the shared knowledge base evolves over time. This dissertation discusses the architecture of the Knowledge Dictionary and the underlying knowledge representation framework, focusing on how it is used to provide knowledge management services to the KBS applications having their declarative knowledge base components stored as databases in the DBMS. The specific service investigated is the management of semantic integrity of the knowledge base.

  16. Points of Departure: Developing the Knowledge Base of ESL and FSL Teachers for K-12 Programs in Canada

    ERIC Educational Resources Information Center

    Faez, Farahnaz

    2011-01-01

    In this paper I examine similarities and differences between the required knowledge base of teachers of English as a second language (ESL) and French as a second language (FSL) for teaching in Kindergarten through Grade 12 programs in Canada. Drawing on knowledge base frameworks in language teacher education (Freeman and Johnson, 1998; Richards,…

  17. Automated integration of external databases: a knowledge-based approach to enhancing rule-based expert systems.

    PubMed Central

    Berman, L.; Cullen, M. R.; Miller, P. L.

    1992-01-01

    Expert system applications in the biomedical domain have long been hampered by the difficulty inherent in maintaining and extending large knowledge bases. We have developed a knowledge-based method for automatically augmenting such knowledge bases. The method consists of automatically integrating data contained in commercially available, external, on-line databases with data contained in an expert system's knowledge base. We have built a prototype system, named DBX, using this technique to augment an expert system's knowledge base as a decision support aid and as a bibliographic retrieval tool. In this paper, we describe this prototype system in detail, illustrate its use and discuss the lessons we have learned in its implementation. PMID:1482872

  18. Glycine functionalized multiwall carbon nanotubes as a novel hollow fiber solid-phase microextraction sorbent for pre-concentration of venlafaxine and o-desmethylvenlafaxine in biological and water samples prior to determination by high-performance liquid chromatography.

    PubMed

    Ghorbani, Mahdi; Chamsaz, Mahmoud; Rounaghi, Gholam Hossein

    2016-06-01

    A hollow fiber solid-phase microextraction method for pre-concentration of venlafaxine and o-desmethylvenlafaxine in biological matrices is described for the first time. MWCNTs functionalized with an amino acid, glycine, were synthesized and held in the pores of a hollow fiber by a sol-gel technique. In order to extract venlafaxine and o-desmethylvenlafaxine from real samples, the hollow fiber was immersed into the sample solution under magnetic stirring for 20 min. The extracted venlafaxine and o-desmethylvenlafaxine were then desorbed from the fibers with methanol by sonication and analyzed using high-performance liquid chromatography. Important microextraction parameters, including pH of the donor phase, donor phase volume, stirring rate, extraction time, and desorption conditions such as the type and volume of solvent and desorption time, were thoroughly investigated and optimized. The optimized technique provides good repeatability (intraday RSDs of 3.7 and 3.4%, interday RSDs of 5.8 and 5.4%), linearity (0.1-300 and 0.2-360 ng/mL), low LODs (0.03 and 0.07 ng/mL), and high enrichment factors (164 and 176) for venlafaxine and o-desmethylvenlafaxine, respectively. The analytical performance of Gly-MWCNTs as a new SPME sorbent was compared with MWCNTs and carboxylic MWCNTs. The results indicate that Gly-MWCNTs are quite effective for the extraction of venlafaxine and o-desmethylvenlafaxine. The feasibility of the method was evaluated by analyzing human urine and real water samples. The results obtained in this work show a promising, simple, selective, and sensitive sample preparation and determination method for biological and water samples. PMID:27108286

  19. Diversity priors for learning early visual features.

    PubMed

    Xiong, Hanchen; Rodríguez-Sánchez, Antonio J; Szedmak, Sandor; Piater, Justus

    2015-01-01

    This paper investigates how utilizing diversity priors can discover early visual features that resemble their biological counterparts. The study is mainly motivated by the sparsity and selectivity of activations of visual neurons in area V1. Most previous work on computational modeling emphasizes selectivity or sparsity independently. However, we argue that selectivity and sparsity are just two epiphenomena of the diversity of receptive fields, which has been rarely exploited in learning. In this paper, to verify our hypothesis, restricted Boltzmann machines (RBMs) are employed to learn early visual features by modeling the statistics of natural images. Considering RBMs as neural networks, the receptive fields of neurons are formed by the inter-weights between hidden and visible nodes. Due to the conditional independence in RBMs, there is no mechanism to coordinate the activations of individual neurons or the whole population. A diversity prior is introduced in this paper for training RBMs. We find that the diversity prior indeed can assure simultaneously sparsity and selectivity of neuron activations. The learned receptive fields yield a high degree of biological similarity in comparison to physiological data. Also, corresponding visual features display a good generative capability in image reconstruction. PMID:26321941
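
    A rough numpy sketch of one contrastive-divergence update with an added penalty that pushes the hidden units' weight vectors (receptive fields) apart. The penalty here is a generic squared-overlap term standing in for a diversity prior; it is not the formulation used in the paper, and biases are omitted for brevity.

    ```python
    # Hedged sketch: CD-1 RBM update plus a decorrelation (diversity-like) penalty.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_step_with_diversity(W, v0, lr=0.01, lam=0.001):
        """One contrastive-divergence (CD-1) update for a binary RBM, with an extra
        penalty that decorrelates the hidden units' weight vectors."""
        # Positive phase: hidden activations driven by the data.
        h0_prob = sigmoid(v0 @ W)                          # (batch, n_hidden)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
        # Negative phase: one Gibbs step back to the visibles and up again.
        v1_prob = sigmoid(h0 @ W.T)
        h1_prob = sigmoid(v1_prob @ W)
        grad_cd = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / v0.shape[0]
        # Penalty gradient (up to a constant) of sum_{i != j} (w_i . w_j)^2,
        # which discourages overlapping receptive fields.
        gram = W.T @ W
        np.fill_diagonal(gram, 0.0)
        grad_div = W @ gram
        return W + lr * grad_cd - lr * lam * grad_div

    # Toy usage: random binary patches stand in for whitened natural-image patches.
    W = 0.01 * rng.normal(size=(64, 16))     # 8x8 visible units, 16 hidden units
    batch = (rng.random((32, 64)) < 0.5).astype(float)
    for _ in range(100):
        W = cd1_step_with_diversity(W, batch)
    print(W.shape)
    ```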

  20. Diversity priors for learning early visual features

    PubMed Central

    Xiong, Hanchen; Rodríguez-Sánchez, Antonio J.; Szedmak, Sandor; Piater, Justus

    2015-01-01

    This paper investigates how utilizing diversity priors can discover early visual features that resemble their biological counterparts. The study is mainly motivated by the sparsity and selectivity of activations of visual neurons in area V1. Most previous work on computational modeling emphasizes selectivity or sparsity independently. However, we argue that selectivity and sparsity are just two epiphenomena of the diversity of receptive fields, which has been rarely exploited in learning. In this paper, to verify our hypothesis, restricted Boltzmann machines (RBMs) are employed to learn early visual features by modeling the statistics of natural images. Considering RBMs as neural networks, the receptive fields of neurons are formed by the inter-weights between hidden and visible nodes. Due to the conditional independence in RBMs, there is no mechanism to coordinate the activations of individual neurons or the whole population. A diversity prior is introduced in this paper for training RBMs. We find that the diversity prior indeed can assure simultaneously sparsity and selectivity of neuron activations. The learned receptive fields yield a high degree of biological similarity in comparison to physiological data. Also, corresponding visual features display a good generative capability in image reconstruction. PMID:26321941

  1. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage, and Access

    SciTech Connect

    HIPP,JAMES R.; MOORE,SUSAN G.; MYERS,STEPHEN C.; SCHULTZ,CRAIG A.; SHEPHERD,ELLEN; YOUNG,CHRISTOPHER J.

    1999-10-01

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which will fit the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.
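
    For orientation, a minimal ordinary-kriging sketch with an exponential variogram, showing the value-plus-error-estimate interpolation that this pipeline builds on. It is not the Modified Bayesian Kriging or NNI code used for the Knowledge Base, and the variogram parameters are illustrative.

    ```python
    # Hedged sketch: ordinary kriging with an exponential variogram model.
    import numpy as np

    def exp_variogram(h, sill=1.0, rang=50.0, nugget=0.0):
        return nugget + sill * (1.0 - np.exp(-3.0 * h / rang))

    def ordinary_kriging(points, values, query, sill=1.0, rang=50.0, nugget=0.0):
        """Return (estimate, kriging variance) at `query` from scattered data."""
        n = len(points)
        d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
        A = np.ones((n + 1, n + 1))
        A[:n, :n] = exp_variogram(d, sill, rang, nugget)
        A[n, n] = 0.0                                    # Lagrange-multiplier corner
        b = np.ones(n + 1)
        b[:n] = exp_variogram(np.linalg.norm(points - query, axis=1), sill, rang, nugget)
        sol = np.linalg.solve(A, b)
        weights, mu = sol[:n], sol[n]
        estimate = weights @ values
        variance = weights @ b[:n] + mu                  # kriging error estimate
        return estimate, variance

    pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    vals = np.array([1.0, 2.0, 2.5, 3.0])
    print(ordinary_kriging(pts, vals, np.array([5.0, 5.0])))
    ```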

  2. Parametric Grid Information in the DOE Knowledge Base: Data Preparation, Storage and Access.

    SciTech Connect

    Hipp, J. R.; Young, C. J.; Moore, S. G.; Shepherd, E. R.; Schultz, C. A.; Myers, S. C.

    1999-10-01

    The parametric grid capability of the Knowledge Base provides an efficient, robust way to store and access interpolatable information which is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use a new approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation (NNI). The method involves three basic steps: data preparation (DP), data storage (DS), and data access (DA). The goal of data preparation is to process a set of raw data points to produce a sufficient basis for accurate NNI of value and error estimates in the Data Access step. This basis includes a set of nodes and their connectedness, collectively known as a tessellation, and the corresponding values and errors that map to each node, which we call surfaces. In many cases, the raw data point distribution is not sufficiently dense to guarantee accurate error estimates from the NNI, so the original data set must be densified using a newly developed interpolation technique known as Modified Bayesian Kriging. Once appropriate kriging parameters have been determined by variogram analysis, the optimum basis for NNI is determined in a process we call mesh refinement, which involves iterative kriging, new node insertion, and Delaunay triangle smoothing. The process terminates when an NNI basis has been calculated which will fit the kriged values within a specified tolerance. In the data storage step, the tessellations and surfaces are stored in the Knowledge Base, currently in a binary flatfile format but perhaps in the future in a spatially-indexed database. Finally, in the data access step, a client application makes a request for an interpolated value, which triggers a data fetch from the Knowledge Base through the libKBI interface, a walking triangle search for the containing triangle, and finally the NNI interpolation.

  3. The COPD Knowledge Base: enabling data analysis and computational simulation in translational COPD research

    PubMed Central

    2014-01-01

    Background Previously we generated a chronic obstructive pulmonary disease (COPD) specific knowledge base (http://www.copdknowledgebase.eu) from clinical and experimental data, text-mining results and public databases. This knowledge base allowed the retrieval of specific molecular networks together with integrated clinical and experimental data. Results The COPDKB has now been extended to integrate over 40 public data sources on functional interaction (e.g. signal transduction, transcriptional regulation, protein-protein interaction, gene-disease association). In addition we integrated COPD-specific expression and co-morbidity networks connecting over 6 000 genes/proteins with physiological parameters and disease states. Three mathematical models describing different aspects of systemic effects of COPD were connected to clinical and experimental data. We have completely redesigned the technical architecture of the user interface and now provide html and web browser-based access and form-based searches. A network search enables the use of interconnecting information and the generation of disease-specific sub-networks from general knowledge. Integration with the Synergy-COPD Simulation Environment enables multi-scale integrated simulation of individual computational models while integration with a Clinical Decision Support System allows delivery into clinical practice. Conclusions The COPD Knowledge Base is the only publicly available knowledge resource dedicated to COPD and combining genetic information with molecular, physiological and clinical data as well as mathematical modelling. Its integrated analysis functions provide overviews about clinical trends and connections while its semantically mapped content enables complex analysis approaches. We plan to further extend the COPDKB by offering it as a repository to publish and semantically integrate data from relevant clinical trials. The COPDKB is freely available after registration at http

  4. Knowledge based ranking algorithm for comparative assessment of post-closure care needs of closed landfills

    SciTech Connect

    Sizirici, Banu; Tansel, Berrin; Kumar, Vivek

    2011-06-15

    Post-closure care (PCC) activities at landfills include cap maintenance; water quality monitoring; maintenance and monitoring of the gas collection/control system, leachate collection system, groundwater monitoring wells, and surface water management system; and general site maintenance. The objective of this study was to develop an integrated data- and knowledge-based decision-making tool for preliminary estimation of PCC needs at closed landfills. To develop the decision-making tool, 11 categories of parameters were identified as critical areas which could affect future PCC needs. Each category was further analyzed by detailed questions which could be answered with limited data and knowledge about the site, its history, location, and site-specific characteristics. Depending on the existing knowledge base, a score was assigned to each question (on a scale of 1-10, with 1 being the best and 10 the worst). Each category was also assigned a weight based on its relative importance to the site conditions and PCC needs. The overall landfill score was obtained from the total weighted sum attained. Based on the overall score, landfill conditions could be categorized as critical, acceptable, or good. Critical condition indicates that the landfill may be a threat to human health and the environment and necessary steps should be taken. Acceptable condition indicates that the landfill is currently stable and monitoring should be continued. Good condition indicates that the landfill is stable and monitoring activities can be reduced in the future. The knowledge-based algorithm was applied to two case study landfills for preliminary assessment of PCC performance.
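
    A small sketch of the weighted-sum scoring and thresholding described above. The category names, weights, and the critical/acceptable/good cut-offs are illustrative placeholders, not values calibrated in the study.

    ```python
    # Hedged sketch of weighted-sum PCC scoring and condition classification.

    CATEGORY_WEIGHTS = {          # 11 categories in the study; three shown here
        "cap_condition": 0.15,
        "leachate_system": 0.12,
        "groundwater_quality": 0.10,
    }

    def landfill_score(question_scores, weights=CATEGORY_WEIGHTS):
        """question_scores: {category: [1-10 scores for that category's questions]}.
        Each category contributes its mean question score times its weight."""
        total = 0.0
        for category, weight in weights.items():
            answers = question_scores[category]
            total += weight * (sum(answers) / len(answers))
        return total / sum(weights.values())     # normalize back to the 1-10 scale

    def classify(score, critical=7.0, acceptable=4.0):
        if score >= critical:
            return "critical: landfill may threaten human health and the environment"
        if score >= acceptable:
            return "acceptable: continue monitoring"
        return "good: monitoring can be reduced"

    scores = {"cap_condition": [3, 4], "leachate_system": [6, 7, 5], "groundwater_quality": [2]}
    s = landfill_score(scores)
    print(round(s, 2), "->", classify(s))
    ```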

  5. A framework for knowledge acquisition, representation and problem-solving in knowledge-based planning

    NASA Astrophysics Data System (ADS)

    Martinez-Bermudez, Iliana

    This research addresses the problem of developing planning knowledge-based applications. In particular, it is concerned with the problems of knowledge acquisition and representation---the issues that remain an impediment to the development of large-scale, knowledge-based planning applications. This work aims to develop a model of planning problem solving that facilitates expert knowledge elicitation and also supports effective problem solving. Achieving this goal requires determining the types of knowledge used by planning experts, the structure of this knowledge, and the problem-solving process that results in the plan. While answering these questions it became clear that the knowledge structure, as well as the process of problem solving, largely depends on the knowledge available to the expert. This dissertation proposes classification of planning problems based on their use of expert knowledge. Such classification can help in the selection of the appropriate planning method when dealing with a specific planning problem. The research concentrates on one of the identified classes of planning problems that can be characterized by well-defined and well-structured problem-solving knowledge. To achieve a more complete knowledge representation architecture for such problems, this work employs the task-specific approach to problem solving. The result of this endeavor is a task-specific methodology that allows the representation and use of planning knowledge in a structural, consistent manner specific to the domain of the application. The shell for building a knowledge-based planning application was created as a proof of concept for the methodology described in this dissertation. This shell enabled the development of a system for manufacturing planning---COMPLAN. COMPLAN encompasses knowledge related to four generic techniques used in composite material manufacturing and, given the description of the composite part, creates a family of plans capable of producing it.

  6. Knowledge-based and model-based hybrid methodology for comprehensive waste minimization in electroplating plants

    NASA Astrophysics Data System (ADS)

    Luo, Keqin

    1999-11-01

    The electroplating industry, comprising over 10,000 plating plants nationwide, is one of the major industrial waste generators. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants, which cost the industry tremendously in waste treatment and disposal and hinder the further development of the industry. There is, therefore, an urgent need for the industry to identify the technically most effective and economically most attractive methodologies and technologies to minimize waste while maintaining production competitiveness. This dissertation aims to develop a novel waste minimization (WM) methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, together with an intelligent decision support tool. The WM methodology consists of two parts: a heuristic, knowledge-based qualitative WM decision analysis and support methodology, and a fundamental-knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules. This becomes the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows a thorough process analysis of bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation, which are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that it is the first time for the electroplating industry to (i) use systematically available WM strategies, (ii) know quantitatively and
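
    The representation of WM strategies as fuzzy rules can be sketched as follows; the membership functions, thresholds, and the specific drag-out rule are hypothetical examples for illustration, not rules taken from WMEP-Advisor.

      # Illustrative sketch of encoding one waste-minimization heuristic as a fuzzy rule.
      def high_dragout(dragout_ml_per_rack):
          """Fuzzy membership for 'drag-out is high' (ramp between 5 and 20 mL/rack, assumed)."""
          return max(0.0, min(1.0, (dragout_ml_per_rack - 5.0) / 15.0))

      def long_rinse(rinse_time_s):
          """Fuzzy membership for 'rinse time is long' (ramp between 30 and 120 s, assumed)."""
          return max(0.0, min(1.0, (rinse_time_s - 30.0) / 90.0))

      def recommend_drainage_extension(dragout_ml_per_rack, rinse_time_s):
          # Rule: IF drag-out is high AND rinse time is long
          #       THEN recommend extending drainage time over the plating bath.
          firing_strength = min(high_dragout(dragout_ml_per_rack), long_rinse(rinse_time_s))
          return firing_strength  # degree to which the recommendation applies (0-1)

      print(recommend_drainage_extension(dragout_ml_per_rack=18, rinse_time_s=100))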

  7. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improving mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints, and parameters. The scheme is implemented by integrating symbolic computation of rules, derived from operator and planner experience, with generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, including the correction of given violations. The method was demonstrated on a large-scale power system discussed in the paper.

  8. Knowledge-Based Motion Control of AN Intelligent Mobile Autonomous System

    NASA Astrophysics Data System (ADS)

    Isik, Can

    An Intelligent Mobile Autonomous System (IMAS), which is equipped with vision and low-level sensors to cope with unknown obstacles, is modeled as a hierarchy of path planning and motion control. This dissertation concentrates on the lower level of this hierarchy (Pilot) with a knowledge-based controller. The basis of a theory of knowledge-based controllers is established, using the example of the Pilot-level motion control of IMAS. In this context, the knowledge-based controller with a linguistic world concept is shown to be adequate for the minimum-time control of autonomous mobile robot motion. The Pilot-level motion control of IMAS is approached in the framework of production systems. The three major components of the knowledge-based control included here are the hierarchies of the database, the rule base, and the rule evaluator. The database, which is the representation of the state of the world, is organized as a semantic network, using a concept of minimal admissible vocabulary. The hierarchy of the rule base is derived from the analytical formulation of minimum-time control of IMAS motion. The procedure introduced for rule derivation, called analytical model verbalization, utilizes the concept of causalities to describe the system behavior. A realistic analytical system model is developed, and the minimum-time motion control in an obstacle-strewn environment is decomposed into a hierarchy of motion planning and control. The conditions for the validity of the hierarchical problem decomposition are established, and the consistency of operation is maintained by detecting long-term conflicting decisions of the levels of the hierarchy. The imprecision in the world description is modeled using the theory of fuzzy sets. The method developed for choosing the rule that prescribes the minimum-time motion control among the redundant set of applicable rules is explained, and the usage of fuzzy set operators is justified. Also included in the

  9. Facilitating superior chronic disease management through a knowledge-based systems development model.

    PubMed

    Wickramasinghe, Nilmini S; Goldberg, Steve

    2008-01-01

    To date, the adoption and diffusion of technology-enabled solutions to deliver better healthcare have been slow. There are many reasons for this. One of the most significant is that the existing methodologies normally used for Information and Communications Technology (ICT) implementations in general tend to be less successful in a healthcare context. This paper describes a knowledge-based, adaptive mapping-to-realisation methodology for traversing from idea to realisation rapidly and without compromising rigour, so that success ensues. It is discussed in connection with efforts to implement superior ICT-enabled approaches to facilitate superior Chronic Disease Management (CDM). PMID:19174365

  10. A knowledge-based expert system for scheduling of airborne astronomical observations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, P. R.; Gevarter, W. B.; Stutz, J. C.; Banda, C. P.

    1985-01-01

    The Kuiper Airborne Observatory Scheduler (KAOS) is a knowledge-based expert system developed at NASA Ames Research Center to assist in route planning of a C-141 flying astronomical observatory. This program determines a sequence of flight legs that enables sequential observations of a set of heavenly bodies derived from a list of desirable objects. The possible flight legs are constrained by problems of observability, avoiding flyovers of warning and restricted military zones, and running out of fuel. A significant contribution of the KAOS program is that it couples computational capability with a reasoning system.

  11. An intelligent knowledge based approach for the automated radiographic inspection of castings

    NASA Astrophysics Data System (ADS)

    Kehoe, A.; Parker, G. A.

    An automated system for the radiographic inspection of castings is described which is based on both conventional image processing techniques and intelligent knowledge based techniques for the identification and evaluation of defects against quality assurance standards. An evaluation of the performance of the system against that of skilled inspectors demonstrates that the system is capable of meeting the requirements for an automated system for industrial radiographic inspection. The overall design of the system and elements of the system, including the knowledge representation scheme, inference mechanism, and control structure, are described. The discussion also covers examples of frame structures, frame taxonomies, data driven maintenance procedures, and rules for classification and evaluation.

  12. System and method for knowledge based matching of users in a network

    DOEpatents

    Verspoor, Cornelia Maria; Sims, Benjamin Hayden; Ambrosiano, John Joseph; Cleland, Timothy James

    2011-04-26

    A knowledge-based system and methods for matchmaking and social network extension are disclosed. The system is configured to allow users to specify knowledge profiles, which are collections of concepts that indicate a certain topic or area of interest, selected from an underlying knowledge model. The system utilizes the knowledge model as the semantic space within which to compare similarities in user interests. The knowledge model is hierarchical, so that indications of interest in specific concepts automatically imply interest in more general concepts. Similarity measures between profiles may then be calculated based on suitable distance formulas within this space.
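
    A minimal sketch of the hierarchical profile matching idea follows; the concept hierarchy, the propagation of interest to more general concepts, and the Jaccard-style similarity are illustrative assumptions rather than the patented formulas.

      # Minimal sketch of hierarchical knowledge-profile matching (illustrative only).
      PARENT = {  # child concept -> more general parent concept (assumed hierarchy)
          "bayesian networks": "machine learning",
          "machine learning": "computer science",
          "protein folding": "biology",
      }

      def expand(profile):
          """Interest in a specific concept implies interest in its ancestors."""
          expanded = set(profile)
          for concept in profile:
              while concept in PARENT:
                  concept = PARENT[concept]
                  expanded.add(concept)
          return expanded

      def profile_similarity(profile_a, profile_b):
          a, b = expand(profile_a), expand(profile_b)
          return len(a & b) / len(a | b)  # Jaccard similarity in the expanded concept space

      print(profile_similarity({"bayesian networks"}, {"machine learning", "protein folding"}))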

  13. Knowledge based systems: A critical survey of major concepts, issues and techniques. Visuals

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kavi, Srinu

    1984-01-01

    This Working Paper Series entry represents a collection of presentation visuals associated with the companion report entitled, Knowledge Based Systems: A Critical Survey of Major Concepts, Issues, and Techniques, USL/DBMS NASA/RECON Working Paper Series report number DBMS.NASA/RECON-9. The objectives of the report are to examine various techniques used to build KBSs; to examine at least one KBS in detail as a case study; to list and identify limitations and problems with KBSs; to suggest future areas of research; and to provide extensive reference materials.

  14. Knowledge based translation and problem solving in an intelligent individualized instruction system

    NASA Technical Reports Server (NTRS)

    Jung, Namho; Biegel, John E.

    1994-01-01

    An Intelligent Individualized Instruction (I^3) system is being built to provide computerized instruction. We present the roles of a translator and a problem solver in an intelligent computer system. The modular design of the system provides for easier development and allows for future expansion and maintenance. CLIPS modules and classes are utilized for the purpose of the modular design and inter-module communication. CLIPS facts and rules are used to represent the system components and the knowledge base. CLIPS provides an inferencing mechanism that allows the I^3 system to solve problems presented to it in English.

  15. Application of flight systems methodologies to the validation of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.

    1988-01-01

    Flight and mission-critical systems are verified, qualified for flight, and validated using well-known and well-established techniques. These techniques define the validation methodology used for such systems. In order to verify, qualify, and validate knowledge-based systems (KBS's), the methodology used for conventional systems must be addressed, and the applicability and limitations of that methodology to KBS's must be identified. The author presents an outline of how this approach to the validation of KBS's is being developed and used at the Dryden Flight Research Facility of the NASA Ames Research Center.

  16. Investigation of candidate data structures and search algorithms to support a knowledge based fault diagnosis system

    NASA Technical Reports Server (NTRS)

    Bosworth, Edward L., Jr.

    1987-01-01

    The focus of this research is the investigation of data structures and associated search algorithms for automated fault diagnosis of complex systems such as the Hubble Space Telescope. Such data structures and algorithms will form the basis of a more sophisticated Knowledge Based Fault Diagnosis System. As a part of the research, several prototypes were written in VAXLISP and implemented on one of the VAX-11/780s at the Marshall Space Flight Center. This report describes and gives the rationale for both the data structures and the algorithms selected. A brief discussion of a user interface is also included.

  17. Use of metaknowledge in the verification of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Morell, Larry J.

    1989-01-01

    Knowledge-based systems are modeled as deductive systems. The model indicates that the two primary areas of concern in verification are demonstrating consistency and completeness. A system is inconsistent if it asserts something that is not true of the modeled domain. A system is incomplete if it lacks deductive capability. Two forms of consistency are discussed along with appropriate verification methods. Three forms of incompleteness are discussed. The use of metaknowledge, knowledge about knowledge, is explored in connection with each form of incompleteness.

  18. Structured data collection and knowledge-based user guidance for abdominal ultrasound reporting.

    PubMed Central

    Kuhn, K.; Zemmler, T.; Reichert, M.; Heinlein, C.; Roesner, D.

    1993-01-01

    This paper describes a system for structured data collection and report generation in abdominal ultrasonography. The system is based on a controlled vocabulary and hierarchies of concepts; it uses a graphical user interface. More than 17,000 reports have been generated by 43 physicians using this system, which is integrated into a departmental information system. Evaluations have shown that it is a well accepted tool for the fast generation of reports of comparatively high quality. The functionality is enhanced by two additional components: a hybrid knowledge-based module for "intelligent" user guidance and an interactive tutoring system to illustrate the terminology. PMID:8130485

  19. A knowledge-based expert system for scheduling of airborne astronomical observations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, P. R.; Gevarter, W. B.; Stutz, J. C.; Banda, C. P.

    1986-01-01

    KAOS (Kuiper Airborne Observatory Scheduler) is a knowledge-based expert system developed at NASA Ames Research Center to assist in route planning of a C-141 flying astronomical observatory. This program determines a sequence of flight legs that enables sequential observations of a set of heavenly bodies derived from a list of desirable objects. The possible flight legs are constrained by problems of observability, avoiding flyovers of warning and restricted military zones, and running out of fuel. A significant contribution of the KAOS program is that it couples computational capability with a reasoning system.

  20. Enroute flight-path planning - Cooperative performance of flight crews and knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Smith, Philip J.; Mccoy, Elaine; Layton, Chuck; Galdes, Deb

    1989-01-01

    Interface design issues associated with the introduction of knowledge-based systems into the cockpit are discussed. Such issues include not only questions about display and control design but also deeper system-design questions, such as the alternative roles and responsibilities of the flight crew and the computer system. In addition, the feasibility of using enroute flight-path planning as a context for exploring such research questions is considered. In particular, the development of a prototyping shell that allows rapid design and study of alternative interfaces and system designs is discussed.

  1. HSTDEK: Developing a methodology for construction of large-scale, multi-use knowledge bases

    NASA Technical Reports Server (NTRS)

    Freeman, Michael S.

    1987-01-01

    The primary research objectives of the Hubble Space Telescope Design/Engineering Knowledgebase (HSTDEK) are to develop a methodology for constructing and maintaining large-scale knowledge bases which can be used to support multiple applications. To ensure the validity of its results, this research is being pursued in the context of a real-world system, the Hubble Space Telescope. The HSTDEK objectives are described in detail. The history and motivation of the project are briefly described. The technical challenges faced by the project are outlined.

  2. Knowledge-based approach for generating target system specifications from a domain model

    NASA Technical Reports Server (NTRS)

    Gomaa, Hassan; Kerschberg, Larry; Sugumaran, Vijayan

    1992-01-01

    Several institutions in industry and academia are pursuing research efforts in domain modeling to address unresolved issues in software reuse. To demonstrate the concepts of domain modeling and software reuse, a prototype software engineering environment is being developed at George Mason University to support the creation of domain models and the generation of target system specifications. This prototype environment, which is application domain independent, consists of an integrated set of commercial off-the-shelf software tools and custom-developed software tools. This paper describes the knowledge-based tool that was developed as part of the environment to generate target system specifications from a domain model.

  3. Knowledge-based fault monitoring and diagnosis in Space Shuttle propellant loading

    NASA Technical Reports Server (NTRS)

    Scarl, E. A.; Jamieson, J.; Delaune, C.

    1984-01-01

    The LOX Expert System (LES), now being developed as a tool for the constraint-based monitoring and analysis of propellant loading at the Kennedy Space Center (KSC), is discussed. The loading of LOX at KSC and its control and monitoring by the Launch Processing System are summarized, and the relevant problem for LES is presented. The LES database is briefly described, and the interaction of LES with KNOBS, a constraint- and frame-oriented knowledge-based system developed as a demonstration system in aid of tactical air mission planning, in the context of launch processing is discussed in detail. The design and fault-isolation techniques of LES are also discussed.

  4. Using Unified Modeling Language for Conceptual Modelling of Knowledge-Based Systems

    NASA Astrophysics Data System (ADS)

    Abdullah, Mohd Syazwan; Benest, Ian; Paige, Richard; Kimble, Chris

    This paper discusses extending the Unified Modelling Language by means of a profile for modelling knowledge-based systems in the context of the Model Driven Architecture (MDA) framework. The profile is implemented using the eXecutable Modelling Framework (XMF) Mosaic tool. A case study from the health care domain demonstrates the practical use of this profile, with the prototype implemented in the Java Expert System Shell (Jess). The paper also discusses the possible mapping of the profile elements to the platform-specific model (PSM) of Jess and provides some discussion of the Production Rule Representation (PRR) standardisation work.

  5. A knowledge-based system for diagnosis of mastitis problems at the herd level. 2. Machine milking.

    PubMed

    Hogeveen, H; van Vliet, J H; Noordhuizen-Stassen, E N; De Koning, C; Tepp, D M; Brand, A

    1995-07-01

    A knowledge-based system for the diagnosis of mastitis problems at the herd level must search for possible causes, including malfunctioning milking machines or incorrect milking technique. A knowledge-based system on general mechanisms of mastitis infection, using hierarchical conditional causal models, was extended. Model building entailed extensive cooperation between the knowledge engineer and a domain expert. The extended knowledge-based system contains 12 submodels underlying the overview models. Nine submodels were concerned with mastitis problems arising from machine milking. These models are briefly described. The knowledge-based system was validated by other experts, after which the models were adjusted slightly. The final knowledge-based system was validated against data collected at 17 commercial dairy farms with high somatic cell counts (SCC) in the bulk milk. Reports containing the farm data were accompanied by recommendations made by a dairy farm advisor. This validation showed good agreement between the knowledge-based system and the dairy farm advisors. The described knowledge-based system is a good tool for dairy farm advisors to solve herd mastitis problems caused by a malfunctioning milking machine or incorrect milking technique. PMID:7593837

  6. The Importance of Prior Knowledge.

    ERIC Educational Resources Information Center

    Cleary, Linda Miller

    1989-01-01

    Recounts a college English teacher's experience of reading and rereading Noam Chomsky, building up a greater store of prior knowledge. Argues that Frank Smith provides a theory for the importance of prior knowledge and Chomsky's work provided a personal example with which to interpret and integrate that theory. (RS)

  7. Menarche: Prior Knowledge and Experience.

    ERIC Educational Resources Information Center

    Skandhan, K. P.; And Others

    1988-01-01

    Recorded menstruation information among 305 young women in India, assessing the differences between those who did and did not have knowledge of menstruation prior to menarche. Those with prior knowledge considered menarche to be a normal physiological function and had a higher rate of regularity, lower rate of dysmenorrhea, and earlier onset of…

  8. A NASA/RAE cooperation in the development of a real-time knowledge-based autopilot

    NASA Technical Reports Server (NTRS)

    Daysh, Colin; Corbin, Malcolm; Butler, Geoff; Duke, Eugene L.; Belle, Steven D.; Brumbaugh, Randal W.

    1991-01-01

    As part of a US/UK cooperative aeronautical research program, a joint activity between the NASA Dryden Flight Research Facility and the Royal Aerospace Establishment on knowledge-based systems was established. This joint activity is concerned with tools and techniques for the implementation and validation of real-time knowledge-based systems. The proposed next stage of this research is described, in which some of the problems of implementing and validating a knowledge-based autopilot for a generic high-performance aircraft are investigated.

  9. C-PHIS: a concept map-based knowledge base framework to develop personal health information systems.

    PubMed

    Karla, Pramukh R; Gurupur, Varadraj P

    2013-10-01

    In this paper we describe the development of a Personal Health Information System using a knowledge base built from concept maps. We describe a solution addressing the critical need for an information-capturing system that helps domain experts develop a graphical representation of the aforementioned knowledge base, which can then be converted to a machine-actable form of information. A prototype application has been developed using this information-capturing system that clearly demonstrates the use of the concept-map knowledge base framework to develop a Personal Health Information System for lung cancer patients. PMID:24014254

  10. Evaluation of the PharmGKB knowledge base as a resource for efficiently assessing the clinical validity and utility of pharmacogenetic assays.

    PubMed

    Kawamoto, Kensaku; Orlando, Lori A; Voora, Deepak; Lobach, David F; Joy, Scott; Cho, Alex; Ginsburg, Geoffrey S

    2009-01-01

    Prior to clinical use, pharmacogenetic tests should be systematically evaluated for their clinical validity and utility. Here, we evaluated whether the publicly available, online Pharmacogenomics Knowledge Base (PharmGKB) could facilitate such assessments by efficiently identifying relevant peer-reviewed manuscripts. The search targets were 55 manuscripts regarding clinical validity and utility included in systematic reviews of warfarin, antidepressant, and irinotecan pharmacogenetics. When direct inclusion in PharmGKB was the search criterion, recall was 33% and precision was 16%. However, recall increased to 78% when citation within a PharmGKB-identified manuscript was added as a search criterion. These recalled manuscripts accounted for 87% of the study subjects, and domain experts determined that the omission of the remaining manuscripts was unlikely to have changed the conclusions of the reviews. Thus, we conclude that PharmGKB can facilitate the systematic assessment of pharmacogenetic assays through the efficient identification of relevant peer-reviewed manuscripts. PMID:20351870
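
    The reported figures follow the standard definitions of recall (relevant manuscripts retrieved divided by all relevant manuscripts) and precision (relevant manuscripts retrieved divided by all manuscripts retrieved). The sketch below reproduces the 33%/16% arithmetic with hypothetical set sizes chosen only for illustration.

      # Worked illustration of the recall and precision figures reported above (hypothetical sets).
      def recall(retrieved, relevant):
          return len(retrieved & relevant) / len(relevant)

      def precision(retrieved, relevant):
          return len(retrieved & relevant) / len(retrieved)

      relevant = set(range(55))                              # the 55 target manuscripts
      retrieved = set(range(18)) | set(range(100, 195))      # 18 targets plus 95 non-targets (assumed)
      print(round(recall(retrieved, relevant), 2), round(precision(retrieved, relevant), 2))  # 0.33 0.16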

  11. Magnetic solid phase extraction of mefenamic acid from biological samples based on the formation of mixed hemimicelle aggregates on Fe(3)O(4) nanoparticles prior to its HPLC-UV detection.

    PubMed

    Beiraghi, Asadollah; Pourghazi, Kamyar; Amoli-Diva, Mitra; Razmara, Akbar

    2014-01-15

    A novel and sensitive solid phase extraction method based on the adsorption of cetyltrimethylammonium bromide on the surface of Fe3O4 nanoparticles was developed for the extraction and preconcentration of ultra-trace amounts of mefenamic acid in biological fluids. The remarkable properties of Fe3O4 nanoparticles, including high surface area and strong magnetization, were utilized in this SPE procedure, so that a high enrichment factor (98) and satisfactory extraction recoveries (92-99%) were obtained using only 50 mg of magnetic adsorbent. Furthermore, a fast separation time (about 15 min) was achieved for a large sample volume (200 mL), avoiding the time-consuming column-passing process of conventional SPE. A comprehensive study of the parameters affecting the extraction recovery, such as the amount of surfactant, pH value, the amount of Fe3O4 nanoparticles, sample volume, desorption conditions, and ionic strength, is also presented. Under the optimum conditions, the method was linear in the 0.2-200 ng mL(-1) range and good linearity (r(2) > 0.9991) was obtained for all calibration curves. The limit of detection was 0.097 and 0.087 ng mL(-1) in plasma and urine samples, respectively. The relative standard deviations (RSD%) for 10 and 50 ng mL(-1) of the analyte (n=5) were 1.6% and 2.1% in plasma and 1.2% and 1.9% in urine samples, respectively. Finally, the method was successfully applied to the extraction and preconcentration of mefenamic acid in human plasma and urine samples. PMID:24321760

  12. KoBaMIN: a knowledge-based minimization web server for protein structure refinement

    PubMed Central

    Rodrigues, João P. G. L. M.; Levitt, Michael; Chopra, Gaurav

    2012-01-01

    The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format. The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, all models generated at the seventh worldwide experiments on critical assessment of techniques for protein structure prediction (CASP7) and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin. PMID:22564897
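
    A knowledge-based potential of mean force is typically obtained by Boltzmann inversion of structural statistics, E(r) = -kT ln(P_obs(r)/P_ref(r)). The sketch below illustrates that generic idea with hypothetical bin counts; it is not the actual KoBaMIN energy function.

      # Generic sketch of deriving a knowledge-based potential by Boltzmann inversion.
      import math

      def knowledge_based_potential(observed_counts, reference_counts, kT=0.593):  # kT in kcal/mol near 298 K
          """Convert per-distance-bin counts from known structures into an effective energy per bin."""
          total_obs = sum(observed_counts)
          total_ref = sum(reference_counts)
          energies = []
          for n_obs, n_ref in zip(observed_counts, reference_counts):
              p_obs = (n_obs + 1) / (total_obs + len(observed_counts))  # add-one smoothing
              p_ref = (n_ref + 1) / (total_ref + len(reference_counts))
              energies.append(-kT * math.log(p_obs / p_ref))
          return energies

      # Hypothetical counts for a few distance bins of one atom-pair type
      print(knowledge_based_potential([5, 40, 120, 60], [30, 60, 80, 70]))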

  13. An Architecture for Performance Optimization in a Collaborative Knowledge-Based Approach for Wireless Sensor Networks

    PubMed Central

    Gadeo-Martos, Manuel Angel; Fernandez-Prieto, Jose Angel; Canada-Bago, Joaquin; Velasco, Juan Ramon

    2011-01-01

    Over the past few years, Intelligent Spaces (ISs) have received the attention of many Wireless Sensor Network researchers. Recently, several studies have been devoted to identify their common capacities and to set up ISs over these networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks for the purpose of implementing ISs. This work presents a distributed architecture proposal for collaborative Fuzzy Rule-Based Systems embedded in Wireless Sensor Networks, which has been designed to optimize the implementation of ISs. This architecture includes the following: (a) an optimized design for the inference engine; (b) a visual interface; (c) a module to reduce the redundancy and complexity of the knowledge bases; (d) a module to evaluate the accuracy of the new knowledge base; (e) a module to adapt the format of the rules to the structure used by the inference engine; and (f) a communications protocol. As a real-world application of this architecture and the proposed methodologies, we show an application to the problem of modeling two plagues of the olive tree: prays (olive moth, Prays oleae Bern.) and repilo (caused by the fungus Spilocaea oleagina). The results show that the architecture presented in this paper significantly decreases the consumption of resources (memory, CPU and battery) without a substantial decrease in the accuracy of the inferred values. PMID:22163687

  14. An integrated analytic tool and knowledge-based system approach to aerospace electric power system control

    NASA Astrophysics Data System (ADS)

    Owens, William R.; Henderson, Eric; Gandikota, Kapal

    1986-10-01

    Future aerospace electric power systems require new control methods because of increasing power system complexity, demands for power system management, greater system size and heightened reliability requirements. To meet these requirements, a combination of electric power system analytic tools and knowledge-based systems is proposed. The continual improvement in microelectronic performance has made it possible to envision the application of sophisticated electric power system analysis tools to aerospace vehicles. These tools have been successfully used in the measurement and control of large terrestrial electric power systems. Among these tools is state estimation which has three main benefits. The estimator builds a reliable database for the system structure and states. Security assessment and contingency evaluation also require a state estimator. Finally, the estimator will, combined with modern control theory, improve power system control and stability. Bad data detection as an adjunct to state estimation identifies defective sensors and communications channels. Validated data from the analytic tools is supplied to a number of knowledge-based systems. These systems will be responsible for the control, protection, and optimization of the electric power system.
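
    The state estimation step mentioned above is commonly formulated as a weighted least-squares fit of measurements to a linearised model. The sketch below (using NumPy) illustrates that formulation with illustrative matrices and a crude residual-based bad-data screen; it is not the control architecture proposed in the paper.

      # Minimal weighted least-squares state estimation sketch for z = H x + e (illustrative values).
      import numpy as np

      def wls_state_estimate(H, z, weights):
          """Solve min_x (z - Hx)^T W (z - Hx) and flag unusually large residuals."""
          W = np.diag(weights)
          x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)      # normal equations
          residuals = z - H @ x_hat
          suspect = np.abs(residuals) > 3.0 / np.sqrt(weights)   # crude 3-sigma bad-data screen
          return x_hat, residuals, suspect

      H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, -1.0]])        # three measurements, two states
      z = np.array([1.02, 0.97, 0.10])
      weights = np.array([100.0, 100.0, 50.0])                   # inverse measurement variances
      print(wls_state_estimate(H, z, weights))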

  15. Hybrid hill-climbing and knowledge-based methods for intelligent news filtering

    SciTech Connect

    Mock, K.J.

    1996-12-31

    As the size of the Internet increases, the amount of data available to users has dramatically risen, resulting in an information overload for users. This work involved the creation of an intelligent information news filtering system named INFOS (Intelligent News Filtering Organizational System) to reduce the user's search burden by automatically eliminating Usenet news articles predicted to be irrelevant. These predictions are learned automatically by adapting an internal user model that is based upon features taken from articles and collaborative features derived from other users. The features are manipulated through keyword-based techniques and knowledge-based techniques to perform the actual filtering. Knowledge-based systems have the advantage of analyzing input text in detail, but at the cost of computational complexity and the difficulty of scaling up to large domains. In contrast, statistical and keyword approaches scale up readily but result in a shallower understanding of the input. A hybrid system integrating both approaches improves accuracy over keyword approaches, supports domain knowledge, and retains scalability. The system would be enhanced by more robust word disambiguation.

  16. KoBaMIN: a knowledge-based minimization web server for protein structure refinement.

    PubMed

    Rodrigues, João P G L M; Levitt, Michael; Chopra, Gaurav

    2012-07-01

    The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format. The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, all models generated at the seventh worldwide experiments on critical assessment of techniques for protein structure prediction (CASP7) and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin. PMID:22564897

  17. Towards a Food Safety Knowledge Base Applicable in Crisis Situations and Beyond

    PubMed Central

    Falenski, Alexander; Weiser, Armin A.; Thöns, Christian; Appel, Bernd; Käsbohrer, Annemarie; Filter, Matthias

    2015-01-01

    In case of contamination in the food chain, fast action is required in order to reduce the numbers of affected people. In such situations, being able to predict the fate of agents in foods would help risk assessors and decision makers in assessing the potential effects of a specific contamination event and thus enable them to deduce the appropriate mitigation measures. One efficient strategy supporting this is using model based simulations. However, application in crisis situations requires ready-to-use and easy-to-adapt models to be available from the so-called food safety knowledge bases. Here, we illustrate this concept and its benefits by applying the modular open source software tools PMM-Lab and FoodProcess-Lab. As a fictitious sample scenario, an intentional ricin contamination at a beef salami production facility was modelled. Predictive models describing the inactivation of ricin were reviewed, relevant models were implemented with PMM-Lab, and simulations on residual toxin amounts in the final product were performed with FoodProcess-Lab. Due to the generic and modular modelling concept implemented in these tools, they can be applied to simulate virtually any food safety contamination scenario. Apart from the application in crisis situations, the food safety knowledge base concept will also be useful in food quality and safety investigations. PMID:26247028

  18. A knowledge-based system for diagnosis of mastitis problems at the herd level. 1. Concepts.

    PubMed

    Hogeveen, H; Noordhuizen-Stassen, E N; Tepp, D M; Kremer, W D; van Vliet, J H; Brand, A

    1995-07-01

    Much specialized knowledge is involved in the diagnosis of a mastitis problem at the herd level. Because of their problem-solving capacities, knowledge-based systems can be very useful to support the diagnosis of mastitis problems in the herd. Conditional causal models with multiple layers are used as a representation scheme for the development of a knowledge-based system for diagnosing mastitis problems. Construction of models requires extensive cooperation between the knowledge engineer and the domain expert. The first layer consists of three overview models: the general overview conditional causal model, the contagious overview conditional causal model, and the environmental overview conditional causal model, giving a causal description of the pathways through which mastitis problems can occur. The conditional causal model for primary udder defense and the conditional causal model for host defense are attached to the overview models at the second layer, and the conditional causal model for deep primary udder defense is attached to the conditional causal model for the primary udder defense at the third layer. Based on quantitative user input, the system determines the qualitative values of the nodes that are used for reasoning. The developed models showed that conditional causal models are a good method for modeling the mechanisms involved in a mastitis problem. The system needs to be extended in order to be useful in practical circumstances. PMID:7593836

  19. The Digital Anatomist Distributed Framework and Its Applications to Knowledge-based Medical Imaging

    PubMed Central

    Brinkley, James F.; Rosse, Cornelius

    1997-01-01

    The domain of medical imaging is anatomy. Therefore, anatomic knowledge should be a rational basis for organizing and analyzing images. The goals of the Digital Anatomist Program at the University of Washington include the development of an anatomically based software framework for organizing, analyzing, visualizing and utilizing biomedical information. The framework is based on representations for both spatial and symbolic anatomic knowledge, and is being implemented in a distributed architecture in which multiple client programs on the Internet are used to update and access an expanding set of anatomical information resources. The development of this framework is driven by several practical applications, including symbolic anatomic reasoning, knowledge based image segmentation, anatomy information retrieval, and functional brain mapping. Since each of these areas involves many difficult image processing issues, our research strategy is an evolutionary one, in which applications are developed somewhat independently, and partial solutions are integrated in a piecemeal fashion, using the network as the substrate. This approach assumes that networks of interacting components can synergistically work together to solve problems larger than either could solve on its own. Each of the individual projects is described, along with evaluations that show that the individual components are solving the problems they were designed for, and are beginning to interact with each other in a synergistic manner. We argue that this synergy will increase, not only within our own group, but also among groups as the Internet matures, and that an anatomic knowledge base will be a useful means for fostering these interactions. PMID:9147337

  20. The Digital Anatomist distributed framework and its applications to knowledge-based medical imaging.

    PubMed

    Brinkley, J F; Rosse, C

    1997-01-01

    The domain of medical imaging is anatomy. Therefore, anatomic knowledge should be a rational basis for organizing and analyzing images. The goals of the Digital Anatomist Program at the University of Washington include the development of an anatomically based software framework for organizing, analyzing, visualizing and utilizing biomedical information. The framework is based on representations for both spatial and symbolic anatomic knowledge, and is being implemented in a distributed architecture in which multiple client programs on the Internet are used to update and access an expanding set of anatomical information resources. The development of this framework is driven by several practical applications, including symbolic anatomic reasoning, knowledge based image segmentation, anatomy information retrieval, and functional brain mapping. Since each of these areas involves many difficult image processing issues, our research strategy is an evolutionary one, in which applications are developed somewhat independently, and partial solutions are integrated in a piecemeal fashion, using the network as the substrate. This approach assumes that networks of interacting components can synergistically work together to solve problems larger than either could solve on its own. Each of the individual projects is described, along with evaluations that show that the individual components are solving the problems they were designed for, and are beginning to interact with each other in a synergistic manner. We argue that this synergy will increase, not only within our own group, but also among groups as the Internet matures, and that an anatomic knowledge base will be a useful means for fostering these interactions. PMID:9147337

  1. VIP: A knowledge-based design aid for the engineering of space systems

    NASA Technical Reports Server (NTRS)

    Lewis, Steven M.; Bellman, Kirstie L.

    1990-01-01

    The Vehicles Implementation Project (VIP), a knowledge-based design aid for the engineering of space systems is described. VIP combines qualitative knowledge in the form of rules, quantitative knowledge in the form of equations, and other mathematical modeling tools. The system allows users rapidly to develop and experiment with models of spacecraft system designs. As information becomes available to the system, appropriate equations are solved symbolically and the results are displayed. Users may browse through the system, observing dependencies and the effects of altering specific parameters. The system can also suggest approaches to the derivation of specific parameter values. In addition to providing a tool for the development of specific designs, VIP aims at increasing the user's understanding of the design process. Users may rapidly examine the sensitivity of a given parameter to others in the system and perform tradeoffs or optimizations of specific parameters. A second major goal of VIP is to integrate the existing corporate knowledge base of models and rules into a central, symbolic form.
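
    The behaviour described, solving whichever equations become solvable as parameter values arrive, can be sketched with a symbolic algebra library. The rocket-equation example and the SymPy usage below are assumptions for illustration and are not taken from the VIP knowledge base.

      # Sketch: substitute known parameters into a symbolic model and solve for the remaining unknown.
      import sympy as sp

      dv, isp, g0, m0, mf = sp.symbols("dv isp g0 m0 mf", positive=True)
      rocket_eq = sp.Eq(dv, isp * g0 * sp.log(m0 / mf))          # one illustrative model equation

      known = {isp: 300, g0: sp.Rational(981, 100), m0: 1000}    # parameters supplied so far
      partially_solved = rocket_eq.subs(known)                   # dv now depends only on mf

      # Once dv is also specified, the remaining unknown (final mass mf) can be solved for symbolically
      print(sp.solve(partially_solved.subs(dv, 3000), mf))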

  2. Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris

    2010-01-01

    Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.

  3. Syn-Lethality: An Integrative Knowledge Base of Synthetic Lethality towards Discovery of Selective Anticancer Therapies

    PubMed Central

    Li, Xue-juan; Mishra, Shital K.; Wu, Min; Zhang, Fan

    2014-01-01

    Synthetic lethality (SL) is a novel strategy for anticancer therapies, whereby mutations of two genes will kill a cell but mutation of a single gene will not. Therefore, a cancer-specific mutation combined with a drug-induced mutation, if they have SL interactions, will selectively kill cancer cells. While numerous SL interactions have been identified in yeast, only a few have been known in human. There is a pressing need to systematically discover and understand SL interactions specific to human cancer. In this paper, we present Syn-Lethality, the first integrative knowledge base of SL that is dedicated to human cancer. It integrates experimentally discovered and verified human SL gene pairs into a network, associated with annotations of gene function, pathway, and molecular mechanisms. It also includes yeast SL genes from high-throughput screenings which are mapped to orthologous human genes. Such an integrative knowledge base, organized as a relational database with user interface for searching and network visualization, will greatly expedite the discovery of novel anticancer drug targets based on synthetic lethality interactions. The database can be downloaded as a stand-alone Java application. PMID:24864230

  4. Three-dimensional reconstruction of pulmonary blood vessels by using anatomical knowledge base

    NASA Astrophysics Data System (ADS)

    Inaoka, Noriko; Suzuki, Hideo; Mori, Masaki; Takabatake, Hirotsugu; Suzuki, Akira

    1991-07-01

    This paper presents a knowledge-based method for automatic reconstruction and recognition of pulmonary blood vessels from chest x-ray CT images with 10-mm slice thickness. The system has four main stages: (1) automatic extraction and segmentation of blood vessel components from each 2-D image, (2) analysis of these components, (3) a search for points connecting blood vessel segments in different CT slices, using a knowledge base for 3-D reconstruction, and (4) object manipulation and display. The authors also describe a method of representing 3-D anatomical knowledge of the pulmonary blood vessel structure. The edges of blood vessels in chest x-ray images are unclear, in contrast to those in angiograms. Each CT slice has thickness, and blood vessels are slender, so a simple graphical display, which can be used for bone tissue from CT images, is not sufficient for pulmonary blood vessels. It is therefore necessary to use anatomical knowledge to track the blood vessel lines in 3-D space. Experimental results using actual images of a normal adult male have shown that utilizing anatomical information improves processing efficiency and precision in tasks such as blood vessel extraction and the search for connecting points.

  5. Knowledge-based video compression for search and rescue robots and multiple sensor networks

    NASA Astrophysics Data System (ADS)

    Williams, Chris; Murphy, Robin R.

    2006-05-01

    Robot and sensor networks are needed for safety, security, and rescue applications such as port security and reconnaissance during a disaster. These applications rely on real-time transmission of images, which generally saturates the available wireless network infrastructure. Knowledge-based compression is a method for reducing the video frame transmission rate between robots or sensors and remote operators. Because images may need to be archived as evidence and/or distributed to multiple applications with different post-processing needs, lossy compression schemes, such as MPEG, H.26x, etc., are not acceptable. This work proposes a lossless video server system consisting of three classes of filters (redundancy, task, and priority) which use different levels of knowledge (the local sensed environment, human factors associated with a local task, and the relative global priority of a task) at the application layer of the network. It demonstrates the redundancy and task filters for a realistic robot search scenario. The redundancy filter is shown to reduce the overall transmission bandwidth by 24.07% to 33.42% and, when combined with the task filter, to reduce overall transmission bandwidth by 59.08% to 67.83%. By itself, the task filter has the capability to reduce transmission bandwidth by 32.95% to 33.78%. While knowledge-based compression generally does not reach the same levels of reduction as MPEG, there are instances where the system outperforms MPEG encoding.
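
    The redundancy filter can be sketched as a frame-difference gate that transmits a frame only when it differs sufficiently from the last transmitted frame; the change metric and threshold below are assumptions, not the parameters used in the paper.

      # Minimal sketch of a redundancy filter: send a frame only if the scene has changed enough.
      import numpy as np

      def redundancy_filter(frames, change_threshold=0.02):
          """Yield only frames whose mean absolute pixel change exceeds the threshold."""
          last_sent = None
          for frame in frames:
              if last_sent is None:
                  last_sent = frame
                  yield frame                     # always send the first frame
                  continue
              change = np.mean(np.abs(frame.astype(float) - last_sent.astype(float))) / 255.0
              if change > change_threshold:       # scene changed enough to be worth sending
                  last_sent = frame
                  yield frame

      # Two nearly identical frames followed by a changed one: only two frames are transmitted
      frames = [np.zeros((8, 8), dtype=np.uint8), np.zeros((8, 8), dtype=np.uint8),
                np.full((8, 8), 40, dtype=np.uint8)]
      print(len(list(redundancy_filter(frames))))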

  6. SIGMA: A Knowledge-Based Simulation Tool Applied to Ecosystem Modeling

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Keller, Richard; Lawless, James G. (Technical Monitor)

    1994-01-01

    The need for better technology to facilitate building, sharing, and reusing models is generally recognized within the ecosystem modeling community. The Scientists' Intelligent Graphical Modelling Assistant (SIGMA) creates an environment for model building, sharing, and reuse which provides an alternative to more conventional approaches that too often yield poorly documented, awkwardly structured model code. The SIGMA interface presents the user a list of model quantities which can be selected for computation. Equations to calculate the model quantities may be chosen from an existing library of ecosystem modeling equations, or built using a specialized equation editor. Inputs for the equations may be supplied by data or by calculation from other equations. Each variable and equation is expressed using ecological terminology and scientific units, and is documented with explanatory descriptions and optional literature citations. Automatic scientific unit conversion is supported and only physically consistent equations are accepted by the system. The system uses knowledge-based semantic conditions to decide which equations in its library make sense to apply in a given situation, and supplies these to the user for selection. The equations and variables are graphically represented as a flow diagram which provides a complete summary of the model. Forest-BGC, a stand-level model that simulates photosynthesis and evapotranspiration for conifer canopies, was originally implemented in Fortran and subsequently re-implemented using SIGMA. The SIGMA version reproduces daily results and also provides a knowledge base which greatly facilitates inspection, modification, and extension of Forest-BGC.

  7. An American knowledge base in England - Alternate implementations of an expert system flight status monitor

    NASA Technical Reports Server (NTRS)

    Butler, G. F.; Graves, A. T.; Disbrow, J. D.; Duke, E. L.

    1989-01-01

    A joint activity between the Dryden Flight Research Facility of the NASA Ames Research Center (Ames-Dryden) and the Royal Aerospace Establishment (RAE) on knowledge-based systems has been agreed. Under the agreement, a flight status monitor knowledge base developed at Ames-Dryden has been implemented using the real-time AI (artificial intelligence) toolkit MUSE, which was developed in the UK. Here, the background to the cooperation is described and the details of the flight status monitor and a prototype MUSE implementation are presented. The capabilities of the expert-system flight status monitor to monitor data downlinked from the flight test aircraft and to generate information on the state and health of the system for the test engineers provide increased safety during flight testing of new systems. Furthermore, the expert-system flight status monitor provides the systems engineers with ready access to the large amount of information required to describe a complex aircraft system.

  8. An architecture for integrating distributed and cooperating knowledge-based Air Force decision aids

    NASA Technical Reports Server (NTRS)

    Nugent, Richard O.; Tucker, Richard W.

    1988-01-01

    MITRE has been developing a Knowledge-Based Battle Management Testbed for evaluating the viability of integrating independently-developed knowledge-based decision aids in the Air Force tactical domain. The primary goal for the testbed architecture is to permit a new system to be added to a testbed with little change to the system's software. Each system that connects to the testbed network declares that it can provide a number of services to other systems. When a system wants to use another system's service, it does not address the server system by name, but instead transmits a request to the testbed network asking for a particular service to be performed. A key component of the testbed architecture is a common database which uses a relational database management system (RDBMS). The RDBMS provides a database update notification service to requesting systems. Normally, each system is expected to monitor data relations of interest to it. Alternatively, a system may broadcast an announcement message to inform other systems that an event of potential interest has occurred. Current research is aimed at dealing with issues resulting from integration efforts, such as dealing with potential mismatches of each system's assumptions about the common database, decentralizing network control, and coordinating multiple agents.

  9. Knowledge-based approach to multiple-transaction processing and distributed data-base design

    SciTech Connect

    Park, J.T.

    1987-01-01

    The collective processing of multiple transactions in a data-base system has recently received renewed attention due to its capability of improving the overall performance of a data-base system and its applicability to the design of knowledge-based expert systems and extensible data-base systems. This dissertation consists of two parts. The first part presents a new knowledge-based approach to the problems of processing multiple concurrent queries and distributing replicated data objects for further improvement of the overall system performance. The second part deals with distributed data-base design, i.e., designing horizontal fragments using semantic knowledge, and allocating data in a distributed environment. Semantic knowledge about the data, such as functional dependencies and semantic-data-integrity constraints, is newly exploited for the identification of subset relationships between intermediate results of query executions involving joins, such that the (intermediate) results of queries can be utilized for the efficient processing of other queries. The expertise on the collective processing of multiple transactions is embodied in the rules of a rule-based expert system, MTP (Multiple Transaction Processor). In the second part, MTP is applied to the determination of horizontal fragments exploiting the semantic knowledge. Heuristics for allocating data in local area networks are developed.

  10. CLIPS implementation of a knowledge-based distributed control of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Bou-Ghannam, Akram A.; Doty, Keith L.

    1991-03-01

    We implement an architecture for the planning and control of an intelligent autonomous mobile robot which consists of concurrently running modules forming a hierarchy of control, in which lower-level modules perform 'reflexive' tasks while higher-level modules perform tasks requiring greater processing of sensor data. A knowledge-based system performs the task planning and arbitration of lower-level behaviors. This system reasons about behavior selection (fusion) based on its current knowledge and the situation at hand, provided by monitoring the status of lower-level behaviors and the map builder. We implement this knowledge-based planning module in CLIPS (C Language Integrated Production System), a rule-based expert system shell. CLIPS is written in and fully integrated with the C language, providing high portability and ease of integration with external systems. We discuss implementation issues, including the implementation of the control strategy in CLIPS rules and interfacing to other modules through the use of CLIPS user-defined external functions.

  11. Towards a Food Safety Knowledge Base Applicable in Crisis Situations and Beyond.

    PubMed

    Falenski, Alexander; Weiser, Armin A; Thöns, Christian; Appel, Bernd; Käsbohrer, Annemarie; Filter, Matthias

    2015-01-01

    In case of contamination in the food chain, fast action is required in order to reduce the numbers of affected people. In such situations, being able to predict the fate of agents in foods would help risk assessors and decision makers in assessing the potential effects of a specific contamination event and thus enable them to deduce the appropriate mitigation measures. One efficient strategy supporting this is using model based simulations. However, application in crisis situations requires ready-to-use and easy-to-adapt models to be available from the so-called food safety knowledge bases. Here, we illustrate this concept and its benefits by applying the modular open source software tools PMM-Lab and FoodProcess-Lab. As a fictitious sample scenario, an intentional ricin contamination at a beef salami production facility was modelled. Predictive models describing the inactivation of ricin were reviewed, relevant models were implemented with PMM-Lab, and simulations on residual toxin amounts in the final product were performed with FoodProcess-Lab. Due to the generic and modular modelling concept implemented in these tools, they can be applied to simulate virtually any food safety contamination scenario. Apart from the application in crisis situations, the food safety knowledge base concept will also be useful in food quality and safety investigations. PMID:26247028

  12. A spectral-knowledge-based approach for urban land-cover discrimination

    NASA Technical Reports Server (NTRS)

    Wharton, Stephen W.

    1987-01-01

    A prototype expert system was developed to demonstrate the feasibility of classifying multispectral remotely sensed data on the basis of spectral knowledge. The spectral expert was developed and tested with Thematic Mapper Simulator (TMS) data having eight spectral bands and a spatial resolution of 5 m. A knowledge base was developed that describes the target categories in terms of characteristic spectral relationships. The knowledge base was developed under the following assumptions: the data are calibrated to ground reflectance, the area is well illuminated, the pixels are dominated by a single category, and the target categories can be recognized without the use of spatial knowledge. Classification decisions are made on the basis of convergent evidence as derived from applying the spectral rules to a multiple-spatial-resolution representation of the image. The spectral expert achieved an accuracy of 80-percent correct or higher in recognizing 11 spectral categories in TMS data for the Washington, DC, area. Classification performance can be expected to decrease for data that do not satisfy the above assumptions, as illustrated by the 63-percent accuracy for 30-m resolution Thematic Mapper data.
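
    A toy version of the convergent-evidence idea is sketched below; the band ordering, thresholds, and categories are hypothetical and do not reproduce the paper's knowledge base.

        # Each spectral rule that fires for a pixel adds one piece of evidence for a
        # category; the category with the most convergent evidence wins.
        def classify_pixel(reflectance: list) -> str:
            nir, red, green = reflectance[3], reflectance[2], reflectance[1]
            evidence = {"vegetation": 0, "water": 0, "impervious": 0}
            if nir > red * 1.5:                        # pronounced red edge
                evidence["vegetation"] += 1
            if green > red and nir > green:            # green peak plus red edge
                evidence["vegetation"] += 1
            if nir < 0.05 and red < 0.05:              # very dark in all bands
                evidence["water"] += 1
            if abs(nir - red) < 0.03 and red > 0.15:   # bright and spectrally flat
                evidence["impervious"] += 1
            return max(evidence, key=evidence.get)

        print(classify_pixel([0.04, 0.08, 0.06, 0.40]))  # -> "vegetation"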

  13. Knowledge-based biomedical word sense disambiguation: an evaluation and application to clinical document classification

    PubMed Central

    Garla, Vijay N; Brandt, Cynthia

    2013-01-01

    Background Word sense disambiguation (WSD) methods automatically assign an unambiguous concept to an ambiguous term based on context, and are important to many text-processing tasks. In this study we developed and evaluated a knowledge-based WSD method that uses semantic similarity measures derived from the Unified Medical Language System (UMLS) and evaluated the contribution of WSD to clinical text classification. Methods We evaluated our system on biomedical WSD datasets and determined the contribution of our WSD system to clinical document classification on the 2007 Computational Medicine Challenge corpus. Results Our system compared favorably with other knowledge-based methods. Machine learning classifiers trained on disambiguated concepts significantly outperformed those trained using all concepts. Conclusions We developed a WSD system that achieves high disambiguation accuracy on standard biomedical WSD datasets and showed that our WSD system improves clinical document classification. Data sharing We integrated our WSD system with MetaMap and the clinical Text Analysis and Knowledge Extraction System, two popular biomedical natural language processing systems. All code required to reproduce our results and all tools developed as part of this study are released as open source, available at http://code.google.com/p/ytex. PMID:23077130
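
    The underlying decision rule (choose the candidate concept most similar, on average, to the unambiguous concepts in the surrounding context) can be sketched as follows; the similarity table and concept identifiers are invented stand-ins for the UMLS-derived measures the paper actually uses.

        # Knowledge-based WSD sketch with a hypothetical similarity table.
        SIMILARITY = {
            ("C_cold_temperature", "C_weather"): 0.8,
            ("C_cold_temperature", "C_infection"): 0.1,
            ("C_common_cold", "C_weather"): 0.2,
            ("C_common_cold", "C_infection"): 0.9,
        }

        def sim(a: str, b: str) -> float:
            return SIMILARITY.get((a, b), SIMILARITY.get((b, a), 0.0))

        def disambiguate(candidates: list, context_concepts: list) -> str:
            """Pick the candidate with the highest total similarity to the context."""
            return max(candidates,
                       key=lambda c: sum(sim(c, ctx) for ctx in context_concepts))

        print(disambiguate(["C_cold_temperature", "C_common_cold"],
                           ["C_infection", "C_weather"]))  # -> "C_common_cold"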

  14. Knowledge-base for the new human reliability analysis method, A Technique for Human Error Analysis (ATHEANA)

    SciTech Connect

    Cooper, S.E.; Wreathall, J.; Thompson, C.M.; Drouin, M.; Bley, D.C.

    1996-10-01

    This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, "A Technique for Human Error Analysis" (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.

  15. Towards a sharable numeric and symbolic knowledge base on cerebral cortex anatomy: lessons learned from a prototype.

    PubMed Central

    Dameron, Olivier; Gibaud, Bernard; Burgun, Anita; Morandi, Xavier

    2002-01-01

    We propose a knowledge base that combines numeric and symbolic knowledge about sulco-gyral brain cortex. This knowledge base is implemented using Web technologies. It is intended to be easily reusable in various application contexts such as teaching, decision support in neurosurgery and sharing of neuroimaging data for research purposes. Our analysis shows that (1) a formal representation of taxonomy and mereotopology, and (2) use of identity criteria to represent symbolic concepts, are needed to serve those applications. PMID:12463812

  16. Use of Knowledge Base Systems (EMDS) in Strategic and Tactical Forest Planning

    NASA Astrophysics Data System (ADS)

    Jensen, M. E.; Reynolds, K.; Stockmann, K.

    2008-12-01

    The USDA Forest Service 2008 Planning Rule requires Forest plans to provide a strategic vision for maintaining the sustainability of ecological, economic, and social systems across USFS lands through the identification of desired conditions and objectives. In this paper we show how knowledge-based systems can be efficiently used to evaluate disparate natural resource information to assess desired conditions and related objectives in Forest planning. We use the Ecosystem Management Decision Support (EMDS) system (http://www.institute.redlands.edu/emds/), which facilitates development of both logic-based models for evaluating ecosystem sustainability (desired conditions) and decision models to identify priority areas for integrated landscape restoration (objectives). The study area for our analysis spans 1,057 subwatersheds within western Montana and northern Idaho. Results of our study suggest that knowledge-based systems such as EMDS are well suited to both strategic and tactical planning and that the following points merit consideration in future National Forest (and other land management) planning efforts: 1) Logic models provide a consistent, transparent, and reproducible method for evaluating broad propositions about ecosystem sustainability such as: are watershed integrity, ecosystem and species diversity, social opportunities, and economic integrity in good shape across a planning area? The ability to evaluate such propositions in a formal logic framework also allows users the opportunity to evaluate statistical changes in outcomes over time, which could be very useful for regional and national reporting purposes and for addressing litigation; 2) The use of logic and decision models in strategic and tactical Forest planning provides a repository for expert knowledge (corporate memory) that is critical to the evaluation and management of ecosystem sustainability over time. This is especially true for the USFS and other federal resource agencies, which are

  17. Methodology development of an engineering design expert system utilizing a modular knowledge-base inference process

    NASA Astrophysics Data System (ADS)

    Winter, Steven John

    Methodology development was conducted to incorporate a modular knowledge-base representation into an expert system engineering design application. The objective of using multidisciplinary methodologies in defining a design system was to develop a system framework that would be applicable to a wide range of engineering applications. The technique of "knowledge clustering" was used to construct a general decision tree for all factual information relating to the design application. This construction combined surface knowledge of the design process with depth knowledge of the specific application. Utilization of both levels of knowledge created a system capable of processing multiple controlling tasks, including: organizing factual information relative to the cognitive levels of the design process, building finite element models for depth-knowledge analysis, developing a standardized finite element code for parallel processing, and determining a best solution generated by design optimization procedures. Proof of concept for the methodology developed here is shown in the implementation of an application defining the analysis and optimization of a composite aircraft canard subjected to a general compound loading condition. This application contained a wide range of factual information and heuristic rules. The analysis tools used included a finite element (FE) processor and a numerical optimizer. An advisory knowledge-base was also developed to provide a standard for conversion of serial FE code for parallel processing. All knowledge-bases developed operated as either advisory, selection, or classification systems. Laminate properties are limited to even-numbered, quasi-isotropic ply stacking sequences. This retains the full influence of the coupled in-plane and bending effects of the structure's behavior. The canard is modeled as a constant-thickness plate and discretized into a varying number of four- or nine-noded, quadrilateral, shear-deformable plate elements. The benefit gained by

  18. Optimization of knowledge-based systems and expert system building tools

    NASA Technical Reports Server (NTRS)

    Yasuda, Phyllis; Mckellar, Donald

    1993-01-01

    The objectives of the NASA-AMES Cooperative Agreement were to investigate, develop, and evaluate, via test cases, the system parameters and processing algorithms that constrain the overall performance of the Information Sciences Division's Artificial Intelligence Research Facility. Written reports covering various aspects of the grant were submitted to the co-investigators for the grant. Research studies concentrated on the field of artificial intelligence knowledge-based systems technology. Activities included the following areas: (1) AI training classes; (2) merging optical and digital processing; (3) science experiment remote coaching; (4) SSF data management system tests; (5) computer integrated documentation project; (6) conservation of design knowledge project; (7) project management calendar and reporting system; (8) automation and robotics technology assessment; (9) advanced computer architectures and operating systems; and (10) honors program.

  19. Building the Knowledge Base to Support the Automatic Animation Generation of Chinese Traditional Architecture

    NASA Astrophysics Data System (ADS)

    Wei, Gongjin; Bai, Weijing; Yin, Meifang; Zhang, Songmao

    We present a practice of applying Semantic Web technologies in the domain of Chinese traditional architecture. A knowledge base consisting of one ontology and four rule bases is built to support the automatic generation of animations that demonstrate the construction of various Chinese timber structures based on the user's input. Different Semantic Web formalisms are used, e.g., OWL DL, SWRL and Jess, to capture the domain knowledge, including the wooden components needed for a given building, the construction sequence, and the 3D size and position of every piece of wood. Our experience in exploiting the current Semantic Web technologies in real-world application systems indicates their prominent advantages (such as the reasoning facilities and modeling tools) as well as their limitations (such as low efficiency).

  20. TEXSYS. [a knowledge based system for the Space Station Freedom thermal control system test-bed

    NASA Technical Reports Server (NTRS)

    Bull, John

    1990-01-01

    The Systems Autonomy Demonstration Project has recently completed a major test and evaluation of TEXSYS, a knowledge-based system (KBS) which demonstrates real-time control and FDIR for the Space Station Freedom thermal control system test-bed. TEXSYS is the largest KBS ever developed by NASA and offers a unique opportunity for the study of technical issues associated with the use of advanced KBS concepts including: model-based reasoning and diagnosis, quantitative and qualitative reasoning, integrated use of model-based and rule-based representations, temporal reasoning, and scale-up performance issues. TEXSYS represents a major achievement in advanced automation that has the potential to significantly influence Space Station Freedom's design for the thermal control system. An overview of the Systems Autonomy Demonstration Project, the thermal control system test-bed, the TEXSYS architecture, preliminary test results, and thermal domain expert feedback are presented.

  1. Building organisational cyber resilience: A strategic knowledge-based view of cyber security management.

    PubMed

    Ferdinand, Jason

    The concept of cyber resilience has emerged in recent years in response to the recognition that cyber security is more than just risk management. Cyber resilience is the goal of organisations, institutions and governments across the world and yet the emerging literature is somewhat fragmented due to the lack of a common approach to the subject. This limits the possibility of effective collaboration across public, private and governmental actors in their efforts to build and maintain cyber resilience. In response to this limitation, and to calls for a more strategically focused approach, this paper offers a knowledge-based view of cyber security management that explains how an organisation can build, assess, and maintain cyber resilience. PMID:26642176

  2. Knowledge-based load leveling and task allocation in human-machine systems

    NASA Technical Reports Server (NTRS)

    Chignell, M. H.; Hancock, P. A.

    1986-01-01

    Conventional human-machine systems use task allocation policies which are based on the premise of a flexible human operator. This individual is most often required to compensate for and augment the capabilities of the machine. The development of artificial intelligence and improved technologies have allowed for a wider range of task allocation strategies. In response to these issues a Knowledge Based Adaptive Mechanism (KBAM) is proposed for assigning tasks to human and machine in real time, using a load leveling policy. This mechanism employs an online workload assessment and compensation system which is responsive to variations in load through an intelligent interface. This interface consists of a loading strategy reasoner which has access to information about the current status of the human-machine system as well as a database of admissible human/machine loading strategies. Difficulties standing in the way of successful implementation of the load leveling strategy are examined.

  3. A knowledge-based approach to identification and adaptation in dynamical systems control

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Wong, C. M.

    1988-01-01

    Artificial intelligence techniques are applied to the problems of model form and parameter identification of large-scale dynamic systems. The object-oriented knowledge representation is discussed in the context of causal modeling and qualitative reasoning. Structured sets of rules are used for implementing qualitative component simulations, for catching qualitative discrepancies and quantitative bound violations, and for making reconfiguration and control decisions that affect the physical system. These decisions are executed by backward-chaining through a knowledge base of control action tasks. This approach was implemented for two examples: a triple quadrupole mass spectrometer and a two-phase thermal testbed. Results of tests with both of these systems demonstrate that the software replicates some or most of the functionality of a human operator, thereby reducing the need for a human-in-the-loop in the lower levels of control of these complex systems.
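
    A highly simplified sketch of backward-chaining through a knowledge base of control action tasks is shown below; the rules, facts, and task names are invented for illustration and are not taken from the paper.

        # Backward chaining: to achieve a goal, find a rule concluding it and
        # recursively establish its preconditions, recording actions in order.
        RULES = {
            # goal: (preconditions, action executed once preconditions hold)
            "pressure_nominal": (["valve_open", "pump_running"], "monitor_pressure"),
            "valve_open":       ([],                             "command_open_valve"),
            "pump_running":     (["power_available"],            "start_pump"),
        }
        FACTS = {"power_available"}

        def achieve(goal: str, plan: list) -> bool:
            if goal in FACTS:
                return True
            if goal not in RULES:
                return False
            preconditions, action = RULES[goal]
            if all(achieve(p, plan) for p in preconditions):
                plan.append(action)
                FACTS.add(goal)
                return True
            return False

        plan = []
        achieve("pressure_nominal", plan)
        print(plan)  # -> ['command_open_valve', 'start_pump', 'monitor_pressure']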

  4. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

    In this study, an expert knowledge-based automatic sleep stage determination system working on a multi-valued decision making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for various sleep stages. Sleep stages are determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with the visual inspection for the sleep stages of awake, REM (rapid eye movement), light sleep, and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters and can be adapted to variable sleep data in hospitals. The developed automatic determination technique based on expert knowledge of visual inspection can be an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
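
    The decision rule described above can be sketched as follows: each characteristic parameter gets a per-stage probability density (Gaussian here purely for illustration), and the stage with the highest conditional probability is chosen. Means, standard deviations, and priors below are hypothetical, not clinical values.

        import math

        def gaussian_pdf(x, mean, std):
            return math.exp(-((x - mean) ** 2) / (2 * std ** 2)) / (std * math.sqrt(2 * math.pi))

        STAGES = {
            # stage: (prior, {parameter: (mean, std)}) -- all values hypothetical
            "awake":       (0.2, {"emg_power": (0.8, 0.2), "delta_ratio": (0.1, 0.05)}),
            "light_sleep": (0.4, {"emg_power": (0.4, 0.2), "delta_ratio": (0.3, 0.10)}),
            "deep_sleep":  (0.3, {"emg_power": (0.2, 0.1), "delta_ratio": (0.7, 0.15)}),
            "rem":         (0.1, {"emg_power": (0.1, 0.1), "delta_ratio": (0.2, 0.10)}),
        }

        def determine_stage(features: dict) -> str:
            def score(stage):
                prior, pdfs = STAGES[stage]
                p = prior
                for name, value in features.items():
                    mean, std = pdfs[name]
                    p *= gaussian_pdf(value, mean, std)
                return p
            return max(STAGES, key=score)

        print(determine_stage({"emg_power": 0.15, "delta_ratio": 0.65}))  # -> "deep_sleep"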

  5. Knowledge-based recognition algorithm for long-range infrared bridge images

    NASA Astrophysics Data System (ADS)

    Cao, Zhiguo; Sun, Qi; Zhang, Tianxu

    2001-10-01

    The recognition of bridges in long-range infrared images presents a number of problems due to the complexity of the background, high noise interference, the small size of the target, and the low contrast between a bridge and its surrounding water area. To counter these barriers, we have developed a new knowledge-based recognition algorithm. It first detects candidate bridge sub-regions and then focuses on them. According to the degree to which they match our pre-built framework, different credits are given, so that false objects are excluded and the real target is eventually found. The experimental results demonstrate that our localized method is always superior to the traditional global algorithms adopted by most previous researchers.

  6. Knowledge-Based Manufacturing and Structural Design for a High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Marx, William J.; Mavris, Dimitri N.; Schrage, Daniel P.

    1994-01-01

    The aerospace industry is currently addressing the problem of integrating manufacturing and design. To address the difficulties associated with using many conventional procedural techniques and algorithms, one feasible way to integrate the two concepts is with the development of an appropriate Knowledge-Based System (KBS). The authors present their reasons for selecting a KBS to integrate design and manufacturing. A methodology for an aircraft producibility assessment is proposed, utilizing a KBS for manufacturing process selection, that addresses both procedural and heuristic aspects of designing and manufacturing of a High Speed Civil Transport (HSCT) wing. A cost model is discussed that would allow system level trades utilizing information describing the material characteristics as well as the manufacturing process selections. Statements of future work conclude the paper.

  7. HEARTVIEW-A Knowledge Base to Support Clinical Research in Cardiology

    PubMed Central

    Leão, Beatriz de F.; Timmers, Teun; van der Lei, Johan; van Mulligen, Erik M.

    1990-01-01

    This paper presents HEARTVIEW - a knowledge base (KB) that offers aid for clinical research in cardiology. This KB is an essential component of the medical workstation MW2000, now under development at the department of Medical Informatics at the Erasmus University. HEARTVIEW integrates different types of knowledge: a conceptual model of the medical record in cardiology and knowledge on how to perform data analysis, according to the cardiological sub-domain. The design of HEARTVIEW is based on the assumption of a general structure in the medical record. The KB can be consulted by other modules in the MW2000 and by users through a medically-oriented graphical interface. The prototype is implemented on a Xerox 1100-series workstation. After evaluation, it will be transferred to the Unix environment, where the MW2000 is under development [1,2].

  8. Framing a Knowledge Base for a Legal Expert System Dealing with Indeterminate Concepts.

    PubMed

    Araszkiewicz, Michał; Łopatkiewicz, Agata; Zienkiewicz, Adam; Zurek, Tomasz

    2015-01-01

    Despite decades of development of formal tools for modelling legal knowledge and reasoning, the creation of a fully fledged legal decision support system remains challenging. Among these challenges, such a system requires an enormous amount of commonsense knowledge to derive legal expertise. This paper describes the development of a negotiation decision support system (the Parenting Plan Support System or PPSS) to support parents in drafting an agreement (the parenting plan) for the exercise of parental custody of minor children after a divorce is granted. The main objective here is to discuss problems of framing an intuitively appealing and computationally efficient knowledge base that can adequately represent the indeterminate legal concept of the well-being of the child in the context of continental legal culture and of Polish law in particular. In addition to commonsense reasoning, interpretation of such a concept demands both legal expertise and significant professional knowledge from other domains. PMID:26495435

  9. Distributed-knowledge-based spectral processing and classification system for instruction and learning

    NASA Astrophysics Data System (ADS)

    Siddiqui, Khalid J.

    1999-12-01

    This paper develops a distributed knowledge-based spectral processing and classification system that functions in one of two modes, executive and assistant. In the executive mode the system functions as a stand-alone system, automatically performing all the tasks from spectral enhancement, feature extraction and selection, to spectral classification and interpretation using the optimally feasible algorithms. In the assistant mode the system leads the user through the entire spectral processing and classification process, allowing the user to select appropriate parameters, their weights, a knowledge organization method, and a classification algorithm. Thus, the latter mode can also be used for teaching and instruction. It is shown how novice users can select a set of parameters, adjust their weights, and examine the classification process. Since different classifiers have various underlying assumptions, provisions have been made to control these assumptions, allowing users to select the parameters individually or in combination, and providing facilities to visualize the interrelationships among the parameters.

  10. KIPSE1: A Knowledge-based Interactive Problem Solving Environment for data estimation and pattern classification

    NASA Technical Reports Server (NTRS)

    Han, Chia Yung; Wan, Liqun; Wee, William G.

    1990-01-01

    A knowledge-based interactive problem solving environment called KIPSE1 is presented. KIPSE1 is a system built on a commercial expert system shell, the KEE system. This environment gives the user the capability to carry out exploratory data analysis and pattern classification tasks. A good solution often consists of a sequence of steps with a set of methods used at each step. In KIPSE1, a solution is represented in the form of a decision tree, and each node of the solution tree represents a partial solution to the problem. Many methodologies are provided to the user at each node, so that the user can interactively select the method and data sets to test and subsequently examine the results. In addition, users are allowed to make decisions at various stages of problem solving to subdivide the problem into smaller subproblems, so that a large problem can be handled and a better solution can be found.

  11. Knowledge-based design-flow management in the OOTIF framework

    NASA Astrophysics Data System (ADS)

    Li, Sikun; Guo, Yang; Zhao, Li Y.

    1996-03-01

    This paper introduces the main techniques adopted by the design flow management subsystem in the OOTIF framework. OOTIF is an object-oriented tool-integration CAD framework. In this paper, we present a static model of the design process based on flowcharts; we implement tool control through tool templates that make use of object-oriented concepts; we develop a design flowchart builder through which designers can express design intent when submitting tasks; and we build a knowledge base that makes intelligent tool selection and flow management possible. Some ideas for future developments of OOTIF are also presented. With the help of the design flow management system, designers are able to concentrate exclusively on those issues concerned with the creative and exploratory phases of design.

  12. Development the conceptual design of Knowledge Based System for Integrated Maintenance Strategy and Operation

    NASA Astrophysics Data System (ADS)

    Milana; Khan, M. K.; Munive, J. E.

    2014-07-01

    The importance of maintenance has escalated significantly with the increasing automation of manufacturing processes. This condition turns the traditional view of maintenance as an inevitable cost into a driver of business competitiveness. Consequently, maintenance strategy and operation decisions need to be synchronized with business and manufacturing concerns. This paper shows the development of the conceptual design of a Knowledge Based System for Integrated Maintenance Strategy and Operation (KBIMSO). The framework of KBIMSO is elaborated to show how the KBIMSO works to reach maintenance decisions. To address the multi-criteria nature of maintenance decision making, the KB system is embedded with GAP and AHP to support an integrated maintenance strategy and operation, which is novel in this area. The KBIMSO is useful for reviewing an existing maintenance system and giving reasonable recommendations on maintenance decisions with respect to business and manufacturing perspectives.
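
    For the AHP step mentioned above, criterion weights are typically derived from a pairwise comparison matrix; the sketch below uses the row geometric-mean approximation of the principal eigenvector, with purely hypothetical maintenance criteria and judgments.

        import math

        criteria = ["cost", "downtime_impact", "safety"]
        # pairwise[i][j]: how much more important criteria[i] is than criteria[j]
        pairwise = [
            [1.0, 1/3, 1/5],
            [3.0, 1.0, 1/2],
            [5.0, 2.0, 1.0],
        ]

        geo_means = [math.prod(row) ** (1 / len(row)) for row in pairwise]
        total = sum(geo_means)
        weights = {name: g / total for name, g in zip(criteria, geo_means)}
        print(weights)  # "safety" receives the largest weight under these judgments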

  13. Knowledge-base for interpretation of cerebrospinal fluid data patterns. Essentials in neurology and psychiatry.

    PubMed

    Reiber, Hansotto

    2016-06-01

    The physiological and biophysical knowledge base for interpretations of cerebrospinal fluid (CSF) data, and the corresponding reference ranges, are essential for the clinical pathologist and neurochemist. With the popular description of the CSF-flow-dependent barrier function, the dynamics and concentration gradients of blood-derived, brain-derived, and leptomeningeal proteins in CSF, and the specificity-independent functions of B-lymphocytes in the brain, the neurologist, psychiatrist, neurosurgeon, and neuropharmacologist may also find essentials for diagnosis, research, or the development of therapies. This review may help to replace outdated ideas such as "leakage" models of the barriers, linear immunoglobulin index interpretations, or CSF electrophoresis. Calculations, interpretations, and analytical pitfalls are described for albumin quotients, quantitation of immunoglobulin synthesis in Reibergrams, oligoclonal IgG, IgM analysis, the polyspecific (MRZ) antibody reaction, the statistical treatment of CSF data, and general quality assessment in the CSF laboratory. The diagnostic relevance is documented in an accompanying review. PMID:27332077
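
    The basic quotient calculations the review builds on can be sketched briefly: the albumin quotient (QAlb) as a barrier-function measure and the IgG index (QIgG divided by QAlb). Interpretation in the Reibergram relies on age-dependent, hyperbolic reference ranges that are not reproduced here; the sample concentrations are invented.

        def quotient(csf_mg_per_l: float, serum_mg_per_l: float) -> float:
            return csf_mg_per_l / serum_mg_per_l

        # Hypothetical patient values, in mg/L.
        albumin_csf, albumin_serum = 250.0, 42_000.0
        igg_csf, igg_serum = 35.0, 11_000.0

        q_alb = quotient(albumin_csf, albumin_serum)   # barrier-function measure
        q_igg = quotient(igg_csf, igg_serum)
        igg_index = q_igg / q_alb                      # linear index; see review for its limits

        print(f"QAlb = {q_alb:.4f}, QIgG = {q_igg:.4f}, IgG index = {igg_index:.2f}")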

  14. Knowledge based systems: A preliminary survey of selected issues and techniques

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kavi, Srinu

    1984-01-01

    It is only recently that research in Artificial Intelligence (AI) has begun to accomplish practical results. Most of these results can be attributed to the design and use of expert systems (or Knowledge-Based Systems, KBS) - problem-solving computer programs that can reach a level of performance comparable to that of a human expert in some specialized problem domain. But many computer systems designed to see images, hear sounds, and recognize speech are still in a fairly early stage of development. In this report, a preliminary survey of recent work on KBS is presented, explaining KBS concepts and the issues and techniques used to construct them. Considerations for constructing KBS applications and potential KBS research areas are identified. A case study (MYCIN) of a KBS is also provided.

  15. Knowledge-Based, Interactive, Custom Anatomical Scene Creation for Medical Education: The Biolucida System

    PubMed Central

    Warren, Wayne; Brinkley, James F.

    2005-01-01

    Few biomedical subjects of study are as resource-intensive to teach as gross anatomy. Medical education stands to benefit greatly from applications which deliver virtual representations of human anatomical structures. While many applications have been created to achieve this goal, their utility to the student is limited because of a lack of interactivity or customizability by expert authors. Here we describe the first version of the Biolucida system, which allows an expert anatomist author to create knowledge-based, customized, and fully interactive scenes and lessons for students of human macroscopic anatomy. Implemented in Java and VRML, Biolucida allows the sharing of these instructional 3D environments over the internet. The system simplifies the process of authoring immersive content while preserving its flexibility and expressivity. PMID:16779148

  16. Using fuzzy logic to integrate neural networks and knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Yen, John

    1991-01-01

    Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.
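
    A minimal sketch of the integration idea follows: a fuzzy rule interprets a neural network's output and triggers a symbolic control action when its fuzzy condition holds to a sufficient degree. The membership breakpoints, the stand-in network, and the action are hypothetical.

        def membership_low_confidence(score: float) -> float:
            """Degree to which 'classification confidence is low' (1.0 below 0.4, 0.0 above 0.7)."""
            if score <= 0.4:
                return 1.0
            if score >= 0.7:
                return 0.0
            return (0.7 - score) / 0.3

        def neural_net_classify(features):
            """Stand-in for a trained network: returns (label, confidence)."""
            return "faulty_sensor", 0.45

        label, confidence = neural_net_classify([0.1, 0.9, 0.3])
        degree = membership_low_confidence(confidence)

        # Fuzzy action rule: IF confidence is low THEN request more data before acting.
        if degree > 0.5:
            print(f"control action: gather more data before accepting '{label}' (degree {degree:.2f})")
        else:
            print(f"accept classification '{label}'")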

  17. Enriching semantic knowledge bases for opinion mining in big data applications.

    PubMed

    Weichselbraun, A; Gindl, S; Scharl, A

    2014-10-01

    This paper presents a novel method for contextualizing and enriching large semantic knowledge bases for opinion mining with a focus on Web intelligence platforms and other high-throughput big data applications. The method is not only applicable to traditional sentiment lexicons, but also to more comprehensive, multi-dimensional affective resources such as SenticNet. It comprises the following steps: (i) identify ambiguous sentiment terms, (ii) provide context information extracted from a domain-specific training corpus, and (iii) ground this contextual information to structured background knowledge sources such as ConceptNet and WordNet. A quantitative evaluation shows a significant improvement when using an enriched version of SenticNet for polarity classification. Crowdsourced gold standard data in conjunction with a qualitative evaluation sheds light on the strengths and weaknesses of the concept grounding, and on the quality of the enrichment process. PMID:25431524
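
    Step (i) of the method can be illustrated with a small sketch: a sentiment term is flagged as ambiguous when its observed polarity varies strongly across domain-specific corpora, so that context information can then be attached to it. The corpus statistics and threshold below are invented.

        from statistics import pstdev

        # term -> average polarity observed in each (hypothetical) domain corpus
        observed_polarity = {
            "cheap":    {"hotels": -0.6, "electronics": 0.4, "fashion": -0.2},
            "reliable": {"hotels":  0.7, "electronics": 0.8, "fashion":  0.6},
        }

        AMBIGUITY_THRESHOLD = 0.3  # illustrative spread above which a term is ambiguous

        for term, by_domain in observed_polarity.items():
            spread = pstdev(by_domain.values())
            verdict = "ambiguous -> enrich with context" if spread > AMBIGUITY_THRESHOLD else "stable"
            print(f"{term}: spread = {spread:.2f} ({verdict})")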

  18. Army nurses' knowledge base for determining triage categories in a mass casualty.

    PubMed

    Robison, Jennifer L

    2002-10-01

    The timing, location, and participants in a mass casualty scenario cannot be predicted. Nurses may be involved in performing triage, yet there is no published documentation of military nurses' ability to triage. A prospective design was used to describe 82 Army nurses' knowledge base related to designating triage categories for patients during a mass casualty, examining the relationships among their education and experience as evaluated by the Darnall Mass Casualty Triage Test and Demographic Data Form. The areas most significantly associated with higher scores on the Triage Test were: completion of Advanced Cardiac Life Support; advanced certification as a Certified Registered Nurse Anesthetist, Certified Emergency Nurse, or Critical Care Registered Nurse; and attendance at the Medical Management of Nuclear Weapons Course. An improved average score for nurses overall was also noted when compared with previous work with the Darnall MASCAL Triage Test. PMID:12392246

  19. Enriching semantic knowledge bases for opinion mining in big data applications

    PubMed Central

    Weichselbraun, A.; Gindl, S.; Scharl, A.

    2014-01-01

    This paper presents a novel method for contextualizing and enriching large semantic knowledge bases for opinion mining with a focus on Web intelligence platforms and other high-throughput big data applications. The method is not only applicable to traditional sentiment lexicons, but also to more comprehensive, multi-dimensional affective resources such as SenticNet. It comprises the following steps: (i) identify ambiguous sentiment terms, (ii) provide context information extracted from a domain-specific training corpus, and (iii) ground this contextual information to structured background knowledge sources such as ConceptNet and WordNet. A quantitative evaluation shows a significant improvement when using an enriched version of SenticNet for polarity classification. Crowdsourced gold standard data in conjunction with a qualitative evaluation sheds light on the strengths and weaknesses of the concept grounding, and on the quality of the enrichment process. PMID:25431524

  20. Cyanobacterial KnowledgeBase (CKB), a Compendium of Cyanobacterial Genomes and Proteomes

    PubMed Central

    Mohandass, Shylajanaciyar; Varadharaj, Sangeetha; Thilagar, Sivasudha; Abdul Kareem, Kaleel Ahamed; Dharmar, Prabaharan; Gopalakrishnan, Subramanian; Lakshmanan, Uma

    2015-01-01

    Cyanobacterial KnowledgeBase (CKB) is a free-access database that contains the genomic and proteomic information of 74 fully sequenced cyanobacterial genomes belonging to seven orders. The database also contains tools for sequence analysis. The species report and the gene report provide details about each species and gene (including sequence features and gene ontology annotations), respectively. The database also includes cyanoBLAST, an advanced tool that facilitates comparative analysis among cyanobacterial genomes and the genomes of E. coli (a prokaryote) and Arabidopsis (a eukaryote). The database is developed and maintained by the Sub-Distributed Informatics Centre (sponsored by the Department of Biotechnology, Govt. of India) of the National Facility for Marine Cyanobacteria, a facility dedicated to marine cyanobacterial research. CKB is freely available at http://nfmc.res.in/ckb/index.html. PMID:26305368