Science.gov

Sample records for prior biological knowledge-based

  1. SU-E-J-71: Spatially Preserving Prior Knowledge-Based Treatment Planning

    SciTech Connect

    Wang, H; Xing, L

    2015-06-15

    Purpose: Prior knowledge-based treatment planning is impeded by the use of a single dose volume histogram (DVH) curve. Critical spatial information is lost when the dose distribution is collapsed into a histogram. Even similar patients possess geometric variations that become inaccessible in the form of a single DVH. We propose a simple prior knowledge-based planning scheme that extracts features from a prior dose distribution while still preserving the spatial information. Methods: A prior patient plan is not used as a mere starting point for a new patient; rather, stopping criteria are constructed from it. Each structure from the prior patient is partitioned into multiple shells. For instance, the PTV is partitioned into an inner, middle, and outer shell. Prior dose statistics are then extracted for each shell and translated into the appropriate Dmin and Dmax parameters for the new patient. Results: The partitioned dose information from a prior case was applied to 14 two-dimensional prostate cases. Using the prior case yielded final DVHs comparable to manual planning, even though the DVH for the prior case differed from the DVHs for the 14 cases. For comparison, using a single DVH for the entire organ was also tested, but showed much poorer performance. Different ways of translating the prior dose statistics into parameters for the new patient were also tested. Conclusion: Prior knowledge-based treatment planning needs to salvage spatial information without transforming patients on a voxel-to-voxel basis. An efficient balance between the anatomy and dose domains is gained by partitioning the organs into multiple shells. The prior knowledge not only serves as a starting point for a new case; the information extracted from the partitioned shells is also translated into stopping criteria for the optimization problem at hand.
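
    The shell-partitioning step lends itself to a compact sketch. The following is a minimal illustration of the idea, assuming each structure voxel carries an inward distance to the structure surface and a prior-plan dose; the function name, the equal-count (quantile) binning, and the min/max statistics are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def shell_dose_constraints(dist_to_surface, dose, n_shells=3):
    """Partition a structure (e.g. the PTV) into concentric shells by
    inward distance to its surface (shells ordered from the surface
    toward the center), then extract per-shell dose statistics to be
    translated into Dmin/Dmax stopping criteria for a new patient."""
    edges = np.quantile(dist_to_surface, np.linspace(0, 1, n_shells + 1))
    edges[-1] += 1e-9  # make the innermost bin include the maximum
    constraints = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        shell = dose[(dist_to_surface >= lo) & (dist_to_surface < hi)]
        constraints.append({"Dmin": float(shell.min()),
                            "Dmax": float(shell.max())})
    return constraints
```

    Each dictionary would then be handed to the optimizer as a per-shell constraint rather than a single whole-organ DVH goal.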

  2. Case-based reasoning for space applications: Utilization of prior experience in knowledge-based systems

    NASA Technical Reports Server (NTRS)

    King, James A.

    1987-01-01

    The goal is to explain Case-Based Reasoning as a vehicle to establish knowledge-based systems based on experimental reasoning for possible space applications. This goal will be accomplished through an examination of reasoning based on prior experience in a sample domain, and also through a presentation of proposed space applications which could utilize Case-Based Reasoning techniques.

  3. Prior-knowledge-based spectral mixture analysis for impervious surface mapping

    SciTech Connect

    Zhang, Jinshui; He, Chunyang; Zhou, Yuyu; Zhu, Shuang; Shuai, Guanyuan

    2014-01-03

    In this study, we developed a prior-knowledge-based spectral mixture analysis (PKSMA) to map impervious surfaces by using endmembers derived separately for high- and low-density urban regions. First, an urban area was categorized into high- and low-density urban areas using a multi-step classification method. Next, in high-density urban areas, which were assumed to contain only vegetation and impervious surfaces (ISs), the Vegetation-Impervious (V-I) model was used in a spectral mixture analysis (SMA) with three endmembers: vegetation, high albedo, and low albedo. In low-density urban areas, the Vegetation-Impervious-Soil (V-I-S) model was used in an SMA with four endmembers: high albedo, low albedo, soil, and vegetation. The fractions of IS with high and low albedo in each pixel were combined to produce the final IS map. The root-mean-square error (RMSE) of the IS map produced using PKSMA was about 11.0%, compared to 14.52% using four-endmember SMA. Particularly in high-density urban areas, PKSMA (RMSE = 6.47%) showed better performance than four-endmember SMA (RMSE = 15.91%). The results indicate that PKSMA can improve IS mapping compared to traditional SMA by using appropriately selected endmembers, and it is particularly strong in high-density urban areas.
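
    As a rough illustration of the SMA step, the per-pixel unmixing can be posed as a sum-to-one constrained least-squares problem over the chosen endmembers. This is a generic textbook sketch, not the authors' code; the soft constraint via a heavily weighted row of ones is one common implementation trick (and it does not enforce non-negativity of the fractions).

```python
import numpy as np

def unmix(pixel, endmembers, weight=1e3):
    """Linear spectral unmixing with a (soft) sum-to-one constraint.

    pixel      : (bands,) observed reflectance spectrum
    endmembers : (bands, k) endmember spectra, e.g. vegetation,
                 high albedo, low albedo for the V-I model
    Returns the (k,) fraction vector, with fractions summing to ~1.
    """
    A = np.vstack([endmembers, weight * np.ones((1, endmembers.shape[1]))])
    b = np.append(pixel, weight)  # the extra equation: sum(fractions) = 1
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f
```

    The impervious-surface fraction of a pixel would then be the sum of its high- and low-albedo fractions.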

  4. CONCEPTUAL FRAMEWORK FOR THE CHEMICAL EFFECTS IN BIOLOGICAL SYSTEMS (CEBS) TOXICOGENOMICS KNOWLEDGE BASE

    EPA Science Inventory

    Conceptual Framework for the Chemical Effects in Biological Systems (CEBS) Toxicogenomics Knowledge Base

    Abstract
    Toxicogenomics studies how the genome is involved in responses to environmental stressors or toxicants. It combines genetics, genome-scale mRNA expressio...

  5. A prior-knowledge-based threshold segmentation method of forward-looking sonar images for underwater linear object detection

    NASA Astrophysics Data System (ADS)

    Liu, Lixin; Bian, Hongyu; Yagi, Shin-ichi; Yang, Xiaodong

    2016-07-01

    Raw sonar images may not be usable for underwater detection or recognition directly, because disturbances such as grating-lobe and multi-path effects alter the gray-level distribution of sonar images and cause phantom echoes. To search for a more robust segmentation method with a reasonable computational cost, a prior-knowledge-based threshold segmentation method for underwater linear object detection is discussed. The possibility of guiding the segmentation threshold evolution of forward-looking sonar images using prior knowledge is verified by experiment. During the threshold evolution, the collinear relation of two lines that correspond to double peaks in the voting space of the edged image is used as the termination criterion. The two steps interact: the Hough transform provides the basis for judging the collinear relation of the lines, while the binary image generated from the current threshold provides the input to the Hough transform. The experimental results show that the proposed method maintains a good tradeoff between segmentation quality and computational time in comparison with conventional segmentation methods, and it lends itself to further processing for unsupervised underwater visual understanding.
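
    The threshold-evolution loop can be sketched as follows. This is a simplified reconstruction from the abstract, not the authors' implementation: a minimal Hough accumulator finds the two strongest peaks, and the threshold is lowered until those peaks describe (nearly) the same line; the peak-picking, tolerances, and step size are all illustrative assumptions.

```python
import numpy as np

def hough_peaks(binary, n_theta=180):
    """Minimal Hough transform; returns (rho, theta) of the two strongest peaks."""
    ys, xs = np.nonzero(binary)
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    diag = int(np.hypot(*binary.shape)) + 1
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for x, y in zip(xs, ys):
        # each foreground pixel votes for one rho per theta
        rhos = (x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    top2 = np.argsort(acc, axis=None)[-2:]
    return [(r - diag, thetas[t]) for r, t in
            (np.unravel_index(i, acc.shape) for i in top2)]

def evolve_threshold(image, start=255, step=5, rho_tol=3, theta_tol=0.05):
    """Lower the segmentation threshold until the two strongest Hough
    peaks of the binarized image are (nearly) collinear."""
    for thr in range(start, 0, -step):
        binary = image >= thr
        if binary.sum() < 2:
            continue  # too few pixels to form any line yet
        (r1, t1), (r2, t2) = hough_peaks(binary)
        if abs(r1 - r2) <= rho_tol and abs(t1 - t2) <= theta_tol:
            return thr
    return 0
```

    In the paper's setting, the binarized input would be the edged sonar image rather than the raw one.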

  6. RegenBase: a knowledge base of spinal cord injury biology for translational research

    PubMed Central

    Callahan, Alison; Abeyruwan, Saminda W.; Al-Ali, Hassan; Sakurai, Kunie; Ferguson, Adam R.; Popovich, Phillip G.; Shah, Nigam H.; Visser, Ubbo; Bixby, John L.; Lemmon, Vance P.

    2016-01-01

    Spinal cord injury (SCI) research is a data-rich field that aims to identify the biological mechanisms resulting in loss of function and mobility after SCI, as well as develop therapies that promote recovery after injury. SCI experimental methods, data and domain knowledge are locked in the largely unstructured text of scientific publications, making large scale integration with existing bioinformatics resources and subsequent analysis infeasible. The lack of standard reporting for experiment variables and results also makes experiment replicability a significant challenge. To address these challenges, we have developed RegenBase, a knowledge base of SCI biology. RegenBase integrates curated literature-sourced facts and experimental details, raw assay data profiling the effect of compounds on enzyme activity and cell growth, and structured SCI domain knowledge in the form of the first ontology for SCI, using Semantic Web representation languages and frameworks. RegenBase uses consistent identifier schemes and data representations that enable automated linking among RegenBase statements and also to other biological databases and electronic resources. By querying RegenBase, we have identified novel biological hypotheses linking the effects of perturbagens to observed behavioral outcomes after SCI. RegenBase is publicly available for browsing, querying and download. Database URL: http://regenbase.org PMID:27055827

  7. RegenBase: a knowledge base of spinal cord injury biology for translational research.

    PubMed

    Callahan, Alison; Abeyruwan, Saminda W; Al-Ali, Hassan; Sakurai, Kunie; Ferguson, Adam R; Popovich, Phillip G; Shah, Nigam H; Visser, Ubbo; Bixby, John L; Lemmon, Vance P

    2016-01-01

    Spinal cord injury (SCI) research is a data-rich field that aims to identify the biological mechanisms resulting in loss of function and mobility after SCI, as well as develop therapies that promote recovery after injury. SCI experimental methods, data and domain knowledge are locked in the largely unstructured text of scientific publications, making large scale integration with existing bioinformatics resources and subsequent analysis infeasible. The lack of standard reporting for experiment variables and results also makes experiment replicability a significant challenge. To address these challenges, we have developed RegenBase, a knowledge base of SCI biology. RegenBase integrates curated literature-sourced facts and experimental details, raw assay data profiling the effect of compounds on enzyme activity and cell growth, and structured SCI domain knowledge in the form of the first ontology for SCI, using Semantic Web representation languages and frameworks. RegenBase uses consistent identifier schemes and data representations that enable automated linking among RegenBase statements and also to other biological databases and electronic resources. By querying RegenBase, we have identified novel biological hypotheses linking the effects of perturbagens to observed behavioral outcomes after SCI. RegenBase is publicly available for browsing, querying and download. Database URL: http://regenbase.org PMID:27055827

  8. BioBIKE: a Web-based, programmable, integrated biological knowledge base.

    PubMed

    Elhai, Jeff; Taton, Arnaud; Massar, J P; Myers, John K; Travers, Mike; Casey, Johnny; Slupesky, Mark; Shrager, Jeff

    2009-07-01

    BioBIKE (biobike.csbc.vcu.edu) is a web-based environment enabling biologists with little programming expertise to combine tools, data, and knowledge in novel and possibly complex ways, as demanded by the biological problem at hand. BioBIKE is composed of three integrated components: a biological knowledge base, a graphical programming interface and an extensible set of tools. Each of the five current BioBIKE instances provides all available information (genomic, metabolic, experimental) appropriate to a given research community. The BioBIKE programming language and graphical programming interface employ familiar operations to help users combine functions and information to conduct biologically meaningful analyses. Many commonly used tools, such as Blast and PHYLIP, are built-in, allowing users to access them within the same interface and to pass results from one to another. Users may also invent their own tools, packaging complex expressions under a single name, which is immediately made accessible through the graphical interface. BioBIKE represents a partial solution to the difficult question of how to enable those with no background in computer programming to work directly and creatively with mass biological information. BioBIKE is distributed under the MIT Open Source license. A description of the underlying language and other technical matters is available at www.Biobike.org. PMID:19433511

  9. Integrating biological knowledge based on functional annotations for biclustering of gene expression data.

    PubMed

    Nepomuceno, Juan A; Troncoso, Alicia; Nepomuceno-Chamorro, Isabel A; Aguilar-Ruiz, Jesús S

    2015-05-01

    Gene expression data analysis is based on the assumption that co-expressed genes imply co-regulated genes. This assumption is being reformulated because the co-expression of a group of genes may be the result of independent activation under the same experimental condition rather than of the same regulatory regime. For this reason, traditional techniques have recently been improved with the use of prior biological knowledge from open-access repositories together with gene expression data. Biclustering is an unsupervised machine learning technique that searches for patterns in gene expression data matrices. A scatter search-based biclustering algorithm that integrates biological information is proposed in this paper. In addition to the gene expression data matrix, the only other input of the algorithm is a direct annotation file that relates each gene to a set of terms from a biological repository where genes are annotated. Two different biological measures, FracGO and SimNTO, are proposed to integrate this information by adding them to the to-be-optimized fitness function in the scatter search scheme. The measure FracGO is based on biological enrichment, and SimNTO is based on the overlap among GO annotations of pairs of genes. Experimental results evaluate the proposed algorithm on two datasets and show that it performs better when biological knowledge is integrated. Moreover, the analysis and comparison of the two biological measures is presented, and it is concluded that the differences depend both on the data source and on how the annotation file was built in the case where GO is used. It is also shown that the proposed algorithm obtains a greater number of enriched biclusters than other classical biclustering algorithms typically used as benchmarks, and an analysis of the overlap among biclusters reveals that the obtained biclusters present low overlap. The proposed methodology is a general-purpose algorithm which allows

  10. A Knowledge Base for Teaching Biology Situated in the Context of Genetic Testing

    ERIC Educational Resources Information Center

    van der Zande, Paul; Waarlo, Arend Jan; Brekelmans, Mieke; Akkerman, Sanne F.; Vermunt, Jan D.

    2011-01-01

    Recent developments in the field of genomics will impact the daily practice of biology teachers who teach genetics in secondary education. This study reports on the first results of a research project aimed at enhancing biology teacher knowledge for teaching genetics in the context of genetic testing. The increasing body of scientific knowledge…

  11. A Knowledge Base for Teaching Biology Situated in the Context of Genetic Testing

    NASA Astrophysics Data System (ADS)

    van der Zande, Paul; Waarlo, Arend Jan; Brekelmans, Mieke; Akkerman, Sanne F.; Vermunt, Jan D.

    2011-10-01

    Recent developments in the field of genomics will impact the daily practice of biology teachers who teach genetics in secondary education. This study reports on the first results of a research project aimed at enhancing biology teacher knowledge for teaching genetics in the context of genetic testing. The increasing body of scientific knowledge concerning genetic testing and the related consequences for decision-making indicate the societal relevance of such a situated learning approach. What content knowledge do biology teachers need for teaching genetics in the personal health context of genetic testing? This study describes the required content knowledge by exploring the educational practice and clinical genetic practices. Nine experienced teachers and 12 respondents representing the clinical genetic practices (clients, medical professionals, and medical ethicists) were interviewed about the biological concepts and ethical, legal, and social aspects (ELSA) of testing they considered relevant to empowering students as future health care clients. The ELSA suggested by the respondents were complemented by suggestions found in the literature on genetic counselling. The findings revealed that the required teacher knowledge consists of multiple layers that are embedded in specific genetic test situations: on the one hand, the knowledge of concepts represented by the curricular framework and some additional concepts (e.g. multifactorial and polygenic disorder) and, on the other hand, more knowledge of ELSA and generic characteristics of genetic test practice (uncertainty, complexity, probability, and morality). Suggestions regarding how to translate these characteristics, concepts, and ELSA into context-based genetics education are discussed.

  12. Enhancing Interpretability of Gene Signatures with Prior Biological Knowledge

    PubMed Central

    Squillario, Margherita; Barbieri, Matteo; Verri, Alessandro; Barla, Annalisa

    2016-01-01

    Biological interpretability is a key requirement for the output of microarray data analysis pipelines. The most used pipeline first identifies a gene signature from the acquired measurements and then uses gene enrichment analysis as a tool for functionally characterizing the obtained results. Recently Knowledge Driven Variable Selection (KDVS), an alternative approach which performs both steps at the same time, has been proposed. In this paper, we assess the effectiveness of KDVS against standard approaches on a Parkinson’s Disease (PD) dataset. The presented quantitative analysis is made possible by the construction of a reference list of genes and gene groups associated to PD. Our work shows that KDVS is much more effective than the standard approach in enhancing the interpretability of the obtained results. PMID:27600081

  13. Enhancing Interpretability of Gene Signatures with Prior Biological Knowledge.

    PubMed

    Squillario, Margherita; Barbieri, Matteo; Verri, Alessandro; Barla, Annalisa

    2016-01-01

    Biological interpretability is a key requirement for the output of microarray data analysis pipelines. The most used pipeline first identifies a gene signature from the acquired measurements and then uses gene enrichment analysis as a tool for functionally characterizing the obtained results. Recently Knowledge Driven Variable Selection (KDVS), an alternative approach which performs both steps at the same time, has been proposed. In this paper, we assess the effectiveness of KDVS against standard approaches on a Parkinson's Disease (PD) dataset. The presented quantitative analysis is made possible by the construction of a reference list of genes and gene groups associated to PD. Our work shows that KDVS is much more effective than the standard approach in enhancing the interpretability of the obtained results. PMID:27600081

  14. Knowledge-Based Abstracting.

    ERIC Educational Resources Information Center

    Black, William J.

    1990-01-01

    Discussion of automatic abstracting of technical papers focuses on a knowledge-based method that uses two sets of rules. Topics discussed include anaphora; text structure and discourse; abstracting techniques, including the keyword method and the indicator phrase method; and tools for text skimming. (27 references) (LRW)

  15. Knowledge based programming at KSC

    NASA Technical Reports Server (NTRS)

    Tulley, J. H., Jr.; Delaune, C. I.

    1986-01-01

    Various KSC knowledge-based systems projects are discussed. The objectives of the knowledge-based automatic test equipment and Shuttle connector analysis network projects are described. It is observed that knowledge-based programs must handle factual and expert knowledge; the characteristics of these two types of knowledge are examined. Applications for the knowledge-based programming technique are considered.

  16. Filtering genetic variants and placing informative priors based on putative biological function.

    PubMed

    Friedrichs, Stefanie; Malzahn, Dörthe; Pugh, Elizabeth W; Almeida, Marcio; Liu, Xiao Qing; Bailey, Julia N

    2016-01-01

    High-density genetic marker data, especially sequence data, imply an immense multiple-testing burden. This can be ameliorated by filtering genetic variants, exploiting or accounting for correlations between variants, jointly testing variants, and incorporating informative priors. Priors can be based on biological knowledge or predicted variant function, or even be used to integrate gene expression or other omics data. Based on Genetic Analysis Workshop (GAW) 19 data, this article discusses the diversity and usefulness of functional variant scores provided, for example, by PolyPhen2, SIFT, or RegulomeDB annotations. Incorporating functional scores into variant filters or weights and adjusting the significance level for correlations between variants yielded significant associations with blood pressure traits in a large family study of Mexican Americans (GAW19 data set). Marker rs218966 in gene PHF14 and rs9836027 in MAP4 significantly associated with hypertension; additionally, rare variants in SNUPN significantly associated with systolic blood pressure. Variant weights strongly influenced the power of kernel methods and burden tests. Apart from variant weights in test statistics, prior weights may also be used when combining test statistics, or to informatively weight p values while controlling the false discovery rate (FDR). Indeed, power improved when gene expression data were used for FDR-controlled informative weighting of the genes' association test p values. Finally, approaches exploiting variant correlations included identity-by-descent mapping and the optimal strategy for jointly testing rare and common variants, which was observed to depend on linkage disequilibrium structure. PMID:26866982
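
    One common form of the p-value weighting mentioned here is the weighted Benjamini-Hochberg procedure: each p value is divided by a prior weight (with weights normalized to average 1) before the usual step-up rule. The sketch below is a generic illustration of that procedure, not the GAW19 contributors' code.

```python
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg FDR control: divide each p value by
    its prior weight (normalized to mean 1), then apply the standard
    BH step-up rule. Returns a boolean rejection vector."""
    w = np.asarray(weights, float)
    w = w / w.mean()                      # weights must average to 1
    p = np.asarray(pvals, float) / w      # informative reweighting
    order = np.argsort(p)
    m = len(p)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    rejected = np.zeros(m, bool)
    rejected[order[:k]] = True
    return rejected
```

    Upweighting a hypothesis with strong prior support lowers its effective p value, which is how external omics evidence can rescue an association that plain BH would miss.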

  17. A collaborative knowledge base for cognitive phenomics

    PubMed Central

    Sabb, FW; Bearden, CE; Glahn, DC; Parker, DS; Freimer, N; Bilder, RM

    2014-01-01

    The human genome project has stimulated development of impressive repositories of biological knowledge at the genomic level and new knowledge bases are rapidly being developed in a ‘bottom-up’ fashion. In contrast, higher-level phenomics knowledge bases are underdeveloped, particularly with respect to the complex neuropsychiatric syndrome, symptom, cognitive, and neural systems phenotypes widely acknowledged as critical to advance molecular psychiatry research. This gap limits informatics strategies that could improve both the mining and representation of relevant knowledge, and help prioritize phenotypes for new research. Most existing structured knowledge bases also engage a limited set of contributors, and thus fail to leverage recent developments in social collaborative knowledge-building. We developed a collaborative annotation database to enable representation and sharing of empirical information about phenotypes important to neuropsychiatric research (www.Phenowiki.org). As a proof of concept, we focused on findings relevant to ‘cognitive control’, a neurocognitive construct considered important to multiple neuropsychiatric syndromes. Currently this knowledge base tabulates empirical findings about heritabilities and measurement properties of specific cognitive task and rating scale indicators (n = 449 observations). It is hoped that this new open resource can serve as a starting point that enables broadly collaborative knowledge-building, and help investigators select and prioritize endophenotypes for translational research. PMID:18180765

  18. Knowledge-based nursing diagnosis

    NASA Astrophysics Data System (ADS)

    Roy, Claudette; Hay, D. Robert

    1991-03-01

    Nursing diagnosis is an integral part of the nursing process and determines the interventions leading to outcomes for which the nurse is accountable. Diagnosis under the time constraints of modern nursing can benefit from a computer assist. A knowledge-based engineering approach was developed to address these problems. A number of design issues extending beyond the capture of knowledge were addressed to make the system practical. The issues involved in implementing a professional knowledge base in a clinical setting are discussed, including system functions, structure, interfaces, the health care environment, and terminology and taxonomy. An integrated system concept from assessment through intervention and evaluation is outlined.

  19. Expert and Knowledge Based Systems.

    ERIC Educational Resources Information Center

    Demaid, Adrian; Edwards, Lyndon

    1987-01-01

    Discusses the nature and current state of knowledge-based systems and expert systems. Describes an expert system from the viewpoints of a computer programmer and an applications expert. Addresses concerns related to materials selection and forecasts future developments in the teaching of materials engineering. (ML)

  20. Population Education: A Knowledge Base.

    ERIC Educational Resources Information Center

    Jacobson, Willard J.

    To aid junior high and high school educators and curriculum planners as they develop population education programs, the book provides an overview of the population education knowledge base. In addition, it suggests learning activities, discussion questions, and background information which can be integrated into courses dealing with population,…

  21. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm, for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower-than-normal setting to detect low-RCS targets. This lower threshold produces a higher-than-normal false alarm rate, so additional signal processing, including spectral filtering, CFAR, and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing, and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-associations-out-of-N-scans rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 array processor and a VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beam-splitting, and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance, with a nominal real-time delay of less than one second between illumination and display.
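
    The M-out-of-N declaration rule described above can be sketched as a sliding window over successive scans. This is a generic illustration of the rule, not TSC's implementation; reducing each scan to a single boolean (did a measurement associate with the tentative template?) is a simplifying assumption.

```python
from collections import deque

def m_of_n_confirm(hit_stream, m=3, n=5):
    """M-out-of-N scan-to-scan integration: declare a detection as soon
    as a tentative track template has accumulated M associated
    detections within the last N scans.

    hit_stream : iterable of booleans, one per scan, True when that
                 scan's measurement associates with the template."""
    window = deque(maxlen=n)  # automatically drops hits older than N scans
    for scan, hit in enumerate(hit_stream):
        window.append(hit)
        if sum(window) >= m:
            return scan  # scan index at which the detection is declared
    return None  # track never confirmed
```

    Raising M (or shrinking N) trades detection probability against output false alarm rate, which is exactly the balance the abstract describes.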

  22. Automated knowledge-base refinement

    NASA Technical Reports Server (NTRS)

    Mooney, Raymond J.

    1994-01-01

    Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.

  23. Knowledge based SAR images exploitations

    NASA Astrophysics Data System (ADS)

    Wang, David L.

    1987-01-01

    One of the basic functions of a SAR image exploitation system is the detection of man-made objects. Detection performance is strongly limited by the performance of the segmentation modules. This paper presents a detection paradigm composed of an adaptive segmentation algorithm based on a priori knowledge of objects, followed by a top-down hierarchical detection process that generates and evaluates object hypotheses. Shadow information and inter-object relationships can be added to the knowledge base to improve performance over that of a statistical detector based only on the attributes of individual objects.

  24. Knowledge based jet engine diagnostics

    NASA Technical Reports Server (NTRS)

    Jellison, Timothy G.; Dehoff, Ronald L.

    1987-01-01

    A fielded expert system automates equipment fault isolation and recommends corrective maintenance action for Air Force jet engines. The knowledge based diagnostics tool was developed as an expert system interface to the Comprehensive Engine Management System, Increment IV (CEMS IV), the standard Air Force base level maintenance decision support system. XMAM (trademark), the Expert Maintenance Tool, automates procedures for troubleshooting equipment faults, provides a facility for interactive user training, and fits within a diagnostics information feedback loop to improve the troubleshooting and equipment maintenance processes. The application of expert diagnostics to the Air Force A-10A aircraft TF-34 engine equipped with the Turbine Engine Monitoring System (TEMS) is presented.

  25. Does Teaching Experience Matter? Examining Biology Teachers' Prior Knowledge for Teaching in an Alternative Certification Program

    ERIC Educational Resources Information Center

    Friedrichsen, Patricia J.; Abell, Sandra K.; Pareja, Enrique M.; Brown, Patrick L.; Lankford, Deanna M.; Volkmann, Mark J.

    2009-01-01

    Alternative certification programs (ACPs) have been proposed as a viable way to address teacher shortages, yet we know little about how teacher knowledge develops within such programs. The purpose of this study was to investigate prior knowledge for teaching among students entering an ACP, comparing individuals with teaching experience to those…

  26. Cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward A.; Buchanan, Bruce G.

    1988-01-01

    This final report covers work performed under Contract NCC2-220 between NASA Ames Research Center and the Knowledge Systems Laboratory, Stanford University. The period of research was from March 1, 1987 to February 29, 1988. Topics covered were as follows: (1) concurrent architectures for knowledge-based systems; (2) methods for the solution of geometric constraint satisfaction problems, and (3) reasoning under uncertainty. The research in concurrent architectures was co-funded by DARPA, as part of that agency's Strategic Computing Program. The research has been in progress since 1985, under DARPA and NASA sponsorship. The research in geometric constraint satisfaction has been done in the context of a particular application, that of determining the 3-D structure of complex protein molecules, using the constraints inferred from NMR measurements.

  27. Knowledge Base Editor (SharpKBE)

    NASA Technical Reports Server (NTRS)

    Tikidjian, Raffi; James, Mark; Mackey, Ryan

    2007-01-01

    The SharpKBE software provides a graphical user interface environment for domain experts to build and manage knowledge base systems. Knowledge bases can be exported/translated to various target languages automatically, including customizable target languages.

  28. Using Conceptual Analysis To Build Knowledge Bases.

    ERIC Educational Resources Information Center

    Shinghal, Rajjan; Le Xuan, Albert

    This paper describes the methods and techniques called Conceptual Analysis (CA), a rigorous procedure to generate (without involuntary omissions and repetitions) knowledge bases for the development of knowledge-based systems. An introduction is given of CA and how it can be used to produce knowledge bases. A discussion is presented on what is…

  9. Foundation: Transforming data bases into knowledge bases

    NASA Technical Reports Server (NTRS)

    Purves, R. B.; Carnes, James R.; Cutts, Dannie E.

    1987-01-01

    One approach to transforming information stored in relational data bases into knowledge based representations and back again is described. This system, called Foundation, allows knowledge bases to take advantage of vast amounts of pre-existing data. A benefit of this approach is inspection, and even population, of data bases through an intelligent knowledge-based front-end.

  10. Knowledge-based landmarking of cephalograms.

    PubMed

    Lévy-Mandel, A D; Venetsanopoulos, A N; Tsotsos, J K

    1986-06-01

    Orthodontists have defined a certain number of characteristic points, or landmarks, on X-ray images of the human skull which are used to study growth or as a diagnostic aid. This work presents the first step toward an automatic extraction of these points. They are defined with respect to particular lines which are retrieved first. The original image is preprocessed with a prefiltering operator (median filter) followed by an edge detector (Mero-Vassy operator). A knowledge-based line-following algorithm is subsequently applied, involving a production system with organized sets of rules and a simple interpreter. The a priori knowledge implemented in the algorithm must take into account the fact that the lines represent biological shapes and can vary considerably from one patient to the next. The performance of the algorithm is judged with the help of objective quality criteria. Determination of the exact shapes of the lines allows the computation of the positions of the landmarks. PMID:3519070
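The preprocessing pipeline described in this record (median prefiltering followed by edge detection) can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: it substitutes a plain central-difference gradient for the Mero-Vassy operator, and the tiny test image is invented.

```python
# Sketch of the preprocessing stage: a 3x3 median filter to suppress
# impulse noise, followed by a gradient-magnitude edge map. The paper
# uses the Mero-Vassy edge operator; the central-difference gradient
# below is a simple stand-in.

def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2-D list of pixel values."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]        # borders copied unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = sorted(img[y + dy][x + dx]
                            for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = window[4]        # median of 9 values
    return out

def gradient_magnitude(img):
    """Approximate edge strength with central differences."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Toy image: uniform left half (10), right half (200), plus one
# salt-noise pixel that the median filter should remove.
img = [[10] * 4 + [200] * 4 for _ in range(8)]
img[3][1] = 255                          # impulse noise
smoothed = median_filter_3x3(img)
edges = gradient_magnitude(smoothed)
```

After smoothing, the edge map responds strongly only along the intensity boundary, which is the kind of clean line input the rule-based line follower would need.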

Genetic characterization for intraspecific hybridization of an exotic parasitoid prior to its introduction for classical biological control

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The successful establishment of an exotic parasitoid in the context of classical biological control of insect pests depends upon its adaptability to the new environment. In theory, intraspecific hybridization may improve the success of the establishment as a result of an increase in the available ge...

  12. Knowledge-Based Network Operations

    NASA Astrophysics Data System (ADS)

    Wu, Chuan-lin; Hung, Chaw-Kwei; Stedry, Steven P.; McClure, James P.; Yeh, Show-Way

    1988-03-01

An expert system is being implemented to enhance the operability of the Ground Communication Facility (GCF) of Jet Propulsion Laboratory's (JPL) Deep Space Network (DSN). The DSN is a tracking network for all of JPL's spacecraft plus a subset of spacecraft launched by other NASA centers. A GCF upgrade task is set to replace the aging GCF system with modern equipment capable of supporting a knowledge-based monitor and control approach. The expert system, implemented with KEE on a SUN workstation, performs network fault management, configuration management, and performance management in real time. Monitor data are collected from each processor and DSCC every five seconds. In addition to serving as input parameters of the expert system, extracted management information is used to update a management information database. For monitor and control purposes, the software of each processor is divided into layers following the OSI standard. Each layer is modeled as a finite state machine. A System Management Application Process (SMAP) is implemented at the application layer; it coordinates the layer managers of the same processor and communicates with peer SMAPs of other processors. The expert system will be tuned by augmenting its production rules during operation, and its performance will be measured.

  13. A Natural Language Interface Concordant with a Knowledge Base

    PubMed Central

    Han, Yong-Jin; Park, Seong-Bae; Park, Se-Young

    2016-01-01

The discordance between expressions interpretable by a natural language interface (NLI) system and those answerable by a knowledge base is a critical problem in the field of NLIs. In order to solve this discordance problem, this paper proposes a method to translate natural language questions into formal queries that can be generated from a graph-based knowledge base. The proposed method considers a subgraph of a knowledge base as a formal query. Thus, all formal queries corresponding to a concept or a predicate in the knowledge base can be generated prior to query time, and all possible natural language expressions corresponding to each formal query can also be collected in advance. A natural language expression has a one-to-one mapping with a formal query. Hence, a natural language question is translated into a formal query by matching the question with the most appropriate natural language expression. If the confidence of this matching is not sufficiently high, the proposed method rejects the question and does not answer it. Multipredicate queries are processed by regarding them as a set of collected expressions. The experimental results show that the proposed method thoroughly handles answerable questions from the knowledge base and rejects unanswerable ones effectively. PMID:26904105
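The matching-with-rejection step described in this abstract can be sketched as follows. The stored expressions, the placeholder queries, the similarity measure (difflib's `SequenceMatcher`), and the threshold are all invented for illustration; they are not the paper's actual collection or confidence model.

```python
# Sketch: each collected natural-language expression maps one-to-one
# to a formal query (placeholder strings here). An incoming question
# is matched against the stored expressions; low-confidence matches
# are rejected rather than answered.
from difflib import SequenceMatcher

EXPRESSION_TO_QUERY = {
    "who directed film X": "SELECT ?d WHERE { X :director ?d }",
    "when was person X born": "SELECT ?y WHERE { X :birthYear ?y }",
}

def translate(question, threshold=0.75):
    """Return (formal_query, confidence), or (None, confidence) on rejection."""
    best_expr, best_score = None, 0.0
    for expr in EXPRESSION_TO_QUERY:
        score = SequenceMatcher(None, question.lower(), expr.lower()).ratio()
        if score > best_score:
            best_expr, best_score = expr, score
    if best_score < threshold:
        return None, best_score   # low confidence: reject, don't answer
    return EXPRESSION_TO_QUERY[best_expr], best_score
```

For example, `translate("Who directed film X")` matches an expression and returns its query, while an unrelated question falls below the threshold and is rejected.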

  14. A Discussion of Knowledge Based Design

    NASA Technical Reports Server (NTRS)

    Wood, Richard M.; Bauer, Steven X. S.

    1999-01-01

A discussion of knowledge and Knowledge-Based design as related to the design of aircraft is presented. The paper discusses the perceived problem with existing design studies and introduces the concepts of design and knowledge for a Knowledge-Based design system. A review of several Knowledge-Based design activities is provided. A Virtual Reality, Knowledge-Based system is proposed and reviewed. The feasibility of Virtual Reality to improve the efficiency and effectiveness of aerodynamic and multidisciplinary design, evaluation, and analysis of aircraft through the coupling of virtual reality technology and a Knowledge-Based design system is also reviewed. The final section of the paper discusses future directions for design and the role of Knowledge-Based design.

  15. Resolving intermediates in biological proton-coupled electron transfer: A tyrosyl radical prior to proton movement

    PubMed Central

    Faller, Peter; Goussias, Charilaos; Rutherford, A. William; Un, Sun

    2003-01-01

    The coupling of proton chemistry with redox reactions is important in many enzymes and is central to energy transduction in biology. However, the mechanistic details are poorly understood. Here, we have studied tyrosine oxidation, a reaction in which the removal of one electron from the amino acid is linked to the release of its phenolic proton. Using the unique photochemical properties of photosystem II, it was possible to oxidize the tyrosine at 1.8 K, a temperature at which proton and protein motions are limited. The state formed was detected by high magnetic field EPR as a high-energy radical intermediate trapped in an unprecedentedly electropositive environment. Warming of the protein allows this state to convert to a relaxed, stable form of the radical. The relaxation event occurs at 77 K and seems to involve proton migration and only a very limited movement of the protein. These reactions represent a stabilization process that prevents the back-reaction and determines the reactivity of the radical. PMID:12855767

  16. Resolving intermediates in biological proton-coupled electron transfer: a tyrosyl radical prior to proton movement.

    PubMed

    Faller, Peter; Goussias, Charilaos; Rutherford, A William; Un, Sun

    2003-07-22

    The coupling of proton chemistry with redox reactions is important in many enzymes and is central to energy transduction in biology. However, the mechanistic details are poorly understood. Here, we have studied tyrosine oxidation, a reaction in which the removal of one electron from the amino acid is linked to the release of its phenolic proton. Using the unique photochemical properties of photosystem II, it was possible to oxidize the tyrosine at 1.8 K, a temperature at which proton and protein motions are limited. The state formed was detected by high magnetic field EPR as a high-energy radical intermediate trapped in an unprecedentedly electropositive environment. Warming of the protein allows this state to convert to a relaxed, stable form of the radical. The relaxation event occurs at 77 K and seems to involve proton migration and only a very limited movement of the protein. These reactions represent a stabilization process that prevents the back-reaction and determines the reactivity of the radical. PMID:12855767

  17. IGENPRO knowledge-based operator support system.

    SciTech Connect

    Morman, J. A.

    1998-07-01

Research and development is being performed on the knowledge-based IGENPRO operator support package for plant transient diagnostics and management to provide operator assistance during off-normal plant transient conditions. A generic thermal-hydraulic (T-H) first-principles approach is being implemented using automated reasoning, artificial neural networks and fuzzy logic to produce a generic T-H system-independent/plant-independent package. The IGENPRO package has a modular structure composed of three modules: the transient trend analysis module PROTREN, the process diagnostics module PRODIAG and the process management module PROMANA. Cooperative research and development work has focused on the PRODIAG diagnostic module of the IGENPRO package and the operator training matrix of transients used at the Braidwood Pressurized Water Reactor station. Promising simulator testing results with PRODIAG have been obtained for the Braidwood Chemical and Volume Control System (CVCS) and the Component Cooling Water System. Initial CVCS test results have also been obtained for the PROTREN module. The PROMANA effort also involves the CVCS. Future work will focus on long-term, slow and mild degradation transients, where diagnosis of incipient T-H component failure prior to forced-outage events is required. This will enhance the capability of the IGENPRO system as a predictive maintenance tool for plant staff and operator support.

  18. A knowledge base browser using hypermedia

    NASA Technical Reports Server (NTRS)

    Pocklington, Tony; Wang, Lui

    1990-01-01

A hypermedia system is being developed to browse CLIPS (C Language Integrated Production System) knowledge bases. This system will be used to help train flight controllers for the Mission Control Center. Browsing this knowledge base will be accomplished either by navigating through the various collection nodes that have already been defined or through query languages.

  19. Novel joint TOA/RSSI-based WCE location tracking method without prior knowledge of biological human body tissues.

    PubMed

    Ito, Takahiro; Anzai, Daisuke; Jianqing Wang

    2014-01-01

This paper proposes a novel joint time of arrival (TOA)/received signal strength indicator (RSSI)-based wireless capsule endoscope (WCE) location tracking method that requires no prior knowledge of biological human tissues. Generally, TOA-based localization can achieve much higher accuracy than other radio-frequency-based localization techniques; however, wireless signals transmitted from a WCE pass through various kinds of human body tissues, so the propagation velocity inside a human body differs from that in free space. Because this variation in propagation velocity is mainly governed by the relative permittivity of human body tissues, instead of measuring the relative permittivity in advance we simultaneously estimate both the WCE location and the relative permittivity. For this purpose, this paper first derives a relative permittivity estimation model from measured RSSI information. We then apply a particle filter algorithm that combines TOA-based localization with RSSI-based relative permittivity estimation. Our computer simulation results demonstrate that the proposed tracking method with the particle filter can accomplish an excellent localization accuracy of around 2 mm without prior information on the relative permittivity of the human body tissues. PMID:25571605
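The core idea, letting particles carry both the capsule position and the unknown tissue permittivity, can be illustrated with a heavily simplified sketch. Everything below is an assumption made for the illustration: the geometry is reduced to 1-D, the two sensor positions, noise figures, and jitter parameters are invented, and only TOA is used, whereas the paper's method also fuses RSSI; no claim is made to the paper's 2 mm accuracy.

```python
# Toy joint (position, permittivity) particle filter driven by TOA
# measurements from two on-body sensors. In tissue, propagation slows
# by a factor sqrt(eps_r), so TOA = distance * sqrt(eps_r) / c.
import math
import random

random.seed(1)
C = 3e8                        # free-space propagation speed, m/s
SENSORS = [0.0, 0.3]           # sensor positions along a line, m
TRUE_X, TRUE_EPS = 0.12, 50.0  # hidden capsule position and permittivity
SIGMA_T = 1e-10                # TOA measurement noise, s

def toa(x, eps, sensor):
    """In-body time of arrival for a capsule at x with permittivity eps."""
    return abs(x - sensor) * math.sqrt(eps) / C

# Simulated noisy TOA measurements, one per sensor.
meas = [toa(TRUE_X, TRUE_EPS, s) + random.gauss(0, SIGMA_T) for s in SENSORS]

# Particles carry (position, relative permittivity) jointly.
N = 5000
particles = [(random.uniform(0.0, 0.3), random.uniform(20.0, 80.0))
             for _ in range(N)]

for _ in range(10):
    # Log-likelihood of each particle given all TOA measurements.
    lls = [sum(-((toa(x, eps, s) - m) ** 2) / (2 * SIGMA_T ** 2)
               for s, m in zip(SENSORS, meas))
           for x, eps in particles]
    top = max(lls)
    weights = [math.exp(ll - top) for ll in lls]  # subtract max: no underflow
    # Resample, then jitter ("roughening") to keep particle diversity.
    particles = [(x + random.gauss(0, 0.002), eps + random.gauss(0, 0.5))
                 for x, eps in random.choices(particles, weights=weights, k=N)]

est_x = sum(x for x, _ in particles) / N
est_eps = sum(e for _, e in particles) / N
```

With two sensors the position and permittivity become jointly identifiable (the ratio of the two TOAs fixes the position, the magnitude fixes the permittivity), which is why the filter can estimate both at once.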

  20. A knowledge-based clustering algorithm driven by Gene Ontology.

    PubMed

    Cheng, Jill; Cline, Melissa; Martin, John; Finkelstein, David; Awad, Tarif; Kulp, David; Siani-Rose, Michael A

    2004-08-01

    We have developed an algorithm for inferring the degree of similarity between genes by using the graph-based structure of Gene Ontology (GO). We applied this knowledge-based similarity metric to a clique-finding algorithm for detecting sets of related genes with biological classifications. We also combined it with an expression-based distance metric to produce a co-cluster analysis, which accentuates genes with both similar expression profiles and similar biological characteristics and identifies gene clusters that are more stable and biologically meaningful. These algorithms are demonstrated in the analysis of MPRO cell differentiation time series experiments. PMID:15468759
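The ancestor-based similarity idea can be illustrated with a toy metric: expand each gene's GO annotations to all ancestor terms in the DAG and take the Jaccard overlap of the expanded sets. The miniature ontology and the choice of Jaccard are assumptions made for the sketch; the paper's graph-based metric differs in detail.

```python
# Toy Gene Ontology fragment: child -> parents in a miniature DAG.
TERM_PARENTS = {
    "GO:c1": ["GO:b1"],
    "GO:c2": ["GO:b1"],
    "GO:c3": ["GO:b2"],
    "GO:b1": ["GO:a"],
    "GO:b2": ["GO:a"],
    "GO:a": [],
}

def ancestors(term):
    """All ancestors of a term in the DAG, including the term itself."""
    seen = {term}
    stack = [term]
    while stack:
        for parent in TERM_PARENTS[stack.pop()]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

def gene_similarity(terms_a, terms_b):
    """Jaccard overlap of ancestor-expanded annotation sets."""
    ea = set().union(*(ancestors(t) for t in terms_a))
    eb = set().union(*(ancestors(t) for t in terms_b))
    return len(ea & eb) / len(ea | eb)

# Genes annotated with sibling terms score higher than genes
# annotated in a different branch of the DAG.
sim_close = gene_similarity(["GO:c1"], ["GO:c2"])  # share GO:b1 and GO:a
sim_far = gene_similarity(["GO:c1"], ["GO:c3"])    # share only GO:a
```

A pairwise metric like this is what a clique-finding step can then threshold to detect sets of mutually related genes.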

  1. The Coming of Knowledge-Based Business.

    ERIC Educational Resources Information Center

    Davis, Stan; Botkin, Jim

    1994-01-01

    Economic growth will come from knowledge-based businesses whose "smart" products filter and interpret information. Businesses will come to think of themselves as educators and their customers as learners. (SK)

  2. Updating knowledge bases with disjunctive information

    SciTech Connect

    Zhang, Yan; Foo, Norman Y.

    1996-12-31

It is well known that the minimal change principle is widely used in knowledge base updates. However, recent research has shown that conventional minimal change methods, e.g. the PMA, are generally problematic for updating knowledge bases with disjunctive information. In this paper, we propose two different approaches to deal with this problem: one is called minimal change with exceptions (MCE), and the other is called minimal change with maximal disjunctive inclusions (MCD). The first method is syntax-based, while the second is model-theoretic. We show that these two approaches are equivalent for propositional knowledge base updates, and that the second method is also appropriate for first-order knowledge base updates. We then prove that our new update approaches still satisfy the standard Katsuno and Mendelzon update postulates.

  3. Knowledge Based Systems and Metacognition in Radar

    NASA Astrophysics Data System (ADS)

    Capraro, Gerard T.; Wicks, Michael C.

An airborne ground-looking radar sensor's performance may be enhanced by selecting algorithms adaptively as the environment changes. A short description of an airborne intelligent radar system (AIRS) is presented, with a description of the knowledge-based filter and detection portions. A second level of artificial intelligence (AI) processing is presented that monitors, tests, and learns how to improve and control the first level. This approach is based upon metacognition, a way forward for developing knowledge-based systems.

  4. Methodology for testing and validating knowledge bases

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains; (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases; and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  5. Distributed, cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

Some current research in the development and application of distributed, cooperating knowledge-based systems technology is addressed. The focus of the current research is the spacecraft ground operations environment. The underlying hypothesis is that, because of the increasing size, complexity, and cost of planned systems, conventional procedural approaches to the architecture of automated systems will give way to a more comprehensive knowledge-based approach. A hallmark of these future systems will be the integration of multiple knowledge-based agents which understand the operational goals of the system and cooperate with each other and the humans in the loop to attain the goals. The current work includes the development of a reference model for knowledge-base management, the development of a formal model of cooperating knowledge-based agents, the use of a testbed for prototyping and evaluating various knowledge-based concepts, and beginning work on the establishment of an object-oriented model of an intelligent end-to-end (spacecraft to user) system. An introductory discussion of these activities is presented, the major concepts and principles being investigated are highlighted, and their potential use in other application domains is indicated.

  6. Patient Dependency Knowledge-Based Systems.

    PubMed

    Soliman, F

    1998-10-01

    The ability of Patient Dependency Systems to provide information for staffing decisions and budgetary development has been demonstrated. In addition, they have become powerful tools in modern hospital management. This growing interest in Patient Dependency Systems has renewed calls for their automation. As advances in Information Technology and in particular Knowledge-Based Engineering reach new heights, hospitals can no longer afford to ignore the potential benefits obtainable from developing and implementing Patient Dependency Knowledge-Based Systems. Experience has shown that the vast majority of decisions and rules used in the Patient Dependency method are too complex to capture in the form of a traditional programming language. Furthermore, the conventional Patient Dependency Information System automates the simple and rigid bookkeeping functions. On the other hand Knowledge-Based Systems automate complex decision making and judgmental processes and therefore are the appropriate technology for automating the Patient Dependency method. In this paper a new technique to automate Patient Dependency Systems using knowledge processing is presented. In this approach all Patient Dependency factors have been translated into a set of Decision Rules suitable for use in a Knowledge-Based System. The system is capable of providing the decision-maker with a number of scenarios and their possible outcomes. This paper also presents the development of Patient Dependency Knowledge-Based Systems, which can be used in allocating and evaluating resources and nursing staff in hospitals on the basis of patients' needs. PMID:9809275

  7. Decision support using causation knowledge base

    SciTech Connect

    Nakamura, K.; Iwai, S.; Sawaragi, T.

    1982-11-01

A decision support system using a knowledge base of documentary data is presented. Causal assertions in documents are extracted and organized into cognitive maps, which are networks of causal relations, by the methodology of documentary coding. The knowledge base is constructed by joining the cognitive maps of several documents concerned with a complex societal problem. The knowledge base integrates the expertise described in the documents; although it is concerned only with the causal structure of the problem, it includes both overall and detailed information about it. Decisionmakers concerned with the problem interactively retrieve relevant information from the knowledge base in the process of decisionmaking and form their overall and detailed understanding of the complex problem based on the expertise stored in the knowledge base. Three retrieval modes are proposed according to the types of decisionmakers' requests: (1) skeleton maps indicate the overall causal structure of the problem; (2) hierarchical graphs give detailed information about parts of the causal structure; and (3) sources of causal relations are presented when necessary, for example when the decisionmaker wants to browse the causal assertions in documents. 10 references.
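The cognitive-map idea can be sketched as a signed directed graph from which causal chains are retrieved, loosely corresponding to the retrieval modes above. The concepts and causal links below are invented for the illustration.

```python
# A cognitive map as a signed directed graph of causal assertions,
# merged from several (hypothetical) documents. "+" means the source
# increases the target; "-" means it decreases it.
CAUSAL_MAP = {
    ("traffic", "air pollution"): "+",
    ("public transit", "traffic"): "-",
    ("air pollution", "respiratory illness"): "+",
    ("fuel tax", "traffic"): "-",
}

def successors(node):
    """Direct causal successors of a concept, with link signs."""
    return [(b, s) for (a, b), s in CAUSAL_MAP.items() if a == node]

def causal_chains(src, dst, path=None):
    """All acyclic causal paths from src to dst, as node lists."""
    path = path or [src]
    chains = []
    for nxt, _sign in successors(path[-1]):
        if nxt in path:
            continue
        new = path + [nxt]
        chains.extend([new] if nxt == dst else causal_chains(src, dst, new))
    return chains

def chain_sign(chain):
    """Net effect along a chain: +1 (reinforcing) or -1 (mitigating)."""
    sign = 1
    for a, b in zip(chain, chain[1:]):
        sign *= 1 if CAUSAL_MAP[(a, b)] == "+" else -1
    return sign

chains = causal_chains("public transit", "respiratory illness")
# One chain: transit -> traffic -> air pollution -> illness,
# with net sign -1 (transit mitigates illness along this path).
```

Path enumeration like this is the "hierarchical graph" view; restricting the graph to high-level concepts would give the skeleton-map view.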

  8. Knowledge-based diagnosis for aerospace systems

    NASA Technical Reports Server (NTRS)

    Atkinson, David J.

    1988-01-01

    The need for automated diagnosis in aerospace systems and the approach of using knowledge-based systems are examined. Research issues in knowledge-based diagnosis which are important for aerospace applications are treated along with a review of recent relevant research developments in Artificial Intelligence. The design and operation of some existing knowledge-based diagnosis systems are described. The systems described and compared include the LES expert system for liquid oxygen loading at NASA Kennedy Space Center, the FAITH diagnosis system developed at the Jet Propulsion Laboratory, the PES procedural expert system developed at SRI International, the CSRL approach developed at Ohio State University, the StarPlan system developed by Ford Aerospace, the IDM integrated diagnostic model, and the DRAPhys diagnostic system developed at NASA Langley Research Center.

  9. Knowledge-based flow field zoning

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

Automating flow field zoning in two dimensions is an important step toward easing the three-dimensional grid-generation bottleneck in computational fluid dynamics. A knowledge-based approach works well, but certain aspects of flow field zoning make the use of such an approach challenging. A knowledge-based flow field zoner, called EZGrid, was implemented and tested on representative two-dimensional aerodynamic configurations. Results are shown which illustrate the way in which EZGrid incorporates the effects of physics, shape description, position, and user bias in flow field zoning.

  10. Knowledge-based commodity distribution planning

    NASA Technical Reports Server (NTRS)

    Saks, Victor; Johnson, Ivan

    1994-01-01

    This paper presents an overview of a Decision Support System (DSS) that incorporates Knowledge-Based (KB) and commercial off the shelf (COTS) technology components. The Knowledge-Based Logistics Planning Shell (KBLPS) is a state-of-the-art DSS with an interactive map-oriented graphics user interface and powerful underlying planning algorithms. KBLPS was designed and implemented to support skilled Army logisticians to prepare and evaluate logistics plans rapidly, in order to support corps-level battle scenarios. KBLPS represents a substantial advance in graphical interactive planning tools, with the inclusion of intelligent planning algorithms that provide a powerful adjunct to the planning skills of commodity distribution planners.

  11. Knowledge-based Autonomous Test Engineer (KATE)

    NASA Technical Reports Server (NTRS)

    Parrish, Carrie L.; Brown, Barbara L.

    1991-01-01

    Mathematical models of system components have long been used to allow simulators to predict system behavior to various stimuli. Recent efforts to monitor, diagnose, and control real-time systems using component models have experienced similar success. NASA Kennedy is continuing the development of a tool for implementing real-time knowledge-based diagnostic and control systems called KATE (Knowledge based Autonomous Test Engineer). KATE is a model-based reasoning shell designed to provide autonomous control, monitoring, fault detection, and diagnostics for complex engineering systems by applying its reasoning techniques to an exchangeable quantitative model describing the structure and function of the various system components and their systemic behavior.

  12. Knowledge-Based Learning: Integration of Deductive and Inductive Learning for Knowledge Base Completion.

    ERIC Educational Resources Information Center

    Whitehall, Bradley Lane

    In constructing a knowledge-based system, the knowledge engineer must convert rules of thumb provided by the domain expert and previously solved examples into a working system. Research in machine learning has produced algorithms that create rules for knowledge-based systems, but these algorithms require either many examples or a complete domain…

  13. The Knowledge Bases of the Expert Teacher.

    ERIC Educational Resources Information Center

    Turner-Bisset, Rosie

    1999-01-01

    Presents a model for knowledge bases for teaching that will act as a mental map for understanding the complexity of teachers' professional knowledge. Describes the sources and evolution of the model, explains how the model functions in practice, and provides an illustration using an example of teaching in history. (CMK)

  14. Ethics, Inclusiveness, and the UCEA Knowledge Base.

    ERIC Educational Resources Information Center

    Strike, Kenneth A.

    1995-01-01

    Accepts most of Bull and McCarthy's rejection of the ethical boundary thesis in this same "EAQ" issue. Reinterprets their argument, using a three-part model of administrative knowledge. Any project for constructing an educational administration knowledge base is suspect, since little "pure" empirical and instrumental knowledge will be confirmed by…

  15. The adverse outcome pathway knowledge base

    EPA Science Inventory

    The rapid advancement of the Adverse Outcome Pathway (AOP) framework has been paralleled by the development of tools to store, analyse, and explore AOPs. The AOP Knowledge Base (AOP-KB) project has brought three independently developed platforms (Effectopedia, AOP-Wiki, and AOP-X...

  16. Improving the Knowledge Base in Teacher Education.

    ERIC Educational Resources Information Center

    Rockler, Michael J.

    Education in the United States for most of the last 50 years has built its knowledge base on a single dominating foundation--behavioral psychology. This paper analyzes the history of behaviorism. Syntheses are presented of the theories of Ivan P. Pavlov, J. B. Watson, and B. F. Skinner, all of whom contributed to the body of works on behaviorism.…

  17. Knowledge-based machine indexing from natural language text: Knowledge base design, development, and maintenance

    NASA Technical Reports Server (NTRS)

    Genuardi, Michael T.

    1993-01-01

    One strategy for machine-aided indexing (MAI) is to provide a concept-level analysis of the textual elements of documents or document abstracts. In such systems, natural-language phrases are analyzed in order to identify and classify concepts related to a particular subject domain. The overall performance of these MAI systems is largely dependent on the quality and comprehensiveness of their knowledge bases. These knowledge bases function to (1) define the relations between a controlled indexing vocabulary and natural language expressions; (2) provide a simple mechanism for disambiguation and the determination of relevancy; and (3) allow the extension of concept-hierarchical structure to all elements of the knowledge file. After a brief description of the NASA Machine-Aided Indexing system, concerns related to the development and maintenance of MAI knowledge bases are discussed. Particular emphasis is given to statistically-based text analysis tools designed to aid the knowledge base developer. One such tool, the Knowledge Base Building (KBB) program, presents the domain expert with a well-filtered list of synonyms and conceptually-related phrases for each thesaurus concept. Another tool, the Knowledge Base Maintenance (KBM) program, functions to identify areas of the knowledge base affected by changes in the conceptual domain (for example, the addition of a new thesaurus term). An alternate use of the KBM as an aid in thesaurus construction is also discussed.

  18. Bridging the gap: simulations meet knowledge bases

    NASA Astrophysics Data System (ADS)

    King, Gary W.; Morrison, Clayton T.; Westbrook, David L.; Cohen, Paul R.

    2003-09-01

    Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Actions (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge engineers making it an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.

  19. The importance of knowledge-based technology.

    PubMed

    Cipriano, Pamela F

    2012-01-01

    Nurse executives are responsible for a workforce that can provide safer and more efficient care in a complex sociotechnical environment. National quality priorities rely on technologies to provide data collection, share information, and leverage analytic capabilities to interpret findings and inform approaches to care that will achieve better outcomes. As a key steward for quality, the nurse executive exercises leadership to provide the infrastructure to build and manage nursing knowledge and instill accountability for following evidence-based practices. These actions contribute to a learning health system where new knowledge is captured as a by-product of care delivery enabled by knowledge-based electronic systems. The learning health system also relies on rigorous scientific evidence embedded into practice at the point of care. The nurse executive optimizes use of knowledge-based technologies, integrated throughout the organization, that have the capacity to help transform health care. PMID:22407206

  20. Knowledge-based system for computer security

    SciTech Connect

    Hunteman, W.J.

    1988-01-01

    The rapid expansion of computer security information and technology has provided little support for the security officer to identify and implement the safeguards needed to secure a computing system. The Department of Energy Center for Computer Security is developing a knowledge-based computer security system to provide expert knowledge to the security officer. The system is policy-based and incorporates a comprehensive list of system attack scenarios and safeguards that implement the required policy while defending against the attacks. 10 figs.

  1. Clips as a knowledge based language

    NASA Technical Reports Server (NTRS)

    Harrington, James B.

    1987-01-01

    CLIPS is a language for writing expert systems applications on a personal or small computer. Here, the CLIPS programming language is described and compared to three other artificial intelligence (AI) languages (LISP, Prolog, and OPS5) with regard to the processing they provide for the implementation of a knowledge based system (KBS). A discussion is given on how CLIPS would be used in a control system.

  2. Satellite Contamination and Materials Outgassing Knowledge base

    NASA Technical Reports Server (NTRS)

    Minor, Jody L.; Kauffman, William J. (Technical Monitor)

    2001-01-01

Satellite contamination continues to be a design problem that engineers must take into account when developing new satellites. To help with this issue, NASA's Space Environments and Effects (SEE) Program funded the development of the Satellite Contamination and Materials Outgassing Knowledge base. This engineering tool brings together in one location information about the outgassing properties of aerospace materials based upon ground-testing data, the effects of outgassing that have been observed during flight, and measurements of the contamination environment by on-orbit instruments. The knowledge base contains information using the ASTM Standard E-1559 and also consolidates data from missions using quartz-crystal microbalances (QCMs). The data contained in the knowledge base were shared with NASA by government agencies and industry in the US and by international space agencies as well. The term 'knowledgebase' was used because so much information and capability was brought together in one comprehensive engineering design tool. It is the SEE Program's intent to continually add material contamination data as it becomes available, creating a dynamic tool whose value to the user is ever increasing. The SEE Program firmly believes that NASA, and ultimately the entire contamination user community, will greatly benefit from this new engineering tool, and it highly encourages the community not only to use the tool but to add data to it as well.

  3. Presentation planning using an integrated knowledge base

    NASA Technical Reports Server (NTRS)

    Arens, Yigal; Miller, Lawrence; Sondheimer, Norman

    1988-01-01

    A description is given of user interface research aimed at bringing together multiple input and output modes in a way that handles mixed mode input (commands, menus, forms, natural language), interacts with a diverse collection of underlying software utilities in a uniform way, and presents the results through a combination of output modes including natural language text, maps, charts and graphs. The system, Integrated Interfaces, derives much of its ability to interact uniformly with the user and the underlying services, and to build its presentations, from the information present in a central knowledge base. This knowledge base integrates models of the application domain (Navy ships in the Pacific region, in the current demonstration version); the structure of visual displays and their graphical features; the underlying services (databases and expert systems); and interface functions. The emphasis is on a presentation planner that uses the knowledge base to produce multi-modal output. There has been a flurry of recent work in user interface management systems (several recent examples are listed in the references). Existing work is characterized by an attempt to relieve the software designer of the burden of handcrafting an interface for each application. The work has generally focused on intelligently handling input. This paper deals with the other end of the pipeline - presentations.

  4. Knowledge-Based Query Construction Using the CDSS Knowledge Base for Efficient Evidence Retrieval.

    PubMed

    Afzal, Muhammad; Hussain, Maqbool; Ali, Taqdir; Hussain, Jamil; Khan, Wajahat Ali; Lee, Sungyoung; Kang, Byeong Ho

    2015-01-01

    Finding appropriate evidence to support clinical practices is always challenging, and the construction of a query to retrieve such evidence is a fundamental step. Typically, evidence is found using manual or semi-automatic methods, which are time-consuming and sometimes make it difficult to construct knowledge-based complex queries. To overcome the difficulty in constructing knowledge-based complex queries, we utilized the knowledge base (KB) of the clinical decision support system (CDSS), which has the potential to provide sufficient contextual information. To automatically construct knowledge-based complex queries, we designed methods to parse rule structure in KB of CDSS in order to determine an executable path and extract the terms by parsing the control structures and logic connectives used in the logic. The automatically constructed knowledge-based complex queries were executed on the PubMed search service to evaluate the results on the reduction of retrieved citations with high relevance. The average number of citations was reduced from 56,249 citations to 330 citations with the knowledge-based query construction approach, and relevance increased from 1 term to 6 terms on average. The ability to automatically retrieve relevant evidence maximizes efficiency for clinicians in terms of time, based on feedback collected from clinicians. This approach is generally useful in evidence-based medicine, especially in ambient assisted living environments where automation is highly important. PMID:26343669
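
The pipeline the abstract describes (parse rule logic in the CDSS KB, extract terms, join them into a narrowing search query) might be sketched roughly as follows; the rule encoding, field names, and clinical terms here are all hypothetical, not the authors' actual KB representation:

```python
# Hypothetical sketch of knowledge-based query construction: pull terms out
# of a toy if-then rule and AND-join them into a PubMed-style boolean query.

def extract_terms(rule):
    """Collect condition and conclusion terms from a toy rule structure."""
    terms = [cond["term"] for cond in rule["if"]]
    terms.append(rule["then"]["term"])
    return terms

def build_query(terms):
    """AND-join the quoted terms; more terms means fewer, more relevant hits."""
    return " AND ".join(f'"{t}"' for t in terms)

# Invented CDSS rule; a real KB would use a richer logic representation.
rule = {
    "if": [{"term": "type 2 diabetes"}, {"term": "metformin"}],
    "then": {"term": "HbA1c reduction"},
}
query = build_query(extract_terms(rule))
```

Executing such a multi-term conjunctive query against a search service is what drives the citation-count reduction reported above.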

  6. Empirical Analysis and Refinement of Expert System Knowledge Bases

    PubMed Central

    Weiss, Sholom M.; Politakis, Peter; Ginsberg, Allen

    1986-01-01

    Recent progress in knowledge base refinement for expert systems is reviewed. Knowledge base refinement is characterized by the constrained modification of rule-components in an existing knowledge base. The goals are to localize specific weaknesses in a knowledge base and to improve an expert system's performance. Systems that automate some aspects of knowledge base refinement can have a significant impact on the related problems of knowledge base acquisition, maintenance, verification, and learning from experience. The SEEK empirical analysis and refinement system is reviewed and its successor system, SEEK2, is introduced. Important areas for future research in knowledge base refinement are described.

  7. Developing a kidney and urinary pathway knowledge base

    PubMed Central

    2011-01-01

    Background Chronic renal disease is a global health problem. The identification of suitable biomarkers could facilitate early detection and diagnosis and allow better understanding of the underlying pathology. One of the challenges in meeting this goal is the necessary integration of experimental results from multiple biological levels for further analysis by data mining. Data integration in the life sciences is still a struggle, and many groups are looking to the benefits promised by the Semantic Web for data integration. Results We present a Semantic Web approach to developing a knowledge base that integrates data from high-throughput experiments on kidney and urine. A specialised KUP ontology is used to tie the various layers together, whilst background knowledge from external databases is incorporated by conversion into RDF. Using SPARQL as a query mechanism, we are able to query for proteins expressed in urine and place these back into the context of genes expressed in regions of the kidney. Conclusions The KUPKB gives KUP biologists the means to ask queries across many resources in order to aggregate knowledge that is necessary for answering biological questions. The Semantic Web technologies we use, together with the background knowledge from the domain’s ontologies, allow both rapid conversion and integration of this knowledge base. The KUPKB is still relatively small, and questions remain about scalability, maintenance and availability of the knowledge itself. Availability The KUPKB may be accessed via http://www.e-lico.eu/kupkb. PMID:21624162
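
The query pattern described here (find proteins expressed in urine, then relate them back to kidney regions via their genes) is a SPARQL-style join over RDF triples. A toy version over in-memory triples, with invented identifiers and predicate names, might look like:

```python
# Toy triple-pattern join in the spirit of the KUPKB's SPARQL queries.
# All subjects, predicates, and objects below are invented for illustration.

triples = {
    ("P12345", "expressedIn", "urine"),
    ("P12345", "encodedBy", "GENE1"),
    ("GENE1", "expressedIn", "glomerulus"),
    ("P67890", "expressedIn", "plasma"),
}

def match(pattern):
    """Return triples matching a single (s, p, o) pattern; None is a wildcard."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# Proteins found in urine, joined to the kidney regions of their genes.
urine_proteins = [t[0] for t in match((None, "expressedIn", "urine"))]
regions = [g[2] for p in urine_proteins
           for _, _, gene in match((p, "encodedBy", None))
           for g in match((gene, "expressedIn", None))]
```

A real SPARQL engine performs the same chained pattern matching, but over a persistent RDF store and with query optimization.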

  8. Formative Assessment Pre-Test to Identify College Students' Prior Knowledge, Misconceptions and Learning Difficulties in Biology

    ERIC Educational Resources Information Center

    Lazarowitz, Reuven; Lieb, Carl

    2006-01-01

    A formative assessment pretest was administered to undergraduate students at the beginning of a science course in order to find out their prior knowledge, misconceptions and learning difficulties on the topic of the human respiratory system and energy issues. Those findings could provide their instructors with the valuable information required in…

  9. PharmGKB: the Pharmacogenomics Knowledge Base.

    PubMed

    Thorn, Caroline F; Klein, Teri E; Altman, Russ B

    2013-01-01

    The Pharmacogenomics Knowledge Base, PharmGKB, is an interactive tool for researchers investigating how genetic variation affects drug response. The PharmGKB Web site, http://www.pharmgkb.org, displays genotype, molecular, and clinical knowledge integrated into pathway representations and Very Important Pharmacogene (VIP) summaries with links to additional external resources. Users can search and browse the knowledgebase by genes, variants, drugs, diseases, and pathways. Registration is free to the entire research community, but subject to agreement to use for research purposes only and not to redistribute. Registered users can access and download data to aid in the design of future pharmacogenetics and pharmacogenomics studies. PMID:23824865

  10. Knowledge-based systems in Japan

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward; Engelmore, Robert S.; Friedland, Peter E.; Johnson, Bruce B.; Nii, H. Penny; Schorr, Herbert; Shrobe, Howard

    1994-01-01

    This report summarizes a study of the state-of-the-art in knowledge-based systems technology in Japan, organized by the Japanese Technology Evaluation Center (JTEC) under the sponsorship of the National Science Foundation and the Advanced Research Projects Agency. The panel visited 19 Japanese sites in March 1992. Based on these site visits plus other interactions with Japanese organizations, both before and after the site visits, the panel prepared a draft final report. JTEC sent the draft to the host organizations for their review. The final report was published in May 1993.

  11. Knowledge Based Understanding of Radiology Text

    PubMed Central

    Ranum, David L.

    1988-01-01

    A data acquisition tool which will extract pertinent diagnostic information from radiology reports has been designed and implemented. Pertinent diagnostic information is defined as that clinical data which is used by the HELP medical expert system. The program uses a memory based semantic parsing technique to “understand” the text. Moreover, the memory structures and lexicon necessary to perform this action are automatically generated from the diagnostic knowledge base by using a special purpose compiler. The result is a system where data extraction from free text is directed by an expert system whose goal is diagnosis.

  13. Knowledge-based simulation for aerospace systems

    NASA Technical Reports Server (NTRS)

    Will, Ralph W.; Sliwa, Nancy E.; Harrison, F. Wallace, Jr.

    1988-01-01

    Knowledge-based techniques, which offer many features that are desirable in the simulation and development of aerospace vehicle operations, exhibit many similarities to traditional simulation packages. The eventual solution of these systems' current symbolic processing/numeric processing interface problem will lead to continuous and discrete-event simulation capabilities in a single language, such as TS-PROLOG. Qualitative, totally-symbolic simulation methods are noted to possess several intrinsic characteristics that are especially revelatory of the system being simulated, and that are capable of ensuring that all possible behaviors are considered.

  14. Knowledge-based fragment binding prediction.

    PubMed

    Tang, Grace W; Altman, Russ B

    2014-04-01

    Target-based drug discovery must assess many drug-like compounds for potential activity. Focusing on low-molecular-weight compounds (fragments) can dramatically reduce the chemical search space. However, approaches for determining protein-fragment interactions have limitations. Experimental assays are time-consuming, expensive, and not always applicable. At the same time, computational approaches using physics-based methods have limited accuracy. With increasing high-resolution structural data for protein-ligand complexes, there is now an opportunity for data-driven approaches to fragment binding prediction. We present FragFEATURE, a machine learning approach to predict small molecule fragments preferred by a target protein structure. We first create a knowledge base of protein structural environments annotated with the small molecule substructures they bind. These substructures have low-molecular weight and serve as a proxy for fragments. FragFEATURE then compares the structural environments within a target protein to those in the knowledge base to retrieve statistically preferred fragments. It merges information across diverse ligands with shared substructures to generate predictions. Our results demonstrate FragFEATURE's ability to rediscover fragments corresponding to the ligand bound with 74% precision and 82% recall on average. For many protein targets, it identifies high scoring fragments that are substructures of known inhibitors. FragFEATURE thus predicts fragments that can serve as inputs to fragment-based drug design or serve as refinement criteria for creating target-specific compound libraries for experimental or computational screening. PMID:24762971
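
The core retrieval step described above (compare a target's structural environments to a knowledge base of environments annotated with bound substructures, then return statistically preferred fragments) can be caricatured with feature vectors and a similarity cutoff. This is only a sketch of the data-driven idea, not FragFEATURE itself; the vectors, fragment labels, and cutoff are all invented:

```python
import math

# Caricature of knowledge-based fragment retrieval: structural environments
# are feature vectors annotated with the fragment they bind (values invented).

knowledge_base = [
    ([1.0, 0.0, 0.5], "phenyl"),
    ([0.9, 0.1, 0.4], "phenyl"),
    ([0.0, 1.0, 0.0], "carboxylate"),
]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def preferred_fragments(target_env, cutoff=0.95):
    """Rank fragments by how many similar KB environments bind them."""
    votes = {}
    for env, frag in knowledge_base:
        if cosine(env, target_env) >= cutoff:
            votes[frag] = votes.get(frag, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)

hits = preferred_fragments([1.0, 0.05, 0.45])
```

Merging votes across multiple similar environments is the "merges information across diverse ligands with shared substructures" step in the abstract.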

  15. A knowledge based software engineering environment testbed

    NASA Technical Reports Server (NTRS)

    Gill, C.; Reedy, A.; Baker, L.

    1985-01-01

    The Carnegie Group Incorporated and Boeing Computer Services Company are developing a testbed which will provide a framework for integrating conventional software engineering tools with Artificial Intelligence (AI) tools to promote automation and productivity. The emphasis is on the transfer of AI technology to the software development process. Experiments relate to AI issues such as scaling up, inference, and knowledge representation. In its first year, the project has created a model of software development by representing software activities; developed a module representation formalism to specify the behavior and structure of software objects; integrated the model with the formalism to identify shared representation and inheritance mechanisms; demonstrated object programming by writing procedures and applying them to software objects; used data-directed and goal-directed reasoning to, respectively, infer the cause of bugs and evaluate the appropriateness of a configuration; and demonstrated knowledge-based graphics. Future plans include introduction of knowledge-based systems for rapid prototyping or rescheduling; natural language interfaces; blackboard architecture; and distributed processing.

  16. Removal of Review and Reclassification Procedures for Biological Products Licensed Prior to July 1, 1972. Final rule.

    PubMed

    2016-02-12

    The Food and Drug Administration (FDA, the Agency, or we) is removing two regulations that prescribe procedures for FDA's review and classification of biological products licensed before July 1, 1972. FDA is taking this action because the two regulations are obsolete and no longer necessary in light of other statutory and regulatory authorities established since 1972, which allow FDA to evaluate and monitor the safety and effectiveness of all biological products. In addition, other statutory and regulatory authorities authorize FDA to revoke a license for biological products because they are not safe and effective, or are misbranded. FDA is taking this action as part of its retrospective review of its regulations to promote improvement and innovation. PMID:26878738

  17. Coastal habitats of the Elwha River, Washington- Biological and physical patterns and processes prior to dam removal

    USGS Publications Warehouse

    Duda, Jeffrey J.; Warrick, Jonathan A.; Magirl, Christopher S.

    2011-01-01

    Together, these different scientific perspectives form a basis for understanding the Elwha River ecosystem, an environment that has and will undergo substantial change. A century of change began with the start of dam construction in 1910; additional major change will result from dam removal scheduled to begin in September 2011. This report provides a scientific snapshot of the lower Elwha River, its estuary, and adjacent nearshore ecosystems prior to dam removal that can be used to evaluate the responses and dynamics of various system components following dam removal.

  18. Modeling Guru: Knowledge Base for NASA Modelers

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  19. Coastal and lower Elwha River, Washington, prior to dam removal--history, status, and defining characteristics: Chapter 1 in Coastal habitats of the Elwha River, Washington--biological and physical patterns and processes prior to dam removal

    USGS Publications Warehouse

    Duda, Jeffrey J.; Warrick, Jonathan A.; Magirl, Christopher S.

    2011-01-01

    Characterizing the physical and biological characteristics of the lower Elwha River, its estuary, and adjacent nearshore habitats prior to dam removal is essential to monitor changes to these areas during and following the historic dam-removal project set to begin in September 2011. Based on the size of the two hydroelectric projects and the amount of sediment that will be released, the Elwha River in Washington State will be home to the largest river restoration through dam removal attempted in the United States. Built in 1912 and 1927, respectively, the Elwha and Glines Canyon Dams have altered key physical and biological characteristics of the Elwha River. Once abundant salmon populations, consisting of all five species of Pacific salmon, are restricted to the lower 7.8 river kilometers downstream of Elwha Dam and are currently in low numbers. Dam removal will reopen access to more than 140 km of mainstem, flood plain, and tributary habitat, most of which is protected within Olympic National Park. The high capture rate of river-borne sediments by the two reservoirs has changed the geomorphology of the riverbed downstream of the dams. Mobilization and downstream transport of these accumulated reservoir sediments during and following dam removal will significantly change downstream river reaches, the estuary complex, and the nearshore environment. To introduce the more detailed studies that follow in this report, we summarize many of the key aspects of the Elwha River ecosystem including a regional and historical context for this unprecedented project.

  20. Is pharmacy a knowledge-based profession?

    PubMed

    Waterfield, Jon

    2010-04-12

    An increasingly important question for the pharmacy educator is the relationship between pharmacy knowledge and professionalism. There is a substantial body of literature on the theory of knowledge and it is useful to apply this to the profession of pharmacy. This review examines the types of knowledge and skill used by the pharmacist, with particular reference to tacit knowledge which cannot be codified. This leads into a discussion of practice-based pharmacy knowledge and the link between pharmaceutical science and practice. The final section of the paper considers the challenge of making knowledge work in practice. This includes a discussion of the production of knowledge within the context of application. The theoretical question posed by this review, "Is pharmacy a knowledge-based profession?" highlights challenging areas of debate for the pharmacy educator. PMID:20498743

  1. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  2. NASDA knowledge-based network planning system

    NASA Technical Reports Server (NTRS)

    Yamaya, K.; Fujiwara, M.; Kosugi, S.; Yambe, M.; Ohmori, M.

    1993-01-01

    One of the SODS (space operation and data system) sub-systems, NP (network planning), was the first expert system used by NASDA (National Space Development Agency of Japan) for the tracking and control of satellites. The major responsibilities of the NP system are: first, the allocation of network and satellite control resources and, second, the generation of the network operation plan data (NOP) used in automated control of the stations and control center facilities. Up to now, the first task of network resource scheduling was done by network operators. The NP system automatically generates schedules using its knowledge base, which contains information on satellite orbits, station availability, which computer is dedicated to which satellite, and how many stations must be available for a particular satellite pass or a certain time period. The NP system is introduced.
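
The resource-allocation task described here, assigning satellite passes to available ground stations, can be sketched as a simple greedy scheduler. This is an illustration of the scheduling problem only, not NASDA's actual algorithm; the passes, times, and station names are invented:

```python
# Toy greedy allocation of satellite passes to ground stations (all data
# invented): each pass gets the first station free over its time window.

passes = [  # (satellite, start_hour, end_hour)
    ("SAT-A", 0, 2),
    ("SAT-B", 1, 3),
    ("SAT-C", 4, 5),
]
stations = {"Station-1": [], "Station-2": []}  # station -> booked intervals

def overlaps(a, b):
    """True if two half-open intervals (start, end) intersect."""
    return a[0] < b[1] and b[0] < a[1]

def allocate(passes, stations):
    plan = {}
    for sat, start, end in passes:
        for name, booked in stations.items():
            if all(not overlaps((start, end), iv) for iv in booked):
                booked.append((start, end))  # reserve the station
                plan[sat] = name
                break
    return plan

plan = allocate(passes, stations)
```

A knowledge-based planner like NP layers domain constraints (orbits, dedicated computers, minimum station counts) on top of this basic conflict-free assignment.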

  3. Compilation for critically constrained knowledge bases

    SciTech Connect

    Schrag, R.

    1996-12-31

    We show that many "critically constrained" Random 3SAT knowledge bases (KBs) can be compiled into disjunctive normal form easily by using a variant of the "Davis-Putnam" proof procedure. From these compiled KBs we can answer all queries about entailment of conjunctive normal formulas, also easily - compared to a "brute-force" approach to approximate knowledge compilation into unit clauses for the same KBs. We exploit this fact to develop an aggressive hybrid approach which attempts to compile a KB exactly until a given resource limit is reached, then falls back to approximate compilation into unit clauses. The resulting approach handles all of the critically constrained Random 3SAT KBs with average savings of an order of magnitude over the brute-force approach.
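
A toy version of the compilation idea can be sketched as follows: enumerate the satisfying assignments of a clausal KB with a naive Davis-Putnam-style split, after which clause-entailment queries become simple lookups over the compiled models. This is an exponential illustration of the principle, not the paper's optimized procedure:

```python
# Naive DPLL-style enumeration: compile a CNF KB (lists of signed integer
# literals) into its set of total models, then answer entailment by lookup.

def compile_models(clauses, n_vars):
    """Enumerate all total models of the CNF KB (exponential; toy only)."""
    out = []
    def extend(assignment, var):
        # Prune any branch that already falsifies some clause.
        if any(all(abs(l) in assignment and assignment[abs(l)] != (l > 0)
                   for l in c) for c in clauses):
            return
        if var > n_vars:
            out.append(dict(assignment))
            return
        for val in (True, False):     # Davis-Putnam-style split on var
            assignment[var] = val
            extend(assignment, var + 1)
            del assignment[var]
    extend({}, 1)
    return out

def entails(models_list, clause):
    """The KB entails a clause iff every compiled model satisfies it."""
    return all(any(m[abs(l)] == (l > 0) for l in clause) for m in models_list)

# KB: (x1 or x2) and (not x1 or x2); x2 is forced, x1 is free.
kb_models = compile_models([[1, 2], [-1, 2]], 2)
```

Once the models are compiled, each entailment query costs time linear in the compiled representation, which is the trade the paper exploits.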

  4. Knowledge base rule partitioning design for CLIPS

    NASA Technical Reports Server (NTRS)

    Mainardi, Joseph D.; Szatkowski, G. P.

    1990-01-01

    This paper describes a knowledge base (KB) partitioning approach to solve the problem of real-time performance using the CLIPS AI shell when containing large numbers of rules and facts. This work is funded under the joint USAF/NASA Advanced Launch System (ALS) Program as applied research in expert systems to perform vehicle checkout for real-time controller and diagnostic monitoring tasks. The main objective of the Expert System advanced development project (ADP-2302) is to provide robust systems responding to new data frames of 0.1 to 1.0 second intervals. The intelligent system control must be performed within the specified real-time window in order to meet the demands of the given application. Partitioning the KB reduces the complexity of the inferencing Rete net at any given time. This reduced complexity improves performance without undue impact during load and unload cycles. The second objective is to produce highly reliable intelligent systems. This requires simple and automated approaches to the KB verification & validation task. Partitioning the KB reduces rule interaction complexity overall. Reduced interaction simplifies the V&V testing necessary by focusing attention only on individual areas of interest. Many systems require a robustness that involves a large number of rules, most of which are mutually exclusive under different phases or conditions. The ideal solution is to control the knowledge base by loading rules that directly apply for that condition, while stripping out all rules and facts that are not used during that cycle. The practical approach is to cluster rules and facts into associated 'blocks'. A simple approach has been designed to control the addition and deletion of 'blocks' of rules and facts, while allowing real-time operations to run freely. Timing tests for real-time performance for specific machines under R/T operating systems have not been completed but are planned as part of the analysis process to validate the design.
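
The 'block' load/unload scheme described above can be sketched as a registry of named rule groups swapped in and out of an active set as the vehicle changes phase. The block names and rule identifiers below are hypothetical, and a real implementation would manipulate the shell's rule network rather than a Python set:

```python
# Sketch of KB partitioning: cluster rules into named blocks and keep only
# the blocks for the current phase active, shrinking the active match network.

blocks = {
    "prelaunch": ["check-fuel-pressure", "verify-guidance-align"],
    "ascent": ["monitor-engine-thrust", "watch-max-q"],
    "orbit": ["manage-thermal-roll"],
}

active_rules = set()

def load_block(name):
    """Add a block's rules to the active set."""
    active_rules.update(blocks[name])

def unload_block(name):
    """Strip a block's rules from the active set."""
    active_rules.difference_update(blocks[name])

# Phase transition: retire prelaunch rules, bring in ascent rules.
load_block("prelaunch")
unload_block("prelaunch")
load_block("ascent")
```

Because only the ascent block remains loaded, the matcher (and any V&V effort) only has to consider the rules relevant to the current phase.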

  5. Knowledge-based systems and NASA's software support environment

    NASA Technical Reports Server (NTRS)

    Dugan, Tim; Carmody, Cora; Lennington, Kent; Nelson, Bob

    1990-01-01

    A proposed role for knowledge-based systems within NASA's Software Support Environment (SSE) is described. The SSE is chartered to support all software development for the Space Station Freedom Program (SSFP). This includes support for development of knowledge-based systems and the integration of these systems with conventional software systems. In addition to the support of development of knowledge-based systems, various software development functions provided by the SSE will utilize knowledge-based systems technology.

  6. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium...

  7. Knowledge-based optical system design

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik

    1992-03-01

    This work is a new approach for the design of start optical systems and represents a new contribution of artificial intelligence techniques in the optical design field. A knowledge-based optical-systems design (KBOSD), based on artificial intelligence algorithms, first-order logic, knowledge representation, rules, and heuristics on lens design, is realized. This KBOSD is equipped with optical knowledge in the domain of centered dioptrical optical systems used at low aperture and small field angles. It generates centered dioptrical, on-axis, and low-aperture optical systems, which are used as start systems for subsequent optimization by existing lens design programs. This KBOSD produces monochromatic or polychromatic optical systems, such as singlet lenses, doublet lenses, triplet lenses, reversed singlet lenses, reversed doublet lenses, reversed triplet lenses, and telescopes. In the design of optical systems, the KBOSD takes into account many user constraints such as cost, resistance of the optical material (glass) to chemical, thermal, and mechanical effects, as well as optical quality such as minimal aberrations and chromatic aberration corrections. This KBOSD is developed in the programming language Prolog and has knowledge of optical design principles and optical properties. It is composed of more than 3000 clauses. The inference engine and interconnections in the cognitive world of optical systems are described. The system uses neither a lens library nor a lens database; it is completely based on optical design knowledge.

  8. Knowledge-based approach to system integration

    NASA Technical Reports Server (NTRS)

    Blokland, W.; Krishnamurthy, C.; Biegl, C.; Sztipanovits, J.

    1988-01-01

    To solve complex problems one can often use the decomposition principle. However, a problem is seldom decomposable into completely independent subproblems. System integration deals with the problem of resolving the interdependencies and the integration of the subsolutions. A natural method of decomposition is the hierarchical one. High-level specifications are broken down into lower level specifications until they can be transformed into solutions relatively easily. By automating the hierarchical decomposition and solution generation, an integrated system is obtained in which the declaration of high-level specifications is enough to solve the problem. We offer a knowledge-based approach to integrate the development and building of control systems. The process modeling is supported by using graphic editors. The user selects and connects icons that represent subprocesses and might refer to prewritten programs. The graphical editor assists the user in selecting parameters for each subprocess and allows the testing of a specific configuration. Next, from the definitions created by the graphical editor, the actual control program is built. Fault-diagnosis routines are generated automatically as well. Since the user is not required to write program code and knowledge about the process is present in the development system, the user is not required to have expertise in many fields.

  9. Irrelevance Reasoning in Knowledge Based Systems

    NASA Technical Reports Server (NTRS)

    Levy, A. Y.

    1993-01-01

    This dissertation considers the problem of reasoning about irrelevance of knowledge in a principled and efficient manner. Specifically, it is concerned with two key problems: (1) developing algorithms for automatically deciding what parts of a knowledge base are irrelevant to a query and (2) the utility of relevance reasoning. The dissertation describes a novel tool, the query-tree, for reasoning about irrelevance. Based on the query-tree, we develop several algorithms for deciding what formulas are irrelevant to a query. Our general framework sheds new light on the problem of detecting independence of queries from updates. We present new results that significantly extend previous work in this area. The framework also provides a setting in which to investigate the connection between the notion of irrelevance and the creation of abstractions. We propose a new approach to research on reasoning with abstractions, in which we investigate the properties of an abstraction by considering the irrelevance claims on which it is based. We demonstrate the potential of the approach for the cases of abstraction of predicates and projection of predicate arguments. Finally, we describe an application of relevance reasoning to the domain of modeling physical devices.

  10. Knowledge-based reusable software synthesis system

    NASA Technical Reports Server (NTRS)

    Donaldson, Cammie

    1989-01-01

    The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.

  11. An Ebola virus-centered knowledge base.

    PubMed

    Kamdar, Maulik R; Dumontier, Michel

    2015-01-01

    Ebola virus (EBOV), of the family Filoviridae, is a NIAID Category A lethal human pathogen. It causes Ebola virus disease (EVD), a severe hemorrhagic fever with a cumulative death rate of 41% in the ongoing epidemic in West Africa. There is an ever-increasing need to consolidate and make available all the knowledge that we possess on EBOV, even if it is conflicting or incomplete. This would enable biomedical researchers to understand the molecular mechanisms underlying this disease and help develop tools for efficient diagnosis and effective treatment. In this article, we present our approach for the development of an Ebola virus-centered Knowledge Base (Ebola-KB) using Linked Data and Semantic Web Technologies. We retrieve and aggregate knowledge from several open data sources, web services and biomedical ontologies. This knowledge is transformed to RDF, linked to the Bio2RDF datasets and made available through a SPARQL 1.1 Endpoint. Ebola-KB can also be explored using an interactive Dashboard visualizing the different perspectives of this integrated knowledge. We showcase how different competency questions, asked by domain users researching the druggability of EBOV, can be formulated as SPARQL Queries or answered using the Ebola-KB Dashboard. PMID:26055098
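A competency question such as "which drugs target proteins of a given virus" would be posed to such a SPARQL 1.1 endpoint roughly as follows. This is a hedged sketch: the `ex:` prefix and predicate names are invented placeholders, not the actual Ebola-KB vocabulary, and the endpoint URL would be the knowledge base's own.

```python
import urllib.parse
import urllib.request

def build_query(virus_label: str) -> str:
    """Build an illustrative SPARQL query for drugs targeting a virus's
    proteins. The ex: predicates are hypothetical, not real Ebola-KB terms."""
    return f"""
    PREFIX ex: <http://example.org/vocab#>
    SELECT ?drug ?protein WHERE {{
        ?protein ex:encodedBy ?virus .
        ?virus   ex:label "{virus_label}" .
        ?drug    ex:targets ?protein .
    }}"""

def submit(endpoint: str, query: str) -> bytes:
    """POST a query to a SPARQL 1.1 endpoint, requesting JSON results."""
    data = urllib.parse.urlencode({"query": query}).encode()
    req = urllib.request.Request(
        endpoint, data=data,
        headers={"Accept": "application/sparql-results+json"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

print(build_query("EBOV"))
```

The `submit` helper follows the SPARQL 1.1 Protocol's form-encoded POST convention; against the real endpoint the query would need the knowledge base's actual prefixes and predicates.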

  12. An Ebola virus-centered knowledge base

    PubMed Central

    Kamdar, Maulik R.; Dumontier, Michel

    2015-01-01

    Ebola virus (EBOV), of the family Filoviridae, is a NIAID Category A lethal human pathogen. It causes Ebola virus disease (EVD), a severe hemorrhagic fever with a cumulative death rate of 41% in the ongoing epidemic in West Africa. There is an ever-increasing need to consolidate and make available all the knowledge that we possess on EBOV, even if it is conflicting or incomplete. This would enable biomedical researchers to understand the molecular mechanisms underlying this disease and help develop tools for efficient diagnosis and effective treatment. In this article, we present our approach for the development of an Ebola virus-centered Knowledge Base (Ebola-KB) using Linked Data and Semantic Web Technologies. We retrieve and aggregate knowledge from several open data sources, web services and biomedical ontologies. This knowledge is transformed to RDF, linked to the Bio2RDF datasets and made available through a SPARQL 1.1 Endpoint. Ebola-KB can also be explored using an interactive Dashboard visualizing the different perspectives of this integrated knowledge. We showcase how different competency questions, asked by domain users researching the druggability of EBOV, can be formulated as SPARQL Queries or answered using the Ebola-KB Dashboard. Database URL: http://ebola.semanticscience.org. PMID:26055098

  13. Sensitive Determination of Terazosin in Pharmaceutical Formulations and Biological Samples by Ionic-Liquid Microextraction Prior to Spectrofluorimetry

    PubMed Central

    Zeeb, Mohsen; Sadeghi, Mahdi

    2012-01-01

    An efficient and environmentally friendly sample preparation method based on the application of the hydrophobic ionic liquid (IL) 1-hexylpyridinium hexafluorophosphate [Hpy][PF6] as a microextraction solvent was proposed to preconcentrate terazosin. The performance of the microextraction method was improved by introducing a common ion of the pyridinium IL into the sample solution. Due to the presence of the common ion, the solubility of the IL decreased significantly. As a result, phase separation occurred successfully even at high ionic strength, and the volume of the settled IL phase was not influenced by variations in ionic strength (up to 30% w/v). After the preconcentration step, the enriched phase was introduced into the spectrofluorimeter for the determination of terazosin. The results revealed that this system does not suffer from the limitations of conventional ionic-liquid microextraction. Under optimum experimental conditions, the proposed method provided a limit of detection (LOD) of 0.027 μg L−1 and a relative standard deviation (R.S.D.) of 2.4%. The method was successfully applied to terazosin determination in actual pharmaceutical formulations and biological samples. Considering the large variety of ionic liquids, the proposed microextraction method offers many merits and should find wide application in the future. PMID:22505920
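The quoted figures of merit follow standard definitions: the LOD as three times the standard deviation of blank replicates divided by the calibration slope, and the R.S.D. as the ratio of standard deviation to mean. A minimal sketch with invented numbers, not the paper's raw data:

```python
import statistics

def lod_3sigma(blank_signals, slope):
    """Limit of detection: 3 * s.d. of blank replicates / calibration slope."""
    return 3 * statistics.stdev(blank_signals) / slope

def rsd_percent(replicates):
    """Relative standard deviation of replicate measurements, in percent."""
    return 100 * statistics.stdev(replicates) / statistics.mean(replicates)

# Illustrative numbers only -- not the study's measurements.
blanks = [0.101, 0.098, 0.103, 0.100, 0.099]
slope = 0.5            # signal units per (ug/L), hypothetical calibration
replicates = [4.9, 5.1, 5.0, 5.2, 4.8]

print(round(lod_3sigma(blanks, slope), 4), "ug/L")
print(round(rsd_percent(replicates), 2), "% RSD")
```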

  14. Sensitive determination of terazosin in pharmaceutical formulations and biological samples by ionic-liquid microextraction prior to spectrofluorimetry.

    PubMed

    Zeeb, Mohsen; Sadeghi, Mahdi

    2012-01-01

    An efficient and environmentally friendly sample preparation method based on the application of the hydrophobic ionic liquid (IL) 1-hexylpyridinium hexafluorophosphate [Hpy][PF(6)] as a microextraction solvent was proposed to preconcentrate terazosin. The performance of the microextraction method was improved by introducing a common ion of the pyridinium IL into the sample solution. Due to the presence of the common ion, the solubility of the IL decreased significantly. As a result, phase separation occurred successfully even at high ionic strength, and the volume of the settled IL phase was not influenced by variations in ionic strength (up to 30% w/v). After the preconcentration step, the enriched phase was introduced into the spectrofluorimeter for the determination of terazosin. The results revealed that this system does not suffer from the limitations of conventional ionic-liquid microextraction. Under optimum experimental conditions, the proposed method provided a limit of detection (LOD) of 0.027 μg L(-1) and a relative standard deviation (R.S.D.) of 2.4%. The method was successfully applied to terazosin determination in actual pharmaceutical formulations and biological samples. Considering the large variety of ionic liquids, the proposed microextraction method offers many merits and should find wide application in the future. PMID:22505920

  15. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS for the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBSs cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBSs, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBSs will be developed, and modification of the SSF to support these tools and methods will be addressed.

  16. A knowledge base for Vitis vinifera functional analysis

    PubMed Central

    2015-01-01

    Background Vitis vinifera (Grapevine) is the most important fruit species in the modern world. Wine and table grape sales contribute significantly to the economy of major wine producing countries. The most relevant goals in wine production concern quality and safety. In order to significantly improve the achievement of these objectives and to gain biological knowledge about cultivars, a genomic approach is the most reliable strategy. The recent grapevine genome sequencing offers the opportunity to study the potential roles of genes and microRNAs in fruit maturation and other physiological and pathological processes. Although several systems allowing the analysis of plant genomes have been reported, none of them has been designed specifically for the functional analysis of grapevine genomes of cultivars under environmental stress in connection with microRNA data. Description Here we introduce a novel knowledge base, called BIOWINE, designed for the functional analysis of Vitis vinifera genomes of cultivars present in Sicily. The system allows the analysis of RNA-seq experiments of two different cultivars, namely Nero d'Avola and Nerello Mascalese. Samples were taken under different climatic conditions of phenological phases, diseases, and geographic locations. The BIOWINE web interface is equipped with data analysis modules for grapevine genomes. In particular users may analyze the current genome assembly together with the RNA-seq data through a customized version of GBrowse. The web interface allows users to perform gene set enrichment by exploiting third-party databases. Conclusions BIOWINE is a knowledge base implementing a set of bioinformatics tools for the analysis of grapevine genomes. The system aims to increase our understanding of the grapevine varieties and species of Sicilian products focusing on adaptability to different climatic conditions, phenological phases, diseases, and geographic locations. PMID:26050794

  17. Weather, knowledge base and life-style

    NASA Astrophysics Data System (ADS)

    Bohle, Martin

    2015-04-01

    Why main-stream curiosity for earth-science topics, thus appraising these topics as of public interest? Namely, to influence how humankind's activities intersect the geosphere. How to main-stream that curiosity for earth-science topics? Namely, by weaving diverse concerns into common threads drawing on a wide range of perspectives: be it the beauty or particularity of ordinary or special phenomena, evaluating hazards for or from mundane environments, or connecting scholarly investigation with the concerns of citizens at large; applying for this threading traditional or modern media, arts or story-telling. Three examples. First, "weather": weather is a topic of primordial interest for most people, since it impacts on human lives, be it for settlement, food, mobility, hunting, fishing, or battle. It is the single earth-science topic that went "prime time" when, in the early 1950s, the broadcasting of weather forecasts started and meteorologists began presenting their work to the public daily. Second, "knowledge base": earth sciences are relevant to modern societies' economies and value setting. They provide insights into the evolution of life-bearing planets, the functioning of Earth's systems, and the impact of humankind's activities on biogeochemical systems on Earth; these insights bear on the production of goods, living conditions, and individual well-being. Third, "life-style": citizens' urban culture limits their experiential connections, as earth-science-related phenomena are witnessed rarely, even most weather phenomena. In the past, traditional rural communities mediated their rich experiences through earth-centric story-telling; in the course of the global urbanisation process this culture has given way to society-centric story-telling.
Only recently has anthropogenic global change triggered discussions on geoengineering, hazard mitigation, and demographics, which, interwoven with arts, linguistics and cultural histories, offer a rich narrative

  18. Model-based knowledge-based optical processors

    NASA Astrophysics Data System (ADS)

    Casasent, David; Liebowitz, Suzanne A.

    1987-05-01

    An efficient 3-D object-centered knowledge base is described. The ability to generate on-line a 2-D image projection or range image for any object/viewer orientation from this knowledge base is addressed. Applications of this knowledge base in associative processors and symbolic correlators are then discussed. Initial test results are presented for a multiple-degree-of-freedom object recognition problem. These include new techniques to obtain object orientation information and two new associative memory matrix formulations.
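Associative memory matrices of the linear, outer-product kind are the classic formulation; the abstract does not spell out the authors' two new formulations, so the sketch below shows only the textbook baseline (bipolar keys, sign-thresholded recall) as context:

```python
def outer_product_memory(pairs):
    """M = sum_i y_i x_i^T : the classic linear associative memory matrix."""
    n_out, n_in = len(pairs[0][1]), len(pairs[0][0])
    M = [[0.0] * n_in for _ in range(n_out)]
    for x, y in pairs:
        for r in range(n_out):
            for c in range(n_in):
                M[r][c] += y[r] * x[c]
    return M

def recall(M, x):
    """Matrix-vector product followed by sign thresholding."""
    return [1 if sum(m * xi for m, xi in zip(row, x)) >= 0 else -1 for row in M]

# Two orthogonal bipolar keys and their associated output patterns.
pairs = [([1, 1, -1, -1], [1, -1]), ([1, -1, 1, -1], [-1, 1])]
M = outer_product_memory(pairs)
print(recall(M, [1, 1, -1, -1]))
```

With orthogonal keys, recall of each stored key reproduces its associated pattern exactly; cross-talk appears only when keys are correlated.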

  19. Advanced software development workstation. Knowledge base design: Design of knowledge base for flight planning application

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    1992-01-01

    The development process of the knowledge base for the generation of Test Libraries for Mission Operations Computer (MOC) Command Support focused on a series of information-gathering interviews. These knowledge capture sessions are supporting the development of a prototype for evaluating the capabilities of INTUIT on such an application. The prototype includes functions related to POCC (Payload Operations Control Center) processing. It prompts end-users for input through a series of panels and then generates the Meds associated with the initialization and update of hazardous command tables for a POCC Processing TLIB.

  20. Automated knowledge base development from CAD/CAE databases

    NASA Technical Reports Server (NTRS)

    Wright, R. Glenn; Blanchard, Mary

    1988-01-01

    Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development process through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include the development of a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge based systems that are inherently free of error.

  1. System Engineering for the NNSA Knowledge Base

    NASA Astrophysics Data System (ADS)

    Young, C.; Ballard, S.; Hipp, J.

    2006-05-01

    To improve ground-based nuclear explosion monitoring capability, GNEM R&E (Ground-based Nuclear Explosion Monitoring Research & Engineering) researchers at the national laboratories have collected an extensive set of raw data products. These raw data are used to develop higher-level products (e.g., 2D and 3D travel-time models) to better characterize the Earth at regional scales. The processed products and selected portions of the raw data are stored in an archiving and access system known as the NNSA (National Nuclear Security Administration) Knowledge Base (KB), which is engineered to meet the requirements of operational monitoring authorities. At its core, the KB is a data archive, and the effectiveness of the KB is ultimately determined by the quality of the data content, but access to that content is completely controlled by the information system in which that content is embedded. Developing this system has been the task of Sandia National Laboratories (SNL), and in this paper we discuss some of the significant challenges we have faced and the solutions we have engineered. One of the biggest system challenges with raw data has been integrating database content from the various sources to yield an overall KB product that is comprehensive, thorough and validated, yet minimizes the amount of disk storage required. Researchers at different facilities often use the same data to develop their products, and this redundancy must be removed in the delivered KB, ideally without requiring any additional effort on the part of the researchers. Further, related data content must be grouped together for KB user convenience. Initially SNL used whatever tools were already available for these tasks, and did the other tasks manually. The ever-growing volume of KB data to be merged, as well as a need for more control of merging utilities, led SNL to develop our own Java software package, consisting of a low-level database utility library upon which we have built several

  2. Knowledge-based analysis of microarrays for the discovery of transcriptional regulation relationships

    PubMed Central

    2010-01-01

    Background The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. Results In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. Conclusion High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data. PMID:20122245
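Sensitivity and specificity of predicted regulator-target pairs against a gold standard reduce to set overlaps over the universe of candidate pairs; a toy sketch (the gene names are illustrative, not the paper's yeast data):

```python
def sensitivity_specificity(predicted, gold, universe):
    """Evaluate predicted regulatory pairs against a gold-standard set.
    `universe` is the set of all candidate (TF, target) pairs considered."""
    tp = len(predicted & gold)            # correctly predicted interactions
    fn = len(gold - predicted)            # known interactions that were missed
    fp = len(predicted - gold)            # predictions absent from the gold set
    tn = len(universe - predicted - gold) # correctly rejected pairs
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: (TF, target) pairs.
gold = {("GAL4", "GAL1"), ("GAL4", "GAL10"), ("MIG1", "GAL4")}
predicted = {("GAL4", "GAL1"), ("GAL4", "GAL10"), ("GAL4", "CDC28")}
universe = {(tf, tg) for tf in ("GAL4", "MIG1")
            for tg in ("GAL1", "GAL10", "GAL4", "CDC28")}

sens, spec = sensitivity_specificity(predicted, gold, universe)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```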

  3. Use of high-intensity sonication for pre-treatment of biological tissues prior to multielemental analysis by total reflection X-ray fluorescence spectrometry

    NASA Astrophysics Data System (ADS)

    La Calle, Inmaculada De; Costas, Marta; Cabaleiro, Noelia; Lavilla, Isela; Bendicho, Carlos

    2012-01-01

    In this work, two ultrasound-based procedures are developed for sample preparation prior to the determination of P, K, Ca, Cr, Mn, Fe, Ni, Cu, Zn, As, Se and Sr in biological tissues by total reflection X-ray fluorescence spectrometry. Ultrasound-assisted extraction by means of a cup-horn sonoreactor and ultrasonic-probe slurry sampling were compared with a well-established procedure, magnetic-agitation slurry sampling. For that purpose, seven certified reference materials and different real samples of animal tissue were used. Similar accuracy and precision are obtained with the three sample preparation approaches tried. Limits of detection were dependent on both the sample matrix and the sample pre-treatment used, the best values being achieved with ultrasound-assisted extraction. Advantages of ultrasound-assisted extraction include reduced sample handling, decreased contamination risks (neither addition of surfactants nor use of foreign objects inside the extraction vial), a simpler background (no solid particles on the sample carrier) and improved recovery for some elements such as P. A mixture of 10% v/v HNO3 + 20-40% v/v HCl was suitable for extraction from biological tissues.

  4. Applying Knowledge-Based Techniques to Software Development.

    ERIC Educational Resources Information Center

    Harandi, Mehdi T.

    1986-01-01

    Reviews the overall structure and design principles of a knowledge-based programming support tool, the Knowledge-Based Programming Assistant, which is being developed at the University of Illinois Urbana-Champaign. The system's major units (program design, program coding, and intelligent debugging) and additional functions are described. (MBR)

  5. Knowledge base for expert system process control/optimization

    NASA Astrophysics Data System (ADS)

    Lee, C. W.; Abrams, Frances L.

    An expert system based on the philosophy of qualitative process automation has been developed for the autonomous cure cycle development and control of the autoclave curing process. The system's knowledge base in the form of declarative rules is based on the qualitative understanding of the curing process. The knowledge base and examples of the resulting cure cycle are presented.

  6. Knowledge-Based Entrepreneurship in a Boundless Research System

    ERIC Educational Resources Information Center

    Dell'Anno, Davide

    2008-01-01

    International entrepreneurship and knowledge-based entrepreneurship have recently generated considerable academic and non-academic attention. This paper explores the "new" field of knowledge-based entrepreneurship in a boundless research system. Cultural barriers to the development of business opportunities by researchers persist in some academic…

  7. Verification of knowledge bases based on containment checking

    SciTech Connect

    Levy, A.Y.; Rousset, M.C.

    1996-12-31

    Building complex knowledge based applications requires encoding large amounts of domain knowledge. After acquiring knowledge from domain experts, much of the effort in building a knowledge base goes into verifying that the knowledge is encoded correctly. We consider the problem of verifying hybrid knowledge bases that contain both Horn rules and a terminology in a description logic. Our approach to the verification problem is based on showing a close relationship to the problem of query containment. Our first contribution, based on this relationship, is presenting a thorough analysis of the decidability and complexity of the verification problem, for knowledge bases containing recursive rules and the interpreted predicates =, ≤, <, and ≠. Second, we show that important new classes of constraints on correct inputs and outputs can be expressed in a hybrid setting, in which a description logic class hierarchy is also considered, and we present the first complete algorithm for verifying such hybrid knowledge bases.
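For the fragment without recursion or interpreted predicates, containment of conjunctive queries is the classic homomorphism test of Chandra and Merlin: Q1 is contained in Q2 iff there is a homomorphism mapping Q2's atoms into Q1's. The brute-force sketch below illustrates that core idea for Boolean queries; it is only a baseline, far short of the paper's hybrid machinery.

```python
from itertools import product

def contained_in(q1, q2):
    """True if Boolean conjunctive query q1 is contained in q2, i.e. some
    homomorphism maps q2's atoms into q1's. Queries are sets of atoms
    (pred, arg1, arg2, ...); lowercase strings are variables, anything
    else is a constant."""
    def is_var(t):
        return isinstance(t, str) and t.islower()

    vars2 = sorted({t for atom in q2 for t in atom[1:] if is_var(t)})
    terms1 = {t for atom in q1 for t in atom[1:]}
    for assignment in product(terms1, repeat=len(vars2)):
        h = dict(zip(vars2, assignment))
        mapped = {(atom[0],) + tuple(h.get(t, t) for t in atom[1:])
                  for atom in q2}
        if mapped <= q1:       # every mapped atom appears in q1
            return True
    return False

# q1 asks for two parent steps; q2 asks for a single parent edge.
q1 = {("parent", "x", "y"), ("parent", "y", "z")}
q2 = {("parent", "u", "v")}
print(contained_in(q1, q2))    # homomorphism u->x, v->y exists
```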

  8. Hyperincursion and the Globalization of the Knowledge-Based Economy

    NASA Astrophysics Data System (ADS)

    Leydesdorff, Loet

    2006-06-01

    In biological systems, the capacity of anticipation—that is, entertaining a model of the system within the system—can be considered as naturally given. Human languages enable psychological systems to construct and exchange mental models of themselves and their environments reflexively, that is, provide meaning to the events. At the level of the social system expectations can further be codified. When these codifications are functionally differentiated—like between market mechanisms and scientific research programs—the potential asynchronicity in the update among the subsystems provides room for a second anticipatory mechanism at the level of the transversal information exchange among differently codified meaning-processing subsystems. Interactions between the two different anticipatory mechanisms (the transversal one and the one along the time axis in each subsystem) may lead to co-evolutions and stabilization of expectations along trajectories. The wider horizon of knowledgeable expectations can be expected to meta-stabilize and also globalize a previously stabilized configuration of expectations against the axis of time. While stabilization can be considered as consequences of interaction and aggregation among incursive formulations of the logistic equation, globalization can be modeled using the hyperincursive formulation of this equation. The knowledge-based subdynamic at the global level which thus emerges, enables historical agents to inform the reconstruction of previous states and to co-construct future states of the social system, for example, in a techno-economic co-evolution.
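The incursive and hyperincursive logistic formulations referenced here are due to Dubois; as a generic numerical illustration (not the paper's model of the knowledge-based economy), the incursive map x(t+1) = a·x(t)·(1 − x(t+1)) solves to x(t+1) = a·x(t)/(1 + a·x(t)), while the hyperincursive form x(t) = a·x(t+1)·(1 − x(t+1)) admits two candidate future states per step:

```python
import math

def incursive_step(x, a):
    """x(t+1) = a*x(t)*(1 - x(t+1))  =>  x(t+1) = a*x(t) / (1 + a*x(t))."""
    return a * x / (1 + a * x)

def hyperincursive_step(x, a):
    """x(t) = a*x(t+1)*(1 - x(t+1)): solving the quadratic for x(t+1)
    yields two future states (requires a >= 4*x)."""
    d = math.sqrt(1 - 4 * x / a)
    return 0.5 - 0.5 * d, 0.5 + 0.5 * d

x = 0.1
for _ in range(5):
    x = incursive_step(x, a=4.0)
print(f"incursive trajectory after 5 steps: {x:.3f}")
print("two hyperincursive futures of 0.1:", hyperincursive_step(0.1, a=4.0))
```

The incursive iteration stabilizes toward a fixed point (here 1 − 1/a), while the hyperincursive branching at each step is what supplies the room for co-construction of future states.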

  9. Bioenergy Science Center KnowledgeBase

    DOE Data Explorer

    Syed, M. H.; Karpinets, T. V.; Parang, M.; Leuze, M. R.; Park, B. H.; Hyatt, D.; Brown, S. D.; Moulton, S.; Galloway, M. D.; Uberbacher, E. C.

    The challenge of converting cellulosic biomass to sugars is the dominant obstacle to cost-effective production of biofuels in quantities significant enough to displace U.S. consumption of fossil transportation fuels. The BioEnergy Science Center (BESC) tackles this challenge of biomass recalcitrance by closely linking (1) plant research to make cell walls easier to deconstruct, and (2) microbial research to develop multi-talented biocatalysts tailor-made to produce biofuels in a single step. [from the 2011 BESC factsheet] The BioEnergy Science Center (BESC) is a multi-institutional, multidisciplinary research (biological, chemical, physical and computational sciences, mathematics and engineering) organization focused on the fundamental understanding and elimination of biomass recalcitrance. The BESC Knowledgebase and its associated tools constitute a discovery platform for bioenergy research. It consists of a collection of metadata, data, and computational tools for data analysis, integration, comparison and visualization for plants and microbes in the center. The BESC Knowledgebase (KB) and BESC Laboratory Information Management System (LIMS) enable bioenergy researchers to perform systemic research. [http://bobcat.ornl.gov/besc/index.jsp]

  10. The data dictionary: A view into the CTBT knowledge base

    SciTech Connect

    Shepherd, E.R.; Keyser, R.G.; Armstrong, H.M.

    1997-08-01

    The data dictionary for the Comprehensive Test Ban Treaty (CTBT) knowledge base provides a comprehensive, current catalog of the projected contents of the knowledge base. It is written from a data definition view of the knowledge base and therefore organizes information in a fashion that allows logical storage within the computer. The data dictionary introduces two organizational categories of data: the datatype, which is a broad, high-level category of data, and the dataset, which is a specific instance of a datatype. The knowledge base, and thus the data dictionary, consist of a fixed, relatively small number of datatypes, but new datasets are expected to be added on a regular basis. The data dictionary is a tangible result of the design effort for the knowledge base and is intended to be used by anyone who accesses the knowledge base for any purpose, such as populating the knowledge base with data, or accessing the data for use with automatic data processing (ADP) routines, or browsing through the data for verification purposes. For these two reasons, it is important to discuss the development of the data dictionary as well as to describe its contents to better understand its usefulness; that is the purpose of this paper.
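The datatype/dataset split described above maps naturally onto a small catalog structure: a fixed set of datatypes, each accumulating datasets over time. The names below are invented placeholders, not actual CTBT knowledge-base datatypes:

```python
from dataclasses import dataclass, field

@dataclass
class Dataset:
    """A specific instance of a datatype; new ones are added regularly."""
    name: str
    source: str

@dataclass
class Datatype:
    """A broad, high-level category of data in the knowledge base."""
    name: str
    description: str
    datasets: list = field(default_factory=list)   # Dataset instances

# The fixed datatypes rarely change; datasets accumulate under them.
catalog = {
    "travel_time_model": Datatype(
        "travel_time_model", "Regional 2D/3D travel-time characterizations"),
}
catalog["travel_time_model"].datasets.append(
    Dataset("hypothetical_region_2D", "example-lab"))

for dt in catalog.values():
    print(dt.name, "->", [d.name for d in dt.datasets])
```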

  11. Clinical knowledge-based inverse treatment planning

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Xing, Lei

    2004-11-01

    Clinical IMRT treatment plans are currently made using dose-based optimization algorithms, which do not consider the nonlinear dose-volume effects for tumours and normal structures. The choice of structure specific importance factors represents an additional degree of freedom of the system and makes rigorous optimization intractable. The purpose of this work is to circumvent the two problems by developing a biologically more sensible yet clinically practical inverse planning framework. To implement this, the dose-volume status of a structure was characterized by using the effective volume in the voxel domain. A new objective function was constructed with the incorporation of the volumetric information of the system so that the figure of merit of a given IMRT plan depends not only on the dose deviation from the desired distribution but also the dose-volume status of the involved organs. The conventional importance factor of an organ was written into a product of two components: (i) a generic importance that parametrizes the relative importance of the organs in the ideal situation when the goals for all the organs are met; (ii) a dose-dependent factor that quantifies our level of clinical/dosimetric satisfaction for a given plan. The generic importance can be determined a priori, and in most circumstances, does not need adjustment, whereas the second one, which is responsible for the intractable behaviour of the trade-off seen in conventional inverse planning, was determined automatically. An inverse planning module based on the proposed formalism was implemented and applied to a prostate case and a head-neck case. A comparison with the conventional inverse planning technique indicated that, for the same target dose coverage, the critical structure sparing was substantially improved for both cases. 
The incorporation of clinical knowledge allows us to obtain better IMRT plans and makes it possible to auto-select the importance factors, greatly facilitating the inverse
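    The two-component importance weighting described in this abstract can be illustrated with a minimal sketch. All function names, formulas, and numbers below are hypothetical assumptions chosen for illustration, not the authors' implementation:

```python
# Hypothetical sketch of the two-component importance weighting: each
# organ's penalty is scaled by a fixed generic importance times a
# dose-dependent satisfaction factor. All names and formulas here are
# illustrative assumptions, not the paper's actual objective function.

def organ_penalty(doses, prescribed, generic_importance):
    """Quadratic dose-deviation penalty, reweighted by how far the organ
    currently is from its clinical goal."""
    deviation = sum((d - prescribed) ** 2 for d in doses) / len(doses)
    # Dose-dependent factor: grows as the organ exceeds its goal, so the
    # optimizer automatically shifts emphasis toward unsatisfied organs.
    mean_dose = sum(doses) / len(doses)
    satisfaction = max(1.0, mean_dose / prescribed)
    return generic_importance * satisfaction * deviation

# Toy usage: an organ exceeding its goal dose is penalized more heavily
# than one already meeting it, with the same generic importance.
over = organ_penalty([55.0, 60.0], prescribed=50.0, generic_importance=0.5)
under = organ_penalty([45.0, 40.0], prescribed=50.0, generic_importance=0.5)
print(over > under)  # True
```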

  12. Contribution of brain or biological reserve and cognitive or neural reserve to outcome after TBI: A meta-analysis (prior to 2015).

    PubMed

    Mathias, Jane L; Wheaton, Patricia

    2015-08-01

    Brain/biological (BR) and cognitive/neural reserve (CR) have increasingly been used to explain some of the variability that occurs as a consequence of normal ageing and neurological injuries or disease. However, research evaluating the impact of reserve on outcomes after adult traumatic brain injury (TBI) has yet to be quantitatively reviewed. This meta-analysis consolidated data from 90 studies (published prior to 2015) that either examined the relationship between measures of BR (genetics, age, sex) or CR (education, premorbid IQ) and outcomes after TBI or compared the outcomes of groups with high and low reserve. The evidence for genetic sources of reserve was limited and often contrary to prediction. APOE ε4 status has been studied most, but did not have a consistent or sizeable impact on outcomes. The majority of studies found that younger age was associated with better outcomes; however, most failed to adjust for normal age-related changes in cognitive performance that are independent of a TBI. This finding was reversed (older adults had better outcomes) in the small number of studies that provided age-adjusted scores, although it remains unclear whether differences in the cause and severity of injuries that are sustained by younger and older adults contributed to this finding. Despite being more likely to sustain a TBI, males have comparable outcomes to females. Overall, as is the case in the general population, higher levels of education and pre-morbid IQ are both associated with better outcomes. PMID:26054792

  13. Knowledge-Based Systems (KBS) development standards: A maintenance perspective

    NASA Technical Reports Server (NTRS)

    Brill, John

    1990-01-01

    Information on knowledge-based systems (KBS) is given in viewgraph form. Information is given on KBS standardization needs, the knowledge engineering process, program management, software and hardware issues, and chronic problem areas.

  14. The process for integrating the NNSA knowledge base.

    SciTech Connect

    Wilkening, Lisa K.; Carr, Dorthe Bame; Young, Christopher John; Hampton, Jeff; Martinez, Elaine

    2009-03-01

    From 2002 through 2006, the Ground Based Nuclear Explosion Monitoring Research & Engineering (GNEMRE) program at Sandia National Laboratories defined and modified a process for merging different types of integrated research products (IRPs) from various researchers into a cohesive, well-organized collection known as the NNSA Knowledge Base, to support operational treaty monitoring. This process includes defining the KB structure, systematically and logically aggregating IRPs into a complete set, and verifying and validating that the integrated Knowledge Base works as expected.

  15. Maintaining a Knowledge Base Using the MEDAS Knowledge Engineering Tools

    PubMed Central

    Naeymi-Rad, Frank; Evens, Martha; Koschmann, Timothy; Lee, Chui-Mei; Gudipati, Rao Y.C.; Kepic, Theresa; Rackow, Eric; Weil, Max Harry

    1985-01-01

    This paper describes the process by which a medical expert creates a new knowledge base for MEDAS, the Medical Emergency Decision Assistance System. It follows the expert physician step by step as a new disorder is entered along with its relevant symptoms. As the expanded knowledge base is tested, inconsistencies are detected, and corrections are made, showing at each step the available tools and giving an example of their use.

  16. XML-Based SHINE Knowledge Base Interchange Language

    NASA Technical Reports Server (NTRS)

    James, Mark; Mackey, Ryan; Tikidjian, Raffi

    2008-01-01

    The SHINE Knowledge Base Interchange Language software has been designed to more efficiently send new knowledge bases to spacecraft that have been embedded with the Spacecraft Health Inference Engine (SHINE) tool. The intention of the behavioral model is to capture most of the information generally associated with a spacecraft functional model, while specifically addressing the needs of execution within SHINE and Livingstone. As such, it has some constructs that are based on one or the other.

  17. Conflict Resolution of Chinese Chess Endgame Knowledge Base

    NASA Astrophysics Data System (ADS)

    Chen, Bo-Nian; Liu, Pangfang; Hsu, Shun-Chin; Hsu, Tsan-Sheng

    Endgame heuristics are often incorporated as part of the evaluation function used in Chinese Chess programs. In our program, Contemplation, we have proposed an automatic strategy to construct a large set of endgame heuristics. In this paper, we propose a conflict resolution strategy to eliminate the conflicts among the constructed heuristic databases, which together form the endgame knowledge base. In our experiment, the correctness of the constructed endgame knowledge base is sufficiently high for practical usage.

  18. Analysis of molecular expression patterns and integration with other knowledge bases using probabilistic Bayesian network models

    SciTech Connect

    Moler, Edward J.; Mian, I.S.

    2000-03-01

    How can molecular expression experiments be interpreted with more than 10,000 measurements per chip? How can one get the most quantitative information possible from the experimental data with good confidence? These are important questions whose solutions require an interdisciplinary combination of molecular and cellular biology, computer science, statistics, and complex systems analysis. The explosion of data from microarray techniques presents the problem of interpreting the experiments. The availability of large-scale knowledge bases provides the opportunity to maximize the information extracted from these experiments. We have developed new methods of discovering biological function, metabolic pathways, and regulatory networks from these data and knowledge bases. These techniques are applicable to analyses for biomedical engineering, clinical, and fundamental cell and molecular biology studies. Our approach uses probabilistic, computational methods that give quantitative interpretations of data in a biological context. We have selected Bayesian statistical models with graphical network representations as a framework for our methods. As a first step, we use a naïve Bayesian classifier to identify statistically significant patterns in gene expression data. We have developed methods which allow us to (a) characterize which genes or experiments distinguish each class from the others, (b) cross-index the resulting classes with other databases to assess biological meaning of the classes, and (c) display a gross overview of cellular dynamics. We have developed a number of visualization tools to convey the results. We report here our methods of classification and our first attempts at integrating the data and other knowledge bases together with new visualization tools. We demonstrate the utility of these methods and tools by analysis of a series of yeast cDNA microarray data and of a set of cancerous/normal sample data from colon cancer patients. We discuss
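    The classifier mentioned in this abstract can be sketched in a few lines. This is a generic Gaussian naïve Bayes on toy "expression profiles", purely illustrative of the technique named in the abstract; the data, class labels, and function names are invented:

```python
# Minimal Gaussian naive Bayes classifier on toy expression vectors,
# sketching the statistical pattern detection the abstract describes.
# Illustrative only; not the authors' code or data.
import math
from collections import defaultdict

def fit(samples, labels):
    """Per-class mean/variance for each gene (feature), plus class prior."""
    stats, by_class = {}, defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)
    for y, rows in by_class.items():
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / n + 1e-6  # smoothed
                 for col, m in zip(zip(*rows), means)]
        stats[y] = (means, varis, n / len(samples))
    return stats

def predict(stats, x):
    """Class with the highest log-posterior under independent Gaussians."""
    best, best_lp = None, -math.inf
    for y, (means, varis, prior) in stats.items():
        lp = math.log(prior) + sum(
            -0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
            for xi, m, v in zip(x, means, varis))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Toy usage: two genes, two sample classes.
X = [[1.0, 5.0], [1.2, 4.8], [4.0, 1.0], [4.2, 0.9]]
y = ["normal", "tumor"]  # class per pair below
model = fit(X, ["normal", "normal", "tumor", "tumor"])
print(predict(model, [1.1, 5.1]))  # normal
```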

  19. Case-Based Tutoring from a Medical Knowledge Base

    PubMed Central

    Chin, Homer L.

    1988-01-01

    The past decade has seen the emergence of programs that make use of large knowledge bases to assist physicians in diagnosis within the general field of internal medicine. One such program, Internist-I, contains knowledge about over 600 diseases, covering a significant proportion of internal medicine. This paper describes the process of converting a subset of this knowledge base--in the area of cardiovascular diseases--into a probabilistic format, and the use of this resulting knowledge base to teach medical diagnostic knowledge. The system (called KBSimulator--for Knowledge-Based patient Simulator) generates simulated patient cases and uses these cases as a focal point from which to teach medical knowledge. It interacts with the student in a mixed-initiative fashion, presenting patients for the student to diagnose, and allowing the student to obtain further information on his/her own initiative in the context of that patient case. The system scores the student, and uses these scores to form a rudimentary model of the student. This resulting model of the student is then used to direct the generation of subsequent patient cases. This project demonstrates the feasibility of building an intelligent, flexible instructional system that uses a knowledge base constructed primarily for medical diagnosis.

  20. Evaluation of database technologies for the CTBT Knowledge Base prototype

    SciTech Connect

    Keyser, R.; Shepard-Dombroski, E.; Baur, D.; Hipp, J.; Moore, S.; Young, C.; Chael, E.

    1996-11-01

    This document examines a number of different software technologies in the rapidly changing field of database management systems, evaluates these systems in light of the expected needs of the Comprehensive Test Ban Treaty (CTBT) Knowledge Base, and makes some recommendations for the initial prototypes of the Knowledge Base. The Knowledge Base requirements are examined and then used as criteria for evaluation of the database management options. A mock-up of the data expected in the Knowledge Base is used as a basis for examining how four different database technologies deal with the problems of storing and retrieving the data. Based on these requirements and the results of the evaluation, the recommendation is that the Illustra database be considered for the initial prototype of the Knowledge Base. Illustra offers a unique blend of performance, flexibility, and features that will aid in the implementation of the prototype. At the same time, Illustra provides a high level of compatibility with the hardware and software environments present at the US NDC (National Data Center) and the PIDC (Prototype International Data Center).

  1. A Knowledge-Based System Developer for aerospace applications

    NASA Technical Reports Server (NTRS)

    Shi, George Z.; Wu, Kewei; Fensky, Connie S.; Lo, Ching F.

    1993-01-01

    A prototype Knowledge-Based System Developer (KBSD) has been developed for aerospace applications by utilizing artificial intelligence technology. The KBSD directly acquires knowledge from domain experts through a graphical interface and then builds expert systems from that knowledge. This raises the state of the art of knowledge acquisition/expert system technology to a new level by lessening the need for skilled knowledge engineers. The feasibility, applicability, and efficiency of the proposed concept were established, justifying a continuation that would develop the prototype into a full-scale, general-purpose knowledge-based system developer. The KBSD has great commercial potential. It will provide a marketable software shell which alleviates the need for knowledge engineers and increases productivity in the workplace. The KBSD will therefore make knowledge-based systems available to a large portion of industry.

  2. Arranging ISO 13606 archetypes into a knowledge base.

    PubMed

    Kopanitsa, Georgy

    2014-01-01

    To enable the efficient reuse of standards-based medical data we propose to develop a higher level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analyzed for their ability to be applied in the implementation of a higher level model that will establish relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. Presently it supports several relationships and will be advanced in the future. PMID:25160140
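    The idea of connecting archetypes through UML-style relationships can be sketched with a small XML fragment. The element and attribute names below are invented for illustration; they are not the ISO 13606 schema or the model proposed in the abstract:

```python
# Hypothetical sketch: archetypes stored in one repository, linked into a
# knowledge base by an explicit UML-style association. All element and
# attribute names are assumptions made for this example.
import xml.etree.ElementTree as ET

repo = ET.Element("repository")
for aid in ("blood_pressure", "encounter"):
    ET.SubElement(repo, "archetype", id=aid)

# A relationship layer on top of the flat archetype list: an association
# connecting two archetypes by their ids.
links = ET.SubElement(repo, "relationships")
ET.SubElement(links, "association",
              source="encounter", target="blood_pressure",
              type="contains")

# Resolve the relationship back to archetype elements by id.
ids = {a.get("id") for a in repo.iter("archetype")}
assoc = repo.find("relationships/association")
assert assoc.get("source") in ids and assoc.get("target") in ids
print(ET.tostring(repo, encoding="unicode"))
```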

  3. Knowledge based systems: From process control to policy analysis

    SciTech Connect

    Marinuzzi, J.G.

    1993-01-01

    Los Alamos has been pursuing the use of Knowledge Based Systems for many years. These systems are currently being used to support projects that range across many production and operations areas. By investing time and money in people and equipment, Los Alamos has developed one of the strongest knowledge based systems capabilities within the DOE. Staff of Los Alamos' Mechanical & Electronic Engineering Division are using these knowledge systems to increase capability, productivity and competitiveness in areas of manufacturing quality control, robotics, process control, plant design and management decision support. This paper describes some of these projects and associated technical program approaches, accomplishments, benefits and future goals.

  4. Knowledge based systems: From process control to policy analysis

    SciTech Connect

    Marinuzzi, J.G.

    1993-06-01

    Los Alamos has been pursuing the use of Knowledge Based Systems for many years. These systems are currently being used to support projects that range across many production and operations areas. By investing time and money in people and equipment, Los Alamos has developed one of the strongest knowledge based systems capabilities within the DOE. Staff of Los Alamos' Mechanical & Electronic Engineering Division are using these knowledge systems to increase capability, productivity and competitiveness in areas of manufacturing quality control, robotics, process control, plant design and management decision support. This paper describes some of these projects and associated technical program approaches, accomplishments, benefits and future goals.

  5. Online Knowledge-Based Model for Big Data Topic Extraction

    PubMed Central

    Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan

    2016-01-01

    Lifelong machine learning (LML) models learn with experience, maintaining a knowledge base without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while cutting the processing cost in half. PMID:27195004
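    Topic coherence, the quality measure this abstract reports, can be sketched with the standard UMass formulation. This is a generic illustration of that metric under assumed toy data, not the OAMC implementation:

```python
# Illustrative UMass-style topic coherence: for each pair of top words in
# a topic, score how often they co-occur in documents. A sketch of the
# generic metric only; the data and function names are invented.
import math
from itertools import combinations

def umass_coherence(topic_words, docs):
    """Sum of log((D(wi, wj) + 1) / D(wj)) over ordered word pairs,
    where D counts documents containing the given words."""
    def doc_freq(*words):
        return sum(all(w in d for w in words) for d in docs)
    score = 0.0
    for wi, wj in combinations(topic_words, 2):
        score += math.log((doc_freq(wi, wj) + 1) / doc_freq(wj))
    return score

# Toy corpus: each document is its set of words.
docs = [{"price", "shipping", "fast"},
        {"price", "cheap"},
        {"battery", "screen"},
        {"battery", "screen", "price"}]

# Words that co-occur score higher than words that never do.
coherent = umass_coherence(["battery", "screen"], docs)
incoherent = umass_coherence(["battery", "shipping"], docs)
print(coherent > incoherent)  # True
```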

  6. Aquatic ecology of the Elwha River estuary prior to dam removal: Chapter 7 in Coastal habitats of the Elwha River, Washington--biological and physical patterns and processes prior to dam removal

    USGS Publications Warehouse

    Duda, Jeffrey J.; Beirne, Matthew M.; Larsen, Kimberly; Barry, Dwight; Stenberg, Karl; McHenry, Michael L.

    2011-01-01

    The removal of two long-standing dams on the Elwha River in Washington State will initiate a suite of biological and physical changes to the estuary at the river mouth. Estuaries represent a transition between freshwater and saltwater, have unique assemblages of plants and animals, and are a critical habitat for some salmon species as they migrate to the ocean. This chapter summarizes a number of studies in the Elwha River estuary, and focuses on physical and biological aspects of the ecosystem that are expected to change following dam removal. Included are data sets that summarize (1) water chemistry samples collected over a 16 month period; (2) beach seining activities targeted toward describing the fish assemblage of the estuary and migratory patterns of juvenile salmon; (3) descriptions of the aquatic and terrestrial invertebrate communities in the estuary, which represent an important food source for juvenile fish and are important water quality indicators; and (4) the diet and growth patterns of juvenile Chinook salmon in the lower Elwha River and estuary. These data represent baseline conditions of the ecosystem after nearly a century of changes due to the dams and will be useful in monitoring the changes to the river and estuary following dam removal.

  7. Development to Release of CTBT Knowledge Base Datasets

    SciTech Connect

    Moore, S.G.; Shepherd, E.R.

    1998-10-20

    For the CTBT Knowledge Base to be useful as a tool for improving U.S. monitoring capabilities, the contents of the Knowledge Base must be subjected to a well-defined set of procedures to ensure the integrity and relevance of the constituent datasets. This paper proposes a possible set of procedures for datasets that are delivered to Sandia National Laboratories (SNL) for inclusion in the Knowledge Base. The proposed procedures include defining preliminary acceptance criteria, performing verification and validation activities, and subjecting the datasets to approval by domain experts. Preliminary acceptance criteria include receipt of the data, its metadata, and a proposal for its usability for U.S. National Data Center operations. Verification activities establish the correctness and completeness of the data, while validation activities establish the relevance of the data to its proposed use. Results from these activities are presented to domain experts, such as analysts and peers, for final approval of the datasets for release to the Knowledge Base. Formats and functionality will vary across datasets, so the procedures proposed herein define an overall plan for establishing the integrity and relevance of each dataset. Specific procedures for verification, validation, and approval will be defined for each dataset, or for each type of dataset, as appropriate. Potential dataset sources, including Los Alamos National Laboratory and Lawrence Livermore National Laboratory, have contributed significantly to the development of this process.

  8. Computer Assisted Multi-Center Creation of Medical Knowledge Bases

    PubMed Central

    Giuse, Nunzia Bettinsoli; Giuse, Dario A.; Miller, Randolph A.

    1988-01-01

    Computer programs which support different aspects of medical care have been developed in recent years. Their capabilities range from diagnosis to medical imaging, and include hospital management systems and therapy prescription. In spite of their diversity these systems have one commonality: their reliance on a large body of medical knowledge in computer-readable form. This knowledge enables such programs to draw inferences, validate hypotheses, and in general to perform their intended task. As has been clear to developers of such systems, however, the creation and maintenance of medical knowledge bases are very expensive. Practical and economical difficulties encountered during this long-term process have discouraged most attempts. This paper discusses knowledge base creation and maintenance, with special emphasis on medical applications. We first describe the methods currently used and their limitations. We then present our recent work on developing tools and methodologies which will assist in the process of creating a medical knowledge base. We focus, in particular, on the possibility of multi-center creation of the knowledge base.

  9. Knowledge-Based Hierarchies: Using Organizations to Understand the Economy

    ERIC Educational Resources Information Center

    Garicano, Luis; Rossi-Hansberg, Esteban

    2015-01-01

    Incorporating the decision of how to organize the acquisition, use, and communication of knowledge into economic models is essential to understand a wide variety of economic phenomena. We survey the literature that has used knowledge-based hierarchies to study issues such as the evolution of wage inequality, the growth and productivity of firms,…

  10. Designing a Knowledge Base for Automatic Book Classification.

    ERIC Educational Resources Information Center

    Kim, Jeong-Hyen; Lee, Kyung-Ho

    2002-01-01

    Reports on the design of a knowledge base for an automatic classification in the library science field by using the facet classification principles of colon classification. Discusses inputting titles or key words into the computer to create class numbers through automatic subject recognition and processing title key words. (Author/LRW)

  11. Knowledge-Based Aid: A Four Agency Comparative Study

    ERIC Educational Resources Information Center

    McGrath, Simon; King, Kenneth

    2004-01-01

    Part of the response of many development cooperation agencies to the challenges of globalisation, ICTs and the knowledge economy is to emphasise the importance of knowledge for development. This paper looks at the discourses and practices of ''knowledge-based aid'' through an exploration of four agencies: the World Bank, DFID, Sida and JICA. It…

  12. Value Creation in the Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Liu, Fang-Chun

    2013-01-01

    Effective investment strategies help companies form dynamic core organizational capabilities allowing them to adapt and survive in today's rapidly changing knowledge-based economy. This dissertation investigates three valuation issues that challenge managers with respect to developing business-critical investment strategies that can have…

  13. PLAN-IT - Knowledge-based mission sequencing

    NASA Technical Reports Server (NTRS)

    Biefeld, Eric W.

    1987-01-01

    PLAN-IT (Plan-Integrated Timelines), a knowledge-based approach to assist in mission sequencing, is discussed. PLAN-IT uses a large set of scheduling techniques known as strategies to develop and maintain a mission sequence. The approach implemented by PLAN-IT and the current applications of PLAN-IT for sequencing at NASA are reported.

  14. Dynamic Strategic Planning in a Professional Knowledge-Based Organization

    ERIC Educational Resources Information Center

    Olivarius, Niels de Fine; Kousgaard, Marius Brostrom; Reventlow, Susanne; Quelle, Dan Grevelund; Tulinius, Charlotte

    2010-01-01

    Professional, knowledge-based institutions have a particular form of organization and culture that makes special demands on the strategic planning supervised by research administrators and managers. A model for dynamic strategic planning based on a pragmatic utilization of the multitude of strategy models was used in a small university-affiliated…

  15. A Text Knowledge Base from the AI Handbook.

    ERIC Educational Resources Information Center

    Simmons, Robert F.

    1987-01-01

    Describes a prototype natural language text knowledge system (TKS) that was used to organize 50 pages of a handbook on artificial intelligence as an inferential knowledge base with natural language query and command capabilities. Representation of text, database navigation, query systems, discourse structuring, and future research needs are…

  16. Knowledge Based Engineering for Spatial Database Management and Use

    NASA Technical Reports Server (NTRS)

    Peuquet, D. (Principal Investigator)

    1984-01-01

    The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) are examined. Questions involving the performance and modification to the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.

  17. Planning and Implementing a High Performance Knowledge Base.

    ERIC Educational Resources Information Center

    Cortez, Edwin M.

    1999-01-01

    Discusses the conceptual framework for developing a rapid-prototype high-performance knowledge base for the four mission agencies of the United States Department of Agriculture and their university partners. Describes the background of the project and methods used for establishing the requirements; examines issues and problems surrounding semantic…

  18. CACTUS: Command and Control Training Using Knowledge-Based Simulations

    ERIC Educational Resources Information Center

    Hartley, Roger; Ravenscroft, Andrew; Williams, R. J.

    2008-01-01

    The CACTUS project was concerned with command and control training of large incidents where public order may be at risk, such as large demonstrations and marches. The training requirements and objectives of the project are first summarized justifying the use of knowledge-based computer methods to support and extend conventional training…

  19. Cataloging and Expert Systems: AACR2 as a Knowledge Base.

    ERIC Educational Resources Information Center

    Hjerppe, Roland; Olander, Birgitta

    1989-01-01

    Describes a project that developed two expert systems for library cataloging using the second edition of the Anglo American Cataloging Rules (AACR2) as a knowledge base. The discussion covers cataloging as interpretation, the structure of AACR2, and the feasibility of using expert systems for cataloging in traditional library settings. (26…

  20. Intelligent Tools for Planning Knowledge base Development and Verification

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems.

  1. KBGIS-II: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, Terence; Peuquet, Donna; Menon, Sudhakar; Agarwal, Pankaj

    1986-01-01

    The architecture and working of a recently implemented Knowledge-Based Geographic Information System (KBGIS-II), designed to satisfy several general criteria for the GIS, is described. The system has four major functions including query-answering, learning and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial object language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is performing all its designated tasks successfully. Future reports will relate performance characteristics of the system.

  2. KBGIS-2: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, T.; Peuquet, D.; Menon, S.; Agarwal, P.

    1986-01-01

    The architecture and working of a recently implemented knowledge-based geographic information system (KBGIS-2) that was designed to satisfy several general criteria for the geographic information system are described. The system has four major functions that include query-answering, learning, and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial objects language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is currently performing all its designated tasks successfully, although currently implemented on inadequate hardware. Future reports will detail the performance characteristics of the system, and various new extensions are planned in order to enhance the power of KBGIS-2.
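    The quadtree representation both KBGIS abstracts rely on can be illustrated with a minimal point quadtree supporting a rectangular range query. This is a generic, simplified sketch of the data structure, not the original system's code:

```python
# Minimal point quadtree with a rectangular range query, sketching the
# quadtree-backed spatial search described above. Capacity, coordinates,
# and API are illustrative assumptions.

class QuadTree:
    def __init__(self, x0, y0, x1, y1, capacity=2):
        self.bounds = (x0, y0, x1, y1)
        self.capacity = capacity
        self.points = []
        self.children = None  # four subquadrants once split

    def _contains(self, p):
        x0, y0, x1, y1 = self.bounds
        return x0 <= p[0] < x1 and y0 <= p[1] < y1

    def insert(self, p):
        if not self._contains(p):
            return False
        if self.children is None:
            if len(self.points) < self.capacity:
                self.points.append(p)
                return True
            self._split()
        return any(c.insert(p) for c in self.children)

    def _split(self):
        x0, y0, x1, y1 = self.bounds
        mx, my = (x0 + x1) / 2, (y0 + y1) / 2
        self.children = [QuadTree(x0, y0, mx, my), QuadTree(mx, y0, x1, my),
                         QuadTree(x0, my, mx, y1), QuadTree(mx, my, x1, y1)]
        for q in self.points:  # push existing points down one level
            any(c.insert(q) for c in self.children)
        self.points = []

    def query(self, qx0, qy0, qx1, qy1):
        """All stored points inside the query rectangle."""
        x0, y0, x1, y1 = self.bounds
        if qx1 <= x0 or qx0 >= x1 or qy1 <= y0 or qy0 >= y1:
            return []  # this node does not overlap the query window
        hits = [p for p in self.points
                if qx0 <= p[0] < qx1 and qy0 <= p[1] < qy1]
        for c in self.children or []:
            hits.extend(c.query(qx0, qy0, qx1, qy1))
        return hits

# Usage: only quadrants overlapping the query window are searched, which
# is the average-case cost reduction the abstract refers to.
tree = QuadTree(0, 0, 100, 100)
for pt in [(10, 10), (12, 15), (80, 80), (55, 40), (60, 45)]:
    tree.insert(pt)
print(sorted(tree.query(0, 0, 50, 50)))  # [(10, 10), (12, 15)]
```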

  3. The Ignorance of the Knowledge-Based Economy. The Iconoclast.

    ERIC Educational Resources Information Center

    McMurtry, John

    1996-01-01

    Castigates the supposed "knowledge-based economy" as simply a public relations smokescreen covering up the free market exploitation of people and resources serving corporate interests. Discusses the many ways that private industry, often with government collusion, has controlled or denied dissemination of information to serve its own interests.…

  4. Conventional and Knowledge-Based Information Retrieval with Prolog.

    ERIC Educational Resources Information Center

    Leigh, William; Paz, Noemi

    1988-01-01

    Describes the use of PROLOG to program knowledge-based information retrieval systems, in which the knowledge contained in a document is translated into machine processable logic. Several examples of the resulting search process, and the program rules supporting the process, are given. (10 references) (CLB)

  5. Grey Documentation as a Knowledge Base in Social Work.

    ERIC Educational Resources Information Center

    Berman, Yitzhak

    1994-01-01

    Defines grey documentation as documents issued informally and not available through normal channels and discusses the role that grey documentation can play in the social work knowledge base. Topics addressed include grey documentation and science; social work and the empirical approach in knowledge development; and dissemination of grey…

  6. Ada as an implementation language for knowledge based systems

    NASA Technical Reports Server (NTRS)

    Rochowiak, Daniel

    1990-01-01

    Debates about the selection of programming languages often produce cultural collisions that are not easily resolved. This is especially true in the case of Ada and knowledge based programming. The construction of programming tools provides a desirable alternative for resolving the conflict.

  7. Malaysia Transitions toward a Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Mustapha, Ramlee; Abdullah, Abu

    2004-01-01

    The emergence of a knowledge-based economy (k-economy) has spawned a "new" notion of workplace literacy, changing the relationship between employers and employees. The traditional covenant where employees expect a stable or lifelong employment will no longer apply. The retention of employees will most probably be based on their skills and…

  8. Special Issue: Decision Support and Knowledge-Based Systems.

    ERIC Educational Resources Information Center

    Stohr, Edward A.; And Others

    1987-01-01

    Six papers dealing with decision support and knowledge based systems are presented. Five of the papers are concerned in some way with the use of artificial intelligence techniques in individual or group decision support. The sixth paper presents empirical results from the use of a group decision support system. (CLB)

  9. National Nuclear Security Administration Knowledge Base Core Table Schema Document

    SciTech Connect

    CARR,DORTHE B.

    2002-09-01

    The National Nuclear Security Administration is creating a Knowledge Base to store technical information to support the United States nuclear explosion monitoring mission. This document defines the core database tables that are used in the Knowledge Base. The purpose of this document is to present the ORACLE database tables in the NNSA Knowledge Base that are based on modifications to the CSS3.0 Database Schema developed in 1990 (Anderson et al., 1990). These modifications include additional columns to the affiliation table, an increase in the internal ORACLE format from 8 integers to 9 integers for thirteen IDs, and new primary and unique key definitions for six tables. It is intended to be used as a reference by researchers inside and outside of NNSA/DOE as they compile information to submit to the NNSA Knowledge Base. These ''core'' tables are separated into two groups. The Primary tables are dynamic and consist of information that can be used in automatic and interactive processing (e.g. arrivals, locations). The Lookup tables change infrequently and are used for auxiliary information used by the processing. In general, the information stored in the core tables consists of: arrivals; events, origins, associations of arrivals; magnitude information; station information (networks, site descriptions, instrument responses); pointers to waveform data; and comments pertaining to the information. This document is divided into four sections, the first being this introduction. Section two defines the sixteen tables that make up the core tables of the NNSA Knowledge Base database. Both internal (ORACLE) and external formats for the attributes are defined, along with a short description of each attribute. In addition, the primary, unique and foreign keys are defined. Section three of the document shows the relationships between the different tables by using entity-relationship diagrams. The last section defines the columns or attributes of the various tables. 
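
    The schema conventions described above (CSS3.0-style tables, 9-digit integer IDs, primary and unique key definitions) can be illustrated with a small sketch. The `origin` table below is a simplified, hypothetical stand-in, not the actual NNSA Knowledge Base definition, which uses ORACLE and many more columns.

```python
import sqlite3

# Simplified, hypothetical CSS3.0-style "origin" table: a 9-digit integer ID
# (orid) with a primary-key constraint, in the spirit of the schema described
# above. The real NNSA Knowledge Base tables differ in detail.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE origin (
        lat   REAL    NOT NULL,
        lon   REAL    NOT NULL,
        depth REAL    NOT NULL,
        orid  INTEGER NOT NULL CHECK (orid BETWEEN 1 AND 999999999),
        PRIMARY KEY (orid)
    )
""")
conn.execute("INSERT INTO origin VALUES (37.2, -116.2, 0.0, 123456789)")
rows = list(conn.execute("SELECT orid FROM origin"))
print(rows)  # [(123456789,)]
```

    The CHECK constraint mimics the 9-integer internal format; the primary key enforces uniqueness of the ID, as the key definitions in section two do for the real tables.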

  10. Openness to and preference for attributes of biologic therapy prior to initiation among patients with rheumatoid arthritis: patient and rheumatologist perspectives and implications for decision making

    PubMed Central

    Bolge, Susan C; Goren, Amir; Brown, Duncan; Ginsberg, Seth; Allen, Isabel

    2016-01-01

    Purpose Despite American College of Rheumatology recommendations, appropriate and timely initiation of biologic therapies does not always occur. This study examined openness to and preference for attributes of biologic therapies among patients with rheumatoid arthritis (RA), differences in patients’ and rheumatologists’ perceptions, and discussions around biologic therapy initiation. Patients and methods A self-administered online survey was completed by 243 adult patients with RA in the US who were taking disease-modifying antirheumatic drugs (DMARDs) and had never taken, but had discussed, biologic therapy with a rheumatologist. Patients were recruited from a consumer panel (n=142) and a patient advocacy organization (n=101). A separate survey was completed by 103 rheumatologists who treated at least 25 patients with RA per month with biologic therapy. Descriptive and bivariate analyses were conducted separately for patients and rheumatologists. Attributes of biologic therapy included route of administration (intravenous infusion or subcutaneous injection), frequency of injections/infusions, and duration of infusion. Results Over half of patients (53.1%) were open to both intravenous infusion and subcutaneous injection, whereas rheumatologists reported 40.7% of patients would be open to both. Only 26.3% of patients strongly preferred subcutaneous injection, whereas rheumatologists reported 35.2%. Discrepancies were even more pronounced among specific patient types (eg, older vs younger patients and Medicare recipients). Among patients, 23% reported initiating discussion about biologics and 54% reported their rheumatologist initiated the discussion. A majority of rheumatologists reported discussing in detail several key aspects of biologics, whereas a minority of patients reported the same. Conclusion Patients’ preferences for biologic therapy differed from rheumatologists’ perceptions of those preferences, including greater openness to intravenous

  11. Predicting Mycobacterium tuberculosis complex clades using knowledge-based Bayesian networks.

    PubMed

    Aminian, Minoo; Couvin, David; Shabbeer, Amina; Hadley, Kane; Vandenberg, Scott; Rastogi, Nalin; Bennett, Kristin P

    2014-01-01

    We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC) clades. The proposed knowledge-based Bayesian network (KBBN) treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes), since these are routinely gathered from MTBC isolates of tuberculosis (TB) patients. Results show that incorporating rules into problems can drastically increase classification accuracy if data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web. PMID:24864238
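
    The core idea, expert rules acting as a prior distribution over classes that data-driven likelihoods then refine, can be sketched as a MAP classification. The clades, rule-derived prior, and likelihood values below are invented for illustration; they are not the SITVIT model.

```python
import numpy as np

# Sketch of rules-as-priors classification: an expert-rule prior over classes
# is combined with per-class feature likelihoods learned from data.
def classify(features, rule_prior, likelihoods):
    """Return the MAP class for binary features, given a rule-derived prior
    and per-class likelihoods P(feature_i present | class)."""
    log_post = np.log(rule_prior)
    for c in range(len(rule_prior)):
        for i, x in enumerate(features):
            p = likelihoods[c][i]
            log_post[c] += np.log(p if x else 1.0 - p)
    return int(np.argmax(log_post))

# Two made-up clades; the expert rule "spacer 1 present suggests clade 0"
# is encoded as a prior favoring class 0.
rule_prior = np.array([0.7, 0.3])
likelihoods = [[0.9, 0.2], [0.4, 0.8]]  # P(spacer i present | clade)
print(classify([1, 0], rule_prior, likelihoods))  # -> 0
```

    When the data strongly contradict the rules, the likelihood term dominates the prior, which is how an incomplete or ambiguous rule set gets refined.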

  13. Dealing with difficult deformations: construction of a knowledge-based deformation atlas

    NASA Astrophysics Data System (ADS)

    Thorup, S. S.; Darvann, T. A.; Hermann, N. V.; Larsen, P.; Ólafsdóttir, H.; Paulsen, R. R.; Kane, A. A.; Govier, D.; Lo, L.-J.; Kreiborg, S.; Larsen, R.

    2010-03-01

    Twenty-three Taiwanese infants with unilateral cleft lip and palate (UCLP) were CT-scanned before lip repair at the age of 3 months, and again after lip repair at the age of 12 months. In order to evaluate the surgical result, detailed point correspondence between pre- and post-surgical images was needed. We have previously demonstrated that non-rigid registration using B-splines is able to provide automated determination of point correspondences in populations of infants without cleft lip. However, this type of registration fails when applied to the task of determining the complex deformation from before to after lip closure in infants with UCLP. The purpose of the present work was to show that use of prior information about typical deformations due to lip closure, through the construction of a knowledge-based atlas of deformations, could overcome the problem. Initially, mean volumes (atlases) for the pre- and post-surgical populations, respectively, were automatically constructed by non-rigid registration. An expert placed corresponding landmarks in the cleft area in the two atlases; this provided prior information used to build a knowledge-based deformation atlas. We model the change from pre- to post-surgery using thin-plate spline warping. The registration results are convincing and represent a first move towards an automatic registration method for dealing with difficult deformations due to this type of surgery.
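
    The landmark-driven thin-plate spline warping step can be sketched with SciPy's RBF interpolator (assuming SciPy 1.7 or later, where the thin-plate spline kernel is available). The landmark coordinates here are invented, not the UCLP atlas landmarks.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding landmarks in the pre-surgical (src) and post-surgical (dst)
# atlases; here only the central point is displaced. Coordinates are toy data.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
dst = src + np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [0.0, 0.0], [0.1, 0.0]])

# One thin-plate spline interpolant mapping src coordinates to dst coordinates;
# the default smoothing of 0 interpolates the landmarks exactly.
warp = RBFInterpolator(src, dst, kernel="thin_plate_spline")
print(warp(np.array([[0.5, 0.5]])))  # the displaced landmark maps exactly
```

    Applying `warp` to every voxel coordinate of the pre-surgical atlas yields the dense deformation that the landmark pairs imply.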

  14. Scaling up explanation generation: Large-scale knowledge bases and empirical studies

    SciTech Connect

    Lester, J.C.; Porter, B.W.

    1996-12-31

    To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited. This paper reports on a seven year effort to empirically study explanation generation from semantically rich, large-scale knowledge bases. We first describe Knight, a robust explanation system that constructs multi-sentential and multi-paragraph explanations from the Biology Knowledge Base, a large-scale knowledge base in the domain of botanical anatomy, physiology, and development. We then introduce the Two Panel evaluation methodology and describe how Knight's performance was assessed with this methodology in the most extensive empirical evaluation conducted on an explanation system. In this evaluation, Knight scored within "half a grade" of domain experts, and its performance exceeded that of one of the domain experts.

  15. RIBOWEB: linking structural computations to a knowledge base of published experimental data.

    PubMed

    Chen, R O; Felciano, R; Altman, R B

    1997-01-01

    The world wide web (WWW) has become critical for storing and disseminating biological data. It offers an additional opportunity, however, to support distributed computation and sharing of results. Currently, computational analysis tools are often separated from the data in a manner that makes iterative hypothesis testing cumbersome. We hypothesize that the cycle of scientific reasoning (using data to build models, and evaluating models in light of data) can be facilitated with resources that link computations with semantic models of the data. Riboweb is an on-line knowledge-based resource that supports the creation of three-dimensional models of the 30S ribosomal subunit. It has three components: (I) a knowledge base containing representations of the essential physical components and published structural data, (II) computational modules that use the knowledge base to build or analyze structural models, and (III) a web-based user interface that supports multiple users, sessions and computations. We have built a prototype of Riboweb, and have used it to refine a rough model of the central domain of the 30S subunit from E. coli. Our results suggest that sophisticated and integrated computational capabilities can be delivered to biologists using this simple three-component architecture. PMID:9322019

  16. Knowledge-based zonal grid generation for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automation of flow field zoning in two dimensions is an important step towards reducing the difficulty of three-dimensional grid generation in computational fluid dynamics. Using a knowledge-based approach makes sense, but problems arise which are caused by aspects of zoning involving perception, lack of expert consensus, and design processes. These obstacles are overcome by means of a simple shape and configuration language, a tunable zoning archetype, and a method of assembling plans from selected, predefined subplans. A demonstration system for knowledge-based two-dimensional flow field zoning has been successfully implemented and tested on representative aerodynamic configurations. The results show that this approach can produce flow field zonings that are acceptable to experts with differing evaluation criteria.

  17. A specialized framework for Medical Diagnostic Knowledge Based Systems.

    PubMed Central

    Lanzola, G.; Stefanelli, M.

    1991-01-01

    For a knowledge based system (KBS) to exhibit intelligent behavior, it must be endowed not only with domain knowledge but also with knowledge representing the expert's strategies. The elicitation task is inherently difficult for strategic knowledge, because strategy is often tacit, and, even when it has been made explicit, it is not easy to describe it in a form that may be directly translated and implemented into a program. This paper describes a Specialized Framework for Medical Diagnostic Knowledge Based Systems able to help an expert in the process of building KBSs in a medical domain. The framework is based on an epistemological model of diagnostic reasoning which has proved helpful in describing the diagnostic process in terms of the tasks of which it is composed. PMID:1807566

  18. Automated annual cropland mapping using knowledge-based temporal features

    NASA Astrophysics Data System (ADS)

    Waldner, François; Canto, Guadalupe Sepulcre; Defourny, Pierre

    2015-12-01

    Global, timely, accurate and cost-effective cropland mapping is a prerequisite for reliable crop condition monitoring. This article presents a simple and comprehensive methodology capable of meeting the requirements of operational cropland mapping by proposing (1) five knowledge-based temporal features that remain stable over time, (2) a cleaning method that discards misleading pixels from a baseline land cover map and (3) a classifier that delivers high accuracy cropland maps (> 80%). This was demonstrated over four contrasted agrosystems in Argentina, Belgium, China and Ukraine. It was found that the quality and accuracy of the baseline affect the certainty of the classification more than the classification output itself. In addition, it was shown that interpolation of the knowledge-based features increases the stability of the classifier, allowing for its re-use from year to year without recalibration. Hence, the method shows potential for application at larger scale as well as for delivering cropland maps in near real time.
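
    Features of the general kind described, per-pixel temporal statistics of a vegetation-index time series, can be sketched as below. The NDVI values are synthetic, and the paper's five knowledge-based features differ in detail.

```python
import numpy as np

# One pixel's NDVI time series over a growing season (synthetic values).
ndvi = np.array([0.2, 0.25, 0.4, 0.7, 0.8, 0.6, 0.3, 0.2])

# Simple temporal features of the kind a knowledge-based cropland classifier
# could use: extremes, seasonal amplitude, and mean greenness.
features = {
    "maximum": float(ndvi.max()),
    "minimum": float(ndvi.min()),
    "amplitude": float(ndvi.max() - ndvi.min()),
    "mean": float(ndvi.mean()),
}
print(round(features["amplitude"], 2))  # -> 0.6
```

    A large seasonal amplitude is a classic cropland signal (bare soil to full canopy and back), which is why such features stay stable from year to year.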

  19. Managing Project Landscapes in Knowledge-Based Enterprises

    NASA Astrophysics Data System (ADS)

    Stantchev, Vladimir; Franke, Marc Roman

    Knowledge-based enterprises are typically conducting a large number of research and development projects simultaneously. This is a particularly challenging task in complex and diverse project landscapes. Project Portfolio Management (PPM) can be a viable framework for knowledge and innovation management in such landscapes. A standardized process with defined functions such as project data repository, project assessment, selection, reporting, and portfolio reevaluation can serve as a starting point. In this work we discuss the benefits a multidimensional evaluation framework can provide for knowledge-based enterprises. Furthermore, we describe a knowledge and learning strategy and process in the context of PPM and evaluate their practical applicability at different stages of the PPM process.

  20. A knowledge-based system for prototypical reasoning

    NASA Astrophysics Data System (ADS)

    Lieto, Antonio; Minieri, Andrea; Piana, Alberto; Radicioni, Daniele P.

    2015-04-01

    In this work we present a knowledge-based system equipped with a hybrid, cognitively inspired architecture for the representation of conceptual information. The proposed system aims at extending the classical representational and reasoning capabilities of ontology-based frameworks towards the realm of prototype theory. It is based on a hybrid knowledge base, composed of a classical symbolic component (grounded on a formal ontology) combined with a typicality-based one (grounded on the conceptual spaces framework). The resulting system attempts to reconcile the heterogeneous approach to concepts in Cognitive Science with the dual process theories of reasoning and rationality. The system has been experimentally assessed in a conceptual categorisation task in which common sense linguistic descriptions were given as input and the corresponding target concepts had to be identified. The results show that the proposed solution substantially extends the representational and reasoning 'conceptual' capabilities of standard ontology-based systems.

  1. MetaShare: Enabling Knowledge-Based Data Management

    NASA Astrophysics Data System (ADS)

    Pennington, D. D.; Salayandia, L.; Gates, A.; Osuna, F.

    2013-12-01

    MetaShare is a free and open source knowledge-based system for supporting data management planning, now required by some agencies and publishers. MetaShare supports users as they describe the types of data they will collect, expected standards, and expected policies for sharing. MetaShare's semantic model captures relationships between disciplines, tools, data types, data formats, and metadata standards. As the user plans their data management activities, MetaShare recommends choices based on practices and decisions from a community that has used the system for similar purposes, and extends the knowledge base to capture new relationships. The MetaShare knowledge base is being seeded with information for geoscience and environmental science domains, and is currently undergoing testing at the University of Texas at El Paso. Through time and usage, it is expected to grow to support a variety of research domains, enabling community-based learning of data management practices. Knowledge of a user's choices during the planning phase can be used to support other tasks in the data life cycle, e.g., collecting, disseminating, and archiving data. A key barrier to scientific data sharing is the lack of sufficient metadata that provides context under which data were collected. The next phase of MetaShare development will automatically generate data collection instruments with embedded metadata and semantic annotations based on the information provided during the planning phase. While not comprehensive, this metadata will be sufficient for discovery and will enable users to focus on more detailed descriptions of their data. Details are available at: Salayandia, L., Pennington, D., Gates, A., and Osuna, F. (accepted). MetaShare: From data management plans to knowledge base systems. AAAI Fall Symposium Series Workshop on Discovery Informatics, November 15-17, 2013, Arlington, VA.
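
    The recommendation idea, suggesting choices from what a community of similar users decided before, can be sketched with a toy relationship table. The disciplines and metadata standards below are invented examples, not MetaShare's actual semantic model.

```python
from collections import Counter, defaultdict

# Toy record of prior users' (discipline, metadata standard) choices; a real
# MetaShare-style model also relates tools, data types, and formats.
choices = [
    ("geoscience", "FGDC"), ("geoscience", "ISO 19115"),
    ("geoscience", "FGDC"), ("environmental", "EML"),
]

by_discipline = defaultdict(Counter)
for discipline, standard in choices:
    by_discipline[discipline][standard] += 1

def recommend(discipline):
    """Most frequently chosen metadata standard for a discipline."""
    return by_discipline[discipline].most_common(1)[0][0]

print(recommend("geoscience"))  # -> FGDC
```

    Each new user's choice feeds back into the counts, which is the community-based learning the record describes.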

  2. Knowledge-based interpretation of outdoor natural color scenes

    SciTech Connect

    Ohta, Y.

    1985-01-01

    One of the major targets in vision research is to develop a total vision system starting from images to a symbolic description, utilizing various knowledge sources. This book demonstrates a knowledge-based image interpretation system that analyzes natural color scenes. Topics covered include color information for region segmentation, preliminary segmentation of color images, and a bottom-up and top-down region analyzer.

  3. A knowledge based model of electric utility operations. Final report

    SciTech Connect

    1993-08-11

    This report consists of an appendix to provide a documentation and help capability for an analyst using the developed expert system of electric utility operations running in CLIPS. This capability is provided through a separate package running under the WINDOWS Operating System and keyed to provide displays of text, graphics and mixed text and graphics that explain and elaborate on the specific decisions being made within the knowledge based expert system.

  4. Current and future trends in metagenomics : Development of knowledge bases

    NASA Astrophysics Data System (ADS)

    Mori, Hiroshi; Yamada, Takuji; Kurokawa, Ken

    Microbes are essential for every part of life on Earth. Numerous microbes inhabit the biosphere, many of which are uncharacterized or uncultivable. They form complex microbial communities that deeply affect their surrounding environments. Metagenome analysis provides a radically new way of examining such complex microbial communities without isolation or cultivation of individual community members. In this article, we present a brief discussion of metagenomics and the development of knowledge bases, and also discuss future trends in metagenomics.

  5. A knowledge-based expert system for inferring vegetation characteristics

    NASA Technical Reports Server (NTRS)

    Kimes, Daniel S.; Harrison, Patrick R.; Ratcliffe, P. A.

    1991-01-01

    A prototype knowledge-based expert system VEG is presented that focuses on extracting spectral hemispherical reflectance using any combination of nadir and/or directional reflectance data as input. The system is designed to facilitate expansion to handle other inferences regarding vegetation properties such as total hemispherical reflectance, leaf area index, percent ground cover, photosynthetic capacity, and biomass. This approach is more robust and accurate than conventional extraction techniques previously developed.

  6. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul

    1994-01-01

    This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called Declarative, and Hybrid. Software architectures were sought, employing the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common over a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered that particular processing methods from the stochastic and knowledge-based worlds are duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.

  7. ISPE: A knowledge-based system for fluidization studies

    SciTech Connect

    Reddy, S.

    1991-01-01

    Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all specified "goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
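
    The prepare-execute-analyze-modify cycle that such an assistant automates can be sketched as a generic loop. Here `run_simulation`, `goals_met`, and `adjust` are hypothetical stand-ins for ASPEN execution, goal checking, and input-file modification; the toy numbers are invented.

```python
# Generic sketch of the iterate-until-goals-met simulation loop.
def optimize(params, run_simulation, goals_met, adjust, max_iter=20):
    for _ in range(max_iter):
        results = run_simulation(params)
        if goals_met(results):
            return params, results
        params = adjust(params, results)  # modify the "input data file"
    raise RuntimeError("goals not met within iteration limit")

# Toy stand-ins: increase a flow rate until the simulated yield hits a target.
params, results = optimize(
    {"flow": 1.0},
    run_simulation=lambda p: {"yield": 0.4 * p["flow"]},
    goals_met=lambda r: r["yield"] >= 1.0,
    adjust=lambda p, r: {"flow": p["flow"] * 1.5},
)
print(params["flow"])  # -> 3.375
```

    A knowledge based system earns its keep in the `adjust` step, where rules and heuristics decide how to modify the inputs rather than blindly scaling them.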

  8. The Knowledge Base in Education Administration: Did NCATE Open a Pandora's Box?

    ERIC Educational Resources Information Center

    Achilles, C. M.; DuVall, L.

    The controversial nature of the knowledge base of educational administration is discussed in this paper. Included are a definition of professionalism, a discussion of how to build and develop a knowledge base, and a review of the obstacles to knowledge base development. Elements of a consensual knowledge base include theory, practice, and other…

  9. Building an organized knowledge base: Concept mapping and achievement in secondary school physics

    NASA Astrophysics Data System (ADS)

    Pankratius, William J.

    Direct teaching of problem-solving methods to high school physics students met with little success. Expert problem solving depended upon an organized knowledge base. Concept mapping was found to be a key to organizing an effective knowledge base. The investigation of the effect of the degree of concept mapping on achievement was the purpose of this study. Six intact high school physics classes, taught by this investigator, took part in the study. Two classes were control groups and received standard instruction. Four classes received six weeks of concept-mapping instruction prior to the unit under study. Two of these four classes were the low-level treatment group and were required to submit concept maps at the conclusion of the instruction. The other two classes were the high-level treatment group and were required to submit concept maps at the beginning and at the conclusion of the unit under study. One class from each treatment group took a pretest prior to instruction. An analysis of the posttest results revealed no pretest sensitization. A one-way analysis of covariance indicated a significant main effect for the treatment level at the p < 0.05 level. A pair of single-df comparisons of the adjusted treatment means resulted in significant differences (p < 0.05) between the control group and the average of the treatment means as well as between the two experimental groups. It can be concluded that for this sample (upper-middle-class high school physics students) mapping concepts prior to, during, and subsequent to instruction led to greater achievement as measured by posttest scores.

  10. Using the DOE Knowledge Base for Special Event Analysis

    SciTech Connect

    Armstrong, H.M.; Harris, J.M.; Young, C.J.

    1998-10-20

    The DOE Knowledge Base is a library of detailed information whose purpose is to support the United States National Data Center (USNDC) in its mission to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). One of the important tasks which the USNDC must accomplish is to periodically perform detailed analysis of events of high interest, so-called "Special Events", to provide the national authority with information needed to make policy decisions. In this paper we investigate some possible uses of the Knowledge Base for Special Event Analysis (SEA), and make recommendations for improving Knowledge Base support for SEA. To analyze an event in detail, there are two basic types of data which must be used: sensor-derived data (waveforms, arrivals, events, etc.) and regionalized contextual data (known sources, geological characteristics, etc.). Currently there is no single package which can provide full access to both types of data, so for our study we use a separate package for each: MatSeis, the Sandia Labs-developed MATLAB-based seismic analysis package, for waveform data analysis, and ArcView, an ESRI product, for contextual data analysis. Both packages are well-suited to prototyping because they provide a rich set of currently available functionality and yet are also flexible and easily extensible. Using these tools and Phase I Knowledge Base data sets, we show how the Knowledge Base can improve both the speed and the quality of SEA. Empirically-derived interpolated correction information can be accessed to improve both location estimates and associated error estimates. This information can in turn be used to identify any known nearby sources (e.g. mines, volcanoes), which may then trigger specialized processing of the sensor data. Based on the location estimate, preferred magnitude formulas and discriminants can be retrieved, and any known blockages can be identified to prevent miscalculations.
Relevant historic events can be identified either by

  11. Building validation tools for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.

    1987-01-01

    The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (higher-order language), is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.
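
    One of the checks named above, a Logic Check for contradictory rules, can be sketched as follows. The rule encoding (condition, conclusion pairs with a "not " prefix for negation) and the rule set are invented for illustration, not EVA's meta-language.

```python
# Toy rule base: same condition, contradictory conclusions in the first pair.
rules = [
    ("pressure_high", "open_valve"),
    ("pressure_high", "not open_valve"),
    ("temp_low", "start_heater"),
]

def find_contradictions(rules):
    """Flag conditions whose successive conclusions contradict each other."""
    seen = {}
    conflicts = []
    for cond, concl in rules:
        # Negation of the current conclusion, in this toy encoding.
        neg = concl[4:] if concl.startswith("not ") else "not " + concl
        if seen.get(cond) == neg:
            conflicts.append(cond)
        seen[cond] = concl
    return conflicts

print(find_contradictions(rules))  # -> ['pressure_high']
```

    Real validators such as EVA work over the shell's rule language and also cover subsumption, unreachable rules, and omissions, but the pairwise contradiction test above is the simplest instance of the idea.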

  12. Knowledge-based GIS techniques applied to geological engineering

    USGS Publications Warehouse

    Usery, E. Lynn; Altheide, Phyllis; Deister, Robin R.P.; Barr, David J.

    1988-01-01

    A knowledge-based geographic information system (KBGIS) approach which requires development of a rule base for both GIS processing and for the geological engineering application has been implemented. The rule bases are implemented in the Goldworks expert system development shell interfaced to the Earth Resources Data Analysis System (ERDAS) raster-based GIS for input and output. GIS analysis procedures including recoding, intersection, and union are controlled by the rule base, and the geological engineering map product is generated by the expert system. The KBGIS has been used to generate a geological engineering map of Creve Coeur, Missouri.

  13. Modeling materials failures for knowledge based system applications

    SciTech Connect

    Roberge, P.R.

    1996-12-31

    The evaluation of the probability that given premises play a role in a final outcome can only be done when the parameters involved and their interactions are properly elucidated. But, for complex engineering situations, this often appears as an insurmountable task. The prediction of failures for the optimization of inspection and maintenance is such an example of complexity. After reviewing the models of expertise commonly used by knowledge engineers, this paper presents an object-oriented framework to guide the elicitation and organization of lifetime information for knowledge based system applications.

  14. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  15. Knowledge-based potential functions in protein design.

    PubMed

    Russ, William P; Ranganathan, Rama

    2002-08-01

    Predicting protein sequences that fold into specific native three-dimensional structures is a problem of great potential complexity. Although the complete solution is ultimately rooted in understanding the physical chemistry underlying the complex interactions between amino acid residues that determine protein stability, recent work shows that empirical information about these first principles is embedded in the statistics of protein sequence and structure databases. This review focuses on the use of 'knowledge-based' potentials derived from these databases in designing proteins. In addition, the data suggest how the study of these empirical potentials might impact our fundamental understanding of the energetic principles of protein structure. PMID:12163066
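
    The inverse-Boltzmann construction commonly used to derive such knowledge-based potentials can be sketched in a few lines: a contact energy is scored as E(a,b) = -kT ln(f_obs(a,b) / f_exp(a,b)), comparing observed pairing frequencies against a random-mixing expectation. The contact list below is toy data, not a real structure database.

```python
import math
from collections import Counter

# Toy database of residue-residue contacts (one-letter amino acid codes).
contacts = [("L", "L"), ("L", "L"), ("L", "V"), ("K", "E"), ("K", "E"), ("E", "E")]

pair_counts = Counter(tuple(sorted(p)) for p in contacts)
total = sum(pair_counts.values())
res_counts = Counter(r for p in contacts for r in p)
n_res = sum(res_counts.values())

def energy(a, b, kT=1.0):
    """Statistical contact energy; negative means more favorable than chance."""
    f_obs = pair_counts[tuple(sorted((a, b)))] / total
    f_exp = (res_counts[a] / n_res) * (res_counts[b] / n_res)
    if a != b:
        f_exp *= 2  # two orderings of an unlike pair
    return -kT * math.log(f_obs / f_exp)

print(round(energy("K", "E"), 3))  # -> -1.099 (K-E contacts are enriched)
```

    In design, candidate sequences are scored by summing such terms over all contacts of the target structure; the review's point is that database statistics stand in for the underlying physical chemistry.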

  16. Building a knowledge based economy in Russia using guided entrepreneurship

    NASA Astrophysics Data System (ADS)

    Reznik, Boris N.; Daniels, Marc; Ichim, Thomas E.; Reznik, David L.

    2005-06-01

    Despite advanced scientific and technological (S&T) expertise, the Russian economy is presently based upon manufacturing and raw material exports. Currently, governmental incentives are attempting to leverage the existing scientific infrastructure through the concept of building a Knowledge Based Economy. However, socio-economic changes do not occur solely by decree, but by altering the approach to the market. Here we describe the "Guided Entrepreneurship" plan, a series of steps needed to generate an army of entrepreneurs who initiate a chain reaction of S&T-driven growth. The situation in Russia is placed in the framework of other areas where Guided Entrepreneurship has been successful.

  17. SAFOD Brittle Microstructure and Mechanics Knowledge Base (BM2KB)

    NASA Astrophysics Data System (ADS)

    Babaie, Hassan A.; Broda Cindi, M.; Hadizadeh, Jafar; Kumar, Anuj

    2013-07-01

    Scientific drilling near Parkfield, California has established the San Andreas Fault Observatory at Depth (SAFOD), which provides the solid earth community with short range geophysical and fault zone material data. The BM2KB ontology was developed in order to formalize the knowledge about brittle microstructures in the fault rocks sampled from the SAFOD cores. A knowledge base, instantiated from this domain ontology, stores and presents the observed microstructural and analytical data with respect to implications for brittle deformation and mechanics of faulting. These data can be searched on the knowledge base's Web interface by selecting a set of terms (classes, properties) from different drop-down lists that are dynamically populated from the ontology. In addition to this general search, a query can also be conducted to view data contributed by a specific investigator. A search by sample is done using the EarthScope SAFOD Core Viewer that allows a user to locate samples on high resolution images of core sections belonging to different runs and holes. The class hierarchy of the BM2KB ontology was initially designed using the Unified Modeling Language (UML), which was used as a visual guide to develop the ontology in OWL applying the Protégé ontology editor. Various Semantic Web technologies such as the RDF, RDFS, and OWL ontology languages, SPARQL query language, and Pellet reasoning engine, were used to develop the ontology. An interactive Web application interface was developed through Jena, a java based framework, with AJAX technology, jsp pages, and java servlets, and deployed via an Apache tomcat server. The interface allows the registered user to submit data related to their research on a sample of the SAFOD core. The submitted data, after initial review by the knowledge base administrator, are added to the extensible knowledge base and become available in subsequent queries to all types of users. The interface facilitates inference capabilities in the

  18. Three forms of assessment of prior knowledge, and improved performance following an enrichment programme, of English second language biology students within the context of a marine theme

    NASA Astrophysics Data System (ADS)

    Feltham, Nicola F.; Downs, Colleen T.

    2002-02-01

    The Science Foundation Programme (SFP) was launched in 1991 at the University of Natal, Pietermaritzburg, South Africa in an attempt to equip a selected number of matriculants from historically disadvantaged schools with the skills, resources and self-confidence needed to embark on their tertiary studies. Previous research within the SFP biology component suggests that a major contributor to poor achievement and low retention rates among English second language (ESL) students in the Life Sciences is the inadequate background knowledge in natural history. In this study, SFP student background knowledge was assessed along a continuum of language dependency using a set of three probes. Improved student performance in each of the respective assessments examined the extent to which a sound natural history background facilitated meaningful learning relative to ESL proficiency. Student profiles and attitudes to biology were also examined. Results indicated that students did not perceive language to be a problem in biology. However, analysis of the student performance in the assessment probes indicated that, although the marine course provided the students with the background knowledge that they were initially lacking, they continued to perform better in the drawing and MCQ tools in the post-tests, suggesting that it is their inability to express themselves in the written form that hampers their development. These results have implications for curriculum development within the constructivist framework of the SFP.

  19. Knowledge-Based Parallel Performance Technology for Scientific Application Competitiveness Final Report

    SciTech Connect

    Malony, Allen D; Shende, Sameer

    2011-08-15

    The primary goal of the University of Oregon's DOE "competitiveness" project was to create performance technology that embodies and supports knowledge of performance data, analysis, and diagnosis in parallel performance problem solving. The target of our development activities was the TAU Performance System and the technology accomplishments reported in this and prior reports have all been incorporated in the TAU open software distribution. In addition, the project has been committed to maintaining strong interactions with the DOE SciDAC Performance Engineering Research Institute (PERI) and Center for Technology for Advanced Scientific Component Software (TASCS). This collaboration has proved valuable for translation of our knowledge-based performance techniques to parallel application development and performance engineering practice. Our outreach has also extended to the DOE Advanced CompuTational Software (ACTS) collection and project. Throughout the project we have participated in the PERI and TASCS meetings, as well as the ACTS annual workshops.

  20. Strong earthquakes knowledge base for calibrating fast damage assessment systems

    NASA Astrophysics Data System (ADS)

    Frolova, N.; Kozlov, M.; Larionov, V.; Nikolaev, A.; Suchshev, S.; Ugarov, A.

    2003-04-01

    At present, systems for fast damage and loss assessment due to strong earthquakes may use as input data: (1) information about event parameters (magnitude, depth, and coordinates) issued by Alert Seismological Surveys; (2) waveform data obtained by strong-motion seismograph networks; (3) high-resolution space images of the affected area obtained before and after the event. When data about the magnitude, depth, and location of an event are used to simulate possible consequences, the reliability of the estimates depends on the completeness and reliability of databases on elements at risk (population and built environment); the reliability of vulnerability functions of elements at risk; and errors in the determination of strong-earthquake parameters by Alert Seismological Surveys. Some of these factors may be taken into account by calibrating the system against well-documented past strong earthquakes. This paper describes the structure and content of a knowledge base of well-documented strong events that occurred in the last century. It contains descriptions of more than 1000 events. As far as losses due to earthquakes are concerned, the data are distributed almost homogeneously; most events are in the magnitude range 6.5-7.9. Software was created to accumulate and analyze information about these events' source parameters and social consequences. The knowledge base is used to calibrate the Fast Damage Assessment Tool, which is at present on duty within the framework of the EDRIM Program. It is also used as additional information by experts who analyze the results of computations.

  1. Knowledge-based assistant for ultrasonic inspection in metals

    NASA Astrophysics Data System (ADS)

    Franklin, Reynold; Halabe, Udaya B.

    1997-12-01

    Ultrasonic testing is a popular nondestructive technique for detecting flaws in metals, composites, and other materials. A major limitation of this technique for successful field implementation is the need for skilled labor to identify an appropriate testing methodology and conduct the inspection. A knowledge-based assistant that can help the inspector choose a suitable testing methodology would greatly reduce the cost of inspection while maintaining reliability. Therefore, a rule-based decision logic that can incorporate the expertise of a skilled operator in choosing a suitable ultrasonic configuration and testing procedure for a given application is explored and reported in this paper. A personal computer (PC) based expert system shell, VP Expert, is used to encode the rules and assemble the knowledge to address the different methods in ultrasonic inspection of metals. The expert system is configured in a question-answer format. Since several factors (such as frequency, couplant, and sensors) influence the inspection, appropriate decisions have to be made about each factor depending on the type of inspection method and the intended use of the metal. This knowledge base will help in identifying methodologies for detecting flaws and cracks and for measuring thickness, which will lead to increased safety.

  2. Portable Knowledge-Based Diagnostic And Maintenance Systems

    NASA Astrophysics Data System (ADS)

    Darvish, John; Olson, Noreen S.

    1989-03-01

    It is difficult to diagnose faults and maintain weapon systems because (1) they are highly complex pieces of equipment composed of multiple mechanical, electrical, and hydraulic assemblies, and (2) talented maintenance personnel are continuously being lost through the attrition process. To solve this problem, we developed a portable diagnostic and maintenance aid that uses a knowledge-based expert system. This aid incorporates diagnostics, operational procedures, repair and replacement procedures, and regularly scheduled maintenance into one compact, 18-pound graphics workstation. Drawings and schematics can be pulled up from the CD-ROM to assist the operator in answering the expert system's questions. Work for this aid began with the development of the initial knowledge-based expert system in a fast prototyping environment using a LISP machine. The second phase saw the development of a personal computer-based system that used videodisc technology to pictorially assist the operator. The current version of the aid eliminates the high expenses associated with videodisc preparation by scanning in the art work already in the manuals. A number of generic software tools have been developed that streamlined the construction of each iteration of the aid; these tools will be applied to the development of future systems.

  3. Framework Support For Knowledge-Based Software Development

    NASA Astrophysics Data System (ADS)

    Huseth, Steve

    1988-03-01

    The advent of personal engineering workstations has brought substantial information processing power to the individual programmer. Advanced tools and environment capabilities supporting the software lifecycle are just beginning to become generally available. However, many of these tools are addressing only part of the software development problem by focusing on rapid construction of self-contained programs by a small group of talented engineers. Additional capabilities are required to support the development of large programming systems where a high degree of coordination and communication is required among large numbers of software engineers, hardware engineers, and managers. A major player in realizing these capabilities is the framework supporting the software development environment. In this paper we discuss our research toward a Knowledge-Based Software Assistant (KBSA) framework. We propose the development of an advanced framework containing a distributed knowledge base that can support the data representation needs of tools, provide environmental support for the formalization and control of the software development process, and offer a highly interactive and consistent user interface.

  4. Utilizing knowledge-base semantics in graph-based algorithms

    SciTech Connect

    Darwiche, A.

    1996-12-31

    Graph-based algorithms convert a knowledge base with a graph structure into one with a tree structure (a join-tree) and then apply tree-inference on the result. Nodes in the join-tree are cliques of variables and tree-inference is exponential in w*, the size of the maximal clique in the join-tree. A central property of join-trees that validates tree-inference is the running-intersection property: the intersection of any two cliques must belong to every clique on the path between them. We present two key results in connection to graph-based algorithms. First, we show that the running-intersection property, although sufficient, is not necessary for validating tree-inference. We present a weaker property for this purpose, called running-interaction, that depends on non-structural (semantical) properties of a knowledge base. We also present a linear algorithm that may reduce w* of a join-tree, possibly destroying its running-intersection property, while maintaining its running-interaction property and, hence, its validity for tree-inference. Second, we develop a simple algorithm for generating trees satisfying the running-interaction property. The algorithm bypasses triangulation (the standard technique for constructing join-trees) and does not construct a join-tree first. We show that the proposed algorithm may in some cases generate trees that are more efficient than those generated by modifying a join-tree.
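    The running-intersection property described in this abstract is easy to check mechanically. As a rough illustration (not Darwiche's algorithm), the following sketch verifies the property for a join-tree given as an adjacency dict together with a clique assignment; the example tree and clique labels are invented:

```python
from itertools import combinations

def path(tree, start, goal):
    """Return the unique path between two nodes of a tree (adjacency dict)."""
    stack = [(start, [start])]
    while stack:
        node, p = stack.pop()
        if node == goal:
            return p
        for nxt in tree[node]:
            if nxt not in p:
                stack.append((nxt, p + [nxt]))
    return []

def has_running_intersection(tree, cliques):
    """Running-intersection property: for every pair of nodes, the
    intersection of their cliques is contained in every clique on the
    path between them."""
    for a, b in combinations(tree, 2):
        shared = cliques[a] & cliques[b]
        for node in path(tree, a, b):
            if not shared <= cliques[node]:
                return False
    return True
```

    On the chain 1-2-3 with cliques {A,B}, {B,C}, {C,D} the property holds; replacing clique 2 with {C} while clique 3 contains B violates it, since B is shared by cliques 1 and 3 but missing from the clique between them.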

  5. Interactive classification: A technique for acquiring and maintaining knowledge bases

    SciTech Connect

    Finin, T.W.

    1986-10-01

    The practical application of knowledge-based systems, such as in expert systems, often requires the maintenance of large amounts of declarative knowledge. As a knowledge base (KB) grows in size and complexity, it becomes more difficult to maintain and extend. Even someone who is familiar with the knowledge domain, how it is represented in the KB, and the actual contents of the current KB may have severe difficulties in updating it. Even if the difficulties can be tolerated, there is a very real danger that inconsistencies and errors may be introduced into the KB through the modification. This paper describes an approach to this problem based on a tool called an interactive classifier. An interactive classifier uses the contents of the existing KB and knowledge about its representation to help the maintainer describe new KB objects. The interactive classifier will identify the appropriate taxonomic location for the newly described object and add it to the KB. The new object is allowed to be a generalization of existing KB objects, enabling the system to learn more about existing objects.
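    The taxonomic-placement step can be illustrated with a toy subsumption hierarchy. This is only a simplified stand-in for an interactive classifier: it picks the most specific concept whose defining properties the newly described object satisfies (the `TAXONOMY` table and property names are invented):

```python
# Hypothetical taxonomy: each concept maps to its defining properties.
TAXONOMY = {
    "thing":   set(),
    "vehicle": {"moves"},
    "car":     {"moves", "wheels"},
    "boat":    {"moves", "floats"},
}

def classify(properties):
    """Place a newly described object under the most specific concept
    (most defining properties) whose definition it satisfies."""
    candidates = [c for c, p in TAXONOMY.items() if p <= properties]
    return max(candidates, key=lambda c: len(TAXONOMY[c]))

classify({"moves", "wheels", "red"})  # placed under "car"
```

    A real interactive classifier would also dialogue with the maintainer about the extra, unmatched properties ("red" above) to decide whether they warrant a new subconcept.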

  6. Network fingerprint: a knowledge-based characterization of biomedical networks

    PubMed Central

    Cui, Xiuliang; He, Haochen; He, Fuchu; Wang, Shengqi; Li, Fei; Bo, Xiaochen

    2015-01-01

    It can be difficult for biomedical researchers to understand complex molecular networks due to their unfamiliarity with the mathematical concepts employed. To represent molecular networks with clear meanings and familiar forms for biomedical researchers, we introduce a knowledge-based computational framework to decipher biomedical networks by making systematic comparisons to well-studied “basic networks”. A biomedical network is characterized as a spectrum-like vector called “network fingerprint”, which contains similarities to basic networks. This knowledge-based multidimensional characterization provides a more intuitive way to decipher molecular networks, especially for large-scale network comparisons and clustering analyses. As an example, we extracted network fingerprints of 44 disease networks in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The comparisons among the network fingerprints of disease networks revealed informative disease-disease and disease-signaling pathway associations, illustrating that the network fingerprinting framework will lead to new approaches for better understanding of biomedical networks. PMID:26307246
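    The fingerprint idea, a vector of similarities to well-studied basic networks, can be sketched with a deliberately simple similarity measure. The published framework uses richer comparisons; here, purely for illustration, similarity is the histogram intersection of normalized degree distributions:

```python
from collections import Counter

def degree_hist(edges):
    """Normalized degree histogram of an undirected edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist = Counter(deg.values())
    total = sum(hist.values())
    return {k: c / total for k, c in hist.items()}

def similarity(g1, g2):
    """Histogram-intersection similarity of two degree distributions."""
    h1, h2 = degree_hist(g1), degree_hist(g2)
    return sum(min(h1.get(k, 0), h2.get(k, 0)) for k in set(h1) | set(h2))

def fingerprint(network, basic_networks):
    """Fingerprint = vector of similarities to each basic network."""
    return [similarity(network, b) for b in basic_networks]
```

    A star graph then fingerprints as maximally similar to itself and only partially similar to a chain, giving a spectrum-like vector analogous to the one the abstract describes.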

  7. Hospital nurses' use of knowledge-based information resources.

    PubMed

    Tannery, Nancy Hrinya; Wessel, Charles B; Epstein, Barbara A; Gadd, Cynthia S

    2007-01-01

    The purpose of this study was to evaluate the information-seeking practices of nurses before and after access to a library's electronic collection of information resources. This is a pre/post intervention study of nurses at a rural community hospital. The hospital contracted with an academic health sciences library for access to a collection of online knowledge-based resources. Self-report surveys were used to obtain information about nurses' computer use and how they locate and access information to answer questions related to their patient care activities. In 2001, self-report surveys were sent to the hospital's 573 nurses during implementation of access to online resources with a post-implementation survey sent 1 year later. At the initiation of access to the library's electronic resources, nurses turned to colleagues and print textbooks or journals to satisfy their information needs. After 1 year of access, 20% of the nurses had begun to use the library's electronic resources. The study outcome suggests ready access to knowledge-based electronic information resources can lead to changes in behavior among some nurses. PMID:17289463

  8. A proven knowledge-based approach to prioritizing process information

    NASA Technical Reports Server (NTRS)

    Corsberg, Daniel R.

    1991-01-01

    Many space-related processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect is rapid analysis of the changing process information. During a disturbance, this task can overwhelm humans as well as computers. Humans deal with this by applying heuristics in determining significant information. A simple, knowledge-based approach to prioritizing information is described. The approach models those heuristics that humans would use in similar circumstances. The approach described has received two patents and was implemented in the Alarm Filtering System (AFS) at the Idaho National Engineering Laboratory (INEL). AFS was first developed for application in a nuclear reactor control room. It has since been used in chemical processing applications, where it has had a significant impact on control room environments. The approach uses knowledge-based heuristics to analyze data from process instrumentation and respond to that data according to knowledge encapsulated in objects and rules. While AFS cannot perform the complete diagnosis and control task, it has proven to be extremely effective at filtering and prioritizing information. AFS was used for over two years as a first level of analysis for human diagnosticians. Given the approach's proven track record in a wide variety of practical applications, it should be useful in both ground- and space-based systems.
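    The filtering heuristic described here can be approximated by an ordered rule list. This is a hypothetical sketch, not the patented AFS logic: each alarm receives the priority of the first rule it satisfies, and alarms are presented most urgent first (rule thresholds and field names are invented):

```python
# First matching rule wins; lower number = more urgent.
RULES = [
    (lambda a: a["value"] > a["trip_point"], 1),  # hard limit exceeded
    (lambda a: a["rate"] > a["max_rate"], 2),     # fast-moving process variable
    (lambda a: a["value"] > a["warn_point"], 3),  # early warning
]

def prioritize(alarms):
    """Assign each alarm the priority of the first rule it satisfies
    (default 9) and return alarms sorted most-urgent first."""
    def priority(a):
        return next((p for rule, p in RULES if rule(a)), 9)
    return sorted(alarms, key=priority)
```

    In the real system the rules are encapsulated in objects tied to process instrumentation; the point of the sketch is only the first-match prioritization pattern.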

  9. A prototype knowledge-based simulation support system

    SciTech Connect

    Hill, T.R.; Roberts, S.D.

    1987-04-01

    As a preliminary step toward the goal of an intelligent automated system for simulation modeling support, we explore the feasibility of the overall concept by generating and testing a prototypical framework. A prototype knowledge-based computer system was developed to support a senior level course in industrial engineering so that the overall feasibility of an expert simulation support system could be studied in a controlled and observable setting. The system behavior mimics the diagnostic (intelligent) process performed by the course instructor and teaching assistants, finding logical errors in INSIGHT simulation models and recommending appropriate corrective measures. The system was programmed in a non-procedural language (PROLOG) and designed to run interactively with students working on course homework and projects. The knowledge-based structure supports intelligent behavior, providing its users with access to an evolving accumulation of expert diagnostic knowledge. The non-procedural approach facilitates the maintenance of the system and helps merge the roles of expert and knowledge engineer by allowing new knowledge to be easily incorporated without regard to the existing flow of control. The background, features, and design of the system are described and preliminary results are reported. Initial success is judged to demonstrate the utility of the reported approach and support the ultimate goal of an intelligent modeling system which can support simulation modelers outside the classroom environment. Finally, future extensions are suggested.

  10. Big data analytics in immunology: a knowledge-based approach.

    PubMed

    Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow. PMID:25045677

  11. Big Data Analytics in Immunology: A Knowledge-Based Approach

    PubMed Central

    Zhang, Guang Lan

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow. PMID:25045677

  12. Knowledge-based simulation using object-oriented programming

    NASA Technical Reports Server (NTRS)

    Sidoran, Karen M.

    1993-01-01

    Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.

  13. Knowledge-based imaging-sensor fusion system

    NASA Technical Reports Server (NTRS)

    Westrom, George

    1989-01-01

    An imaging system which applies knowledge-based technology to supervise and control both sensor hardware and computation in the imaging system is described. It includes the development of an imaging system breadboard which brings together into one system work that we and others have pursued for LaRC for several years. The goal is to combine Digital Signal Processing (DSP) with Knowledge-Based Processing and also include Neural Net processing. The system is considered a smart camera. Imagine that there is a microgravity experiment on-board Space Station Freedom with a high frame rate, high resolution camera. All the data cannot possibly be acquired from a laboratory on Earth. In fact, only a small fraction of the data will be received. Again, imagine being responsible for some experiments on Mars with the Mars Rover: the data rate is a few kilobits per second for data from several sensors and instruments. Would it not be preferable to have a smart system which would have some human knowledge and yet follow some instructions and attempt to make the best use of the limited bandwidth for transmission? The system concept, current status of the breadboard system and some recent experiments at the Mars-like Amboy Lava Fields in California are discussed.

  14. Knowledge-based inference engine for online video dissemination

    NASA Astrophysics Data System (ADS)

    Zhou, Wensheng; Kuo, C.-C. Jay

    2000-10-01

    To facilitate easy access to rich multimedia information over the Internet, we develop a knowledge-based classification system that supports automatic indexing and filtering based on semantic concepts for the dissemination of online real-time media. Automatic segmentation, annotation, and summarization of media for fast information browsing and updating are achieved at the same time. In the proposed system, a real-time scene-change detection proxy performs an initial video structuring process by splitting a video clip into scenes. Motion and visual features are extracted in real time for every detected scene by using online feature extraction proxies. Higher semantics are then derived through a joint use of low-level features along with inference rules in the knowledge base. Inference rules are derived through a supervised learning process based on representative samples. Online media filtering based on semantic concepts becomes possible by using the proposed video inference engine. Video streams are either blocked or sent to certain channels depending on whether or not the video stream matches the user's profile. The proposed system is extensively evaluated by applying the engine to video of basketball games.
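    The block-or-forward decision this abstract describes amounts to intersecting inferred semantic concepts with a user profile. A minimal sketch, with invented feature and concept names standing in for the paper's learned inference rules:

```python
def infer_concepts(scene_features):
    """Toy inference rules mapping low-level scene features to concepts."""
    concepts = set()
    if scene_features.get("motion") == "high" and scene_features.get("crowd"):
        concepts.add("sports")
    if scene_features.get("scoreboard"):
        concepts.add("basketball")
    return concepts

def route(scene_features, profile):
    """Forward the stream if any inferred concept matches the profile."""
    return "forward" if infer_concepts(scene_features) & profile else "block"
```

    The real engine derives its rules by supervised learning rather than hand-coding, but the routing decision has this same shape.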

  15. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  16. Mindtagger: A Demonstration of Data Labeling in Knowledge Base Construction

    PubMed Central

    Shin, Jaeho; Ré, Christopher; Cafarella, Michael

    2016-01-01

    End-to-end knowledge base construction systems using statistical inference are enabling more people to automatically extract high-quality domain-specific information from unstructured data. As a result of deploying the DeepDive framework across several domains, we found new challenges in debugging and improving such end-to-end systems to construct high-quality knowledge bases. DeepDive has an iterative development cycle in which users improve the data. To help our users, we needed to develop principles for analyzing the system's error as well as provide tooling for inspecting and labeling various data products of the system. We created guidelines for error analysis modeled after our colleagues' best practices, in which data labeling plays a critical role in every step of the analysis. To enable more productive and systematic data labeling, we created Mindtagger, a versatile tool that can be configured to support a wide range of tasks. In this demonstration, we show in detail what data labeling tasks are modeled in our error analysis guidelines and how each of them is performed using Mindtagger. PMID:27144082

  17. A knowledge-based care protocol system for ICU.

    PubMed

    Lau, F; Vincent, D D

    1995-01-01

    There is a growing interest in using care maps in ICU. So far, the emphasis has been on developing the critical path, problem/outcome, and variance reporting for specific diagnoses. This paper presents a conceptual knowledge-based care protocol system design for the ICU. It is based on the manual care map currently in use for managing myocardial infarction in the ICU of the Sturgeon General Hospital in Alberta. The proposed design uses expert rules, object schemas, case-based reasoning, and quantitative models as sources of its knowledge. Also being developed is a decision model with explicit linkages for outcome-process-measure from the care map. The resulting system is intended as a bedside charting and decision-support tool for caregivers. Proposed usage includes charting by acknowledgment, generation of alerts, and critiques on variances/events recorded, recommendations for planned interventions, and comparison with historical cases. Currently, a prototype is being developed on a PC-based network with Visual Basic, Level-Expert Object, and xBase. A clinical trial is also planned to evaluate whether this knowledge-based care protocol can reduce the length of stay of patients with myocardial infarction in the ICU. PMID:8591604
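    The variance-reporting part of such a care map can be sketched as a comparison of recorded events against a planned critical path. Step names and timings below are invented, not taken from the Sturgeon General Hospital care map:

```python
# Planned critical path: expected hour offset from admission (illustrative).
CARE_MAP = {"admission": 0, "ecg": 1, "thrombolysis": 2}

def variances(events):
    """Return care-map steps that were missed or recorded later than planned."""
    out = []
    for step, planned in CARE_MAP.items():
        actual = events.get(step)
        if actual is None:
            out.append((step, "missed"))
        elif actual > planned:
            out.append((step, f"late by {actual - planned}h"))
    return out
```

    The variance list would feed the system's alert and critique functions; the proposed design layers expert rules and case-based reasoning on top of exactly this kind of outcome-process comparison.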

  18. Automatic in-syringe dispersive liquid-liquid microextraction of ⁹⁹Tc from biological samples and hospital residues prior to liquid scintillation counting.

    PubMed

    Villar, Marina; Avivar, Jessica; Ferrer, Laura; Borràs, Antoni; Vega, Fernando; Cerdà, Víctor

    2015-07-01

    A new approach exploiting in-syringe dispersive liquid-liquid microextraction (DLLME) for (99)Tc extraction and preconcentration from biological samples, i.e., urine and saliva, and from liquid residues from treated patients is presented. (99)Tc is a beta emitter with a long half-life (2.111 × 10(5) years) and is mobile in the different environmental compartments. One source of this radionuclide is the use of its parent (99m)Tc in medical diagnosis. For the first time a critical comparison between extractants and disperser solvents for (99)Tc DLLME is presented, e.g., tributyl phosphate (TBP), trioctylmethylammonium chloride (Aliquat®336), and triisooctylamine (TiOA) as extractants in apolar solvents such as xylene and dodecane, and disperser solvents such as acetone, acetonitrile, ethanol, methanol, 1-propanol, and 2-propanol. The system was optimized by experimental design, and 22.5% Aliquat®336 in acetone was selected, Aliquat®336 as the extractant and acetone as the disperser. Off-line detection was performed using a liquid scintillation counter. The present method has a (99)Tc minimum detectable activity (MDA) of 0.075 Bq with a high extraction/preconcentration frequency (8 h(-1)). Urine, saliva, and hospital residues were satisfactorily analyzed, with recoveries of 82-119%. Thus, the proposed system is a powerful automatic tool to monitor the entry of (99)Tc into the environment. Graphical Abstract (99m)Tc is widely used in nuclear medicine for diagnosis. Its daughter (99)Tc is automatically monitored in biological samples from treated patients by in-syringe dispersive liquid-liquid microextraction. PMID:26007698

  19. Baseline levels of bioaerosols and volatile organic compounds around a municipal waste incinerator prior to the construction of a mechanical-biological treatment plant.

    PubMed

    Vilavert, Lolita; Nadal, Martí; Inza, Isabel; Figueras, María J; Domingo, José L

    2009-09-01

    New waste management programs are currently aimed at developing alternative treatment technologies such as mechanical-biological treatment (MBT) and composting plants. However, there is still high uncertainty concerning the chemical and microbiological risks for human health, not only for workers of these facilities, but also for the population living in the neighborhood. A new MBT plant is planned to be constructed adjacent to a municipal solid waste incinerator (MSWI) in Tarragona (Catalonia, Spain). In order to evaluate its potential impact and to differentiate the impacts of the MSWI from those of the MBT when the latter is operative, a pre-operational survey was initiated by determining the concentrations of 20 volatile organic compounds (VOCs) and bioaerosols (total bacteria, gram-negative bacteria, fungi and Aspergillus fumigatus) in airborne samples around the MSWI. The results indicated that the current concentrations of bioaerosols (ranges: 382-3882, 18-790, 44-926, and <1-7 CFU/m(3) for fungi at 25 degrees C, fungi at 37 degrees C, total bacteria, and gram-negative bacteria, respectively) and VOCs (ranging from 0.9 to 121.2 microg/m(3)) are very low in comparison to reported levels in indoor and outdoor air in composting and MBT plants, as well as in urban and industrial zones. With the exception of total bacteria, no correlations were observed between the environmental concentrations of biological agents and the direction/distance from the facility. However, total bacteria presented significantly higher levels downwind. Moreover, a non-significant increase of VOCs was detected in sites closer to the incinerator, which means that the MSWI could have a very minor impact on the surrounding environment. PMID:19346120

  20. Baseline levels of bioaerosols and volatile organic compounds around a municipal waste incinerator prior to the construction of a mechanical-biological treatment plant

    SciTech Connect

    Vilavert, Lolita; Nadal, Marti; Inza, Isabel; Figueras, Maria J.; Domingo, Jose L.

    2009-09-15

    New waste management programs are currently aimed at developing alternative treatment technologies such as mechanical-biological treatment (MBT) and composting plants. However, there is still high uncertainty concerning the chemical and microbiological risks for human health, not only for workers of these facilities, but also for the population living in the neighborhood. A new MBT plant is planned to be constructed adjacent to a municipal solid waste incinerator (MSWI) in Tarragona (Catalonia, Spain). In order to evaluate its potential impact and to differentiate the impacts of the MSWI from those of the MBT when the latter is operative, a pre-operational survey was initiated by determining the concentrations of 20 volatile organic compounds (VOCs) and bioaerosols (total bacteria, Gram-negative bacteria, fungi and Aspergillus fumigatus) in airborne samples around the MSWI. The results indicated that the current concentrations of bioaerosols (ranges: 382-3882, 18-790, 44-926, and <1-7 CFU/m³ for fungi at 25 °C, fungi at 37 °C, total bacteria, and Gram-negative bacteria, respectively) and VOCs (ranging from 0.9 to 121.2 µg/m³) are very low in comparison to reported levels in indoor and outdoor air in composting and MBT plants, as well as in urban and industrial zones. With the exception of total bacteria, no correlations were observed between the environmental concentrations of biological agents and the direction/distance from the facility. However, total bacteria presented significantly higher levels downwind. Moreover, a non-significant increase of VOCs was detected in sites closer to the incinerator, which means that the MSWI could have a very minor impact on the surrounding environment.

  1. ASExpert: an integrated knowledge-based system for activated sludge plants.

    PubMed

    Sorour, M T; Bahgat, L M F; El, Iskandarani M A; Horan, N J

    2002-08-01

    The activated sludge process is commonly used for secondary wastewater treatment worldwide. This process is capable of achieving high-quality effluent. However, it has the reputation of being difficult to operate because of its poorly understood biological behaviour, the variability of input flows, and the need to incorporate qualitative data. To augment this incomplete knowledge with experience, knowledge-based systems were introduced in the 1980s; however, they did not gain much popularity. This paper presents the Activated Sludge Expert system (ASExpert), a rule-based expert system combined with a complete database tool, proposed for use in activated sludge plants. The paper focuses on presenting the system's main features and capabilities in order to revive interest in knowledge-based systems as a reliable means for monitoring plants. It then presents the methodology adopted for ASExpert validation, along with an assessment of the testing results. Finally, it concludes that expert systems technology has proved its importance for enhancing performance, especially if it is integrated with a modern control system in the future. PMID:12211453

  2. A knowledge-based control system for air-scour optimisation in membrane bioreactors.

    PubMed

    Ferrero, G; Monclús, H; Sancho, L; Garrido, J M; Comas, J; Rodríguez-Roda, I

    2011-01-01

    Although membrane bioreactor (MBR) technology is still a growing sector, its progressive implementation all over the world, together with great technical achievements, has allowed it to reach a degree of maturity comparable to other, more conventional wastewater treatment technologies. With current energy requirements of around 0.6-1.1 kWh/m3 of treated wastewater and investment costs similar to those of conventional treatment plants, the main market niche for MBRs lies in areas with highly restrictive discharge limits, where treatment plants have to be compact, or where water reuse is necessary. Operational costs are higher than for conventional treatments; consequently, there is still a need and scope for energy saving and optimisation. This paper presents the development of a knowledge-based decision support system (DSS) for the integrated operation and remote control of the biological and physical (filtration and backwashing or relaxation) processes in MBRs. The core of the DSS is a knowledge-based control module for automating air-scour consumption and minimising energy consumption. PMID:21902045
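
The paper does not publish its rule base, but a knowledge-based air-scour control rule can be sketched as follows. The thresholds, units, and the use of the transmembrane-pressure (TMP) trend as the fouling indicator are assumptions for illustration, not the DSS's actual logic.

```python
# Illustrative control rule: trim membrane air-scour flow when the TMP trend
# shows filtration performance allows it, to cut aeration energy.
# All thresholds, gains, and units are hypothetical.

def air_scour_setpoint(tmp_trend_mbar_per_min, current_flow,
                       min_flow=0.3, max_flow=1.0):
    """Return a new air-scour flow setpoint (illustrative units).

    Rules:
      - rising TMP (fouling)  -> increase scour by 10%
      - stable TMP            -> hold
      - falling TMP           -> decrease scour by 10% to save energy
    """
    if tmp_trend_mbar_per_min > 0.5:       # membrane fouling detected
        new_flow = current_flow * 1.10
    elif tmp_trend_mbar_per_min < -0.5:    # performance recovering
        new_flow = current_flow * 0.90
    else:
        new_flow = current_flow
    return max(min_flow, min(max_flow, new_flow))   # clamp to safe band
```

The clamp to a minimum flow reflects the physical constraint that some scour is always needed to keep membranes clean; the real DSS integrates such rules with the biological process state as well.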

  3. The 2004 knowledge base parametric grid data software suite.

    SciTech Connect

    Wilkening, Lisa K.; Simons, Randall W.; Ballard, Sandy; Jensen, Lee A.; Chang, Marcus C.; Hipp, James Richard

    2004-08-01

    One of the most important types of data in the National Nuclear Security Administration (NNSA) Ground-Based Nuclear Explosion Monitoring Research and Engineering (GNEM R&E) Knowledge Base (KB) is parametric grid (PG) data. PG data can be used to improve signal detection, signal association, and event discrimination, but so far their greatest use has been for improving event location by providing ground-truth-based corrections to travel-time base models. In this presentation we discuss the latest versions of the complete suite of Knowledge Base PG tools developed by NNSA to create, access, manage, and view PG data. The primary PG population tool is the Knowledge Base calibration integration tool (KBCIT). KBCIT is an interactive computer application to produce interpolated calibration-based information that can be used to improve monitoring performance by improving precision of model predictions and by providing proper characterizations of uncertainty. It is used to analyze raw data and produce kriged correction surfaces that can be included in the Knowledge Base. KBCIT not only produces the surfaces but also records all steps in the analysis for later review and possible revision. New features in KBCIT include a new variogram autofit algorithm; the storage of database identifiers with a surface; the ability to merge surfaces; and improved surface-smoothing algorithms. The Parametric Grid Library (PGL) provides the interface to access the data and models stored in a PGL file database. The PGL represents the core software library used by all the GNEM R&E tools that read or write PGL data (e.g., KBCIT and LocOO). The library provides data representations and software models to support accurate and efficient seismic phase association and event location. Recent improvements include conversion of the flat-file database (FDB) to an Oracle database representation; automatic access of station/phase tagged models from the FDB during location; modification of the core

  4. Knowledge-based assistance in costing the space station DMS

    NASA Technical Reports Server (NTRS)

    Henson, Troy; Rone, Kyle

    1988-01-01

    The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.
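
The SCE methodology itself is internal to IBM SID, so as a generic stand-in here is the basic COCOMO effort equation, the classic algorithmic cost model of the kind that tools like SCEAT combine with knowledge-based decision support; the coefficients shown are Boehm's published organic-mode values, not SCE's.

```python
# Basic COCOMO (organic mode): effort in person-months from size in
# thousands of source lines of code (KSLOC). A stand-in illustration,
# not the SCE methodology described in the abstract.

def cocomo_effort(ksloc, a=2.4, b=1.05):
    """Effort (person-months) = a * KSLOC^b."""
    return a * ksloc ** b

def cocomo_schedule(effort_pm, c=2.5, d=0.38):
    """Development time (months) from effort, organic mode."""
    return c * effort_pm ** d
```

A knowledge-based assistant layers expert judgment (complexity, criticality, reuse) on top of such a base equation by adjusting the inputs and coefficients.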

  5. Knowledge-based fault diagnosis system for refuse collection vehicle

    NASA Astrophysics Data System (ADS)

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y.

    2015-05-01

    The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of waste compactor truck to local authorities as well as waste management companies. The company faces difficulty acquiring knowledge from the expert when the expert is absent. To solve this problem, the expert's knowledge can be stored in an expert system, which can then provide the necessary support to the company when the expert is not available. The implementation of the process and tool can thereby be standardized and made more accurate. The knowledge input to the expert system is based on design guidelines and the expert's experience. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.
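
A minimal sketch of how such a fault-diagnosis rule base can be matched against observed symptoms. The symptoms and causes below are invented for illustration and are not from the actual KBS.

```python
# Hypothetical symptom -> probable-cause rules for a waste compactor truck,
# evaluated by simple subset matching, most specific rule first.

FAULT_RULES = [
    ({"compactor_slow", "pump_noise"},
     "Hydraulic pump cavitation: check oil level"),
    ({"compactor_slow"},
     "Low hydraulic pressure: inspect relief valve"),
    ({"tailgate_stuck"},
     "Tailgate cylinder fault: check seals and hoses"),
]

def diagnose(observed_symptoms):
    """Return causes whose symptom set is fully matched, most specific first."""
    matches = [(len(s), cause) for s, cause in FAULT_RULES
               if s <= observed_symptoms]
    return [cause for _, cause in sorted(matches, key=lambda m: -m[0])]
```

Ordering matches by specificity mimics the way a human expert prefers the diagnosis that explains the most symptoms.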

  6. Knowledge-based requirements analysis for automating software development

    NASA Technical Reports Server (NTRS)

    Markosian, Lawrence Z.

    1988-01-01

    We present a new software development paradigm that automates the derivation of implementations from requirements. In this paradigm, informally-stated requirements are expressed in a domain-specific requirements specification language. This language is machine-understandable and requirements expressed in it are captured in a knowledge base. Once the requirements are captured, more detailed specifications and eventually implementations are derived by the system using transformational synthesis. A key characteristic of the process is that the required human intervention is in the form of providing problem- and domain-specific engineering knowledge, not in writing detailed implementations. We describe a prototype system that applies the paradigm in the realm of communication engineering: the prototype automatically generates implementations of buffers following analysis of the requirements on each buffer.

  7. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
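
A minimal tuple-space sketch in the LINDA style the abstract references: agents communicate by writing tuples with `out()` and reading them by pattern, where `None` acts as a wildcard. This is a generic illustration of the mechanism, not the paper's database-backed implementation.

```python
# Minimal LINDA-style tuple space: out() adds a tuple, rd() reads without
# removing, take() (LINDA's in(); renamed since `in` is a Python keyword)
# reads and removes. None in a pattern matches any value.

class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup):
        self._tuples.append(tup)

    def _match(self, pattern, tup):
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def rd(self, pattern):
        """Non-destructive read of the first matching tuple (None if absent)."""
        return next((t for t in self._tuples if self._match(pattern, t)), None)

    def take(self, pattern):
        t = self.rd(pattern)
        if t is not None:
            self._tuples.remove(t)
        return t
```

Two knowledge agents can coordinate through such a space without knowing about each other, which is what makes the model attractive for distributed agents like the power-management example in the abstract.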

  8. Knowledge-based system for flight information management. Thesis

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.

    1990-01-01

    The use of knowledge-based system (KBS) architectures to manage information on the primary flight display (PFD) of commercial aircraft is described. The PFD information management strategy used tailored the information on the PFD to the tasks the pilot performed. The KBS design and implementation of the task-tailored PFD information management application is described. The knowledge acquisition and subsequent system design of a flight-phase-detection KBS is also described. The flight-phase output of this KBS was used as input to the task-tailored PFD information management KBS. The implementation and integration of this KBS with existing aircraft systems and the other KBS is described. The flight tests are examined of both KBS's, collectively called the Task-Tailored Flight Information Manager (TTFIM), which verified their implementation and integration, and validated the software engineering advantages of the KBS approach in an operational environment.

  9. The Knowledge-Based Software Assistant: Beyond CASE

    NASA Technical Reports Server (NTRS)

    Carozzoni, Joseph A.

    1993-01-01

    This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.

  10. Structure of the knowledge base for an expert labeling system

    NASA Technical Reports Server (NTRS)

    Rajaram, N. S.

    1981-01-01

    One of the principal objectives of the NASA AgRISTARS program is the inventory of global crop resources using remotely sensed data gathered by Land Satellites (LANDSAT). A central problem in any such crop inventory procedure is the interpretation of LANDSAT images and identification of parts of each image which are covered by a particular crop of interest. This task of labeling is largely a manual one done by trained human analysts and consequently presents obstacles to the development of totally automated crop inventory systems. However, development in knowledge engineering as well as widespread availability of inexpensive hardware and software for artificial intelligence work offers possibilities for developing expert systems for labeling of crops. Such a knowledge based approach to labeling is presented.

  11. Knowledge-based fault diagnosis system for refuse collection vehicle

    SciTech Connect

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y.

    2015-05-15

    The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of waste compactor truck to local authorities as well as waste management companies. The company faces difficulty acquiring knowledge from the expert when the expert is absent. To solve this problem, the expert's knowledge can be stored in an expert system, which can then provide the necessary support to the company when the expert is not available. The implementation of the process and tool can thereby be standardized and made more accurate. The knowledge input to the expert system is based on design guidelines and the expert's experience. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  12. ProbOnto: ontology and knowledge base of probability distributions

    PubMed Central

    Swat, Maciej J.; Grenon, Pierre; Wimalaratne, Sarala

    2016-01-01

    Motivation: Probability distributions play a central role in mathematical and statistical modelling. The encoding, annotation and exchange of such models could be greatly simplified by a resource providing a common reference for the definition of probability distributions. Although some resources exist, no suitably detailed and comprehensive ontology exists, nor any database allowing programmatic access. Results: ProbOnto is an ontology-based knowledge base of probability distributions, featuring more than 80 uni- and multivariate distributions with their defining functions, characteristics, relationships and re-parameterization formulas. It can be used for model annotation and facilitates the encoding of distribution-based models, related functions and quantities. Availability and Implementation: http://probonto.org Contact: mjswat@ebi.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153608
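
One kind of relationship such a knowledge base records is a re-parameterization formula. A standard textbook example (not copied from ProbOnto's entries): converting a normal distribution between its (mean, standard deviation) and (mean, precision) parameterizations.

```python
# Re-parameterization of the normal distribution:
#   precision tau = 1 / sigma^2, and back: sigma = sqrt(1 / tau).
# The mean is unchanged between the two parameterizations.

def sd_to_precision(sd):
    return 1.0 / sd ** 2

def precision_to_sd(tau):
    return (1.0 / tau) ** 0.5
```

Encoding such formulas machine-readably is what lets tools translate a model written against one parameterization into another automatically.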

  13. TMS for Instantiating a Knowledge Base With Incomplete Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A computer program that belongs to the class known among software experts as output truth-maintenance-systems (output TMSs) has been devised as one of a number of software tools for reducing the size of the knowledge base that must be searched during execution of artificial- intelligence software of the rule-based inference-engine type in a case in which data are missing. This program determines whether the consequences of activation of two or more rules can be combined without causing a logical inconsistency. For example, in a case involving hypothetical scenarios that could lead to turning a given device on or off, the program determines whether a scenario involving a given combination of rules could lead to turning the device both on and off at the same time, in which case that combination of rules would not be included in the scenario.
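
The consistency check described can be sketched as follows, under assumed representations: each rule's consequences are a set of (device, state) literals, and a combination of rules is inconsistent if it asserts two different states for the same device (the "on and off at the same time" case). The representation is an illustration, not the program's actual data model.

```python
# Output-TMS-style check: which 2-rule combinations have mutually
# consistent consequences? A set of consequences is inconsistent if it
# asserts conflicting states for the same device.

from itertools import combinations

def consistent(consequence_sets):
    asserted = {}
    for cons in consequence_sets:
        for device, state in cons:
            if asserted.setdefault(device, state) != state:
                return False    # e.g. valve1 both "on" and "off"
    return True

def viable_rule_pairs(rules):
    """All 2-rule combinations whose combined consequences are consistent."""
    return [(a, b) for (a, ca), (b, cb) in combinations(rules.items(), 2)
            if consistent([ca, cb])]
```

Pruning inconsistent combinations up front is what shrinks the search space the inference engine must explore when data are missing.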

  14. Knowledge-based navigation of complex information spaces

    SciTech Connect

    Burke, R.D.; Hammond, K.J.; Young, B.C.

    1996-12-31

    While the explosion of on-line information has brought new opportunities for finding and using electronic data, it has also brought to the forefront the problem of isolating useful information and making sense of large multi-dimensional information spaces. We have developed an approach to building data "tour guides," called FINDME systems. These programs know enough about an information space to be able to help a user navigate through it. The user not only comes away with items of useful information but also with insights into the structure of the information space itself. In these systems, we have combined instance-based browsing, retrieval structured around the critiquing of previously retrieved examples, and retrieval strategies: knowledge-based heuristics for finding relevant information. We illustrate these techniques with several examples, concentrating especially on the RENTME system, a FINDME system for helping users find suitable rental apartments in the Chicago metropolitan area.

  15. A model for a knowledge-based system's life cycle

    NASA Technical Reports Server (NTRS)

    Kiss, Peter A.

    1990-01-01

    The American Institute of Aeronautics and Astronautics has initiated a Committee on Standards for Artificial Intelligence. Presented here are the initial efforts of one of the working groups of that committee. The purpose here is to present a candidate model for the development life cycle of Knowledge Based Systems (KBS). The intent is for the model to be used by the aerospace community and eventually be evolved into a standard. The model is rooted in the evolutionary model, borrows from the spiral model, and is embedded in the standard waterfall model for software development. Its intent is to satisfy the development of both stand-alone and embedded KBSs. The phases of the life cycle are detailed, as are the review points that constitute the key milestones throughout the development process. The applicability and strengths of the model are discussed, along with areas needing further development and refinement by the aerospace community.

  16. Knowledge-based visualization of time-oriented clinical data.

    PubMed Central

    Shahar, Y.; Cheng, C.

    1998-01-01

    We describe a domain-independent framework (KNAVE) specific to the task of interpretation, summarization, visualization, explanation, and interactive exploration in a context-sensitive manner through time-oriented raw clinical data and the multiple levels of higher-level, interval-based concepts that can be abstracted from these data. The KNAVE exploration operators, which are independent of any particular clinical domain, access a knowledge base of temporal properties of measured data and interventions that is specific to the clinical domain. Thus, domain-specific knowledge underlies the domain-independent semantics of the interpretation, visualization, and exploration processes. Initial evaluation of the KNAVE prototype by a small number of users with variable clinical and informatics training has been encouraging. PMID:9929201
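
The core temporal-abstraction step that produces interval-based concepts from raw data can be sketched as follows. The classification thresholds and the example values are assumptions for illustration; KNAVE's actual abstraction knowledge is domain-specific and far richer.

```python
# Map time-stamped raw values to qualitative states and merge consecutive
# samples in the same state into interval-based abstractions.

def abstract_intervals(samples, classify):
    """samples: list of (time, value); returns [(start, end, state), ...]."""
    intervals = []
    for t, v in samples:
        state = classify(v)
        if intervals and intervals[-1][2] == state:
            s, _, st = intervals[-1]
            intervals[-1] = (s, t, st)      # extend the current interval
        else:
            intervals.append((t, t, state))  # open a new interval
    return intervals
```

A clinician exploring the data then navigates these intervals ("LOW from day 2 to day 3") rather than the raw point values.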

  17. Knowledge-Based Framework: its specification and new related discussions

    NASA Astrophysics Data System (ADS)

    Rodrigues, Douglas; Zaniolo, Rodrigo R.; Branco, Kalinka R. L. J. C.

    2015-09-01

    Unmanned Aerial Vehicles (UAVs) are a common application of critical embedded systems. The heterogeneity prevalent in these vehicles, in terms of services for avionics, is particularly relevant to the elaboration of multi-application missions. Moreover, this heterogeneity in UAV services often manifests in characteristics such as reliability, security, and performance. Different service implementations typically offer different guarantees in terms of these characteristics and their associated costs. In particular, we explore the notion of Service-Oriented Architecture (SOA) in the context of UAVs as safety-critical embedded systems, for the composition of services to fulfil application-specified performance and dependability guarantees. We therefore propose a framework for the deployment of these services and their variants, called the Knowledge-Based Framework for Dynamically Changing Applications (KBF), and specify its services module, discussing all the related issues.

  18. Knowledge-based decision support for patient monitoring in cardioanesthesia.

    PubMed

    Schecke, T; Langen, M; Popp, H J; Rau, G; Käsmacher, H; Kalff, G

    1992-01-01

    An approach to generating 'intelligent alarms' is presented that aggregates many information items, i.e. measured vital signs, recent medications, etc., into state variables that more directly reflect the patient's physiological state. Based on these state variables the described decision support system AES-2 also provides therapy recommendations. The assessment of the state variables and the generation of therapeutic advice follow a knowledge-based approach. Aspects of uncertainty, e.g. a gradual transition between 'normal' and 'below normal', are considered applying a fuzzy set approach. Special emphasis is laid on the ergonomic design of the user interface, which is based on color graphics and finger touch input on the screen. Certain simulation techniques considerably support the design process of AES-2 as is demonstrated with a typical example from cardioanesthesia. PMID:1402299
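
The gradual transition between 'normal' and 'below normal' mentioned above is commonly modeled with trapezoidal fuzzy membership functions. The breakpoints below (for, say, mean arterial pressure in mmHg) are illustrative assumptions, not AES-2's actual knowledge base.

```python
# Trapezoidal fuzzy membership: rises from a to b, is 1 between b and c,
# falls from c to d. A value near a category boundary gets partial
# membership in both categories instead of a hard classification.

def trapezoid(x, a, b, c, d):
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def classify_map(map_mmhg):
    """Fuzzy classification of mean arterial pressure (hypothetical bounds)."""
    return {
        "below_normal": trapezoid(map_mmhg, -1, 0, 60, 70),
        "normal": trapezoid(map_mmhg, 60, 70, 100, 110),
    }
```

An 'intelligent alarm' can then aggregate such graded memberships across several vital signs rather than firing on a single crisp threshold.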

  19. The Knowledge Base Interface for Parametric Grid Information

    SciTech Connect

    Hipp, James R.; Simons, Randall W.; Young, Chris J.

    1999-08-03

    The parametric grid capability of the Knowledge Base (KBase) provides an efficient, robust way to store and access interpolatable information that is needed to monitor the Comprehensive Nuclear Test Ban Treaty. To meet both the accuracy and performance requirements of operational monitoring systems, we use an approach which combines the error estimation of kriging with the speed and robustness of Natural Neighbor Interpolation. The method involves three basic steps: data preparation, data storage, and data access. In past presentations we have discussed the first step in detail. In this paper we focus on the latter two, describing in detail the type of information which must be stored and the interface used to retrieve parametric grid data from the Knowledge Base. Once data have been properly prepared, the information (tessellation and associated value surfaces) needed to support the interface functionality can be entered into the KBase. The primary types of parametric grid data that must be stored include (1) generic header information; (2) base model, station, and phase names and associated IDs used to construct surface identifiers; (3) surface accounting information; (4) tessellation accounting information; (5) mesh data for each tessellation; (6) correction data defined for each surface at each node of the surface's owning tessellation; (7) mesh refinement calculation set-up and flag information; and (8) kriging calculation set-up and flag information. The eight data components not only represent the results of the data preparation process but also include all required input information for several population tools that would enable the complete regeneration of the data results if that should be necessary.
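
KBase's access layer combines kriging with Natural Neighbor Interpolation; as a much simpler stand-in, this sketch interpolates a travel-time correction surface at a query point with inverse-distance weighting over stored grid nodes. It illustrates only the shape of a correction-surface lookup, not the actual KBase algorithms.

```python
# Inverse-distance-weighted lookup of a correction surface.
# nodes: list of ((lat, lon), correction_seconds); query: (lat, lon).
# This is an illustrative substitute for kriging/Natural Neighbor.

def idw_correction(nodes, query, power=2):
    num = den = 0.0
    for (lat, lon), corr in nodes:
        d2 = (lat - query[0]) ** 2 + (lon - query[1]) ** 2
        if d2 == 0.0:
            return corr                 # exact hit on a grid node
        w = 1.0 / d2 ** (power / 2)
        num += w * corr
        den += w
    return num / den
```

Unlike this sketch, kriging also returns an uncertainty estimate at the query point, which is exactly the property the abstract says operational monitoring needs.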

  20. Autonomous Cryogenic Load Operations: Knowledge-Based Autonomous Test Engineer

    NASA Technical Reports Server (NTRS)

    Schrading, J. Nicolas

    2013-01-01

    The Knowledge-Based Autonomous Test Engineer (KATE) program has a long history at KSC. Now a part of the Autonomous Cryogenic Load Operations (ACLO) mission, this software system has been sporadically developed over the past 20 years. Originally designed to provide health and status monitoring for a simple water-based fluid system, it was proven to be a capable autonomous test engineer for determining sources of failure in the system. As part of a new goal to provide this same anomaly-detection capability for a complicated cryogenic fluid system, software engineers, physicists, interns and KATE experts are working to upgrade the software capabilities and graphical user interface. Much progress was made during this effort to improve KATE. A display of the entire cryogenic system's graph, with nodes for components and edges for their connections, was added to the KATE software. A searching functionality was added to the new graph display, so that users could easily center their screen on specific components. The GUI was also modified so that it displayed information relevant to the new project goals. In addition, work began on adding new pneumatic and electronic subsystems into the KATE knowledge base, so that it could provide health and status monitoring for those systems. Finally, many fixes for bugs, memory leaks, and memory errors were implemented and the system was moved into a state in which it could be presented to stakeholders. Overall, the KATE system was improved and necessary additional features were added so that a presentation of the program and its functionality in the next few months would be a success.
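
The graph display's data model described above (component nodes, connection edges) and its search feature can be sketched as follows. Component names are invented for illustration; KATE's actual knowledge base is far more detailed.

```python
# A cryogenic system as an adjacency map: node -> list of connected nodes.
# reachable() walks the graph breadth-first; find_component() is the kind of
# substring search a GUI might use to center the view on a component.

from collections import deque

def reachable(edges, start):
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in edges.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

def find_component(edges, name_fragment):
    """Case-insensitive substring match over node names."""
    return [n for n in edges if name_fragment.lower() in n.lower()]
```

Anomaly detection then reasons over this same graph, tracing which upstream components could explain a downstream sensor reading.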

  1. Knowledge-based architecture for airborne mine and minefield detection

    NASA Astrophysics Data System (ADS)

    Agarwal, Sanjeev; Menon, Deepak; Swonger, C. W.

    2004-09-01

    One of the primary lessons learned from airborne mid-wave infrared (MWIR) based mine and minefield detection research and development over the last few years has been the fact that no single algorithm or static detection architecture is able to meet mine and minefield detection performance specifications. This is true not only because of the highly varied environmental and operational conditions under which an airborne sensor is expected to perform but also due to the highly data dependent nature of sensors and algorithms employed for detection. Attempts to make the algorithms themselves more robust to varying operating conditions have only been partially successful. In this paper, we present a knowledge-based architecture to tackle this challenging problem. The detailed algorithm architecture is discussed for such a mine/minefield detection system, with a description of each functional block and data interface. This dynamic and knowledge-driven architecture will provide more robust mine and minefield detection for a highly multi-modal operating environment. The acquisition of the knowledge for this system is predominantly data driven, incorporating not only the analysis of historical airborne mine and minefield imagery data collection, but also other "all source data" that may be available such as terrain information and time of day. This "all source data" is extremely important and embodies causal information that drives the detection performance. This information is not being used by current detection architectures. Data analysis for knowledge acquisition will facilitate better understanding of the factors that affect the detection performance and will provide insight into areas for improvement for both sensors and algorithms. Important aspects of this knowledge-based architecture, its motivations and the potential gains from its implementation are discussed, and some preliminary results are presented.

  2. On the optimal design of molecular sensing interfaces with lipid bilayer assemblies - A knowledge based approach

    NASA Astrophysics Data System (ADS)

    Siontorou, Christina G.

    2012-12-01

    Biosensors are analytical devices that incorporate a biochemical recognition system (biological, biologically derived, or biomimetic: enzyme, antibody, DNA, receptor, etc.) in close contact with a physicochemical transducer (electrochemical, optical, piezoelectric, conductimetric, etc.) that converts the biochemical information produced by the specific biological recognition reaction (analyte-biomolecule binding) into a chemical or physical output signal related to the concentration of the analyte in the measuring sample. The biosensing concept is based on natural chemoreception mechanisms, which operate over, within, or by means of a biological membrane, i.e., a structured lipid bilayer incorporating, or attached to, proteinaceous moieties that regulate the molecular recognition events that trigger ion-flux changes (facilitated or passive) through the bilayer. Creating functional structures similar to natural signal transduction systems is far from trivial: the physicochemical transducer must be compatibly and successfully correlated and interrelated with the lipid film self-assembled on its surface while embedding the reconstituted biological recognition system, and at the same time the basic conditions for measuring-device development (simplicity, easy handling, ease of fabrication) must be satisfied. The aim of the present work is to present a methodological framework for designing such molecular sensing interfaces, functioning within a knowledge-based system built on an ontological platform that supplies sub-system options, compatibilities, and optimization parameters.

  3. The Knowledge-Based Economy and E-Learning: Critical Considerations for Workplace Democracy

    ERIC Educational Resources Information Center

    Remtulla, Karim A.

    2007-01-01

    The ideological shift by nation-states to "a knowledge-based economy" (also referred to as "knowledge-based society") is causing changes in the workplace. Brought about by the forces of globalisation and technological innovation, the ideologies of the "knowledge-based economy" are not limited to influencing the production, consumption and economic…

  4. Knowledge-based potentials in bioinformatics: From a physicist’s viewpoint

    NASA Astrophysics Data System (ADS)

    Zheng, Wei-Mou

    2015-12-01

    Biological raw data are growing exponentially, providing a large amount of information on what life is. It is believed that potential functions and the rules governing protein behavior can be revealed from analysis of the known native structures of proteins, and many knowledge-based potentials for proteins have been proposed. In contrast to most existing review articles, which mainly describe technical details and applications of various potential models, the main foci of the discussion here are the ideas and concepts involved in constructing potentials, including the relation between free energy and energy, the additivity of potentials of mean force, and some key issues in potential construction. Sequence analysis is briefly viewed from an energetic viewpoint. Project supported in part by the National Natural Science Foundation of China (Grant Nos. 11175224 and 11121403).
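
    The inverse-Boltzmann construction behind such knowledge-based potentials can be sketched in a few lines: the potential of mean force for a structural feature is read off from how much more (or less) often it occurs in native structures than in a reference state. The frequencies below are made-up illustrative numbers, not values from any real dataset.

```python
import math

def contact_potential(observed, reference, kT=1.0):
    """Inverse-Boltzmann knowledge-based potential for one structural feature:
    observed  - frequency of the feature in known native structures
    reference - its expected frequency in a reference (random) state
    Returns a potential of mean force in units of kT."""
    return -kT * math.log(observed / reference)

# a contact seen twice as often as chance gets a favorable (negative) energy
e_favorable = contact_potential(0.10, 0.05)    # about -0.69 kT
# a contact seen half as often as chance is penalized
e_unfavorable = contact_potential(0.05, 0.10)  # about +0.69 kT
```

    Summing such terms over all feature pairs in a candidate structure, under the additivity assumption the abstract discusses, gives the total pseudo-energy used for scoring.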

  5. Use of thermal analysis techniques (TG-DSC) for the characterization of diverse organic municipal waste streams to predict biological stability prior to land application

    SciTech Connect

    Fernandez, Jose M.; Plaza, Cesar; Polo, Alfredo; Plante, Alain F.

    2012-01-15

    ) techniques. Total amounts of CO₂ respired indicated that the organic matter in the TS was the least stable, while that in the CS was the most stable. This was confirmed by changes detected with the spectroscopic methods in the composition of the organic wastes due to C mineralization. Differences were especially pronounced for TS, which showed a remarkable loss of aliphatic and proteinaceous compounds during the incubation process. TG, and especially DSC analysis, clearly reflected these differences between the three organic wastes before and after the incubation. Furthermore, the calculated energy density, which represents the energy available per unit of organic matter, showed a strong correlation with cumulative respiration. Results obtained support the hypothesis of a potential link between the thermal and biological stability of the studied organic materials, and consequently the ability of thermal analysis to characterize the maturity of municipal organic wastes and composts.

  6. Dithizone modified magnetic nanoparticles for fast and selective solid phase extraction of trace elements in environmental and biological samples prior to their determination by ICP-OES.

    PubMed

    Cheng, Guihong; He, Man; Peng, Hanyong; Hu, Bin

    2012-01-15

    A fast and simple method for the analysis of trace amounts of Cr(III), Cu(II), Pb(II) and Zn(II) in environmental and biological samples was developed by combining magnetic solid phase extraction (MSPE) with inductively coupled plasma-optical emission spectrometry (ICP-OES) detection. Dithizone-modified silica-coated magnetic Fe₃O₄ nanoparticles (H₂Dz-SCMNPs) were prepared and used for MSPE of trace amounts of Cr(III), Cu(II), Pb(II) and Zn(II). The prepared magnetic nanoparticles were characterized by scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray powder diffraction (XRD), and Fourier transform infrared spectroscopy (FT-IR). The factors affecting the extraction of the target metal ions, such as pH, sample volume, eluent, and interfering ions, were investigated, and the adsorption mechanism of the target metals on the self-prepared H₂Dz-SCMNPs was studied by FT-IR and X-ray photoelectron spectroscopy (XPS). Under the optimized conditions, the detection limits of the developed method for Cr(III), Cu(II), Pb(II) and Zn(II) were 35, 11, 62, and 8 ng L⁻¹, respectively, with an enrichment factor of 100. The relative standard deviations (RSDs, c = 10 μg L⁻¹, n = 7) were in the range of 1.7-3.1% and the linear range was 0.1-100 μg L⁻¹. The proposed method was validated with two certified reference materials (GSBZ50009-88 environmental water and GBW07601 human hair), and the determined values were in good agreement with the certified values. The method was also applied to the determination of trace metals in real water and human hair samples, with recoveries in the range of 85-110% for the spiked samples. The developed MSPE-ICP-OES method has the advantages of simplicity, rapidity, selectivity, and high extraction efficiency, and is suitable for the analysis of samples with large volume and complex matrix. PMID:22265534
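
    The two figures of merit quoted in such preconcentration work, the enrichment factor and the spike recovery, are simple ratios; a minimal sketch of how they are computed (the volumes and concentrations below are hypothetical, not values from this study):

```python
def enrichment_factor(sample_volume_mL, eluate_volume_mL):
    """Preconcentration factor: e.g. eluting a 100 mL sample into 1 mL
    of eluent yields a factor of 100."""
    return sample_volume_mL / eluate_volume_mL

def spike_recovery_pct(found_spiked, found_unspiked, added):
    """Recovery of a known spike, as a percentage of the amount added."""
    return 100.0 * (found_spiked - found_unspiked) / added

# hypothetical concentrations, all in micrograms per litre
factor = enrichment_factor(100.0, 1.0)                                  # 100
recovery = spike_recovery_pct(found_spiked=10.2, found_unspiked=1.0,
                              added=10.0)                               # 92%
```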

  7. LLNL Middle East, North Africa and Western Eurasia Knowledge Base

    SciTech Connect

    O'Boyle, J; Ruppert, S D; Hauk, T F; Dodge, D A; Ryall, F; Firpo, M A

    2001-07-12

    The Lawrence Livermore National Laboratory (LLNL) Ground-Based Nuclear Event Monitoring (GNEM) program has made significant progress populating a comprehensive Seismic Research Knowledge Base (SRKB) and deriving calibration parameters for the Middle East, North Africa and Western Eurasia (ME/NA/WE) regions. The LLNL SRKB provides not only a coherent framework in which to store and organize very large volumes of collected seismic waveforms, associated event parameter information, and spatial contextual data, but also an efficient data processing/research environment for deriving location and discrimination correction surfaces. The SRKB is a flexible and extensible framework consisting of a relational database (RDB), a Geographical Information System (GIS), and associated product/data visualization and data management tools. This SRKB framework is designed to accommodate large volumes of data (almost 3 million waveforms from 57,000 events) in diverse formats from many sources (both LLNL-derived research and integrated contractor products), in addition to maintaining detailed quality control and metadata. We have developed expanded look-up tables for critical station parameter information (including location and response) and an integrated and reconciled event catalog data set (including specification of preferred origin solutions and associated phase arrivals) for the PDE, CMT, ISC, REB and selected regional catalogs. Using the SRKB framework, we are combining travel-time observations, event characterization studies, and regional tectonic models to assemble a library of ground truth information and phenomenology (e.g. travel-time and amplitude) correction surfaces required for support of the ME/NA/WE regionalization program. We also use the SRKB to integrate data and research products from a variety of sources, such as contractors and universities, to merge and maintain quality control of the data sets. Corrections and parameters distilled from the LLNL SRKB

  8. A clinical trial of a knowledge-based medical record.

    PubMed

    Safran, C; Rind, D M; Davis, R B; Sands, D Z; Caraballo, E; Rippel, K; Wang, Q; Rury, C; Makadon, H J; Cotton, D J

    1995-01-01

    To meet the needs of primary care physicians caring for patients with HIV infection, we developed a knowledge-based medical record to allow the on-line patient record to play an active role in the care process. These programs integrate the on-line patient record, rule-based decision support, and full-text information retrieval into a clinical workstation for the practicing clinician. To determine whether use of a knowledge-based medical record was associated with more rapid and complete adherence to practice guidelines and improved quality of care, we performed a controlled clinical trial among physicians and nurse practitioners caring for 349 patients infected with the human immunodeficiency virus (HIV); 191 patients were treated by 65 physicians and nurse practitioners assigned to the intervention group, and 158 patients were treated by 61 physicians and nurse practitioners assigned to the control group. During the 18-month study period, the computer generated 303 alerts in the intervention group and 388 in the control group. The median response time of clinicians to these alerts was 11 days in the intervention group and 52 days in the control group (P < 0.0001, log-rank test). During the study, the computer generated 432 primary care reminders for the intervention group and 360 reminders for the control group. The median response time of clinicians to these reminders was 114 days in the intervention group and more than 500 days in the control group (P < 0.0001, log-rank test). Of the 191 patients in the intervention group, 67 (35%) had one or more hospitalizations, compared with 70 (44%) of the 158 patients in the control group (P = 0.04, Wilcoxon test stratified for initial CD4 count). There was no difference in survival between the intervention and control groups (P = 0.18, log-rank test). We conclude that our clinical workstation significantly changed physicians' behavior in terms of their response to alerts regarding primary care interventions and that these

  9. EHR based Genetic Testing Knowledge Base (iGTKB) Development

    PubMed Central

    2015-01-01

    Background The gap between the large and growing number of genetic tests and a suboptimal clinical workflow for incorporating these tests into regular clinical practice poses barriers to effective reliance on advanced genetic technologies to improve the quality of healthcare. A promising solution to fill this gap is to develop an intelligent genetic test recommendation system that not only provides a comprehensive view of genetic tests as education resources, but also recommends the most appropriate genetic tests to patients based on clinical evidence. In this study, we developed an EHR based Genetic Testing Knowledge Base for Individualized Medicine (iGTKB). Methods We extracted genetic testing information and patient medical records from EHR systems at Mayo Clinic. Clinical features were semi-automatically annotated from the clinical notes by applying a Natural Language Processing (NLP) tool, the MedTagger suite. To prioritize clinical features for each genetic test, we compared odds ratios across four population groups. Genetic tests, genetic disorders and clinical features, with their odds ratios, were used to establish iGTKB, which is to be integrated into the Genetic Testing Ontology (GTO). Results Overall, five genetic tests were operated with a sample size greater than 100 in 2013 at Mayo Clinic. A total of 1,450 patients who were tested by one of the five genetic tests were selected. We assembled 243 clinical features from the Human Phenotype Ontology (HPO) for these five genetic tests. There are 60 clinical features with at least one mention in the clinical notes of patients taking the tests. Twenty-eight clinical features with high odds ratios (greater than 1) were selected as dominant features and deposited into iGTKB with their associated information about genetic tests and genetic disorders. Conclusions In this study, we developed an EHR based genetic testing knowledge base, iGTKB. iGTKB will be integrated into the GTO by providing relevant
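
    The feature-prioritization step rests on the odds ratio from a 2x2 contingency table; a minimal sketch (the counts are invented for illustration and do not come from the Mayo Clinic data):

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a clinical feature from a 2x2 contingency table:
    a = feature present, tested group      b = feature absent, tested group
    c = feature present, comparison group  d = feature absent, comparison group
    Values greater than 1 mark the feature as enriched in the tested group."""
    return (a * d) / (b * c)

# invented counts: the feature is about 3.9x more likely among tested patients,
# so it would pass the "odds ratio greater than 1" dominance threshold
ratio = odds_ratio(a=30, b=70, c=10, d=90)
```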

  10. The Role of Causal Knowledge in Knowledge-Based Patient Simulation

    PubMed Central

    Chin, Homer L.; Cooper, Gregory F.

    1987-01-01

    We have investigated the ability to simulate a patient from a knowledge base. Specifically, we have examined the use of knowledge bases that associate findings with diseases through the use of probability measures, and their ability to generate realistic patient cases that can be used for teaching purposes. Many of these knowledge bases encode neither the interdependence among findings, nor intermediate disease states. Because of this, the use of these knowledge bases results in the generation of inconsistent or nonsensical patients. This paper describes an approach for the addition of causal structure to these knowledge bases which can overcome many of these limitations and improve the explanatory capability of such systems.
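
    The paper's point, that conditioning findings on a disease avoids inconsistent or nonsensical cases, can be illustrated with a toy two-level causal model: sample a disease first, then sample each finding from its conditional probability given that disease. All diseases, findings, and probabilities below are hypothetical.

```python
import random

def simulate_patient(seed=None):
    """Sample a toy case from a two-level causal model (disease -> findings),
    so the generated findings are always consistent with a single disease."""
    rng = random.Random(seed)
    p_disease = {"flu": 0.3, "pneumonia": 0.7}
    p_finding = {
        "flu":       {"fever": 0.9, "cough": 0.8, "infiltrate_on_xray": 0.05},
        "pneumonia": {"fever": 0.8, "cough": 0.9, "infiltrate_on_xray": 0.85},
    }
    disease = rng.choices(list(p_disease), weights=list(p_disease.values()))[0]
    findings = {f for f, p in p_finding[disease].items() if rng.random() < p}
    return disease, findings
```

    Sampling each finding independently from its marginal frequency, as a structureless knowledge base would suggest, can produce cases no single disease explains; the intermediate disease node is what keeps the case coherent.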

  11. Automatic tumor segmentation using knowledge-based techniques.

    PubMed

    Clark, M C; Hall, L O; Goldgof, D B; Velthuizen, R; Murtagh, F R; Silbiger, M S

    1998-04-01

    A system that automatically segments and labels glioblastoma-multiforme tumors in magnetic resonance images (MRI's) of the human brain is presented. The MRI's consist of T1-weighted, proton density, and T2-weighted feature images and are processed by a system which integrates knowledge-based (KB) techniques with multispectral analysis. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with cluster centers for each class are provided to a rule-based expert system which extracts the intracranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intracranial region, with region analysis used in performing the final tumor labeling. This system has been trained on three volume data sets and tested on thirteen unseen volume data sets acquired from a single MRI system. The KB tumor segmentation was compared with supervised, radiologist-labeled "ground truth" tumor volumes and supervised k-nearest neighbors tumor segmentations. The results of this system generally correspond well to ground truth, both on a per slice basis and more importantly in tracking total tumor volume during treatment over time. PMID:9688151
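
    The initial unsupervised-clustering step can be sketched with a toy one-dimensional k-means on scalar intensities (the real system clusters three-feature multispectral voxels; this simplification, and the sample values, are ours):

```python
def kmeans_1d(values, k, iters=20):
    """Toy 1-D k-means: stand-in for the unsupervised clustering that
    produces the initial segmentation. Assumes k >= 2 and len(values) >= k."""
    srt = sorted(values)
    # spread the initial centers across the sorted value range
    centers = [srt[(i * (len(srt) - 1)) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # recompute each center; keep the old one if its cluster emptied
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda i: abs(v - centers[i])) for v in values]
    return centers, labels

# two obvious intensity groups separate cleanly into two clusters
centers, labels = kmeans_1d([1, 2, 1.5, 10, 11, 10.5], k=2)
```

    In the full system, the cluster centers and segmented image are then handed to the rule-based expert system for intracranial extraction and tumor labeling.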

  12. Compiling knowledge-based systems from KEE to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications; most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Described here are the first efforts to develop a system for compiling KBS developed in KEE to Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights into Ada as an artificial intelligence programming language, potential solutions to some of the engineering difficulties encountered in early work, and inspiration for future system development.

  13. How Quality Improvement Practice Evidence Can Advance the Knowledge Base.

    PubMed

    OʼRourke, Hannah M; Fraser, Kimberly D

    2016-01-01

    Recommendations for the evaluation of quality improvement interventions have been made in order to improve the evidence base of whether, to what extent, and why quality improvement interventions affect chosen outcomes. The purpose of this article is to articulate why these recommendations are appropriate to improve the rigor of quality improvement intervention evaluation as a research endeavor, but inappropriate for the purposes of everyday quality improvement practice. To support our claim, we describe the differences between quality improvement interventions that occur for the purpose of practice as compared to research. We then carefully consider how feasibility, ethics, and the aims of evaluation each impact how quality improvement interventions that occur in practice, as opposed to research, can or should be evaluated. Recommendations that fit the evaluative goals of practice-based quality improvement interventions are needed to support fair appraisal of the distinct evidence they produce. We describe a current debate on the nature of evidence to assist in reenvisioning how quality improvement evidence generated from practice might complement that generated from research, and contribute in a value-added way to the knowledge base. PMID:27584696

  14. Knowledge base navigator facilitating regional analysis inter-tool communication.

    SciTech Connect

    Hampton, Jeffery Wade; Chael, Eric Paul; Hart, Darren M.; Merchant, Bion John; Chown, Matthew N.

    2004-08-01

    To make use of some portions of the National Nuclear Security Administration (NNSA) Knowledge Base (KB) for which no current operational monitoring applications were available, Sandia National Laboratories have developed a set of prototype regional analysis tools (MatSeis, EventID Tool, CodaMag Tool, PhaseMatch Tool, Dendro Tool, Infra Tool, etc.), and we continue to maintain and improve these. Individually, these tools have proven effective in addressing specific monitoring tasks, but collectively their number and variety tend to overwhelm KB users, so we developed another application - the KB Navigator - to launch the tools and facilitate their use for real monitoring tasks. The KB Navigator is a flexible, extensible java application that includes a browser for KB data content, as well as support to launch any of the regional analysis tools. In this paper, we will discuss the latest versions of KB Navigator and the regional analysis tools, with special emphasis on the new overarching inter-tool communication methodology that we have developed to make the KB Navigator and the tools function together seamlessly. We use a peer-to-peer communication model, which allows any tool to communicate with any other. The messages themselves are passed as serialized XML, and the conversion from Java to XML (and vice versa) is done using Java Architecture for XML Binding (JAXB).

  15. Dynamic reasoning in a knowledge-based system

    NASA Technical Reports Server (NTRS)

    Rao, Anand S.; Foo, Norman Y.

    1988-01-01

    Any space-based system, whether it is a robot arm assembling parts in space or an onboard system monitoring the space station, has to react to changes which cannot be foreseen. As a result, apart from having domain-specific knowledge as in current expert systems, a space-based AI system should also have general principles of change. This paper presents a modal logic which can not only represent change but also reason with it. Three primitive operations, expansion, contraction and revision, are introduced, and axioms which specify how the knowledge base should change when the external world changes are also specified. Accordingly, the notion of dynamic reasoning is introduced, which, unlike existing forms of reasoning, provides general principles of change. Dynamic reasoning is based on two main principles, namely minimize change and maximize coherence. A possible-world semantics which incorporates the above two principles is also discussed. The paper concludes by discussing how the dynamic reasoning system can be used to specify actions and hence form an integral part of an autonomous reasoning and planning system.
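
    The three primitive operations can be sketched on a knowledge base represented as a set of propositional literals, with revision defined via the Levi identity (contract the negation, then expand). This set-based encoding is our simplification of the paper's modal-logic treatment, and the example facts are invented.

```python
def negate(fact):
    return fact[1:] if fact.startswith("~") else "~" + fact

def expand(kb, fact):
    """Expansion: add a belief (may leave the base inconsistent)."""
    return kb | {fact}

def contract(kb, fact):
    """Contraction: give up a belief, changing as little as possible."""
    return kb - {fact}

def revise(kb, fact):
    """Revision via the Levi identity: contract the negation of the new
    belief, then expand. Only the contradicting belief is lost, which
    realizes the 'minimize change' principle."""
    return expand(contract(kb, negate(fact)), fact)

# the monitored world changes: 'docked' replaces the old belief '~docked',
# while the unrelated belief 'arm_free' survives
kb = {"arm_free", "~docked"}
kb = revise(kb, "docked")   # {"arm_free", "docked"}
```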

  16. Prospector II: Towards a knowledge base for mineral deposits

    USGS Publications Warehouse

    McCammon, R.B.

    1994-01-01

    What began in the mid-seventies as a research effort in designing an expert system to aid geologists in exploring for hidden mineral deposits has in the late eighties become a full-sized knowledge-based system to aid geologists in conducting regional mineral resource assessments. Prospector II, the successor to Prospector, is interactive-graphics oriented, flexible in its representation of mineral deposit models, and suited to regional mineral resource assessment. In Prospector II, the geologist enters the findings for an area, selects the deposit models or examples of mineral deposits for consideration, and the program compares the findings with the models or the examples selected, noting the similarities, differences, and missing information. The models or the examples selected are ranked according to scores that are based on the comparisons with the findings. Findings can be reassessed and the process repeated if necessary. The results provide the geologist with a rationale for identifying those mineral deposit types that the geology of an area permits. In the future, Prospector II can assist in the creation of new models used in regional mineral resource assessment and in striving toward an ultimate classification of mineral deposits. ?? 1994 International Association for Mathematical Geology.
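
    The comparison step, noting similarities, differences, and missing information between an area's findings and a deposit model, can be sketched as follows; the scoring rule, attribute names, and values here are invented for illustration and are not Prospector II's actual scoring scheme.

```python
def score_model(model, findings):
    """Compare an area's findings against one deposit model: count matches
    and mismatches, and collect attributes with no information."""
    similarities, differences, missing = 0, 0, []
    for attribute, expected in model.items():
        if attribute not in findings:
            missing.append(attribute)
        elif findings[attribute] == expected:
            similarities += 1
        else:
            differences += 1
    return similarities - differences, missing

# invented findings for an area, scored against one invented deposit model;
# ranking several models by this score mirrors the workflow described above
model = {"host_rock": "carbonate", "alteration": "skarn", "anomaly": "Cu"}
findings = {"host_rock": "carbonate", "alteration": "argillic"}
score, missing = score_model(model, findings)  # score 0, "anomaly" missing
```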

  17. Knowledge-acquisition tools for medical knowledge-based systems.

    PubMed

    Lanzola, G; Quaglini, S; Stefanelli, M

    1995-03-01

    Knowledge-based systems (KBS) have been proposed to solve a large variety of medical problems. A strategic issue for KBS development and maintenance is the effort required from both knowledge engineers and domain experts. The proposed solution is building efficient knowledge acquisition (KA) tools. This paper presents a set of KA tools we are developing within a European Project called GAMES II. They have been designed after the formulation of an epistemological model of medical reasoning. The main goal is that of developing a computational framework which allows knowledge engineers and domain experts to interact cooperatively in developing a medical KBS. To this aim, a set of reusable software components is highly recommended. Their design was facilitated by the development of a methodology for KBS construction. It views this process as comprising two activities: the tailoring of the epistemological model to the specific medical task to be executed, and the subsequent translation of this model into a computational architecture, so that the connections between computational structures and their knowledge-level counterparts are maintained. The KA tools we developed are illustrated using examples from the behavior of a KBS we are building for the management of children with acute myeloid leukemia. PMID:9082135

  18. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as on the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects that are accessible to numerical models. We present an approach that brings the human expert's knowledge about the scene, the objects inside it, their representation by the data, and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand the possibilities and limitations of algorithms and to take these into account within the processing chain. This not only assists researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement of knowledge technologies within the Semantic Web framework, which has provided a strong base for applications based on knowledge management. In the article we present and describe the knowledge technologies used in our approach, such as the Web Ontology Language (OWL), used for formulating the knowledge base, and the Semantic Web Rule Language (SWRL) with 3D processing and topological built-ins, which aim to combine geometrical analysis of 3D point clouds with specialists' knowledge of the scene and algorithmic processing.

  19. Knowledge-based system for design of signalized intersections

    SciTech Connect

    Linkenheld, J.S. ); Benekohal, R.F. ); Garrett, J.H. Jr. )

    1992-03-01

    For efficient traffic operation in intelligent highway systems, traffic signals need to respond to changes in roadway conditions and traffic demand. The phasing and timing of traffic signals requires the use of heuristic rules of thumb to determine what phases are needed and how the green time should be assigned to them. Because of the need for judgmental knowledge in solving this problem, this study used knowledge-based expert-system technology to develop a system for the phasing and signal timing (PHAST) of an isolated intersection. PHAST takes intersection geometry and traffic volume as input and generates an appropriate phase plan, cycle length, and green time for each phase. The phase plan and signal timing change when intersection geometry or traffic demand changes. This paper describes the intended system functionality, the system architecture, the knowledge used to phase and time an intersection, the implementation of the system, and system verification. PHAST's performance was validated using phase plans and timings of several intersections.
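
    One standard heuristic of the kind such a system encodes is Webster's method for cycle length and green splits; the sketch below uses Webster's classic formulas with hypothetical flow ratios, though the abstract does not state which rules PHAST actually implements.

```python
def webster_timing(flow_ratios, lost_time_s=4.0):
    """Webster's method for an isolated signalized intersection.
    flow_ratios: critical volume-to-saturation-flow ratio for each phase.
    Returns the optimal cycle length (s) and effective green (s) per phase."""
    Y = sum(flow_ratios)                   # intersection flow ratio (must be < 1)
    L = lost_time_s * len(flow_ratios)     # total lost time per cycle
    cycle = (1.5 * L + 5.0) / (1.0 - Y)    # Webster's optimal cycle length
    effective_green = cycle - L
    # distribute green time in proportion to each phase's flow ratio
    greens = [effective_green * y / Y for y in flow_ratios]
    return cycle, greens

# two phases with hypothetical flow ratios 0.3 and 0.2, 4 s lost time each
cycle, greens = webster_timing([0.3, 0.2])  # 34 s cycle; greens 15.6 s, 10.4 s
```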

  20. A knowledge-based system design/information tool

    NASA Technical Reports Server (NTRS)

    Allen, James G.; Sikora, Scott E.

    1990-01-01

    The objective of this effort was to develop a Knowledge Capture System (KCS) for the Integrated Test Facility (ITF) at the Dryden Flight Research Facility (DFRF). The DFRF is a NASA Ames Research Center (ARC) facility. This system was used to capture the design and implementation information for NASA's high angle-of-attack research vehicle (HARV), a modified F/A-18A. In particular, the KCS was used to capture specific characteristics of the design of the HARV fly-by-wire (FBW) flight control system (FCS). The KCS utilizes artificial intelligence (AI) knowledge-based system (KBS) technology. The KCS enables the user to capture the following characteristics of automated systems: the system design; the hardware (H/W) design and implementation; the software (S/W) design and implementation; and the utilities (electrical and hydraulic) design and implementation. A generic version of the KCS was developed which can be used to capture the design information for any automated system. The deliverable items for this project consist of the prototype generic KCS and an application, which captures selected design characteristics of the HARV FCS.

  1. A knowledge based expert system for condition monitoring

    SciTech Connect

    Selkirk, C.G.; Roberge, P.R.; Fisher, G.F.; Yeung, K.K.

    1994-12-31

    Condition monitoring (CM) is the focus of many maintenance philosophies around the world today. In the Canadian Forces (CF), CM has played an important role in the maintenance of aircraft systems since the introduction of spectrometric oil analysis (SOAP) over twenty years ago. Other techniques in use in the CF today include vibration analysis (VA), ferrography, and filter debris analysis (FDA). To improve the usefulness and utility gained from these CM techniques, work is currently underway to incorporate expert systems into them. An expert system for FDA is being developed which will aid filter debris analysts in identifying wear debris and wear level trends, and which will provide the analyst with reference examples in an attempt to standardize results. Once completed, this knowledge based expert system will provide a blueprint from which other CM expert systems can be created. Amalgamating these specific systems into a broad based global system will provide the CM analyst with a tool that will be able to correlate data and results from each of the techniques, thereby increasing the utility of each individual method of analysis. This paper will introduce FDA and then outline the development of the FDA expert system and future applications.

  2. Knowledge-based design of complex mechanical systems

    SciTech Connect

    Ishii, K.

    1988-01-01

    The recent development of Artificial Intelligence (AI) techniques allows the incorporation of qualitative aspects of design into computer aids. This thesis presents a framework for applying AI techniques to the design of complex mechanical systems, using a complex yet well-understood design example as a vehicle for the effort. The author first reviews how experienced designers use knowledge at various stages of system design. He then proposes a knowledge-based model of the design process and develops frameworks for applying knowledge engineering to construct a consultation system for designers. He proposes four such frameworks for use at different stages of design: (1) Design Compatibility Analysis (DCA) analyzes the compatibility of the designer's design alternatives with the design specification; (2) Initial Design Suggestion (IDS) provides the designer with reasonable initial estimates of the design variables; (3) Rule-based Sensitivity Analysis (RSA) guides the user through redesign; and (4) Active Constraint Deduction (ACD) identifies the bottlenecks of design using heuristic knowledge. These frameworks eliminate unnecessary iterations and allow the user to obtain a satisfactory solution rapidly.

  3. FunSecKB: the Fungal Secretome KnowledgeBase

    PubMed Central

    Lum, Gengkon; Min, Xiang Jia

    2011-01-01

    The Fungal Secretome KnowledgeBase (FunSecKB) provides a resource of secreted fungal proteins, i.e. secretomes, identified from all available fungal protein data in the NCBI RefSeq database. The secreted proteins were identified using a well evaluated computational protocol which includes SignalP, WolfPsort and Phobius for signal peptide or subcellular location prediction, TMHMM for identifying membrane proteins, and PS-Scan for identifying endoplasmic reticulum (ER) target proteins. The entries were mapped to the UniProt database, and any annotations of subcellular locations that were either manually curated or computationally predicted were included in FunSecKB. Through a web-based user interface, the database is searchable, browsable and downloadable by NCBI RefSeq accession or gi number, UniProt accession number, keyword, or species. A BLAST utility was integrated to allow users to query the database by sequence similarity, and a user submission tool was implemented to support community annotation of subcellular locations of fungal proteins. With the complete fungal data from RefSeq and associated web-based tools, FunSecKB will be a valuable resource for exploring the potential applications of fungal secreted proteins. Database URL: http://proteomics.ysu.edu/secretomes/fungi.php PMID:21300622
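
    The computational protocol described above amounts to a boolean filter over per-protein prediction flags; a minimal sketch (the field names are our own, and the flags are assumed to be precomputed by tools such as SignalP/Phobius, TMHMM, and PS-Scan):

```python
def is_secreted(protein):
    """A candidate secreted protein carries a signal peptide, has no
    transmembrane helix, and carries no ER-retention signal."""
    return (protein["has_signal_peptide"]
            and not protein["has_tm_helix"]
            and not protein["has_er_retention_signal"])

# two hypothetical proteins with precomputed prediction flags
candidates = [
    {"id": "P1", "has_signal_peptide": True, "has_tm_helix": False,
     "has_er_retention_signal": False},   # passes all filters: secreted
    {"id": "P2", "has_signal_peptide": True, "has_tm_helix": True,
     "has_er_retention_signal": False},   # membrane protein: excluded
]
secretome = [p["id"] for p in candidates if is_secreted(p)]  # ["P1"]
```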

  4. Effective domain-dependent reuse in medical knowledge bases.

    PubMed

    Dojat, M; Pachet, F

    1995-12-01

    Knowledge reuse is now a critical issue for most developers of medical knowledge-based systems. As a rule, reuse is addressed from an ambitious, knowledge-engineering perspective that focuses on reusable general-purpose knowledge modules, concepts, and methods. However, such a general goal fails to take into account the specific aspects of medical practice. From the point of view of the knowledge engineer, whose goal is to capture the specific features and intricacies of a given domain, this approach addresses the wrong level of generality. In this paper, we adopt a more pragmatic viewpoint, introducing the less ambitious goal of "domain-dependent limited reuse" and suggesting effective means of achieving it in practice. In a knowledge representation framework combining objects and production rules, we propose three mechanisms emerging from the combination of object-oriented programming and rule-based programming. We show that these mechanisms contribute to achieving limited reuse and to introducing useful limited variations in medical expertise. PMID:8770532

  5. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method to combine evidence and to handle the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, demonstrating the validity of the proposed approach.
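    The MYCIN inexact-reasoning method the abstract refers to combines certainty factors from independently fired rules. The standard combination rule can be sketched as follows; the example inputs are invented.

    ```python
    # Sketch of MYCIN-style certainty-factor combination for merging
    # evidence from several fired rules.

    def combine_cf(a, b):
        """Combine two certainty factors in [-1, 1] (MYCIN combination rule)."""
        if a >= 0 and b >= 0:
            return a + b * (1 - a)
        if a < 0 and b < 0:
            return a + b * (1 + a)
        return (a + b) / (1 - min(abs(a), abs(b)))

    # Two rules each suggest the class "news" with moderate certainty:
    cf = combine_cf(0.6, 0.5)
    print(cf)  # -> 0.8
    ```

    The rule is commutative and keeps the combined factor within [-1, 1], so evidence from any number of rules can be folded in pairwise, in any order.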

  7. RKB: a Semantic Web knowledge base for RNA

    PubMed Central

    2010-01-01

    Increasingly sophisticated knowledge about RNA structure and function requires an inclusive knowledge representation that facilitates the integration of independently generated information arising from such efforts as genome sequencing projects, microarray analyses, structure determination and RNA SELEX experiments. While RNAML, an XML-based representation, has been proposed as an exchange format for a select subset of information, it lacks domain-specific semantics that are essential for answering questions that require expert knowledge. Here, we describe an RNA knowledge base (RKB) for structure-based knowledge using RDF/OWL Semantic Web technologies. RKB extends a number of ontologies and contains basic terminology for nucleic acid composition along with context/model-specific structural features such as sugar conformations, base pairings and base stackings. RKB (available at http://semanticscience.org/projects/rkb) is populated with PDB entries and MC-Annotate structural annotation. We show queries to the RKB using description logic reasoning, thus opening the door to question answering over independently-published RNA knowledge using Semantic Web technologies. PMID:20626922
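    RKB itself stores its content as RDF triples queried with Semantic Web tooling and description-logic reasoning. As a dependency-free illustration of the underlying idea, the sketch below matches patterns against hand-written subject-predicate-object triples; the terms and residue names are invented, not taken from RKB.

    ```python
    # Minimal sketch of triple-pattern matching over RDF-style triples,
    # illustrating the kind of query a structure knowledge base answers.
    # All triples and vocabulary below are invented examples.

    TRIPLES = {
        ("res1", "hasNucleotide", "A12"),
        ("A12", "sugarConformation", "C3p-endo"),
        ("A12", "pairsWith", "U40"),
        ("U40", "sugarConformation", "C2p-endo"),
    }

    def match(s=None, p=None, o=None):
        """Return triples matching the pattern; None acts as a wildcard."""
        return [(ts, tp, to) for ts, tp, to in TRIPLES
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

    # Which residues have a C3'-endo sugar conformation?
    print([s for s, _, _ in match(p="sugarConformation", o="C3p-endo")])  # -> ['A12']
    ```

    A real deployment would use an RDF store and SPARQL rather than in-memory tuples, but the wildcard-pattern query model is the same.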

  8. Knowledge-based control of an adaptive interface

    NASA Technical Reports Server (NTRS)

    Lachman, Roy

    1989-01-01

    The analysis, development strategy, and preliminary design for an intelligent, adaptive interface is reported. The design philosophy couples knowledge-based system technology with standard human factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent interface prototype because text-processing software can be developed initially as fully functional but limited with a small set of commands. The program's complexity then can be increased incrementally. The rule base includes the operator's behavior and three types of instructions to the underlying application software. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.
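    The inference loop described above (scan the fact base for rule antecedents, fire matching rules, emit their consequents as commands) can be sketched in a few lines. The facts and commands here are invented stand-ins, not the system's actual rule base.

    ```python
    # Sketch of a conventional forward-chaining step: fire every rule
    # whose antecedents are all present in the fact base and collect the
    # consequents as commands. Rules and facts are illustrative only.

    RULES = [
        ({"user_idle", "document_open"}, "autosave"),
        ({"novice_user", "command_failed"}, "show_help"),
    ]

    def infer(facts):
        """Return the commands of all rules whose antecedents hold."""
        return [command for antecedents, command in RULES
                if antecedents <= facts]

    print(infer({"novice_user", "command_failed", "document_open"}))
    # -> ['show_help']
    ```

    A full engine would loop, adding fired consequents back into the fact base until no new rule fires; the single pass above shows the match-and-fire core.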

  9. The AI Bus architecture for distributed knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain

    1991-01-01

    The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources and to each other without violating their security.
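    The shared-blackboard coordination mechanism the abstract describes can be sketched minimally: knowledge sources post entries to a common blackboard, and event-driven demons react to the entries they care about. The class, entry keys, and demon behavior below are illustrative, not the AI Bus API.

    ```python
    # Minimal sketch of blackboard coordination with an event-driven
    # demon, in the spirit of the AI Bus description. Names are invented.

    class Blackboard:
        def __init__(self):
            self.entries = []       # shared, append-only record of postings
            self.subscribers = []   # demons notified on every posting

        def post(self, key, value):
            self.entries.append((key, value))
            for callback in self.subscribers:
                callback(key, value)

    bb = Blackboard()
    log = []

    # A demon that reacts only to "sensor" entries:
    def sensor_demon(key, value):
        if key == "sensor":
            log.append(f"handled {key}={value}")

    bb.subscribers.append(sensor_demon)
    bb.post("sensor", 42)
    bb.post("status", "ok")   # ignored by the sensor demon
    print(log)  # -> ['handled sensor=42']
    ```

    Routing through the blackboard rather than direct calls is what lets knowledge sources stay decoupled: a new demon can be added without changing any poster.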

  10. Knowledge-based graphical interfaces for presenting technical information

    NASA Technical Reports Server (NTRS)

    Feiner, Steven

    1988-01-01

    Designing effective presentations of technical information is extremely difficult and time-consuming. Moreover, the combination of increasing task complexity and declining job skills makes the need for high-quality technical presentations especially urgent. We believe that this need can ultimately be met through the development of knowledge-based graphical interfaces that can design and present technical information. Since much material is most naturally communicated through pictures, our work has stressed the importance of well-designed graphics, concentrating on generating pictures and laying out displays containing them. We describe APEX, a testbed picture generation system that creates sequences of pictures that depict the performance of simple actions in a world of 3D objects. Our system supports rules for determining automatically the objects to be shown in a picture, the style and level of detail with which they should be rendered, the method by which the action itself should be indicated, and the picture's camera specification. We then describe work on GRIDS, an experimental display layout system that addresses some of the problems in designing displays containing these pictures, determining the position and size of the material to be presented.
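    APEX's rules decide, among other things, the level of detail with which each object is rendered. A toy version of that decision, with invented object names and a single invented criterion (participation in the depicted action), might look like this:

    ```python
    # Hypothetical sketch of an APEX-style detail rule: objects that
    # participate in the depicted action get full detail; context
    # objects are rendered schematically. Criterion and names invented.

    def detail_level(obj, action_objects):
        if obj in action_objects:
            return "full"      # object participates in the action
        return "outline"       # context object, rendered schematically

    scene = ["wrench", "bolt", "panel", "toolbox"]
    action = {"wrench", "bolt"}
    print({o: detail_level(o, action) for o in scene})
    # -> {'wrench': 'full', 'bolt': 'full', 'panel': 'outline', 'toolbox': 'outline'}
    ```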