Science.gov

Sample records for prior biological knowledge-based

  1. Prior knowledge-based approach for associating ...

    EPA Pesticide Factsheets

    Evaluating the potential human health and/or ecological risks associated with exposures to complex chemical mixtures in the ambient environment is one of the central challenges of chemical safety assessment and environmental protection. There is a need for approaches that can help to integrate chemical monitoring and bio-effects data to evaluate risks associated with chemicals present in the environment. We used prior knowledge about chemical-gene interactions to develop a knowledge assembly model for detected chemicals at five locations near two wastewater treatment plants. The assembly model was used to generate hypotheses about the biological impacts of the chemicals at each location. The hypotheses were tested using empirical hepatic gene expression data from fathead minnows exposed for 12 d at each location. Empirical gene expression data were also mapped to the assembly models to statistically evaluate the likelihood of a chemical contributing to the observed biological responses. The prior knowledge approach was able to reasonably hypothesize the biological impacts at one site but not the other. Chemicals most likely contributing to the observed biological responses were identified at each location. Despite limitations to the approach, knowledge assembly models have strong potential for associating chemical occurrence with potential biological effects and providing a foundation for hypothesis generation to guide research and/or monitoring efforts relat

  3. Creating a knowledge base of biological research papers

    SciTech Connect

    Hafner, C.D.; Baclawski, K.; Futrelle, R.P.; Fridman, N.

    1994-12-31

    Intelligent text-oriented tools for representing and searching the biological research literature are being developed, which combine object-oriented databases with artificial intelligence techniques to create a richly structured knowledge base of Materials and Methods sections of biological research papers. A knowledge model of experimental processes, biological and chemical substances, and analytical techniques is described, based on the representation techniques of taxonomic semantic nets and knowledge frames. Two approaches to populating the knowledge base with the contents of biological research papers are described: natural language processing and an interactive knowledge definition tool.

  4. SU-E-J-71: Spatially Preserving Prior Knowledge-Based Treatment Planning

    SciTech Connect

    Wang, H; Xing, L

    2015-06-15

    Purpose: Prior knowledge-based treatment planning is impeded by the use of a single dose volume histogram (DVH) curve. Critical spatial information is lost from collapsing the dose distribution into a histogram. Even similar patients possess geometric variations that become inaccessible in the form of a single DVH. We propose a simple prior knowledge-based planning scheme that extracts features from the prior dose distribution while still preserving the spatial information. Methods: A prior patient plan is not used as a mere starting point for a new patient; rather, stopping criteria are constructed from it. Each structure from the prior patient is partitioned into multiple shells. For instance, the PTV is partitioned into an inner, middle, and outer shell. Prior dose statistics are then extracted for each shell and translated into the appropriate Dmin and Dmax parameters for the new patient. Results: The partitioned dose information from a prior case has been applied to 14 2-D prostate cases. Using the prior case yielded final DVHs that were comparable to manual planning, even though the DVH for the prior case was different from the DVHs for the 14 cases. Solely using a single DVH for the entire organ was also tested for comparison but showed a much poorer performance. Different ways of translating the prior dose statistics into parameters for the new patient were also tested. Conclusion: Prior knowledge-based treatment planning needs to preserve the spatial information without transforming patients on a voxel-to-voxel basis. An efficient balance between the anatomy and dose domains is gained through partitioning the organs into multiple shells. The use of prior knowledge not only serves as a starting point for a new case; the information extracted from the partitioned shells is also translated into stopping criteria for the optimization problem at hand.

  5. Case-based reasoning for space applications: Utilization of prior experience in knowledge-based systems

    NASA Technical Reports Server (NTRS)

    King, James A.

    1987-01-01

    The goal is to explain Case-Based Reasoning as a vehicle to establish knowledge-based systems based on experimental reasoning for possible space applications. This goal will be accomplished through an examination of reasoning based on prior experience in a sample domain, and also through a presentation of proposed space applications which could utilize Case-Based Reasoning techniques.

  6. Prior-knowledge-based spectral mixture analysis for impervious surface mapping

    SciTech Connect

    Zhang, Jinshui; He, Chunyang; Zhou, Yuyu; Zhu, Shuang; Shuai, Guanyuan

    2014-01-03

    In this study, we developed a prior-knowledge-based spectral mixture analysis (PKSMA) to map impervious surfaces by using endmembers derived separately for high- and low-density urban regions. First, an urban area was categorized into high- and low-density urban areas, using a multi-step classification method. Next, in high-density urban areas that were assumed to have only vegetation and impervious surfaces (ISs), the Vegetation-Impervious (V-I) model was used in a spectral mixture analysis (SMA) with three endmembers: vegetation, high albedo, and low albedo. In low-density urban areas, the Vegetation-Impervious-Soil (V-I-S) model was used in an SMA with four endmembers: high albedo, low albedo, soil, and vegetation. The fractions of IS with high and low albedo in each pixel were combined to produce the final IS map. The root mean-square error (RMSE) of the IS map produced using PKSMA was about 11.0%, compared to 14.52% using four-endmember SMA. Particularly in high-density urban areas, PKSMA (RMSE = 6.47%) showed better performance than four-endmember SMA (RMSE = 15.91%). The results indicate that PKSMA can improve IS mapping compared to traditional SMA by using appropriately selected endmembers, and is particularly strong in high-density urban areas.
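    The core computation behind any SMA variant is linear unmixing of each pixel against the chosen endmember spectra. The sketch below is a minimal illustration, not the study's implementation: the endmember reflectances are invented values, and the sum-to-one constraint is enforced softly by appending a heavily weighted row of ones to the least-squares system.

```python
import numpy as np

# Hypothetical endmember spectra (rows: spectral bands; columns follow
# the V-I model from the abstract: vegetation, high albedo, low albedo).
# All reflectance values are invented for illustration.
E = np.array([
    [0.05, 0.60, 0.10],
    [0.08, 0.55, 0.12],
    [0.40, 0.50, 0.15],
    [0.45, 0.45, 0.18],
])

def unmix(pixel, endmembers, weight=1e3):
    """Least-squares unmixing with a soft sum-to-one constraint,
    enforced by a heavily weighted extra equation sum(f) = 1."""
    A = np.vstack([endmembers, weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, weight)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions

# A pixel that is 70% vegetation and 30% high-albedo impervious surface.
pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1]
f = unmix(pixel, E)        # fractions ≈ [0.7, 0.3, 0.0], summing to 1
```

    In the paper's workflow the high- and low-albedo fractions would then be summed per pixel to form the impervious-surface estimate.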

  7. Reactome: a knowledge base of biologic pathways and processes

    PubMed Central

    2007-01-01

    Reactome http://www.reactome.org, an online curated resource for human pathway data, provides infrastructure for computation across the biologic reaction network. We use Reactome to infer equivalent reactions in multiple nonhuman species, and present data on the reliability of these inferred reactions for the distantly related eukaryote Saccharomyces cerevisiae. Finally, we describe the use of Reactome both as a learning resource and as a computational tool to aid in the interpretation of microarrays and similar large-scale datasets. PMID:17367534

  8. CONCEPTUAL FRAMEWORK FOR THE CHEMICAL EFFECTS IN BIOLOGICAL SYSTEMS (CEBS) TOXICOGENOMICS KNOWLEDGE BASE

    EPA Science Inventory

    Conceptual Framework for the Chemical Effects in Biological Systems (CEBS) Toxicogenomics Knowledge Base

    Abstract
    Toxicogenomics studies how the genome is involved in responses to environmental stressors or toxicants. It combines genetics, genome-scale mRNA expressio...

  10. Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.

    PubMed

    Chen, C W; Chen, D Z

    2001-11-01

    Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks under a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of increasing monotonicity. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
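    The idea behind an exponential weight parameterization can be sketched as follows; this is a toy illustration under stated assumptions (a synthetic monotone curve standing in for true boiling point data, plain gradient descent), not the authors' exact method. Writing each weight as exp(u) keeps it positive, and a network whose weights are all positive, with a monotone activation, is nondecreasing in its input by construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic monotone target standing in for a boiling point curve.
x = np.linspace(0.0, 1.0, 50)
y = 0.2 + 0.6 * x**2

H = 5                                 # hidden units
u1 = rng.normal(size=H) * 0.1         # log-weights, input -> hidden
b1 = rng.normal(size=H) * 0.1
u2 = rng.normal(size=H) * 0.1         # log-weights, hidden -> output
b2 = 0.0

def forward(x, u1, b1, u2, b2):
    # exp(.) keeps every weight positive, so with a monotone activation
    # (tanh) the output is nondecreasing in x by construction.
    h = np.tanh(np.outer(x, np.exp(u1)) + b1)
    return h @ np.exp(u2) + b2

lr = 0.05
for _ in range(3000):                 # plain gradient descent on MSE
    w1, w2 = np.exp(u1), np.exp(u2)
    h = np.tanh(np.outer(x, w1) + b1)
    out = h @ w2 + b2
    g = 2.0 * (out - y) / len(x)      # dMSE/dout
    b2 -= lr * g.sum()
    u2 -= lr * (g @ h) * w2           # chain rule through exp
    gh = np.outer(g, w2) * (1 - h**2)
    b1 -= lr * gh.sum(axis=0)
    u1 -= lr * (gh * x[:, None]).sum(axis=0) * w1

pred = forward(x, u1, b1, u2, b2)
assert np.all(np.diff(pred) >= 0)     # monotonic regardless of training
```

    Unlike a penalty method, monotonicity here holds exactly at every training step, not just approximately at convergence.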

  11. A prior-knowledge-based threshold segmentation method of forward-looking sonar images for underwater linear object detection

    NASA Astrophysics Data System (ADS)

    Liu, Lixin; Bian, Hongyu; Yagi, Shin-ichi; Yang, Xiaodong

    2016-07-01

    Raw sonar images may not be usable for underwater detection or recognition directly, because disturbances such as grating lobes and multi-path propagation affect the gray-level distribution of sonar images and cause phantom echoes. To search for a more robust segmentation method with a reasonable computational cost, a prior-knowledge-based threshold segmentation method for underwater linear object detection is discussed. The possibility of guiding the segmentation threshold evolution of forward-looking sonar images using prior knowledge is verified by experiment. During the threshold evolution, the collinear relation of two lines that correspond to double peaks in the voting space of the edged image is used as the criterion of termination. The two steps interact: the Hough transform establishes the collinear relation of the lines, while the binary image generated from the current threshold supplies the input to the Hough transform. The experimental results show that the proposed method maintains a good tradeoff between segmentation quality and computational time in comparison with conventional segmentation methods. The proposed method also lends itself to further processing for unsupervised underwater visual understanding.
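    The threshold-evolution loop — binarize, vote in Hough space, stop when the line evidence stabilizes — can be sketched on synthetic data. This is a deliberately simplified stand-in: a single peak-stability check replaces the paper's double-peak collinearity criterion, and the "sonar" image is just noise plus one bright row.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "sonar" image: speckle noise plus one bright linear target.
img = rng.random((64, 64)) * 0.7
img[32, :] = 0.9                        # the linear object (row 32)

def hough_peak(binary, n_theta=90):
    """Tiny Hough transform; returns the height of the strongest peak
    in the (rho, theta) voting space."""
    ys, xs = np.nonzero(binary)
    if len(xs) == 0:
        return 0
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.round(np.outer(xs, np.cos(thetas)) +
                    np.outer(ys, np.sin(thetas))).astype(int)
    best = 0
    for t in range(n_theta):            # vote counts per angle via bincount
        votes = np.bincount(rhos[:, t] - rhos[:, t].min())
        best = max(best, int(votes.max()))
    return best

# Evolve the threshold upward until the dominant Hough peak stops
# changing, i.e. only the linear structure survives binarization.
prev = -1
for thr in np.arange(0.50, 0.95, 0.05):
    peak = hough_peak(img > thr)
    if peak == prev:                    # simplified termination criterion
        break
    prev = peak
# peak now equals the 64-pixel line; the scan stopped early instead of
# exhaustively testing every threshold.
```

    The appeal of the approach is exactly this early stop: once the line evidence in the voting space is stable, further threshold refinement buys nothing.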

  12. RegenBase: a knowledge base of spinal cord injury biology for translational research.

    PubMed

    Callahan, Alison; Abeyruwan, Saminda W; Al-Ali, Hassan; Sakurai, Kunie; Ferguson, Adam R; Popovich, Phillip G; Shah, Nigam H; Visser, Ubbo; Bixby, John L; Lemmon, Vance P

    2016-01-01

    Spinal cord injury (SCI) research is a data-rich field that aims to identify the biological mechanisms resulting in loss of function and mobility after SCI, as well as develop therapies that promote recovery after injury. SCI experimental methods, data and domain knowledge are locked in the largely unstructured text of scientific publications, making large scale integration with existing bioinformatics resources and subsequent analysis infeasible. The lack of standard reporting for experiment variables and results also makes experiment replicability a significant challenge. To address these challenges, we have developed RegenBase, a knowledge base of SCI biology. RegenBase integrates curated literature-sourced facts and experimental details, raw assay data profiling the effect of compounds on enzyme activity and cell growth, and structured SCI domain knowledge in the form of the first ontology for SCI, using Semantic Web representation languages and frameworks. RegenBase uses consistent identifier schemes and data representations that enable automated linking among RegenBase statements and also to other biological databases and electronic resources. By querying RegenBase, we have identified novel biological hypotheses linking the effects of perturbagens to observed behavioral outcomes after SCI. RegenBase is publicly available for browsing, querying and download. Database URL: http://regenbase.org.

  13. BioBIKE: A Web-based, programmable, integrated biological knowledge base

    PubMed Central

    Elhai, Jeff; Taton, Arnaud; Massar, JP; Myers, John K.; Travers, Mike; Casey, Johnny; Slupesky, Mark; Shrager, Jeff

    2009-01-01

    BioBIKE (biobike.csbc.vcu.edu) is a web-based environment enabling biologists with little programming expertise to combine tools, data, and knowledge in novel and possibly complex ways, as demanded by the biological problem at hand. BioBIKE is composed of three integrated components: a biological knowledge base, a graphical programming interface and an extensible set of tools. Each of the five current BioBIKE instances provides all available information (genomic, metabolic, experimental) appropriate to a given research community. The BioBIKE programming language and graphical programming interface employ familiar operations to help users combine functions and information to conduct biologically meaningful analyses. Many commonly used tools, such as Blast and PHYLIP, are built-in, allowing users to access them within the same interface and to pass results from one to another. Users may also invent their own tools, packaging complex expressions under a single name, which is immediately made accessible through the graphical interface. BioBIKE represents a partial solution to the difficult question of how to enable those with no background in computer programming to work directly and creatively with mass biological information. BioBIKE is distributed under the MIT Open Source license. A description of the underlying language and other technical matters is available at www.Biobike.org. PMID:19433511

  14. RegenBase: a knowledge base of spinal cord injury biology for translational research

    PubMed Central

    Callahan, Alison; Abeyruwan, Saminda W.; Al-Ali, Hassan; Sakurai, Kunie; Ferguson, Adam R.; Popovich, Phillip G.; Shah, Nigam H.; Visser, Ubbo; Bixby, John L.; Lemmon, Vance P.

    2016-01-01

    Spinal cord injury (SCI) research is a data-rich field that aims to identify the biological mechanisms resulting in loss of function and mobility after SCI, as well as develop therapies that promote recovery after injury. SCI experimental methods, data and domain knowledge are locked in the largely unstructured text of scientific publications, making large scale integration with existing bioinformatics resources and subsequent analysis infeasible. The lack of standard reporting for experiment variables and results also makes experiment replicability a significant challenge. To address these challenges, we have developed RegenBase, a knowledge base of SCI biology. RegenBase integrates curated literature-sourced facts and experimental details, raw assay data profiling the effect of compounds on enzyme activity and cell growth, and structured SCI domain knowledge in the form of the first ontology for SCI, using Semantic Web representation languages and frameworks. RegenBase uses consistent identifier schemes and data representations that enable automated linking among RegenBase statements and also to other biological databases and electronic resources. By querying RegenBase, we have identified novel biological hypotheses linking the effects of perturbagens to observed behavioral outcomes after SCI. RegenBase is publicly available for browsing, querying and download. Database URL: http://regenbase.org PMID:27055827

  15. Integrating biological knowledge based on functional annotations for biclustering of gene expression data.

    PubMed

    Nepomuceno, Juan A; Troncoso, Alicia; Nepomuceno-Chamorro, Isabel A; Aguilar-Ruiz, Jesús S

    2015-05-01

    Gene expression data analysis is based on the assumption that co-expressed genes imply co-regulated genes. This assumption is being reformulated, because the co-expression of a group of genes may be the result of independent activation under the same experimental condition rather than of a shared regulatory regime. For this reason, traditional techniques are now being improved by combining prior biological knowledge from open-access repositories with gene expression data. Biclustering is an unsupervised machine learning technique that searches for patterns in gene expression data matrices. A scatter search-based biclustering algorithm that integrates biological information is proposed in this paper. In addition to the gene expression data matrix, the only other input to the algorithm is a direct annotation file that relates each gene to a set of terms from a biological repository where genes are annotated. Two different biological measures, FracGO and SimNTO, are proposed to integrate this information by adding them to the to-be-optimized fitness function in the scatter search scheme. FracGO is based on biological enrichment, whereas SimNTO is based on the overlap among the GO annotations of pairs of genes. Experimental results evaluate the proposed algorithm on two datasets and show that the algorithm performs better when biological knowledge is integrated. Moreover, an analysis and comparison of the two biological measures is presented, and it is concluded that the differences depend both on the data source and on how the annotation file has been built when GO is used. It is also shown that the proposed algorithm obtains a greater number of enriched biclusters than other classical biclustering algorithms typically used as benchmarks, and an analysis of the overlap among biclusters reveals that the biclusters obtained overlap little. The proposed methodology is a general-purpose algorithm which allows
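    The fitness-function integration can be sketched as a coherence term plus a biological term. The block below is illustrative only: the expression matrix, the gene-to-GO map, and the weighting are invented, mean squared residue stands in for the coherence measure, and the FracGO-style term is a simplified reading (fraction of bicluster genes sharing the most frequent annotation), not the authors' exact formulation.

```python
import numpy as np

# Toy expression matrix (genes x conditions) and a hypothetical
# gene -> GO-term annotation map (identifiers are illustrative only).
expr = np.array([[1.0, 2.0, 3.0],
                 [1.1, 2.1, 3.1],
                 [0.9, 1.9, 2.9],
                 [5.0, 1.0, 4.0]])
annot = {0: {"GO:0006355"}, 1: {"GO:0006355"},
         2: {"GO:0006355"}, 3: {"GO:0008152"}}

def msr(sub):
    """Mean squared residue: distance of the submatrix from a
    perfectly additive (coherent) expression pattern."""
    r = sub - sub.mean(1, keepdims=True) - sub.mean(0) + sub.mean()
    return (r ** 2).mean()

def frac_go(genes):
    """FracGO-style measure: fraction of the bicluster's genes that
    share its most frequent annotation term."""
    terms = [t for g in genes for t in annot[g]]
    return max(terms.count(t) for t in set(terms)) / len(genes)

def fitness(genes, conds, lam=0.5):
    # Lower is better: coherence penalty plus biological penalty.
    return msr(expr[np.ix_(genes, conds)]) + lam * (1.0 - frac_go(genes))

coherent = fitness([0, 1, 2], [0, 1, 2])   # co-annotated, additive rows
mixed = fitness([0, 1, 3], [0, 1, 2])      # gene 3 breaks both terms
assert coherent < mixed
```

    In the scatter search scheme this fitness would score candidate biclusters during combination and improvement, steering the search toward groups that are both expression-coherent and functionally related.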

  16. Knowledge-based approach for functional MRI analysis by SOM neural network using prior labels from Talairach stereotaxic space

    NASA Astrophysics Data System (ADS)

    Erberich, Stephan G.; Willmes, Klaus; Thron, Armin; Oberschelp, Walter; Huang, H. K.

    2002-04-01

    Among the methods proposed for the analysis of functional MR, we have previously introduced a model-independent analysis based on the self-organizing map (SOM) neural network technique. The SOM neural network can be trained to identify the temporal patterns in voxel time-series of individual functional MRI (fMRI) experiments. The separated classes consist of activation, deactivation and baseline patterns corresponding to the task paradigm. Because the classification capability of the SOM depends not only on the distinctness of the patterns themselves but also on their frequency of occurrence in the training set, a weighting or selection of voxels of interest should be performed prior to training the neural network to improve pattern learning. Weighting of interesting voxels by means of autocorrelation or F-test significance levels has been used successfully, but a large number of baseline voxels is still included in the training. The purpose of this approach is to avoid the inclusion of these voxels by using three different levels of segmentation and mapping from Talairach space: (1) voxel partitions at the lobe level, (2) voxel partitions at the gyrus level and (3) voxel partitions at the cell level (Brodmann areas). The results of the SOM classification based on these mapping levels, in comparison to training with all brain voxels, are presented in this paper.
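    The anatomical pre-selection idea — train the SOM only on voxels carrying an atlas label of interest — can be sketched with synthetic time-series. Everything below is illustrative: the block paradigm, the noise levels, the "gyrus"/"outside" labels standing in for Talairach partitions, and the tiny 1-D SOM.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical fMRI voxel time-series (voxels x timepoints) and a
# Talairach-style label per voxel; only labelled voxels are trained on.
n_t = 40
task = np.tile([1.0] * 5 + [-1.0] * 5, 4)         # block task paradigm
active = task + rng.normal(0, 0.2, (30, n_t))     # activation voxels
rest = rng.normal(0, 0.2, (70, n_t))              # baseline voxels
series = np.vstack([active, rest])
labels = np.array(["gyrus"] * 30 + ["outside"] * 70)

train = series[labels != "outside"]               # anatomical pre-selection

# Minimal 1-D self-organizing map over the selected time-series.
nodes = rng.normal(0, 0.1, (4, n_t))
for epoch in range(20):
    lr = 0.5 * (1 - epoch / 20)                   # decaying learning rate
    for v in train:
        bmu = np.argmin(((nodes - v) ** 2).sum(1))  # best-matching unit
        for j in range(4):                          # neighbourhood update
            h = np.exp(-abs(j - bmu))
            nodes[j] += lr * h * (v - nodes[j])

# The winning node for an activation voxel should track the paradigm.
bmu = np.argmin(((nodes - train[0]) ** 2).sum(1))
corr = np.corrcoef(nodes[bmu], task)[0, 1]
```

    Because the baseline voxels never enter training, the map's capacity is spent on the paradigm-related patterns — the effect the anatomical masking is meant to achieve.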

  17. A Knowledge Base for Teaching Biology Situated in the Context of Genetic Testing

    ERIC Educational Resources Information Center

    van der Zande, Paul; Waarlo, Arend Jan; Brekelmans, Mieke; Akkerman, Sanne F.; Vermunt, Jan D.

    2011-01-01

    Recent developments in the field of genomics will impact the daily practice of biology teachers who teach genetics in secondary education. This study reports on the first results of a research project aimed at enhancing biology teacher knowledge for teaching genetics in the context of genetic testing. The increasing body of scientific knowledge…

  19. A Knowledge Base for Teaching Biology Situated in the Context of Genetic Testing

    NASA Astrophysics Data System (ADS)

    van der Zande, Paul; Waarlo, Arend Jan; Brekelmans, Mieke; Akkerman, Sanne F.; Vermunt, Jan D.

    2011-10-01

    Recent developments in the field of genomics will impact the daily practice of biology teachers who teach genetics in secondary education. This study reports on the first results of a research project aimed at enhancing biology teacher knowledge for teaching genetics in the context of genetic testing. The increasing body of scientific knowledge concerning genetic testing and the related consequences for decision-making indicate the societal relevance of such a situated learning approach. What content knowledge do biology teachers need for teaching genetics in the personal health context of genetic testing? This study describes the required content knowledge by exploring the educational practice and clinical genetic practices. Nine experienced teachers and 12 respondents representing the clinical genetic practices (clients, medical professionals, and medical ethicists) were interviewed about the biological concepts and ethical, legal, and social aspects (ELSA) of testing they considered relevant to empowering students as future health care clients. The ELSA suggested by the respondents were complemented by suggestions found in the literature on genetic counselling. The findings revealed that the required teacher knowledge consists of multiple layers that are embedded in specific genetic test situations: on the one hand, the knowledge of concepts represented by the curricular framework and some additional concepts (e.g. multifactorial and polygenic disorder) and, on the other hand, more knowledge of ELSA and generic characteristics of genetic test practice (uncertainty, complexity, probability, and morality). Suggestions regarding how to translate these characteristics, concepts, and ELSA into context-based genetics education are discussed.

  20. Knowledge-based fuzzy system for diagnosis and control of an integrated biological wastewater treatment process.

    PubMed

    Pires, O C; Palma, C; Costa, J C; Moita, I; Alves, M M; Ferreira, E C

    2006-01-01

    A supervisory expert system based on fuzzy logic rules was developed for diagnosis and control of a laboratory-scale plant comprising anaerobic digestion and anoxic/aerobic modules for combined high-rate biological N and C removal. The design and implementation of a computational environment in LabVIEW for data acquisition, plant operation and distributed equipment control is described. A step increase in ammonia concentration from 20 to 60 mg N/L was applied during a trial period of 73 h. Recycle flow rate from the aerobic to the anoxic module and bypass flow rate from the influent directly to the anoxic reactor were the output variables of the fuzzy system. They were automatically changed (from 34 to 111 L/day and from 8 to 13 L/day, respectively) when new plant conditions were recognised by the expert system. Denitrification efficiency higher than 85% was achieved 30 h after the disturbance and 15 h after the system response, at an HRT as low as 1.5 h. Nitrification efficiency gradually increased from 12 to 50% at an HRT of 3 h. The system reacted properly, setting adequate operating conditions that led to timely and efficient recovery of N and C removal rates.
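    A fuzzy rule base of this kind reduces to membership functions plus defuzzification. The sketch below uses the abstract's reported recycle-flow settings (34 and 111 L/day) as rule consequents, but the triangular membership breakpoints and the two-rule structure are invented for illustration, not the plant's actual calibration.

```python
def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def recycle_flow(ammonia_mg_l):
    """Two-rule fuzzy controller with weighted-average defuzzification.
    Breakpoints are illustrative assumptions."""
    low = tri(ammonia_mg_l, -20.0, 0.0, 40.0)
    high = tri(ammonia_mg_l, 20.0, 60.0, 100.0)
    # Rule 1: IF ammonia is low  THEN recycle flow = 34 L/day
    # Rule 2: IF ammonia is high THEN recycle flow = 111 L/day
    num = low * 34.0 + high * 111.0
    den = low + high
    return num / den if den else 34.0

out_low = recycle_flow(0.0)    # only rule 1 fires -> 34.0 L/day
out_mid = recycle_flow(30.0)   # both fire equally -> 72.5 L/day
out_high = recycle_flow(60.0)  # only rule 2 fires -> 111.0 L/day
```

    The interpolation between rules is what lets the controller respond smoothly to the 20-to-60 mg N/L ammonia step rather than switching abruptly.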

  1. Patient Preference for Dosing Frequency Based on Prior Biologic Experience.

    PubMed

    Zhang, Mingliang; Carter, Chureen; Olson, William H; Johnson, Michael P; Brennem, Susan K; Lee, Seina; Farahi, Kamyar

    2017-03-01

    There is limited research exploring patient preferences regarding dosing frequency of biologic treatment of psoriasis. Patients with moderate-to-severe plaque psoriasis identified in a healthcare claims database completed a survey regarding experience with psoriasis treatments and preferred dosing frequency. Survey questions regarding preferences were posed in two ways: (1) by rating the likelihood of choosing dosing once every week, once every 2 weeks, or once every 12 weeks; and (2) by choosing one option among once every 1-2 weeks, 3-4 weeks, 1-2 months, or 2-3 months. Data were analyzed by prior biologic history (biologic-experienced vs biologic-naïve, and by experience with one or two specific biologics). Overall, 426 patients completed the survey: 163 biologic-naïve patients and 263 biologic-experienced patients (159 had some experience with etanercept, 105 with adalimumab, and 49 with ustekinumab). Among patients who indicated experience with one or two specific biologics, data were available for 219 (30 had experience with three biologics, and 14 did not specify which). The majority of biologic-naïve (68.8%) and overall biologic-experienced (69.4%) patients indicated that they were very likely to choose the least frequent dosing option of once every 12 weeks (Table 1). In contrast, fewer biologic-naïve (9.1% and 16.7%) and biologic-experienced (22.5% and 25.3%) patients indicated that they were very likely to choose the 1-week and 2-week dosing interval options, respectively. In the cohorts grouped by experience with specific biologics, the most frequently chosen option among those with no experience with ustekinumab was once every 1-2 weeks. Among patients with any experience with ustekinumab, regardless of their experience with other biologics, the most frequently chosen option was once every 2-3 months. The least frequent dosing interval was preferred by biologic-naïve patients and by patients who had any experience with ustekinumab. Dosing interval may influence the shared decision-making process for psoriasis treatment with biologics.

  2. XML-based data model and architecture for a knowledge-based grid-enabled problem-solving environment for high-throughput biological imaging.

    PubMed

    Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif

    2008-03-01

    High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services. The first level provides tools for extracting spatiotemporal knowledge from image sets and the second level provides high-level knowledge management and reasoning services. We then present cellular imaging markup language, an extensible markup language-based language for modeling of biological images and representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.
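    An XML model of a spatiotemporal imaging event might look as follows. Every tag and attribute name below is invented for illustration (the actual cellular imaging markup language schema is defined in the paper); the point is only that, once image content is represented as structured markup, knowledge extraction becomes a matter of structured queries.

```python
import xml.etree.ElementTree as ET

# Build a hypothetical spatiotemporal event description.
doc = ET.Element("imaging_experiment", id="exp-001")
event = ET.SubElement(doc, "event", type="cell_division", frame="12")
cell = ET.SubElement(event, "object", kind="cell", label="c7")
ET.SubElement(cell, "position", x="118", y="240", t="12")

xml_text = ET.tostring(doc, encoding="unicode")

# Knowledge extraction as a structured query over the model:
divisions = doc.findall(".//event[@type='cell_division']")
```

    A matching step over many such documents — e.g. all division events within a time window — is then an XPath query rather than a re-run of the image analysis.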

  3. Mortality according to a prior assessment of biological age.

    PubMed

    Bulpitt, Christopher J; Antikainen, Riitta L; Markowe, Hugh L J; Shipley, Martin J

    2009-12-01

    Measures of biological age have not previously been proven to predict mortality. This study examines whether measuring biological age improves the prediction of mortality. Prospective study from 1981 to 2001 of 397 male London Civil Servants. Two indices of biological ageing were calculated. Sixty men died, and both indices of biological ageing were related to survival. In a model that mutually adjusted for both chronological and biological age, biological age using index one was statistically significant, with a hazard ratio (HR) of 1.11 per year of age (95% confidence interval 1.01-1.21, P=0.03). The useful components of the measures of biological ageing were systolic blood pressure (HR 1.31 per 1 SD), albumin and, to a lesser degree, erythrocyte sedimentation rate (ESR). Greying of the hair, skin inelasticity, arcus senilis, and baldness were not predictors of mortality as measured by our methods. Similarly, serum cholesterol, creatinine, calcium and urate could be excluded. A modified index was developed including systolic pressure, ESR, urea, albumin, and bilirubin, and had a sensitivity of 78% and specificity of 51% in predicting subjects who died. This study represents 'proof of principle' in demonstrating the utility and validity of measuring biological age. The modified index needs to be tested prospectively.

  4. Introduction to knowledge base

    SciTech Connect

    Ohsuga, S.

    1986-01-01

    This work provides a broad range of easy-to-understand information on basic knowledge base concepts and basic element technology for the building of a knowledge base system. It also discusses various languages and networks for development of knowledge base systems. It describes applications of knowledge base utilization methodology and prospects for the future in such areas as pattern recognition, natural language processing, expert systems, and CAD/CAM.

  5. Cooperative Knowledge Bases.

    DTIC Science & Technology

    1988-02-01

    intelligent knowledge bases. The present state of our system for concurrent evaluation of a knowledge base of logic clauses using static allocation...de Kleer, J., An assumption-based TMS, Artificial Intelligence, Vol. 28, No. 2, 1986. [Doyle 79] Doyle, J., A truth maintenance system, Artificial

  6. Monitoring Knowledge Base (MKB)

    EPA Pesticide Factsheets

    The Monitoring Knowledge Base (MKB) is a compilation of emissions measurement and monitoring techniques associated with air pollution control devices, industrial process descriptions, and permitting techniques, including flexible permit development. Using MKB, one can gain a comprehensive understanding of emissions sources, control devices, and monitoring techniques, enabling one to determine appropriate permit terms and conditions.

  7. Knowledge-Based Abstracting.

    ERIC Educational Resources Information Center

    Black, William J.

    1990-01-01

    Discussion of automatic abstracting of technical papers focuses on a knowledge-based method that uses two sets of rules. Topics discussed include anaphora; text structure and discourse; abstracting techniques, including the keyword method and the indicator phrase method; and tools for text skimming. (27 references) (LRW)

  8. Knowledge Based Text Generation

    DTIC Science & Technology

    1989-08-01

    knowledge base as well as communicate the reasoning behind a particular diagnosis. This is discussed more thoroughly in subsequent sections. On the other...explanation. Weiner proposed that a statement can be justified by offering reasons, supporting examples, and implausible alternatives, except for the statement...These justification techniques are realized in his system by four predicates: statement, reason, example and alternative. Connectives such as and/or

  9. Incorporation of Biological Pathway Knowledge in the Construction of Priors for Optimal Bayesian Classification.

    PubMed

    Esfahani, Mohammad Shahrokh; Dougherty, Edward R

    2014-01-01

    Small samples are commonplace in genomic/proteomic classification, the result being inadequate classifier design and poor error estimation. The problem has recently been addressed by utilizing prior knowledge in the form of a prior distribution on an uncertainty class of feature-label distributions. A critical issue remains: how to incorporate biological knowledge into the prior distribution. For genomics/proteomics, the most common kind of knowledge is in the form of signaling pathways. Thus, it behooves us to find methods of transforming pathway knowledge into knowledge of the feature-label distribution governing the classification problem. In this paper, we address the problem of prior probability construction by proposing a series of optimization paradigms that utilize the incomplete prior information contained in pathways (both topological and regulatory). The optimization paradigms employ the marginal log-likelihood, established using a small number of feature-label realizations (sample points) regularized with the prior pathway information about the variables. In the special case of a Normal-Wishart prior distribution on the mean and inverse covariance matrix (precision matrix) of a Gaussian distribution, these optimization problems become convex. Companion website: gsp.tamu.edu/Publications/supplementary/shahrokh13a.
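
For reference, the Normal-Wishart prior mentioned above places a joint distribution on the mean and precision (inverse covariance) matrix of a Gaussian; in standard notation (the hyperparameters below are generic, not the paper's specific choices):

```latex
p(\mu, \Lambda) = \mathcal{N}\!\big(\mu \,\big|\, \mu_0, (\kappa_0 \Lambda)^{-1}\big)\,
                  \mathcal{W}\big(\Lambda \,\big|\, W, \nu\big)
```

Because this prior is conjugate to the Gaussian likelihood, the marginal likelihood has a closed form, which underlies the convexity of the optimization problems noted in the abstract.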

  10. Reconstruction of Biological Networks by Incorporating Prior Knowledge into Bayesian Network Models

    PubMed Central

    Shin, Dong-Guk

    2012-01-01

    Abstract The Bayesian network model is widely used for reverse engineering of biological network structures. An advantage of this model is its capability to integrate prior knowledge into the model learning process, which can improve the quality of the network reconstruction outcome. Some previous works have explored this area with a focus on using prior knowledge of direct molecular links, except for a few recent ones proposing to examine the effects of molecular orderings. In this study, we propose a Bayesian network model that can integrate both direct links and orderings into the model. Random weights are assigned to these two types of prior knowledge to alleviate bias toward certain types of information. We evaluate our model performance using both synthetic data and biological data for the RAF signaling network, and illustrate the significant improvement in network structure reconstruction of the proposed models over existing methods. We also examine the correlation between the improvement and the abundance of ordering prior knowledge. To address the issue of generating prior knowledge, we propose an approach to automatically extract potential molecular orderings from knowledge resources such as the Kyoto Encyclopedia of Genes and Genomes (KEGG) database and Gene Ontology (GO) annotation. PMID:23210479
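
The weighting of the two kinds of prior knowledge can be sketched as an energy-based structure prior (a generic Imoto-style form; the weights, function names, and toy graphs below are illustrative, not the authors' exact formulation):

```python
def structure_log_prior(edges, prior_links, prior_orders, w1, w2):
    """Log-prior for a candidate DAG given two kinds of prior knowledge:
    direct links (pairs expected to be edges) and orderings (u should
    precede v).  Mismatches are penalized; the weights trade off the
    two information types.  Returns log P(G) up to an additive constant."""
    edge_set = set(edges)
    link_mismatch = sum(1 for e in prior_links if e not in edge_set)
    order_violations = sum(1 for (u, v) in prior_orders if (v, u) in edge_set)
    energy = w1 * link_mismatch + w2 * order_violations
    return -energy

g1 = [("A", "B"), ("B", "C")]   # respects both kinds of prior knowledge
g2 = [("B", "A"), ("B", "C")]   # violates the link and the ordering
links = [("A", "B")]            # prior direct link
orders = [("A", "B")]           # prior ordering: A before B
lp1 = structure_log_prior(g1, links, orders, w1=1.0, w2=1.0)
lp2 = structure_log_prior(g2, links, orders, w1=1.0, w2=1.0)
```

In a structure-learning loop this log-prior would simply be added to the data log-likelihood when scoring each candidate graph.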

  11. Prior knowledge-based approach for associating contaminants with biological effects: A case study in the St. Croix river basin, MN, WI, USA.

    EPA Science Inventory

    Evaluating the potential human health and/or ecological risks associated with exposures to complex chemical mixtures in the ambient environment is one of the central challenges of chemical safety assessment and environmental protection. There is a need for approaches that can he...

  13. CORE-Net: exploiting prior knowledge and preferential attachment to infer biological interaction networks.

    PubMed

    Montefusco, F; Cosentino, C; Amato, F

    2010-09-01

    The problem of reverse engineering the topology of functional interaction networks from time-course experimental data has received considerable attention in the literature, owing to potential applications in fields as diverse as engineering, biology, economics, and the social sciences. The present work introduces a novel technique, CORE-Net, which addresses this problem, focusing on the case of biological interaction networks. The method is based on the representation of the network in the form of a dynamical system and on an iterative convex optimisation procedure. A first advantage of the proposed approach is that it can exploit qualitative prior knowledge about the network interactions, of the kind typically available from the biological literature and databases. A second novel contribution consists of exploiting the growth and preferential attachment mechanisms to improve inference performance when dealing with networks that exhibit a scale-free topology. The technique is first assessed through numerical tests on in silico random networks; subsequently, it is applied to reverse engineering a cell cycle regulatory subnetwork in Saccharomyces cerevisiae from experimental microarray data. These tests show that the combined exploitation of prior knowledge and preferential attachment significantly improves the predictions with respect to other approaches.

  14. Mobile robot knowledge base

    NASA Astrophysics Data System (ADS)

    Heath Pastore, Tracy; Barnes, Mitchell; Hallman, Rory

    2005-05-01

    Robot technology is developing at a rapid rate for both commercial and Department of Defense (DOD) applications. As a result, the task of managing both technology and experience information is growing. In the not-too-distant past, tracking development efforts of robot platforms, subsystems and components was not too difficult, expensive, or time consuming. To do the same today is a significant undertaking. The Mobile Robot Knowledge Base (MRKB) provides the robotics community with a web-accessible, centralized resource for sharing information, experience, and technology to more efficiently and effectively meet the needs of the robot system user. The resource includes searchable information on robot components, subsystems, mission payloads, platforms, and DOD robotics programs. In addition, the MRKB website provides a forum for technology and information transfer within the DOD robotics community and an interface for the Robotic Systems Pool (RSP). The RSP manages a collection of small teleoperated and semi-autonomous robotic platforms, available for loan to DOD and other qualified entities. The objective is to put robots in the hands of users and use the test data and fielding experience to improve robot systems.

  15. Bayesian network prior: network analysis of biological data using external knowledge

    PubMed Central

    Isci, Senol; Dogan, Haluk; Ozturk, Cengizhan; Otu, Hasan H.

    2014-01-01

    Motivation: Reverse engineering GI networks from experimental data is a challenging task due to the complex nature of the networks and the noise inherent in the data. One way to overcome these hurdles would be incorporating the vast amounts of external biological knowledge when building interaction networks. We propose a framework where GI networks are learned from experimental data using Bayesian networks (BNs) and the incorporation of external knowledge is also done via a BN that we call Bayesian Network Prior (BNP). BNP depicts the relation between various evidence types that contribute to the event ‘gene interaction’ and is used to calculate the probability of a candidate graph (G) in the structure learning process. Results: Our simulation results on synthetic, simulated and real biological data show that the proposed approach can identify the underlying interaction network with high accuracy even when the prior information is distorted and outperforms existing methods. Availability: Accompanying BNP software package is freely available for academic use at http://bioe.bilgi.edu.tr/BNP. Contact: hasan.otu@bilgi.edu.tr Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:24215027

  16. Filtering genetic variants and placing informative priors based on putative biological function.

    PubMed

    Friedrichs, Stefanie; Malzahn, Dörthe; Pugh, Elizabeth W; Almeida, Marcio; Liu, Xiao Qing; Bailey, Julia N

    2016-02-03

    High-density genetic marker data, especially sequence data, imply an immense multiple testing burden. This can be ameliorated by filtering genetic variants, exploiting or accounting for correlations between variants, jointly testing variants, and by incorporating informative priors. Priors can be based on biological knowledge or predicted variant function, or even be used to integrate gene expression or other omics data. Based on Genetic Analysis Workshop (GAW) 19 data, this article discusses diversity and usefulness of functional variant scores provided, for example, by PolyPhen2, SIFT, or RegulomeDB annotations. Incorporating functional scores into variant filters or weights and adjusting the significance level for correlations between variants yielded significant associations with blood pressure traits in a large family study of Mexican Americans (GAW19 data set). Marker rs218966 in gene PHF14 and rs9836027 in MAP4 significantly associated with hypertension; additionally, rare variants in SNUPN significantly associated with systolic blood pressure. Variant weights strongly influenced the power of kernel methods and burden tests. Apart from variant weights in test statistics, prior weights may also be used when combining test statistics or to informatively weight p values while controlling false discovery rate (FDR). Indeed, power improved when gene expression data for FDR-controlled informative weighting of association test p values of genes was used. Finally, approaches exploiting variant correlations included identity-by-descent mapping and the optimal strategy for joint testing rare and common variants, which was observed to depend on linkage disequilibrium structure.
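
The informative weighting of p values under FDR control mentioned above is commonly done by dividing each p value by a prior weight with mean 1 and then applying the usual Benjamini-Hochberg step-up. A minimal sketch, with made-up p values and weights:

```python
def weighted_bh(pvalues, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg: divide each p value by its prior
    weight (weights should average 1), then apply the standard BH
    step-up procedure to the adjusted p values."""
    assert abs(sum(weights) / len(weights) - 1.0) < 1e-9
    adjusted = [p / w for p, w in zip(pvalues, weights)]
    order = sorted(range(len(adjusted)), key=lambda i: adjusted[i])
    m = len(adjusted)
    k = 0
    for rank, i in enumerate(order, start=1):
        if adjusted[i] <= rank * alpha / m:
            k = rank  # largest rank passing the BH threshold
    rejected = [False] * m
    for i in order[:k]:
        rejected[i] = True
    return rejected

pvals = [0.001, 0.04, 0.03, 0.8]
wts = [2.0, 0.5, 1.0, 0.5]  # up-weight genes with supporting expression data
flags = weighted_bh(pvals, wts, alpha=0.05)
```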

  17. Impact of prior biologic use on persistence of treatment in patients with psoriatic arthritis enrolled in the US Corrona registry.

    PubMed

    Harrold, Leslie R; Stolshek, Bradley S; Rebello, Sabrina; Collier, David H; Mutebi, Alex; Wade, Sally W; Malley, Wendi; Greenberg, Jeffrey D; Etzel, Carol J

    2017-04-01

    Psoriatic arthritis (PsA) is a chronic condition characterized by a diverse set of symptoms, from swollen joints to nail disease to skin disease. A variety of treatment options are available, including tumor necrosis factor inhibitors (TNFis). Little is known about treatment persistence in patients with PsA who initiate TNFi therapy, with and without prior biologic use. This study assessed persistence in these subgroups of patients with PsA and identified factors associated with persistence. This retrospective study utilized data from the Corrona registry of patients with PsA-with or without prior biologic experience-who initiated TNFi therapy between October 1, 2002, and March 21, 2013. Kaplan-Meier curves estimated median time to nonpersistence (discontinuation or switch to another biologic). Cox proportional hazards models identified factors associated with TNFi nonpersistence. A total of 1241 TNFi initiations were identified: 549 by biologic-naïve and 692 by biologic-experienced patients. Through 4 years of follow-up, more biologic-naïve than biologic-experienced patients remained persistent. Biologic-naïve patients had a greater mean time to nonpersistence compared with biologic-experienced patients: 32 vs 23 months (p = 0.0002). Moderate and high disease activities based on clinical disease activity index and disease duration were associated with persistence in both biologic-naïve and biologic-experienced patients. Additionally, in the biologic-experienced patients, the number of prior medications and skin disease were associated with persistence. The majority of patients with PsA in this study were persistent with their TNFi therapy; biologic-naïve patients had greater persistence compared with biologic-experienced patients. Predictors of persistence differed slightly between biologic-naïve and biologic-experienced patients.

  18. Knowledge-Based Image Analysis.

    DTIC Science & Technology

    1981-04-01

    ETL-0258: Knowledge-Based Image Analysis. George C. Stockman, Barbara A. Lambird, David Lavine, Laveen N. Kanal. Keywords: extraction, verification, region classification, pattern recognition, image analysis.

  19. Knowledge-based nursing diagnosis

    NASA Astrophysics Data System (ADS)

    Roy, Claudette; Hay, D. Robert

    1991-03-01

    Nursing diagnosis is an integral part of the nursing process and determines the interventions leading to outcomes for which the nurse is accountable. Diagnoses under the time constraints of modern nursing can benefit from a computer assist. A knowledge-based engineering approach was developed to address these problems. A number of problems extending beyond the capture of knowledge were addressed during system design to make the system practical. The issues involved in implementing a professional knowledge base in a clinical setting are discussed. System functions, structure, interfaces, the health care environment, and terminology and taxonomy are discussed. An integrated system concept from assessment through intervention and evaluation is outlined.

  20. Population Education: A Knowledge Base.

    ERIC Educational Resources Information Center

    Jacobson, Willard J.

    To aid junior high and high school educators and curriculum planners as they develop population education programs, the book provides an overview of the population education knowledge base. In addition, it suggests learning activities, discussion questions, and background information which can be integrated into courses dealing with population,…

  2. Epistemology of knowledge based simulation

    SciTech Connect

    Reddy, R.

    1987-04-01

    Combining artificial intelligence concepts with traditional simulation methodologies yields a powerful design support tool known as knowledge-based simulation. This approach turns a descriptive simulation tool into a prescriptive one, which recommends specific goals. Much work in the area of general goal processing and explanation of recommendations remains to be done.

  3. Automated knowledge-base refinement

    NASA Technical Reports Server (NTRS)

    Mooney, Raymond J.

    1994-01-01

    Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.

  4. Knowledge-based media adaptation

    NASA Astrophysics Data System (ADS)

    Leopold, Klaus; Jannach, Dietmar; Hellwagner, Hermann

    2004-10-01

    This paper introduces the principal approach and describes the basic architecture and current implementation of the knowledge-based multimedia adaptation framework we are currently developing. The framework can be used in Universal Multimedia Access scenarios, where multimedia content has to be adapted to specific usage environment parameters (network and client device capabilities, user preferences). Using knowledge-based techniques (state-space planning), the framework automatically computes an adaptation plan, i.e., a sequence of media conversion operations, to transform the multimedia resources to meet the client's requirements or constraints. The system takes as input standards-compliant descriptions of the content (using MPEG-7 metadata) and of the target usage environment (using MPEG-21 Digital Item Adaptation metadata) to derive start and goal states for the planning process, respectively. Furthermore, declarative descriptions of the conversion operations (such as available via software library functions) enable existing adaptation algorithms to be invoked without requiring programming effort. A running example in the paper illustrates the descriptors and techniques employed by the knowledge-based media adaptation system.

  5. Does Teaching Experience Matter? Examining Biology Teachers' Prior Knowledge for Teaching in an Alternative Certification Program

    ERIC Educational Resources Information Center

    Friedrichsen, Patricia J.; Abell, Sandra K.; Pareja, Enrique M.; Brown, Patrick L.; Lankford, Deanna M.; Volkmann, Mark J.

    2009-01-01

    Alternative certification programs (ACPs) have been proposed as a viable way to address teacher shortages, yet we know little about how teacher knowledge develops within such programs. The purpose of this study was to investigate prior knowledge for teaching among students entering an ACP, comparing individuals with teaching experience to those…

  7. Knowledge based jet engine diagnostics

    NASA Technical Reports Server (NTRS)

    Jellison, Timothy G.; Dehoff, Ronald L.

    1987-01-01

    A fielded expert system automates equipment fault isolation and recommends corrective maintenance action for Air Force jet engines. The knowledge based diagnostics tool was developed as an expert system interface to the Comprehensive Engine Management System, Increment IV (CEMS IV), the standard Air Force base level maintenance decision support system. XMAM (trademark), the Expert Maintenance Tool, automates procedures for troubleshooting equipment faults, provides a facility for interactive user training, and fits within a diagnostics information feedback loop to improve the troubleshooting and equipment maintenance processes. The application of expert diagnostics to the Air Force A-10A aircraft TF-34 engine equipped with the Turbine Engine Monitoring System (TEMS) is presented.

  8. Genetic characterization for intraspecific hybridization of an exotic parasitoid prior to its introduction for classical biological control

    USDA-ARS?s Scientific Manuscript database

    The successful establishment of an exotic parasitoid in the context of classical biological control of insect pests depends upon its adaptability to the new environment. In theory, intraspecific hybridization may improve the success of the establishment as a result of an increase in the available ge...

  9. Optimizing Methotrexate Treatment in Rheumatoid Arthritis: The Case for Subcutaneous Methotrexate Prior to Biologics.

    PubMed

    Sharma, Poonam; Scott, David G I

    2015-11-01

    Methotrexate is the most common disease-modifying antirheumatic drug (DMARD) used in the treatment of rheumatoid arthritis (RA). Current evidence supports its efficacy in the treatment of RA, resulting in improved short-term disease control and long-term outcomes in terms of radiographic progression. Oral methotrexate has traditionally been used first-line due to various reasons, including ease of administration, low cost and easy availability. A methotrexate dose of >15 mg/week is generally required for disease control but oral methotrexate may be only partially effective or poorly tolerated in some patients. The rationale for using subcutaneous (SC) methotrexate is based on its improved bioavailability at higher doses and better tolerability in some patients who have side effects when receiving oral methotrexate. Current guidance advocates 'treating to target', with the aim of inducing remission in RA patients. In some patients, this can be achieved using methotrexate alone or in combination with other traditional DMARDs. Patients who have not responded to two DMARDs, including methotrexate, are eligible for biological therapy as per current National Institute for Health and Care Excellence (NICE) guidance in the UK. Biological treatments are expensive and using SC methotrexate can improve disease control in RA patients, thus potentially avoiding or delaying the requirement for future biological treatment.

  10. Nonlinear knowledge-based classification.

    PubMed

    Mangasarian, Olvi L; Wild, Edward W

    2008-10-01

    In this brief, prior knowledge over general nonlinear sets is incorporated into nonlinear kernel classification problems as linear constraints in a linear program. These linear constraints are imposed at arbitrary points, not necessarily where the prior knowledge is given. The key tool in this incorporation is a theorem of the alternative for convex functions that converts nonlinear prior knowledge implications into linear inequalities without the need to kernelize these implications. Effectiveness of the proposed formulation is demonstrated on publicly available classification data sets, including a cancer prognosis data set. Nonlinear kernel classifiers for these data sets exhibit marked improvements upon the introduction of nonlinear prior knowledge compared to nonlinear kernel classifiers that do not utilize such knowledge.

  11. Cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward A.; Buchanan, Bruce G.

    1988-01-01

    This final report covers work performed under Contract NCC2-220 between NASA Ames Research Center and the Knowledge Systems Laboratory, Stanford University. The period of research was from March 1, 1987 to February 29, 1988. Topics covered were as follows: (1) concurrent architectures for knowledge-based systems; (2) methods for the solution of geometric constraint satisfaction problems, and (3) reasoning under uncertainty. The research in concurrent architectures was co-funded by DARPA, as part of that agency's Strategic Computing Program. The research has been in progress since 1985, under DARPA and NASA sponsorship. The research in geometric constraint satisfaction has been done in the context of a particular application, that of determining the 3-D structure of complex protein molecules, using the constraints inferred from NMR measurements.

  12. Incorporating prior biological knowledge for network-based differential gene expression analysis using differentially weighted graphical LASSO.

    PubMed

    Zuo, Yiming; Cui, Yi; Yu, Guoqiang; Li, Ruijiang; Ressom, Habtom W

    2017-02-10

    Conventional differential gene expression analysis by methods such as student's t-test, SAM, and Empirical Bayes often searches for statistically significant genes without considering the interactions among them. Network-based approaches provide a natural way to study these interactions and to investigate the rewiring interactions in disease versus control groups. In this paper, we apply weighted graphical LASSO (wgLASSO) algorithm to integrate a data-driven network model with prior biological knowledge (i.e., protein-protein interactions) for biological network inference. We propose a novel differentially weighted graphical LASSO (dwgLASSO) algorithm that builds group-specific networks and perform network-based differential gene expression analysis to select biomarker candidates by considering their topological differences between the groups. Through simulation, we showed that wgLASSO can achieve better performance in building biologically relevant networks than purely data-driven models (e.g., neighbor selection, graphical LASSO), even when only a moderate level of information is available as prior biological knowledge. We evaluated the performance of dwgLASSO for survival time prediction using two microarray breast cancer datasets previously reported by Bild et al. and van de Vijver et al. Compared with the top 10 significant genes selected by conventional differential gene expression analysis method, the top 10 significant genes selected by dwgLASSO in the dataset from Bild et al. led to a significantly improved survival time prediction in the independent dataset from van de Vijver et al. Among the 10 genes selected by dwgLASSO, UBE2S, SALL2, XBP1 and KIAA0922 have been confirmed by literature survey to be highly relevant in breast cancer biomarker discovery study. Additionally, we tested dwgLASSO on TCGA RNA-seq data acquired from patients with hepatocellular carcinoma (HCC) on tumors samples and their corresponding non-tumorous liver tissues. Improved
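
The central idea of wgLASSO, penalizing edges supported by prior protein-protein interaction evidence less heavily than unsupported ones, can be sketched by constructing an element-wise penalty matrix; the gene list, edge list, and weight values below are illustrative, and the authors' exact weighting scheme may differ:

```python
import numpy as np

def penalty_matrix(genes, ppi_edges, base_alpha=0.5, prior_factor=0.1):
    """Element-wise L1 penalty for a weighted graphical lasso: gene pairs
    with prior protein-protein interaction evidence receive a reduced
    penalty, so prior-supported edges are easier to retain in the
    inferred network."""
    n = len(genes)
    idx = {g: i for i, g in enumerate(genes)}
    P = np.full((n, n), base_alpha)
    np.fill_diagonal(P, 0.0)  # never penalize the diagonal
    for u, v in ppi_edges:
        P[idx[u], idx[v]] = P[idx[v], idx[u]] = base_alpha * prior_factor
    return P

genes = ["XBP1", "UBE2S", "SALL2"]
P = penalty_matrix(genes, ppi_edges=[("XBP1", "UBE2S")])
```

Solvers that accept a matrix-valued L1 penalty (e.g., the R glasso package) can then use such a matrix directly in place of a scalar regularization parameter.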

  13. Knowledge Base Editor (SharpKBE)

    NASA Technical Reports Server (NTRS)

    Tikidjian, Raffi; James, Mark; Mackey, Ryan

    2007-01-01

    The SharpKBE software provides a graphical user interface environment for domain experts to build and manage knowledge base systems. Knowledge bases can be exported/translated to various target languages automatically, including customizable target languages.

  14. Foundation: Transforming data bases into knowledge bases

    NASA Technical Reports Server (NTRS)

    Purves, R. B.; Carnes, James R.; Cutts, Dannie E.

    1987-01-01

    One approach to transforming information stored in relational data bases into knowledge based representations and back again is described. This system, called Foundation, allows knowledge bases to take advantage of vast amounts of pre-existing data. A benefit of this approach is inspection, and even population, of data bases through an intelligent knowledge-based front-end.

  15. Interlaboratory comparison of size and surface charge measurements on nanoparticles prior to biological impact assessment

    NASA Astrophysics Data System (ADS)

    Roebben, G.; Ramirez-Garcia, S.; Hackley, V. A.; Roesslein, M.; Klaessig, F.; Kestens, V.; Lynch, I.; Garner, C. M.; Rawle, A.; Elder, A.; Colvin, V. L.; Kreyling, W.; Krug, H. F.; Lewicka, Z. A.; McNeil, S.; Nel, A.; Patri, A.; Wick, P.; Wiesner, M.; Xia, T.; Oberdörster, G.; Dawson, K. A.

    2011-07-01

    The International Alliance for NanoEHS Harmonization (IANH) organises interlaboratory comparisons of methods used to study the potential biological impacts of nanomaterials. The aim of IANH is to identify and reduce or remove sources of variability and irreproducibility in existing protocols. Here, we present results of the first IANH round robin studies into methods to assess the size and surface charge of suspended nanoparticles. The test materials used (suspensions of gold, silica, polystyrene, and ceria nanoparticles, with [primary] particles sizes between 10 nm and 80 nm) were first analysed in repeatability conditions to assess the possible contribution of between-sample heterogeneity to the between-laboratory variability. Reproducibility of the selected methods was investigated in an interlaboratory comparison between ten different laboratories in the USA and Europe. Robust statistical analysis was used to evaluate within- and between-laboratory variability. It is shown that, if detailed shipping, measurement, and reporting protocols are followed, measurement of the hydrodynamic particle diameter of nanoparticles in predispersed monomodal suspensions using the dynamic light scattering method is reproducible. On the other hand, measurements of more polydisperse suspensions of nanoparticle aggregates or agglomerates were not reproducible between laboratories. Ultrasonication, which is commonly used to prepare dispersions before cell exposures, was observed to further increase variability. The variability of the zeta potential values, which were also measured, indicates the need to define better surface charge test protocols and to identify sources of variability.

  16. A Natural Language Interface Concordant with a Knowledge Base.

    PubMed

    Han, Yong-Jin; Park, Seong-Bae; Park, Se-Young

    2016-01-01

    The discordance between expressions interpretable by a natural language interface (NLI) system and those answerable by a knowledge base is a critical problem in the field of NLIs. In order to solve this discordance problem, this paper proposes a method to translate natural language questions into formal queries that can be generated from a graph-based knowledge base. The proposed method considers a subgraph of a knowledge base as a formal query. Thus, all formal queries corresponding to a concept or a predicate in the knowledge base can be generated prior to query time and all possible natural language expressions corresponding to each formal query can also be collected in advance. A natural language expression has a one-to-one mapping with a formal query. Hence, a natural language question is translated into a formal query by matching the question with the most appropriate natural language expression. If the confidence of this matching is not sufficiently high the proposed method rejects the question and does not answer it. Multipredicate queries are processed by regarding them as a set of collected expressions. The experimental results show that the proposed method thoroughly handles answerable questions from the knowledge base and rejects unanswerable ones effectively.
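
The matching-with-rejection step described above can be sketched with a simple string-similarity matcher; the catalog of expressions and queries and the confidence threshold below are hypothetical, and the paper's actual matching model is more sophisticated:

```python
import difflib

def translate(question, expression_to_query, min_confidence=0.75):
    """Match a natural-language question against pre-collected expressions,
    each mapped one-to-one to a formal (subgraph) query; reject the
    question when the best match is not confident enough."""
    best_query, best_score = None, 0.0
    for expression, query in expression_to_query.items():
        score = difflib.SequenceMatcher(None, question.lower(),
                                        expression.lower()).ratio()
        if score > best_score:
            best_query, best_score = query, score
    return best_query if best_score >= min_confidence else None

catalog = {  # hypothetical expressions paired with formal queries
    "who directed film X": "SELECT ?p WHERE { X :director ?p }",
    "when was film X released": "SELECT ?d WHERE { X :releaseDate ?d }",
}
q = translate("Who directed film X?", catalog)           # matches well
r = translate("What is the meaning of life?", catalog)   # rejected
```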

  18. Effects of Teacher Use of Analogies on Achievement of High School Biology Students with Varying Levels of Cognitive Ability and Prior Knowledge.

    ERIC Educational Resources Information Center

    Burns, Joseph C.; Okey, James R.

    This study investigated the effects of analogy-based and conventional lecture-based instructional strategies on the achievement of four classes of high school biology students (N=123). Prior to treatment, students were assessed for cognitive ability and prior knowledge of the analogy vehicle. The analogy-based treatment consisted of teacher…

  19. A Discussion of Knowledge Based Design

    NASA Technical Reports Server (NTRS)

    Wood, Richard M.; Bauer, Steven X. S.

    1999-01-01

    A discussion of knowledge and Knowledge-Based design as related to the design of aircraft is presented. The paper discusses the perceived problem with existing design studies and introduces the concepts of design and knowledge for a Knowledge-Based design system. A review of several Knowledge-Based design activities is provided. A Virtual Reality, Knowledge-Based system is proposed and reviewed. The feasibility of Virtual Reality to improve the efficiency and effectiveness of aerodynamic and multidisciplinary design, evaluation, and analysis of aircraft through the coupling of virtual reality technology and a Knowledge-Based design system is also reviewed. The final section of the paper discusses future directions for design and the role of Knowledge-Based design.

  20. Novel joint TOA/RSSI-based WCE location tracking method without prior knowledge of biological human body tissues.

    PubMed

    Ito, Takahiro; Anzai, Daisuke; Jianqing Wang

    2014-01-01

    This paper proposes a novel joint time of arrival (TOA)/received signal strength indicator (RSSI)-based wireless capsule endoscope (WCE) location tracking method without prior knowledge of biological human tissues. Generally, TOA-based localization can achieve much higher accuracy than other radio-frequency-based localization techniques; however, wireless signals transmitted from a WCE pass through various kinds of human body tissues, so the propagation velocity inside a human body differs from that in free space. Because the variation in propagation velocity is mainly determined by the relative permittivity of human body tissues, instead of measuring the relative permittivity in advance we simultaneously estimate both the WCE location and the relative permittivity. For this purpose, this paper first derives a relative permittivity estimation model based on measured RSSI information, and then applies a particle filter algorithm that combines TOA-based localization with RSSI-based relative permittivity estimation. Our computer simulation results demonstrate that the proposed tracking method with the particle filter can accomplish an excellent localization accuracy of around 2 mm without prior information on the relative permittivity of the human body tissues.
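    The joint estimation idea can be caricatured with a 1-D particle filter in which each particle carries both a candidate capsule depth and a candidate relative permittivity, so the assumed in-body propagation velocity c/sqrt(eps_r) is estimated alongside position. Everything here (geometry, noise scales, and the omission of the paper's RSSI-based permittivity model) is an illustrative assumption, not the authors' algorithm.

```python
import math
import random

random.seed(0)   # reproducible toy run

C = 3e8                           # free-space speed of light, m/s
TRUE_X, TRUE_EPS = 0.12, 50.0     # hypothetical capsule depth (m) and tissue eps_r
SENSOR_X = 0.0                    # body-surface sensor position

def toa(x, eps):
    """Time of arrival from a capsule at depth x, slowed by sqrt(eps_r)."""
    return abs(x - SENSOR_X) * math.sqrt(eps) / C

def particle_filter(measured_toa, n=2000, steps=30):
    # Each particle is a (position, relative permittivity) pair.
    parts = [(random.uniform(0.05, 0.25), random.uniform(20, 80))
             for _ in range(n)]
    for _ in range(steps):
        # Weight particles by how well they explain the measured TOA.
        w = [math.exp(-((toa(x, e) - measured_toa) * 1e10) ** 2)
             for x, e in parts]
        total = sum(w)
        w = [wi / total for wi in w]
        # Resample, then jitter (process noise) to keep diversity.
        parts = random.choices(parts, weights=w, k=n)
        parts = [(x + random.gauss(0, 1e-3), e + random.gauss(0, 0.5))
                 for x, e in parts]
    return parts

parts = particle_filter(toa(TRUE_X, TRUE_EPS))
est_x = sum(x for x, _ in parts) / len(parts)
print(f"estimated depth: {est_x * 1000:.1f} mm")
```

    With a single TOA measurement, depth and permittivity are only jointly constrained (their product fixes the observable); the paper's RSSI-based permittivity model, omitted here, is what makes the two separately identifiable.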

  1. IGENPRO knowledge-based operator support system.

    SciTech Connect

    Morman, J. A.

    1998-07-01

    Research and development is being performed on the knowledge-based IGENPRO operator support package for plant transient diagnostics and management to provide operator assistance during off-normal plant transient conditions. A generic thermal-hydraulic (T-H) first-principles approach is being implemented using automated reasoning, artificial neural networks and fuzzy logic to produce a generic T-H system-independent/plant-independent package. The IGENPRO package has a modular structure composed of three modules: the transient trend analysis module PROTREN, the process diagnostics module PRODIAG and the process management module PROMANA. Cooperative research and development work has focused on the PRODIAG diagnostic module of the IGENPRO package and the operator training matrix of transients used at the Braidwood Pressurized Water Reactor station. Promising simulator testing results with PRODIAG have been obtained for the Braidwood Chemical and Volume Control System (CVCS), and the Component Cooling Water System. Initial CVCS test results have also been obtained for the PROTREN module. The PROMANA effort also involves the CVCS. Future work will be focused on the long-term, slow and mild degradation transients where diagnosis of incipient T-H component failure prior to forced outage events is required. This will enhance the capability of the IGENPRO system as a predictive maintenance tool for plant staff and operator support.

  2. Integration of prior biological knowledge and epigenetic information enhances the prediction accuracy of the Bayesian Wnt pathway.

    PubMed

    Sinha, Shriprakash

    2014-11-01

    Computational modeling of the Wnt signaling pathway has gained prominence for its use as a diagnostic tool to develop therapeutic cancer target drugs and predict test samples as tumorous/normal. Diagnostic tools entail modeling of the biological phenomena behind the pathway while prediction requires inclusion of factors for discriminative classification. This manuscript develops simple static Bayesian network predictive models of varying complexity by encompassing prior partially available biological knowledge about intra/extracellular factors and incorporating information regarding epigenetic modification of a few genes that are known to have an inhibitory effect on the pathway. Incorporation of epigenetic information enhances the prediction accuracy of test samples in human colorectal cancer. In comparison to the Naive Bayes model where β-catenin transcription complex activation predictions are assumed to correspond to sample predictions, the new biologically inspired models shed light on differences in behavior of the transcription complex and the state of samples. Receiver operator curves and their respective area under the curve measurements obtained from predictions of the state of the test sample and the corresponding predictions of the state of activation of the β-catenin transcription complex of the pathway for the test sample indicate a significant difference between the transcription complex being on (off) and its association with the sample being tumorous (normal). The two-sample Kolmogorov-Smirnov test confirms the statistical deviation between the distributions of these predictions. A hitherto unknown relationship between factors like DKK2, DKK3-1 and SFRP-2/3/5 and the β-catenin transcription complex has been inferred using these causal models.

  3. Delving into cornerstones of hypersensitivity to antineoplastic and biological agents: value of diagnostic tools prior to desensitization.

    PubMed

    Alvarez-Cuesta, E; Madrigal-Burgaleta, R; Angel-Pereira, D; Ureña-Tavera, A; Zamora-Verduga, M; Lopez-Gonzalez, P; Berges-Gimeno, M P

    2015-07-01

    Evidence regarding drug provocation test (DPT) with antineoplastic and biological agents is scarce. Our aim was to assess the usefulness of including DPT as a paramount gold standard diagnostic tool (prior to desensitization). Prospective, observational, longitudinal study with patients who, during a 3-year period, were referred to the Desensitization Program at Ramon y Cajal University Hospital. Patients underwent a structured diagnostic protocol by means of anamnesis, skin tests (ST), risk assessment, and DPT. Oxaliplatin-specific IgE was determined in oxaliplatin-reactive patients (who underwent DPT regardless of oxaliplatin-specific IgE results). Univariate analysis and multivariate analysis were used to identify predictors of the final diagnosis among several variables. A total of 186 patients were assessed. A total of 104 (56%) patients underwent DPT. Sixty-four percent of all DPTs were negative (i.e., hypersensitivity was excluded). Sensitivity for oxaliplatin-specific IgE (0.35 UI/l cutoff point) was 34%, specificity 90.3%, negative predictive value 45.9%, positive predictive value 85%, negative likelihood ratio 0.7, and positive likelihood ratio 3.5. These are the first reported data based on more than 100 DPTs with antineoplastic and biological agents (paclitaxel, oxaliplatin, rituximab, infliximab, irinotecan, and other drugs). Implementation of DPT in diagnostic protocols helps exclude hypersensitivity (in 36% of all referred patients), and avoids unnecessary desensitizations in nonhypersensitive patients (30-56% of patients, depending on culprit-drug). Drug provocation test is vital to validate diagnostic tools; consequently, quality data are shown on oxaliplatin-specific IgE and oxaliplatin-ST in the largest series of oxaliplatin-reactive patients reported to date (74 oxaliplatin-reactive patients). Identifying phenotypes and predictors of a diagnosis of hypersensitivity may be helpful for tailored plans. © 2015 John Wiley & Sons A/S. Published by

  4. Screening for latent TB in patients with rheumatic disorders prior to biologic agents in a 'high-risk' TB population: comparison of two interferon gamma release assays.

    PubMed

    Melath, Sunil; Ismajli, Mediola; Smith, Robin; Patel, Ishita; Steuer, Alan

    2014-01-01

    Patients with rheumatic disorders treated with TNF inhibitors are at increased risk of developing TB. There is no 'gold-standard' for the diagnosis of latent TB prior to initiation of biologic agents. We report our own experience of comparing two interferon gamma release assays (IGRAs) in screening for latent TB in a 'high-risk' TB area in patients with rheumatic disorders. The study demonstrated good concordance between the two tests. We believe the additional cost of these assays is justified in high-risk populations prior to biologic agents, with 16% of the current study population with at least one positive IGRA assay.

  5. The Coming of Knowledge-Based Business.

    ERIC Educational Resources Information Center

    Davis, Stan; Botkin, Jim

    1994-01-01

    Economic growth will come from knowledge-based businesses whose "smart" products filter and interpret information. Businesses will come to think of themselves as educators and their customers as learners. (SK)

  7. Updating knowledge bases with disjunctive information

    SciTech Connect

    Zhang, Yan; Foo, Norman Y.

    1996-12-31

    It is well known that the minimal change principle has been widely used in knowledge base updates. However, recent research has shown that conventional minimal change methods, e.g., the PMA, are generally problematic for updating knowledge bases with disjunctive information. In this paper, we propose two different approaches to deal with this problem - one is called minimal change with exceptions (MCE), the other minimal change with maximal disjunctive inclusions (MCD). The first method is syntax-based, while the second is model-theoretic. We show that these two approaches are equivalent for propositional knowledge base updates, and that the second method is also appropriate for first-order knowledge base updates. We then prove that our new update approaches still satisfy the standard Katsuno and Mendelzon update postulates.
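    The disjunction problem that motivates the paper can be reproduced with a two-atom toy: updating the knowledge base {¬b, ¬c} with (b ∨ c) under PMA-style minimal change keeps only the models that flip a single atom, so the model in which both b and c hold is lost. The code below sketches that textbook behaviour, not the MCE/MCD operators themselves.

```python
from itertools import product

ATOMS = ("b", "c")

def models(formula):
    """All truth assignments over ATOMS satisfying the formula."""
    return [m for m in product([False, True], repeat=len(ATOMS))
            if formula(dict(zip(ATOMS, m)))]

def pma_update(kb_models, new_formula):
    """PMA: for each KB model, keep only the new-formula models whose set
    of changed atoms is minimal with respect to set inclusion."""
    result = set()
    for old in kb_models:
        diffs = {new: {a for a, o, n in zip(ATOMS, old, new) if o != n}
                 for new in models(new_formula)}
        minimal = [m for m, d in diffs.items()
                   if not any(d2 < d for d2 in diffs.values())]
        result.update(minimal)
    return result

kb = models(lambda v: not v["b"] and not v["c"])       # single model (F, F)
updated = pma_update(kb, lambda v: v["b"] or v["c"])   # update with b or c
print(sorted(updated))           # the one-atom flips survive...
print((True, True) in updated)   # ...but the both-true model is lost: False
```

    Losing the (b ∧ c) model is exactly the kind of unintended strengthening of disjunctive input that the MCE and MCD operators are designed to avoid.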

  8. Methodology for testing and validating knowledge bases

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, C.; Padalkar, S.; Sztipanovits, J.; Purves, B. R.

    1987-01-01

    A test and validation toolset developed for artificial intelligence programs is described. The basic premises of this method are: (1) knowledge bases have a strongly declarative character and represent mostly structural information about different domains, (2) the conditions for integrity, consistency, and correctness can be transformed into structural properties of knowledge bases, and (3) structural information and structural properties can be uniformly represented by graphs and checked by graph algorithms. The interactive test and validation environment has been implemented on a SUN workstation.

  9. Knowledge Based Systems and Metacognition in Radar

    NASA Astrophysics Data System (ADS)

    Capraro, Gerard T.; Wicks, Michael C.

    An airborne ground looking radar sensor's performance may be enhanced by selecting algorithms adaptively as the environment changes. A short description of an airborne intelligent radar system (AIRS) is presented with a description of the knowledge based filter and detection portions. A second level of artificial intelligence (AI) processing is presented that monitors, tests, and learns how to improve and control the first level. This approach is based upon metacognition, a way forward for developing knowledge based systems.

  10. Knowledge based programming environments: A perspective

    NASA Technical Reports Server (NTRS)

    Amin, Ashok T.

    1988-01-01

    Programming environments are an area of recent origin, referring to an integrated set of tools, such as a program library, text editor, compiler, and debugger, in support of program development. Understanding of programs and programming has led to automated techniques for program development. Knowledge-based programming systems using program transformations offer significant impact on future program development methodologies. A review of recent developments in the area of knowledge-based programming environments, from the perspective of software engineering, is presented.

  11. Logic Programming and Knowledge Base Maintenance.

    DTIC Science & Technology

    1983-11-01

    fallen into three classes: 1) expression of various quasi-intelligent expert systems tasks; 2) development of basic knowledge base systems; and 3) exploration of reasoning systems for maintenance of knowledge bases. We discuss each of these below. 1) Expression of expert systems tasks. We have coded...and run in metaProlog a diagnostic assistant based on the Oak Ridge spills expert of [Rosie]. This experiment demonstrated the usefulness and

  12. Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

    PubMed Central

    Xian, Xuefeng; Cui, Zhiming

    2017-01-01

    Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost. PMID:28588611
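    The rank-based selection idea can be sketched as follows; the functional-property constraint ("a person has one birthplace"), the candidate facts, and the benefit score are all invented for illustration and are not taken from the paper.

```python
from collections import defaultdict

# Candidate facts extracted by machine, each with an extraction confidence.
candidates = [
    ("Einstein", "bornIn", "Ulm", 0.9),
    ("Einstein", "bornIn", "Berlin", 0.6),
    ("Einstein", "bornIn", "Munich", 0.4),
    ("Turing", "bornIn", "London", 0.8),
]

def rank_for_crowdsourcing(facts):
    """Rank conflicting candidates so the crowd is asked first about the
    fact whose confirmation would prune the most alternatives."""
    groups = defaultdict(list)
    for s, p, o, conf in facts:
        groups[(s, p)].append((o, conf))
    scored = []
    for (s, p), alts in groups.items():
        if len(alts) < 2:
            continue  # no constraint violation: skip, save crowd budget
        for o, conf in alts:
            # Benefit: confirming this object rules out every alternative,
            # weighted by how likely the confirmation is to succeed.
            benefit = conf * (len(alts) - 1)
            scored.append((benefit, (s, p, o)))
    return [fact for _, fact in sorted(scored, reverse=True)]

queue = rank_for_crowdsourcing(candidates)
print(queue[0])   # highest-benefit conflicting fact is asked first
```

    Note how the unconflicted Turing fact never reaches the crowd at all: pruning unnecessary questions is where the cost savings come from.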

  13. Distributed, cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    Some current research in the development and application of distributed, cooperating knowledge-based systems technology is addressed. The focus of the current research is the spacecraft ground operations environment. The underlying hypothesis is that, because of the increasing size, complexity, and cost of planned systems, conventional procedural approaches to the architecture of automated systems will give way to a more comprehensive knowledge-based approach. A hallmark of these future systems will be the integration of multiple knowledge-based agents which understand the operational goals of the system and cooperate with each other and the humans in the loop to attain the goals. The current work includes the development of a reference model for knowledge-base management, the development of a formal model of cooperating knowledge-based agents, the use of a testbed for prototyping and evaluating various knowledge-based concepts, and beginning work on the establishment of an object-oriented model of an intelligent end-to-end (spacecraft to user) system. An introductory discussion of these activities is presented, the major concepts and principles being investigated are highlighted, and their potential use in other application domains is indicated.

  14. Survey of rheumatologists on the use of the Philippine Guidelines on the Screening for Tuberculosis prior to use of Biologic Agents.

    PubMed

    Aquino-Villamin, Melissa; Tankeh-Torres, Sandra; Lichauco, Juan Javier

    2016-11-01

    The use of biologic agents has become an important option in treating patients with rheumatoid arthritis. However, these drugs have been associated with an increased risk of tuberculosis (TB) reactivation. Local guidelines for TB screening prior to the use of biologic agents were developed to address this issue. This study is a survey describing the compliance of Filipino rheumatologists with these guidelines. Eighty-seven rheumatologists in the Philippines were given the questionnaire, and responses from 61 rheumatologists were included in the analysis. All respondents agreed that patients should be screened prior to giving the biologic agents. Local guidelines recommend screening with tuberculin skin test (TST) and chest radiograph. However, cut-off values considered for a positive TST and the timing of initiation of biologic agents after starting TB prophylaxis and treatment varied among respondents. In addition, screening of close household contacts was performed by only 41 (69.5%) respondents. There were 11 respondents who reported 16 patients developing TB during or after receiving biologic agents, despite adherence to the guidelines. This survey describes the compliance rate of Filipino rheumatologists in applying current local recommendations for TB screening prior to initiating biologic agents. The incidence of new TB cases despite the current guidelines emphasizes the importance of compliance and the need to revise the guidelines based on updated existing literature. © 2015 Asia Pacific League of Associations for Rheumatology and Wiley Publishing Asia Pty Ltd.

  15. A knowledge base of vasopressin actions in the kidney

    PubMed Central

    Sanghi, Akshay; Zaringhalam, Matthew; Corcoran, Callan C.; Saeed, Fahad; Hoffert, Jason D.; Sandoval, Pablo; Pisitkun, Trairak

    2014-01-01

    Biological information is growing at a rapid pace, making it difficult for individual investigators to be familiar with all information that is relevant to their own research. Computers are beginning to be used to extract and curate biological information; however, the complexity of human language used in research papers continues to be a critical barrier to full automation of knowledge extraction. Here, we report a manually curated knowledge base of vasopressin actions in renal epithelial cells that is designed to be readable either by humans or by computer programs using natural language processing algorithms. The knowledge base consists of three related databases accessible at https://helixweb.nih.gov/ESBL/TinyUrls/Vaso_portal.html. One of the component databases reports vasopressin actions on individual proteins expressed in renal epithelia, including effects on phosphorylation, protein abundances, protein translocation from one subcellular compartment to another, protein-protein binding interactions, etc. The second database reports vasopressin actions on physiological measures in renal epithelia, and the third reports specific mRNA species whose abundances change in response to vasopressin. We illustrate the application of the knowledge base by using it to generate a protein kinase network that connects vasopressin binding in collecting duct cells to physiological effects to regulate the water channel protein aquaporin-2. PMID:25056354

  16. Patient Dependency Knowledge-Based Systems.

    PubMed

    Soliman, F

    1998-10-01

    The ability of Patient Dependency Systems to provide information for staffing decisions and budgetary development has been demonstrated. In addition, they have become powerful tools in modern hospital management. This growing interest in Patient Dependency Systems has renewed calls for their automation. As advances in Information Technology and in particular Knowledge-Based Engineering reach new heights, hospitals can no longer afford to ignore the potential benefits obtainable from developing and implementing Patient Dependency Knowledge-Based Systems. Experience has shown that the vast majority of decisions and rules used in the Patient Dependency method are too complex to capture in the form of a traditional programming language. Furthermore, the conventional Patient Dependency Information System automates the simple and rigid bookkeeping functions. On the other hand Knowledge-Based Systems automate complex decision making and judgmental processes and therefore are the appropriate technology for automating the Patient Dependency method. In this paper a new technique to automate Patient Dependency Systems using knowledge processing is presented. In this approach all Patient Dependency factors have been translated into a set of Decision Rules suitable for use in a Knowledge-Based System. The system is capable of providing the decision-maker with a number of scenarios and their possible outcomes. This paper also presents the development of Patient Dependency Knowledge-Based Systems, which can be used in allocating and evaluating resources and nursing staff in hospitals on the basis of patients' needs.

  17. Knowledge-based scheduling of arrival aircraft

    NASA Technical Reports Server (NTRS)

    Krzeczowski, K.; Davis, T.; Erzberger, H.; Lev-Ram, I.; Bergh, C.

    1995-01-01

    A knowledge-based method for scheduling arrival aircraft in the terminal area has been implemented and tested in real-time simulation. The scheduling system automatically sequences, assigns landing times, and assigns runways to arrival aircraft by utilizing continuous updates of aircraft radar data and controller inputs. The scheduling algorithm is driven by a knowledge base which was obtained in over two thousand hours of controller-in-the-loop real-time simulation. The knowledge base contains a series of hierarchical 'rules' and decision logic that examines both performance criteria, such as delay reduction, and workload-reduction criteria, such as conflict avoidance. The objective of the algorithms is to devise an efficient plan to land the aircraft in a manner acceptable to the air traffic controllers. This paper describes the scheduling algorithms, gives examples of their use, and presents data regarding their potential benefits to the air traffic system.
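    The hierarchical-rule flavour described above can be sketched with two toy rules: first-come-first-served sequencing, then an earliest-legal-slot runway choice under a separation minimum. Aircraft data, runway names, and the separation value are invented for illustration and bear no relation to the paper's knowledge base.

```python
SEPARATION = 90  # assumed seconds between landings on one runway

def schedule(arrivals, runways=("27L", "27R")):
    """arrivals: list of (flight, estimated_time_of_arrival_seconds).
    Returns (flight, runway, landing_time) triples."""
    plan = []
    last_landing = {r: float("-inf") for r in runways}
    for flight, eta in sorted(arrivals, key=lambda a: a[1]):  # rule 1: FCFS
        # rule 2: pick the runway offering the earliest legal landing slot
        best = min(runways,
                   key=lambda r: max(eta, last_landing[r] + SEPARATION))
        slot = max(eta, last_landing[best] + SEPARATION)
        last_landing[best] = slot
        plan.append((flight, best, slot))
    return plan

plan = schedule([("AA1", 0), ("UA2", 10), ("DL3", 20)])
for flight, runway, t in plan:
    print(flight, runway, t)
```

    A real scheduler layers many more rules (delay trades, conflict avoidance, controller acceptability) on top of this skeleton; the point is only that each decision is a discrete rule applied in priority order.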

  18. Knowledge-based diagnosis for aerospace systems

    NASA Technical Reports Server (NTRS)

    Atkinson, David J.

    1988-01-01

    The need for automated diagnosis in aerospace systems and the approach of using knowledge-based systems are examined. Research issues in knowledge-based diagnosis which are important for aerospace applications are treated along with a review of recent relevant research developments in Artificial Intelligence. The design and operation of some existing knowledge-based diagnosis systems are described. The systems described and compared include the LES expert system for liquid oxygen loading at NASA Kennedy Space Center, the FAITH diagnosis system developed at the Jet Propulsion Laboratory, the PES procedural expert system developed at SRI International, the CSRL approach developed at Ohio State University, the StarPlan system developed by Ford Aerospace, the IDM integrated diagnostic model, and the DRAPhys diagnostic system developed at NASA Langley Research Center.

  19. Knowledge-based flow field zoning

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automating flow field zoning in two dimensions is an important step towards easing the three-dimensional grid generation bottleneck in computational fluid dynamics. A knowledge-based approach works well, but certain aspects of flow field zoning make the use of such an approach challenging. A knowledge-based flow field zoner, called EZGrid, was implemented and tested on representative two-dimensional aerodynamic configurations. Results are shown which illustrate the way in which EZGrid incorporates the effects of physics, shape description, position, and user bias in flow field zoning.

  20. A knowledge-based approach to design

    NASA Astrophysics Data System (ADS)

    Mitchell, T. M.; Steinberg, L. I.; Shulman, J. S.

    1985-09-01

    The potential advantages of knowledge-based methods for computer-aided design are examined, and the organization of VEXED, a knowledge-based system for VLSI design, is described in detail. In particular, attention is given to the principles underlying the design of VEXED and several issues that have arisen from implementing and experimenting with the prototype system. The issues discussed include questions regarding the grainsize of rules, the possibility of learning new rules automatically, and issues related to constraint propagation and management.

  1. Knowledge-based commodity distribution planning

    NASA Technical Reports Server (NTRS)

    Saks, Victor; Johnson, Ivan

    1994-01-01

    This paper presents an overview of a Decision Support System (DSS) that incorporates Knowledge-Based (KB) and commercial off the shelf (COTS) technology components. The Knowledge-Based Logistics Planning Shell (KBLPS) is a state-of-the-art DSS with an interactive map-oriented graphics user interface and powerful underlying planning algorithms. KBLPS was designed and implemented to support skilled Army logisticians to prepare and evaluate logistics plans rapidly, in order to support corps-level battle scenarios. KBLPS represents a substantial advance in graphical interactive planning tools, with the inclusion of intelligent planning algorithms that provide a powerful adjunct to the planning skills of commodity distribution planners.

  2. Knowledge-based Autonomous Test Engineer (KATE)

    NASA Technical Reports Server (NTRS)

    Parrish, Carrie L.; Brown, Barbara L.

    1991-01-01

    Mathematical models of system components have long been used to allow simulators to predict system behavior to various stimuli. Recent efforts to monitor, diagnose, and control real-time systems using component models have experienced similar success. NASA Kennedy is continuing the development of a tool for implementing real-time knowledge-based diagnostic and control systems called KATE (Knowledge based Autonomous Test Engineer). KATE is a model-based reasoning shell designed to provide autonomous control, monitoring, fault detection, and diagnostics for complex engineering systems by applying its reasoning techniques to an exchangeable quantitative model describing the structure and function of the various system components and their systemic behavior.

  3. Knowledge-Based Learning: Integration of Deductive and Inductive Learning for Knowledge Base Completion.

    ERIC Educational Resources Information Center

    Whitehall, Bradley Lane

    In constructing a knowledge-based system, the knowledge engineer must convert rules of thumb provided by the domain expert and previously solved examples into a working system. Research in machine learning has produced algorithms that create rules for knowledge-based systems, but these algorithms require either many examples or a complete domain…

  5. Viewing Knowledge Bases as Qualitative Models.

    ERIC Educational Resources Information Center

    Clancey, William J.

    The concept of a qualitative model provides a unifying perspective for understanding how expert systems differ from conventional programs. Knowledge bases contain qualitative models of systems in the world, that is, primarily non-numeric descriptions that provide a basis for explaining and predicting behavior and formulating action plans. The…

  6. Improving the Knowledge Base in Teacher Education.

    ERIC Educational Resources Information Center

    Rockler, Michael J.

    Education in the United States for most of the last 50 years has built its knowledge base on a single dominating foundation--behavioral psychology. This paper analyzes the history of behaviorism. Syntheses are presented of the theories of Ivan P. Pavlov, J. B. Watson, and B. F. Skinner, all of whom contributed to the body of works on behaviorism.…

  7. The knowledge-based software assistant

    NASA Technical Reports Server (NTRS)

    Benner, Kevin M.; White, Douglas A.

    1987-01-01

    Where the Knowledge Based Software Assistant (KBSA) is now, four years after the initial report, is discussed. Also described is what the Rome Air Development Center expects at the end of the first contract iteration. What the second and third contract iterations will look like are characterized.

  9. The adverse outcome pathway knowledge base

    EPA Science Inventory

    The rapid advancement of the Adverse Outcome Pathway (AOP) framework has been paralleled by the development of tools to store, analyse, and explore AOPs. The AOP Knowledge Base (AOP-KB) project has brought three independently developed platforms (Effectopedia, AOP-Wiki, and AOP-X...

  10. A Knowledge Base for FIA Data Uses

    Treesearch

    Victor A. Rudis

    2005-01-01

    Knowledge management provides a way to capture the collective wisdom of an organization, facilitate organizational learning, and foster opportunities for improvement. This paper describes a knowledge base compiled from uses of field observations made by the U.S. Department of Agriculture Forest Service, Forest Inventory and Analysis program and a citation database of...

  11. Knowledge-Based Instructional Gaming: GEO.

    ERIC Educational Resources Information Center

    Duchastel, Philip

    1989-01-01

    Describes the design and development of an instructional game, GEO, in which the user learns elements of Canadian geography. The use of knowledge-based artificial intelligence techniques is discussed, the use of HyperCard in the design of GEO is explained, and future directions are suggested. (15 references) (Author/LRW)

  12. Constructing Knowledge Bases: A Promising Instructional Tool.

    ERIC Educational Resources Information Center

    Trollip, Stanley R.; Lippert, Renate C.

    1987-01-01

    Argues that construction of knowledge bases is an instructional tool that encourages students' critical thinking in problem solving situations through metacognitive experiences. A study is described in which college students created expert systems to test the effectiveness of this method of instruction, and benefits for students and teachers are…

  15. Knowledge-Based Inferences Are Not General

    ERIC Educational Resources Information Center

    Shears, Connie; Chiarello, Christine

    2004-01-01

    Although knowledge-based inferences (Graesser, Singer, & Trabasso, 1994) depend on general knowledge, there may be differences across knowledge areas in how they support these processes. This study explored processing differences between 2 areas of knowledge (physical cause-effect vs. goals and planning) to establish (a) that each would support…

  16. Knowledge-based machine indexing from natural language text: Knowledge base design, development, and maintenance

    NASA Technical Reports Server (NTRS)

    Genuardi, Michael T.

    1993-01-01

    One strategy for machine-aided indexing (MAI) is to provide a concept-level analysis of the textual elements of documents or document abstracts. In such systems, natural-language phrases are analyzed in order to identify and classify concepts related to a particular subject domain. The overall performance of these MAI systems is largely dependent on the quality and comprehensiveness of their knowledge bases. These knowledge bases function to (1) define the relations between a controlled indexing vocabulary and natural language expressions; (2) provide a simple mechanism for disambiguation and the determination of relevancy; and (3) allow the extension of concept-hierarchical structure to all elements of the knowledge file. After a brief description of the NASA Machine-Aided Indexing system, concerns related to the development and maintenance of MAI knowledge bases are discussed. Particular emphasis is given to statistically-based text analysis tools designed to aid the knowledge base developer. One such tool, the Knowledge Base Building (KBB) program, presents the domain expert with a well-filtered list of synonyms and conceptually-related phrases for each thesaurus concept. Another tool, the Knowledge Base Maintenance (KBM) program, functions to identify areas of the knowledge base affected by changes in the conceptual domain (for example, the addition of a new thesaurus term). An alternate use of the KBM as an aid in thesaurus construction is also discussed.
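
    The statistically based analysis aids described above can be illustrated with a minimal co-occurrence ranking. This is a hedged sketch only: the function name, scoring, and document format are assumptions for illustration, not the NASA KBB program's actual method, whose filtering is far more sophisticated.

```python
from collections import Counter

def suggest_related_terms(documents, concept, top_n=5):
    """Rank candidate terms by how often they co-occur with a thesaurus
    concept across document abstracts (toy sketch of a statistically
    based knowledge-base-building aid)."""
    cooccur = Counter()
    for doc in documents:
        terms = set(doc.lower().split())
        if concept in terms:
            # Count every other term appearing alongside the concept.
            cooccur.update(terms - {concept})
    return [term for term, _ in cooccur.most_common(top_n)]
```

    A knowledge base developer would then review the ranked list and promote genuine synonyms or related phrases into the thesaurus, discarding coincidental co-occurrences.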

  17. Formative Assessment Pre-Test to Identify College Students' Prior Knowledge, Misconceptions and Learning Difficulties in Biology

    ERIC Educational Resources Information Center

    Lazarowitz, Reuven; Lieb, Carl

    2006-01-01

    A formative assessment pretest was administered to undergraduate students at the beginning of a science course in order to find out their prior knowledge, misconceptions and learning difficulties on the topic of the human respiratory system and energy issues. Those findings could provide their instructors with the valuable information required in…

  19. Bridging the gap: simulations meet knowledge bases

    NASA Astrophysics Data System (ADS)

    King, Gary W.; Morrison, Clayton T.; Westbrook, David L.; Cohen, Paul R.

    2003-09-01

    Tapir and Krill are declarative languages for specifying actions and agents, respectively, that can be executed in simulation. As such, they bridge the gap between strictly declarative knowledge bases and strictly executable code. Tapir and Krill components can be combined to produce models of activity which can answer questions about mechanisms and processes using conventional inference methods and simulation. Tapir was used in DARPA's Rapid Knowledge Formation (RKF) project to construct models of military tactics from the Army Field Manual FM3-90. These were then used to build Courses of Action (COAs) which could be critiqued by declarative reasoning or via Monte Carlo simulation. Tapir and Krill can be read and written by non-knowledge engineers, making them an excellent vehicle for Subject Matter Experts to build and critique knowledge bases.

  20. A knowledge based approach to VLSI CAD

    NASA Astrophysics Data System (ADS)

    Steinberg, L. I.; Mitchell, T. M.

    1983-09-01

    Artificial Intelligence (AI) techniques offer one possible avenue toward new CAD tools to handle the complexities of VLSI. This paper summarizes the experience of the Rutgers AI/VLSI group in exploring applications of AI to VLSI design over the past few years. In particular, it summarizes our experience in developing REDESIGN, a knowledge-based system for providing interactive aid in the functional redesign of digital circuits. Given a desired change to the function of a circuit, REDESIGN combines rule-based knowledge of design tactics with its ability to analyze signal propagation through circuits, in order to (1) help the user focus on an appropriate portion of the circuit to redesign, (2) suggest local redesign alternatives, and (3) determine side effects of possible redesigns. We also summarize our more recent research toward constructing a knowledge-based system for VLSI design and a system for chip debugging, both based on extending the techniques used by the REDESIGN system.

  1. The importance of knowledge-based technology.

    PubMed

    Cipriano, Pamela F

    2012-01-01

    Nurse executives are responsible for a workforce that can provide safer and more efficient care in a complex sociotechnical environment. National quality priorities rely on technologies to provide data collection, share information, and leverage analytic capabilities to interpret findings and inform approaches to care that will achieve better outcomes. As a key steward for quality, the nurse executive exercises leadership to provide the infrastructure to build and manage nursing knowledge and instill accountability for following evidence-based practices. These actions contribute to a learning health system where new knowledge is captured as a by-product of care delivery enabled by knowledge-based electronic systems. The learning health system also relies on rigorous scientific evidence embedded into practice at the point of care. The nurse executive optimizes use of knowledge-based technologies, integrated throughout the organization, that have the capacity to help transform health care.

  2. Clips as a knowledge based language

    NASA Technical Reports Server (NTRS)

    Harrington, James B.

    1987-01-01

    CLIPS is a language for writing expert systems applications on a personal or small computer. Here, the CLIPS programming language is described and compared to three other artificial intelligence (AI) languages (LISP, Prolog, and OPS5) with regard to the processing they provide for the implementation of a knowledge based system (KBS). A discussion is given on how CLIPS would be used in a control system.

  3. Satellite Contamination and Materials Outgassing Knowledge base

    NASA Technical Reports Server (NTRS)

    Minor, Jody L.; Kauffman, William J. (Technical Monitor)

    2001-01-01

    Satellite contamination continues to be a design problem that engineers must take into account when developing new satellites. To help with this issue, NASA's Space Environments and Effects (SEE) Program funded the development of the Satellite Contamination and Materials Outgassing Knowledge base. This engineering tool brings together in one location information about the outgassing properties of aerospace materials based upon ground-testing data, the effects of outgassing that have been observed during flight, and measurements of the contamination environment by on-orbit instruments. The knowledge base contains information using the ASTM Standard E-1559 and also consolidates data from missions using quartz-crystal microbalances (QCMs). The data contained in the knowledge base were shared with NASA by government agencies and industry in the US and by international space agencies as well. The term 'knowledgebase' was used because so much information and capability was brought together in one comprehensive engineering design tool. It is the SEE Program's intent to continually add material contamination data as it becomes available, creating a dynamic tool whose value to the user is ever increasing. The SEE Program firmly believes that NASA, and ultimately the entire contamination user community, will greatly benefit from this new engineering tool and highly encourages the community not only to use the tool but also to add data to it.

  4. Presentation planning using an integrated knowledge base

    NASA Technical Reports Server (NTRS)

    Arens, Yigal; Miller, Lawrence; Sondheimer, Norman

    1988-01-01

    A description is given of user interface research aimed at bringing together multiple input and output modes in a way that handles mixed mode input (commands, menus, forms, natural language), interacts with a diverse collection of underlying software utilities in a uniform way, and presents the results through a combination of output modes including natural language text, maps, charts and graphs. The system, Integrated Interfaces, derives much of its ability to interact uniformly with the user and the underlying services and to build its presentations, from the information present in a central knowledge base. This knowledge base integrates models of the application domain (Navy ships in the Pacific region, in the current demonstration version); the structure of visual displays and their graphical features; the underlying services (data bases and expert systems); and interface functions. The emphasis is on a presentation planner that uses the knowledge base to produce multi-modal output. There has been a flurry of recent work in user interface management systems. (Several recent examples are listed in the references). Existing work is characterized by an attempt to relieve the software designer of the burden of handcrafting an interface for each application. The work has generally focused on intelligently handling input. This paper deals with the other end of the pipeline - presentations.

  5. Empirical Analysis and Refinement of Expert System Knowledge Bases.

    DTIC Science & Technology

    1987-11-30

    Knowledge base refinement is the modification of an existing expert system knowledge base with the goals of localizing specific weaknesses in a… expert system techniques for knowledge acquisition, knowledge base refinement, maintenance, and verification… on the related problems of knowledge base acquisition, maintenance, verification, and learning from experience. The SEEK system was the first expert…

  6. Knowledge-Based Query Construction Using the CDSS Knowledge Base for Efficient Evidence Retrieval

    PubMed Central

    Afzal, Muhammad; Hussain, Maqbool; Ali, Taqdir; Hussain, Jamil; Khan, Wajahat Ali; Lee, Sungyoung; Kang, Byeong Ho

    2015-01-01

    Finding appropriate evidence to support clinical practices is always challenging, and the construction of a query to retrieve such evidence is a fundamental step. Typically, evidence is found using manual or semi-automatic methods, which are time-consuming and sometimes make it difficult to construct knowledge-based complex queries. To overcome this difficulty, we utilized the knowledge base (KB) of the clinical decision support system (CDSS), which has the potential to provide sufficient contextual information. To automatically construct knowledge-based complex queries, we designed methods to parse the rule structure in the KB of the CDSS in order to determine an executable path and extract the terms by parsing the control structures and logic connectives used in the logic. The automatically constructed queries were executed on the PubMed search service to evaluate the reduction of retrieved citations with high relevance. With the knowledge-based query construction approach, the average number of retrieved citations was reduced from 56,249 to 330, and average query specificity increased from 1 term to 6 terms. Based on feedback collected from clinicians, the ability to automatically retrieve relevant evidence saves them considerable time. This approach is generally useful in evidence-based medicine, especially in ambient assisted living environments where automation is highly important. PMID:26343669
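
    The idea of deriving a boolean search query from a KB rule can be sketched minimally. The IF/AND/THEN rule syntax and this parser are illustrative assumptions; the cited work parses the control structures and logic connectives of a real CDSS knowledge base, which is considerably richer.

```python
import re

def rule_to_query(rule):
    """Turn a simple IF/AND/THEN rule string into a boolean search query
    (a toy sketch of knowledge-based query construction)."""
    match = re.match(r"IF (.+) THEN (.+)", rule, flags=re.IGNORECASE)
    if not match:
        raise ValueError("unrecognized rule format")
    # Split the condition part on AND connectives to recover terms.
    conditions = [c.strip() for c in re.split(r"\bAND\b", match.group(1))]
    conclusion = match.group(2).strip()
    quoted = " AND ".join(f'"{c}"' for c in conditions)
    return f'({quoted}) AND "{conclusion}"'
```

    A query built this way could then be submitted to a literature search service such as PubMed, with each quoted phrase narrowing the result set.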

  7. Coastal habitats of the Elwha River, Washington- Biological and physical patterns and processes prior to dam removal

    USGS Publications Warehouse

    Duda, Jeffrey J.; Warrick, Jonathan A.; Magirl, Christopher S.

    2011-01-01

    Together, these different scientific perspectives form a basis for understanding the Elwha River ecosystem, an environment that has and will undergo substantial change. A century of change began with the start of dam construction in 1910; additional major change will result from dam removal scheduled to begin in September 2011. This report provides a scientific snapshot of the lower Elwha River, its estuary, and adjacent nearshore ecosystems prior to dam removal that can be used to evaluate the responses and dynamics of various system components following dam removal.

  8. Coastal and lower Elwha River, Washington, prior to dam removal--history, status, and defining characteristics: Chapter 1 in Coastal habitats of the Elwha River, Washington--biological and physical patterns and processes prior to dam removal

    USGS Publications Warehouse

    Duda, Jeffrey J.; Warrick, Jonathan A.; Magirl, Christopher S.; Duda, Jeffrey J.; Warrick, Jonathan A.; Magirl, Christopher S.

    2011-01-01

    Characterizing the physical and biological characteristics of the lower Elwha River, its estuary, and adjacent nearshore habitats prior to dam removal is essential to monitor changes to these areas during and following the historic dam-removal project set to begin in September 2011. Based on the size of the two hydroelectric projects and the amount of sediment that will be released, the Elwha River in Washington State will be home to the largest river restoration through dam removal attempted in the United States. Built in 1912 and 1927, respectively, the Elwha and Glines Canyon Dams have altered key physical and biological characteristics of the Elwha River. Once abundant salmon populations, consisting of all five species of Pacific salmon, are restricted to the lower 7.8 river kilometers downstream of Elwha Dam and are currently in low numbers. Dam removal will reopen access to more than 140 km of mainstem, flood plain, and tributary habitat, most of which is protected within Olympic National Park. The high capture rate of river-borne sediments by the two reservoirs has changed the geomorphology of the riverbed downstream of the dams. Mobilization and downstream transport of these accumulated reservoir sediments during and following dam removal will significantly change downstream river reaches, the estuary complex, and the nearshore environment. To introduce the more detailed studies that follow in this report, we summarize many of the key aspects of the Elwha River ecosystem including a regional and historical context for this unprecedented project.

  9. Knowledge based imaging for terrain analysis

    NASA Technical Reports Server (NTRS)

    Holben, Rick; Westrom, George; Rossman, David; Kurrasch, Ellie

    1992-01-01

    A planetary rover will have various vision based requirements for navigation, terrain characterization, and geological sample analysis. In this paper we describe a knowledge-based controller and sensor development system for terrain analysis. The sensor system consists of a laser ranger and a CCD camera. The controller, under the input of high-level commands, performs such functions as multisensor data gathering, data quality monitoring, and automatic extraction of sample images meeting various criteria. In addition to large scale terrain analysis, the system's ability to extract useful geological information from rock samples is illustrated. Image and data compression strategies are also discussed in light of the requirements of earth bound investigators.

  10. Knowledge-based systems in Japan

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward; Engelmore, Robert S.; Friedland, Peter E.; Johnson, Bruce B.; Nii, H. Penny; Schorr, Herbert; Shrobe, Howard

    1994-01-01

    This report summarizes a study of the state-of-the-art in knowledge-based systems technology in Japan, organized by the Japanese Technology Evaluation Center (JTEC) under the sponsorship of the National Science Foundation and the Advanced Research Projects Agency. The panel visited 19 Japanese sites in March 1992. Based on these site visits plus other interactions with Japanese organizations, both before and after the site visits, the panel prepared a draft final report. JTEC sent the draft to the host organizations for their review. The final report was published in May 1993.

  11. Nearshore biological communities prior to the removal of the Elwha River dams: Chapter 6 in Coastal habitats of the Elwha River, Washington--biological and physical patterns and processes prior to dam removal

    USGS Publications Warehouse

    Rubin, Stephen P.; Miller, Ian M.; Elder, Nancy; Reisenbichler, Reginald R.; Duda, Jeffrey J.; Duda, Jeffrey J.; Warrick, Jonathan A.; Magirl, Christopher S.

    2011-01-01

    (3–18 m) near the mouth of the Elwha River, between the west end of Freshwater Bay and the base of Ediz Hook, were surveyed in August and September 2008, to establish baselines prior to dam removal. Density was estimated for 9 kelp taxa, 65 taxa of invertebrates larger than 2.5 cm in any dimension, and 24 fish taxa. Density averaged over all sites was 3.1 per square meter (/m2) for kelp, 2.7/m2 for invertebrates, and 0.1/m2 for fish. Community structure was partly controlled by substrate type, seafloor relief, and depth. On average, 12 more taxa occurred where boulders were present compared to areas lacking boulders but with similar base substrate. Four habitat types were identified: (1) Bedrock/boulder reefs had the highest kelp density and taxa richness, and were characterized by a canopy of Nereocystis leutkeana (bull kelp) at the water surface and a secondary canopy of perennial kelp 1–2 m above the seafloor; (2) Mixed sand and gravel-cobble habitats with moderate relief provided by boulders had the highest density of invertebrates and a taxa richness nearly equivalent to that for bedrock/boulder reefs; (3) Mixed sand and gravel-cobble habitats lacking boulders supported a moderate density of kelp, primarily annual species with low growth forms (blades close to the seafloor), and the lowest invertebrate density among habitats; and (4) Sand habitats had the lowest kelp density and taxa richness among habitats and a moderate density of invertebrates. Uncertainties about nearshore community responses to increases in deposited and suspended sediments highlight the opportunity to advance scientific understanding by measuring responses following dam removal.

  12. A knowledge based software engineering environment testbed

    NASA Technical Reports Server (NTRS)

    Gill, C.; Reedy, A.; Baker, L.

    1985-01-01

    The Carnegie Group Incorporated and Boeing Computer Services Company are developing a testbed which will provide a framework for integrating conventional software engineering tools with Artificial Intelligence (AI) tools to promote automation and productivity. The emphasis is on the transfer of AI technology to the software development process. Experiments relate to AI issues such as scaling up, inference, and knowledge representation. In its first year, the project has created a model of software development by representing software activities; developed a module representation formalism to specify the behavior and structure of software objects; integrated the model with the formalism to identify shared representation and inheritance mechanisms; demonstrated object programming by writing procedures and applying them to software objects; used data-directed and goal-directed reasoning to, respectively, infer the cause of bugs and evaluate the appropriateness of a configuration; and demonstrated knowledge-based graphics. Future plans include introduction of knowledge-based systems for rapid prototyping or rescheduling; natural language interfaces; blackboard architecture; and distributed processing.

  13. Knowledge-based representations of risk beliefs.

    PubMed

    Tonn, B E; Travis, C B; Goeltz, R T; Phillippi, R H

    1990-03-01

    Beliefs about risks associated with two risk agents, AIDS and toxic waste, are modeled using knowledge-based methods and elicited from subjects via interactive computer technology. A concept net is developed to organize subject responses concerning the consequences of the risk agents. It is found that death and adverse personal emotional and sociological consequences are most associated with AIDS. Toxic waste is most associated with environmental problems. These consequence profiles are quite dissimilar, although past work in risk perception would have judged the risk agents as being quite similar. Subjects frequently used causal semantics to represent their beliefs and "% of time" instead of "probability" to represent likelihoods. The news media is the most prevalent source of risk information, although experiences of acquaintances appear more credible. The results suggest that "broadly based" risk communication may be ineffective because people differ in their conceptual representation of risk beliefs. In general, the knowledge-based approach to risk perception representation has great potential to increase our understanding of important risk topics.

  14. Automated Fictional Ideation via Knowledge Base Manipulation.

    PubMed

    Llano, Maria Teresa; Colton, Simon; Hepworth, Rose; Gow, Jeremy

    The invention of fictional ideas (ideation) is often a central process in the creative production of artefacts such as poems, music and paintings, but has barely been studied in the computational creativity community. We present here a general approach to automated fictional ideation that works by manipulating facts specified in knowledge bases. More specifically, we specify a number of constructions which, by altering and combining facts from a knowledge base, result in the generation of fictions. Moreover, we present an instantiation of these constructions through the use of ConceptNet, a database of common sense knowledge. In order to evaluate the success of these constructions, we present a curation analysis that calculates the proportion of ideas which pass a typicality judgement. We further evaluate the output of this approach through a crowd-sourcing experiment in which participants were asked to rank ideas. We found a positive correlation between the participant's rankings and a chaining inference technique that automatically assesses the value of the fictions generated through our approach. We believe that these results show that this approach constitutes a firm basis for automated fictional ideation with evaluative capacity.
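
    One of the simplest such constructions can be sketched as a subject swap over fact triples. The triple format, function name, and hand-rolled fact list below are assumptions for illustration; the authors draw their facts from ConceptNet and use several richer constructions.

```python
def swap_subjects(facts, candidate_subjects):
    """Generate candidate fictional ideas by re-attaching a fact's
    relation and object to a new subject (toy sketch of knowledge-base
    manipulation for fictional ideation)."""
    fictions = []
    for subject, relation, obj in facts:
        for new_subject in candidate_subjects:
            if new_subject != subject:
                # A familiar fact with an unfamiliar subject is a
                # candidate fiction, e.g. "a whale that can fly".
                fictions.append((new_subject, relation, obj))
    return fictions
```

    In the paper's pipeline, output like this would then be filtered by a typicality judgement and ranked, since most mechanically generated combinations are not interesting ideas.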

  15. Developing the Knowledge Base for Supervisor Induction and Professional Growth: Validating the Model.

    ERIC Educational Resources Information Center

    Friedman, Malcolm; Watkins, Regina M.

    Prior to implementation of a supervisory staff development program, it is necessary to review current literature in the field of educational administration in order to define specific elements (domains) of the supervisory knowledge base. A qualitative research study involving a review of relevant literature yielded 13 primary domains or categories…

  16. The Influence of the Knowledge Base on the Development of Mnemonic Strategies.

    ERIC Educational Resources Information Center

    Ornstein, Peter A.; Naus, Mary J.

    A dominant theme in cognitive psychology is that prior knowledge in long-term memory has a strong influence on an individual's cognitive processing. Citing numerous memory studies with children, knowledge base effects are presented as part of a broader picture of memory development. Using the sort/recall procedure (asking subjects to group sets of…

  17. Modeling Guru: Knowledge Base for NASA Modelers

    NASA Astrophysics Data System (ADS)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but users must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" is comprised of documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the…

  18. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  19. Compilation for critically constrained knowledge bases

    SciTech Connect

    Schrag, R.

    1996-12-31

    We show that many "critically constrained" Random 3SAT knowledge bases (KBs) can be compiled into disjunctive normal form easily by using a variant of the "Davis-Putnam" proof procedure. From these compiled KBs we can answer all queries about entailment of conjunctive normal formulas, also easily - compared to a "brute-force" approach to approximate knowledge compilation into unit clauses for the same KBs. We exploit this fact to develop an aggressive hybrid approach which attempts to compile a KB exactly until a given resource limit is reached, then falls back to approximate compilation into unit clauses. The resulting approach handles all of the critically constrained Random 3SAT KBs with average savings of an order of magnitude over the brute-force approach.
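
    The core idea, compiling a CNF KB into DNF so that entailment queries become easy, can be sketched with a naive backtracking search over models. This is a toy assumption-laden sketch: the cited work uses a far more refined Davis-Putnam variant and a resource-bounded fallback to unit-clause approximation, neither of which is modeled here.

```python
def compile_to_dnf(clauses, variables):
    """Enumerate models of a CNF KB (clauses are lists of signed ints,
    e.g. -1 means "not x1") via DPLL-style backtracking, returning each
    model as a DNF term (variable -> bool)."""
    def clause_falsified(clause, assignment):
        return all(abs(lit) in assignment and
                   assignment[abs(lit)] != (lit > 0) for lit in clause)

    terms = []
    def search(assignment, remaining):
        if any(clause_falsified(c, assignment) for c in clauses):
            return  # prune: some clause can no longer be satisfied
        if not remaining:
            terms.append(dict(assignment))
            return
        var, rest = remaining[0], remaining[1:]
        for value in (True, False):
            assignment[var] = value
            search(assignment, rest)
            del assignment[var]
    search({}, list(variables))
    return terms

def entails(dnf_terms, literal):
    """The KB entails a literal iff it holds in every compiled model."""
    var, value = abs(literal), literal > 0
    return all(term.get(var) == value for term in dnf_terms)
```

    Once compiled, each entailment query is a linear scan over the DNF terms; the expensive search is paid once at compilation time, which is exactly the trade-off knowledge compilation exploits.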

  20. NASDA knowledge-based network planning system

    NASA Technical Reports Server (NTRS)

    Yamaya, K.; Fujiwara, M.; Kosugi, S.; Yambe, M.; Ohmori, M.

    1993-01-01

    One of the SODS (Space Operation and Data System) sub-systems, NP (network planning), was the first expert system used by NASDA (National Space Development Agency of Japan) for tracking and control of satellites. The major responsibilities of the NP system are: first, the allocation of network and satellite control resources and, second, the generation of the network operation plan data (NOP) used in automated control of the stations and control center facilities. Until now, the first task, network resource scheduling, was done by network operators. The NP system automatically generates schedules using its knowledge base, which contains information on satellite orbits, station availability, which computer is dedicated to which satellite, and how many stations must be available for a particular satellite pass or a certain time period. The NP system is introduced.

  1. DeepDive: Declarative Knowledge Base Construction.

    PubMed

    De Sa, Christopher; Ratner, Alex; Ré, Christopher; Shin, Jaeho; Wang, Feiran; Wu, Sen; Zhang, Ce

    2016-03-01

    The dark data extraction or knowledge base construction (KBC) problem is to populate a SQL database with information from unstructured data sources including emails, webpages, and PDF reports. KBC is a long-standing problem in industry and research that encompasses problems of data extraction, cleaning, and integration. We describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems. The key idea in DeepDive is that statistical inference and machine learning are key tools to attack classical data problems in extraction, cleaning, and integration in a unified and more effective manner. DeepDive programs are declarative in that one cannot write probabilistic inference algorithms; instead, one interacts by defining features or rules about the domain. A key reason for this design choice is to enable domain experts to build their own KBC systems. We present the applications, abstractions, and techniques of DeepDive employed to accelerate construction of KBC systems.
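
    DeepDive's actual interface is its own declarative language; purely to illustrate the style of "define rules, not inference code," here is a toy Python analogue in which a generic engine applies user-supplied extraction rules to unstructured text. All names (`phone_rule`, `run_kbc`) and the sample data are hypothetical and do not come from DeepDive.

```python
import re
from typing import Callable, Iterable, List, Tuple

# A rule maps a document to candidate (entity, value) facts.
Rule = Callable[[str], Iterable[Tuple[str, str]]]

def phone_rule(text: str):
    # A user-written declarative rule: a regex pattern, no inference code.
    for m in re.finditer(r"(\w+)'s phone is (\d{3}-\d{4})", text):
        yield (m.group(1), m.group(2))

def run_kbc(docs: List[str], rules: List[Rule]) -> List[Tuple[str, str]]:
    """Generic engine: apply every rule to every document and collect
    deduplicated candidate facts for the knowledge base."""
    facts = {fact for doc in docs for rule in rules for fact in rule(doc)}
    return sorted(facts)

facts = run_kbc(["Ann's phone is 555-1234. Bob's phone is 555-9876."],
                [phone_rule])
```

    The domain expert only supplies rules like `phone_rule`; the engine (and, in DeepDive, the statistical inference layer) is shared across applications.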

  2. Sensitive determination of terazosin in pharmaceutical formulations and biological samples by ionic-liquid microextraction prior to spectrofluorimetry.

    PubMed

    Zeeb, Mohsen; Sadeghi, Mahdi

    2012-01-01

    An efficient and environmentally friendly sample preparation method based on the application of hydrophobic 1-Hexylpyridinium hexafluorophosphate [Hpy][PF(6)] ionic liquid (IL) as a microextraction solvent was proposed to preconcentrate terazosin. The performance of the microextraction method was improved by introducing a common ion of the pyridinium IL into the sample solution. Due to the presence of the common ion, the solubility of the IL significantly decreased. As a result, phase separation occurred successfully even at high ionic strength, and the volume of the settled IL phase was not influenced by variations in the ionic strength (up to 30% w/v). After the preconcentration step, the enriched phase was introduced into the spectrofluorimeter for the determination of terazosin. The results revealed that this system did not suffer from the limitations of conventional ionic-liquid microextraction. Under optimum experimental conditions, the proposed method provided a limit of detection (LOD) of 0.027 μg L(-1) and a relative standard deviation (R.S.D.) of 2.4%. The method was successfully applied to terazosin determination in actual pharmaceutical formulations and biological samples. Considering the large variety of ionic liquids available, the proposed microextraction method offers many advantages and should find wide application in the future.

  3. Sensitive Determination of Terazosin in Pharmaceutical Formulations and Biological Samples by Ionic-Liquid Microextraction Prior to Spectrofluorimetry

    PubMed Central

    Zeeb, Mohsen; Sadeghi, Mahdi

    2012-01-01

    An efficient and environmentally friendly sample preparation method based on the application of hydrophobic 1-Hexylpyridinium hexafluorophosphate [Hpy][PF6] ionic liquid (IL) as a microextraction solvent was proposed to preconcentrate terazosin. The performance of the microextraction method was improved by introducing a common ion of the pyridinium IL into the sample solution. Due to the presence of the common ion, the solubility of the IL significantly decreased. As a result, phase separation occurred successfully even at high ionic strength, and the volume of the settled IL phase was not influenced by variations in the ionic strength (up to 30% w/v). After the preconcentration step, the enriched phase was introduced into the spectrofluorimeter for the determination of terazosin. The results revealed that this system did not suffer from the limitations of conventional ionic-liquid microextraction. Under optimum experimental conditions, the proposed method provided a limit of detection (LOD) of 0.027 μg L−1 and a relative standard deviation (R.S.D.) of 2.4%. The method was successfully applied to terazosin determination in actual pharmaceutical formulations and biological samples. Considering the large variety of ionic liquids available, the proposed microextraction method offers many advantages and should find wide application in the future. PMID:22505920

  4. A rapid microbiopsy system to improve the preservation of biological samples prior to high-pressure freezing.

    PubMed

    Vanhecke, D; Graber, W; Herrmann, G; Al-Amoudi, A; Eggli, P; Studer, D

    2003-10-01

    A microbiopsy system for fast excision and transfer of biological specimens from donor to high-pressure freezer was developed. With a modified, commercially available, Promag 1.2 biopsy gun, tissue samples can be excised with a size small enough (0.6 mm x 1.2 mm x 0.3 mm) to be easily transferred into a newly designed specimen platelet. A self-made transfer unit allows fast transfer of the specimen from the needle into the specimen platelet. The platelet is then fixed in a commercially available specimen holder of a high-pressure freezing machine (EM PACT, Leica Microsystems, Vienna, Austria) and frozen therein. The time required by a well-instructed (but not experienced) person to execute all steps is in the range of half a minute. This period is considered short enough to maintain the excised tissue pieces close to their native state. We show that a range of animal tissues (liver, brain, kidney and muscle) are well preserved. To prove the quality of freezing achieved with the system, we show vitrified ivy leaves high-pressure frozen in the new specimen platelet.

  5. Knowledge-based systems and NASA's software support environment

    NASA Technical Reports Server (NTRS)

    Dugan, Tim; Carmody, Cora; Lennington, Kent; Nelson, Bob

    1990-01-01

    A proposed role for knowledge-based systems within NASA's Software Support Environment (SSE) is described. The SSE is chartered to support all software development for the Space Station Freedom Program (SSFP). This includes support for development of knowledge-based systems and the integration of these systems with conventional software systems. In addition to the support of development of knowledge-based systems, various software development functions provided by the SSE will utilize knowledge-based systems technology.

  6. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    USDA-ARS?s Scientific Manuscript database

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium...

  7. A knowledge-based multiple-sequence alignment algorithm.

    PubMed

    Nguyen, Ken D; Pan, Yi

    2013-01-01

    A common and cost-effective mechanism to identify the functionalities, structures, or relationships between species is multiple-sequence alignment, in which DNA/RNA/protein sequences are arranged and aligned so that similarities between sequences are clustered together. Correctly identifying and aligning these biological sequence similarities helps with everything from unraveling the mystery of species evolution to drug design. We present our knowledge-based multiple sequence alignment (KB-MSA) technique that utilizes existing knowledge databases such as SWISSPROT, GENBANK, or HOMSTRAD to provide a more realistic and reliable sequence alignment. We also provide a modified version of this algorithm (CB-MSA) that utilizes sequence consistency information when knowledge databases are not available. Our benchmark tests on BAliBASE, PREFAB, HOMSTRAD, and SABMARK references show accuracy improvements of up to 10 percent on twilight data sets against many leading alignment tools such as ISPALIGN, PADT, CLUSTALW, MAFFT, PROBCONS, and T-COFFEE.
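
    The KB-MSA algorithm itself is not reproduced in the abstract; as a self-contained illustration of the dynamic-programming pairwise alignment that underlies most multiple-sequence alignment tools, here is a minimal Needleman-Wunsch sketch. The scoring parameters are arbitrary assumptions, not values from the paper.

```python
def needleman_wunsch(a: str, b: str, match=1, mismatch=-1, gap=-2):
    """Global pairwise alignment by dynamic programming (Needleman-Wunsch)."""
    n, m = len(a), len(b)
    # Fill the score matrix S, where S[i][j] is the best score aligning
    # a[:i] against b[:j].
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        S[i][0] = i * gap
    for j in range(1, m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = match if a[i - 1] == b[j - 1] else mismatch
            S[i][j] = max(S[i - 1][j - 1] + d,   # substitute/match
                          S[i - 1][j] + gap,     # gap in b
                          S[i][j - 1] + gap)     # gap in a
    # Traceback from the bottom-right corner to recover one optimal alignment.
    out_a, out_b, i, j = [], [], n, m
    while i or j:
        d = match if i and j and a[i - 1] == b[j - 1] else mismatch
        if i and j and S[i][j] == S[i - 1][j - 1] + d:
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i and S[i][j] == S[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b))
```

    A knowledge-based variant in the spirit of KB-MSA would bias these scores using hits from databases such as HOMSTRAD rather than fixed match/mismatch constants.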

  8. Irrelevance Reasoning in Knowledge Based Systems

    NASA Technical Reports Server (NTRS)

    Levy, A. Y.

    1993-01-01

    This dissertation considers the problem of reasoning about irrelevance of knowledge in a principled and efficient manner. Specifically, it is concerned with two key problems: (1) developing algorithms for automatically deciding what parts of a knowledge base are irrelevant to a query and (2) the utility of relevance reasoning. The dissertation describes a novel tool, the query-tree, for reasoning about irrelevance. Based on the query-tree, we develop several algorithms for deciding what formulas are irrelevant to a query. Our general framework sheds new light on the problem of detecting independence of queries from updates. We present new results that significantly extend previous work in this area. The framework also provides a setting in which to investigate the connection between the notion of irrelevance and the creation of abstractions. We propose a new approach to research on reasoning with abstractions, in which we investigate the properties of an abstraction by considering the irrelevance claims on which it is based. We demonstrate the potential of the approach for the cases of abstraction of predicates and projection of predicate arguments. Finally, we describe an application of relevance reasoning to the domain of modeling physical devices.

  9. An Ebola virus-centered knowledge base

    PubMed Central

    Kamdar, Maulik R.; Dumontier, Michel

    2015-01-01

    Ebola virus (EBOV), of the family Filoviridae, is a NIAID Category A lethal human pathogen. It causes Ebola virus disease (EVD), a severe hemorrhagic fever with a cumulative death rate of 41% in the ongoing epidemic in West Africa. There is an ever-increasing need to consolidate and make available all the knowledge that we possess on EBOV, even if it is conflicting or incomplete. This would enable biomedical researchers to understand the molecular mechanisms underlying this disease and help develop tools for efficient diagnosis and effective treatment. In this article, we present our approach for the development of an Ebola virus-centered Knowledge Base (Ebola-KB) using Linked Data and Semantic Web Technologies. We retrieve and aggregate knowledge from several open data sources, web services and biomedical ontologies. This knowledge is transformed to RDF, linked to the Bio2RDF datasets and made available through a SPARQL 1.1 Endpoint. Ebola-KB can also be explored using an interactive Dashboard visualizing the different perspectives of this integrated knowledge. We showcase how different competency questions, asked by domain users researching the druggability of EBOV, can be formulated as SPARQL Queries or answered using the Ebola-KB Dashboard. Database URL: http://ebola.semanticscience.org. PMID:26055098

  10. An Ebola virus-centered knowledge base.

    PubMed

    Kamdar, Maulik R; Dumontier, Michel

    2015-01-01

    Ebola virus (EBOV), of the family Filoviridae, is a NIAID Category A lethal human pathogen. It causes Ebola virus disease (EVD), a severe hemorrhagic fever with a cumulative death rate of 41% in the ongoing epidemic in West Africa. There is an ever-increasing need to consolidate and make available all the knowledge that we possess on EBOV, even if it is conflicting or incomplete. This would enable biomedical researchers to understand the molecular mechanisms underlying this disease and help develop tools for efficient diagnosis and effective treatment. In this article, we present our approach for the development of an Ebola virus-centered Knowledge Base (Ebola-KB) using Linked Data and Semantic Web Technologies. We retrieve and aggregate knowledge from several open data sources, web services and biomedical ontologies. This knowledge is transformed to RDF, linked to the Bio2RDF datasets and made available through a SPARQL 1.1 Endpoint. Ebola-KB can also be explored using an interactive Dashboard visualizing the different perspectives of this integrated knowledge. We showcase how different competency questions, asked by domain users researching the druggability of EBOV, can be formulated as SPARQL Queries or answered using the Ebola-KB Dashboard.

  11. Knowledge-based reusable software synthesis system

    NASA Technical Reports Server (NTRS)

    Donaldson, Cammie

    1989-01-01

    The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.

  12. A Collaborative Environment for Knowledge Base Development

    NASA Astrophysics Data System (ADS)

    Li, W.; Yang, C.; Raskin, R.; Nebert, D. D.; Wu, H.

    2009-12-01

    A Knowledge Base (KB) is an essential component for capturing, structuring and defining the meanings of domain knowledge. It is important in enabling the sharing and interoperability of scientific data and services in a smart manner. It is also the foundation for most of the research in the semantic field, such as semantic reasoning and ranking. In collaboration with ESIP, GMU is developing an online interface and supporting infrastructure to allow semantic registration of datasets and other web resources. The semantic descriptions of data, services, and scientific content will be collected and transformed into the KB. As a case study, the harvest of web map services from Nordic mapping agencies to build a virtual Arctic spatial data infrastructure will be used as the domain example. To automate the process, a controlled vocabulary of certain subjects, such as solid water, is created to filter existing data and service repositories and obtain a collection of closely related documents. Latent semantic indexing is then utilized to analyze semantic relationships among concepts that appear in service documents. Finally, the semantic structure in plain text will be mapped and automatically populated into the specific representation of knowledge in the KB.

  13. DeepDive: Declarative Knowledge Base Construction

    PubMed Central

    De Sa, Christopher; Ratner, Alex; Ré, Christopher; Shin, Jaeho; Wang, Feiran; Wu, Sen; Zhang, Ce

    2016-01-01

    The dark data extraction or knowledge base construction (KBC) problem is to populate a SQL database with information from unstructured data sources including emails, webpages, and PDF reports. KBC is a long-standing problem in industry and research that encompasses problems of data extraction, cleaning, and integration. We describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems. The key idea in DeepDive is that statistical inference and machine learning are key tools to attack classical data problems in extraction, cleaning, and integration in a unified and more effective manner. DeepDive programs are declarative in that one cannot write probabilistic inference algorithms; instead, one interacts by defining features or rules about the domain. A key reason for this design choice is to enable domain experts to build their own KBC systems. We present the applications, abstractions, and techniques of DeepDive employed to accelerate construction of KBC systems. PMID:28344371

  14. Knowledge-based nonuniform sampling in multidimensional NMR.

    PubMed

    Schuyler, Adam D; Maciejewski, Mark W; Arthanari, Haribabu; Hoch, Jeffrey C

    2011-07-01

    The full resolution afforded by high-field magnets is rarely realized in the indirect dimensions of multidimensional NMR experiments because of the time cost of uniformly sampling to long evolution times. Emerging methods utilizing nonuniform sampling (NUS) enable high resolution along indirect dimensions by sampling long evolution times without sampling at every multiple of the Nyquist sampling interval. While the earliest NUS approaches matched the decay of sampling density to the decay of the signal envelope, recent approaches based on coupled evolution times attempt to optimize sampling by choosing projection angles that increase the likelihood of resolving closely-spaced resonances. These approaches employ knowledge about chemical shifts to predict optimal projection angles, whereas prior applications of tailored sampling employed only knowledge of the decay rate. In this work we adapt the matched filter approach as a general strategy for knowledge-based nonuniform sampling that can exploit prior knowledge about chemical shifts and is not restricted to sampling projections. Based on several measures of performance, we find that exponentially weighted random sampling (envelope matched sampling) performs better than shift-based sampling (beat matched sampling). While shift-based sampling can yield small advantages in sensitivity, the gains are generally outweighed by diminished robustness. Our observation that more robust sampling schemes are only slightly less sensitive than schemes highly optimized using prior knowledge about chemical shifts has broad implications for any multidimensional NMR study employing NUS. The results derived from simulated data are demonstrated with a sample application to PfPMT, the phosphoethanolamine methyltransferase of the human malaria parasite Plasmodium falciparum.
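
    The envelope-matched ("exponentially weighted random") sampling favored in the abstract can be sketched as follows: sample indices on the Nyquist grid are drawn with probability matched to the decaying signal envelope, so early evolution times, where the signal is strongest, are sampled more densely. The decay rate `r2`, dwell time, and grid parameters are illustrative assumptions, not values from the paper.

```python
import math
import random

def envelope_matched_schedule(n_points: int, grid_size: int,
                              r2: float, dwell: float, seed: int = 0):
    """Choose `n_points` distinct nonuniform sample indices on a grid of
    `grid_size` Nyquist intervals, with selection probability matched to
    the signal envelope exp(-R2 * t)."""
    assert n_points <= grid_size
    rng = random.Random(seed)
    weights = [math.exp(-r2 * k * dwell) for k in range(grid_size)]
    total = sum(weights)
    probs = [w / total for w in weights]
    chosen = set()
    while len(chosen) < n_points:
        u, acc = rng.random(), 0.0   # inverse-transform draw on the envelope
        for k, p in enumerate(probs):
            acc += p
            if u <= acc:
                chosen.add(k)        # duplicates are simply redrawn
                break
    return sorted(chosen)

schedule = envelope_matched_schedule(n_points=32, grid_size=128,
                                     r2=20.0, dwell=0.001)
```

    Beat-matched (shift-based) sampling would instead weight the grid by predicted chemical-shift interference patterns; the abstract's finding is that the simpler envelope matching above is usually the more robust choice.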

  15. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS for the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBSs cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBSs, as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBSs will be developed, and modification of the SSF to support these tools and methods will be addressed.

  16. A knowledge base for Vitis vinifera functional analysis

    PubMed Central

    2015-01-01

    Background Vitis vinifera (Grapevine) is the most important fruit species in the modern world. Wine and table grapes sales contribute significantly to the economy of major wine producing countries. The most relevant goals in wine production concern quality and safety. In order to significantly improve the achievement of these objectives and to gain biological knowledge about cultivars, a genomic approach is the most reliable strategy. The recent grapevine genome sequencing offers the opportunity to study the potential roles of genes and microRNAs in fruit maturation and other physiological and pathological processes. Although several systems allowing the analysis of plant genomes have been reported, none of them has been designed specifically for the functional analysis of grapevine genomes of cultivars under environmental stress in connection with microRNA data. Description Here we introduce a novel knowledge base, called BIOWINE, designed for the functional analysis of Vitis vinifera genomes of cultivars present in Sicily. The system allows the analysis of RNA-seq experiments of two different cultivars, namely Nero d'Avola and Nerello Mascalese. Samples were taken under different climatic conditions of phenological phases, diseases, and geographic locations. The BIOWINE web interface is equipped with data analysis modules for grapevine genomes. In particular users may analyze the current genome assembly together with the RNA-seq data through a customized version of GBrowse. The web interface allows users to perform gene set enrichment by exploiting third-party databases. Conclusions BIOWINE is a knowledge base implementing a set of bioinformatics tools for the analysis of grapevine genomes. The system aims to increase our understanding of the grapevine varieties and species of Sicilian products focusing on adaptability to different climatic conditions, phenological phases, diseases, and geographic locations. PMID:26050794

  17. Weather, knowledge base and life-style

    NASA Astrophysics Data System (ADS)

    Bohle, Martin

    2015-04-01

    Why main-stream curiosity for earth-science topics, and thus appraise these topics as of public interest? Namely, to influence the practices by which humankind's activities intersect the geosphere. How to main-stream that curiosity for earth-science topics? Namely, by weaving diverse concerns into common threads drawing on a wide range of perspectives: be it the beauty or particularity of ordinary or special phenomena, evaluating hazards for or from mundane environments, or connecting scholarly investigation with the concerns of citizens at large; applying for this threading traditional or modern media, arts or story-telling. Three examples: First, "weather": weather is a topic of primordial interest for most people; weather impacts human lives, be it for settlement, for food, for mobility, for hunting, for fishing, or for battle. It is the single earth-science topic that went "prime-time" when, in the early 1950s, the broadcasting of weather forecasts started and meteorologists began presenting their work to the public daily. Second, "knowledge base": earth-sciences are relevant to modern societies' economy and value setting; earth-sciences provide insights into the evolution of life-bearing planets, the functioning of Earth's systems and the impact of humankind's activities on biogeochemical systems on Earth. These insights bear on the production of goods, living conditions and individual well-being. Third, "life-style": citizens' urban culture prejudices their experiential connections; earth-science-related phenomena are witnessed rarely, even most weather phenomena. In the past, traditional rural communities mediated their rich experiences through earth-centric story-telling. In the course of the global urbanisation process this culture has given way to society-centric story-telling. Only recently has anthropogenic global change triggered discussions on geoengineering, hazard mitigation and demographics, which, interwoven with arts, linguistics and cultural histories, offer a rich narrative

  18. An Evaluation of Knowledge Base Systems for Large OWL Datasets

    DTIC Science & Technology

    2004-01-01

    An Evaluation of Knowledge Base Systems for Large OWL Datasets. Yuanbo Guo, Zhengxiang Pan, and Jeff Heflin, Computer Science & Engineering. We present our work on evaluating knowledge base systems with respect to use in large OWL applications. To this end, we have developed the Lehigh University Benchmark (LUBM). The benchmark is intended to evaluate knowledge base systems with respect to extensional queries over a large dataset that

  19. Advanced software development workstation. Knowledge base design: Design of knowledge base for flight planning application

    NASA Technical Reports Server (NTRS)

    Izygon, Michel E.

    1992-01-01

    The development process of the knowledge base for the generation of Test Libraries for Mission Operations Computer (MOC) Command Support focused on a series of information-gathering interviews. These knowledge capture sessions support the development of a prototype for evaluating the capabilities of INTUIT on such an application. The prototype includes functions related to POCC (Payload Operation Control Center) processing. It prompts the end-users for input through a series of panels and then generates the Meds associated with the initialization and the update of hazardous command tables for a POCC Processing TLIB.

  20. Automated knowledge base development from CAD/CAE databases

    NASA Technical Reports Server (NTRS)

    Wright, R. Glenn; Blanchard, Mary

    1988-01-01

    Knowledge base development requires a substantial investment in time, money, and resources in order to capture the knowledge and information necessary for anything other than trivial applications. This paper addresses a means to integrate the design and knowledge base development process through automated knowledge base development from CAD/CAE databases and files. Benefits of this approach include the development of a more efficient means of knowledge engineering, resulting in the timely creation of large knowledge based systems that are inherently free of error.

  1. Knowledge-based analysis of microarrays for the discovery of transcriptional regulation relationships

    PubMed Central

    2010-01-01

    Background The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. Results In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. Conclusion High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data. PMID:20122245

  2. System Engineering for the NNSA Knowledge Base

    NASA Astrophysics Data System (ADS)

    Young, C.; Ballard, S.; Hipp, J.

    2006-05-01

    To improve ground-based nuclear explosion monitoring capability, GNEM R&E (Ground-based Nuclear Explosion Monitoring Research & Engineering) researchers at the national laboratories have collected an extensive set of raw data products. These raw data are used to develop higher level products (e.g. 2D and 3D travel time models) to better characterize the Earth at regional scales. The processed products and selected portions of the raw data are stored in an archiving and access system known as the NNSA (National Nuclear Security Administration) Knowledge Base (KB), which is engineered to meet the requirements of operational monitoring authorities. At its core, the KB is a data archive, and the effectiveness of the KB is ultimately determined by the quality of the data content, but access to that content is completely controlled by the information system in which that content is embedded. Developing this system has been the task of Sandia National Laboratories (SNL), and in this paper we discuss some of the significant challenges we have faced and the solutions we have engineered. One of the biggest system challenges with raw data has been integrating database content from the various sources to yield an overall KB product that is comprehensive, thorough and validated, yet minimizes the amount of disk storage required. Researchers at different facilities often use the same data to develop their products, and this redundancy must be removed in the delivered KB, ideally without requiring any additional effort on the part of the researchers. Further, related data content must be grouped together for KB user convenience. Initially SNL used whatever tools were already available for these tasks, and did the other tasks manually. The ever-growing volume of KB data to be merged, as well as a need for more control of merging utilities, led SNL to develop our own Java software package, consisting of a low-level database utility library upon which we have built several

  3. Knowledge-based data analysis comes of age.

    PubMed

    Ochs, Michael F

    2010-01-01

    The emergence of high-throughput technologies for measuring biological systems has introduced problems for data interpretation that must be addressed for proper inference. First, analysis techniques need to be matched to the biological system, reflecting in their mathematical structure the underlying behavior being studied. When this is not done, mathematical techniques will generate answers, but the values and reliability estimates may not accurately reflect the biology. Second, analysis approaches must address the vast excess in variables measured (e.g. transcript levels of genes) over the number of samples (e.g. tumors, time points), known as the 'large-p, small-n' problem. In large-p, small-n paradigms, standard statistical techniques generally fail, and computational learning algorithms are prone to overfit the data. Here we review the emergence of techniques that match mathematical structure to the biology, the use of integrated data and prior knowledge to guide statistical analysis, and the recent emergence of analysis approaches utilizing simple biological models. We show that novel biological insights have been gained using these techniques.

  4. Knowledge-Based Entrepreneurship in a Boundless Research System

    ERIC Educational Resources Information Center

    Dell'Anno, Davide

    2008-01-01

    International entrepreneurship and knowledge-based entrepreneurship have recently generated considerable academic and non-academic attention. This paper explores the "new" field of knowledge-based entrepreneurship in a boundless research system. Cultural barriers to the development of business opportunities by researchers persist in some academic…

  6. Contribution of brain or biological reserve and cognitive or neural reserve to outcome after TBI: A meta-analysis (prior to 2015).

    PubMed

    Mathias, Jane L; Wheaton, Patricia

    2015-08-01

    Brain/biological (BR) and cognitive/neural reserve (CR) have increasingly been used to explain some of the variability that occurs as a consequence of normal ageing and neurological injuries or disease. However, research evaluating the impact of reserve on outcomes after adult traumatic brain injury (TBI) has yet to be quantitatively reviewed. This meta-analysis consolidated data from 90 studies (published prior to 2015) that either examined the relationship between measures of BR (genetics, age, sex) or CR (education, premorbid IQ) and outcomes after TBI or compared the outcomes of groups with high and low reserve. The evidence for genetic sources of reserve was limited and often contrary to prediction. APOE ε4 status has been studied most, but did not have a consistent or sizeable impact on outcomes. The majority of studies found that younger age was associated with better outcomes; however, most failed to adjust for normal age-related changes in cognitive performance that are independent of a TBI. This finding was reversed (older adults had better outcomes) in the small number of studies that provided age-adjusted scores, although it remains unclear whether differences in the cause and severity of injuries that are sustained by younger and older adults contributed to this finding. Despite being more likely to sustain a TBI, males have comparable outcomes to females. Overall, as is the case in the general population, higher levels of education and pre-morbid IQ are both associated with better outcomes.

  7. Verification of knowledge bases based on containment checking

    SciTech Connect

    Levy, A.Y.; Rousset, M.C.

    1996-12-31

    Building complex knowledge-based applications requires encoding large amounts of domain knowledge. After acquiring knowledge from domain experts, much of the effort in building a knowledge base goes into verifying that the knowledge is encoded correctly. We consider the problem of verifying hybrid knowledge bases that contain both Horn rules and a terminology in a description logic. Our approach to the verification problem is based on showing a close relationship to the problem of query containment. Our first contribution, based on this relationship, is presenting a thorough analysis of the decidability and complexity of the verification problem, for knowledge bases containing recursive rules and the interpreted predicates =, ≤, <, and ≠. Second, we show that important new classes of constraints on correct inputs and outputs can be expressed in a hybrid setting, in which a description logic class hierarchy is also considered, and we present the first complete algorithm for verifying such hybrid knowledge bases.

  8. Hyperincursion and the Globalization of the Knowledge-Based Economy

    NASA Astrophysics Data System (ADS)

    Leydesdorff, Loet

    2006-06-01

    In biological systems, the capacity of anticipation—that is, entertaining a model of the system within the system—can be considered as naturally given. Human languages enable psychological systems to construct and exchange mental models of themselves and their environments reflexively, that is, provide meaning to the events. At the level of the social system expectations can further be codified. When these codifications are functionally differentiated—like between market mechanisms and scientific research programs—the potential asynchronicity in the update among the subsystems provides room for a second anticipatory mechanism at the level of the transversal information exchange among differently codified meaning-processing subsystems. Interactions between the two different anticipatory mechanisms (the transversal one and the one along the time axis in each subsystem) may lead to co-evolutions and stabilization of expectations along trajectories. The wider horizon of knowledgeable expectations can be expected to meta-stabilize and also globalize a previously stabilized configuration of expectations against the axis of time. While stabilization can be considered as consequences of interaction and aggregation among incursive formulations of the logistic equation, globalization can be modeled using the hyperincursive formulation of this equation. The knowledge-based subdynamic at the global level which thus emerges, enables historical agents to inform the reconstruction of previous states and to co-construct future states of the social system, for example, in a techno-economic co-evolution.
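    The incursive and hyperincursive formulations of the logistic equation invoked here trace back to Dubois; as a sketch of the distinction (standard forms, stated as background rather than taken from this abstract):

```latex
% Classical (recursive) logistic map: the next state depends only on the past.
x(t+1) = a\,x(t)\,\bigl(1 - x(t)\bigr)

% Incursive formulation: the state being computed enters its own definition,
% which can be solved to give a single, stable future state.
x(t+1) = a\,x(t)\,\bigl(1 - x(t+1)\bigr)
\;\Longrightarrow\;
x(t+1) = \frac{a\,x(t)}{1 + a\,x(t)}

% Hyperincursive formulation: the present is defined from the future state.
% This is quadratic in x(t+1), so each step admits two solutions; choosing
% between them is the room for anticipation that the abstract associates
% with globalization of expectations.
x(t) = a\,x(t+1)\,\bigl(1 - x(t+1)\bigr)
```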

  9. Response time satisfaction in a real-time knowledge-based system

    SciTech Connect

    Frank, D.; Friesen, D.; Williams, G. (Dept. of Computer Science)

    1990-08-01

    Response to interrupts within a certain time frame is an important issue for all software operating in a real-time environment. A knowledge-based system (KBS) is no exception. Prior work on real-time knowledge-based systems either concentrated on improving the performance of the KBS in order to meet these constraints or focused on producing a better solution as more time was allowed. However, a problem with much of the latter research was that it required inference-time costs to be hardcoded into the different branches of reasoning. This limited the type of reasoning possible and the size of the KBS. Furthermore, performing the analysis required to derive those numbers is very difficult in knowledge-based systems. This research explored a model for overcoming these drawbacks. It is based on integrating conventional programming techniques used to control task processing with knowledge-based techniques used to actually produce task results. The C-Language Integrated Production System (CLIPS) was used for the inference engine in the KBS; using CLIPS for the inference engine simplified the rapid context switching required. Thus, the KBS could respond in a timely manner while maintaining the fullest spectrum of KBS functionality.

  10. Bioenergy Science Center KnowledgeBase

    DOE Data Explorer

    Syed, M. H.; Karpinets, T. V.; Parang, M.; Leuze, M. R.; Park, B. H.; Hyatt, D.; Brown, S. D.; Moulton, S.; Galloway, M. D.; Uberbacher, E. C.

    The challenge of converting cellulosic biomass to sugars is the dominant obstacle to cost-effective production of biofuels in quantities significant enough to displace U.S. consumption of fossil transportation fuels. The BioEnergy Science Center (BESC) tackles this challenge of biomass recalcitrance by closely linking (1) plant research to make cell walls easier to deconstruct, and (2) microbial research to develop multi-talented biocatalysts tailor-made to produce biofuels in a single step. [from the 2011 BESC factsheet] The BioEnergy Science Center (BESC) is a multi-institutional, multidisciplinary research (biological, chemical, physical and computational sciences, mathematics and engineering) organization focused on the fundamental understanding and elimination of biomass recalcitrance. The BESC Knowledgebase and its associated tools constitute a discovery platform for bioenergy research. It consists of a collection of metadata, data, and computational tools for data analysis, integration, comparison, and visualization for plants and microbes in the center. The BESC Knowledgebase (KB) and BESC Laboratory Information Management System (LIMS) enable bioenergy researchers to perform systemic research. [http://bobcat.ornl.gov/besc/index.jsp]

  11. The data dictionary: A view into the CTBT knowledge base

    SciTech Connect

    Shepherd, E.R.; Keyser, R.G.; Armstrong, H.M.

    1997-08-01

    The data dictionary for the Comprehensive Test Ban Treaty (CTBT) knowledge base provides a comprehensive, current catalog of the projected contents of the knowledge base. It is written from a data definition view of the knowledge base and therefore organizes information in a fashion that allows logical storage within the computer. The data dictionary introduces two organizational categories of data: the datatype, which is a broad, high-level category of data, and the dataset, which is a specific instance of a datatype. The knowledge base, and thus the data dictionary, consists of a fixed, relatively small number of datatypes, but new datasets are expected to be added on a regular basis. The data dictionary is a tangible result of the design effort for the knowledge base and is intended to be used by anyone who accesses the knowledge base for any purpose, such as populating the knowledge base with data, or accessing the data for use with automatic data processing (ADP) routines, or browsing through the data for verification purposes. For these two reasons, it is important to discuss the development of the data dictionary as well as to describe its contents to better understand its usefulness; that is the purpose of this paper.

  12. Integration of textual guideline documents with formal guideline knowledge bases.

    PubMed

    Shankar, R D; Tu, S W; Martins, S B; Fagan, L M; Goldstein, M K; Musen, M A

    2001-01-01

    Numerous approaches have been proposed to integrate the text of guideline documents with guideline-based care systems. Current approaches range from serving marked-up guideline text documents to generating advisories using complex guideline knowledge bases. These approaches have integration problems mainly because they tend to rigidly link the knowledge base with text. We are developing a bridge approach that uses an information retrieval technology. The new approach facilitates a versatile decision-support system by using flexible links between the formal structures of the knowledge base and the natural language style of the guideline text.

  13. SU-E-T-572: A Plan Quality Metric for Evaluating Knowledge-Based Treatment Plans.

    PubMed

    Chanyavanich, V; Lo, J; Das, S

    2012-06-01

    In prostate IMRT treatment planning, the variation in patient anatomy makes it difficult to estimate a priori the potentially achievable extent of dose reduction possible to the rectum and bladder. We developed a mutual information-based framework to estimate the achievable plan quality for a new patient, prior to any treatment planning or optimization. The knowledge base consists of 250 retrospective prostate IMRT plans. Using these prior plans, twenty query cases were each matched with five cases from the database. We propose a simple DVH plan quality metric (PQ) based on the weighted sum of the areas under the curve (AUC) of the PTV, rectum and bladder. We evaluated the plan quality of knowledge-based generated plans and established a correlation between the plan quality and case similarity. The introduced plan quality metric correlates well (r² = 0.8) with the mutual similarity between cases. A matched case with high anatomical similarity can be used to produce a new high quality plan. Not surprisingly, a poorly matched case with low degree of anatomical similarity tends to produce a low quality plan, since the adapted fluences from a dissimilar case cannot be modified sufficiently to yield acceptable PTV coverage. The plan quality metric is well-correlated to the degree of anatomical similarity between a new query case and matched cases. Further work will investigate how to apply this metric to further stratify and select cases for knowledge-based planning. © 2012 American Association of Physicists in Medicine.
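    A weighted-sum-of-AUC plan quality metric of the kind described above can be sketched in a few lines; the weights and the toy DVH curves below are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def dvh_auc(dose, volume):
    """Area under a cumulative DVH curve, by the trapezoidal rule."""
    return float(np.sum(np.diff(dose) * (volume[:-1] + volume[1:]) / 2.0))

def plan_quality(dose, dvh_ptv, dvh_rectum, dvh_bladder,
                 w_ptv=1.0, w_rectum=-0.5, w_bladder=-0.5):
    # Weighted sum of AUCs: target coverage raises the score, dose to the
    # organs at risk lowers it. The weights here are hypothetical.
    return (w_ptv * dvh_auc(dose, dvh_ptv)
            + w_rectum * dvh_auc(dose, dvh_rectum)
            + w_bladder * dvh_auc(dose, dvh_bladder))

# Toy cumulative DVHs: fraction of each structure receiving >= each dose level.
dose = np.linspace(0.0, 80.0, 81)               # Gy
dvh_ptv = np.clip(1.2 - dose / 80.0, 0.0, 1.0)  # broad target coverage
dvh_rectum = np.exp(-dose / 20.0)               # rapid fall-off in the rectum
dvh_bladder = np.exp(-dose / 25.0)

pq = plan_quality(dose, dvh_ptv, dvh_rectum, dvh_bladder)
# A plan that spares the rectum less (slower dose fall-off) scores lower:
worse = plan_quality(dose, dvh_ptv, np.exp(-dose / 40.0), dvh_bladder)
print(worse < pq)                               # → True
```

    A scalar of this form is what makes the reported correlation against case similarity possible: each plan collapses to one number that can be regressed on the match score.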

  14. Towards Modeling False Memory With Computational Knowledge Bases.

    PubMed

    Li, Justin; Kohanyi, Emma

    2017-01-01

    One challenge to creating realistic cognitive models of memory is the inability to account for the vast common-sense knowledge of human participants. Large computational knowledge bases such as WordNet and DBpedia may offer a solution to this problem but may pose other challenges. This paper explores some of these difficulties through a semantic network spreading activation model of the Deese-Roediger-McDermott false memory task. In three experiments, we show that these knowledge bases only capture a subset of human associations, while irrelevant information introduces noise and makes efficient modeling difficult. We conclude that the contents of these knowledge bases must be augmented and, more important, that the algorithms must be refined and optimized, before large knowledge bases can be widely used for cognitive modeling. Copyright © 2016 Cognitive Science Society, Inc.
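    The spreading-activation account of the Deese-Roediger-McDermott task can be sketched directly; the miniature association network below is hand-made for illustration, not drawn from WordNet or DBpedia.

```python
from collections import defaultdict

# word -> associated words (a toy semantic network)
associations = {
    "bed": ["sleep", "rest"],
    "dream": ["sleep", "night"],
    "pillow": ["sleep", "bed"],
    "tired": ["sleep", "rest"],
}

def spread(studied, decay=0.5):
    """One-step spreading activation from a studied word list."""
    activation = defaultdict(float)
    for word in studied:
        activation[word] += 1.0            # direct activation from study
        for neighbor in associations.get(word, []):
            activation[neighbor] += decay  # attenuated spread to associates
    return dict(activation)

act = spread(["bed", "dream", "pillow", "tired"])
# The critical lure 'sleep' was never studied, yet it accumulates activation
# from every list item and outranks some studied words: the false memory.
print(act["sleep"])   # → 2.0 (0.5 from each of the four studied words)
```

    The paper's difficulty is visible even here: in a real knowledge base the neighbor lists are huge and mostly irrelevant, so the lure's signal competes with noise from thousands of spurious associates.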

  15. XML-Based SHINE Knowledge Base Interchange Language

    NASA Technical Reports Server (NTRS)

    James, Mark; Mackey, Ryan; Tikidjian, Raffi

    2008-01-01

    The SHINE Knowledge Base Interchange Language software has been designed to more efficiently send new knowledge bases to spacecraft that have been embedded with the Spacecraft Health Inference Engine (SHINE) tool. The intention of the behavioral model is to capture most of the information generally associated with a spacecraft functional model, while specifically addressing the needs of execution within SHINE and Livingstone. As such, it has some constructs that are based on one or the other.

  16. The process for integrating the NNSA knowledge base.

    SciTech Connect

    Wilkening, Lisa K.; Carr, Dorthe Bame; Young, Christopher John; Hampton, Jeff; Martinez, Elaine

    2009-03-01

    From 2002 through 2006, the Ground Based Nuclear Explosion Monitoring Research & Engineering (GNEMRE) program at Sandia National Laboratories defined and modified a process for merging different types of integrated research products (IRPs) from various researchers into a cohesive, well-organized collection known as the NNSA Knowledge Base, to support operational treaty monitoring. This process includes defining the KB structure, systematically and logically aggregating IRPs into a complete set, and verifying and validating that the integrated Knowledge Base works as expected.

  17. Analysis of molecular expression patterns and integration with other knowledge bases using probabilistic Bayesian network models

    SciTech Connect

    Moler, Edward J.; Mian, I.S.

    2000-03-01

    How can molecular expression experiments be interpreted with more than 10^4 measurements per chip? How can one get the most quantitative information possible from the experimental data with good confidence? These are important questions whose solutions require an interdisciplinary combination of molecular and cellular biology, computer science, statistics, and complex systems analysis. The explosion of data from microarray techniques presents the problem of interpreting the experiments. The availability of large-scale knowledge bases provides the opportunity to maximize the information extracted from these experiments. We have developed new methods of discovering biological function, metabolic pathways, and regulatory networks from these data and knowledge bases. These techniques are applicable to analyses for biomedical engineering, clinical, and fundamental cell and molecular biology studies. Our approach uses probabilistic, computational methods that give quantitative interpretations of data in a biological context. We have selected Bayesian statistical models with graphical network representations as a framework for our methods. As a first step, we use a naïve Bayesian classifier to identify statistically significant patterns in gene expression data. We have developed methods which allow us to (a) characterize which genes or experiments distinguish each class from the others, (b) cross-index the resulting classes with other databases to assess biological meaning of the classes, and (c) display a gross overview of cellular dynamics. We have developed a number of visualization tools to convey the results. We report here our methods of classification and our first attempts at integrating the data and other knowledge bases together with new visualization tools. We demonstrate the utility of these methods and tools by applying them to a series of yeast cDNA microarray data and to a set of cancerous/normal sample data from colon cancer patients. We discuss
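    The first analysis step named above, a naïve Bayesian classifier over expression patterns, can be sketched on synthetic data; the two-class structure and all numbers below are invented for illustration, not taken from the study.

```python
import numpy as np

# Gaussian naive Bayes: each gene is treated as conditionally independent
# given the class. Synthetic stand-ins for microarray expression profiles:
rng = np.random.default_rng(1)
n_genes = 50
tumor = rng.normal(loc=1.0, size=(30, n_genes))    # class 0 samples
normal = rng.normal(loc=-1.0, size=(30, n_genes))  # class 1 samples
X = np.vstack([tumor, normal])
y = np.array([0] * 30 + [1] * 30)

def fit(X, y):
    """Per-class mean, variance, and prior for each gene."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-9, len(Xc) / len(X))
    return params

def predict(params, x):
    def log_post(c):
        mu, var, prior = params[c]
        # log class prior + sum of per-gene Gaussian log-likelihoods
        return np.log(prior) - 0.5 * np.sum(np.log(2 * np.pi * var)
                                            + (x - mu) ** 2 / var)
    return max(params, key=log_post)

params = fit(X, y)
new_sample = rng.normal(loc=1.0, size=n_genes)     # resembles the tumor class
print(predict(params, new_sample))                 # → 0
```

    The per-gene means learned by `fit` also serve point (a) of the abstract: genes whose class means differ most are the ones that distinguish the classes.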

  18. Aquatic ecology of the Elwha River estuary prior to dam removal: Chapter 7 in Coastal habitats of the Elwha River, Washington--biological and physical patterns and processes prior to dam removal

    USGS Publications Warehouse

    Duda, Jeffrey J.; Beirne, Matthew M.; Larsen, Kimberly; Barry, Dwight; Stenberg, Karl; McHenry, Michael L.; Duda, Jeffrey J.; Warrick, Jonathan A.; Magirl, Christopher S.

    2011-01-01

    The removal of two long-standing dams on the Elwha River in Washington State will initiate a suite of biological and physical changes to the estuary at the river mouth. Estuaries represent a transition between freshwater and saltwater, have unique assemblages of plants and animals, and are a critical habitat for some salmon species as they migrate to the ocean. This chapter summarizes a number of studies in the Elwha River estuary, and focuses on physical and biological aspects of the ecosystem that are expected to change following dam removal. Included are data sets that summarize (1) water chemistry samples collected over a 16 month period; (2) beach seining activities targeted toward describing the fish assemblage of the estuary and migratory patterns of juvenile salmon; (3) descriptions of the aquatic and terrestrial invertebrate communities in the estuary, which represent an important food source for juvenile fish and are important water quality indicators; and (4) the diet and growth patterns of juvenile Chinook salmon in the lower Elwha River and estuary. These data represent baseline conditions of the ecosystem after nearly a century of changes due to the dams and will be useful in monitoring the changes to the river and estuary following dam removal.

  19. Case-Based Tutoring from a Medical Knowledge Base

    PubMed Central

    Chin, Homer L.

    1988-01-01

    The past decade has seen the emergence of programs that make use of large knowledge bases to assist physicians in diagnosis within the general field of internal medicine. One such program, Internist-I, contains knowledge about over 600 diseases, covering a significant proportion of internal medicine. This paper describes the process of converting a subset of this knowledge base--in the area of cardiovascular diseases--into a probabilistic format, and the use of this resulting knowledge base to teach medical diagnostic knowledge. The system (called KBSimulator--for Knowledge-Based patient Simulator) generates simulated patient cases and uses these cases as a focal point from which to teach medical knowledge. It interacts with the student in a mixed-initiative fashion, presenting patients for the student to diagnose, and allowing the student to obtain further information on his/her own initiative in the context of that patient case. The system scores the student, and uses these scores to form a rudimentary model of the student. This resulting model of the student is then used to direct the generation of subsequent patient cases. This project demonstrates the feasibility of building an intelligent, flexible instructional system that uses a knowledge base constructed primarily for medical diagnosis.

  20. Comparing contents of a knowledge base to traditional information sources.

    PubMed Central

    Giuse, N. B.; Giuse, D. A.; Bankowitz, R. A.; Miller, R. A.

    1993-01-01

    Physicians rely on the medical literature as a major source of medical knowledge and data. The medical literature, however, is continually evolving and represents different sources at different levels of coverage and detail. The recent development of computerized medical knowledge bases has added a new form of information that can potentially be used to address the practicing physician's information needs. To understand how the information from various sources differs, we compared the description of a disease found in the QMR knowledge base to those found in two general internal medicine textbooks and two specialized nephrology textbooks. The study shows both differences in coverage and differences in the level of detail. Textbooks contain information about pathophysiology and therapy that is not present in the diagnostic knowledge base. The knowledge base contains a more detailed description of the associated findings, more quantitative information, and a greater number of references to peer-reviewed medical articles. The study demonstrates that computerized knowledge bases, if properly constructed, may be able to provide clinicians with a useful new source of medical knowledge that is complementary to existing sources. PMID:8130550

  1. Evaluation of database technologies for the CTBT Knowledge Base prototype

    SciTech Connect

    Keyser, R.; Shepard-Dombroski, E.; Baur, D.; Hipp, J.; Moore, S.; Young, C.; Chael, E.

    1996-11-01

    This document examines a number of different software technologies in the rapidly changing field of database management systems, evaluates these systems in light of the expected needs of the Comprehensive Test Ban Treaty (CTBT) Knowledge Base, and makes some recommendations for the initial prototypes of the Knowledge Base. The Knowledge Base requirements are examined and then used as criteria for evaluation of the database management options. A mock-up of the data expected in the Knowledge Base is used as a basis for examining how four different database technologies deal with the problems of storing and retrieving the data. Based on these requirements and the results of the evaluation, the recommendation is that the Illustra database be considered for the initial prototype of the Knowledge Base. Illustra offers a unique blend of performance, flexibility, and features that will aid in the implementation of the prototype. At the same time, Illustra provides a high level of compatibility with the hardware and software environments present at the US NDC (National Data Center) and the PIDC (Prototype International Data Center).

  2. Integrating knowledge-based techniques into well-test interpretation

    SciTech Connect

    Harrison, I.W.; Fraser, J.L.

    1995-04-01

    The goal of the Spirit Project was to develop a prototype of next-generation well-test-interpretation (WTI) software that would include knowledge-based decision support for the WTI model selection task. This paper describes how Spirit makes use of several different types of information (pressure, seismic, petrophysical, geological, and engineering) to support the user in identifying the most appropriate WTI model. Spirit's knowledge-based approach to type-curve matching is to generate several different feasible interpretations by making assumptions about the possible presence of both wellbore storage and late-time boundary effects. Spirit fuses information from type-curve matching and other data sources by use of a knowledge-based decision model developed in collaboration with a WTI expert. The sponsors of the work have judged the resulting prototype system a success.

  3. Enhancing acronym/abbreviation knowledge bases with semantic information.

    PubMed

    Torii, Manabu; Liu, Hongfang

    2007-10-11

    In the biomedical domain, a terminology knowledge base that associates acronyms/abbreviations (denoted as SFs) with their definitions (denoted as LFs) is much needed. To construct such a terminology knowledge base, we investigate the feasibility of building a system that automatically assigns semantic categories to LFs extracted from text. Given a collection of pairs (SF,LF) derived from text, we i) assess the coverage of LFs and pairs (SF,LF) in the UMLS and justify the need of a semantic category assignment system; and ii) automatically derive name phrases annotated with semantic category and construct a system using machine learning. Utilizing ADAM, an existing collection of (SF,LF) pairs extracted from MEDLINE, our system achieved an f-measure of 87% when assigning eight UMLS-based semantic groups to LFs. The system has been incorporated into a web interface which integrates SF knowledge from multiple SF knowledge bases. Web site: http://gauss.dbb.georgetown.edu/liblab/SFThesurus.

  4. A Knowledge-Based Approach To Planning And Scheduling

    NASA Astrophysics Data System (ADS)

    Gilmore, John F.; Williams, D. Lamont; Thornton, Sheila

    1989-03-01

    Analyses of the shop scheduling domain indicate the objective of scheduling is the determination and satisfaction of a large number of diverse constraints. Many researchers have explored the possibilities of scheduling with the assistance of dispatching rules, algorithms, heuristics and knowledge-based systems. This paper describes the development of an experimental knowledge-based planning and scheduling system which marries traditional planning and scheduling algorithms with a knowledge-based problem solving methodology in an integrated blackboard architecture. This system embodies scheduling methods and techniques which attempt to minimize one or a combination of scheduling parameters including completion time, average completion time, lateness, tardiness, and flow time. Preliminary results utilizing a test case factory involved in part production are presented.

  5. The browser prototype for the CTBT knowledge base

    SciTech Connect

    Armstrong, H.M.; Keyser, R.G.

    1997-07-02

    As part of the United States Department of Energy's (DOE) Comprehensive Test Ban Treaty (CTBT) research and development effort, a Knowledge Base is being developed. This Knowledge Base will store the regional geophysical research results as well as geographic contextual information and make this information available to Automated Data Processing (ADP) routines as well as to human analysts involved in CTBT monitoring. This paper focuses on the initial development of a browser prototype to be used to interactively examine the contents of the CTBT Knowledge Base. The browser prototype is intended to be a research tool to experiment with different ways to display and integrate the datasets. An initial prototype version has been developed using Environmental Systems Research Incorporated's (ESRI) ARC/INFO Geographic Information System (GIS) product. The conceptual requirements, design, initial implementation, current status, and future work plans are discussed. 4 refs., 2 figs.

  6. A Knowledge-Based System Developer for aerospace applications

    NASA Technical Reports Server (NTRS)

    Shi, George Z.; Wu, Kewei; Fensky, Connie S.; Lo, Ching F.

    1993-01-01

    A prototype Knowledge-Based System Developer (KBSD) has been developed for aerospace applications by utilizing artificial intelligence technology. The KBSD directly acquires knowledge from domain experts through a graphical interface and then builds expert systems from that knowledge. This raises the state of the art of knowledge acquisition/expert system technology to a new level by lessening the need for skilled knowledge engineers. The feasibility, applicability, and efficiency of the proposed concept were established, justifying a continuation that would develop the prototype into a full-scale, general-purpose knowledge-based system developer. The KBSD has great commercial potential. It will provide a marketable software shell which alleviates the need for knowledge engineers and increases productivity in the workplace. The KBSD will therefore make knowledge-based systems available to a large portion of industry.

  8. Openness to and preference for attributes of biologic therapy prior to initiation among patients with rheumatoid arthritis: patient and rheumatologist perspectives and implications for decision making

    PubMed Central

    Bolge, Susan C; Goren, Amir; Brown, Duncan; Ginsberg, Seth; Allen, Isabel

    2016-01-01

    Purpose Despite American College of Rheumatology recommendations, appropriate and timely initiation of biologic therapies does not always occur. This study examined openness to and preference for attributes of biologic therapies among patients with rheumatoid arthritis (RA), differences in patients’ and rheumatologists’ perceptions, and discussions around biologic therapy initiation. Patients and methods A self-administered online survey was completed by 243 adult patients with RA in the US who were taking disease-modifying antirheumatic drugs (DMARDs) and had never taken, but had discussed biologic therapy with a rheumatologist. Patients were recruited from a consumer panel (n=142) and patient advocacy organization (n=101). A separate survey was completed by 103 rheumatologists who treated at least 25 patients with RA per month with biologic therapy. Descriptive and bivariate analyses were conducted separately for patients and rheumatologists. Attributes of biologic therapy included route of administration (intravenous infusion or subcutaneous injection), frequency of injections/infusions, and duration of infusion. Results Over half of patients (53.1%) were open to both intravenous infusion and subcutaneous injection, whereas rheumatologists reported 40.7% of patients would be open to both. Only 26.3% of patients strongly preferred subcutaneous injection, whereas rheumatologists reported 35.2%. Discrepancies were even more pronounced among specific patient types (eg, older vs younger patients and Medicare recipients). Among patients, 23% reported initiating discussion about biologics and 54% reported their rheumatologist initiated the discussion. A majority of rheumatologists reported discussing in detail several key aspects of biologics, whereas a minority of patients reported the same. Conclusion Preferences differed among patients with RA from rheumatologists’ perceptions of these preferences for biologic therapy, including greater openness to intravenous

  9. Knowledge based systems: From process control to policy analysis

    SciTech Connect

    Marinuzzi, J.G.

    1993-06-01

    Los Alamos has been pursuing the use of Knowledge Based Systems for many years. These systems are currently being used to support projects that range across many production and operations areas. By investing time and money in people and equipment, Los Alamos has developed one of the strongest knowledge based systems capabilities within the DOE. Staff of Los Alamos' Mechanical & Electronic Engineering Division are using these knowledge systems to increase capability, productivity and competitiveness in areas of manufacturing quality control, robotics, process control, plant design and management decision support. This paper describes some of these projects and associated technical program approaches, accomplishments, benefits and future goals.

  10. Knowledge based systems: From process control to policy analysis

    SciTech Connect

    Marinuzzi, J.G.

    1993-01-01

    Los Alamos has been pursuing the use of Knowledge Based Systems for many years. These systems are currently being used to support projects that range across many production and operations areas. By investing time and money in people and equipment, Los Alamos has developed one of the strongest knowledge based systems capabilities within the DOE. Staff of Los Alamos' Mechanical & Electronic Engineering Division are using these knowledge systems to increase capability, productivity and competitiveness in areas of manufacturing quality control, robotics, process control, plant design and management decision support. This paper describes some of these projects and associated technical program approaches, accomplishments, benefits and future goals.

  11. Arranging ISO 13606 archetypes into a knowledge base.

    PubMed

    Kopanitsa, Georgy

    2014-01-01

    To enable the efficient reuse of standard-based medical data, we propose to develop a higher-level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analyzed for their ability to be applied in the implementation of a higher-level model that establishes relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. Presently, it supports several relationships and will be extended in the future.
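    A minimal sketch of how typed relationships might link archetypes into a knowledge base, as the abstract describes. The class, the relationship names ("specializes", "uses"), and the archetype identifiers are illustrative assumptions, not part of the ISO 13606 model or the authors' implementation (which used XML Schema notation):

```python
# Toy knowledge base of archetypes joined by typed links, loosely
# modeled on UML connectors. Names are illustrative assumptions.

class ArchetypeKB:
    def __init__(self):
        self.links = []                      # (source, relation, target)

    def add_link(self, source, relation, target):
        self.links.append((source, relation, target))

    def related(self, source, relation):
        """Return all archetypes linked from `source` by `relation`."""
        return [t for s, r, t in self.links if s == source and r == relation]

kb = ArchetypeKB()
kb.add_link("BloodPressure", "specializes", "Observation")
kb.add_link("BloodPressure", "uses", "Cuff")
print(kb.related("BloodPressure", "specializes"))  # ['Observation']
```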

  12. Design of a knowledge-based report generator

    SciTech Connect

    Kukich, K.

    1983-01-01

    Knowledge-based report generation is a technique for automatically generating natural language reports from computer databases. It is so named because it applies knowledge-based expert systems software to the problem of text generation. The first application of the technique, a system for generating natural language stock reports from a daily stock quotes database, is partially implemented. Three fundamental principles of the technique are its use of domain-specific semantic and linguistic knowledge, its use of macro-level semantic and linguistic constructs (such as whole messages, a phrasal lexicon, and a sentence-combining grammar), and its production system approach to knowledge representation. 14 references.
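    The macro-level approach described above (whole messages drawn from a phrasal lexicon, selected by production rules) can be illustrated with a toy sketch; every name, phrase, and threshold here is an invented assumption, not the stock-report generator's actual code:

```python
# Hypothetical production-system text generation: condition -> phrase
# rules over a day's stock-quote record; the first matching rule fires.
# All symbols, phrases and thresholds are illustrative assumptions.

def generate_report(quote):
    """Apply simple condition->phrase production rules to a day's data."""
    rules = [
        (lambda q: q["close"] > q["open"] * 1.02,
         "{symbol} climbed sharply, closing at {close}."),
        (lambda q: q["close"] > q["open"],
         "{symbol} edged upward to close at {close}."),
        (lambda q: q["close"] < q["open"],
         "{symbol} slipped, finishing at {close}."),
    ]
    for condition, phrase in rules:
        if condition(quote):           # first matching production fires
            return phrase.format(**quote)
    return "{symbol} closed unchanged at {close}.".format(**quote)

print(generate_report({"symbol": "XYZ", "open": 100.0, "close": 103.5}))
```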

  13. Online Knowledge-Based Model for Big Data Topic Extraction

    PubMed Central

    Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan

    2016-01-01

    Lifelong machine learning (LML) models learn with experience maintaining a knowledge-base, without user intervention. Unlike traditional single-domain models they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes online LML model (OAMC) to support streaming data with reduced data dependency. With engineering the knowledge-base and introducing new knowledge features the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy as topic coherence by 7% for streaming data while reducing the processing cost to half. PMID:27195004
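    A toy illustration of the lifelong-learning idea (retaining a knowledge base across streaming chunks). This stand-in uses word co-occurrence counts as the retained knowledge and is not the OAMC algorithm itself; the documents are invented:

```python
# Retain knowledge (word co-occurrence counts) from earlier streaming
# chunks and fold each new chunk into the same persistent store.
from collections import Counter
from itertools import combinations

knowledge_base = Counter()                   # persists across chunks

def process_chunk(docs):
    """Fold one streaming chunk's co-occurrence evidence into the KB."""
    for doc in docs:
        for pair in combinations(sorted(set(doc.split())), 2):
            knowledge_base[pair] += 1

process_chunk(["battery life phone", "phone battery drains"])
process_chunk(["battery life short"])        # later chunk reuses prior KB
print(knowledge_base[("battery", "life")])   # evidence accumulated: 2
```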

  14. "Chromosome": a knowledge-based system for the chromosome classification.

    PubMed

    Ramstein, G; Bernadet, M

    1993-01-01

    Chromosome, a knowledge-based analysis system, has been designed for the classification of human chromosomes. Its aim is to perform an optimal classification by driving a tool box containing procedures for image processing, pattern recognition and classification. This paper presents the general architecture of Chromosome, based on a multiagent system generator. The image processing tool box is described, from metaphasic enhancement to the fine classification. Emphasis is then put on the knowledge base intended for chromosome recognition. The global classification process is also presented, showing how Chromosome proceeds to classify a given chromosome. Finally, we discuss further extensions of the system for karyotype building.

  15. Towards building a disease-phenotype knowledge base: extracting disease-manifestation relationship from literature

    PubMed Central

    Xu, Rong; Li, Li; Wang, QuanQiu

    2013-01-01

    Motivation: Systems approaches to studying phenotypic relationships among diseases are emerging as an active area of research for both novel disease gene discovery and drug repurposing. Currently, systematic study of disease phenotypic relationships on a phenome-wide scale is limited because large-scale machine-understandable disease–phenotype relationship knowledge bases are often unavailable. Here, we present an automatic approach to extract disease–manifestation (D-M) pairs (one specific type of disease–phenotype relationship) from the wide body of published biomedical literature. Data and Methods: Our method leverages external knowledge and limits the amount of human effort required. For the text corpus, we used 119 085 682 MEDLINE sentences (21 354 075 citations). First, we used D-M pairs from existing biomedical ontologies as prior knowledge to automatically discover D-M–specific syntactic patterns. We then extracted additional pairs from MEDLINE using the learned patterns. Finally, we analysed correlations between disease manifestations and disease-associated genes and drugs to demonstrate the potential of this newly created knowledge base in disease gene discovery and drug repurposing. Results: In total, we extracted 121 359 unique D-M pairs with a high precision of 0.924. Among the extracted pairs, 120 419 (99.2%) have not been captured in existing structured knowledge sources. We have shown that disease manifestations correlate positively with both disease-associated genes and drug treatments. Conclusions: The main contribution of our study is the creation of a large-scale and accurate D-M phenotype relationship knowledge base. This unique knowledge base, when combined with existing phenotypic, genetic and proteomic datasets, can have profound implications in our deeper understanding of disease etiology and in rapid drug repurposing. Availability: http://nlp.case.edu/public/data/DMPatternUMLS/ Contact: rxx@case.edu PMID:23828786
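    The bootstrapping method described (seed D-M pairs, induce connector patterns, extract new pairs) can be sketched on invented sentences. The pattern here is purely lexical for brevity; the real system learned syntactic patterns over roughly 119 million MEDLINE sentences:

```python
import re

# Step 1: induce a connector pattern from a known disease-manifestation
# pair. Step 2: apply the pattern to extract new candidate pairs.
# Sentences and the seed pair are invented for illustration.
seed_pairs = [("marfan syndrome", "aortic dilatation")]
corpus = [
    "marfan syndrome is characterized by aortic dilatation",
    "addison disease is characterized by fatigue",
]

patterns = set()
for disease, manifestation in seed_pairs:
    for sent in corpus:
        if disease in sent and manifestation in sent:
            between = sent.split(disease)[1].split(manifestation)[0].strip()
            patterns.add(between)            # e.g. "is characterized by"

extracted = set()
for pat in patterns:
    for sent in corpus:
        m = re.match(rf"(.+?) {re.escape(pat)} (.+)", sent)
        if m:
            extracted.add((m.group(1), m.group(2)))

print(sorted(extracted))
```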

  16. Ada as an implementation language for knowledge based systems

    NASA Technical Reports Server (NTRS)

    Rochowiak, Daniel

    1990-01-01

    Debates about the selection of programming languages often produce cultural collisions that are not easily resolved. This is especially true in the case of Ada and knowledge based programming. The construction of programming tools provides a desirable alternative for resolving the conflict.

  17. A knowledge-based decision support system for payload scheduling

    NASA Technical Reports Server (NTRS)

    Tyagi, Rajesh; Tseng, Fan T.

    1988-01-01

    This paper presents a prototype Knowledge-based Decision Support System (DSS), currently under development, for scheduling payloads/experiments on space station missions. The DSS is being built on Symbolics, a Lisp machine, using KEE, a commercial knowledge engineering tool.

  18. Intelligent Tools for Planning Knowledge base Development and Verification

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems.

  19. Towards an Intelligent Planning Knowledge Base Development Environment

    NASA Technical Reports Server (NTRS)

    Chien, S.

    1994-01-01

    This abstract describes work in developing knowledge base editing and debugging tools for the Multimission VICAR Planner (MVP) system. MVP uses artificial intelligence planning techniques to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing steps) in response to image processing requests made to the JPL Multimission Image Processing Laboratory.

  20. SCU at TREC 2014 Knowledge Base Acceleration Track

    DTIC Science & Technology

    2014-11-01

    SCU at TREC 2014 Knowledge Base Acceleration Track. Hung Nguyen, Yi Fang. Department of Computer Engineering, Santa Clara University, 500 El Camino Real, Santa Clara, CA 95053.

  1. KBGIS-2: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, T.; Peuquet, D.; Menon, S.; Agarwal, P.

    1986-01-01

    The architecture and working of a recently implemented knowledge-based geographic information system (KBGIS-2) that was designed to satisfy several general criteria for the geographic information system are described. The system has four major functions that include query-answering, learning, and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial objects language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is currently performing all its designated tasks successfully, although currently implemented on inadequate hardware. Future reports will detail the performance characteristics of the system, and various new extensions are planned in order to enhance the power of KBGIS-2.
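    The quadtree representation mentioned above supports spatial search that prunes whole quadrants, which is what keeps the average-case cost down. A toy point-query sketch, not KBGIS-2 code; coordinates and the capacity parameter are illustrative:

```python
# Region quadtree over points: subdivide until each node holds at most
# `capacity` points, then answer window queries by pruning quadrants
# that cannot intersect the query window.

class QuadTree:
    def __init__(self, x, y, size, points, capacity=1):
        self.bounds = (x, y, size)
        if len(points) <= capacity or size <= 1:
            self.points, self.children = points, []
            return
        half = size / 2
        self.points = []
        self.children = [
            QuadTree(x + dx, y + dy, half,
                     [p for p in points
                      if x + dx <= p[0] < x + dx + half
                      and y + dy <= p[1] < y + dy + half],
                     capacity)
            for dx in (0, half) for dy in (0, half)]

    def query(self, qx, qy, qsize):
        """Collect points inside the query window, pruning quadrants."""
        x, y, size = self.bounds
        if qx >= x + size or qy >= y + size or qx + qsize <= x or qy + qsize <= y:
            return []                        # quadrant disjoint from window
        found = [p for p in self.points
                 if qx <= p[0] < qx + qsize and qy <= p[1] < qy + qsize]
        for child in self.children:
            found += child.query(qx, qy, qsize)
        return found

tree = QuadTree(0, 0, 8, [(1, 1), (6, 6), (7, 2)])
print(sorted(tree.query(4, 0, 4)))           # only the south-east quadrant
```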

  2. Development of a Knowledge Base for Incorporating Technology into Courses

    ERIC Educational Resources Information Center

    Rath, Logan

    2013-01-01

    This article discusses a project resulting from the request of a group of faculty at The College at Brockport to create a website for best practices in teaching and technology. The project evolved into a knowledge base powered by WordPress. Installation and configuration of WordPress resulted in the creation of custom taxonomies and post types,…

  3. Value Creation in the Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Liu, Fang-Chun

    2013-01-01

    Effective investment strategies help companies form dynamic core organizational capabilities allowing them to adapt and survive in today's rapidly changing knowledge-based economy. This dissertation investigates three valuation issues that challenge managers with respect to developing business-critical investment strategies that can have…

  4. Grey Documentation as a Knowledge Base in Social Work.

    ERIC Educational Resources Information Center

    Berman, Yitzhak

    1994-01-01

    Defines grey documentation as documents issued informally and not available through normal channels and discusses the role that grey documentation can play in the social work knowledge base. Topics addressed include grey documentation and science; social work and the empirical approach in knowledge development; and dissemination of grey…

  5. Dynamic Strategic Planning in a Professional Knowledge-Based Organization

    ERIC Educational Resources Information Center

    Olivarius, Niels de Fine; Kousgaard, Marius Brostrom; Reventlow, Susanne; Quelle, Dan Grevelund; Tulinius, Charlotte

    2010-01-01

    Professional, knowledge-based institutions have a particular form of organization and culture that makes special demands on the strategic planning supervised by research administrators and managers. A model for dynamic strategic planning based on a pragmatic utilization of the multitude of strategy models was used in a small university-affiliated…

  6. Knowledge Based Engineering for Spatial Database Management and Use

    NASA Technical Reports Server (NTRS)

    Peuquet, D. (Principal Investigator)

    1984-01-01

    The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) are examined. Questions involving the performance and modification to the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.

  7. CommonKADS models for knowledge-based planning

    SciTech Connect

    Kingston, J.; Shadbolt, N.; Tate, A.

    1996-12-31

    The CommonKADS methodology is a collection of structured methods for building knowledge-based systems. A key component of CommonKADS is the library of generic inference models which can be applied to tasks of specified types. These generic models can either be used as frameworks for knowledge acquisition, or to verify the completeness of models developed by analysis of the domain. However, the generic models for some task types, such as knowledge-based planning, are not well-developed. Since knowledge-based planning is an important commercial application of Artificial Intelligence, there is a clear need for the development of generic models for planning tasks. Many of the generic models which currently exist have been derived from modelling of existing AI systems. These models have the strength of proven applicability. There are a number of well-known and well-tried Al planning systems in existence; one of the best known is the Open Planning Architecture (O-Plan). This paper describes the development of a CommonKADS generic inference model for knowledge-based planning tasks, based on the capabilities of the O-Plan system. The paper also describes the verification of this model in the context of a real-life planning task: the assignment and management of Royal Air Force Search and Rescue operations.

  8. Toffler's Powershift: Creating New Knowledge Bases in Higher Education.

    ERIC Educational Resources Information Center

    Powers, Patrick James

    This paper examines the creation of new knowledge bases in higher education in light of the ideas of Alvin Toffler, whose trilogy "Future Shock" (1970), "The Third Wave" (1980), and "Powershift" (1990) focus on the processes, directions, and control of change, respectively. It discusses the increasingly important role…

  9. Spinning Fantasy: Themes, Structure, and the Knowledge Base.

    ERIC Educational Resources Information Center

    Lucariello, Joan

    1987-01-01

    Investigated the influence of the child's knowledge base on symbolic play in terms of event schemas. Pretend play of 10 mother-child (ages 24 to 29 months) dyads was observed in novel and free play contexts. Play was examined for thematic content, self-other relations, substitute/imaginary objects, action integration, and planfulness. (Author/BN)

  10. Designing a Knowledge Base for Automatic Book Classification.

    ERIC Educational Resources Information Center

    Kim, Jeong-Hyen; Lee, Kyung-Ho

    2002-01-01

    Reports on the design of a knowledge base for an automatic classification in the library science field by using the facet classification principles of colon classification. Discusses inputting titles or key words into the computer to create class numbers through automatic subject recognition and processing title key words. (Author/LRW)

  11. Malaysia Transitions toward a Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Mustapha, Ramlee; Abdullah, Abu

    2004-01-01

    The emergence of a knowledge-based economy (k-economy) has spawned a "new" notion of workplace literacy, changing the relationship between employers and employees. The traditional covenant where employees expect a stable or lifelong employment will no longer apply. The retention of employees will most probably be based on their skills…

  12. PLAN-IT - Knowledge-based mission sequencing

    NASA Technical Reports Server (NTRS)

    Biefeld, Eric W.

    1987-01-01

    PLAN-IT (Plan-Integrated Timelines), a knowledge-based approach to assist in mission sequencing, is discussed. PLAN-IT uses a large set of scheduling techniques known as strategies to develop and maintain a mission sequence. The approach implemented by PLAN-IT and the current applications of PLAN-IT for sequencing at NASA are reported.

  13. Designing a Knowledge Base for Automatic Book Classification.

    ERIC Educational Resources Information Center

    Kim, Jeong-Hyen; Lee, Kyung-Ho

    2002-01-01

    Reports on the design of a knowledge base for an automatic classification in the library science field by using the facet classification principles of colon classification. Discusses inputting titles or key words into the computer to create class numbers through automatic subject recognition and processing title key words. (Author/LRW)

  14. CACTUS: Command and Control Training Using Knowledge-Based Simulations

    ERIC Educational Resources Information Center

    Hartley, Roger; Ravenscroft, Andrew; Williams, R. J.

    2008-01-01

    The CACTUS project was concerned with command and control training of large incidents where public order may be at risk, such as large demonstrations and marches. The training requirements and objectives of the project are first summarized justifying the use of knowledge-based computer methods to support and extend conventional training…

  15. Planning and Implementing a High Performance Knowledge Base.

    ERIC Educational Resources Information Center

    Cortez, Edwin M.

    1999-01-01

    Discusses the conceptual framework for developing a rapid-prototype high-performance knowledge base for the four mission agencies of the United States Department of Agriculture and their university partners. Describes the background of the project and methods used for establishing the requirements; examines issues and problems surrounding semantic…

  16. KBGIS-II: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, Terence; Peuquet, Donna; Menon, Sudhakar; Agarwal, Pankaj

    1986-01-01

    The architecture and working of a recently implemented Knowledge-Based Geographic Information System (KBGIS-II), designed to satisfy several general criteria for the GIS, is described. The system has four major functions including query-answering, learning and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial object language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is performing all its designated tasks successfully. Future reports will relate performance characteristics of the system.

  17. Common Sense about Uncommon Knowledge: The Knowledge Bases for Diversity.

    ERIC Educational Resources Information Center

    Smith, G. Pritchy

    This book explains knowledge bases for teaching diverse student populations. An introduction displays one first-year teacher's experiences with diverse students in a high school classroom in San Angelo, Texas in 1961. The 15 chapters are: (1) "Toward Defining Culturally Responsible and Responsive Teacher Education"; (2) "Knowledge…

  18. Planning and Implementing a High Performance Knowledge Base.

    ERIC Educational Resources Information Center

    Cortez, Edwin M.

    1999-01-01

    Discusses the conceptual framework for developing a rapid-prototype high-performance knowledge base for the four mission agencies of the United States Department of Agriculture and their university partners. Describes the background of the project and methods used for establishing the requirements; examines issues and problems surrounding semantic…

  19. Document Retrieval Using A Fuzzy Knowledge-Based System

    NASA Astrophysics Data System (ADS)

    Subramanian, Viswanath; Biswas, Gautam; Bezdek, James C.

    1986-03-01

    This paper presents the design and development of a prototype document retrieval system using a knowledge-based systems approach. Both the domain-specific knowledge base and the inferencing schemes are based on a fuzzy set theoretic framework. A query in natural language represents a request to retrieve a relevant subset of documents from a document base. Such a query, which can include both fuzzy terms and fuzzy relational operators, is converted into an unambiguous intermediate form by a natural language interface. Concepts that describe domain topics and the relationships between concepts, such as the synonym relation and the implication relation between a general concept and more specific concepts, have been captured in a knowledge base. The knowledge base enables the system to emulate the reasoning process followed by an expert, such as a librarian, in understanding and reformulating user queries. The retrieval mechanism processes the query in two steps. First it produces a pruned list of documents pertinent to the query. Second, it uses an evidence combination scheme to compute a degree of support between the query and individual documents produced in step one. The front-end component of the system then presents a set of document citations to the user in ranked order as an answer to the information request.
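    The evidence-combination step can be illustrated with a minimal fuzzy-set sketch in which each document carries a membership grade per concept and a conjunctive query takes the minimum grade. Grades and concepts are invented, and the actual system's combination scheme may well differ:

```python
# Fuzzy retrieval toy: membership grades per concept, conjunctive
# queries combined with min (fuzzy AND), documents ranked by support.

docs = {
    "doc1": {"expert systems": 0.9, "databases": 0.3},
    "doc2": {"expert systems": 0.4, "databases": 0.8},
}

def support(doc, and_terms):
    """Degree of support for a conjunctive query: min of memberships."""
    return min(doc.get(t, 0.0) for t in and_terms)

query = ["expert systems", "databases"]
ranked = sorted(docs, key=lambda d: support(docs[d], query), reverse=True)
print(ranked, support(docs[ranked[0]], query))
```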

  20. Knowledge-Based Hierarchies: Using Organizations to Understand the Economy

    ERIC Educational Resources Information Center

    Garicano, Luis; Rossi-Hansberg, Esteban

    2015-01-01

    Incorporating the decision of how to organize the acquisition, use, and communication of knowledge into economic models is essential to understand a wide variety of economic phenomena. We survey the literature that has used knowledge-based hierarchies to study issues such as the evolution of wage inequality, the growth and productivity of firms,…

  1. Knowledge-Based Hierarchies: Using Organizations to Understand the Economy

    ERIC Educational Resources Information Center

    Garicano, Luis; Rossi-Hansberg, Esteban

    2015-01-01

    Incorporating the decision of how to organize the acquisition, use, and communication of knowledge into economic models is essential to understand a wide variety of economic phenomena. We survey the literature that has used knowledge-based hierarchies to study issues such as the evolution of wage inequality, the growth and productivity of firms,…

  2. Knowledge-Based Aid: A Four Agency Comparative Study

    ERIC Educational Resources Information Center

    McGrath, Simon; King, Kenneth

    2004-01-01

    Part of the response of many development cooperation agencies to the challenges of globalisation, ICTs and the knowledge economy is to emphasise the importance of knowledge for development. This paper looks at the discourses and practices of ''knowledge-based aid'' through an exploration of four agencies: the World Bank, DFID, Sida and JICA. It…

  3. Desperately seeking data: knowledge base-database links.

    PubMed Central

    Hripcsak, G.; Johnson, S. B.; Clayton, P. D.

    1993-01-01

    Linking a knowledge-based system (KBS) to a clinical database is a difficult task, but critical if such systems are to achieve widespread use. The Columbia-Presbyterian Medical Center's clinical event monitor provides alerts, interpretations, research screening, and quality assurance functions for the center. Its knowledge base consists of Arden Syntax Medical Logic Modules (MLMs). The knowledge base was analyzed in order to quantify the use and impact of KBS-database links. The MLM data slot, which contains the definition of these links, had almost as many statements (5.8 vs. 8.8, ns with p = 0.15) and more tokens (122 vs. 76, p = 0.037) than the logic slot, which contains the actual medical knowledge. The data slot underwent about twice as many modifications over time as the logic slot (3.0 vs. 1.6 modifications/version, p = 0.010). Database queries and updates accounted for 97.2% of the MLM's total elapsed execution time. Thus, KBS-database links consume substantial resources in an MLM knowledge base, in terms of coding, maintenance, and performance. PMID:8130552
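    A minimal sketch of the slot comparison reported above: an Arden-style MLM split into a data slot (curly-brace database links) and a logic slot, with token counts compared. The MLM fragment is an invented example, not from the Columbia-Presbyterian knowledge base:

```python
# Compare the size of an MLM's data slot (KBS-database links) against
# its logic slot (medical knowledge), echoing the paper's analysis.
# The slot text below is an illustrative Arden-like fragment.

mlm = {
    "data": """last_k := read last {serum potassium};
               storage_flag := read {alert acknowledged};""",
    "logic": "if last_k > 6.0 then conclude true;",
}

def token_count(slot_text):
    return len(slot_text.split())

for slot, text in mlm.items():
    print(slot, token_count(text))           # data slot outweighs logic
```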

  4. Developing a Knowledge Base and Taxonomy in Instructional Technology.

    ERIC Educational Resources Information Center

    Caffarella, Edward P.; Fly, Kenneth

    The purpose of this study was to test the feasibility of using a model adapted from the instructional design and technology (ID&T) taxonomy model proposed by the Association for Educational Communications and Technology (AECT) Definitions and Terminology Committee to build an ID&T knowledge base. The model was tested by mapping a random…

  5. CACTUS: Command and Control Training Using Knowledge-Based Simulations

    ERIC Educational Resources Information Center

    Hartley, Roger; Ravenscroft, Andrew; Williams, R. J.

    2008-01-01

    The CACTUS project was concerned with command and control training of large incidents where public order may be at risk, such as large demonstrations and marches. The training requirements and objectives of the project are first summarized justifying the use of knowledge-based computer methods to support and extend conventional training…

  6. Conventional and Knowledge-Based Information Retrieval with Prolog.

    ERIC Educational Resources Information Center

    Leigh, William; Paz, Noemi

    1988-01-01

    Describes the use of PROLOG to program knowledge-based information retrieval systems, in which the knowledge contained in a document is translated into machine processable logic. Several examples of the resulting search process, and the program rules supporting the process, are given. (10 references) (CLB)

  7. Knowledge Based Engineering for Spatial Database Management and Use

    NASA Technical Reports Server (NTRS)

    Peuquet, D. (Principal Investigator)

    1984-01-01

    The use of artificial intelligence techniques that are applicable to Geographic Information Systems (GIS) are examined. Questions involving the performance and modification to the database structure, the definition of spectra in quadtree structures and their use in search heuristics, extension of the knowledge base, and learning algorithm concepts are investigated.

  8. Development of a Knowledge Base for Incorporating Technology into Courses

    ERIC Educational Resources Information Center

    Rath, Logan

    2013-01-01

    This article discusses a project resulting from the request of a group of faculty at The College at Brockport to create a website for best practices in teaching and technology. The project evolved into a knowledge base powered by WordPress. Installation and configuration of WordPress resulted in the creation of custom taxonomies and post types,…

  9. Value Creation in the Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Liu, Fang-Chun

    2013-01-01

    Effective investment strategies help companies form dynamic core organizational capabilities allowing them to adapt and survive in today's rapidly changing knowledge-based economy. This dissertation investigates three valuation issues that challenge managers with respect to developing business-critical investment strategies that can have…

  10. Knowledge-Based Aid: A Four Agency Comparative Study

    ERIC Educational Resources Information Center

    McGrath, Simon; King, Kenneth

    2004-01-01

    Part of the response of many development cooperation agencies to the challenges of globalisation, ICTs and the knowledge economy is to emphasise the importance of knowledge for development. This paper looks at the discourses and practices of ''knowledge-based aid'' through an exploration of four agencies: the World Bank, DFID, Sida and JICA. It…

  11. Tools for Assembling and Managing Scalable Knowledge Bases

    DTIC Science & Technology

    2003-02-01

    Excerpts: 1.1 Knowledge Translation. … areas of the knowledge base and ontology construction process are outlined in more detail below. … during KB merging operations. 2.2 The Translation Problem (Figure 2: the knowledge translation problem). The general problem we set out to solve is…

  12. Reflections on the knowledge base for obstetric fistula.

    PubMed

    Kelly, J; Winter, H R

    2007-11-01

    This article presents the reflections of an experienced fistula surgeon and an epidemiologist on the current knowledge base for obstetric fistula. The incidence, prevention, and management of vesico-vaginal and recto-vaginal fistula are discussed. The authors call for more randomized controlled trials to determine the effectiveness of surgical interventions for fistula repair.

  13. A knowledge-based information system for monitoring drug levels.

    PubMed

    Wiener, F; Groth, T; Mortimer, O; Hallquist, I; Rane, A

    1989-06-01

    The expert system shell SMR has been enhanced to include information system routines for designing data screens and providing facilities for data entry, storage, retrieval, queries and descriptive statistics. The data used for inference are abstracted from the database record and inserted into a data array, to which the knowledge base is applied to derive the appropriate advice and comments. The enhanced system has been used to develop an intelligent information system for monitoring serum drug levels, which includes evaluation of temporal changes and production of specialized printed reports. The module for digoxin has been fully developed and validated. To demonstrate the extension to other drugs, a module for phenytoin was constructed with only a rudimentary knowledge base. Data from the request forms, together with the S-digoxin results, are entered into the database by the department secretary. The day's results are then reviewed by the clinical pharmacologist. For each case, previous results may be displayed and are taken into account by the system in the decision process. The knowledge base is applied to the data to formulate an evaluative comment on the report returned to the requestor. The report includes a semi-graphic presentation of the current and previous results and either the system's interpretation or one entered by the pharmacologist if he does not agree with it. The pharmacologist's comment is also recorded in the database for future retrieval, analysis and possible updating of the knowledge base. The system is now undergoing testing and evaluation under routine operation in the clinical pharmacology service. It is a prototype for other applications in both laboratory and clinical medicine currently under development at Uppsala University Hospital. This system may thus provide a vehicle for a more intensive penetration of knowledge-based systems into practical medical applications.
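    In the spirit of the digoxin module described above, a hedged sketch of a knowledge-base rule producing an evaluative comment from current and previous results. The reference range and comment texts are illustrative assumptions only, not clinical guidance or the SMR system's actual rules:

```python
# Rule-based interpretation of a monitored serum drug level.
# The 0.8-2.0 ng/mL range and the wording are illustrative assumptions.

def interpret_digoxin(current, previous=None, low=0.8, high=2.0):
    """Return an evaluative comment for a serum digoxin result (ng/mL)."""
    if current > high:
        comment = "Level above assumed reference range: consider toxicity."
    elif current < low:
        comment = "Level below assumed reference range: check compliance."
    else:
        comment = "Level within assumed reference range."
    if previous is not None and current > previous * 1.5:
        comment += " Marked rise since previous result."
    return comment

print(interpret_digoxin(2.4, previous=1.2))
```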

  14. National Nuclear Security Administration Knowledge Base Core Table Schema Document

    SciTech Connect

    CARR,DORTHE B.

    2002-09-01

The National Nuclear Security Administration is creating a Knowledge Base to store technical information to support the United States nuclear explosion monitoring mission. This document defines the core database tables that are used in the Knowledge Base. The purpose of this document is to present the ORACLE database tables in the NNSA Knowledge Base that are based on modifications to the CSS3.0 Database Schema developed in 1990 (Anderson et al., 1990). These modifications include additional columns to the affiliation table, an increase in the internal ORACLE format from 8 integers to 9 integers for thirteen IDs, and new primary and unique key definitions for six tables. It is intended to be used as a reference by researchers inside and outside of NNSA/DOE as they compile information to submit to the NNSA Knowledge Base. These "core" tables are separated into two groups. The Primary tables are dynamic and consist of information that can be used in automatic and interactive processing (e.g., arrivals, locations). The Lookup tables change infrequently and are used for auxiliary information used by the processing. In general, the information stored in the core tables consists of: arrivals; events, origins, and associations of arrivals; magnitude information; station information (networks, site descriptions, instrument responses); pointers to waveform data; and comments pertaining to the information. This document is divided into four sections, the first being this introduction. Section two defines the sixteen tables that make up the core tables of the NNSA Knowledge Base database. Both internal (ORACLE) and external formats for the attributes are defined, along with a short description of each attribute. In addition, the primary, unique and foreign keys are defined. Section three of the document shows the relationships between the different tables by using entity-relationship diagrams. The last section defines the columns or attributes of the various tables.
Information that is

  15. Predicting Mycobacterium tuberculosis Complex Clades Using Knowledge-Based Bayesian Networks

    PubMed Central

    Bennett, Kristin P.

    2014-01-01

We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC) clades. The proposed knowledge-based Bayesian network (KBBN) treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes), since these are routinely gathered from MTBC isolates of tuberculosis (TB) patients. Results show that incorporating expert rules can drastically increase classification accuracy when data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web. PMID:24864238
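The core idea, using fired expert rules to shape a prior over classes and letting data-driven likelihoods refine the decision, can be sketched in a few lines. This is an illustrative toy, not the SITVIT model: the markers, clade names, counts, and the rule-boost factor below are all hypothetical.

```python
def rule_prior(features, rules, classes, strength=2.0):
    """Turn fired expert rules into a soft prior over classes."""
    weight = {c: 1.0 for c in classes}           # uniform base prior
    for pattern, clade in rules:
        if pattern <= features:                  # rule fires: pattern is a subset of observed markers
            weight[clade] *= strength            # boost the clade the rule supports
    total = sum(weight.values())
    return {c: w / total for c, w in weight.items()}

def likelihood(features, counts, n_class, vocab_size):
    """Laplace-smoothed naive-Bayes likelihood of the observed markers."""
    p = 1.0
    for f in features:
        p *= (counts.get(f, 0) + 1) / (n_class + vocab_size)
    return p

def classify(features, rules, training):
    """Posterior over classes = rule-derived prior x data-driven likelihood."""
    classes = list(training)
    prior = rule_prior(features, rules, classes)
    vocab = {f for c in training for f in training[c]}
    post = {}
    for c in classes:
        n_c = sum(training[c].values())
        post[c] = prior[c] * likelihood(features, training[c], n_c, len(vocab))
    z = sum(post.values())
    return {c: v / z for c, v in post.items()}

# Toy spoligotype-like marker counts per clade (entirely hypothetical)
training = {"Beijing": {"s1": 8, "s2": 1}, "EAI": {"s2": 7, "s3": 6}}
rules = [({"s1"}, "Beijing")]                    # expert rule: marker s1 suggests Beijing
posterior = classify({"s1", "s2"}, rules, training)
```

When the data are sparse or ambiguous, the rule-derived prior dominates; as class-conditional counts grow, the likelihood term takes over, which mirrors the refinement behaviour described in the abstract.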

  16. Dealing with difficult deformations: construction of a knowledge-based deformation atlas

    NASA Astrophysics Data System (ADS)

    Thorup, S. S.; Darvann, T. A.; Hermann, N. V.; Larsen, P.; Ólafsdóttir, H.; Paulsen, R. R.; Kane, A. A.; Govier, D.; Lo, L.-J.; Kreiborg, S.; Larsen, R.

    2010-03-01

    Twenty-three Taiwanese infants with unilateral cleft lip and palate (UCLP) were CT-scanned before lip repair at the age of 3 months, and again after lip repair at the age of 12 months. In order to evaluate the surgical result, detailed point correspondence between pre- and post-surgical images was needed. We have previously demonstrated that non-rigid registration using B-splines is able to provide automated determination of point correspondences in populations of infants without cleft lip. However, this type of registration fails when applied to the task of determining the complex deformation from before to after lip closure in infants with UCLP. The purpose of the present work was to show that use of prior information about typical deformations due to lip closure, through the construction of a knowledge-based atlas of deformations, could overcome the problem. Initially, mean volumes (atlases) for the pre- and post-surgical populations, respectively, were automatically constructed by non-rigid registration. An expert placed corresponding landmarks in the cleft area in the two atlases; this provided prior information used to build a knowledge-based deformation atlas. We model the change from pre- to post-surgery using thin-plate spline warping. The registration results are convincing and represent a first move towards an automatic registration method for dealing with difficult deformations due to this type of surgery.
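The deformation model named above, thin-plate spline (TPS) warping driven by corresponding landmarks, can be sketched directly with numpy. The landmark coordinates below are invented for illustration and this is not the authors' implementation; it only shows the standard TPS construction (radial kernel plus affine part, solved as one linear system).

```python
import numpy as np

def tps_kernel(r):
    # U(r) = r^2 log(r); define U(0) = 0 to avoid log(0)
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def fit_tps(src, dst):
    """Fit a 2-D TPS mapping src landmarks (n, 2) onto dst landmarks (n, 2)."""
    n = src.shape[0]
    K = tps_kernel(np.linalg.norm(src[:, None] - src[None, :], axis=2))
    P = np.hstack([np.ones((n, 1)), src])          # affine part [1, x, y]
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    Y = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, Y)                   # (n+3, 2) spline + affine weights

def warp(pts, src, params):
    """Apply a fitted TPS to arbitrary points (m, 2)."""
    U = tps_kernel(np.linalg.norm(pts[:, None] - src[None, :], axis=2))
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return U @ params[:src.shape[0]] + P @ params[src.shape[0]:]

# Hypothetical pre-/post-surgical landmark pairs: only the centre point moves
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
dst = src + np.array([[0., 0.], [0., 0.], [0., 0.], [0., 0.], [0.15, -0.1]])
params = fit_tps(src, dst)
warped = warp(src, src, params)                    # interpolates dst exactly at landmarks
```

Because the unregularized TPS is an interpolating spline, it reproduces the expert-placed landmark correspondences exactly while extending the deformation smoothly to the rest of the volume.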

  17. Scaling up explanation generation: Large-scale knowledge bases and empirical studies

    SciTech Connect

    Lester, J.C.; Porter, B.W.

    1996-12-31

To explain complex phenomena, an explanation system must be able to select information from a formal representation of domain knowledge, organize the selected information into multisentential discourse plans, and realize the discourse plans in text. Although recent years have witnessed significant progress in the development of sophisticated computational mechanisms for explanation, empirical results have been limited. This paper reports on a seven year effort to empirically study explanation generation from semantically rich, large-scale knowledge bases. We first describe Knight, a robust explanation system that constructs multi-sentential and multi-paragraph explanations from the Biology Knowledge Base, a large-scale knowledge base in the domain of botanical anatomy, physiology, and development. We then introduce the Two Panel evaluation methodology and describe how Knight's performance was assessed with this methodology in the most extensive empirical evaluation conducted on an explanation system. In this evaluation, Knight scored within "half a grade" of domain experts, and its performance exceeded that of one of the domain experts.

  18. A knowledge-based system for prototypical reasoning

    NASA Astrophysics Data System (ADS)

    Lieto, Antonio; Minieri, Andrea; Piana, Alberto; Radicioni, Daniele P.

    2015-04-01

    In this work we present a knowledge-based system equipped with a hybrid, cognitively inspired architecture for the representation of conceptual information. The proposed system aims at extending the classical representational and reasoning capabilities of the ontology-based frameworks towards the realm of the prototype theory. It is based on a hybrid knowledge base, composed of a classical symbolic component (grounded on a formal ontology) with a typicality based one (grounded on the conceptual spaces framework). The resulting system attempts to reconcile the heterogeneous approach to the concepts in Cognitive Science with the dual process theories of reasoning and rationality. The system has been experimentally assessed in a conceptual categorisation task where common sense linguistic descriptions were given in input, and the corresponding target concepts had to be identified. The results show that the proposed solution substantially extends the representational and reasoning 'conceptual' capabilities of standard ontology-based systems.

  19. Managing Project Landscapes in Knowledge-Based Enterprises

    NASA Astrophysics Data System (ADS)

    Stantchev, Vladimir; Franke, Marc Roman

    Knowledge-based enterprises are typically conducting a large number of research and development projects simultaneously. This is a particularly challenging task in complex and diverse project landscapes. Project Portfolio Management (PPM) can be a viable framework for knowledge and innovation management in such landscapes. A standardized process with defined functions such as project data repository, project assessment, selection, reporting, and portfolio reevaluation can serve as a starting point. In this work we discuss the benefits a multidimensional evaluation framework can provide for knowledge-based enterprises. Furthermore, we describe a knowledge and learning strategy and process in the context of PPM and evaluate their practical applicability at different stages of the PPM process.

  20. NRV web knowledge base on low-energy nuclear physics

    NASA Astrophysics Data System (ADS)

    Karpov, A. V.; Denikin, A. S.; Naumenko, M. A.; Alekseev, A. P.; Rachkov, V. A.; Samarin, V. V.; Saiko, V. V.; Zagrebaev, V. I.

    2017-07-01

    The paper describes the principles of organization and operation of the NRV web knowledge base on low-energy nuclear physics (http://nrv.jinr.ru/) which integrates a large amount of digitized experimental data on the properties of nuclei and nuclear reaction cross sections with a wide range of computational programs for modeling of nuclear properties and various processes of nuclear dynamics which work directly in the browser of a remote user. The paper also gives an overview of the current situation in the field of application of network information technologies in nuclear physics. The features of the NRV knowledge base are illustrated in detail on the example of the analysis of nucleon transfer reactions within the distorted wave Born approximation.

  1. Knowledge-based zonal grid generation for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automation of flow field zoning in two dimensions is an important step towards reducing the difficulty of three-dimensional grid generation in computational fluid dynamics. Using a knowledge-based approach makes sense, but problems arise which are caused by aspects of zoning involving perception, lack of expert consensus, and design processes. These obstacles are overcome by means of a simple shape and configuration language, a tunable zoning archetype, and a method of assembling plans from selected, predefined subplans. A demonstration system for knowledge-based two-dimensional flow field zoning has been successfully implemented and tested on representative aerodynamic configurations. The results show that this approach can produce flow field zonings that are acceptable to experts with differing evaluation criteria.

  2. A specialized framework for medical diagnostic knowledge-based systems.

    PubMed

    Lanzola, G; Stefanelli, M

    1992-08-01

For a knowledge-based system (KBS) to exhibit intelligent behavior, it must be endowed with knowledge enabling it to represent the expert's strategies. The elicitation task is inherently difficult for strategic knowledge, because strategy is often tacit, and, even when it has been made explicit, it is not an easy task to describe it in a form which may be directly translated and implemented into a program. This paper describes a Specialized Framework for Medical Diagnostic Knowledge-Based Systems that can help an expert in the process of building KBSs in a medical domain. The framework is based on an epistemological model of diagnostic reasoning which has proven to be helpful in describing the diagnostic process in terms of the tasks that it is composed of. It allows a straightforward modeling of diagnostic reasoning at the knowledge level by the domain expert, thus helping to convey domain-dependent strategies into the target KBS.

  3. A specialized framework for Medical Diagnostic Knowledge Based Systems.

    PubMed

    Lanzola, G; Stefanelli, M

    1991-01-01

For a knowledge based system (KBS) to exhibit intelligent behavior, it must be endowed not only with domain knowledge but also with knowledge able to represent the expert's strategies. The elicitation task is inherently difficult for strategic knowledge, because strategy is often tacit, and, even when it has been made explicit, it is not an easy task to describe it in a form that may be directly translated and implemented into a program. This paper describes a Specialized Framework for Medical Diagnostic Knowledge Based Systems able to help an expert in the process of building KBSs in a medical domain. The framework is based on an epistemological model of diagnostic reasoning which has proved to be helpful in describing the diagnostic process in terms of the tasks of which it is composed.

  4. TVS: An Environment For Building Knowledge-Based Vision Systems

    NASA Astrophysics Data System (ADS)

    Weymouth, Terry E.; Amini, Amir A.; Tehrani, Saeid

    1989-03-01

    Advances in the field of knowledge-guided computer vision require the development of large scale projects and experimentation with them. One factor which impedes such development is the lack of software environments which combine standard image processing and graphics abilities with the ability to perform symbolic processing. In this paper, we describe a software environment that assists in the development of knowledge-based computer vision projects. We have built, upon Common LISP and C, a software development environment which combines standard image processing tools and a standard blackboard-based system, with the flexibility of the LISP programming environment. This environment has been used to develop research projects in knowledge-based computer vision and dynamic vision for robot navigation.

  5. An Empirical Analysis of Knowledge Based Hypertext Navigation

    PubMed Central

    Snell, J.R.; Boyle, C.

    1990-01-01

Our purpose is to investigate the effectiveness of knowledge-based navigation in a dermatology hypertext network. The chosen domain is a set of dermatology class notes implemented in Hypercard and SINS. The study measured time, number of moves, and success rates for subjects to find solutions to ten questions. The subjects were required to navigate within a dermatology hypertext network in order to find the solution to each question. Our results indicate that knowledge-based navigation can assist the user in finding information of interest in fewer node visits (moves) than traditional button-based browsing or keyword searching. The time necessary to find an item of interest was lower for the traditional methods. There was no difference in success rates for the two test groups.

  6. NRV web knowledge base on low-energy nuclear physics

    SciTech Connect

    Karpov, V. Denikin, A. S.; Alekseev, A. P.; Zagrebaev, V. I.; Rachkov, V. A.; Naumenko, M. A.; Saiko, V. V.

    2016-09-15

    Principles underlying the organization and operation of the NRV web knowledge base on low-energy nuclear physics (http://nrv.jinr.ru) are described. This base includes a vast body of digitized experimental data on the properties of nuclei and on cross sections for nuclear reactions that is combined with a wide set of interconnected computer programs for simulating complex nuclear dynamics, which work directly in the browser of a remote user. Also, the current situation in the realms of application of network information technologies in nuclear physics is surveyed. The potential of the NRV knowledge base is illustrated in detail by applying it to the example of an analysis of the fusion of nuclei that is followed by the decay of the excited compound nucleus formed.

  7. NRV web knowledge base on low-energy nuclear physics

    NASA Astrophysics Data System (ADS)

    Karpov, V.; Denikin, A. S.; Alekseev, A. P.; Zagrebaev, V. I.; Rachkov, V. A.; Naumenko, M. A.; Saiko, V. V.

    2016-09-01

    Principles underlying the organization and operation of the NRV web knowledge base on low-energy nuclear physics (http://nrv.jinr.ru) are described. This base includes a vast body of digitized experimental data on the properties of nuclei and on cross sections for nuclear reactions that is combined with a wide set of interconnected computer programs for simulating complex nuclear dynamics, which work directly in the browser of a remote user. Also, the current situation in the realms of application of network information technologies in nuclear physics is surveyed. The potential of the NRV knowledge base is illustrated in detail by applying it to the example of an analysis of the fusion of nuclei that is followed by the decay of the excited compound nucleus formed.

  8. Distributed Knowledge Base Systems for Diagnosis and Information Retrieval.

    DTIC Science & Technology

    1983-11-01

    progress came in the area of diagnostic reasoning and in the conceptual foundations of knowledge-based systems in general. We also developed an approach to a...have been experimenting with the application of this tool to the design and implementation of expert systems in the area of mechanical systems, since...itself or in the problem solving open area of research in the field, but the intuition is that processes that operate on it. Thus the range of

  9. Impact of Knowledge-Based Techniques on Emerging Technologies

    DTIC Science & Technology

    2006-09-01

    coherent location (PCL), tracking in multistatic radar, and ‘spatial denial’ as a waveform diversity technique to prevent the exploitation by an enemy...performing a variety of surveillance and tracking tasks. Knowledge-based processing may be used to control the scheduling of tasks in such a radar, showing...techniques to bistatic and multistatic radar, including the use of information on waveform properties in passive coherent location (PCL), tracking

  10. Design of an opt-electronic knowledge-based system

    NASA Astrophysics Data System (ADS)

    Shen, Xuan-Jing; Qian, Qing-Ji; Liu, Ping-Ping

    2006-01-01

In this paper, based on an analysis of the features of knowledge-based systems and optical computing, a scheme for an opt-electronic hybrid knowledge-based system (OEHKBS) model and its supporting hardware is presented. The OEHKBS adopts a matrix-based knowledge representation together with inference and learning algorithms suitable for optical parallel processing. Finally, the paper analyses the performance of the OEHKBS, showing that it can reduce the time complexity of solving the maze problem to O(n).
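The abstract only hints at what a matrix-based knowledge representation looks like. One common reading, sketched hypothetically below, is that rules become a boolean implication matrix and forward chaining becomes repeated matrix-vector products, an operation that parallel (including optical) hardware can evaluate one full inference step at a time. The facts and rules here are invented for illustration.

```python
import numpy as np

def forward_chain(R, facts):
    """Close a boolean fact vector under the implication matrix R."""
    known = facts.copy()
    while True:
        # one parallel inference step: derived[j] = OR_i (known[i] AND R[i, j])
        derived = (R.T.astype(int) @ known.astype(int)) > 0
        new = known | derived
        if np.array_equal(new, known):
            return known                  # fixed point: nothing new derivable
        known = new

# facts 0..3 stand for a, b, c, d; rules: a -> b, b -> c, c -> d (invented)
R = np.zeros((4, 4), dtype=bool)
R[0, 1] = R[1, 2] = R[2, 3] = True
closure = forward_chain(R, np.array([True, False, False, False]))
```

Each iteration fires every applicable rule simultaneously, which is why this formulation maps naturally onto parallel matrix hardware rather than onto sequential rule interpreters.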

  11. A knowledge-based decision support system for payload scheduling

    NASA Technical Reports Server (NTRS)

    Floyd, Stephen; Ford, Donnie

    1988-01-01

    The role that artificial intelligence/expert systems technologies play in the development and implementation of effective decision support systems is illustrated. A recently developed prototype system for supporting the scheduling of subsystems and payloads/experiments for NASA's Space Station program is presented and serves to highlight various concepts. The potential integration of knowledge based systems and decision support systems which has been proposed in several recent articles and presentations is illustrated.

  12. A Study of Knowledge-Based Systems for Photo Interpretation.

    DTIC Science & Technology

    1980-06-01

OIL [15] CAI Electronics SOPHIE [10] Medicine GUIDON [14] Learning Chemistry Meta-DENDRAL (i] Agriculture INDUCE [19] Mathematics AM [40] Intelligent...16 6. Computer-Aided Instruction: GUIDON Three types of traditional computer-aided instruction (CAI) are often distinguished: frame-oriented drill-and...systems have an obvious contribution to make to CAI. The GUIDON system developed by Clancey at Stanford exploits the MYCIN knowledge base about

  13. A Knowledge-Based Approach to Language Production

    DTIC Science & Technology

    1985-08-01

systemic grammars--morphological, lexical, syntactic, and functional knowledge. The value of a feature may be a literal, special symbol, or a composite...A Knowledge-Based Approach to Language Production. By Paul Schafran Jacobs, A.B. (Harvard University) 1981, S.M. (Harvard University) 1981...Graduate Division of the University of California, Berkeley

  14. A knowledge based model of electric utility operations. Final report

    SciTech Connect

    1993-08-11

    This report consists of an appendix to provide a documentation and help capability for an analyst using the developed expert system of electric utility operations running in CLIPS. This capability is provided through a separate package running under the WINDOWS Operating System and keyed to provide displays of text, graphics and mixed text and graphics that explain and elaborate on the specific decisions being made within the knowledge based expert system.

  15. MetaShare: Enabling Knowledge-Based Data Management

    NASA Astrophysics Data System (ADS)

    Pennington, D. D.; Salayandia, L.; Gates, A.; Osuna, F.

    2013-12-01

MetaShare is a free and open source knowledge-based system for supporting data management planning, now required by some agencies and publishers. MetaShare supports users as they describe the types of data they will collect, expected standards, and expected policies for sharing. MetaShare's semantic model captures relationships between disciplines, tools, data types, data formats, and metadata standards. As the user plans their data management activities, MetaShare recommends choices based on practices and decisions from a community that has used the system for similar purposes, and extends the knowledge base to capture new relationships. The MetaShare knowledge base is being seeded with information for geoscience and environmental science domains, and is currently undergoing testing at the University of Texas at El Paso. Through time and usage, it is expected to grow to support a variety of research domains, enabling community-based learning of data management practices. Knowledge of a user's choices during the planning phase can be used to support other tasks in the data life cycle, e.g., collecting, disseminating, and archiving data. A key barrier to scientific data sharing is the lack of sufficient metadata that provides context under which data were collected. The next phase of MetaShare development will automatically generate data collection instruments with embedded metadata and semantic annotations based on the information provided during the planning phase. While not comprehensive, this metadata will be sufficient for discovery and will enable users to focus on more detailed descriptions of their data. Details are available at: Salayandia, L., Pennington, D., Gates, A., and Osuna, F. (accepted). MetaShare: From data management plans to knowledge base systems. AAAI Fall Symposium Series Workshop on Discovery Informatics, November 15-17, 2013, Arlington, VA.

  16. Knowledge-Based Decision Support in Department of Defense Acquisitions

    DTIC Science & Technology

    2010-09-01

    2005) reviewed and analyzed the National Aeronautics and Space Administration ( NASA ) project management policies and compared them to the GAO’s best...practices on knowledge-based decision making. The study was primarily focused on the Goddard Space Flight Center, the Jet Propulsion Lab, Johnson ...Space Center, and Marshall Space Flight Center. During its investigation, the GAO found NASA deficient in key criteria and decision reviews to fully

  17. Knowledge-Based Production Management: Approaches, Results and Prospects

    DTIC Science & Technology

    1991-12-01

    In this paper we provide an overview of research in the field of knowledge-based production management . We begin by examining the important sources...of decision-making difficulty in practical production management domains, discussing the requirements implied by each with respect to the development...of effective production management tools, and identifying the general opportunities in this regard provided by AI-based technology. We then categorize

  18. Enhancing Acronym/Abbreviation Knowledge Bases with Semantic Information

    PubMed Central

    Torii, Manabu; Liu, Hongfang

    2007-01-01

Objective: In the biomedical domain, a terminology knowledge base that associates acronyms/abbreviations (denoted as SFs) with their definitions (denoted as LFs) is highly needed. For the construction of such a terminology knowledge base, we investigate the feasibility of building a system that automatically assigns semantic categories to LFs extracted from text. Methods: Given a collection of pairs (SF,LF) derived from text, we i) assess the coverage of LFs and pairs (SF,LF) in the UMLS and justify the need for a semantic category assignment system; and ii) automatically derive name phrases annotated with semantic category and construct a system using machine learning. Results: Utilizing ADAM, an existing collection of (SF,LF) pairs extracted from MEDLINE, our system achieved an f-measure of 87% when assigning eight UMLS-based semantic groups to LFs. The system has been incorporated into a web interface which integrates SF knowledge from multiple SF knowledge bases. Web site: http://gauss.dbb.georgetown.edu/liblab/SFThesurus. PMID:18693933

  19. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul

    1994-01-01

This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called Declarative, and Hybrid. Software architectures were sought, employing the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common over a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered that particular processing methods from the stochastic and knowledge-based worlds are duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.

  20. ISPE: A knowledge-based system for fluidization studies

    SciTech Connect

    Reddy, S.

    1991-01-01

Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all specified "goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
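The three-step prepare/simulate/analyze cycle described above is easy to express as a driver loop. The sketch below is generic: the "simulator" is a stand-in function rather than ASPEN, and the goal name, parameter, and adjustment heuristic are invented for illustration.

```python
def simulate(params):
    """Stand-in for running the external simulator on an input deck."""
    # toy model: conversion rises linearly with temperature, capped at 1.0
    return {"conversion": min(1.0, params["temperature"] / 900.0)}

def goals_met(results, goals):
    """Step (3): check every specified goal against the simulation results."""
    return all(results[k] >= v for k, v in goals.items())

def refine(params, results, goals):
    """Modify the input data when goals are not met (toy heuristic)."""
    new = dict(params)
    if results["conversion"] < goals["conversion"]:
        new["temperature"] += 50.0
    return new

def run_study(params, goals, max_iter=20):
    """Repeat the prepare/simulate/analyze cycle until the goals are satisfied."""
    for i in range(max_iter):
        results = simulate(params)            # step (2)
        if goals_met(results, goals):
            return params, results, i + 1     # converged
        params = refine(params, results, goals)
    return params, results, max_iter

params, results, iters = run_study({"temperature": 600.0}, {"conversion": 0.9})
```

An assistant like IPSE adds value precisely inside `refine`: instead of a fixed heuristic, rule-based and probabilistic knowledge decides how to modify the input deck between runs.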

  1. Three forms of assessment of prior knowledge, and improved performance following an enrichment programme, of English second language biology students within the context of a marine theme

    NASA Astrophysics Data System (ADS)

    Feltham, Nicola F.; Downs, Colleen T.

    2002-02-01

    The Science Foundation Programme (SFP) was launched in 1991 at the University of Natal, Pietermaritzburg, South Africa in an attempt to equip a selected number of matriculants from historically disadvantaged schools with the skills, resources and self-confidence needed to embark on their tertiary studies. Previous research within the SFP biology component suggests that a major contributor to poor achievement and low retention rates among English second language (ESL) students in the Life Sciences is the inadequate background knowledge in natural history. In this study, SFP student background knowledge was assessed along a continuum of language dependency using a set of three probes. Improved student performance in each of the respective assessments examined the extent to which a sound natural history background facilitated meaningful learning relative to ESL proficiency. Student profiles and attitudes to biology were also examined. Results indicated that students did not perceive language to be a problem in biology. However, analysis of the student performance in the assessment probes indicated that, although the marine course provided the students with the background knowledge that they were initially lacking, they continued to perform better in the drawing and MCQ tools in the post-tests, suggesting that it is their inability to express themselves in the written form that hampers their development. These results have implications for curriculum development within the constructivist framework of the SFP.

  2. The Knowledge Base in Education Administration: Did NCATE Open a Pandora's Box?

    ERIC Educational Resources Information Center

    Achilles, C. M.; DuVall, L.

    The controversial nature of the knowledge base of educational administration is discussed in this paper. Included are a definition of professionalism, a discussion of how to build and develop a knowledge base, and a review of the obstacles to knowledge base development. Elements of a consensual knowledge base include theory, practice, and other…

  3. Using the DOE Knowledge Base for Special Event Analysis

    SciTech Connect

    Armstrong, H.M.; Harris, J.M.; Young, C.J.

    1998-10-20

    The DOE Knowledge Base is a library of detailed information whose purpose is to support the United States National Data Center (USNDC) in its mission to monitor compliance with the Comprehensive Test Ban Treaty (CTBT). One of the important tasks which the USNDC must accomplish is to periodically perform detailed analysis of events of high interest, so-called "Special Events", to provide the national authority with information needed to make policy decisions. In this paper we investigate some possible uses of the Knowledge Base for Special Event Analysis (SEA), and make recommendations for improving Knowledge Base support for SEA. To analyze an event in detail, there are two basic types of data which must be used sensor-derived data (wave- forms, arrivals, events, etc.) and regiohalized contextual data (known sources, geological characteristics, etc.). Cur- rently there is no single package which can provide full access to both types of data, so for our study we use a separate package for each MatSeis, the Sandia Labs-developed MATLAB-based seismic analysis package, for wave- form data analysis, and ArcView, an ESRI product, for contextual data analysis. Both packages are well-suited to pro- totyping because they provide a rich set of currently available functionality and yet are also flexible and easily extensible, . Using these tools and Phase I Knowledge Base data sets, we show how the Knowledge Base can improve both the speed and the quality of SEA. Empirically-derived interpolated correction information can be accessed to improve both location estimates and associated error estimates. This information can in turn be used to identi~ any known nearby sources (e.g. mines, volcanos), which may then trigger specialized processing of the sensor data. Based on the location estimate, preferred magnitude formulas and discriminants can be retrieved, and any known blockages can be identified to prevent miscalculations. 
Relevant historic events can be identified either by

  4. Use of biological priors enhances understanding of genetic architecture and genomic prediction of complex traits within and between dairy cattle breeds.

    PubMed

    Fang, Lingzhao; Sahana, Goutam; Ma, Peipei; Su, Guosheng; Yu, Ying; Zhang, Shengli; Lund, Mogens Sandø; Sørensen, Peter

    2017-08-10

    A better understanding of the genetic architecture underlying complex traits (e.g., the distribution of causal variants and their effects) may aid genomic prediction. Here, we hypothesized that the genomic variants of complex traits might be enriched in a subset of genomic regions defined by genes grouped on the basis of "Gene Ontology" (GO), and that incorporating this independent biological information into genomic prediction models might improve their predictive ability. Four complex traits (i.e., milk, fat and protein yields, and mastitis) together with imputed sequence variants in Holstein (HOL) and Jersey (JER) cattle were analysed. We first carried out a post-GWAS analysis in a HOL training population to assess the degree of enrichment of the association signals in the gene regions defined by each GO term. We then extended the genomic best linear unbiased prediction model (GBLUP) to a genomic feature BLUP (GFBLUP) model, including an additional genomic effect quantifying the joint effect of a group of variants located in a genomic feature. The GBLUP model using a single random effect assumes that all genomic variants contribute to the genomic relationship equally, whereas GFBLUP attributes different weights to the individual genomic relationships in the prediction equation based on the estimated genomic parameters. Our results demonstrate that the immune-relevant GO terms were more associated with mastitis than milk production, and several biologically meaningful GO terms improved the prediction accuracy with GFBLUP for the four traits, as compared with GBLUP. The improvement of the genomic prediction between breeds (the average increase across the four traits was 0.161) was more apparent than that within the HOL (the average increase across the four traits was 0.020). Our genomic feature modelling approaches provide a framework to simultaneously explore the genetic architecture and genomic prediction of complex traits by taking advantage of
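The core modelling difference between GBLUP and GFBLUP can be illustrated by how the genomic relationship matrices are built: GBLUP uses one matrix over all markers, while GFBLUP adds a second matrix built only from markers inside the genomic feature. The sketch below uses the common VanRaden construction on toy data; the genotype matrix, feature marker set, and dimensions are all hypothetical, not from the paper.

```python
import numpy as np

def grm(M):
    """VanRaden genomic relationship matrix from a 0/1/2 genotype
    matrix M (individuals x markers)."""
    p = M.mean(axis=0) / 2.0             # allele frequencies per marker
    Z = M - 2.0 * p                      # centre genotypes
    denom = 2.0 * np.sum(p * (1.0 - p))  # VanRaden scaling factor
    return Z @ Z.T / denom

rng = np.random.default_rng(0)
M = rng.integers(0, 3, size=(6, 50))     # 6 animals, 50 markers (toy data)
feature = np.arange(10)                  # hypothetical GO-term marker subset

G_all = grm(M)               # relationship matrix used by GBLUP
G_feat = grm(M[:, feature])  # extra feature-specific matrix added by GFBLUP
print(G_all.shape, G_feat.shape)  # (6, 6) (6, 6)
```

In the GFBLUP mixed model, breeding values are then decomposed into a feature effect (variance tied to `G_feat`) and a residual genomic effect (tied to the remaining markers), which is what lets the model weight the feature variants differently.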

  5. Determination of nickel in biological materials after microwave dissolution using inductively coupled plasma atomic emission spectrometry with prior extraction into butan-1-ol.

    PubMed

    Vereda Alonso, E; García de Torres, A; Cano Pavón, J M

    1992-07-01

    A sensitive procedure has been developed for the determination of ultratrace amounts of nickel in biological materials by inductively coupled plasma atomic emission spectrometry after extraction of the nickel ion into butan-1-ol by using 1,5-bis(di-2-pyridylmethylene)thiocarbonohydrazide as the extracting reagent. Fast, efficient and complete sample digestion is achieved by an HNO3-HCl poly(tetrafluoroethylene) bomb dissolution technique using microwave heating. Results obtained for eleven certified reference materials agreed with the certified values.

  6. Separation of extracts from biological tissues into polycyclic aromatic hydrocarbon, polychlorinated biphenyl and polychlorinated dibenzo-p-dioxin/polychlorinated dibenzofuran fractions prior to analysis.

    PubMed

    O'Keefe, P W; Miller, J; Smith, R; Connor, S; Clayton, W; Storm, R

    1997-05-30

    A low-pressure liquid chromatography method is presented for separating polycyclic aromatic hydrocarbons (PAHs), polychlorinated biphenyls (PCBs) and polychlorinated dibenzo-p-dioxins/polychlorinated dibenzofurans (PCDDs/PCDFs) from biological tissue extracts. After removing lipid from extracts, the PAHs are separated from PCBs and PCDDs/PCDFs on a deactivated 13-24 microns silica gel column. The PCBs are subsequently separated from PCDDs/PCDFs by collecting the first fraction from an automated three column cleanup procedure for PCDDs/PCDFs. The complete method has been used to obtain high recoveries of the three compound classes for analysis by GC-electron capture detection (PCBs) or GC-MS (PAHs and PCDDs/PCDFs).

  7. Baseline levels of bioaerosols and volatile organic compounds around a municipal waste incinerator prior to the construction of a mechanical-biological treatment plant

    SciTech Connect

    Vilavert, Lolita; Nadal, Marti; Inza, Isabel; Figueras, Maria J.; Domingo, Jose L.

    2009-09-15

    New waste management programs are currently aimed at developing alternative treatment technologies such as mechanical-biological treatment (MBT) and composting plants. However, there is still a high uncertainty concerning the chemical and microbiological risks for human health, not only for workers of these facilities, but also for the population living in the neighborhood. A new MBT plant is planned to be constructed adjacent to a municipal solid waste incinerator (MSWI) in Tarragona (Catalonia, Spain). In order to evaluate its potential impact and to differentiate the impacts of the MSWI from those of the MBT when the latter is operative, a pre-operational survey was initiated by determining the concentrations of 20 volatile organic compounds (VOCs) and bioaerosols (total bacteria, Gram-negative bacteria, fungi and Aspergillus fumigatus) in airborne samples around the MSWI. The results indicated that the current concentrations of bioaerosols (ranges: 382-3882, 18-790, 44-926, and <1-7 CFU/m³ for fungi at 25 °C, fungi at 37 °C, total bacteria, and Gram-negative bacteria, respectively) and VOCs (ranging from 0.9 to 121.2 µg/m³) are very low in comparison to reported levels in indoor and outdoor air in composting and MBT plants, as well as in urban and industrial zones. With the exception of total bacteria, no correlations were observed between the environmental concentrations of biological agents and the direction/distance from the facility. However, total bacteria presented significantly higher levels downwind. Moreover, a non-significant increase of VOCs was detected in sites closer to the incinerator, which means that the MSWI could have a very minor impact on the surrounding environment.

  8. Building validation tools for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.

    1987-01-01

    The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (higher-order language), is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.

  9. SAFOD Brittle Microstructure and Mechanics Knowledge Base (BM2KB)

    NASA Astrophysics Data System (ADS)

    Babaie, Hassan A.; Broda Cindi, M.; Hadizadeh, Jafar; Kumar, Anuj

    2013-07-01

    Scientific drilling near Parkfield, California has established the San Andreas Fault Observatory at Depth (SAFOD), which provides the solid earth community with short range geophysical and fault zone material data. The BM2KB ontology was developed in order to formalize the knowledge about brittle microstructures in the fault rocks sampled from the SAFOD cores. A knowledge base, instantiated from this domain ontology, stores and presents the observed microstructural and analytical data with respect to implications for brittle deformation and mechanics of faulting. These data can be searched on the knowledge base's Web interface by selecting a set of terms (classes, properties) from different drop-down lists that are dynamically populated from the ontology. In addition to this general search, a query can also be conducted to view data contributed by a specific investigator. A search by sample is done using the EarthScope SAFOD Core Viewer that allows a user to locate samples on high resolution images of core sections belonging to different runs and holes. The class hierarchy of the BM2KB ontology was initially designed using the Unified Modeling Language (UML), which was used as a visual guide to develop the ontology in OWL applying the Protégé ontology editor. Various Semantic Web technologies such as the RDF, RDFS, and OWL ontology languages, SPARQL query language, and Pellet reasoning engine, were used to develop the ontology. An interactive Web application interface was developed through Jena, a java based framework, with AJAX technology, jsp pages, and java servlets, and deployed via an Apache tomcat server. The interface allows the registered user to submit data related to their research on a sample of the SAFOD core. The submitted data, after initial review by the knowledge base administrator, are added to the extensible knowledge base and become available in subsequent queries to all types of users. The interface facilitates inference capabilities in the

  10. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  11. Knowledge-based GIS techniques applied to geological engineering

    USGS Publications Warehouse

    Usery, E. Lynn; Altheide, Phyllis; Deister, Robin R.P.; Barr, David J.

    1988-01-01

    A knowledge-based geographic information system (KBGIS) approach which requires development of a rule base for both GIS processing and for the geological engineering application has been implemented. The rule bases are implemented in the Goldworks expert system development shell interfaced to the Earth Resources Data Analysis System (ERDAS) raster-based GIS for input and output. GIS analysis procedures including recoding, intersection, and union are controlled by the rule base, and the geological engineering map product is generated by the expert system. The KBGIS has been used to generate a geological engineering map of Creve Coeur, Missouri.

  12. Knowledge-based system for automatic MBR control.

    PubMed

    Comas, J; Meabe, E; Sancho, L; Ferrero, G; Sipma, J; Monclús, H; Rodriguez-Roda, I

    2010-01-01

    MBR technology is currently challenging traditional wastewater treatment systems and is increasingly selected for WWTP upgrading. MBR systems typically are constructed on a smaller footprint, and provide superior treated water quality. However, the main drawback of MBR technology is that the permeability of the membranes declines during filtration due to membrane fouling; counteracting this fouling phenomenon accounts for a large part of an MBR's high aeration requirements. Due to the complex and still unknown mechanisms of membrane fouling it is neither possible to describe clearly its development by means of a deterministic model, nor to control it with a purely mathematical law. Consequently the majority of MBR applications are controlled in an "open-loop" way, i.e. with predefined and fixed air scour and filtration/relaxation or backwashing cycles, and scheduled inline or offline chemical cleaning as a preventive measure, without taking into account the real needs of membrane cleaning based on its filtration performance. However, existing theoretical and empirical knowledge about potential cause-effect relations between a number of factors (influent characteristics, biomass characteristics and operational conditions) and MBR operation can be used to build a knowledge-based decision support system (KB-DSS) for the automatic control of MBRs. This KB-DSS contains a knowledge-based control module, which, based on real time comparison of the current permeability trend with "reference trends", aims at optimizing the operation and energy costs and decreasing fouling rates. In practice the automatic control system proposed regulates the set points of the key operational variables controlled in MBR systems (permeate flux, relaxation and backwash times, backwash flows and times, aeration flow rates, chemical cleaning frequency, waste sludge flow rate and recycle flow rates) and identifies their optimal values. 
This paper describes the concepts and the 3-level architecture
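The knowledge-based control module described above acts on a comparison of the current permeability trend with reference trends. A toy illustration of such a rule is sketched below; the threshold, units, and action names are invented for illustration and are not taken from the paper.

```python
def control_action(perm_decline, ref_decline=0.5):
    """Toy knowledge-based control rule: compare the observed permeability
    decline rate against a reference decline rate (hypothetical units,
    e.g. LMH/bar per day) and choose a set-point adjustment.
    Thresholds and actions are illustrative only."""
    if perm_decline <= ref_decline:
        return "maintain current flux and relaxation cycle"
    elif perm_decline <= 2 * ref_decline:
        return "increase air scour and shorten filtration cycle"
    else:
        return "trigger maintenance chemical cleaning"

print(control_action(0.3))  # maintain current flux and relaxation cycle
print(control_action(2.0))  # trigger maintenance chemical cleaning
```

A real KB-DSS would of course derive the reference trends from historical filtration data and regulate several set points at once, but the if/then structure above is the basic shape of a knowledge-based (as opposed to purely mathematical) control law.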

  13. Building a knowledge based economy in Russia using guided entrepreneurship

    NASA Astrophysics Data System (ADS)

    Reznik, Boris N.; Daniels, Marc; Ichim, Thomas E.; Reznik, David L.

    2005-06-01

    Despite advanced scientific and technological (S&T) expertise, the Russian economy is presently based upon manufacturing and raw material exports. Currently, governmental incentives are attempting to leverage the existing scientific infrastructure through the concept of building a Knowledge Based Economy. However, socio-economic changes do not occur solely by decree, but by alteration of approach to the market. Here we describe the "Guided Entrepreneurship" plan, a series of steps needed to generate an army of entrepreneurs who initiate a chain reaction of S&T-driven growth. The situation in Russia is placed in the framework of other areas where Guided Entrepreneurship has been successful.

  14. CGKB: an annotation knowledge base for cowpea (Vigna unguiculata L.) methylation filtered genomic genespace sequences

    PubMed Central

    Chen, Xianfeng; Laudeman, Thomas W; Rushton, Paul J; Spraggins, Thomas A; Timko, Michael P

    2007-01-01

    annotated GSS were analyzed using the HMMER package against the Pfam database. The annotated GSS were also assigned with Gene Ontology annotation terms and integrated with 228 curated plant metabolic pathways from the Arabidopsis Information Resource (TAIR) knowledge base. The UniProtKB-Swiss-Prot ENZYME database was used to assign putative enzymatic function to each GSS. Each GSS was also analyzed with the Tandem Repeat Finder (TRF) program in order to identify potential SSRs for molecular marker discovery. The raw sequence data, processed annotation, and SSR results were stored in relational tables designed in key-value pair fashion using a PostgreSQL relational database management system. The biological knowledge derived from the sequence data and processed results are represented as views or materialized views in the relational database management system. All materialized views are indexed for quick data access and retrieval. Data processing and analysis pipelines were implemented using the Perl programming language. The web interface was implemented in JavaScript and Perl CGI running on an Apache web server. The CPU intensive data processing and analysis pipelines were run on a computer cluster of more than 30 dual-processor Apple XServes. A job management system called Vela was created as a robust way to submit large numbers of jobs to the Portable Batch System (PBS). Conclusion CGKB is an integrated and annotated resource for cowpea GSS with features of homology-based and HMM-based annotations, enzyme and pathway annotations, GO term annotation, toolkits, and a large number of other facilities to perform complex queries. The cowpea GSS, chloroplast sequences, mitochondrial sequences, retroelements, and SSR sequences are available as FASTA formatted files and downloadable at CGKB. This database and web interface are publicly accessible at . PMID:17445272

  15. Knowledge-Based Parallel Performance Technology for Scientific Application Competitiveness Final Report

    SciTech Connect

    Malony, Allen D; Shende, Sameer

    2011-08-15

    The primary goal of the University of Oregon's DOE "competitiveness" project was to create performance technology that embodies and supports knowledge of performance data, analysis, and diagnosis in parallel performance problem solving. The target of our development activities was the TAU Performance System and the technology accomplishments reported in this and prior reports have all been incorporated in the TAU open software distribution. In addition, the project has been committed to maintaining strong interactions with the DOE SciDAC Performance Engineering Research Institute (PERI) and Center for Technology for Advanced Scientific Component Software (TASCS). This collaboration has proved valuable for translation of our knowledge-based performance techniques to parallel application development and performance engineering practice. Our outreach has also extended to the DOE Advanced CompuTational Software (ACTS) collection and project. Throughout the project we have participated in the PERI and TASCS meetings, as well as the ACTS annual workshops.

  16. Terminating pre-ozonation prior to biological activated carbon filtration results in increased formation of nitrogenous disinfection by-products upon subsequent chlorination.

    PubMed

    Chu, Wenhai; Li, Changjun; Gao, Naiyun; Templeton, Michael R; Zhang, Yanshen

    2015-02-01

    Previous research demonstrated that ozone dosed before biological activated carbon (BAC) filtration reduces the formation of disinfection by-products (DBPs) upon subsequent chlorination. The current work aimed to evaluate the impact of terminating this pre-ozonation on the ability of the BAC to remove the precursors of N-DBPs. More N-DBP precursors passed into the post-BAC water when the pre-ozonation was terminated, resulting in greater formation of N-DBPs when the water was subsequently chlorinated, compared to a parallel BAC filter when the pre-ozonation was run continuously. Moreover, the N-DBP formation potential was significantly increased in the effluent of the BAC filter after terminating pre-ozonation, compared with the influent of the BAC filter (i.e. the effluent from the sand filter). Therefore, while selectively switching pre-ozonation on/off may have cost and other operational benefits for water suppliers, these should be weighed against the increased formation of N-DBPs and potential associated health risks. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. WE-F-BRB-00: New Developments in Knowledge-Based Treatment Planning and Automation

    SciTech Connect

    2015-06-15

    Advancements in informatics in radiotherapy are opening up opportunities to improve our ability to assess treatment plans. Models on individualizing patient dose constraints from prior patient data and shape relationships have been extensively researched and are now making their way into commercial products. New developments in knowledge based treatment planning involve understanding the impact of the radiation dosimetry on the patient. Akin to radiobiology models that have driven intensity modulated radiotherapy optimization, toxicity and outcome predictions based on treatment plans and prior patient experiences may be the next step in knowledge based planning. In order to realize these predictions, it is necessary to understand how the clinical information can be captured, structured and organized with ontologies and databases designed for recall. Large databases containing radiation dosimetry and outcomes present the opportunity to evaluate treatment plans against predictions of toxicity and disease response. Such evaluations can be based on dose volume histogram or even the full 3-dimensional dose distribution and its relation to the critical anatomy. This session will provide an understanding of ontologies and standard terminologies used to capture clinical knowledge into structured databases; How data can be organized and accessed to utilize the knowledge in planning; and examples of research and clinical efforts to incorporate that clinical knowledge into planning for improved care for our patients. Learning Objectives: Understand the role of standard terminologies, ontologies and data organization in oncology Understand methods to capture clinical toxicity and outcomes in a clinical setting Understand opportunities to learn from clinical data and its application to treatment planning Todd McNutt receives funding from Philips, Elekta and Toshiba for some of the work presented.
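Plan evaluations "based on dose volume histogram", as mentioned above, reduce a 3-dimensional dose distribution to the fraction of a structure's volume receiving at least each dose level. A minimal sketch of a cumulative DVH on hypothetical voxel doses (the dose values and bin count are invented for illustration):

```python
import numpy as np

def cumulative_dvh(dose, bins=100):
    """Cumulative dose-volume histogram: for each dose level, the fraction
    of the structure's voxels receiving at least that dose. `dose` is a
    flat array of per-voxel doses for one structure."""
    edges = np.linspace(0.0, dose.max(), bins)
    volume = np.array([(dose >= d).mean() for d in edges])
    return edges, volume

dose = np.array([10., 20., 20., 30., 40.])  # hypothetical voxel doses (Gy)
d, v = cumulative_dvh(dose, bins=5)
print(v[0])  # 1.0 -- the whole volume receives at least 0 Gy
```

Knowledge-based planning systems compare curves like `v` against DVHs predicted from prior patients with similar anatomy, flagging plans whose organ-at-risk curves sit well above the prediction.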

  18. Matching sensors to missions using a knowledge-based approach

    NASA Astrophysics Data System (ADS)

    Preece, Alun; Gomez, Mario; de Mel, Geeth; Vasconcelos, Wamberto; Sleeman, Derek; Colley, Stuart; Pearson, Gavin; Pham, Tien; La Porta, Thomas

    2008-04-01

    Making decisions on how best to utilise limited intelligence, surveillance and reconnaissance (ISR) resources is a key issue in mission planning. This requires judgements about which kinds of available sensors are more or less appropriate for specific ISR tasks in a mission. A methodological approach to addressing this kind of decision problem in the military context is the Missions and Means Framework (MMF), which provides a structured way to analyse a mission in terms of tasks, and assess the effectiveness of various means for accomplishing those tasks. Moreover, the problem can be defined as knowledge-based matchmaking: matching the ISR requirements of tasks to the ISR-providing capabilities of available sensors. In this paper we show how the MMF can be represented formally as an ontology (that is, a specification of a conceptualisation); we also represent knowledge about ISR requirements and sensors, and then use automated reasoning to solve the matchmaking problem. We adopt the Semantic Web approach and the Web Ontology Language (OWL), allowing us to import elements of existing sensor knowledge bases. Our core ontologies use the description logic subset of OWL, providing efficient reasoning. We describe a prototype tool as a proof-of-concept for our approach. We discuss the various kinds of possible sensor-mission matches, both exact and inexact, and how the tool helps mission planners consider alternative choices of sensors.
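The exact-versus-inexact matchmaking distinction described above can be approximated with plain set operations; the paper itself uses OWL description-logic subsumption, so the following is only a simplified sketch, and the capability names are invented.

```python
def match(task_requirements, sensor_capabilities):
    """Set-based sketch of sensor-mission matchmaking: 'exact' if the
    sensor covers every required capability, 'inexact' if it covers some,
    and 'no match' if it covers none. (A stand-in for the description-logic
    subsumption check used in the paper.)"""
    covered = task_requirements & sensor_capabilities
    if covered == task_requirements:
        return "exact"
    return "inexact" if covered else "no match"

task = {"imaging", "night-vision"}                           # invented names
print(match(task, {"imaging", "night-vision", "acoustic"}))  # exact
print(match(task, {"imaging"}))                              # inexact
print(match(task, {"acoustic"}))                             # no match
```

The advantage of doing this over an ontology rather than over flat sets is that a sensor advertising a *subclass* of a required capability (e.g. an infrared imager for an "imaging" requirement) still yields an exact match via subsumption reasoning.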

  19. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  20. Utilizing knowledge-base semantics in graph-based algorithms

    SciTech Connect

    Darwiche, A.

    1996-12-31

    Graph-based algorithms convert a knowledge base with a graph structure into one with a tree structure (a join-tree) and then apply tree-inference on the result. Nodes in the join-tree are cliques of variables and tree-inference is exponential in w*, the size of the maximal clique in the join-tree. A central property of join-trees that validates tree-inference is the running-intersection property: the intersection of any two cliques must belong to every clique on the path between them. We present two key results in connection to graph-based algorithms. First, we show that the running-intersection property, although sufficient, is not necessary for validating tree-inference. We present a weaker property for this purpose, called running-interaction, that depends on non-structural (semantical) properties of a knowledge base. We also present a linear algorithm that may reduce w* of a join-tree, possibly destroying its running-intersection property, while maintaining its running-interaction property and, hence, its validity for tree-inference. Second, we develop a simple algorithm for generating trees satisfying the running-interaction property. The algorithm bypasses triangulation (the standard technique for constructing join-trees) and does not construct a join-tree first. We show that the proposed algorithm may in some cases generate trees that are more efficient than those generated by modifying a join-tree.
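The running-intersection property the abstract builds on is easy to state in code: for every pair of cliques, their intersection must be contained in every clique on the tree path between them. A small checker on an illustrative clique tree (clique contents and tree shape are invented; the tree is assumed connected):

```python
def has_running_intersection(cliques, tree_edges):
    """Check the running-intersection property of a clique tree.
    `cliques` is a list of sets of variables; `tree_edges` are index
    pairs into `cliques` forming a connected tree."""
    adj = {i: [] for i in range(len(cliques))}
    for a, b in tree_edges:
        adj[a].append(b)
        adj[b].append(a)

    def path(src, dst, prev=None):
        """Unique path between two nodes of the tree (depth-first)."""
        if src == dst:
            return [src]
        for nxt in adj[src]:
            if nxt != prev:
                p = path(nxt, dst, src)
                if p:
                    return [src] + p
        return None

    n = len(cliques)
    for i in range(n):
        for j in range(i + 1, n):
            common = cliques[i] & cliques[j]
            for k in path(i, j):
                if not common <= cliques[k]:
                    return False
    return True

cliques = [{"A", "B"}, {"B", "C"}, {"C", "D"}]
print(has_running_intersection(cliques, [(0, 1), (1, 2)]))      # True
cliques_bad = [{"A", "B"}, {"C"}, {"B", "D"}]
print(has_running_intersection(cliques_bad, [(0, 1), (1, 2)]))  # False
```

The paper's weaker "running-interaction" property relaxes exactly this containment test by also consulting the semantics of the knowledge base, which is what allows it to shrink the maximal clique size w* while keeping tree-inference valid.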

  1. Big data analytics in immunology: a knowledge-based approach.

    PubMed

    Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into an analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow.

  2. Knowledge-based inference engine for online video dissemination

    NASA Astrophysics Data System (ADS)

    Zhou, Wensheng; Kuo, C.-C. Jay

    2000-10-01

    To facilitate easy access to rich information of multimedia over the Internet, we develop a knowledge-based classification system that supports automatic indexing and filtering based on semantic concepts for the dissemination of on-line real-time media. Automatic segmentation, annotation and summarization of media for fast information browsing and updating are achieved at the same time. In the proposed system, a real-time scene-change detection proxy performs an initial video structuring process by splitting a video clip into scenes. Motion and visual features are extracted in real time for every detected scene by using online feature extraction proxies. Higher semantics are then derived through a joint use of low-level features along with inference rules in the knowledge base. Inference rules are derived through a supervised learning process based on representative samples. On-line media filtering based on semantic concepts becomes possible by using the proposed video inference engine. Video streams are either blocked or sent to certain channels depending on whether or not the video stream is matched with the user's profile. The proposed system is extensively evaluated by applying the engine to video of basketball games.
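The "inference rules in the knowledge base" map low-level motion and visual features to higher semantics. A toy rule set in that spirit is shown below; the real rules are derived by supervised learning, and these feature names, thresholds, and labels are all invented for illustration.

```python
def classify_scene(features):
    """Toy knowledge-based inference: map low-level scene features
    (motion intensity in [0, 1], dominant colour) to a semantic label.
    Rules and thresholds are illustrative, not from the paper."""
    if features["motion"] > 0.7 and features["dominant_color"] == "wood":
        return "basketball court"
    if features["motion"] < 0.2:
        return "static scene"
    return "unknown"

scene = {"motion": 0.8, "dominant_color": "wood"}
print(classify_scene(scene))  # basketball court
```

A filtering proxy would then compare the inferred label against the user's profile to decide whether the stream is blocked or routed to a channel.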

  3. A knowledge-based care protocol system for ICU.

    PubMed

    Lau, F; Vincent, D D

    1995-01-01

    There is a growing interest in using care maps in ICU. So far, the emphasis has been on developing the critical path, problem/outcome, and variance reporting for specific diagnoses. This paper presents a conceptual knowledge-based care protocol system design for the ICU. It is based on the manual care map currently in use for managing myocardial infarction in the ICU of the Sturgeon General Hospital in Alberta. The proposed design uses expert rules, object schemas, case-based reasoning, and quantitative models as sources of its knowledge. Also being developed is a decision model with explicit linkages for outcome-process-measure from the care map. The resulting system is intended as a bedside charting and decision-support tool for caregivers. Proposed usage includes charting by acknowledgment, generation of alerts, and critiques on variances/events recorded, recommendations for planned interventions, and comparison with historical cases. Currently, a prototype is being developed on a PC-based network with Visual Basic, Level-Expert Object, and xBase. A clinical trial is also planned to evaluate whether this knowledge-based care protocol can reduce the length of stay of patients with myocardial infarction in the ICU.

  4. Hospital nurses' use of knowledge-based information resources.

    PubMed

    Tannery, Nancy Hrinya; Wessel, Charles B; Epstein, Barbara A; Gadd, Cynthia S

    2007-01-01

    The purpose of this study was to evaluate the information-seeking practices of nurses before and after access to a library's electronic collection of information resources. This is a pre/post intervention study of nurses at a rural community hospital. The hospital contracted with an academic health sciences library for access to a collection of online knowledge-based resources. Self-report surveys were used to obtain information about nurses' computer use and how they locate and access information to answer questions related to their patient care activities. In 2001, self-report surveys were sent to the hospital's 573 nurses during implementation of access to online resources with a post-implementation survey sent 1 year later. At the initiation of access to the library's electronic resources, nurses turned to colleagues and print textbooks or journals to satisfy their information needs. After 1 year of access, 20% of the nurses had begun to use the library's electronic resources. The study outcome suggests ready access to knowledge-based electronic information resources can lead to changes in behavior among some nurses.

  5. Portable Knowledge-Based Diagnostic And Maintenance Systems

    NASA Astrophysics Data System (ADS)

    Darvish, John; Olson, Noreen S.

    1989-03-01

    It is difficult to diagnose faults and maintain weapon systems because (1) they are highly complex pieces of equipment composed of multiple mechanical, electrical, and hydraulic assemblies, and (2) talented maintenance personnel are continuously being lost through the attrition process. To solve this problem, we developed a portable diagnostic and maintenance aid that uses a knowledge-based expert system. This aid incorporates diagnostics, operational procedures, repair and replacement procedures, and regularly scheduled maintenance into one compact, 18-pound graphics workstation. Drawings and schematics can be pulled up from the CD-ROM to assist the operator in answering the expert system's questions. Work for this aid began with the development of the initial knowledge-based expert system in a fast prototyping environment using a LISP machine. The second phase saw the development of a personal computer-based system that used videodisc technology to pictorially assist the operator. The current version of the aid eliminates the high expenses associated with videodisc preparation by scanning in the art work already in the manuals. A number of generic software tools have been developed that streamlined the construction of each iteration of the aid; these tools will be applied to the development of future systems.

  6. Network fingerprint: a knowledge-based characterization of biomedical networks

    PubMed Central

    Cui, Xiuliang; He, Haochen; He, Fuchu; Wang, Shengqi; Li, Fei; Bo, Xiaochen

    2015-01-01

    It can be difficult for biomedical researchers to understand complex molecular networks due to their unfamiliarity with the mathematical concepts employed. To represent molecular networks with clear meanings and familiar forms for biomedical researchers, we introduce a knowledge-based computational framework to decipher biomedical networks by making systematic comparisons to well-studied “basic networks”. A biomedical network is characterized as a spectrum-like vector called “network fingerprint”, which contains similarities to basic networks. This knowledge-based multidimensional characterization provides a more intuitive way to decipher molecular networks, especially for large-scale network comparisons and clustering analyses. As an example, we extracted network fingerprints of 44 disease networks in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The comparisons among the network fingerprints of disease networks revealed informative disease-disease and disease-signaling pathway associations, illustrating that the network fingerprinting framework will lead to new approaches for better understanding of biomedical networks. PMID:26307246
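
The fingerprint idea — describing a network by a vector of similarities to well-studied basic topologies — can be illustrated with a toy sketch. The "basic networks" (star, chain, clique) and the degree-profile cosine similarity below are illustrative stand-ins, not the paper's actual basic-network set or similarity measure:

```python
from collections import Counter
import math

def degree_profile(edges):
    """Normalized degree histogram of an undirected graph (edge list)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    hist = Counter(deg.values())
    total = sum(hist.values())
    return {d: c / total for d, c in hist.items()}

def cosine(p, q):
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0) * q.get(k, 0) for k in keys)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

# Hypothetical "basic networks" on 5 nodes: star, chain, clique.
basic = {
    "star":   [(0, i) for i in range(1, 5)],
    "chain":  [(i, i + 1) for i in range(4)],
    "clique": [(i, j) for i in range(5) for j in range(i + 1, 5)],
}

def fingerprint(edges):
    """Spectrum-like vector: similarity of a network to each basic network."""
    prof = degree_profile(edges)
    return {name: round(cosine(prof, degree_profile(e)), 3)
            for name, e in basic.items()}

# Similarities of a small hub-like network to each basic topology.
print(fingerprint([(0, 1), (0, 2), (0, 3), (0, 4), (1, 2)]))
```

Real network fingerprints compare richer structural features, but the principle is the same: the vector, not a single score, characterizes the network.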

  7. Knowledge-based imaging-sensor fusion system

    NASA Astrophysics Data System (ADS)

    Westrom, George

    1989-11-01

An imaging system which applies knowledge-based technology to supervise and control both sensor hardware and computation in the imaging system is described. It includes the development of an imaging system breadboard which brings together into one system work that we and others have pursued for LaRC for several years. The goal is to combine Digital Signal Processing (DSP) with Knowledge-Based Processing and also include Neural Net processing. The system is considered a smart camera. Imagine that there is a microgravity experiment on-board Space Station Freedom with a high-frame-rate, high-resolution camera. All the data cannot possibly be acquired from a laboratory on Earth. In fact, only a small fraction of the data will be received. Again, imagine being responsible for some experiments on Mars with the Mars Rover: the data rate is a few kilobits per second for data from several sensors and instruments. Would it not be preferable to have a smart system which would have some human knowledge and yet follow some instructions and attempt to make the best use of the limited bandwidth for transmission? The system concept, current status of the breadboard system and some recent experiments at the Mars-like Amboy Lava Fields in California are discussed.

  8. Extensible knowledge-based architecture for segmenting CT data

    NASA Astrophysics Data System (ADS)

    Brown, Matthew S.; McNitt-Gray, Michael F.; Goldin, Jonathan G.; Aberle, Denise R.

    1998-06-01

    A knowledge-based system has been developed for segmenting computed tomography (CT) images. Its modular architecture includes an anatomical model, image processing engine, inference engine and blackboard. The model contains a priori knowledge of size, shape, X-ray attenuation and relative position of anatomical structures. This knowledge is used to constrain low-level segmentation routines. Model-derived constraints and segmented image objects are both transformed into a common feature space and posted on the blackboard. The inference engine then matches image to model objects, based on the constraints. The transformation to feature space allows the knowledge and image data representations to be independent. Thus a high-level model can be used, with data being stored in a frame-based semantic network. This modularity and explicit representation of knowledge allows for straightforward system extension. We initially demonstrate an application to lung segmentation in thoracic CT, with subsequent extension of the knowledge-base to include tumors within the lung fields. The anatomical model was later augmented to include basic brain anatomy including the skull and blood vessels, to allow automatic segmentation of vascular structures in CT angiograms for 3D rendering and visualization.
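
The constraint-matching step the architecture describes — model-derived feature ranges constraining which label a segmented region can take — can be sketched as follows. The structures, feature names, and ranges here are hypothetical, not taken from the paper's anatomical model:

```python
# Hypothetical anatomical model: a priori feature ranges per structure.
MODEL = {
    "lung":   {"attenuation_hu": (-1000, -400), "area_mm2": (5000, 40000)},
    "muscle": {"attenuation_hu": (10, 60),      "area_mm2": (100, 20000)},
    "bone":   {"attenuation_hu": (300, 2000),   "area_mm2": (50, 5000)},
}

def satisfies(region, constraints):
    """True if every measured feature falls inside the model's allowed range."""
    return all(lo <= region[feat] <= hi for feat, (lo, hi) in constraints.items())

def label_regions(regions):
    """Inference step: match each segmented region to all compatible model objects."""
    return {name: [lbl for lbl, c in MODEL.items() if satisfies(r, c)]
            for name, r in regions.items()}

segmented = {
    "region_a": {"attenuation_hu": -850, "area_mm2": 15000},  # air-filled, large
    "region_b": {"attenuation_hu": 700,  "area_mm2": 300},    # dense, small
}
print(label_regions(segmented))  # region_a -> ['lung'], region_b -> ['bone']
```

Because both model constraints and image measurements live in the same feature space, the model can be extended (e.g. adding tumors or vessels) without touching the low-level segmentation code.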

  9. Apprenticeship learning techniques for knowledge-based systems

    SciTech Connect

    Wilkins, D.C.

    1987-01-01

This thesis describes apprenticeship learning techniques for automating the transfer of expertise. Apprenticeship learning is a form of learning by watching, in which learning occurs as a byproduct of building explanations of human problem-solving actions. Apprenticeship is the most powerful method that human experts use to refine and debug their expertise in knowledge-intensive domains such as medicine; this motivates giving such capabilities to an expert system. The major accomplishment of this thesis is showing how an explicit representation of the strategy knowledge for solving a general problem class, such as diagnosis, can provide a basis for learning the knowledge that is specific to a particular domain, such as medicine. The Odysseus learning program provides the first demonstration of using the same technique to transfer expertise both to and from an expert system knowledge base. Another major focus of this thesis is the limitations of apprenticeship learning. It is shown that extant techniques for reasoning under uncertainty in expert systems lead to a sociopathic knowledge base.

  10. Knowledge-based simulation using object-oriented programming

    NASA Technical Reports Server (NTRS)

    Sidoran, Karen M.

    1993-01-01

    Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.
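
The factual/behavioral split described above maps naturally onto objects in a discrete-event simulation: attributes hold the facts, methods hold the behavior. A minimal sketch, with entity and event names invented for illustration:

```python
import heapq

class Entity:
    """Factual knowledge lives in attributes; behavioral knowledge in methods."""
    def __init__(self, name):
        self.name = name
    def handle(self, event, sim):
        raise NotImplementedError

class Sensor(Entity):
    def handle(self, event, sim):
        # Behavioral rule: a detection triggers an alert 2 time units later.
        if event == "detect":
            sim.log.append((sim.now, self.name, "detected"))
            sim.schedule(sim.now + 2, self, "alert")
        elif event == "alert":
            sim.log.append((sim.now, self.name, "alerted"))

class Simulation:
    def __init__(self):
        self.now, self.queue, self.log = 0, [], []
        self._seq = 0  # tie-breaker so the heap never compares entities

    def schedule(self, time, entity, event):
        heapq.heappush(self.queue, (time, self._seq, entity, event))
        self._seq += 1

    def run(self):
        while self.queue:
            self.now, _, entity, event = heapq.heappop(self.queue)
            entity.handle(event, self)

sim = Simulation()
sim.schedule(1, Sensor("radar-1"), "detect")
sim.run()
print(sim.log)  # [(1, 'radar-1', 'detected'), (3, 'radar-1', 'alerted')]
```

New entity types extend the simulation by subclassing, which is exactly the flexibility the object-oriented formulation buys.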

  11. A proven knowledge-based approach to prioritizing process information

    NASA Technical Reports Server (NTRS)

    Corsberg, Daniel R.

    1991-01-01

    Many space-related processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect is rapid analysis of the changing process information. During a disturbance, this task can overwhelm humans as well as computers. Humans deal with this by applying heuristics in determining significant information. A simple, knowledge-based approach to prioritizing information is described. The approach models those heuristics that humans would use in similar circumstances. The approach described has received two patents and was implemented in the Alarm Filtering System (AFS) at the Idaho National Engineering Laboratory (INEL). AFS was first developed for application in a nuclear reactor control room. It has since been used in chemical processing applications, where it has had a significant impact on control room environments. The approach uses knowledge-based heuristics to analyze data from process instrumentation and respond to that data according to knowledge encapsulated in objects and rules. While AFS cannot perform the complete diagnosis and control task, it has proven to be extremely effective at filtering and prioritizing information. AFS was used for over two years as a first level of analysis for human diagnosticians. Given the approach's proven track record in a wide variety of practical applications, it should be useful in both ground- and space-based systems.
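
The heuristic filtering described above can be sketched as a small rule pipeline over alarms and process state. The rules, tags, and priority adjustments below are invented for illustration and are not AFS's actual knowledge base:

```python
# Hypothetical alarm-filtering rules in the spirit of AFS: each rule inspects
# an alarm plus current process state and may raise or lower its priority.
def rule_direct_cause(alarm, state, priority):
    # An alarm whose cause is itself an active alarm is a symptom: demote it.
    if alarm.get("caused_by") in state["active"]:
        return priority - 2
    return priority

def rule_safety_critical(alarm, state, priority):
    if alarm["tag"] in state["safety_critical_tags"]:
        return priority + 3
    return priority

RULES = [rule_direct_cause, rule_safety_critical]

def prioritize(alarms, state):
    scored = []
    for alarm in alarms:
        p = alarm.get("base_priority", 1)
        for rule in RULES:
            p = rule(alarm, state, p)
        scored.append((p, alarm["tag"]))
    return sorted(scored, reverse=True)

state = {"active": {"PUMP_TRIP"}, "safety_critical_tags": {"CORE_TEMP_HI"}}
alarms = [
    {"tag": "PUMP_TRIP", "base_priority": 2},
    {"tag": "LOW_FLOW", "base_priority": 2, "caused_by": "PUMP_TRIP"},
    {"tag": "CORE_TEMP_HI", "base_priority": 2},
]
print(prioritize(alarms, state))
# CORE_TEMP_HI rises to the top; LOW_FLOW is demoted as a downstream symptom.
```

As in AFS, the system does no diagnosis here; it only ranks what the human diagnostician should look at first.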

  12. Framework Support For Knowledge-Based Software Development

    NASA Astrophysics Data System (ADS)

    Huseth, Steve

    1988-03-01

    The advent of personal engineering workstations has brought substantial information processing power to the individual programmer. Advanced tools and environment capabilities supporting the software lifecycle are just beginning to become generally available. However, many of these tools are addressing only part of the software development problem by focusing on rapid construction of self-contained programs by a small group of talented engineers. Additional capabilities are required to support the development of large programming systems where a high degree of coordination and communication is required among large numbers of software engineers, hardware engineers, and managers. A major player in realizing these capabilities is the framework supporting the software development environment. In this paper we discuss our research toward a Knowledge-Based Software Assistant (KBSA) framework. We propose the development of an advanced framework containing a distributed knowledge base that can support the data representation needs of tools, provide environmental support for the formalization and control of the software development process, and offer a highly interactive and consistent user interface.

  13. Knowledge base image classification using P-trees

    NASA Astrophysics Data System (ADS)

    Seetha, M.; Ravi, G.

    2010-02-01

Image classification is the process of assigning classes to the pixels in remote-sensed images and is important for GIS applications, since a classified image is much easier to incorporate than the original unclassified image. To resolve misclassification in traditional parametric classifiers such as the maximum likelihood classifier, a neural network classifier is implemented using the back-propagation algorithm. Extra spectral and spatial knowledge acquired from ancillary information is required to improve accuracy and remove spectral confusion. To build the knowledge base automatically, this paper explores a non-parametric decision tree classifier to extract knowledge from the spatial data in the form of classification rules. A new method is proposed using a data structure called the Peano Count Tree (P-tree) for decision tree classification. The Peano Count Tree is a spatial data organization that provides a lossless compressed representation of a spatial data set and facilitates more efficient classification than other data mining techniques. Accuracy is assessed using overall accuracy, user's accuracy and producer's accuracy for maximum likelihood classification, neural network classification using back propagation, knowledge base classification, post classification and the P-tree classifier. The results reveal that the knowledge extracted from the decision tree classifier and the P-tree data structure in the proposed approach removes the problem of spectral confusion to a great extent. It is ascertained that the P-tree classifier surpasses the other classification techniques.
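
The count-tree idea can be sketched in a few lines: recursively quarter a binary raster and store only the count of 1-bits per quadrant, stopping where a quadrant is pure. This is a simplified illustration (real P-trees use Peano/Z-ordering over bit sequences and further compression), on a made-up 4x4 bitmap:

```python
def ptree(bits):
    """Build a count-tree node for a 2^n x 2^n bit matrix:
    (count_of_ones, children), with children = None for pure quadrants."""
    n = len(bits)
    count = sum(sum(row) for row in bits)
    if count == 0 or count == n * n or n == 1:
        return (count, None)  # pure (or single-bit) quadrant: no expansion
    h = n // 2
    quads = [[row[c:c + h] for row in bits[r:r + h]]
             for r in (0, h) for c in (0, h)]
    return (count, [ptree(q) for q in quads])

bitmap = [
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 1],
]
root = ptree(bitmap)
print(root[0])     # 7 ones in total
print(root[1][0])  # upper-left quadrant is pure-1: (4, None)
```

Because pure quadrants collapse to a single count, class counts needed by a decision tree (e.g. how many pixels of a band value fall in a region) can be answered without scanning the raw raster.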

  14. Knowledge-based imaging-sensor fusion system

    NASA Technical Reports Server (NTRS)

    Westrom, George

    1989-01-01

    An imaging system which applies knowledge-based technology to supervise and control both sensor hardware and computation in the imaging system is described. It includes the development of an imaging system breadboard which brings together into one system work that we and others have pursued for LaRC for several years. The goal is to combine Digital Signal Processing (DSP) with Knowledge-Based Processing and also include Neural Net processing. The system is considered a smart camera. Imagine that there is a microgravity experiment on-board Space Station Freedom with a high frame rate, high resolution camera. All the data cannot possibly be acquired from a laboratory on Earth. In fact, only a small fraction of the data will be received. Again, imagine being responsible for some experiments on Mars with the Mars Rover: the data rate is a few kilobits per second for data from several sensors and instruments. Would it not be preferable to have a smart system which would have some human knowledge and yet follow some instructions and attempt to make the best use of the limited bandwidth for transmission. The system concept, current status of the breadboard system and some recent experiments at the Mars-like Amboy Lava Fields in California are discussed.

  15. The incidence of cancer in patients with rheumatoid arthritis and a prior malignancy who receive TNF inhibitors or rituximab: results from the British Society for Rheumatology Biologics Register-Rheumatoid Arthritis

    PubMed Central

    Silva-Fernández, Lucía; Lunt, Mark; Kearsley-Fleet, Lianne; Watson, Kath D.; Dixon, William G.; Symmons, Deborah P. M.

    2016-01-01

Objective. To explore the influence of TNF inhibitor (TNFi) therapy and rituximab (RTX) upon the incidence of cancer in patients with RA and prior malignancy. Methods. The study population comprised RA subjects with a prior malignancy reported to the UK national cancer registers, recruited to the British Society for Rheumatology Biologics Register from 2001 to 2013. We compared rates of first incident malignancy in a TNFi cohort, RTX cohort and synthetic DMARDs (sDMARD) cohort. Results. We identified 425 patients with a prior malignancy from 18 000 RA patients in the study. Of these, 101 patients developed a new malignancy. The rates of incident malignancy were 33.3 events/1000 person-years (py) in the TNFi cohort, 24.7 events/1000 py in the RTX cohort and 53.8 events/1000 py in the sDMARD cohort. The age- and gender-adjusted hazard ratio was 0.55 (95% CI: 0.35, 0.86) for the TNFi cohort and 0.43 (95% CI: 0.10, 1.80) for the RTX cohort in comparison with the sDMARD cohort. In the sDMARD cohort, 17.0% of patients had a recurrence of the same cancer, compared with 12.8% and 4.3% in the TNFi and RTX cohorts, respectively. Conclusions. Although numbers are still low, it seems that patients with RA and prior malignancy selected to receive either a TNFi or RTX in the UK do not have an increased risk of future incident malignancy. PMID:27550304
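
The rates above are expressed as events per 1000 person-years. The arithmetic can be sketched with hypothetical event counts and follow-up totals (the abstract reports only the resulting rates, not the raw denominators):

```python
def rate_per_1000_py(events, person_years):
    """Incidence rate: events per 1000 person-years of follow-up."""
    return 1000 * events / person_years

# Hypothetical counts chosen only to reproduce the reported magnitudes.
tnfi = rate_per_1000_py(60, 1800)    # ~33.3 events/1000 py
sdmard = rate_per_1000_py(28, 520)   # ~53.8 events/1000 py
print(round(tnfi, 1), round(sdmard, 1))
```

Note that the crude ratio of two such rates generally differs from the age- and gender-adjusted hazard ratio the study reports, which comes from a regression model.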

  16. Use of thermal analysis techniques (TG-DSC) for the characterization of diverse organic municipal waste streams to predict biological stability prior to land application

    SciTech Connect

    Fernandez, Jose M.; Plaza, Cesar; Polo, Alfredo; Plante, Alain F.

    2012-01-15

) techniques. Total amounts of CO2 respired indicated that the organic matter in the TS was the least stable, while that in the CS was the most stable. This was confirmed by changes detected with the spectroscopic methods in the composition of the organic wastes due to C mineralization. Differences were especially pronounced for TS, which showed a remarkable loss of aliphatic and proteinaceous compounds during the incubation process. TG, and especially DSC analysis, clearly reflected these differences between the three organic wastes before and after the incubation. Furthermore, the calculated energy density, which represents the energy available per unit of organic matter, showed a strong correlation with cumulative respiration. Results obtained support the hypothesis of a potential link between the thermal and biological stability of the studied organic materials, and consequently the ability of thermal analysis to characterize the maturity of municipal organic wastes and composts.

  17. A knowledge-based control system for air-scour optimisation in membrane bioreactors.

    PubMed

    Ferrero, G; Monclús, H; Sancho, L; Garrido, J M; Comas, J; Rodríguez-Roda, I

    2011-01-01

Although membrane bioreactor (MBR) technology is still a growing sector, its progressive implementation all over the world, together with substantial technical achievements, has allowed it to reach a degree of maturity comparable to other, more conventional wastewater treatment technologies. With current energy requirements of around 0.6-1.1 kWh/m3 of treated wastewater and investment costs similar to those of conventional treatment plants, the main market niche for MBRs is areas with very restrictive discharge limits, where treatment plants have to be compact, or where water reuse is necessary. Operational costs are higher than for conventional treatments; consequently there is still a need, and scope, for energy saving and optimisation. This paper presents the development of a knowledge-based decision support system (DSS) for the integrated operation and remote control of the biological and physical (filtration and backwashing or relaxation) processes in MBRs. The core of the DSS is a knowledge-based control module for automating air-scour consumption and minimising energy consumption.
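
A knowledge-based air-scour rule of the kind such a control module might encode can be sketched as follows. The trend measure, thresholds, and flow bounds are entirely hypothetical, not taken from the paper:

```python
# Hypothetical rule: cut aeration when membrane permeability is stable,
# step it up when fouling accelerates. Units are illustrative only.
def air_scour_setpoint(permeability_trend, current_flow):
    """permeability_trend: change per hour (more negative = faster fouling);
    current_flow: current air-scour flow setpoint."""
    if permeability_trend < -5:       # rapid fouling: intensify scouring
        return min(current_flow * 1.2, 100.0)
    if permeability_trend > -1:       # stable operation: save energy
        return max(current_flow * 0.9, 20.0)
    return current_flow               # in between: hold the setpoint

print(air_scour_setpoint(-0.5, 60.0))  # stable -> reduced setpoint
print(air_scour_setpoint(-8.0, 60.0))  # fouling -> increased setpoint
```

A real DSS would combine many such rules with operator knowledge and plant-specific calibration; the point here is only the shape of a knowledge-based control rule.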

  18. New developments of a knowledge based system (VEG) for inferring vegetation characteristics

    NASA Technical Reports Server (NTRS)

    Kimes, D. S.; Harrison, P. A.; Harrison, P. R.

    1992-01-01

    An extraction technique for inferring physical and biological surface properties of vegetation using nadir and/or directional reflectance data as input has been developed. A knowledge-based system (VEG) accepts spectral data of an unknown target as input, determines the best strategy for inferring the desired vegetation characteristic, applies the strategy to the target data, and provides a rigorous estimate of the accuracy of the inference. Progress in developing the system is presented. VEG combines methods from remote sensing and artificial intelligence, and integrates input spectral measurements with diverse knowledge bases. VEG has been developed to (1) infer spectral hemispherical reflectance from any combination of nadir and/or off-nadir view angles; (2) test and develop new extraction techniques on an internal spectral database; (3) browse, plot, or analyze directional reflectance data in the system's spectral database; (4) discriminate between user-defined vegetation classes using spectral and directional reflectance relationships; and (5) infer unknown view angles from known view angles (known as view angle extension).

  19. Knowledge-Based Identification of Soluble Biomarkers: Hepatic Fibrosis in NAFLD as an Example

    PubMed Central

    Page, Sandra; Birerdinc, Aybike; Estep, Michael; Stepanova, Maria; Afendy, Arian; Petricoin, Emanuel; Younossi, Zobair; Chandhoke, Vikas; Baranova, Ancha

    2013-01-01

    The discovery of biomarkers is often performed using high-throughput proteomics-based platforms and is limited to the molecules recognized by a given set of purified and validated antigens or antibodies. Knowledge-based, or systems biology, approaches that involve the analysis of integrated data, predominantly molecular pathways and networks may infer quantitative changes in the levels of biomolecules not included by the given assay from the levels of the analytes profiled. In this study we attempted to use a knowledge-based approach to predict biomarkers reflecting the changes in underlying protein phosphorylation events using Nonalcoholic Fatty Liver Disease (NAFLD) as a model. Two soluble biomarkers, CCL-2 and FasL, were inferred in silico as relevant to NAFLD pathogenesis. Predictive performance of these biomarkers was studied using serum samples collected from patients with histologically proven NAFLD. Serum levels of both molecules, in combination with clinical and demographic data, were predictive of hepatic fibrosis in a cohort of NAFLD patients. Our study suggests that (1) NASH-specific disruption of the kinase-driven signaling cascades in visceral adipose tissue lead to detectable changes in the levels of soluble molecules released into the bloodstream, and (2) biomarkers discovered in silico could contribute to predictive models for non-malignant chronic diseases. PMID:23405244

  20. Knowledge-Based Systems Approach to Wilderness Fire Management.

    NASA Astrophysics Data System (ADS)

    Saveland, James M.

The 1988 and 1989 forest fire seasons in the Intermountain West highlight the shortcomings of current fire policy. To fully implement an optimization policy that minimizes the costs and net value change of resources affected by fire, long-range fire severity information is essential, yet lacking. This information is necessary for total mobility of suppression forces, implementing contain and confine suppression strategies, effectively dealing with multiple fire situations, scheduling summer prescribed burning, and wilderness fire management. A knowledge-based system, Delphi, was developed to help provide long-range information. Delphi provides: (1) a narrative of advice on where a fire might spread, if allowed to burn, (2) a summary of recent weather and fire danger information, and (3) a Bayesian analysis of long-range fire danger potential. Uncertainty is inherent in long-range information. Decision theory and judgment research can be used to help understand the heuristics experts use to make decisions under uncertainty, heuristics responsible both for expert performance and bias. Judgment heuristics and resulting bias are examined from a fire management perspective. Signal detection theory and receiver operating curve (ROC) analysis can be used to develop a long-range forecast to improve decisions. ROC analysis mimics some of the heuristics and compensates for some of the bias. Most importantly, ROC analysis displays a continuum of bias from which an optimum operating point can be selected. ROC analysis is especially appropriate for long-range forecasting since (1) the occurrence of possible future events is stated in terms of probability, (2) prediction skill is displayed, (3) inherent trade-offs are displayed, and (4) fire danger is explicitly defined. Statements on the probability of the energy release component of the National Fire Danger Rating System exceeding a critical value later in the fire season can be made in early July in the Intermountain West.
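
An ROC curve of the kind described can be computed directly by sweeping a decision threshold over forecast scores and plotting hit rate against false-alarm rate; each threshold is one candidate operating point. The scores and outcomes below are made up for illustration:

```python
# Minimal ROC sketch for a probabilistic fire-danger forecast.
def roc_points(scores, outcomes):
    """Return (false-alarm rate, hit rate) at every decision threshold."""
    pos = sum(outcomes)
    neg = len(outcomes) - pos
    pts = []
    for thr in sorted(set(scores), reverse=True):
        hits = sum(1 for s, o in zip(scores, outcomes) if s >= thr and o)
        fas = sum(1 for s, o in zip(scores, outcomes) if s >= thr and not o)
        pts.append((fas / neg, hits / pos))
    return pts

# score = hypothetical forecast index; outcome = severe season occurred (1/0)
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
outcomes = [1, 1, 0, 1, 0, 0, 0]
for fa, hit in roc_points(scores, outcomes):
    print(f"FA={fa:.2f}  Hit={hit:.2f}")
```

Selecting an operating point along this curve is exactly the explicit bias choice the abstract highlights: a fire manager can trade missed severe seasons against false alarms deliberately rather than implicitly.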

  1. The Knowledge-Based Software Assistant: Beyond CASE

    NASA Technical Reports Server (NTRS)

    Carozzoni, Joseph A.

    1993-01-01

    This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.

  2. Development of a knowledge-based electronic patient record.

    PubMed

    Safran, C; Rind, D M; Sands, D Z; Davis, R B; Wald, J; Slack, W V

    1996-01-01

    To help clinicians care for patients with HIV infection, we developed an interactive knowledge-based electronic patient record that integrates rule-based decision support and full-text information retrieval with an online patient record. This highly interactive clinical workstation now allows the clinicians at a large primary care practice (30,000 ambulatory visits per year) to use online information resources and fully electronic patient records during all patient encounters. The resulting practice database is continually updated with outcome data on a cohort of 700 patients with HIV infection. As a byproduct of this integrated system, we have developed improved statistical methods to measure the effects of electronic alerts and reminders.

  3. TMS for Instantiating a Knowledge Base With Incomplete Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A computer program that belongs to the class known among software experts as output truth-maintenance-systems (output TMSs) has been devised as one of a number of software tools for reducing the size of the knowledge base that must be searched during execution of artificial- intelligence software of the rule-based inference-engine type in a case in which data are missing. This program determines whether the consequences of activation of two or more rules can be combined without causing a logical inconsistency. For example, in a case involving hypothetical scenarios that could lead to turning a given device on or off, the program determines whether a scenario involving a given combination of rules could lead to turning the device both on and off at the same time, in which case that combination of rules would not be included in the scenario.
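
The consistency check the abstract describes — rejecting combinations of rule activations whose combined consequences assert a proposition both true and false — can be sketched as follows, with rule names and propositions invented for illustration:

```python
# Each rule activation contributes a set of consequences, here modeled as
# (proposition, truth_value) pairs. A combination is consistent iff no
# proposition appears with both truth values (e.g. a valve both on and off).
def consistent(*consequence_sets):
    combined = set().union(*consequence_sets)
    return not any((prop, not val) in combined for prop, val in combined)

rule_a = {("valve_1", True), ("pump", True)}   # scenario turns valve_1 on
rule_b = {("valve_1", False)}                  # scenario turns valve_1 off
rule_c = {("heater", False)}

print(consistent(rule_a, rule_c))  # True: no contradiction
print(consistent(rule_a, rule_b))  # False: valve_1 both on and off
```

Pruning inconsistent combinations up front is what lets the inference engine search a much smaller space of scenarios when data are missing.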

  4. Knowledge-based fault diagnosis system for refuse collection vehicle

    SciTech Connect

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y.

    2015-05-15

The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of the waste compactor truck to local authorities as well as waste management companies. The company faces difficulty acquiring knowledge from its expert when the expert is absent. To solve this problem, the expert's knowledge can be stored in an expert system, which can provide the necessary support when the expert is not available. In this way the process and tooling can be standardized and made more accurate. The knowledge input to the expert system is based on design guidelines and the expert's experience. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  5. A model for a knowledge-based system's life cycle

    NASA Technical Reports Server (NTRS)

    Kiss, Peter A.

    1990-01-01

The American Institute of Aeronautics and Astronautics has initiated a Committee on Standards for Artificial Intelligence. Presented here are the initial efforts of one of the working groups of that committee. The purpose here is to present a candidate model for the development life cycle of Knowledge Based Systems (KBS). The intent is for the model to be used by the Aerospace Community and eventually be evolved into a standard. The model is rooted in the evolutionary model, borrows from the spiral model, and is embedded in the standard Waterfall model for software development. Its intent is to satisfy the development of both stand-alone and embedded KBSs. The phases of the life cycle are detailed, as are the review points that constitute the key milestones throughout the development process. The applicability and strengths of the model are discussed along with areas needing further development and refinement by the aerospace community.

  6. Knowledge-based system for flight information management. Thesis

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.

    1990-01-01

    The use of knowledge-based system (KBS) architectures to manage information on the primary flight display (PFD) of commercial aircraft is described. The PFD information management strategy used tailored the information on the PFD to the tasks the pilot performed. The KBS design and implementation of the task-tailored PFD information management application is described. The knowledge acquisition and subsequent system design of a flight-phase-detection KBS is also described. The flight-phase output of this KBS was used as input to the task-tailored PFD information management KBS. The implementation and integration of this KBS with existing aircraft systems and the other KBS is described. The flight tests are examined of both KBS's, collectively called the Task-Tailored Flight Information Manager (TTFIM), which verified their implementation and integration, and validated the software engineering advantages of the KBS approach in an operational environment.

  7. Knowledge-based fault diagnosis system for refuse collection vehicle

    NASA Astrophysics Data System (ADS)

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y.

    2015-05-01

The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of the waste compactor truck to local authorities as well as waste management companies. The company faces difficulty acquiring knowledge from its expert when the expert is absent. To solve this problem, the expert's knowledge can be stored in an expert system, which can provide the necessary support when the expert is not available. In this way the process and tooling can be standardized and made more accurate. The knowledge input to the expert system is based on design guidelines and the expert's experience. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  8. Knowledge-Based Framework: its specification and new related discussions

    NASA Astrophysics Data System (ADS)

    Rodrigues, Douglas; Zaniolo, Rodrigo R.; Branco, Kalinka R. L. J. C.

    2015-09-01

Unmanned Aerial Vehicles (UAVs) are a common application of critical embedded systems. The heterogeneity prevalent in these vehicles, in terms of avionics services, is particularly relevant to the elaboration of multi-application missions. Moreover, this heterogeneity in UAV services often manifests itself in characteristics such as reliability, security and performance. Different service implementations typically offer different guarantees in terms of these characteristics and of the associated costs. In particular, we explore the notion of Service-Oriented Architecture (SOA) in the context of UAVs as safety-critical embedded systems for the composition of services that fulfil application-specified performance and dependability guarantees. We therefore propose a framework for the deployment of these services and their variants, called the Knowledge-Based Framework for Dynamically Changing Applications (KBF), and specify its services module, discussing the related issues.

  9. Knowledge-based navigation of complex information spaces

    SciTech Connect

    Burke, R.D.; Hammond, K.J.; Young, B.C.

    1996-12-31

While the explosion of on-line information has brought new opportunities for finding and using electronic data, it has also brought to the forefront the problem of isolating useful information and making sense of large multi-dimensional information spaces. We have developed an approach to building data "tour guides," called FINDME systems. These programs know enough about an information space to be able to help a user navigate through it. The user not only comes away with items of useful information but also with insights into the structure of the information space itself. In these systems, we have combined instance-based browsing, retrieval structured around the critiquing of previously retrieved examples, and retrieval strategies: knowledge-based heuristics for finding relevant information. We illustrate these techniques with several examples, concentrating especially on RENTME, a FINDME system for helping users find suitable rental apartments in the Chicago metropolitan area.
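
Retrieval structured around critiquing can be sketched as: the user tweaks one attribute of the current example ("cheaper", "bigger") and the system returns the closest item satisfying the tweak. The data and attribute names below are invented for illustration:

```python
# Hypothetical apartment listings in the spirit of RENTME.
apartments = [
    {"id": 1, "rent": 900,  "bedrooms": 1},
    {"id": 2, "rent": 1200, "bedrooms": 2},
    {"id": 3, "rent": 800,  "bedrooms": 2},
    {"id": 4, "rent": 1500, "bedrooms": 3},
]

def critique(current, direction, attr, items):
    """Return the item closest to `current` after tweaking one attribute."""
    if direction == "less":
        cands = [i for i in items if i[attr] < current[attr]]
        return max(cands, key=lambda i: i[attr], default=None)  # closest below
    cands = [i for i in items if i[attr] > current[attr]]
    return min(cands, key=lambda i: i[attr], default=None)      # closest above

start = apartments[1]  # id 2: $1200, 2 bedrooms
print(critique(start, "less", "rent", apartments))      # id 1: $900
print(critique(start, "more", "bedrooms", apartments))  # id 4: 3 bedrooms
```

Each retrieved example both answers the query and teaches the user something about the trade-offs in the space, which is the "tour guide" effect the abstract describes.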

  10. [Artificial intelligence--the knowledge base applied to nephrology].

    PubMed

    Sancipriano, G P

    2005-01-01

    The idea that efficacy, efficiency, and quality in medicine cannot be reached without organizing the huge knowledge of medical and nursing science is very common. Engineers and computer scientists have developed medical software with great prospects for success, but currently these software applications are not so useful in clinical practice. The medical doctor and the trained nurse live the 'information age' in many daily activities, but the main benefits are not so widespread in working activities. Artificial intelligence and, particularly, expert systems appeal to health staff because of their potential. The first part of this paper summarizes the characteristics of 'weak artificial intelligence' and of expert systems important in clinical practice. The second part discusses medical doctors' requirements and the current nephrologic knowledge bases available for artificial intelligence development.

  11. The 2004 knowledge base parametric grid data software suite.

    SciTech Connect

    Wilkening, Lisa K.; Simons, Randall W.; Ballard, Sandy; Jensen, Lee A.; Chang, Marcus C.; Hipp, James Richard

    2004-08-01

    One of the most important types of data in the National Nuclear Security Administration (NNSA) Ground-Based Nuclear Explosion Monitoring Research and Engineering (GNEM R&E) Knowledge Base (KB) is parametric grid (PG) data. PG data can be used to improve signal detection, signal association, and event discrimination, but so far their greatest use has been for improving event location by providing ground-truth-based corrections to travel-time base models. In this presentation we discuss the latest versions of the complete suite of Knowledge Base PG tools developed by NNSA to create, access, manage, and view PG data. The primary PG population tool is the Knowledge Base calibration integration tool (KBCIT). KBCIT is an interactive computer application to produce interpolated calibration-based information that can be used to improve monitoring performance by improving precision of model predictions and by providing proper characterizations of uncertainty. It is used to analyze raw data and produce kriged correction surfaces that can be included in the Knowledge Base. KBCIT not only produces the surfaces but also records all steps in the analysis for later review and possible revision. New features in KBCIT include a new variogram autofit algorithm; the storage of database identifiers with a surface; the ability to merge surfaces; and improved surface-smoothing algorithms. The Parametric Grid Library (PGL) provides the interface to access the data and models stored in a PGL file database. The PGL represents the core software library used by all the GNEM R&E tools that read or write PGL data (e.g., KBCIT and LocOO). The library provides data representations and software models to support accurate and efficient seismic phase association and event location. Recent improvements include conversion of the flat-file database (FDB) to an Oracle database representation; automatic access of station/phase tagged models from the FDB during location; modification of the core

  12. Assessing an AI knowledge-base for asymptomatic liver diseases.

    PubMed

    Babic, A; Mathiesen, U; Hedin, K; Bodemar, G; Wigertz, O

    1998-01-01

    Discovering previously unseen knowledge in clinical data is of importance in the field of asymptomatic liver diseases. Avoiding liver biopsy, which is used as the ultimate confirmation of diagnosis, by making the decision based on relevant laboratory findings alone would be an essential form of support. The system, based on Quinlan's ID3 algorithm, was simple and efficient in extracting the sought knowledge. The basic principles of applying such AI systems are therefore described and complemented with a medical evaluation. Some of the diagnostic rules were found to be useful as decision algorithms, i.e., they could be directly applied in clinical work and made part of the knowledge base of the Liver Guide, an automated decision support system.
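    The core of Quinlan's ID3, which the abstract relies on, is choosing the attribute with the highest information gain at each split. A minimal sketch of that criterion follows; the laboratory findings and labels are invented for illustration and are not from the study's data.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Reduction in entropy from splitting on one attribute (ID3 criterion)."""
    n = len(labels)
    splits = {}
    for row, label in zip(rows, labels):
        splits.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(s) / n * entropy(s) for s in splits.values())
    return entropy(labels) - remainder

# Toy lab findings: (ALT level, GGT level) -> liver disease yes/no
rows = [("high", "high"), ("high", "low"), ("low", "low"), ("low", "high")]
labels = ["yes", "yes", "no", "no"]
print(information_gain(rows, labels, 0))  # ALT separates the classes fully -> 1.0
print(information_gain(rows, labels, 1))  # GGT is uninformative here -> 0.0
```

    ID3 builds the tree by recursing on the attribute that maximizes this gain, which is why the extracted rules read directly as decision algorithms.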

  13. Knowledge-based assistance in costing the space station DMS

    NASA Technical Reports Server (NTRS)

    Henson, Troy; Rone, Kyle

    1988-01-01

    The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.

  14. Incremental Knowledge Base Construction Using DeepDive.

    PubMed

    Shin, Jaeho; Wu, Sen; Wang, Feiran; De Sa, Christopher; Zhang, Ce; Ré, Christopher

    2015-07-01

    Populating a database with unstructured information is a long-standing problem in industry and research that encompasses problems of extraction, cleaning, and integration. Recent names for this problem include dealing with dark data and knowledge base construction (KBC). In this work, we describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems, and we present techniques to make the KBC process more efficient. We observe that the KBC process is iterative, and we develop techniques to incrementally produce inference results for KBC systems. We propose two methods for incremental inference, based respectively on sampling and variational techniques. We also study the tradeoff space of these methods and develop a simple rule-based optimizer. DeepDive includes all of these contributions, and we evaluate DeepDive on five KBC systems, showing that it can speed up KBC inference tasks by up to two orders of magnitude with negligible impact on quality.

  15. Knowledge-based requirements analysis for automating software development

    NASA Technical Reports Server (NTRS)

    Markosian, Lawrence Z.

    1988-01-01

    We present a new software development paradigm that automates the derivation of implementations from requirements. In this paradigm, informally stated requirements are expressed in a domain-specific requirements specification language. This language is machine-understandable, and requirements expressed in it are captured in a knowledge base. Once the requirements are captured, more detailed specifications, and eventually implementations, are derived by the system using transformational synthesis. A key characteristic of the process is that the required human intervention takes the form of providing problem- and domain-specific engineering knowledge, not writing detailed implementations. We describe a prototype system that applies the paradigm in the realm of communication engineering: the prototype automatically generates implementations of buffers following analysis of the requirements on each buffer.

  16. Distribution planning using a knowledge-based expert system

    SciTech Connect

    Hsu, Y.Y.; Chen, J.L. . Dept. of Electrical Engineering)

    1990-07-01

    An expert system is designed for determining substation locations and the feeder configuration of a distribution system. By incorporating the heuristic rules currently followed by distribution engineers into the knowledge base, the expert system can benefit from system planners' experience in its problem-solving process. To minimize the feeder losses of the distribution plan, a novel approach, usually referred to as the location-allocation method, is proposed to determine substation locations. The expert system is implemented in the AI language PROLOG and is designed to be used interactively. The developed expert system is applied to the planning of a distribution system in Taiwan. Test results indicate that the expert system is both efficient and user-friendly in reaching the final distribution plan.
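    The location-allocation method named in the abstract is commonly realized as an alternating heuristic: assign each load point to its nearest substation, then relocate each substation to the load-weighted centroid of its cluster, and repeat until stable. The sketch below illustrates that general heuristic with invented coordinates and demands; it is not the paper's PROLOG implementation.

```python
def location_allocation(loads, sites, iterations=20):
    """loads: list of (x, y, demand); sites: initial substation (x, y) list."""
    for _ in range(iterations):
        # Allocation step: each load goes to its nearest substation.
        clusters = [[] for _ in sites]
        for x, y, w in loads:
            nearest = min(range(len(sites)),
                          key=lambda i: (sites[i][0] - x) ** 2 + (sites[i][1] - y) ** 2)
            clusters[nearest].append((x, y, w))
        # Location step: move each substation to its cluster's weighted centroid.
        new_sites = []
        for i, cluster in enumerate(clusters):
            if not cluster:
                new_sites.append(sites[i])
                continue
            total = sum(w for _, _, w in cluster)
            new_sites.append((sum(x * w for x, _, w in cluster) / total,
                              sum(y * w for _, y, w in cluster) / total))
        if new_sites == sites:  # converged
            break
        sites = new_sites
    return sites

loads = [(0, 0, 10), (1, 0, 10), (10, 10, 5), (11, 10, 5)]
print(location_allocation(loads, [(0, 1), (10, 9)]))  # -> [(0.5, 0.0), (10.5, 10.0)]
```

    Weighting the centroid by demand is what ties the siting decision to feeder losses, since heavily loaded points pull their substation closer.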

  17. Effective domain-dependent reuse in medical knowledge bases.

    PubMed

    Dojat, M; Pachet, F

    1995-12-01

    Knowledge reuse is now a critical issue for most developers of medical knowledge-based systems. As a rule, reuse is addressed from an ambitious, knowledge-engineering perspective that focuses on reusable general-purpose knowledge modules, concepts, and methods. However, such a general goal fails to take into account the specific aspects of medical practice. From the point of view of the knowledge engineer, whose goal is to capture the specific features and intricacies of a given domain, this approach addresses the wrong level of generality. In this paper, we adopt a more pragmatic viewpoint, introducing the less ambitious goal of "domain-dependent limited reuse" and suggesting effective means of achieving it in practice. In a knowledge representation framework combining objects and production rules, we propose three mechanisms emerging from the combination of object-oriented programming and rule-based programming. We show that these mechanisms contribute to achieving limited reuse and to introducing useful limited variations in medical expertise.

  18. SENEX: An Object-Oriented Biomedical Knowledge Base

    PubMed Central

    Ball, Sheldon; Wright, Lawrence; Miller, Perry

    1989-01-01

    We are currently developing an object-oriented knowledge base (SENEX) in the domain of neurodegeneration and loss of memory in aging. Initially, we are focusing on three sets of issues in the representation of biomedical information. First, we are seeking to extend the Medical Subject Headings (MeSH) nomenclature to include new classes of biomedical entities and relationships among those entities. Second, we are structuring biomedical information rather than categorizing text for bibliographic retrieval. Finally, we are exploring ways in which such information could be used in an interactive system created for education and for designing basic research experiments. This article describes the current behavior of SENEX, which is being developed using the Common Lisp Object System (CLOS). It also discusses various issues raised and plans for future development.

  19. Detection of infrastructure manipulation with knowledge-based video surveillance

    NASA Astrophysics Data System (ADS)

    Muench, David; Hilsenbeck, Barbara; Kieritz, Hilke; Becker, Stefan; Grosselfinger, Ann-Kristin; Huebner, Wolfgang; Arens, Michael

    2016-10-01

    We are living in a world dependent on sophisticated technical infrastructure. Malicious manipulation of such critical infrastructure poses an enormous threat to all its users. Thus, running a critical infrastructure requires special attention to logging planned maintenance and detecting suspicious events. Towards this end, we present a knowledge-based surveillance approach capable of logging visually observable events in such an environment. The video surveillance modules are based on appearance-based person detection, which is further used to modulate the outcome of generic processing steps such as change detection or skin detection. A relation between the expected scene behavior and the underlying basic video surveillance modules is established. It will be shown that this combination already provides sufficient expressiveness to describe various everyday situations in indoor video surveillance. The whole approach is qualitatively and quantitatively evaluated on a prototypical scenario in a server room.

  20. A knowledge-based agent prototype for Chinese address geocoding

    NASA Astrophysics Data System (ADS)

    Wei, Ran; Zhang, Xuehu; Ding, Linfang; Ma, Haoming; Li, Qi

    2009-10-01

    Chinese address geocoding is a difficult problem due to intrinsic complexities in Chinese address systems and a lack of standards in address assignment and usage. In order to improve existing address geocoding algorithms, a spatial knowledge-based agent prototype aimed at validating address geocoding results is built to determine spatial accuracy as well as matching confidence. A portion of a human's knowledge for judging the spatial closeness of two addresses is represented via first-order logic, and the corresponding algorithms are implemented in the Prolog language. Preliminary tests conducted using address matching results in the Beijing area showed that the prototype can successfully assess the spatial closeness between the matching address and the query address with 97% accuracy.
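    Closeness rules of the kind the abstract encodes in first-order logic can be sketched as simple predicates. The address structure, district names, and thresholds below are invented for illustration (the original work used Prolog over real Chinese addresses):

```python
# Hypothetical sketch: spatial-closeness rules as predicates over a toy
# structured address (district, street, building number).

def parse(addr):
    district, street, number = addr
    return {"district": district, "street": street, "number": int(number)}

def close(a, b, max_number_gap=50):
    a, b = parse(a), parse(b)
    # Rule 1: addresses in different districts are never close.
    if a["district"] != b["district"]:
        return False
    # Rule 2: same street and nearby building numbers imply closeness.
    if a["street"] == b["street"]:
        return abs(a["number"] - b["number"]) <= max_number_gap
    return False

query = ("Haidian", "Zhongguancun St", "12")
match = ("Haidian", "Zhongguancun St", "40")
print(close(query, match))  # -> True
```

    Stacking such rules, each with an associated confidence, is one way a validation agent can score how plausible a geocoded match is.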

  1. VIALACTEA knowledge base homogenizing access to Milky Way data

    NASA Astrophysics Data System (ADS)

    Molinaro, Marco; Butora, Robert; Bandieramonte, Marilena; Becciani, Ugo; Brescia, Massimo; Cavuoti, Stefano; Costa, Alessandro; Di Giorgio, Anna M.; Elia, Davide; Hajnal, Akos; Gabor, Hermann; Kacsuk, Peter; Liu, Scige J.; Molinari, Sergio; Riccio, Giuseppe; Schisano, Eugenio; Sciacca, Eva; Smareglia, Riccardo; Vitello, Fabio

    2016-08-01

    The VIALACTEA project has a work package dedicated to "Tools and Infrastructure" and, inside it, a task for the "Database and Virtual Observatory Infrastructure". This task aims at providing an infrastructure to store all the resources needed by the scientific work packages of the project. This infrastructure includes a combination of storage facilities, relational databases, and web services on top of them, and has taken, as a whole, the name VIALACTEA Knowledge Base (VLKB). This contribution illustrates the current status of the VLKB. It details the set of data resources put together, describes the database that allows data discovery through VO-inspired metadata maintenance, and illustrates the discovery, cutout, and access services built on top of the former two for users to exploit the data content.

  2. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple-space-based, object-oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed, depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model, as well as using more well-known message-passing mechanisms. An implementation of the model is presented, describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
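    The Linda-style tuple space mentioned in the abstract has three core operations: `out` (write a tuple), `in` (remove a matching tuple), and `rd` (read without removing). A minimal in-process sketch follows; the class, method names, and the example tuple are illustrative, not the paper's implementation, and a real agent system would back this with a database as the abstract describes.

```python
import threading

class TupleSpace:
    """Minimal Linda-style tuple space. None in a template is a wildcard."""
    def __init__(self):
        self._tuples = []
        self._cv = threading.Condition()

    def out(self, tup):
        """Write a tuple into the space and wake any blocked readers."""
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def _match(self, template, tup):
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def in_(self, template, remove=True):
        """Block until a tuple matches the template; remove it by default."""
        with self._cv:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        if remove:
                            self._tuples.remove(tup)
                        return tup
                self._cv.wait()

    def rd(self, template):
        """Read a matching tuple without removing it."""
        return self.in_(template, remove=False)

space = TupleSpace()
space.out(("status", "bus-3", "nominal"))
print(space.rd(("status", "bus-3", None)))  # -> ('status', 'bus-3', 'nominal')
```

    Because agents coordinate only through the shared space, they need no knowledge of each other's location, which is what makes the model a natural fit for distributed knowledge agents.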

  3. On the optimal design of molecular sensing interfaces with lipid bilayer assemblies - A knowledge based approach

    NASA Astrophysics Data System (ADS)

    Siontorou, Christina G.

    2012-12-01

    Biosensors are analytical devices that incorporate a biochemical recognition system (biological, biologically derived, or biomimetic: enzyme, antibody, DNA, receptor, etc.) in close contact with a physicochemical transducer (electrochemical, optical, piezoelectric, conductimetric, etc.) that converts the biochemical information produced by the specific biological recognition reaction (analyte-biomolecule binding) into a chemical or physical output signal related to the concentration of the analyte in the measured sample. The biosensing concept is based on natural chemoreception mechanisms, which operate over, within, or by means of a biological membrane, i.e., a structured lipid bilayer incorporating or attached to proteinaceous moieties that regulate the molecular recognition events which trigger ion flux changes (facilitated or passive) through the bilayer. Creating functional structures similar to natural signal transduction systems is far from trivial: the physicochemical transducer must be correlated compatibly and successfully with the lipid film self-assembled on its surface while embedding the reconstituted biological recognition system, and at the same time the basic conditions for measuring-device development (simplicity, easy handling, ease of fabrication) must be satisfied. The aim of the present work is to present a methodological framework for designing such molecular sensing interfaces, functioning within a knowledge-based system built on an ontological platform that supplies sub-system options, compatibilities, and optimization parameters.

  4. Autonomous Cryogenic Load Operations: Knowledge-Based Autonomous Test Engineer

    NASA Technical Reports Server (NTRS)

    Schrading, J. Nicolas

    2013-01-01

    The Knowledge-Based Autonomous Test Engineer (KATE) program has a long history at KSC. Now a part of the Autonomous Cryogenic Load Operations (ACLO) mission, this software system has been sporadically developed over the past 20 years. Originally designed to provide health and status monitoring for a simple water-based fluid system, it was proven to be a capable autonomous test engineer for determining sources of failure in the system. As part of a new goal to provide this same anomaly-detection capability for a complicated cryogenic fluid system, software engineers, physicists, interns and KATE experts are working to upgrade the software capabilities and graphical user interface. Much progress was made during this effort to improve KATE. A display of the entire cryogenic system's graph, with nodes for components and edges for their connections, was added to the KATE software. A searching functionality was added to the new graph display, so that users could easily center their screen on specific components. The GUI was also modified so that it displayed information relevant to the new project goals. In addition, work began on adding new pneumatic and electronic subsystems into the KATE knowledge base, so that it could provide health and status monitoring for those systems. Finally, many fixes for bugs, memory leaks, and memory errors were implemented and the system was moved into a state in which it could be presented to stakeholders. Overall, the KATE system was improved and necessary additional features were added so that a presentation of the program and its functionality in the next few months would be a success.

  5. EHR based Genetic Testing Knowledge Base (iGTKB) Development.

    PubMed

    Zhu, Qian; Liu, Hongfang; Chute, Christopher G; Ferber, Matthew

    2015-01-01

    The gap between a large growing number of genetic tests and a suboptimal clinical workflow of incorporating these tests into regular clinical practice poses barriers to effective reliance on advanced genetic technologies to improve quality of healthcare. A promising solution to fill this gap is to develop an intelligent genetic test recommendation system that not only can provide a comprehensive view of genetic tests as education resources, but also can recommend the most appropriate genetic tests to patients based on clinical evidence. In this study, we developed an EHR based Genetic Testing Knowledge Base for Individualized Medicine (iGTKB). We extracted genetic testing information and patient medical records from EHR systems at Mayo Clinic. Clinical features have been semi-automatically annotated from the clinical notes by applying a Natural Language Processing (NLP) tool, the MedTagger suite. To prioritize clinical features for each genetic test, we compared odds ratios across four population groups. Genetic tests, genetic disorders, and clinical features with their odds ratios have been applied to establish iGTKB, which is to be integrated into the Genetic Testing Ontology (GTO). Overall, there are five genetic tests operated with sample size greater than 100 in 2013 at Mayo Clinic. A total of 1,450 patients who were tested by one of the five genetic tests have been selected. We assembled 243 clinical features from the Human Phenotype Ontology (HPO) for these five genetic tests. There are 60 clinical features with at least one mention in clinical notes of patients taking the test. Twenty-eight clinical features with high odds ratio (greater than 1) have been selected as dominant features and deposited into iGTKB with their associated information about genetic tests and genetic disorders. In this study, we developed an EHR based genetic testing knowledge base, iGTKB. iGTKB will be integrated into the GTO by providing relevant clinical evidence, and ultimately to
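    The odds ratio used to prioritize clinical features is computed from a 2x2 contingency table. The sketch below shows the standard calculation with invented counts (the study's actual counts are not reproduced here); the 0.5 continuity correction is a common guard against zero cells, not necessarily the study's choice.

```python
def odds_ratio(a, b, c, d):
    """2x2 table: a = feature present & tested, b = feature absent & tested,
    c = feature present & untested, d = feature absent & untested.
    Applies the Haldane-Anscombe 0.5 correction when any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return (a * d) / (b * c)

# Hypothetical counts: 30 of 100 tested patients mention the feature,
# versus 10 of 200 untested patients.
print(odds_ratio(30, 70, 10, 190))
```

    Features with an odds ratio above 1 are over-represented among tested patients, which is the criterion the study used to select its 28 dominant features.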

  6. Knowledge-based architecture for airborne mine and minefield detection

    NASA Astrophysics Data System (ADS)

    Agarwal, Sanjeev; Menon, Deepak; Swonger, C. W.

    2004-09-01

    One of the primary lessons learned from airborne mid-wave infrared (MWIR) based mine and minefield detection research and development over the last few years has been the fact that no single algorithm or static detection architecture is able to meet mine and minefield detection performance specifications. This is true not only because of the highly varied environmental and operational conditions under which an airborne sensor is expected to perform but also due to the highly data dependent nature of sensors and algorithms employed for detection. Attempts to make the algorithms themselves more robust to varying operating conditions have only been partially successful. In this paper, we present a knowledge-based architecture to tackle this challenging problem. The detailed algorithm architecture is discussed for such a mine/minefield detection system, with a description of each functional block and data interface. This dynamic and knowledge-driven architecture will provide more robust mine and minefield detection for a highly multi-modal operating environment. The acquisition of the knowledge for this system is predominantly data driven, incorporating not only the analysis of historical airborne mine and minefield imagery data collection, but also other "all source data" that may be available such as terrain information and time of day. This "all source data" is extremely important and embodies causal information that drives the detection performance. This information is not being used by current detection architectures. Data analysis for knowledge acquisition will facilitate better understanding of the factors that affect the detection performance and will provide insight into areas for improvement for both sensors and algorithms. Important aspects of this knowledge-based architecture, its motivations and the potential gains from its implementation are discussed, and some preliminary results are presented.

  7. Comparison of clinical knowledge bases for summarization of electronic health records.

    PubMed

    McCoy, Allison B; Sittig, Dean F; Wright, Adam

    2013-01-01

    Automated summarization tools that create condition-specific displays may improve clinician efficiency. These tools require new kinds of knowledge that are difficult to obtain. We compared five problem-medication pair knowledge bases generated using four previously described knowledge base development approaches. The number of pairs in the resulting mapped knowledge bases varied widely due to differing mapping techniques from the source terminologies, ranging from 2,873 to 63,977,738 pairs. The number of overlapping pairs across knowledge bases was low, with one knowledge base having half of its pairs overlapping with another knowledge base, and most having less than a third overlapping. Further research is necessary to better evaluate the knowledge bases independently in additional settings, and to identify methods to integrate the knowledge bases.
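    The overlap measurement the abstract reports amounts to set intersection over (problem, medication) pairs. A minimal sketch with invented pairs (not drawn from the compared knowledge bases):

```python
# Hypothetical knowledge bases as sets of (problem, medication) pairs.
kb_a = {("hypertension", "lisinopril"), ("diabetes", "metformin"),
        ("asthma", "albuterol")}
kb_b = {("hypertension", "lisinopril"), ("diabetes", "insulin")}

shared = kb_a & kb_b
print(len(shared))              # pairs present in both bases -> 1
print(len(shared) / len(kb_a))  # fraction of kb_a's pairs also in kb_b
```

    Note the fraction is asymmetric: dividing by `len(kb_b)` instead gives the overlap from the other base's perspective, which matters when the bases differ in size by orders of magnitude, as they did here.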

  8. Knowledge-based potentials in bioinformatics: From a physicist’s viewpoint

    NASA Astrophysics Data System (ADS)

    Zheng, Wei-Mou

    2015-12-01

    Biological raw data are growing exponentially, providing a large amount of information on what life is. It is believed that potential functions and the rules governing protein behavior can be revealed from analysis of known native structures of proteins. Many knowledge-based potentials for proteins have been proposed. Contrary to most existing review articles, which mainly describe technical details and applications of various potential models, the main foci of the discussion here are the ideas and concepts involved in the construction of potentials, including the relation between free energy and energy, the additivity of potentials of mean force, and some key issues in potential construction. Sequence analysis is briefly viewed from an energetic viewpoint. Project supported in part by the National Natural Science Foundation of China (Grant Nos. 11175224 and 11121403).

  9. Structural and Network-based Methods for Knowledge-Based Systems

    DTIC Science & Technology

    2011-12-01

    A dissertation on structural and network-based methods for knowledge-based systems, submitted to the Graduate School of Northwestern University. From the abstract: In recent years, there has been

  10. The Knowledge-Based Economy and E-Learning: Critical Considerations for Workplace Democracy

    ERIC Educational Resources Information Center

    Remtulla, Karim A.

    2007-01-01

    The ideological shift by nation-states to "a knowledge-based economy" (also referred to as "knowledge-based society") is causing changes in the workplace. Brought about by the forces of globalisation and technological innovation, the ideologies of the "knowledge-based economy" are not limited to influencing the…

  11. Strategy Regulation: The Role of Intelligence, Metacognitive Attributions, and Knowledge Base.

    ERIC Educational Resources Information Center

    Alexander, Joyce M.; Schwanenflugel, Paula J.

    1994-01-01

    Studied influence of intelligence, metacognitive attributions, and knowledge base coherence in the regulation of the category-sorting strategy in first and second graders. Knowledge base was a powerful predictor of strategic-looking behavior; metacognitive attribution was most influential in low knowledge base conditions; and intelligence had…

  12. NetWeaver for EMDS user guide (version 1.1): a knowledge base development system.

    Treesearch

    Keith M. Reynolds

    1999-01-01

    The guide describes use of the NetWeaver knowledge base development system. Knowledge representation in NetWeaver is based on object-oriented fuzzy-logic networks that offer several significant advantages over the more traditional rule-based representation. Compared to rule-based knowledge bases, NetWeaver knowledge bases are easier to build, test, and maintain because...

  14. Knowledge-based engineering of a PLC controlled telescope

    NASA Astrophysics Data System (ADS)

    Pessemier, Wim; Raskin, Gert; Saey, Philippe; Van Winckel, Hans; Deconinck, Geert

    2016-08-01

    As the new control system of the Mercator Telescope is being finalized, we can review some technologies and design methodologies that are advantageous despite their relative uncommonness in astronomical instrumentation. A particularity of the Mercator Telescope is that it is controlled by a single high-end soft-PLC (Programmable Logic Controller). Using only off-the-shelf components, our distributed embedded system controls all subsystems of the telescope, such as the pneumatic primary mirror support, the hydrostatic bearing, the telescope axes, the dome, the safety system, and so on. We show how real-time application logic can be written conveniently in typical PLC languages (IEC 61131-3) and in C++ (to implement the pointing kernel) using the commercial TwinCAT 3 programming environment. This software processes the inputs and outputs of the distributed system in real time via an observatory-wide EtherCAT network, which is synchronized with high precision to an IEEE 1588 (PTP, Precision Time Protocol) time reference clock. Taking full advantage of the ability of soft-PLCs to run both real-time and non-real-time software, the same device also hosts the most important user interfaces (HMIs or Human Machine Interfaces) and communication servers (OPC UA for process data, FTP for XML configuration data, and VNC for remote control). To manage the complexity of the system and to streamline the development process, we show how most of the software, electronics, and systems engineering aspects of the control system have been modeled as a set of scripts written in a Domain Specific Language (DSL). When executed, these scripts populate a Knowledge Base (KB) which can be queried to retrieve specific information. By feeding the results of those queries to a template system, we were able to generate very detailed "browsable" web-based documentation about the system, but also PLC software code, Python client code, model verification reports, etc. The aim of this paper is to
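    The query-then-template pipeline the abstract describes can be miniaturized as follows. The knowledge-base entries, field names, and template text are all hypothetical stand-ins for the Mercator system's actual DSL-generated data:

```python
from string import Template

# Hypothetical miniature knowledge base, as DSL scripts might populate it.
knowledge_base = [
    {"name": "M1Support", "kind": "pneumatic", "bus": "EtherCAT"},
    {"name": "DomeDrive", "kind": "motor", "bus": "EtherCAT"},
]

def query(kb, **criteria):
    """Return entries whose fields match all given criteria."""
    return [e for e in kb if all(e.get(k) == v for k, v in criteria.items())]

# Feeding query results to a template generates documentation (or code).
doc_template = Template("Device $name ($kind) is attached to the $bus network.")
for entry in query(knowledge_base, bus="EtherCAT"):
    print(doc_template.substitute(entry))
```

    The same query results could just as well be fed into templates for PLC code or verification reports, which is the appeal of keeping a single queryable model of the system.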

  15. EHR based Genetic Testing Knowledge Base (iGTKB) Development

    PubMed Central

    2015-01-01

    Background The gap between a large growing number of genetic tests and a suboptimal clinical workflow of incorporating these tests into regular clinical practice poses barriers to effective reliance on advanced genetic technologies to improve quality of healthcare. A promising solution to fill this gap is to develop an intelligent genetic test recommendation system that not only can provide a comprehensive view of genetic tests as education resources, but also can recommend the most appropriate genetic tests to patients based on clinical evidence. In this study, we developed an EHR based Genetic Testing Knowledge Base for Individualized Medicine (iGTKB). Methods We extracted genetic testing information and patient medical records from EHR systems at Mayo Clinic. Clinical features have been semi-automatically annotated from the clinical notes by applying a Natural Language Processing (NLP) tool, the MedTagger suite. To prioritize clinical features for each genetic test, we compared odds ratios across four population groups. Genetic tests, genetic disorders, and clinical features with their odds ratios have been applied to establish iGTKB, which is to be integrated into the Genetic Testing Ontology (GTO). Results Overall, there are five genetic tests operated with sample size greater than 100 in 2013 at Mayo Clinic. A total of 1,450 patients who were tested by one of the five genetic tests have been selected. We assembled 243 clinical features from the Human Phenotype Ontology (HPO) for these five genetic tests. There are 60 clinical features with at least one mention in clinical notes of patients taking the test. Twenty-eight clinical features with high odds ratio (greater than 1) have been selected as dominant features and deposited into iGTKB with their associated information about genetic tests and genetic disorders. Conclusions In this study, we developed an EHR based genetic testing knowledge base, iGTKB. iGTKB will be integrated into the GTO by providing relevant

  16. LLNL Middle East, North Africa and Western Eurasia Knowledge Base

    SciTech Connect

    O'Boyle, J; Ruppert, S D; Hauk, T F; Dodge, D A; Ryall, F; Firpo, M A

    2001-07-12

    The Lawrence Livermore National Laboratory (LLNL) Ground-Based Nuclear Event Monitoring (GNEM) program has made significant progress populating a comprehensive Seismic Research Knowledge Base (SRKB) and deriving calibration parameters for the Middle East, North Africa and Western Eurasia (ME/NA/WE) regions. The LLNL SRKB provides not only a coherent framework in which to store and organize very large volumes of collected seismic waveforms, associated event parameter information, and spatial contextual data, but also provides an efficient data processing/research environment for deriving location and discrimination correction surfaces. The SRKB is a flexible and extensible framework consisting of a relational database (RDB), Geographical Information System (GIS), and associated product/data visualization and data management tools. This SRKB framework is designed to accommodate large volumes of data (almost 3 million waveforms from 57,000 events) in diverse formats from many sources (both LLNL derived research and integrated contractor products), in addition to maintaining detailed quality control and metadata. We have developed expanded look-up tables for critical station parameter information (including location and response) and an integrated and reconciled event catalog data set (including specification of preferred origin solutions and associated phase arrivals) for the PDE, CMT, ISC, REB and selected regional catalogs. Using the SRKB framework, we are combining traveltime observations, event characterization studies, and regional tectonic models to assemble a library of ground truth information and phenomenology (e.g. travel-time and amplitude) correction surfaces required for support of the ME/NA/WE regionalization program. We also use the SRKB to integrate data and research products from a variety of sources, such as contractors and universities, to merge and maintain quality control of the data sets. Corrections and parameters distilled from the LLNL SRKB

  17. Data/knowledge Base Processing Using Optical Associative Architectures

    NASA Astrophysics Data System (ADS)

    Akyokus, Selim

    Optical storage, communication, and processing technologies will have a great impact on the future data/knowledge base processing systems. The use of optics in data/knowledge base processing requires new design methods, architectures, and algorithms to apply the optical technology successfully. In this dissertation, three optical associative architectures are proposed. The basic data element in the proposed systems is a 2-D data page. Pages of database relations are stored in a page-oriented optical mass memory, retrieved, and processed in parallel. The first architecture uses a 1-D optical content addressable memory (OCAM) as the main functional unit. A 1-D OCAM is basically an optical vector-matrix multiplier which works as a CAM due to the spatial coding used for bit matching and masking. A 1-D OCAM can compare a search argument with a data page in parallel. The second architecture uses a 2-D OCAM as a main functional unit. A 2-D OCAM is an optical matrix-matrix multiplier which enables the comparison of a page of search arguments with a data page in parallel and in a single step. This architecture allows the execution of multiple selection and join operations very fast. The third architecture uses an optical perfect shuffle network for data routing and a processing array for performing parallel logic operations. A processing array based on symbolic substitution logic is introduced, and the use of a smart SLM as processing array is discussed. The symbolic substitution rules and algorithms for the implementation of search and bitonic sort operations are given for the proposed system. The implementation of relational database operations: selection, projection, update, deletion, sorting, duplication removal, aggregation functions, join, and set operations are described for the proposed systems, timing equations are developed for each operation, and their performances are analyzed. The proposed architectures take advantage of one-to-one mapping among the physical
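The 1-D OCAM idea, comparing a search argument against every row of a data page in a single vector-matrix multiply via spatial coding, can be simulated in software. The dual-rail encoding below is one plausible coding scheme for illustration, not necessarily the dissertation's exact one; masked "don't care" bits encode to zeros so they never contribute a mismatch:

```python
import numpy as np

def encode_data(page):
    """Dual-rail spatial coding: data bit b -> (b, 1-b)."""
    p = np.asarray(page)
    return np.stack([p, 1 - p], axis=-1).reshape(len(page), -1)

def encode_query(bits, mask):
    """Query bit q -> (1-q, q); masked ('don't care') bits -> (0, 0)."""
    out = []
    for q, m in zip(bits, mask):
        out.extend([0, 0] if not m else [1 - q, q])
    return np.array(out)

def cam_match(page, query, mask):
    """One vector-matrix multiply yields per-row mismatch counts;
    a count of zero means that row matches the (masked) argument."""
    mismatches = encode_data(page) @ encode_query(query, mask)
    return np.flatnonzero(mismatches == 0)

page = [[1, 0, 1, 1],
        [1, 1, 1, 0],
        [0, 0, 1, 1]]
print(cam_match(page, [1, 0, 1, 0], [1, 1, 1, 0]))  # last bit masked out
```

Each dot product counts XOR mismatches between a row and the search argument, which is exactly what the optical multiplier computes in parallel for all rows at once.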

  18. On-the-spot lung cancer differential diagnosis by label-free, molecular vibrational imaging and knowledge-based classification.

    PubMed

    Gao, Liang; Li, Fuhai; Thrall, Michael J; Yang, Yaliang; Xing, Jiong; Hammoudi, Ahmad A; Zhao, Hong; Massoud, Yehia; Cagle, Philip T; Fan, Yubo; Wong, Kelvin K; Wang, Zhiyong; Wong, Stephen T C

    2011-09-01

    We report the development and application of a knowledge-based coherent anti-Stokes Raman scattering (CARS) microscopy system for label-free imaging, pattern recognition, and classification of cells and tissue structures for differentiating lung cancer from non-neoplastic lung tissues and identifying lung cancer subtypes. A total of 1014 CARS images were acquired from 92 fresh frozen lung tissue samples. The established pathological workup and diagnostic cellular features were used as prior knowledge for the establishment of a knowledge-based CARS system using a machine learning approach. This system functions to separate normal, non-neoplastic, and subtypes of lung cancer tissues based on extracted quantitative features describing fibrils and cell morphology. The knowledge-based CARS system showed the ability to distinguish lung cancer from normal and non-neoplastic lung tissue with 91% sensitivity and 92% specificity. Small cell carcinomas were distinguished from non-small cell carcinomas with 100% sensitivity and specificity. As an adjunct to submitting tissue samples to routine pathology, our novel system recognizes the patterns of fibril and cell morphology, enabling medical practitioners to perform differential diagnosis of lung lesions in mere minutes. The demonstration of the strategy is also a necessary step toward in vivo point-of-care diagnosis of precancerous and cancerous lung lesions with a fiber-based CARS microendoscope.
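The 91% sensitivity and 92% specificity figures are the two standard ratios of a binary confusion matrix. A minimal sketch of how such figures are computed from per-sample labels (the labels here are hypothetical, not the paper's data):

```python
def sensitivity_specificity(y_true, y_pred, positive="cancer"):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)
    fn = sum(t == positive and p != positive for t, p in pairs)
    tn = sum(t != positive and p != positive for t, p in pairs)
    fp = sum(t != positive and p == positive for t, p in pairs)
    return tp / (tp + fn), tn / (tn + fp)

# 10 cancer and 10 normal samples; one miss in each direction.
y_true = ["cancer"] * 10 + ["normal"] * 10
y_pred = ["cancer"] * 9 + ["normal"] + ["normal"] * 9 + ["cancer"]
print(sensitivity_specificity(y_true, y_pred))  # (0.9, 0.9)
```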

  19. On-the-spot lung cancer differential diagnosis by label-free, molecular vibrational imaging and knowledge-based classification

    NASA Astrophysics Data System (ADS)

    Gao, Liang; Li, Fuhai; Thrall, Michael J.; Yang, Yaliang; Xing, Jiong; Hammoudi, Ahmad A.; Zhao, Hong; Massoud, Yehia; Cagle, Philip T.; Fan, Yubo; Wong, Kelvin K.; Wang, Zhiyong; Wong, Stephen T. C.

    2011-09-01

    We report the development and application of a knowledge-based coherent anti-Stokes Raman scattering (CARS) microscopy system for label-free imaging, pattern recognition, and classification of cells and tissue structures for differentiating lung cancer from non-neoplastic lung tissues and identifying lung cancer subtypes. A total of 1014 CARS images were acquired from 92 fresh frozen lung tissue samples. The established pathological workup and diagnostic cellular features were used as prior knowledge for the establishment of a knowledge-based CARS system using a machine learning approach. This system functions to separate normal, non-neoplastic, and subtypes of lung cancer tissues based on extracted quantitative features describing fibrils and cell morphology. The knowledge-based CARS system showed the ability to distinguish lung cancer from normal and non-neoplastic lung tissue with 91% sensitivity and 92% specificity. Small cell carcinomas were distinguished from non-small cell carcinomas with 100% sensitivity and specificity. As an adjunct to submitting tissue samples to routine pathology, our novel system recognizes the patterns of fibril and cell morphology, enabling medical practitioners to perform differential diagnosis of lung lesions in mere minutes. The demonstration of the strategy is also a necessary step toward in vivo point-of-care diagnosis of precancerous and cancerous lung lesions with a fiber-based CARS microendoscope.

  20. Selection of construction methods: a knowledge-based approach.

    PubMed

    Ferrada, Ximena; Serpell, Alfredo; Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing both the reliance on individual knowledge and the subjectivity of the decision-making process. The benefits provided by the system favor better performance of construction projects.

  1. Knowledge-based control of an adaptive interface

    NASA Technical Reports Server (NTRS)

    Lachman, Roy

    1989-01-01

    The analysis, development strategy, and preliminary design for an intelligent, adaptive interface are reported. The design philosophy couples knowledge-based system technology with standard human factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent interface prototype because text-processing software can be developed initially as fully functional but limited with a small set of commands. The program's complexity then can be increased incrementally. The rule base includes a model of the operator's behavior and three types of instructions to the underlying application software. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.
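The inference cycle described, matching rule antecedents against a data base of facts and sending consequents as commands, can be sketched as a tiny forward-chaining loop. The rules, facts, and command names are hypothetical, not the NASA system's actual rule base:

```python
# Toy forward-chaining engine: a rule fires when its antecedent set is a
# subset of the known facts; its consequent becomes a fact and a command.

def run_rules(rules, facts):
    commands, fired = [], True
    while fired:
        fired = False
        for antecedents, consequent in rules:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                commands.append(consequent)
                fired = True
    return commands

rules = [
    ({"novice_user", "edit_requested"}, "show_menu_prompts"),
    ({"show_menu_prompts"}, "cmd:open_editor_guided"),
]
facts = {"novice_user", "edit_requested"}
print(run_rules(rules, facts))  # chained firing of both rules
```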

  2. Compiling knowledge-based systems from KEE to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications - most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Described here are the first efforts to develop a system for compiling KBS developed in KEE to Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights into Ada as an artificial intelligence programming language, potential solutions to some of the engineering difficulties encountered in early work, and inspiration for future system development.

  3. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes

    PubMed Central

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J.; Wang, Liliang; Lin, Jianguo

    2016-01-01

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However, these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner, thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions. PMID:28060298

  4. Prospector II: Towards a knowledge base for mineral deposits

    USGS Publications Warehouse

    McCammon, R.B.

    1994-01-01

    What began in the mid-seventies as a research effort in designing an expert system to aid geologists in exploring for hidden mineral deposits has in the late eighties become a full-sized knowledge-based system to aid geologists in conducting regional mineral resource assessments. Prospector II, the successor to Prospector, is interactive-graphics oriented, flexible in its representation of mineral deposit models, and suited to regional mineral resource assessment. In Prospector II, the geologist enters the findings for an area, selects the deposit models or examples of mineral deposits for consideration, and the program compares the findings with the models or the examples selected, noting the similarities, differences, and missing information. The models or the examples selected are ranked according to scores that are based on the comparisons with the findings. Findings can be reassessed and the process repeated if necessary. The results provide the geologist with a rationale for identifying those mineral deposit types that the geology of an area permits. In future, Prospector II can assist in the creation of new models used in regional mineral resource assessment and in striving toward an ultimate classification of mineral deposits. ?? 1994 International Association for Mathematical Geology.
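The comparison step described, noting similarities, differences, and missing information between an area's findings and each deposit model, can be sketched as simple set arithmetic. The model attributes and findings below are hypothetical illustrations, not Prospector II's actual deposit models:

```python
# Prospector II-style model ranking (toy sketch): score each deposit
# model by matched attributes minus contradicted ones, and report what
# information is still missing for the area.

def score_model(model, findings):
    similar   = {k for k, v in model.items() if findings.get(k) == v}
    different = {k for k, v in model.items()
                 if k in findings and findings[k] != v}
    missing   = set(model) - set(findings)
    return {"score": len(similar) - len(different),
            "similar": similar, "different": different, "missing": missing}

def rank_models(models, findings):
    return sorted(models, reverse=True,
                  key=lambda m: score_model(models[m], findings)["score"])

models = {
    "porphyry_copper": {"host_rock": "intrusive", "alteration": "potassic",
                        "sulfides": True},
    "epithermal_gold": {"host_rock": "volcanic", "alteration": "argillic",
                        "sulfides": True},
}
findings = {"host_rock": "intrusive", "sulfides": True}
print(rank_models(models, findings))  # porphyry_copper ranks first
```

Reassessing findings and repeating, as the abstract describes, just means editing the `findings` dictionary and ranking again.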

  5. Knowledge-based graphical interfaces for presenting technical information

    NASA Technical Reports Server (NTRS)

    Feiner, Steven

    1988-01-01

    Designing effective presentations of technical information is extremely difficult and time-consuming. Moreover, the combination of increasing task complexity and declining job skills makes the need for high-quality technical presentations especially urgent. We believe that this need can ultimately be met through the development of knowledge-based graphical interfaces that can design and present technical information. Since much material is most naturally communicated through pictures, our work has stressed the importance of well-designed graphics, concentrating on generating pictures and laying out displays containing them. We describe APEX, a testbed picture generation system that creates sequences of pictures that depict the performance of simple actions in a world of 3D objects. Our system supports rules for determining automatically the objects to be shown in a picture, the style and level of detail with which they should be rendered, the method by which the action itself should be indicated, and the picture's camera specification. We then describe work on GRIDS, an experimental display layout system that addresses some of the problems in designing displays containing these pictures, determining the position and size of the material to be presented.

  6. FunSecKB: the Fungal Secretome KnowledgeBase

    PubMed Central

    Lum, Gengkon; Min, Xiang Jia

    2011-01-01

    The Fungal Secretome KnowledgeBase (FunSecKB) provides a resource of secreted fungal proteins, i.e. secretomes, identified from all available fungal protein data in the NCBI RefSeq database. The secreted proteins were identified using a well evaluated computational protocol which includes SignalP, WolfPsort and Phobius for signal peptide or subcellular location prediction, TMHMM for identifying membrane proteins, and PS-Scan for identifying endoplasmic reticulum (ER) target proteins. The entries were mapped to the UniProt database and any annotations of subcellular locations that were either manually curated or computationally predicted were included in FunSecKB. Using a web-based user interface, the database is searchable, browsable and downloadable by using NCBI’s RefSeq accession or gi number, UniProt accession number, keyword or by species. A BLAST utility was integrated to allow users to query the database by sequence similarity. A user submission tool was implemented to support community annotation of subcellular locations of fungal proteins. With the complete fungal data from RefSeq and associated web-based tools, FunSecKB will be a valuable resource for exploring the potential applications of fungal secreted proteins. Database URL: http://proteomics.ysu.edu/secretomes/fungi.php PMID:21300622
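The filtering protocol described, requiring signal-peptide and location evidence while excluding membrane and ER-retained proteins, can be sketched as a boolean pipeline. The per-tool outputs here are stand-in fields, not actual calls to SignalP, WolfPsort, Phobius, TMHMM, or PS-Scan, and the thresholds are illustrative:

```python
# Consensus secretome filter (sketch): a protein is called secreted only
# if signal-peptide and location evidence agree and no exclusion applies.

def is_secreted(p):
    has_signal  = p["signalp"] or p["phobius"]        # signal peptide evidence
    located_out = p["wolfpsort"] == "extracellular"   # predicted location
    membrane    = p["tmhmm_helices"] > 1              # likely membrane protein
    er_retained = p["ps_scan_er_motif"]               # ER retention signal
    return has_signal and located_out and not membrane and not er_retained

proteins = [
    {"id": "XP_001", "signalp": True, "phobius": True,
     "wolfpsort": "extracellular", "tmhmm_helices": 0,
     "ps_scan_er_motif": False},
    {"id": "XP_002", "signalp": True, "phobius": False,
     "wolfpsort": "extracellular", "tmhmm_helices": 7,  # multi-pass membrane
     "ps_scan_er_motif": False},
]
print([p["id"] for p in proteins if is_secreted(p)])  # only XP_001 passes
```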

  7. A knowledge based system for scientific data visualization

    NASA Technical Reports Server (NTRS)

    Senay, Hikmet; Ignatius, Eve

    1992-01-01

    A knowledge-based system, called visualization tool assistant (VISTA), which was developed to assist scientists in the design of scientific data visualization techniques, is described. The system derives its knowledge from several sources which provide information about data characteristics, visualization primitives, and effective visual perception. The design methodology employed by the system is based on a sequence of transformations which decomposes a data set into a set of data partitions, maps this set of partitions to visualization primitives, and combines these primitives into a composite visualization technique design. Although the primary function of the system is to generate an effective visualization technique design for a given data set by using principles of visual perception, the system also allows users to interactively modify the design, and renders the resulting image using a variety of rendering algorithms. The current version of the system primarily supports visualization techniques having applicability in earth and space sciences, although it may easily be extended to include other techniques useful in other disciplines such as computational fluid dynamics, finite-element analysis and medical imaging.

  8. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2000-12-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic contents, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand-sides of rules contain high-level and low-level features, while the right-hand-sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
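MYCIN's inexact reasoning combines certainty factors from independently fired rules with a well-known formula; a minimal implementation (the example evidence values are illustrative, not the paper's data):

```python
from functools import reduce

def combine_cf(a, b):
    """MYCIN's combining function for certainty factors in [-1, 1]:
    same-sign evidence reinforces, mixed-sign evidence is discounted."""
    if a >= 0 and b >= 0:
        return a + b * (1 - a)
    if a < 0 and b < 0:
        return a + b * (1 + a)
    return (a + b) / (1 - min(abs(a), abs(b)))

# CFs contributed by three fired rules toward the "news" class.
evidence = [0.6, 0.5, 0.3]
print(reduce(combine_cf, evidence))  # 0.86
```

Note the order-independence for same-sign evidence: 0.6 and 0.5 combine to 0.8, and adding 0.3 yields 0.86, the same total in any order.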

  9. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic contents, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand-sides of rules contain high-level and low-level features, while the right-hand-sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  10. Knowledge Based Cloud FE Simulation of Sheet Metal Forming Processes.

    PubMed

    Zhou, Du; Yuan, Xi; Gao, Haoxiang; Wang, Ailing; Liu, Jun; El Fakir, Omer; Politis, Denis J; Wang, Liliang; Lin, Jianguo

    2016-12-13

    The use of Finite Element (FE) simulation software to adequately predict the outcome of sheet metal forming processes is crucial to enhancing the efficiency and lowering the development time of such processes, whilst reducing costs involved in trial-and-error prototyping. Recent focus on the substitution of steel components with aluminum alloy alternatives in the automotive and aerospace sectors has increased the need to simulate the forming behavior of such alloys for ever more complex component geometries. However, these alloys, and in particular their high strength variants, exhibit limited formability at room temperature, and high temperature manufacturing technologies have been developed to form them. Consequently, advanced constitutive models are required to reflect the associated temperature and strain rate effects. Simulating such behavior is computationally very expensive using conventional FE simulation techniques. This paper presents a novel Knowledge Based Cloud FE (KBC-FE) simulation technique that combines advanced material and friction models with conventional FE simulations in an efficient manner, thus enhancing the capability of commercial simulation software packages. The application of these methods is demonstrated through two example case studies, namely: the prediction of a material's forming limit under hot stamping conditions, and the tool life prediction under multi-cycle loading conditions.

  11. Dynamic reasoning in a knowledge-based system

    NASA Technical Reports Server (NTRS)

    Rao, Anand S.; Foo, Norman Y.

    1988-01-01

    Any space-based system, whether it is a robot arm assembling parts in space or an onboard system monitoring the space station, has to react to changes which cannot be foreseen. As a result, apart from having domain-specific knowledge as in current expert systems, a space-based AI system should also have general principles of change. This paper presents a modal logic which can not only represent change but also reason with it. Three primitive operations, expansion, contraction, and revision, are introduced, and axioms which specify how the knowledge base should change when the external world changes are also specified. Accordingly, the notion of dynamic reasoning is introduced, which, unlike existing forms of reasoning, provides general principles of change. Dynamic reasoning is based on two main principles, namely minimize change and maximize coherence. A possible-world semantics which incorporates the above two principles is also discussed. The paper concludes by discussing how the dynamic reasoning system can be used to specify actions and hence form an integral part of an autonomous reasoning and planning system.
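The three primitive operations can be illustrated on a toy knowledge base of literals, where revision follows the standard Levi identity (contract the negation, then expand). This is a deliberately minimal sketch of the belief-change idea, not the paper's modal-logic formalization:

```python
# Toy belief change over a set of literals; "-p" denotes the negation of "p".

def neg(p):
    return p[1:] if p.startswith("-") else "-" + p

def expand(kb, p):
    """Expansion: add p; consistency is not required."""
    return kb | {p}

def contract(kb, p):
    """Contraction: give up p while changing nothing else (minimize change)."""
    return kb - {p}

def revise(kb, p):
    """Revision via the Levi identity: contract ~p, then expand with p."""
    return expand(contract(kb, neg(p)), p)

kb = {"arm_free", "-part_attached"}
kb = revise(kb, "part_attached")   # the world changed: part is now attached
print(sorted(kb))                  # ['arm_free', 'part_attached']
```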

  12. Knowledge-based planning model for courses of action generation

    SciTech Connect

    Collins, D.R.; Baucum, T.A.

    1986-04-07

    U.S. Army War College students of the Class of 1986 were solicited to participate in a Military Studies Program to develop a planning model for Courses of Action Generation. The Model was to be knowledge-based, i.e., drawn from the collective experience of officers with operational/planning backgrounds. The purpose of this document is to summarize the results of the four knowledge engineering sessions conducted. The detailed results are at enclosures 1-4, each enclosure acting as an agreed-upon record of that engineering session. Initial discussions between the CECLOM computer scientist and the AWC students concerned the potential for automation of the process of developing a scheme of maneuver. It was the opinion of the students that some aspects of the process would be extremely difficult to include in a computer program - the intent of the commander, for example. While neither student dismissed the potential of artificial intelligence on the battlefield, neither actively sought ways to incorporate it, either. What evolved, therefore, was an exposition by the students of what actually goes on in the minds of commanders and battlefield planners during an active operational environment.

  13. Verification of Legal Knowledge-base with Conflictive Concept

    NASA Astrophysics Data System (ADS)

    Hagiwara, Shingo; Tojo, Satoshi

    In this paper, we propose a verification methodology for large-scale legal knowledge. With a revision of legal code, we are forced to revise other affected code as well to keep the consistency of the law. Thus, our task is to revise the affected area properly and to investigate its adequacy. In this study, we extend the notion of inconsistency beyond ordinary logical inconsistency to include conceptual conflicts. We obtain these conflicts from taxonomy data, and thus we can avoid tedious manual declarations of opponent words. In the verification process, we adopt extended disjunctive logic programming (EDLP) to tolerate multiple consequences for a given set of antecedents. In addition, we employ abductive logic programming (ALP), regarding the situations to which the rules are applied as premises. Also, we restrict the legal knowledge base to an acyclic program to avoid circular definitions and to justify the relevance of verdicts. Therefore, detecting cyclic parts of legal knowledge is one of our objectives. The system is composed of two subsystems; we implement the preprocessor in Ruby to facilitate string manipulation, and the verifier in Prolog to exert the logical inference. Also, we employ XML format in the system to retain readability. In this study, we verify actual code of ordinances of Toyama prefecture, and show the experimental results.
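The acyclicity check mentioned above amounts to cycle detection in the dependency graph of rule heads and bodies; a standard depth-first sketch (the rule names are hypothetical, and the paper's actual verifier runs in Prolog, not Python):

```python
# Detect a cyclic definition in a rule dependency graph (head -> concepts
# its body depends on) via three-color depth-first search.

def find_cycle(deps):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in deps}

    def dfs(v, path):
        color[v] = GRAY
        for w in deps.get(v, ()):
            if color.get(w, WHITE) == GRAY:       # back edge: cycle found
                return path[path.index(w):] + [w]
            if color.get(w, WHITE) == WHITE and w in deps:
                found = dfs(w, path + [w])
                if found:
                    return found
        color[v] = BLACK
        return None

    for v in deps:
        if color[v] == WHITE:
            cycle = dfs(v, [v])
            if cycle:
                return cycle
    return None

rules = {"resident": ["taxpayer"], "taxpayer": ["resident"], "minor": ["age"]}
print(find_cycle(rules))  # ['resident', 'taxpayer', 'resident']
```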

  14. Incremental Knowledge Base Construction Using DeepDive

    PubMed Central

    Shin, Jaeho; Wu, Sen; Wang, Feiran; De Sa, Christopher; Zhang, Ce; Ré, Christopher

    2016-01-01

    Populating a database with unstructured information is a long-standing problem in industry and research that encompasses problems of extraction, cleaning, and integration. Recent names used for this problem include dealing with dark data and knowledge base construction (KBC). In this work, we describe DeepDive, a system that combines database and machine learning ideas to help develop KBC systems, and we present techniques to make the KBC process more efficient. We observe that the KBC process is iterative, and we develop techniques to incrementally produce inference results for KBC systems. We propose two methods for incremental inference, based respectively on sampling and variational techniques. We also study the tradeoff space of these methods and develop a simple rule-based optimizer. DeepDive includes all of these contributions, and we evaluate DeepDive on five KBC systems, showing that it can speed up KBC inference tasks by up to two orders of magnitude with negligible impact on quality. PMID:27144081
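One way to see why sampling-based incremental inference can pay off: samples drawn for the old factor graph can be importance-reweighted when a factor's weight changes, instead of re-running inference from scratch. The single-binary-variable model below is a toy illustration of that idea under stated assumptions, far simpler than DeepDive's actual machinery:

```python
import math
import random

def sample_world(weights, n, seed=0):
    """Draw worlds x in {0,1} with P(x=1) ∝ exp(sum of factor weights)."""
    rng = random.Random(seed)
    w1 = math.exp(sum(weights))
    p1 = w1 / (w1 + 1.0)
    return [1 if rng.random() < p1 else 0 for _ in range(n)]

def reweighted_marginal(samples, old_weights, new_weights):
    """Estimate P(x=1) under the new weights by importance-reweighting
    samples that were drawn under the old weights."""
    delta = sum(new_weights) - sum(old_weights)
    ws = [math.exp(delta * x) for x in samples]
    return sum(w * x for w, x in zip(ws, samples)) / sum(ws)

old = [1.0]
new = [1.0, 0.5]                      # an added rule contributes weight 0.5
samples = sample_world(old, 20000)    # drawn once, before the update
print(reweighted_marginal(samples, old, new))
```

The reweighted estimate approaches exp(1.5)/(exp(1.5)+1) ≈ 0.818, the exact marginal under the new weights, without drawing any new samples.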

  15. Knowledge-based topographic feature extraction in medical images

    NASA Astrophysics Data System (ADS)

    Qian, JianZhong; Khair, Mohammad M.

    1995-08-01

    Diagnostic medical imaging often contains variations of patient anatomies, camera mispositioning, or other imperfect imaging conditions. These variations contribute to uncertainty about the shapes and boundaries of objects in images. As a result, image features such as traditional edges may sometimes not be identified reliably and completely. We describe a knowledge-based system that is able to reason about such uncertainties and use partial and locally ambiguous information to infer the shapes and locations of objects in an image. The system uses directional topographic features (DTFS), such as ridges and valleys, labeled from the underlying intensity surface to correlate with the intrinsic anatomical information. By using domain-specific knowledge, the reasoning system can deduce significant anatomical landmarks based upon these DTFS, and can cope with uncertainties and fill in missing information. A succession of levels of representation for visual information and an active process of uncertain reasoning about this visual information are employed to reliably achieve the goal of image analysis. These landmarks can then be used in localization of anatomy of interest, image registration, or other clinical processing. The successful application of this system to a large set of planar cardiac images from nuclear medicine studies has demonstrated its efficiency and accuracy.
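Labeling ridges and valleys on an intensity surface can be sketched with a simple directional extremum test; real DTF labeling would use more directions and smoothing, so this is only a minimal illustration on a hand-made grid:

```python
# Toy directional topographic labeling: a pixel is a ridge point if it is
# a local maximum along the row or column direction, a valley point if it
# is a local minimum; border pixels are left unlabeled.

def label_dtf(img):
    rows, cols = len(img), len(img[0])
    labels = [["flat"] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            v = img[r][c]
            along_x = (img[r][c - 1], img[r][c + 1])
            along_y = (img[r - 1][c], img[r + 1][c])
            if v > max(along_x) or v > max(along_y):
                labels[r][c] = "ridge"
            elif v < min(along_x) or v < min(along_y):
                labels[r][c] = "valley"
    return labels

img = [[1, 1, 1, 1],
       [2, 5, 5, 2],   # a bright ridge line
       [1, 0, 1, 1],   # a dark valley pixel
       [2, 2, 2, 2]]
labels = label_dtf(img)
print(labels[1][1], labels[2][1])  # ridge valley
```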

  16. Embedded knowledge-based system for automatic target recognition

    NASA Astrophysics Data System (ADS)

    Aboutalib, A. O.

    1990-10-01

    The development of a reliable Automatic Target Recognition (ATR) system is considered a very critical and challenging problem. Existing ATR systems have inherent limitations in terms of recognition performance and the ability to learn and adapt. Artificial intelligence techniques have the potential to improve the performance of ATR systems. In this paper, we present a novel knowledge-engineering tool, termed the Automatic Reasoning Process (ARP), that can be used to automatically develop and maintain a Knowledge-Base (K-B) for ATR systems. In its learning mode, the ARP utilizes learning samples to automatically develop the ATR K-B, which consists of minimum-size sets of necessary and sufficient conditions for each target class. In its operational mode, the ARP infers the target class from sensor data using the ATR K-B. The ARP also has the capability to reason under uncertainty, and can support both statistical and model-based approaches for ATR development. The capabilities of the ARP are compared and contrasted with those of another knowledge-engineering tool, termed Automatic Rule Induction (ARI), which is based on maximizing the mutual information. The ARP has been implemented in LISP on a VAX-GPX workstation.
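The mutual-information criterion behind rule induction of the ARI kind picks the attribute that tells us most about the target class; a small sketch with made-up sensor features (the feature and class names are hypothetical):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """I(X;Y) in bits, estimated from paired samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

data = [  # (shape, size) -> class; illustrative, not real sensor data
    ("long", "big", "tank"), ("long", "big", "tank"),
    ("round", "big", "truck"), ("round", "small", "truck"),
]
classes = [row[2] for row in data]
for i, name in enumerate(["shape", "size"]):
    mi = mutual_information([row[i] for row in data], classes)
    print(name, round(mi, 3))  # shape (1.0 bit) beats size (~0.311 bits)
```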

  17. Knowledge-based computer systems for radiotherapy planning.

    PubMed

    Kalet, I J; Paluszynski, W

    1990-08-01

    Radiation therapy is one of the first areas of clinical medicine to utilize computers in support of routine clinical decision making. The role of the computer has evolved from simple dose calculations to elaborate interactive graphic three-dimensional simulations. These simulations can combine external irradiation from megavoltage photons, electrons, and particle beams with interstitial and intracavitary sources. With the flexibility and power of modern radiotherapy equipment, and computer programs able to simulate anything the machinery can do, we now face the challenge of utilizing this capability to design more effective radiation treatments. How can we manage the increased complexity of sophisticated treatment planning? A promising approach is to use artificial intelligence techniques to systematize our present knowledge about the design of treatment plans, and to provide a framework for developing new treatment strategies. Far from replacing the physician, physicist, or dosimetrist, artificial intelligence-based software tools can assist the treatment planning team in producing more powerful and effective treatment plans. Research in progress using knowledge-based (AI) programming in treatment planning has already indicated the usefulness of such concepts as rule-based reasoning, hierarchical organization of knowledge, and reasoning from prototypes. Problems to be solved include how to handle continuously varying parameters and how to evaluate plans in order to direct improvements.

  18. A knowledge based expert system for condition monitoring

    SciTech Connect

    Selkirk, C.G.; Roberge, P.R.; Fisher, G.F.; Yeung, K.K.

    1994-12-31

    Condition monitoring (CM) is the focus of many maintenance philosophies around the world today. In the Canadian Forces (CF), CM has played an important role in the maintenance of aircraft systems since the introduction of spectrometric oil analysis (SOAP) over twenty years ago. Other techniques in use in the CF today include vibration analysis (VA), ferrography, and filter debris analysis (FDA). To increase the value gained from these CM techniques, work is currently underway to incorporate expert systems into them. An expert system for FDA is being developed which will aid filter debris analysts in identifying wear debris and wear-level trends, and which will provide the analyst with reference examples in an attempt to standardize results. Once completed, this knowledge based expert system will provide a blueprint from which other CM expert systems can be created. Amalgamating these specific systems into a broad-based global system will provide the CM analyst with a tool able to correlate data and results from each of the techniques, thereby increasing the utility of each individual method of analysis. This paper will introduce FDA and then outline the development of the FDA expert system and future applications.

  19. Integrated Knowledge Based Expert System for Disease Diagnosis System

    NASA Astrophysics Data System (ADS)

    Arbaiy, Nureize; Sulaiman, Shafiza Eliza; Hassan, Norlida; Afizah Afip, Zehan

    2017-08-01

    The role and importance of healthcare systems in improving quality of life and social welfare in a society are well recognized. Attention should be given to raising awareness and implementing appropriate measures to improve health care. Therefore, a computer-based system was developed to serve as an alternative means for people to self-diagnose their health status from given symptoms. This strategy should be emphasized so that people can use the information correctly as a reference for a healthier life. Hence, a Web-based Community Center for Healthcare Diagnosis system was developed using an expert system approach. Expert system reasoning is employed to provide information about treatment and prevention of diseases based on the given symptoms. At present, three diseases are covered: arthritis, thalassemia and pneumococcal disease. Sets of rules and facts are managed in the knowledge-based system. Web-based technology is used as the platform for disseminating the information to users so that they can make appropriate use of it. This system will benefit people who wish to increase health awareness and seek expert knowledge on these diseases by performing self-diagnosis for early disease detection.

  20. The AI Bus architecture for distributed knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain

    1991-01-01

    The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components, running in heterogeneous, distributed real-world and testbed environments. The concepts and design of the AI Bus architecture are described, along with its current implementation status as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources, and to each other, without violating their security.

  1. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The described benefits provided by the system favor a better performance of construction projects. PMID:24453925

  2. SmartWeld: A knowledge-based approach to welding

    SciTech Connect

    Mitchiner, J.L.; Kleban, S.D.; Hess, B.V.; Mahin, K.W.; Messink, D.

    1996-07-01

    SmartWeld is a concurrent engineering system that integrates product design and processing decisions within an electronic desktop engineering environment. It is being developed to provide designers, process engineers, researchers and manufacturing technologists with transparent access to the right process information, process models, process experience and process experts, to realize "right the first time" manufacturing. Empirical understanding along with process models are synthesized within a knowledge-based system to identify robust fabrication procedures based on cost, schedule, and performance. Integration of process simulation tools with design tools enables the designer to assess a number of design and process options on the computer rather than on the manufacturing floor. Task models and generic process models are being embedded within user-friendly GUIs to more readily enable the customer to use the SmartWeld system and its software tool set without extensive training. The integrated system architecture under development provides interactive communications and shared application capabilities across a variety of workstation and PC-type platforms either locally or at remote sites.

  3. A knowledge-based system design/information tool

    NASA Technical Reports Server (NTRS)

    Allen, James G.; Sikora, Scott E.

    1990-01-01

    The objective of this effort was to develop a Knowledge Capture System (KCS) for the Integrated Test Facility (ITF) at the Dryden Flight Research Facility (DFRF). The DFRF is a NASA Ames Research Center (ARC) facility. This system was used to capture the design and implementation information for NASA's high angle-of-attack research vehicle (HARV), a modified F/A-18A. In particular, the KCS was used to capture specific characteristics of the design of the HARV fly-by-wire (FBW) flight control system (FCS). The KCS utilizes artificial intelligence (AI) knowledge-based system (KBS) technology. The KCS enables the user to capture the following characteristics of automated systems: the system design; the hardware (H/W) design and implementation; the software (S/W) design and implementation; and the utilities (electrical and hydraulic) design and implementation. A generic version of the KCS was developed which can be used to capture the design information for any automated system. The deliverable items for this project consist of the prototype generic KCS and an application, which captures selected design characteristics of the HARV FCS.

  4. Knowledge-based modelling of historical surfaces using lidar data

    NASA Astrophysics Data System (ADS)

    Höfler, Veit; Wessollek, Christine; Karrasch, Pierre

    2016-10-01

    In current archaeological studies, digital elevation models are used mainly in the form of shaded reliefs for the prospection of archaeological sites. Hesse (2010) provides a supporting software tool for the determination of local relief models during prospection using LiDAR scans; the search for relicts from WW2 is also a focus of his research. In James et al. (2006), determined contour lines were used to reconstruct the locations of archaeological artefacts such as buildings. The present study goes further and presents an innovative workflow for determining historical high-resolution terrain surfaces using recent high-resolution terrain models and sedimentological expert knowledge. Based on archaeological field studies (Franconian Saale near Bad Neustadt in Germany), the sedimentological analyses show that archaeologically interesting horizons and geomorphological expert knowledge, in combination with particle size analyses (Koehn, DIN ISO 11277), are useful components for reconstructing surfaces of the early Middle Ages. The paper also traces how additional information extracted from a recent digital terrain model can support the process of determining historical surfaces. Conceptually, this research is based on the methodology of geomorphometry and geostatistics. The basic idea is that the working procedure is driven by the two kinds of input data: one track handles the quantitative data and the other processes the qualitative data. The quantitative data are thus available for further processing and are later combined with the qualitative data to convert them to historical heights. In the final stage of the workflow, all gathered information is stored in a large data matrix for spatial interpolation using the geostatistical method of kriging. Besides the historical surface, the algorithm also provides a first estimate of the accuracy of the modelling. 
The presented workflow is characterized by a high

  5. Peyronie's disease: urologist's knowledge base and practice patterns.

    PubMed

    Sullivan, J; Moskovic, D; Nelson, C; Levine, L; Mulhall, J

    2015-03-01

    Peyronie's disease (PD) is a poorly understood clinical entity. We performed an in-depth analysis of the knowledge base and current practice patterns of urologists in the United States. A 46-question instrument was created by two experienced PD practitioners and emailed to current American Urological Association members nationally. Questions were either multiple-choice or used a visual analogue scale. Responses regarding treatment options were answered by ranking a list of utilized therapies by preference. Data were aggregated and mean values for each category compiled. Responses were received from 639 urologists (67% in private practice). Almost all (98%) reported seeing PD patients with regularity. Twenty-six percent believed PD prevalence is ≤1%, with a small fraction (5%) reporting prevalence as ≥10%. Only 3% referred patients to a subspecialist in PD. Twenty-six percent believed PD is a condition that does not warrant any treatment. The preferred initial management was with oral agents (81%). Of those who used intralesional injections as first line, verapamil was most commonly selected (67%). Seventy-nine percent perform surgery for PD, with 86% reporting the optimal timing as ≥12 months after onset of symptoms. Seventy percent perform penile plication, most commonly the Nesbit technique (54%); 61% perform implant surgery and 37% reported performing plaque incision/excision and grafting. Although PD is now a more recognized condition, there are still large variances in knowledge and management strategies. Prospective clinical studies are needed to elucidate standardized management guidelines and a more cohesive strategy to manage this common disease.

  6. Development of a Prototype Model-Form Uncertainty Knowledge Base

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.

    2016-01-01

    Uncertainties are generally classified as either aleatory or epistemic. Aleatory uncertainties are those attributed to random variation, either naturally or through manufacturing processes. Epistemic uncertainties are generally attributed to a lack of knowledge. One type of epistemic uncertainty is called model-form uncertainty. The term model-form means that, among the choices to be made during a design process, there are different forms of the analysis, each of which gives different results for the same configuration at the same flight conditions. Examples of model-form uncertainties include the grid density, grid type, and solver type used within a computational fluid dynamics code, or the choice of the number and type of model elements within a structural analysis. The objectives of this work are to identify and quantify a representative set of model-form uncertainties and to make this information available to designers through an interactive knowledge base (KB). The KB can then be used during probabilistic design sessions, so as to enable the possible reduction of uncertainties in the design process through resource investment. An extensive literature search has been conducted to identify and quantify typical model-form uncertainties present within aerospace design. An initial attempt has been made to assemble the results of this literature search into a searchable KB, usable in real time during probabilistic design sessions. A concept of operations and the basic structure of a model-form uncertainty KB are described. Key operations within the KB are illustrated. Current limitations in the KB, and possible workarounds, are explained.

  7. Tilting the lasso by knowledge-based post-processing.

    PubMed

    Tharmaratnam, Kukatharmini; Sperrin, Matthew; Jaki, Thomas; Reppe, Sjur; Frigessi, Arnoldo

    2016-09-02

    It is useful to incorporate biological knowledge on the role of genetic determinants in predicting an outcome. It is, however, not always feasible to fully elicit this information when the number of determinants is large. We present an approach to overcome this difficulty. First, using half of the available data, a shortlist of potentially interesting determinants is generated. Second, binary indications of biological importance are elicited for this much smaller number of determinants. Third, an analysis is carried out on this shortlist using the second half of the data. We show through simulations that, compared with the adaptive lasso, this approach leads to models containing more biologically relevant variables, while the prediction mean squared error (PMSE) is comparable or even reduced. We also apply our approach to bone mineral density data; again, the final models contain more biologically relevant variables and have reduced PMSEs. Our method leads to comparable or improved predictive performance, and to models with greater face validity and interpretability, with feasible incorporation of biological knowledge into predictive models.
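The three-step procedure described above can be sketched as follows. This is a minimal illustration, not the authors' code: the data are simulated, the "expert" importance flags are invented, and the tilted (weighted) penalty is implemented via the standard column-rescaling equivalence for a weighted lasso.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 2.0                          # five truly active "genes"
y = X @ beta + rng.normal(size=n)

# Step 1: generate a shortlist on the first half of the data.
X1, y1 = X[:n // 2], y[:n // 2]
X2, y2 = X[n // 2:], y[n // 2:]
shortlist = np.flatnonzero(Lasso(alpha=0.1).fit(X1, y1).coef_)

# Step 2: elicit binary biological importance for the shortlist only
# (simulated here: an "expert" flags the first five variables).
flagged = np.isin(shortlist, np.arange(5))

# Step 3: refit on the second half with the penalty tilted towards the
# flagged variables. A weighted lasso penalty sum_j w_j*|b_j| is
# equivalent to a plain lasso on rescaled columns X_j / w_j.
w = np.where(flagged, 0.5, 2.0)         # smaller weight = less shrinkage
fit = Lasso(alpha=0.1).fit(X2[:, shortlist] / w, y2)
coef = fit.coef_ / w                    # undo the column rescaling
```

Flagged variables are shrunk less than unflagged ones, so biologically endorsed determinants survive into the final model more easily.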

  8. Comparative development of knowledge-based bioeconomy in the European Union and Turkey.

    PubMed

    Celikkanat Ozan, Didem; Baran, Yusuf

    2014-09-01

    Biotechnology, defined as the technological application that uses biological systems and living organisms, or their derivatives, to create or modify diverse products or processes, is widely used for healthcare, agricultural and environmental applications. The continuity of industrial applications of biotechnology has enabled the rise and development of the bioeconomy concept. The bioeconomy, encompassing all applications of biotechnology, is defined as the translation of knowledge from the life sciences into new, sustainable, environmentally friendly and competitive products. Advanced research and eco-efficient processes within the bioeconomy promise a healthier and more sustainable life. The knowledge-based bioeconomy, with its economic, social and environmental potential, has already been brought onto the research agendas of European Union (EU) countries. The aim of this study is to summarize the development of the knowledge-based bioeconomy in EU countries and to evaluate Turkey's current situation in comparison. EU-funded biotechnology research projects under FP6 and FP7 and nationally-funded biotechnology projects under The Scientific and Technological Research Council of Turkey (TUBITAK) Academic Research Funding Program Directorate (ARDEB) and Technology and Innovation Funding Programs Directorate (TEYDEB) were examined. In the context of this study, the main research areas and subfields which have been funded, the budget spent, the number of projects funded since 2003 both nationally and EU-wide, and the gaps and overlapping topics were analyzed. In consideration of the results, detailed suggestions for Turkey are proposed. The results are expected to serve as a roadmap for coordinating the stakeholders of the bioeconomy and integrating Turkish Research Areas into the European Research Area.

  9. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks

    PubMed Central

    De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural network model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch to achieve desired levels of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in the neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporating prior knowledge can be used to predict the response in the extrapolation region beyond the training patterns. For validation, a prototype was also fabricated and its performance parameters measured. Very good agreement is attained between measured, simulated, and predicted results. PMID:27382616
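One common way to build such prior-knowledge neural models is the source-difference method, in which the network learns only the residual between a coarse analytic model and the data, so far fewer training patterns are needed. Everything in this sketch is an invented stand-in (the 1/L "coarse model", the simulated response, and the network size), not the paper's actual antenna model:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical coarse prior model (illustration only): assume the
# response varies roughly as 1/L with a design parameter L.
def prior_model(L):
    return 1.0 / L

rng = np.random.default_rng(1)
L = rng.uniform(1.0, 2.0, size=(200, 1))
f = 1.0 / L[:, 0] + 0.1 * np.sin(3.0 * L[:, 0])   # simulated "measured" data

# Source-difference method: the network only has to learn the small
# residual between the coarse model and the data.
resid = f - prior_model(L[:, 0])
net = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(L, resid)

def predict(Lq):
    """Knowledge-based prediction: coarse model + learned correction."""
    return prior_model(Lq[:, 0]) + net.predict(Lq)
```

Because the prior model carries the gross trend, the learned correction stays small and the combined predictor degrades gracefully outside the training range.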

  10. Creating illusions of knowledge: learning errors that contradict prior knowledge.

    PubMed

    Fazio, Lisa K; Barber, Sarah J; Rajaram, Suparna; Ornstein, Peter A; Marsh, Elizabeth J

    2013-02-01

    Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks before they read stories that contained errors (e.g., "Franklin invented the light bulb"). On a later general-knowledge test, participants reproduced story errors despite previously answering the questions correctly. This misinformation effect was found even for questions that were answered correctly on the initial test with the highest level of confidence. Furthermore, prior knowledge offered no protection against errors entering the knowledge base; the misinformation effect was equivalent for previously known and unknown facts. Errors can enter the knowledge base even when learners have the knowledge necessary to catch the errors.

  11. Knowledge-based image bandwidth compression and enhancement

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Tescher, Andrew G.

    1987-01-01

    Techniques for incorporating a priori knowledge in the digital coding and bandwidth compression of image data are described and demonstrated. An algorithm for identifying and highlighting thin lines and point objects prior to coding is presented, and the precoding enhancement of a slightly smoothed version of the image is shown to be more effective than enhancement of the original image. Also considered are readjustment of the local distortion parameter and variable-block-size coding. The line-segment criteria employed in the classification are listed in a table, and sample images demonstrating the effectiveness of the enhancement techniques are presented.

  12. A national knowledge-based crop recognition in Mediterranean environment

    NASA Astrophysics Data System (ADS)

    Cohen, Yafit; Shoshany, Maxim

    2002-08-01

    Population growth, urban expansion, land degradation, civil strife and war may place plant natural resources for food and agriculture at risk. Crop and yield monitoring provides basic information necessary for wise management of these resources. Satellite remote sensing techniques have proven to be cost-effective for widespread agricultural lands in Africa, America, Europe and Australia. However, they have had limited success in Mediterranean regions, which are characterized by a high rate of spatio-temporal ecological heterogeneity and high fragmentation of farming lands. An integrative knowledge-based approach is needed for this purpose, one that combines imagery and geographical data within the framework of an intelligent recognition system. This paper describes the development of such a crop recognition methodology and its application to an area that comprises approximately 40% of the cropland in Israel. This area contains eight crop types that represent 70% of Israeli agricultural production. Multi-date Landsat TM images representing seasonal vegetation cover variations were converted to normalized difference vegetation index (NDVI) layers. Field boundaries were delineated by merging Landsat data with SPOT-panchromatic images. Crop recognition was then achieved in two phases: by clustering multi-temporal NDVI layers using unsupervised classification, and then applying 'split-and-merge' rules to these clusters. These rules were formalized through comprehensive learning of relationships between crop types, imagery properties (spectral and NDVI) and auxiliary data including agricultural knowledge, precipitation and soil types. Assessment of the recognition results using ground data from the Israeli Agriculture Ministry indicated an average recognition accuracy exceeding 85%, which accounts for both omission and commission errors. The two-phase strategy implemented in this study is apparently successful for heterogeneous regions. This is due to the fact that it allows
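The first, unsupervised phase can be sketched as follows. The reflectance values are synthetic stand-ins for the Landsat TM red and near-infrared bands, and the cluster count is an arbitrary choice for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Synthetic stand-in for three acquisition dates over 100 pixels:
# red and near-infrared reflectances (values illustrative only).
red = rng.uniform(0.05, 0.30, size=(3, 100))
nir = rng.uniform(0.30, 0.60, size=(3, 100))

# One NDVI layer per date: NDVI = (NIR - red) / (NIR + red).
ndvi = (nir - red) / (nir + red)          # shape (dates, pixels)

# Phase 1: unsupervised clustering of the multi-temporal NDVI
# profiles; each pixel is a point in "NDVI over time" space. The
# paper's phase 2 then applies split-and-merge rules to the clusters.
labels = KMeans(n_clusters=4, n_init=10,
                random_state=0).fit_predict(ndvi.T)
```

Clustering the temporal NDVI profile, rather than any single date, is what lets crops with similar spectra but different growth calendars be separated.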

  13. Using the Gene Ontology to Enrich Biological Pathways

    SciTech Connect

    Sanfilippo, Antonio P.; Baddeley, Robert L.; Beagley, Nathaniel; McDermott, Jason E.; Riensche, Roderick M.; Taylor, Ronald C.; Gopalan, Banu

    2009-12-10

    Most current approaches to automatic pathway generation are based on a reverse-engineering approach in which pathway plausibility is derived solely from microarray gene expression data. These approaches tend to lack generality and offer no independent validation, as they are too reliant on the pathway observables that guide pathway generation. By contrast, alternative approaches that use prior biological knowledge to validate pathways inferred from gene expression data may err in the opposite direction, as the prior knowledge is usually not sufficiently tuned to the pathology of focus. In this paper, we present a novel pathway generation approach that combines insights from the reverse-engineering and knowledge-based approaches to increase the biological plausibility of automatically generated regulatory networks, and describe an application of this approach to transcriptional data from a mouse model of neuroprotection during stroke.

  14. Systems, methods and apparatus for verification of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Rash, James L. (Inventor); Erickson, John D. (Inventor); Gracinin, Denis (Inventor); Rouff, Christopher A. (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments, domain knowledge is translated into a knowledge-based system. In some embodiments, a formal specification is derived from rules of a knowledge-based system, the formal specification is analyzed, and flaws in the formal specification are used to identify and correct errors in the domain knowledge, from which a knowledge-based system is translated.

  15. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2.

    PubMed

    Thiele, Ines; Hyduke, Daniel R; Steeb, Benjamin; Fankam, Guy; Allen, Douglas K; Bazzani, Susanna; Charusanti, Pep; Chen, Feng-Chi; Fleming, Ronan M T; Hsiung, Chao A; De Keersmaecker, Sigrid C J; Liao, Yu-Chieh; Marchal, Kathleen; Mo, Monica L; Özdemir, Emre; Raghunathan, Anu; Reed, Jennifer L; Shin, Sook-il; Sigurbjörnsdóttir, Sara; Steinmann, Jonas; Sudarsan, Suresh; Swainston, Neil; Thijs, Inge M; Zengler, Karsten; Palsson, Bernhard O; Adkins, Joshua N; Bumann, Dirk

    2011-01-18

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium is a human pathogen, causes various diseases and its increasing antibiotic resistance poses a public health problem. Here, we describe a community-driven effort, in which more than 20 experts in S. Typhimurium biology and systems biology collaborated to reconcile and expand the S. Typhimurium BiGG knowledge-base. The consensus MR was obtained starting from two independently developed MRs for S. Typhimurium. Key results of this reconstruction jamboree include i) development and implementation of a community-based workflow for MR annotation and reconciliation; ii) incorporation of thermodynamic information; and iii) use of the consensus MR to identify potential multi-target drug therapy approaches. Taken together, with the growing number of parallel MRs a structured, community-driven approach will be necessary to maximize quality while increasing adoption of MRs in experimental design and interpretation.

  16. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    SciTech Connect

    Thiele, Ines; Hyduke, Daniel R.; Steeb, Benjamin; Fankam, Guy; Allen, Douglas K.; Bazzani, Susanna; Charusanti, Pep; Chen, Feng-Chi; Fleming, Ronan MT; Hsiung, Chao A.; De Keersmaecker, Sigrid CJ; Liao, Yu-Chieh; Marchal, Kathleen; Mo, Monica L.; Özdemir, Emre; Raghunathan, Anu; Reed, Jennifer L.; Shin, Sook-Il; Sigurbjörnsdóttir, Sara; Steinmann, Jonas; Sudarsan, Suresh; Swainston, Neil; Thijs, Inge M.; Zengler, Karsten; Palsson, Bernhard O.; Adkins, Joshua N.; Bumann, Dirk

    2011-01-01

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium is a human pathogen, causes various diseases and its increasing antibiotic resistance poses a public health problem. Here, we describe a community-driven effort, in which more than 20 experts in S. Typhimurium biology and systems biology collaborated to reconcile and expand the S. Typhimurium BiGG knowledge-base. The consensus MR was obtained starting from two independently developed MRs for S. Typhimurium. Key results of this reconstruction jamboree include i) development and implementation of a community-based workflow for MR annotation and reconciliation; ii) incorporation of thermodynamic information; and iii) use of the consensus MR to identify potential multi-target drug therapy approaches. Finally, taken together, with the growing number of parallel MRs a structured, community-driven approach will be necessary to maximize quality while increasing adoption of MRs in experimental design and interpretation.

  17. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    PubMed Central

    2011-01-01

    Background Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium is a human pathogen, causes various diseases and its increasing antibiotic resistance poses a public health problem. Results Here, we describe a community-driven effort, in which more than 20 experts in S. Typhimurium biology and systems biology collaborated to reconcile and expand the S. Typhimurium BiGG knowledge-base. The consensus MR was obtained starting from two independently developed MRs for S. Typhimurium. Key results of this reconstruction jamboree include i) development and implementation of a community-based workflow for MR annotation and reconciliation; ii) incorporation of thermodynamic information; and iii) use of the consensus MR to identify potential multi-target drug therapy approaches. Conclusion Taken together, with the growing number of parallel MRs a structured, community-driven approach will be necessary to maximize quality while increasing adoption of MRs in experimental design and interpretation. PMID:21244678

  18. Knowledge-based factor analysis of multidimensional nuclear medicine image sequences

    NASA Astrophysics Data System (ADS)

    Yap, Jeffrey T.; Chen, Chin-Tu; Cooper, Malcolm; Treffert, Jon D.

    1994-05-01

    We have developed a knowledge-based approach to analyzing dynamic nuclear medicine data sets using factor analysis. Prior knowledge is used as constraints to produce factor images and their associated time functions which are physically and physiologically realistic. These methods have been applied to both planar and tomographic image sequences acquired using various single-photon emitting and positron emitting radiotracers. Computer-simulated data, non-human primate studies, and human clinical studies have been used to develop and evaluate the methodology. The organ systems studied include the kidneys, heart, brain, liver, and bone. The factors generated represent various isolated aspects of physiologic function, such as tissue perfusion and clearance. In some clinical studies, the factors have indicated the potential to isolate diseased tissue from normally functioning tissue. In addition, the factor analysis of data acquired using newly developed radioligands has shown the ability to differentiate the specific binding of the radioligand to the targeted receptors from the non-specific binding. This suggests the potential use of factor analysis in the development and evaluation of radiolabeled compounds as well as in the investigation of specific receptor systems and their role in diagnosing disease.
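One common way to impose such physical constraints in factor analysis of dynamic studies is nonnegative matrix factorization, which forces both the factor images and their time functions to be nonnegative. The sketch below uses invented time-activity curves and is not the authors' algorithm:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(3)
# Synthetic dynamic study: 64 pixels x 20 time frames, mixed from two
# nonnegative "physiological" time-activity curves (invented shapes).
t = np.linspace(0.0, 1.0, 20)
curves = np.vstack([np.exp(-3.0 * t),          # clearance-like factor
                    1.0 - np.exp(-5.0 * t)])   # uptake-like factor
mixing = rng.uniform(0.0, 1.0, size=(64, 2))   # per-pixel factor weights
data = mixing @ curves + rng.uniform(0.0, 0.01, size=(64, 20))

# Prior physical knowledge enters as a constraint: factor images and
# time functions must both be nonnegative, which NMF enforces.
model = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
factor_images = model.fit_transform(data)      # (pixels, factors)
time_functions = model.components_             # (factors, frames)
```

Each row of `time_functions` plays the role of an isolated kinetic behavior (e.g., perfusion vs. clearance), and each column of `factor_images` shows where in the image that behavior dominates.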

  19. Constrained noninformative priors

    SciTech Connect

    Atwood, C.L.

    1994-10-01

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but diffuse enough to reflect great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum-entropy distribution in the transformed model. Examples are given.
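For a concrete instance of the construction, the Jeffreys prior is proportional to the square root of the Fisher information; for a binomial proportion this gives the Beta(1/2, 1/2) density, which is uniform under the variance-stabilizing (approximate location-parameter) transform. A numerical check with the midpoint rule:

```python
import math

def jeffreys_unnorm(p):
    # Jeffreys prior for a binomial proportion: sqrt of per-trial
    # Fisher information 1 / (p * (1 - p)).
    return 1.0 / math.sqrt(p * (1.0 - p))

# Midpoint-rule integration over (0, 1); the exact normalizing
# constant is B(1/2, 1/2) = pi, i.e. the prior is Beta(1/2, 1/2).
n = 200000
h = 1.0 / n
Z = sum(jeffreys_unnorm((i + 0.5) * h) for i in range(n)) * h
print(round(Z, 2))  # 3.14, approximately pi

# Under phi = 2 * arcsin(sqrt(p)) the binomial model is approximately
# a location family, and the Jeffreys prior becomes uniform in phi.
```

The constrained variant in the abstract then replaces the uniform density in the transformed scale with the maximum-entropy density satisfying the specified mean constraint.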

  20. ANAP: An Integrated Knowledge Base for Arabidopsis Protein Interaction Network Analysis

    PubMed Central

    Wang, Congmao; Marshall, Alex; Zhang, Dabing; Wilson, Zoe A.

    2012-01-01

    Protein interactions are fundamental to the molecular processes occurring within an organism and can be utilized in network biology to help organize, simplify, and understand biological complexity. Currently, there are more than 10 publicly available Arabidopsis (Arabidopsis thaliana) protein interaction databases. However, there are limitations with these databases, including different types of interaction evidence, a lack of defined standards for protein identifiers, differing levels of information, and, critically, a lack of integration between them. In this paper, we present an interactive bioinformatics Web tool, ANAP (Arabidopsis Network Analysis Pipeline), which serves to effectively integrate the different data sets and maximize access to available data. ANAP has been developed for Arabidopsis protein interaction integration and network-based study to facilitate functional protein network analysis. ANAP integrates 11 Arabidopsis protein interaction databases, comprising 201,699 unique protein interaction pairs, 15,208 identifiers (including 11,931 The Arabidopsis Information Resource Arabidopsis Genome Initiative codes), 89 interaction detection methods, 73 species that interact with Arabidopsis, and 6,161 references. ANAP can be used as a knowledge base for constructing protein interaction networks based on user input and supports both direct and indirect interaction analysis. It has an intuitive graphical interface allowing easy network visualization and provides extensive detailed evidence for each interaction. In addition, ANAP displays the gene and protein annotation in the generated interactive network with links to The Arabidopsis Information Resource, the AtGenExpress Visualization Tool, the Arabidopsis 1,001 Genomes GBrowse, the Protein Knowledgebase, the Kyoto Encyclopedia of Genes and Genomes, and the Ensembl Genome Browser to significantly aid functional network analysis. The tool is available open access at http://gmdd.shgmo.org/Computational-Biology

  1. Interest and Prior Knowledge.

    ERIC Educational Resources Information Center

    Tobias, Sigmund

    This paper selectively reviews research on the relationship between topic interest and prior knowledge, and discusses the optimal association between these variables. The paper points out that interest has a facilitating impact on learning, and at least part of this effect must be ascribed to prior knowledge. While the interest-knowledge…

  2. Guidelines for the verification and validation of expert system software and conventional software: Evaluation of knowledge base certification methods. Volume 4

    SciTech Connect

    Miller, L.A.; Hayes, J.E.; Mirsky, S.M.

    1995-03-01

    This report presents the results of the Knowledge Base Certification activity of the expert systems verification and validation (V&V) guideline development project which is jointly funded by the US Nuclear Regulatory Commission and the Electric Power Research Institute. The ultimate objective is the formulation of guidelines for the V&V of expert systems for use in nuclear power applications. This activity is concerned with the development and testing of various methods for assuring the quality of knowledge bases. The testing procedure used was that of behavioral experiment, the first known such evaluation of any type of V&V activity. The value of such experimentation is its capability to provide empirical evidence for -- or against -- the effectiveness of plausible methods in helping people find problems in knowledge bases. The three-day experiment included 20 participants from three nuclear utilities, the Nuclear Regulatory Commission's Technical Training Center, the University of Maryland, EG&G Idaho, and SAIC. The study used two real nuclear expert systems: a boiling water reactor emergency operating procedures tracking system and a pressurized water reactor safety assessment system. Ten participants were assigned to each of the expert systems. All participants were trained in and then used a sequence of four different V&V methods selected as being the best and most appropriate for study on the basis of prior evaluation activities. These methods either involved the analysis and tracing of requirements to elements in the knowledge base (requirements grouping and requirements tracing) or else involved direct inspection of the knowledge base for various kinds of errors. Half of the subjects within each system group used the best manual variant of the V&V methods (the control group), while the other half were supported by the results of applying real or simulated automated tools to the knowledge bases (the experimental group).

  3. GUIDON-WATCH: A Graphic Interface for Viewing a Knowledge-Based System. Technical Report #14.

    ERIC Educational Resources Information Center

    Richer, Mark H.; Clancey, William J.

    This paper describes GUIDON-WATCH, a graphic interface that uses multiple windows and a mouse to allow a student to browse a knowledge base and view reasoning processes during diagnostic problem solving. The GUIDON project at Stanford University is investigating how knowledge-based systems can provide the basis for teaching programs, and this…

  4. A Model of Knowledge Based Information Retrieval with Hierarchical Concept Graph.

    ERIC Educational Resources Information Center

    Kim, Young Whan; Kim, Jin H.

    1990-01-01

    Proposes a model of knowledge-based information retrieval (KBIR) that is based on a hierarchical concept graph (HCG) which shows relationships between index terms and constitutes a hierarchical thesaurus as a knowledge base. Conceptual distance between a query and an object is discussed and the use of Boolean operators is described. (25…
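Conceptual distance in a hierarchical concept graph is commonly taken as the number of edges on the shortest path between two index terms. A small illustrative sketch (the thesaurus terms below are invented, not from the paper):

```python
from collections import deque

# Hypothetical hierarchical thesaurus: narrower term -> broader term.
broader = {
    "neural network": "machine learning",
    "decision tree": "machine learning",
    "machine learning": "artificial intelligence",
    "knowledge base": "artificial intelligence",
}

def conceptual_distance(a, b):
    """Edge-count distance between two index terms in the hierarchy."""
    # Build an undirected adjacency list from the broader-term links.
    adj = {}
    for child, parent in broader.items():
        adj.setdefault(child, set()).add(parent)
        adj.setdefault(parent, set()).add(child)
    # Breadth-first search from a to b.
    seen, frontier = {a}, deque([(a, 0)])
    while frontier:
        node, d = frontier.popleft()
        if node == b:
            return d
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None  # terms not connected in the thesaurus

print(conceptual_distance("neural network", "knowledge base"))  # 3
```

A query term and an object term that share a nearby broader term thus score as close, which is what lets the HCG relax strict Boolean matching.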

  5. The Knowledge Base as an Extension of Distance Learning Reference Service

    ERIC Educational Resources Information Center

    Casey, Anne Marie

    2012-01-01

    This study explores knowledge bases as an extension of reference services for distance learners. Through a survey and follow-up interviews with distance learning librarians, this paper discusses their interest in creating and maintaining a knowledge base as a resource for reference services to distance learners. It also investigates their perceptions…

  6. The Role of Knowledge Base in the Memory Performance of Good and Poor Readers.

    ERIC Educational Resources Information Center

    Bjorklund, David F.; Bernholtz, Jean E.

    1986-01-01

    Compares typicality effects in recall between good and poor junior high readers to determine the influence of knowledge base upon memory. Results suggest that poor readers have a different knowledge base for familiar categories than good readers and that cognitive differences between them are related to differences in their semantic memories.…

  7. A comparison of LISP and MUMPS as implementation languages for knowledge-based systems.

    PubMed

    Curtis, A C

    1984-10-01

    Major components of knowledge-based systems are summarized, along with the programming language features generally useful in their implementation. LISP and MUMPS are briefly described and compared as vehicles for building knowledge-based systems. The paper concludes with suggestions for extensions to MUMPS that might increase its usefulness in artificial intelligence applications without affecting the essential nature of the language.

  8. End-user oriented language to develop knowledge-based expert systems

    SciTech Connect

    Ueno, H.

    1983-01-01

    A description is given of the COMEX (compact knowledge based expert system) expert system language for application-domain users who want to develop a knowledge-based expert system by themselves. The COMEX system was written in FORTRAN and works on a microcomputer. COMEX is being used in several application domains such as medicine, education, and industry. 7 references.

  10. Construction of dynamic stochastic simulation models using knowledge-based techniques

    NASA Technical Reports Server (NTRS)

    Williams, M. Douglas; Shiva, Sajjan G.

    1990-01-01

    Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).

  11. Comparison of LISP and MUMPS as implementation languages for knowledge-based systems

    SciTech Connect

    Curtis, A.C.

    1984-01-01

    Major components of knowledge-based systems are summarized, along with the programming language features generally useful in their implementation. LISP and MUMPS are briefly described and compared as vehicles for building knowledge-based systems. The paper concludes with suggestions for extensions to MUMPS which might increase its usefulness in artificial intelligence applications without affecting the essential nature of the language. 8 references.

  12. Knowledge-based control for robot self-localization

    NASA Technical Reports Server (NTRS)

    Bennett, Bonnie Kathleen Holte

    1993-01-01

    Autonomous robot systems are being proposed for a variety of missions including the Mars rover/sample return mission. Prior to any other mission objectives being met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called localization control and logic expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to high-level control aspects of the localization problem.

  13. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.

    1991-01-01

    The purpose is to develop algorithms and architectures for embedding artificial intelligence in aircraft guidance and control systems. With the approach adopted, AI-computing is used to create an outer guidance loop for driving the usual aircraft autopilot. That is, a symbolic processor monitors the operation and performance of the aircraft. Then, based on rules and other stored knowledge, commands are automatically formulated for driving the autopilot so as to accomplish desired flight operations. The focus is on developing a software system which can respond to linguistic instructions, input in a standard format, so as to formulate a sequence of simple commands to the autopilot. The instructions might be a fairly complex flight clearance, input either manually or by data-link. Emphasis is on a software system which responds much like a pilot would, employing not only precise computations, but, also, knowledge which is less precise, but more like common-sense. The approach is based on prior work to develop a generic 'shell' architecture for an AI-processor, which may be tailored to many applications by describing the application in appropriate processor data bases (libraries). Such descriptions include numerical models of the aircraft and flight control system, as well as symbolic (linguistic) descriptions of flight operations, rules, and tactics.

  14. Knowledge-Based Reinforcement Learning for Data Mining

    NASA Astrophysics Data System (ADS)

    Kudenko, Daniel; Grzes, Marek

    Experts have developed heuristics that help them in planning and scheduling resources in their workplace. However, this domain knowledge is often rough and incomplete. When the domain knowledge is used directly by an automated expert system, the solutions are often sub-optimal, due to the incompleteness of the knowledge, the uncertainty of environments, and the possibility of encountering unexpected situations. RL, on the other hand, can overcome the weaknesses of the heuristic domain knowledge and produce optimal solutions. In the talk we propose two techniques, which represent first steps in the area of knowledge-based RL (KBRL). The first technique [1] uses high-level STRIPS operator knowledge in reward shaping to focus the search for the optimal policy. Empirical results show that the plan-based reward shaping approach outperforms other RL techniques, including alternative manual and MDP-based reward shaping when it is used in its basic form. We showed that MDP-based reward shaping may fail, and successful experiments with STRIPS-based shaping suggest modifications which can overcome the problems encountered. The STRIPS-based method we propose allows expressing the same domain knowledge in a different way, and the domain expert can choose whether to define an MDP or a STRIPS planning task. We also evaluated the robustness of the proposed STRIPS-based technique to errors in the plan knowledge. In case STRIPS knowledge is not available, we propose a second technique [2] that shapes the reward with hierarchical tile coding. Where the Q-function is represented with low-level tile coding, a V-function with coarser tile coding can be learned in parallel and used to approximate the potential for ground states. In the context of data mining, our KBRL approaches can also be used for any data collection task where the acquisition of data may incur considerable cost. In addition, observing the data collection agent in specific scenarios may lead to new insights into optimal data
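Potential-based reward shaping, the mechanism underlying the plan-based technique above, adds F(s, s') = γΦ(s') − Φ(s) to the environment reward, which provably preserves the optimal policy. A toy sketch on a 1-D chain (the potential here is a simple stand-in for plan-derived knowledge, not the authors' STRIPS machinery):

```python
import random

# Potential-based reward shaping on a 1-D chain: states 0..5, goal at 5.
GAMMA = 0.95
GOAL = 5
phi = {s: s for s in range(GOAL + 1)}   # stand-in for plan-based potential

def shaped_reward(s, s2, env_reward):
    # F(s, s') = gamma * Phi(s') - Phi(s); preserves the optimal policy.
    return env_reward + GAMMA * phi[s2] - phi[s]

def step(s, a):                          # a in {-1, +1}
    s2 = min(max(s + a, 0), GOAL)
    return s2, (1.0 if s2 == GOAL else 0.0)

random.seed(0)
Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in (-1, 1)}
for _ in range(500):                     # tabular Q-learning episodes
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection.
        a = random.choice((-1, 1)) if random.random() < 0.2 else \
            max((-1, 1), key=lambda x: Q[(s, x)])
        s2, r = step(s, a)
        target = shaped_reward(s, s2, r) + GAMMA * max(Q[(s2, -1)], Q[(s2, 1)])
        Q[(s, a)] += 0.5 * (target - Q[(s, a)])
        s = s2

greedy = [max((-1, 1), key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(greedy)  # learned greedy action is +1 (toward the goal) in every state
```

The shaping term pays the agent immediately for moving up the potential gradient, so the sparse goal reward no longer has to propagate back unaided.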

  15. The ins and outs of eukaryotic viruses: Knowledge base and ontology of a viral infection

    PubMed Central

    Hulo, Chantal; Masson, Patrick; de Castro, Edouard; Auchincloss, Andrea H.; Foulger, Rebecca; Poux, Sylvain; Lomax, Jane; Bougueleret, Lydie; Xenarios, Ioannis

    2017-01-01

    Viruses are genetically diverse, infect a wide range of tissues and host cells and follow unique processes for replicating themselves. All these processes were investigated and indexed in the ViralZone knowledge base. To facilitate standardizing data, a simple ontology of viral life-cycle terms was developed to provide a common vocabulary for annotating data sets. New terminology was developed to address unique viral replication cycle processes, and existing terminology was modified and adapted. The virus life-cycle is classically described by schematic pictures. Using this ontology, it can be represented by a combination of successive terms: “entry”, “latency”, “transcription”, “replication” and “exit”. Each of these parts is broken down into discrete steps. For example, Zika virus “entry” is broken down into successive steps: “Attachment”, “Apoptotic mimicry”, “Viral endocytosis/macropinocytosis”, “Fusion with host endosomal membrane”, “Viral factory”. To demonstrate the utility of a standard ontology for virus biology, this work was completed by annotating virus data in the ViralZone, UniProtKB and Gene Ontology databases. PMID:28207819

  16. Knowledge-based discovery for designing CRISPR-CAS systems against invading mobilomes in thermophiles.

    PubMed

    Chellapandi, P; Ranjani, J

    2015-09-01

    Clustered regularly interspaced short palindromic repeats (CRISPRs) are direct features of prokaryotic genomes involved in resistance to bacterial viruses and phages. Herein, we have identified CRISPR loci together with CRISPR-associated (CAS) genes to reveal their immunity against genome invaders in thermophilic archaea and bacteria. The genomic survey in this study implied that the genomic distribution of CRISPR-CAS systems varied from strain to strain, determined by the degree of invading mobilomes. Direct repeats were found to be similar to some extent in many thermophiles, but their spacers differed in each strain. Phylogenetic analyses of the CAS superfamily revealed that the genes cmr, csh, csx11, HD domain, and devR belong to subtypes of the cas gene family. Members of the cas gene family of thermophiles have functionally diverged within closely related genomes and may contribute to the development of several defense strategies. Nevertheless, genome dynamics, geological variation, and host defense mechanisms contributed to the sharing of their molecular functions across the thermophiles. A thermophilic archaeon, Thermococcus gammatolerans, and the thermophilic bacteria Petrotoga mobilis and Thermotoga lettingae showed superoperon-like clustering of cas genes, typically evolved for their defense pathways. A cmr operon was identified with a specific promoter in the thermophilic archaeon Caldivirga maquilingensis. Overall, we conclude that this knowledge-based genomic survey and phylogeny-based functional assignment suggest an approach for designing reliable genetic regulatory circuits, derived naturally from CRISPR-CAS systems and acquired defense pathways, in thermophiles for future synthetic biology.

  17. Computing gene expression data with a knowledge-based gene clustering approach.

    PubMed

    Rosa, Bruce A; Oh, Sookyung; Montgomery, Beronda L; Chen, Jin; Qin, Wensheng

    2010-01-01

    Computational analysis methods for gene expression data gathered in microarray experiments can be used to identify the functions of previously unstudied genes. While obtaining the expression data is not a difficult task, interpreting and extracting the information from the datasets is challenging. In this study, a knowledge-based approach which identifies and saves important functional genes before filtering based on variability and fold change differences was utilized to study light regulation. Two clustering methods were used to cluster the filtered datasets, and clusters containing a key light regulatory gene were located. The common genes to both of these clusters were identified, and the genes in the common cluster were ranked based on their coexpression to the key gene. This process was repeated for 11 key genes in 3 treatment combinations. The initial filtering method reduced the dataset size from 22,814 probes to an average of 1134 genes, and the resulting common cluster lists contained an average of only 14 genes. These common cluster lists scored higher gene enrichment scores than two individual clustering methods. In addition, the filtering method increased the proportion of light responsive genes in the dataset from 1.8% to 15.2%, and the cluster lists increased this proportion to 18.4%. The relatively short length of these common cluster lists compared to gene groups generated through typical clustering methods or coexpression networks narrows the search for novel functional genes while increasing the likelihood that they are biologically relevant.
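The filter-then-rank idea can be sketched as follows (gene names and expression values below are invented; the paper's actual pipeline uses full microarray datasets and two clustering methods):

```python
import math

# Toy sketch of the knowledge-based pipeline: keep genes passing a
# fold-change filter (the key gene is always kept), then rank the
# survivors by coexpression (Pearson r) with the key regulatory gene.
expr = {
    "HY5":    [1.0, 2.1, 3.9, 8.2],  # key light-regulatory gene (hypothetical)
    "GENE_A": [0.9, 2.0, 4.1, 8.0],  # tracks the key gene
    "GENE_B": [5.0, 5.1, 4.9, 5.0],  # flat -> removed by the filter
    "GENE_C": [8.1, 4.2, 2.0, 1.1],  # anti-correlated
}

def fold_change(vals):
    return max(vals) / min(vals)

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

KEY = "HY5"
kept = {g: v for g, v in expr.items()
        if g == KEY or fold_change(v) >= 2.0}        # variability filter
ranked = sorted((g for g in kept if g != KEY),
                key=lambda g: -abs(pearson(expr[KEY], expr[g])))
print(ranked[0])  # GENE_A: most strongly coexpressed with the key gene
```

Saving the key regulatory genes before filtering is what distinguishes the knowledge-based variant: a biologically important gene survives even if its own variability would not pass the cutoff.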

  18. Ab initio protein structure assembly using continuous structure fragments and optimized knowledge-based force field.

    PubMed

    Xu, Dong; Zhang, Yang

    2012-07-01

    Ab initio protein folding is one of the major unsolved problems in computational biology owing to the difficulties in force field design and conformational search. We developed a novel program, QUARK, for template-free protein structure prediction. Query sequences are first broken into fragments of 1-20 residues where multiple fragment structures are retrieved at each position from unrelated experimental structures. Full-length structure models are then assembled from fragments using replica-exchange Monte Carlo simulations, which are guided by a composite knowledge-based force field. A number of novel energy terms and Monte Carlo movements are introduced and their particular contributions to enhancing the efficiency of both the force field and the search engine are analyzed in detail. The QUARK prediction procedure is described and tested on the structure modeling of 145 nonhomologous proteins. Although no global templates are used and all fragments from experimental structures with template modeling score >0.5 are excluded, QUARK can successfully construct 3D models of correct folds in one-third of cases for short proteins of up to 100 residues. In the ninth community-wide Critical Assessment of protein Structure Prediction experiment, the QUARK server outperformed the second and third best servers by 18 and 47% based on the cumulative Z-score of global distance test-total scores in the FM category. Although ab initio protein folding remains a significant challenge, these data demonstrate new progress toward the solution of the most important problem in the field.
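The conformational search in such methods rests on Metropolis acceptance within each replica: downhill moves in the knowledge-based energy are always accepted, uphill moves with Boltzmann probability. A one-dimensional toy sketch (the quadratic "energy" is purely illustrative, not QUARK's force field):

```python
import math, random

# Generic Metropolis acceptance step of the kind used (inside replica
# exchange) to assemble models under a knowledge-based energy.
def metropolis(energy, x0, temperature, steps, rng):
    x, e = x0, energy(x0)
    for _ in range(steps):
        x_new = x + rng.gauss(0.0, 0.5)      # local "conformational" move
        e_new = energy(x_new)
        # Accept downhill moves always, uphill moves with Boltzmann prob.
        if e_new <= e or rng.random() < math.exp(-(e_new - e) / temperature):
            x, e = x_new, e_new
    return x, e

rng = random.Random(42)
x, e = metropolis(lambda x: (x - 2.0) ** 2, x0=-5.0, temperature=0.1,
                  steps=2000, rng=rng)
print(round(x, 1))  # settles near the energy minimum at x = 2.0
```

Replica exchange then runs several such chains at different temperatures and periodically swaps configurations, letting high-temperature replicas carry the search over barriers that would trap a single low-temperature chain.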

  19. Biotechnology as the engine for the Knowledge-Based Bio-Economy.

    PubMed

    Aguilar, Alfredo; Bochereau, Laurent; Matthiessen, Line

    2010-01-01

    The European Commission has defined the Knowledge-Based Bio-Economy (KBBE) as the process of transforming life science knowledge into new, sustainable, eco-efficient and competitive products. The term "Bio-Economy" encompasses all industries and economic sectors that produce, manage and otherwise exploit biological resources and related services. Over the last decades biotechnologies have led to innovations in many agricultural, industrial, medical sectors and societal activities. Biotechnology will continue to be a major contributor to the Bio-Economy, playing an essential role in support of economic growth, employment, energy supply and a new generation of bio-products, and to maintain the standard of living. The paper reviews some of the main biotechnology-related research activities at European level. Beyond the 7th Framework Program for Research and Technological Development (FP7), several initiatives have been launched to better integrate FP7 with European national research activities, promote public-private partnerships and create better market and regulatory environments for stimulating innovation.

  20. Soybean Knowledge Base (SoyKB): a Web Resource for Soybean Translational Genomics

    SciTech Connect

    Joshi, Trupti; Patil, Kapil; Fitzpatrick, Michael R.; Franklin, Levi D.; Yao, Qiuming; Cook, Jeffrey R.; Wang, Zhem; Libault, Marc; Brechenmacher, Laurent; Valliyodan, Babu; Wu, Xiaolei; Cheng, Jianlin; Stacey, Gary; Nguyen, Henry T.; Xu, Dong

    2012-01-17

    Background: Soybean Knowledge Base (SoyKB) is a comprehensive all-inclusive web resource for soybean translational genomics. SoyKB is designed to handle the management and integration of soybean genomics, transcriptomics, proteomics and metabolomics data along with annotation of gene function and biological pathway. It contains information on four entities, namely genes, microRNAs, metabolites and single nucleotide polymorphisms (SNPs). Methods: SoyKB has many useful tools such as Affymetrix probe ID search, gene family search, multiple gene/ metabolite search supporting co-expression analysis, and protein 3D structure viewer as well as download and upload capacity for experimental data and annotations. It has four tiers of registration, which control different levels of access to public and private data. It allows users of certain levels to share their expertise by adding comments to the data. It has a user-friendly web interface together with genome browser and pathway viewer, which display data in an intuitive manner to the soybean researchers, producers and consumers. Conclusions: SoyKB addresses the increasing need of the soybean research community to have a one-stop-shop functional and translational omics web resource for information retrieval and analysis in a user-friendly way. SoyKB can be publicly accessed at http://soykb.org/.

  1. DBD-Hunter: a knowledge-based method for the prediction of DNA-protein interactions.

    PubMed

    Gao, Mu; Skolnick, Jeffrey

    2008-07-01

    The structures of DNA-protein complexes have illuminated the diversity of DNA-protein binding mechanisms shown by different protein families. This lack of generality could pose a great challenge for predicting DNA-protein interactions. To address this issue, we have developed a knowledge-based method, DNA-binding Domain Hunter (DBD-Hunter), for identifying DNA-binding proteins and associated binding sites. The method combines structural comparison and the evaluation of a statistical potential, which we derive to describe interactions between DNA base pairs and protein residues. We demonstrate that DBD-Hunter is an accurate method for predicting DNA-binding function of proteins, and that DNA-binding protein residues can be reliably inferred from the corresponding templates if identified. In benchmark tests on approximately 4000 proteins, our method achieved an accuracy of 98% and a precision of 84%, which significantly outperforms three previous methods. We further validate the method on DNA-binding protein structures determined in DNA-free (apo) state. We show that the accuracy of our method is only slightly affected on apo-structures compared to the performance on holo-structures cocrystallized with DNA. Finally, we apply the method to approximately 1700 structural genomics targets and predict that 37 targets with previously unknown function are likely to be DNA-binding proteins. DBD-Hunter is freely available at http://cssb.biology.gatech.edu/skolnick/webservice/DBD-Hunter/.
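Statistical potentials of this kind are typically derived by inverse-Boltzmann statistics, scoring a residue-base contact by −ln(observed/expected). A minimal sketch with invented contact counts (not the DBD-Hunter training data):

```python
import math
from collections import Counter

# Inverse-Boltzmann sketch of a knowledge-based contact potential:
# score = -ln(observed / expected), with expected contact frequencies
# taken from the product of the marginal frequencies.
contacts = Counter({("ARG", "G"): 120, ("ARG", "A"): 40,
                    ("LEU", "G"): 30,  ("LEU", "A"): 50})
total = sum(contacts.values())
res_freq, base_freq = Counter(), Counter()
for (res, base), n in contacts.items():
    res_freq[res] += n
    base_freq[base] += n

def potential(res, base):
    observed = contacts[(res, base)] / total
    expected = (res_freq[res] / total) * (base_freq[base] / total)
    return -math.log(observed / expected)

# Negative (favorable) for the over-represented ARG-guanine contact:
print(potential("ARG", "G") < 0)  # True
```

Summing such terms over all residue-base contacts in a threaded complex gives the kind of score that, combined with structural comparison, separates true DNA-binding templates from decoys.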

  2. The ins and outs of eukaryotic viruses: Knowledge base and ontology of a viral infection.

    PubMed

    Hulo, Chantal; Masson, Patrick; de Castro, Edouard; Auchincloss, Andrea H; Foulger, Rebecca; Poux, Sylvain; Lomax, Jane; Bougueleret, Lydie; Xenarios, Ioannis; Le Mercier, Philippe

    2017-01-01

    Viruses are genetically diverse, infect a wide range of tissues and host cells and follow unique processes for replicating themselves. All these processes were investigated and indexed in the ViralZone knowledge base. To facilitate standardizing data, a simple ontology of viral life-cycle terms was developed to provide a common vocabulary for annotating data sets. New terminology was developed to address unique viral replication cycle processes, and existing terminology was modified and adapted. The virus life-cycle is classically described by schematic pictures. Using this ontology, it can be represented by a combination of successive terms: "entry", "latency", "transcription", "replication" and "exit". Each of these parts is broken down into discrete steps. For example, Zika virus "entry" is broken down into successive steps: "Attachment", "Apoptotic mimicry", "Viral endocytosis/macropinocytosis", "Fusion with host endosomal membrane", "Viral factory". To demonstrate the utility of a standard ontology for virus biology, this work was completed by annotating virus data in the ViralZone, UniProtKB and Gene Ontology databases.

  3. Data- and knowledge-based modeling of gene regulatory networks: an update

    PubMed Central

    Linde, Jörg; Schulze, Sylvie; Henkel, Sebastian G.; Guthke, Reinhard

    2015-01-01

    Gene regulatory network inference is a systems biology approach which predicts interactions between genes with the help of high-throughput data. In this review, we present current and updated network inference methods, focusing on novel techniques for data acquisition, network inference assessment, network inference for interacting species, and the integration of prior knowledge. Following the advance of Next-Generation Sequencing of cDNAs derived from RNA samples (RNA-Seq), we discuss its application to network inference in detail. Furthermore, we present progress toward large-scale or even full-genomic network inference as well as small-scale condensed network inference, and review advances in the evaluation of network inference methods by crowdsourcing. Finally, we reflect on the current availability of data and prior-knowledge sources and give an outlook for the inference of gene regulatory networks that reflect interacting species, in particular pathogen-host interactions. PMID:27047314

  4. Making priors a priority

    NASA Astrophysics Data System (ADS)

    Segall, Matthew; Chadwick, Andrew

    2010-12-01

    When we build a predictive model of a drug property we rigorously assess its predictive accuracy, but we are rarely able to address the most important question, "How useful will the model be in making a decision in a practical context?" To answer this requires an understanding of the prior probability distribution ("the prior") and hence prevalence of negative outcomes due to the property being assessed. In this perspective, we illustrate the importance of the prior to assess the utility of a model in different contexts: to select or eliminate compounds, to prioritise compounds for further investigation using more expensive screens, or to combine models for different properties to select compounds with a balance of properties. In all three contexts, a better understanding of the prior probabilities of adverse events due to key factors will improve our ability to make good decisions in drug discovery, finding higher quality molecules more efficiently.
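The dependence of a model's practical utility on the prior can be made concrete with Bayes' rule: for a fixed sensitivity and specificity (the 0.8/0.9 figures below are invented for illustration), the positive predictive value collapses as the prevalence of bad outcomes falls.

```python
# PPV = sens * prior / (sens * prior + (1 - spec) * (1 - prior))
def ppv(prior, sensitivity=0.8, specificity=0.9):
    true_pos = sensitivity * prior
    false_pos = (1.0 - specificity) * (1.0 - prior)
    return true_pos / (true_pos + false_pos)

# The same classifier is far less useful for eliminating compounds
# when the adverse event it flags is rare.
for prior in (0.5, 0.1, 0.01):
    print(f"prevalence {prior:.2f} -> PPV {ppv(prior):.2f}")
```

At 50% prevalence most flagged compounds really are bad; at 1% prevalence the vast majority of positives are false alarms, so the model is better used to prioritize follow-up screening than to discard compounds outright.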

  5. Prior Knowledge Assessment Guide

    DTIC Science & Technology

    2014-12-01

    facts automatically requires knowledge of the fact itself. You will have to determine what levels are important for your purposes. As an example...will see the cell positions for “array 1” automatically record in your function. GUIDE FOR DEVELOPING AND USING PRIOR KNOWLEDGE ASSESSMENTS TO...TAILOR TRAINING 6. Type a comma ( , ) and “array2” will automatically show in bold type.

  6. Constructing priors in synesthesia.

    PubMed

    van Leeuwen, Tessa M

    2014-01-01

    A new theoretical framework (PPSMC) applicable to synesthesia has been proposed, in which the discrepancy between the perceptual reality of (some) synesthetic concurrents and their subjective non-veridicality is explained. The PPSMC framework stresses the relevance of the phenomenology of synesthesia for synesthesia research, and beyond. When describing the emergence and persistence of synesthetic concurrents under PPSMC, it is proposed that precise, high-confidence priors are crucial in synesthesia. I discuss the construction of priors in synesthesia.

  7. GEDA: new knowledge base of gene expression in drug addiction.

    PubMed

    Suh, Young Ju; Yang, Moon Hee; Yoon, Suk Joon; Park, Jong Hoon

    2006-07-31

    Abuse of drugs can elicit compulsive drug-seeking behaviors upon repeated administration and ultimately leads to the phenomenon of addiction. We developed a procedure for the standardization of microarray gene expression data from rat brain in drug addiction and stored the data in a single integrated database system, focusing on more effective data processing and interpretation. Another characteristic of the present database is its systematic flexibility for statistical analysis and for linking with other databases. Basically, we adopt an intelligent SQL querying system as the foundation of our DB in order to set up an interactive module which can automatically read the raw gene expression data in the standardized format. We maximize the usability of this DB by helping users study significant gene expression and identify the biological function of genes through integrated, up-to-date gene information such as GO annotation and metabolic pathways. For collecting the latest information on genes selected from the database, we also set up a local BLAST search engine and a nonredundant sequence database updated from the NCBI server on a daily basis. We find that the present database is a useful query interface and data-mining tool, specifically for finding genes related to drug addiction. We apply this system to the identification and characterization of methamphetamine-induced gene expression in rat brain.

  8. Spanish strategy on bioeconomy: Towards a knowledge based sustainable innovation.

    PubMed

    Lainez, Manuel; González, José Manuel; Aguilar, Alfredo; Vela, Carmen

    2017-05-25

    Spain launched its own strategy on bioeconomy in January 2016, aiming to boost a bioeconomy based on the sustainable and efficient production and use of biological resources. It highlights global societal challenges related to agricultural and biotechnological sciences in Spain and the great dynamism of the private sectors involved, particularly the agri-food, biotech and biomass sectors. The targeted sectors are food, agriculture and forestry, conditioned by water availability. It also includes the production of industrial bioproducts and bioenergy obtained from the use and valorisation of wastes, residues and other non-conventional sources of biomass, in a circular economy. The strategy also puts a focus on rural and coastal development through several uses and services linked to ecosystems. The capacity to generate know-how in this area and the promotion of public-private collaboration are important pillars for enhancing existing value chains and creating new ones. The strategy is led by R&I and Agriculture, Food and Environment policy managers and is largely supported at the regional level too. The strategic objective is to maintain the bioeconomy as an essential part of the Spanish economy, contributing to economic growth by creating new jobs and fostering investments. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  9. CastorDB: a comprehensive knowledge base for Ricinus communis

    PubMed Central

    2011-01-01

    Background Ricinus communis is an industrially important non-edible oil seed crop, native to tropical and subtropical regions of the world. Although the R. communis genome was assembled as a 4X draft by JCVI and is predicted to contain 31,221 proteins, the function of most of the genes remains to be elucidated. A large amount of information on different aspects of the biology of R. communis is available, but most of the data are scattered and not easily accessible. Therefore a comprehensive resource on castor, CastorDB, is required to facilitate research on this important plant. Findings CastorDB is a specialized and comprehensive database for the oil seed plant R. communis, integrating information from several diverse resources. CastorDB contains information on gene and protein sequences, gene expression and gene ontology annotation of protein sequences obtained from a variety of repositories, as primary data. In addition, computational analysis was used to predict cellular localization, domains, pathways, protein-protein interactions, sumoylation sites and biochemical properties, and these have been included as derived data. The database has an intuitive user interface that prompts the user to explore the various information resources available on a given gene or protein. Conclusion CastorDB provides a user-friendly, comprehensive resource on castor with particular emphasis on its genome, transcriptome and proteome, and on protein domains, pathways, protein localization, presence of sumoylation sites, expression data and protein interaction partners. PMID:21914200

  10. Problem-Oriented Corporate Knowledge Base Models on the Case-Based Reasoning Approach Basis

    NASA Astrophysics Data System (ADS)

    Gluhih, I. N.; Akhmadulin, R. K.

    2017-07-01

    One of the pressing directions for enhancing the efficiency of production processes and enterprise management is the creation and use of corporate knowledge bases. The article suggests a concept of problem-oriented corporate knowledge bases (PO CKB), in which knowledge is arranged around possible problem situations and serves as a tool for making and implementing decisions in such situations. For knowledge representation in PO CKB, a case-based reasoning approach is proposed. Under this approach, the content of a case as a knowledge base component has been defined; based on a situation tree, a PO CKB knowledge model has been developed in which knowledge about typical situations as well as specific examples of situations and solutions are represented. A generalized structural chart for a problem-oriented corporate knowledge base and possible modes of its operation have been suggested. The obtained models allow creating and using corporate knowledge bases to support decision making and implementation, training, staff skill upgrading and analysis of the decisions taken. The universal interpretation of the terms “situation” and “solution” adopted in the work allows the suggested models to be used for developing problem-oriented corporate knowledge bases in different subject domains. It has been suggested to use the developed models for building corporate knowledge bases for enterprises that operate engineering systems and networks at large production facilities.

  11. Multisensor detection and tracking of tactical ballistic missiles using knowledge-based state estimation

    NASA Astrophysics Data System (ADS)

    Woods, Edward; Queeney, Tom

    1994-06-01

    Westinghouse has developed and demonstrated a system that performs multisensor detection and tracking of tactical ballistic missiles (TBM). Under a USAF High Gear Program, we developed knowledge-based techniques to discriminate TBM targets from ground clutter, air breathing targets, and false alarms. Upon track initiation the optimal estimate of the target's launch point, impact point and instantaneous position was computed by fusing returns from noncollocated multiple sensors. The system also distinguishes different missile types during the boost phase and forms multiple hypotheses to account for measurement and knowledge base uncertainties. This paper outlines the salient features of the knowledge-based processing of the multisensor data.

  12. Coordination between control and knowledge based systems for autonomous vehicle guidance

    SciTech Connect

    Harmon, S.Y.

    1983-01-01

    A technique for coordination between control and knowledge based components of an autonomous mobile robot guidance system is discussed. This technique models the interaction process as multiple message passing tasks. A protocol with which to structure the messages has been developed. This protocol builds upon an available transport layer. The synchronization between tasks for real time control and slower knowledge based tasks is achieved by having the knowledge based tasks always work in anticipation of events to come. The implementation of this technique in the form of an autonomous mobile ground robot is used for illustration. Various elements of this robot's hardware and software architecture are discussed.

  13. Automated knowledge acquisition for second generation knowledge base systems: A conceptual analysis and taxonomy

    SciTech Connect

    Williams, K.E.; Kotnour, T.

    1991-12-31

    In this paper, we present a conceptual analysis of knowledge-base development methodologies. The purpose of this research is to help overcome the high cost and lack of efficiency in developing knowledge base representations for artificial intelligence applications. To accomplish this purpose, we analyzed the available methodologies and developed a knowledge-base development methodology taxonomy. We review manual, machine-aided, and machine-learning methodologies. A set of developed characteristics allows description and comparison among the methodologies. We present the results of this conceptual analysis of methodologies and recommendations for development of more efficient and effective tools.

  14. Automated knowledge acquisition for second generation knowledge base systems: A conceptual analysis and taxonomy

    SciTech Connect

    Williams, K.E.; Kotnour, T.

    1991-01-01

    In this paper, we present a conceptual analysis of knowledge-base development methodologies. The purpose of this research is to help overcome the high cost and lack of efficiency in developing knowledge base representations for artificial intelligence applications. To accomplish this purpose, we analyzed the available methodologies and developed a knowledge-base development methodology taxonomy. We review manual, machine-aided, and machine-learning methodologies. A set of developed characteristics allows description and comparison among the methodologies. We present the results of this conceptual analysis of methodologies and recommendations for development of more efficient and effective tools.

  15. Evolutionary potentials: structure specific knowledge-based potentials exploiting the evolutionary record of sequence homologs.

    PubMed

    Panjkovich, Alejandro; Melo, Francisco; Marti-Renom, Marc A

    2008-04-08

    We introduce a new type of knowledge-based potentials for protein structure prediction, called 'evolutionary potentials', which are derived using a single experimental protein structure and all three-dimensional models of its homologous sequences. The new potentials have been benchmarked against other knowledge-based potentials, resulting in a significant increase in accuracy for model assessment. In contrast to standard knowledge-based potentials, we propose that evolutionary potentials capture key determinants of thermodynamic stability and specific sequence constraints required for fast folding.
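    Knowledge-based potentials of this kind are typically derived by inverse-Boltzmann statistics: an interaction observed more often than expected in the reference structures is assigned a favorable pseudo-energy. A minimal sketch of that conversion (function name and kT value are illustrative, not from the paper):

```python
import math

def statistical_potential(observed, expected, kT=0.582):
    """Inverse-Boltzmann pseudo-energy: E = -kT * ln(observed / expected).

    observed/expected are interaction counts from a structure database;
    kT here is an illustrative value in kcal/mol near room temperature.
    A negative energy means the interaction is seen more often than chance.
    """
    return -kT * math.log(observed / expected)
```

    The evolutionary-potential idea in the abstract changes where the counts come from (models of sequence homologs of one structure, rather than a diverse structure database), not this basic conversion.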

  16. A JAVA implementation of a medical knowledge base for decision support.

    PubMed

    Ambrosiadou, V; Goulis, D; Shankararaman, V; Shamtani, G

    1999-01-01

    Distributed decision support is a challenging issue, requiring the implementation of advanced computer science techniques together with development tools that offer ease of communication and efficient search and control performance. This paper presents a JAVA implementation of a knowledge base model called ARISTOTELES, which may be used to support the development of medical knowledge bases by clinicians in diverse specialised areas of interest. The advantages evident from applying such a cognitive model are ease of knowledge acquisition, modular construction of the knowledge base and greater acceptance by clinicians.

  17. A real-time multiprocessor system for knowledge-based target-tracking

    NASA Astrophysics Data System (ADS)

    Irwin, P. D. S.; Farson, S. A.; Wilkinson, A. J.

    1989-12-01

    A real-time processing architecture for implementation of knowledge-based algorithms employed in infrared-image interpretation is described. Three stages of image interpretation (image segmentation, feature extraction, and feature examination by a knowledge-based system) are outlined. Dedicated hardware for the image segmentation and feature extraction are covered, along with a multitransputer architecture for implementation of data-dependent processes. Emphasis is placed on implementation of the description, frame-hypothesis, and slot-filling algorithms. An optimal algorithm for scheduling various tasks involved in implementing the rule set of the knowledge-based system is presented.

  18. Knowledge-based systems: how will they affect manufacturing in the 80's

    SciTech Connect

    King, M.S.; Brooks, S.L.; Schaefer, R.M.

    1985-04-01

    Knowledge-based or ''expert'' systems have been in various stages of development and use for a long time in the academic world. Some of these systems have come out of the lab in recent years in the fields of medicine, geology, and computer system design. The use of knowledge-based systems in conjunction with manufacturing process planning and the emerging CAD/CAM/CAE technologies promises significant increases in engineering productivity. This paper's focus is on areas in manufacturing where knowledge-based systems could most benefit the engineer and industry. 13 refs., 3 figs.

  19. SU-F-BRA-13: Knowledge-Based Treatment Planning for Prostate LDR Brachytherapy Based On Principal Component Analysis

    SciTech Connect

    Roper, J; Bradshaw, B; Godette, K; Schreibmann, E; Chanyavanich, V

    2015-06-15

    Purpose: To create a knowledge-based algorithm for prostate LDR brachytherapy treatment planning that standardizes plan quality using seed arrangements tailored to individual physician preferences while being fast enough for real-time planning. Methods: A dataset of 130 prior cases was compiled for a physician with an active prostate seed implant practice. Ten cases were randomly selected to test the algorithm. Contours from the 120 library cases were registered to a common reference frame. Contour variations were characterized on a point by point basis using principal component analysis (PCA). A test case was converted to PCA vectors using the same process and then compared with each library case using a Mahalanobis distance to evaluate similarity. Rank order PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Computational time was recorded. Any subsequent modifications were recorded that required input from a treatment planner to achieve an acceptable plan. Results: The computational time required to register contours from a test case and evaluate PCA similarity across the library was approximately 10s. Five of the ten test cases did not require any seed additions, deletions, or moves to obtain an acceptable plan. The remaining five test cases required on average 4.2 seed modifications. The time to complete manual plan modifications was less than 30s in all cases. Conclusion: A knowledge-based treatment planning algorithm was developed for prostate LDR brachytherapy based on principal component analysis. Initial results suggest that this approach can be used to quickly create treatment plans that require few if any modifications by the treatment planner. In general, test case plans have seed arrangements which are very similar to prior cases, and thus are inherently tailored to physician preferences.
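    The matching step described above can be sketched as follows: project the registered contour points into a PCA space built from the library, then rank library cases by Mahalanobis distance to the test case. This is a hedged illustration; function and variable names are invented, and the paper's actual feature representation may differ:

```python
import numpy as np

def best_matched_case(library, test_case, n_components=5):
    """Return the index of the library case most similar to the test case.

    library:   (n_cases, n_features) registered contour-point coordinates
    test_case: (n_features,) registered contour points of the new patient
    """
    mean = library.mean(axis=0)
    X = library - mean
    # PCA via SVD of the centered library matrix; rows of Vt are principal axes
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    W = Vt[:n_components]
    scores = X @ W.T                       # library cases in PCA space
    test_score = (test_case - mean) @ W.T  # test case in the same space
    cov_inv = np.linalg.pinv(np.cov(scores, rowvar=False))
    # Mahalanobis distance from the test case to every library case
    dists = [float(np.sqrt((s - test_score) @ cov_inv @ (s - test_score)))
             for s in scores]
    return int(np.argmin(dists))
```

    The seed arrangement of the returned case would then serve as the starting plan, as in the abstract.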

  20. A knowledge-based approach to estimating the magnitude and spatial patterns of potential threats to soil biodiversity.

    PubMed

    Orgiazzi, Alberto; Panagos, Panos; Yigini, Yusuf; Dunbar, Martha B; Gardi, Ciro; Montanarella, Luca; Ballabio, Cristiano

    2016-03-01

    Because of the increasing pressures exerted on soil, below-ground life is under threat. Knowledge-based rankings of potential threats to different components of soil biodiversity were developed in order to assess the spatial distribution of threats on a European scale. A list of 13 potential threats to soil biodiversity was presented to experts with different backgrounds in order to assess their potential impact on three major components of soil biodiversity: soil microorganisms, fauna, and biological functions. This approach allowed us to obtain knowledge-based rankings of threats. These classifications formed the basis for the development of indices through an additive aggregation model that, along with ad-hoc proxies for each pressure, allowed us to preliminarily assess the spatial patterns of potential threats. Intensive exploitation was identified as the highest pressure. In contrast, the use of genetically modified organisms in agriculture was considered the threat with the least potential. The potential impact of climate change showed the highest uncertainty. Fourteen of the 27 considered countries have more than 40% of their soils with moderate-high to high potential risk for all three components of soil biodiversity. Arable soils are the most exposed to pressures. Soils within the boreal biogeographic region showed the lowest risk potential. The majority of soils at risk lie outside the boundaries of protected areas. The first maps of risks to three components of soil biodiversity based on current scientific knowledge were developed. Despite the intrinsic limits of knowledge-based assessments, a remarkable potential risk to soil biodiversity was observed. Guidelines to preliminarily identify and circumscribe soils potentially at risk are provided. This approach may be used in future research to assess threats at both local and global scales and to identify areas of possible risk and, subsequently, design appropriate strategies for monitoring and protection of soil biodiversity.

  1. A relational data-knowledge base system and its potential in developing a distributed data-knowledge system

    NASA Technical Reports Server (NTRS)

    Rahimian, Eric N.; Graves, Sara J.

    1988-01-01

    A new approach used in constructing a relational data knowledge base system is described. The relational database is well suited for distribution due to its property of allowing data fragmentation and fragmentation transparency. An example is formulated of a simple relational data knowledge base which may be generalized for use in developing a relational distributed data knowledge base system. The efficiency and ease of application of such a data knowledge base management system is briefly discussed. Also discussed are the potentials of the developed model for sharing the data knowledge base as well as the possible areas of difficulty in implementing the relational data knowledge base management system.

  2. A relational data-knowledge base system and its potential in developing a distributed data-knowledge system

    NASA Technical Reports Server (NTRS)

    Rahimian, Eric N.; Graves, Sara J.

    1988-01-01

    A new approach used in constructing a relational data knowledge base system is described. The relational database is well suited for distribution due to its property of allowing data fragmentation and fragmentation transparency. An example is formulated of a simple relational data knowledge base which may be generalized for use in developing a relational distributed data knowledge base system. The efficiency and ease of application of such a data knowledge base management system is briefly discussed. Also discussed are the potentials of the developed model for sharing the data knowledge base as well as the possible areas of difficulty in implementing the relational data knowledge base management system.

  3. Risk Management of New Microelectronics for NASA: Radiation Knowledge-base

    NASA Technical Reports Server (NTRS)

    LaBel, Kenneth A.

    2004-01-01

    Contents include the following: NASA missions - implications for reliability and radiation constraints; approach to insertion of new technologies; technology knowledge-base development; technology model/tool development and validation; summary comments.

  4. An Integrative Framework for Bayesian Variable Selection with Informative Priors for Identifying Genes and Pathways

    PubMed Central

    Ander, Bradley P.; Zhang, Xiaoshuai; Xue, Fuzhong; Sharp, Frank R.; Yang, Xiaowei

    2013-01-01

    The discovery of genetic or genomic markers plays a central role in the development of personalized medicine. A notable challenge exists when dealing with the high dimensionality of the data sets, as thousands of genes or millions of genetic variants are collected on a relatively small number of subjects. Traditional gene-wise selection methods using univariate analyses face difficulty incorporating correlational, structural, or functional relationships amongst the molecular measures. For microarray gene expression data, we first summarize solutions for dealing with ‘large p, small n’ problems, and then propose an integrative Bayesian variable selection (iBVS) framework for simultaneously identifying causal or marker genes and regulatory pathways. A novel partial least squares (PLS) g-prior for iBVS is developed to allow the incorporation of prior knowledge on gene-gene interactions or functional relationships. From the point of view of systems biology, iBVS enables users to directly target the joint effects of multiple genes and pathways in a hierarchical modeling diagram to predict disease status or phenotype. The estimated posterior selection probabilities offer probabilistic and biological interpretations. Both simulated data and a set of microarray data in predicting stroke status are used in validating the performance of iBVS in a probit model with binary outcomes. iBVS offers a general framework for effective discovery of various molecular biomarkers by combining data-based statistics and knowledge-based priors. Guidelines on making posterior inferences, determining Bayesian significance levels, and improving computational efficiencies are also discussed. PMID:23844055
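    As a hedged illustration of the g-prior family that iBVS builds on: under the classical Zellner g-prior for linear regression, with prior beta ~ N(0, g·sigma²·(XᵀX)⁻¹), the posterior mean is the least-squares estimate shrunk by the factor g/(g+1). A minimal sketch (this is the textbook form, not the paper's PLS variant):

```python
import numpy as np

def g_prior_posterior_mean(X, y, g=10.0):
    """Posterior mean of regression coefficients under a Zellner g-prior.

    With beta ~ N(0, g * sigma^2 * (X'X)^-1), the posterior mean is the
    OLS estimate scaled by g/(g+1): larger g means weaker shrinkage.
    """
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    return (g / (g + 1.0)) * beta_ols
```

    The single hyperparameter g thus controls how strongly the data override the prior, which is what makes g-priors convenient for encoding knowledge-based information in variable selection.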

  5. Structure for a Knowledge-Based System to Estimate Soviet Tactics in the Airland Battle.

    DTIC Science & Technology

    1988-03-01

    was developed as a knowledge-based system, which is a subset of artificial intelligence (AI) technology. Knowledge-based systems, including...and error adjustments using test cases (obtained from records of Soviet exercises). Dynamic. Low DSS Frame representations of time, frequent...6). 16. Harmon, Paul and David King. Expert Systems - Artificial Intelligence in Business. New York NY: Wiley Press, 1985. 17. Harrison, Major H

  6. A Different Approach to the Generation of Patient Management Problems from a Knowledge-Based System

    PubMed Central

    Barriga, Rosa Maria

    1988-01-01

    Several strategies are proposed to approach the generation of Patient Management Problems from a Knowledge Base and to avoid inconsistencies in the results. These strategies are based on a different Knowledge Base structure and on the use of case introductions that describe the patient attributes which are not disease-dependent. This methodology has proven effective in a recent pilot test and is on its way to implementation as part of an educational program at the CWRU School of Medicine.

  7. A study of knowledge-based systems for the Space Station

    NASA Technical Reports Server (NTRS)

    Friedland, Peter; Swietek, Gregg; Bullock, Bruce

    1989-01-01

    A rapid turnaround study on the potential uses of knowledge-based systems for Space Station Freedom was conducted from October 1987 through January 1988. Participants included both NASA personnel and experienced industrial knowledge engineers. Major results of the study included five recommended systems for the Baseline Configuration of the Space Station, an analysis of sensor hooks and scars, and a proposed plan for evolutionary growth of knowledge-based systems on the Space Station.

  8. Representing a Nursing Knowledge Base for a Decision Support System in Prolog

    PubMed Central

    Ozbolt, Judy G.; Swain, Mary Ann P.

    1988-01-01

    This paper describes the initial steps in the process of representing a nursing knowledge base in the logic programming language Prolog and provides examples of the work to date. This includes a description of the major knowledge domains and depictions of the relationships within and among them. Natural language text, graphical illustrations, and Prolog statements are used to show how the knowledge base is being represented.

  9. A NASA/RAE cooperation in the development of a real-time knowledge based autopilot

    NASA Technical Reports Server (NTRS)

    Daysh, Colin; Corbin, Malcolm; Butler, Geoff; Duke, Eugene L.; Belle, Steven D.; Brumbaugh, Randal W.

    1991-01-01

    As part of a US/UK cooperative aeronautical research program, a joint activity between NASA-Ames and the Royal Aerospace Establishment on Knowledge Based Systems (KBS) was established. This joint activity is concerned with tools and techniques for the implementation and validation of real-time KBS. The proposed next stage of the research is described, in which some of the problems of implementing and validating a Knowledge Based Autopilot (KBAP) for a generic high performance aircraft will be studied.

  10. Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge Based Image Analysis

    DTIC Science & Technology

    1989-08-01

    Automatic Line Network Extraction from Aerial Imagery of Urban Areas through Knowledge-Based Image Analysis. Final Technical Report, December... Keywords: pattern recognition, blackboard-oriented symbolic processing, knowledge-based image analysis, image understanding, aerial imagery, urban area.

  11. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  12. Ab Initio Protein Structure Assembly Using Continuous Structure Fragments and Optimized Knowledge-based Force Field

    PubMed Central

    Xu, Dong; Zhang, Yang

    2012-01-01

    Ab initio protein folding is one of the major unsolved problems in computational biology due to the difficulties in force field design and conformational search. We developed a novel program, QUARK, for template-free protein structure prediction. Query sequences are first broken into fragments of 1–20 residues where multiple fragment structures are retrieved at each position from unrelated experimental structures. Full-length structure models are then assembled from fragments using replica-exchange Monte Carlo simulations, which are guided by a composite knowledge-based force field. A number of novel energy terms and Monte Carlo movements are introduced, and their particular contributions to enhancing the efficiency of both the force field and the search engine are analyzed in detail. The QUARK prediction procedure is described and tested on the structure modeling of 145 non-homologous proteins. Although no global templates are used and all fragments from experimental structures with template modeling score (TM-score) >0.5 are excluded, QUARK can successfully construct 3D models of correct folds in 1/3 of cases for short proteins up to 100 residues. In the ninth community-wide Critical Assessment of protein Structure Prediction (CASP9) experiment, the QUARK server outperformed the second and third best servers by 18% and 47% based on the cumulative Z-score of global distance test-total (GDT-TS) scores in the free modeling (FM) category. Although ab initio protein folding remains a significant challenge, these data demonstrate new progress towards the solution of the most important problem in the field. PMID:22411565
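    The replica-exchange step that guides simulations like those described above can be sketched as the standard swap criterion between two temperature replicas; the function below is an illustrative implementation of that general rule, not QUARK's code:

```python
import math
import random

def accept_swap(beta_i, beta_j, energy_i, energy_j, rng=random.random):
    """Replica-exchange (parallel tempering) swap criterion.

    Two replicas at inverse temperatures beta_i, beta_j exchange
    conformations with probability min(1, exp(delta)), where
    delta = (beta_i - beta_j) * (energy_i - energy_j).
    """
    delta = (beta_i - beta_j) * (energy_i - energy_j)
    return delta >= 0 or rng() < math.exp(delta)
```

    Swaps that lower the combined weighted energy are always accepted; unfavorable swaps are accepted stochastically, which lets low-temperature replicas escape local minima via their high-temperature partners.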

  13. A knowledge based approach to matching human neurodegenerative disease and animal models

    PubMed Central

    Maynard, Sarah M.; Mungall, Christopher J.; Lewis, Suzanna E.; Imam, Fahim T.; Martone, Maryann E.

    2013-01-01

    Neurodegenerative diseases present a wide and complex range of biological and clinical features. Animal models are key to translational research, yet typically only exhibit a subset of disease features rather than being precise replicas of the disease. Consequently, connecting animal to human conditions using direct data-mining strategies has proven challenging, particularly for diseases of the nervous system, with its complicated anatomy and physiology. To address this challenge we have explored the use of ontologies to create formal descriptions of structural phenotypes across scales that are machine processable and amenable to logical inference. As proof of concept, we built a Neurodegenerative Disease Phenotype Ontology (NDPO) and an associated Phenotype Knowledge Base (PKB) using an entity-quality model that incorporates descriptions for both human disease phenotypes and those of animal models. Entities are drawn from community ontologies made available through the Neuroscience Information Framework (NIF) and qualities are drawn from the Phenotype and Trait Ontology (PATO). We generated ~1200 structured phenotype statements describing structural alterations at the subcellular, cellular and gross anatomical levels observed in 11 human neurodegenerative conditions and associated animal models. PhenoSim, an open source tool for comparing phenotypes, was used to issue a series of competency questions to compare individual phenotypes among organisms and to determine which animal models recapitulate phenotypic aspects of the human disease in aggregate. Overall, the system was able to use relationships within the ontology to bridge phenotypes across scales, returning non-trivial matches based on common subsumers that were meaningful to a neuroscientist with an advanced knowledge of neuroanatomy. The system can be used both to compare individual phenotypes and also phenotypes in aggregate. 
This proof of concept suggests that expressing complex phenotypes using formal

  14. Consistent Refinement of Submitted Models at CASP using a Knowledge-based Potential

    PubMed Central

    Chopra, Gaurav; Kalisman, Nir; Levitt, Michael

    2010-01-01

    Protein structure refinement is an important but unsolved problem; it must be solved if we are to predict biological function that is very sensitive to structural details. Specifically, Critical Assessment of Techniques for Protein Structure Prediction (CASP) shows that the accuracy of predictions in the comparative modeling category is often worse than that of the template on which the homology model is based. Here we describe a refinement protocol that is able to consistently refine submitted predictions for all categories at CASP7. The protocol uses direct energy minimization of the knowledge-based potential of mean force that is based on the interaction statistics of 167 atom types (Summa and Levitt, Proc Natl Acad Sci USA 2007; 104:3177–3182). Our protocol is thus computationally very efficient; it only takes a few minutes of CPU time to run typical protein models (300 residues). We observe an average structural improvement of 1% in GDT_TS, for predictions that have low and medium homology to known PDB structures (Global Distance Test score or GDT_TS between 50 and 80%). We also observe a marked improvement in the stereochemistry of the models. The level of improvement varies amongst the various participants at CASP, but we see large improvements (>10% increase in GDT_TS) even for models predicted by the best performing groups at CASP7. In addition, our protocol consistently improved the best predicted models in the refinement category at CASP7 and CASP8. These improvements in structure and stereochemistry prove the usefulness of our computationally inexpensive, powerful and automatic refinement protocol. PMID:20589633
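    The refinement protocol above minimizes a knowledge-based potential of mean force derived from interaction statistics. A minimal sketch of the underlying idea follows; the atom-type pairs, distance bins, and counts are invented for illustration and are not taken from the 167-atom-type potential cited in the abstract.

```python
import math

# Hypothetical observed and reference pair counts per (type, type, distance-bin).
observed = {("C", "N", 3.5): 120, ("C", "N", 5.0): 300, ("C", "N", 7.0): 180}
reference = {("C", "N", 3.5): 200, ("C", "N", 5.0): 250, ("C", "N", 7.0): 150}

def pmf_energy(pair_key, kT=0.6):
    """Inverse-Boltzmann energy for one pair and distance bin: E = -kT * ln(P_obs / P_ref)."""
    return -kT * math.log(observed[pair_key] / reference[pair_key])

def score(contacts):
    """Total knowledge-based energy of a model: the sum over its contacts."""
    return sum(pmf_energy(c) for c in contacts)

model_a = [("C", "N", 3.5), ("C", "N", 7.0)]
model_b = [("C", "N", 5.0), ("C", "N", 7.0)]
# Contacts seen more often than in the reference state get negative (favorable)
# energies, so model_b scores lower; minimization drives models toward such contacts.
assert score(model_b) < score(model_a)
```

    A real protocol would minimize this energy over atomic coordinates; the sketch only shows how pair statistics become an energy function to be minimized.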

  15. Knowledge-based extraction of adverse drug events from biomedical text

    PubMed Central

    2014-01-01

    Background Many biomedical relation extraction systems are machine-learning based and have to be trained on large annotated corpora that are expensive and cumbersome to construct. We developed a knowledge-based relation extraction system that requires minimal training data, and applied the system for the extraction of adverse drug events from biomedical text. The system consists of a concept recognition module that identifies drugs and adverse effects in sentences, and a knowledge-base module that establishes whether a relation exists between the recognized concepts. The knowledge base was filled with information from the Unified Medical Language System. The performance of the system was evaluated on the ADE corpus, consisting of 1644 abstracts with manually annotated adverse drug events. Fifty abstracts were used for training; the remaining abstracts were used for testing. Results The knowledge-based system obtained an F-score of 50.5%, which was 34.4 percentage points better than the co-occurrence baseline. Increasing the training set to 400 abstracts improved the F-score to 54.3%. When the system was compared with a machine-learning system, jSRE, on a subset of the sentences in the ADE corpus, our knowledge-based system achieved an F-score 7 percentage points higher than that of jSRE trained on 50 abstracts, and still 2 percentage points higher than that of jSRE trained on 90% of the corpus. Conclusion A knowledge-based approach can be successfully used to extract adverse drug events from biomedical text without the need for a large training set. Whether the use of a knowledge base is equally advantageous for other biomedical relation-extraction tasks remains to be investigated. PMID:24593054

  16. A knowledge-based artificial neural network classifier for pulmonary embolism diagnosis.

    PubMed

    Serpen, G; Tekkedil, D K; Orra, M

    2008-02-01

    This paper aims to demonstrate that knowledge-based hybrid learning algorithms are positioned to offer better performance in comparison with purely empirical machine learning algorithms for the automatic classification task associated with the diagnosis of a medical condition described as pulmonary embolism (PE). The main premise is that there exists substantial and significant specialized knowledge in the domain of PE, which can readily be leveraged for bootstrapping a knowledge-based hybrid classifier that employs both explanation-based and empirical learning. The modified prospective investigation of pulmonary embolism diagnosis (PIOPED) criteria, which represent the pre-eminent collective experiential knowledge base among nuclear radiologists as a diagnosis procedure for PE, are conveniently defined in terms of a set of if-then rules. As such, they lend themselves to being captured into a knowledge base through instantiating a knowledge-based hybrid learning algorithm. This study shows the instantiation of a knowledge-based artificial neural network (KBANN) classifier through the modified PIOPED criteria for the diagnosis of PE. The development effort for the KBANN, which captures the rule base associated with the PIOPED criteria as well as further refinement of the same rule base through highly specialized domain expertise, is presented. Through a testing dataset generated with the help of nuclear radiologists, performance of the instantiated KBANN is profiled. Performances of a set of empirical machine learning algorithms, which are configured as classifiers and include the naïve Bayes, the Bayesian belief network, the multilayer perceptron neural network, the C4.5 decision tree algorithm, and two meta-learners with boosting and bagging, are also profiled on the same dataset for the purpose of comparison with that of the KBANN.
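    The core of the KBANN technique is translating symbolic if-then rules into an initial network whose units compute the rules' conjunctions. A sketch of that translation follows; the rule and input names are illustrative, since the PIOPED criteria are not reproduced in this form in the abstract.

```python
W = 4.0  # weight magnitude assigned to each antecedent link

def step(net):
    """Threshold activation: the unit fires when its net input is positive."""
    return 1.0 if net > 0 else 0.0

def rule_unit(antecedents, inputs):
    """KBANN-style AND unit for a rule with n antecedents:
    each antecedent gets weight W, and the bias is -(n - 1/2) * W,
    so the unit fires only when all antecedents are true (value 1.0)."""
    n = len(antecedents)
    net = sum(W * inputs[a] for a in antecedents) - (n - 0.5) * W
    return step(net)

# Hypothetical rule: high-probability PE if a large perfusion defect
# co-occurs with normal ventilation.
rule = ["large_perfusion_defect", "normal_ventilation"]
assert rule_unit(rule, {"large_perfusion_defect": 1.0, "normal_ventilation": 1.0}) == 1.0
assert rule_unit(rule, {"large_perfusion_defect": 1.0, "normal_ventilation": 0.0}) == 0.0
```

    After this initialization, the network would be trained on examples, refining the expert rules empirically rather than replacing them.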
Simulation results indicate that the KBANN can effectively model and leverage the PIOPED knowledge base and its further refinements

  17. Knowledge-based Characterization of Similarity Relationships in the Human Protein-Tyrosine Phosphatase Family for Rational Inhibitor Design

    PubMed Central

    Vidović, Dušica; Schürer, Stephan C.

    2009-01-01

    Tyrosine phosphorylation, controlled by the coordinated action of protein-tyrosine kinases (PTKs) and protein-tyrosine phosphatases (PTPs), is a fundamental regulatory mechanism of numerous physiological processes. PTPs are implicated in a number of human diseases and their potential as prospective drug targets is increasingly being recognized. Despite their biological importance, until now no comprehensive overview has been reported describing how all members of the human PTP family are related. Here we review the entire human PTP family and present a systematic knowledge-based characterization of global and local similarity relationships, which are relevant for the development of small molecule inhibitors. We use parallel homology modeling to expand the current PTP structure space and analyze the human PTPs based on local three-dimensional catalytic sites and domain sequences. Furthermore, we demonstrate the importance of binding site similarities in understanding cross-reactivity and inhibitor selectivity in the design of small molecule inhibitors. PMID:19810703

  18. Enhancing Automatic Biological Pathway Generation with GO-based Gene Similarity

    SciTech Connect

    Sanfilippo, Antonio P.; Baddeley, Robert L.; Beagley, Nathaniel; Riensche, Roderick M.; Gopalan, Banu

    2009-08-03

    One of the greatest challenges in today’s analysis of microarray gene expression data is to identify pathways across regulated genes that underlie structural and functional changes of living cells in specific pathologies. Most current approaches to pathway generation are based on a reverse engineering approach in which pathway plausibility is solely induced from observed pathway data. These approaches tend to lack generality, as they are too dependent on the pathway observables from which they are induced. By contrast, alternative approaches that rely on prior biological knowledge may err in the opposite direction, as the prior knowledge is usually not sufficiently tuned to the pathology of focus. In this paper, we present a novel pathway generation approach which combines insights from the reverse engineering and knowledge-based approaches to increase the biological plausibility and specificity of induced regulatory networks.
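    One simple way to quantify GO-based gene similarity, of the kind such knowledge-based approaches exploit, is to compare the ancestor sets of two terms in the ontology DAG. The toy ontology and the Jaccard measure below are illustrative assumptions, not the paper's actual similarity method.

```python
# Toy GO-like DAG: each term maps to its parent terms.
parents = {"apoptosis": ["cell_death"], "autophagy": ["cell_death"],
           "cell_death": ["biological_process"], "growth": ["biological_process"],
           "biological_process": []}

def ancestors(term):
    """The term plus all its ancestors, following parent links to the root."""
    out, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in out:
            out.add(t)
            stack.extend(parents[t])
    return out

def go_similarity(term_a, term_b):
    """Jaccard overlap of ancestor sets: shared structure in the ontology."""
    a, b = ancestors(term_a), ancestors(term_b)
    return len(a & b) / len(a | b)

# Sibling processes under cell_death score higher than unrelated processes.
assert go_similarity("apoptosis", "autophagy") > go_similarity("apoptosis", "growth")
```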

  19. IGENPRO knowledge-based digital system for process transient diagnostics and management

    SciTech Connect

    Morman, J.A.; Reifman, J.; Wei, T.Y.C.

    1997-12-31

    Verification and validation issues have been perceived as important factors in the large-scale deployment of knowledge-based digital systems for plant transient diagnostics and management. Research and development (R&D) is being performed on the IGENPRO package to resolve knowledge base issues. The IGENPRO approach is to structure the knowledge bases on generic thermal-hydraulic (T-H) first principles rather than the conventional event-based structure. This allows for generic comprehensive knowledge, relatively small knowledge bases and, above all, the possibility of T-H system/plant independence. To demonstrate concept feasibility, the knowledge structure has been implemented in the diagnostic module PRODIAG. Promising laboratory testing results have been obtained using data from the full-scope Braidwood PWR operator training simulator. This knowledge structure is now being implemented in the transient management module PROMANA to treat unanticipated events, and the PROTREN module is being developed to process actual plant data. Achievement of the IGENPRO R&D goals should contribute to the acceptance of knowledge-based digital systems for transient diagnostics and management.

  20. A knowledge-based approach to automated flow-field zoning for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1989-01-01

    An automated three-dimensional zonal grid generation capability for computational fluid dynamics is shown through the development of a demonstration computer program capable of automatically zoning the flow field of representative two-dimensional (2-D) aerodynamic configurations. The applicability of a knowledge-based programming approach to the domain of flow-field zoning is examined. Several aspects of flow-field zoning make the application of knowledge-based techniques challenging: the need for perceptual information, the role of individual bias in the design and evaluation of zonings, and the fact that the zoning process is modeled as a constructive, design-type task (for which there are relatively few examples of successful knowledge-based systems in any domain). Engineering solutions to the problems arising from these aspects are developed, and a demonstration system is implemented which can design, generate, and output flow-field zonings for representative 2-D aerodynamic configurations.

  1. Design and implementation of knowledge-based framework for ground objects recognition in remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Shaobin; Ding, Mingyue; Cai, Chao; Fu, Xiaowei; Sun, Yue; Chen, Duo

    2009-10-01

    The advance of image processing makes knowledge-based automatic image interpretation much more realistic than ever. In the domain of remote sensing image processing, the introduction of knowledge enhances the confidence of recognition of typical ground objects. There are mainly two approaches to employing knowledge: the first scatters knowledge throughout concrete programs, so that the relevant knowledge of ground objects is fixed at programming time; the second systematically stores knowledge in a knowledge base, offering unified guidance for each object recognition procedure. In this paper, a knowledge-based framework for ground object recognition in remote sensing images is proposed. This framework takes the second approach, using knowledge within a hierarchical architecture. The recognition of a typical airport demonstrated the feasibility of the proposed framework.

  2. Geomorphological feature extraction from a digital elevation model through fuzzy knowledge-based classification

    NASA Astrophysics Data System (ADS)

    Argialas, Demetre P.; Tzotsos, Angelos

    2003-03-01

    The objective of this research was the investigation of advanced image analysis methods for geomorphological mapping. Methods employed included multiresolution segmentation of the Digital Elevation Model (DEM) GTOPO30 and fuzzy knowledge-based classification of the segmented DEM into three geomorphological classes: mountain ranges, piedmonts and basins. The study area was a segment of the Basin and Range Physiographic Province in Nevada, USA. The implementation was made in eCognition. In particular, the segmentation of GTOPO30 resulted in primitive objects. The knowledge-based classification of the primitive objects, based on their elevation and shape parameters, resulted in the extraction of the geomorphological features. The resulting boundaries were found satisfactory in comparison to those of previous studies. It is concluded that geomorphological feature extraction can be carried out through fuzzy knowledge-based classification as implemented in eCognition.
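    A fuzzy knowledge-based classification of this kind can be sketched with trapezoidal membership functions over elevation; the class boundaries below are hypothetical values, not the study's actual eCognition rule parameters, and a real system would also use the shape parameters mentioned above.

```python
def trapezoid(x, a, b, c, d):
    """Standard trapezoidal fuzzy membership function over [a, d] with plateau [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical elevation ranges (metres) for the three geomorphological classes.
classes = {"basin": (-1, 0, 1200, 1500),
           "piedmont": (1200, 1500, 1800, 2100),
           "mountain_range": (1800, 2100, 4000, 4001)}

def classify(elevation):
    """Defuzzify by assigning the class with the highest membership degree."""
    return max(classes, key=lambda c: trapezoid(elevation, *classes[c]))

assert classify(800) == "basin"
assert classify(1650) == "piedmont"
assert classify(2500) == "mountain_range"
```

    The overlapping ramps (e.g. 1200-1500 m) are what make the classification fuzzy: an elevation there has partial membership in both basin and piedmont.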

  3. Expert systems "to go": Laptop personal computers and knowledge-based software

    SciTech Connect

    Carr, K.R.

    1988-01-01

    There are many instances in the everyday activities of many fields in which the physically small, lightweight, battery-powered laptop personal computer with appropriate knowledge-based software can be a highly beneficial tool. For example, in chemical engineering field work, the easily portable laptop personal computer with expert system programs can provide expert assistance in troubleshooting equipment, tuning controllers, and installing various components. Laptop personal computers fill a particular niche in the use of knowledge-based software; at the Oak Ridge facilities of the US Department of Energy, knowledge-based software is being used on a wide range of computers, including a Cray X-MP/12; Digital Equipment Corp. VAX 8700, VAX 8200, and VAXstations; Symbolics, Inc. 3640 LISP machines; and many IBM PCs and compatibles, as well as laptop personal computers. Factors considered include expert system transfer from larger computers, program execution speed, memory capacity, and options for utilization of non-volatile memory storage. 46 refs.

  4. Applying knowledge-based methods to design and implement an air quality workshop

    NASA Astrophysics Data System (ADS)

    Schmoldt, Daniel L.; Peterson, David L.

    1991-09-01

    In response to protection needs in class I wilderness areas, forest land managers of the USDA Forest Service must provide input to regulatory agencies regarding air pollutant impacts on air quality-related values. Regional workshops have been convened for land managers and scientists to discuss the aspects and extent of wilderness protection needs. Previous experience with a national workshop indicated that a document summarizing workshop discussions will have little operational utility. An alternative is to create a knowledge-based analytical system, in addition to the document, to aid land managers in assessing effects of air pollutants on wilderness. Knowledge-based methods were used to design and conduct regional workshops in the western United States. Extracting knowledge from a large number of workshop participants required careful planning of workshop discussions. Knowledge elicitation methods helped with this task. This knowledge-based approach appears to be effective for focusing group discussions and collecting knowledge from large groups of specialists.

  5. A knowledge base for tracking the impact of genomics on population health.

    PubMed

    Yu, Wei; Gwinn, Marta; Dotson, W David; Green, Ridgely Fisk; Clyne, Mindy; Wulf, Anja; Bowen, Scott; Kolor, Katherine; Khoury, Muin J

    2016-12-01

    We created an online knowledge base (the Public Health Genomics Knowledge Base (PHGKB)) to provide systematically curated and updated information that bridges population-based research on genomics with clinical and public health applications. Weekly horizon scanning of a wide variety of online resources is used to retrieve relevant scientific publications, guidelines, and commentaries. After curation by domain experts, links are deposited into Web-based databases. PHGKB currently consists of nine component databases. Users can search the entire knowledge base or search one or more component databases directly and choose options for customizing the display of their search results. PHGKB offers researchers, policy makers, practitioners, and the general public a way to find information they need to understand the complicated landscape of genomics and population health. Genet Med 18(12), 1312-1314.

  6. Developing a knowledge base to support the annotation of ultrasound images of ectopic pregnancy.

    PubMed

    Dhombres, Ferdinand; Maurice, Paul; Friszer, Stéphanie; Guilbaud, Lucie; Lelong, Nathalie; Khoshnood, Babak; Charlet, Jean; Perrot, Nicolas; Jauniaux, Eric; Jurkovic, Davor; Jouannic, Jean-Marie

    2017-01-31

    Ectopic pregnancy is a frequent early complication of pregnancy associated with significant rates of morbidity and mortality. The positive diagnosis of this condition is established through transvaginal ultrasound scanning. The timing of diagnosis depends on the operator expertise in identifying the signs of ectopic pregnancy, which varies dramatically among medical staff with heterogeneous training. Developing decision support systems in this context is expected to improve the identification of these signs and subsequently improve the quality of care. In this article, we present a new knowledge base for ectopic pregnancy, and we demonstrate its use for the annotation of clinical images. The knowledge base is supported by an application ontology, which provides the taxonomy, the vocabulary and definitions for 24 types and 81 signs of ectopic pregnancy, 484 anatomical structures and 32 technical elements for image acquisition. The knowledge base provides a sign-centric model of the domain, with the relations of signs to ectopic pregnancy types, anatomical structures and the technical elements. The evaluation of the ontology and knowledge base demonstrated positive feedback from a panel of 17 medical users. Leveraging these semantic resources, we developed an application for the annotation of ultrasound images. Using this application, 6 operators achieved a precision of 0.83 for the identification of signs in 208 ultrasound images corresponding to 35 clinical cases of ectopic pregnancy. We developed a new ectopic pregnancy knowledge base for the annotation of ultrasound images. The use of this knowledge base for the annotation of ultrasound images of ectopic pregnancy showed promising results from the perspective of clinical decision support system development. Other gynecological disorders and fetal anomalies may benefit from our approach.

  7. Space shuttle main engine anomaly data and inductive knowledge based systems: Automated corporate expertise

    NASA Technical Reports Server (NTRS)

    Modesitt, Kenneth L.

    1987-01-01

    Progress is reported on the development of SCOTTY, an expert knowledge-based system to automate the analysis procedure following test firings of the Space Shuttle Main Engine (SSME). The integration of a large-scale relational data base system, a computer graphics interface for experts and end-user engineers, potential extension of the system to flight engines, application of the system for training of newly-hired engineers, technology transfer to other engines, and the essential qualities of good software engineering practices for building expert knowledge-based systems are among the topics discussed.

  8. Knowledge-based immunosuppressive therapy for kidney transplant patients--from theoretical model to clinical integration.

    PubMed

    Seeling, Walter; Plischke, Max; de Bruin, Jeroen S; Schuh, Christian

    2015-01-01

    Immunosuppressive therapy is a risky necessity after a patient has received a kidney transplant. To reduce risks, a knowledge-based system was developed that determines the right dosage of the immunosuppressive agent Tacrolimus. A theoretical model for classifying medication blood levels as well as medication adaptations was created using data from almost 500 patients and over 13,000 examinations. This model was then translated into an Arden Syntax knowledge base and integrated directly into the hospital information system of the Vienna General Hospital. In this paper we give an overview of the construction and integration of such a system.
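    The classification-and-adaptation step such a system performs can be sketched as follows. The target trough range and the dose-adjustment factors below are invented for illustration; they are not the Vienna system's actual knowledge base (which is written in Arden Syntax) and must not be read as clinical guidance.

```python
TARGET_LOW, TARGET_HIGH = 5.0, 10.0  # ng/mL, an assumed trough target range

def classify_level(trough):
    """Place a measured Tacrolimus trough level relative to the target range."""
    if trough < TARGET_LOW:
        return "below_range"
    if trough > TARGET_HIGH:
        return "above_range"
    return "in_range"

def adapt_dose(current_dose_mg, trough):
    """Toy adaptation rule: nudge the dose 25% toward the target range."""
    status = classify_level(trough)
    if status == "below_range":
        return round(current_dose_mg * 1.25, 2)
    if status == "above_range":
        return round(current_dose_mg * 0.75, 2)
    return current_dose_mg

assert classify_level(3.2) == "below_range"
assert adapt_dose(4.0, 12.5) == 3.0   # above range: dose reduced
assert adapt_dose(4.0, 7.0) == 4.0    # in range: dose unchanged
```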

  9. Arranging ISO 13606 archetypes into a knowledge base using UML connectors.

    PubMed

    Kopanitsa, Georgy

    2014-01-01

    To enable the efficient reuse of standard-based medical data, we propose to develop a higher-level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analysed for their ability to be applied in the implementation of a higher-level model that establishes relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. Presently it supports several relationships and will be extended in the future.

  10. Design of Composite Structures Using Knowledge-Based and Case Based Reasoning

    NASA Technical Reports Server (NTRS)

    Lambright, Jonathan Paul

    1996-01-01

    A method of using knowledge-based and case-based reasoning to assist designers during conceptual design tasks of composite structures was proposed. The cooperative use of heuristics, procedural knowledge, and previous similar design cases suggests a potential reduction in design cycle time and ultimately product lead time. The hypothesis of this work is that the design process of composite structures can be improved by using Case-Based Reasoning (CBR) and Knowledge-Based (KB) reasoning in the early design stages. The technique of using knowledge-based and case-based reasoning facilitates the gathering of disparate information into one location that is easily and readily available. The method suggests that the inclusion of downstream life-cycle issues in the conceptual design phase reduces the potential for defective and sub-optimal composite structures. Three industry experts were interviewed extensively. The experts provided design rules, previous design cases, and test problems. A Knowledge-Based Reasoning system was developed using the CLIPS (C Language Integrated Production System) environment, and a Case-Based Reasoning system was developed using the Design Memory Utility For Sharing Experiences (MUSE) environment. A Design Characteristic State (DCS) was used to document the design specifications, constraints, and problem areas using attribute-value pair relationships. The DCS provided consistent design information between the knowledge base and case base. Results indicated that the use of knowledge-based and case-based reasoning provided a robust design environment for composite structures. The knowledge base provided design guidance from well-defined rules and procedural knowledge. The case base provided suggestions on design and manufacturing techniques based on previous similar designs, and warnings of potential problems and pitfalls. The case base complemented the knowledge base and extended the problem solving capability beyond the existence of
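    Case retrieval over attribute-value pairs of the kind a Design Characteristic State records can be sketched as a nearest-case lookup. The attributes, values, and stored cases below are invented for illustration, not taken from the MUSE case base described above.

```python
# Hypothetical past design cases described as attribute-value pairs.
case_base = [
    {"part": "wing_skin", "layup": "quasi-isotropic", "cure": "autoclave"},
    {"part": "fuselage_panel", "layup": "unidirectional", "cure": "autoclave"},
    {"part": "wing_skin", "layup": "unidirectional", "cure": "oven"},
]

def similarity(query, case):
    """Fraction of the query's attribute-value pairs matched by a stored case."""
    return sum(case.get(k) == v for k, v in query.items()) / len(query)

def retrieve(query):
    """Return the most similar past design case for reuse and adaptation."""
    return max(case_base, key=lambda c: similarity(query, c))

best = retrieve({"part": "wing_skin", "cure": "autoclave"})
assert best["layup"] == "quasi-isotropic"  # the case matching both query attributes
```

    A knowledge-based component would then check the retrieved suggestion against design rules before presenting it to the designer.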

  11. Generating MEDLINE search strategies using a librarian knowledge-based system.

    PubMed Central

    Peng, P.; Aguirre, A.; Johnson, S. B.; Cimino, J. J.

    1993-01-01

    We describe a librarian knowledge-based system that generates a search strategy from a query representation based on a user's information need. Together with the natural language parser AQUA, the system functions as a human/computer interface that translates a user query from free text into a BRS Onsite search formulation for searching the MEDLINE bibliographic database. In the system, conceptual graphs are used to represent the user's information need. The UMLS Metathesaurus and Semantic Net are used as the key knowledge sources in building the knowledge base. PMID:8130544
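    The final translation step, from recognized concepts to a boolean search formulation, can be sketched as below. The concept-to-MeSH mapping is invented, and the output uses a PubMed-like field-tag syntax for illustration rather than the BRS Onsite syntax the original system emitted.

```python
# Hypothetical mapping from recognized free-text concepts to MeSH headings.
mesh_map = {"heart attack": "Myocardial Infarction", "aspirin": "Aspirin"}

def to_search_strategy(concepts):
    """AND together the mapped MeSH headings for each recognized concept."""
    terms = [f'"{mesh_map[c]}"[MeSH]' for c in concepts if c in mesh_map]
    return " AND ".join(terms)

query = to_search_strategy(["aspirin", "heart attack"])
assert query == '"Aspirin"[MeSH] AND "Myocardial Infarction"[MeSH]'
```

    The real system adds much more on top of this: conceptual graphs capture the relations between concepts, so the strategy can express more than a flat conjunction.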

  12. PRAIS: Distributed, real-time knowledge-based systems made easy

    NASA Technical Reports Server (NTRS)

    Goldstein, David G.

    1990-01-01

    This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives for transparently parallelizing production (rule-based) systems, even when under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.

  14. Interfaces for knowledge-base builders control knowledge and application-specific procedures

    SciTech Connect

    Hirsch, P.; Katke, W.; Meier, M.; Snyder, S.; Stillman, R.

    1986-01-01

    Expert System Environment/VM is an expert system shell: a general-purpose system for constructing and executing expert system applications. An application expert has both factual knowledge about an application and knowledge about how that factual knowledge should be organized and processed. In addition, many applications require application-dependent procedures to access databases or to do specialized processing. An important and novel part of Expert System Environment/VM is the technique used to allow the expert or knowledge-base builder to enter the control knowledge and to interface with application-dependent procedures. This paper discusses these high-level interfaces for the knowledge-base builder.

  15. The data dictionary--a controlled vocabulary for integrating clinical databases and medical knowledge bases.

    PubMed

    Linnarsson, R; Wigertz, O

    1989-04-01

    The medical information systems of the future will probably include the entire medical record as well as a knowledge base, providing decision support for the physician during patient care. Data dictionaries will play an important role in integrating the medical knowledge bases with the clinical databases. This article presents an infological data model of such an integrated medical information system. Medical events, medical terms, and medical facts are the basic concepts that constitute the model. To allow the transfer of information and knowledge between systems, the data dictionary should be organized with regard to several common classification schemes of medical nomenclature.

  16. Knowledge-based monitoring of the pointing control system on the Hubble space telescope

    NASA Technical Reports Server (NTRS)

    Dunham, Larry L.; Laffey, Thomas J.; Kao, Simon M.; Schmidt, James L.; Read, Jackson Y.

    1987-01-01

    A knowledge-based system for the real-time monitoring of telemetry data from the Pointing and Control System (PCS) of the Hubble Space Telescope (HST) is described; it enables the retention of design expertise throughout the three-decade project lifespan by means other than personnel and documentation. The system will monitor performance, vehicle status, and the success or failure of various maneuvers, and in some cases diagnose problems and recommend corrective actions, using a knowledge base built from mission scenarios and the more than 4,500 telemetry monitors of the HST.
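    The basic limit-check that such a telemetry monitor performs can be sketched as follows; the monitor names and limits are invented, not actual HST PCS telemetry definitions.

```python
# Hypothetical telemetry monitors and their allowed ranges.
limits = {"gyro_rate_dps": (-0.5, 0.5), "wheel_speed_rpm": (0, 3000)}

def check(sample):
    """Return the monitors whose current values fall outside their limits."""
    return [m for m, v in sample.items()
            if not (limits[m][0] <= v <= limits[m][1])]

# A sample with one out-of-limit value flags exactly that monitor.
assert check({"gyro_rate_dps": 0.1, "wheel_speed_rpm": 3200}) == ["wheel_speed_rpm"]
assert check({"gyro_rate_dps": 0.0, "wheel_speed_rpm": 100}) == []
```

    A knowledge-based layer then interprets flagged monitors in context, e.g. distinguishing an expected transient during a maneuver from a genuine fault.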

  17. Extending the Learning Experience Using the Web and a Knowledge-Based Virtual Environment.

    ERIC Educational Resources Information Center

    Parkinson, B.; Hudson, P.

    2002-01-01

    Identifies problems associated with teaching and learning a complex subject such as engineering design within a restrictive educational environment. Describes the development of a Web-based computer aid in the United Kingdom which employs a multimedia virtual environment incorporating domain-specific knowledge-based systems to emulate a range of…

  18. Knowledge-Based Information Management for Watershed Analysis in the Pacific Northwest U.S.

    Treesearch

    Keith Reynolds; Richard Olson; Michael Saunders; Donald Latham; Michael Foster; Bruce Miller; Lawrence Bednar; Daniel Schmoldt; Patrick Cunningham; John Steffenson

    1996-01-01

    We are developing a knowledge-based information management system to provide decision support for watershed analysis in the Pacific Northwest region of the U.S. The system includes: (1) a GIS interface that allows users to graphically navigate to specific provinces and watersheds and display a variety of themes and other area-specific information, (2) an analysis...

  19. Knowledge Based Artificial Augmentation Intelligence Technology: Next Step in Academic Instructional Tools for Distance Learning

    ERIC Educational Resources Information Center

    Crowe, Dale; LaPierre, Martin; Kebritchi, Mansureh

    2017-01-01

    With augmented intelligence/knowledge based system (KBS) it is now possible to develop distance learning applications to support both curriculum and administrative tasks. Instructional designers and information technology (IT) professionals are now moving from the programmable systems era that started in the 1950s to the cognitive computing era.…

  20. Longitudinal Assessment of Progress in Reasoning Capacity and Relation with Self-Estimation of Knowledge Base

    ERIC Educational Resources Information Center

    Collard, Anne; Mélot, France; Bourguignon, Jean-Pierre

    2015-01-01

    The aim of the study was to investigate progress in reasoning capacity and knowledge base appraisal in a longitudinal analysis of data from summative evaluation throughout a medical problem-based learning curriculum. The scores in multidisciplinary discussion of a clinical case and multiple choice questionnaires (MCQs) were studied longitudinally…

  1. Knowledge-Based Indexing of the Medical Literature: The Indexing Aid Project.

    ERIC Educational Resources Information Center

    Humphrey, Suzanne; Miller, Nancy E.

    1987-01-01

    Describes the National Library of Medicine's (NLM) Indexing Aid Project for conducting research in knowledge representation and indexing for information retrieval, whose goal is to develop interactive knowledge-based systems for computer-assisted indexing of the periodical medical literature. Appendices include background information on NLM…

  2. Knowledge-Based Information Management in Decision Support for Ecosystem Management

    Treesearch

    Keith Reynolds; Micahel Saunders; Richard Olson; Daniel Schmoldt; Michael Foster; Donald Latham; Bruce Miller; John Steffenson; Lawrence Bednar; Patrick Cunningham

    1995-01-01

    The Pacific Northwest Research Station (USDA Forest Service) is developing a knowledge-based information management system to provide decision support for watershed analysis in the Pacific Northwest region of the U.S. The decision support system includes: (1) a GIS interface that allows users to graphically navigate to specific provinces and watersheds and display a...

  3. The Knowledge-Based Reasoning of Physical Education Teachers: A Comparison between Groups with Different Expertise

    ERIC Educational Resources Information Center

    Reuker, Sabine

    2017-01-01

    The study addresses professional vision, including the abilities of selective attention and knowledge-based reasoning. This article focuses on the latter ability. Groups with different sport-specific and pedagogical expertise (n = 60) were compared according to their observation and interpretation of sport activities in a four-field design. The…

  4. The Spread of Contingent Work in the Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Szabo, Katalin; Negyesi, Aron

    2005-01-01

    Permanent employment, typical of industrial societies and bolstered by numerous social guarantees, has been declining in the past two decades. There has been a steady expansion of various forms of contingent work. The decomposition of traditional work is a logical consequence of the characteristic patterns of the knowledge-based economy. According…

  5. Developing a Knowledge Base for Educational Leadership and Management in East Asia

    ERIC Educational Resources Information Center

    Hallinger, Philip

    2011-01-01

    The role of school leadership in educational reform has reached the status of a truism, and led to major changes in school leader recruitment, selection, training and appraisal. While similar policy trends are evident in East Asia, the empirical knowledge base underlying these measures is distorted and lacking in validation. This paper begins by…

  6. Analytical and knowledge-based redundancy for fault diagnosis in process plants

    SciTech Connect

    Fathi, Z.; Ramirez, W.F.; Korbicz, J.

    1993-01-01

    The increasing complexity of process plants and the demands on their reliability have necessitated more powerful methods for detecting and diagnosing process abnormalities. Among the available strategies, analytical redundancy and knowledge-based system techniques offer viable solutions. In this work, the authors consider the adaptive inclusion of analytical-redundancy models (state and parameter estimation modules) in the diagnostic reasoning loop of a knowledge-based system, which helps overcome the difficulties associated with each approach used alone. The design method is a new layered knowledge base that houses compiled/qualitative knowledge in the high levels and process-general estimation knowledge in the low levels of a hierarchical knowledge structure. The compiled knowledge is used to narrow the diagnostic search space and provide an effective way of employing estimation modules, while the estimation-based methods, grounded in fundamental analysis, provide the rationale for a qualitatively guided reasoning process. The overall structure of the resulting fault detection and isolation system is discussed, focusing on the model-based redundancy methods that make up the low levels of the hierarchical knowledge base. The system has been implemented for the condensate-feedwater subsystem of a coal-fired power plant; because the power plant dynamics are highly nonlinear and mixed-mode, a modified extended Kalman filter is used in designing the local detection filters.
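
    The pairing of estimation-based redundancy with a compiled diagnostic rule layer can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: a simple scalar Kalman filter stands in for the modified extended Kalman filter, its normalized innovations serve as residuals, and a hypothetical rule declares a fault when the residual stays beyond a threshold for several consecutive steps.

```python
# Minimal sketch: analytical redundancy (Kalman innovations) feeding a
# rule-based diagnostic layer. All names, thresholds and noise values are
# illustrative, not taken from the paper.

def kalman_step(x, P, z, A=1.0, H=1.0, Q=0.01, R=0.1):
    """One predict/update cycle of a scalar Kalman filter.
    Returns updated state, covariance, and the normalized innovation."""
    # Predict
    x_pred = A * x
    P_pred = A * P * A + Q
    # Innovation (residual) and its variance
    nu = z - H * x_pred
    S = H * P_pred * H + R
    # Update
    K = P_pred * H / S
    x_new = x_pred + K * nu
    P_new = (1 - K * H) * P_pred
    return x_new, P_new, nu / S ** 0.5  # normalized innovation

def diagnose(norm_innovations, threshold=3.0, persistence=2):
    """Compiled-knowledge layer (hypothetical rule): declare a fault when
    the normalized innovation exceeds the threshold for `persistence`
    consecutive steps."""
    run = 0
    for i, nu in enumerate(norm_innovations):
        run = run + 1 if abs(nu) > threshold else 0
        if run >= persistence:
            return f"fault detected at step {i}"
    return "no fault"

# Nominal measurements around 1.0, then a sensor bias appears at step 10.
measurements = [1.0] * 10 + [3.0] * 10
x, P, resids = 1.0, 1.0, []
for z in measurements:
    x, P, r = kalman_step(x, P, z)
    resids.append(r)

print(diagnose(resids))
```

    The persistence condition is the kind of compiled heuristic the abstract places in the high levels of the knowledge base; the filter itself supplies the low-level, process-general redundancy.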

  8. Artificial intelligence in process control: Knowledge base for the shuttle ECS model

    NASA Technical Reports Server (NTRS)

    Stiffler, A. Kent

    1989-01-01

    The general operation of KATE, an artificial intelligence controller, is outlined. A shuttle environmental control system (ECS) demonstration system for KATE is explained. The knowledge base model for this system is derived. An experimental test procedure is given to verify parameters in the model.

  9. Proposing a Knowledge Base for Teaching Academic Content to English Language Learners: Disciplinary Linguistic Knowledge

    ERIC Educational Resources Information Center

    Turkan, Sultan; De Oliveira, Luciana C.; Lee, Okhee; Phelps, Geoffrey

    2014-01-01

    Background/Context: The current research on teacher knowledge and teacher accountability falls short on information about what teacher knowledge base could guide preparation and accountability of the mainstream teachers for meeting the academic needs of ELLs. Most recently, research on specialized knowledge for teaching has offered ways to…

  10. Static and Completion Analysis for Planning Knowledge Base Development and Verification

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems.

  11. Universities and the Knowledge-Based Economy: Perceptions from a Developing Country

    ERIC Educational Resources Information Center

    Bano, Shah; Taylor, John

    2015-01-01

    This paper considers the role of universities in the creation of a knowledge-based economy (KBE) in a developing country, Pakistan. Some developing countries have moved quickly to develop a KBE, but progress in Pakistan is much slower. Higher education plays a crucial role as part of the triple helix model for innovation. Based on the perceptions…

  12. The Impact of the Shifting Knowledge Base, from Development to Achievement, on Early Childhood Education Programs

    ERIC Educational Resources Information Center

    Tyler, Kathleen P.

    2012-01-01

    Interest in child development as a knowledge base for early childhood education programs flourished in the 1970s as a result of the theories and philosophies of Jean Piaget and other cognitive developmentalists. During subsequent decades in America, reform movements emphasizing accountability and achievement became a political and social…

  13. The Knowledge Base of Non-Native English-Speaking Teachers: Perspectives of Teachers and Administrators

    ERIC Educational Resources Information Center

    Zhang, Fengjuan; Zhan, Ju

    2014-01-01

    This study explores the knowledge base of non-native English-speaking teachers (NNESTs) working in the Canadian English as a second language (ESL) context. By examining NNESTs' experiences in seeking employment and teaching ESL in Canada, and investigating ESL program administrators' perceptions and hiring practices in relation to NNESTs, it…

  15. Hospital Bioethics: A Beginning Knowledge Base for the Neonatal Social Worker.

    ERIC Educational Resources Information Center

    Silverman, Ed

    1992-01-01

    Notes that life-saving advances in medicine have created difficult ethical and legal dilemmas for health care professionals. Presents beginning knowledge base for bioethical practice, especially in hospital neonatal units. Outlines key elements of bioethical decision making and examines potential social work role from clinical and organizational…

  16. In Search of Museum Professional Knowledge Base: Mapping the Professional Knowledge Debate onto Museum Work

    ERIC Educational Resources Information Center

    Tlili, Anwar

    2016-01-01

    Museum professionalism remains an unexplored area in museum studies, particularly with regard to what is arguably the core generic question of a "sui generis" professional knowledge base, and its necessary and sufficient conditions. The need to examine this question becomes all the more important with the increasing expansion of the…

  17. Effects of the Knowledge Base on Children's Rehearsal and Organizational Strategies.

    ERIC Educational Resources Information Center

    Ornstein, Peter A.; Naus, Mary J.

    In addition to the important role of memory strategies in mediating age changes in recall performance, it is clear that the permanent memory system (or information available in the knowledge base) exerts a significant influence on the acquisition and retention of information. Age changes in memory performance will be fully understood only through…

  18. Learning and Innovation in the Knowledge-Based Economy: Beyond Clusters and Qualifications

    ERIC Educational Resources Information Center

    James, Laura; Guile, David; Unwin, Lorna

    2013-01-01

    For over a decade policy-makers have claimed that advanced industrial societies should develop a knowledge-based economy (KBE) in response to economic globalisation and the transfer of manufacturing jobs to lower cost countries. In the UK, this vision shaped New Labour's policies for vocational education and training (VET), higher education and…

  19. Sociopathic Knowledge Bases: Correct Knowledge Can Be Harmful Even Given Unlimited Computation

    DTIC Science & Technology

    1989-08-01

    Monitoring organization: Artificial Intelligence (Code 1133). University of Illinois, …Mathews Ave, Urbana, IL 61801. August 1989. Submitted for publication: Artificial Intelligence Journal. Sociopathic Knowledge Bases: Correct… Introduction: Reasoning under uncertainty has been widely investigated in artificial intelligence. Probabilistic approaches are of particular relevance

  20. Learning Spaces: An ICT-Enabled Model of Future Learning in the Knowledge-Based Society

    ERIC Educational Resources Information Center

    Punie, Yves

    2007-01-01

    This article presents elements of a future vision of learning in the knowledge-based society which is enabled by ICT. It is not only based on extrapolations from trends and drivers that are shaping learning in Europe but also consists of a holistic attempt to envisage and anticipate future learning needs and requirements in the KBS. The…

  2. Preparing Oral Examinations of Mathematical Domains with the Help of a Knowledge-Based Dialogue System.

    ERIC Educational Resources Information Center

    Schmidt, Peter

    A conception for discussing mathematical material in the domain of calculus is outlined. One application is for university students to work on their knowledge and prepare for their oral examinations by using the dialogue system. The conception rests on three pillars; one central pillar is a knowledge base containing the collections of…

  4. Development of the Regulatory Commission and Knowledge Base System and Investigation of Possible Augmentation Technologies

    DTIC Science & Technology

    1988-07-01

    Gary Zuckerman, personal communication, Software A&E, Marketing Representative, March 1987. Zuckerman, personal communication. "KES for Today's Knowledge Based Systems" (Software A&E, 1986). Ricki Kleist, personal communication, Software A&E, Marketing Representative, March 1987.

  5. Approximate Degrees of Similarity between a User's Knowledge and the Tutorial Systems' Knowledge Base

    ERIC Educational Resources Information Center

    Mogharreban, Namdar

    2004-01-01

    A typical tutorial system functions by means of interaction between four components: the expert knowledge base component, the inference engine component, the learner's knowledge component and the user interface component. In typical tutorial systems the interaction and the sequence of presentation as well as the mode of evaluation are…

  6. Developing genomic knowledge bases and databases to support clinical management: current perspectives.

    PubMed

    Huser, Vojtech; Sincan, Murat; Cimino, James J

    2014-01-01

    Personalized medicine, the ability to tailor diagnostic and treatment decisions for individual patients, is seen as the evolution of modern medicine. We characterize here the informatics resources available today or envisioned in the near future that can support clinical interpretation of genomic test results. We assume a clinical sequencing scenario (germline whole-exome sequencing) in which a clinical specialist, such as an endocrinologist, needs to tailor patient management decisions within his or her specialty (targeted findings) but relies on a genetic counselor to interpret off-target incidental findings. We characterize the genomic input data and list various types of knowledge bases that provide genomic knowledge for generating clinical decision support. We highlight the need for patient-level databases with detailed lifelong phenotype content in addition to genotype data and provide a list of recommendations for personalized medicine knowledge bases and databases. We conclude that no single knowledge base can currently support all aspects of personalized recommendations and that consolidation of several current resources into larger, more dynamic and collaborative knowledge bases may offer a future path forward.

  7. Improving Student Teachers' Knowledge-Base in Language Education through Critical Reading

    ERIC Educational Resources Information Center

    Mulumba, Mathias Bwanika

    2016-01-01

    The emergence of the digital era is redefining education and the pedagogical processes in an unpredictable manner. In the midst of the increased availability of print and online resources, the twenty-first century language teacher educator expects her (or his) student teachers to be reading beings if they are to improve their knowledge-base in…

  9. Clear as Glass: A Combined List of Print and Electronic Journals in the Knowledge Base

    ERIC Educational Resources Information Center

    Lowe, M. Sara

    2008-01-01

    The non-standard practice at Cowles Library at Drake University has been to display electronic journals and some print journals in the Knowledge Base while simultaneously listing print journals and some electronic journals in the online public access catalog (OPAC). The result was a system that made it difficult for patrons to determine our…

  10. Testing of a Natural Language Retrieval System for a Full Text Knowledge Base.

    ERIC Educational Resources Information Center

    Bernstein, Lionel M.; Williamson, Robert E.

    1984-01-01

    The Hepatitis Knowledge Base (text of prototype information system) was used for modifying and testing "A Navigator of Natural Language Organized (Textual) Data" (ANNOD), a retrieval system which combines probabilistic, linguistic, and empirical means to rank individual paragraphs of full text for similarity to natural language queries…

  11. Deception Detection in Expert Source Information Through Bayesian Knowledge-Bases

    DTIC Science & Technology

    2008-02-04

    intelligence and have implemented deception detection algorithms using probabilistic, intelligent, multi-agent systems. We have also conducted numerous… "Bayesian Knowledge Bases," Data and Knowledge Engineering 64, 218-241, 2008. Yuan, Xiuqing, "Deception Detection in Multi-Agent System and War

  12. Simultaneous Mapping of Interactions between Scientific and Technological Knowledge Bases: The Case of Space Communications.

    ERIC Educational Resources Information Center

    Hassan, E.

    2003-01-01

    Examines the knowledge structure of the field of space communications using bibliometric mapping techniques based on textual analysis. Presents a new approach with the aim of visualizing simultaneously the configuration of the scientific and technological knowledge bases at a worldwide level, and discusses results that show different…

  13. EMDS users guide (version 2.0): knowledge-based decision support for ecological assessment.

    Treesearch

    Keith M. Reynolds

    1999-01-01

    The USDA Forest Service Pacific Northwest Research Station in Corvallis, Oregon, has developed the ecosystem management decision support (EMDS) system. The system integrates the logical formalism of knowledge-based reasoning into a geographic information system (GIS) environment to provide decision support for ecological landscape assessment and evaluation. The...

  15. Transformational Learning and Human Resource Development: Advances toward a Knowledge Based Society through Humor

    ERIC Educational Resources Information Center

    Parke, Joanne

    2004-01-01

    A common thread within a growing globalism is the creation of an emerging knowledge-based workforce. This paper discusses a message, supported by adult education theory, that is beginning to manifest itself in human resource development and a growing globalism steeped in communication and information. Theoretical implications are reviewed…

  16. Application of knowledge-based vision to closed-loop control of the injection molding process

    NASA Astrophysics Data System (ADS)

    Marsh, Robert; Stamp, R. J.; Hill, T. M.

    1997-10-01

    An investigation is under way to develop a control system for an industrial process that uses a vision system as a sensor. The research aims to improve product quality in commercial injection molding systems, and a significant enhancement has been achieved in applying visually based inspection techniques to component quality. The work investigates and employs inspection methods that use knowledge-based machine vision. The application of such techniques in this context is comprehensive, extending from object-oriented analysis, design, and programming of the inspection program to rule-based reasoning, image interpretation, vision system diagnostics, component diagnostics, and molding machine control. In this way, knowledge-handling methods are exploited wherever they prove beneficial. The vision knowledge base contains information on the procedures required to identify component surface defects, and a collection of image processing and pattern recognition algorithms is applied selectively. Once a component has been inspected, defects are related to the process variables that affect its quality, and a second knowledge base is used to effect a control action at the molding machine. Feedback from other machine sensors is also used to direct the control procedure. Results from the knowledge-based vision inspection system are encouraging: they indicate that rapid and effective fault detection and analysis is feasible, as is verification of system integrity.
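
    The second knowledge base described above, relating detected defects to process variables, can be sketched as a small rule table. This is an invented illustration, not the paper's system: the defect names, sensor conditions, process variables, and step sizes are all hypothetical.

```python
# Illustrative sketch (not the paper's system): a rule base relating
# component surface defects in injection molding to a proposed corrective
# control action, with machine-sensor feedback guarding each rule.

RULES = [
    # (defect, condition on machine sensors, variable to adjust, change)
    ("short_shot", lambda s: s["melt_temp"] < 230, "melt_temp", +10),
    ("short_shot", lambda s: s["melt_temp"] >= 230, "injection_pressure", +5),
    ("flash",      lambda s: True, "clamp_force", +50),
    ("sink_mark",  lambda s: True, "holding_pressure", +5),
]

def control_action(defect, sensors):
    """Return the first applicable rule's corrective action, or None
    when no rule covers the detected defect."""
    for d, cond, var, delta in RULES:
        if d == defect and cond(sensors):
            return var, delta
    return None

sensors = {"melt_temp": 225}
print(control_action("short_shot", sensors))
```

    Ordering the rules so that sensor feedback selects between alternative causes (here, low melt temperature versus insufficient pressure) mirrors the abstract's use of machine sensors to direct the control procedure.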

  18. Elaborating the Grounding of the Knowledge Base on Language and Learning for Preservice Literacy Teachers

    ERIC Educational Resources Information Center

    Piazza, Carolyn L.; Wallat, Cynthia

    2006-01-01

    This purpose of this article is to present a qualitative inquiry into the genesis of sociolinguistics and the contributions of eight sociolinguistic pioneers. This inquiry, based on an historical interpretation of events, reformulates the concept of validation as the social construction of a scientific knowledge base, and explicates three themes…

  19. The Relationship between Agriculture Knowledge Bases for Teaching and Sources of Knowledge

    ERIC Educational Resources Information Center

    Rice, Amber H.; Kitchel, Tracy

    2015-01-01

    The purpose of this study was to describe the agriculture knowledge bases for teaching of agriculture teachers and to see if a relationship existed between years of teaching experience, sources of knowledge, and development of pedagogical content knowledge (PCK), using quantitative methods. A model of PCK from mathematics was utilized as a…

  20. A knowledge-based object recognition system for applications in the space station

    NASA Astrophysics Data System (ADS)

    Dhawan, Atam P.

    1988-02-01

    A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching to recognize 3D objects in a two-dimensional (2D) image and interpret the 3D scene. At present, the preprocessing, low-level preliminary segmentation, rule-based segmentation, and feature extraction are complete, as is the data structure of the primitive viewing knowledge base (PVKB). Algorithms and programs based on attribute-tree matching for decomposing the segmented data into valid primitives were developed. Frame-based structural and relational descriptions of some objects were created and stored in a knowledge base; this knowledge base of frame-based descriptions was developed on the MICROVAX-AI microcomputer in a LISP environment. Both a simulated 3D scene of simple non-overlapping objects and real camera images of 3D objects of low complexity have been successfully interpreted.
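
    The frame-based matching of segmented primitives against stored object descriptions can be sketched in miniature. This is only a toy analogue of the approach in the abstract: the frames, attribute names, and the fraction-of-matching-attributes score are invented for illustration.

```python
# Rough sketch: match a segmented primitive against frame-based object
# descriptions by attribute agreement. Frame contents are hypothetical.

OBJECT_FRAMES = {
    "strut": {"shape": "cylinder", "elongation": "high", "surface": "smooth"},
    "panel": {"shape": "slab",     "elongation": "low",  "surface": "smooth"},
    "bolt":  {"shape": "cylinder", "elongation": "low",  "surface": "threaded"},
}

def match(primitive, frames=OBJECT_FRAMES):
    """Score each frame by the fraction of attributes it shares with the
    primitive and return the best-scoring object label with its score."""
    def score(frame):
        hits = sum(1 for k, v in frame.items() if primitive.get(k) == v)
        return hits / len(frame)
    best = max(frames, key=lambda name: score(frames[name]))
    return best, score(frames[best])

primitive = {"shape": "cylinder", "elongation": "high", "surface": "smooth"}
print(match(primitive))
```

    A full system would extend the score with relational constraints between primitives, which is what the hierarchical relational matching in the abstract supplies.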

  1. Pedagogical Knowledge Base Underlying EFL Teachers' Provision of Oral Corrective Feedback in Grammar Instruction

    ERIC Educational Resources Information Center

    Atai, Mahmood Reza; Shafiee, Zahra

    2017-01-01

    The present study investigated the pedagogical knowledge base underlying EFL teachers' provision of oral corrective feedback in grammar instruction. More specifically, we explored the consistent thought patterns guiding the decisions of three Iranian teachers regarding oral corrective feedback on grammatical errors. We also examined the potential…

  2. Students' Refinement of Knowledge during the Development of Knowledge Bases for Expert Systems.

    ERIC Educational Resources Information Center

    Lippert, Renate; Finley, Fred

    The refinement of the cognitive knowledge base was studied through exploration of the transition from novice to expert and the use of an instructional strategy called novice knowledge engineering. Six college freshmen, who were enrolled in an honors physics course, used an expert system to create questions, decisions, rules, and explanations…

  5. The Feasibility and Effectiveness of a Pilot Resident-Organized and -Led Knowledge Base Review

    ERIC Educational Resources Information Center

    Vautrot, Victor J.; Festin, Fe E.; Bauer, Mark S.

    2010-01-01

    Objective: The Accreditation Council for Graduate Medical Education (ACGME) requires a sufficient medical knowledge base as one of the six core competencies in residency training. The authors judged that an annual "short-course" review of medical knowledge would be a useful adjunct to standard seminar and rotation teaching, and that a…

  6. Small Knowledge-Based Systems in Education and Training: Something New Under the Sun.

    ERIC Educational Resources Information Center

    Wilson, Brent G.; Welsh, Jack R.

    1986-01-01

    Discusses artificial intelligence, robotics, natural language processing, and expert or knowledge-based systems research; examines two large expert systems, MYCIN and XCON; and reviews the resources required to build large expert systems and affordable smaller systems (intelligent job aids) for training. Expert system vendors and products are…

  7. A Comparison of Books and Hypermedia for Knowledge-based Sports Coaching.

    ERIC Educational Resources Information Center

    Vickers, Joan N.; Gaines, Brian R.

    1988-01-01

    Summarizes and illustrates the knowledge-based approach to instructional material design. A series of sports coaching handbooks and hypermedia presentations of the same material are described and the different instantiations of the knowledge and training structures are compared. Figures show knowledge structures for badminton and the architecture…

  8. Adding Learning to Knowledge-Based Systems: Taking the "Artificial" Out of AI

    Treesearch

    Daniel L. Schmoldt

    1997-01-01

    Both knowledge-based system (KBS) development and maintenance require time-consuming analysis of domain knowledge. Where example cases exist, a KBS can be built, and later updated, by incorporating learning capabilities into its architecture; this applies to both supervised and unsupervised learning scenarios. In this paper, the important issues for learning systems-...

  10. The Unintended Consequences of a Standardized Knowledge Base in Advancing Educational Leadership Preparation

    ERIC Educational Resources Information Center

    English, Fenwick W.

    2006-01-01

    Background: The quest for a "knowledge base" in educational administration resulting in the construction of national standards for preparing school leaders has brought with it an unexpected downside. Purpose: It is argued that instead of raising the bar for preparing educational leaders, the standards have lowered them, first by embracing only a…

  11. Sensitivity analysis of land unit suitability for conservation using a knowledge-based system.

    PubMed

    Humphries, Hope C; Bourgeron, Patrick S; Reynolds, Keith M

    2010-08-01

The availability of spatially continuous data layers can have a strong impact on the selection of land units for conservation purposes. The suitability of ecological conditions for sustaining the targets of conservation is an important consideration in evaluating candidate conservation sites. We constructed two fuzzy logic-based knowledge bases to determine the conservation suitability of land units in the interior Columbia River basin using NetWeaver software in the Ecosystem Management Decision Support application framework. Our objective was to assess the sensitivity of suitability ratings, derived from evaluating the knowledge bases, to fuzzy logic function parameters and to the removal of data layers (land use condition, road density, disturbance regime change index, vegetation change index, land unit size, cover type size, and cover type change index). The amount and geographic distribution of suitable land polygons were most strongly altered by the removal of land use condition, road density, and land polygon size. Removal of land use condition changed suitability primarily on private or intensively used public land. Removal of either road density or land polygon size most strongly affected suitability on higher-elevation US Forest Service land containing small-area biophysical environments. The data layers with the greatest influence differed in rank between the two knowledge bases. Our results reinforce the importance of including both biophysical and socio-economic attributes when determining the suitability of land units for conservation. The sensitivity tests provided information about knowledge base structuring and parameterization, as well as priorities for future data needs.
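The layer-removal sensitivity test described above can be sketched as follows: each data layer maps to a fuzzy membership score, the scores are AND-combined (here with the minimum operator), and dropping a layer shows how much it constrained the rating. The layer names, ramp breakpoints, and example values are illustrative assumptions, not the study's actual NetWeaver parameters.

```python
# Hedged sketch of a fuzzy-logic suitability rating with layer-removal
# sensitivity testing. All breakpoints and layer names are hypothetical.

def ramp(x, lo, hi):
    """Fuzzy membership rising linearly from 0 at `lo` to 1 at `hi`."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

LAYERS = {
    "road_density": lambda v: 1.0 - ramp(v, 0.5, 2.0),  # fewer roads = better
    "land_use":     lambda v: ramp(v, 0.2, 0.8),        # condition index
    "polygon_size": lambda v: ramp(v, 100.0, 1000.0),   # hectares
}

def suitability(unit, exclude=()):
    """AND-combine layer memberships (minimum operator); `exclude`
    drops layers to test sensitivity of the rating."""
    scores = [fn(unit[name]) for name, fn in LAYERS.items()
              if name not in exclude]
    return min(scores) if scores else 1.0

unit = {"road_density": 1.7, "land_use": 0.5, "polygon_size": 550.0}
full = suitability(unit)
without_roads = suitability(unit, exclude=("road_density",))
print(full, without_roads)
```

Because the minimum operator lets the worst layer dominate, removing a limiting layer (road density here) can raise the rating sharply, which is the kind of shift the sensitivity analysis measures.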

  12. Appropriating Professionalism: Restructuring the Official Knowledge Base of England's "Modernised" Teaching Profession

    ERIC Educational Resources Information Center

    Beck, John

    2009-01-01

    The present paper examines efforts by government and government agencies in England to prescribe and control the knowledge base of a teaching profession that has, under successive New Labour administrations since 1997, been subjected to "modernisation". A theoretical framework drawn from aspects of the work of Basil Bernstein, and of Rob…

  13. The Knowledge Base of Non-Native English-Speaking Teachers: Perspectives of Teachers and Administrators

    ERIC Educational Resources Information Center

    Zhang, Fengjuan; Zhan, Ju

    2014-01-01

    This study explores the knowledge base of non-native English-speaking teachers (NNESTs) working in the Canadian English as a second language (ESL) context. By examining NNESTs' experiences in seeking employment and teaching ESL in Canada, and investigating ESL program administrators' perceptions and hiring practices in relation to NNESTs, it…

  14. Building a Knowledge Base for Teacher Education: An Experience in K-8 Mathematics Teacher Preparation

    ERIC Educational Resources Information Center

    Hiebert, James; Morris, Anne K.

    2009-01-01

    Consistent with the theme of this issue, we describe the details of one continuing effort to build knowledge for teacher education. We argue that building a useful knowledge base requires attention to the processes used to generate, record, and vet knowledge. By using 4 features of knowledge-building systems we identified in the introductory…

  15. The Educational Media and Technology Profession: An Agenda for Research and Assessment of the Knowledge Base.

    ERIC Educational Resources Information Center

    Molenda, Michael; Olive, J. Fred III

This report is the first effort to stake out the territory to be included in research on the profession of educational media and technology (em/t), and to explore the existing knowledge base within that territory. It comprises a set of questions, the answers to which shed light on who is in the profession, where it is going, and what useful…

  16. Improving Student Teachers' Knowledge-Base in Language Education through Critical Reading

    ERIC Educational Resources Information Center

    Mulumba, Mathias Bwanika

    2016-01-01

    The emergence of the digital era is redefining education and the pedagogical processes in an unpredictable manner. In the midst of the increased availability of print and online resources, the twenty-first century language teacher educator expects her (or his) student teachers to be reading beings if they are to improve their knowledge-base in…

  17. L2 Teachers' Pedagogic Knowledge Base: A Comparison between Experienced and Less Experienced Practitioners

    ERIC Educational Resources Information Center

    Akbari, Ramin; Tajik, Leila

    2009-01-01

    Second language teacher education community has become increasingly interested in the pedagogical knowledge base of teachers as a window into practitioners' mental lives. The present study was conducted to document likely differences between the pedagogic thoughts of experienced and less experienced teachers. Eight teachers participated in the…

  18. English Language Teacher Educators' Pedagogical Knowledge Base: The Macro and Micro Categories

    ERIC Educational Resources Information Center

    Moradkhani, Shahab; Akbari, Ramin; Samar, Reza Ghafar; Kiany, Gholam Reza

    2013-01-01

    The aim of this study was to determine the major categories of English language teacher educators' pedagogical knowledge base. To this end, semi-structured interviews were conducted with 5 teachers, teacher educators, and university professors (15 participants in total). The results of data analysis indicated that teacher educators' pedagogical…

  19. Using CLIPS in the domain of knowledge-based massively parallel programming

    NASA Technical Reports Server (NTRS)

    Dvorak, Jiri J.

    1994-01-01

The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency with respect to parallelism exploitation. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about the application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with the C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering is discussed.
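The skeleton-selection step described above can be sketched as a walk over a tree of algorithmic skeletons, returning the deepest (most specific) skeleton whose requirements are satisfied by the user's abstract specification. The skeleton names and property sets are assumptions for illustration; the PDE itself represents skeletons as COOL objects and drives selection with CLIPS rules.

```python
# Illustrative sketch of selecting a parallel algorithmic skeleton from
# a tree hierarchy. Names and matching criteria are hypothetical.

class Skeleton:
    def __init__(self, name, requires, children=()):
        self.name = name
        self.requires = set(requires)   # properties the spec must exhibit
        self.children = list(children)

def select(node, spec_props):
    """Return the deepest skeleton in the hierarchy whose requirements
    are a subset of the specification's properties, or None."""
    if not node.requires <= spec_props:
        return None
    best = node
    for child in node.children:
        hit = select(child, spec_props)
        if hit is not None:
            best = hit          # prefer the more specific match
    return best

hierarchy = Skeleton("data_parallel", {"elementwise"}, [
    Skeleton("map", {"elementwise", "independent"}),
    Skeleton("stencil", {"elementwise", "neighbor_access"}),
])

chosen = select(hierarchy, {"elementwise", "independent"})
print(chosen.name)
```

The tree ordering from general to specific mirrors the abstract's description of skeletons "in a tree hierarchy": a rule engine like CLIPS would fire refinement rules instead of the explicit recursion used here.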

  20. The Spread of Contingent Work in the Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Szabo, Katalin; Negyesi, Aron

    2005-01-01

Permanent employment, typical of industrial societies and bolstered by numerous social guarantees, has been declining in the past 2 decades. There has been a steady expansion of various forms of contingent work. The decomposition of traditional work is a logical consequence of the characteristic patterns of the knowledge-based economy. According…