Science.gov

Sample records for lcg mcdb-a knowledgebase

  1. LcgCAF: CDF access method to LCG resources

    NASA Astrophysics Data System (ADS)

    Compostella, Gabriele; Bauce, Matteo; Pagan Griso, Simone; Lucchesi, Donatella; Sgaravatto, Massimo; Cecchi, Marco

    2011-12-01

    Up to early 2011, the CDF collaboration had collected more than 8 fb-1 of data from pbar-p collisions at a center-of-mass energy of 1.96 TeV delivered by the Tevatron collider at Fermilab. Second-generation physics measurements, like precision determinations of top-quark properties or searches for the Standard Model Higgs, require increasing computing power for data analysis and event simulation. Instead of expanding its set of dedicated Condor-based analysis farms, CDF moved to Grid resources. While in the context of OSG this transition was performed using Condor glideins and keeping CDF's custom middleware software almost intact, in LCG a complete rewrite of the experiment's submission and monitoring tools was carried out, taking full advantage of the features offered by the gLite Workload Management System (WMS). This led to the development of a new computing facility called LcgCAF that CDF collaborators are using to exploit Grid resources in Europe in a transparent way. Given the opportunistic usage of the available resources, it is of crucial importance for CDF to maximize job efficiency from submission to output retrieval. This work describes how an experimental resubmission feature implemented in the WMS was tested in LcgCAF with the aim of lowering the overall execution time of a typical CDF job.
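
    The abstract does not show LcgCAF's actual submission code; as a rough illustration only, the sketch below builds a gLite-style JDL job description using the standard WMS retry attributes that govern automatic resubmission. The helper name make_jdl and the attribute values are assumptions for illustration, not LcgCAF's configuration.

```python
def make_jdl(executable, shallow_retries=3, deep_retries=0):
    """Return a gLite-style JDL string with WMS resubmission attributes.

    ShallowRetryCount controls resubmission of jobs that never started running;
    RetryCount controls resubmission after a runtime failure.
    The values used here are illustrative only.
    """
    return "\n".join([
        "[",
        f'  Executable = "{executable}";',
        '  StdOutput = "job.out";',
        '  StdError = "job.err";',
        f"  ShallowRetryCount = {shallow_retries};",
        f"  RetryCount = {deep_retries};",
        "]",
    ])

print(make_jdl("cdf_analysis.sh"))
```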

  2. WHALE, a management tool for Tier-2 LCG sites

    NASA Astrophysics Data System (ADS)

    Barone, L. M.; Organtini, G.; Talamo, I. G.

    2012-12-01

    The LCG (Worldwide LHC Computing Grid) is a grid-based, hierarchical, distributed computing facility, composed of more than 140 computing centers organized in four tiers by size and services offered. Every site, although independent in many technical choices, has to provide services with a well-defined set of interfaces. For this reason, different LCG sites frequently need to manage very similar situations, such as job behaviour on the batch system, dataset transfers between sites, operating system and experiment software installation and configuration, and monitoring of services. In this context we created WHALE (WHALE Handles Administration in an LCG Environment), a software tool currently used at the T2_IT_Rome site, an LCG Tier-2 for the CMS experiment. WHALE is a generic, site-independent tool written in Python: it allows administrators to interact in a uniform and coherent way with several subsystems using a high-level syntax which hides the specific commands. The architecture of WHALE is based on the plugin concept and on the possibility of connecting the output of a plugin to the input of the next one, in a pipe-like system, giving the administrator the possibility of building complex functions by combining simpler ones. The core of WHALE just handles the plugin orchestration, while even the basic functions (e.g. the WHALE activity logging) are performed by plugins, giving the capability to tune and possibly modify every component of the system. WHALE already provides many plugins useful for an LCG site and some more for a Tier-2 of the CMS experiment, especially in the field of job management, dataset transfer and analysis of performance results and availability tests (e.g. Nagios tests, SAM tests). Thanks to its architecture and the provided plugins, WHALE makes it easy to perform tasks that, even if logically simple, are technically complex or tedious, such as closing all the worker nodes with a job-failure rate greater than a given threshold. Finally, thanks to the
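
    The plugin-pipeline idea described above can be sketched in a few lines of Python. This is a hypothetical illustration of the concept, not WHALE's actual code; the class and plugin names are invented.

```python
class Plugin:
    """Base class: a plugin consumes the previous plugin's output and returns its own."""
    def run(self, records):
        raise NotImplementedError


class FailedJobs(Plugin):
    """Emit worker nodes whose job-failure rate exceeds a threshold."""
    def __init__(self, failure_rates, threshold=0.5):
        self.failure_rates, self.threshold = failure_rates, threshold

    def run(self, _):
        return [node for node, rate in self.failure_rates.items() if rate > self.threshold]


class Logger(Plugin):
    """Log the records flowing through the pipe and pass them on unchanged."""
    def run(self, records):
        for record in records:
            print("would close worker node:", record)
        return records


def run_pipeline(plugins, records=None):
    """Connect each plugin's output to the next plugin's input, pipe-style."""
    for plugin in plugins:
        records = plugin.run(records)
    return records


run_pipeline([FailedJobs({"wn01": 0.8, "wn02": 0.1}), Logger()])
```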

  3. LCG/AA build infrastructure

    NASA Astrophysics Data System (ADS)

    Hodgkins, Alex Liam; Diez, Victor; Hegner, Benedikt

    2012-12-01

    The Software Process & Infrastructure (SPI) project provides a build infrastructure for regular integration testing and release of the LCG Applications Area software stack. In the past, regular builds were provided using a system which grew constantly to include more features, such as server-client communication, long-term build history and a summary web interface using present-day web technologies. However, the ad hoc style of software development resulted in a setup that is hard to monitor, inflexible and difficult to expand. The new version of the infrastructure is based on the Django Python framework, which allows for a structured and modular design, facilitating later additions. Transparency in the workflows and ease of monitoring have been among the priorities in the design. Formerly missing functionality such as on-demand builds or release triggering will support the transition to a more agile development process.

  4. The Knowledgebase Kibbutz

    ERIC Educational Resources Information Center

    Singer, Ross

    2008-01-01

    As libraries' collections increasingly go digital, so too does their dependence on knowledgebases to access and maintain these electronic holdings. Somewhat different from other library-based knowledge management systems (catalogs, institutional repositories, etc.), the data found in the knowledgebases of link resolvers or electronic resource…

  5. Space Environmental Effects Knowledgebase

    NASA Technical Reports Server (NTRS)

    Wood, B. E.

    2007-01-01

    This report describes the results of a program entitled Space Environmental Effects Knowledgebase that was funded through a NASA NRA (NRA8-31) and monitored by personnel in the NASA Space Environmental Effects (SEE) Program. The NASA project number was 02029. The Satellite Contamination and Materials Outgassing Knowledgebase (SCMOK) was created as a part of the earlier NRA8-20. One of the previous tasks, and part of the previously developed knowledgebase, was to accumulate data from facilities using QCMs to measure the outgassing of satellite materials. The main objective of the current program was to increase the number of material outgassing datasets from 250 to approximately 500. As a part of this effort, a round-robin series of materials outgassing measurements was also executed, allowing comparison of the results for the same materials tested in 10 different test facilities. Other program tasks included obtaining datasets or information packages for 1) optical effects of contaminants on optical surfaces, thermal radiators, and sensor systems and 2) space environmental effects data, and incorporating these data into the already existing NASA/SEE Knowledgebase.

  6. Knowledge-Based Abstracting.

    ERIC Educational Resources Information Center

    Black, William J.

    1990-01-01

    Discussion of automatic abstracting of technical papers focuses on a knowledge-based method that uses two sets of rules. Topics discussed include anaphora; text structure and discourse; abstracting techniques, including the keyword method and the indicator phrase method; and tools for text skimming. (27 references) (LRW)
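
    As a toy sketch of the indicator-phrase idea mentioned above (an illustration of the general technique, not the rule sets described in the article), a sentence extractor can score sentences by the presence of stock phrases:

```python
INDICATOR_PHRASES = ["in this paper", "we present", "we propose", "results show"]

def score(sentence):
    """Count indicator phrases occurring in a sentence (case-insensitive)."""
    lowered = sentence.lower()
    return sum(phrase in lowered for phrase in INDICATOR_PHRASES)

def extract_abstract(sentences, k=2):
    """Keep the k highest-scoring sentences, preserving their original order."""
    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)[:k]
    return [sentences[i] for i in sorted(ranked)]
```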

  7. [Knowledgebases in postgenomic molecular biology].

    PubMed

    Lisitsa, A V; Shilov, B V; Evdokimov, P A; Gusev, S A

    2010-01-01

    Knowledgebases can become an effective tool for substantially raising the quality of information retrieval in molecular biology, promoting the development of new methods of education and the forecasting of biomedical R&D. Knowledge-based technologies should induce a "paradigm shift" in the life sciences through the integrative focusing of research groups on the challenges of the postgenomic era. This paper discusses the concept of a knowledgebase that exploits web usage mining to personalize the access of molecular biologists to Internet resources. PMID:21328913

  8. The Reactome pathway Knowledgebase.

    PubMed

    Fabregat, Antonio; Sidiropoulos, Konstantinos; Garapati, Phani; Gillespie, Marc; Hausmann, Kerstin; Haw, Robin; Jassal, Bijay; Jupe, Steven; Korninger, Florian; McKay, Sheldon; Matthews, Lisa; May, Bruce; Milacic, Marija; Rothfels, Karen; Shamovsky, Veronica; Webber, Marissa; Weiser, Joel; Williams, Mark; Wu, Guanming; Stein, Lincoln; Hermjakob, Henning; D'Eustachio, Peter

    2016-01-01

    The Reactome Knowledgebase (www.reactome.org) provides molecular details of signal transduction, transport, DNA replication, metabolism and other cellular processes as an ordered network of molecular transformations, an extended version of a classic metabolic map, in a single consistent data model. Reactome functions both as an archive of biological processes and as a tool for discovering unexpected functional relationships in data such as gene expression pattern surveys or somatic mutation catalogues from tumour cells. Over the last two years we redeveloped major components of the Reactome web interface to improve usability, responsiveness and data visualization. A new pathway diagram viewer provides a faster, clearer interface and smooth zooming from the entire reaction network to the details of individual reactions. Tool performance for analysis of user datasets has been substantially improved, now generating detailed results for genome-wide expression datasets within seconds. The analysis module can now be accessed through a RESTful interface, facilitating its inclusion in third party applications. A new overview module allows the visualization of analysis results on a genome-wide Reactome pathway hierarchy using a single screen page. The search interface now provides auto-completion as well as a faceted search to narrow result lists efficiently. PMID:26656494
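
    The RESTful analysis module can be exercised, for example, from Python. The sketch below assumes the Reactome AnalysisService endpoint path and the 'pathways', 'stId' and 'name' fields of its JSON response; these are assumptions about the public interface and may have changed since this 2016 description.

```python
import requests

# Submit a small identifier list to the Reactome analysis service (assumed endpoint).
genes = "TP53\nBRCA1\nEGFR"
resp = requests.post(
    "https://reactome.org/AnalysisService/identifiers/projection",
    data=genes,
    headers={"Content-Type": "text/plain"},
    timeout=30,
)
resp.raise_for_status()

# Print the first few pathways hit by the submitted identifiers.
for pathway in resp.json()["pathways"][:5]:
    print(pathway["stId"], pathway["name"])
```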

  9. ECOTOX knowledgebase: Search features and customized reports

    EPA Science Inventory

    The ECOTOXicology knowledgebase (ECOTOX) is a comprehensive, publicly available knowledgebase developed and maintained by ORD/NHEERL. It is used for environmental toxicity data on aquatic life, terrestrial plants and wildlife. ECOTOX has the capability to refine and filter search...

  10. Migration to the GLUE 2.0 information schema in the LCG/EGEE/EGI production Grid

    NASA Astrophysics Data System (ADS)

    Burke, Stephen; Field, Laurence; Horat, David

    2011-12-01

    The GLUE information schema has been in use in the LCG/EGEE production Grid since the first version was released in 2002. In 2008 a major redesign of GLUE, version 2.0, was defined in the context of the Open Grid Forum. The implementation of the publication and use of the new schema is a complex process which needs to be carefully managed to avoid any disruption to the production service, especially in the light of the end of the EGEE project and the transition to the new middleware and operational structures in EGI. In this paper we discuss the LDAP rendering of the schema, the upgrading of the Grid information system to allow both schemas to be used in parallel, the design and rollout of information providers, and the plans for migrating client software which uses the schema information. In particular we consider the implications for the specific requirements of the LCG project, especially regarding storage systems, accounting and monitoring.
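
    In the LDAP rendering, GLUE 2.0 objects are published under a separate base DN so that both schemas can be queried in parallel. A minimal query sketch using the python ldap3 package is shown below; it assumes the conventional top-level BDII port 2170, the GLUE 2.0 base "o=glue", and a hypothetical host name.

```python
from ldap3 import Server, Connection, ALL

# Hypothetical top-level BDII host; 2170 is the conventional BDII port.
server = Server("top-bdii.example.org", port=2170, get_info=ALL)
conn = Connection(server, auto_bind=True)  # anonymous bind, as is usual for BDII queries

# GLUE 2.0 entries live under "o=glue"; GLUE 1.x entries live under "mds-vo-name=local,o=grid".
conn.search(
    search_base="o=glue",
    search_filter="(objectClass=GLUE2ComputingService)",
    attributes=["GLUE2ServiceID"],
)
for entry in conn.entries:
    print(entry)
```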

  11. The nightly build and test system for LCG AA and LHCb software

    NASA Astrophysics Data System (ADS)

    Kruzelecki, Karol; Roiser, Stefan; Degaudenzi, Hubert

    2010-04-01

    The core software stack, both from the LCG Application Area and LHCb, consists of more than 25 C++/Fortran/Python projects built for about 20 different configurations on Linux, Windows and MacOSX. To these projects one can also add about 70 external software packages (Boost, Python, Qt, CLHEP, ...) which also have to be built for the same configurations. In order to reduce the time of the development cycle and assure the quality, a framework has been developed for the daily (in fact nightly) build and test of the software. Performing the build and the tests on several configurations and platforms increases the efficiency of the unit and integration tests. Main features: - flexible and fine-grained setup (full or partial build) through a web interface; - possibility to build several "slots" with different configurations; - precise and highly granular reports on a web server; - support for CMT projects (but not only) with their cross-dependencies; - scalable client-server architecture for the control machine and its build machines; - copy of the results to a common place to allow an early view of the software stack. The nightly build framework is written in Python for portability and is easily extensible to accommodate new build procedures.
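
    As a hypothetical sketch of the "slot" idea only (not the framework's real configuration format), a slot can be modelled as a set of projects with cross-dependencies from which a build order is derived. Project and platform names below are illustrative.

```python
# Hypothetical slot description; project and platform names are illustrative.
slot = {
    "name": "nightly-head",
    "platforms": ["x86_64-slc5-gcc43-opt", "i686-winxp-vc9-dbg"],
    "projects": [
        {"name": "Gaudi", "version": "HEAD"},
        {"name": "LHCb", "version": "HEAD", "depends_on": ["Gaudi"]},
        {"name": "DaVinci", "version": "HEAD", "depends_on": ["LHCb"]},
    ],
}

def build_order(projects):
    """Order projects so that every project is built after its declared dependencies."""
    by_name = {p["name"]: p for p in projects}
    ordered, seen = [], set()

    def visit(project):
        for dep in project.get("depends_on", []):
            visit(by_name[dep])
        if project["name"] not in seen:
            seen.add(project["name"])
            ordered.append(project["name"])

    for project in projects:
        visit(project)
    return ordered

print(build_order(slot["projects"]))  # ['Gaudi', 'LHCb', 'DaVinci']
```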

  12. Experiences with the GLUE information schema in the LCG/EGEE production grid

    NASA Astrophysics Data System (ADS)

    Burke, S.; Andreozzi, S.; Field, L.

    2008-07-01

    A common information schema for the description of Grid resources and services is an essential requirement for interoperating Grid infrastructures, and its implementation interacts with every Grid component. In this context, the GLUE information schema was originally defined in 2002 as a joint project between the European DataGrid and DataTAG projects and the US iVDGL. The schema has major components to describe Computing and Storage Elements, and also generic Service and Site information. It has been used extensively in the LCG/EGEE Grid, for job submission, data management, service discovery and monitoring. In this paper we present the experience gained over the last five years, highlighting both successes and problems. In particular, we consider the importance of having a clear definition of schema attributes; the construction of standard information providers and difficulties encountered in mapping an abstract schema to diverse real systems; the configuration of publication in a way which suits system managers and the varying characteristics of Grid sites; the validation of published information; the ways in which information can be used (and misused) by Grid services and users; and issues related to managing schema upgrades in a large distributed system.

  13. Knowledge-based tracking algorithm

    NASA Astrophysics Data System (ADS)

    Corbeil, Allan F.; Hawkins, Linda J.; Gilgallon, Paul F.

    1990-10-01

    This paper describes the Knowledge-Based Tracking (KBT) algorithm, for which a real-time flight test demonstration was recently conducted at Rome Air Development Center (RADC). In KBT processing, the radar signal in each resolution cell is thresholded at a lower than normal setting to detect low-RCS targets. This lower threshold produces a larger than normal false alarm rate. Therefore, additional signal processing, including spectral filtering, CFAR and knowledge-based acceptance testing, is performed to eliminate some of the false alarms. TSC's knowledge-based Track-Before-Detect (TBD) algorithm is then applied to the data from each azimuth sector to detect target tracks. In this algorithm, tentative track templates are formed for each threshold crossing and knowledge-based association rules are applied to the range, Doppler, and azimuth measurements from successive scans. Lastly, an M-associations-out-of-N-scans rule is used to declare a detection. This scan-to-scan integration enhances the probability of target detection while maintaining an acceptably low output false alarm rate. For a real-time demonstration of the KBT algorithm, the L-band radar in the Surveillance Laboratory (SL) at RADC was used to illuminate a small Cessna 310 test aircraft. The received radar signal was digitized and processed by an ST-100 Array Processor and VAX computer network in the lab. The ST-100 performed all of the radar signal processing functions, including Moving Target Indicator (MTI) pulse cancelling, FFT Doppler filtering, and CFAR detection. The VAX computers performed the remaining range-Doppler clustering, beamsplitting and TBD processing functions. The KBT algorithm provided a 9.5 dB improvement relative to single-scan performance with a nominal real-time delay of less than one second between illumination and display.
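
    The scan-to-scan integration step amounts to an M-out-of-N test over successive scans. A minimal sketch of that rule is shown below; the parameter values are invented for illustration and are not those of the RADC demonstration.

```python
def confirm_detection(hits, m=3, n=5):
    """Declare a detection if at least m of any n consecutive scans associate with the track.

    `hits` is a per-scan sequence of booleans: True if the tentative track template
    was associated with a threshold crossing on that scan.
    """
    if len(hits) < n:
        return False
    return any(sum(hits[i:i + n]) >= m for i in range(len(hits) - n + 1))

# Example: associations on scans 3, 4 and 6 satisfy a 3-of-5 rule.
print(confirm_detection([False, False, True, True, False, True]))  # True
```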

  14. Knowledge-based nursing diagnosis

    NASA Astrophysics Data System (ADS)

    Roy, Claudette; Hay, D. Robert

    1991-03-01

    Nursing diagnosis is an integral part of the nursing process and determines the interventions leading to outcomes for which the nurse is accountable. Diagnosis under the time constraints of modern nursing can benefit from a computer assist. A knowledge-based engineering approach was developed to address these problems. A number of problems addressed during system design to make the system practical extended beyond the capture of knowledge. The issues involved in implementing a professional knowledge base in a clinical setting are discussed. System functions, structure, interfaces, the health care environment, and terminology and taxonomy are discussed. An integrated system concept from assessment through intervention and evaluation is outlined.

  15. Cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward A.; Buchanan, Bruce G.

    1988-01-01

    This final report covers work performed under Contract NCC2-220 between NASA Ames Research Center and the Knowledge Systems Laboratory, Stanford University. The period of research was from March 1, 1987 to February 29, 1988. Topics covered were as follows: (1) concurrent architectures for knowledge-based systems; (2) methods for the solution of geometric constraint satisfaction problems, and (3) reasoning under uncertainty. The research in concurrent architectures was co-funded by DARPA, as part of that agency's Strategic Computing Program. The research has been in progress since 1985, under DARPA and NASA sponsorship. The research in geometric constraint satisfaction has been done in the context of a particular application, that of determining the 3-D structure of complex protein molecules, using the constraints inferred from NMR measurements.

  16. The Coming of Knowledge-Based Business.

    ERIC Educational Resources Information Center

    Davis, Stan; Botkin, Jim

    1994-01-01

    Economic growth will come from knowledge-based businesses whose "smart" products filter and interpret information. Businesses will come to think of themselves as educators and their customers as learners. (SK)

  17. Distributed, cooperating knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    Some current research in the development and application of distributed, cooperating knowledge-based systems technology is addressed. The focus of the current research is the spacecraft ground operations environment. The underlying hypothesis is that, because of the increasing size, complexity, and cost of planned systems, conventional procedural approaches to the architecture of automated systems will give way to a more comprehensive knowledge-based approach. A hallmark of these future systems will be the integration of multiple knowledge-based agents which understand the operational goals of the system and cooperate with each other and the humans in the loop to attain the goals. The current work includes the development of a reference model for knowledge-base management, the development of a formal model of cooperating knowledge-based agents, the use of a testbed for prototyping and evaluating various knowledge-based concepts, and beginning work on the establishment of an object-oriented model of an intelligent end-to-end (spacecraft to user) system. An introductory discussion of these activities is presented, the major concepts and principles being investigated are highlighted, and their potential use in other application domains is indicated.

  18. Knowledge-Based Network Operations

    NASA Astrophysics Data System (ADS)

    Wu, Chuan-lin; Hung, Chaw-Kwei; Stedry, Steven P.; McClure, James P.; Yeh, Show-Way

    1988-03-01

    An expert system is being implemented to enhance the operability of the Ground Communication Facility (GCF) of the Jet Propulsion Laboratory's (JPL) Deep Space Network (DSN). The DSN is a tracking network for all of JPL's spacecraft plus a subset of spacecraft launched by other NASA centers. A GCF upgrade task is set to replace the current aging GCF system with new, modern equipment capable of a knowledge-based monitoring and control approach. The expert system, implemented using KEE on a Sun workstation, is used to perform network fault management, configuration management, and performance management in real time. Monitoring data are collected from each processor and DSCC every five seconds. In addition to serving as input parameters of the expert system, extracted management information is used to update a management information database. For monitoring and control purposes, the software of each processor is divided into layers following the OSI standard. Each layer is modeled as a finite state machine. A System Management Application Process (SMAP) is implemented at the application layer, which coordinates the layer managers of the same processor and communicates with peer SMAPs of other processors. The expert system will be tuned by augmenting the production rules as operations proceed, and its performance will be measured.
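
    Modelling each protocol layer as a finite state machine can be sketched as follows; the states, events and class name are invented for illustration and are not the actual GCF layer definitions.

```python
class LayerManager:
    """Toy finite state machine for a single protocol layer."""

    TRANSITIONS = {
        ("down", "enable"): "initializing",
        ("initializing", "ready"): "up",
        ("up", "fault"): "degraded",
        ("degraded", "clear"): "up",
        ("up", "disable"): "down",
    }

    def __init__(self):
        self.state = "down"

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state


layer = LayerManager()
for event in ["enable", "ready", "fault", "clear"]:
    print(event, "->", layer.handle(event))
```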

  19. Patient Dependency Knowledge-Based Systems.

    PubMed

    Soliman, F

    1998-10-01

    The ability of Patient Dependency Systems to provide information for staffing decisions and budgetary development has been demonstrated. In addition, they have become powerful tools in modern hospital management. This growing interest in Patient Dependency Systems has renewed calls for their automation. As advances in Information Technology and in particular Knowledge-Based Engineering reach new heights, hospitals can no longer afford to ignore the potential benefits obtainable from developing and implementing Patient Dependency Knowledge-Based Systems. Experience has shown that the vast majority of decisions and rules used in the Patient Dependency method are too complex to capture in the form of a traditional programming language. Furthermore, the conventional Patient Dependency Information System automates the simple and rigid bookkeeping functions. On the other hand Knowledge-Based Systems automate complex decision making and judgmental processes and therefore are the appropriate technology for automating the Patient Dependency method. In this paper a new technique to automate Patient Dependency Systems using knowledge processing is presented. In this approach all Patient Dependency factors have been translated into a set of Decision Rules suitable for use in a Knowledge-Based System. The system is capable of providing the decision-maker with a number of scenarios and their possible outcomes. This paper also presents the development of Patient Dependency Knowledge-Based Systems, which can be used in allocating and evaluating resources and nursing staff in hospitals on the basis of patients' needs. PMID:9809275

  20. Knowledge-based systems and NASA's software support environment

    NASA Technical Reports Server (NTRS)

    Dugan, Tim; Carmody, Cora; Lennington, Kent; Nelson, Bob

    1990-01-01

    A proposed role for knowledge-based systems within NASA's Software Support Environment (SSE) is described. The SSE is chartered to support all software development for the Space Station Freedom Program (SSFP). This includes support for development of knowledge-based systems and the integration of these systems with conventional software systems. In addition to the support of development of knowledge-based systems, various software development functions provided by the SSE will utilize knowledge-based systems technology.

  1. Knowledge-based diagnosis for aerospace systems

    NASA Technical Reports Server (NTRS)

    Atkinson, David J.

    1988-01-01

    The need for automated diagnosis in aerospace systems and the approach of using knowledge-based systems are examined. Research issues in knowledge-based diagnosis which are important for aerospace applications are treated along with a review of recent relevant research developments in Artificial Intelligence. The design and operation of some existing knowledge-based diagnosis systems are described. The systems described and compared include the LES expert system for liquid oxygen loading at NASA Kennedy Space Center, the FAITH diagnosis system developed at the Jet Propulsion Laboratory, the PES procedural expert system developed at SRI International, the CSRL approach developed at Ohio State University, the StarPlan system developed by Ford Aerospace, the IDM integrated diagnostic model, and the DRAPhys diagnostic system developed at NASA Langley Research Center.

  2. Protective Effects of the Launch/Entry Suit (LES) and the Liquid Cooling Garment (LCG) During Re-entry and Landing After Spaceflight

    NASA Technical Reports Server (NTRS)

    Perez, Sondra A.; Charles, John B.; Fortner, G. William; Hurst, Victor, IV; Meck, Janice V.

    2002-01-01

    Heart rate and arterial pressure were measured during shuttle re-entry, landing and initial standing in crewmembers with and without inflated anti-g suits and with and without liquid cooling garments (LCG). Preflight, three measurements were obtained seated, then standing. Prior to and during re-entry, arterial pressure and heart rate were measured every five minutes until wheels stop (WS). Then crewmembers initiated three seated and three standing measurements. In subjects without inflated anti-g suits, SBP and DBP were significantly lower during preflight standing (P = 0.006; P = 0.001, respectively) and at touchdown (TD) (P = 0.001; P = 0.003, respectively); standing SBP was significantly lower after WS. Non-LCG users developed significantly higher heart rates during re-entry (P = 0.029, maxG; P = 0.05, TD; P = 0.02, post-WS seated; P = 0.01, post-WS standing) than LCG users. Our data suggest that the anti-g suit is effective, but the combination of the anti-g suit with the LCG is more effective.

  3. Knowledge-based commodity distribution planning

    NASA Technical Reports Server (NTRS)

    Saks, Victor; Johnson, Ivan

    1994-01-01

    This paper presents an overview of a Decision Support System (DSS) that incorporates Knowledge-Based (KB) and commercial off the shelf (COTS) technology components. The Knowledge-Based Logistics Planning Shell (KBLPS) is a state-of-the-art DSS with an interactive map-oriented graphics user interface and powerful underlying planning algorithms. KBLPS was designed and implemented to support skilled Army logisticians to prepare and evaluate logistics plans rapidly, in order to support corps-level battle scenarios. KBLPS represents a substantial advance in graphical interactive planning tools, with the inclusion of intelligent planning algorithms that provide a powerful adjunct to the planning skills of commodity distribution planners.

  4. The importance of knowledge-based technology.

    PubMed

    Cipriano, Pamela F

    2012-01-01

    Nurse executives are responsible for a workforce that can provide safer and more efficient care in a complex sociotechnical environment. National quality priorities rely on technologies to provide data collection, share information, and leverage analytic capabilities to interpret findings and inform approaches to care that will achieve better outcomes. As a key steward for quality, the nurse executive exercises leadership to provide the infrastructure to build and manage nursing knowledge and instill accountability for following evidence-based practices. These actions contribute to a learning health system where new knowledge is captured as a by-product of care delivery enabled by knowledge-based electronic systems. The learning health system also relies on rigorous scientific evidence embedded into practice at the point of care. The nurse executive optimizes use of knowledge-based technologies, integrated throughout the organization, that have the capacity to help transform health care. PMID:22407206

  5. Knowledge-based system for computer security

    SciTech Connect

    Hunteman, W.J.

    1988-01-01

    The rapid expansion of computer security information and technology has provided little support for the security officer to identify and implement the safeguards needed to secure a computing system. The Department of Energy Center for Computer Security is developing a knowledge-based computer security system to provide expert knowledge to the security officer. The system is policy-based and incorporates a comprehensive list of system attack scenarios and safeguards that implement the required policy while defending against the attacks. 10 figs.

  6. UniProt: the Universal Protein knowledgebase

    PubMed Central

    Apweiler, Rolf; Bairoch, Amos; Wu, Cathy H.; Barker, Winona C.; Boeckmann, Brigitte; Ferro, Serenella; Gasteiger, Elisabeth; Huang, Hongzhan; Lopez, Rodrigo; Magrane, Michele; Martin, Maria J.; Natale, Darren A.; O’Donovan, Claire; Redaschi, Nicole; Yeh, Lai-Su L.

    2004-01-01

    To provide the scientific community with a single, centralized, authoritative resource for protein sequences and functional information, the Swiss-Prot, TrEMBL and PIR protein database activities have united to form the Universal Protein Knowledgebase (UniProt) consortium. Our mission is to provide a comprehensive, fully classified, richly and accurately annotated protein sequence knowledgebase, with extensive cross-references and query interfaces. The central database will have two sections, corresponding to the familiar Swiss-Prot (fully manually curated entries) and TrEMBL (enriched with automated classification, annotation and extensive cross-references). For convenient sequence searches, UniProt also provides several non-redundant sequence databases. The UniProt NREF (UniRef) databases provide representative subsets of the knowledgebase suitable for efficient searching. The comprehensive UniProt Archive (UniParc) is updated daily from many public source databases. The UniProt databases can be accessed online (http://www.uniprot.org) or downloaded in several formats (ftp://ftp.uniprot.org/pub). The scientific community is encouraged to submit data for inclusion in UniProt. PMID:14681372

  7. UniProt: the Universal Protein knowledgebase.

    PubMed

    Apweiler, Rolf; Bairoch, Amos; Wu, Cathy H; Barker, Winona C; Boeckmann, Brigitte; Ferro, Serenella; Gasteiger, Elisabeth; Huang, Hongzhan; Lopez, Rodrigo; Magrane, Michele; Martin, Maria J; Natale, Darren A; O'Donovan, Claire; Redaschi, Nicole; Yeh, Lai-Su L

    2004-01-01

    To provide the scientific community with a single, centralized, authoritative resource for protein sequences and functional information, the Swiss-Prot, TrEMBL and PIR protein database activities have united to form the Universal Protein Knowledgebase (UniProt) consortium. Our mission is to provide a comprehensive, fully classified, richly and accurately annotated protein sequence knowledgebase, with extensive cross-references and query interfaces. The central database will have two sections, corresponding to the familiar Swiss-Prot (fully manually curated entries) and TrEMBL (enriched with automated classification, annotation and extensive cross-references). For convenient sequence searches, UniProt also provides several non-redundant sequence databases. The UniProt NREF (UniRef) databases provide representative subsets of the knowledgebase suitable for efficient searching. The comprehensive UniProt Archive (UniParc) is updated daily from many public source databases. The UniProt databases can be accessed online (http://www.uniprot.org) or downloaded in several formats (ftp://ftp.uniprot.org/pub). The scientific community is encouraged to submit data for inclusion in UniProt. PMID:14681372
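
    Programmatic access has evolved since this 2004 description. As an example under that caveat, the sketch below assumes the current UniProtKB REST interface at rest.uniprot.org (which post-dates the article) to fetch one entry in FASTA format.

```python
import requests

# Fetch one UniProtKB entry as FASTA; accession P69905 (human hemoglobin alpha) is an example.
accession = "P69905"
resp = requests.get(f"https://rest.uniprot.org/uniprotkb/{accession}.fasta", timeout=30)
resp.raise_for_status()
print(resp.text)
```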

  8. Knowledge-Based Entrepreneurship in a Boundless Research System

    ERIC Educational Resources Information Center

    Dell'Anno, Davide

    2008-01-01

    International entrepreneurship and knowledge-based entrepreneurship have recently generated considerable academic and non-academic attention. This paper explores the "new" field of knowledge-based entrepreneurship in a boundless research system. Cultural barriers to the development of business opportunities by researchers persist in some academic…

  9. Applying Knowledge-Based Techniques to Software Development.

    ERIC Educational Resources Information Center

    Harandi, Mehdi T.

    1986-01-01

    Reviews overall structure and design principles of a knowledge-based programming support tool, the Knowledge-Based Programming Assistant, which is being developed at University of Illinois Urbana-Champaign. The system's major units (program design, program coding, and intelligent debugging) and additional functions are described. (MBR)

  10. Knowledge-based public health situation awareness

    NASA Astrophysics Data System (ADS)

    Mirhaji, Parsa; Zhang, Jiajie; Srinivasan, Arunkumar; Richesson, Rachel L.; Smith, Jack W.

    2004-09-01

    There have been numerous efforts to create comprehensive databases from multiple sources to monitor the dynamics of public health and, most specifically, to detect potential bioterrorism threats before widespread dissemination. But there is little evidence that these systems are timely and dependable, or that they can reliably distinguish man-made from natural incidents. One must evaluate the value of so-called 'syndromic surveillance systems' along with the costs involved in the design, development, implementation and maintenance of such systems and the costs involved in the investigation of the inevitable false alarms. In this article we will introduce a new perspective on the problem domain, with a shift in paradigm from 'surveillance' toward 'awareness'. As we conceptualize a rather different approach to tackle the problem, we will introduce a different methodology in the application of information science, computer science, cognitive science and human-computer interaction concepts to the design and development of so-called 'public health situation awareness systems'. We will share some of our design and implementation concepts for the prototype system that is under development in the Center for Biosecurity and Public Health Informatics Research at the University of Texas Health Science Center at Houston. The system is based on a knowledgebase containing ontologies, with different layers of abstraction and from multiple domains, that provide the context for information integration, knowledge discovery, interactive data mining, information visualization, information sharing and communications. The modular design of the knowledgebase and its knowledge representation formalism enable incremental evolution of the system from a partial system to a comprehensive knowledgebase of 'public health situation awareness' as it acquires new knowledge through interactions with domain experts or automatic discovery of new knowledge.

  11. An Introduction to the Heliophysics Event Knowledgebase

    NASA Astrophysics Data System (ADS)

    Hurlburt, Neal E.; Cheung, M.; Schrijver, C.; Chang, L.; Freeland, S.; Green, S.; Heck, C.; Jaffey, A.; Kobashi, A.; Schiff, D.; Serafin, J.; Seguin, R.; Slater, G.; Somani, A.; Timmons, R.

    2010-05-01

    The immense volume of data generated by the suite of instruments on SDO requires new tools for efficiently identifying and accessing data that are most relevant to research investigations. We have developed the Heliophysics Events Knowledgebase (HEK) to fill this need. The system developed to support the HEK combines automated data mining using feature-detection methods; high-performance visualization systems for data markup; and web services and clients for searching the resulting metadata, reviewing results and efficiently accessing the data. We will review these components and present examples of their use with SDO data.
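
    The HEK web services are also reachable through third-party clients. As a hedged sketch, the example below uses the SunPy HEK client (a community tool, not part of the HEK itself) to search for flare events in an example time window; the printed columns are assumed to be present for flare events.

```python
from sunpy.net import attrs as a
from sunpy.net import hek

client = hek.HEKClient()
# Search the knowledgebase for flare ('FL') events in an example time window.
results = client.search(
    a.Time("2011-08-09 07:00:00", "2011-08-09 13:00:00"),
    a.hek.EventType("FL"),
)
for event in results[:5]:
    # Column names assumed from the HEK flare event schema.
    print(event["event_starttime"], event["fl_goescls"])
```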

  12. Knowledge-based Autonomous Test Engineer (KATE)

    NASA Technical Reports Server (NTRS)

    Parrish, Carrie L.; Brown, Barbara L.

    1991-01-01

    Mathematical models of system components have long been used to allow simulators to predict system behavior to various stimuli. Recent efforts to monitor, diagnose, and control real-time systems using component models have experienced similar success. NASA Kennedy is continuing the development of a tool for implementing real-time knowledge-based diagnostic and control systems called KATE (Knowledge based Autonomous Test Engineer). KATE is a model-based reasoning shell designed to provide autonomous control, monitoring, fault detection, and diagnostics for complex engineering systems by applying its reasoning techniques to an exchangeable quantitative model describing the structure and function of the various system components and their systemic behavior.

  13. Systems Biology Knowledgebase (GSC8 Meeting)

    ScienceCinema

    Cottingham, Robert W [ORNL]

    2011-04-29

    The Genomic Standards Consortium was formed in September 2005. It is an international, open-membership working body which promotes standardization in the description of genomes and the exchange and integration of genomic data. The 2009 meeting was an activity of a five-year "Research Coordination Network" grant from the National Science Foundation and was organized and held at the DOE Joint Genome Institute with organizational support provided by the JGI and by the University of California, San Diego. Robert W. Cottingham of Oak Ridge National Laboratory discusses the DOE KnowledgeBase at the Genomic Standards Consortium's 8th meeting at the DOE JGI in Walnut Creek, Calif. on Sept. 9, 2009.

  14. Knowledge-based systems in Japan

    NASA Technical Reports Server (NTRS)

    Feigenbaum, Edward; Engelmore, Robert S.; Friedland, Peter E.; Johnson, Bruce B.; Nii, H. Penny; Schorr, Herbert; Shrobe, Howard

    1994-01-01

    This report summarizes a study of the state-of-the-art in knowledge-based systems technology in Japan, organized by the Japanese Technology Evaluation Center (JTEC) under the sponsorship of the National Science Foundation and the Advanced Research Projects Agency. The panel visited 19 Japanese sites in March 1992. Based on these site visits plus other interactions with Japanese organizations, both before and after the site visits, the panel prepared a draft final report. JTEC sent the draft to the host organizations for their review. The final report was published in May 1993.

  15. Systems Biology Knowledgebase (GSC8 Meeting)

    SciTech Connect

    Cottingham, Robert W

    2009-09-09

    The Genomic Standards Consortium was formed in September 2005. It is an international, open-membership working body which promotes standardization in the description of genomes and the exchange and integration of genomic data. The 2009 meeting was an activity of a five-year "Research Coordination Network" grant from the National Science Foundation and was organized and held at the DOE Joint Genome Institute with organizational support provided by the JGI and by the University of California, San Diego. Robert W. Cottingham of Oak Ridge National Laboratory discusses the DOE KnowledgeBase at the Genomic Standards Consortium's 8th meeting at the DOE JGI in Walnut Creek, Calif. on Sept. 9, 2009.

  16. Knowledge-based simulation for aerospace systems

    NASA Technical Reports Server (NTRS)

    Will, Ralph W.; Sliwa, Nancy E.; Harrison, F. Wallace, Jr.

    1988-01-01

    Knowledge-based techniques, which offer many features that are desirable in the simulation and development of aerospace vehicle operations, exhibit many similarities to traditional simulation packages. The eventual solution of these systems' current symbolic-processing/numeric-processing interface problem will lead to continuous and discrete-event simulation capabilities in a single language, such as TS-PROLOG. Qualitative, totally symbolic simulation methods are noted to possess several intrinsic characteristics that are especially revelatory of the system being simulated, and capable of ensuring that all possible behaviors are considered.

  17. Analysis of Unit-Level Changes in Operations with Increased SPP Wind from EPRI/LCG Balancing Study

    SciTech Connect

    Hadley, Stanton W

    2012-01-01

    Wind power development in the United States is outpacing previous estimates for many regions, particularly those with good wind resources. The pace of wind power deployment may soon outstrip regional capabilities to provide transmission and integration services to achieve the most economic power system operation. Conversely, regions such as the Southeastern United States do not have good wind resources and will have difficulty meeting proposed federal Renewable Portfolio Standards with local supply. There is a growing need to explore innovative solutions for collaborating between regions to achieve the least cost solution for meeting such a renewable energy mandate. The Department of Energy funded the project 'Integrating Midwest Wind Energy into Southeast Electricity Markets' to be led by EPRI in coordination with the main authorities for the regions: SPP, Entergy, TVA, Southern Company and OPC. EPRI utilized several subcontractors for the project including LCG, the developers of the model UPLAN. The study aims to evaluate the operating cost benefits of coordination of scheduling and balancing for Southwest Power Pool (SPP) wind transfers to Southeastern Electric Reliability Council (SERC) Balancing Authorities (BAs). The primary objective of this project is to analyze the benefits of regional cooperation for integrating mid-western wind energy into southeast electricity markets. Scenarios were defined, modeled and investigated to address production variability and uncertainty and the associated balancing of large quantities of wind power in SPP and delivery to energy markets in the southern regions of the SERC. DOE funded Oak Ridge National Laboratory to provide additional support to the project, including a review of results and any side analysis that may provide additional insight. This report is a unit-by-unit analysis of changes in operations due to the different scenarios used in the overall study. It focuses on the change in capacity factors and the number

  18. Bioenergy Science Center KnowledgeBase

    DOE Data Explorer

    Syed, M. H.; Karpinets, T. V.; Parang, M.; Leuze, M. R.; Park, B. H.; Hyatt, D.; Brown, S. D.; Moulton, S.; Galloway, M. D.; Uberbacher, E. C.

    The challenge of converting cellulosic biomass to sugars is the dominant obstacle to cost-effective production of biofuels in quantities significant enough to displace U.S. consumption of fossil transportation fuels. The BioEnergy Science Center (BESC) tackles this challenge of biomass recalcitrance by closely linking (1) plant research to make cell walls easier to deconstruct, and (2) microbial research to develop multi-talented biocatalysts tailor-made to produce biofuels in a single step. [from the 2011 BESC factsheet] The BioEnergy Science Center (BESC) is a multi-institutional, multidisciplinary research (biological, chemical, physical and computational sciences, mathematics and engineering) organization focused on the fundamental understanding and elimination of biomass recalcitrance. The BESC Knowledgebase and its associated tools form a discovery platform for bioenergy research. It consists of a collection of metadata, data, and computational tools for data analysis, integration, comparison and visualization for plants and microbes in the center. The BESC Knowledgebase (KB) and BESC Laboratory Information Management System (LIMS) enable bioenergy researchers to perform systemic research. [http://bobcat.ornl.gov/besc/index.jsp]

  19. Knowledge-based landmarking of cephalograms.

    PubMed

    Lévy-Mandel, A D; Venetsanopoulos, A N; Tsotsos, J K

    1986-06-01

    Orthodontists have defined a certain number of characteristic points, or landmarks, on X-ray images of the human skull which are used to study growth or as a diagnostic aid. This work presents the first step toward an automatic extraction of these points. They are defined with respect to particular lines which are retrieved first. The original image is preprocessed with a prefiltering operator (median filter) followed by an edge detector (Mero-Vassy operator). A knowledge-based line-following algorithm is subsequently applied, involving a production system with organized sets of rules and a simple interpreter. The a priori knowledge implemented in the algorithm must take into account the fact that the lines represent biological shapes and can vary considerably from one patient to the next. The performance of the algorithm is judged with the help of objective quality criteria. Determination of the exact shapes of the lines allows the computation of the positions of the landmarks. PMID:3519070
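
    The preprocessing chain (median filtering followed by edge detection) can be approximated with standard Python imaging tools. Note that the Canny detector below stands in for the Mero-Vassy operator used in the paper, and the file name is hypothetical.

```python
from scipy.ndimage import median_filter
from skimage import feature, io

# Load a lateral skull X-ray (hypothetical file) as a grayscale image.
image = io.imread("cephalogram.png", as_gray=True)

# Prefilter with a median filter to suppress noise, as in the paper's first step.
smoothed = median_filter(image, size=5)

# Edge detection; Canny is used here as a stand-in for the Mero-Vassy operator.
edges = feature.canny(smoothed, sigma=2.0)
print(edges.sum(), "edge pixels found")
```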

  20. Is pharmacy a knowledge-based profession?

    PubMed

    Waterfield, Jon

    2010-04-12

    An increasingly important question for the pharmacy educator is the relationship between pharmacy knowledge and professionalism. There is a substantial body of literature on the theory of knowledge and it is useful to apply this to the profession of pharmacy. This review examines the types of knowledge and skill used by the pharmacist, with particular reference to tacit knowledge which cannot be codified. This leads into a discussion of practice-based pharmacy knowledge and the link between pharmaceutical science and practice. The final section of the paper considers the challenge of making knowledge work in practice. This includes a discussion of the production of knowledge within the context of application. The theoretical question posed by this review, "Is pharmacy a knowledge-based profession?" highlights challenging areas of debate for the pharmacy educator. PMID:20498743

  1. Advances in knowledge-based software engineering

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt

    1991-01-01

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about a more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

  2. Knowledge-Based Systems (KBS) development standards: A maintenance perspective

    NASA Technical Reports Server (NTRS)

    Brill, John

    1990-01-01

    Information on knowledge-based systems (KBS) is given in viewgraph form. Information is given on KBS standardization needs, the knowledge engineering process, program management, software and hardware issues, and chronic problem areas.

  3. Cildb: a knowledgebase for centrosomes and cilia

    PubMed Central

    Arnaiz, Olivier; Malinowska, Agata; Klotz, Catherine; Sperling, Linda; Dadlez, Michal; Koll, France; Cohen, Jean

    2009-01-01

    Ciliopathies, pleiotropic diseases provoked by defects in the structure or function of cilia or flagella, reflect the multiple roles of cilia during development, in stem cells, in somatic organs and germ cells. High throughput studies have revealed several hundred proteins that are involved in the composition, function or biogenesis of cilia. The corresponding genes are potential candidates for orphan ciliopathies. To study ciliary genes, model organisms are used in which particular questions on motility, sensory or developmental functions can be approached by genetics. In the course of high throughput studies of cilia in Paramecium tetraurelia, we were confronted with the problem of comparing our results with those obtained in other model organisms. We therefore developed a novel knowledgebase, Cildb, that integrates ciliary data from heterogeneous sources. Cildb links orthology relationships among 18 species to high throughput ciliary studies, and to OMIM data on human hereditary diseases. The web interface of Cildb comprises three tools, BioMart for complex queries, BLAST for sequence homology searches and GBrowse for browsing the human genome in relation to OMIM information for human diseases. Cildb can be used for interspecies comparisons, building candidate ciliary proteomes in any species, or identifying candidate ciliopathy genes. Database URL: http://cildb.cgm.cnrs-gif.fr PMID:20428338

  4. Knowledge-based optical system design

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik

    1992-03-01

    This work is a new approach for the design of start optical systems and represents a new contribution of artificial intelligence techniques in the optical design field. A knowledge-based optical-systems design (KBOSD), based on artificial intelligence algorithms, first order logic, knowledge representation, rules, and heuristics on lens design, is realized. This KBOSD is equipped with optical knowledge in the domain of centered dioptrical optical systems used at low aperture and small field angles. It generates centered dioptrical, on-axis and low-aperture optical systems, which are used as start systems for the subsequent optimization by existing lens design programs. This KBOSD produces monochromatic or polychromatic optical systems, such as singlet lens, doublet lens, triplet lens, reversed singlet lens, reversed doublet lens, reversed triplet lens, and telescopes. In the design of optical systems, the KBOSD takes into account many user constraints such as cost, resistance of the optical material (glass) to chemical, thermal, and mechanical effects, as well as the optical quality such as minimal aberrations and chromatic aberrations corrections. This KBOSD is developed in the programming language Prolog and has knowledge on optical design principles and optical properties. It is composed of more than 3000 clauses. Inference engine and interconnections in the cognitive world of optical systems are described. The system uses neither a lens library nor a lens data base; it is completely based on optical design knowledge.

  5. Knowledge-based approach to system integration

    NASA Technical Reports Server (NTRS)

    Blokland, W.; Krishnamurthy, C.; Biegl, C.; Sztipanovits, J.

    1988-01-01

    To solve complex problems one can often use the decomposition principle. However, a problem is seldom decomposable into completely independent subproblems. System integration deals with the problem of resolving the interdependencies and the integration of the subsolutions. A natural method of decomposition is the hierarchical one. High-level specifications are broken down into lower-level specifications until they can be transformed into solutions relatively easily. By automating the hierarchical decomposition and solution generation, an integrated system is obtained in which the declaration of high-level specifications is enough to solve the problem. We offer a knowledge-based approach to integrating the development and building of control systems. The process modeling is supported by graphical editors. The user selects and connects icons that represent subprocesses and may refer to prewritten programs. The graphical editor assists the user in selecting parameters for each subprocess and allows the testing of a specific configuration. Next, from the definitions created by the graphical editor, the actual control program is built. Fault-diagnosis routines are generated automatically as well. Since the user is not required to write program code and knowledge about the process is present in the development system, the user is not required to have expertise in many fields.

  6. Knowledge-based reusable software synthesis system

    NASA Technical Reports Server (NTRS)

    Donaldson, Cammie

    1989-01-01

    The Eli system, a knowledge-based reusable software synthesis system, is being developed for NASA Langley under a Phase 2 SBIR contract. Named after Eli Whitney, the inventor of interchangeable parts, Eli assists engineers of large-scale software systems in reusing components while they are composing their software specifications or designs. Eli will identify reuse potential, search for components, select component variants, and synthesize components into the developer's specifications. The Eli project began as a Phase 1 SBIR to define a reusable software synthesis methodology that integrates reusability into the top-down development process and to develop an approach for an expert system to promote and accomplish reuse. The objectives of the Eli Phase 2 work are to integrate advanced technologies to automate the development of reusable components within the context of large system developments, to integrate with user development methodologies without significant changes in method or learning of special languages, and to make reuse the easiest operation to perform. Eli will try to address a number of reuse problems including developing software with reusable components, managing reusable components, identifying reusable components, and transitioning reuse technology. Eli is both a library facility for classifying, storing, and retrieving reusable components and a design environment that emphasizes, encourages, and supports reuse.

  7. Introduction: geoscientific knowledgebase of Chernobyl and Fukushima

    NASA Astrophysics Data System (ADS)

    Yamauchi, Masatoshi; Voitsekhovych, Oleg; Korobova, Elena; Stohl, Andreas; Wotawa, Gerhard; Kita, Kazuyuki; Aoyama, Michio; Yoshida, Naohiro

    2013-04-01

    Radioactive contamination after the Chernobyl (1986) and Fukushima (2011) accidents is a multi-disciplinary geoscience problem. This session (GI1.4) alone contains presentations on (i) atmospheric transport over both short and long distances, (ii) aerosol physics and chemistry, (iii) geophysical measurement methods and logistics, (iv) inversion methods to estimate the geophysical source term and decay, (v) transport, migration, and sedimentation in the surface water system, (vi) transport and sedimentation in the ocean, (vii) soil chemistry and physics, (viii) forest ecosystems, and (ix) risk assessments, all of which are interrelated. Because severe accidents like Chernobyl and Fukushima are rare, Chernobyl's 27 years of experience is the only knowledgebase that provides good guidance for the Fukushima case, both in understanding the physical/chemical processes related to environmental radioactive contamination and in providing future perspectives, e.g., what we should do next for observation and remediation. Unfortunately, the multi-disciplinary nature of the radioactive contamination problem makes it very difficult for a single scientist to obtain an overview of all geoscientific aspects of the Chernobyl experience. The aim of this introductory talk is to give a comprehensive account of the wide geoscientific aspects of the Chernobyl contamination to the Fukushima-related geoscience community.

  8. IGENPRO knowledge-based operator support system.

    SciTech Connect

    Morman, J. A.

    1998-07-01

    Research and development is being performed on the knowledge-based IGENPRO operator support package for plant transient diagnostics and management to provide operator assistance during off-normal plant transient conditions. A generic thermal-hydraulic (T-H) first-principles approach is being implemented using automated reasoning, artificial neural networks and fuzzy logic to produce a generic T-H system-independent/plant-independent package. The IGENPRO package has a modular structure composed of three modules: the transient trend analysis module PROTREN, the process diagnostics module PRODIAG and the process management module PROMANA. Cooperative research and development work has focused on the PRODIAG diagnostic module of the IGENPRO package and the operator training matrix of transients used at the Braidwood Pressurized Water Reactor station. Promising simulator testing results with PRODIAG have been obtained for the Braidwood Chemical and Volume Control System (CVCS), and the Component Cooling Water System. Initial CVCS test results have also been obtained for the PROTREN module. The PROMANA effort also involves the CVCS. Future work will be focused on the long-term, slow and mild degradation transients where diagnoses of incipient T-H component failure prior to forced outage events is required. This will enhance the capability of the IGENPRO system as a predictive maintenance tool for plant staff and operator support.

  9. Knowledge-based system verification and validation

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1990-01-01

    The objective of this task is to develop and evaluate a methodology for verification and validation (V&V) of knowledge-based systems (KBS) for space station applications with high reliability requirements. The approach consists of three interrelated tasks. The first task is to evaluate the effectiveness of various validation methods for space station applications. The second task is to recommend requirements for KBS V&V for Space Station Freedom (SSF). The third task is to recommend modifications to the SSF to support the development of KBS using effective software engineering and validation techniques. To accomplish the first task, three complementary techniques will be evaluated: (1) Sensitivity Analysis (Worcester Polytechnic Institute); (2) Formal Verification of Safety Properties (SRI International); and (3) Consistency and Completeness Checking (Lockheed AI Center). During FY89 and FY90, each contractor will independently demonstrate the use of his technique on the fault detection, isolation, and reconfiguration (FDIR) KBS of the manned maneuvering unit (MMU), a rule-based system implemented in LISP. During FY91, the application of each of the techniques to other knowledge representations and KBS architectures will be addressed. After evaluation of the results of the first task and examination of Space Station Freedom V&V requirements for conventional software, a comprehensive KBS V&V methodology will be developed and documented. Development of highly reliable KBS's cannot be accomplished without effective software engineering methods. Using the results of current in-house research to develop and assess software engineering methods for KBS's as well as assessment of techniques being developed elsewhere, an effective software engineering methodology for space station KBS's will be developed, and modification of the SSF to support these tools and methods will be addressed.

  10. A Knowledge-Based System Developer for aerospace applications

    NASA Technical Reports Server (NTRS)

    Shi, George Z.; Wu, Kewei; Fensky, Connie S.; Lo, Ching F.

    1993-01-01

    A prototype Knowledge-Based System Developer (KBSD) has been developed for aerospace applications by utilizing artificial intelligence technology. The KBSD directly acquires knowledge from domain experts through a graphical interface and then builds expert systems from that knowledge. This raises the state of the art of knowledge acquisition/expert system technology to a new level by lessening the need for skilled knowledge engineers. The feasibility, applicability, and efficiency of the proposed concept were established, justifying a continuation that would develop the prototype into a full-scale, general-purpose knowledge-based system developer. The KBSD has great commercial potential. It will provide a marketable software shell which alleviates the need for knowledge engineers and increases productivity in the workplace. The KBSD will therefore make knowledge-based systems available to a large portion of industry.

  11. The Knowledge-Based Economy and E-Learning: Critical Considerations for Workplace Democracy

    ERIC Educational Resources Information Center

    Remtulla, Karim A.

    2007-01-01

    The ideological shift by nation-states to "a knowledge-based economy" (also referred to as "knowledge-based society") is causing changes in the workplace. Brought about by the forces of globalisation and technological innovation, the ideologies of the "knowledge-based economy" are not limited to influencing the production, consumption and economic…

  12. Online Knowledge-Based Model for Big Data Topic Extraction

    PubMed Central

    Khan, Muhammad Taimoor; Durrani, Mehr; Khalid, Shehzad; Aziz, Furqan

    2016-01-01

    Lifelong machine learning (LML) models learn from experience by maintaining a knowledge-base, without user intervention. Unlike traditional single-domain models, they can easily scale up to explore big data. The existing LML models have high data dependency, consume more resources, and do not support streaming data. This paper proposes an online LML model (OAMC) to support streaming data with reduced data dependency. By engineering the knowledge-base and introducing new knowledge features, the learning pattern of the model is improved for data arriving in pieces. OAMC improves accuracy, measured as topic coherence, by 7% for streaming data while reducing the processing cost by half. PMID:27195004

  13. PLAN-IT - Knowledge-based mission sequencing

    NASA Technical Reports Server (NTRS)

    Biefeld, Eric W.

    1987-01-01

    PLAN-IT (Plan-Integrated Timelines), a knowledge-based approach to assist in mission sequencing, is discussed. PLAN-IT uses a large set of scheduling techniques known as strategies to develop and maintain a mission sequence. The approach implemented by PLAN-IT and the current applications of PLAN-IT for sequencing at NASA are reported.

  14. Dynamic Strategic Planning in a Professional Knowledge-Based Organization

    ERIC Educational Resources Information Center

    Olivarius, Niels de Fine; Kousgaard, Marius Brostrom; Reventlow, Susanne; Quelle, Dan Grevelund; Tulinius, Charlotte

    2010-01-01

    Professional, knowledge-based institutions have a particular form of organization and culture that makes special demands on the strategic planning supervised by research administrators and managers. A model for dynamic strategic planning based on a pragmatic utilization of the multitude of strategy models was used in a small university-affiliated…

  15. Big data analytics in immunology: a knowledge-based approach.

    PubMed

    Zhang, Guang Lan; Sun, Jing; Chitkushev, Lou; Brusic, Vladimir

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow. PMID:25045677

  16. Big Data Analytics in Immunology: A Knowledge-Based Approach

    PubMed Central

    Zhang, Guang Lan

    2014-01-01

    With the vast amount of immunological data available, immunology research is entering the big data era. These data vary in granularity, quality, and complexity and are stored in various formats, including publications, technical reports, and databases. The challenge is to make the transition from data to actionable knowledge and wisdom and bridge the knowledge gap and application gap. We report a knowledge-based approach based on a framework called KB-builder that facilitates data mining by enabling fast development and deployment of web-accessible immunological data knowledge warehouses. Immunological knowledge discovery relies heavily on both the availability of accurate, up-to-date, and well-organized data and the proper analytics tools. We propose the use of knowledge-based approaches by developing knowledgebases combining well-annotated data with specialized analytical tools and integrating them into analytical workflow. A set of well-defined workflow types with rich summarization and visualization capacity facilitates the transformation from data to critical information and knowledge. By using KB-builder, we enabled streamlining of normally time-consuming processes of database development. The knowledgebases built using KB-builder will speed up rational vaccine design by providing accurate and well-annotated data coupled with tailored computational analysis tools and workflow. PMID:25045677

  17. Multicultural Education Knowledgebase, Attitudes and Preparedness for Diversity

    ERIC Educational Resources Information Center

    Wasonga, Teresa A.

    2005-01-01

    Purpose: The paper aims to investigate the effect of multicultural knowledgebase on attitudes and feelings of preparedness to teach children from diverse backgrounds among pre-service teachers. Currently issues of multicultural education have been heightened by the academic achievement gap and emphasis on standardized test-scores as the indicator…

  18. CACTUS: Command and Control Training Using Knowledge-Based Simulations

    ERIC Educational Resources Information Center

    Hartley, Roger; Ravenscroft, Andrew; Williams, R. J.

    2008-01-01

    The CACTUS project was concerned with command and control training of large incidents where public order may be at risk, such as large demonstrations and marches. The training requirements and objectives of the project are first summarized justifying the use of knowledge-based computer methods to support and extend conventional training…

  19. Knowledge-Based Hierarchies: Using Organizations to Understand the Economy

    ERIC Educational Resources Information Center

    Garicano, Luis; Rossi-Hansberg, Esteban

    2015-01-01

    Incorporating the decision of how to organize the acquisition, use, and communication of knowledge into economic models is essential to understand a wide variety of economic phenomena. We survey the literature that has used knowledge-based hierarchies to study issues such as the evolution of wage inequality, the growth and productivity of firms,…

  20. Knowledge-Based Aid: A Four Agency Comparative Study

    ERIC Educational Resources Information Center

    McGrath, Simon; King, Kenneth

    2004-01-01

    Part of the response of many development cooperation agencies to the challenges of globalisation, ICTs and the knowledge economy is to emphasise the importance of knowledge for development. This paper looks at the discourses and practices of ''knowledge-based aid'' through an exploration of four agencies: the World Bank, DFID, Sida and JICA. It…

  1. Value Creation in the Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Liu, Fang-Chun

    2013-01-01

    Effective investment strategies help companies form dynamic core organizational capabilities allowing them to adapt and survive in today's rapidly changing knowledge-based economy. This dissertation investigates three valuation issues that challenge managers with respect to developing business-critical investment strategies that can have…

  2. The Ignorance of the Knowledge-Based Economy. The Iconoclast.

    ERIC Educational Resources Information Center

    McMurtry, John

    1996-01-01

    Castigates the supposed "knowledge-based economy" as simply a public relations smokescreen covering up the free market exploitation of people and resources serving corporate interests. Discusses the many ways that private industry, often with government collusion, has controlled or denied dissemination of information to serve its own interests.…

  3. Conventional and Knowledge-Based Information Retrieval with Prolog.

    ERIC Educational Resources Information Center

    Leigh, William; Paz, Noemi

    1988-01-01

    Describes the use of PROLOG to program knowledge-based information retrieval systems, in which the knowledge contained in a document is translated into machine processable logic. Several examples of the resulting search process, and the program rules supporting the process, are given. (10 references) (CLB)

  4. Malaysia Transitions toward a Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Mustapha, Ramlee; Abdullah, Abu

    2004-01-01

    The emergence of a knowledge-based economy (k-economy) has spawned a "new" notion of workplace literacy, changing the relationship between employers and employees. The traditional covenant where employees expect a stable or lifelong employment will no longer apply. The retention of employees will most probably be based on their skills and…

  5. Automated annual cropland mapping using knowledge-based temporal features

    NASA Astrophysics Data System (ADS)

    Waldner, François; Canto, Guadalupe Sepulcre; Defourny, Pierre

    2015-12-01

    Global, timely, accurate and cost-effective cropland mapping is a prerequisite for reliable crop condition monitoring. This article presented a simple and comprehensive methodology capable of meeting the requirements of operational cropland mapping by proposing (1) five knowledge-based temporal features that remain stable over time, (2) a cleaning method that discards misleading pixels from a baseline land cover map and (3) a classifier that delivers high-accuracy cropland maps (> 80%). This was demonstrated over four contrasted agrosystems in Argentina, Belgium, China and Ukraine. It was found that the quality and accuracy of the baseline affect the certainty of the classification more than the classification output itself. In addition, it was shown that interpolation of the knowledge-based features increases the stability of the classifier, allowing for its re-use from year to year without recalibration. Hence, the method shows potential for application at larger scale as well as for delivering cropland maps in near real time.
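
    A minimal sketch of the overall workflow under stated assumptions: the five temporal features below are illustrative stand-ins (the abstract does not name the paper's exact definitions), the baseline land-cover map supplies the training labels, and a random forest stands in for whichever classifier the authors used.

```python
# Sketch: knowledge-based temporal features + baseline cleaning + classifier.
# Feature definitions are illustrative assumptions, not the published ones.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def temporal_features(ndvi):
    """ndvi: (n_pixels, n_dates) NDVI time series -> (n_pixels, 5) features."""
    return np.column_stack([
        ndvi.max(axis=1),                     # peak greenness
        ndvi.min(axis=1),                     # dormant-season minimum
        ndvi.max(axis=1) - ndvi.min(axis=1),  # seasonal amplitude
        np.argmax(ndvi, axis=1),              # timing of the peak
        ndvi.sum(axis=1),                     # cumulative greenness proxy
    ])

def clean_baseline(features, baseline_labels, threshold=2.0):
    """Discard pixels whose features deviate strongly from their class mean."""
    keep = np.ones(len(baseline_labels), dtype=bool)
    for c in np.unique(baseline_labels):
        idx = baseline_labels == c
        z = np.abs((features[idx] - features[idx].mean(axis=0)) /
                   (features[idx].std(axis=0) + 1e-9))
        keep[idx] = (z < threshold).all(axis=1)
    return keep

# Toy inputs: co-registered NDVI series and a baseline map (1=cropland, 0=other).
ndvi_series = np.random.rand(1000, 23)
baseline = (np.random.rand(1000) > 0.5).astype(int)

X = temporal_features(ndvi_series)
mask = clean_baseline(X, baseline)
clf = RandomForestClassifier(n_estimators=200).fit(X[mask], baseline[mask])
cropland_map = clf.predict(X)
```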

  6. Managing Project Landscapes in Knowledge-Based Enterprises

    NASA Astrophysics Data System (ADS)

    Stantchev, Vladimir; Franke, Marc Roman

    Knowledge-based enterprises typically conduct a large number of research and development projects simultaneously. This is a particularly challenging task in complex and diverse project landscapes. Project Portfolio Management (PPM) can be a viable framework for knowledge and innovation management in such landscapes. A standardized process with defined functions such as project data repository, project assessment, selection, reporting, and portfolio reevaluation can serve as a starting point. In this work we discuss the benefits a multidimensional evaluation framework can provide for knowledge-based enterprises. Furthermore, we describe a knowledge and learning strategy and process in the context of PPM and evaluate their practical applicability at different stages of the PPM process.

  7. Knowledge-based zonal grid generation for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automation of flow field zoning in two dimensions is an important step towards reducing the difficulty of three-dimensional grid generation in computational fluid dynamics. Using a knowledge-based approach makes sense, but problems arise which are caused by aspects of zoning involving perception, lack of expert consensus, and design processes. These obstacles are overcome by means of a simple shape and configuration language, a tunable zoning archetype, and a method of assembling plans from selected, predefined subplans. A demonstration system for knowledge-based two-dimensional flow field zoning has been successfully implemented and tested on representative aerodynamic configurations. The results show that this approach can produce flow field zonings that are acceptable to experts with differing evaluation criteria.

  8. A knowledgebase system to enhance scientific discovery: Telemakus.

    PubMed

    Fuller, Sherrilynne S; Revere, Debra; Bugni, Paul F; Martin, George M

    2004-09-21

    BACKGROUND: With the rapid expansion of scientific research, the ability to effectively find or integrate new domain knowledge in the sciences is proving increasingly difficult. Efforts to improve and speed up scientific discovery are being explored on a number of fronts. However, much of this work is based on traditional search and retrieval approaches and the bibliographic citation presentation format remains unchanged. METHODS: Case study. RESULTS: The Telemakus KnowledgeBase System provides flexible new tools for creating knowledgebases to facilitate retrieval and review of scientific research reports. In formalizing the representation of the research methods and results of scientific reports, Telemakus offers a potential strategy to enhance the scientific discovery process. While other research has demonstrated that aggregating and analyzing research findings across domains augments knowledge discovery, the Telemakus system is unique in combining document surrogates with interactive concept maps of linked relationships across groups of research reports. CONCLUSION: Based on how scientists conduct research and read the literature, the Telemakus KnowledgeBase System brings together three innovations in analyzing, displaying and summarizing research reports across a domain: (1) research report schema, a document surrogate of extracted research methods and findings presented in a consistent and structured schema format which mimics the research process itself and provides a high-level surrogate to facilitate searching and rapid review of retrieved documents; (2) research findings, used to index the documents, allowing searchers to request, for example, research studies which have studied the relationship between neoplasms and vitamin E; and (3) visual exploration interface of linked relationships for interactive querying of research findings across the knowledgebase and graphical displays of what is known as well as, through gaps in the map, what is yet to be

  9. Knowledge-based interpretation of outdoor natural color scenes

    SciTech Connect

    Ohta, Y.

    1985-01-01

    One of the major targets in vision research is to develop a total vision system starting from images to a symbolic description, utilizing various knowledge sources. This book demonstrates a knowledge-based image interpretation system that analyzes natural color scenes. Topics covered include color information for region segmentation, preliminary segmentation of color images, and a bottom-up and top-down region analyzer.

  10. A knowledge-based expert system for inferring vegetation characteristics

    NASA Technical Reports Server (NTRS)

    Kimes, Daniel S.; Harrison, Patrick R.; Ratcliffe, P. A.

    1991-01-01

    A prototype knowledge-based expert system VEG is presented that focuses on extracting spectral hemispherical reflectance using any combination of nadir and/or directional reflectance data as input. The system is designed to facilitate expansion to handle other inferences regarding vegetation properties such as total hemispherical reflectance, leaf area index, percent ground cover, photosynthetic capacity, and biomass. This approach is more robust and accurate than conventional extraction techniques previously developed.

  11. A knowledgebase system to enhance scientific discovery: Telemakus

    PubMed Central

    Fuller, Sherrilynne S; Revere, Debra; Bugni, Paul F; Martin, George M

    2004-01-01

    Background With the rapid expansion of scientific research, the ability to effectively find or integrate new domain knowledge in the sciences is proving increasingly difficult. Efforts to improve and speed up scientific discovery are being explored on a number of fronts. However, much of this work is based on traditional search and retrieval approaches and the bibliographic citation presentation format remains unchanged. Methods Case study. Results The Telemakus KnowledgeBase System provides flexible new tools for creating knowledgebases to facilitate retrieval and review of scientific research reports. In formalizing the representation of the research methods and results of scientific reports, Telemakus offers a potential strategy to enhance the scientific discovery process. While other research has demonstrated that aggregating and analyzing research findings across domains augments knowledge discovery, the Telemakus system is unique in combining document surrogates with interactive concept maps of linked relationships across groups of research reports. Conclusion Based on how scientists conduct research and read the literature, the Telemakus KnowledgeBase System brings together three innovations in analyzing, displaying and summarizing research reports across a domain: (1) research report schema, a document surrogate of extracted research methods and findings presented in a consistent and structured schema format which mimics the research process itself and provides a high-level surrogate to facilitate searching and rapid review of retrieved documents; (2) research findings, used to index the documents, allowing searchers to request, for example, research studies which have studied the relationship between neoplasms and vitamin E; and (3) visual exploration interface of linked relationships for interactive querying of research findings across the knowledgebase and graphical displays of what is known as well as, through gaps in the map, what is yet to be tested
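
    The research report schema and findings index described above lend themselves to a small data-structure sketch. The field names, concepts, and the studies_linking helper below are hypothetical illustrations of the idea, not the published Telemakus schema.

```python
# Illustrative sketch of a Telemakus-style report surrogate and findings index.
# All field names and example concepts are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ResearchReport:
    pmid: str
    organism: str
    methods: list = field(default_factory=list)    # e.g. ["dietary intervention"]
    findings: list = field(default_factory=list)   # (concept_a, concept_b) pairs

reports = [
    ResearchReport("111", "mouse", ["dietary intervention"], [("neoplasms", "vitamin E")]),
    ResearchReport("222", "rat", ["survival analysis"], [("caloric restriction", "lifespan")]),
]

def studies_linking(concept_a, concept_b):
    """Return reports whose extracted findings relate the two concepts."""
    pair = {concept_a.lower(), concept_b.lower()}
    return [r for r in reports
            if any({a.lower(), b.lower()} == pair for a, b in r.findings)]

print([r.pmid for r in studies_linking("vitamin E", "neoplasms")])   # ['111']
```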

  12. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.; Glass, Emily; Economides, Gregory; Russell, Paul

    1994-01-01

    This Contractor Report documents research in Intelligent Control using knowledge-based processing in a manner dual to methods found in the classic stochastic decision, estimation, and control discipline. Such knowledge-based control has also been called Declarative and Hybrid. Software architectures were sought, employing the parallelism inherent in modern object-oriented modeling and programming. The viewpoint adopted was that Intelligent Control employs a class of domain-specific software architectures having features common over a broad variety of implementations, such as management of aircraft flight, power distribution, etc. As much attention was paid to software engineering issues as to artificial intelligence and control issues. This research considered that particular processing methods from the stochastic and knowledge-based worlds are duals, that is, similar in a broad context. They provide architectural design concepts which serve as bridges between the disparate disciplines of decision, estimation, control, and artificial intelligence. This research was applied to the control of a subsonic transport aircraft in the airport terminal area.

  13. Systems, methods and apparatus for verification of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael G. (Inventor); Rash, James L. (Inventor); Erickson, John D. (Inventor); Gracinin, Denis (Inventor); Rouff, Christopher A. (Inventor)

    2010-01-01

    Systems, methods and apparatus are provided through which in some embodiments, domain knowledge is translated into a knowledge-based system. In some embodiments, a formal specification is derived from rules of a knowledge-based system, the formal specification is analyzed, and flaws in the formal specification are used to identify and correct errors in the domain knowledge, from which a knowledge-based system is translated.

  14. Construction of dynamic stochastic simulation models using knowledge-based techniques

    NASA Technical Reports Server (NTRS)

    Williams, M. Douglas; Shiva, Sajjan G.

    1990-01-01

    Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).

  15. Comparison of LISP and MUMPS as implementation languages for knowledge-based systems

    SciTech Connect

    Curtis, A.C.

    1984-01-01

    Major components of knowledge-based systems are summarized, along with the programming language features generally useful in their implementation. LISP and MUMPS are briefly described and compared as vehicles for building knowledge-based systems. The paper concludes with suggestions for extensions to MUMPS which might increase its usefulness in artificial intelligence applications without affecting the essential nature of the language. 8 references.

  16. Knowledge-Based Learning: Integration of Deductive and Inductive Learning for Knowledge Base Completion.

    ERIC Educational Resources Information Center

    Whitehall, Bradley Lane

    In constructing a knowledge-based system, the knowledge engineer must convert rules of thumb provided by the domain expert and previously solved examples into a working system. Research in machine learning has produced algorithms that create rules for knowledge-based systems, but these algorithms require either many examples or a complete domain…

  17. Knowledge-based GIS techniques applied to geological engineering

    USGS Publications Warehouse

    Usery, E. Lynn; Altheide, Phyllis; Deister, Robin R.P.; Barr, David J.

    1988-01-01

    A knowledge-based geographic information system (KBGIS) approach which requires development of a rule base for both GIS processing and for the geological engineering application has been implemented. The rule bases are implemented in the Goldworks expert system development shell interfaced to the Earth Resources Data Analysis System (ERDAS) raster-based GIS for input and output. GIS analysis procedures including recoding, intersection, and union are controlled by the rule base, and the geological engineering map product is generated by the expert system. The KBGIS has been used to generate a geological engineering map of Creve Coeur, Missouri.
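
    A toy sketch of rule-driven GIS analysis in this spirit: recode, intersection, and union operations on small rasters are controlled by a rule base expressed as data. The layer names, class codes, and suitability rule are invented for illustration; the real system uses the Goldworks shell and ERDAS rasters.

```python
# Minimal sketch of rule-driven raster overlay (recode / intersection / union).
# Layers, codes, and rules are hypothetical.
import numpy as np

slope = np.array([[2, 8], [15, 3]])      # percent slope
geology = np.array([[1, 1], [2, 2]])     # 1 = limestone, 2 = shale

def recode(layer, table, default=0):
    out = np.full(layer.shape, default)
    for value, new_value in table.items():
        out[layer == value] = new_value
    return out

# Rule base expressed as data rather than hard-wired logic.
rules = {
    "slope_classes": lambda s: np.where(s < 5, 1, np.where(s < 12, 2, 3)),
    "geology_hazard": {1: 1, 2: 3},      # shale treated as higher hazard
}

slope_cls = rules["slope_classes"](slope)
hazard = recode(geology, rules["geology_hazard"])

suitable = (slope_cls == 1) & (hazard == 1)      # intersection of constraints
excluded = (slope_cls == 3) | (hazard == 3)      # union of exclusion criteria
engineering_map = np.where(excluded, 3, np.where(suitable, 1, 2))
print(engineering_map)
```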

  18. Knowledge-based machine vision systems for space station automation

    NASA Technical Reports Server (NTRS)

    Ranganath, Heggere S.; Chipman, Laure J.

    1989-01-01

    Computer vision techniques which have the potential for use on the space station and related applications are assessed. A knowledge-based vision system (expert vision system) and the development of a demonstration system for it are described. This system implements some of the capabilities that would be necessary in a machine vision system for the robot arm of the laboratory module in the space station. A Perceptics 9200e image processor, on a host VAXstation, was used to develop the demonstration system. In order to use realistic test images, photographs of actual space shuttle simulator panels were used. The system's capabilities of scene identification and scene matching are discussed.

  19. Knowledge-based potential functions in protein design.

    PubMed

    Russ, William P; Ranganathan, Rama

    2002-08-01

    Predicting protein sequences that fold into specific native three-dimensional structures is a problem of great potential complexity. Although the complete solution is ultimately rooted in understanding the physical chemistry underlying the complex interactions between amino acid residues that determine protein stability, recent work shows that empirical information about these first principles is embedded in the statistics of protein sequence and structure databases. This review focuses on the use of 'knowledge-based' potentials derived from these databases in designing proteins. In addition, the data suggest how the study of these empirical potentials might impact our fundamental understanding of the energetic principles of protein structure. PMID:12163066
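
    Knowledge-based pair potentials are commonly derived from database statistics with an inverse-Boltzmann construction, E = -kT ln(P_observed / P_reference); the sketch below illustrates that general form with toy contact counts and is not the specific formulation reviewed in the paper.

```python
# Sketch of a simple knowledge-based (statistical) pair potential using the
# inverse-Boltzmann form. Counts are toy numbers; real potentials are derived
# from curated structure databases.
import numpy as np

kT = 0.593  # ~RT in kcal/mol at 298 K

# Observed residue-pair contact counts (hypothetical) and the counts expected
# if contacts formed at random.
observed = {("ILE", "LEU"): 420, ("GLU", "LYS"): 310, ("LYS", "LYS"): 40}
expected = {("ILE", "LEU"): 250, ("GLU", "LYS"): 260, ("LYS", "LYS"): 90}

def pair_energy(a, b):
    key = tuple(sorted((a, b)))
    return -kT * np.log(observed[key] / expected[key])

for pair in observed:
    print(pair, round(pair_energy(*pair), 2))   # negative = favourable contact
```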

  20. An Introduction to the Heliophysics Event Knowledgebase for SDO

    NASA Astrophysics Data System (ADS)

    Hurlburt, Neal; Schrijver, Carolus; Cheung, Mark

    The immense volume of data generated by the suite of instruments on SDO requires new tools for efficiently identifying and accessing the data most relevant to research investigations. We have developed the Heliophysics Events Knowledgebase (HEK) to fill this need. The system developed in support of the HEK combines automated data mining using feature detection methods; high-performance visualization systems for data markup; and web services and clients for searching the resulting metadata, reviewing results, and efficiently accessing the data. We will review these components and present examples of their use with SDO data.

  1. A knowledge-based clustering algorithm driven by Gene Ontology.

    PubMed

    Cheng, Jill; Cline, Melissa; Martin, John; Finkelstein, David; Awad, Tarif; Kulp, David; Siani-Rose, Michael A

    2004-08-01

    We have developed an algorithm for inferring the degree of similarity between genes by using the graph-based structure of Gene Ontology (GO). We applied this knowledge-based similarity metric to a clique-finding algorithm for detecting sets of related genes with biological classifications. We also combined it with an expression-based distance metric to produce a co-cluster analysis, which accentuates genes with both similar expression profiles and similar biological characteristics and identifies gene clusters that are more stable and biologically meaningful. These algorithms are demonstrated in the analysis of MPRO cell differentiation time series experiments. PMID:15468759
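
    A hedged sketch of the co-cluster idea: a knowledge-based (GO-derived) distance is blended with an expression-based distance before hierarchical clustering. The similarity values, the equal weighting, and the clustering settings are all illustrative assumptions rather than the paper's algorithm.

```python
# Combine a GO-derived similarity with an expression distance, then cluster.
# GO similarities are invented; real values would come from a graph-based
# measure over the Gene Ontology DAG.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist, squareform

genes = ["g1", "g2", "g3", "g4"]
expression = np.random.rand(4, 10)            # 4 genes x 10 time points

go_sim = np.array([[1.0, 0.8, 0.2, 0.1],      # pairwise GO similarity in [0, 1]
                   [0.8, 1.0, 0.3, 0.2],
                   [0.2, 0.3, 1.0, 0.7],
                   [0.1, 0.2, 0.7, 1.0]])

expr_dist = squareform(pdist(expression, metric="correlation"))
go_dist = 1.0 - go_sim
alpha = 0.5                                   # weight of the knowledge term
combined = alpha * go_dist + (1 - alpha) * expr_dist

Z = linkage(squareform(combined, checks=False), method="average")
print(dict(zip(genes, fcluster(Z, t=2, criterion="maxclust"))))
```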

  2. Building validation tools for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.

    1987-01-01

    The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (higher-order language), is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.
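
    Two of the listed checks can be illustrated on a toy rule base; the rule format, the conflict test, and the omission test below are simplified stand-ins for EVA's actual checks.

```python
# Toy illustrations of a logic check (conflicting rules with identical
# conditions) and an omission check (conditions no rule or sensor can supply).
rules = [
    {"if": {"temp_high", "flow_low"}, "then": "alarm_on"},
    {"if": {"temp_high", "flow_low"}, "then": "alarm_off"},   # conflicts with rule 0
    {"if": {"sensor_fault"}, "then": "alarm_off"},
]
known_facts = {"temp_high", "flow_low"}        # facts obtainable from sensors

def conflict_check(rules):
    conflicts = []
    for i, a in enumerate(rules):
        for j, b in enumerate(rules[i + 1:], start=i + 1):
            if a["if"] == b["if"] and a["then"] != b["then"]:
                conflicts.append((i, j))
    return conflicts

def omission_check(rules, facts):
    derivable = facts | {r["then"] for r in rules}
    return [c for r in rules for c in r["if"] if c not in derivable]

print("conflicting rule pairs:", conflict_check(rules))              # [(0, 1)]
print("unsupported conditions:", omission_check(rules, known_facts)) # ['sensor_fault']
```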

  3. A prototype knowledge-based simulation support system

    SciTech Connect

    Hill, T.R.; Roberts, S.D.

    1987-04-01

    As a preliminary step toward the goal of an intelligent automated system for simulation modeling support, we explore the feasibility of the overall concept by generating and testing a prototypical framework. A prototype knowledge-based computer system was developed to support a senior level course in industrial engineering so that the overall feasibility of an expert simulation support system could be studied in a controlled and observable setting. The system behavior mimics the diagnostic (intelligent) process performed by the course instructor and teaching assistants, finding logical errors in INSIGHT simulation models and recommending appropriate corrective measures. The system was programmed in a non-procedural language (PROLOG) and designed to run interactively with students working on course homework and projects. The knowledge-based structure supports intelligent behavior, providing its users with access to an evolving accumulation of expert diagnostic knowledge. The non-procedural approach facilitates the maintenance of the system and helps merge the roles of expert and knowledge engineer by allowing new knowledge to be easily incorporated without regard to the existing flow of control. The background, features, and design of the system are described and preliminary results are reported. Initial success is judged to demonstrate the utility of the reported approach and support the ultimate goal of an intelligent modeling system which can support simulation modelers outside the classroom environment. Finally, future extensions are suggested.

  4. Knowledge-based simulation using object-oriented programming

    NASA Technical Reports Server (NTRS)

    Sidoran, Karen M.

    1993-01-01

    Simulations have become a powerful mechanism for understanding and modeling complex phenomena. Their results have had substantial impact on a broad range of decisions in the military, government, and industry. Because of this, new techniques are continually being explored and developed to make them even more useful, understandable, extendable, and efficient. One such area of research is the application of the knowledge-based methods of artificial intelligence (AI) to the computer simulation field. The goal of knowledge-based simulation is to facilitate building simulations of greatly increased power and comprehensibility by making use of deeper knowledge about the behavior of the simulated world. One technique for representing and manipulating knowledge that has been enhanced by the AI community is object-oriented programming. Using this technique, the entities of a discrete-event simulation can be viewed as objects in an object-oriented formulation. Knowledge can be factual (i.e., attributes of an entity) or behavioral (i.e., how the entity is to behave in certain circumstances). Rome Laboratory's Advanced Simulation Environment (RASE) was developed as a research vehicle to provide an enhanced simulation development environment for building more intelligent, interactive, flexible, and realistic simulations. This capability will support current and future battle management research and provide a test of the object-oriented paradigm for use in large scale military applications.

  5. Knowledge-based imaging-sensor fusion system

    NASA Technical Reports Server (NTRS)

    Westrom, George

    1989-01-01

    An imaging system which applies knowledge-based technology to supervise and control both sensor hardware and computation in the imaging system is described. It includes the development of an imaging system breadboard which brings together into one system work that we and others have pursued for LaRC for several years. The goal is to combine Digital Signal Processing (DSP) with Knowledge-Based Processing and also include Neural Net processing. The system is considered a smart camera. Imagine that there is a microgravity experiment on-board Space Station Freedom with a high frame rate, high resolution camera. Not all of the data can possibly be acquired by a laboratory on Earth. In fact, only a small fraction of the data will be received. Again, imagine being responsible for some experiments on Mars with the Mars Rover: the data rate is a few kilobits per second for data from several sensors and instruments. Would it not be preferable to have a smart system which would have some human knowledge and yet follow some instructions and attempt to make the best use of the limited bandwidth for transmission? The system concept, current status of the breadboard system and some recent experiments at the Mars-like Amboy Lava Fields in California are discussed.

  6. Portable Knowledge-Based Diagnostic And Maintenance Systems

    NASA Astrophysics Data System (ADS)

    Darvish, John; Olson, Noreen S.

    1989-03-01

    It is difficult to diagnose faults and maintain weapon systems because (1) they are highly complex pieces of equipment composed of multiple mechanical, electrical, and hydraulic assemblies, and (2) talented maintenance personnel are continuously being lost through the attrition process. To solve this problem, we developed a portable diagnostic and maintenance aid that uses a knowledge-based expert system. This aid incorporates diagnostics, operational procedures, repair and replacement procedures, and regularly scheduled maintenance into one compact, 18-pound graphics workstation. Drawings and schematics can be pulled up from the CD-ROM to assist the operator in answering the expert system's questions. Work for this aid began with the development of the initial knowledge-based expert system in a fast prototyping environment using a LISP machine. The second phase saw the development of a personal computer-based system that used videodisc technology to pictorially assist the operator. The current version of the aid eliminates the high expenses associated with videodisc preparation by scanning in the art work already in the manuals. A number of generic software tools have been developed that streamlined the construction of each iteration of the aid; these tools will be applied to the development of future systems.

  7. Network fingerprint: a knowledge-based characterization of biomedical networks

    PubMed Central

    Cui, Xiuliang; He, Haochen; He, Fuchu; Wang, Shengqi; Li, Fei; Bo, Xiaochen

    2015-01-01

    It can be difficult for biomedical researchers to understand complex molecular networks due to their unfamiliarity with the mathematical concepts employed. To represent molecular networks with clear meanings and familiar forms for biomedical researchers, we introduce a knowledge-based computational framework to decipher biomedical networks by making systematic comparisons to well-studied “basic networks”. A biomedical network is characterized as a spectrum-like vector called “network fingerprint”, which contains similarities to basic networks. This knowledge-based multidimensional characterization provides a more intuitive way to decipher molecular networks, especially for large-scale network comparisons and clustering analyses. As an example, we extracted network fingerprints of 44 disease networks in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database. The comparisons among the network fingerprints of disease networks revealed informative disease-disease and disease-signaling pathway associations, illustrating that the network fingerprinting framework will lead to new approaches for better understanding of biomedical networks. PMID:26307246
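
    A much-simplified sketch of the fingerprint idea: the query network is summarized as a vector of similarities to a set of reference ("basic") networks. Here the comparison uses coarse graph statistics rather than the paper's knowledge-based comparison, and the choice of reference networks is an assumption.

```python
# Simplified "network fingerprint": similarities of a query network to a set
# of basic reference networks, computed from coarse graph statistics.
import numpy as np
import networkx as nx

def stats(g):
    degrees = [d for _, d in g.degree()]
    return np.array([nx.density(g), nx.average_clustering(g), np.mean(degrees)])

basic_networks = {
    "random": nx.erdos_renyi_graph(50, 0.1, seed=1),
    "scale_free": nx.barabasi_albert_graph(50, 2, seed=1),
    "small_world": nx.watts_strogatz_graph(50, 4, 0.1, seed=1),
}

def fingerprint(query):
    q = stats(query)
    return {name: float(1.0 / (1.0 + np.linalg.norm(q - stats(g))))
            for name, g in basic_networks.items()}

disease_network = nx.karate_club_graph()   # stand-in for a KEGG disease network
print(fingerprint(disease_network))
```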

  8. Hospital nurses' use of knowledge-based information resources.

    PubMed

    Tannery, Nancy Hrinya; Wessel, Charles B; Epstein, Barbara A; Gadd, Cynthia S

    2007-01-01

    The purpose of this study was to evaluate the information-seeking practices of nurses before and after access to a library's electronic collection of information resources. This is a pre/post intervention study of nurses at a rural community hospital. The hospital contracted with an academic health sciences library for access to a collection of online knowledge-based resources. Self-report surveys were used to obtain information about nurses' computer use and how they locate and access information to answer questions related to their patient care activities. In 2001, self-report surveys were sent to the hospital's 573 nurses during implementation of access to online resources with a post-implementation survey sent 1 year later. At the initiation of access to the library's electronic resources, nurses turned to colleagues and print textbooks or journals to satisfy their information needs. After 1 year of access, 20% of the nurses had begun to use the library's electronic resources. The study outcome suggests ready access to knowledge-based electronic information resources can lead to changes in behavior among some nurses. PMID:17289463

  9. A proven knowledge-based approach to prioritizing process information

    NASA Technical Reports Server (NTRS)

    Corsberg, Daniel R.

    1991-01-01

    Many space-related processes are highly complex systems subject to sudden, major transients. In any complex process control system, a critical aspect is rapid analysis of the changing process information. During a disturbance, this task can overwhelm humans as well as computers. Humans deal with this by applying heuristics in determining significant information. A simple, knowledge-based approach to prioritizing information is described. The approach models those heuristics that humans would use in similar circumstances. The approach described has received two patents and was implemented in the Alarm Filtering System (AFS) at the Idaho National Engineering Laboratory (INEL). AFS was first developed for application in a nuclear reactor control room. It has since been used in chemical processing applications, where it has had a significant impact on control room environments. The approach uses knowledge-based heuristics to analyze data from process instrumentation and respond to that data according to knowledge encapsulated in objects and rules. While AFS cannot perform the complete diagnosis and control task, it has proven to be extremely effective at filtering and prioritizing information. AFS was used for over two years as a first level of analysis for human diagnosticians. Given the approach's proven track record in a wide variety of practical applications, it should be useful in both ground- and space-based systems.
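
    A minimal sketch of the kind of heuristic the text describes: alarms that are expected consequences of an already-active cause are suppressed, and the remainder is ranked by priority. The alarm names, cause/consequence table, and priorities are invented, not taken from AFS.

```python
# Heuristic alarm filtering: suppress alarms explained by an active cause,
# then rank what remains. All knowledge below is illustrative.
active_alarms = ["LOW_COOLANT_FLOW", "HIGH_CORE_TEMP", "PUMP_2_TRIP"]

consequence_of = {                       # cause -> expected consequences
    "PUMP_2_TRIP": {"LOW_COOLANT_FLOW", "HIGH_CORE_TEMP"},
}
priority = {"PUMP_2_TRIP": 1, "LOW_COOLANT_FLOW": 2, "HIGH_CORE_TEMP": 2}

def prioritize(alarms):
    suppressed = set()
    for alarm in alarms:
        for consequence in consequence_of.get(alarm, ()):
            if consequence in alarms:
                suppressed.add(consequence)      # explained by its cause
    kept = [a for a in alarms if a not in suppressed]
    return sorted(kept, key=lambda a: priority.get(a, 99))

print(prioritize(active_alarms))   # ['PUMP_2_TRIP']
```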

  10. Knowledge-based vision and simple visual machines.

    PubMed Central

    Cliff, D; Noble, J

    1997-01-01

    The vast majority of work in machine vision emphasizes the representation of perceived objects and events: it is these internal representations that incorporate the 'knowledge' in knowledge-based vision or form the 'models' in model-based vision. In this paper, we discuss simple machine vision systems developed by artificial evolution rather than traditional engineering design techniques, and note that the task of identifying internal representations within such systems is made difficult by the lack of an operational definition of representation at the causal mechanistic level. Consequently, we question the nature and indeed the existence of representations posited to be used within natural vision systems (i.e. animals). We conclude that representations argued for on a priori grounds by external observers of a particular vision system may well be illusory, and are at best place-holders for yet-to-be-identified causal mechanistic interactions. That is, applying the knowledge-based vision approach in the understanding of evolved systems (machines or animals) may well lead to theories and models that are internally consistent, computationally plausible, and entirely wrong. PMID:9304684

  11. Literature classification for semi-automated updating of biological knowledgebases

    PubMed Central

    2013-01-01

    Background As the output of biological assays increase in resolution and volume, the body of specialized biological data, such as functional annotations of gene and protein sequences, enables extraction of higher-level knowledge needed for practical application in bioinformatics. Whereas common types of biological data, such as sequence data, are extensively stored in biological databases, functional annotations, such as immunological epitopes, are found primarily in semi-structured formats or free text embedded in primary scientific literature. Results We defined and applied a machine learning approach for literature classification to support updating of TANTIGEN, a knowledgebase of tumor T-cell antigens. Abstracts from PubMed were downloaded and classified as either "relevant" or "irrelevant" for database update. Training and five-fold cross-validation of a k-NN classifier on 310 abstracts yielded classification accuracy of 0.95, thus showing significant value in support of data extraction from the literature. Conclusion We here propose a conceptual framework for semi-automated extraction of epitope data embedded in scientific literature using principles from text mining and machine learning. The addition of such data will aid in the transition of biological databases to knowledgebases. PMID:24564403
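
    The described workflow (abstract classification with a k-NN classifier and five-fold cross-validation) can be sketched with scikit-learn. TF-IDF features and the placeholder abstracts below are assumptions, since the abstract does not specify the feature representation.

```python
# Classify abstracts as relevant/irrelevant for a knowledgebase update using
# TF-IDF features and k-NN with five-fold cross-validation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

abstracts = [
    "HLA-A*02:01 restricted epitope identified in a melanoma antigen",
    "T-cell epitope mapping of a tumor antigen in vitro",
    "Economic analysis of hospital staffing levels",
    "Survey of teaching practices in secondary schools",
] * 20                                  # padded so 5-fold CV is meaningful
labels = [1, 1, 0, 0] * 20              # 1 = relevant for database update

model = make_pipeline(TfidfVectorizer(stop_words="english"),
                      KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(model, abstracts, labels, cv=5)
print("mean accuracy:", scores.mean())
```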

  12. A knowledge-based care protocol system for ICU.

    PubMed

    Lau, F; Vincent, D D

    1995-01-01

    There is a growing interest in using care maps in ICU. So far, the emphasis has been on developing the critical path, problem/outcome, and variance reporting for specific diagnoses. This paper presents a conceptual knowledge-based care protocol system design for the ICU. It is based on the manual care map currently in use for managing myocardial infarction in the ICU of the Sturgeon General Hospital in Alberta. The proposed design uses expert rules, object schemas, case-based reasoning, and quantitative models as sources of its knowledge. Also being developed is a decision model with explicit linkages for outcome-process-measure from the care map. The resulting system is intended as a bedside charting and decision-support tool for caregivers. Proposed usage includes charting by acknowledgment, generation of alerts, and critiques on variances/events recorded, recommendations for planned interventions, and comparison with historical cases. Currently, a prototype is being developed on a PC-based network with Visual Basic, Level-Expert Object, and xBase. A clinical trial is also planned to evaluate whether this knowledge-based care protocol can reduce the length of stay of patients with myocardial infarction in the ICU. PMID:8591604

  13. A knowledgebase of the human Alu repetitive elements.

    PubMed

    Mallona, Izaskun; Jordà, Mireia; Peinado, Miguel A

    2016-04-01

    Alu elements are the most abundant retrotransposons in the human genome with more than one million copies. Alu repeats have been reported to participate in multiple processes related with genome regulation and compartmentalization. Moreover, they have been involved in the facilitation of pathological mutations in many diseases, including cancer. The contribution of Alus and other repeats to genomic regulation is often overlooked because their study poses technical and analytical challenges hardly attainable with conventional strategies. Here we propose the integration of ontology-based semantic methods to query a knowledgebase for the human Alus. The knowledgebase for the human Alus leverages the Sequence (SO) and Gene (GO) Ontologies and is devoted to addressing functional and genetic information in the genomic context of the Alus. For each Alu element, the closest gene and transcript are stored, as well as their functional annotation according to GO, the state of the chromatin and the transcription factor binding sites inside the Alu. The model uses Web Ontology Language (OWL) and Semantic Web Rule Language (SWRL). As a use case and to illustrate the utility of the tool, we have evaluated the epigenetic states of Alu repeats associated with gene promoters according to their transcriptional activity. The ontology is easily extendable, offering a scaffold for the inclusion of new experimental data. The RDF/XML formalization is freely available at http://aluontology.sourceforge.net/. PMID:26827622
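
    Because the knowledgebase is formalized in OWL, it can in principle be queried with SPARQL; the sketch below uses rdflib with an entirely hypothetical vocabulary (the IRI, class, and property names, and the local file name), since the published ontology defines its own terms.

```python
# Hedged sketch of querying an Alu knowledgebase with rdflib/SPARQL.
# The file name and vocabulary below are hypothetical placeholders.
from rdflib import Graph

g = Graph()
g.parse("alu_ontology.owl", format="xml")    # assumed local copy of the ontology

query = """
PREFIX alu: <http://example.org/alu#>
SELECT ?alu ?gene
WHERE {
  ?alu a alu:AluElement ;
       alu:nearPromoterOf ?gene ;
       alu:chromatinState "active" .
}
"""
for row in g.query(query):
    print(row.alu, row.gene)
```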

  14. Developing a Physiologically-Based Pharmacokinetic Model Knowledgebase in Support of Provisional Model Construction.

    PubMed

    Lu, Jingtao; Goldsmith, Michael-Rock; Grulke, Christopher M; Chang, Daniel T; Brooks, Raina D; Leonard, Jeremy A; Phillips, Martin B; Hypes, Ethan D; Fair, Matthew J; Tornero-Velez, Rogelio; Johnson, Jeffre; Dary, Curtis C; Tan, Yu-Mei

    2016-02-01

    Developing physiologically-based pharmacokinetic (PBPK) models for chemicals can be resource-intensive, as neither chemical-specific parameters nor in vivo pharmacokinetic data are easily available for model construction. Previously developed, well-parameterized, and thoroughly-vetted models can be a great resource for the construction of models pertaining to new chemicals. A PBPK knowledgebase was compiled and developed from existing PBPK-related articles and used to develop new models. From 2,039 PBPK-related articles published between 1977 and 2013, 307 unique chemicals were identified for use as the basis of our knowledgebase. Keywords related to species, gender, developmental stages, and organs were analyzed from the articles within the PBPK knowledgebase. A correlation matrix of the 307 chemicals in the PBPK knowledgebase was calculated based on pharmacokinetic-relevant molecular descriptors. Chemicals in the PBPK knowledgebase were ranked based on their correlation toward ethylbenzene and gefitinib. Next, multiple chemicals were selected to represent exact matches, close analogues, or non-analogues of the target case study chemicals. Parameters, equations, or experimental data relevant to existing models for these chemicals and their analogues were used to construct new models, and model predictions were compared to observed values. This compiled knowledgebase provides a chemical structure-based approach for identifying PBPK models relevant to other chemical entities. Using suitable correlation metrics, we demonstrated that models of chemical analogues in the PBPK knowledgebase can guide the construction of PBPK models for other chemicals. PMID:26871706
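
    A small sketch of the analogue-ranking step: chemicals are compared to a target by correlating standardized molecular-descriptor profiles. The descriptors and values are rough, illustrative numbers, and simple Pearson correlation is one plausible metric rather than the paper's exact choice.

```python
# Rank knowledgebase chemicals by similarity to a target chemical using
# correlation over (illustrative) pharmacokinetic-relevant descriptors.
import pandas as pd

descriptors = pd.DataFrame(
    {"logP": [3.2, 2.7, -3.2, 3.2],
     "mol_weight": [106.2, 92.1, 180.2, 446.9],
     "vapor_pressure": [9.6, 28.4, 0.0, 0.0]},
    index=["ethylbenzene", "toluene", "glucose", "gefitinib"],
)

target = "ethylbenzene"
# Standardize each descriptor, then correlate chemical profiles with the target's.
z = (descriptors - descriptors.mean()) / descriptors.std()
similarity = z.T.corr()[target].drop(target).sort_values(ascending=False)
print(similarity)    # closest analogues first
```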

  15. Developing a Physiologically-Based Pharmacokinetic Model Knowledgebase in Support of Provisional Model Construction

    PubMed Central

    Grulke, Christopher M.; Chang, Daniel T.; Brooks, Raina D.; Leonard, Jeremy A.; Phillips, Martin B.; Hypes, Ethan D.; Fair, Matthew J.; Tornero-Velez, Rogelio; Johnson, Jeffre; Dary, Curtis C.; Tan, Yu-Mei

    2016-01-01

    Developing physiologically-based pharmacokinetic (PBPK) models for chemicals can be resource-intensive, as neither chemical-specific parameters nor in vivo pharmacokinetic data are easily available for model construction. Previously developed, well-parameterized, and thoroughly-vetted models can be a great resource for the construction of models pertaining to new chemicals. A PBPK knowledgebase was compiled and developed from existing PBPK-related articles and used to develop new models. From 2,039 PBPK-related articles published between 1977 and 2013, 307 unique chemicals were identified for use as the basis of our knowledgebase. Keywords related to species, gender, developmental stages, and organs were analyzed from the articles within the PBPK knowledgebase. A correlation matrix of the 307 chemicals in the PBPK knowledgebase was calculated based on pharmacokinetic-relevant molecular descriptors. Chemicals in the PBPK knowledgebase were ranked based on their correlation toward ethylbenzene and gefitinib. Next, multiple chemicals were selected to represent exact matches, close analogues, or non-analogues of the target case study chemicals. Parameters, equations, or experimental data relevant to existing models for these chemicals and their analogues were used to construct new models, and model predictions were compared to observed values. This compiled knowledgebase provides a chemical structure-based approach for identifying PBPK models relevant to other chemical entities. Using suitable correlation metrics, we demonstrated that models of chemical analogues in the PBPK knowledgebase can guide the construction of PBPK models for other chemicals. PMID:26871706

  16. Knowledge-based system for flight information management. Thesis

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.

    1990-01-01

    The use of knowledge-based system (KBS) architectures to manage information on the primary flight display (PFD) of commercial aircraft is described. The PFD information management strategy used tailored the information on the PFD to the tasks the pilot performed. The KBS design and implementation of the task-tailored PFD information management application is described. The knowledge acquisition and subsequent system design of a flight-phase-detection KBS is also described. The flight-phase output of this KBS was used as input to the task-tailored PFD information management KBS. The implementation and integration of this KBS with existing aircraft systems and the other KBS is described. The flight tests of both KBSs, collectively called the Task-Tailored Flight Information Manager (TTFIM), are examined; these tests verified the implementation and integration of the KBSs and validated the software engineering advantages of the KBS approach in an operational environment.
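
    The flight-phase-detection KBS invites a tiny rule-based illustration; the phases, thresholds, and inputs below are hypothetical and far simpler than the acquired knowledge described in the thesis.

```python
# Toy rule-based flight-phase detection; phase names and thresholds are invented.
def flight_phase(altitude_ft, vertical_speed_fpm, airspeed_kt, on_ground):
    if on_ground:
        return "TAXI" if airspeed_kt < 40 else "TAKEOFF_ROLL"
    if altitude_ft < 1500 and vertical_speed_fpm > 300:
        return "INITIAL_CLIMB"
    if vertical_speed_fpm > 300:
        return "CLIMB"
    if vertical_speed_fpm < -300:
        return "DESCENT" if altitude_ft > 1500 else "APPROACH"
    return "CRUISE"

print(flight_phase(35000, 0, 460, False))    # CRUISE
print(flight_phase(800, -600, 140, False))   # APPROACH
```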

  17. The Knowledge-Based Software Assistant: Beyond CASE

    NASA Technical Reports Server (NTRS)

    Carozzoni, Joseph A.

    1993-01-01

    This paper will outline the similarities and differences between two paradigms of software development. Both support the whole software life cycle and provide automation for most of the software development process, but have different approaches. The CASE approach is based on a set of tools linked by a central data repository. This tool-based approach is data driven and views software development as a series of sequential steps, each resulting in a product. The Knowledge-Based Software Assistant (KBSA) approach, a radical departure from existing software development practices, is knowledge driven and centers around a formalized software development process. KBSA views software development as an incremental, iterative, and evolutionary process with development occurring at the specification level.

  18. Knowledge-based assistance in costing the space station DMS

    NASA Technical Reports Server (NTRS)

    Henson, Troy; Rone, Kyle

    1988-01-01

    The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.

  19. Knowledge-based fault diagnosis system for refuse collection vehicle

    NASA Astrophysics Data System (ADS)

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y.

    2015-05-01

    The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of the waste compactor truck to local authorities as well as waste management companies. The company faces difficulty acquiring knowledge from its expert when the expert is absent. To solve the problem, the expert's knowledge can be stored in an expert system, which can then provide the necessary support to the company when the expert is not available, and the associated processes and tools can be standardized and made more accurate. The knowledge input to the expert system is based on design guidelines and the expert's experience. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  20. Data integration and analysis using the Heliophysics Event Knowledgebase

    NASA Astrophysics Data System (ADS)

    Hurlburt, Neal; Reardon, Kevin

    The Heliophysics Event Knowledgebase (HEK) system provides an integrated framework for automated data mining using a variety of feature-detection methods; high-performance data systems to cope with over 1TB/day of multi-mission data; and web services and clients for searching the resulting metadata, reviewing results, and efficiently accessing the data products. We have recently enhanced the capabilities of the HEK to support the complex datasets being produced by the Interface Region Imaging Spectrograph (IRIS). We are also developing the mechanisms to incorporate descriptions of coordinated observations from ground-based facilities, including the NSO's Dunn Solar Telescope (DST). We will discuss the system and its recent evolution and demonstrate its ability to support coordinated science investigations.
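
    The HEK metadata described above can also be queried programmatically. As one illustration, the community SunPy package (an assumption about the reader's tooling, not part of the HEK itself) provides a client for the event registry; a minimal sketch, assuming a recent SunPy installation, follows. API details vary between SunPy versions.

```python
# Minimal sketch: querying the Heliophysics Event Knowledgebase for flare
# events in a time window, using the community SunPy package (assumed to be
# installed; the HEK also exposes raw web services that could be called
# directly with any HTTP client).
from sunpy.net import hek

client = hek.HEKClient()
results = client.search(
    hek.attrs.Time("2014-01-01T00:00:00", "2014-01-02T00:00:00"),
    hek.attrs.EventType("FL"),   # "FL" = flare events
)

# The result behaves like a table of event records; here we only report how
# many events the knowledgebase returned for the requested window.
print(f"HEK returned {len(results)} flare records")
```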

  1. Knowledge-Based Framework: its specification and new related discussions

    NASA Astrophysics Data System (ADS)

    Rodrigues, Douglas; Zaniolo, Rodrigo R.; Branco, Kalinka R. L. J. C.

    2015-09-01

    The Unmanned Aerial Vehicle (UAV) is a common application of critical embedded systems. The heterogeneity prevalent in these vehicles in terms of avionics services is particularly relevant to the elaboration of multi-application missions. Moreover, this heterogeneity in UAV services often manifests itself in characteristics such as reliability, security and performance: different service implementations typically offer different guarantees in terms of these characteristics and in terms of associated costs. In particular, we explore the notion of Service-Oriented Architecture (SOA) in the context of UAVs as safety-critical embedded systems for the composition of services that fulfil application-specified performance and dependability guarantees. We therefore propose a framework for the deployment of these services and their variants, called the Knowledge-Based Framework for Dynamically Changing Applications (KBF), and we specify its services module, discussing all the related issues.

  2. A knowledge-based system for prototypical reasoning

    NASA Astrophysics Data System (ADS)

    Lieto, Antonio; Minieri, Andrea; Piana, Alberto; Radicioni, Daniele P.

    2015-04-01

    In this work we present a knowledge-based system equipped with a hybrid, cognitively inspired architecture for the representation of conceptual information. The proposed system aims at extending the classical representational and reasoning capabilities of ontology-based frameworks towards the realm of prototype theory. It is based on a hybrid knowledge base that combines a classical symbolic component (grounded on a formal ontology) with a typicality-based one (grounded on the conceptual spaces framework). The resulting system attempts to reconcile the heterogeneous approach to concepts in Cognitive Science with the dual process theories of reasoning and rationality. The system has been experimentally assessed in a conceptual categorisation task where common-sense linguistic descriptions were given as input and the corresponding target concepts had to be identified. The results show that the proposed solution substantially extends the representational and reasoning 'conceptual' capabilities of standard ontology-based systems.
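
    The typicality-based component can be pictured as classification by distance to prototypes in a conceptual space. The sketch below is purely illustrative: the quality dimensions, prototype values and concepts are invented and are not taken from the system described in the record.

```python
# Minimal sketch of prototype-based categorisation in a "conceptual space":
# each concept is a prototype point over a few quality dimensions, and an
# input description is assigned to the nearest prototype. All values below
# are invented for illustration.
import math

PROTOTYPES = {
    # dimensions: (size, ferocity, domesticity), each normalised to [0, 1]
    "cat":   (0.2, 0.3, 0.9),
    "tiger": (0.8, 0.9, 0.1),
    "dog":   (0.4, 0.4, 0.9),
}

def categorise(observation, prototypes=PROTOTYPES):
    """Return the concept whose prototype is closest (Euclidean distance)."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return min(prototypes, key=lambda name: dist(observation, prototypes[name]))

# "A big, ferocious feline" might be encoded roughly as:
print(categorise((0.85, 0.95, 0.05)))   # -> "tiger"
```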

  3. Knowledge-based decision support for patient monitoring in cardioanesthesia.

    PubMed

    Schecke, T; Langen, M; Popp, H J; Rau, G; Käsmacher, H; Kalff, G

    1992-01-01

    An approach to generating 'intelligent alarms' is presented that aggregates many information items, i.e. measured vital signs, recent medications, etc., into state variables that more directly reflect the patient's physiological state. Based on these state variables the described decision support system AES-2 also provides therapy recommendations. The assessment of the state variables and the generation of therapeutic advice follow a knowledge-based approach. Aspects of uncertainty, e.g. a gradual transition between 'normal' and 'below normal', are considered applying a fuzzy set approach. Special emphasis is laid on the ergonomic design of the user interface, which is based on color graphics and finger touch input on the screen. Certain simulation techniques considerably support the design process of AES-2 as is demonstrated with a typical example from cardioanesthesia. PMID:1402299
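
    The "gradual transition between 'normal' and 'below normal'" is the classic fuzzy-membership idea. The sketch below illustrates it for a single vital sign; the breakpoints are invented for illustration and are not the AES-2 knowledge base.

```python
# Minimal sketch of fuzzy set membership for a vital sign, illustrating the
# gradual transition between "below normal" and "normal". Breakpoints are
# invented and do not reflect the AES-2 system described above.
def membership_normal(map_mmhg: float) -> float:
    """Degree (0..1) to which a mean arterial pressure counts as 'normal'."""
    lo, full_lo, full_hi, hi = 60.0, 70.0, 100.0, 110.0  # trapezoid corners
    if map_mmhg <= lo or map_mmhg >= hi:
        return 0.0
    if full_lo <= map_mmhg <= full_hi:
        return 1.0
    if map_mmhg < full_lo:                      # rising edge
        return (map_mmhg - lo) / (full_lo - lo)
    return (hi - map_mmhg) / (hi - full_hi)     # falling edge

def membership_below_normal(map_mmhg: float) -> float:
    # Complementary notion, saturating below the 'normal' plateau.
    return max(0.0, min(1.0, (70.0 - map_mmhg) / 10.0))

print(membership_normal(65), membership_below_normal(65))  # both partially true
```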

  4. Knowledge-based fault diagnosis system for refuse collection vehicle

    SciTech Connect

    Tan, CheeFai; Juffrizal, K.; Khalil, S. N.; Nidzamuddin, M. Y.

    2015-05-15

    The refuse collection vehicle is manufactured by a local vehicle body manufacturer. Currently, the company supplies six models of waste compactor truck to local authorities as well as waste management companies. The company has difficulty drawing on its expert's knowledge when the expert is absent. To solve this problem, the expert's knowledge can be stored in an expert system, which can then provide the necessary support when the expert is not available, making the implementation of the process and tools more standardized and accurate. The knowledge entered into the expert system is based on design guidelines and the expert's experience. This project highlights another application of the knowledge-based system (KBS) approach, in troubleshooting the refuse collection vehicle production process. The main aim of the research is to develop a novel expert fault diagnosis system framework for the refuse collection vehicle.

  5. Autonomous Cryogenics Loading Operations Simulation Software: Knowledgebase Autonomous Test Engineer

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S.

    2012-01-01

    The simulation software KATE (Knowledgebase Autonomous Test Engineer) is used to demonstrate the automatic identification of faults in a system. The ACLO (Autonomous Cryogenics Loading Operation) project uses KATE to monitor and find faults in the loading of cryogenics into a vehicle fuel tank. The KATE software interfaces with the IHM (Integrated Health Management) systems bus to communicate with other systems that are part of ACLO. One system that KATE communicates with over the IHM bus is AIS (Advanced Inspection System): KATE sends messages to AIS when an anomaly is detected, including requests for visual inspection of specific valves and pressure gauges and control messages directing AIS to open or close manual valves. My goals include implementing the connection to the IHM bus within KATE and for the AIS project. I will also be working on changes to KATE's UI and on implementing the physics objects in KATE that will model portions of the cryogenics loading operation.

  6. Autonomous Cryogenics Loading Operations Simulation Software: Knowledgebase Autonomous Test Engineer

    NASA Technical Reports Server (NTRS)

    Wehner, Walter S., Jr.

    2013-01-01

    Working on the ACLO (Autonomous Cryogenics Loading Operations) project, I have had the opportunity to add functionality to the physics simulation software known as KATE (Knowledgebase Autonomous Test Engineer), create a new application allowing WYSIWYG (what-you-see-is-what-you-get) creation of KATE schematic files, and begin a preliminary design and implementation of a new subsystem that will provide vision services on the IHM (Integrated Health Management) bus. The functionality I added to KATE over the past few months includes a dynamic visual representation of the fluid height in a pipe based on the number of gallons of fluid in the pipe, and implementation of the IHM bus connection within KATE. I also fixed a broken feature in the system called the Browser Display, implemented many bug fixes, and made changes to the GUI (Graphical User Interface).

  7. UniProt Knowledgebase: a hub of integrated protein data

    PubMed Central

    Magrane, Michele; Consortium, UniProt

    2011-01-01

    The UniProt Knowledgebase (UniProtKB) acts as a central hub of protein knowledge by providing a unified view of protein sequence and functional information. Manual and automatic annotation procedures are used to add data directly to the database while extensive cross-referencing to more than 120 external databases provides access to additional relevant information in more specialized data collections. UniProtKB also integrates a range of data from other resources. All information is attributed to its original source, allowing users to trace the provenance of all data. The UniProt Consortium is committed to using and promoting common data exchange formats and technologies, and UniProtKB data is made freely available in a range of formats to facilitate integration with other databases. Database URL: http://www.uniprot.org/ PMID:21447597
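
    UniProtKB entries are retrievable in several formats over plain HTTP. The sketch below uses the present-day REST endpoint (rest.uniprot.org), which postdates this 2011 record, so treat the URL pattern and the example accession as assumptions for illustration.

```python
# Minimal sketch: fetching a UniProtKB entry in FASTA format over HTTP.
# The rest.uniprot.org URL pattern is the current REST interface and is an
# assumption here; the 2011 record above predates it.
import urllib.request

accession = "P69905"  # human haemoglobin subunit alpha, used only as an example
url = f"https://rest.uniprot.org/uniprotkb/{accession}.fasta"

with urllib.request.urlopen(url) as response:
    fasta = response.read().decode("utf-8")

print(fasta.splitlines()[0])   # FASTA header line, which carries provenance
```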

  8. UniProt Knowledgebase: a hub of integrated protein data.

    PubMed

    Magrane, Michele

    2011-01-01

    The UniProt Knowledgebase (UniProtKB) acts as a central hub of protein knowledge by providing a unified view of protein sequence and functional information. Manual and automatic annotation procedures are used to add data directly to the database while extensive cross-referencing to more than 120 external databases provides access to additional relevant information in more specialized data collections. UniProtKB also integrates a range of data from other resources. All information is attributed to its original source, allowing users to trace the provenance of all data. The UniProt Consortium is committed to using and promoting common data exchange formats and technologies, and UniProtKB data is made freely available in a range of formats to facilitate integration with other databases. Database URL: http://www.uniprot.org/ PMID:21447597

  9. Knowledge-based navigation of complex information spaces

    SciTech Connect

    Burke, R.D.; Hammond, K.J.; Young, B.C.

    1996-12-31

    While the explosion of on-line information has brought new opportunities for finding and using electronic data, it has also brought to the forefront the problem of isolating useful information and making sense of large, multi-dimensional information spaces. We have developed an approach to building data "tour guides," called FINDME systems, and have built several of them. These programs know enough about an information space to be able to help a user navigate through it. The user not only comes away with items of useful information but also with insights into the structure of the information space itself. In these systems, we have combined ideas of instance-based browsing, structuring retrieval around the critiquing of previously retrieved examples, and retrieval strategies: knowledge-based heuristics for finding relevant information. We illustrate these techniques with several examples, concentrating especially on the RENTME system, a FINDME system for helping users find suitable rental apartments in the Chicago metropolitan area.

  10. Knowledge-based architecture for airborne mine and minefield detection

    NASA Astrophysics Data System (ADS)

    Agarwal, Sanjeev; Menon, Deepak; Swonger, C. W.

    2004-09-01

    One of the primary lessons learned from airborne mid-wave infrared (MWIR) based mine and minefield detection research and development over the last few years has been the fact that no single algorithm or static detection architecture is able to meet mine and minefield detection performance specifications. This is true not only because of the highly varied environmental and operational conditions under which an airborne sensor is expected to perform but also due to the highly data dependent nature of sensors and algorithms employed for detection. Attempts to make the algorithms themselves more robust to varying operating conditions have only been partially successful. In this paper, we present a knowledge-based architecture to tackle this challenging problem. The detailed algorithm architecture is discussed for such a mine/minefield detection system, with a description of each functional block and data interface. This dynamic and knowledge-driven architecture will provide more robust mine and minefield detection for a highly multi-modal operating environment. The acquisition of the knowledge for this system is predominantly data driven, incorporating not only the analysis of historical airborne mine and minefield imagery data collection, but also other "all source data" that may be available such as terrain information and time of day. This "all source data" is extremely important and embodies causal information that drives the detection performance. This information is not being used by current detection architectures. Data analysis for knowledge acquisition will facilitate better understanding of the factors that affect the detection performance and will provide insight into areas for improvement for both sensors and algorithms. Important aspects of this knowledge-based architecture, its motivations and the potential gains from its implementation are discussed, and some preliminary results are presented.

  11. Risk Management of New Microelectronics for NASA: Radiation Knowledge-base

    NASA Technical Reports Server (NTRS)

    LaBel, Kenneth A.

    2004-01-01

    Contents include the following: NASA missions - implications for reliability and radiation constraints; approach to insertion of new technologies; technology knowledge-base development; technology model/tool development and validation; summary comments.

  12. Towards a knowledge-based system to assist the Brazilian data-collecting system operation

    NASA Technical Reports Server (NTRS)

    Rodrigues, Valter; Simoni, P. O.; Oliveira, P. P. B.; Oliveira, C. A.; Nogueira, C. A. M.

    1988-01-01

    A study is reported which was carried out to show how a knowledge-based approach would lead to a flexible tool to assist the operation task in a satellite-based environmental data collection system. Some characteristics of a hypothesized system comprised of a satellite and a network of Interrogable Data Collecting Platforms (IDCPs) are pointed out. The Knowledge-Based Planning Assistant System (KBPAS) and some aspects about how knowledge is organized in the IDCP's domain are briefly described.

  13. Validation of highly reliable, real-time knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Johnson, Sally C.

    1988-01-01

    Knowledge-based systems have the potential to greatly increase the capabilities of future aircraft and spacecraft and to significantly reduce support manpower needed for the space station and other space missions. However, a credible validation methodology must be developed before knowledge-based systems can be used for life- or mission-critical applications. Experience with conventional software has shown that the use of good software engineering techniques and static analysis tools can greatly reduce the time needed for testing and simulation of a system. Since exhaustive testing is infeasible, reliability must be built into the software during the design and implementation phases. Unfortunately, many of the software engineering techniques and tools used for conventional software are of little use in the development of knowledge-based systems. Therefore, research at Langley is focused on developing a set of guidelines, methods, and prototype validation tools for building highly reliable, knowledge-based systems. The use of a comprehensive methodology for building highly reliable, knowledge-based systems should significantly decrease the time needed for testing and simulation. A proven record of delivering reliable systems at the beginning of the highly visible testing and simulation phases is crucial to the acceptance of knowledge-based systems in critical applications.

  14. A clinical trial of a knowledge-based medical record.

    PubMed

    Safran, C; Rind, D M; Davis, R B; Sands, D Z; Caraballo, E; Rippel, K; Wang, Q; Rury, C; Makadon, H J; Cotton, D J

    1995-01-01

    To meet the needs of primary care physicians caring for patients with HIV infection, we developed a knowledge-based medical record to allow the on-line patient record to play an active role in the care process. These programs integrate the on-line patient record, rule-based decision support, and full-text information retrieval into a clinical workstation for the practicing clinician. To determine whether use of a knowledge-based medical record was associated with more rapid and complete adherence to practice guidelines and improved quality of care, we performed a controlled clinical trial among physicians and nurse practitioners caring for 349 patients infected with the human immuno-deficiency virus (HIV); 191 patients were treated by 65 physicians and nurse practitioners assigned to the intervention group, and 158 patients were treated by 61 physicians and nurse practitioners assigned to the control group. During the 18-month study period, the computer generated 303 alerts in the intervention group and 388 in the control group. The median response time of clinicians to these alerts was 11 days in the intervention group and 52 days in the control group (P < 0.0001, log-rank test). During the study, the computer generated 432 primary care reminders for the intervention group and 360 reminders for the control group. The median response time of clinicians to these alerts was 114 days in the intervention group and more than 500 days in the control group (P < 0.0001, log-rank test). Of the 191 patients in the intervention group, 67 (35%) had one or more hospitalizations, compared with 70 (44%) of the 158 patients in the control group (P = 0.04, Wilcoxon test stratified for initial CD4 count). There was no difference in survival between the intervention and control groups (P = 0.18, log-rank test). We conclude that our clinical workstation significantly changed physicians' behavior in terms of their response to alerts regarding primary care interventions and that these

  15. Hyperincursion and the Globalization of the Knowledge-Based Economy

    NASA Astrophysics Data System (ADS)

    Leydesdorff, Loet

    2006-06-01

    In biological systems, the capacity of anticipation—that is, entertaining a model of the system within the system—can be considered as naturally given. Human languages enable psychological systems to construct and exchange mental models of themselves and their environments reflexively, that is, provide meaning to the events. At the level of the social system expectations can further be codified. When these codifications are functionally differentiated—like between market mechanisms and scientific research programs—the potential asynchronicity in the update among the subsystems provides room for a second anticipatory mechanism at the level of the transversal information exchange among differently codified meaning-processing subsystems. Interactions between the two different anticipatory mechanisms (the transversal one and the one along the time axis in each subsystem) may lead to co-evolutions and stabilization of expectations along trajectories. The wider horizon of knowledgeable expectations can be expected to meta-stabilize and also globalize a previously stabilized configuration of expectations against the axis of time. While stabilization can be considered as consequences of interaction and aggregation among incursive formulations of the logistic equation, globalization can be modeled using the hyperincursive formulation of this equation. The knowledge-based subdynamic at the global level which thus emerges, enables historical agents to inform the reconstruction of previous states and to co-construct future states of the social system, for example, in a techno-economic co-evolution.
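
    For readers unfamiliar with the terminology, the incursive and hyperincursive variants of the logistic equation referred to here are usually written following Dubois; the formulations below are that standard rendering, offered only as background and not quoted from the paper itself.

```latex
% Classical, incursive and hyperincursive logistic equations (Dubois' forms,
% given as background for the abstract above).
\begin{align}
  x_{t}   &= a\,x_{t-1}\,(1 - x_{t-1}) && \text{classical logistic map} \\
  x_{t}   &= a\,x_{t-1}\,(1 - x_{t})   && \text{incursive formulation} \\
  x_{t}   &= a\,x_{t+1}\,(1 - x_{t+1}) && \text{hyperincursive formulation}
\end{align}
```

    Solving the hyperincursive form for the next state gives two admissible roots, x_{t+1} = 1/2 +/- (1/2)sqrt(1 - 4x_t/a), so the model has to select between two possible future states at each step; this is how the hyperincursive formulation introduces decisions about future states.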

  16. Knowledge-acquisition tools for medical knowledge-based systems.

    PubMed

    Lanzola, G; Quaglini, S; Stefanelli, M

    1995-03-01

    Knowledge-based systems (KBS) have been proposed to solve a large variety of medical problems. A strategic issue for KBS development and maintenance is the effort required from both knowledge engineers and domain experts. The proposed solution is building efficient knowledge acquisition (KA) tools. This paper presents a set of KA tools we are developing within a European project called GAMES II. They have been designed after the formulation of an epistemological model of medical reasoning. The main goal is to develop a computational framework which allows knowledge engineers and domain experts to interact cooperatively in developing a medical KBS. To this aim, a set of reusable software components is highly recommended. Their design was facilitated by the development of a methodology for KBS construction, which views this process as comprising two activities: tailoring the epistemological model to the specific medical task to be executed, and translating this model into a computational architecture so that the connections between computational structures and their knowledge-level counterparts are maintained. The KA tools we developed are illustrated with examples from the behavior of a KBS we are building for the management of children with acute myeloid leukemia. PMID:9082135

  17. Knowledge-based system for design of signalized intersections

    SciTech Connect

    Linkenheld, J.S. ); Benekohal, R.F. ); Garrett, J.H. Jr. )

    1992-03-01

    For efficient traffic operation in intelligent highway systems, traffic signals need to respond to changes in roadway conditions and traffic demand. The phasing and timing of traffic signals require heuristic rules of thumb to determine what phases are needed and how the green time should be assigned to them. Because of the need for judgmental knowledge in solving this problem, this study used knowledge-based expert-system technology to develop a system for the phasing and signal timing (PHAST) of an isolated intersection. PHAST takes intersection geometry and traffic volume as input and generates an appropriate phase plan, cycle length, and green time for each phase. The phase plan and signal timing change when the intersection geometry or traffic demand changes. This paper describes the intended system functionality, the system architecture, the knowledge used to phase and time an intersection, the implementation of the system, and system verification. PHAST's performance was validated using phase plans and timings of several intersections.

  18. A knowledge-based system design/information tool

    NASA Technical Reports Server (NTRS)

    Allen, James G.; Sikora, Scott E.

    1990-01-01

    The objective of this effort was to develop a Knowledge Capture System (KCS) for the Integrated Test Facility (ITF) at the Dryden Flight Research Facility (DFRF). The DFRF is a NASA Ames Research Center (ARC) facility. This system was used to capture the design and implementation information for NASA's high angle-of-attack research vehicle (HARV), a modified F/A-18A. In particular, the KCS was used to capture specific characteristics of the design of the HARV fly-by-wire (FBW) flight control system (FCS). The KCS utilizes artificial intelligence (AI) knowledge-based system (KBS) technology. The KCS enables the user to capture the following characteristics of automated systems: the system design; the hardware (H/W) design and implementation; the software (S/W) design and implementation; and the utilities (electrical and hydraulic) design and implementation. A generic version of the KCS was developed which can be used to capture the design information for any automated system. The deliverable items for this project consist of the prototype generic KCS and an application, which captures selected design characteristics of the HARV FCS.

  19. Knowledge-based design of complex mechanical systems

    SciTech Connect

    Ishii, K.

    1988-01-01

    The recent development of Artificial Intelligence (AI) techniques allows the incorporation of qualitative aspects of design into computer aids. This thesis presents a framework for applying AI techniques to the design of complex mechanical systems, using a complex yet well-understood design example as a vehicle for the effort. The author first reviews how experienced designers use knowledge at various stages of system design. He then proposes a knowledge-based model of the design process and develops frameworks for applying knowledge engineering in order to construct a consultation system for designers. He proposes four such frameworks for use at different stages of design: (1) Design Compatibility Analysis (DCA) analyzes the compatibility of the designer's design alternatives with the design specification, (2) Initial Design Suggestion (IDS) provides the designer with reasonable initial estimates of the design variables, (3) Rule-based Sensitivity Analysis (RSA) guides the user through redesign, and (4) Active Constraint Deduction (ACD) identifies the bottlenecks of design using heuristic knowledge. These frameworks eliminate unnecessary iterations and allow the user to obtain a satisfactory solution rapidly.

  20. A fast Peptide Match service for UniProt Knowledgebase

    PubMed Central

    Chen, Chuming; Li, Zhiwen; Huang, Hongzhan; Suzek, Baris E.; Wu, Cathy H.

    2013-01-01

    Summary: We have developed a new web application for peptide matching using an Apache Lucene-based search engine. The Peptide Match service is designed to quickly retrieve all occurrences of a given query peptide from the UniProt Knowledgebase (UniProtKB), including isoforms. The matched proteins are shown in summary tables with rich annotations, including matched sequence region(s) and links to corresponding proteins in a number of proteomic/peptide spectral databases. The results are grouped by taxonomy and can be browsed by organism, taxonomic group or taxonomy tree. The service supports queries in which the isobaric residues leucine and isoleucine are treated as equivalent, an option for searching UniRef100 representative sequences, as well as dynamic queries to major proteomic databases. In addition to the web interface, we also provide RESTful web services. The underlying data are updated every 4 weeks in accordance with the UniProt releases. Availability: http://proteininformationresource.org/peptide.shtml Contact: chenc@udel.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23958731
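
    The RESTful services mentioned can in principle be called with any HTTP client. Because the record does not spell out the endpoint or parameter names, the URL and parameters in the sketch below are placeholders; the real values should be taken from the documentation at the project page listed above.

```python
# Minimal sketch of calling the Peptide Match RESTful service with a query
# peptide. The endpoint URL and parameter names are PLACEHOLDERS (the record
# above does not give them); substitute the values documented at
# http://proteininformationresource.org/peptide.shtml before running.
import urllib.parse
import urllib.request

PEPTIDE_MATCH_URL = "https://example.org/peptidematch/match"   # placeholder
params = {"peptides": "AAVEEGIVLGGGCALLR", "isoform": "true"}  # placeholder names

request_url = PEPTIDE_MATCH_URL + "?" + urllib.parse.urlencode(params)
with urllib.request.urlopen(request_url) as response:
    print(response.read().decode("utf-8")[:200])   # first part of the reply
```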

  1. Framework Support For Knowledge-Based Software Development

    NASA Astrophysics Data System (ADS)

    Huseth, Steve

    1988-03-01

    The advent of personal engineering workstations has brought substantial information processing power to the individual programmer. Advanced tools and environment capabilities supporting the software lifecycle are just beginning to become generally available. However, many of these tools are addressing only part of the software development problem by focusing on rapid construction of self-contained programs by a small group of talented engineers. Additional capabilities are required to support the development of large programming systems where a high degree of coordination and communication is required among large numbers of software engineers, hardware engineers, and managers. A major player in realizing these capabilities is the framework supporting the software development environment. In this paper we discuss our research toward a Knowledge-Based Software Assistant (KBSA) framework. We propose the development of an advanced framework containing a distributed knowledge base that can support the data representation needs of tools, provide environmental support for the formalization and control of the software development process, and offer a highly interactive and consistent user interface.

  2. Plant Protein Annotation in the UniProt Knowledgebase

    PubMed Central

    Schneider, Michel; Bairoch, Amos; Wu, Cathy H.; Apweiler, Rolf

    2005-01-01

    The Swiss-Prot, TrEMBL, Protein Information Resource (PIR), and DNA Data Bank of Japan (DDBJ) protein database activities have united to form the Universal Protein Resource (UniProt) Consortium. UniProt presents three database layers: the UniProt Archive, the UniProt Knowledgebase (UniProtKB), and the UniProt Reference Clusters. The UniProtKB consists of two sections: UniProtKB/Swiss-Prot (fully manually curated entries) and UniProtKB/TrEMBL (automated annotation, classification and extensive cross-references). New releases are published fortnightly. A specific Plant Proteome Annotation Program (http://www.expasy.org/sprot/ppap/) was initiated to cope with the increasing amount of data produced by the complete sequencing of plant genomes. Through UniProt, our aim is to provide the scientific community with a single, centralized, authoritative resource for protein sequences and functional information that will allow the plant community to fully explore and utilize the wealth of information available for both plant and nonplant model organisms. PMID:15888679

  3. SmartWeld: A knowledge-based approach to welding

    SciTech Connect

    Mitchiner, J.L.; Kleban, S.D.; Hess, B.V.; Mahin, K.W.; Messink, D.

    1996-07-01

    SmartWeld is a concurrent engineering system that integrates product design and processing decisions within an electronic desktop engineering environment. It is being developed to provide designers, process engineers, researchers and manufacturing technologists with transparent access to the right process information, process models, process experience and process experts, to realize "right the first time" manufacturing. Empirical understanding along with process models is synthesized within a knowledge-based system to identify robust fabrication procedures based on cost, schedule, and performance. Integration of process simulation tools with design tools enables the designer to assess a number of design and process options on the computer rather than on the manufacturing floor. Task models and generic process models are being embedded within user-friendly GUIs to more readily enable the customer to use the SmartWeld system and its software tool set without extensive training. The integrated system architecture under development provides interactive communications and shared application capabilities across a variety of workstation and PC-type platforms, either locally or at remote sites.

  4. Knowledge-based graphical interfaces for presenting technical information

    NASA Technical Reports Server (NTRS)

    Feiner, Steven

    1988-01-01

    Designing effective presentations of technical information is extremely difficult and time-consuming. Moreover, the combination of increasing task complexity and declining job skills makes the need for high-quality technical presentations especially urgent. We believe that this need can ultimately be met through the development of knowledge-based graphical interfaces that can design and present technical information. Since much material is most naturally communicated through pictures, our work has stressed the importance of well-designed graphics, concentrating on generating pictures and laying out displays containing them. We describe APEX, a testbed picture generation system that creates sequences of pictures that depict the performance of simple actions in a world of 3D objects. Our system supports rules for determining automatically the objects to be shown in a picture, the style and level of detail with which they should be rendered, the method by which the action itself should be indicated, and the picture's camera specification. We then describe work on GRIDS, an experimental display layout system that addresses some of the problems in designing displays containing these pictures, determining the position and size of the material to be presented.

  5. Real-time application of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Brumbaugh, Randal W.; Duke, Eugene L.

    1989-01-01

    The Rapid Prototyping Facility (RPF) was developed to meet a need for a facility which allows flight systems concepts to be prototyped in a manner which allows for real-time flight test experience with a prototype system. This need was focused during the development and demonstration of the expert system flight status monitor (ESFSM). The ESFSM was a prototype system developed on a LISP machine, but lack of a method for progressive testing and problem identification led to an impractical system. The RPF concept was developed, and the ATMS designed to exercise its capabilities. The ATMS Phase 1 demonstration provided a practical vehicle for testing the RPF, as well as a useful tool. ATMS Phase 2 development continues. A dedicated F-18 is expected to be assigned for facility use in late 1988, with RAV modifications. A knowledge-based autopilot is being developed using the RPF. This is a system which provides elementary autopilot functions and is intended as a vehicle for testing expert system verification and validation methods. An expert system propulsion monitor is being prototyped. This system provides real-time assistance to an engineer monitoring a propulsion system during a flight.

  6. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that enables the intelligent use of corporate experience and information and helps to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing both the reliance on individual knowledge and the subjectivity of the decision-making process. The benefits provided by the system favor better performance of construction projects. PMID:24453925

  7. KBGIS-II: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, Terence; Peuquet, Donna; Menon, Sudhakar; Agarwal, Pankaj

    1986-01-01

    The architecture and working of a recently implemented Knowledge-Based Geographic Information System (KBGIS-II), designed to satisfy several general criteria for the GIS, is described. The system has four major functions including query-answering, learning and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial object language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is performing all its designated tasks successfully. Future reports will relate performance characteristics of the system.

  8. KBGIS-2: A knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Smith, T.; Peuquet, D.; Menon, S.; Agarwal, P.

    1986-01-01

    The architecture and working of a recently implemented knowledge-based geographic information system (KBGIS-2) that was designed to satisfy several general criteria for the geographic information system are described. The system has four major functions that include query-answering, learning, and editing. The main query finds constrained locations for spatial objects that are describable in a predicate-calculus based spatial objects language. The main search procedures include a family of constraint-satisfaction procedures that use a spatial object knowledge base to search efficiently for complex spatial objects in large, multilayered spatial data bases. These data bases are represented in quadtree form. The search strategy is designed to reduce the computational cost of search in the average case. The learning capabilities of the system include the addition of new locations of complex spatial objects to the knowledge base as queries are answered, and the ability to learn inductively definitions of new spatial objects from examples. The new definitions are added to the knowledge base by the system. The system is currently performing all its designated tasks successfully, although currently implemented on inadequate hardware. Future reports will detail the performance characteristics of the system, and various new extensions are planned in order to enhance the power of KBGIS-2.

  9. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
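
    MYCIN's inexact reasoning combines certainty factors from independent pieces of evidence with a fixed formula. The sketch below restates that standard, textbook combination rule; it is not code from the paper's CLIPS implementation, and the example certainty values are invented.

```python
# Minimal sketch of MYCIN's certainty-factor combination rule, used here to
# merge the evidence contributed by several fired rules for one video class.
# This is the standard textbook rule, not the paper's implementation.
def combine_cf(cf1: float, cf2: float) -> float:
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Three rules vote for "basketball" with different strengths (invented values):
evidence = [0.6, 0.4, -0.2]
cf = 0.0
for e in evidence:
    cf = combine_cf(cf, e)
print(round(cf, 3))   # aggregated certainty for the class
```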

  10. Knowledge-based inference engine for online video dissemination

    NASA Astrophysics Data System (ADS)

    Zhou, Wensheng; Kuo, C.-C. Jay

    2000-10-01

    To facilitate easy access to the rich information of multimedia over the Internet, we develop a knowledge-based classification system that supports automatic indexing and filtering based on semantic concepts for the dissemination of on-line real-time media. Automatic segmentation, annotation and summarization of media for fast information browsing and updating are achieved at the same time. In the proposed system, a real-time scene-change detection proxy performs an initial video structuring process by splitting a video clip into scenes. Motion and visual features are extracted in real time for every detected scene by using online feature extraction proxies. Higher semantics are then derived through a joint use of low-level features along with inference rules in the knowledge base. Inference rules are derived through a supervised learning process based on representative samples. On-line media filtering based on semantic concepts becomes possible by using the proposed video inference engine. Video streams are either blocked or sent to certain channels depending on whether or not the video stream matches the user's profile. The proposed system is extensively evaluated by applying the engine to video of basketball games.

  11. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2000-12-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic content, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.

  12. Knowledge-based control of an adaptive interface

    NASA Technical Reports Server (NTRS)

    Lachman, Roy

    1989-01-01

    The analysis, development strategy, and preliminary design for an intelligent, adaptive interface are reported. The design philosophy couples knowledge-based system technology with standard human factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent interface prototype because text-processing software can be developed initially as fully functional but limited to a small set of commands; the program's complexity can then be increased incrementally. The rule base covers the operator's behavior and three types of instructions to the underlying application software. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.
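
    The match-and-fire cycle described above (antecedents checked against a data base, consequents emitted as commands) can be pictured with a very small forward-chaining loop. The facts, rules and command names in the sketch below are invented for illustration and are not taken from the system described.

```python
# Minimal sketch of the inference cycle described above: rule antecedents are
# checked against a fact base describing the operator's recent behaviour, and
# the consequents of fired rules are emitted as commands to the underlying
# application. Facts, rules and command names are invented for illustration.
facts = {"novice_user": True, "repeated_undo": True, "in_text_editor": True}

rules = [
    ({"novice_user", "repeated_undo"}, "show_editing_hints"),
    ({"in_text_editor", "novice_user"}, "enable_verbose_prompts"),
    ({"expert_user"}, "hide_toolbars"),
]

def fire(facts, rules):
    commands = []
    for antecedents, consequent in rules:
        if all(facts.get(a, False) for a in antecedents):
            commands.append(consequent)
    return commands

print(fire(facts, rules))   # commands sent to the application software
```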

  13. ISPE: A knowledge-based system for fluidization studies

    SciTech Connect

    Reddy, S.

    1991-01-01

    Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) execution of the simulation; and (3) analysis of the results of the simulation to determine if all specified "goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant that performs a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator for the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
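
    The prepare-run-analyze-modify cycle described above is easy to picture as a small driver loop. The sketch below is generic: the helper functions, the input format and the parameter being tuned are invented, and it does not use ASPEN's actual input format or any IPSE internals.

```python
# Minimal sketch of the iterative simulation cycle: write an input file, run
# the simulator, check the results against the goals, and adjust the inputs
# until the goals are met. Helper names, the input format and the tuned
# parameter are invented; this is not ASPEN's format or IPSE code.
def write_input(path, params):
    with open(path, "w") as f:
        for key, value in params.items():
            f.write(f"{key} = {value}\n")

def run_simulator(path):
    # Stand-in for invoking the real simulator executable on the input file;
    # in practice this would be a subprocess call to the simulator.
    return f"simulated {path}\n"

def goals_met(output, params):
    return params["reactor_temp"] >= 750        # invented acceptance test

params = {"reactor_temp": 700, "flow_rate": 1.2}
for iteration in range(10):
    write_input("case.inp", params)
    output = run_simulator("case.inp")
    if goals_met(output, params):
        print(f"goals satisfied after {iteration + 1} run(s)")
        break
    params["reactor_temp"] += 25                # modify inputs and repeat
```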

  14. Plant protein annotation in the UniProt Knowledgebase.

    PubMed

    Schneider, Michel; Bairoch, Amos; Wu, Cathy H; Apweiler, Rolf

    2005-05-01

    The Swiss-Prot, TrEMBL, Protein Information Resource (PIR), and DNA Data Bank of Japan (DDBJ) protein database activities have united to form the Universal Protein Resource (UniProt) Consortium. UniProt presents three database layers: the UniProt Archive, the UniProt Knowledgebase (UniProtKB), and the UniProt Reference Clusters. The UniProtKB consists of two sections: UniProtKB/Swiss-Prot (fully manually curated entries) and UniProtKB/TrEMBL (automated annotation, classification and extensive cross-references). New releases are published fortnightly. A specific Plant Proteome Annotation Program (http://www.expasy.org/sprot/ppap/) was initiated to cope with the increasing amount of data produced by the complete sequencing of plant genomes. Through UniProt, our aim is to provide the scientific community with a single, centralized, authoritative resource for protein sequences and functional information that will allow the plant community to fully explore and utilize the wealth of information available for both plant and non-plant model organisms. PMID:15888679

  15. FunSecKB: the Fungal Secretome KnowledgeBase

    PubMed Central

    Lum, Gengkon; Min, Xiang Jia

    2011-01-01

    The Fungal Secretome KnowledgeBase (FunSecKB) provides a resource of secreted fungal proteins, i.e. secretomes, identified from all available fungal protein data in the NCBI RefSeq database. The secreted proteins were identified using a well evaluated computational protocol which includes SignalP, WolfPsort and Phobius for signal peptide or subcellular location prediction, TMHMM for identifying membrane proteins, and PS-Scan for identifying endoplasmic reticulum (ER) target proteins. The entries were mapped to the UniProt database and any annotations of subcellular locations that were either manually curated or computationally predicted were included in FunSecKB. Using a web-based user interface, the database is searchable, browsable and downloadable by using NCBI’s RefSeq accession or gi number, UniProt accession number, keyword or by species. A BLAST utility was integrated to allow users to query the database by sequence similarity. A user submission tool was implemented to support community annotation of subcellular locations of fungal proteins. With the complete fungal data from RefSeq and associated web-based tools, FunSecKB will be a valuable resource for exploring the potential applications of fungal secreted proteins. Database URL: http://proteomics.ysu.edu/secretomes/fungi.php PMID:21300622
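
    The protocol above can be pictured as a conjunction of per-tool predictions. The sketch below shows only that combination logic over already-parsed predictor outputs; the field names are invented stand-ins, and no attempt is made to reproduce the tools' own command lines, output formats, or FunSecKB's exact decision rules.

```python
# Minimal sketch of a secretome filter combining parsed predictor outputs:
# keep a protein as "secreted" only if signal-peptide predictors agree, no
# transmembrane helices are predicted, the predicted location is
# extracellular, and no ER-retention signal is found. Field names are
# invented stand-ins for parsed SignalP/Phobius/WolfPsort/TMHMM/PS-Scan output.
def is_secreted(p: dict) -> bool:
    return (
        p["signalp_signal_peptide"]
        and p["phobius_signal_peptide"]
        and p["wolfpsort_location"] == "extracellular"
        and p["tmhmm_helices"] == 0
        and not p["er_retention_signal"]
    )

proteins = [
    {"id": "XP_0001", "signalp_signal_peptide": True, "phobius_signal_peptide": True,
     "wolfpsort_location": "extracellular", "tmhmm_helices": 0, "er_retention_signal": False},
    {"id": "XP_0002", "signalp_signal_peptide": True, "phobius_signal_peptide": False,
     "wolfpsort_location": "cytoplasm", "tmhmm_helices": 3, "er_retention_signal": False},
]

secretome = [p["id"] for p in proteins if is_secreted(p)]
print(secretome)   # -> ['XP_0001']
```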

  16. Knowledge-based assistant for ultrasonic inspection in metals

    NASA Astrophysics Data System (ADS)

    Franklin, Reynold; Halabe, Udaya B.

    1997-12-01

    Ultrasonic testing is a popular nondestructive technique for detecting flaws in metals, composites and other materials. A major limitation of this technique for successful field implementation is the need for skilled labor to identify an appropriate testing methodology and conduct the inspection. A knowledge-based assistant that can help the inspector in choosing a suitable testing methodology would greatly reduce the cost of inspection while maintaining reliability. Therefore, a rule-based decision logic that can incorporate the expertise of a skilled operator for choosing a suitable ultrasonic configuration and testing procedure for a given application is explored and reported in this paper. A personal computer (PC) based expert system shell, VP Expert, is used to encode the rules and assemble the knowledge to address the different methods of ultrasonic inspection for metals. The expert system is configured in a question-and-answer format. Since several factors (such as frequency, couplant, sensors, etc.) influence the inspection, appropriate decisions have to be made about each factor depending on the type of inspection method and the intended use of the metal. This knowledge base will help in identifying the methodology for detecting flaws and cracks, making thickness measurements, etc., which will lead to increased safety.

  17. Automatic tumor segmentation using knowledge-based techniques.

    PubMed

    Clark, M C; Hall, L O; Goldgof, D B; Velthuizen, R; Murtagh, F R; Silbiger, M S

    1998-04-01

    A system that automatically segments and labels glioblastoma-multiforme tumors in magnetic resonance images (MRI's) of the human brain is presented. The MRI's consist of T1-weighted, proton density, and T2-weighted feature images and are processed by a system which integrates knowledge-based (KB) techniques with multispectral analysis. Initial segmentation is performed by an unsupervised clustering algorithm. The segmented image, along with cluster centers for each class are provided to a rule-based expert system which extracts the intracranial region. Multispectral histogram analysis separates suspected tumor from the rest of the intracranial region, with region analysis used in performing the final tumor labeling. This system has been trained on three volume data sets and tested on thirteen unseen volume data sets acquired from a single MRI system. The KB tumor segmentation was compared with supervised, radiologist-labeled "ground truth" tumor volumes and supervised k-nearest neighbors tumor segmentations. The results of this system generally correspond well to ground truth, both on a per slice basis and more importantly in tracking total tumor volume during treatment over time. PMID:9688151
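
    As a rough illustration of the "unsupervised clustering followed by knowledge-based rules" pipeline described above, the sketch below clusters synthetic multispectral pixel vectors and then applies a simple threshold rule to the cluster centers. The data, thresholds and rule are invented; this is not the paper's trained system or its actual rules.

```python
# Minimal sketch of the pipeline described above: cluster multispectral pixel
# vectors (T1, PD, T2 intensities) without supervision, then apply a simple
# knowledge-based rule to the cluster centers to flag suspected tumor classes.
# The synthetic data and the threshold rule are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
background = rng.normal([0.2, 0.3, 0.3], 0.05, size=(300, 3))
tissue     = rng.normal([0.5, 0.5, 0.4], 0.05, size=(300, 3))
bright_t2  = rng.normal([0.5, 0.6, 0.9], 0.05, size=(100, 3))  # tumor-like
pixels = np.vstack([background, tissue, bright_t2])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)

# Invented knowledge-based rule: clusters whose center is bright on T2 but
# not unusually bright on T1 are flagged as suspected tumor.
for label, center in enumerate(kmeans.cluster_centers_):
    t1, pd, t2 = center
    suspected = t2 > 0.7 and t1 < 0.6
    print(f"cluster {label}: center={center.round(2)} suspected_tumor={suspected}")
```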

  18. Compiling knowledge-based systems from KEE to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Bock, Conrad; Feldman, Roy

    1990-01-01

    The dominant technology for developing AI applications is to work in a multi-mechanism, integrated, knowledge-based system (KBS) development environment. Unfortunately, systems developed in such environments are inappropriate for delivering many applications - most importantly, they carry the baggage of the entire Lisp environment and are not written in conventional languages. One resolution of this problem would be to compile applications from complex environments to conventional languages. Described here are the first efforts to develop a system for compiling KBSs developed in KEE to Ada (trademark). This system is called KATYDID, for KEE/Ada Translation Yields Development Into Delivery. KATYDID includes early prototypes of a run-time KEE core (object-structure) library module for Ada, and translation mechanisms for knowledge structures, rules, and Lisp code to Ada. Using these tools, part of a simple expert system was compiled (not quite automatically) to run in a purely Ada environment. This experience has given us various insights into Ada as an artificial intelligence programming language, potential solutions to some of the engineering difficulties encountered in the early work, and inspiration for future system development.

  19. Designing the Cloud-based DOE Systems Biology Knowledgebase

    SciTech Connect

    Lansing, Carina S.; Liu, Yan; Yin, Jian; Corrigan, Abigail L.; Guillen, Zoe C.; Kleese van Dam, Kerstin; Gorton, Ian

    2011-09-01

    Systems Biology research, even more than many other scientific domains, is becoming increasingly data-intensive. Not only have advances in experimental and computational technologies led to an exponential increase in scientific data volumes and their complexity, but increasingly such databases themselves are providing the basis for new scientific discoveries. To engage effectively with these community resources, integrated analysis, synthesis and simulation software is needed, regularly supported by scientific workflows. In order to provide a more collaborative, community-driven research environment for this heterogeneous setting, the Department of Energy (DOE) has decided to develop a federated, cloud-based cyberinfrastructure - the Systems Biology Knowledgebase (Kbase). Pacific Northwest National Laboratory (PNNL), with its long tradition in data-intensive science, led two of the five initial pilot projects, which focused on defining and testing the basic federated, cloud-based system architecture and developing a prototype implementation. Community-wide accessibility of biological data and the capability to integrate and analyze these data within their changing research context were seen as key technical functionalities the Kbase needed to enable. In this paper we describe the results of our investigations into the design of a cloud-based federated infrastructure for: (1) semantics-driven data discovery, access and integration; (2) data annotation, publication and sharing; (3) workflow-enabled data analysis; and (4) project-based collaborative working. We describe our approach, exemplary use cases and our prototype implementation that demonstrates the feasibility of this approach.

  20. Knowledge-Based Query Construction Using the CDSS Knowledge Base for Efficient Evidence Retrieval.

    PubMed

    Afzal, Muhammad; Hussain, Maqbool; Ali, Taqdir; Hussain, Jamil; Khan, Wajahat Ali; Lee, Sungyoung; Kang, Byeong Ho

    2015-01-01

    Finding appropriate evidence to support clinical practices is always challenging, and the construction of a query to retrieve such evidence is a fundamental step. Typically, evidence is found using manual or semi-automatic methods, which are time-consuming and sometimes make it difficult to construct knowledge-based complex queries. To overcome the difficulty in constructing knowledge-based complex queries, we utilized the knowledge base (KB) of the clinical decision support system (CDSS), which has the potential to provide sufficient contextual information. To automatically construct knowledge-based complex queries, we designed methods to parse rule structure in KB of CDSS in order to determine an executable path and extract the terms by parsing the control structures and logic connectives used in the logic. The automatically constructed knowledge-based complex queries were executed on the PubMed search service to evaluate the results on the reduction of retrieved citations with high relevance. The average number of citations was reduced from 56,249 citations to 330 citations with the knowledge-based query construction approach, and relevance increased from 1 term to 6 terms on average. The ability to automatically retrieve relevant evidence maximizes efficiency for clinicians in terms of time, based on feedback collected from clinicians. This approach is generally useful in evidence-based medicine, especially in ambient assisted living environments where automation is highly important. PMID:26343669
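
    Once terms have been extracted from a rule's executable path, turning them into a PubMed query is a matter of joining them and calling NCBI's public E-utilities. The sketch below shows only that last step; the extracted terms are invented, and the esearch endpoint shown is NCBI's public service, not a component of the CDSS described above.

```python
# Minimal sketch of the final step described above: terms extracted from a
# CDSS rule are joined into a boolean query and submitted to PubMed via
# NCBI's public E-utilities esearch service. The example terms are invented;
# only the esearch URL is a real, public endpoint.
import json
import urllib.parse
import urllib.request

extracted_terms = ["type 2 diabetes", "metformin", "renal impairment"]
query = " AND ".join(f'"{t}"' for t in extracted_terms)

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": query,
    "retmode": "json",
    "retmax": 20,
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

with urllib.request.urlopen(url) as response:
    result = json.load(response)

print(result["esearchresult"]["count"], "citations matched")
```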

  1. Knowledge-Based Query Construction Using the CDSS Knowledge Base for Efficient Evidence Retrieval

    PubMed Central

    Afzal, Muhammad; Hussain, Maqbool; Ali, Taqdir; Hussain, Jamil; Khan, Wajahat Ali; Lee, Sungyoung; Kang, Byeong Ho

    2015-01-01

    Finding appropriate evidence to support clinical practices is always challenging, and the construction of a query to retrieve such evidence is a fundamental step. Typically, evidence is found using manual or semi-automatic methods, which are time-consuming and sometimes make it difficult to construct knowledge-based complex queries. To overcome the difficulty in constructing knowledge-based complex queries, we utilized the knowledge base (KB) of the clinical decision support system (CDSS), which has the potential to provide sufficient contextual information. To automatically construct knowledge-based complex queries, we designed methods to parse rule structure in KB of CDSS in order to determine an executable path and extract the terms by parsing the control structures and logic connectives used in the logic. The automatically constructed knowledge-based complex queries were executed on the PubMed search service to evaluate the results on the reduction of retrieved citations with high relevance. The average number of citations was reduced from 56,249 citations to 330 citations with the knowledge-based query construction approach, and relevance increased from 1 term to 6 terms on average. The ability to automatically retrieve relevant evidence maximizes efficiency for clinicians in terms of time, based on feedback collected from clinicians. This approach is generally useful in evidence-based medicine, especially in ambient assisted living environments where automation is highly important. PMID:26343669

  2. Detailed Design of the Heliophysics Event Knowledgebase (HEK)

    NASA Astrophysics Data System (ADS)

    Somani, Ankur; Seguin, R.; Timmons, R.; Freeland, S.; Hurlburt, N.; Kobashi, A.; Jaffey, A.

    2010-05-01

    We present the Heliophysics Event Registry (HER) and the Heliophysics Coverage Registry (HCR), which serve as two components of the Heliophysics Event Knowledgebase (HEK). Using standardized XML formats built upon the IVOA VOEvent specification, events can be ingested, stored, and later searched upon. Various web services and SolarSoft routines are available to aid in these functions. One source of events for the HEK is an automated Event Detection System (EDS) that continuously runs feature finding modules on SDO data. Modules are primarily supplied by the Smithsonian Astrophysical Observatory-led Feature Finding Team. The distributed system will keep up with SDO's data rate and issue space weather alerts in near-real time. Some modules will be run on all data while others are run in response to certain solar phenomena found by other modules in the system. Panorama is a software tool used for rapid visualization of large volumes of solar image data in multiple channels/wavelengths. With the EVACS front-end GUI tool, Panorama allows the user to, in real-time, change channel pixel scaling, weights, alignment, blending and colorization of the data. The user can also easily create WYSIWYG movies and launch the Annotator tool to describe events and features the user observes in the data. Panorama can also be used to drive clustered HiperSpace walls using the CGLX toolkit. The Event Viewer and Control Software (EVACS) provides a GUI with which the user can search both the HER and HCR. By specifying a start and end time and selecting the types of events and instruments that are of interest, EVACS will display the events on a full disk image of the sun while displaying more detailed information for the events. As mentioned, the user can also launch Panorama via EVACS.

  3. Autonomous Cryogenic Load Operations: Knowledge-Based Autonomous Test Engineer

    NASA Technical Reports Server (NTRS)

    Schrading, J. Nicolas

    2013-01-01

    The Knowledge-Based Autonomous Test Engineer (KATE) program has a long history at KSC. Now a part of the Autonomous Cryogenic Load Operations (ACLO) mission, this software system has been sporadically developed over the past 20 years. Originally designed to provide health and status monitoring for a simple water-based fluid system, it was proven to be a capable autonomous test engineer for determining sources of failure in the system. As part of a new goal to provide this same anomaly-detection capability for a complicated cryogenic fluid system, software engineers, physicists, interns and KATE experts are working to upgrade the software capabilities and graphical user interface. Much progress was made during this effort to improve KATE. A display of the entire cryogenic system's graph, with nodes for components and edges for their connections, was added to the KATE software. A searching functionality was added to the new graph display, so that users could easily center their screen on specific components. The GUI was also modified so that it displayed information relevant to the new project goals. In addition, work began on adding new pneumatic and electronic subsystems into the KATE knowledge base, so that it could provide health and status monitoring for those systems. Finally, many fixes for bugs, memory leaks, and memory errors were implemented and the system was moved into a state in which it could be presented to stakeholders. Overall, the KATE system was improved and necessary additional features were added so that a presentation of the program and its functionality in the next few months would be a success.

  4. MetaShare: Enabling Knowledge-Based Data Management

    NASA Astrophysics Data System (ADS)

    Pennington, D. D.; Salayandia, L.; Gates, A.; Osuna, F.

    2013-12-01

    MetaShare is a free and open source knowledge-based system for supporting data management planning, now required by some agencies and publishers. MetaShare supports users as they describe the types of data they will collect, expected standards, and expected policies for sharing. MetaShare's semantic model captures relationships between disciplines, tools, data types, data formats, and metadata standards. As the user plans their data management activities, MetaShare recommends choices based on practices and decisions from a community that has used the system for similar purposes, and extends the knowledge base to capture new relationships. The MetaShare knowledge base is being seeded with information for geoscience and environmental science domains, and is currently undergoing testing at the University of Texas at El Paso. Through time and usage, it is expected to grow to support a variety of research domains, enabling community-based learning of data management practices. Knowledge of a user's choices during the planning phase can be used to support other tasks in the data life cycle, e.g., collecting, disseminating, and archiving data. A key barrier to scientific data sharing is the lack of sufficient metadata that provides context under which data were collected. The next phase of MetaShare development will automatically generate data collection instruments with embedded metadata and semantic annotations based on the information provided during the planning phase. While not comprehensive, this metadata will be sufficient for discovery and will enable users to focus on more detailed descriptions of their data. Details are available at: Salayandia, L., Pennington, D., Gates, A., and Osuna, F. (accepted). MetaShare: From data management plans to knowledge base systems. AAAI Fall Symposium Series Workshop on Discovery Informatics, November 15-17, 2013, Arlington, VA.

  5. A knowledge-based approach to automated flow-field zoning for computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1989-01-01

    An automated three-dimensional zonal grid generation capability for computational fluid dynamics is shown through the development of a demonstration computer program capable of automatically zoning the flow field of representative two-dimensional (2-D) aerodynamic configurations. The applicability of a knowledge-based programming approach to the domain of flow-field zoning is examined. Several aspects of flow-field zoning make the application of knowledge-based techniques challenging: the need for perceptual information, the role of individual bias in the design and evaluation of zonings, and the fact that the zoning process is modeled as a constructive, design-type task (for which there are relatively few examples of successful knowledge-based systems in any domain). Engineering solutions to the problems arising from these aspects are developed, and a demonstration system is implemented which can design, generate, and output flow-field zonings for representative 2-D aerodynamic configurations.

  6. A formal approach to validation and verification for knowledge-based control systems

    NASA Technical Reports Server (NTRS)

    Castore, Glen

    1987-01-01

    As control systems become more complex in response to desires for greater system flexibility, performance and reliability, the promise is held out that artificial intelligence might provide the means for building such systems. An obstacle to the use of symbolic processing constructs in this domain is the need for verification and validation (V and V) of the systems. Techniques currently in use do not seem appropriate for knowledge-based software. An outline of a formal approach to V and V for knowledge-based control systems is presented.

  7. PRAIS: Distributed, real-time knowledge-based systems made easy

    NASA Technical Reports Server (NTRS)

    Goldstein, David G.

    1990-01-01

    This paper discusses an architecture for real-time, distributed (parallel) knowledge-based systems called the Parallel Real-time Artificial Intelligence System (PRAIS). PRAIS strives for transparently parallelizing production (rule-based) systems, even when under real-time constraints. PRAIS accomplishes these goals by incorporating a dynamic task scheduler, operating system extensions for fact handling, and message-passing among multiple copies of CLIPS executing on a virtual blackboard. This distributed knowledge-based system tool uses the portability of CLIPS and common message-passing protocols to operate over a heterogeneous network of processors.

  8. A national knowledge-based crop recognition in Mediterranean environment

    NASA Astrophysics Data System (ADS)

    Cohen, Yafit; Shoshany, Maxim

    2002-08-01

    Population growth, urban expansion, land degradation, civil strife and war may place plant natural resources for food and agriculture at risk. Crop and yield monitoring is basic information necessary for wise management of these resources. Satellite remote sensing techniques have proven to be cost-effective in widespread agricultural lands in Africa, America, Europe and Australia. However, they have had limited success in Mediterranean regions that are characterized by a high rate of spatio-temporal ecological heterogeneity and high fragmentation of farming lands. An integrative knowledge-based approach is needed for this purpose, which combines imagery and geographical data within the framework of an intelligent recognition system. This paper describes the development of such a crop recognition methodology and its application to an area that comprises approximately 40% of the cropland in Israel. This area contains eight crop types that represent 70% of Israeli agricultural production. Multi-date Landsat TM images representing seasonal vegetation cover variations were converted to normalized difference vegetation index (NDVI) layers. Field boundaries were delineated by merging Landsat data with SPOT-panchromatic images. Crop recognition was then achieved in two-phases, by clustering multi-temporal NDVI layers using unsupervised classification, and then applying 'split-and-merge' rules to these clusters. These rules were formalized through comprehensive learning of relationships between crop types, imagery properties (spectral and NDVI) and auxiliary data including agricultural knowledge, precipitation and soil types. Assessment of the recognition results using ground data from the Israeli Agriculture Ministry indicated an average recognition accuracy exceeding 85% which accounts for both omission and commission errors. The two-phase strategy implemented in this study is apparently successful for heterogeneous regions. This is due to the fact that it allows
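
    The first recognition phase described above (multi-temporal NDVI layers clustered without supervision) can be sketched roughly as follows; the band arrays, number of dates, and cluster count are assumptions for illustration, not values from the study.

      # Sketch of the first recognition phase: multi-date NDVI layers clustered
      # without supervision. Band arrays and the cluster count are illustrative.
      import numpy as np
      from sklearn.cluster import KMeans

      def ndvi(nir, red):
          """Normalized difference vegetation index, guarded against divide-by-zero."""
          return (nir - red) / np.maximum(nir + red, 1e-6)

      # pretend multi-date imagery: list of (red, nir) band pairs, each H x W
      rng = np.random.default_rng(0)
      dates = [(rng.random((100, 100)), rng.random((100, 100))) for _ in range(4)]

      # stack one NDVI layer per acquisition date -> (H*W, n_dates) feature matrix
      ndvi_stack = np.stack([ndvi(nir, red) for red, nir in dates], axis=-1)
      features = ndvi_stack.reshape(-1, ndvi_stack.shape[-1])

      # unsupervised clustering of the multi-temporal NDVI profiles
      labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(features)
      cluster_map = labels.reshape(ndvi_stack.shape[:2])
      # 'split-and-merge' rules drawing on crop calendars, soil and rainfall data
      # would then be applied to these clusters to assign crop types.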

  9. Limitations of Levels, Learning Outcomes and Qualifications as Drivers Towards a More Knowledge-Based Society

    ERIC Educational Resources Information Center

    Brown, Alan

    2008-01-01

    National (and European) qualifications frameworks, the specification of learning outcomes and grand targets like the Lisbon goals of increasing the supply of graduates in Europe in order to achieve a more knowledge-based society are all predicated upon the idea of moving people through to higher and well-defined levels of skills, knowledge and…

  10. End-user oriented language to develop knowledge-based expert systems

    SciTech Connect

    Ueno, H.

    1983-01-01

    A description is given of the COMEX (compact knowledge based expert system) expert system language for application-domain users who want to develop a knowledge-based expert system by themselves. The COMEX system was written in FORTRAN and works on a microcomputer. COMEX is being used in several application domains such as medicine, education, and industry. 7 references.

  11. A Comparison of Books and Hypermedia for Knowledge-based Sports Coaching.

    ERIC Educational Resources Information Center

    Vickers, Joan N.; Gaines, Brian R.

    1988-01-01

    Summarizes and illustrates the knowledge-based approach to instructional material design. A series of sports coaching handbooks and hypermedia presentations of the same material are described and the different instantiations of the knowledge and training structures are compared. Figures show knowledge structures for badminton and the architecture…

  12. Learning and Innovation in the Knowledge-Based Economy: Beyond Clusters and Qualifications

    ERIC Educational Resources Information Center

    James, Laura; Guile, David; Unwin, Lorna

    2013-01-01

    For over a decade policy-makers have claimed that advanced industrial societies should develop a knowledge-based economy (KBE) in response to economic globalisation and the transfer of manufacturing jobs to lower cost countries. In the UK, this vision shaped New Labour's policies for vocational education and training (VET), higher education…

  13. Case-based reasoning for space applications: Utilization of prior experience in knowledge-based systems

    NASA Technical Reports Server (NTRS)

    King, James A.

    1987-01-01

    The goal is to explain Case-Based Reasoning as a vehicle to establish knowledge-based systems based on experimental reasoning for possible space applications. This goal will be accomplished through an examination of reasoning based on prior experience in a sample domain, and also through a presentation of proposed space applications which could utilize Case-Based Reasoning techniques.

  14. Learning Spaces: An ICT-Enabled Model of Future Learning in the Knowledge-Based Society

    ERIC Educational Resources Information Center

    Punie, Yves

    2007-01-01

    This article presents elements of a future vision of learning in the knowledge-based society which is enabled by ICT. It is not only based on extrapolations from trends and drivers that are shaping learning in Europe but also consists of a holistic attempt to envisage and anticipate future learning needs and requirements in the KBS. The "learning…

  15. A Transactional Approach to Children's Learning in a Knowledge-Based Society.

    ERIC Educational Resources Information Center

    Seng, Seok-Hoon

    The 21st century promises to make very different demands on our children and schools in a knowledge-based society. A slow but dynamic shift has been occurring in the Singapore educational system toward a learning nation and thinking school ethos. In the midst of this change, children will need to acquire a new set of skills. They will need to be…

  16. The Spread of Contingent Work in the Knowledge-Based Economy

    ERIC Educational Resources Information Center

    Szabo, Katalin; Negyesi, Aron

    2005-01-01

    Permanent employment, typical of industrial societies and bolstered by numerous social guaranties, has been declining in the past 2 decades. There has been a steady expansion of various forms of contingent work. The decomposition of traditional work is a logical consequence of the characteristic patterns of the knowledge-based economy. According…

  17. Hidden Knowledge: Working-Class Capacity in the "Knowledge-Based Economy"

    ERIC Educational Resources Information Center

    Livingstone, David W.; Sawchuck, Peter H.

    2005-01-01

    The research reported in this paper attempts to document the actual learning practices of working-class people in the context of the much heralded "knowledge-based economy." Our primary thesis is that working-class peoples' indigenous learning capacities have been denied, suppressed, degraded or diverted within most capitalist schooling, adult…

  18. A knowledge-based object recognition system for applications in the space station

    NASA Astrophysics Data System (ADS)

    Dhawan, Atam P.

    1988-02-01

    A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching for the recognition of 3D objects in the two-dimensional (2D) image for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and the feature extraction are completed. The data structure of the primitive viewing knowledge-base (PVKB) is also completed. Algorithms and programs based on attribute-trees matching for decomposing the segmented data into valid primitives were developed. The frame-based structural and relational descriptions of some objects were created and stored in a knowledge-base. This knowledge-base of frame-based descriptions was developed on the MICROVAX-AI microcomputer in a LISP environment. The simulated 3D scene of simple non-overlapping objects as well as real camera data of images of 3D objects of low complexity have been successfully interpreted.

  19. Small Knowledge-Based Systems in Education and Training: Something New Under the Sun.

    ERIC Educational Resources Information Center

    Wilson, Brent G.; Welsh, Jack R.

    1986-01-01

    Discusses artificial intelligence, robotics, natural language processing, and expert or knowledge-based systems research; examines two large expert systems, MYCIN and XCON; and reviews the resources required to build large expert systems and affordable smaller systems (intelligent job aids) for training. Expert system vendors and products are…

  20. Universities and the Knowledge-Based Economy: Perceptions from a Developing Country

    ERIC Educational Resources Information Center

    Bano, Shah; Taylor, John

    2015-01-01

    This paper considers the role of universities in the creation of a knowledge-based economy (KBE) in a developing country, Pakistan. Some developing countries have moved quickly to develop a KBE, but progress in Pakistan is much slower. Higher education plays a crucial role as part of the triple helix model for innovation. Based on the perceptions…

  1. A knowledge-based object recognition system for applications in the space station

    NASA Technical Reports Server (NTRS)

    Dhawan, Atam P.

    1988-01-01

    A knowledge-based three-dimensional (3D) object recognition system is being developed. The system uses primitive-based hierarchical relational and structural matching for the recognition of 3D objects in the two-dimensional (2D) image for interpretation of the 3D scene. At present, the pre-processing, low-level preliminary segmentation, rule-based segmentation, and the feature extraction are completed. The data structure of the primitive viewing knowledge-base (PVKB) is also completed. Algorithms and programs based on attribute-trees matching for decomposing the segmented data into valid primitives were developed. The frame-based structural and relational descriptions of some objects were created and stored in a knowledge-base. This knowledge-base of frame-based descriptions was developed on the MICROVAX-AI microcomputer in a LISP environment. The simulated 3D scene of simple non-overlapping objects as well as real camera data of images of 3D objects of low complexity have been successfully interpreted.

  2. Knowledge-Based Indexing of the Medical Literature: The Indexing Aid Project.

    ERIC Educational Resources Information Center

    Humphrey, Suzanne; Miller, Nancy E.

    1987-01-01

    Describes the National Library of Medicine's (NLM) Indexing Aid Project for conducting research in knowledge representation and indexing for information retrieval, whose goal is to develop interactive knowledge-based systems for computer-assisted indexing of the periodical medical literature. Appendices include background information on NLM…

  3. A knowledge-based system for diagnosis of mastitis problems at the herd level. 2. Machine milking.

    PubMed

    Hogeveen, H; van Vliet, J H; Noordhuizen-Stassen, E N; De Koning, C; Tepp, D M; Brand, A

    1995-07-01

    A knowledge-based system for the diagnosis of mastitis problems at the herd level must search for possible causes, including malfunctioning milking machines or incorrect milking technique. A knowledge-based system on general mechanisms of mastitis infection, using hierarchical conditional causal models, was extended. Model building entailed extensive cooperation between the knowledge engineer and a domain expert. The extended knowledge-based system contains 12 submodels underlying the overview models. Nine submodels were concerned with mastitis problems arising from machine milking. These models are briefly described. The knowledge-based system has been validated by other experts, after which the models were adjusted slightly. The final knowledge-based system was validated against data collected at 17 commercial dairy farms with high SCC in the bulk milk. Reports containing the farm data were accompanied by recommendations made by a dairy farm advisor. This validation showed good agreement between the knowledge-based system and the dairy farm advisors. The described knowledge-based system is a good tool for dairy farm advisors to solve herd mastitis problems caused by a malfunctioning milking machine or incorrect milking technique. PMID:7593837

  4. Knowledge-based extraction of adverse drug events from biomedical text

    PubMed Central

    2014-01-01

    Background Many biomedical relation extraction systems are machine-learning based and have to be trained on large annotated corpora that are expensive and cumbersome to construct. We developed a knowledge-based relation extraction system that requires minimal training data, and applied the system for the extraction of adverse drug events from biomedical text. The system consists of a concept recognition module that identifies drugs and adverse effects in sentences, and a knowledge-base module that establishes whether a relation exists between the recognized concepts. The knowledge base was filled with information from the Unified Medical Language System. The performance of the system was evaluated on the ADE corpus, consisting of 1644 abstracts with manually annotated adverse drug events. Fifty abstracts were used for training, the remaining abstracts were used for testing. Results The knowledge-based system obtained an F-score of 50.5%, which was 34.4 percentage points better than the co-occurrence baseline. Increasing the training set to 400 abstracts improved the F-score to 54.3%. When the system was compared with a machine-learning system, jSRE, on a subset of the sentences in the ADE corpus, our knowledge-based system achieved an F-score that is 7 percentage points higher than the F-score of jSRE trained on 50 abstracts, and still 2 percentage points higher than jSRE trained on 90% of the corpus. Conclusion A knowledge-based approach can be successfully used to extract adverse drug events from biomedical text without need for a large training set. Whether use of a knowledge base is equally advantageous for other biomedical relation-extraction tasks remains to be investigated. PMID:24593054
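
    A toy sketch of the two-module design described above follows; the dictionaries stand in for the concept recognizer and the UMLS-derived knowledge base, and every entry is invented for illustration.

      # Toy sketch: concept recognition by dictionary lookup, followed by a
      # knowledge-base check for a drug / adverse-effect relation in a sentence.
      # The dictionaries below are illustrative stand-ins, not UMLS content.
      DRUGS = {"metformin", "ibuprofen"}
      ADVERSE_EFFECTS = {"lactic acidosis", "nausea"}
      KNOWLEDGE_BASE = {("metformin", "lactic acidosis")}  # known drug-effect pairs

      def recognize_concepts(sentence):
          text = sentence.lower()
          drugs = {d for d in DRUGS if d in text}
          effects = {e for e in ADVERSE_EFFECTS if e in text}
          return drugs, effects

      def extract_ades(sentence):
          """Return (drug, effect) pairs co-occurring in the sentence and supported by the KB."""
          drugs, effects = recognize_concepts(sentence)
          return [(d, e) for d in drugs for e in effects if (d, e) in KNOWLEDGE_BASE]

      print(extract_ades("Metformin was stopped after the patient developed lactic acidosis."))
      # [('metformin', 'lactic acidosis')]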

  5. A NASA/RAE cooperation in the development of a real-time knowledge-based autopilot

    NASA Technical Reports Server (NTRS)

    Daysh, Colin; Corbin, Malcolm; Butler, Geoff; Duke, Eugene L.; Belle, Steven D.; Brumbaugh, Randal W.

    1991-01-01

    As part of a US/UK cooperative aeronautical research program, a joint activity between the NASA Dryden Flight Research Facility and the Royal Aerospace Establishment on knowledge-based systems was established. This joint activity is concerned with tools and techniques for the implementation and validation of real-time knowledge-based systems. The proposed next stage of this research is described, in which some of the problems of implementing and validating a knowledge-based autopilot for a generic high-performance aircraft are investigated.

  6. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium...

  7. Knowledge-Based Reinforcement Learning for Data Mining

    NASA Astrophysics Data System (ADS)

    Kudenko, Daniel; Grzes, Marek

    experts have developed heuristics that help them in planning and scheduling resources in their workplace. However, this domain knowledge is often rough and incomplete. When the domain knowledge is used directly by an automated expert system, the solutions are often sub-optimal, due to the incompleteness of the knowledge, the uncertainty of environments, and the possibility to encounter unexpected situations. RL, on the other hand, can overcome the weaknesses of the heuristic domain knowledge and produce optimal solutions. In the talk we propose two techniques, which represent first steps in the area of knowledge-based RL (KBRL). The first technique [1] uses high-level STRIPS operator knowledge in reward shaping to focus the search for the optimal policy. Empirical results show that the plan-based reward shaping approach outperforms other RL techniques, including alternative manual and MDP-based reward shaping when it is used in its basic form. We showed that MDP-based reward shaping may fail and successful experiments with STRIPS-based shaping suggest modifications which can overcome encountered problems. The STRIPS-based method we propose allows expressing the same domain knowledge in a different way and the domain expert can choose whether to define an MDP or STRIPS planning task. We also evaluated the robustness of the proposed STRIPS-based technique to errors in the plan knowledge. If STRIPS knowledge is not available, we propose a second technique [2] that shapes the reward with hierarchical tile coding. Where the Q-function is represented with low-level tile coding, a V-function with coarser tile coding can be learned in parallel and used to approximate the potential for ground states. In the context of data mining, our KBRL approaches can also be used for any data collection task where the acquisition of data may incur considerable cost. In addition, observing the data collection agent in specific scenarios may lead to new insights into optimal data
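
    The shaping mechanism referred to above is, in its generic form, potential-based reward shaping; the sketch below shows that generic update with a plan-derived potential function, using made-up state names rather than anything from the talk.

      # Generic potential-based reward shaping: the agent learns from
      # r' = r + gamma * phi(s') - phi(s), where phi encodes plan knowledge
      # (e.g. the number of plan steps already achieved). States and
      # potentials here are made up for illustration.
      GAMMA = 0.99

      # potential grows as the agent progresses through the abstract plan
      PLAN_POTENTIAL = {"start": 0.0, "fetched_key": 1.0, "door_open": 2.0, "goal": 3.0}

      def shaped_reward(reward, state, next_state):
          """Add the potential difference to the environment reward."""
          phi_s = PLAN_POTENTIAL.get(state, 0.0)
          phi_next = PLAN_POTENTIAL.get(next_state, 0.0)
          return reward + GAMMA * phi_next - phi_s

      # e.g. moving from "start" to "fetched_key" with zero environment reward
      print(shaped_reward(0.0, "start", "fetched_key"))  # 0.99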

  8. Knowledge-based automated road network extraction system using multispectral images

    NASA Astrophysics Data System (ADS)

    Sun, Weihua; Messinger, David W.

    2013-04-01

    A novel approach for automated road network extraction from multispectral WorldView-2 imagery using a knowledge-based system is presented. This approach uses a multispectral flood-fill technique to extract asphalt pixels from satellite images; it then identifies prominent curvilinear structures using template matching. The extracted curvilinear structures provide an initial estimate of the road network, which is refined by the knowledge-based system. This system breaks the curvilinear structures into small segments and then groups them using a set of well-defined rules; a saliency check is then performed to prune the road segments. As a final step, these segments, carrying road width and orientation information, can be reconstructed to generate a proper road map. The approach is shown to perform well with various urban and suburban scenes. It can also be deployed to extract the road network in large-scale scenes.
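
    A minimal sketch of the first stage, growing an asphalt mask from a seed pixel by spectral similarity, is given below; the toy image, seed, and threshold are assumptions, and the template-matching and rule-based grouping stages are omitted.

      # Sketch of a multispectral flood fill: starting from a seed pixel, grow a
      # region of spectrally similar (asphalt-like) pixels. Image and threshold
      # are illustrative; template matching and rule-based grouping are omitted.
      from collections import deque
      import numpy as np

      def flood_fill(image, seed, threshold=0.1):
          """Return a boolean mask of pixels whose spectra stay close to the seed's."""
          h, w, _ = image.shape
          mask = np.zeros((h, w), dtype=bool)
          reference = image[seed]
          queue = deque([seed])
          while queue:
              r, c = queue.popleft()
              if mask[r, c]:
                  continue
              mask[r, c] = True
              for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                  nr, nc = r + dr, c + dc
                  if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                      if np.linalg.norm(image[nr, nc] - reference) < threshold:
                          queue.append((nr, nc))
          return mask

      # toy 3-band image with a horizontal "road" of near-identical spectra
      img = np.random.default_rng(1).random((50, 50, 3))
      img[25, :, :] = 0.2
      road_mask = flood_fill(img, seed=(25, 0))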

  9. Knowledge-based vision for space station object motion detection, recognition, and tracking

    NASA Technical Reports Server (NTRS)

    Symosek, P.; Panda, D.; Yalamanchili, S.; Wehner, W., III

    1987-01-01

    Computer vision, especially color image analysis and understanding, has much to offer in the area of the automation of Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management and training. Knowledge-based techniques improve the performance of vision algorithms for unstructured environments because of their ability to deal with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional techniques using statistical and purely model-based approaches lack flexibility in dealing with the variabilities anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications to demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP) are presented. Approaches to the enhancement of the performance of these algorithms with knowledge-based techniques and the potential for deployment of highly-parallel multi-processor systems for these algorithms are discussed.

  10. Delivering spacecraft control centers with embedded knowledge-based systems: The methodology issue

    NASA Technical Reports Server (NTRS)

    Ayache, S.; Haziza, M.; Cayrac, D.

    1994-01-01

    Matra Marconi Space (MMS) occupies a leading place in Europe in the domain of satellite and space data processing systems. The maturity of the knowledge-based systems (KBS) technology, the theoretical and practical experience acquired in the development of prototype, pre-operational and operational applications, make it possible today to consider the wide operational deployment of KBSs in space applications. In this perspective, MMS has to prepare the introduction of the new methods and support tools that will form the basis of the development of such systems. This paper introduces elements of the MMS methodology initiatives in the domain and the main rationale that motivated the approach. These initiatives develop along two main axes: knowledge engineering methods and tools, and a hybrid method approach for coexisting knowledge-based and conventional developments.

  11. Increasing levels of assistance in refinement of knowledge-based retrieval systems

    NASA Technical Reports Server (NTRS)

    Baudin, Catherine; Kedar, Smadar; Pell, Barney

    1994-01-01

    The task of incrementally acquiring and refining the knowledge and algorithms of a knowledge-based system in order to improve its performance over time is discussed. In particular, the design of DE-KART, a tool whose goal is to provide increasing levels of assistance in acquiring and refining indexing and retrieval knowledge for a knowledge-based retrieval system, is presented. DE-KART starts with knowledge that was entered manually, and increases its level of assistance in acquiring and refining that knowledge, both in terms of the increased level of automation in interacting with users, and in terms of the increased generality of the knowledge. DE-KART is at the intersection of machine learning and knowledge acquisition: it is a first step towards a system which moves along a continuum from interactive knowledge acquisition to increasingly automated machine learning as it acquires more knowledge and experience.

  12. PVDaCS - A prototype knowledge-based expert system for certification of spacecraft data

    NASA Technical Reports Server (NTRS)

    Wharton, Cathleen; Shiroma, Patricia J.; Simmons, Karen E.

    1989-01-01

    On-line data management techniques to certify spacecraft information are mandated by increasing telemetry rates. Knowledge-based expert systems offer the ability to certify data electronically without the need for time-consuming human interaction. Issues of automatic certification are explored by designing a knowledge-based expert system to certify data from a scientific instrument, the Orbiter Ultraviolet Spectrometer, on an operating NASA planetary spacecraft, Pioneer Venus. The resulting rule-based system, called PVDaCS (Pioneer Venus Data Certification System), is a functional prototype demonstrating the concepts of a larger system design. A key element of the system design is the representation of an expert's knowledge through the usage of well ordered sequences. PVDaCS produces a certification value derived from expert knowledge and an analysis of the instrument's operation. Results of system performance are presented.

  13. Strategic Concept of Competition Model in Knowledge-Based Logistics in Machinebuilding

    NASA Astrophysics Data System (ADS)

    Medvedeva, O. V.

    2015-09-01

    A competitive labor market needs serious change, and machinebuilding is one of the main problem domains. The current drive to promote competition in human capital demands modernization. Therefore, it is necessary to develop a strategy for the social and economic promotion of competition under the conditions of a knowledge-based economy, in particular in machinebuilding. The necessity of such a strategy is demonstrated, as well as the basic difficulties it faces in machinebuilding.

  14. Computerized design of removable partial dentures: a knowledge-based system for the future.

    PubMed

    Davenport, J C; Hammond, P; Fitzpatrick, F J

    1993-06-01

    Dentists frequently fail to provide dental technicians with the design information necessary for the construction of removable partial dentures. The computerization of dental practices and the development of appropriate knowledge-based systems could provide a powerful tool for improving this aspect of dental care. This article describes one such system currently under development which is an example of the kind of additional facility that will become available for those practices with the necessary hardware. PMID:8299844

  15. GENEX: A knowledge-based expert assistant for Genbank data analysis

    NASA Technical Reports Server (NTRS)

    Batra, Sajeev; Macinnes, Mark A.

    1990-01-01

    We describe a knowledge-based expert assistant, GENEX (Gene Explorer), that simplifies some analysis of Genbank data. GENEX is written in CLIPS (C Language Integrated Production System), an expert system tool developed at the NASA Johnson Space Center. The main purpose of the system is to look for gene start site annotations, unusual DNA sequence composition, and regulatory protein patterns. It is an application where determinations are made via a decision tree.

  16. Annotating single amino acid polymorphisms in the UniProt/Swiss-Prot knowledgebase.

    PubMed

    Yip, Yum L; Famiglietti, Maria; Gos, Arnaud; Duek, Paula D; David, Fabrice P A; Gateau, Alain; Bairoch, Amos

    2008-03-01

    UniProtKB/Swiss-Prot (http://beta.uniprot.org/uniprot; last accessed: 19 October 2007) is a manually curated knowledgebase providing information on protein sequences and functional annotation. It is part of the Universal Protein Resource (UniProt). The knowledgebase currently records a total of 32,282 single amino acid polymorphisms (SAPs) touching 6,086 human proteins (Release 53.2, 26 June 2007). Nearly all SAPs are derived from literature reports using strict inclusion criteria. For each SAP, the knowledgebase provides, apart from the position of the mutation and the resulting change in amino acid, information on the effects of SAPs on protein structure and function, as well as their potential involvement in diseases. Presently, there are 16,043 disease-related SAPs, 14,266 polymorphisms, and 1,973 unclassified variants recorded in UniProtKB/Swiss-Prot. Relevant information on SAPs can be found in various sections of a UniProtKB/Swiss-Prot entry. In addition to these, cross-references to human disease databases as well as other gene-specific databases, are being added regularly. In 2003, the Swiss-Prot variant pages were created to provide a concise view of the information related to the SAPs recorded in the knowledgebase. When compared to the information on missense variants listed in other mutation databases, UniProtKB/Swiss-Prot further records information on direct protein sequencing and characterization including posttranslational modifications (PTMs). The direct links to the Online Mendelian Inheritance in Man (OMIM) database entries further enhance the integration of phenotype information with data at protein level. In this regard, SAP information in UniProtKB/Swiss-Prot complements nicely those existing in genomic and phenotypic databases, and is valuable for the understanding of SAPs and diseases. PMID:18175334

  17. Towards knowledge-based retrieval of medical images. The role of semantic indexing, image content representation and knowledge-based retrieval.

    PubMed

    Lowe, H J; Antipov, I; Hersh, W; Smith, C A

    1998-01-01

    Medicine is increasingly image-intensive. The central importance of imaging technologies such as computerized tomography and magnetic resonance imaging in clinical decision making, combined with the trend to store many "traditional" clinical images such as conventional radiographs, microscopic pathology and dermatology images in digital format, presents both challenges and opportunities for the designers of clinical information systems. The emergence of Multimedia Electronic Medical Record Systems (MEMRS), architectures that integrate medical images with text-based clinical data, will further hasten this trend. The development of these systems, storing a large and diverse set of medical images, suggests that in the future MEMRS will become important digital libraries supporting patient care, research and education. The representation and retrieval of clinical images within these systems is problematic as conventional database architectures and information retrieval models have, until recently, focused largely on text-based data. Medical imaging data differs in many ways from text-based medical data but perhaps the most important difference is that the information contained within imaging data is fundamentally knowledge-based. New representational and retrieval models for clinical images will be required to address this issue. Within the Image Engine multimedia medical record system project at the University of Pittsburgh we are evolving an approach to representation and retrieval of medical images which combines semantic indexing using the UMLS Metathesaurus, image content-based representation and knowledge-based image analysis. PMID:9929345

  18. A knowledge-based design for assemble system for vehicle seat

    NASA Astrophysics Data System (ADS)

    Wahidin, L. S.; Tan, CheeFai; Khalil, S. N.; Juffrizal, K.; Nidzamuddin, M. Y.

    2015-05-01

    Companies worldwide are striving to reduce the costs of their products to improve their bottom-line profitability. When it comes to improving profits, there are two choices: sell more or cut the cost of what is currently being sold. Given the depressed economy of the last several years, the "sell more" option, in many cases, has been taken off the table. As a result, cost cutting is often the most effective path. One of the industrial challenges is to shorten product development and lower manufacturing cost, especially in the early stage of designing the product. A knowledge-based system is used to assist the industry when an expert is not available and to keep the expertise within the company. The application of a knowledge-based system enables standardization and accuracy of the assembly process. For this purpose, a knowledge-based design-for-assembly system is developed to assist the industry in planning the assembly process of a vehicle seat.

  19. The Network of Excellence ``Knowledge-based Multicomponent Materials for Durable and Safe Performance''

    NASA Astrophysics Data System (ADS)

    Moreno, Arnaldo

    2008-02-01

    The Network of Excellence "Knowledge-based Multicomponent Materials for Durable and Safe Performance" (KMM-NoE) consists of 36 institutional partners from 10 countries representing leading European research institutes and university departments (25), small and medium enterprises, SMEs (5) and large industry (7) in the field of knowledge-based multicomponent materials (KMM), more specifically in intermetallics, metal-ceramic composites, functionally graded materials and thin layers. The main goal of the KMM-NoE (currently funded by the European Commission) is to mobilise and concentrate the fragmented scientific potential in the KMM field to create a durable and efficient organism capable of developing leading-edge research while spreading the accumulated knowledge outside the Network and enhancing the technological skills of the related industries. The long-term strategic goal of the KMM-NoE is to establish a self-supporting pan-European institution in the field of knowledge-based multicomponent materials—KMM Virtual Institute (KMM-VIN). It will combine industry oriented research with educational and training activities. The KMM Virtual Institute will be founded on three main pillars: KMM European Competence Centre, KMM Integrated Post-Graduate School, KMM Mobility Programme. The KMM-NoE is coordinated by the Institute of Fundamental Technological Research (IPPT) of the Polish Academy of Sciences, Warsaw, Poland.

  20. Knowledge-based approach to fault diagnosis and control in distributed process environments

    NASA Astrophysics Data System (ADS)

    Chung, Kwangsue; Tou, Julius T.

    1991-03-01

    This paper presents a new design approach to knowledge-based decision support systems for fault diagnosis and control for quality assurance and productivity improvement in automated manufacturing environments. Based on the observed manifestations, the knowledge-based diagnostic system hypothesizes a set of the most plausible disorders by mimicking the reasoning process of a human diagnostician. The data integration technique is designed to generate error-free hierarchical category files. A novel approach to diagnostic problem solving has been proposed by integrating the PADIKS (Pattern-Directed Knowledge-Based System) concept and the symbolic model of diagnostic reasoning based on the categorical causal model. The combination of symbolic causal reasoning and pattern-directed reasoning produces a highly efficient diagnostic procedure and generates a more realistic expert behavior. In addition, three distinctive constraints are designed to further reduce the computational complexity and to eliminate non-plausible hypotheses involved in the multiple disorders problem. The proposed diagnostic mechanism, which consists of three different levels of reasoning operations, significantly reduces the computational complexity in the diagnostic problem with uncertainty by systematically shrinking the hypotheses space. This approach is applied to the test and inspection data collected from a PCB manufacturing operation.

  1. HYPLOSP: a knowledge-based approach to protein local structure prediction.

    PubMed

    Chen, Ching-Tai; Lin, Hsin-Nan; Sung, Ting-Yi; Hsu, Wen-Lian

    2006-12-01

    Local structure prediction can facilitate ab initio structure prediction, protein threading, and remote homology detection. However, the accuracy of existing methods is limited. In this paper, we propose a knowledge-based prediction method that assigns a measure called the local match rate to each position of an amino acid sequence to estimate the confidence of our method. Empirically, the accuracy of the method correlates positively with the local match rate; therefore, we employ it to predict the local structures of positions with a high local match rate. For positions with a low local match rate, we propose a neural network prediction method. To better utilize the knowledge-based and neural network methods, we design a hybrid prediction method, HYPLOSP (HYbrid method to Protein LOcal Structure Prediction) that combines both methods. To evaluate the performance of the proposed methods, we first perform cross-validation experiments by applying our knowledge-based method, a neural network method, and HYPLOSP to a large dataset of 3,925 protein chains. We test our methods extensively on three different structural alphabets and evaluate their performance by two widely used criteria, Maximum Deviation of backbone torsion Angle (MDA) and Q(N), which is similar to Q(3) in secondary structure prediction. We then compare HYPLOSP with three previous studies using a dataset of 56 new protein chains. HYPLOSP shows promising results in terms of MDA and Q(N) accuracy and demonstrates its alphabet-independent capability. PMID:17245815
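
    A minimal sketch of the hybrid dispatch idea follows; the local-match-rate threshold and the two predictor callables are placeholders, not details of the published method.

      # Sketch of a hybrid local-structure predictor: use the knowledge-based
      # prediction where the local match rate is high, otherwise fall back to a
      # neural-network prediction. Threshold and predictors are placeholders.
      MATCH_RATE_THRESHOLD = 0.7  # assumed cut-off, not the paper's value

      def hybrid_predict(positions, kb_predict, nn_predict):
          """positions: list of (local_match_rate, features) per sequence position."""
          prediction = []
          for match_rate, features in positions:
              if match_rate >= MATCH_RATE_THRESHOLD:
                  prediction.append(kb_predict(features))   # knowledge-based method
              else:
                  prediction.append(nn_predict(features))   # neural-network method
          return prediction

      # e.g. hybrid_predict([(0.9, x1), (0.3, x2)], kb_predict=..., nn_predict=...)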

  2. A New Collaborative Knowledge-Based Approach for Wireless Sensor Networks

    PubMed Central

    Canada-Bago, Joaquin; Fernandez-Prieto, Jose Angel; Gadeo-Martos, Manuel Angel; Velasco, Juan Ramón

    2010-01-01

    This work presents a new approach for collaboration among sensors in Wireless Sensor Networks. These networks are composed of a large number of sensor nodes with constrained resources: limited computational capability, memory, power sources, etc. Nowadays, there is a growing interest in the integration of Soft Computing technologies into Wireless Sensor Networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks. The objective of this work is to design a collaborative knowledge-based network, in which each sensor executes an adapted Fuzzy Rule-Based System, which presents significant advantages such as: experts can define interpretable knowledge with uncertainty and imprecision, collaborative knowledge can be separated from control or modeling knowledge and the collaborative approach may support neighbor sensor failures and communication errors. As a real-world application of this approach, we demonstrate a collaborative modeling system for pests, in which an alarm about the development of olive tree fly is inferred. The results show that knowledge-based sensors are suitable for a wide range of applications and that the behavior of a knowledge-based sensor may be modified by inferences and knowledge of neighbor sensors in order to obtain a more accurate and reliable output. PMID:22219701
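
    A toy sketch of a single knowledge-based node evaluating one fuzzy rule and blending its inference with a neighbor's result is shown below; the membership functions, rule, readings, and weighting are invented.

      # Toy sketch of a fuzzy rule evaluated on a sensor node, with the local
      # inference blended with a neighbor's result. Membership functions, the
      # rule, and the readings are invented for illustration.
      def high_temperature(t_celsius):          # membership of "temperature is high"
          return min(max((t_celsius - 20.0) / 10.0, 0.0), 1.0)

      def high_humidity(rh_percent):            # membership of "humidity is high"
          return min(max((rh_percent - 60.0) / 20.0, 0.0), 1.0)

      def fly_risk(t_celsius, rh_percent):
          """IF temperature is high AND humidity is high THEN risk is high (min as AND)."""
          return min(high_temperature(t_celsius), high_humidity(rh_percent))

      local_risk = fly_risk(27.0, 75.0)
      neighbor_risk = 0.4                        # value received over the radio link
      alarm_level = 0.5 * local_risk + 0.5 * neighbor_risk   # simple collaboration
      print(round(alarm_level, 2))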

  3. Can Croatia join Europe as competitive knowledge-based society by 2010?

    PubMed

    Petrovecki, Mladen; Paar, Vladimir; Primorac, Dragan

    2006-12-01

    The 21st century has brought important changes in the paradigms of economic development, one of them being a shift toward recognizing knowledge and information as the most valuable commodities of today. The European Union (EU) has been working hard to become the most competitive knowledge-based society in the world, and Croatia, an EU candidate country, has been faced with a similar task. To establish itself as one of the best knowledge-based countries in the Eastern European region over the next 4 years, Croatia realized it has to create an education and science system correspondent with European standards and sensitive to labor market needs. For that purpose, the Croatian Ministry of Science, Education, and Sports (MSES) has created and started implementing a complex strategy, consisting of the following key components: the reform of the education system in accordance with the Bologna Declaration; stimulation of scientific production by supporting national and international research projects; reversing the "brain drain" into "brain gain" and strengthening the links between science and technology; and informatization of the whole education and science system. In this comprehensive report, we describe the implementation of these measures, whose coordination with the EU goals presents a challenge, as well as an opportunity for Croatia to become a knowledge-based society by 2010. PMID:17167853

  4. A knowledge-based flight status monitor for real-time application in digital avionics systems

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Disbrow, J. D.; Butler, G. F.

    1989-01-01

    The Dryden Flight Research Facility of the National Aeronautics and Space Administration (NASA) Ames Research Center (Ames-Dryden) is the principal NASA facility for the flight testing and evaluation of new and complex avionics systems. To aid in the interpretation of system health and status data, a knowledge-based flight status monitor was designed. The monitor was designed to use fault indicators from the onboard system which are telemetered to the ground and processed by a rule-based model of the aircraft failure management system to give timely advice and recommendations in the mission control room. One of the important constraints on the flight status monitor is the need to operate in real time, and to pursue this aspect, a joint research activity between NASA Ames-Dryden and the Royal Aerospace Establishment (RAE) on real-time knowledge-based systems was established. Under this agreement, the original LISP knowledge base for the flight status monitor was reimplemented using the intelligent knowledge-based system toolkit, MUSE, which was developed under RAE sponsorship. Details of the flight status monitor and the MUSE implementation are presented.

  5. Using CLIPS in the domain of knowledge-based massively parallel programming

    NASA Technical Reports Server (NTRS)

    Dvorak, Jiri J.

    1994-01-01

    The Program Development Environment (PDE) is a tool for massively parallel programming of distributed-memory architectures. Adopting a knowledge-based approach, the PDE eliminates the complexity introduced by parallel hardware with distributed memory and offers complete transparency in respect of parallelism exploitation. The knowledge-based part of the PDE is realized in CLIPS. Its principal task is to find an efficient parallel realization of the application specified by the user in a comfortable, abstract, domain-oriented formalism. A large collection of fine-grain parallel algorithmic skeletons, represented as COOL objects in a tree hierarchy, contains the algorithmic knowledge. A hybrid knowledge base with rule modules and procedural parts, encoding expertise about application domain, parallel programming, software engineering, and parallel hardware, enables a high degree of automation in the software development process. In this paper, important aspects of the implementation of the PDE using CLIPS and COOL are shown, including the embedding of CLIPS with C++-based parts of the PDE. The appropriateness of the chosen approach and of the CLIPS language for knowledge-based software engineering are discussed.

  6. Can Croatia Join Europe as Competitive Knowledge-based Society by 2010?

    PubMed Central

    Petrovečki, Mladen; Paar, Vladimir; Primorac, Dragan

    2006-01-01

    The 21st century has brought important changes in the paradigms of economic development, one of them being a shift toward recognizing knowledge and information as the most important factors of today. The European Union (EU) has been working hard to become the most competitive knowledge-based society in the world, and Croatia, an EU candidate country, has been faced with a similar task. To establish itself as one of the best knowledge-based countries in the Eastern European region over the next four years, Croatia realized it has to create an education and science system correspondent with European standards and sensitive to labor market needs. For that purpose, the Croatian Ministry of Science, Education, and Sports (MSES) has created and started implementing a complex strategy, consisting of the following key components: the reform of the education system in accordance with the Bologna Declaration; stimulation of scientific production by supporting national and international research projects; reversing the “brain drain” into “brain gain” and strengthening the links between science and technology; and informatization of the whole education and science system. In this comprehensive report, we describe the implementation of these measures, whose coordination with the EU goals presents a challenge, as well as an opportunity for Croatia to become a knowledge-based society by 2010. PMID:17167853

  7. The International Conference on Human Resources Development Strategies in the Knowledge-Based Society [Proceedings] (Seoul, South Korea, August 29, 2001).

    ERIC Educational Resources Information Center

    Korea Research Inst. for Vocational Education and Training, Seoul.

    This document contains the following seven papers, all in both English and Korean, from a conference on human resources development and school-to-work transitions in the knowledge-based society: "The U.S. Experience as a Knowledge-based Economy in Transition and Its Impact on Industrial and Employment Structures" (Eric Im); "Changes in the…

  8. Evaluating Social and National Education Textbooks Based on the Criteria of Knowledge-Based Economy from the Perspectives of Elementary Teachers in Jordan

    ERIC Educational Resources Information Center

    Al-Edwan, Zaid Suleiman; Hamaidi, Diala Abdul Hadi

    2011-01-01

    Knowledge-based economy is a new implemented trend in the field of education in Jordan. The ministry of education in Jordan attempts to implement this trend's philosophy in its textbooks. This study examined the extent to which the (1st-3rd grade) social and national textbooks reflect knowledge-based economy criteria from the perspective of…

  9. A framework for knowledge acquisition, representation and problem-solving in knowledge-based planning

    NASA Astrophysics Data System (ADS)

    Martinez-Bermudez, Iliana

    This research addresses the problem of developing planning knowledge-based applications. In particular, it is concerned with the problems of knowledge acquisition and representation---the issues that remain an impediment to the development of large-scale, knowledge-based planning applications. This work aims to develop a model of planning problem solving that facilitates expert knowledge elicitation and also supports effective problem solving. Achieving this goal requires determining the types of knowledge used by planning experts, the structure of this knowledge, and the problem-solving process that results in the plan. While answering these questions it became clear that the knowledge structure, as well as the process of problem solving, largely depends on the knowledge available to the expert. This dissertation proposes classification of planning problems based on their use of expert knowledge. Such classification can help in the selection of the appropriate planning method when dealing with a specific planning problem. The research concentrates on one of the identified classes of planning problems that can be characterized by well-defined and well-structured problem-solving knowledge. To achieve a more complete knowledge representation architecture for such problems, this work employs the task-specific approach to problem solving. The result of this endeavor is a task-specific methodology that allows the representation and use of planning knowledge in a structural, consistent manner specific to the domain of the application. The shell for building a knowledge-based planning application was created as a proof of concept for the methodology described in this dissertation. This shell enabled the development of a system for manufacturing planning---COMPLAN. COMPLAN encompasses knowledge related to four generic techniques used in composite material manufacturing and, given the description of the composite part, creates a family of plans capable of producing it.

  10. A knowledge-based approach of satellite image classification for urban wetland detection

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofan

    It has been a technical challenge to accurately detect urban wetlands with remotely sensed data by means of pixel-based image classification. This is mainly caused by inadequate spatial resolutions of satellite imagery, spectral similarities between urban wetlands and adjacent land covers, and the spatial complexity of wetlands in human-transformed, heterogeneous urban landscapes. Knowledge-based classification, with great potential to overcome or reduce these technical impediments, has been applied to various image classifications focusing on urban land use/land cover and forest wetlands, but rarely to mapping the wetlands in urban landscapes. This study aims to improve the mapping accuracy of urban wetlands by integrating the pixel-based classification with the knowledge-based approach. The study area is the metropolitan area of Kansas City, USA. SPOT satellite images of 1992, 2008, and 2010 were classified into four classes - wetland, farmland, built-up land, and forestland - using the pixel-based supervised maximum likelihood classification method. The products of supervised classification are used as the comparative base maps. For our new classification approach, a knowledge base is developed to improve urban wetland detection, which includes a set of decision rules of identifying wetland cover in relation to its elevation, spatial adjacencies, habitat conditions, hydro-geomorphological characteristics, and relevant geostatistics. Using ERDAS Imagine software's knowledge classifier tool, the decision rules are applied to the base maps in order to identify wetlands that are not able to be detected based on the pixel-based classification. The results suggest that the knowledge-based image classification approach can enhance the urban wetland detection capabilities and classification accuracies with remotely sensed satellite imagery.
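
    A schematic sketch of the kind of post-classification rule described above (relabel a pixel as wetland when it is low-lying, adjacent to detected wetland, and currently mapped as farmland) follows; the class codes, elevation threshold, and arrays are assumptions, not the study's actual knowledge base.

      # Schematic post-classification rule: relabel pixels as wetland when they are
      # low-lying and adjacent to already-detected wetland. Class codes, the
      # elevation threshold, and the arrays are assumed for illustration.
      import numpy as np
      from scipy.ndimage import binary_dilation

      WETLAND, FARMLAND = 1, 2          # assumed class codes
      ELEVATION_THRESHOLD = 230.0       # assumed local threshold in metres

      def refine_wetlands(classes, elevation):
          wetland_mask = classes == WETLAND
          near_wetland = binary_dilation(wetland_mask, iterations=1)
          rule = (elevation < ELEVATION_THRESHOLD) & near_wetland & (classes == FARMLAND)
          refined = classes.copy()
          refined[rule] = WETLAND        # knowledge-based relabelling
          return refined

      classes = np.array([[2, 2, 1],
                          [2, 2, 2],
                          [2, 2, 2]])
      elevation = np.array([[228., 229., 226.],
                            [231., 228., 227.],
                            [233., 235., 236.]])
      print(refine_wetlands(classes, elevation))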

  11. SU-E-J-71: Spatially Preserving Prior Knowledge-Based Treatment Planning

    SciTech Connect

    Wang, H; Xing, L

    2015-06-15

    Purpose: Prior knowledge-based treatment planning is impeded by the use of a single dose volume histogram (DVH) curve. Critical spatial information is lost by collapsing the dose distribution into a histogram. Even similar patients possess geometric variations that become inaccessible in the form of a single DVH. We propose a simple prior knowledge-based planning scheme that extracts features from the prior dose distribution while still preserving the spatial information. Methods: A prior patient plan is not used as a mere starting point for a new patient; rather, stopping criteria are constructed from it. Each structure from the prior patient is partitioned into multiple shells. For instance, the PTV is partitioned into an inner, middle, and outer shell. Prior dose statistics are then extracted for each shell and translated into the appropriate Dmin and Dmax parameters for the new patient. Results: The partitioned dose information from a prior case was applied to 14 2-D prostate cases. Using the prior case yielded final DVHs that were comparable to manual planning, even though the DVH for the prior case differed from the DVHs for the 14 cases. Using only a single DVH for the entire organ was also tested for comparison but showed much poorer performance. Different ways of translating the prior dose statistics into parameters for the new patient were also tested. Conclusion: Prior knowledge-based treatment planning needs to preserve the spatial information without transforming the patients on a voxel-to-voxel basis. An efficient balance between the anatomy and dose domains is gained by partitioning the organs into multiple shells. The prior knowledge not only serves as a starting point for a new case; the information extracted from the partitioned shells is also translated into stopping criteria for the optimization problem at hand.
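
    The shell-partitioning idea lends itself to a compact sketch: partition a binary PTV mask into concentric shells with a distance transform and extract per-shell dose statistics that could serve as Dmin/Dmax targets. The arrays, shell count and statistic-to-parameter mapping below are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy import ndimage

    def shell_dose_constraints(ptv_mask, dose, n_shells=3):
        """Partition a structure into concentric shells and extract per-shell
        dose statistics, to be translated into Dmin/Dmax targets for a new case."""
        # Distance (in voxels) from each in-structure voxel to the boundary.
        depth = ndimage.distance_transform_edt(ptv_mask)
        edges = np.linspace(0.0, depth.max(), n_shells + 1)
        constraints = []
        for k in range(n_shells):                 # k = 0 is the outermost shell
            shell = ptv_mask & (depth > edges[k]) & (depth <= edges[k + 1])
            d = dose[shell]
            if d.size == 0:
                continue
            constraints.append({"shell": k,
                                "Dmin": float(d.min()),
                                "Dmax": float(d.max()),
                                "Dmean": float(d.mean())})
        return constraints

    if __name__ == "__main__":
        yy, xx = np.mgrid[:64, :64]
        ptv = (xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2          # toy circular PTV
        dose = 70.0 - 0.02 * ((xx - 32) ** 2 + (yy - 32) ** 2)   # toy dose fall-off
        for c in shell_dose_constraints(ptv, dose):
            print(c)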

  12. A knowledge-based approach to improving optimization techniques in system planning

    NASA Technical Reports Server (NTRS)

    Momoh, J. A.; Zhang, Z. Z.

    1990-01-01

    A knowledge-based (KB) approach to improving the mathematical programming techniques used in the system planning environment is presented. The KB system assists in selecting appropriate optimization algorithms, objective functions, constraints and parameters. The scheme is implemented by integrating symbolic computation of rules derived from operator and planner experience and is used with generalized optimization packages. The KB optimization software package is capable of improving the overall planning process, which includes correction of given violations. The method was demonstrated on a large-scale power system discussed in the paper.
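
    The notion of rule-based selection of an optimization algorithm can be illustrated with a minimal sketch; the problem attributes, rules and algorithm names below are invented examples rather than the knowledge base described in the abstract.

    def select_optimizer(problem):
        """Return a suggested optimization method from simple planner rules."""
        if problem.get("discrete_variables"):
            return "branch-and-bound"
        if problem.get("linear_objective") and problem.get("linear_constraints"):
            return "simplex / interior-point LP"
        if problem.get("differentiable"):
            return "sequential quadratic programming"
        return "direct search (e.g. Nelder-Mead)"

    if __name__ == "__main__":
        expansion_plan = {"discrete_variables": False, "linear_objective": True,
                          "linear_constraints": True, "differentiable": True}
        print(select_optimizer(expansion_plan))   # -> an LP method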

  13. Knowledge-Based Motion Control of AN Intelligent Mobile Autonomous System

    NASA Astrophysics Data System (ADS)

    Isik, Can

    An Intelligent Mobile Autonomous System (IMAS), which is equipped with vision and low level sensors to cope with unknown obstacles, is modeled as a hierarchy of path planning and motion control. This dissertation concentrates on the lower level of this hierarchy (Pilot) with a knowledge-based controller. The basis of a theory of knowledge-based controllers is established, using the example of the Pilot level motion control of IMAS. In this context, the knowledge-based controller with a linguistic world concept is shown to be adequate for the minimum time control of an autonomous mobile robot motion. The Pilot level motion control of IMAS is approached in the framework of production systems. The three major components of the knowledge-based control that are included here are the hierarchies of the database, the rule base and the rule evaluator. The database, which is the representation of the state of the world, is organized as a semantic network, using a concept of minimal admissible vocabulary. The hierarchy of rule base is derived from the analytical formulation of minimum-time control of IMAS motion. The procedure introduced for rule derivation, which is called analytical model verbalization, utilizes the concept of causalities to describe the system behavior. A realistic analytical system model is developed and the minimum-time motion control in an obstacle strewn environment is decomposed to a hierarchy of motion planning and control. The conditions for the validity of the hierarchical problem decomposition are established, and the consistency of operation is maintained by detecting the long term conflicting decisions of the levels of the hierarchy. The imprecision in the world description is modeled using the theory of fuzzy sets. The method developed for the choice of the rule that prescribes the minimum-time motion control among the redundant set of applicable rules is explained and the usage of fuzzy set operators is justified. Also included in the

  14. Facilitating superior chronic disease management through a knowledge-based systems development model.

    PubMed

    Wickramasinghe, Nilmini S; Goldberg, Steve

    2008-01-01

    To date, the adoption and diffusion of technology-enabled solutions to deliver better healthcare has been slow. There are many reasons for this. One of the most significant is that the existing methodologies normally used for general Information and Communications Technology (ICT) implementations tend to be less successful in a healthcare context. This paper describes a knowledge-based, adaptive mapping-to-realisation methodology for moving from idea to realisation rapidly and without compromising rigour, so that success ensues. It is discussed in connection with efforts to implement superior ICT-enabled approaches to facilitate superior Chronic Disease Management (CDM). PMID:19174365

  15. A knowledge-based expert system for scheduling of airborne astronomical observations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, P. R.; Gevarter, W. B.; Stutz, J. C.; Banda, C. P.

    1985-01-01

    The Kuiper Airborne Observatory Scheduler (KAOS) is a knowledge-based expert system developed at NASA Ames Research Center to assist in route planning of a C-141 flying astronomical observatory. This program determines a sequence of flight legs that enables sequential observations of a set of heavenly bodies derived from a list of desirable objects. The possible flight legs are constrained by problems of observability, avoiding flyovers of warning and restricted military zones, and running out of fuel. A significant contribution of the KAOS program is that it couples computational capability with a reasoning system.

  16. canSAR: an updated cancer research and drug discovery knowledgebase

    PubMed Central

    Tym, Joseph E.; Mitsopoulos, Costas; Coker, Elizabeth A.; Razaz, Parisa; Schierz, Amanda C.; Antolin, Albert A.; Al-Lazikani, Bissan

    2016-01-01

    canSAR (http://cansar.icr.ac.uk) is a publicly available, multidisciplinary, cancer-focused knowledgebase developed to support cancer translational research and drug discovery. canSAR integrates genomic, protein, pharmacological, drug and chemical data with structural biology, protein networks and druggability data. canSAR is widely used to rapidly access information and help interpret experimental data in a translational and drug discovery context. Here we describe major enhancements to canSAR including new data, improved search and browsing capabilities, new disease and cancer cell line summaries and new and enhanced batch analysis tools. PMID:26673713

  17. Application of flight systems methodologies to the validation of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Duke, Eugene L.

    1988-01-01

    Flight and mission-critical systems are verified, qualified for flight, and validated using well-known and well-established techniques. These techniques define the validation methodology used for such systems. In order to verify, qualify, and validate knowledge-based systems (KBS's), the methodology used for conventional systems must be addressed, and the applicability and limitations of that methodology to KBS's must be identified. The author presents an outline of how this approach to the validation of KBS's is being developed and used at the Dryden Flight Research Facility of the NASA Ames Research Center.

  18. Use of metaknowledge in the verification of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Morell, Larry J.

    1989-01-01

    Knowledge-based systems are modeled as deductive systems. The model indicates that the two primary areas of concern in verification are demonstrating consistency and completeness. A system is inconsistent if it asserts something that is not true of the modeled domain. A system is incomplete if it lacks deductive capability. Two forms of consistency are discussed along with appropriate verification methods. Three forms of incompleteness are discussed. The use of metaknowledge, knowledge about knowledge, is explored in connection to each form of incompleteness.
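
    One of the consistency checks described above can be sketched as follows: flag pairs of rules whose conditions could hold simultaneously while their conclusions contradict each other. The rule encoding and the example rules are simplifications chosen purely for illustration.

    from itertools import combinations

    RULES = [  # (name, set of condition literals, conclusion literal)
        ("r1", {"engine_hot", "oil_low"}, "shutdown"),
        ("r2", {"engine_hot"}, "not shutdown"),
        ("r3", {"oil_low"}, "warn_pilot"),
    ]

    def contradicts(a, b):
        """Two literals contradict when one is the negation of the other."""
        return a == f"not {b}" or b == f"not {a}"

    def find_inconsistencies(rules):
        """Return pairs of rules that may fire together yet conclude opposites."""
        problems = []
        for (n1, cond1, concl1), (n2, cond2, concl2) in combinations(rules, 2):
            conditions_compatible = not any(contradicts(x, y)
                                            for x in cond1 for y in cond2)
            if conditions_compatible and contradicts(concl1, concl2):
                problems.append((n1, n2))
        return problems

    if __name__ == "__main__":
        print(find_inconsistencies(RULES))   # -> [('r1', 'r2')]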

  19. Structured data collection and knowledge-based user guidance for abdominal ultrasound reporting.

    PubMed Central

    Kuhn, K.; Zemmler, T.; Reichert, M.; Heinlein, C.; Roesner, D.

    1993-01-01

    This paper describes a system for structured data collection and report generation in abdominal ultrasonography. The system is based on a controlled vocabulary and hierarchies of concepts; it uses a graphical user interface. More than 17,000 reports have been generated by 43 physicians using this system, which is integrated into a departmental information system. Evaluations have shown that it is a well accepted tool for the fast generation of reports of comparatively high quality. The functionality is enhanced by two additional components: a hybrid knowledge-based module for "intelligent" user guidance and an interactive tutoring system to illustrate the terminology. PMID:8130485

  20. A knowledge-based expert system for scheduling of airborne astronomical observations

    NASA Technical Reports Server (NTRS)

    Nachtsheim, P. R.; Gevarter, W. B.; Stutz, J. C.; Banda, C. P.

    1986-01-01

    KAOS (Kuiper Airborne Observatory Scheduler) is a knowledge-based expert system developed at NASA Ames Research Center to assist in route planning of a C-141 flying astronomical observatory. This program determines a sequence of flight legs that enables sequential observations of a set of heavenly bodies derived from a list of desirable objects. The possible flight legs are constrained by problems of observability, avoiding flyovers of warning and restricted military zones, and running out of fuel. A significant contribution of the KAOS program is that it couples computational capability with a reasoning system.

  1. Enroute flight-path planning - Cooperative performance of flight crews and knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Smith, Philip J.; Mccoy, Elaine; Layton, Chuck; Galdes, Deb

    1989-01-01

    Interface design issues associated with the introduction of knowledge-based systems into the cockpit are discussed. Such issues include not only questions about display and control design, they also include deeper system design issues such as questions about the alternative roles and responsibilities of the flight crew and the computer system. In addition, the feasibility of using enroute flight path planning as a context for exploring such research questions is considered. In particular, the development of a prototyping shell that allows rapid design and study of alternative interfaces and system designs is discussed.

  2. Generating MEDLINE search strategies using a librarian knowledge-based system.

    PubMed Central

    Peng, P.; Aguirre, A.; Johnson, S. B.; Cimino, J. J.

    1993-01-01

    We describe a librarian knowledge-based system that generates a search strategy from a query representation based on a user's information need. Together with the natural language parser AQUA, the system functions as a human/computer interface, which translates a user query from free text into a BRS Onsite search formulation, for searching the MEDLINE bibliographic database. In the system, conceptual graphs are used to represent the user's information need. The UMLS Metathesaurus and Semantic Net are used as the key knowledge sources in building the knowledge base. PMID:8130544

  3. The implementation of a knowledge-based Pathology Hypertext under HyperCard.

    PubMed

    Levy, A H; Thursh, D R

    1989-12-01

    A knowledge-based Hypertext of Pathology integrating videodisc-based images and computer-generated graphics with the textual cognitive information of an undergraduate pathology curriculum has been developed. The system described in this paper was implemented under HyperCard during 1988 and 1989. Three earlier versions of the system that were developed on different platforms are contrasted with the present system. Strengths, weaknesses, and future extensions of the system are enumerated. The conceptual basis and organizational principles of the knowledge base are also briefly discussed. PMID:2636967

  4. Knowledge-based approach for generating target system specifications from a domain model

    NASA Technical Reports Server (NTRS)

    Gomaa, Hassan; Kerschberg, Larry; Sugumaran, Vijayan

    1992-01-01

    Several institutions in industry and academia are pursuing research efforts in domain modeling to address unresolved issues in software reuse. To demonstrate the concepts of domain modeling and software reuse, a prototype software engineering environment is being developed at George Mason University to support the creation of domain models and the generation of target system specifications. This prototype environment, which is application domain independent, consists of an integrated set of commercial off-the-shelf software tools and custom-developed software tools. This paper describes the knowledge-based tool that was developed as part of the environment to generate target system specifications from a domain model.

  5. Knowledge-based fault monitoring and diagnosis in Space Shuttle propellant loading

    NASA Technical Reports Server (NTRS)

    Scarl, E. A.; Jamieson, J.; Delaune, C.

    1984-01-01

    The LOX Expert System (LES), now being developed as a tool for the constraint-based monitoring and analysis of propellant loading at the Kennedy Space Center (KSC), is discussed. The loading of LOX at the KSC and its control and monitoring by the Launch Processing System are summarized, and the relevant problem for LES is presented. The LES database is briefly described, and the interaction of LES with KNOBS, a constraint- and frame-oriented knowledge-based system developed as a demonstration system in aid of tactical air mission planning, is discussed in detail in the context of launch processing. The design and fault isolation techniques of LES are also discussed.

  6. Using Unified Modeling Language for Conceptual Modelling of Knowledge-Based Systems

    NASA Astrophysics Data System (ADS)

    Abdullah, Mohd Syazwan; Benest, Ian; Paige, Richard; Kimble, Chris

    This paper discusses extending the Unified Modelling Language by means of a profile for modelling knowledge-based systems in the context of the Model Driven Architecture (MDA) framework. The profile is implemented using the eXecutable Modelling Framework (XMF) Mosaic tool. A case study from the health care domain demonstrates the practical use of this profile, with the prototype implemented in the Java Expert System Shell (Jess). The paper also discusses the possible mapping of the profile elements to the platform specific model (PSM) of Jess and provides some discussion on the Production Rule Representation (PRR) standardisation work.

  7. KoBaMIN: a knowledge-based minimization web server for protein structure refinement

    PubMed Central

    Rodrigues, João P. G. L. M.; Levitt, Michael; Chopra, Gaurav

    2012-01-01

    The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format. The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, all models generated at the seventh worldwide experiments on critical assessment of techniques for protein structure prediction (CASP7) and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin. PMID:22564897

  8. Integrated knowledge-based tools for documenting and monitoring damages to built heritage

    NASA Astrophysics Data System (ADS)

    Cacciotti, R.

    2015-08-01

    The advancements of information technologies as applied to the most diverse fields of science define a breakthrough in the accessibility and processing of data for both expert and non-expert users. Nowadays there is an increasingly relevant research effort in domains, such as cultural heritage protection, in which knowledge mapping and sharing constitute critical prerequisites for accomplishing complex professional tasks. The aim of this paper is to outline the main results and outputs of the MONDIS research project. This project focusses on the development of integrated knowledge-based tools grounded on an ontological representation of the field of heritage conservation. The aim is to overcome the limitations of earlier databases by the application of modern semantic technologies able to integrate, organize and process useful information concerning damages to built heritage objects. In particular, MONDIS addresses the need to support a diverse range of stakeholders (e.g. administrators, owners and professionals) in the documentation and monitoring of damages to historical constructions and in finding related remedies. The paper concentrates on the presentation of the following integrated knowledge-based components developed within the project: (I) MONDIS mobile application (plus desktop version), (II) MONDIS record explorer, (III) Ontomind profiles, (IV) knowledge matrix and (V) terminology editor. An example of a practical application of the MONDIS integrated system is also provided and discussed.

  9. An integrated analytic tool and knowledge-based system approach to aerospace electric power system control

    NASA Astrophysics Data System (ADS)

    Owens, William R.; Henderson, Eric; Gandikota, Kapal

    1986-10-01

    Future aerospace electric power systems require new control methods because of increasing power system complexity, demands for power system management, greater system size and heightened reliability requirements. To meet these requirements, a combination of electric power system analytic tools and knowledge-based systems is proposed. The continual improvement in microelectronic performance has made it possible to envision the application of sophisticated electric power system analysis tools to aerospace vehicles. These tools have been successfully used in the measurement and control of large terrestrial electric power systems. Among these tools is state estimation which has three main benefits. The estimator builds a reliable database for the system structure and states. Security assessment and contingency evaluation also require a state estimator. Finally, the estimator will, combined with modern control theory, improve power system control and stability. Bad data detection as an adjunct to state estimation identifies defective sensors and communications channels. Validated data from the analytic tools is supplied to a number of knowledge-based systems. These systems will be responsible for the control, protection, and optimization of the electric power system.

  10. Predicting Mycobacterium tuberculosis Complex Clades Using Knowledge-Based Bayesian Networks

    PubMed Central

    Bennett, Kristin P.

    2014-01-01

    We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC) clades. The proposed knowledge-based Bayesian network (KBBN) treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes), since these are routinely gathered from MTBC isolates of tuberculosis (TB) patients. Results show that incorporating rules into problems can drastically increase classification accuracy if data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web. PMID:24864238
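
    A minimal sketch of the central idea, combining an expert-rule prior over classes with data-derived likelihoods, is given below. The clade names, the spoligotype rule and the probability values are hypothetical; this is not the published KBBN model.

    import numpy as np

    CLASSES = ["LineageA", "LineageB", "LineageC"]    # stand-ins for MTBC clades

    def expert_prior(spoligotype):
        """Expert rules expressed as a prior over classes for one isolate."""
        # Example rule: absence of spacers 33-36 (indices 32-35) favours LineageA.
        if not any(spoligotype[32:36]):
            return np.array([0.7, 0.2, 0.1])
        return np.array([1/3, 1/3, 1/3])              # no rule fires: uniform prior

    def classify(spoligotype, likelihoods):
        """Posterior over classes = rule-based prior x data likelihood."""
        posterior = expert_prior(spoligotype) * likelihoods
        return CLASSES[int(np.argmax(posterior / posterior.sum()))]

    if __name__ == "__main__":
        spoligo = [1] * 43
        spoligo[32:36] = [0, 0, 0, 0]                  # toy deletion pattern
        data_likelihood = np.array([0.30, 0.45, 0.25]) # e.g. from a trained network
        print(classify(spoligo, data_likelihood))      # -> LineageA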

  11. Hybrid hill-climbing and knowledge-based methods for intelligent news filtering

    SciTech Connect

    Mock, K.J.

    1996-12-31

    As the size of the Internet increases, the amount of data available to users has dramatically risen, resulting in an information overload for users. This work involved the creation of an intelligent information news filtering system named INFOS (Intelligent News Filtering Organizational System) to reduce the user's search burden by automatically eliminating Usenet news articles predicted to be irrelevant. These predictions are learned automatically by adapting an internal user model that is based upon features taken from articles and collaborative features derived from other users. The features are manipulated through keyword-based techniques and knowledge-based techniques to perform the actual filtering. Knowledge-based systems have the advantage of analyzing input text in detail, but at the cost of computational complexity and the difficulty of scaling up to large domains. In contrast, statistical and keyword approaches scale up readily but result in a shallower understanding of the input. A hybrid system integrating both approaches improves accuracy over keyword approaches, supports domain knowledge, and retains scalability. The system would be enhanced by more robust word disambiguation.
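
    A toy sketch of the hybrid idea follows: a cheap keyword score handles most articles, while knowledge-based rules override it in cases the keywords handle poorly. The user profile, rules and threshold are hypothetical examples, not the INFOS implementation.

    USER_KEYWORDS = {"robotics": 2.0, "planning": 1.5, "conference": 0.5}

    KNOWLEDGE_RULES = [
        # (predicate over the article, verdict it forces)
        (lambda a: "call for papers" in a["subject"].lower(), "discard"),
        (lambda a: a["author"] in {"trusted_colleague@lab.edu"}, "keep"),
    ]

    def keyword_score(article):
        """Sum the weights of profile keywords appearing in the article body."""
        words = article["body"].lower().split()
        return sum(weight for word, weight in USER_KEYWORDS.items() if word in words)

    def filter_article(article, threshold=2.0):
        for rule, verdict in KNOWLEDGE_RULES:        # knowledge-based pass first
            if rule(article):
                return verdict
        return "keep" if keyword_score(article) >= threshold else "discard"

    if __name__ == "__main__":
        art = {"subject": "Re: motion planning", "author": "someone@example.org",
               "body": "New results on robotics motion planning benchmarks"}
        print(filter_article(art))                   # -> keep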

  12. KoBaMIN: a knowledge-based minimization web server for protein structure refinement.

    PubMed

    Rodrigues, João P G L M; Levitt, Michael; Chopra, Gaurav

    2012-07-01

    The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format. The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, all models generated at the seventh worldwide experiments on critical assessment of techniques for protein structure prediction (CASP7) and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin. PMID:22564897

  13. Integration of Cardiac Proteome Biology and Medicine by a Specialized Knowledgebase

    PubMed Central

    Zong, Nobel C.; Li, Haomin; Li, Hua; Lam, Maggie P.Y.; Jimenez, Rafael C.; Kim, Christina S.; Deng, Ning; Kim, Allen K.; Choi, Jeong Ho; Zelaya, Ivette; Liem, David; Meyer, David; Odeberg, Jacob; Fang, Caiyun; Lu, Hao-jie; Xu, Tao; Weiss, James; Duan, Huilong; Uhlen, Mathias; Yates, John R.; Apweiler, Rolf; Ge, Junbo; Hermjakob, Henning; Ping, Peipei

    2014-01-01

    Rationale Omics sciences enable a systems-level perspective in characterizing cardiovascular biology. Integration of diverse proteomics data via a computational strategy will catalyze the assembly of contextualized knowledge, foster discoveries through multidisciplinary investigations, and minimize unnecessary redundancy in research efforts. Objective The goal of this project is to develop a consolidated cardiac proteome knowledgebase with novel bioinformatics pipeline and web portals, thereby serving as a new resource to advance cardiovascular biology and medicine. Methods and Results We created Cardiac Organellar Protein Atlas Knowledgebase (COPaKB), a centralized platform of high quality cardiac proteomic data, bioinformatics tools and relevant cardiovascular phenotypes. Currently, COPaKB features eight organellar modules, comprising 4,203 LC-MS/MS experiments from human, mouse, drosophila and C. elegans as well as expression images of 10,924 proteins in human myocardium. In addition, the Java-coded bioinformatics tools provided by COPaKB enable cardiovascular investigators in all disciplines to retrieve and analyze pertinent organellar protein properties of interest. Conclusions COPaKB (www.HeartProteome.org) provides an innovative and interactive resource, which connects research interests with the new biological discoveries in protein sciences. With an array of intuitive tools in this unified web server, non-proteomics investigators can conveniently collaborate with proteomics specialists to dissect the molecular signatures of cardiovascular phenotypes. PMID:23965338

  14. A knowledge-based machine vision system for space station automation

    NASA Astrophysics Data System (ADS)

    Chipman, Laure J.; Ranganath, H. S.

    1989-11-01

    A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.

  15. A knowledge-based control system for air-scour optimisation in membrane bioreactors.

    PubMed

    Ferrero, G; Monclús, H; Sancho, L; Garrido, J M; Comas, J; Rodríguez-Roda, I

    2011-01-01

    Although membrane bioreactor (MBR) technology is still a growing sector, its progressive implementation all over the world, together with great technical achievements, has allowed it to reach a degree of maturity comparable to other, more conventional wastewater treatment technologies. With current energy requirements around 0.6-1.1 kWh/m3 of treated wastewater and investment costs similar to conventional treatment plants, the main market niche for MBRs lies in areas with very restrictive discharge limits, where treatment plants have to be compact or where water reuse is necessary. Operational costs are higher than for conventional treatments; consequently there is still a need, and scope, for energy saving and optimisation. This paper presents the development of a knowledge-based decision support system (DSS) for the integrated operation and remote control of the biological and physical (filtration and backwashing or relaxation) processes in MBRs. The core of the DSS is a knowledge-based control module for air-scour consumption automation and energy consumption minimisation. PMID:21902045
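
    A single knowledge-based control rule of the kind such a DSS might contain can be sketched as below; the transmembrane-pressure thresholds, step size and setpoint bounds are illustrative assumptions, not the rules of the system described in the abstract.

    def air_scour_setpoint(current_airflow, tmp_kpa, tmp_rise_rate_kpa_h,
                           min_flow=0.3, max_flow=1.0, step=0.1):
        """Return a new relative air-scour setpoint (0-1) from simple rules."""
        if tmp_kpa > 40 or tmp_rise_rate_kpa_h > 1.0:
            # Fouling indicators high: increase scouring to protect the membrane.
            return min(max_flow, current_airflow + step)
        if tmp_kpa < 20 and tmp_rise_rate_kpa_h < 0.2:
            # Stable operation: reduce aeration to save energy.
            return max(min_flow, current_airflow - step)
        return current_airflow                       # otherwise hold the setpoint

    if __name__ == "__main__":
        # Stable conditions: the setpoint is reduced by one step.
        print(air_scour_setpoint(0.8, tmp_kpa=18.0, tmp_rise_rate_kpa_h=0.1))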

  16. A knowledge-based system for diagnosis of mastitis problems at the herd level. 1. Concepts.

    PubMed

    Hogeveen, H; Noordhuizen-Stassen, E N; Tepp, D M; Kremer, W D; van Vliet, J H; Brand, A

    1995-07-01

    Much specialized knowledge is involved in the diagnosis of a mastitis problem at the herd level. Because of their problem-solving capacities, knowledge-based systems can be very useful to support the diagnosis of mastitis problems in the herd. Conditional causal models with multiple layers are used as a representation scheme for the development of a knowledge-based system for diagnosing mastitis problems. Construction of models requires extensive cooperation between the knowledge engineer and the domain expert. The first layer consists of three overview models: the general overview conditional causal model, the contagious overview conditional causal model, and the environmental overview conditional causal model, giving a causal description of the pathways through which mastitis problems can occur. The conditional causal model for primary udder defense and the conditional causal model for host defense are attached to the overview models at the second layer, and the conditional causal model for deep primary udder defense is attached to the conditional causal model for the primary udder defense at the third layer. Based on quantitative user input, the system determines the qualitative values of the nodes that are used for reasoning. The developed models showed that conditional causal models are a good method for modeling the mechanisms involved in a mastitis problem. The system needs to be extended in order to be useful in practical circumstances. PMID:7593836

  17. Knowledge-based algorithm for satellite image classification of urban wetlands

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofan; Ji, Wei

    2014-10-01

    It has been a challenge to accurately detect urban wetlands with remotely sensed data by means of pixel-based image classification. This technical difficulty results mainly from inadequate spatial resolutions of satellite imagery, spectral similarities between urban wetlands and adjacent land covers, and the spatial complexity of wetlands in human-transformed, heterogeneous urban landscapes. To address this issue, an image classification approach has been developed to improve the mapping accuracy of urban wetlands by integrating the pixel-based classification with a knowledge-based algorithm. The algorithm includes a set of decision rules for identifying wetland cover in relation to its elevation, spatial adjacencies, habitat conditions, hydro-geomorphological characteristics, and relevant geo-statistics. ERDAS Imagine software was used to develop the knowledge base and implement the classification. The study area is the metropolitan region of Kansas City, USA. SPOT satellite images of 1992, 2008, and 2010 were classified into four classes - wetland, farmland, built-up land, and forestland. The results suggest that the knowledge-based image classification approach can enhance urban wetland detection capabilities and classification accuracies with remotely sensed satellite imagery.

  18. Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris

    2010-01-01

    Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.

  19. Norine, the knowledgebase dedicated to non-ribosomal peptides, is now open to crowdsourcing

    PubMed Central

    Flissi, Areski; Dufresne, Yoann; Michalik, Juraj; Tonon, Laurie; Janot, Stéphane; Noé, Laurent; Jacques, Philippe; Leclère, Valérie; Pupin, Maude

    2016-01-01

    Since its creation in 2006, Norine remains the only knowledgebase dedicated to non-ribosomal peptides (NRPs). These secondary metabolites, produced by bacteria and fungi, harbor diverse and interesting biological activities (such as antibiotic, antitumor, siderophore or surfactant) directly related to the diversity of their structures. The Norine team's goal is to collect the NRPs and provide tools to analyze them efficiently. We have developed a user-friendly interface and dedicated tools to provide a complete bioinformatics platform. The knowledgebase gathers abundant and valuable annotations on more than 1100 NRPs. To increase the quantity of described NRPs and improve the quality of associated annotations, we are now opening Norine to crowdsourcing. We believe that contributors from the scientific community are the best experts to annotate the NRPs they work on. We have developed MyNorine to facilitate the submission of new NRPs or modifications of stored ones. This article presents MyNorine and other novelties of the Norine interface released since the first publication. Norine is freely accessible from the following URL: http://bioinfo.lifl.fr/NRP. PMID:26527733

  20. Knowledge-based video compression for search and rescue robots and multiple sensor networks

    NASA Astrophysics Data System (ADS)

    Williams, Chris; Murphy, Robin R.

    2006-05-01

    Robot and sensor networks are needed for safety, security, and rescue applications such as port security and reconnaissance during a disaster. These applications rely on real-time transmission of images, which generally saturate the available wireless network infrastructure. Knowledge-based compression is a method for reducing the video frame transmission rate between robots or sensors and remote operators. Because images may need to be archived as evidence and/or distributed to multiple applications with different post processing needs, lossy compression schemes, such as MPEG, H.26x, etc., are not acceptable. This work proposes a lossless video server system consisting of three classes of filters (redundancy, task, and priority) which use different levels of knowledge (local sensed environment, human factors associated with a local task, and relative global priority of a task) at the application layer of the network. It demonstrates the redundancy and task filters for a realistic robot search scenario. The redundancy filter is shown to reduce the overall transmission bandwidth by 24.07% to 33.42%, and, when combined with the task filter, reduces overall transmission bandwidth by 59.08% to 67.83%. By itself, the task filter has the capability to reduce transmission bandwidth by 32.95% to 33.78%. While knowledge-based compression generally does not reach the same levels of reduction as MPEG, there are instances where the system outperforms MPEG encoding.
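
    The redundancy filter concept can be sketched compactly: transmit a frame only when it differs sufficiently from the last transmitted frame. The change metric and threshold below are illustrative assumptions and do not reproduce the paper's filter design.

    import numpy as np

    class RedundancyFilter:
        def __init__(self, threshold=0.02):
            self.threshold = threshold       # fraction of pixels that must change
            self.last_sent = None

        def should_transmit(self, frame):
            """Return True if the frame is novel enough to send."""
            if self.last_sent is None:
                self.last_sent = frame
                return True
            diff = np.abs(frame.astype(int) - self.last_sent.astype(int))
            changed = np.mean(diff > 10)     # fraction of noticeably changed pixels
            if changed > self.threshold:
                self.last_sent = frame
                return True
            return False                     # redundant frame: suppress transmission

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        f = RedundancyFilter()
        static = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
        print(f.should_transmit(static))         # True: first frame
        print(f.should_transmit(static))         # False: unchanged scene
        print(f.should_transmit(255 - static))   # True: large change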

  1. Predicting Mycobacterium tuberculosis complex clades using knowledge-based Bayesian networks.

    PubMed

    Aminian, Minoo; Couvin, David; Shabbeer, Amina; Hadley, Kane; Vandenberg, Scott; Rastogi, Nalin; Bennett, Kristin P

    2014-01-01

    We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC) clades. The proposed knowledge-based Bayesian network (KBBN) treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes), since these are routinely gathered from MTBC isolates of tuberculosis (TB) patients. Results show that incorporating rules into problems can drastically increase classification accuracy if data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web. PMID:24864238

  2. A knowledge-based machine vision system for space station automation

    NASA Technical Reports Server (NTRS)

    Chipman, Laure J.; Ranganath, H. S.

    1989-01-01

    A simple knowledge-based approach to the recognition of objects in man-made scenes is being developed. Specifically, the system under development is a proposed enhancement to a robot arm for use in the space station laboratory module. The system will take a request from a user to find a specific object, and locate that object by using its camera input and information from a knowledge base describing the scene layout and attributes of the object types included in the scene. In order to use realistic test images in developing the system, researchers are using photographs of actual NASA simulator panels, which provide similar types of scenes to those expected in the space station environment. Figure 1 shows one of these photographs. In traditional approaches to image analysis, the image is transformed step by step into a symbolic representation of the scene. Often the first steps of the transformation are done without any reference to knowledge of the scene or objects. Segmentation of an image into regions generally produces a counterintuitive result in which regions do not correspond to objects in the image. After segmentation, a merging procedure attempts to group regions into meaningful units that will more nearly correspond to objects. Here, researchers avoid segmenting the image as a whole, and instead use a knowledge-directed approach to locate objects in the scene. The knowledge-based approach to scene analysis is described and the categories of knowledge used in the system are discussed.

  3. An architecture for integrating distributed and cooperating knowledge-based Air Force decision aids

    NASA Technical Reports Server (NTRS)

    Nugent, Richard O.; Tucker, Richard W.

    1988-01-01

    MITRE has been developing a Knowledge-Based Battle Management Testbed for evaluating the viability of integrating independently-developed knowledge-based decision aids in the Air Force tactical domain. The primary goal for the testbed architecture is to permit a new system to be added to a testbed with little change to the system's software. Each system that connects to the testbed network declares that it can provide a number of services to other systems. When a system wants to use another system's service, it does not address the server system by name, but instead transmits a request to the testbed network asking for a particular service to be performed. A key component of the testbed architecture is a common database which uses a relational database management system (RDBMS). The RDBMS provides a database update notification service to requesting systems. Normally, each system is expected to monitor data relations of interest to it. Alternatively, a system may broadcast an announcement message to inform other systems that an event of potential interest has occurred. Current research is aimed at dealing with issues resulting from integration efforts, such as dealing with potential mismatches of each system's assumptions about the common database, decentralizing network control, and coordinating multiple agents.

  4. Knowledge-based approach to multiple-transaction processing and distributed data-base design

    SciTech Connect

    Park, J.T.

    1987-01-01

    The collective processing of multiple transactions in a data-base system has recently received renewed attention due to its capability of improving the overall performance of a data-base system and its applicability to the design of knowledge-based expert systems and extensible data-base systems. This dissertation consists of two parts. The first part presents a new knowledge-based approach to the problems of processing multiple concurrent queries and distributing replicated data objects for further improvement of the overall system performance. The second part deals with distributed data-base design, i.e., designing horizontal fragments using semantic knowledge, and allocating data in a distributed environment. Semantic knowledge about data, such as functional dependencies and semantic-data-integrity constraints, is newly exploited for the identification of subset relationships between intermediate results of query executions involving joins, such that the (intermediate) results of queries can be utilized for the efficient processing of other queries. The expertise on the collective processing of multiple transactions is embodied in the rules of a rule-based expert system, MTP (Multiple Transaction Processor). In the second part, MTP is applied for the determination of horizontal fragments exploiting the semantic knowledge. Heuristics for allocating data in local area networks are developed.

  5. CLIPS implementation of a knowledge-based distributed control of an autonomous mobile robot

    NASA Astrophysics Data System (ADS)

    Bou-Ghannam, Akram A.; Doty, Keith L.

    1991-03-01

    We implement an architecture for the planning and control of an intelligent autonomous mobile robot which consists of concurrently running modules forming a hierarchy of control in which lower-level modules perform 'reflexive' tasks while higher-level modules perform tasks requiring greater processing of sensor data. A knowledge-based system performs the task planning and arbitration of lower-level behaviors. This system reasons about behavior selection (fusion) based on its current knowledge and the situation at hand provided by monitoring the status from lower-level behaviors and the map builder. We implement this knowledge-based planning module in CLIPS (C Language Integrated Production System), a rule-based expert systems shell. CLIPS is written in and fully integrated with the C language, providing high portability and ease of integration with external systems. We discuss implementation issues including the implementation of control strategy in CLIPS rules and interfacing to other modules through the use of CLIPS user-defined external functions.

  6. ASExpert: an integrated knowledge-based system for activated sludge plants.

    PubMed

    Sorour, M T; Bahgat, L M F; El, Iskandarani M A; Horan, N J

    2002-08-01

    The activated sludge process is commonly used for secondary wastewater treatment worldwide. This process is capable of achieving high-quality effluent. However, it has the reputation of being difficult to operate because of its poorly understood biological behaviour, the variability of input flows and the need to incorporate qualitative data. To augment this incomplete knowledge with experience, knowledge-based systems were introduced in the 1980s; however, they did not gain much popularity. This paper presents the Activated Sludge Expert system (ASExpert), a rule-based expert system plus a complete database tool proposed for use in activated sludge plants. The paper focuses on presenting the system's main features and capabilities to revive interest in knowledge-based systems as a reliable means for monitoring plants. It then presents the methodology adopted for ASExpert validation, along with an assessment of testing results. Finally, it concludes that expert systems technology has proved its importance for enhancing performance, especially if, in the future, it is integrated with a modern control system. PMID:12211453

  7. Dealing with difficult deformations: construction of a knowledge-based deformation atlas

    NASA Astrophysics Data System (ADS)

    Thorup, S. S.; Darvann, T. A.; Hermann, N. V.; Larsen, P.; Ólafsdóttir, H.; Paulsen, R. R.; Kane, A. A.; Govier, D.; Lo, L.-J.; Kreiborg, S.; Larsen, R.

    2010-03-01

    Twenty-three Taiwanese infants with unilateral cleft lip and palate (UCLP) were CT-scanned before lip repair at the age of 3 months, and again after lip repair at the age of 12 months. In order to evaluate the surgical result, detailed point correspondence between pre- and post-surgical images was needed. We have previously demonstrated that non-rigid registration using B-splines is able to provide automated determination of point correspondences in populations of infants without cleft lip. However, this type of registration fails when applied to the task of determining the complex deformation from before to after lip closure in infants with UCLP. The purpose of the present work was to show that use of prior information about typical deformations due to lip closure, through the construction of a knowledge-based atlas of deformations, could overcome the problem. Initially, mean volumes (atlases) for the pre- and post-surgical populations, respectively, were automatically constructed by non-rigid registration. An expert placed corresponding landmarks in the cleft area in the two atlases; this provided prior information used to build a knowledge-based deformation atlas. We model the change from pre- to post-surgery using thin-plate spline warping. The registration results are convincing and represent a first move towards an automatic registration method for dealing with difficult deformations due to this type of surgery.
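
    The thin-plate spline warping mentioned above can be illustrated with a compact, generic 2-D implementation; the landmark coordinates are made up, and this is a textbook TPS fit rather than the authors' atlas pipeline.

    import numpy as np

    def _U(r2):
        """TPS radial basis U(r) = r^2 log(r^2), defined as 0 at r = 0."""
        out = np.zeros_like(r2, dtype=float)
        nz = r2 > 0
        out[nz] = r2[nz] * np.log(r2[nz])
        return out

    def fit_tps(src, dst):
        """Solve for TPS coefficients mapping src landmarks onto dst landmarks."""
        n = len(src)
        d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        K = _U(d2)
        P = np.hstack([np.ones((n, 1)), src])           # affine part [1, x, y]
        L = np.zeros((n + 3, n + 3))
        L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
        rhs = np.zeros((n + 3, 2))
        rhs[:n] = dst
        return np.linalg.solve(L, rhs)                  # (n+3, 2) coefficients

    def warp(points, src, coeffs):
        """Apply a fitted TPS to arbitrary points."""
        d2 = ((points[:, None, :] - src[None, :, :]) ** 2).sum(-1)
        basis = np.hstack([_U(d2), np.ones((len(points), 1)), points])
        return basis @ coeffs

    if __name__ == "__main__":
        src = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5]], float)
        dst = src + np.array([[0, 0], [0, 0], [0, 0], [0, 0], [0.1, 0.2]])
        coeffs = fit_tps(src, dst)
        # The centre landmark maps exactly onto its displaced target (0.6, 0.7).
        print(warp(np.array([[0.5, 0.5], [0.25, 0.25]]), src, coeffs))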

  8. Knowledge-based biomedical word sense disambiguation: an evaluation and application to clinical document classification

    PubMed Central

    Garla, Vijay N; Brandt, Cynthia

    2013-01-01

    Background Word sense disambiguation (WSD) methods automatically assign an unambiguous concept to an ambiguous term based on context, and are important to many text-processing tasks. In this study we developed and evaluated a knowledge-based WSD method that uses semantic similarity measures derived from the Unified Medical Language System (UMLS) and evaluated the contribution of WSD to clinical text classification. Methods We evaluated our system on biomedical WSD datasets and determined the contribution of our WSD system to clinical document classification on the 2007 Computational Medicine Challenge corpus. Results Our system compared favorably with other knowledge-based methods. Machine learning classifiers trained on disambiguated concepts significantly outperformed those trained using all concepts. Conclusions We developed a WSD system that achieves high disambiguation accuracy on standard biomedical WSD datasets and showed that our WSD system improves clinical document classification. Data sharing We integrated our WSD system with MetaMap and the clinical Text Analysis and Knowledge Extraction System, two popular biomedical natural language processing systems. All codes required to reproduce our results and all tools developed as part of this study are released as open source, available under http://code.google.com/p/ytex. PMID:23077130
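
    A toy sketch of knowledge-based disambiguation by semantic similarity is shown below: the candidate sense closest to the surrounding context concepts in a small is-a hierarchy wins. The hierarchy, concepts and path-based measure are invented for illustration; the study used UMLS-derived similarity measures.

    PARENT = {                        # child -> parent ("is-a") links
        "cold_temperature": "physical_phenomenon",
        "common_cold": "respiratory_infection",
        "influenza": "respiratory_infection",
        "respiratory_infection": "disease",
        "cough": "symptom",
        "symptom": "clinical_finding",
        "disease": "clinical_finding",
        "physical_phenomenon": "entity",
        "clinical_finding": "entity",
    }

    def ancestors(c):
        """Concept followed by its chain of is-a ancestors, root last."""
        path = [c]
        while c in PARENT:
            c = PARENT[c]
            path.append(c)
        return path

    def path_similarity(a, b):
        """1 / (1 + number of is-a edges between a and b through a shared ancestor)."""
        pa, pb = ancestors(a), ancestors(b)
        common = set(pa) & set(pb)
        dist = min(pa.index(c) + pb.index(c) for c in common) if common else len(pa) + len(pb)
        return 1.0 / (1.0 + dist)

    def disambiguate(candidates, context_concepts):
        """Score each candidate sense by its total similarity to the context."""
        scores = {c: sum(path_similarity(c, ctx) for ctx in context_concepts)
                  for c in candidates}
        return max(scores, key=scores.get)

    if __name__ == "__main__":
        # "cold" in "patient presents with cough and flu-like cold" -> disorder sense.
        print(disambiguate(["cold_temperature", "common_cold"],
                           context_concepts=["cough", "influenza"]))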

  9. Transformative Pedagogy, Leadership and School Organisation for the Twenty-First-Century Knowledge-Based Economy: The Case of Singapore

    ERIC Educational Resources Information Center

    Dimmock, Clive; Goh, Jonathan W. P.

    2011-01-01

    Singapore has a high performing school system; its students top international tests in maths and science. Yet while the Singapore government cherishes its world class "brand", it realises that in a globally competitive world, its schools need to prepare students for the twenty-first-century knowledge-based economy (KBE). Accordingly, over the past…

  10. Young People's Management of the Transition from Education to Employment in the Knowledge-Based Sector in Shanghai

    ERIC Educational Resources Information Center

    Wang, Qi; Lowe, John

    2011-01-01

    This paper reports on a study of the transition from university to work by students/employees in the complex and rapidly changing socio-economic context of contemporary Shanghai. It aims at understanding how highly educated young people perceive the nature and mode of operation of the newly emerging labour market for knowledge-based jobs, and how…

  11. Development of the Knowledge-Based Standard for the Written Certification Examination of the American Board of Anesthesiology.

    ERIC Educational Resources Information Center

    Slogoff, Stephen; And Others

    1992-01-01

    Application of a knowledge-based standard in evaluating a written certification examination developed by the American Board of Anesthesiology established a standard of 57 percent correct over two years' examinations. This process is recommended for developing mastery-based (rather than normative-based) success criteria for evaluation of medical…

  12. Enhancing Student Learning in Knowledge-Based Courses: Integrating Team-Based Learning in Mass Communication Theory Classes

    ERIC Educational Resources Information Center

    Han, Gang; Newell, Jay

    2014-01-01

    This study explores the adoption of the team-based learning (TBL) method in knowledge-based and theory-oriented journalism and mass communication (J&MC) courses. It first reviews the origin and concept of TBL, the relevant theories, and then introduces the TBL method and implementation, including procedures and assessments, employed in an…

  13. A Comparative Analysis of New Governance Instruments in the Transnational Educational Space: A Shift to Knowledge-Based Instruments?

    ERIC Educational Resources Information Center

    Ioannidou, Alexandra

    2007-01-01

    In recent years, the ongoing development towards a knowledge-based society--associated with globalization, an aging population, new technologies and organizational changes--has led to a more intensive analysis of education and learning throughout life with regard to quantitative, qualitative and financial aspects. In this framework, education…

  14. Methodology development of an engineering design expert system utilizing a modular knowledge-base inference process

    NASA Astrophysics Data System (ADS)

    Winter, Steven John

    Methodology development was conducted to incorporate a modular knowledge-base representation into an expert system engineering design application. The objective for using multidisciplinary methodologies in defining a design system was to develop a system framework that would be applicable to a wide range of engineering applications. The technique of "knowledge clustering" was used to construct a general decision tree for all factual information relating to the design application. This construction combined the design process surface knowledge and specific application depth knowledge. Utilization of both levels of knowledge created a system capable of processing multiple controlling tasks, including organizing factual information relative to the cognitive levels of the design process, building finite element models for depth knowledge analysis, developing a standardized finite element code for parallel processing, and determining a best solution generated by design optimization procedures. Proof of concept for the methodology developed here is shown in the implementation of an application defining the analysis and optimization of a composite aircraft canard subjected to a general compound loading condition. This application contained a wide range of factual information and heuristic rules. The analysis tools used included a finite element (FE) processor and a numerical optimizer. An advisory knowledge-base was also developed to provide a standard for conversion of serial FE code for parallel processing. All knowledge-bases developed operated as advisory, selection, or classification systems. Laminate properties are limited to even-numbered, quasi-isotropic ply stacking sequences. This retained the full influence of the coupled in-plane and bending effects of the structure's behavior. The canard is modeled as a constant thickness plate and discretized into a varying number of four or nine-noded, quadrilateral, shear-deformable plate elements. The benefit gained by

  15. Optimization of knowledge-based systems and expert system building tools

    NASA Technical Reports Server (NTRS)

    Yasuda, Phyllis; Mckellar, Donald

    1993-01-01

    The objectives of the NASA-AMES Cooperative Agreement were to investigate, develop, and evaluate, via test cases, the system parameters and processing algorithms that constrain the overall performance of the Information Sciences Division's Artificial Intelligence Research Facility. Written reports covering various aspects of the grant were submitted to the co-investigators for the grant. Research studies concentrated on the field of artificial intelligence knowledge-based systems technology. Activities included the following areas: (1) AI training classes; (2) merging optical and digital processing; (3) science experiment remote coaching; (4) SSF data management system tests; (5) computer integrated documentation project; (6) conservation of design knowledge project; (7) project management calendar and reporting system; (8) automation and robotics technology assessment; (9) advanced computer architectures and operating systems; and (10) honors program.

  16. Knowledge-based simulation of DNA metabolism: prediction of enzyme action.

    PubMed

    Brutlag, D L; Galper, A R; Millis, D H

    1991-01-01

    We have developed a knowledge-based simulation of DNA metabolism that accurately predicts the actions of enzymes on DNA under a large number of environmental conditions. Previous simulations of enzyme systems rely predominantly on mathematical models. We use a frame-based representation to model enzymes, substrates and conditions. Interactions between these objects are expressed using production rules and an underlying truth maintenance system. The system performs rapid inference and can explain its reasoning. A graphical interface provides access to all elements of the simulation, including object representations and explanation graphs. Predicting enzyme action is the first step in the development of a large knowledge base to envision the metabolic pathways of DNA replication and repair. PMID:2004281
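
    The style of representation described above (frames for enzymes, substrates and conditions, plus production rules that fire on them) can be illustrated with a minimal sketch. The enzyme, substrate, conditions and rule bodies below are illustrative assumptions for a single restriction-digest scenario, not rules taken from the published system.

    ```python
    # Minimal sketch of a frame-based representation with production rules,
    # in the spirit of the simulation described above. Enzyme names, conditions
    # and rule conditions are illustrative assumptions only.

    # Frames: objects represented as slot/value dictionaries.
    enzyme = {"name": "EcoRI", "class": "restriction endonuclease",
              "recognition_site": "GAATTC", "requires": {"Mg2+"}}
    substrate = {"name": "pBR322", "form": "double-stranded circular DNA",
                 "sites": {"GAATTC": 1}}
    conditions = {"ions": {"Mg2+"}, "temperature_C": 37}

    def rule_cleavage(enzyme, substrate, conditions):
        """Production rule: a restriction endonuclease cleaves DNA when its
        recognition site is present and required cofactors are supplied."""
        if (enzyme["class"] == "restriction endonuclease"
                and substrate["sites"].get(enzyme["recognition_site"], 0) > 0
                and enzyme["requires"] <= conditions["ions"]):
            return f'{enzyme["name"]} cleaves {substrate["name"]} at {enzyme["recognition_site"]}'
        return None

    def rule_no_cofactor(enzyme, substrate, conditions):
        """Explanatory rule: report a blocked action so the system can explain its reasoning."""
        missing = enzyme["requires"] - conditions["ions"]
        if missing:
            return f'{enzyme["name"]} is inactive: missing cofactor(s) {sorted(missing)}'
        return None

    for rule in (rule_cleavage, rule_no_cofactor):
        conclusion = rule(enzyme, substrate, conditions)
        if conclusion:
            print(conclusion)
    ```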

  17. Geomorphological feature extraction from a digital elevation model through fuzzy knowledge-based classification

    NASA Astrophysics Data System (ADS)

    Argialas, Demetre P.; Tzotsos, Angelos

    2003-03-01

The objective of this research was the investigation of advanced image analysis methods for geomorphological mapping. Methods employed included multiresolution segmentation of the GTOPO30 Digital Elevation Model (DEM) and fuzzy knowledge-based classification of the segmented DEM into three geomorphological classes: mountain ranges, piedmonts and basins. The study area was a segment of the Basin and Range Physiographic Province in Nevada, USA. The implementation was carried out in eCognition. In particular, the segmentation of GTOPO30 resulted in primitive objects. The knowledge-based classification of the primitive objects, based on their elevation and shape parameters, resulted in the extraction of the geomorphological features. The resulting boundaries compared satisfactorily with those of previous studies. It is concluded that geomorphological feature extraction can be carried out through fuzzy knowledge-based classification as implemented in eCognition.
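
    A toy sketch of the kind of fuzzy knowledge-based classification described above: each segmented DEM object receives trapezoidal membership values for its attributes, and the class with the highest combined membership wins. The membership breakpoints and the slope attribute are illustrative assumptions, not the parameters used in the eCognition study.

    ```python
    # Toy fuzzy knowledge-based classification of DEM objects into
    # mountain range / piedmont / basin. Breakpoints (metres, degrees) are
    # illustrative assumptions.

    def trapezoid(x, a, b, c, d):
        """Trapezoidal fuzzy membership function."""
        if x <= a or x >= d:
            return 0.0
        if b <= x <= c:
            return 1.0
        return (x - a) / (b - a) if x < b else (d - x) / (d - c)

    def classify(mean_elev, slope_deg):
        memberships = {
            "basin":          min(trapezoid(mean_elev, -1, 0, 1200, 1500),
                                  trapezoid(slope_deg, -1, 0, 2, 5)),
            "piedmont":       min(trapezoid(mean_elev, 1200, 1500, 1900, 2200),
                                  trapezoid(slope_deg, 1, 3, 8, 12)),
            "mountain range": min(trapezoid(mean_elev, 1900, 2200, 4500, 5000),
                                  trapezoid(slope_deg, 6, 10, 60, 90)),
        }
        return max(memberships, key=memberships.get), memberships

    label, m = classify(mean_elev=2100.0, slope_deg=14.0)
    print(label, {k: round(v, 2) for k, v in m.items()})
    ```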

  18. Building organisational cyber resilience: A strategic knowledge-based view of cyber security management.

    PubMed

    Ferdinand, Jason

    The concept of cyber resilience has emerged in recent years in response to the recognition that cyber security is more than just risk management. Cyber resilience is the goal of organisations, institutions and governments across the world and yet the emerging literature is somewhat fragmented due to the lack of a common approach to the subject. This limits the possibility of effective collaboration across public, private and governmental actors in their efforts to build and maintain cyber resilience. In response to this limitation, and to calls for a more strategically focused approach, this paper offers a knowledge-based view of cyber security management that explains how an organisation can build, assess, and maintain cyber resilience. PMID:26642176

  19. Knowledge-based reasoning in the Paladin tactical decision generation system

    NASA Technical Reports Server (NTRS)

    Chappell, Alan R.

    1993-01-01

A real-time tactical decision generation system for air combat engagements, Paladin, has been developed. A pilot's job in air combat includes tasks that are largely symbolic. These symbolic tasks are generally performed through the application of experience and training (i.e. knowledge) gathered over years of flying a fighter aircraft. Two such tasks, situation assessment and throttle control, are identified and broken out in Paladin to be handled by specialized knowledge-based systems. Knowledge pertaining to these tasks is encoded into rule-bases to provide the foundation for decisions. Paladin uses a custom-built inference engine and a partitioned rule-base structure to produce these symbolic results in real time. This paper provides an overview of knowledge-based reasoning systems as a subset of rule-based systems. The knowledge used by Paladin in generating results, as well as the system design for real-time execution, is discussed.

  20. Expert Knowledge-Based Automatic Sleep Stage Determination by Multi-Valued Decision Making Method

    NASA Astrophysics Data System (ADS)

    Wang, Bei; Sugi, Takenao; Kawana, Fusae; Wang, Xingyu; Nakamura, Masatoshi

In this study, an expert knowledge-based automatic sleep stage determination system based on a multi-valued decision-making method is developed. Visual inspection by a qualified clinician is adopted to obtain the expert knowledge database. The expert knowledge database consists of probability density functions of parameters for the various sleep stages. Sleep stages are determined automatically according to the conditional probability. In total, four subjects participated. The automatic sleep stage determination results showed close agreement with visual inspection for the sleep stages of awake, REM (rapid eye movement) sleep, light sleep and deep sleep. The constructed expert knowledge database reflects the distributions of characteristic parameters, which can adapt to the variability of sleep data in hospitals. The developed automatic determination technique, based on expert knowledge from visual inspection, can be an assistant tool enabling further inspection of sleep disorder cases in clinical practice.
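
    A minimal sketch of the decision principle described above: each candidate stage has probability density functions for a few characteristic parameters, and the stage with the highest conditional probability given the observed epoch is selected. The parameter names and Gaussian densities below are illustrative assumptions, not the clinicians' actual knowledge database.

    ```python
    import math

    # Pick the sleep stage with the highest conditional probability of the
    # observed parameters, given per-stage probability density functions.
    # Parameters and Gaussian means/SDs are illustrative assumptions.

    STAGE_PDFS = {
        # stage: {parameter: (mean, std)}
        "awake":       {"alpha_ratio": (0.45, 0.10), "emg_level": (0.70, 0.15)},
        "REM":         {"alpha_ratio": (0.20, 0.08), "emg_level": (0.15, 0.08)},
        "light sleep": {"alpha_ratio": (0.15, 0.07), "emg_level": (0.35, 0.10)},
        "deep sleep":  {"alpha_ratio": (0.05, 0.04), "emg_level": (0.30, 0.10)},
    }

    def gaussian_pdf(x, mean, std):
        return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

    def determine_stage(epoch_params):
        scores = {}
        for stage, pdfs in STAGE_PDFS.items():
            p = 1.0
            for name, (mean, std) in pdfs.items():
                p *= gaussian_pdf(epoch_params[name], mean, std)
            scores[stage] = p
        total = sum(scores.values())
        return max(scores, key=scores.get), {s: p / total for s, p in scores.items()}

    stage, posterior = determine_stage({"alpha_ratio": 0.18, "emg_level": 0.14})
    print(stage, {s: round(p, 3) for s, p in posterior.items()})
    ```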

  1. Knowledge-based recognition algorithm for long-range infrared bridge images

    NASA Astrophysics Data System (ADS)

    Cao, Zhiguo; Sun, Qi; Zhang, Tianxu

    2001-10-01

The recognition of bridges in long-range infrared images presents a number of problems due to the complexity of the background, high noise interference, the small size of the target and the low contrast between the bridge and its surrounding water area. To overcome these barriers, we have developed a new knowledge-based recognition algorithm. It first detects candidate bridge sub-regions and then focuses on them. Credits are assigned according to the degree to which the candidates match a pre-built framework, so that false objects are excluded and the real target is eventually found. The experimental results demonstrate that our localized method is consistently superior to the traditional global algorithms adopted by most previous researchers.

  2. Knowledge-Based Manufacturing and Structural Design for a High Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Marx, William J.; Mavris, Dimitri N.; Schrage, Daniel P.

    1994-01-01

    The aerospace industry is currently addressing the problem of integrating manufacturing and design. To address the difficulties associated with using many conventional procedural techniques and algorithms, one feasible way to integrate the two concepts is with the development of an appropriate Knowledge-Based System (KBS). The authors present their reasons for selecting a KBS to integrate design and manufacturing. A methodology for an aircraft producibility assessment is proposed, utilizing a KBS for manufacturing process selection, that addresses both procedural and heuristic aspects of designing and manufacturing of a High Speed Civil Transport (HSCT) wing. A cost model is discussed that would allow system level trades utilizing information describing the material characteristics as well as the manufacturing process selections. Statements of future work conclude the paper.

  3. KIPSE1: A Knowledge-based Interactive Problem Solving Environment for data estimation and pattern classification

    NASA Technical Reports Server (NTRS)

    Han, Chia Yung; Wan, Liqun; Wee, William G.

    1990-01-01

A knowledge-based interactive problem solving environment called KIPSE1 is presented. KIPSE1 is a system built on a commercial expert system shell, the KEE system. This environment gives the user the capability to carry out exploratory data analysis and pattern classification tasks. A good solution often consists of a sequence of steps, with a set of methods used at each step. In KIPSE1, a solution is represented in the form of a decision tree, and each node of the solution tree represents a partial solution to the problem. Many methodologies are provided at each node so that the user can interactively select the method and data sets to test and subsequently examine the results. In addition, users are allowed to make decisions at various stages of problem solving to subdivide the problem into smaller subproblems, so that a large problem can be handled and a better solution found.

  4. Knowledge-Based Parallel Performance Technology for Scientific Application Competitiveness Final Report

    SciTech Connect

    Malony, Allen D; Shende, Sameer

    2011-08-15

The primary goal of the University of Oregon's DOE "competitiveness" project was to create performance technology that embodies and supports knowledge of performance data, analysis, and diagnosis in parallel performance problem solving. The target of our development activities was the TAU Performance System and the technology accomplishments reported in this and prior reports have all been incorporated in the TAU open software distribution. In addition, the project has been committed to maintaining strong interactions with the DOE SciDAC Performance Engineering Research Institute (PERI) and Center for Technology for Advanced Scientific Component Software (TASCS). This collaboration has proved valuable for translation of our knowledge-based performance techniques to parallel application development and performance engineering practice. Our outreach has also extended to the DOE Advanced CompuTational Software (ACTS) collection and project. Throughout the project we have participated in the PERI and TASCS meetings, as well as the ACTS annual workshops.

  5. The neXtProt knowledgebase on human proteins: current status

    PubMed Central

    Gaudet, Pascale; Michel, Pierre-André; Zahn-Zabal, Monique; Cusin, Isabelle; Duek, Paula D.; Evalet, Olivier; Gateau, Alain; Gleizes, Anne; Pereira, Mario; Teixeira, Daniel; Zhang, Ying; Lane, Lydie; Bairoch, Amos

    2015-01-01

    neXtProt (http://www.nextprot.org) is a human protein-centric knowledgebase developed at the SIB Swiss Institute of Bioinformatics. Focused solely on human proteins, neXtProt aims to provide a state of the art resource for the representation of human biology by capturing a wide range of data, precise annotations, fully traceable data provenance and a web interface which enables researchers to find and view information in a comprehensive manner. Since the introductory neXtProt publication, significant advances have been made on three main aspects: the representation of proteomics data, an extended representation of human variants and the development of an advanced search capability built around semantic technologies. These changes are presented in the current neXtProt update. PMID:25593349

  6. FTDD973: A multimedia knowledge-based system and methodology for operator training and diagnostics

    NASA Technical Reports Server (NTRS)

    Hekmatpour, Amir; Brown, Gary; Brault, Randy; Bowen, Greg

    1993-01-01

FTDD973 (973 Fabricator Training, Documentation, and Diagnostics) is an interactive multimedia knowledge-based system and methodology for computer-aided training and certification of operators, as well as tool and process diagnostics, in IBM's CMOS SGP fabrication line (building 973). FTDD973 is an example of what can be achieved with modern multimedia workstations. Knowledge-based systems, hypertext, hypergraphics, high-resolution images, audio, motion video, and animation are technologies that in synergy can be far more useful than each by itself. FTDD973's modular and object-oriented architecture is also an example of how improvements in software engineering are finally making it possible to combine many software modules into one application. FTDD973 is developed in ExperMedia/2, an OS/2 multimedia expert system shell for domain experts.

  7. Knowledge-Based, Interactive, Custom Anatomical Scene Creation for Medical Education: The Biolucida System

    PubMed Central

    Warren, Wayne; Brinkley, James F.

    2005-01-01

    Few biomedical subjects of study are as resource-intensive to teach as gross anatomy. Medical education stands to benefit greatly from applications which deliver virtual representations of human anatomical structures. While many applications have been created to achieve this goal, their utility to the student is limited because of a lack of interactivity or customizability by expert authors. Here we describe the first version of the Biolucida system, which allows an expert anatomist author to create knowledge-based, customized, and fully interactive scenes and lessons for students of human macroscopic anatomy. Implemented in Java and VRML, Biolucida allows the sharing of these instructional 3D environments over the internet. The system simplifies the process of authoring immersive content while preserving its flexibility and expressivity. PMID:16779148

  8. Using fuzzy logic to integrate neural networks and knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Yen, John

    1991-01-01

    Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with neural networks through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing the situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between low-level data processing mechanisms of neural nets and artificial intelligence systems.
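
    The integration pattern outlined above, fuzzy action rules deciding how the symbolic layer should treat a neural network's output, can be sketched as follows. The membership functions, confidence variable and actions are illustrative assumptions rather than the author's architecture.

    ```python
    # Sketch: a symbolic layer uses fuzzy action rules to decide how to treat
    # a neural network's output. Membership functions and actions are
    # illustrative assumptions.

    def mu_low(c):    # confidence is "low"
        return max(0.0, min(1.0, (0.5 - c) / 0.5))

    def mu_high(c):   # confidence is "high"
        return max(0.0, min(1.0, (c - 0.5) / 0.5))

    def interpret(net_label, confidence):
        """Fuzzy action rules:
           IF confidence is high THEN accept the network's classification.
           IF confidence is low  THEN defer to the symbolic knowledge base."""
        accept = mu_high(confidence)
        defer = mu_low(confidence)
        action = "accept" if accept >= defer else "defer_to_knowledge_base"
        return {"label": net_label, "accept": accept, "defer": defer, "action": action}

    print(interpret("valve_fault", confidence=0.82))
    print(interpret("valve_fault", confidence=0.35))
    ```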

  9. Knowledge-based potentials in bioinformatics: From a physicist’s viewpoint

    NASA Astrophysics Data System (ADS)

    Zheng, Wei-Mou

    2015-12-01

    Biological raw data are growing exponentially, providing a large amount of information on what life is. It is believed that potential functions and the rules governing protein behaviors can be revealed from analysis on known native structures of proteins. Many knowledge-based potentials for proteins have been proposed. Contrary to most existing review articles which mainly describe technical details and applications of various potential models, the main foci for the discussion here are ideas and concepts involving the construction of potentials, including the relation between free energy and energy, the additivity of potentials of mean force and some key issues in potential construction. Sequence analysis is briefly viewed from an energetic viewpoint. Project supported in part by the National Natural Science Foundation of China (Grant Nos. 11175224 and 11121403).
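
    One of the central ideas discussed in such reviews is the inverse-Boltzmann relation between distance distributions observed in native structures and a potential of mean force, E(r) = -kT ln(P_obs(r)/P_ref(r)). The sketch below applies this relation to synthetic histograms; the counts and reference state are placeholders, not statistics from real structures.

    ```python
    import numpy as np

    # Minimal inverse-Boltzmann sketch of reading a knowledge-based contact
    # potential off structure statistics:  E(r) = -kT * ln(P_obs(r) / P_ref(r)).
    # The distance histograms below are synthetic placeholders, not PDB counts.

    kT = 0.593  # kcal/mol at ~298 K

    bins = np.linspace(3.0, 10.0, 15)                 # residue-pair distance bins (angstrom)
    obs_counts = np.array([1, 4, 12, 30, 55, 70, 66, 58, 50, 44, 40, 38, 36, 35])
    ref_counts = np.array([2, 6, 14, 25, 38, 50, 58, 60, 58, 54, 50, 47, 45, 44])

    p_obs = obs_counts / obs_counts.sum()
    p_ref = ref_counts / ref_counts.sum()
    energy = -kT * np.log(p_obs / p_ref)              # potential of mean force per bin

    for left, right, e in zip(bins[:-1], bins[1:], energy):
        print(f"{left:4.1f}-{right:4.1f} A : {e:+.2f} kcal/mol")
    ```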

  10. Cyanobacterial KnowledgeBase (CKB), a Compendium of Cyanobacterial Genomes and Proteomes

    PubMed Central

    Mohandass, Shylajanaciyar; Varadharaj, Sangeetha; Thilagar, Sivasudha; Abdul Kareem, Kaleel Ahamed; Dharmar, Prabaharan; Gopalakrishnan, Subramanian; Lakshmanan, Uma

    2015-01-01

    Cyanobacterial KnowledgeBase (CKB) is a free access database that contains the genomic and proteomic information of 74 fully sequenced cyanobacterial genomes belonging to seven orders. The database also contains tools for sequence analysis. The Species report and the gene report provide details about each species and gene (including sequence features and gene ontology annotations) respectively. The database also includes cyanoBLAST, an advanced tool that facilitates comparative analysis, among cyanobacterial genomes and genomes of E. coli (prokaryote) and Arabidopsis (eukaryote). The database is developed and maintained by the Sub-Distributed Informatics Centre (sponsored by the Department of Biotechnology, Govt. of India) of the National Facility for Marine Cyanobacteria, a facility dedicated to marine cyanobacterial research. CKB is freely available at http://nfmc.res.in/ckb/index.html. PMID:26305368

  11. Quality control in nerve conduction studies with coupled knowledge-based system approach.

    PubMed

    Xiang, Y; Eisen, A; MacNeil, M; Beddoes, M P

    1992-02-01

Contemporary equipment used for nerve conduction studies is usually capable of computerized measurement of latency, amplitude, duration, and area of nerve and muscle action potentials and the resulting conduction velocities. Abnormalities can be due to technical error or disease. Identification of technical error is a major element of quality control in electromyography, and artificial intelligence could be useful for this purpose. We have developed a coupled knowledge-based prototype system (QUALICON) to assess the correctness of recording and stimulating characteristics in routine conduction studies. QUALICON extracts numeric features from compound muscle action potentials (CMAPs) or sensory nerve action potentials (SNAPs), which are translated into symbolic form to drive a Bayesian network. The network uses high-level knowledge to infer the quality of stimulating and recording electrode placement as well as polarity and stimulus strength, making recommendations about the likely technical error when abnormal potentials are detected. A preliminary assessment shows that QUALICON performs as well as manual assessment by professionals. PMID:1549138
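
    A rough sketch of the two steps the abstract describes, translating numeric waveform features into symbolic findings and letting a probabilistic network weigh candidate technical errors, is shown below. The features, thresholds, priors and likelihoods are illustrative assumptions, not QUALICON's knowledge base.

    ```python
    # Numeric-to-symbolic translation plus a tiny Bayesian update over candidate
    # technical errors. All numbers are illustrative assumptions.

    def symbolize(cmap):
        """Translate numeric CMAP features into symbolic findings."""
        findings = set()
        if cmap["amplitude_mV"] < 2.0:
            findings.add("low_amplitude")
        if cmap["onset_latency_ms"] > 4.5:
            findings.add("prolonged_latency")
        if cmap["baseline_shift"]:
            findings.add("unstable_baseline")
        return findings

    PRIOR = {"electrode_misplacement": 0.3, "submaximal_stimulus": 0.3,
             "reversed_polarity": 0.1, "no_technical_error": 0.3}
    LIKELIHOOD = {  # P(finding | hypothesis)
        "low_amplitude":     {"electrode_misplacement": 0.8, "submaximal_stimulus": 0.9,
                              "reversed_polarity": 0.3, "no_technical_error": 0.1},
        "prolonged_latency": {"electrode_misplacement": 0.6, "submaximal_stimulus": 0.4,
                              "reversed_polarity": 0.2, "no_technical_error": 0.1},
        "unstable_baseline": {"electrode_misplacement": 0.7, "submaximal_stimulus": 0.2,
                              "reversed_polarity": 0.2, "no_technical_error": 0.1},
    }

    def diagnose(findings):
        post = dict(PRIOR)
        for f in findings:
            for h in post:
                post[h] *= LIKELIHOOD[f][h]
        z = sum(post.values())
        return {h: p / z for h, p in post.items()}

    findings = symbolize({"amplitude_mV": 1.2, "onset_latency_ms": 5.1, "baseline_shift": False})
    print(findings, {h: round(p, 2) for h, p in diagnose(findings).items()})
    ```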

  12. The neXtProt knowledgebase on human proteins: current status.

    PubMed

    Gaudet, Pascale; Michel, Pierre-André; Zahn-Zabal, Monique; Cusin, Isabelle; Duek, Paula D; Evalet, Olivier; Gateau, Alain; Gleizes, Anne; Pereira, Mario; Teixeira, Daniel; Zhang, Ying; Lane, Lydie; Bairoch, Amos

    2015-01-01

    neXtProt (http://www.nextprot.org) is a human protein-centric knowledgebase developed at the SIB Swiss Institute of Bioinformatics. Focused solely on human proteins, neXtProt aims to provide a state of the art resource for the representation of human biology by capturing a wide range of data, precise annotations, fully traceable data provenance and a web interface which enables researchers to find and view information in a comprehensive manner. Since the introductory neXtProt publication, significant advances have been made on three main aspects: the representation of proteomics data, an extended representation of human variants and the development of an advanced search capability built around semantic technologies. These changes are presented in the current neXtProt update. PMID:25593349

  13. Knowledge-based computer system to aid in the histopathological diagnosis of breast disease.

    PubMed Central

    Heathfield, H; Bose, D; Kirkham, N

    1991-01-01

    A knowledge-based computer system, designed to assist pathologists in the histological diagnosis of breast disease, is described. This system represents knowledge in the form of "disease profiles" and uses a novel inference model based on the mathematical technique of hypergraphs. Its design overcomes many of the limitations of existing expert system technologies when applied to breast disease. In particular, the system can quickly focus on a differential problem and thus reduce the amount of data necessary to reach a conclusion. The system was tested on two sets of samples, consisting of 14 retrospective cases and five hypothetical cases of breast disease. Its recommendations were judged "correct" by the evaluating pathologist in 15 cases. This study shows the feasibility of providing "decision support" in histopathology. PMID:2066430

  14. Research on Interactive Knowledge-Based Indexing: The MedIndEx Prototype

    PubMed Central

    Humphrey, Susanne M.

    1989-01-01

    The general purpose of the MedIndEx (Medical Indexing Expert) Project at the National Library of Medicine (NLM) is to design, develop, and test interactive knowledge-based systems for computer-assisted indexing of literature in the MEDLINE® database using terms from the MeSH® (Medical Subject Headings) thesaurus. In conventional MEDLINE indexing, although indexers enter MeSH descriptors at computer terminals, they consult the thesaurus, indexing manual, and other tools in published form. In the MedIndEx research prototype, the thesaurus and indexing rules are incorporated into a computerized knowledge base (KB) which provides specific assistance not possible in the conventional indexing system. We expect such a system, which combines principles and methods of artificial intelligence and information retrieval, will facilitate expert indexing that takes place at NLM.

  15. CASP2 knowledge-based approach to distant homology recognition and fold prediction in CASP4.

    PubMed

    Murzin, A G; Bateman, A

    2001-01-01

In 1996, in CASP2, we presented a semimanual approach to the prediction of protein structure that was aimed at the recognition of probable distant homology, where it existed, between a given target protein and a protein of known structure (Murzin and Bateman, Proteins 1997; Suppl 1:105-112). Central to our method was the knowledge of all known structural and probable evolutionary relationships among proteins of known structure classified in the SCOP database (Murzin et al., J Mol Biol 1995;247:536-540). It was demonstrated that a knowledge-based approach could compete successfully with the best computational methods of the time in the correct recognition of the target protein fold. Four years later, in CASP4, we applied essentially the same knowledge-based approach to distant homology recognition, concentrating our effort on improving the completeness and alignment accuracy of our models. The manifold increase in available sequence and structure data was to our advantage, as was the experience and expertise obtained through the classification of these data. In particular, we were able to model most of our predictions from several distantly related structures rather than from a single parent structure, and we could use more superfamily-characteristic features for the refinement of our alignments. Our predictions for each of the attempted distant homology recognition targets ranked among the top few predictions for those targets, with the predictions for the hypothetical protein HI0065 (T0104) and the C-terminal domain of the ABC transporter MalK (T0121C) being particularly successful. We also attempted the prediction of the protein folds of some of the targets tentatively assigned to new superfamilies. The average quality of our fold predictions was far lower than the quality of our distant homology recognition models, but for two targets, chorismate lyase (T0086) and Appr>p cyclic phosphodiesterase (T0094), our predictions achieved

  16. Comparative development of knowledge-based bioeconomy in the European Union and Turkey.

    PubMed

    Celikkanat Ozan, Didem; Baran, Yusuf

    2014-09-01

Biotechnology, defined as the technological application that uses biological systems and living organisms, or their derivatives, to create or modify diverse products or processes, is widely used for healthcare, agricultural and environmental applications. The continuity of industrial applications of biotechnology enables the rise and development of the bioeconomy concept. The bioeconomy, which encompasses all applications of biotechnology, is defined as the translation of knowledge gained from the life sciences into new, sustainable, environmentally friendly and competitive products. Through advanced research and eco-efficient processes within the scope of the bioeconomy, a healthier and more sustainable life is promised. The knowledge-based bioeconomy, with its economic, social and environmental potential, has already been brought onto the research agendas of European Union (EU) countries. The aim of this study is to summarize the development of the knowledge-based bioeconomy in EU countries and to evaluate Turkey's current situation in comparison. EU-funded biotechnology research projects under FP6 and FP7 and nationally funded biotechnology projects under The Scientific and Technological Research Council of Turkey (TUBITAK) Academic Research Funding Program Directorate (ARDEB) and Technology and Innovation Funding Programs Directorate (TEYDEB) were examined. In the context of this study, the main research areas and subfields that have been funded, the budget spent, the number of projects funded since 2003 both nationally and EU-wide, and the gaps and overlapping topics were analyzed. In consideration of the results, detailed suggestions for Turkey have been proposed. The research results are expected to be used as a roadmap for coordinating the stakeholders of the bioeconomy and integrating Turkish Research Areas into European Research Areas. PMID:23815559

  17. Knowledge-based and model-based hybrid methodology for comprehensive waste minimization in electroplating plants

    NASA Astrophysics Data System (ADS)

    Luo, Keqin

    1999-11-01

The electroplating industry, comprising over 10,000 plating plants nationwide, is one of the major waste generators in industry. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants, which costs the industry tremendously in waste treatment and disposal and hinders its further development. It is therefore urgent for the industry to identify the technically most effective and economically most attractive methodologies and technologies to minimize waste while maintaining production competitiveness. This dissertation aims at developing a novel waste minimization (WM) methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, together with an intelligent decision support tool. The WM methodology consists of two parts: a heuristic knowledge-based qualitative WM decision analysis and support methodology, and a fundamental knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules. These become the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows us to perform a thorough process analysis of bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation, which are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that it is the first time for the electroplating industry to (i) use systematically available WM strategies, (ii) know quantitatively and

  18. Knowledge-based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.

    1992-01-01

    Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scaleable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on the concept called the Data Hub. With the Data Hub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) with a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.

  19. Knowledge-based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Jacobson, Allan S.; Doyle, Richard J.; Collins, Donald J.

    1993-01-01

Within this decade, the growth in complexity of exploratory data analysis and the sheer volume of space data require new and innovative approaches to support science investigators in achieving their research objectives. To date, there have been numerous efforts addressing the individual issues involved in inter-disciplinary, multi-instrument investigations. However, while successful in small scale, these efforts have not proven to be open and scalable. This proposal addresses four areas of significant need: scientific visualization and analysis; science data management; interactions in a distributed, heterogeneous environment; and knowledge-based assistance for these functions. The fundamental innovation embedded within this proposal is the integration of three automation technologies, namely, knowledge-based expert systems, science visualization and science data management. This integration is based on a concept called the DataHub. With the DataHub concept, NASA will be able to apply a more complete solution to all nodes of a distributed system. Both computation nodes and interactive nodes will be able to effectively and efficiently use the data services (access, retrieval, update, etc.) with a distributed, interdisciplinary information system in a uniform and standard way. This will allow the science investigators to concentrate on their scientific endeavors, rather than to involve themselves in the intricate technical details of the systems and tools required to accomplish their work. Thus, science investigators need not be programmers. The emphasis will be on the definition and prototyping of system elements with sufficient detail to enable data analysis and interpretation leading to publishable scientific results. In addition, the proposed work includes all the required end-to-end components and interfaces to demonstrate the completed concept.

  20. Developing a Culture and Infrastructure To Support Research-Related Activity in Further Education Institutions: A Knowledge-based Organisation Perspective.

    ERIC Educational Resources Information Center

    Brotherton, Bob

    1998-01-01

    Explores the role of research-related activity in further education institutions as knowledge-based organizations. Discusses issues of organizational strategy, design, and leadership that must be addressed in order to develop a supportive culture for research. (SK)

  1. A framework for the knowledge-based interpretation of laboratory data in intensive care units using deductive database technology.

    PubMed Central

    Schwaiger, J.; Haller, M.; Finsterer, U.

    1992-01-01

In co-operation with the Institute of Anaesthesiology of the Ludwig-Maximilians-University in Munich, a computer-based system for the analysis and interpretation of renal function, fluid and electrolyte metabolism of critical care patients has been developed. This paper focuses on the requirements and implementation aspects of the knowledge-based interpretation for this particular system. The objective of the proposed approach is to transform the enormous, and constantly increasing, amount of raw data available in modern intensive care units (ICUs) into relevant, patient-oriented information that is easy for the medical staff to understand. The essential features of a knowledge-based system at an ICU are outlined. A system is described where these features are realized using deductive database technology as a specification paradigm and extended relational databases as an implementation platform. The integration into the hospital information system is highlighted. PMID:1482854

  2. Automated knowledge-based fuzzy models generation for weaning of patients receiving ventricular assist device (VAD) therapy.

    PubMed

    Tsipouras, Markos G; Karvounis, Evaggelos C; Tzallas, Alexandros T; Goletsis, Yorgos; Fotiadis, Dimitrios I; Adamopoulos, Stamatis; Trivella, Maria G

    2012-01-01

The SensorART project focuses on the management of heart failure (HF) patients who are treated with implantable ventricular assist devices (VADs). This work presents the way crisp models are transformed into fuzzy models in the weaning module, which is one of the core modules of the specialist's decision support system (DSS) in SensorART. The weaning module is a DSS that supports the medical expert in the decision to wean the patient and remove the VAD. The weaning module has been developed following a "mixture of experts" philosophy, with the experts being fuzzy knowledge-based models automatically generated from an initial crisp knowledge-based set of rules and criteria for weaning. PMID:23366361
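
    The crisp-to-fuzzy transformation and the "mixture of experts" idea can be sketched as follows; the clinical variables, thresholds and sigmoid parameters are illustrative assumptions and not the SensorART weaning criteria.

    ```python
    import math

    # Sketch: a hard weaning threshold becomes a sigmoid membership, and several
    # fuzzy "experts" are averaged. Variables, thresholds and weights are
    # illustrative assumptions.

    def crisp_rule(lvef_percent):
        """Crisp rule (assumed): consider weaning assessment if LVEF >= 45%."""
        return lvef_percent >= 45.0

    def fuzzy_rule(value, center=45.0, slope=0.4):
        """Fuzzified version: degree of support rises smoothly around the threshold."""
        return 1.0 / (1.0 + math.exp(-slope * (value - center)))

    def mixture_of_experts(patient):
        experts = [
            fuzzy_rule(patient["lvef"]),                                          # cardiac function expert
            fuzzy_rule(patient["exercise_capacity"], center=60.0),                # functional capacity expert
            1.0 - fuzzy_rule(patient["nt_probnp"], center=1000.0, slope=0.005),   # biomarker expert
        ]
        return sum(experts) / len(experts)

    patient = {"lvef": 48.0, "exercise_capacity": 65.0, "nt_probnp": 800.0}
    print("crisp:", crisp_rule(patient["lvef"]),
          " fuzzy support:", round(mixture_of_experts(patient), 2))
    ```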

  3. On the Importance of the Distance Measures Used to Train and Test Knowledge-Based Potentials for Proteins

    PubMed Central

    Carlsen, Martin; Koehl, Patrice; Røgen, Peter

    2014-01-01

Knowledge-based potentials are energy functions derived from the analysis of databases of protein structures and sequences. They can be divided into two classes. Potentials from the first class are based on a direct conversion of the distributions of some geometric properties observed in native protein structures into energy values, while potentials from the second class are trained to mimic quantitatively the geometric differences between incorrectly folded models and native structures. In this paper, we focus on the relationship between energy and geometry when training the second class of knowledge-based potentials. We assume that the difference in energy between a decoy structure and the corresponding native structure is linearly related to the distance between the two structures. We trained two distance-based knowledge-based potentials accordingly, one based on all inter-residue distances (PPD), while the other had the set of all distances filtered to reflect consistency in an ensemble of decoys (PPE). We tested four types of metric to characterize the distance between the decoy and the native structure, two based on extrinsic geometry (RMSD and GDT-TS*), and two based on intrinsic geometry (Q* and MT). The corresponding eight potentials were tested on a large collection of decoy sets. We found that it is usually better to train a potential using an intrinsic distance measure. We also found that PPE outperforms PPD, emphasizing the benefits of capturing consistent information in an ensemble. The relevance of these results for the design of knowledge-based potentials is discussed. PMID:25411785
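
    The training idea, fitting a potential so that the predicted energy gap between a decoy and its native structure tracks a chosen structural distance, can be sketched with synthetic data. The feature construction and the generated distances below are placeholders for real decoy-set statistics.

    ```python
    import numpy as np

    # Sketch: choose potential weights w so that the predicted energy gap between
    # a decoy and its native structure is linearly related to the decoy's
    # structural distance from the native (e.g. RMSD). The "features" stand in
    # for binned inter-residue distance counts; everything here is synthetic.

    rng = np.random.default_rng(0)
    n_decoys, n_features = 200, 50

    native_features = rng.poisson(5.0, size=n_features).astype(float)
    decoy_features = native_features + rng.normal(0.0, 1.5, size=(n_decoys, n_features))
    delta_f = decoy_features - native_features

    # Synthetic distances generated from a hidden weight vector so the fit has signal.
    hidden_w = rng.normal(0.0, 1.0, size=n_features)
    distance_to_native = delta_f @ hidden_w + rng.normal(0.0, 1.0, size=n_decoys)

    # Least-squares fit of w such that  E(decoy) - E(native) = w . delta_f ~ distance.
    w, *_ = np.linalg.lstsq(delta_f, distance_to_native, rcond=None)

    predicted_gap = delta_f @ w
    corr = np.corrcoef(predicted_gap, distance_to_native)[0, 1]
    print(f"correlation between predicted energy gap and structural distance: {corr:.2f}")
    ```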

  4. Knowledge-based personalized search engine for the Web-based Human Musculoskeletal System Resources (HMSR) in biomechanics.

    PubMed

    Dao, Tien Tuan; Hoang, Tuan Nha; Ta, Xuan Hien; Tho, Marie Christine Ho Ba

    2013-02-01

Human musculoskeletal system resources of the human body are valuable for learning and medical purposes. Internet-based information from conventional search engines such as Google or Yahoo cannot respond to the need for useful, accurate, reliable and good-quality human musculoskeletal resources related to medical processes, pathological knowledge and practical expertise. In the present work, an advanced knowledge-based personalized search engine was developed. Our search engine is based on a client-server, multi-layer, multi-agent architecture and the principle of semantic web services to acquire dynamically accurate and reliable HMSR information through a semantic processing and visualization approach. A security-enhanced mechanism was applied to protect the medical information. A multi-agent crawler was implemented to develop a content-based database of HMSR information. A new semantic-based PageRank score and its related mathematical formulas were also defined and implemented. As a result, semantic web service descriptions are presented in OWL, WSDL and OWL-S formats. Operational scenarios with related web-based interfaces for personal computers and mobile devices are presented and analyzed. A functional comparison between our knowledge-based search engine, a conventional search engine and a semantic search engine showed the originality and robustness of our knowledge-based personalized search engine. In fact, our knowledge-based personalized search engine allows different users, such as orthopedic patients and experts, healthcare system managers or medical students, to access remotely useful, accurate, reliable and good-quality HMSR information for their learning and medical purposes. PMID:23149160
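
    Since the paper's formulas are not reproduced here, the sketch below shows one plausible reading of a semantic-based PageRank: ordinary PageRank power iteration in which each link's weight is scaled by the target page's semantic relevance to the query concept. The graph and relevance scores are toy assumptions.

    ```python
    import numpy as np

    # PageRank with semantically weighted links: each link's weight is scaled by
    # the target page's semantic relevance to the query concept. Toy data only.

    links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}      # page -> outgoing links
    relevance = np.array([0.9, 0.4, 0.8, 0.1])       # assumed relevance to "knee ligament"
    n, damping = 4, 0.85

    # Column-stochastic transition matrix with semantically weighted links.
    M = np.zeros((n, n))
    for src, targets in links.items():
        weights = np.array([relevance[t] for t in targets])
        M[targets, src] = weights / weights.sum()

    rank = np.full(n, 1.0 / n)
    for _ in range(50):                               # power iteration
        rank = (1 - damping) / n + damping * (M @ rank)
    print(np.round(rank / rank.sum(), 3))
    ```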

  5. Evaluation of a Knowledge-Based Planning Solution for Head and Neck Cancer

    SciTech Connect

    Tol, Jim P. Delaney, Alexander R.; Dahele, Max; Slotman, Ben J.; Verbakel, Wilko F.A.R.

    2015-03-01

Purpose: Automated and knowledge-based planning techniques aim to reduce variations in plan quality. RapidPlan uses a library consisting of different patient plans to make a model that can predict achievable dose-volume histograms (DVHs) for new patients and uses those models for setting optimization objectives. We benchmarked RapidPlan versus clinical plans for 2 patient groups, using 3 different libraries. Methods and Materials: Volumetric modulated arc therapy plans of 60 recent head and neck cancer patients that included sparing of the salivary glands, swallowing muscles, and oral cavity were evenly divided between 2 models, Model_30A and Model_30B, and were combined in a third model, Model_60. Knowledge-based plans were created for 2 evaluation groups: evaluation group 1 (EG1), consisting of 15 recent patients, and evaluation group 2 (EG2), consisting of 15 older patients in whom only the salivary glands were spared. RapidPlan results were compared with clinical plans (CP) for boost and/or elective planning target volume homogeneity index, using HI_B/HI_E = 100 × (D2% − D98%)/D50%, and mean dose to composite salivary glands, swallowing muscles, and oral cavity (D_sal, D_swal, and D_oc, respectively). Results: For EG1, RapidPlan improved HI_B and HI_E values compared with CP by 1.0% to 1.3% and 1.0% to 0.6%, respectively. Comparable D_sal and D_swal values were seen in Model_30A, Model_30B, and Model_60, decreasing by an average of 0.1, 1.0, and 0.8 Gy and 4.8, 3.7, and 4.4 Gy, respectively. However, differences were noted between individual organs at risk (OARs), with Model_30B increasing D_oc by 0.1, 3.2, and 2.8 Gy compared with CP, Model_30A, and Model_60. Plan quality was less consistent when the patient was flagged as an outlier. For EG2, RapidPlan decreased D_sal by 4.1 to 4.9 Gy on average, whereas HI_B and HI_E decreased by 1.1% to
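
    A worked example of the homogeneity index defined above, HI = 100 × (D2% − D98%)/D50%, computed from a toy differential DVH; the dose distribution below is synthetic and only illustrates the arithmetic.

    ```python
    import numpy as np

    # Compute HI = 100 * (D2% - D98%) / D50% from a toy differential DVH
    # of the planning target volume (synthetic dose distribution).

    doses = np.linspace(60.0, 75.0, 151)                  # dose bins (Gy)
    volume = np.exp(-0.5 * ((doses - 70.0) / 1.2) ** 2)   # toy PTV dose distribution
    volume /= volume.sum()

    # Dx% = highest dose received by at least x% of the volume.
    cum_from_high = np.cumsum(volume[::-1])[::-1]         # volume fraction receiving >= dose

    def D(x_percent):
        return doses[cum_from_high >= x_percent / 100.0].max()

    HI = 100.0 * (D(2) - D(98)) / D(50)
    print(f"D2%={D(2):.1f} Gy  D98%={D(98):.1f} Gy  D50%={D(50):.1f} Gy  HI={HI:.1f}%")
    ```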

  6. An Intelligent Knowledge-Based and Customizable Home Care System Framework with Ubiquitous Patient Monitoring and Alerting Techniques

    PubMed Central

    Chen, Yen-Lin; Chiang, Hsin-Han; Yu, Chao-Wei; Chiang, Chuan-Yen; Liu, Chuan-Ming; Wang, Jenq-Haur

    2012-01-01

This study develops and integrates an efficient knowledge-based system and a component-based framework to design an intelligent and flexible home health care system. The proposed knowledge-based system integrates an efficient rule-based reasoning model and flexible knowledge rules for determining efficiently and rapidly the necessary physiological and medication treatment procedures based on software modules, video camera sensors, communication devices, and physiological sensor information. This knowledge-based system offers high flexibility for improving and extending the system further to meet new patient and caregiver monitoring demands by updating the knowledge rules in the inference mechanism. All of the proposed functional components in this study are reusable, configurable, and extensible for system developers. The experimental results demonstrate that the proposed intelligent homecare system can meet the extensibility, customizability, and configurability demands of ubiquitous healthcare systems and accommodate the different needs of patients and caregivers under various rehabilitation and nursing conditions. PMID:23112650

  7. Design of Composite Structures Using Knowledge-Based and Case Based Reasoning

    NASA Technical Reports Server (NTRS)

    Lambright, Jonathan Paul

    1996-01-01

A method of using knowledge-based and case-based reasoning to assist designers during conceptual design tasks for composite structures was proposed. The cooperative use of heuristics, procedural knowledge, and previous similar design cases suggests a potential reduction in design cycle time and ultimately product lead time. The hypothesis of this work is that the design process for composite structures can be improved by using Case-Based Reasoning (CBR) and Knowledge-Based (KB) reasoning in the early design stages. The technique of using knowledge-based and case-based reasoning facilitates the gathering of disparate information into one location that is easily and readily available. The method suggests that the inclusion of downstream life-cycle issues in the conceptual design phase reduces the potential for defective and sub-optimal composite structures. Three industry experts were interviewed extensively. The experts provided design rules, previous design cases, and test problems. A knowledge-based reasoning system was developed using the CLIPS (C Language Integrated Production System) environment, and a case-based reasoning system was developed using the Design Memory Utility For Sharing Experiences (MUSE) environment. A Design Characteristic State (DCS) was used to document the design specifications, constraints, and problem areas using attribute-value pair relationships. The DCS provided consistent design information between the knowledge base and case base. Results indicated that the use of knowledge-based and case-based reasoning provided a robust design environment for composite structures. The knowledge base provided design guidance from well-defined rules and procedural knowledge. The case base provided suggestions on design and manufacturing techniques based on previous similar designs, as well as warnings of potential problems and pitfalls. The case base complemented the knowledge base and extended the problem solving capability beyond the existence of
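
    The combination described above, advisory knowledge-base rules plus case retrieval over a Design Characteristic State expressed as attribute-value pairs, can be sketched as follows. The attributes, rules, stored cases and similarity weights are illustrative assumptions, not the expert-provided content.

    ```python
    # Sketch: a Design Characteristic State (DCS) as attribute-value pairs, with a
    # knowledge-based check and a simple case-based retrieval step. All content is
    # an illustrative assumption.

    dcs_query = {"structure": "wing_skin", "material": "carbon_epoxy",
                 "max_load_kN": 120, "cure_method": "autoclave", "ply_count": 24}

    def kb_rules(dcs):
        """Knowledge-based advisory rules over the DCS."""
        warnings = []
        if dcs["ply_count"] % 2 != 0:
            warnings.append("use an even, symmetric ply stacking sequence")
        if dcs["material"] == "carbon_epoxy" and dcs["cure_method"] != "autoclave":
            warnings.append("out-of-autoclave cure may reduce laminate quality")
        return warnings

    case_base = [
        {"structure": "wing_skin", "material": "carbon_epoxy", "max_load_kN": 110,
         "cure_method": "autoclave", "ply_count": 22, "lesson": "add doublers at attachment points"},
        {"structure": "fuselage_panel", "material": "glass_epoxy", "max_load_kN": 60,
         "cure_method": "oven", "ply_count": 12, "lesson": "watch for core crush during cure"},
    ]

    def similarity(a, b):
        """Fraction of matching attribute-value pairs (numeric values within 15%)."""
        score = 0
        for key in ("structure", "material", "cure_method"):
            score += a[key] == b[key]
        for key in ("max_load_kN", "ply_count"):
            score += abs(a[key] - b[key]) <= 0.15 * a[key]
        return score / 5.0

    best = max(case_base, key=lambda c: similarity(dcs_query, c))
    print("warnings:", kb_rules(dcs_query))
    print("closest case lesson:", best["lesson"], f"(sim={similarity(dcs_query, best):.2f})")
    ```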

  8. A Knowledge-Based Approach to Improving and Homogenizing Intensity Modulated Radiation Therapy Planning Quality Among Treatment Centers: An Example Application to Prostate Cancer Planning

    SciTech Connect

    Good, David; Lo, Joseph; Lee, W. Robert; Wu, Q. Jackie; Yin, Fang-Fang; Das, Shiva K.

    2013-09-01

    Purpose: Intensity modulated radiation therapy (IMRT) treatment planning can have wide variation among different treatment centers. We propose a system to leverage the IMRT planning experience of larger institutions to automatically create high-quality plans for outside clinics. We explore feasibility by generating plans for patient datasets from an outside institution by adapting plans from our institution. Methods and Materials: A knowledge database was created from 132 IMRT treatment plans for prostate cancer at our institution. The outside institution, a community hospital, provided the datasets for 55 prostate cancer cases, including their original treatment plans. For each “query” case from the outside institution, a similar “match” case was identified in the knowledge database, and the match case’s plan parameters were then adapted and optimized to the query case by use of a semiautomated approach that required no expert planning knowledge. The plans generated with this knowledge-based approach were compared with the original treatment plans at several dose cutpoints. Results: Compared with the original plan, the knowledge-based plan had a significantly more homogeneous dose to the planning target volume and a significantly lower maximum dose. The volumes of the rectum, bladder, and femoral heads above all cutpoints were nominally lower for the knowledge-based plan; the reductions were significantly lower for the rectum. In 40% of cases, the knowledge-based plan had overall superior (lower) dose–volume histograms for rectum and bladder; in 54% of cases, the comparison was equivocal; in 6% of cases, the knowledge-based plan was inferior for both bladder and rectum. Conclusions: Knowledge-based planning was superior or equivalent to the original plan in 95% of cases. The knowledge-based approach shows promise for homogenizing plan quality by transferring planning expertise from more experienced to less experienced institutions.

  9. Knowledge-based battery design of short-term tests based on dose information.

    PubMed

    Buzzi, R; Würgler, F E

    1990-10-01

A construction of batteries of short-term tests (STTs) is described that is based on a classification of 73 chemicals with regard to their carcinogenicity. The 73 chemicals were studied within the U.S. National Toxicology Program (Ashby and Tennant, 1988). The batteries are validated using the classification of 35 additional chemicals. They are defined by logically structured combinations of rules. The individual rules are defined by the z-scores of the logarithmic values of the limiting doses obtained from the 4 in vitro STTs used in the study by Ashby and Tennant. The limiting dose is defined as the lowest effective dose or the highest ineffective dose (Waters et al., 1987). The batteries are constructed by minimizing the number of disagreements with the classification by Ashby and Tennant. Compared with the results obtained from single STTs, 2 batteries of 3 STTs have higher concordances with the carcinogenicity data, namely 70% for the NTP data and 74-77% for the independent test data. In addition, a theoretical result shows that the proposed battery design, for a large enough learning set of chemicals, leads to results which are replicated with high probability on a large enough validation set. Based on the first results obtained with a limited number of chemicals, it is concluded that the knowledge-based battery design is worth further development. PMID:2215543
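
    A small sketch of the battery-construction idea: each short-term test contributes a rule on the z-score of the log limiting dose, and a battery combines the single-test calls through a simple logical structure (a majority vote here). The doses, cutoff and combination rule are illustrative assumptions, not the published battery.

    ```python
    import math
    import statistics

    # Each STT rule: "z-score of log10(limiting dose) below a cutoff => positive".
    # A battery combines single-test calls by majority vote. All numbers made up.

    # log10 limiting doses of a training set of chemicals, one list per in vitro STT.
    training = {
        "SAL": [1.2, 2.5, 0.3, 3.1, 1.8, 2.2, 0.9, 2.7],
        "ABS": [0.8, 2.0, 0.5, 2.9, 1.1, 2.4, 0.7, 2.6],
        "CHO": [1.5, 2.8, 0.9, 3.0, 1.6, 2.1, 1.0, 2.9],
    }
    stats = {t: (statistics.mean(v), statistics.stdev(v)) for t, v in training.items()}

    def z_score(test, log_dose):
        mean, sd = stats[test]
        return (log_dose - mean) / sd

    def battery_call(chemical_log_doses, cutoff=-0.5):
        """Majority vote of single-test rules: z-score below the cutoff counts as positive."""
        votes = [z_score(t, d) < cutoff for t, d in chemical_log_doses.items()]
        return sum(votes) >= math.ceil(len(votes) / 2)

    print(battery_call({"SAL": 0.6, "ABS": 0.5, "CHO": 1.0}))   # low limiting doses -> positive
    print(battery_call({"SAL": 2.9, "ABS": 2.7, "CHO": 3.0}))   # high limiting doses -> negative
    ```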

  10. Initial Validation of a Knowledge-Based Measure of Social Information Processing and Anger Management

    PubMed Central

    Cassano, Michael; MacEvoy, Julie Paquette; Costigan, Tracy

    2010-01-01

    Over the past fifteen years many schools have utilized aggression prevention programs. Despite these apparent advances, many programs are not examined systematically to determine the areas in which they are most effective. One reason for this is that many programs, especially those in urban under-resourced areas, do not utilize outcome measures that are sensitive to the needs of ethnic minority students. The current study illustrates how a new knowledge-based measure of social information processing and anger management techniques was designed through a partnership-based process to ensure that it would be sensitive to the needs of urban, predominately African American youngsters, while also having broad potential applicability for use as an outcome assessment tool for aggression prevention programs focusing upon social information processing. The new measure was found to have strong psychometric properties within a sample of urban predominately African American youth, as item analyses suggested that almost all items discriminate well between more and less knowledgeable individuals, that the test-retest reliability of the measure is strong, and that the measure appears to be sensitive to treatment changes over time. In addition, the overall score of this new measure is moderately associated with attributions of hostility on two measures (negative correlations) and demonstrates a low to moderate negative association with peer and teacher report measures of overt and relational aggression. More research is needed to determine the measure's utility outside of the urban school context. PMID:20449645

  11. Development of a Knowledgebase to Integrate, Analyze, Distribute, and Visualize Microbial Community Systems Biology Data

    SciTech Connect

    Banfield, Jillian; Thomas, Brian

    2015-01-15

We have developed a flexible knowledgebase system, ggKbase (http://gg.berkeley.edu), to enable effective data analysis and knowledge generation from samples from which metagenomic and other ‘omics’ data are obtained. Within ggKbase, data can be interpreted, integrated and linked to other databases and services. Sequence information from complex metagenomic samples can be quickly and effectively resolved into genomes, and biologically meaningful investigations of an organism’s metabolic potential can then be conducted. Critical features make analyses efficient, allowing hundreds of genomes to be analyzed at a time. The system is being used to support research in multiple DOE-relevant systems, including the LBNL SFA subsurface science biogeochemical cycling research at Rifle, Colorado. ggKbase is supporting the research of a rapidly growing group of users. It has enabled studies of carbon cycling in acid mine drainage ecosystems, of biologically mediated transformations in deep subsurface biomes sampled from mines and the north slope of Alaska, of the human microbiome, and of laboratory bioreactor-based remediation.

  12. Knowledge-based optical system design: some optical systems generated by the KBOSD

    NASA Astrophysics Data System (ADS)

    Nouri, Taoufik; Erard, Pierre-Jean

    1993-04-01

This work is a new approach to the design of starting optical systems and represents a new contribution of artificial intelligence techniques to the optical design field. A knowledge-based optical system design (KBOSD) tool, based on artificial intelligence algorithms, first-order logic, knowledge representation, rules, and heuristics on lens design, has been realized. The KBOSD is equipped with optical knowledge in the domain of centered dioptric optical systems used at low apertures and small field angles. It generates centered dioptric, on-axis, low-aperture optical systems, which are used as starting systems for subsequent optimization by existing lens design programs. The KBOSD produces monochromatic or polychromatic optical systems such as singlet, doublet and triplet lenses, reversed singlet, doublet and triplet lenses, and telescopes. In the design of optical systems, the KBOSD takes into account many user constraints such as cost, resistance of the optical material (glass) to chemical, thermal, and mechanical effects, and optical quality requirements such as minimal aberrations and chromatic aberration correction. The KBOSD is developed in the programming language Prolog, has knowledge of optical design principles and optical properties, and uses neither a lens library nor a lens database; it is based entirely on optical design knowledge.

  13. A knowledge-based system paradigm for automatic interpretation of CT scans.

    PubMed

    Natarajan, K; Cawley, M G; Newell, J A

    1991-01-01

The interpretation of X-ray CT scans is a task which relies on specialized medical expertise, comprising anatomical, modality-dependent, non-visual and radiological knowledge. Most medical imaging techniques generate a single scan or a sequence of two-dimensional scans. The radiologist's experience is gained by interpreting two-dimensional scans. The more complex three-dimensional anatomical knowledge becomes significant only when non-standard slice orientations are used. Hence, implicit in the radiologist's knowledge is the appearance of anatomical structures in the standard two-dimensional planes (transverse, sagittal and coronal): their position with respect to both a coordinate reference system and other structures; intensity ranges for tissue types; contrast between structures; and size within the slices. Further to this, neurological landmarking is used to establish points of reference, i.e. more easily identifiable structures are found first and subsequent hypotheses are formed. With this in mind, we have developed a knowledge-based system paradigm that partitions an image by applying the domain-dependent knowledge necessary (1) to set constraints on region-based segmentation and (2) to make explicit the expected appearance of the anatomy under the imaging modality for use in the region grouping phase. This paradigm affords both expectation- and event-driven segmentation by representing grouping knowledge as production rules. PMID:1921561

  14. Initial validation of a knowledge-based measure of social information processing and anger management.

    PubMed

    Leff, Stephen S; Cassano, Michael; MacEvoy, Julie Paquette; Costigan, Tracy

    2010-10-01

    Over the past fifteen years many schools have utilized aggression prevention programs. Despite these apparent advances, many programs are not examined systematically to determine the areas in which they are most effective. One reason for this is that many programs, especially those in urban under-resourced areas, do not utilize outcome measures that are sensitive to the needs of ethnic minority students. The current study illustrates how a new knowledge-based measure of social information processing and anger management techniques was designed through a partnership-based process to ensure that it would be sensitive to the needs of urban, predominately African American youngsters, while also having broad potential applicability for use as an outcome assessment tool for aggression prevention programs focusing upon social information processing. The new measure was found to have strong psychometric properties within a sample of urban predominately African American youth, as item analyses suggested that almost all items discriminate well between more and less knowledgeable individuals, that the test-retest reliability of the measure is strong, and that the measure appears to be sensitive to treatment changes over time. In addition, the overall score of this new measure is moderately associated with attributions of hostility on two measures (negative correlations) and demonstrates a low to moderate negative association with peer and teacher report measures of overt and relational aggression. More research is needed to determine the measure's utility outside of the urban school context. PMID:20449645

  15. A knowledge-based approach to the CADx of mammographic masses

    NASA Astrophysics Data System (ADS)

    Elter, Matthias; Haßlmeyer, Erik

    2008-03-01

Today, mammography is recognized as the most effective technique for breast cancer screening. Unfortunately, the low positive predictive value of breast biopsy examinations resulting from mammogram interpretation leads to many unnecessary biopsies performed on benign lesions. In recent years, several computer-assisted diagnosis (CADx) systems have been proposed with the goal of assisting the radiologist in the discrimination of benign and malignant breast lesions and thus reducing the high number of unnecessary biopsies. In this paper we present a novel, knowledge-based approach to the computer-aided discrimination of mammographic mass lesions that uses computer-extracted attributes of mammographic masses and clinical data as input attributes to a case-based reasoning system. Our approach emphasizes a transparent reasoning process, which is important for the acceptance of a CADx system in clinical practice. We evaluate the performance of the proposed system on a large, publicly available mammography database using receiver operating characteristic curve analysis. Our results indicate that the proposed CADx system has the potential to significantly reduce the number of unnecessary breast biopsies in clinical practice.
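
    A compact sketch of the case-based reasoning step such a CADx system relies on: retrieve the most similar previous masses by weighted attribute matching and report their outcomes, keeping the reasoning transparent. The attributes, weights and toy case base are illustrative assumptions, not the system's actual features or database.

    ```python
    # Case-based assessment of a mass: weighted attribute similarity, k nearest
    # cases, and a similarity-weighted estimate of malignancy. Toy data only.

    case_base = [
        {"shape": "round",     "margin": "circumscribed", "density": "low",  "age": 45, "outcome": "benign"},
        {"shape": "irregular", "margin": "spiculated",    "density": "high", "age": 62, "outcome": "malignant"},
        {"shape": "oval",      "margin": "obscured",      "density": "iso",  "age": 51, "outcome": "benign"},
        {"shape": "irregular", "margin": "ill-defined",   "density": "high", "age": 58, "outcome": "malignant"},
    ]

    WEIGHTS = {"shape": 0.3, "margin": 0.4, "density": 0.2, "age": 0.1}

    def similarity(query, case):
        s = sum(WEIGHTS[a] for a in ("shape", "margin", "density") if query[a] == case[a])
        s += WEIGHTS["age"] * max(0.0, 1.0 - abs(query["age"] - case["age"]) / 30.0)
        return s

    def assess(query, k=3):
        ranked = sorted(case_base, key=lambda c: similarity(query, c), reverse=True)[:k]
        malignant_support = sum(similarity(query, c) for c in ranked if c["outcome"] == "malignant")
        total = sum(similarity(query, c) for c in ranked)
        return ranked, malignant_support / total

    query = {"shape": "irregular", "margin": "spiculated", "density": "high", "age": 60}
    neighbours, suspicion = assess(query)
    print(f"estimated probability of malignancy: {suspicion:.2f}")
    for c in neighbours:
        print(f"  similar case ({c['outcome']}), similarity {similarity(query, c):.2f}")
    ```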

  16. Structural semantic interconnections: a knowledge-based approach to word sense disambiguation.

    PubMed

    Navigli, Roberto; Velardi, Paola

    2005-07-01

    Word Sense Disambiguation (WSD) is traditionally considered an AI-hard problem. A breakthrough in this field would have a significant impact on many relevant Web-based applications, such as Web information retrieval, improved access to Web services, information extraction, etc. Early approaches to WSD, based on knowledge representation techniques, have been replaced in the past few years by more robust machine learning and statistical techniques. The results of recent comparative evaluations of WSD systems, however, show that these methods have inherent limitations. On the other hand, the increasing availability of large-scale, rich lexical knowledge resources seems to provide new challenges to knowledge-based approaches. In this paper, we present a method, called structural semantic interconnections (SSI), which creates structural specifications of the possible senses for each word in a context and selects the best hypothesis according to a grammar G, describing relations between sense specifications. Sense specifications are created from several available lexical resources that we integrated in part manually, in part with the help of automatic procedures. The SSI algorithm has been applied to different semantic disambiguation problems, like automatic ontology population, disambiguation of sentences in generic texts, disambiguation of words in glossary definitions. Evaluation experiments have been performed on specific knowledge domains (e.g., tourism, computer networks, enterprise interoperability), as well as on standard disambiguation test sets. PMID:16013755

  17. Human Disease Insight: An integrated knowledge-based platform for disease-gene-drug information.

    PubMed

    Tasleem, Munazzah; Ishrat, Romana; Islam, Asimul; Ahmad, Faizan; Hassan, Md Imtaiyaz

    2016-01-01

    The scope of the Human Disease Insight (HDI) database is not limited to researchers or physicians as it also provides basic information to non-professionals and creates disease awareness, thereby reducing the chances of patient suffering due to ignorance. HDI is a knowledge-based resource providing information on human diseases to both scientists and the general public. Here, our mission is to provide a comprehensive human disease database containing most of the available useful information, with extensive cross-referencing. HDI is a knowledge management system that acts as a central hub to access information about human diseases and associated drugs and genes. In addition, HDI contains well-classified bioinformatics tools with helpful descriptions. These integrated bioinformatics tools enable researchers to annotate disease-specific genes and perform protein analysis, search for biomarkers and identify potential vaccine candidates. Eventually, these tools will facilitate the analysis of disease-associated data. The HDI provides two types of search capabilities and includes provisions for downloading, uploading and searching disease/gene/drug-related information. The logistical design of the HDI allows for regular updating. The database is designed to work best with Mozilla Firefox and Google Chrome and is freely accessible at http://humandiseaseinsight.com. PMID:26631432

  18. Knowledge-based verification of clinical guidelines by detection of anomalies.

    PubMed

    Duftschmid, G; Miksch, S

    2001-04-01

    As shown in numerous studies, a significant part of published clinical guidelines is tainted with different types of semantic errors that interfere with their practical application. The adaptation of generic guidelines, necessitated by circumstances such as resource limitations within the applying organization or unexpected events arising in the course of patient care, further promotes the introduction of defects. Still, most current approaches for the automation of clinical guidelines lack mechanisms that check the overall correctness of their output. In the domain of software engineering in general and in the domain of knowledge-based systems (KBS) in particular, a common strategy to examine a system for potential defects consists in its verification. The focus of this work is to present an approach which helps to ensure the semantic correctness of clinical guidelines in a three-step process. We use a particular guideline specification language called Asbru to demonstrate our verification mechanism. A scenario-based evaluation of our method is provided based on a guideline for the artificial ventilation of newborn infants. The described approach is kept sufficiently general in order to allow its application to several other guideline representation formats. PMID:11259882

  19. A Knowledge-Based and Model-Driven Requirements Engineering Approach to Conceptual Satellite Design

    NASA Astrophysics Data System (ADS)

    Dos Santos, Walter A.; Leonor, Bruno B. F.; Stephany, Stephan

    Satellite systems are becoming even more complex, making technical issues a significant cost driver. The increasing complexity of these systems makes requirements engineering activities both more important and difficult. Additionally, today's competitive pressures and other market forces drive manufacturing companies to improve the efficiency with which they design and manufacture space products and systems. This imposes a heavy burden on systems-of-systems engineering skills and particularly on requirements engineering which is an important phase in a system's life cycle. When this is poorly performed, various problems may occur, such as failures, cost overruns and delays. One solution is to underpin the preliminary conceptual satellite design with computer-based information reuse and integration to deal with the interdisciplinary nature of this problem domain. This can be attained by taking a model-driven engineering approach (MDE), in which models are the main artifacts during system development. MDE is an emergent approach that tries to address system complexity by the intense use of models. This work outlines the use of SysML (Systems Modeling Language) and a novel knowledge-based software tool, named SatBudgets, to deal with these and other challenges confronted during the conceptual phase of a university satellite system, called ITASAT, currently being developed by INPE and some Brazilian universities.

  20. A knowledge-based approach to the adaptive finite element analysis

    SciTech Connect

    Haghighi, K.; Kang, E.

    1995-12-31

    An automatic and knowledge-based finite element mesh generator (INTELMESH), which makes extensive use of interactive computer graphics techniques, has been developed. INTELMESH is designed for planar domains and axisymmetric 3-D structures of elasticity and heat transfer subjected to mechanical and thermal loading. It intelligently identifies the critical regions/points in the problem domain and utilizes the new concepts of substructuring and wave propagation to choose the proper mesh size for them. INTELMESH generates well-shaped triangular elements by applying triangulation and Laplacian smoothing procedures. The adaptive analysis involves the initial finite element analysis and an efficient a posteriori error analysis and estimation. Once a problem is defined, the system automatically builds a finite element model and analyzes the problem through an automatic iterative process until the error reaches a desired level. It has been shown that the proposed approach, which initiates the process with an a priori, near-optimum mesh of the object, converges to the desired accuracy in less time and at lower cost.
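
    The iterate-until-convergence loop described above can be sketched abstractly as follows; the solver, error estimator and refinement rule below are simple placeholders standing in for the real components, not INTELMESH itself:

        # Skeleton of an adaptive analysis loop: solve, estimate error, refine, repeat.
        # solve_fem / estimate_error stand in for the real FE components.

        def adaptive_analysis(initial_mesh_size, target_error, max_iterations=10):
            h = initial_mesh_size
            for iteration in range(max_iterations):
                solution = solve_fem(h)               # placeholder FE solve
                error = estimate_error(solution, h)   # a posteriori estimate
                print(f"iter {iteration}: h={h:.4f} error={error:.6f}")
                if error <= target_error:
                    return solution
                h *= 0.5                              # refine the critical regions
            raise RuntimeError("did not converge within the iteration budget")

        # Toy stand-ins so the sketch runs end to end (error ~ h**2 for a smooth field).
        def solve_fem(h):
            return {"h": h}

        def estimate_error(solution, h):
            return h ** 2

        adaptive_analysis(initial_mesh_size=1.0, target_error=1e-3)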

  1. The fault monitoring and diagnosis knowledge-based system for space power systems: AMPERES, phase 1

    NASA Technical Reports Server (NTRS)

    Lee, S. C.

    1989-01-01

    The objective is to develop a real time fault monitoring and diagnosis knowledge-based system (KBS) for space power systems which can save costly operational manpower and can achieve more reliable space power system operation. The proposed KBS was developed using the Autonomously Managed Power System (AMPS) test facility currently installed at NASA Marshall Space Flight Center (MSFC), but the basic approach taken for this project could be applicable for other space power systems. The proposed KBS is entitled Autonomously Managed Power-System Extendible Real-time Expert System (AMPERES). In Phase 1 the emphasis was put on the design of the overall KBS, the identification of the basic research required, the initial performance of the research, and the development of a prototype KBS. In Phase 2, emphasis is put on the completion of the research initiated in Phase 1, and the enhancement of the prototype KBS developed in Phase 1. This enhancement is intended to achieve a working real time KBS incorporated with the NASA space power system test facilities. Three major research areas were identified and progress was made in each area. These areas are real time data acquisition and its supporting data structure; sensor value validation; and development of an inference scheme for effective fault monitoring and diagnosis, together with its supporting knowledge representation scheme.

  2. Improving Loop Modeling of the Antibody Complementarity-Determining Region 3 Using Knowledge-Based Restraints

    PubMed Central

    Finn, Jessica A.; Koehler Leman, Julia; Cisneros, Alberto; Crowe, James E.; Meiler, Jens

    2016-01-01

    Structural restrictions are present even in the most sequence-diverse portions of antibodies, the complementarity-determining region (CDR) loops. Previous studies identified robust rules that define canonical structures for five of the six CDR loops; however, the heavy chain CDR3 (HCDR3) defies standard classification attempts. The HCDR3 loop can be subdivided into two domains referred to as the “torso” and the “head” domains, and two major families of canonical torso structures have been identified: the more prevalent “bulged” and the less frequent “non-bulged” torsos. In the present study, we found that Rosetta loop modeling of 28 benchmark bulged HCDR3 loops is improved with knowledge-based structural restraints developed from available antibody crystal structures in the PDB. These restraints restrict the sampling space Rosetta searches in the torso domain, limiting the φ and ψ angles of these residues to conformations that have been experimentally observed. The application of these restraints in Rosetta results in more native-like structure sampling and improved score-based differentiation of native-like HCDR3 models, significantly improving our ability to model antibody HCDR3 loops. PMID:27182833
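
    The core idea of restricting sampled φ/ψ angles to previously observed windows can be sketched as below; the residue positions and angle windows are invented placeholders, not the restraints derived in the paper:

        # Accept a sampled torso conformation only if every residue's phi/psi pair
        # falls inside a window derived from previously observed structures.
        # The windows below are illustrative placeholders, not published values.

        OBSERVED_WINDOWS = {
            # torso residue position: (phi_min, phi_max, psi_min, psi_max), degrees
            1: (-160.0, -60.0, 100.0, 180.0),
            2: (-140.0, -50.0,  90.0, 175.0),
            3: (-100.0, -40.0, -60.0,  30.0),
        }

        def satisfies_restraints(conformation):
            """conformation: {position: (phi, psi)} in degrees."""
            for pos, (phi, psi) in conformation.items():
                lo_phi, hi_phi, lo_psi, hi_psi = OBSERVED_WINDOWS[pos]
                if not (lo_phi <= phi <= hi_phi and lo_psi <= psi <= hi_psi):
                    return False
            return True

        print(satisfies_restraints({1: (-120.0, 150.0), 2: (-90.0, 120.0), 3: (-70.0, 0.0)}))  # True
        print(satisfies_restraints({1: (60.0, 40.0),    2: (-90.0, 120.0), 3: (-70.0, 0.0)}))  # False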

  3. Knowledge-based Potential for Positioning Membrane-Associated Structures and Assessing Residue Specific Energetic Contributions

    PubMed Central

    Schramm, Chaim A.; Hannigan, Brett T.; Donald, Jason E.; Keasar, Chen; Saven, Jeffrey G.; DeGrado, William F.; Samish, Ilan

    2012-01-01

    The complex hydrophobic and hydrophilic milieus of membrane-associated proteins pose experimental and theoretical challenges to their understanding. Here we produce a non-redundant database to compute knowledge-based asymmetric cross-membrane potentials from the per-residue distributions of Cβ, Cγ and functional group atoms. We predict transmembrane and peripherally associated regions from genomic sequence and position peptides and protein structures relative to the bilayer (available at http://www.degradolab.org/ez). The pseudo-energy topological landscapes underscore positional stability and functional mechanisms demonstrated here for antimicrobial peptides, transmembrane proteins, and viral fusion proteins. Moreover, experimental effects of point mutations on the relative ratio changes of dual-topology proteins are quantitatively reproduced. The functional group potential and the membrane-exposed residues display the largest energetic changes, enabling native-like structures to be distinguished from decoys. Hence, focusing on the uniqueness of membrane-associated proteins and peptides, we quantitatively parameterize their cross-membrane propensity, thus facilitating structural refinement, characterization, prediction and design. PMID:22579257
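
    A depth-dependent, residue-specific pseudo-energy of this kind can be sketched as follows; the sigmoidal form and all parameter values are illustrative assumptions, not the published potential:

        # Toy cross-membrane pseudo-energy: each residue type has a preferred depth
        # profile; summing per-residue terms scores a placement of the protein
        # relative to the bilayer. All parameters are invented for illustration.
        import math

        # (E_water, E_center, midpoint_depth_A, steepness) per residue type
        PARAMS = {
            "LEU": (0.0, -1.5, 12.0, 3.0),   # hydrophobic: favourable in the core
            "LYS": (0.0, +2.0, 12.0, 3.0),   # charged: penalised in the core
        }

        def residue_energy(restype, depth):
            """Sigmoidal interpolation between water and membrane-centre energies."""
            e_wat, e_cen, z0, k = PARAMS[restype]
            frac_in_membrane = 1.0 / (1.0 + math.exp((abs(depth) - z0) / k))
            return e_wat + (e_cen - e_wat) * frac_in_membrane

        def total_energy(residues):
            """residues: iterable of (residue_type, depth_from_bilayer_centre_in_A)."""
            return sum(residue_energy(rt, z) for rt, z in residues)

        helix = [("LEU", 0.0), ("LEU", 3.0), ("LYS", 0.0)]
        print(round(total_energy(helix), 2))   # hydrophobic residues lower the score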

  4. A Knowledge-Based System For The Delineation Of The Coronary Arteries

    NASA Astrophysics Data System (ADS)

    Smets, Carl; Suetens, Paul; Oosterlinck, Andre J.; van de Werf, Frans

    1989-05-01

    In this article we will present work in progress concerning a knowledge-based system for the labeling of the coronary arteries on single projections. The approach is based on a gradual refinement of the interpretation results, starting from the detection of blood vessel center lines, the extraction of bar-like primitives and their connection into blood vessel segments. In this paper we will focus on the final stage, which is the labeling of the delineated blood vessel segments. In contrast with most existing approaches, which are mainly based on a sequential labeling of the vessels starting from the most important segment, our system uses a constraint satisfaction technique, mainly because most anatomical knowledge can easily be formalized as constraints on local attributes, such as position, grey value, thickness and orientation, and as constraints on relations between blood vessel segments, such as "left of" or "in the same direction". Anatomical models are developed for the Left Coronary Artery in standard RAO and LAO views. In general, only 1-2 interpretations are left, which is an encouraging result given that for some projections there is considerable overlap between vessel segments.
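
    A minimal backtracking constraint-satisfaction sketch of such a label assignment is shown below; the segments, labels and the two constraints are invented, and only the overall technique mirrors the paper:

        # Assign anatomical labels to vessel segments so that unary constraints
        # (e.g. expected orientation) and binary constraints (e.g. "left of") hold.
        # Segments, labels and constraints are invented for illustration.

        SEGMENTS = {                      # observed attributes per delineated segment
            "s1": {"x": 0.30, "orientation": "vertical"},
            "s2": {"x": 0.55, "orientation": "oblique"},
        }
        LABELS = ["LAD", "LCX"]           # candidate anatomical labels

        def unary_ok(segment, label):
            """Model knowledge: the LAD is expected to run roughly vertically."""
            if label == "LAD":
                return SEGMENTS[segment]["orientation"] == "vertical"
            return True

        def binary_ok(assignment):
            """Model knowledge: the LAD should lie left of the LCX in this view."""
            if {"LAD", "LCX"} <= set(assignment.values()):
                lad = next(s for s, l in assignment.items() if l == "LAD")
                lcx = next(s for s, l in assignment.items() if l == "LCX")
                return SEGMENTS[lad]["x"] < SEGMENTS[lcx]["x"]
            return True

        def solve(segments, assignment=None):
            assignment = assignment or {}
            if len(assignment) == len(segments):
                yield dict(assignment)
                return
            seg = segments[len(assignment)]
            for label in LABELS:
                if label in assignment.values():
                    continue              # each label used at most once
                assignment[seg] = label
                if unary_ok(seg, label) and binary_ok(assignment):
                    yield from solve(segments, assignment)
                del assignment[seg]

        print(list(solve(list(SEGMENTS))))   # [{'s1': 'LAD', 's2': 'LCX'}]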

  5. Cyclodextrin KnowledgeBase a web-based service managing CD-ligand complexation data.

    PubMed

    Hazai, Eszter; Hazai, Istvan; Demko, Laszlo; Kovacs, Sandor; Malik, David; Akli, Peter; Hari, Peter; Szeman, Julianna; Fenyvesi, Eva; Benes, Edina; Szente, Lajos; Bikadi, Zsolt

    2010-08-01

    Cyclodextrins are cyclic oligosaccharides that are able to form water-soluble inclusion complexes with small molecules. Because of their complexing ability, they are widely applied in food, pharmaceutical and chemical industries. In this paper we describe the development of a free web service, Cyclodextrin KnowledgeBase (http://www.cyclodextrin.net). The database contains four modules: the Publication, Interaction, Chirality and Analysis Modules. In the Publication Module, almost 50,000 publication details are collected that can be retrieved by text search. In the Interaction and Chirality Modules relevant literature data on cyclodextrin complexation and chiral recognition are collected that can be retrieved by both text and structural searches. Moreover, in the Analysis Module, the geometries of small molecule-cyclodextrin complexes can be predicted using molecular docking tools in order to explore the structures and interaction energies of the inclusion complexes. Complex geometry prediction is made possible by the built-in database of 95 cyclodextrin derivatives, where the 3D structures as well as the partial charges are calculated and stored for further utilization. The use of the database is demonstrated by several examples. PMID:20521083

  6. Human Resource Development for Knowledge-based Society and Challenges of Nagoya University

    NASA Astrophysics Data System (ADS)

    Miyata, Takashi

    Innovation in the previous century resulted in the development of useful products ranging from automobiles and aircraft to cellular phones. However, the innovation and development of science and technology have also changed society and brought about negative issues. The issues that emerged in the previous century remain, in even more acute forms, in the 21st century. The 21st century is seeing the rise of the knowledge-based society, and a paradigm shift is now under way. University-educated human resources capable of creating innovation are being called on to contribute to solving these issues. Young people who complete a doctoral program must be able to act as innovators who can promote this paradigm shift. However, the higher education system of Japanese universities must now be changed to resolve the mismatch between doctoral programs and the expectations of industry, government and students. The discussion in the Business-University Forum of Japan on innovation of the education system, together with a few challenges faced by Nagoya University, is introduced in this paper.

  7. VIP: A knowledge-based design aid for the engineering of space systems

    NASA Technical Reports Server (NTRS)

    Lewis, Steven M.; Bellman, Kirstie L.

    1990-01-01

    The Vehicles Implementation Project (VIP), a knowledge-based design aid for the engineering of space systems is described. VIP combines qualitative knowledge in the form of rules, quantitative knowledge in the form of equations, and other mathematical modeling tools. The system allows users rapidly to develop and experiment with models of spacecraft system designs. As information becomes available to the system, appropriate equations are solved symbolically and the results are displayed. Users may browse through the system, observing dependencies and the effects of altering specific parameters. The system can also suggest approaches to the derivation of specific parameter values. In addition to providing a tool for the development of specific designs, VIP aims at increasing the user's understanding of the design process. Users may rapidly examine the sensitivity of a given parameter to others in the system and perform tradeoffs or optimizations of specific parameters. A second major goal of VIP is to integrate the existing corporate knowledge base of models and rules into a central, symbolic form.

  8. Structure of a protein (H2AX): a comparative study with knowledge-based interactions

    NASA Astrophysics Data System (ADS)

    Fritsche, Miriam; Heermann, Dieter; Farmer, Barry; Pandey, Ras

    2013-03-01

    The structural and conformational properties of the histone protein H2AX (with 143 residues) are studied with a coarse-grained model as a function of temperature (T). Three knowledge-based phenomenological interactions (MJ, BT, and BFKV) are used as input to a generalized Lennard-Jones potential for residue-residue interactions. Large-scale Monte Carlo simulations are performed to identify similarities and differences in the equilibrium structures with these potentials. Multi-scale structures of the protein are examined by a detailed analysis of their structure functions. We find that the radius of gyration (Rg) of H2AX depends non-monotonically on temperature with a maximum at a characteristic value Tc, a feature common to each interaction. The characteristic temperature and the range of non-monotonic thermal response and decay pattern are, however, sensitive to the interactions. A comparison of the structural properties emerging from the three potentials will be presented in this talk. This work is supported by the Air Force Research Laboratory.
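
    The way a knowledge-based residue-pair matrix feeds a generalized Lennard-Jones term can be sketched as below; the tiny two-residue interaction matrix and the parameters are placeholders, not the MJ, BT or BFKV tables:

        # Generalized Lennard-Jones energy with pair depths taken from a
        # knowledge-based matrix. The matrix below is a placeholder, not the
        # published MJ / BT / BFKV parameter sets.
        import itertools, math

        EPSILON = {("A", "A"): 0.5, ("A", "L"): 0.8, ("L", "L"): 1.2}   # arbitrary units
        SIGMA = 5.0                                                      # Angstrom

        def pair_epsilon(a, b):
            return EPSILON.get((a, b)) or EPSILON[(b, a)]

        def lj(eps, r, sigma=SIGMA):
            sr6 = (sigma / r) ** 6
            return 4.0 * eps * (sr6 ** 2 - sr6)

        def conformation_energy(residues, coords):
            """Sum pairwise LJ terms over all non-bonded residue pairs (|i-j| > 1)."""
            energy = 0.0
            for i, j in itertools.combinations(range(len(residues)), 2):
                if j - i <= 1:
                    continue
                r = math.dist(coords[i], coords[j])
                energy += lj(pair_epsilon(residues[i], residues[j]), r)
            return energy

        seq = ["A", "L", "A", "L"]
        xyz = [(0, 0, 0), (3.8, 0, 0), (7.6, 0, 0), (7.6, 3.8, 0)]
        print(round(conformation_energy(seq, xyz), 3))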

  9. Knowledge-based model of hydrogen-bonding propensity in organic crystals.

    PubMed

    Galek, Peter T A; Fábián, László; Motherwell, W D Samuel; Allen, Frank H; Feeder, Neil

    2007-10-01

    A new method is presented to predict which donors and acceptors form hydrogen bonds in a crystal structure, based on the statistical analysis of hydrogen bonds in the Cambridge Structural Database (CSD). The method is named the logit hydrogen-bonding propensity (LHP) model. The approach has a potential application in identifying both likely and unusual hydrogen bonding, which can help to rationalize stable and metastable crystalline forms, of relevance to drug development in the pharmaceutical industry. Whilst polymorph prediction techniques are widely used, the LHP model is knowledge-based and is not restricted by the computational issues of polymorph prediction, and as such may form a valuable precursor to polymorph screening. Model construction applies logistic regression, using training data obtained with a new survey method based on the CSD system. The survey categorizes the hydrogen bonds and extracts model parameter values using descriptive structural and chemical properties from three-dimensional organic crystal structures. LHP predictions from a fitted model are made using two-dimensional observables alone. In the initial cases analysed, the model is highly accurate, achieving approximately 90% correct classification of both observed hydrogen bonds and non-interacting donor-acceptor pairs. Extensive statistical validation shows the LHP model to be robust across a range of small-molecule organic crystal structures. PMID:17873446
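
    The logit propensity model can be sketched with scikit-learn as below; the descriptors, labels and training values are invented, standing in for the CSD-derived survey data used in the paper:

        # Logistic-regression propensity sketch: fit on donor/acceptor descriptor
        # vectors labelled 1 (hydrogen bond observed) or 0 (non-interacting pair).
        # The descriptors and the tiny training set below are invented.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Columns: donor acidity proxy, acceptor basicity proxy, steric crowding proxy
        X = np.array([
            [0.9, 0.8, 0.1],
            [0.8, 0.9, 0.2],
            [0.2, 0.3, 0.8],
            [0.1, 0.2, 0.9],
            [0.7, 0.6, 0.3],
            [0.3, 0.2, 0.7],
        ])
        y = np.array([1, 1, 0, 0, 1, 0])        # observed H-bond vs. not

        model = LogisticRegression().fit(X, y)

        candidate_pairs = np.array([[0.85, 0.75, 0.15],   # promising donor/acceptor pair
                                    [0.15, 0.25, 0.85]])  # unlikely pair
        print(model.predict_proba(candidate_pairs)[:, 1]) # propensity estimates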

  10. Knowledge-based analysis of microarrays for the discovery of transcriptional regulation relationships

    PubMed Central

    2010-01-01

    Background The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. Results In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. Conclusion High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data. PMID:20122245

  11. NeuroGeM, a knowledgebase of genetic modifiers in neurodegenerative diseases

    PubMed Central

    2013-01-01

    Background Neurodegenerative diseases (NDs) are characterized by the progressive loss of neurons in the human brain. Although the majority of NDs are sporadic, evidence is accumulating that they have a strong genetic component. Therefore, significant efforts have been made in recent years to not only identify disease-causing genes but also genes that modify the severity of NDs, so-called genetic modifiers. To date there exists no compendium that lists and cross-links genetic modifiers of different NDs. Description In order to address this need, we present NeuroGeM, the first comprehensive knowledgebase providing integrated information on genetic modifiers of nine different NDs in the model organisms D. melanogaster, C. elegans, and S. cerevisiae. NeuroGeM cross-links curated genetic modifier information from the different NDs and provides details on experimental conditions used for modifier identification, functional annotations, links to homologous proteins and color-coded protein-protein interaction networks to visualize modifier interactions. We demonstrate how this database can be used to generate new understanding through meta-analysis. For instance, we reveal that the Drosophila genes DnaJ-1, thread, Atx2, and mub are generic modifiers that affect multiple if not all NDs. Conclusion As the first compendium of genetic modifiers, NeuroGeM will assist experimental and computational scientists in their search for the pathophysiological mechanisms underlying NDs. http://chibi.ubc.ca/neurogem. PMID:24229347

  12. Designing optimal transportation networks: a knowledge-based computer-aided multicriteria approach

    SciTech Connect

    Tung, S.I.

    1986-01-01

    The dissertation investigates the applicability of a knowledge-based expert systems (KBES) approach to solve the single-mode (automobile), fixed-demand, discrete, multicriteria, equilibrium transportation-network-design problem. Previous work on this problem has found that mathematical programming methods perform well on small networks with only one objective. Needed is a solution technique that can be used on large networks having multiple, conflicting criteria with different relative importance weights. The KBES approach developed in this dissertation represents a new way to solve network design problems. The development of an expert system involves three major tasks: knowledge acquisition, knowledge representation, and testing. For knowledge acquisition, a computer-aided network design/evaluation model (UFOS) was developed to explore the design space. This study is limited to the problem of designing an optimal transportation network by adding and deleting capacity increments to/from any link in the network. Three weighted criteria were adopted for use in evaluating each design alternative: cost, average V/C ratio, and average travel time.
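
    A minimal sketch of scoring design alternatives against the three weighted criteria is shown below; the alternatives, criterion values and weights are illustrative, with lower scores better for every criterion:

        # Score each network design alternative as a weighted sum of normalised
        # criteria (cost, average V/C ratio, average travel time); lower is better.
        # The alternatives and weights below are illustrative only.

        CRITERIA = ["cost", "avg_vc_ratio", "avg_travel_time"]
        WEIGHTS = {"cost": 0.5, "avg_vc_ratio": 0.2, "avg_travel_time": 0.3}

        ALTERNATIVES = {
            "add_capacity_link_A": {"cost": 4.0e6, "avg_vc_ratio": 0.72, "avg_travel_time": 18.5},
            "add_capacity_link_B": {"cost": 2.5e6, "avg_vc_ratio": 0.81, "avg_travel_time": 19.2},
            "do_nothing":          {"cost": 0.0,   "avg_vc_ratio": 0.95, "avg_travel_time": 22.0},
        }

        def normalise(values):
            lo, hi = min(values), max(values)
            return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

        def rank(alternatives):
            scores = {name: 0.0 for name in alternatives}
            for criterion in CRITERIA:
                column = [alternatives[name][criterion] for name in alternatives]
                for name, norm in zip(alternatives, normalise(column)):
                    scores[name] += WEIGHTS[criterion] * norm
            return sorted(scores.items(), key=lambda kv: kv[1])

        for name, score in rank(ALTERNATIVES):
            print(f"{name:22s} {score:.3f}")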

  13. The Rice Genome Knowledgebase (RGKbase): an annotation database for rice comparative genomics and evolutionary biology

    PubMed Central

    Wang, Dapeng; Xia, Yan; Li, Xinna; Hou, Lixia; Yu, Jun

    2013-01-01

    Over the past 10 years, genomes of cultivated rice cultivars and their wild counterparts have been sequenced although most efforts are focused on genome assembly and annotation of two major cultivated rice (Oryza sativa L.) subspecies, 93-11 (indica) and Nipponbare (japonica). To integrate information from genome assemblies and annotations for better analysis and application, we now introduce a comparative rice genome database, the Rice Genome Knowledgebase (RGKbase, http://rgkbase.big.ac.cn/RGKbase/). RGKbase is built to have three major components: (i) integrated data curation for rice genomics and molecular biology, which includes genome sequence assemblies, transcriptomic and epigenomic data, genetic variations, quantitative trait loci (QTLs) and the relevant literature; (ii) user-friendly viewers, such as Gbrowse, GeneBrowse and Circos, for genome annotations and evolutionary dynamics and (iii) bioinformatic tools for compositional and synteny analyses, gene family classifications, gene ontology terms and pathways and gene co-expression networks. RGKbase currently includes data from five rice cultivars and species: Nipponbare (japonica), 93-11 (indica), PA64s (indica), the African rice (Oryza glaberrima) and a wild rice species (Oryza brachyantha). We are also constantly introducing new datasets from a variety of public efforts, such as two recent releases—sequence data from ∼1000 rice varieties, which are mapped to the reference genome, yielding ample high-quality single-nucleotide polymorphisms and insertions–deletions. PMID:23193278

  14. Knowledge-based automated technique for measuring total lung volume from CT

    NASA Astrophysics Data System (ADS)

    Brown, Matthew S.; McNitt-Gray, Michael F.; Mankovich, Nicholas J.; Goldin, Jonathan G.; Aberle, Denise R.

    1996-04-01

    A robust, automated technique has been developed for estimating total lung volumes from chest computed tomography (CT) images. The technique includes a method for segmenting major chest anatomy. A knowledge-based approach automates the calculation of separate volumes of the whole thorax, lungs, and central tracheo-bronchial tree from volumetric CT data sets. A simple, explicit 3D model describes properties such as shape, topology and X-ray attenuation, of the relevant anatomy, which constrain the segmentation of these anatomic structures. Total lung volume is estimated as the sum of the right and left lungs and excludes the central airways. The method requires no operator intervention. In preliminary testing, the system was applied to image data from two healthy subjects and four patients with emphysema who underwent both helical CT and pulmonary function tests. To obtain single breath-hold scans, the healthy subjects were scanned with a collimation of 5 mm and a pitch of 1.5, while the emphysema patients were scanned with collimation of 10 mm at a pitch of 2.0. CT data were reconstructed as contiguous image sets. Automatically calculated volumes were consistent with body plethysmography results (< 10% difference).
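
    The final voxel-counting step of such a volume estimate can be sketched as follows; a crude attenuation threshold replaces the paper's model-based segmentation, and the CT array is synthetic:

        # Estimate a "lung" volume from a CT-like volume by counting voxels whose
        # attenuation falls in an air-like range, then multiplying by voxel volume.
        # The threshold window and the synthetic data are illustrative only.
        import numpy as np

        def lung_volume_ml(ct_hu, voxel_spacing_mm, hu_window=(-1000, -400)):
            lo, hi = hu_window
            mask = (ct_hu >= lo) & (ct_hu <= hi)            # air-like voxels
            voxel_ml = np.prod(voxel_spacing_mm) / 1000.0   # mm^3 -> mL
            return mask.sum() * voxel_ml

        # Synthetic 64x64x40 volume: soft tissue (+40 HU) with an air-filled block.
        ct = np.full((40, 64, 64), 40.0)
        ct[10:30, 16:48, 16:48] = -850.0
        print(round(lung_volume_ml(ct, voxel_spacing_mm=(5.0, 0.7, 0.7)), 1))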

  15. Ada and knowledge-based systems: A prototype combining the best of both worlds

    NASA Technical Reports Server (NTRS)

    Brauer, David C.

    1986-01-01

    A software architecture is described which facilitates the construction of distributed expert systems using Ada and selected knowledge based systems. This architecture was utilized in the development of a Knowledge-based Maintenance Expert System (KNOMES) prototype for the Space Station Mobile Service Center (MSC). The KNOMES prototype monitors a simulated data stream from MSC sensors and built-in test equipment. It detects anomalies in the data and performs diagnosis to determine the cause. The software architecture which supports the KNOMES prototype allows for the monitoring and diagnosis tasks to be performed concurrently. The basic concept of this software architecture is named ACTOR (Ada Cognitive Task ORganization Scheme). An individual ACTOR is a modular software unit which contains both standard data processing and artificial intelligence components. A generic ACTOR module contains Ada packages for communicating with other ACTORs and accessing various data sources. The knowledge based component of an ACTOR determines the role it will play in a system. In this prototype, an ACTOR will monitor the MSC data stream.

  16. T2D@ZJU: a knowledgebase integrating heterogeneous connections associated with type 2 diabetes mellitus.

    PubMed

    Yang, Zhenzhong; Yang, Jihong; Liu, Wei; Wu, Leihong; Xing, Li; Wang, Yi; Fan, Xiaohui; Cheng, Yiyu

    2013-01-01

    Type 2 diabetes mellitus (T2D), affecting >90% of the diabetic patients, is one of the major threats to human health. A comprehensive understanding of the mechanisms of T2D at molecular level is essential to facilitate the related translational research. Here, we introduce a comprehensive and up-to-date knowledgebase for T2D, i.e. T2D@ZJU. T2D@ZJU contains three levels of heterogeneous connections associated with T2D, which are retrieved from pathway databases, protein-protein interaction databases and the literature, respectively. In the current release, T2D@ZJU contains 1078 T2D-related entities such as proteins, protein complexes, drugs and others together with their corresponding relationships, which include 3069 manually curated connections, 14,893 protein-protein interactions and 26,716 relationships identified by text-mining technology. Moreover, T2D@ZJU provides a user-friendly web interface for users to browse and search data. A Cytoscape Web-based interactive network browser is available to visualize the corresponding network relationships between T2D-related entities. The functionality of T2D@ZJU is shown by means of several case studies. Database URL: http://tcm.zju.edu.cn/t2d. PMID:23846596

  17. TSGene 2.0: an updated literature-based knowledgebase for tumor suppressor genes.

    PubMed

    Zhao, Min; Kim, Pora; Mitra, Ramkrishna; Zhao, Junfei; Zhao, Zhongming

    2016-01-01

    Tumor suppressor genes (TSGs) are a major type of gatekeeper gene in cell growth. A knowledgebase with the systematic collection and curation of TSGs in multiple cancer types is critically important for further studying their biological functions as well as for developing therapeutic strategies. Since its development in 2012, the Tumor Suppressor Gene database (TSGene) has become a popular resource in the cancer research community. Here, we report TSGene version 2.0, which has substantial updates of contents (e.g. up-to-date literature and pan-cancer genomic data collection and curation), data types (noncoding RNAs and protein-coding genes) and content accessibility. Specifically, the current TSGene 2.0 contains 1217 human TSGs (1018 protein-coding and 199 non-coding genes) curated from over 9000 articles. Additionally, TSGene 2.0 provides thousands of expression and mutation patterns derived from pan-cancer data of The Cancer Genome Atlas. A new web interface is available at http://bioinfo.mc.vanderbilt.edu/TSGene/. Systematic analyses of 199 non-coding TSGs provide numerous cancer-specific non-coding mutational events for further screening and clinical use. Intriguingly, we identified 49 protein-coding TSGs that were consistently down-regulated in 11 cancer types. In summary, TSGene 2.0, which is the only available database for TSGs, provides the most updated TSGs and their features in pan-cancer. PMID:26590405

  18. Knowledge-based discovery for designing CRISPR-CAS systems against invading mobilomes in thermophiles.

    PubMed

    Chellapandi, P; Ranjani, J

    2015-09-01

    Clustered regularly interspaced short palindromic repeats (CRISPRs) are direct features of prokaryotic genomes involved in resistance to their bacterial viruses and phages. Herein, we have identified CRISPR loci together with CRISPR-associated sequence (CAS) genes to reveal their immunity against genome invaders in thermophilic archaea and bacteria. The genomic survey in this study implied that the genomic distribution of CRISPR-CAS systems varied from strain to strain and was determined by the degree of invading mobilomes. Direct repeats were found to be similar to some extent in many thermophiles, but their spacers differed in each strain. Phylogenetic analyses of the CAS superfamily revealed that the genes cmr, csh, csx11, HD domain and devR belong to subtypes of the cas gene family. The members of the cas gene family in thermophiles have functionally diverged within closely related genomes and may contribute to the development of several defense strategies. Nevertheless, genome dynamics, geological variation and host defense mechanisms contributed to the sharing of their molecular functions across the thermophiles. The thermophilic archaeon Thermococcus gammotolerans and the thermophilic bacteria Petrotoga mobilis and Thermotoga lettingae showed a superoperon-like clustering of cas genes, typically evolved for their defense pathways. A cmr operon with a specific promoter was identified in the thermophilic archaeon Caldivirga maquilingensis. Overall, we conclude that this knowledge-based genomic survey and phylogeny-based functional assignment suggest a route to designing reliable genetic regulatory circuits from naturally acquired CRISPR-CAS defense pathways in thermophiles for future synthetic biology. PMID:26279704

  19. SIGMA: A Knowledge-Based Simulation Tool Applied to Ecosystem Modeling

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Keller, Richard; Lawless, James G. (Technical Monitor)

    1994-01-01

    The need for better technology to facilitate building, sharing and reusing models is generally recognized within the ecosystem modeling community. The Scientists' Intelligent Graphical Modelling Assistant (SIGMA) creates an environment for model building, sharing and reuse which provides an alternative to more conventional approaches which too often yield poorly documented, awkwardly structured model code. The SIGMA interface presents the user with a list of model quantities which can be selected for computation. Equations to calculate the model quantities may be chosen from an existing library of ecosystem modeling equations, or built using a specialized equation editor. Inputs for the equations may be supplied by data or by calculation from other equations. Each variable and equation is expressed using ecological terminology and scientific units, and is documented with explanatory descriptions and optional literature citations. Automatic scientific unit conversion is supported and only physically-consistent equations are accepted by the system. The system uses knowledge-based semantic conditions to decide which equations in its library make sense to apply in a given situation, and supplies these to the user for selection. The equations and variables are graphically represented as a flow diagram which provides a complete summary of the model. Forest-BGC, a stand-level model that simulates photosynthesis and evapo-transpiration for conifer canopies, was originally implemented in Fortran and subsequently re-implemented using SIGMA. The SIGMA version reproduces daily results and also provides a knowledge base which greatly facilitates inspection, modification and extension of Forest-BGC.
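
    The rule that only physically-consistent equations are accepted can be illustrated with a simple dimension-vector check; the bookkeeping below is a toy, not SIGMA's unit system:

        # Toy dimensional-consistency check: each quantity carries an exponent
        # vector over (length, mass, time); addition requires identical vectors.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Quantity:
            value: float
            dims: tuple          # exponents of (length, mass, time)

            def __add__(self, other):
                if self.dims != other.dims:
                    raise ValueError(f"inconsistent units: {self.dims} + {other.dims}")
                return Quantity(self.value + other.value, self.dims)

            def __mul__(self, other):
                return Quantity(self.value * other.value,
                                tuple(a + b for a, b in zip(self.dims, other.dims)))

        metres  = Quantity(3.0, (1, 0, 0))
        seconds = Quantity(2.0, (0, 0, 1))

        print((metres + Quantity(4.0, (1, 0, 0))).value)   # 7.0, accepted
        try:
            metres + seconds                               # rejected by the check
        except ValueError as err:
            print(err)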

  20. Sensorimotor representation and knowledge-based reasoning for spatial exploration and localisation.

    PubMed

    Zetzsche, C; Wolter, J; Schill, K

    2008-12-01

    We investigate a hybrid system for autonomous exploration and navigation, and implement it in a virtual mobile agent, which operates in virtual spatial environments. The system is based on several distinguishing properties. The representation is not map-like, but based on sensorimotor features, i.e. on combinations of sensory features and motor actions. The system has a hybrid architecture, which integrates a bottom-up processing of sensorimotor features with a top-down, knowledge-based reasoning strategy. This strategy selects the optimal motor action in each step according to the principle of maximum information gain. Two sensorimotor levels with different behavioural granularity are implemented, a macro-level, which controls the movements of the agent in space, and a micro-level, which controls its eye movements. At each level, the same type of hybrid architecture and the same principle of information gain are used for sensorimotor control. The localisation performance of the system is tested with large sets of virtual rooms containing different mixtures of unique and non-unique objects. The results demonstrate that the system efficiently performs those exploratory motor actions that yield a maximum amount of information about the current environment. Localisation is typically achieved within a few steps. Furthermore, the computational complexity of the underlying computations is limited, and the system is robust with respect to minor variations in the spatial environments. PMID:18461375
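
    The maximum-information-gain selection step can be sketched as choosing the motor action whose expected observation most reduces the entropy of the belief over candidate locations; the hypotheses, actions and observation model below are invented for illustration:

        # Pick the exploratory action whose expected observation most reduces the
        # entropy of the belief over candidate rooms. Hypotheses, actions and the
        # observation model are invented for illustration.
        import math

        BELIEF = {"room_A": 0.5, "room_B": 0.3, "room_C": 0.2}

        # P(observation | room) for each action, e.g. "look_left" sees a door in A/B.
        OBS_MODEL = {
            "look_left":  {"room_A": {"door": 0.9, "wall": 0.1},
                           "room_B": {"door": 0.8, "wall": 0.2},
                           "room_C": {"door": 0.1, "wall": 0.9}},
            "look_right": {"room_A": {"door": 0.5, "wall": 0.5},
                           "room_B": {"door": 0.5, "wall": 0.5},
                           "room_C": {"door": 0.5, "wall": 0.5}},
        }

        def entropy(dist):
            return -sum(p * math.log2(p) for p in dist.values() if p > 0)

        def expected_posterior_entropy(action):
            observations = next(iter(OBS_MODEL[action].values())).keys()
            expected = 0.0
            for obs in observations:
                p_obs = sum(BELIEF[h] * OBS_MODEL[action][h][obs] for h in BELIEF)
                if p_obs == 0:
                    continue
                posterior = {h: BELIEF[h] * OBS_MODEL[action][h][obs] / p_obs for h in BELIEF}
                expected += p_obs * entropy(posterior)
            return expected

        best = min(OBS_MODEL, key=expected_posterior_entropy)
        print(best)   # 'look_left': the uninformative action never reduces entropy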

  1. ISPE: A knowledge-based system for fluidization studies. 1990 Annual report

    SciTech Connect

    Reddy, S.

    1991-01-01

    Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all "specified goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.

  2. The UniProtKB/Swiss-Prot knowledgebase and its Plant Proteome Annotation Program

    PubMed Central

    Schneider, Michel; Lane, Lydie; Boutet, Emmanuel; Lieberherr, Damien; Tognolli, Michael; Bougueleret, Lydie; Bairoch, Amos

    2009-01-01

    The UniProt knowledgebase, UniProtKB, is the main product of the UniProt consortium. It consists of two sections, UniProtKB/Swiss-Prot, the manually curated section, and UniProtKB/TrEMBL, the computer translation of the EMBL/GenBank/DDBJ nucleotide sequence database. Taken together, these two sections cover all the proteins characterized or inferred from all publicly available nucleotide sequences. The Plant Proteome Annotation Program (PPAP) of UniProtKB/Swiss-Prot focuses on the manual annotation of plant-specific proteins and protein families. Our major effort is currently directed towards the two model plants Arabidopsis thaliana and Oryza sativa. In UniProtKB/Swiss-Prot, redundancy is minimized by merging all data from different sources in a single entry. The proposed protein sequence is frequently modified after comparison with ESTs, full length transcripts or homologous proteins from other species. The information present in manually curated entries allows the reconstruction of all described isoforms. The annotation also includes proteomics data such as PTM and protein identification MS experimental results. UniProtKB and the other products of the UniProt consortium are accessible online at www.uniprot.org. PMID:19084081

  3. The SWISS-PROT protein knowledgebase and its supplement TrEMBL in 2003.

    PubMed

    Boeckmann, Brigitte; Bairoch, Amos; Apweiler, Rolf; Blatter, Marie-Claude; Estreicher, Anne; Gasteiger, Elisabeth; Martin, Maria J; Michoud, Karine; O'Donovan, Claire; Phan, Isabelle; Pilbout, Sandrine; Schneider, Michel

    2003-01-01

    The SWISS-PROT protein knowledgebase (http://www.expasy.org/sprot/ and http://www.ebi.ac.uk/swissprot/) connects amino acid sequences with the current knowledge in the Life Sciences. Each protein entry provides an interdisciplinary overview of relevant information by bringing together experimental results, computed features and sometimes even contradictory conclusions. Detailed expertise that goes beyond the scope of SWISS-PROT is made available via direct links to specialised databases. SWISS-PROT provides annotated entries for all species, but concentrates on the annotation of entries from human (the HPI project) and other model organisms to ensure the presence of high quality annotation for representative members of all protein families. Part of the annotation can be transferred to other family members, as is already done for microbes by the High-quality Automated and Manual Annotation of microbial Proteomes (HAMAP) project. Protein families and groups of proteins are regularly reviewed to keep up with current scientific findings. Complementarily, TrEMBL strives to comprise all protein sequences that are not yet represented in SWISS-PROT, by incorporating a perpetually increasing level of mostly automated annotation. Researchers are welcome to contribute their knowledge to the scientific community by submitting relevant findings to SWISS-PROT at swiss-prot@expasy.org. PMID:12520024

  4. The SWISS-PROT protein knowledgebase and its supplement TrEMBL in 2003

    PubMed Central

    Boeckmann, Brigitte; Bairoch, Amos; Apweiler, Rolf; Blatter, Marie-Claude; Estreicher, Anne; Gasteiger, Elisabeth; Martin, Maria J.; Michoud, Karine; O'Donovan, Claire; Phan, Isabelle; Pilbout, Sandrine; Schneider, Michel

    2003-01-01

    The SWISS-PROT protein knowledgebase (http://www.expasy.org/sprot/ and http://www.ebi.ac.uk/swissprot/) connects amino acid sequences with the current knowledge in the Life Sciences. Each protein entry provides an interdisciplinary overview of relevant information by bringing together experimental results, computed features and sometimes even contradictory conclusions. Detailed expertise that goes beyond the scope of SWISS-PROT is made available via direct links to specialised databases. SWISS-PROT provides annotated entries for all species, but concentrates on the annotation of entries from human (the HPI project) and other model organisms to ensure the presence of high quality annotation for representative members of all protein families. Part of the annotation can be transferred to other family members, as is already done for microbes by the High-quality Automated and Manual Annotation of microbial Proteomes (HAMAP) project. Protein families and groups of proteins are regularly reviewed to keep up with current scientific findings. Complementarily, TrEMBL strives to comprise all protein sequences that are not yet represented in SWISS-PROT, by incorporating a perpetually increasing level of mostly automated annotation. Researchers are welcome to contribute their knowledge to the scientific community by submitting relevant findings to SWISS-PROT at swiss-prot@expasy.org. PMID:12520024

  5. The UniProtKB/Swiss-Prot knowledgebase and its Plant Proteome Annotation Program.

    PubMed

    Schneider, Michel; Lane, Lydie; Boutet, Emmanuel; Lieberherr, Damien; Tognolli, Michael; Bougueleret, Lydie; Bairoch, Amos

    2009-04-13

    The UniProt knowledgebase, UniProtKB, is the main product of the UniProt consortium. It consists of two sections, UniProtKB/Swiss-Prot, the manually curated section, and UniProtKB/TrEMBL, the computer translation of the EMBL/GenBank/DDBJ nucleotide sequence database. Taken together, these two sections cover all the proteins characterized or inferred from all publicly available nucleotide sequences. The Plant Proteome Annotation Program (PPAP) of UniProtKB/Swiss-Prot focuses on the manual annotation of plant-specific proteins and protein families. Our major effort is currently directed towards the two model plants Arabidopsis thaliana and Oryza sativa. In UniProtKB/Swiss-Prot, redundancy is minimized by merging all data from different sources in a single entry. The proposed protein sequence is frequently modified after comparison with ESTs, full length transcripts or homologous proteins from other species. The information present in manually curated entries allows the reconstruction of all described isoforms. The annotation also includes proteomics data such as PTM and protein identification MS experimental results. UniProtKB and the other products of the UniProt consortium are accessible online at www.uniprot.org. PMID:19084081

  6. Weaning Patients From Mechanical Ventilation: A Knowledge-Based System Approach

    PubMed Central

    Tong, David A.

    1990-01-01

    The WEANing PROtocol (WEANPRO) knowledge-based system assists respiratory therapists and nurses in weaning post-operative cardiovascular patients from mechanical ventilation in the intensive care unit. The knowledge contained in WEANPRO is represented by rules and is implemented in M.1® by Teknowledge, Inc. WEANPRO will run on any IBM®-compatible microcomputer. WEANPRO's performance in weaning patients in the intensive care unit was evaluated in three ways: (1) a statistical comparison between the mean number of arterial blood gases required to wean patients to a T-piece with and without the use of WEANPRO, (2) a critique of the suggestions offered by the system by clinicians not involved in the system development, and (3) an inspection of the user's acceptance of WEANPRO in the intensive care unit. The results of the evaluations revealed that using WEANPRO significantly decreases the number of arterial blood gas analyses needed to wean patients from total dependence on mechanical ventilation to independent breathing using a T-piece. In doing so, WEANPRO's suggestions are accurate and its use is accepted by the clinicians. Currently, WEANPRO is being used in the intensive care unit at the East Unit of Baptist Memorial Hospital in Memphis, Tennessee.

  7. Knowledge-based approaches to the maintenance of a large controlled medical terminology.

    PubMed Central

    Cimino, J J; Clayton, P D; Hripcsak, G; Johnson, S B

    1994-01-01

    OBJECTIVE: Develop a knowledge-based representation for a controlled terminology of clinical information to facilitate creation, maintenance, and use of the terminology. DESIGN: The Medical Entities Dictionary (MED) is a semantic network, based on the Unified Medical Language System (UMLS), with a directed acyclic graph to represent multiple hierarchies. Terms from four hospital systems (laboratory, electrocardiography, medical records coding, and pharmacy) were added as nodes in the network. Additional knowledge about terms, added as semantic links, was used to assist in integration, harmonization, and automated classification of disparate terminologies. RESULTS: The MED contains 32,767 terms and is in active clinical use. Automated classification was successfully applied to terms for laboratory specimens, laboratory tests, and medications. One benefit of the approach has been the automated inclusion of medications into multiple pharmacologic and allergenic classes that were not present in the pharmacy system. Another benefit has been the reduction of maintenance efforts by 90%. CONCLUSION: The MED is a hybrid of terminology and knowledge. It provides domain coverage, synonymy, consistency of views, explicit relationships, and multiple classification while preventing redundancy, ambiguity (homonymy) and misclassification. PMID:7719786

  8. Diagnosis by integrating model-based reasoning with knowledge-based reasoning

    NASA Technical Reports Server (NTRS)

    Bylander, Tom

    1988-01-01

    Our research investigates how observations can be categorized by integrating a qualitative physical model with experiential knowledge. Our domain is diagnosis of pathologic gait in humans, in which the observations are the gait motions, muscle activity during gait, and physical exam data, and the diagnostic hypotheses are the potential muscle weaknesses, muscle mistimings, and joint restrictions. Patients with underlying neurological disorders typically have several malfunctions. Among the problems that need to be faced are: the ambiguity of the observations, the ambiguity of the qualitative physical model, correspondence of the observations and hypotheses to the qualitative physical model, the inherent uncertainty of experiential knowledge, and the combinatorics involved in forming composite hypotheses. Our system divides the work so that the knowledge-based reasoning suggests which hypotheses appear more likely than others, the qualitative physical model is used to determine which hypotheses explain which observations, and another process combines these functionalities to construct a composite hypothesis based on explanatory power and plausibility. We speculate that the reasoning architecture of our system is generally applicable to complex domains in which a less-than-perfect physical model and less-than-perfect experiential knowledge need to be combined to perform diagnosis.

  9. The Application of Integrated Knowledge-based Systems for the Biomedical Risk Assessment Intelligent Network (BRAIN)

    NASA Technical Reports Server (NTRS)

    Loftin, Karin C.; Ly, Bebe; Webster, Laurie; Verlander, James; Taylor, Gerald R.; Riley, Gary; Culbert, Chris; Holden, Tina; Rudisill, Marianne

    1993-01-01

    One of NASA's goals for long duration space flight is to maintain acceptable levels of crew health, safety, and performance. One way of meeting this goal is through the Biomedical Risk Assessment Intelligent Network (BRAIN), an integrated network of both human and computer elements. The BRAIN will function as an advisor to flight surgeons by assessing the risk of in-flight biomedical problems and recommending appropriate countermeasures. This paper describes the joint effort among various NASA elements to develop BRAIN and an Infectious Disease Risk Assessment (IDRA) prototype. The implementation of this effort addresses the technological aspects of the following: (1) knowledge acquisition; (2) integration of IDRA components; (3) use of expert systems to automate the biomedical prediction process; (4) development of a user-friendly interface; and (5) integration of the IDRA prototype and Exercise Countermeasures Intelligent System (ExerCISys). Because the C Language, CLIPS (the C Language Integrated Production System), and the X-Window System were portable and easily integrated, they were chosen as the tools for the initial IDRA prototype. The feasibility was tested by developing an IDRA prototype that predicts the individual risk of influenza. The application of knowledge-based systems to risk assessment is of great market value to the medical technology industry.

  10. A methodology for evaluating potential KBS (Knowledge-Based Systems) applications

    SciTech Connect

    Melton, R.B.; DeVaney, D.M.; Whiting, M.A.; Laufmann, S.C.

    1989-06-01

    It is often difficult to assess how well Knowledge-Based Systems (KBS) techniques and paradigms may be applied to automating various tasks. This report describes the approach and organization of an assessment procedure that involves two levels of analysis. Level One can be performed by individuals with little technical expertise relative to KBS development, while Level Two is intended to be used by experienced KBS developers. The two levels review four groups of issues: goals, appropriateness, resources, and non-technical considerations. Those criteria are identified which are important at each step in the assessment. A qualitative methodology for scoring the task relative to the assessment criteria is provided to allow analysts to make better-informed decisions with regard to the potential effectiveness of applying KBS technology. In addition to this documentation, the assessment methodology has been implemented for personal computer use with the HYPERCARD™ software on a Macintosh™ computer. This interactive mode facilitates small-group analysis of potential KBS applications and permits a non-sequential appraisal with provisions for automated note-keeping and question scoring. The results provide a useful tool for assessing the feasibility of using KBS techniques in performing tasks in support of treaty verification or IC functions. 13 refs., 3 figs.

  11. Knowledge-based factor analysis of multidimensional nuclear medicine image sequences

    NASA Astrophysics Data System (ADS)

    Yap, Jeffrey T.; Chen, Chin-Tu; Cooper, Malcolm; Treffert, Jon D.

    1994-05-01

    We have developed a knowledge-based approach to analyzing dynamic nuclear medicine data sets using factor analysis. Prior knowledge is used as constraints to produce factor images and their associated time functions which are physically and physiologically realistic. These methods have been applied to both planar and tomographic image sequences acquired using various single-photon emitting and positron emitting radiotracers. Computer-simulated data, non-human primate studies, and human clinical studies have been used to develop and evaluate the methodology. The organ systems studied include the kidneys, heart, brain, liver, and bone. The factors generated represent various isolated aspects of physiologic function, such as tissue perfusion and clearance. In some clinical studies, the factors have indicated the potential to isolate diseased tissue from normally functioning tissue. In addition, the factor analysis of data acquired using newly developed radioligands has shown the ability to differentiate the specific binding of the radioligand to the targeted receptors from the non-specific binding. This suggests the potential use of factor analysis in the development and evaluation of radiolabeled compounds as well as in the investigation of specific receptor systems and their role in diagnosing disease.
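
    For a flavour of decomposing a dynamic sequence into physiologically interpretable factors, a non-negative matrix factorization can stand in for the constrained factor analysis described above; the two-compartment synthetic data and the use of scikit-learn's NMF are assumptions of this sketch, not the authors' method:

        # Decompose a dynamic image sequence (frames x pixels) into non-negative
        # factors: time-activity curves and their spatial weight "images". NMF
        # stands in for the constrained factor analysis; data are synthetic.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 30)
        perfusion = np.exp(-3 * t)                 # fast clearance compartment
        retention = 1 - np.exp(-3 * t)             # slow uptake compartment

        n_pixels = 200
        mix = rng.random((2, n_pixels))            # per-pixel contribution of each
        frames = np.outer(perfusion, mix[0]) + np.outer(retention, mix[1])
        frames += 0.01 * rng.random(frames.shape)  # a little noise

        model = NMF(n_components=2, init="nndsvda", max_iter=500)
        curves = model.fit_transform(frames)       # factor time curves
        images = model.components_                 # factor "images"

        print(curves.shape, images.shape)          # (30, 2) (2, 200)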

  12. BrainKnowledge: a human brain function mapping knowledge-base system.

    PubMed

    Hsiao, Mei-Yu; Chen, Chien-Chung; Chen, Jyh-Horng

    2011-03-01

    Associating fMRI image datasets with the available literature is crucial for the analysis and interpretation of fMRI data. Here, we present a human brain function mapping knowledge-base system (BrainKnowledge) that associates fMRI data analysis and literature search functions. BrainKnowledge not only contains indexed literature, but also provides the ability to compare experimental data with those derived from the literature. BrainKnowledge provides three major functions: (1) to search for brain activation models by selecting a particular brain function; (2) to query functions by brain structure; (3) to compare the fMRI data with data extracted from the literature. All these functions are based on our literature extraction and mining module developed earlier (Hsiao, Chen, Chen. Journal of Biomedical Informatics 42, 912-922, 2009), which automatically downloads and extracts information from a vast amount of fMRI literature and generates co-occurrence models and brain association patterns to illustrate the relevance of brain structures and functions. BrainKnowledge currently provides three co-occurrence models: (1) a structure-to-function co-occurrence model; (2) a function-to-structure co-occurrence model; and (3) a brain structure co-occurrence model. Each model has been generated from over 15,000 extracted Medline abstracts. In this study, we illustrate the capabilities of BrainKnowledge and provide an application example with the studies of affect. BrainKnowledge, which combines fMRI experimental results with Medline abstracts, may be of great assistance to scientists not only by freeing up resources and valuable time, but also by providing a powerful tool that collects and organizes over ten thousand abstracts into readily usable and relevant sources of information for researchers. PMID:20857233

  13. MetazSecKB: the human and animal secretome and subcellular proteome knowledgebase.

    PubMed

    Meinken, John; Walker, Gary; Cooper, Chester R; Min, Xiang Jia

    2015-01-01

    The subcellular location of a protein is a key factor in determining the molecular function of the protein in an organism. MetazSecKB is a secretome and subcellular proteome knowledgebase specifically designed for metazoan, i.e. human and animals. The protein sequence data, consisting of over 4 million entries with 121 species having a complete proteome, were retrieved from UniProtKB. Protein subcellular locations including secreted and 15 other subcellular locations were assigned based on either curated experimental evidence or prediction using seven computational tools. The protein or subcellular proteome data can be searched and downloaded using several different types of identifiers, gene name or keyword(s), and species. BLAST search and community annotation of subcellular locations are also supported. Our primary analysis revealed that the proteome sizes, secretome sizes and other subcellular proteome sizes vary tremendously in different animal species. The proportions of secretomes vary from 3 to 22% (average 8%) in metazoa species. The proportions of other major subcellular proteomes ranged approximately 21-43% (average 31%) in cytoplasm, 20-37% (average 30%) in nucleus, 3-19% (average 12%) as plasma membrane proteins and 3-9% (average 6%) in mitochondria. We also compared the protein families in secretomes of different primates. The Gene Ontology and protein family domain analysis of human secreted proteins revealed that these proteins play important roles in regulation of human structure development, signal transduction, immune systems and many other biological processes. Database URL: http://proteomics.ysu.edu/secretomes/animal/index.php. PMID:26255309

  14. Ab Initio Protein Structure Assembly Using Continuous Structure Fragments and Optimized Knowledge-based Force Field

    PubMed Central

    Xu, Dong; Zhang, Yang

    2012-01-01

    Ab initio protein folding is one of the major unsolved problems in computational biology due to the difficulties in force field design and conformational search. We developed a novel program, QUARK, for template-free protein structure prediction. Query sequences are first broken into fragments of 1–20 residues where multiple fragment structures are retrieved at each position from unrelated experimental structures. Full-length structure models are then assembled from fragments using replica-exchange Monte Carlo simulations, which are guided by a composite knowledge-based force field. A number of novel energy terms and Monte Carlo movements are introduced and the particular contributions to enhancing the efficiency of both force field and search engine are analyzed in detail. QUARK prediction procedure is depicted and tested on the structure modeling of 145 non-homologous proteins. Although no global templates are used and all fragments from experimental structures with template modeling score (TM-score) >0.5 are excluded, QUARK can successfully construct 3D models of correct folds in 1/3 cases of short proteins up to 100 residues. In the ninth community-wide Critical Assessment of protein Structure Prediction (CASP9) experiment, QUARK server outperformed the second and third best servers by 18% and 47% based on the cumulative Z-score of global distance test-total (GDT-TS) scores in the free modeling (FM) category. Although ab initio protein folding remains a significant challenge, these data demonstrate new progress towards the solution of the most important problem in the field. PMID:22411565
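
    The replica-exchange Monte Carlo step that drives this kind of fragment assembly can be sketched with the standard swap-acceptance criterion below; the temperature ladder and energies are illustrative placeholders, not QUARK's actual force field or schedule.

      import math, random

      def try_swap(energy_i, energy_j, T_i, T_j):
          """Metropolis criterion for exchanging conformations between replicas at
          temperatures T_i < T_j (Boltzmann constant folded into the temperatures):
          accept with probability min(1, exp[(1/T_i - 1/T_j) * (E_i - E_j)])."""
          delta = (1.0 / T_i - 1.0 / T_j) * (energy_i - energy_j)
          return delta >= 0 or random.random() < math.exp(delta)

      # Illustrative replica ladder; in practice the energies come from the
      # knowledge-based force field evaluated on the assembled fragments.
      temperatures = [1.0, 1.3, 1.7, 2.2]
      energies = [-120.0, -112.5, -101.0, -95.3]
      for k in range(len(temperatures) - 1):
          if try_swap(energies[k], energies[k + 1], temperatures[k], temperatures[k + 1]):
              energies[k], energies[k + 1] = energies[k + 1], energies[k]
      print(energies)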

  15. Consistent Refinement of Submitted Models at CASP using a Knowledge-based Potential

    PubMed Central

    Chopra, Gaurav; Kalisman, Nir; Levitt, Michael

    2010-01-01

    Protein structure refinement is an important but unsolved problem; it must be solved if we are to predict biological function that is very sensitive to structural details. Specifically, Critical Assessment of Techniques for Protein Structure Prediction (CASP) shows that the accuracy of predictions in the comparative modeling category is often worse than that of the template on which the homology model is based. Here we describe a refinement protocol that is able to consistently refine submitted predictions for all categories at CASP7. The protocol uses direct energy minimization of the knowledge-based potential of mean force that is based on the interaction statistics of 167 atom types (Summa and Levitt, Proc Natl Acad Sci USA 2007; 104:3177–3182). Our protocol is thus computationally very efficient; it only takes a few minutes of CPU time to run typical protein models (300 residues). We observe an average structural improvement of 1% in GDT_TS, for predictions that have low and medium homology to known PDB structures (Global Distance Test score or GDT_TS between 50 and 80%). We also observe a marked improvement in the stereochemistry of the models. The level of improvement varies amongst the various participants at CASP, but we see large improvements (>10% increase in GDT_TS) even for models predicted by the best performing groups at CASP7. In addition, our protocol consistently improved the best predicted models in the refinement category at CASP7 and CASP8. These improvements in structure and stereochemistry prove the usefulness of our computationally inexpensive, powerful and automatic refinement protocol. PMID:20589633
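
    For reference, GDT_TS (the metric whose 1% average improvement is quoted above) averages the fraction of residues within 1, 2, 4 and 8 Angstrom of the reference structure. The simplified sketch below assumes an already-superposed model (the full algorithm searches over superpositions), and the per-residue deviations are invented.

      import numpy as np

      def gdt_ts(dists):
          """Simplified GDT_TS: mean fraction of residues within 1, 2, 4 and 8 A
          of the reference, assuming an optimal superposition is already applied."""
          dists = np.asarray(dists)
          return 100.0 * np.mean([(dists <= c).mean() for c in (1.0, 2.0, 4.0, 8.0)])

      # Hypothetical per-residue C-alpha deviations before and after refinement.
      before = np.array([0.8, 1.5, 2.2, 3.9, 4.1, 6.5, 9.0, 0.5])
      after = before * 0.9
      print(round(gdt_ts(before), 1), round(gdt_ts(after), 1))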

  16. MetazSecKB: the human and animal secretome and subcellular proteome knowledgebase

    PubMed Central

    Meinken, John; Walker, Gary; Cooper, Chester R.; Min, Xiang Jia

    2015-01-01

    The subcellular location of a protein is a key factor in determining the molecular function of the protein in an organism. MetazSecKB is a secretome and subcellular proteome knowledgebase specifically designed for metazoan, i.e. human and animals. The protein sequence data, consisting of over 4 million entries with 121 species having a complete proteome, were retrieved from UniProtKB. Protein subcellular locations including secreted and 15 other subcellular locations were assigned based on either curated experimental evidence or prediction using seven computational tools. The protein or subcellular proteome data can be searched and downloaded using several different types of identifiers, gene name or keyword(s), and species. BLAST search and community annotation of subcellular locations are also supported. Our primary analysis revealed that the proteome sizes, secretome sizes and other subcellular proteome sizes vary tremendously in different animal species. The proportions of secretomes vary from 3 to 22% (average 8%) in metazoa species. The proportions of other major subcellular proteomes ranged approximately 21–43% (average 31%) in cytoplasm, 20–37% (average 30%) in nucleus, 3–19% (average 12%) as plasma membrane proteins and 3–9% (average 6%) in mitochondria. We also compared the protein families in secretomes of different primates. The Gene Ontology and protein family domain analysis of human secreted proteins revealed that these proteins play important roles in regulation of human structure development, signal transduction, immune systems and many other biological processes. Database URL: http://proteomics.ysu.edu/secretomes/animal/index.php PMID:26255309

  17. Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles

    PubMed Central

    2016-01-01

    Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource intense method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
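
    A minimal sketch of the exhaustive O(2^N) variant described above: enumerate every subset of receptor conformations, fuse per-ligand docking scores across the subset (here by taking the minimum, an assumed fusion rule), and keep the subset with the best ROC AUC. The score matrix and activity labels are synthetic.

      from itertools import combinations
      import numpy as np
      from sklearn.metrics import roc_auc_score

      def best_ensemble(scores, labels):
          """Exhaustive O(2^N) search over receptor-conformation subsets.
          scores: (n_ligands, n_conformations) docking scores (lower = better);
          labels: 1 for actives, 0 for decoys."""
          n_conf = scores.shape[1]
          best_auc, best_subset = -1.0, None
          for k in range(1, n_conf + 1):
              for subset in combinations(range(n_conf), k):
                  fused = scores[:, list(subset)].min(axis=1)   # best score per ligand
                  auc = roc_auc_score(labels, -fused)           # lower score = more active
                  if auc > best_auc:
                      best_auc, best_subset = auc, subset
          return best_subset, best_auc

      rng = np.random.default_rng(0)
      scores = rng.normal(size=(50, 6))          # 50 ligands x 6 conformations (toy data)
      labels = rng.integers(0, 2, size=50)
      print(best_ensemble(scores, labels))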

  18. A knowledge-based decision support system in bioinformatics: an application to protein complex extraction

    PubMed Central

    2013-01-01

    Background We introduce a Knowledge-based Decision Support System (KDSS) in order to face the Protein Complex Extraction issue. Using a Knowledge Base (KB) coding the expertise about the proposed scenario, our KDSS is able to suggest both strategies and tools, according to the features of input dataset. Our system provides a navigable workflow for the current experiment and furthermore it offers support in the configuration and running of every processing component of that workflow. This last feature makes our system a crossover between classical DSS and Workflow Management Systems. Results We briefly present the KDSS' architecture and basic concepts used in the design of the knowledge base and the reasoning component. The system is then tested using a subset of Saccharomyces cerevisiae Protein-Protein interaction dataset. We used this subset because it has been well studied in literature by several research groups in the field of complex extraction: in this way we could easily compare the results obtained through our KDSS with theirs. Our system suggests both a preprocessing and a clustering strategy, and for each of them it proposes and eventually runs suited algorithms. Our system's final results are then composed of a workflow of tasks, that can be reused for other experiments, and the specific numerical results for that particular trial. Conclusions The proposed approach, using the KDSS' knowledge base, provides a novel workflow that gives the best results with regard to the other workflows produced by the system. This workflow and its numeric results have been compared with other approaches about PPI network analysis found in literature, offering similar results. PMID:23368995

  19. Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles.

    PubMed

    Swift, Robert V; Jusoh, Siti A; Offutt, Tavina L; Li, Eric S; Amaro, Rommie E

    2016-05-23

    Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource intense method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522

  20. Knowledge-based tensor anisotropic diffusion of cardiac magnetic resonance images.

    PubMed

    Sanchez-Ortiz, G I; Rueckert, D; Burger, P

    1999-03-01

    We present a general formulation for a new knowledge-based approach to anisotropic diffusion of multi-valued and multi-dimensional images, with an illustrative application for the enhancement and segmentation of cardiac magnetic resonance (MR) images. In the proposed method all available information is incorporated through a new definition of the conductance function which differs from previous approaches in two aspects. First, we model the conductance as an explicit function of time and position, and not only of the differential structure of the image data. Inherent properties of the system (such as geometrical features or non-homogeneous data sampling) can therefore be taken into account by allowing the conductance function to vary depending on the location in the spatial and temporal coordinate space. Secondly, by defining the conductance as a second-rank tensor, the non-homogeneous diffusion equation gains a truly anisotropic character which is essential to emulate and handle certain aspects of complex data systems. The method presented is suitable for image enhancement and segmentation of single- or multi-valued images. We demonstrate the efficiency of the proposed framework by applying it to anatomical and velocity-encoded cine volumetric (4-D) MR images of the left ventricle. Spatial and temporal a priori knowledge about the shape and dynamics of the heart is incorporated into the diffusion process. We compare our results to those obtained with other diffusion schemes and exhibit the improvement in regions of the image with low contrast and low signal-to-noise ratio. PMID:10709698
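
    In a common formulation of tensor-valued anisotropic diffusion, the key idea stated above, a conductance defined as a second-rank tensor that depends explicitly on position and time, corresponds to a PDE of roughly the following form (a paraphrase in LaTeX, not the authors' exact equations):

      \frac{\partial I(\mathbf{x},t)}{\partial t}
        = \nabla \cdot \bigl( D(\mathbf{x},t)\, \nabla I(\mathbf{x},t) \bigr),
      \qquad
      D(\mathbf{x},t) \in \mathbb{R}^{d \times d}\ \text{symmetric positive semi-definite.}

    Choosing D(x, t) = g(|∇I|) times the identity recovers the scalar, non-homogeneous Perona-Malik case, while letting D also encode prior spatial and temporal knowledge about the heart gives the knowledge-based variant described in the abstract.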

  1. Chemogenomics knowledgebased polypharmacology analyses of drug abuse related G-protein coupled receptors and their ligands

    PubMed Central

    Xie, Xiang-Qun; Wang, Lirong; Liu, Haibin; Ouyang, Qin; Fang, Cheng; Su, Weiwei

    2013-01-01

    Drug abuse (DA) and addiction is a complex illness, broadly viewed as a neurobiological impairment with genetic and environmental factors that influence its development and manifestation. Abused substances can disrupt the activity of neurons by interacting with many proteins, particularly G-protein coupled receptors (GPCRs). A few medicines that target the central nervous system (CNS) can also modulate DA related proteins, such as GPCRs, which can act in conjunction with the controlled psychoactive substance(s) and increase side effects. To fully explore the molecular interaction networks that underlie DA and to effectively modulate the GPCRs in these networks with small molecules for DA treatment, we built a drug-abuse domain specific chemogenomics knowledgebase (DA-KB) to centralize the reported chemogenomics research information related to DA and CNS disorders in an effort to benefit researchers across a broad range of disciplines. We then focus on the analysis of GPCRs as many of them are closely related to DA. Their distribution in human tissues was also analyzed for the study of side effects caused by abused drugs. We further implement our computational algorithms/tools to explore DA targets, DA mechanisms and pathways involved in polydrug addiction and to explore polypharmacological effects of the GPCR ligands. Finally, the polypharmacology effects of GPCRs-targeted medicines for DA treatment were investigated and such effects can be exploited for the development of drugs with polypharmacophore for DA intervention. The chemogenomics database and the analysis tools will help us better understand the mechanisms of drug abuse and facilitate the design of new medications for system pharmacotherapy of DA. PMID:24567719

  2. A knowledge-based approach to estimating the magnitude and spatial patterns of potential threats to soil biodiversity.

    PubMed

    Orgiazzi, Alberto; Panagos, Panos; Yigini, Yusuf; Dunbar, Martha B; Gardi, Ciro; Montanarella, Luca; Ballabio, Cristiano

    2016-03-01

    Because of the increasing pressures exerted on soil, below-ground life is under threat. Knowledge-based rankings of potential threats to different components of soil biodiversity were developed in order to assess the spatial distribution of threats on a European scale. A list of 13 potential threats to soil biodiversity was proposed to experts with different backgrounds in order to assess the potential for three major components of soil biodiversity: soil microorganisms, fauna, and biological functions. This approach allowed us to obtain knowledge-based rankings of threats. These classifications formed the basis for the development of indices through an additive aggregation model that, along with ad-hoc proxies for each pressure, allowed us to preliminarily assess the spatial patterns of potential threats. Intensive exploitation was identified as the highest pressure. In contrast, the use of genetically modified organisms in agriculture was considered as the threat with least potential. The potential impact of climate change showed the highest uncertainty. Fourteen out of the 27 considered countries have more than 40% of their soils with moderate-high to high potential risk for all three components of soil biodiversity. Arable soils are the most exposed to pressures. Soils within the boreal biogeographic region showed the lowest risk potential. The majority of soils at risk are outside the boundaries of protected areas. First maps of risks to three components of soil biodiversity based on the current scientific knowledge were developed. Despite the intrinsic limits of knowledge-based assessments, a remarkable potential risk to soil biodiversity was observed. Guidelines to preliminarily identify and circumscribe soils potentially at risk are provided. This approach may be used in future research to assess threat at both local and global scale and identify areas of possible risk and, subsequently, design appropriate strategies for monitoring and protection of soil
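
    A toy sketch of the additive aggregation model mentioned above: each threat receives expert scores, which are averaged, weighted and summed into a composite risk index. The threat names come from the abstract; the scores and equal weights are invented placeholders.

      # Hypothetical expert scores (1 = lowest pressure, 5 = highest) for a few of the
      # 13 potential threats, for one soil-biodiversity component (e.g. soil fauna).
      expert_scores = {
          "intensive exploitation": [5, 5, 4, 5],
          "climate change":         [3, 2, 5, 4],   # widest spread -> highest uncertainty
          "GMO use in agriculture": [1, 1, 2, 1],
      }

      def additive_index(scores_by_threat, weights=None):
          """Composite risk index: weighted sum of the mean expert score per threat."""
          weights = weights or {t: 1.0 for t in scores_by_threat}
          return sum(weights[t] * sum(s) / len(s) for t, s in scores_by_threat.items())

      print(round(additive_index(expert_scores), 2))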

  3. A Knowledge-Based System For Analysis, Intervention Planning and Prevention of Defects in Immovable Cultural Heritage Objects and Monuments

    NASA Astrophysics Data System (ADS)

    Valach, J.; Cacciotti, R.; Kuneš, P.; Čerňanský, M.; Bláha, J.

    2012-04-01

    The paper presents a project aiming to develop a knowledge-based system for the documentation and analysis of defects of cultural heritage objects and monuments. The MONDIS information system concentrates knowledge on damage to immovable structures due to various causes, and on preventive/remedial actions performed to protect/repair them, where possible. The system under development is intended to provide an understanding of the causal relationships between a defect, the materials, the external load, and the environment of the built object. The foundation of the knowledge-based system will be the systemized and formalized knowledge on defects and their mitigation, acquired by analyzing a representative set of cases documented in the past. On the basis of design comparability, used technologies, materials and the nature of the external forces and surroundings, the developed software system has the capacity to indicate the most likely risks of new defect occurrence or the extension of existing ones. The system will also allow for a comparison of the actual failure with similar documented cases and will propose a suitable technical intervention plan. The system will provide conservationists, administrators and owners of historical objects with a toolkit for documenting defects in their objects. Also, advanced artificial intelligence methods will offer accumulated knowledge to users and will help them get oriented in relevant techniques of preventive interventions and reconstructions based on similarity with their case.

  4. A community effort towards a knowledge-base and mathematical model of the human pathogen Salmonella Typhimurium LT2

    SciTech Connect

    Thiele, Ines; Hyduke, Daniel R.; Steeb, Benjamin; Fankam, Guy; Allen, Douglas K.; Bazzani, Susanna; Charusanti, Pep; Chen, Feng-Chi; Fleming, Ronan MT; Hsiung, Chao A.; De Keersmaecker, Sigrid CJ; Liao, Yu-Chieh; Marchal, Kathleen; Mo, Monica L.; Özdemir, Emre; Raghunathan, Anu; Reed, Jennifer L.; Shin, Sook-Il; Sigurbjörnsdóttir, Sara; Steinmann, Jonas; Sudarsan, Suresh; Swainston, Neil; Thijs, Inge M.; Zengler, Karsten; Palsson, Bernhard O.; Adkins, Joshua N.; Bumann, Dirk

    2011-01-01

    Metabolic reconstructions (MRs) are common denominators in systems biology and represent biochemical, genetic, and genomic (BiGG) knowledge-bases for target organisms by capturing currently available information in a consistent, structured manner. Salmonella enterica subspecies I serovar Typhimurium is a human pathogen, causes various diseases and its increasing antibiotic resistance poses a public health problem. Here, we describe a community-driven effort, in which more than 20 experts in S. Typhimurium biology and systems biology collaborated to reconcile and expand the S. Typhimurium BiGG knowledge-base. The consensus MR was obtained starting from two independently developed MRs for S. Typhimurium. Key results of this reconstruction jamboree include i) development and implementation of a community-based workflow for MR annotation and reconciliation; ii) incorporation of thermodynamic information; and iii) use of the consensus MR to identify potential multi-target drug therapy approaches. Finally, taken together, with the growing number of parallel MRs a structured, community-driven approach will be necessary to maximize quality while increasing adoption of MRs in experimental design and interpretation.

  5. Personal profile of medical students selected through a knowledge-based exam only: are we missing suitable students?

    PubMed Central

    Abbiati, Milena; Baroffio, Anne; Gerbase, Margaret W.

    2016-01-01

    Introduction A consistent body of literature highlights the importance of a broader approach to select medical school candidates both assessing cognitive capacity and individual characteristics. However, selection in a great number of medical schools worldwide is still based on knowledge exams, a procedure that might neglect students with needed personal characteristics for future medical practice. We investigated whether the personal profile of students selected through a knowledge-based exam differed from those not selected. Methods Students applying for medical school (N=311) completed questionnaires assessing motivations for becoming a doctor, learning approaches, personality traits, empathy, and coping styles. Selection was based on the results of MCQ tests. Principal component analysis was used to draw a profile of the students. Differences between selected and non-selected students were examined by Multivariate ANOVAs, and their impact on selection by logistic regression analysis. Results Students demonstrating a profile of diligence with higher conscientiousness, deep learning approach, and task-focused coping were more frequently selected (p=0.01). Other personal characteristics such as motivation, sociability, and empathy did not significantly differ, comparing selected and non-selected students. Conclusion Selection through a knowledge-based exam privileged diligent students. It did neither advantage nor preclude candidates with a more humane profile. PMID:27079886
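
    The analysis pipeline outlined above (principal components of questionnaire traits, then a model of selection outcome on those components) might look roughly like the following sketch; the trait matrix, component count and outcomes are invented, and scikit-learn is assumed merely for convenience.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      # Invented questionnaire matrix: 311 applicants x 6 trait scores
      # (e.g. conscientiousness, deep learning approach, task-focused coping, empathy, ...).
      X = rng.normal(size=(311, 6))
      selected = rng.integers(0, 2, size=311)    # 1 = passed the MCQ-based selection

      components = PCA(n_components=3).fit_transform(X)     # applicant profile components
      model = LogisticRegression().fit(components, selected)
      print(model.coef_)   # association of each profile component with being selected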

  6. Development of an intelligent interface for adding spatial objects to a knowledge-based geographic information system

    NASA Technical Reports Server (NTRS)

    Campbell, William J.; Goettsche, Craig

    1989-01-01

    Earth scientists lack adequate tools for quantifying complex relationships between existing data layers and studying and modeling the dynamic interactions of these data layers. There is a need for an earth systems tool to manipulate multi-layered, heterogeneous data sets that are spatially indexed, such as sensor imagery and maps, easily and intelligently in a single system. The system can access and manipulate data from multiple sensor sources, maps, and from a learned object hierarchy using an advanced knowledge-based geographical information system. A prototype Knowledge-Based Geographic Information System (KBGIS) was recently constructed. Many of the system internals are well developed, but the system lacks an adequate user interface. A methodology is described for developing an intelligent user interface and extending KBGIS to interconnect with existing NASA systems, such as imagery from the Land Analysis System (LAS), atmospheric data in Common Data Format (CDF), and visualization of complex data with the National Space Science Data Center Graphics System. This would allow NASA to quickly explore the utility of such a system, given the ability to transfer data in and out of KBGIS easily. The use and maintenance of the object hierarchies as polymorphic data types brings, to data management, a whole new set of problems and issues, few of which have been explored above the prototype level.

  7. From protein sequences to 3D-structures and beyond: the example of the UniProt knowledgebase.

    PubMed

    Hinz, Ursula

    2010-04-01

    With the dramatic increase in the volume of experimental results in every domain of life sciences, assembling pertinent data and combining information from different fields has become a challenge. Information is dispersed over numerous specialized databases and is presented in many different formats. Rapid access to experiment-based information about well-characterized proteins helps predict the function of uncharacterized proteins identified by large-scale sequencing. In this context, universal knowledgebases play essential roles in providing access to data from complementary types of experiments and serving as hubs with cross-references to many specialized databases. This review outlines how the value of experimental data is optimized by combining high-quality protein sequences with complementary experimental results, including information derived from protein 3D-structures, using as an example the UniProt knowledgebase (UniProtKB) and the tools and links provided on its website ( http://www.uniprot.org/ ). It also evokes precautions that are necessary for successful predictions and extrapolations. PMID:20043185

  8. Development of the Knowledge-based & Empirical Combined Scoring Algorithm (KECSA) to Score Protein-Ligand Interactions

    PubMed Central

    Zheng, Zheng

    2013-01-01

    We describe a novel knowledge-based protein-ligand scoring function that employs a new definition for the reference state, allowing us to relate a statistical potential to a Lennard-Jones (LJ) potential. In this way, the LJ potential parameters were generated from protein-ligand complex structural data contained in the PDB. Forty-nine types of atomic pairwise interactions were derived using this method, which we call the knowledge-based and empirical combined scoring algorithm (KECSA). Two validation benchmarks were introduced to test the performance of KECSA. The first validation benchmark included two test sets that address the training-set and enthalpy/entropy of KECSA. The second validation benchmark suite included two large-scale and five small-scale test sets to compare the reproducibility of KECSA with respect to two empirical score functions previously developed in our laboratory (LISA and LISA+), as well as to other well-known scoring methods. Validation results illustrate that KECSA shows improved performance in all test sets when compared with other scoring methods, especially in its ability to minimize the RMSE. LISA and LISA+ displayed similar performance using the correlation coefficient and Kendall τ as the metric of quality for some of the small test sets. Further pathways for improvement are discussed which would make KECSA more sensitive to subtle changes in ligand structure. PMID:23560465
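
    Knowledge-based scoring functions of this family typically start from an inverse-Boltzmann potential of mean force per atom-pair type; the sketch below shows that generic step only and does not reproduce KECSA's specific reference state or its mapping onto Lennard-Jones parameters. The distance histograms are toy data.

      import numpy as np

      def statistical_potential(observed_counts, reference_counts, kT=0.593):
          """Generic inverse-Boltzmann potential of mean force for one atom-pair type:
          U(r) = -kT * ln( P_obs(r) / P_ref(r) ), with kT ~ 0.593 kcal/mol at 298 K.
          KECSA's contribution is its particular reference state and the fit of such
          potentials to LJ parameters, neither of which is reproduced here."""
          p_obs = observed_counts / observed_counts.sum()
          p_ref = reference_counts / reference_counts.sum()
          with np.errstate(divide="ignore", invalid="ignore"):
              u = -kT * np.log(p_obs / p_ref)
          return np.nan_to_num(u, nan=0.0, posinf=0.0, neginf=0.0)

      # Toy distance histograms (counts per distance bin) for one pair type.
      obs = np.array([0, 2, 30, 80, 60, 40, 30, 25])
      ref = np.array([5, 20, 40, 60, 60, 55, 50, 45])
      print(np.round(statistical_potential(obs, ref), 2))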

  9. Architecture for Knowledge-Based and Federated Search of Online Clinical Evidence

    PubMed Central

    Walther, Martin; Nguyen, Ken; Lovell, Nigel H

    2005-01-01

    Background It is increasingly difficult for clinicians to keep up-to-date with the rapidly growing biomedical literature. Online evidence retrieval methods are now seen as a core tool to support evidence-based health practice. However, standard search engine technology is not designed to manage the many different types of evidence sources that are available or to handle the very different information needs of various clinical groups, who often work in widely different settings. Objectives The objectives of this paper are (1) to describe the design considerations and system architecture of a wrapper-mediator approach to federated search system design, including the use of knowledge-based, meta-search filters, and (2) to analyze the implications of system design choices on performance measurements. Methods A trial was performed to evaluate the technical performance of a federated evidence retrieval system, which provided access to eight distinct online resources, including e-journals, PubMed, and electronic guidelines. The Quick Clinical system architecture utilized a universal query language to reformulate queries internally and utilized meta-search filters to optimize search strategies across resources. We recruited 227 family physicians from across Australia who used the system to retrieve evidence in a routine clinical setting over a 4-week period. The total search time for a query was recorded, along with the duration of individual queries sent to different online resources. Results Clinicians performed 1662 searches over the trial. The average search duration was 4.9 ± 3.2 s (N = 1662 searches). Mean search duration to the individual sources was between 0.05 s and 4.55 s. Average system time (ie, system overhead) was 0.12 s. Conclusions The relatively small system overhead compared to the average time it takes to perform a search for an individual source shows that the system achieves a good trade-off between performance and reliability. Furthermore, despite
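
    The wrapper-mediator pattern evaluated above can be sketched as one internal query fanned out to per-source wrappers in parallel, with per-source and total times recorded; the wrapper functions, sources and delays below are placeholders, not the Quick Clinical implementation.

      import time
      from concurrent.futures import ThreadPoolExecutor

      def make_wrapper(source, delay):
          """Placeholder wrapper: reformulates the internal query for one source and
          simulates the remote call (real wrappers would hit PubMed, e-journals, ...)."""
          def wrapper(query):
              start = time.perf_counter()
              time.sleep(delay)               # stands in for network + remote search time
              return source, time.perf_counter() - start, f"{source} results for '{query}'"
          return wrapper

      wrappers = [make_wrapper("PubMed", 0.30), make_wrapper("Guidelines", 0.10),
                  make_wrapper("e-journal A", 0.20)]

      t0 = time.perf_counter()
      with ThreadPoolExecutor() as pool:
          results = list(pool.map(lambda w: w("asthma therapy"), wrappers))
      total = time.perf_counter() - t0
      for source, dt, _ in results:
          print(f"{source}: {dt:.2f}s")
      print(f"total search time (overhead + slowest source): {total:.2f}s")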

  10. Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery

    SciTech Connect

    Shiraishi, Satomi; Moore, Kevin L.; Tan, Jun; Olsen, Lindsey A.

    2015-02-15

    Purpose: The objective of this work was to develop a comprehensive knowledge-based methodology for predicting achievable dose–volume histograms (DVHs) and highly precise DVH-based quality metrics (QMs) in stereotactic radiosurgery/radiotherapy (SRS/SRT) plans. Accurate QM estimation can identify suboptimal treatment plans and provide target optimization objectives to standardize and improve treatment planning. Methods: Correlating observed dose as it relates to the geometric relationship of organs-at-risk (OARs) to planning target volumes (PTVs) yields mathematical models to predict achievable DVHs. In SRS, DVH-based QMs such as brain V{sub 10Gy} (volume receiving 10 Gy or more), gradient measure (GM), and conformity index (CI) are used to evaluate plan quality. This study encompasses 223 linear accelerator-based SRS/SRT treatment plans (SRS plans) using volumetric-modulated arc therapy (VMAT), representing 95% of the institution’s VMAT radiosurgery load from the past four and a half years. Unfiltered models that use all available plans for the model training were built for each category with a stratification scheme based on target and OAR characteristics determined emergently through initial modeling process. Model predictive accuracy is measured by the mean and standard deviation of the difference between clinical and predicted QMs, δQM = QM{sub clin} − QM{sub pred}, and a coefficient of determination, R{sup 2}. For categories with a large number of plans, refined models are constructed by automatic elimination of suspected suboptimal plans from the training set. Using the refined model as a presumed achievable standard, potentially suboptimal plans are identified. Predictions of QM improvement are validated via standardized replanning of 20 suspected suboptimal plans based on dosimetric predictions. The significance of the QM improvement is evaluated using the Wilcoxon signed rank test. Results: The most accurate predictions are obtained when plans are
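
    The two accuracy quantities named above can be computed directly from paired clinical and predicted quality metrics, as in this small sketch; the QM values are invented, and δQM = QM_clin − QM_pred follows the abstract's definition.

      import numpy as np

      qm_clin = np.array([3.1, 2.4, 4.0, 3.6, 2.9])   # e.g. gradient measure per plan (invented)
      qm_pred = np.array([3.0, 2.6, 3.8, 3.5, 3.1])   # model predictions (invented)

      delta_qm = qm_clin - qm_pred                     # deltaQM = QM_clin - QM_pred
      ss_res = np.sum(delta_qm ** 2)
      ss_tot = np.sum((qm_clin - qm_clin.mean()) ** 2)
      r2 = 1.0 - ss_res / ss_tot                       # coefficient of determination

      print(round(delta_qm.mean(), 3), round(delta_qm.std(ddof=1), 3), round(r2, 3))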

  11. New Learning Models for the New Knowledge-Based Economy: Professional and Local-Personal Networks as a Source of Knowledge Development in the Multimedia Sector.

    ERIC Educational Resources Information Center

    Tremblay, Diane-Gabrielle

    The role of professional and local-personal networks as a source of knowledge development in the new knowledge-based economy was examined in a 15-month study that focuses on people working in the multimedia industry in Montreal, Quebec. The study focused on the modes of exchange and learning, collaborative work, and management and development of…

  12. An Emerging Knowledge-Based Economy in China? Indicators from OECD Databases. OECD Science, Technology and Industry Working Papers, 2004/4

    ERIC Educational Resources Information Center

    Criscuolo, Chiara; Martin, Ralf

    2004-01-01

    The main objective of this Working Paper is to show a set of indicators on the knowledge-based economy for China, mainly compiled from databases within EAS, although data from databases maintained by other parts of the OECD are included as well. These indicators are put in context by comparison with data for the United States, Japan and the EU (or…

  13. Considering Human Capital Theory in Assessment and Training: Mapping the Gap between Current Skills and the Needs of a Knowledge-Based Economy in Northeast Iowa

    ERIC Educational Resources Information Center

    Mihm-Herold, Wendy

    2010-01-01

    In light of the current economic downturn, thousands of Iowans are unemployed and this is the ideal time to build the skills of the workforce to compete in the knowledge-based economy so businesses and entrepreneurs can compete in a global economy. A tool for assessing the skills and knowledge of dislocated workers and students as well as…

  14. Creating a Knowledge-Based Economy in the United Arab Emirates: Realising the Unfulfilled Potential of Women in the Science, Technology and Engineering Fields

    ERIC Educational Resources Information Center

    Aswad, Noor Ghazal; Vidican, Georgeta; Samulewicz, Diana

    2011-01-01

    As the United Arab Emirates (UAE) moves towards a knowledge-based economy, maximising the participation of the national workforce, especially women, in the transformation process is crucial. Using survey methods and semi-structured interviews, this paper examines the factors that influence women's decisions regarding their degree programme and…

  15. Intelligent personal navigator supported by knowledge-based systems for estimating dead reckoning navigation parameters

    NASA Astrophysics Data System (ADS)

    Moafipoor, Shahram

    Personal navigators (PN) have been studied for about a decade in different fields and applications, such as safety and rescue operations, security and emergency services, and police and military applications. The common goal of all these applications is to provide precise and reliable position, velocity, and heading information of each individual in various environments. In the PN system developed in this dissertation, the underlying assumption is that the system does not require pre-existing infrastructure to enable pedestrian navigation. To facilitate this capability, a multisensor system concept, based on the Global Positioning System (GPS), inertial navigation, barometer, magnetometer, and a human pedometry model has been developed. An important aspect of this design is to use the human body as navigation sensor to facilitate Dead Reckoning (DR) navigation in GPS-challenged environments. The system is designed predominantly for outdoor environments, where occasional loss of GPS lock may happen; however, testing and performance demonstration have been extended to indoor environments. DR navigation is based on a relative-measurement approach, with the key idea of integrating the incremental motion information in the form of step direction (SD) and step length (SL) over time. The foundation of the intelligent navigation system concept proposed here rests in exploiting the human locomotion pattern, as well as change of locomotion in varying environments. In this context, the term intelligent navigation represents the transition from the conventional point-to-point DR to dynamic navigation using the knowledge about the mechanism of the moving person. This approach increasingly relies on integrating knowledge-based systems (KBS) and artificial intelligence (AI) methodologies, including artificial neural networks (ANN) and fuzzy logic (FL). In addition, a general framework of the quality control for the real-time validation of the DR processing is proposed, based on a
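
    The core DR integration described above, accumulating step length along step direction, reduces to a two-line position update; in this minimal sketch the heading convention (degrees clockwise from north) and the (SL, SD) values are assumptions, not the dissertation's sensor-fusion design.

      import math

      def dr_update(position, step_length, step_direction_deg):
          """One dead-reckoning step: advance by step_length (m) along
          step_direction_deg, measured clockwise from north (assumed convention)."""
          east, north = position
          theta = math.radians(step_direction_deg)
          return east + step_length * math.sin(theta), north + step_length * math.cos(theta)

      pos = (0.0, 0.0)
      # (SL, SD) pairs as they might come from the pedometry model and heading sensors.
      for sl, sd in [(0.75, 10.0), (0.78, 12.0), (0.74, 15.0), (0.80, 90.0)]:
          pos = dr_update(pos, sl, sd)
      print(tuple(round(c, 2) for c in pos))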

  16. Computational and human observer image quality evaluation of low dose, knowledge-based CT iterative reconstruction

    SciTech Connect

    Eck, Brendan L.; Fahmi, Rachid; Miao, Jun; Brown, Kevin M.; Zabic, Stanislav; Raihani, Nilgoun; Wilson, David L.

    2015-10-15

    Purpose: Aims in this study are to (1) develop a computational model observer which reliably tracks the detectability of human observers in low dose computed tomography (CT) images reconstructed with knowledge-based iterative reconstruction (IMR™, Philips Healthcare) and filtered back projection (FBP) across a range of independent variables, (2) use the model to evaluate detectability trends across reconstructions and make predictions of human observer detectability, and (3) perform human observer studies based on model predictions to demonstrate applications of the model in CT imaging. Methods: Detectability (d′) was evaluated in phantom studies across a range of conditions. Images were generated using a numerical CT simulator. Trained observers performed 4-alternative forced choice (4-AFC) experiments across dose (1.3, 2.7, 4.0 mGy), pin size (4, 6, 8 mm), contrast (0.3%, 0.5%, 1.0%), and reconstruction (FBP, IMR), at fixed display window. A five-channel Laguerre–Gauss channelized Hotelling observer (CHO) was developed with internal noise added to the decision variable and/or to channel outputs, creating six different internal noise models. Semianalytic internal noise computation was tested against Monte Carlo and used to accelerate internal noise parameter optimization. Model parameters were estimated from all experiments at once using maximum likelihood on the probability correct, P{sub C}. Akaike information criterion (AIC) was used to compare models of different orders. The best model was selected according to AIC and used to predict detectability in blended FBP-IMR images, analyze trends in IMR detectability improvements, and predict dose savings with IMR. Predicted dose savings were compared against 4-AFC study results using physical CT phantom images. Results: Detection in IMR was greater than FBP in all tested conditions. The CHO with internal noise proportional to channel output standard deviations, Model-k4, showed the best trade-off between fit
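
    At its core, a channelized Hotelling observer computes a detectability index from channel outputs of signal-present and signal-absent images, d'^2 = Δv̄ᵀ S⁻¹ Δv̄. The sketch below uses random stand-in channel outputs and a single additive internal-noise variance, a simplification of the six internal-noise models compared in the paper.

      import numpy as np

      def cho_dprime(v_signal, v_noise, internal_noise_var=0.0):
          """Channelized Hotelling observer detectability from channel outputs.
          v_signal, v_noise: (n_images, n_channels) outputs for signal-present and
          signal-absent images (e.g. five Laguerre-Gauss channels). internal_noise_var
          adds a diagonal term, a simplified stand-in for internal-noise models."""
          dv = v_signal.mean(axis=0) - v_noise.mean(axis=0)
          S = 0.5 * (np.cov(v_signal, rowvar=False) + np.cov(v_noise, rowvar=False))
          S += internal_noise_var * np.eye(S.shape[0])
          return float(np.sqrt(dv @ np.linalg.solve(S, dv)))

      rng = np.random.default_rng(0)
      v_absent = rng.normal(0.0, 1.0, size=(200, 5))     # toy channel outputs
      v_present = rng.normal(0.3, 1.0, size=(200, 5))
      print(round(cho_dprime(v_present, v_absent, internal_noise_var=0.5), 2))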

  17. An intelligent, knowledge-based multiple criteria decision making advisor for systems design

    NASA Astrophysics Data System (ADS)

    Li, Yongchang

    of an appropriate decision making method. Furthermore, some DMs may be exclusively using one or two specific methods with which they are familiar or which they trust, not realizing that these may be inappropriate for certain classes of problems, thus yielding erroneous results. These issues reveal that, in order to ensure a good decision, a suitable decision method should be chosen before the decision making process proceeds. The first part of this dissertation proposes an MCDM process supported by an intelligent, knowledge-based advisor system referred to as the Multi-Criteria Interactive Decision-Making Advisor and Synthesis process (MIDAS), which is able to facilitate the selection of the most appropriate decision making method and which provides insight to the user for fulfilling different preferences. The second part of this dissertation presents an autonomous decision making advisor which is capable of dealing with ever-evolving real-time information and making autonomous decisions under uncertain conditions. The advisor encompasses a Markov Decision Process (MDP) formulation which takes uncertainty into account when determining the best action for each system state. (Abstract shortened by UMI.)
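
    An MDP advisor of this kind determines the best action per state; the standard value-iteration sketch below uses an invented 3-state, 2-action example and is not the dissertation's actual formulation.

      import numpy as np

      # Illustrative MDP: P[a, s, s'] transition probabilities for 2 actions over 3 states,
      # R[s, a] immediate rewards (all values invented).
      P = np.array([[[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.3, 0.7]],
                    [[0.5, 0.5, 0.0], [0.0, 0.6, 0.4], [0.0, 0.1, 0.9]]])
      R = np.array([[1.0, 0.0], [0.5, 1.5], [0.0, 2.0]])
      gamma = 0.95

      V = np.zeros(3)
      for _ in range(1000):                             # value iteration to a fixed point
          Q = R + gamma * np.einsum("ast,t->sa", P, V)  # expected value of each (state, action)
          V_new = Q.max(axis=1)
          converged = np.max(np.abs(V_new - V)) < 1e-8
          V = V_new
          if converged:
              break
      policy = Q.argmax(axis=1)                         # best action for each system state
      print(V.round(3), policy)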

  18. LCG Persistency Framework (CORAL, COOL, POOL): Status and Outlook

    SciTech Connect

    Valassi, A.; Clemencic, M.; Dykstra, D.; Frank, M.; Front, D.; Govi, G.; Kalkhof, A.; Loth, A.; Nowak, M.; Pokorski, W.; Salnikov, A.; Schmidt, S.A.; Trentadue, R.; Wache, M.; Xie, Z.; /Princeton U.

    2012-04-19

    The Persistency Framework consists of three software packages (CORAL, COOL and POOL) addressing the data access requirements of the LHC experiments in different areas. It is the result of the collaboration between the CERN IT Department and the three experiments (ATLAS, CMS and LHCb) that use this software to access their data. POOL is a hybrid technology store for C++ objects, metadata catalogs and collections. CORAL is a relational database abstraction layer with an SQL-free API. COOL provides specific software tools and components for the handling of conditions data. This paper reports on the status and outlook of the project and reviews in detail the usage of each package in the three experiments.

  19. STRUTEX: A prototype knowledge-based system for initially configuring a structure to support point loads in two dimensions

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    Only recently have engineers begun making use of Artificial Intelligence (AI) tools in the area of conceptual design. To continue filling this void in the design process, a prototype knowledge-based system, called STRUTEX has been developed to initially configure a structure to support point loads in two dimensions. This prototype was developed for testing the application of AI tools to conceptual design as opposed to being a testbed for new methods for improving structural analysis and optimization. This system combines numerical and symbolic processing by the computer with interactive problem solving aided by the vision of the user. How the system is constructed to interact with the user is described. Of special interest is the information flow between the knowledge base and the data base under control of the algorithmic main program. Examples of computed and refined structures are presented during the explanation of the system.

  20. STRUTEX: A prototype knowledge-based system for initially configuring a structure to support point loads in two dimensions

    NASA Technical Reports Server (NTRS)

    Rogers, James L.; Feyock, Stefan; Sobieszczanski-Sobieski, Jaroslaw

    1988-01-01

    The purpose of this research effort is to investigate the benefits that might be derived from applying artificial intelligence tools in the area of conceptual design. Therefore, the emphasis is on the artificial intelligence aspects of conceptual design rather than structural and optimization aspects. A prototype knowledge-based system, called STRUTEX, was developed to initially configure a structure to support point loads in two dimensions. This system combines numerical and symbolic processing by the computer with interactive problem solving aided by the vision of the user by integrating a knowledge base interface and inference engine, a data base interface, and graphics while keeping the knowledge base and data base files separate. The system writes a file which can be input into a structural synthesis system, which combines structural analysis and optimization.

  1. Knowledge-based Method for Determining the Meaning of Ambiguous Biomedical Terms Using Information Content Measures of Similarity

    PubMed Central

    McInnes, Bridget T.; Pedersen, Ted; Liu, Ying; Melton, Genevieve B.; Pakhomov, Serguei V.

    2011-01-01

    In this paper, we introduce a novel knowledge-based word sense disambiguation method that determines the sense of an ambiguous word in biomedical text using semantic similarity or relatedness measures. These measures quantify the degree of similarity between concepts in the Unified Medical Language System (UMLS). The objective of this work was to develop a method that can disambiguate terms in biomedical text by exploiting similarity information extracted from the UMLS and to evaluate the efficacy of information content-based semantic similarity measures, which augment path-based information with probabilities derived from biomedical corpora. We show that information content-based measures obtain a higher disambiguation accuracy than path-based measures because they weight the path based on where it exists in the taxonomy coupled with the probability of the concepts occurring in a corpus of text. PMID:22195148
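
    As a toy illustration of an information-content measure of the kind evaluated above, the sketch below computes Resnik similarity, the IC of the most informative common ancestor with IC(c) = -log p(c) estimated from corpus counts, over an invented mini-taxonomy rather than the UMLS; disambiguating "cold" near "influenza" then favors the disease sense.

      import math

      # Invented mini-taxonomy (child -> parent) and corpus concept counts.
      parent = {"cold (disease)": "respiratory disorder", "influenza": "respiratory disorder",
                "respiratory disorder": "disease", "cold (temperature)": "physical property",
                "disease": "root", "physical property": "root"}
      counts = {"cold (disease)": 40, "influenza": 60, "cold (temperature)": 30,
                "respiratory disorder": 0, "disease": 0, "physical property": 0, "root": 0}

      def ancestors(c):
          out = [c]
          while c in parent:
              c = parent[c]
              out.append(c)
          return out

      def ic(c, total):
          # A concept's frequency includes the corpus occurrences of all its descendants.
          freq = sum(n for k, n in counts.items() if c in ancestors(k))
          return -math.log(freq / total) if freq else 0.0

      def resnik(c1, c2):
          total = sum(counts.values())
          common = set(ancestors(c1)) & set(ancestors(c2))
          return max(ic(c, total) for c in common)

      # The disease sense of "cold" is more similar to "influenza" than the temperature sense.
      print(resnik("cold (disease)", "influenza"), resnik("cold (temperature)", "influenza"))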

  2. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks

    PubMed Central

    De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results. PMID:27382616

  3. Prediction of Slot Shape and Slot Size for Improving the Performance of Microstrip Antennas Using Knowledge-Based Neural Networks.

    PubMed

    Khan, Taimoor; De, Asok

    2014-01-01

    In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural networks model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporated with prior knowledge can be used for predicting response in extrapolation region beyond the training patterns region. For validation, a prototype is also fabricated and its performance parameters are measured. A very good agreement is attained between measured, simulated, and predicted results. PMID:27382616

  4. Automated integration of external databases: a knowledge-based approach to enhancing rule-based expert systems.

    PubMed Central

    Berman, L.; Cullen, M. R.; Miller, P. L.

    1992-01-01

    Expert system applications in the biomedical domain have long been hampered by the difficulty inherent in maintaining and extending large knowledge bases. We have developed a knowledge-based method for automatically augmenting such knowledge bases. The method consists of automatically integrating data contained in commercially available, external, on-line databases with data contained in an expert system's knowledge base. We have built a prototype system, named DBX, using this technique to augment an expert system's knowledge base as a decision support aid and as a bibliographic retrieval tool. In this paper, we describe this prototype system in detail, illustrate its use and discuss the lessons we have learned in its implementation. PMID:1482872

  5. An approach to knowledge engineering to support knowledge-based simulation of payload ground processing at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Mcmanus, Shawn; Mcdaniel, Michael

    1989-01-01

    Planning for processing payloads has always been difficult and time-consuming. With the advent of Space Station Freedom and its capability to support a myriad of complex payloads, the planning to support this ground processing maze involves thousands of man-hours of often tedious data manipulation. To provide the capability to analyze various processing schedules, an object-oriented knowledge-based simulation environment called the Advanced Generic Accommodations Planning Environment (AGAPE) is being developed. With the baseline system nearly complete, the emphasis in this paper is directed toward rule definition and its relation to model development and simulation. The focus is specifically on the methodologies implemented during knowledge acquisition, analysis, and representation within the AGAPE rule structure. A model is provided to illustrate the concepts presented. The approach demonstrates a framework for AGAPE rule development to assist expert system development.

  6. Ontology Language to Support Description of Experiment Control System Semantics, Collaborative Knowledge-Base Design and Ontology Reuse

    SciTech Connect

    Vardan Gyurjyan, D Abbott, G Heyes, E Jastrzembski, B Moffit, C Timmer, E Wolin

    2009-10-01

    In this paper we discuss the control domain specific ontology that is built on top of the domain-neutral Resource Definition Framework (RDF). Specifically, we will discuss the relevant set of ontology concepts along with the relationships among them in order to describe experiment control components and generic event-based state machines. Control Oriented Ontology Language (COOL) is a meta-data modeling language that provides generic means for representation of physics experiment control processes and components, and their relationships, rules and axioms. It provides a semantic reference frame that is useful for automating the communication of information for configuration, deployment and operation. COOL has been successfully used to develop a complete and dynamic knowledge-base for experiment control systems, developed using the AFECS framework.
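
    COOL's own vocabulary is not reproduced here, but the general idea of describing control components and state-machine transitions as RDF triples can be sketched with rdflib, using invented component names and a placeholder namespace:

      from rdflib import Graph, Namespace, Literal, RDF

      EX = Namespace("http://example.org/control#")   # placeholder, not COOL's namespace
      g = Graph()
      g.bind("ex", EX)

      # Describe an (invented) experiment-control component and one state-machine transition.
      g.add((EX.hvCrate1, RDF.type, EX.ControlComponent))
      g.add((EX.hvCrate1, EX.hasState, EX.Configured))
      g.add((EX.hvCrate1, EX.onEvent, Literal("go")))
      g.add((EX.hvCrate1, EX.transitionsTo, EX.Active))

      print(g.serialize(format="turtle"))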

  7. MeRy-B: a web knowledgebase for the storage, visualization, analysis and annotation of plant NMR metabolomic profiles

    PubMed Central

    2011-01-01

    Background Improvements in the techniques for metabolomics analyses and growing interest in metabolomic approaches are resulting in the generation of increasing numbers of metabolomic profiles. Platforms are required for profile management, as a function of experimental design, and for metabolite identification, to facilitate the mining of the corresponding data. Various databases have been created, including organism-specific knowledgebases and analytical technique-specific spectral databases. However, there is currently no platform meeting the requirements for both profile management and metabolite identification for nuclear magnetic resonance (NMR) experiments. Description MeRy-B, the first platform for plant 1H-NMR metabolomic profiles, is designed (i) to provide a knowledgebase of curated plant profiles and metabolites obtained by NMR, together with the corresponding experimental and analytical metadata, (ii) for queries and visualization of the data, (iii) to discriminate between profiles with spectrum visualization tools and statistical analysis, (iv) to facilitate compound identification. It contains lists of plant metabolites and unknown compounds, with information about experimental conditions, the factors studied and metabolite concentrations for several plant species, compiled from more than one thousand annotated NMR profiles for various organs or tissues. Conclusion MeRy-B manages all the data generated by NMR-based plant metabolomics experiments, from description of the biological source to identification of the metabolites and determinations of their concentrations. It is the first database allowing the display and overlay of NMR metabolomic profiles selected through queries on data or metadata. MeRy-B is available from http://www.cbib.u-bordeaux2.fr/MERYB/index.php. PMID:21668943

  8. Experiences in improving the state of the practice in verification and validation of knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Culbert, Chris; French, Scott W.; Hamilton, David

    1994-01-01

    Knowledge-based systems (KBS's) are in general use in a wide variety of domains, both commercial and government. As reliance on these types of systems grows, the need to assess their quality and validity reaches critical importance. As with any software, the reliability of a KBS can be directly attributed to the application of disciplined programming and testing practices throughout the development life-cycle. However, there are some essential differences between conventional software and KBSs, both in construction and use. The identification of these differences affect the verification and validation (V&V) process and the development of techniques to handle them. The recognition of these differences is the basis of considerable on-going research in this field. For the past three years IBM (Federal Systems Company - Houston) and the Software Technology Branch (STB) of NASA/Johnson Space Center have been working to improve the 'state of the practice' in V&V of Knowledge-based systems. This work was motivated by the need to maintain NASA's ability to produce high quality software while taking advantage of new KBS technology. To date, the primary accomplishment has been the development and teaching of a four-day workshop on KBS V&V. With the hope of improving the impact of these workshops, we also worked directly with NASA KBS projects to employ concepts taught in the workshop. This paper describes two projects that were part of this effort. In addition to describing each project, this paper describes problems encountered and solutions proposed in each case, with particular emphasis on implications for transferring KBS V&V technology beyond the NASA domain.

  9. Getting libraries involved in industry-university-government collaboration : Libraries should support inauguration of business and lead SME into a knowledge-based society : What Toshiaki Takeuchi does as Business Library Association's President

    NASA Astrophysics Data System (ADS)

    Morita, Utako

    Getting libraries involved in industry-university-government collaboration : Libraries should support inauguration of business and lead SME into a knowledge-based society : What Toshiaki Takeuchi does as Business Library Association's President

  10. Applications of artificial intelligence 1993: Knowledge-based systems in aerospace and industry; Proceedings of the Meeting, Orlando, FL, Apr. 13-15, 1993

    NASA Technical Reports Server (NTRS)

    Fayyad, Usama M. (Editor); Uthurusamy, Ramasamy (Editor)

    1993-01-01

    The present volume on applications of artificial intelligence with regard to knowledge-based systems in aerospace and industry discusses machine learning and clustering, expert systems and optimization techniques, monitoring and diagnosis, and automated design and expert systems. Attention is given to the integration of AI reasoning systems and hardware description languages, case-based reasoning, knowledge retrieval, and training systems, and scheduling and planning. Topics addressed include the preprocessing of remotely sensed data for efficient analysis and classification, autonomous agents as air combat simulation adversaries, intelligent data presentation for real-time spacecraft monitoring, and an integrated reasoner for diagnosis in satellite control. Also discussed are a knowledge-based system for the design of heat exchangers, reuse of design information for model-based diagnosis, automatic compilation of expert systems, and a case-based approach to handling aircraft malfunctions.

  11. Research resource: Gonadotropin-releasing hormone receptor-mediated signaling network in LbetaT2 cells: a pathway-based web-accessible knowledgebase.

    PubMed

    Fink, Marc Y; Pincas, Hanna; Choi, Soon Gang; Nudelman, German; Sealfon, Stuart C

    2010-09-01

    The GnRH receptor (GnRHR), expressed at the cell surface of the anterior pituitary gonadotrope, is critical for normal secretion of gonadotropins LH and FSH, pubertal development, and reproduction. The signaling network downstream of the GnRHR and the molecular bases of the regulation of gonadotropin expression have been the subject of intense research. The murine LbetaT2 cell line represents a mature gonadotrope and therefore is an important model for the study of GnRHR-signaling pathways and modulation of the gonadotrope cell by physiological regulators. In order to facilitate access to the information contained in this complex and evolving literature, we have developed a pathway-based knowledgebase that is web hosted. At present, using 106 relevant primary publications, we curated a comprehensive knowledgebase of the GnRHR signaling in the LbetaT2 cell in the form of a process diagram. Positive and negative controls of gonadotropin gene expression, which included GnRH itself, hypothalamic factors, gonadal steroids and peptides, as well as other hormones, were illustrated. The knowledgebase contains 187 entities and 206 reactions. It was assembled using CellDesigner software, which provides an annotated graphic representation of interactions, stored in Systems Biology Mark-up Language. We then utilized Biological Pathway Publisher, a software suite previously developed in our laboratory, to host the knowledgebase in a web-accessible format as a public resource. In addition, the network entities were linked to a public wiki, providing a forum for discussion, updating, and error correction. The GnRHR-signaling network is openly accessible at http://tsb.mssm.edu/pathwayPublisher/GnRHR_Pathway/GnRHR_Pathway_index.html. PMID:20592162

  12. A knowledge-based taxonomy of critical factors for adopting electronic health record systems by physicians: a systematic literature review

    PubMed Central

    2010-01-01

    Background The health care sector is an area of social and economic interest in several countries; therefore, considerable effort has been devoted to the use of electronic health records. Nevertheless, there is evidence suggesting that these systems have not been adopted as expected, and although there are some proposals to support their adoption, the proposed support does not rely on information and communication technology, which could provide automated support tools. The aim of this study is to identify the critical adoption factors for electronic health records by physicians and to use them as a guide to support their adoption process automatically. Methods This paper presents, based on the PRISMA statement, a systematic literature review in electronic databases with adoption studies of electronic health records published in English. Software applications that manage and process the data in the electronic health record have been considered, i.e., computerized physician prescription, electronic medical records, and electronic capture of clinical data. Our review was conducted with the purpose of obtaining a taxonomy of physicians' main barriers to adopting electronic health records that can be addressed by means of information and communication technology; in particular through the information technology roles of the knowledge management processes. This leads to the question we address in this work: "What are the critical adoption factors of electronic health records that can be supported by information and communication technology?". Reports from eight databases covering electronic health records adoption studies in the medical domain, in particular those focused on physicians, were analyzed. Results The review identifies two main issues: 1) a knowledge-based classification of critical factors for adopting electronic health records by physicians; and 2) the definition of a base for the design of a conceptual framework for supporting the

  13. Structure for a knowledge-based system to estimate Soviet tactics in the air-land battle. Master's thesis

    SciTech Connect

    Fletcher, A.M.

    1988-03-01

    The purpose of this thesis was to build a prototype decision aid that can use knowledge about Soviet military doctrine and tactics to infer when, where, and how the Soviet Army plans to attack NATO defenses given intelligence data about Soviet (Red) military units, terrain data, and the positions of the NATO (Blue) defenses. Issues are raised that must be resolved before such a decision aid, which is part of the Rapid Application of Air Power concept, can become operational. First examined is the need to shorten the C2 decision cycle in order for the ATOC staff to keep pace with the tempo of modern warfare. The Rapid Application of Air Power is a concept that includes automating various steps in the decision cycle to allow air power to be applied proactively to stop Soviet forces before they obtain critical objectives. A structure is presented for automating the second step in the decision cycle, assessing and clarifying the situation, through a knowledge-based decision aid for interpreting intelligence data from the perspective of Soviet (Red) doctrine and estimating future Red tactical objectives and maneuvers.

  14. Knowledge-based multisensoral and multitemporal approach for land use classification in rugged terrain using Landsat TM and ERS SAR

    NASA Astrophysics Data System (ADS)

    Stolz, Roswitha; Strasser, Gertrud; Mauser, Wolfram

    1999-12-01

    Land use has an important impact on the climatic and hydrological cycle. For modeling this impact, detailed knowledge of the land use and land cover pattern is necessary. Optical remote sensing data are good information sources to derive land use classifications for large areas. However, because commonly used classification algorithms rely solely on spectral information, misclassifications often occur when different classes show similar spectral signatures. This is especially true for areas where a high rate of cloudiness reduces the availability of data. These are often heterogeneous and rugged areas such as mountains and their forelands. Advanced knowledge-based classification approaches which integrate non-spectral geographical ancillary data (i.e. climatic and terrain data) can improve the classification accuracy drastically. Still, the method fails if spatially distributed ancillary data are not available or show no influence on the land use structure. The major advantage of the approach described in this paper is that it uses only data derived from remotely sensed images and is therefore independent of map sources. The lack of multitemporal satellite data is overcome by the synergistic use of ERS radar data and LANDSAT-TM optical data.

  15. APL: An angle probability list to improve knowledge-based metaheuristics for the three-dimensional protein structure prediction.

    PubMed

    Borguesan, Bruno; Barbachan e Silva, Mariel; Grisci, Bruno; Inostroza-Ponta, Mario; Dorn, Márcio

    2015-12-01

    Tertiary protein structure prediction is one of the most challenging problems in structural bioinformatics. Despite the advances in algorithm development and computational strategies, predicting the folded structure of a protein only from its amino acid sequence remains an unsolved problem. We present a new computational approach to predict the native-like three-dimensional structure of proteins. Conformational preferences of amino acid residues and secondary structure information were obtained from protein templates stored in the Protein Data Bank and represented as an Angle Probability List. Two knowledge-based prediction methods based on Genetic Algorithms and Particle Swarm Optimization were developed using this information. The proposed method has been tested with twenty-six case studies selected to validate our approach with different classes of proteins and folding patterns. Stereochemical and structural analyses were performed for each predicted three-dimensional structure. Results achieved suggest that the Angle Probability List can improve the effectiveness of metaheuristics used to predict the three-dimensional structure of protein molecules by reducing the conformational search space. PMID:26495908
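
    The abstract does not give implementation details, but the core idea of an Angle Probability List can be illustrated with a small sketch: histograms of (phi, psi) torsion angles per residue type and secondary structure state are built from template observations and then sampled to seed candidate conformations for a metaheuristic. The data, bin size and helper names below are illustrative assumptions, not the authors' code.

```python
import random
from collections import defaultdict

def build_apl(observations, bin_size=10):
    """Build a toy Angle Probability List: for each (residue, secondary
    structure) key, count how often each (phi, psi) bin occurs in the
    template observations."""
    apl = defaultdict(lambda: defaultdict(int))
    for residue, ss, phi, psi in observations:
        bin_key = (int(phi // bin_size) * bin_size, int(psi // bin_size) * bin_size)
        apl[(residue, ss)][bin_key] += 1
    return apl

def sample_angles(apl, residue, ss, bin_size=10):
    """Sample a (phi, psi) pair for one residue, weighted by the observed
    bin frequencies; falls back to a uniform draw if the key is unseen."""
    bins = apl.get((residue, ss))
    if not bins:
        return random.uniform(-180, 180), random.uniform(-180, 180)
    keys, weights = zip(*bins.items())
    phi0, psi0 = random.choices(keys, weights=weights, k=1)[0]
    # Jitter uniformly inside the chosen bin.
    return phi0 + random.uniform(0, bin_size), psi0 + random.uniform(0, bin_size)

# Hypothetical template observations: (residue, secondary structure, phi, psi).
obs = [("ALA", "H", -63.0, -42.0), ("ALA", "H", -60.5, -45.1), ("GLY", "C", 80.0, 10.0)]
apl = build_apl(obs)
print(sample_angles(apl, "ALA", "H"))
```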

  16. Data acquisition for a real time fault monitoring and diagnosis knowledge-based system for space power system

    NASA Technical Reports Server (NTRS)

    Wilhite, Larry D.; Lee, S. C.; Lollar, Louis F.

    1989-01-01

    The design and implementation of the real-time data acquisition and processing system employed in the AMPERES project is described, including effective data structures for efficient storage and flexible manipulation of the data by the knowledge-based system (KBS), the interprocess communication mechanism required between the data acquisition system and the KBS, and the appropriate data acquisition protocols for collecting data from the sensors. Sensor data are categorized as critical or noncritical data on the basis of the inherent frequencies of the signals and the diagnostic requirements reflected in their values. The critical data set contains 30 analog values and 42 digital values and is collected every 10 ms. The noncritical data set contains 240 analog values and is collected every second. The collected critical and noncritical data are stored in separate circular buffers. Buffers are created in shared memory to enable other processes, i.e., the fault monitoring and diagnosis process and the user interface process, to freely access the data sets.
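
    As a rough illustration of the buffering scheme described above (two circular buffers sized for different sampling rates and shared with the diagnosis and user-interface processes), here is a minimal single-process sketch. The capacities, reader callbacks and sample layout are assumptions for illustration; the real AMPERES system uses shared memory and interprocess communication, which are omitted here.

```python
import random
from collections import deque

class CircularBuffer:
    """Fixed-capacity buffer that overwrites the oldest sample once full,
    mimicking the circular buffers used for the sensor snapshots."""
    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)

    def append(self, sample):
        self._buf.append(sample)

    def latest(self):
        return self._buf[-1] if self._buf else None

# Hypothetical capacities; the data-set sizes follow the abstract
# (30 analog + 42 digital values every 10 ms, 240 analog values every second).
critical = CircularBuffer(capacity=1000)    # roughly 10 s of history at 10 ms
noncritical = CircularBuffer(capacity=60)   # roughly 1 min of history at 1 s

def acquire_critical(read_analog, read_digital, t):
    # In the real system this runs every 10 ms.
    critical.append({"t": t, "analog": read_analog(30), "digital": read_digital(42)})

def acquire_noncritical(read_analog, t):
    # In the real system this runs every second.
    noncritical.append({"t": t, "analog": read_analog(240)})

# Dummy sensor readers standing in for the data acquisition hardware.
random.seed(0)
read_analog = lambda n: [random.random() for _ in range(n)]
read_digital = lambda n: [random.randint(0, 1) for _ in range(n)]
acquire_critical(read_analog, read_digital, t=0.0)
acquire_noncritical(read_analog, t=0.0)
print(len(critical.latest()["analog"]), len(noncritical.latest()["analog"]))
```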

  17. PCOSKB: A KnowledgeBase on genes, diseases, ontology terms and biochemical pathways associated with PolyCystic Ovary Syndrome.

    PubMed

    Joseph, Shaini; Barai, Ram Shankar; Bhujbalrao, Rasika; Idicula-Thomas, Susan

    2016-01-01

    Polycystic ovary syndrome (PCOS) is one of the major causes of female subfertility worldwide and ≈7-10% of women of reproductive age are affected by it. The affected individuals exhibit varying types and levels of comorbid conditions, along with the classical PCOS symptoms. Extensive studies on PCOS across diverse ethnic populations have resulted in a plethora of information on dysregulated genes, gene polymorphisms and diseases linked to PCOS. However, no effort has been made to collate and link these data. Our group, for the first time, has compiled PCOS-related information available through scientific literature; cross-linked it with molecular, biochemical and clinical databases and presented it as a user-friendly, web-based online knowledgebase for the benefit of the scientific and clinical community. Manually curated information on associated genes, single nucleotide polymorphisms, diseases, gene ontology terms and pathways along with supporting reference literature has been collated and included in PCOSKB (http://pcoskb.bicnirrh.res.in). PMID:26578565

  18. Knowledge-based expert systems and a proof-of-concept case study for multiple sequence alignment construction and analysis

    PubMed Central

    Aniba, Mohamed Radhouene; Siguenza, Sophie; Friedrich, Anne; Plewniak, Frédéric; Poch, Olivier; Marchler-Bauer, Aron

    2009-01-01

    The traditional approach to bioinformatics analyses relies on independent task-specific services and applications, using different input and output formats, often idiosyncratic, and frequently not designed to inter-operate. In general, such analyses were performed by experts who manually verified the results obtained at each step in the process. Today, the amount of bioinformatics information continuously being produced means that handling the various applications used to study this information presents a major data management and analysis challenge to researchers. It is now impossible to manually analyse all this information and new approaches are needed that are capable of processing the large-scale heterogeneous data in order to extract the pertinent information. We review the recent use of integrated expert systems aimed at providing more efficient knowledge extraction for bioinformatics research. A general methodology for building knowledge-based expert systems is described, focusing on the unstructured information management architecture, UIMA, which provides facilities for both data and process management. A case study involving a multiple alignment expert system prototype called AlexSys is also presented. PMID:18971242

  19. AlzPlatform: An Alzheimer’s Disease Domain-Specific Chemogenomics Knowledgebase for Polypharmacology and Target Identification Research

    PubMed Central

    2015-01-01

    Alzheimer’s disease (AD) is one of the most complicated progressive neurodegeneration diseases that involve many genes, proteins, and their complex interactions. No effective medicines or treatments are available yet to stop or reverse the progression of the disease due to its polygenic nature. To facilitate discovery of new AD drugs and better understand the AD neurosignaling pathways involved, we have constructed an Alzheimer’s disease domain-specific chemogenomics knowledgebase, AlzPlatform (www.cbligand.org/AD/) with cloud computing and sourcing functions. AlzPlatform is implemented with powerful computational algorithms, including our established TargetHunter, HTDocking, and BBB Predictor for target identification and polypharmacology analysis for AD research. The platform has assembled various AD-related chemogenomics data records, including 928 genes and 320 proteins related to AD, 194 AD drugs approved or in clinical trials, and 405 188 chemicals associated with 1 023 137 records of reported bioactivities from 38 284 corresponding bioassays and 10 050 references. Furthermore, we have demonstrated the application of the AlzPlatform in three case studies for identification of multitargets and polypharmacology analysis of FDA-approved drugs and also for screening and prediction of new AD active small chemical molecules and potential novel AD drug targets by our established TargetHunter and/or HTDocking programs. The predictions were confirmed by reported bioactivity data and our in vitro experimental validation. Overall, AlzPlatform will enrich our knowledge for AD target identification, drug discovery, and polypharmacology analyses and, also, facilitate the chemogenomics data sharing and information exchange/communications in aid of new anti-AD drug discovery and development. PMID:24597646

  20. Knowledge-Based Personal Health System to empower outpatients of diabetes mellitus by means of P4 Medicine.

    PubMed

    Bresó, Adrián; Sáez, Carlos; Vicente, Javier; Larrinaga, Félix; Robles, Montserrat; García-Gómez, Juan Miguel

    2015-01-01

    Diabetes Mellitus (DM) affects hundreds of millions of people worldwide and it imposes a large economic burden on healthcare systems. We present a web-based patient empowerment system (PHSP4) that ensures continuous monitoring and assessment of the health state of patients with DM (type I and II). PHSP4 is a Knowledge-Based Personal Health System (PHS) which follows the trend of P4 Medicine (Personalized, Predictive, Preventive, and Participative). It provides messages to outpatients and clinicians about the achievement of objectives, follow-up, and treatments adjusted to the patient condition. Additionally, it calculates a four-component risk vector of the pathologies associated with DM: Nephropathy, Diabetic retinopathy, Diabetic foot, and Cardiovascular event. The core of the system is a Rule-Based System whose Knowledge Base is composed of a set of rules implementing the recommendations of the American Diabetes Association (ADA) (American Diabetes Association: http://www.diabetes.org/ ) clinical guideline. The PHSP4 is designed to be standardized and to facilitate its interoperability by means of terminologies (SNOMED-CT [The International Health Terminology Standards Development Organization: http://www.ihtsdo.org/snomed-ct/ ] and UCUM [The Unified Code for Units of Measure: http://unitsofmeasure.org/ ]) and standardized clinical documents (HL7 CDA R2 [Health Level Seven International: http://www.hl7.org/index.cfm ]) for managing the Electronic Health Record (EHR). We have evaluated the functionality of the system and its users' acceptance using simulated and real data, and a questionnaire based on the Technology Acceptance Model (TAM) methodology. Finally, results show the reliability of the system and the high acceptance of clinicians. PMID:25417090
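
    To make the rule-based core more concrete, the sketch below shows the general shape of such a system: declarative rules over patient observations that emit follow-up messages when their conditions fire. The rules and thresholds shown are placeholders for illustration only; they are not the ADA guideline rules implemented in PHSP4.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    condition: Callable[[Dict[str, float]], bool]
    message: str

def evaluate(rules: List[Rule], observations: Dict[str, float]) -> List[str]:
    """Fire every rule whose condition holds and collect its message."""
    return [r.message for r in rules if r.condition(observations)]

# Hypothetical illustrative rules; the thresholds are placeholders, not
# the clinical guideline values used by PHSP4.
rules = [
    Rule("hba1c_followup",
         lambda o: o.get("hba1c_percent", 0) > 7.0,
         "HbA1c above target: schedule a follow-up with the clinician."),
    Rule("bp_check",
         lambda o: o.get("systolic_bp_mmhg", 0) > 140,
         "Elevated blood pressure: recommend home monitoring."),
]

print(evaluate(rules, {"hba1c_percent": 7.8, "systolic_bp_mmhg": 128}))
```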

  1. MitProNet: A Knowledgebase and Analysis Platform of Proteome, Interactome and Diseases for Mammalian Mitochondria

    PubMed Central

    Mao, Song; Chai, Xiaoqiang; Hu, Yuling; Hou, Xugang; Tang, Yiheng; Bi, Cheng; Li, Xiao

    2014-01-01

    The mitochondrion plays a central role in diverse biological processes in most eukaryotes, and its dysfunctions are critically involved in a large number of diseases and the aging process. A systematic identification of mitochondrial proteomes and characterization of functional linkages among mitochondrial proteins are fundamental in understanding the mechanisms underlying biological functions and human diseases associated with mitochondria. Here we present a database, MitProNet, which provides a comprehensive knowledgebase for the mitochondrial proteome, interactome and human diseases. First, an inventory of mammalian mitochondrial proteins was compiled by widely collecting proteomic datasets, and the proteins were classified by machine learning to achieve a high-confidence list of mitochondrial proteins. The current version of MitProNet covers 1124 high-confidence proteins, and the remainder were further classified as middle- or low-confidence. An organelle-specific network of functional linkages among mitochondrial proteins was then generated by integrating genomic features encoded by a wide range of datasets including genomic context, gene expression profiles, protein-protein interactions, functional similarity and metabolic pathways. The functional-linkage network should be a valuable resource for the study of biological functions of mitochondrial proteins and human mitochondrial diseases. Furthermore, we utilized the network to predict candidate genes for mitochondrial diseases using prioritization algorithms. All proteins, functional linkages and disease candidate genes in MitProNet were annotated according to the information collected from their original sources including GO, GEO, OMIM, KEGG, MIPS, HPRD and so on. MitProNet features a user-friendly graphic visualization interface to present functional analysis of linkage networks. As an up-to-date database and analysis platform, MitProNet should be particularly helpful in comprehensive studies of complicated

  2. Materials Characterization at Utah State University: Facilities and Knowledge-base of Electronic Properties of Materials Applicable to Spacecraft Charging

    NASA Technical Reports Server (NTRS)

    Dennison, J. R.; Thomson, C. D.; Kite, J.; Zavyalov, V.; Corbridge, Jodie

    2004-01-01

    In an effort to improve the reliability and versatility of spacecraft charging models designed to assist spacecraft designers in accommodating and mitigating the harmful effects of charging on spacecraft, the NASA Space Environments and Effects (SEE) Program has funded development of facilities at Utah State University for the measurement of the electronic properties of both conducting and insulating spacecraft materials. We present here an overview of our instrumentation and capabilities, which are particularly well suited to study electron emission as related to spacecraft charging. These measurements include electron-induced secondary and backscattered yields, spectra, and angular resolved measurements as a function of incident energy, species and angle, plus investigations of ion-induced electron yields, photoelectron yields, sample charging and dielectric breakdown. Extensive surface science characterization capabilities are also available to fully characterize the samples in situ. Our measurements for a wide array of conducting and insulating spacecraft materials have been incorporated into the SEE Charge Collector Knowledge-base as a Database of Electronic Properties of Materials Applicable to Spacecraft Charging. This Database provides an extensive compilation of electronic properties, together with parameterization of these properties in a format that can be easily used with existing spacecraft charging engineering tools and with next generation plasma, charging, and radiation models. Tabulated properties in the Database include: electron-induced secondary electron yield, backscattered yield and emitted electron spectra; He, Ar and Xe ion-induced electron yields and emitted electron spectra; photoyield and solar emittance spectra; and materials characterization including reflectivity, dielectric constant, resistivity, arcing, optical microscopy images, scanning electron micrographs, scanning tunneling microscopy images, and Auger electron spectra. Further

  3. Sbexpert users guide (version 1.0): A knowledge-based decision-support system for spruce beetle management. Forest Service general technical report

    SciTech Connect

    Reynolds, K.M.; Holsten, E.H.; Werner, R.A.

    1995-03-01

    SBexpert version 1.0 is a knowledge-based decision-support system for management of spruce beetle developed for use in Microsoft Windows. The users guide provides detailed instructions on the use of all SBexpert features. SBexpert has four main subprograms: introduction, analysis, textbook, and literature. The introduction is the first of the five subtopics in the SBexpert help system. The analysis topic is an advisory system for spruce beetle management that provides recommendations for reducing spruce beetle hazard and risk to spruce stands and is the main analytical topic in SBexpert. The textbook and literature topics provide complementary decision support for analysis.

  4. A knowledge-based multimedia telecare system to improve the provision of formal and informal care for the elderly and disabled.

    PubMed

    Wallace, J G; Chambers, M G; Hobson, R A

    2001-01-01

    We have developed a knowledge-based multimedia telecare system, based on a multimedia PC connected by ISDN at 128 kbit/s. The user display is a television. Multimedia material is accessed through a browser-based interface. A remote-control handset is used as the main means of interaction, to ensure ease of use and overcome any initial reservations resulting from 'technophobia' on the part of the informal carer. The system was used in 13 family homes and four professional sites in Northern Ireland. The evaluations produced positive comments from the informal carers. There are plans to expand the use of the system. PMID:11576492

  5. A knowledge-based method for reducing attenuation artefacts caused by cardiac appliances in myocardial PET/CT

    NASA Astrophysics Data System (ADS)

    Hamill, James J.; Brunken, Richard C.; Bybel, Bohdan; Di Filippo, Frank P.; Faul, David D.

    2006-06-01

    Attenuation artefacts due to implanted cardiac defibrillator leads have previously been shown to adversely impact cardiac PET/CT imaging. In this study, the severity of the problem is characterized, and an image-based method is described which reduces the resulting artefact in PET. Automatic implantable cardioverter defibrillator (AICD) leads cause a moving-metal artefact in the CT sections from which the PET attenuation correction factors (ACFs) are derived. Fluoroscopic cine images were measured to demonstrate that the defibrillator's highly attenuating distal shocking coil moves rhythmically across distances on the order of 1 cm. Rhythmic motion of this magnitude was created in a phantom with a moving defibrillator lead. A CT study of the phantom showed that the artefact contained regions of incorrect, very high CT values and adjacent regions of incorrect, very low CT values. The study also showed that motion made the artefact more severe. A knowledge-based metal artefact reduction method (MAR) is described that reduces the magnitude of the error in the CT images, without use of the corrupted sinograms. The method modifies the corrupted image through a sequence of artefact detection procedures, morphological operations, adjustments of CT values and three-dimensional filtering. The method treats bone the same as metal. The artefact reduction method is shown to run in a few seconds, and is validated by applying it to a series of phantom studies in which reconstructed PET tracer distribution values are wrong by as much as 60% in regions near the CT artefact when MAR is not applied, but the errors are reduced to about 10% of expected values when MAR is applied. MAR changes PET image values by a few per cent in regions not close to the artefact. The changes can be larger in the vicinity of bone. In patient studies, the PET reconstruction without MAR sometimes results in anomalously high values in the infero-septal wall. Clinical performance of MAR is assessed by two
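
    The abstract outlines the general recipe of the knowledge-based MAR method (detect artefact regions, apply morphological operations, adjust CT values, filter in three dimensions). The toy sketch below follows that recipe on a synthetic volume using scipy.ndimage; the thresholds, replacement value and filter size are illustrative assumptions and do not reproduce the authors' validated algorithm.

```python
import numpy as np
from scipy import ndimage

def reduce_metal_artefact(ct_hu, low=-500.0, high=2000.0, soft_tissue=40.0):
    """Toy image-based artefact reduction: flag voxels with implausibly
    high or low HU values, grow the mask slightly, and replace the flagged
    voxels with a smoothed soft-tissue estimate."""
    ct = ct_hu.astype(float)
    mask = (ct > high) | (ct < low)                  # crude artefact detection
    mask = ndimage.binary_dilation(mask, iterations=2)
    repaired = ct.copy()
    repaired[mask] = soft_tissue                     # crude value adjustment
    return ndimage.median_filter(repaired, size=3)   # 3D smoothing

# Synthetic 3D volume with a bright streak standing in for a shocking coil.
vol = np.full((16, 16, 16), 40.0)
vol[8, :, 8] = 3000.0
print(reduce_metal_artefact(vol)[8, 8, 8])
```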

  6. The use of knowledge-based Genetic Algorithm for starting time optimisation in a lot-bucket MRP

    NASA Astrophysics Data System (ADS)

    Ridwan, Muhammad; Purnomo, Andi

    2016-01-01

    In production planning, Material Requirement Planning (MRP) is usually developed based on a time-bucket system, in which a period in the MRP represents time, usually a week. MRP has been successfully implemented in Make To Stock (MTS) manufacturing, where production activity must be started before customer demand is received. However, to be implemented successfully in Make To Order (MTO) manufacturing, the conventional MRP must be modified to bring it in line with the real situation. In MTO manufacturing, the delivery schedule to customers is strictly defined and must be fulfilled in order to increase customer satisfaction. On the other hand, the company prefers to keep a constant number of workers; hence, the production lot size should be constant as well. Since a bucket in the conventional MRP system represents time, usually a week, a strict delivery schedule cannot be accommodated. Fortunately, there is a modified time-bucket MRP system, called the lot-bucket MRP system, proposed by Casimir in 1999. In the lot-bucket MRP system, a bucket represents a lot, and the lot size is preferably constant. The time to finish each lot can vary, depending on the lot's due date. The starting time of a lot must be determined so that every lot has a reasonable production time. So far, there is no formal method to determine optimum starting times in the lot-bucket MRP system. A trial-and-error process is usually used, but it sometimes leaves several lots with very short production times, making the lot-bucket MRP infeasible to execute. This paper presents the use of a Genetic Algorithm (GA) for optimisation of starting times in a lot-bucket MRP system. Even though the GA is well known as a powerful search algorithm, improvement is still required to increase the likelihood of the GA finding an optimum solution in a shorter time. A knowledge-based system has been embedded in the proposed GA as the improvement effort, and it is proven that the
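
    A minimal sketch of the idea follows: a chromosome is a vector of lot starting times, the fitness penalises lots whose start-to-due-date window falls below a minimum production time, and the population is seeded with a simple back-scheduling heuristic standing in for the knowledge-based component. The due dates, minimum production time and GA settings are illustrative assumptions, not values from the paper.

```python
import random
random.seed(1)

DUE = [10.0, 18.0, 25.0, 33.0, 40.0]   # hypothetical lot due dates (days)
MIN_PROD = 4.0                         # minimum reasonable production time per lot

def fitness(starts):
    """Penalise lots whose start-to-due window is shorter than MIN_PROD
    and any start that is not before its due date."""
    penalty = 0.0
    for s, d in zip(starts, DUE):
        window = d - s
        penalty += max(0.0, MIN_PROD - window) + (100.0 if window <= 0 else 0.0)
    return -penalty

def knowledge_seed():
    # Stand-in for the knowledge-based part: back-schedule from the due date.
    return [d - MIN_PROD - random.uniform(0, 2) for d in DUE]

def mutate(starts, sigma=0.5):
    return [s + random.gauss(0, sigma) for s in starts]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

pop = [knowledge_seed() for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]

best = max(pop, key=fitness)
print([round(s, 2) for s in best], fitness(best))
```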

  7. Application of knowledge-based classification techniques and geographic information systems (GIS) on satellite imagery for stormwater management

    NASA Astrophysics Data System (ADS)

    Abellera, Lourdes Villanueva

    Stormwater management is concerned with runoff control and water quality optimization. A stormwater model is a tool applied to reach this goal. Hydrologic variables required to run this model are usually obtained from field surveys and aerial photo-interpretation. However, these procedures are slow and difficult. An alternative is the automated processing of satellite imagery. We examined various studies that utilized satellite data to provide inputs to stormwater models. The overall results of the modeling effort are acceptable even if the outputs of satellite data processing are used instead of those obtained from standard techniques. One important model input parameter is land use because it is associated with the amounts of runoff and pollutants generated in a parcel of land. Hence, we also explored new ways that land use can be identified from satellite imagery. Next, we demonstrated how the combined technologies of satellite remote sensing, knowledge-based systems, and geographic information systems (GIS) are used to delineate impervious surfaces from a Landsat ETM+ data. Imperviousness is a critical model input parameter because it is proportional to runoff rates and volumes. We found that raw satellite image, normalized difference vegetation image, and ancillary data can provide rules to distinguish impervious surfaces satisfactorily. We also identified different levels of pollutant loadings (high, medium, low) from the same satellite imagery using similar techniques. It is useful to identify areas with high stormwater pollutant emissions so that they can be prioritized for the implementation of best management practices. The contaminants studied were total suspended solids, biochemical oxygen demand, total phosphorus, total Kjeldahl nitrogen, copper, and oil and grease. We observed that raw data, tasseled cap transformed images, and ancillary data can be utilized to make rules for mapping pollution levels. Finally, we devised a method to compute weights
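
    As an illustration of the kind of rule described above, the sketch below derives NDVI from the red and near-infrared bands and combines it with a brightness condition to flag likely impervious pixels. The thresholds and band values are hypothetical; the study's actual rule base also used ancillary data and tasseled cap transforms, which are not modelled here.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / np.clip(nir + red, 1e-6, None)

def classify_impervious(nir, red, ndvi_max=0.2, red_min=0.10):
    """Hypothetical rule: low NDVI together with moderately bright red
    reflectance suggests an impervious surface."""
    v = ndvi(nir, red)
    return (v < ndvi_max) & (red > red_min)

# Toy reflectance values for three pixels: vegetation, asphalt, water.
nir = np.array([0.45, 0.18, 0.02])
red = np.array([0.05, 0.16, 0.03])
print(classify_impervious(nir, red))   # expect [False, True, False]
```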

  8. TH-A-9A-08: Knowledge-Based Quality Control of Clinical Stereotactic Radiosurgery Treatment Plans

    SciTech Connect

    Shiraishi, S; Moore, K L; Tan, J; Olsen, L

    2014-06-15

    Purpose: To develop a quality control tool to reduce stereotactic radiosurgery (SRS) planning variability using models that predict achievable plan quality metrics (QMs) based on individual patient anatomy. Methods: Using a knowledge-based methodology that quantitatively correlates anatomical geometric features to resultant organ-at-risk (OAR) dosimetry, we developed models for predicting achievable OAR dose-volume histograms (DVHs) by training with a cohort of previously treated SRS patients. The DVH-based QMs used in this work are the gradient measure, GM=(3/4pi)^1/3*[V50%^1/3−V100%^1/3], and V10Gy of normal brain. As GM quantifies the total rate of dose fall-off around the planning target volume (PTV), all voxels inside the patient's body contour were treated as OAR for DVH prediction. 35 previously treated SRS plans from our institution were collected; all were planned with non-coplanar volumetric-modulated arc therapy to prescription doses of 12–25 Gy. Of the 35-patient cohort, 15 were used for model training and 20 for model validation. Accuracies of the predictions were quantified by the mean and the standard deviation of the difference between clinical and predicted QMs, δQM=QM-clin−QM-pred. Results: Best agreement between predicted and clinical QMs was obtained when models were built separately for V-PTV<2.5cc and V-PTV>2.5cc. Eight patients trained the V-PTV<2.5cc model and seven patients trained the V-PTV>2.5cc models, respectively. The mean and the standard deviation of δGM were 0.3±0.4mm for the training sets and −0.1±0.6mm for the validation sets, demonstrating highly accurate GM predictions. V10Gy predictions were also highly accurate, with δV10Gy=0.8±0.7cc for the training sets and δV10Gy=0.7±1.4cc for the validation sets. Conclusion: The accuracy of the models in predicting two key SRS quality metrics highlights the potential of this technique for quality control for SRS treatments. Future investigations will seek to determine
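
    For readability, the two quality metrics quoted inline above can be written out as display equations. Assuming the usual equivalent-sphere reading of the ASCII formula, the gradient measure is the difference between the radii of spheres having the volumes enclosed by the 50% and 100% isodose lines, i.e. a dose fall-off distance in mm:

```latex
GM = \left(\frac{3}{4\pi}\right)^{1/3}\left(V_{50\%}^{1/3} - V_{100\%}^{1/3}\right),
\qquad
\delta QM = QM_{\mathrm{clin}} - QM_{\mathrm{pred}}
```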

  9. Producing Qualified Graduates and Assuring Education Quality in the Knowledge-Based Society: Roles and Issues of Graduate Education. Report of the International Workshop on Graduate Education, 2009. RIHE International Seminar Reports. No.14

    ERIC Educational Resources Information Center

    Research Institute for Higher Education, Hiroshima University (NJ3), 2010

    2010-01-01

    Through being specially funded by the Ministry of Education and Science in 2008, the Research Institute for Higher Education (RIHE) in Hiroshima University has been able to implement a new research project on the reform of higher education in the knowledge-based society of the 21st century. Thus RIHE hosted the second International Workshop on…

  10. The Reactome pathway knowledgebase.

    PubMed

    Croft, David; Mundo, Antonio Fabregat; Haw, Robin; Milacic, Marija; Weiser, Joel; Wu, Guanming; Caudy, Michael; Garapati, Phani; Gillespie, Marc; Kamdar, Maulik R; Jassal, Bijay; Jupe, Steven; Matthews, Lisa; May, Bruce; Palatnik, Stanislav; Rothfels, Karen; Shamovsky, Veronica; Song, Heeyeon; Williams, Mark; Birney, Ewan; Hermjakob, Henning; Stein, Lincoln; D'Eustachio, Peter

    2014-01-01

    Reactome (http://www.reactome.org) is a manually curated open-source open-data resource of human pathways and reactions. The current version 46 describes 7088 human proteins (34% of the predicted human proteome), participating in 6744 reactions based on data extracted from 15 107 research publications with PubMed links. The Reactome Web site and analysis tool set have been completely redesigned to increase speed, flexibility and user friendliness. The data model has been extended to support annotation of disease processes due to infectious agents and to mutation. PMID:24243840

  11. It Ain't the Heat, It's the Humanity: Evidence and Implications of a Knowledge-Based Consensus on Man-Made Global Warming

    NASA Astrophysics Data System (ADS)

    Jacobs, P.; Cook, J.; Nuccitelli, D. A.

    2013-12-01

    One of the most worrisome misconceptions among the general public about climate change is a belief that scientists disagree not only about the cause of the present climate change, but also whether or not the planet is currently warming. Recent surveys have demonstrated that an overwhelming consensus exists, both within the scientific literature and among scientists with climate expertise, that the planet is warming and humans are driving this climatic change. This disconnect, or 'consensus gap', between scientific agreement and public belief has significant consequences for public understanding of the reality and cause of climate change, as well as support for potential solutions. Ensuring that the consensus message is not simply broadcast but is also accepted as legitimate by the public appears to be a primary education and communications opportunity. While the existence of a consensus is not itself evidence of a position's truth, according to Miller (2013) scientific consensus can be taken as evidence that a position is true if it is 'knowledge-based', satisfying the conditions of social calibration, apparent consilience of evidence, and social diversity. We demonstrate that the scientific consensus on anthropogenic climate change is knowledge-based, satisfying Miller's criteria. In so doing, we hope to increase confidence in its use as an education and communications tool, and assure the public of its validity. We show the consensus is socially calibrated, based on common evidential standards, ontological schemes, and shared formalism. We establish that consilience of evidence points overwhelmingly to the reality of anthropogenic climate change by examining the evidence from several perspectives. We identify unique fingerprints expected as a result of increased greenhouse forcing, eliminate potential natural drivers of climate change as the cause of the present change, and demonstrate the consistency of the observed climate response with known changes in natural

  12. Knowledge-based systems as decision support tools in an ecosystem approach to fisheries: Comparing a fuzzy-logic and a rule-based approach

    NASA Astrophysics Data System (ADS)

    Jarre, Astrid; Paterson, Barbara; Moloney, Coleen L.; Miller, David C. M.; Field, John G.; Starfield, Anthony M.

    2008-10-01

    In an ecosystem approach to fisheries (EAF), management must draw on information of widely different types, and information addressing various scales. Knowledge-based systems assist in the decision-making process by summarising this information in a logical, transparent and reproducible way. Both rule-based Boolean and fuzzy-logic models have been used successfully as knowledge-based decision support tools. This study compares two such systems relevant to fisheries management in an EAF developed for the southern Benguela. The first is a rule-based system for the prediction of anchovy recruitment and the second is a fuzzy-logic tool to monitor implementation of an EAF in the sardine fishery. We construct a fuzzy-logic counterpart to the rule-based model, and a rule-based counterpart to the fuzzy-logic model, compare their results, and include feedback from potential users of these two decision support tools in our evaluation of the two approaches. With respect to the model objectives, no method clearly outperformed the other. The advantages of numerically processing continuous variables, and interpreting the final output, as in fuzzy-logic models, can be weighed up against the advantages of using a few, qualitative, easy-to-understand categories as in rule-based models. The natural language used in rule-based implementations is easily understood by, and communicated among, users of these systems. Users unfamiliar with fuzzy-set theory must “trust” the logic of the model. Graphical visualization of intermediate and end results is an important advantage of any system. Applying the two approaches in parallel improved our understanding of the model as well as of the underlying problems. Even for complex problems, small knowledge-based systems such as the ones explored here are worth developing and using. Their strengths lie in (i) synthesis of the problem in a logical and transparent framework, (ii) helping scientists to deliberate how to apply their science to
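
    The contrast between the two approaches can be shown with a toy indicator. The crisp rule returns a yes/no category at a hard cut-off, while the fuzzy version returns a degree of membership that rises gradually between two anchor points; the indicator name and anchor values below are hypothetical, not taken from either of the Benguela systems.

```python
def crisp_rule(sst_anomaly_c, threshold=1.0):
    """Boolean rule: flag unfavourable conditions above a hard threshold."""
    return sst_anomaly_c > threshold

def fuzzy_membership(sst_anomaly_c, low=0.5, high=1.5):
    """Fuzzy counterpart: the degree of 'unfavourable' rises linearly from 0
    at `low` to 1 at `high` instead of jumping at a single cut-off."""
    if sst_anomaly_c <= low:
        return 0.0
    if sst_anomaly_c >= high:
        return 1.0
    return (sst_anomaly_c - low) / (high - low)

for x in (0.4, 0.9, 1.1, 1.6):
    print(x, crisp_rule(x), round(fuzzy_membership(x), 2))
```

    The crisp output maps directly onto the easy-to-communicate categories of the rule-based model, while the fuzzy value preserves the gradation that the abstract credits to continuous-variable processing.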

  13. Creating a knowledge-based economy in the United Arab Emirates: realising the unfulfilled potential of women in the science, technology and engineering fields

    NASA Astrophysics Data System (ADS)

    Ghazal Aswad, Noor; Vidican, Georgeta; Samulewicz, Diana

    2011-12-01

    As the United Arab Emirates (UAE) moves towards a knowledge-based economy, maximising the participation of the national workforce, especially women, in the transformation process is crucial. Using survey methods and semi-structured interviews, this paper examines the factors that influence women's decisions regarding their degree programme and their attitudes towards science, technology and engineering (STE). The findings point to the importance of adapting mainstream policies to the local context and the need to better understand the effect of culture and society on the individual and the economy. There is a need to increase interest in STE by raising awareness of what the fields entail, potential careers and their suitability with existing cultural beliefs. Also suggested is the need to overcome negative stereotypes of engineering, implement initiatives for further family involvement at the higher education level, as well as the need to ensure a greater availability of STE university programmes across the UAE.

  14. An Adaptive Knowledge-based Ion Source Automation Methodology To Improve Beam to Beam Switch Performance On Applied Materials Quantum® X Single Wafer Ion Implanter

    NASA Astrophysics Data System (ADS)

    Burgess, Chris; Keane, Martin; Oliver, Robert

    2006-11-01

    In both the integrated device manufacturers (IDM) and the foundry market sectors, 300mm processing has been accompanied by decreasing size of device lots and increased variability in lot sizes. This has resulted in an increased focus on ion implant automated tune / recipe change in order to minimise the lost production time associated with this phenomenon. In parallel, significantly higher levels of automation have led to the necessity for consistency and reliability in achieving these changes. This study demonstrates the performance of a knowledge-based control mechanism for controlling the ion source on a Quantum X single wafer ion implanter. Data is reviewed spanning multiple systems deploying multiple process recipes over the extended lifecycle associated with the indirectly heated cathode ion source. Intelligent automated recovery sequences are described along with tuning success rate and time data are presented.

  15. On the analysis of protein–protein interactions via knowledge-based potentials for the prediction of protein–protein docking

    PubMed Central

    Feliu, Elisenda; Aloy, Patrick; Oliva, Baldo

    2011-01-01

    Development of effective methods to screen binary interactions obtained by rigid-body protein–protein docking is key for structure prediction of complexes and for elucidating physicochemical principles of protein–protein binding. We have derived empirical knowledge-based potential functions for selecting rigid-body docking poses. These potentials include the energetic component associated with residues having a particular secondary structure and surface accessibility. These scoring functions have been tested on a state-of-the-art benchmark dataset and on a decoy dataset of permanent interactions. Our results were compared with a residue-pair potential scoring function (RPScore) and an atomic-detailed scoring function (Zrank). We have combined knowledge-based potentials to score protein–protein poses of decoys of complexes classified either as transient or as permanent protein–protein interactions. Being defined from residue-pair statistical potentials and not requiring an atomic-level description, our method surpassed Zrank for scoring rigid-docking decoys where the unbound partners of an interaction have to endure conformational changes upon binding. However, when only moderate conformational changes are required (in rigid docking) or when the right conformational changes are ensured (in flexible docking), Zrank is the most successful scoring function. Finally, our study suggests that the physicochemical properties necessary for binding are present in the proteins prior to binding and independently of the partner. This information is encoded at the residue level and could be easily incorporated in the initial grid scoring for Fast Fourier Transform rigid-body docking methods. PMID:21432933
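
    Knowledge-based residue-pair potentials of this kind are typically built with an inverse-Boltzmann construction: the score of a residue pair is the negative log ratio of its observed contact frequency to the frequency expected from residue abundances. The sketch below shows that construction on a toy contact list; it is a generic illustration under that assumption, not the authors' exact formulation (which also conditions on secondary structure and surface accessibility).

```python
import math
from collections import Counter

def pair_potential(contacts):
    """Derive a toy residue-pair potential: E(a,b) = -ln(observed/expected),
    where the expected frequency assumes the two residue types pair at
    random according to their overall abundance in the contact list."""
    pair_counts = Counter(tuple(sorted(p)) for p in contacts)
    single_counts = Counter(r for p in contacts for r in p)
    n_pairs = sum(pair_counts.values())
    n_single = sum(single_counts.values())
    energies = {}
    for (a, b), obs in pair_counts.items():
        p_obs = obs / n_pairs
        p_exp = (single_counts[a] / n_single) * (single_counts[b] / n_single)
        if a != b:
            p_exp *= 2.0    # two orderings for heterotypic pairs
        energies[(a, b)] = -math.log(p_obs / p_exp)
    return energies

# Hypothetical interface contacts observed across a set of complexes.
contacts = [("LEU", "ILE"), ("LEU", "ILE"), ("ASP", "ARG"), ("LEU", "SER")]
print(pair_potential(contacts))
```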

  16. The Swiss-Prot protein knowledgebase and ExPASy: providing the plant community with high quality proteomic data and tools.

    PubMed

    Schneider, Michel; Tognolli, Michael; Bairoch, Amos

    2004-12-01

    The Swiss-Prot protein knowledgebase provides manually annotated entries for all species, but concentrates on the annotation of entries from model organisms to ensure the presence of high quality annotation of representative members of all protein families. A specific Plant Protein Annotation Program (PPAP) was started to cope with the increasing amount of data produced by the complete sequencing of plant genomes. Its main goal is the annotation of proteins from the model plant organism Arabidopsis thaliana. In addition to bibliographic references, experimental results, computed features and sometimes even contradictory conclusions, direct links to specialized databases connect amino acid sequences with the current knowledge in plant sciences. As protein families and groups of plant-specific proteins are regularly reviewed to keep up with current scientific findings, we hope that the wealth of information of Arabidopsis origin accumulated in our knowledgebase, and the numerous software tools provided on the Expert Protein Analysis System (ExPASy) web site might help to identify and reveal the function of proteins originating from other plants. Recently, a single, centralized, authoritative resource for protein sequences and functional information, UniProt, was created by joining the information contained in Swiss-Prot, Translation of the EMBL nucleotide sequence (TrEMBL), and the Protein Information Resource-Protein Sequence Database (PIR-PSD). A rising problem is that an increasing number of nucleotide sequences are not being submitted to the public databases, and thus the proteins inferred from such sequences will have difficulties finding their way to the Swiss-Prot or TrEMBL databases. PMID:15707838

  17. The feasibility of sub-millisievert coronary CT angiography with low tube voltage, prospective ECG gating, and a knowledge-based iterative model reconstruction algorithm.

    PubMed

    Park, Chul Hwan; Lee, Joohee; Oh, Chisuk; Han, Kyung Hwa; Kim, Tae Hoon

    2015-12-01

    We evaluated the feasibility of sub-millisievert (mSv) coronary CT angiography (CCTA) using low tube voltage, prospective ECG gating, and a knowledge-based iterative model reconstruction algorithm. Twenty-four non-obese healthy subjects (M:F 13:11; mean age 50.2 ± 7.8 years) were enrolled. Three sets of CT images were reconstructed using three different reconstruction methods: filtered back projection (FBP), iterative reconstruction (IR), and knowledge-based iterative model reconstruction (IMR). The scanning parameters were as follows: step-and-shoot axial scanning, 80 kVp, and 200 mAs. On the three sets of CT images, the attenuation and image noise values were measured at the aortic root. The signal-to-noise ratio (SNR) and the contrast-to-noise ratio (CNR) were calculated at the proximal right coronary artery and the left main coronary artery. The qualitative image quality of the CCTA with IMR was assessed using a 4-point grading scale (grade 1, poor; grade 4, excellent). The mean radiation dose of the CCTA was 0.89 ± 0.09 mSv. The attenuation values with IMR were not different from those of other reconstruction methods. The image noise with IMR was significantly lower than with IR and FBP. Compared to FBP, the noise reduction rate of IMR was 69 %. The SNR and CNR of CCTA with IMR were significantly higher than with FBP or IR. On the qualitative analysis with IMR, all included segments were diagnostic (grades 2, 3, and 4), and the mean image quality score was 3.6 ± 0.6. In conclusion, CCTA with low tube voltage, prospective ECG gating, and an IMR algorithm might be a feasible method that allows for sub-millisievert radiation doses and good image quality when used with non-obese subjects. PMID:26521066
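
    The SNR and CNR figures quoted above are presumably computed with the usual CCTA conventions: mean coronary attenuation divided by image noise (the HU standard deviation in an aortic-root ROI) for SNR, and the coronary-to-perivascular-fat attenuation difference divided by the same noise for CNR. The sketch below implements those common definitions with hypothetical HU samples; the authors' exact ROI placement and formulas may differ.

```python
import numpy as np

def snr(vessel_hu, noise_hu):
    """Signal-to-noise ratio: mean vessel attenuation over image noise."""
    return np.mean(vessel_hu) / noise_hu

def cnr(vessel_hu, fat_hu, noise_hu):
    """Contrast-to-noise ratio: vessel-to-fat attenuation difference over noise."""
    return (np.mean(vessel_hu) - np.mean(fat_hu)) / noise_hu

# Hypothetical measurements: noise taken as the HU standard deviation in an
# aortic-root ROI, attenuation sampled in a proximal coronary ROI and in
# perivascular fat.
aorta_roi = np.array([450.0, 468.0, 441.0, 459.0, 463.0])
noise = float(aorta_roi.std(ddof=1))
coronary = [455.0, 470.0, 462.0, 448.0]
fat = [-85.0, -92.0, -78.0, -88.0]
print(round(snr(coronary, noise), 1), round(cnr(coronary, fat, noise), 1))
```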

  18. An Innovative Approach to Addressing Childhood Obesity: A Knowledge-Based Infrastructure for Supporting Multi-Stakeholder Partnership Decision-Making in Quebec, Canada

    PubMed Central

    Addy, Nii Antiaye; Shaban-Nejad, Arash; Buckeridge, David L.; Dubé, Laurette

    2015-01-01

    Multi-stakeholder partnerships (MSPs) have become a widespread means for deploying policies in a whole of society strategy to address the complex problem of childhood obesity. However, decision-making in MSPs is fraught with challenges, as decision-makers are faced with complexity, and have to reconcile disparate conceptualizations of knowledge across multiple sectors with diverse sets of indicators and data. These challenges can be addressed by supporting MSPs with innovative tools for obtaining, organizing and using data to inform decision-making. The purpose of this paper is to describe and analyze the development of a knowledge-based infrastructure to support MSP decision-making processes. The paper emerged from a study to define specifications for a knowledge-based infrastructure to provide decision support for community-level MSPs in the Canadian province of Quebec. As part of the study, a process assessment was conducted to understand the needs of communities as they collect, organize, and analyze data to make decisions about their priorities. The result of this process is a “portrait”, which is an epidemiological profile of health and nutrition in their community. Portraits inform strategic planning and development of interventions, and are used to assess the impact of interventions. Our key findings indicate ambiguities and disagreement among MSP decision-makers regarding causal relationships between actions and outcomes, and the relevant data needed for making decisions. MSP decision-makers expressed a desire for easy-to-use tools that facilitate the collection, organization, synthesis, and analysis of data, to enable decision-making in a timely manner. Findings inform conceptual modeling and ontological analysis to capture the domain knowledge and specify relationships between actions and outcomes. This modeling and analysis provide the foundation for an ontology, encoded using OWL 2 Web Ontology Language. The ontology is developed to provide

  19. An innovative approach to addressing childhood obesity: a knowledge-based infrastructure for supporting multi-stakeholder partnership decision-making in Quebec, Canada.

    PubMed

    Addy, Nii Antiaye; Shaban-Nejad, Arash; Buckeridge, David L; Dubé, Laurette

    2015-02-01

    Multi-stakeholder partnerships (MSPs) have become a widespread means for deploying policies in a whole of society strategy to address the complex problem of childhood obesity. However, decision-making in MSPs is fraught with challenges, as decision-makers are faced with complexity, and have to reconcile disparate conceptualizations of knowledge across multiple sectors with diverse sets of indicators and data. These challenges can be addressed by supporting MSPs with innovative tools for obtaining, organizing and using data to inform decision-making. The purpose of this paper is to describe and analyze the development of a knowledge-based infrastructure to support MSP decision-making processes. The paper emerged from a study to define specifications for a knowledge-based infrastructure to provide decision support for community-level MSPs in the Canadian province of Quebec. As part of the study, a process assessment was conducted to understand the needs of communities as they collect, organize, and analyze data to make decisions about their priorities. The result of this process is a "portrait", which is an epidemiological profile of health and nutrition in their community. Portraits inform strategic planning and development of interventions, and are used to assess the impact of interventions. Our key findings indicate ambiguities and disagreement among MSP decision-makers regarding causal relationships between actions and outcomes, and the relevant data needed for making decisions. MSP decision-makers expressed a desire for easy-to-use tools that facilitate the collection, organization, synthesis, and analysis of data, to enable decision-making in a timely manner. Findings inform conceptual modeling and ontological analysis to capture the domain knowledge and specify relationships between actions and outcomes. This modeling and analysis provide the foundation for an ontology, encoded using OWL 2 Web Ontology Language. The ontology is developed to provide semantic

  20. Numerical, Analytical, Experimental Study of Fluid Dynamic Forces in Seals. Volume 1; Executive Summary and Description of Knowledge-Based System

    NASA Technical Reports Server (NTRS)

    Liang, Anita D. (Technical Monitor); Shapiro, Wilbur; Aggarwal, Bharat

    2004-01-01

    The objectives of the program were to develop computational fluid dynamics (CFD) codes and simpler industrial codes for analyzing and designing advanced seals for air-breathing and space propulsion engines. The CFD code SCISEAL is capable of producing full three-dimensional flow field information for a variety of cylindrical configurations. An implicit multidomain capability allows the division of complex flow domains to allow optimum use of computational cells. SCISEAL also has the unique capability to produce cross-coupled stiffness and damping coefficients for rotordynamic computations. The industrial codes consist of a series of separate stand-alone modules designed for expeditious parametric analyses and optimization of a wide variety of cylindrical and face seals. Coupled through a Knowledge-Based System (KBS) that provides a user-friendly Graphical User Interface (GUI), the industrial codes are PC based using an OS/2 operating system. These codes were designed to treat film seals where a clearance exists between the rotating and stationary components. Leakage is inhibited by surface roughness, small but stiff clearance films, and viscous pumping devices. The codes have demonstrated to be a valuable resource for seal development of future air-breathing and space propulsion engines.

  1. Distance learning on the Web supported by Javascript: a critical appraisal with examples from clay mineralogy and knowledge-based tests

    NASA Astrophysics Data System (ADS)

    Krumm, S.; Thum, I.

    1998-08-01

    The hypertext mark-up language (HTML) is used to create hypertext documents on the World-Wide Web (WWW), which is built on a client/server model. In this paper we discuss the enhancement of HTML documents with JavaScript, a script language understood by most common browsers. JavaScript is considered an easy means for bringing interactivity and answer checking to educational Web pages. It is faster to learn than a programming language like Perl and has the advantage of high portability between different operating systems. Because all actions are performed on the client side, it reduces net traffic and pages can be used off-line. Educational usage, including tests and operation in future distance learning, is outlined. Examples of JavaScript-supported documents are given, using clay mineralogy and knowledge-based tests. A critical review of this relatively new technology reveals some compatibility problems but these seem to be offset by the possibility to make Web pages more attractive.

  2. Iterative Knowledge-Based Scoring Functions Derived from Rigid and Flexible Decoy Structures: Evaluation with the 2013 and 2014 CSAR Benchmarks.

    PubMed

    Yan, Chengfei; Grinter, Sam Z; Merideth, Benjamin Ryan; Ma, Zhiwei; Zou, Xiaoqin

    2016-06-27

    In this study, we developed two iterative knowledge-based scoring functions, ITScore_pdbbind(rigid) and ITScore_pdbbind(flex), using rigid decoy structures and flexible decoy structures, respectively, that were generated from the protein-ligand complexes in the refined set of PDBbind 2012. These two scoring functions were evaluated using the 2013 and 2014 CSAR benchmarks. The results were compared with the results of two other scoring functions, the Vina scoring function and ITScore, the scoring function that we previously developed from rigid decoy structures for a smaller set of protein-ligand complexes. A graph-based method was developed to evaluate the root-mean-square deviation between two conformations of the same ligand with different atom names and orders due to different file preparations, and the program is freely available. Our study showed that the two new scoring functions developed from the larger training set yielded significantly improved performance in binding mode predictions. For binding affinity predictions, all four scoring functions showed protein-dependent performance. We suggest the development of protein-family-dependent scoring functions for accurate binding affinity prediction. PMID:26389744
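
    The graph-based RMSD idea can be sketched compactly: represent each ligand file as a graph with elements on the nodes and bonds on the edges, enumerate element-preserving isomorphisms to recover the atom correspondence despite renamed or reordered atoms, and take the lowest RMSD over those mappings. The example below uses networkx and a hypothetical three-atom fragment; it is an illustration of the idea, not the authors' released program.

```python
import math
import networkx as nx
from networkx.algorithms import isomorphism

def ligand_graph(atoms, bonds):
    """atoms: {name: (element, (x, y, z))}; bonds: iterable of (name, name)."""
    g = nx.Graph()
    for name, (element, xyz) in atoms.items():
        g.add_node(name, element=element, xyz=xyz)
    g.add_edges_from(bonds)
    return g

def graph_rmsd(g1, g2):
    """RMSD over element-preserving isomorphisms between the two atom
    labellings; returns the lowest value over all such mappings."""
    nm = isomorphism.categorical_node_match("element", None)
    matcher = isomorphism.GraphMatcher(g1, g2, node_match=nm)
    best = None
    for mapping in matcher.isomorphisms_iter():
        sq = 0.0
        for a, b in mapping.items():
            xa, ya, za = g1.nodes[a]["xyz"]
            xb, yb, zb = g2.nodes[b]["xyz"]
            sq += (xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
        rmsd = math.sqrt(sq / g1.number_of_nodes())
        best = rmsd if best is None else min(best, rmsd)
    return best

# The same three-atom fragment with different atom names and orders.
g1 = ligand_graph({"C1": ("C", (0, 0, 0)), "O1": ("O", (1.2, 0, 0)), "N1": ("N", (-1.3, 0, 0))},
                  [("C1", "O1"), ("C1", "N1")])
g2 = ligand_graph({"NX": ("N", (-1.3, 0.1, 0)), "CX": ("C", (0, 0.1, 0)), "OX": ("O", (1.2, 0.1, 0))},
                  [("CX", "OX"), ("CX", "NX")])
print(round(graph_rmsd(g1, g2), 3))
```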

  3. Individual 3D region-of-interest atlas of the human brain: knowledge-based class image analysis for extraction of anatomical objects

    NASA Astrophysics Data System (ADS)

    Wagenknecht, Gudrun; Kaiser, Hans-Juergen; Sabri, Osama; Buell, Udalrich

    2000-06-01

    After neural network-based classification of tissue types, the second step of atlas extraction is knowledge-based class image analysis to get anatomically meaningful objects. Basic algorithms are region growing, mathematical morphology operations, and template matching. A special algorithm was designed for each object. The class label of each voxel and the knowledge about the relative position of anatomical objects to each other and to the sagittal midplane of the brain can be utilized for object extraction. User interaction is only necessary to define starting, mid- and end planes for most object extractions and to determine the number of iterations for erosion and dilation operations. Extraction can be done for the following anatomical brain regions: cerebrum; cerebral hemispheres; cerebellum; brain stem; white matter (e.g., centrum semiovale); gray matter [cortex, frontal, parietal, occipital, temporal lobes, cingulum, insula, basal ganglia (nuclei caudati, putamen, thalami)]. For atlas-based quantification of functional data, anatomical objects can be convoluted with the point spread function of functional data to take into account the different resolutions of morphological and functional modalities. This method allows individual atlas extraction from MRI image data of a patient without the need of warping individual data to an anatomical or statistical MRI brain atlas.
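
    A small example of the morphological post-processing step is given below: for one tissue label in the class image, an erosion/dilation cycle removes thin spurious connections and the largest connected component is kept as the anatomical object. It uses scipy.ndimage on a toy 2D class image; the per-object algorithms described above involve additional anatomical knowledge and user-defined planes that are not modelled here.

```python
import numpy as np
from scipy import ndimage

def largest_component(class_image, label_of_interest, n_erode=1, n_dilate=1):
    """Keep the largest connected component of one tissue class after a
    small erosion/dilation cycle that removes thin spurious connections."""
    mask = class_image == label_of_interest
    mask = ndimage.binary_erosion(mask, iterations=n_erode)
    labelled, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labelled, index=range(1, n + 1))
    keep = labelled == (int(np.argmax(sizes)) + 1)
    return ndimage.binary_dilation(keep, iterations=n_dilate)

# Toy 2D "class image": label 2 forms one large blob and one stray pixel.
img = np.zeros((12, 12), dtype=int)
img[2:8, 2:8] = 2
img[10, 10] = 2
print(largest_component(img, 2).sum())
```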

  4. Validation of an enhanced knowledge-based method for segmentation and quantitative analysis of intrathoracic airway trees from three-dimensional CT images

    SciTech Connect

    Sonka, M.; Park, W.; Hoffman, E.A.

    1995-12-31

    Accurate assessment of airway physiology, evaluated in terms of geometric changes, is critically dependent upon the accurate imaging and image segmentation of the three-dimensional airway tree structure. The authors have previously reported a knowledge-based method for three-dimensional airway tree segmentation from high resolution CT (HRCT) images. Here, they report a substantially improved version of the method. In the current implementation, the method consists of several stages. First, the lung borders are automatically determined in the three-dimensional set of HRCT data. The primary airway tree is semi-automatically identified. In the next stage, potential airways are determined in individual CT slices using a rule-based system that uses contextual information and a priori knowledge about pulmonary anatomy. Using three-dimensional connectivity properties of the pulmonary airway tree, the three-dimensional tree is constructed from the set of adjacent slices. The method's performance and accuracy were assessed in five 3D HRCT canine images. Computer-identified airways matched 226/258 observer-defined airways (87.6%); the computer method failed to detect the airways in the remaining 32 locations. By visual assessment of rendered airway trees, the experienced observers judged the computer-detected airway trees as highly realistic.

  5. Conformational Temperature-Dependent Behavior of a Histone H2AX: A Coarse-Grained Monte Carlo Approach Via Knowledge-Based Interaction Potentials

    PubMed Central

    Fritsche, Miriam; Pandey, Ras B.; Farmer, Barry L.; Heermann, Dieter W.

    2012-01-01

    Histone proteins are not only important due to their vital role in cellular processes such as DNA compaction, replication and repair but also show intriguing structural properties that might be exploited for bioengineering purposes such as the development of nano-materials. Based on their biological and technological implications, it is interesting to investigate the structural properties of proteins as a function of temperature. In this work, we study the spatial response dynamics of the histone H2AX, consisting of 143 residues, by a coarse-grained bond fluctuating model for a broad range of normalized temperatures. A knowledge-based interaction matrix is used as input for the residue-residue Lennard-Jones potential. We find a variety of equilibrium structures, including global globular configurations at low normalized temperature, combinations of segmental globules and elongated chains, predominantly elongated chains, and universal self-avoiding-walk (SAW) conformations at high normalized temperature. The radius of gyration of the protein exhibits a non-monotonic temperature dependence, with a maximum at a characteristic temperature where a crossover occurs from a positive (stretching) to a negative (contraction) thermal response as the normalized temperature increases. PMID:22442661
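
    Because the entry above follows the radius of gyration of the coarse-grained chain as a function of temperature, a short sketch of how that quantity is computed from residue coordinates may be helpful. The coordinates below are an illustrative toy chain, not output of the bond-fluctuation model.

      # Radius of gyration of a coarse-grained residue chain (equal masses assumed):
      # Rg = sqrt( (1/N) * sum_i |r_i - r_cm|^2 ). Real input would be the residue
      # positions produced by the Monte Carlo bond-fluctuation model.
      import numpy as np

      def radius_of_gyration(coords):
          """coords: (N, 3) array of residue positions."""
          center = coords.mean(axis=0)
          return float(np.sqrt(np.mean(np.sum((coords - center) ** 2, axis=1))))

      # Toy example: a fully extended 143-residue chain on a line with unit spacing.
      chain = np.zeros((143, 3))
      chain[:, 0] = np.arange(143)
      print(radius_of_gyration(chain))  # ~41.3 for this straight chain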

  6. Refinement of modelled structures by knowledge-based energy profiles and secondary structure prediction: application to the human procarboxypeptidase A2.

    PubMed

    Aloy, P; Mas, J M; Martí-Renom, M A; Querol, E; Avilés, F X; Oliva, B

    2000-01-01

    Knowledge-based energy profiles combined with secondary structure prediction have been applied to molecular modelling refinement. To check the procedure, three different models of human procarboxypeptidase A2 (hPCPA2) were built using the 3D structures of procarboxypeptidase A1 (pPCPA1) and bovine procarboxypeptidase A (bPCPA) as templates. The results of the refinement can be tested against the X-ray structure of hPCPA2, which has recently been determined. Regions mis-modelled in the activation segment of hPCPA2 were detected by means of pseudo-energies using Prosa II and were afterwards modified according to the secondary structure prediction. Moreover, models obtained with automated methods such as COMPOSER, MODELLER and distance restraints were also compared, and it proved possible to identify the best model by means of pseudo-energies. Two general conclusions can be drawn from this work: (1) given a set of putative models, it is possible to distinguish the one closest to the crystallographic structure, and (2) within a given structure it is possible to identify, by means of pseudo-energies, those regions that have been defectively modelled. PMID:10702927

  7. FluKB: A Knowledge-Based System for Influenza Vaccine Target Discovery and Analysis of the Immunological Properties of Influenza Viruses

    PubMed Central

    Simon, Christian; Kudahl, Ulrich J.; Sun, Jing; Olsen, Lars Rønn; Zhang, Guang Lan; Reinherz, Ellis L.; Brusic, Vladimir

    2015-01-01

    FluKB is a knowledge-based system focusing on data and analytical tools for influenza vaccine discovery. The main goal of FluKB is to provide access to curated influenza sequence and epitope data and to enhance the analysis of influenza sequence diversity and of the targets of immune responses. FluKB consists of more than 400,000 influenza protein sequences, known epitope data (357 verified T-cell epitopes, 685 HLA binders, and 16 naturally processed MHC ligands), and a collection of 28 influenza antibodies and their structurally defined B-cell epitopes. FluKB was built using a modular framework allowing the implementation of analytical workflows and includes standard search tools, such as keyword search and sequence similarity queries, as well as advanced tools for the analysis of sequence variability. The advanced analytical tools for vaccine discovery include visual mapping of T- and B-cell vaccine targets and assessment of neutralizing antibody coverage. FluKB supports the discovery of vaccine targets and the analysis of viral diversity and its implications for vaccine discovery, as well as of potential T-cell breadth and antibody cross-neutralization involving multiple strains. FluKB is representative of a new generation of databases that integrate data, analytical tools, and analytical workflows, enabling comprehensive analysis and automatic generation of analysis reports. PMID:26504853

  8. Performance of a Knowledge-Based Model for Optimization of Volumetric Modulated Arc Therapy Plans for Single and Bilateral Breast Irradiation

    PubMed Central

    Fogliata, Antonella; Nicolini, Giorgia; Bourgier, Celine; Clivio, Alessandro; De Rose, Fiorenza; Fenoglietto, Pascal; Lobefalo, Francesca; Mancosu, Pietro; Tomatis, Stefano; Vanetti, Eugenio; Scorsetti, Marta; Cozzi, Luca

    2015-01-01

    Purpose To evaluate the performance of a model-based optimisation process for volumetric modulated arc therapy (VMAT) applied to whole breast irradiation. Methods and Materials A set of 150 VMAT dose plans with simultaneous integrated boost was selected to train a model for the prediction of dose-volume constraints. The dosimetric validation was done on different groups of patients from three institutes for single breast (50 cases) and bilateral breast (20 cases) irradiation. Results Quantitative improvements were observed between the model-based and the reference plans, particularly for heart dose. Of 460 analysed dose-volume objectives, 13% of the clinical plans failed to meet the constraints while the respective model-based plans succeeded. Only in 5 cases did the reference plans pass while the respective model-based plans failed the criteria. For the bilateral breast analysis, the model-based plans resulted in superior or equivalent dose distributions to the reference plans in 96% of the cases. Conclusions Plans optimised using a knowledge-based model to determine the dose-volume constraints showed dosimetric improvements when compared to earlier approved clinical plans. The model was applicable to patients from different centres for both single and bilateral breast irradiation. The data suggest that dose-volume constraint optimisation can be effectively automated with the new engine, which could encourage its application in clinical practice. PMID:26691687

  9. The Ensemble Folding Kinetics of the FBP28 WW Domain Revealed by an All-atom Monte Carlo Simulation in a Knowledge-based Potential

    PubMed Central

    Xu, Jiabin; Huang, Lei; Shakhnovich, Eugene I.

    2011-01-01

    In this work, we apply a detailed all-atom model with a transferable knowledge-based potential to study the folding kinetics of the Formin-Binding protein FBP28, which is a canonical three-stranded β-sheet WW domain. Replica exchange Monte Carlo (REMC) simulations starting from random coils find a native-like lowest-energy structure (Cα RMSD of 2.68 Å). We also study the folding kinetics of the FBP28 WW domain by performing a large number of ab initio Monte Carlo folding simulations. Using these trajectories, we examine the order of formation of the two β-hairpins, the folding mechanism of each individual β-hairpin, and the transition state ensemble (TSE) of the FBP28 WW domain, and compare our results with experimental data and previous computational studies. To obtain detailed structural information on the folding dynamics viewed as an ensemble process, we perform a clustering analysis procedure based on graph theory. Further, a rigorous Pfold analysis is used to obtain representative samples of the TSEs, showing good quantitative agreement between experimental and simulated Φ values. Our analysis shows that the turn structure between the first and second β-strands is a partially stable structural motif that forms before entering the TSE of the FBP28 WW domain, and that there are two major pathways for folding, which differ in the order and mechanism of hairpin formation. PMID:21365688

  10. Definition of the applicability domains of knowledge-based predictive toxicology expert systems by using a structural fragment-based approach.

    PubMed

    Ellison, Claire M; Enoch, Steven J; Cronin, Mark Td; Madden, Judith C; Judson, Philip

    2009-11-01

    The applicability domain of a (quantitative) structure-activity relationship ([Q]SAR) must be defined, if a model is to be used successfully for toxicity prediction, particularly for regulatory purposes. Previous efforts to set guidelines on the definition of applicability domains have often been biased toward quantitative, rather than qualitative, models. As a result, novel techniques are still required to define the applicability domains of structural alert models and knowledge-based systems. By using Derek for Windows as an example, this study defined the domain for the skin sensitisation structural alert rule-base. This was achieved by fragmenting the molecules within a training set of compounds, then searching the fragments for those created from a test compound. This novel method was able to highlight test chemicals which differed from those in the training set. The information was then used to designate chemicals as being either within or outside the domain of applicability for the structural alert on which that training set was based. PMID:20017582
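
    The domain-definition idea described above (fragment the training-set compounds, then flag any test compound that contributes a fragment never seen in training) can be illustrated with a small sketch. Fragments are represented here as plain strings (e.g. canonical SMILES of substructures); the fragmentation routine itself is assumed to exist elsewhere and is not shown, and this is not the Derek for Windows implementation.

      # Minimal sketch of a fragment-based applicability domain, assuming an external
      # routine has already fragmented each molecule into a set of substructure strings
      # (e.g. canonical SMILES fragments). Not the implementation used in the study.

      def build_domain(training_fragment_sets):
          """Union of all fragments seen across the training compounds."""
          domain = set()
          for frags in training_fragment_sets:
              domain |= frags
          return domain

      def in_domain(test_fragments, domain):
          """A test compound is inside the domain if all of its fragments were seen
          in training; any unfamiliar fragments are returned for inspection."""
          unseen = test_fragments - domain
          return len(unseen) == 0, unseen

      # Hypothetical fragment sets for three training compounds and two test compounds.
      training = [{"c1ccccc1", "C(=O)O"}, {"c1ccccc1", "N"}, {"C(=O)O", "CCl"}]
      domain = build_domain(training)
      print(in_domain({"c1ccccc1", "N"}, domain))          # (True, set())
      print(in_domain({"c1ccccc1", "S(=O)(=O)"}, domain))  # (False, {'S(=O)(=O)'})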

  11. SU-E-P-58: Dosimetric Study of Conventional Intensity-Modulated Radiotherapy and Knowledge-Based Radiation Therapy for Postoperation of Cervix Cancer

    SciTech Connect

    Ma, C; Yin, Y

    2015-06-15

    Purpose: To compare the dosimetric differences in the target volume and organs at risk (OARs) between conventional intensity-modulated radiotherapy (C-IMRT) and knowledge-based radiation therapy (KBRT) plans for cervical cancer. Methods: 39 patients with cervical cancer after surgery were randomly selected; 20 patient plans were used to create the model, and the other 19 cases were used for comparative evaluation. All plans were designed in the Eclipse system. The prescription dose was 30.6 Gy in 17 fractions, with OAR doses satisfying the clinical requirements. A paired t test was used to evaluate the differences in dose-volume histograms (DVH). Results: Compared to the C-IMRT plans, the KBRT plans achieved similar target dose coverage; D98, D95, D2, HI and CI showed no significant differences (P≥0.05). The doses to the rectum, bladder and femoral heads showed no significant differences (P≥0.05). The time needed to design a treatment plan was significantly reduced. Conclusion: This study shows that KBRT plans for postoperative radiotherapy of cervical cancer can achieve similar target and OAR doses, with a shorter planning time.
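
    The comparison above rests on a paired t test applied to dose-volume metrics for the same patients planned two ways. For readers who want to reproduce that kind of analysis, a paired t test in Python looks like the sketch below; the dose values are invented purely to show the call.

      # Paired t test on a DVH metric (e.g. bladder mean dose in Gy) measured for the
      # same patients under two planning techniques. The values are hypothetical;
      # scipy.stats.ttest_rel implements the paired test used in the abstract.
      from scipy import stats

      c_imrt = [28.1, 27.5, 29.0, 26.8, 28.4, 27.9, 28.8, 27.2]  # hypothetical doses
      kbrt   = [27.9, 27.6, 28.7, 26.9, 28.2, 27.8, 28.6, 27.3]  # hypothetical doses

      t_stat, p_value = stats.ttest_rel(c_imrt, kbrt)
      print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
      # A p-value >= 0.05 would be read, as in the abstract, as no significant difference.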

  12. kPROT: a knowledge-based scale for the propensity of residue orientation in transmembrane segments. Application to membrane protein structure prediction.

    PubMed

    Pilpel, Y; Ben-Tal, N; Lancet, D

    1999-12-10

    Modeling of integral membrane proteins and the prediction of their functional sites requires the identification of transmembrane (TM) segments and the determination of their angular orientations. Hydrophobicity scales predict accurately the location of TM helices, but are less accurate in computing angular disposition. Estimating lipid-exposure propensities of the residues from statistics of solved membrane protein structures has the disadvantage of relying on relatively few proteins. As an alternative, we propose here a scale of knowledge-based Propensities for Residue Orientation in Transmembrane segments (kPROT), derived from the analysis of more than 5000 non-redundant protein sequences. We assume that residues that tend to be exposed to the membrane are more frequent in TM segments of single-span proteins, while residues that prefer to be buried in the transmembrane bundle interior are present mainly in multi-span TMs. The kPROT value for each residue is thus defined as the logarithm of the ratio of its proportions in single and multiple TM spans. The scale is refined further by defining it for three discrete sections of the TM segment, namely extracellular, central, and intracellular. The capacity of the kPROT scale to predict angular helical orientation was compared to that of alternative methods in a benchmark test, using a diversity of multi-span alpha-helical transmembrane proteins with a solved 3D structure. kPROT yielded an average angular error of 41 degrees, significantly lower than that of alternative scales (62-68 degrees). The new scale thus provides a useful general tool for modeling and prediction of functional residues in membrane proteins. A WWW server (http://bioinfo.weizmann.ac.il/kPROT) is available for automatic helix orientation prediction with kPROT. PMID:10588897
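
    The kPROT value defined above is simply the logarithm of the ratio between a residue's proportion in single-span TM segments and its proportion in multi-span TM segments. A minimal computation of such a scale from raw residue counts might look like the sketch below; the counts are invented, whereas the published scale was derived from more than 5000 sequences.

      # Sketch of a kPROT-style propensity: log of the ratio between a residue's
      # relative frequency in single-span TM segments and in multi-span TM segments.
      # The counts below are invented for illustration.
      import math

      def propensity_scale(single_counts, multi_counts):
          n_single = sum(single_counts.values())
          n_multi = sum(multi_counts.values())
          scale = {}
          for res in single_counts:
              p_single = single_counts[res] / n_single
              p_multi = multi_counts[res] / n_multi
              scale[res] = math.log(p_single / p_multi)
          return scale

      single = {"L": 300, "F": 150, "K": 20}   # hypothetical residue counts
      multi = {"L": 280, "F": 120, "K": 45}
      for res, value in propensity_scale(single, multi).items():
          # Positive values suggest a preference for lipid-exposed positions
          # (single spans); negative values suggest a buried, bundle-interior preference.
          print(res, round(value, 2))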

  13. Evaluation of TOPKAT, Toxtree, and Derek Nexus in Silico Models for Ocular Irritation and Development of a Knowledge-Based Framework To Improve the Prediction of Severe Irritation.

    PubMed

    Bhhatarai, Barun; Wilson, Daniel M; Parks, Amanda K; Carney, Edward W; Spencer, Pamela J

    2016-05-16

    Assessment of ocular irritation is an essential component of any risk assessment. A number of (Q)SARs and expert systems have been developed and are described in the literature. Here, we focus on three in silico models (TOPKAT, BfR rulebase implemented in Toxtree, and Derek Nexus) and evaluate their performance using 1644 in-house and 123 European Centre for Toxicology and Ecotoxicology of Chemicals (ECETOC) compounds with existing in vivo ocular irritation classification data. Overall, the in silico models performed poorly. The best consensus predictions of severe ocular irritants were 52 and 65% for the in-house and ECETOC compounds, respectively. The prediction performance was improved by designing a knowledge-based chemical profiling framework that incorporated physicochemical properties and electrophilic reactivity mechanisms. The utility of the framework was assessed by applying it to the same test sets and three additional publicly available in vitro irritation data sets. The prediction of severe ocular irritants was improved to 73-77% if compounds were filtered on the basis of AlogP_MR (hydrophobicity with molar refractivity). The predictivity increased to 74-80% for compounds capable of preferentially undergoing hard electrophilic reactions, such as Schiff base formation and acylation. This research highlights the need for reliable ocular irritation models to be developed that take into account mechanisms of action and individual structural classes. It also demonstrates the value of profiling compounds with respect to their chemical reactivity and physicochemical properties that, in combination with existing models, results in better predictions for severe irritants. PMID:27018716

  14. The peptide agonist-binding site of the glucagon-like peptide-1 (GLP-1) receptor based on site-directed mutagenesis and knowledge-based modelling

    PubMed Central

    Dods, Rachel L.; Donnelly, Dan

    2015-01-01

    Glucagon-like peptide-1 (7–36)amide (GLP-1) plays a central role in regulating blood sugar levels and its receptor, GLP-1R, is a target for anti-diabetic agents such as the peptide agonist drugs exenatide and liraglutide. In order to understand the molecular nature of the peptide–receptor interaction, we used site-directed mutagenesis and pharmacological profiling to highlight nine sites as being important for peptide agonist binding and/or activation. Using a knowledge-based approach, we constructed a 3D model of agonist-bound GLP-1R, basing the conformation of the N-terminal region on that of the receptor-bound NMR structure of the related peptide pituitary adenylate cyclase-activating protein (PACAP21). The relative position of the extracellular to the transmembrane (TM) domain, as well as the molecular details of the agonist-binding site itself, were found to be different from the model that was published alongside the crystal structure of the TM domain of the glucagon receptor, but were nevertheless more compatible with published mutagenesis data. Furthermore, the NMR-determined structure of a high-potency cyclic conformationally-constrained 11-residue analogue of GLP-1 was also docked into the receptor-binding site. Despite having a different main chain conformation to that seen in the PACAP21 structure, four conserved residues (equivalent to His-7, Glu-9, Ser-14 and Asp-15 in GLP-1) could be structurally aligned and made similar interactions with the receptor as their equivalents in the GLP-1-docked model, suggesting the basis of a pharmacophore for GLP-1R peptide agonists. In this way, the model not only explains current mutagenesis and molecular pharmacological data but also provides a basis for further experimental design. PMID:26598711

  15. Automated knowledge-base refinement

    NASA Technical Reports Server (NTRS)

    Mooney, Raymond J.

    1994-01-01

    Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.

  16. Knowledge-based robotic grasping

    SciTech Connect

    Stansfield, S.A.

    1989-01-01

    In this paper, we describe a general-purpose robotic grasping system for use in unstructured environments. Using computer vision and a compact set of heuristics, the system automatically generates the robot arm and hand motions required for grasping an unmodeled object. The utility of such a system is most evident in environments where the robot will have to grasp and manipulate a variety of unknown objects, but where many of the manipulation tasks may be relatively simple. Examples of such domains are planetary exploration and astronaut assistance, undersea salvage and rescue, and nuclear waste site clean-up. This work implements a two-stage model of grasping: stage one is an orientation of the hand and wrist and a ballistic reach toward the object; stage two is hand preshaping and adjustment. Visual features are first extracted from the unmodeled object. These features and their relations are used by an expert system to generate a set of valid reach/grasps for the object. These grasps are then used in driving the robot hand and arm to bring the fingers into contact with the object in the desired configuration. 16 refs., 14 figs.

  17. Cerebrospinal fluid data compilation and knowledge-based interpretation of bacterial, viral, parasitic, oncological, chronic inflammatory and demyelinating diseases. Diagnostic patterns not to be missed in neurology and psychiatry.

    PubMed

    Reiber, Hansotto

    2016-04-01

    The analysis of intrathecal IgG, IgA and IgM synthesis in cerebrospinal fluid (CSF) and its evaluation in combined quotient diagrams provides disease-related patterns. The compilation with complementary parameters (barrier function, i.e., CSF flow rate, cytology, lactate, antibodies) in a cumulative CSF data report allows a knowledge-based interpretation and provides analytical and medical plausibility for the quality assessment in CSF laboratories. The diagnostic relevance is described for neurological and psychiatric diseases for which CSF analysis cannot be replaced by other diagnostic methods without loss of information. Dominance of intrathecal IgM, IgA or three-class immune responses gives a systematic approach to facial nerve palsy, neurotrypanosomiasis, opportunistic diseases, lymphoma, neurotuberculosis, adrenoleucodystrophy or tumor metastases. Particular applications consider the diagnostic power of the polyspecific antibody response (MRZ-antibodies) in multiple sclerosis, a CSF-related systematic view on the differential diagnosis of psychiatric diseases, and the dynamics of brain-derived compared to blood-derived molecules in CSF for the localization of parasites. PMID:27097008

  18. Improving the efficiencies of simultaneous organic substance and nitrogen removal in a multi-stage loop membrane bioreactor-based PWWTP using an on-line Knowledge-Based Expert System.

    PubMed

    Chen, Zhao-Bo; Nie, Shu-Kai; Ren, Nan-Qi; Chen, Zhi-Qiang; Wang, Hong-Cheng; Cui, Min-Hua

    2011-10-15

    The results of the use of an expert system (ES) to control a novel multi-stage loop membrane bioreactor (MLMBR) for the simultaneous removal of organic substances and nutrients are reported. The study was conducted at a bench-scale plant for the purpose of meeting new discharge standards (GB21904-2008) for the treatment of chemical synthesis-based pharmaceutical wastewater (1200-9600 mg/L COD, 500-2500 mg/L BOD5, 50-200 mg/L NH4+-N and 105-400 mg/L TN in the influent water) by developing a distributed control system. The system allows various expert operational approaches to be deployed with the goal of minimizing organic substances and nitrogen levels in the outlet while using the minimum amount of energy. The proposed distributed control system, which is supervised by a Knowledge-Based Expert System (KBES) constructed with G2 (a tool for expert system development) and a back propagation BP artificial neural network, permits the on-line implementation of every operating strategy of the experimental system. A support vector machine (SVM) is applied to achieve pattern recognition. A set of experiments involving variable sludge retention time (SRT), hydraulic retention time (HRT) and dissolved oxygen (DO) was carried out. Using the proposed system, the amounts of COD, TN and NH4+-N in the effluent decreased by 55%, 62% and 38%, respectively, compared to the usual operating conditions. These improvements were achieved with little energy cost because the performance of the treatment plant was optimized using operating rules implemented in real time. PMID:21862097

  19. Knowledge-based systems for power management

    NASA Technical Reports Server (NTRS)

    Lollar, L. F.

    1992-01-01

    NASA-Marshall's Electrical Power Branch has undertaken the development of expert systems in support of further advancements in electrical power system automation. Attention is given to the features of (1) the Fault Recovery and Management Expert System, (2) a resource scheduler, the Master of Automated Expert Scheduling Through Resource Orchestration, and (3) an adaptive load-priority manager, the Load Priority List Management System. The characteristics of an advisory battery manager for the Hubble Space Telescope, designated the 'nickel-hydrogen expert system', are also noted.

  20. HMDB: a knowledgebase for the human metabolome

    PubMed Central

    Wishart, David S.; Knox, Craig; Guo, An Chi; Eisner, Roman; Young, Nelson; Gautam, Bijaya; Hau, David D.; Psychogios, Nick; Dong, Edison; Bouatra, Souhaila; Mandal, Rupasri; Sinelnikov, Igor; Xia, Jianguo; Jia, Leslie; Cruz, Joseph A.; Lim, Emilia; Sobsey, Constance A.; Shrivastava, Savita; Huang, Paul; Liu, Philip; Fang, Lydia; Peng, Jun; Fradette, Ryan; Cheng, Dean; Tzur, Dan; Clements, Melisa; Lewis, Avalyn; De Souza, Andrea; Zuniga, Azaret; Dawe, Margot; Xiong, Yeping; Clive, Derrick; Greiner, Russ; Nazyrova, Alsu; Shaykhutdinov, Rustem; Li, Liang; Vogel, Hans J.; Forsythe, Ian

    2009-01-01

    The Human Metabolome Database (HMDB, http://www.hmdb.ca) is a richly annotated resource that is designed to address the broad needs of biochemists, clinical chemists, physicians, medical geneticists, nutritionists and members of the metabolomics community. Since its first release in 2007, the HMDB has been used to facilitate the research for nearly 100 published studies in metabolomics, clinical biochemistry and systems biology. The most recent release of HMDB (version 2.0) has been significantly expanded and enhanced over the previous release (version 1.0). In particular, the number of fully annotated metabolite entries has grown from 2180 to more than 6800 (a 300% increase), while the number of metabolites with biofluid or tissue concentration data has grown by a factor of five (from 883 to 4413). Similarly, the number of purified compounds with reference to NMR, LC-MS and GC-MS spectra has more than doubled (from 380 to more than 790 compounds). In addition to this significant expansion in database size, many new database searching tools and new data content has been added or enhanced. These include better algorithms for spectral searching and matching, more powerful chemical substructure searches, faster text searching software, as well as dedicated pathway searching tools and customized, clickable metabolic maps. Changes to the user-interface have also been implemented to accommodate future expansion and to make database navigation much easier. These improvements should make the HMDB much more useful to a much wider community of users. PMID:18953024

  1. Knowledge-based fragment binding prediction.

    PubMed

    Tang, Grace W; Altman, Russ B

    2014-04-01

    Target-based drug discovery must assess many drug-like compounds for potential activity. Focusing on low-molecular-weight compounds (fragments) can dramatically reduce the chemical search space. However, approaches for determining protein-fragment interactions have limitations. Experimental assays are time-consuming, expensive, and not always applicable. At the same time, computational approaches using physics-based methods have limited accuracy. With increasing high-resolution structural data for protein-ligand complexes, there is now an opportunity for data-driven approaches to fragment binding prediction. We present FragFEATURE, a machine learning approach to predict small molecule fragments preferred by a target protein structure. We first create a knowledge base of protein structural environments annotated with the small molecule substructures they bind. These substructures have low-molecular weight and serve as a proxy for fragments. FragFEATURE then compares the structural environments within a target protein to those in the knowledge base to retrieve statistically preferred fragments. It merges information across diverse ligands with shared substructures to generate predictions. Our results demonstrate FragFEATURE's ability to rediscover fragments corresponding to the ligand bound with 74% precision and 82% recall on average. For many protein targets, it identifies high scoring fragments that are substructures of known inhibitors. FragFEATURE thus predicts fragments that can serve as inputs to fragment-based drug design or serve as refinement criteria for creating target-specific compound libraries for experimental or computational screening. PMID:24762971
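
    FragFEATURE is evaluated above in terms of precision (74%) and recall (82%) for rediscovering the fragments of the bound ligand. For readers unfamiliar with those metrics, the short sketch below shows how they would be computed from predicted versus actually bound fragment sets; the fragment identifiers are placeholders, not FragFEATURE output.

      # Precision and recall for fragment prediction, computed per protein target and
      # then averaged. Fragment identifiers are placeholders, not FragFEATURE output.

      def precision_recall(predicted, actual):
          true_pos = len(predicted & actual)
          precision = true_pos / len(predicted) if predicted else 0.0
          recall = true_pos / len(actual) if actual else 0.0
          return precision, recall

      # Hypothetical (predicted, actually bound) fragment sets for two targets.
      targets = [
          ({"frag_A", "frag_B", "frag_C"}, {"frag_A", "frag_B", "frag_D"}),
          ({"frag_X", "frag_Y"}, {"frag_X"}),
      ]
      scores = [precision_recall(pred, act) for pred, act in targets]
      mean_p = sum(p for p, _ in scores) / len(scores)
      mean_r = sum(r for _, r in scores) / len(scores)
      print(f"mean precision {mean_p:.2f}, mean recall {mean_r:.2f}")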

  2. HMDB: a knowledgebase for the human metabolome.

    PubMed

    Wishart, David S; Knox, Craig; Guo, An Chi; Eisner, Roman; Young, Nelson; Gautam, Bijaya; Hau, David D; Psychogios, Nick; Dong, Edison; Bouatra, Souhaila; Mandal, Rupasri; Sinelnikov, Igor; Xia, Jianguo; Jia, Leslie; Cruz, Joseph A; Lim, Emilia; Sobsey, Constance A; Shrivastava, Savita; Huang, Paul; Liu, Philip; Fang, Lydia; Peng, Jun; Fradette, Ryan; Cheng, Dean; Tzur, Dan; Clements, Melisa; Lewis, Avalyn; De Souza, Andrea; Zuniga, Azaret; Dawe, Margot; Xiong, Yeping; Clive, Derrick; Greiner, Russ; Nazyrova, Alsu; Shaykhutdinov, Rustem; Li, Liang; Vogel, Hans J; Forsythe, Ian

    2009-01-01

    The Human Metabolome Database (HMDB, http://www.hmdb.ca) is a richly annotated resource that is designed to address the broad needs of biochemists, clinical chemists, physicians, medical geneticists, nutritionists and members of the metabolomics community. Since its first release in 2007, the HMDB has been used to facilitate the research for nearly 100 published studies in metabolomics, clinical biochemistry and systems biology. The most recent release of HMDB (version 2.0) has been significantly expanded and enhanced over the previous release (version 1.0). In particular, the number of fully annotated metabolite entries has grown from 2180 to more than 6800 (a 300% increase), while the number of metabolites with biofluid or tissue concentration data has grown by a factor of five (from 883 to 4413). Similarly, the number of purified compounds with reference to NMR, LC-MS and GC-MS spectra has more than doubled (from 380 to more than 790 compounds). In addition to this significant expansion in database size, many new database searching tools and new data content has been added or enhanced. These include better algorithms for spectral searching and matching, more powerful chemical substructure searches, faster text searching software, as well as dedicated pathway searching tools and customized, clickable metabolic maps. Changes to the user-interface have also been implemented to accommodate future expansion and to make database navigation much easier. These improvements should make the HMDB much more useful to a much wider community of users. PMID:18953024

  3. Reuse: A knowledge-based approach

    NASA Technical Reports Server (NTRS)

    Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui

    1992-01-01

    This paper describes our research in automating the reuse process through the use of application domain models. Application domain models are explicit formal representations of the application knowledge necessary to understand, specify, and generate application programs. Furthermore, they provide a unified repository for the operational structure, rules, policies, and constraints of a specific application area. In our approach, domain models are expressed in terms of a transaction-based meta-modeling language. This paper has described in detail the creation and maintenance of hierarchical structures. These structures are created through a process that includes reverse engineering of data models with supplementary enhancement from application experts. Source code is also reverse engineered but is not a major source of domain model instantiation at this time. In the second phase of the software synthesis process, program specifications are interactively synthesized from an instantiated domain model. These specifications are currently integrated into a manual programming process but will eventually be used to derive executable code with mechanically assisted transformations. This research is performed within the context of programming-in-the-large types of systems. Although our goals are ambitious, we are implementing the synthesis system in an incremental manner through which we can realize tangible results. The client/server architecture is capable of supporting 16 simultaneous X/Motif users and tens of thousands of attributes and classes. Domain models have been partially synthesized from five different application areas. As additional domain models are synthesized and additional knowledge is gathered, we will inevitably add to and modify our representation. However, our current experience indicates that it will scale and expand to meet our modeling needs.

  4. DAPD: A Knowledgebase for Diabetes Associated Proteins.

    PubMed

    Gopinath, Krishnasamy; Jayakumararaj, Ramaraj; Karthikeyan, Muthusamy

    2015-01-01

    Recent advancements in genomics and proteomics provide a solid foundation for understanding the pathogenesis of diabetes. Proteomics of diabetes-associated pathways helps to identify the most potent targets for the management of diabetes. The relevant datasets are scattered across various prominent sources, which makes selecting a therapeutic target for the clinical management of diabetes time-consuming. However, additional information about target proteins is needed for validation. This lacuna may be resolved by linking diabetes-associated genes, pathways and proteins, providing a strong base for treatment and for planning management strategies for diabetes. Thus, a web resource, the Diabetes Associated Proteins Database (DAPD), has been developed to link diabetes-associated genes, pathways and proteins using PHP and MySQL. The current version of DAPD has been built with proteins associated with different types of diabetes. In addition, DAPD has been linked to external sources to provide access to more participating proteins and their pathway networks. DAPD will reduce search time and is expected to pave the way for the discovery of novel anti-diabetic leads using computational drug design for diabetes management. DAPD is openly accessible at www.mkarthikeyan.bioinfoau.org/dapd. PMID:26357271

  5. Knowledge-Based Flight-Status Monitor

    NASA Technical Reports Server (NTRS)

    Duke, E. L.; Disbrow, J. D.; Butler, G. F.

    1991-01-01

    Conceptual digital computing system intended to monitor and interpret telemetered data on health and status of complicated avionic system in advanced experimental aircraft. Monitor programmed with expert-system software to interpret data in real time. Software includes rule-based model of failure-management system of aircraft that processes fault-indicating signals from avionic system to give timely advice to human operators in mission-control room on ground.

  6. Clinical knowledge-based inverse treatment planning

    NASA Astrophysics Data System (ADS)

    Yang, Yong; Xing, Lei

    2004-11-01

    Clinical IMRT treatment plans are currently made using dose-based optimization algorithms, which do not consider the nonlinear dose-volume effects for tumours and normal structures. The choice of structure specific importance factors represents an additional degree of freedom of the system and makes rigorous optimization intractable. The purpose of this work is to circumvent the two problems by developing a biologically more sensible yet clinically practical inverse planning framework. To implement this, the dose-volume status of a structure was characterized by using the effective volume in the voxel domain. A new objective function was constructed with the incorporation of the volumetric information of the system so that the figure of merit of a given IMRT plan depends not only on the dose deviation from the desired distribution but also the dose-volume status of the involved organs. The conventional importance factor of an organ was written into a product of two components: (i) a generic importance that parametrizes the relative importance of the organs in the ideal situation when the goals for all the organs are met; (ii) a dose-dependent factor that quantifies our level of clinical/dosimetric satisfaction for a given plan. The generic importance can be determined a priori, and in most circumstances, does not need adjustment, whereas the second one, which is responsible for the intractable behaviour of the trade-off seen in conventional inverse planning, was determined automatically. An inverse planning module based on the proposed formalism was implemented and applied to a prostate case and a head-neck case. A comparison with the conventional inverse planning technique indicated that, for the same target dose coverage, the critical structure sparing was substantially improved for both cases. The incorporation of clinical knowledge allows us to obtain better IMRT plans and makes it possible to auto-select the importance factors, greatly facilitating the inverse planning process. The new formalism proposed also reveals the relationship between different inverse planning schemes and gives important insight into the problem of therapeutic plan optimization. In particular, we show that the EUD-based optimization is a special case of the general inverse planning formalism described in this paper.
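
    To make the weighting scheme above more concrete: the entry writes each organ's importance as the product of a fixed generic importance and a dose-dependent satisfaction factor, and scores a plan by the weighted deviation from the desired dose distribution. The sketch below is a deliberately crude quadratic objective in that spirit; the structures, weights and satisfaction function are hypothetical and much simpler than the published formalism.

      # Highly simplified sketch of a knowledge-based inverse-planning objective:
      # each structure contributes a quadratic dose deviation weighted by
      # (generic importance) x (dose-dependent satisfaction factor). The satisfaction
      # factor used here (mean overdose relative to the goal) is a placeholder and
      # not the formalism of the paper.
      import numpy as np

      def satisfaction_factor(dose, goal):
          """Grows as the structure's voxels exceed their goal dose (less satisfied)."""
          overdose = np.clip(dose - goal, 0.0, None)
          return 1.0 + overdose.mean() / max(goal, 1e-6)

      def objective(plan_doses, goals, generic_importance):
          total = 0.0
          for organ, dose in plan_doses.items():
              goal = goals[organ]
              weight = generic_importance[organ] * satisfaction_factor(dose, goal)
              total += weight * np.mean((dose - goal) ** 2)
          return total

      # Hypothetical voxel doses (Gy) for a target and one critical structure.
      doses = {"PTV": np.array([75.0, 76.0, 74.5]), "rectum": np.array([40.0, 55.0, 30.0])}
      goals = {"PTV": 76.0, "rectum": 45.0}
      generic_importance = {"PTV": 1.0, "rectum": 0.3}
      print(round(objective(doses, goals, generic_importance), 2))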

  7. NASDA knowledge-based network planning system

    NASA Technical Reports Server (NTRS)

    Yamaya, K.; Fujiwara, M.; Kosugi, S.; Yambe, M.; Ohmori, M.

    1993-01-01

    One of the SODS (space operation and data system) sub-systems, NP (network planning), was the first expert system used by NASDA (National Space Development Agency of Japan) for the tracking and control of satellites. The major responsibilities of the NP system are: first, the allocation of network and satellite control resources and, second, the generation of the network operation plan data (NOP) used in automated control of the stations and control center facilities. Until now, the first task, network resource scheduling, was done by network operators. The NP system automatically generates schedules using its knowledge base, which contains information on satellite orbits, station availability, which computer is dedicated to which satellite, and how many stations must be available for a particular satellite pass or a certain time period. The NP system is introduced.

  8. Database systems for knowledge-based discovery.

    PubMed

    Jagarlapudi, Sarma A R P; Kishan, K V Radha

    2009-01-01

    Several database systems have been developed to provide valuable information, in a structured format, to users ranging from the bench chemist to the biologist and from the medical practitioner to the pharmaceutical scientist. The advent of information technology and computational power has enhanced the ability to access large volumes of data in the form of a database in which one can do compilation, searching, archiving, analysis, and finally knowledge derivation. Although data are of variable types, the tools used for database creation, searching and retrieval are similar. GVK BIO has been developing databases from publicly available scientific literature in specific areas like medicinal chemistry, clinical research, and mechanism-based toxicity so that the structured databases containing vast data could be used in several areas of research. These databases were classified as reference-centric or compound-centric depending on the way the database systems were designed. Integration of these databases with knowledge derivation tools would enhance the value of these systems toward better drug design and discovery. PMID:19727614

  9. Knowledge-based flow field zoning

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automating flow field zoning in two dimensions is an important step towards easing the three-dimensional grid generation bottleneck in computational fluid dynamics. A knowledge-based approach works well, but certain aspects of flow field zoning make the use of such an approach challenging. A knowledge-based flow field zoner, called EZGrid, was implemented and tested on representative two-dimensional aerodynamic configurations. Results are shown which illustrate the way in which EZGrid incorporates the effects of physics, shape description, position, and user bias in flow field zoning.

  10. Structure-function relationships in the cysteine proteinases actinidin, papain and papaya proteinase omega. Three-dimensional structure of papaya proteinase omega deduced by knowledge-based modelling and active-centre characteristics determined by two-hydronic-state reactivity probe kinetics and kinetics of catalysis.

    PubMed Central

    Topham, C M; Salih, E; Frazao, C; Kowlessur, D; Overington, J P; Thomas, M; Brocklehurst, S M; Patel, M; Thomas, E W; Brocklehurst, K

    1991-01-01

    1. A model of the three-dimensional structure of papaya proteinase omega, the most basic cysteine proteinase component of the latex of papaya (Carica papaya), was built from its amino acid sequence and the two currently known high-resolution crystal structures of the homologous enzymes papain (EC 3.4.22.2) and actinidin (EC 3.4.22.14). The method used a knowledge-based approach incorporated in the COMPOSER suite of programs and refinement by using the interactive graphics program FRODO on an Evans and Sutherland PS 390 and by energy minimization using the GROMOS program library. 2. Functional similarities and differences between the three cysteine proteinases revealed by analysis of pH-dependent kinetics of the acylation process of the catalytic act and of the reactions of the enzyme catalytic sites with substrate-derived 2-pyridyl disulphides as two-hydronic-state reactivity probes are reported and discussed in terms of the knowledge-based model. 3. To facilitate analysis of complex pH-dependent kinetic data, a multitasking application program (SKETCHER) for parameter estimation by interactive manipulation of calculated curves and a simple method of writing down pH-dependent kinetic equations for reactions involving any number of reactive hydronic states by using information matrices were developed. 4. Papaya proteinase omega differs from the other two enzymes in the ionization characteristics of the common (Cys)-SH/(His)-Im+H catalytic-site system and of the other acid/base groups that modulate thiol reactivity towards substrate-derived inhibitors and the acylation process of the catalytic act. The most marked difference in the Cys/His system is that the pKa for the loss of the ion-pair state to form -S-/-Im is 8.1-8.3 for papaya proteinase omega, whereas it is 9.5 for both actinidin and papain. Papaya proteinase omega is similar to actinidin in that it lacks the second catalytically influential group with pKa approx. 4 present in papain and possesses a

  11. Structure-function relationships in the cysteine proteinases actinidin, papain and papaya proteinase omega. Three-dimensional structure of papaya proteinase omega deduced by knowledge-based modelling and active-centre characteristics determined by two-hydronic-state reactivity probe kinetics and kinetics of catalysis.

    PubMed

    Topham, C M; Salih, E; Frazao, C; Kowlessur, D; Overington, J P; Thomas, M; Brocklehurst, S M; Patel, M; Thomas, E W; Brocklehurst, K

    1991-11-15

    1. A model of the three-dimensional structure of papaya proteinase omega, the most basic cysteine proteinase component of the latex of papaya (Carica papaya), was built from its amino acid sequence and the two currently known high-resolution crystal structures of the homologous enzymes papain (EC 3.4.22.2) and actinidin (EC 3.4.22.14). The method used a knowledge-based approach incorporated in the COMPOSER suite of programs and refinement by using the interactive graphics program FRODO on an Evans and Sutherland PS 390 and by energy minimization using the GROMOS program library. 2. Functional similarities and differences between the three cysteine proteinases revealed by analysis of pH-dependent kinetics of the acylation process of the catalytic act and of the reactions of the enzyme catalytic sites with substrate-derived 2-pyridyl disulphides as two-hydronic-state reactivity probes are reported and discussed in terms of the knowledge-based model. 3. To facilitate analysis of complex pH-dependent kinetic data, a multitasking application program (SKETCHER) for parameter estimation by interactive manipulation of calculated curves and a simple method of writing down pH-dependent kinetic equations for reactions involving any number of reactive hydronic states by using information matrices were developed. 4. Papaya proteinase omega differs from the other two enzymes in the ionization characteristics of the common (Cys)-SH/(His)-Im+H catalytic-site system and of the other acid/base groups that modulate thiol reactivity towards substrate-derived inhibitors and the acylation process of the catalytic act. The most marked difference in the Cys/His system is that the pKa for the loss of the ion-pair state to form -S-/-Im is 8.1-8.3 for papaya proteinase omega, whereas it is 9.5 for both actinidin and papain. Papaya proteinase omega is similar to actinidin in that it lacks the second catalytically influential group with pKa approx. 4 present in papain and possesses a

  12. CACTUS: Command and Control Training Using Knowledge-Based Simulations.

    ERIC Educational Resources Information Center

    Hartley, J. R.; And Others

    1992-01-01

    Describes a computer simulation, CACTUS, that was developed in the United Kingdom to help police with command and control training for large crowd control incidents. Use of the simulation for pre-event planning and decision making is discussed, debriefing is described, and the role of the trainer is considered. (LRW)

  13. Knowledge-Based Object Detection in Laser Scanning Point Clouds

    NASA Astrophysics Data System (ADS)

    Boochs, F.; Karmacharya, A.; Marbs, A.

    2012-07-01

    Object identification and object processing in 3D point clouds have always posed challenges in terms of effectiveness and efficiency. In practice, this process is highly dependent on human interpretation of the scene represented by the point cloud data, as well as the set of modeling tools available for use. Such modeling algorithms are data-driven and concentrate on specific features of the objects, being accessible to numerical models. We present an approach that brings the human expert knowledge about the scene, the objects inside, and their representation by the data and the behavior of algorithms to the machine. This "understanding" enables the machine to assist human interpretation of the scene inside the point cloud. Furthermore, it allows the machine to understand possibilities and limitations of algorithms and to take this into account within the processing chain. This not only assists the researchers in defining optimal processing steps, but also provides suggestions when certain changes or new details emerge from the point cloud. Our approach benefits from the advancement in knowledge technologies within the Semantic Web framework. This advancement has provided a strong base for applications based on knowledge management. In the article we will present and describe the knowledge technologies used for our approach such as Web Ontology Language (OWL), used for formulating the knowledge base and the Semantic Web Rule Language (SWRL) with 3D processing and topologic built-ins, aiming to combine geometrical analysis of 3D point clouds, and specialists' knowledge of the scene and algorithmic processing.

  14. Satellite Contamination and Materials Outgassing Knowledgebase - An Interactive Database Reference

    NASA Technical Reports Server (NTRS)

    Green, D. B.; Burns, Dewitt (Technical Monitor)

    2001-01-01

    The goal of this program is to collect at one site much of the knowledge accumulated about the outgassing properties of aerospace materials based on ground testing, the effects of this outgassing observed on spacecraft in flight, and the broader contamination environment measured by instruments on-orbit. We believe that this Web site will help move contamination a step forward, away from anecdotal folklore toward engineering discipline. Our hope is that once operational, this site will form a nucleus for information exchange, that users will not only take information from our knowledge base, but also provide new information from ground testing and space missions, expanding and increasing the value of this site to all. We urge Government and industry users to endorse this approach that will reduce redundant testing, reduce unnecessary delays, permit uniform comparisons, and permit informed decisions.

  15. A knowledge-based system for materials selection

    SciTech Connect

    Trethewey, K.R.; Puget, Y.; Wood, R.J.K.; Roberge, P.R.

    1997-12-01

    Materials selection is a process obviously suitable for computerization. The growing list of available materials makes a choice increasingly susceptible to the vagaries of human decision making, as evidenced by the continued incidence of high corrosion costs and unacceptable failures. A methodology for computerization of the materials selection process is described and evaluated by an example involving the choice of a polymeric paint coating for seawater service.

  16. GENNY: A Knowledge-Based Text Generation System.

    ERIC Educational Resources Information Center

    Maybury, Mark T.

    1989-01-01

    Describes a computational model of the human process of generating text. The system design and generation process are discussed with particular attention to domain independence and cross language portability. The results of system tests are presented, the generator is evaluated with respect to current generators, and future directions are…

  17. Knowledge-based system to assess air crew training requirements

    NASA Technical Reports Server (NTRS)

    Smith, Barry R.; Banda, Carolyn P.

    1990-01-01

    A description is given of a prototype training assessment tool developed as part of a computer-based cockpit design and analysis workstation that estimates the training resources and time imposed by the anticipated mission and cockpit design. Embedding instructional system and training analysis domain knowledge in a production system environment, the tool allows crew station designers to readily determine the training ramifications of their choices for cockpit equipment, mission tasks, and operator qualifications. Initial results have been validated by comparison to an existing training program, demonstrating the tool's utility as a conceptual design aid and illuminating areas for future development.

  18. Enabling a Systems Biology Knowledgebase with Gaggle and Firegoose

    SciTech Connect

    Baliga, Nitin S.

    2014-12-12

    The overall goal of this project was to extend the existing Gaggle and Firegoose systems to develop an open-source technology that runs over the web and links desktop applications with many databases and software applications. This technology would enable researchers to incorporate workflows for data analysis that can be executed from this interface to other online applications. The four specific aims were to (1) provide one-click mapping of genes, proteins, and complexes across databases and species; (2) enable multiple simultaneous workflows; (3) expand sophisticated data analysis for online resources; and (4) enhance open-source development of the Gaggle-Firegoose infrastructure. Gaggle is an open-source Java software system that integrates existing bioinformatics programs and data sources into a user-friendly, extensible environment to allow interactive exploration, visualization, and analysis of systems biology data. Firegoose is an extension to the Mozilla Firefox web browser that enables data transfer between websites and desktop tools, including Gaggle. In the last phase of this funding period, we made substantial progress on the development and application of the Gaggle integration framework. We implemented the workspace in the Network Portal. Users can capture data from Firegoose and save them to the workspace. Users can create workflows to start multiple software components programmatically and pass data between them. Results of analysis can be saved to the cloud so that they can be easily restored on any machine. We also developed the Gaggle Chrome Goose, a plugin for the Google Chrome browser working in tandem with an opencpu server in the Amazon EC2 cloud. This allows users to interactively perform data analysis on a single web page using the R packages deployed on the opencpu server. The cloud-based framework facilitates collaboration between researchers from multiple organizations. We have made a number of enhancements to the cmonkey2 application to enable and improve the integration within different environments, and we have created a new tools pipeline for generating EGRIN2 models in a largely automated way.

  19. Knowledge-based operation and management of communications systems

    NASA Technical Reports Server (NTRS)

    Heggestad, Harold M.

    1988-01-01

    Expert systems techniques are being applied in operation and control of the Defense Communications System (DCS), which has the mission of providing reliable worldwide voice, data and message services for U.S. forces and commands. Thousands of personnel operate DCS facilities, and many of their functions match the classical expert system scenario: complex, skill-intensive environments with a full spectrum of problems in training and retention, cost containment, modernization, and so on. Two of these functions are: (1) fault isolation and restoral of dedicated circuits at Tech Control Centers, and (2) network management for the Defense Switched Network (the modernized dial-up voice system currently replacing AUTOVON). An expert system for the first of these is deployed for evaluation purposes at Andrews Air Force Base, and plans are being made for procurement of operational systems. In the second area, knowledge obtained with a sophisticated simulator is being embedded in an expert system. The background, design and status of both projects are described.

  20. The SwissLipids knowledgebase for lipid biology

    PubMed Central

    Liechti, Robin; Hyka-Nouspikel, Nevila; Niknejad, Anne; Gleizes, Anne; Götz, Lou; Kuznetsov, Dmitry; David, Fabrice P.A.; van der Goot, F. Gisou; Riezman, Howard; Bougueleret, Lydie; Xenarios, Ioannis; Bridge, Alan

    2015-01-01

    Motivation: Lipids are a large and diverse group of biological molecules with roles in membrane formation, energy storage and signaling. Cellular lipidomes may contain tens of thousands of structures, a staggering degree of complexity whose significance is not yet fully understood. High-throughput mass spectrometry-based platforms provide a means to study this complexity, but the interpretation of lipidomic data and its integration with prior knowledge of lipid biology suffers from a lack of appropriate tools to manage the data and extract knowledge from it. Results: To facilitate the description and exploration of lipidomic data and its integration with prior biological knowledge, we have developed a knowledge resource for lipids and their biology—SwissLipids. SwissLipids provides curated knowledge of lipid structures and metabolism which is used to generate an in silico library of feasible lipid structures. These are arranged in a hierarchical classification that links mass spectrometry analytical outputs to all possible lipid structures, metabolic reactions and enzymes. SwissLipids provides a reference namespace for lipidomic data publication, data exploration and hypothesis generation. The current version of SwissLipids includes over 244 000 known and theoretically possible lipid structures, over 800 proteins, and curated links to published knowledge from over 620 peer-reviewed publications. We are continually updating the SwissLipids hierarchy with new lipid categories and new expert curated knowledge. Availability: SwissLipids is freely available at http://www.swisslipids.org/. Contact: alan.bridge@isb-sib.ch Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25943471

  1. Utilitarian Model of Measuring Confidence within Knowledge-Based Societies

    ERIC Educational Resources Information Center

    Jack, Brady Michael; Hung, Kuan-Ming; Liu, Chia Ju; Chiu, Houn Lin

    2009-01-01

    This paper introduces a utilitarian confidence testing statistic called Risk Inclination Model (RIM) which indexes all possible confidence wagering combinations within the confines of a defined symmetrically point-balanced test environment. This paper presents the theoretical underpinnings, a formal derivation, a hypothetical application, and…

  2. Knowledge-based critiquing of graphical user interfaces with CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jianping; Murphy, Elizabeth D.; Carter, Leslie E.; Truszkowski, Walter F.

    1994-01-01

    CHIMES is a critiquing tool that automates the process of checking graphical user interface (GUI) designs for compliance with human factors design guidelines and toolkit style guides. The current prototype identifies instances of non-compliance and presents problem statements, advice, and tips to the GUI designer. Changes requested by the designer are made automatically, and the revised GUI is re-evaluated. A case study conducted at NASA-Goddard showed that CHIMES has the potential for dramatically reducing the time formerly spent in hands-on consistency checking. Capabilities recently added to CHIMES include exception handling and rule building. CHIMES is intended for use prior to usability testing as a means, for example, of catching and correcting syntactic inconsistencies in a larger user interface.

  3. Fuzzy logic controllers: A knowledge-based system perspective

    NASA Technical Reports Server (NTRS)

    Bonissone, Piero P.

    1993-01-01

    Over the last few years we have seen an increasing number of applications of Fuzzy Logic Controllers. These applications range from the development of auto-focus cameras, to the control of subway trains, cranes, automobile subsystems (automatic transmissions), domestic appliances, and various consumer electronic products. In summary, we consider a Fuzzy Logic Controller to be a high level language with its local semantics, interpreter, and compiler, which enables us to quickly synthesize non-linear controllers for dynamic systems.

  4. Knowledge-based biomedical word sense disambiguation: comparison of approaches

    PubMed Central

    2010-01-01

    Background Word sense disambiguation (WSD) algorithms attempt to select the proper sense of ambiguous terms in text. Resources like the UMLS provide a reference thesaurus to be used to annotate the biomedical literature. Statistical learning approaches have produced good results, but the size of the UMLS makes it infeasible to produce training data covering the entire domain. Methods We present research on existing WSD approaches based on knowledge bases, which complement the studies performed on statistical learning. We compare four approaches which rely on the UMLS Metathesaurus as the source of knowledge. The first approach compares the overlap of the context of the ambiguous word to the candidate senses based on a representation built out of the definitions, synonyms and related terms. The second approach collects training data for each of the candidate senses to perform WSD based on queries built using monosemous synonyms and related terms. These queries are used to retrieve MEDLINE citations. Then, a machine learning approach is trained on this corpus. The third approach is a graph-based method which exploits the structure of the Metathesaurus network of relations to perform unsupervised WSD. This approach ranks nodes in the graph according to their relative structural importance. The last approach uses the semantic types assigned to the concepts in the Metathesaurus to perform WSD. The context of the ambiguous word and semantic types of the candidate concepts are mapped to Journal Descriptors. These mappings are compared to decide among the candidate concepts. Results are provided estimating accuracy of the different methods on the WSD test collection available from the NLM. Conclusions We have found that the last approach achieves better results compared to the other methods. The graph-based approach, using the structure of the Metathesaurus network to estimate the relevance of the Metathesaurus concepts, does not perform well compared to the first two methods. In addition, the combination of methods improves the performance over the individual approaches. On the other hand, the performance is still below statistical learning trained on manually produced data and below the maximum frequency sense baseline. Finally, we propose several directions to improve the existing methods and to improve the Metathesaurus to be more effective in WSD. PMID:21092226
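
    The graph-based method described above ranks Metathesaurus concepts by their relative structural importance. A minimal sketch of that general idea, assuming networkx and using a tiny hypothetical relation graph (not UMLS data), is to run personalized PageRank seeded from the context words and pick the highest-scoring candidate sense:

    ```python
    # Minimal sketch of graph-based WSD via personalized PageRank.
    # The graph, node names, and candidate senses are hypothetical, not UMLS.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("cold_temperature", "weather"), ("cold_temperature", "freezing"),
        ("common_cold", "virus"), ("common_cold", "cough"), ("cough", "fever"),
    ])

    context_terms = ["cough", "fever"]      # words surrounding the ambiguous term
    personalization = {n: (1.0 if n in context_terms else 0.0) for n in G}

    scores = nx.pagerank(G, personalization=personalization)
    candidates = ["cold_temperature", "common_cold"]
    best = max(candidates, key=lambda s: scores[s])
    print(best)   # the sense better connected to the context wins
    ```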

  5. A knowledge-based system for controlling automobile traffic

    NASA Technical Reports Server (NTRS)

    Maravas, Alexander; Stengel, Robert F.

    1994-01-01

    Transportation network capacity variations arising from accidents, roadway maintenance activity, and special events as well as fluctuations in commuters' travel demands complicate traffic management. Artificial intelligence concepts and expert systems can be useful in framing policies for incident detection, congestion anticipation, and optimal traffic management. This paper examines the applicability of intelligent route guidance and control as decision aids for traffic management. Basic requirements for managing traffic are reviewed, concepts for studying traffic flow are introduced, and mathematical models for modeling traffic flow are examined. Measures for quantifying transportation network performance levels are chosen, and surveillance and control strategies are evaluated. It can be concluded that automated decision support holds great promise for aiding the efficient flow of automobile traffic over limited-access roadways, bridges, and tunnels.

  6. Knowledge-based operation and management of communications systems

    NASA Astrophysics Data System (ADS)

    Heggestad, Harold M.

    1988-11-01

    Expert systems techniques are being applied in operation and control of the Defense Communications System (DCS), which has the mission of providing reliable worldwide voice, data and message services for U.S. forces and commands. Thousands of personnel operate DCS facilities, and many of their functions match the classical expert system scenario: complex, skill-intensive environments with a full spectrum of problems in training and retention, cost containment, modernization, and so on. Two of these functions are: (1) fault isolation and restoral of dedicated circuits at Tech Control Centers, and (2) network management for the Defense Switched Network (the modernized dial-up voice system currently replacing AUTOVON). An expert system for the first of these is deployed for evaluation purposes at Andrews Air Force Base, and plans are being made for procurement of operational systems. In the second area, knowledge obtained with a sophisticated simulator is being embedded in an expert system. The background, design and status of both projects are described.

  7. Knowledge-Based Economies and Education: A Grand Canyon Analogy

    ERIC Educational Resources Information Center

    Mahy, Colleen; Krimmel, Tyler

    2008-01-01

    Expeditions inspire people to reach beyond themselves. Today, post-secondary education requires as much planning as any expedition. However, there has been a trend that has seen just over half of all high school students in Ontario going on to post-secondary education. While some people have barely noticed this statistic, the OECD has released a…

  8. A special purpose knowledge-based face localization method

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years because they are an essential pre-processing step in many techniques that handle or deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices under a wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps, starting with image pre-processing, followed by a special-purpose edge detection and then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned using a special template to select a number of possible candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a high level of accuracy that outperforms existing general-purpose face detection methods.
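
    The wavelet step of this pipeline can be illustrated with a short sketch, assuming PyWavelets and NumPy: compute the LL sub-band at a chosen decomposition level and binarize it. The later template scan, color scoring, and fuzzy fusion stages are omitted, and the random array stands in for a real frame.

    ```python
    # Minimal sketch of the wavelet step: LL sub-band extraction plus a simple
    # global threshold. The input image is a random stand-in, not real video.
    import numpy as np
    import pywt

    img = np.random.rand(256, 256)              # stand-in for a grayscale frame

    coeffs = pywt.wavedec2(img, wavelet="haar", level=2)
    ll = coeffs[0]                              # LL sub-band at level 2

    binary = (ll > ll.mean()).astype(np.uint8)  # simple global threshold
    print(binary.shape, binary.sum())
    ```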

  9. Knowledge-based Expert Systems for Crop Identification

    NASA Technical Reports Server (NTRS)

    Smith, T. R.; Estes, J. E. (Principal Investigator); Sailer, C. T.; Tinney, L. R.

    1984-01-01

    The development of an improved understanding of the interactive man machine environment is investigated. In such an environment, as many feature inputs as practical would be automatically derived from a data base and input into an expert system decision making procedure. This procedure could then provide expert assistance to a trained image analyst to upgrade and improve the quantity and accuracy of the information extracted from the input data. A comparison of the similarities and differences between manual and automated image interpretation techniques is also examined.

  10. Strategic Positioning of HRM in Knowledge-Based Organizations

    ERIC Educational Resources Information Center

    Thite, Mohan

    2004-01-01

    With knowledge management as the strategic intent and learning to learn as the strategic weapon, the current management focus is on how to leverage knowledge faster and better than competitors. Research demonstrates that it is the cultural mindset of the people in the organisation that primarily defines success in knowledge intensive…

  11. Knowledge-Based Optimization of Molecular Geometries Using Crystal Structures.

    PubMed

    Cole, Jason C; Groom, Colin R; Korb, Oliver; McCabe, Patrick; Shields, Gregory P

    2016-04-25

    This paper describes a novel way to use the structural information contained in the Cambridge Structural Database (CSD) to drive geometry optimization of organic molecules. We describe how CSD structural information is transformed into objective functions for gradient-based optimization to provide good quality geometries for a large variety of organic molecules. Performance is assessed by minimizing different sets of organic molecules reporting RMSD movements for bond lengths, valence angles, torsion angles, and heavy atom positions. PMID:26977906
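
    As a hedged sketch of the general approach of turning knowledge-derived geometry targets into objective functions for gradient-based optimization, the example below minimizes a harmonic penalty that pulls one interatomic distance toward a target value. The target length and weight are illustrative numbers, not values derived from the CSD.

    ```python
    # Minimal sketch: a knowledge-derived target bond length expressed as a
    # smooth objective and minimized with a gradient-based optimizer.
    # The target and stiffness below are hypothetical, not CSD-derived values.
    import numpy as np
    from scipy.optimize import minimize

    target_length = 1.54          # hypothetical C-C bond target, in angstroms
    k = 100.0                     # hypothetical stiffness weight

    def objective(x):
        """Harmonic penalty on the distance between two atoms in 3-D."""
        a, b = x[:3], x[3:]
        d = np.linalg.norm(a - b)
        return k * (d - target_length) ** 2

    x0 = np.array([0.0, 0.0, 0.0, 1.0, 0.2, 0.1])   # initial atom positions
    res = minimize(objective, x0, method="BFGS")
    print(res.fun, np.linalg.norm(res.x[:3] - res.x[3:]))  # ~0, ~1.54
    ```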

  12. Knowledge-Based Interpretation Of Scanned Business Letters

    NASA Astrophysics Data System (ADS)

    Kreich, Joachim; Luhn, Achim; Maderlechner, Gerd

    1989-07-01

    Office automation by electronic text processing has not reduced the amount of paper used for communication and storage; the present boom of FAX systems attests to this tendency. With this growing degree of office automation, the paper-computer interface becomes increasingly important. To be useful, this interface must be able to handle documents containing text as well as graphics, and convert them into an electronic representation that not only captures content (as in current OCR readers), but also the layout and logical structure. We describe a system for the analysis of business letters which is able to extract the key elements of a letter, such as its sender, the date, etc. The letter can thus, for instance, be stored in electronic archival systems, edited by structure editors, or forwarded via electronic mail services. This system was implemented on a Symbolics Lisp machine for the high-level part of the analysis and on a VAX for the low- and medium-level processing stages. Some practical results are presented and discussed. Apart from this application, our system is a useful testbed to implement and test sophisticated control structures and model representations for image understanding.

  13. Impact of knowledge-based software engineering on aerospace systems

    NASA Technical Reports Server (NTRS)

    Peyton, Liem; Gersh, Mark A.; Swietek, Gregg

    1991-01-01

    The emergence of knowledge engineering as a software technology will dramatically alter the use of software by expanding application areas across a wide spectrum of industries. The engineering and management of large aerospace software systems could benefit from a knowledge engineering approach. An understanding of this technology can potentially make significant improvements to the current practice of software engineering, and provide new insights into future development and support practices.

  14. CLEO: A knowledge-based refueling assistant at FFTF

    SciTech Connect

    Smith, D.E.; Kocher, L.F.; Seeman, S.E.

    1985-07-01

    CLEO is a computer software system to assist in the planning and performance of the reactor refueling operations at the Fast Flux Test Facility. It is a recently developed application of artificial intelligence software with both expert systems and automated reasoning aspects. The computer system seeks to organize the sequence of core component movements according to the rules and logic used by the expert. In this form, CLEO has aspects which tie it to both the expert systems and automated reasoning areas within the artificial intelligence field.

  15. CLEO: a knowledge-based refueling assistant at FFTF

    SciTech Connect

    Smith, D.E.; Kocher, L.F.; Seeman, S.E.

    1985-11-01

    A computer software system, CLEO, is used to assist in the planning and performance of the reactor refueling operations at the Fast Flux Test Facility (FFTF). It is a recently developed application of artificial intelligence software with both expert systems and automated reasoning aspects. CLEO, an acronym for Cloned LEO, is a logic-based computer program written in Pascal. It imitates the processes that the refueling expert for FFTF performs in organizing the refueling of FFTF. The computer assistant seeks to organize the sequence of core component movements according to the rules and logic used by the expert. In this form, CLEO has aspects that tie it to both the expert systems and automated reasoning areas within the artificial intelligence field.

  16. Knowledge-based image bandwidth compression and enhancement

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Tescher, Andrew G.

    1987-01-01

    Techniques for incorporating a priori knowledge in the digital coding and bandwidth compression of image data are described and demonstrated. An algorithm for identifying and highlighting thin lines and point objects prior to coding is presented, and the precoding enhancement of a slightly smoothed version of the image is shown to be more effective than enhancement of the original image. Also considered are readjustment of the local distortion parameter and variable-block-size coding. The line-segment criteria employed in the classification are listed in a table, and sample images demonstrating the effectiveness of the enhancement techniques are presented.

  17. Model-based knowledge-based optical processors

    NASA Astrophysics Data System (ADS)

    Casasent, David; Liebowitz, Suzanne A.

    1987-05-01

    An efficient 3-D object-centered knowledge base is described. The ability to on-line generate a 2-D image projection or range image for any object/viewer orientation from this knowledge base is addressed. Applications of this knowledge base in associative processors and symbolic correlators are then discussed. Initial test results are presented for a multiple degree of freedom object recognition problem. These include new techniques to achieve object orientation information and two new associative memory matrix formulations.

  18. Knowledge-based localization of hippocampus in human brain MRI

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Siadat, Mohammad-Reza

    1999-05-01

    The hippocampus is an important structure of the human brain limbic system. Variations in the volume and architecture of this structure have been related to certain neurological diseases such as schizophrenia and epilepsy. This paper presents a two-stage method for automatically localizing the hippocampus in human brain MRI. The first stage utilizes image processing techniques such as nonlinear filtering and histogram analysis to extract information from the MRI. This stage generates binary images and locates the lateral and third ventricles and the inferior limit of the Sylvian fissure. The second stage uses an expert system shell named VP-EXPERT to analyze the information extracted in the first stage. This stage utilizes absolute and relative spatial rules and spatial symmetry rules to locate the hippocampus. The system has been tested using MRI studies of six epilepsy patients; the MRI data consisted of a total of 128 images. The system correctly identified all of the slices without hippocampus and correctly localized the hippocampus in about 78% of the slices containing it.

  19. Knowledge-based processing for aircraft flight control

    NASA Technical Reports Server (NTRS)

    Painter, John H.

    1991-01-01

    The purpose is to develop algorithms and architectures for embedding artificial intelligence in aircraft guidance and control systems. With the approach adopted, AI-computing is used to create an outer guidance loop for driving the usual aircraft autopilot. That is, a symbolic processor monitors the operation and performance of the aircraft. Then, based on rules and other stored knowledge, commands are automatically formulated for driving the autopilot so as to accomplish desired flight operations. The focus is on developing a software system which can respond to linguistic instructions, input in a standard format, so as to formulate a sequence of simple commands to the autopilot. The instructions might be a fairly complex flight clearance, input either manually or by data-link. Emphasis is on a software system which responds much like a pilot would, employing not only precise computations, but, also, knowledge which is less precise, but more like common-sense. The approach is based on prior work to develop a generic 'shell' architecture for an AI-processor, which may be tailored to many applications by describing the application in appropriate processor data bases (libraries). Such descriptions include numerical models of the aircraft and flight control system, as well as symbolic (linguistic) descriptions of flight operations, rules, and tactics.

  20. Utilizing knowledge-base semantics in graph-based algorithms

    SciTech Connect

    Darwiche, A.

    1996-12-31

    Graph-based algorithms convert a knowledge base with a graph structure into one with a tree structure (a join-tree) and then apply tree-inference on the result. Nodes in the join-tree are cliques of variables and tree-inference is exponential in w*, the size of the maximal clique in the join-tree. A central property of join-trees that validates tree-inference is the running-intersection property: the intersection of any two cliques must belong to every clique on the path between them. We present two key results in connection to graph-based algorithms. First, we show that the running-intersection property, although sufficient, is not necessary for validating tree-inference. We present a weaker property for this purpose, called running-interaction, that depends on non-structural (semantical) properties of a knowledge base. We also present a linear algorithm that may reduce w* of a join-tree, possibly destroying its running-intersection property, while maintaining its running-interaction property and, hence, its validity for tree-inference. Second, we develop a simple algorithm for generating trees satisfying the running-interaction property. The algorithm bypasses triangulation (the standard technique for constructing join-trees) and does not construct a join-tree first. We show that the proposed algorithm may in some cases generate trees that are more efficient than those generated by modifying a join-tree.
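
    The running-intersection property discussed above is easy to check mechanically: for every pair of cliques in a candidate join-tree, their intersection must be contained in every clique on the path between them. A minimal sketch, assuming networkx and using a toy chain of cliques:

    ```python
    # Minimal sketch: verify the running-intersection property on a join-tree.
    import networkx as nx
    from itertools import combinations

    def has_running_intersection(tree, cliques):
        """tree: nx.Graph over clique ids; cliques: dict id -> set of variables."""
        for u, v in combinations(tree.nodes, 2):
            shared = cliques[u] & cliques[v]
            path = nx.shortest_path(tree, u, v)
            if any(not shared <= cliques[w] for w in path):
                return False
        return True

    cliques = {0: {"A", "B"}, 1: {"B", "C"}, 2: {"C", "D"}}
    tree = nx.Graph([(0, 1), (1, 2)])
    print(has_running_intersection(tree, cliques))   # True for this chain
    ```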

  1. Confidence Testing for Knowledge-Based Global Communities

    ERIC Educational Resources Information Center

    Jack, Brady Michael; Liu, Chia-Ju; Chiu, Houn-Lin; Shymansky, James A.

    2009-01-01

    This proposal advocates the position that the use of confidence wagering (CW) during testing can predict the accuracy of a student's test answer selection during between-subject assessments. Data revealed female students were more favorable to taking risks when making CW and less inclined toward risk aversion than their male counterparts. Student…

  2. Knowledge-based requirements analysis for automating software development

    NASA Technical Reports Server (NTRS)

    Markosian, Lawrence Z.

    1988-01-01

    We present a new software development paradigm that automates the derivation of implementations from requirements. In this paradigm, informally-stated requirements are expressed in a domain-specific requirements specification language. This language is machine-understandable, and requirements expressed in it are captured in a knowledge base. Once the requirements are captured, more detailed specifications and eventually implementations are derived by the system using transformational synthesis. A key characteristic of the process is that the required human intervention is in the form of providing problem- and domain-specific engineering knowledge, not in writing detailed implementations. We describe a prototype system that applies the paradigm in the realm of communication engineering: the prototype automatically generates implementations of buffers following analysis of the requirements on each buffer.

  3. Knowledge-based classification of neuronal fibers in entire brain.

    PubMed

    Xia, Yan; Turken, U; Whitfield-Gabrieli, Susan L; Gabrieli, John D

    2005-01-01

    This work presents a framework driven by parcellation of brain gray matter in standard normalized space to classify the neuronal fibers obtained from diffusion tensor imaging (DTI) in entire human brain. Classification of fiber bundles into groups is an important step for the interpretation of DTI data in terms of functional correlates of white matter structures. Connections between anatomically delineated brain regions that are considered to form functional units, such as a short-term memory network, are identified by first clustering fibers based on their terminations in anatomically defined zones of gray matter according to Talairach Atlas, and then refining these groups based on geometric similarity criteria. Fiber groups identified this way can then be interpreted in terms of their functional properties using knowledge of functional neuroanatomy of individual brain regions specified in standard anatomical space, as provided by functional neuroimaging and brain lesion studies. PMID:16685847

  4. Towards a HPV Vaccine Knowledgebase for Patient Education Content.

    PubMed

    Wang, Dennis; Cunningham, Rachel; Boom, Julie; Amith, Muhammad; Tao, Cui

    2016-01-01

    Human papillomavirus is a widespread sexually transmitted infection that can be prevented with vaccination. However, HPV vaccination rates in the United States are disappointingly low. This paper introduces a patient-oriented web ontology intended to provide an interactive way to educate patients about HPV and the HPV vaccine and to empower them to make the right vaccination decision. The information gathered for this initial draft of the ontology was primarily taken from the Centers for Disease Control and Prevention's Vaccine Information Statements. The ontology currently consists of 160 triples, 141 classes, 52 properties and 55 individuals. For future iterations, we aim to incorporate more information as well as obtain subject matter expert feedback to improve the overall quality of the ontology. PMID:27332237
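
    As a hedged sketch of how triples like those counted above are asserted in practice, the example below builds a tiny RDF graph with rdflib. The namespace, class, and individual names are hypothetical illustrations, not terms from the actual HPV vaccine ontology.

    ```python
    # Minimal sketch of asserting ontology triples with rdflib.
    # All URIs and terms below are hypothetical placeholders.
    from rdflib import Graph, Namespace, RDF, RDFS, Literal

    HPV = Namespace("http://example.org/hpv-vaccine#")
    g = Graph()

    g.add((HPV.Vaccine, RDF.type, RDFS.Class))
    g.add((HPV.Gardasil9, RDF.type, HPV.Vaccine))
    g.add((HPV.Gardasil9, HPV.recommendedAge, Literal("11-12 years")))

    print(len(g))                 # number of triples asserted
    for s, p, o in g:
        print(s, p, o)
    ```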

  5. Dynamic reasoning in a knowledge-based system

    NASA Technical Reports Server (NTRS)

    Rao, Anand S.; Foo, Norman Y.

    1988-01-01

    Any space-based system, whether it is a robot arm assembling parts in space or an onboard system monitoring the space station, has to react to changes which cannot be foreseen. As a result, apart from having domain-specific knowledge as in current expert systems, a space-based AI system should also have general principles of change. This paper presents a modal logic which can not only represent change but also reason with it. Three primitive operations, expansion, contraction and revision, are introduced, and axioms are specified which govern how the knowledge base should change when the external world changes. Accordingly, the notion of dynamic reasoning is introduced which, unlike existing forms of reasoning, provides general principles of change. Dynamic reasoning is based on two main principles, namely minimize change and maximize coherence. A possible-world semantics which incorporates the above two principles is also discussed. The paper concludes by discussing how the dynamic reasoning system can be used to specify actions and hence form an integral part of an autonomous reasoning and planning system.

  6. Designer: A Knowledge-Based Graphic Design Assistant.

    ERIC Educational Resources Information Center

    Weitzman, Louis

    This report describes Designer, an interactive tool for assisting with the design of two-dimensional graphic interfaces for instructional systems. The system, which consists of a color graphics interface to a mathematical simulation, provides enhancements to the Graphics Editor component of Steamer (a computer-based training system designed to aid…

  7. Knowledge-based understanding of aerial surveillance video

    NASA Astrophysics Data System (ADS)

    Cheng, Hui; Butler, Darren

    2006-05-01

    Aerial surveillance has long been used by the military to locate, monitor and track the enemy. Recently, its scope has expanded to include law enforcement activities, disaster management and commercial applications. With the ever-growing amount of aerial surveillance video acquired daily, there is an urgent need for extracting actionable intelligence in a timely manner. Furthermore, to support high-level video understanding, this analysis needs to go beyond current approaches and consider the relationships, motivations and intentions of the objects in the scene. In this paper we propose a system for interpreting aerial surveillance videos that automatically generates a succinct but meaningful description of the observed regions, objects and events. For a given video, the semantics of important regions and objects, and the relationships between them, are summarized into a semantic concept graph. From this, a textual description is derived that provides new search and indexing options for aerial video and enables the fusion of aerial video with other information modalities, such as human intelligence, reports and signal intelligence. Using a Mixture-of-Experts video segmentation algorithm, an aerial video is first decomposed into regions and objects with predefined semantic meanings. The objects are then tracked and coerced into a semantic concept graph, and the graph is summarized spatially, temporally and semantically using ontology-guided sub-graph matching and rewriting. The system exploits domain-specific knowledge and uses a reasoning engine to verify and correct the classes, identities and semantic relationships between the objects. This approach is advantageous because misclassifications lead to knowledge contradictions and hence they can be easily detected and intelligently corrected. In addition, the graph representation highlights events and anomalies that a low-level analysis would overlook.

  8. A Knowledge-Based Representation Scheme for Environmental Science Models

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.; Dungan, Jennifer L.; Lum, Henry, Jr. (Technical Monitor)

    1994-01-01

    One of the primary methods available for studying environmental phenomena is the construction and analysis of computational models. We have been studying how artificial intelligence techniques can be applied to assist in the development and use of environmental science models within the context of NASA-sponsored activities. We have identified several high-utility areas as potential targets for research and development: model development; data visualization, analysis, and interpretation; model publishing and reuse, training and education; and framing, posing, and answering questions. Central to progress on any of the above areas is a representation for environmental models that contains a great deal more information than is present in a traditional software implementation. In particular, a traditional software implementation is devoid of any semantic information that connects the code with the environmental context that forms the background for the modeling activity. Before we can build AI systems to assist in model development and usage, we must develop a representation for environmental models that adequately describes a model's semantics and explicitly represents the relationship between the code and the modeling task at hand. We have developed one such representation in conjunction with our work on the SIGMA (Scientists' Intelligent Graphical Modeling Assistant) environment. The key feature of the representation is that it provides a semantic grounding for the symbols in a set of modeling equations by linking those symbols to an explicit representation of the underlying environmental scenario.

  9. Knowledge-based visualization of time-oriented clinical data.

    PubMed Central

    Shahar, Y.; Cheng, C.

    1998-01-01

    We describe a domain-independent framework (KNAVE) specific to the task of interpretation, summarization, visualization, explanation, and interactive exploration in a context-sensitive manner through time-oriented raw clinical data and the multiple levels of higher-level, interval-based concepts that can be abstracted from these data. The KNAVE exploration operators, which are independent of any particular clinical domain, access a knowledge base of temporal properties of measured data and interventions that is specific to the clinical domain. Thus, domain-specific knowledge underlies the domain-independent semantics of the interpretation, visualization, and exploration processes. Initial evaluation of the KNAVE prototype by a small number of users with variable clinical and informatics training has been encouraging. PMID:9929201
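
    The temporal-abstraction idea underlying KNAVE (raw, time-stamped values abstracted into interval-based concepts using domain-specific knowledge) can be sketched minimally as follows; the thresholds and the white-blood-cell series are hypothetical, and real temporal-abstraction mechanisms are considerably richer.

    ```python
    # Minimal sketch: map raw time-stamped values to qualitative states using
    # domain-specific thresholds (hypothetical here), then merge adjacent
    # equal states into intervals.
    def abstract_states(samples, low, high):
        """samples: list of (time, value); returns list of (start, end, state)."""
        def state(v):
            return "LOW" if v < low else "HIGH" if v > high else "NORMAL"

        intervals = []
        for t, v in samples:
            s = state(v)
            if intervals and intervals[-1][2] == s:
                intervals[-1] = (intervals[-1][0], t, s)   # extend current interval
            else:
                intervals.append((t, t, s))
        return intervals

    wbc = [(1, 3.2), (2, 3.0), (3, 4.5), (4, 4.8), (5, 11.2)]   # hypothetical counts
    print(abstract_states(wbc, low=4.0, high=10.0))
    ```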

  10. Knowledge-based control for robot self-localization

    NASA Technical Reports Server (NTRS)

    Bennett, Bonnie Kathleen Holte

    1993-01-01

    Autonomous robot systems are being proposed for a variety of missions including the Mars rover/sample return mission. Prior to any other mission objectives being met, an autonomous robot must be able to determine its own location. This will be especially challenging because location sensors like GPS, which are available on Earth, will not be useful, nor will INS sensors because their drift is too large. Another approach to self-localization is required. In this paper, we describe a novel approach to localization by applying a problem solving methodology. The term 'problem solving' implies a computational technique based on logical representational and control steps. In this research, these steps are derived from observing experts solving localization problems. The objective is not specifically to simulate human expertise but rather to apply its techniques where appropriate for computational systems. In doing this, we describe a model for solving the problem and a system built on that model, called localization control and logic expert (LOCALE), which is a demonstration of concept for the approach and the model. The results of this work represent the first successful solution to high-level control aspects of the localization problem.

  11. ECGene: A Literature-Based Knowledgebase of Endometrial Cancer Genes.

    PubMed

    Zhao, Min; Liu, Yining; O'Mara, Tracy A

    2016-04-01

    Endometrial cancer (EC) ranks as the sixth most common cancer in women worldwide. To better distinguish cancer subtypes and identify effective early diagnostic biomarkers, we need improved understanding of the biological mechanisms associated with EC dysregulated genes. Although there is a wealth of clinical and molecular information relevant to EC in the literature, there has been no systematic summary of EC-implicated genes. In this study, we developed a literature-based database ECGene (Endometrial Cancer Gene database) with comprehensive annotations. ECGene features manual curation of 414 genes from thousands of publications, results from eight EC gene expression datasets, precomputation of coexpressed long noncoding RNAs, and an EC-implicated gene interactome. In the current release, we generated and comprehensively annotated a list of 458 EC-implicated genes. We found that the top-ranked EC-implicated genes are frequently mutated in The Cancer Genome Atlas (TCGA) tumor samples. Furthermore, systematic analysis of coexpressed lncRNAs provided insight into the important roles of lncRNA in EC development. ECGene has a user-friendly Web interface and is freely available at http://ecgene.bioinfo-minzhao.org/. As the first literature-based online resource for EC, ECGene serves as a useful gateway for researchers to explore EC genetics. PMID:26699919

  12. Robotic grasping of unknown objects: A knowledge-based approach

    SciTech Connect

    Stansfield, S.A.

    1989-06-01

    In this paper, we demonstrate a general-purpose robotic grasping system for use in unstructured environments. Using computer vision and a compact set of heuristics, the system automatically generates the robot arm and hand motions required for grasping an unmodeled object. The utility of such a system is most evident in environments where the robot will have to grasp and manipulate a variety of unknown objects, but where many of the manipulation tasks may be relatively simple. Examples of such domains are planetary exploration and astronaut assistance, undersea salvage and rescue, and nuclear waste site clean-up. This work implements a two-stage model of grasping: stage one is an orientation of the hand and wrist and a ballistic reach toward the object; stage two is hand preshaping and adjustment. Visual features are first extracted from the unmodeled object. These features and their relations are used by an expert system to generate a set of valid reach/grasps for the object. These grasps are then used in driving the robot hand and arm to bring the fingers into contact with the object in the desired configuration. Experimental results are presented to illustrate the functioning of the system. 27 refs., 38 figs.

  13. Robotic grasping of unknown objects: A knowledge-based approach

    SciTech Connect

    Stansfield, S.A.

    1990-11-01

    In this paper, the authors demonstrate a general-purpose robotic grasping system for use in unstructured environments. Using computer vision and a compact set of heuristics, the system automatically generates the robot arm and hand motions required for grasping an unmodeled object. The utility of such a system is most evident in environments where the robot will have to grasp and manipulate a variety of unknown objects, but where many of the manipulation tasks may be relatively simple. Examples of such domains are planetary exploration and astronaut assistance, undersea salvage and rescue, and nuclear waste site clean-up. This work implements a two-stage model of grasping: stage one is an orientation of the hand and wrist and a ballistic reach toward the object; stage two is hand preshaping and adjustment. Visual features are first extracted from the unmodeled object. These features and their relations are used by an expert system to generate a set of valid reach/grasps for the object. These grasps are then used in driving the robot hand and arm to bring the fingers into contact with the object in the desired configuration. Experimental results are presented to illustrate the functioning of the system.

  14. Robotic grasping of unknown objects: A knowledge-based approach

    SciTech Connect

    Stansfield, S.A. )

    1991-08-01

    In this article the author describes a general-purpose robotic grasping system for use in unstructured environments. Using computer vision and a compact set of heuristics, the system automatically generates the robot arm and hand motions required for grasping an unmodeled object. The utility of such a system is most evident in environments where the robot will have to grasp and manipulate a variety of unknown objects, but where many of the manipulation tasks may be relatively simple. Examples of such domains are planetary exploration and astronaut assistance, undersea salvage and rescue, and nuclear waste site clean-up. This work implements a two-stage model of grasping: stage one is an orientation of the hand and wrist and a ballistic reach toward the object; stage two is hand preshaping and adjustment. Visual features are first extracted from the unmodeled object. These features and their relations are used by an expert system to generate a set of valid reach/grasps for the object. These grasps are then used in driving the robot hand and arm to bring the fingers into contact with the object in the desired configuration. Experimental results are presented to illustrate the functioning of the system.

  15. A model for a knowledge-based system's life cycle

    NASA Technical Reports Server (NTRS)

    Kiss, Peter A.

    1990-01-01

    The American Institute of Aeronautics and Astronautics has initiated a Committee on Standards for Artificial Intelligence. Presented here are the initial efforts of one of the working groups of that committee. The purpose here is to present a candidate model for the development life cycle of Knowledge Based Systems (KBS). The intent is for the model to be used by the aerospace community and eventually be evolved into a standard. The model is rooted in the evolutionary model, borrows from the spiral model, and is embedded in the standard waterfall model for software development. Its intent is to support the development of both stand-alone and embedded KBSs. The phases of the life cycle are detailed, as are the review points that constitute the key milestones throughout the development process. The applicability and strengths of the model are discussed, along with areas needing further development and refinement by the aerospace community.

  16. The AI Bus architecture for distributed knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Schultz, Roger D.; Stobie, Iain

    1991-01-01

    The AI Bus architecture is a layered, distributed, object-oriented framework developed to support the requirements of advanced technology programs for an order-of-magnitude improvement in software costs. The consequent need for highly autonomous computer systems, adaptable to new technology advances over a long lifespan, led to the design of an open architecture and toolbox for building large-scale, robust, production-quality systems. The AI Bus accommodates a mix of knowledge-based and conventional components running in a heterogeneous, distributed real-world and testbed environment. The concepts and design of the AI Bus architecture are described, along with the current status of its implementation as a Unix C++ library of reusable objects. Each high-level semiautonomous agent process consists of a number of knowledge sources together with interagent communication mechanisms based on shared blackboards and message-passing acquaintances. Standard interfaces and protocols are followed for combining and validating subsystems. Dynamic probes, or demons, provide an event-driven means of giving active objects shared access to resources and to each other while not violating their security.

  17. Knowledge-based optical coatings design and manufacturing

    NASA Astrophysics Data System (ADS)

    Guenther, Karl H.; Gonzalez, Avelino J.; Yoo, Hoi J.

    1990-12-01

    The theory of thin film optics is well developed for the spectral analysis of a given optical coating. The inverse synthesis - designing an optical coating for a certain spectral performance - is more complicated. Usually a multitude of theoretical designs is feasible because most design problems are over-determined with the number of layers possible with three variables each (n, k, t). The expertise of a good thin film designer comes in at this point with a mostly intuitive selection of certain designs based on previous experience and current manufacturing capabilities. Manufacturing a designed coating poses yet another subset of multiple solutions, as thin film deposition technology has evolved over the years with a vast variety of different processes. The abundance of published literature may often be more confusing than helpful to the practicing thin film engineer, even if he has time and opportunity to read it. The choice of the right process is also severely limited by the given manufacturing hardware and cost considerations which may not easily allow for the adoption of a new manufacturing approach, even if it promises to be better technically (it ought to be also cheaper). On the user end of the thin film coating business, the typical optical designer or engineer who needs an optical coating may have limited or no knowledge at all about the theoretical and manufacturing criteria for the optimum selection of what he needs. This can be sensed frequently in overly tight tolerances and requirements for optical performance which sometimes stretch the limits of mother nature. We introduce here a knowledge-based system (KBS) intended to assist expert designers and manufacturers in their task of maximizing results and minimizing errors, trial runs, and unproductive time. It will help the experts to manipulate parameters which are largely determined through heuristic reasoning by employing artificial intelligence techniques. In a later stage, the KBS will include a module allowing the layman user of coatings to make the right choice.

  18. Design of a knowledge-based welding advisor

    SciTech Connect

    Kleban, S.D.

    1996-06-01

    Expert system implementation can take numerous forms, ranging from traditional declarative rule-based systems with if-then syntax to imperative programming languages that capture expertise in procedural code. The artificial intelligence community generally thinks of expert systems as rules or rule bases and an inference engine to process the knowledge. The welding advisor developed at Sandia National Laboratories and described in this paper deviates from this by codifying expertise using object representations and methods. Objects allow computer scientists to model the world as humans perceive it, giving us a very natural way to encode expert knowledge. The design of the welding advisor, which generates and evaluates solutions, will be compared and contrasted to a traditional rule-based system.
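
    To make the contrast concrete, the sketch below shows the same piece of expertise expressed once as declarative if-then data (the style an inference engine would process) and once as a method on a domain object, the style the welding advisor adopts. The weld parameters and the heuristic are hypothetical, not Sandia's actual rules.

    ```python
    # Minimal, hypothetical sketch contrasting rule-based and object-based
    # encodings of the same expertise; not Sandia's actual welding knowledge.

    # Declarative style: if-then data that an inference engine would interpret.
    RULES = [
        {"if": {"thickness_mm": ("<", 3.0)}, "then": {"process": "GTAW"}},
        {"if": {"thickness_mm": (">=", 3.0)}, "then": {"process": "GMAW"}},
    ]

    # Object style: the same heuristic captured as a method on a domain object.
    class WeldJoint:
        def __init__(self, thickness_mm):
            self.thickness_mm = thickness_mm

        def recommend_process(self):
            # Hypothetical heuristic: thin sections get GTAW, thicker get GMAW.
            return "GTAW" if self.thickness_mm < 3.0 else "GMAW"

    print(WeldJoint(2.0).recommend_process())   # GTAW
    ```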

  19. Compiling knowledge-based systems specified in KEE to Ada

    NASA Technical Reports Server (NTRS)

    Filman, Robert E.; Feldman, Roy D.

    1991-01-01

    The first year of the PrKAda project is recounted. The primary goal was to develop a system for delivering Artificial Intelligence applications developed in the ProKappa system in a pure-Ada environment. The following areas are discussed: the ProKappa core and ProTalk programming language; the current status of the implementation; the limitations and restrictions of the current system; and the development of Ada-language message handlers in the ProKappa environment.

  20. Special Issue: Decision Support and Knowledge-Based Systems.

    ERIC Educational Resources Information Center

    Stohr, Edward A.; And Others

    1987-01-01

    Six papers dealing with decision support and knowledge based systems are presented. Five of the papers are concerned in some way with the use of artificial intelligence techniques in individual or group decision support. The sixth paper presents empirical results from the use of a group decision support system. (CLB)

  1. Multilingual Knowledge-Based Concept Recognition in Textual Data

    NASA Astrophysics Data System (ADS)

    Schierle, Martin; Trabold, Daniel

    Given the increasing volume of textual data available through digital resources today, identifying the main concepts in those texts becomes increasingly important and can be seen as a vital step in the analysis of unstructured information.

  2. Knowledge-based model building of proteins: concepts and examples.

    PubMed Central

    Bajorath, J.; Stenkamp, R.; Aruffo, A.

    1993-01-01

    We describe how to build protein models from structural templates. Methods to identify structural similarities between proteins in cases of significant, moderate to low, or virtually absent sequence similarity are discussed. The detection and evaluation of structural relationships is emphasized as a central aspect of protein modeling, distinct from the more technical aspects of model building. Computational techniques to generate and complement comparative protein models are also reviewed. Two examples, P-selectin and gp39, are presented to illustrate the derivation of protein model structures and their use in experimental studies. PMID:7505680

  3. Thermal Performance Testing of EMU and CSAFE Liquid Cooling Garments

    NASA Technical Reports Server (NTRS)

    Rhodes, Richard; Bue, Grant; Meginnis, Ian; Hakam, Mary; Radford, Tamara

    2013-01-01

    Future exploration missions require the development of a new liquid cooling garment (LCG) to support the next generation extravehicular activity (EVA) suit system. The new LCG must offer greater system reliability, optimal thermal performance as required by mission directive, and meet other design requirements including improved tactile comfort. To advance the development of a future LCG, a thermal performance test was conducted to evaluate: (1) the comparable thermal performance of the EMU LCG and the CSAFE developed engineering evaluation unit (EEU) LCG, (2) the effect of the thermal comfort undergarment (TCU) on the EMU LCG tactile and thermal comfort, and (3) the performance of a torso or upper body only LCG shirt to evaluate a proposed auxiliary loop. To evaluate the thermal performance of each configuration, a metabolic test was conducted using the Demonstrator Spacesuit to create a relevant test environment. Three (3) male test subjects of similar height and weight walked on a treadmill at various speeds to produce three different metabolic loads - resting (300-600 BTU/hr), walking at a slow pace (1200 BTU/hr), and walking at a brisk pace (2200 BTU/hr). Each subject participated in five tests - two wearing the CSAFE full LCG, one wearing the EMU LCG without TCUs, one wearing the EMU LCG with TCUs, and one with the CSAFE shirt-only. During the test, performance data for the breathing air and cooling water systems and subject specific data was collected to define the thermal performance of the configurations. The test results show that the CSAFE EEU LCG and EMU LCG with TCU had comparable performance. The testing also showed that an auxiliary loop LCG, sized similarly to the shirt-only configuration, should provide adequate cooling for contingency scenarios. Finally, the testing showed that the TCU did not significantly hinder LCG heat transfer, and may prove to be acceptable for future suit use with additional analysis and testing.

  4. Applications of the automatic change detection for disaster monitoring by the knowledge-based framework

    NASA Astrophysics Data System (ADS)

    Tadono, T.; Hashimoto, S.; Onosato, M.; Hori, M.

    2012-11-01

    Change detection is a fundamental approach in the utilization of satellite remote sensing imagery, especially in multi-temporal analysis that involves, for example, extracting areas damaged by a natural disaster. Recently, the amount of data obtained by Earth observation satellites has increased significantly owing to the increasing number and types of observing sensors, the enhancement of their spatial resolution, and improvements in their data processing systems. In applications for disaster monitoring, in particular, fast and accurate analysis of broad geographical areas is required to facilitate efficient rescue efforts, so robust automatic image interpretation is needed. Several algorithms have been proposed for automatic change detection in the past; however, they still lack robustness across multiple purposes, independence from the observing instrument, and accuracy better than manual interpretation. We are developing a framework for automatic image interpretation using ontology-based knowledge representation. This framework permits the description, accumulation, and use of knowledge drawn from image interpretation. Local relationships among certain concepts defined in the ontology are described as knowledge modules and are collected in the knowledge base. The knowledge representation uses a Bayesian network as a tool to describe various types of knowledge in a uniform manner. Knowledge modules are synthesized and used for target-specified inference. Results obtained by applying the framework, without any modification or tuning, to two types of disasters are shown in this paper.
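
    As a rough, hypothetical illustration of how evidence from several knowledge modules can be combined in a Bayesian manner, the sketch below fuses two independent cues into a posterior over "damaged" versus "intact" for an image region. The prior and likelihood values are invented for the example and are not taken from the paper's framework.

    ```python
    # Minimal sketch (NumPy only) of naive-Bayes style evidence combination
    # for a "damaged" vs "intact" decision. All probabilities are hypothetical.
    import numpy as np

    prior = np.array([0.1, 0.9])                 # P(damaged), P(intact)

    # P(evidence | class) for two independent cues: brightness change, edge loss
    p_bright = np.array([0.8, 0.2])
    p_edges  = np.array([0.7, 0.3])

    posterior = prior * p_bright * p_edges       # combine cues with Bayes' rule
    posterior /= posterior.sum()
    print(posterior)                             # damage probability ~0.51
    ```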

  5. Knowledge-based analysis of microarray gene expression data by using support vector machines

    SciTech Connect

    William Grundy; Manuel Ares, Jr.; David Haussler

    2001-06-18

    The authors introduce a method of functionally classifying genes by using gene expression data from DNA microarray hybridization experiments. The method is based on the theory of support vector machines (SVMs). SVMs are considered a supervised computer learning method because they exploit prior knowledge of gene function to identify unknown genes of similar function from expression data. SVMs avoid several problems associated with unsupervised clustering methods, such as hierarchical clustering and self-organizing maps. SVMs have many mathematical features that make them attractive for gene expression analysis, including their flexibility in choosing a similarity function, sparseness of solution when dealing with large data sets, the ability to handle large feature spaces, and the ability to identify outliers. They test several SVMs that use different similarity metrics, as well as some other supervised learning methods, and find that the SVMs best identify sets of genes with a common function using expression data. Finally, they use SVMs to predict functional roles for uncharacterized yeast ORFs based on their expression data.
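
    A minimal sketch of the supervised workflow described above, assuming scikit-learn and using random stand-in data rather than microarray measurements: train an SVM on expression profiles of genes with known function, then predict membership for uncharacterized genes.

    ```python
    # Minimal sketch of SVM-based functional classification of genes.
    # The expression matrix is random stand-in data, not microarray output.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X_known = rng.normal(size=(40, 79))     # 40 genes x 79 expression conditions
    y_known = np.array([1] * 20 + [0] * 20) # 1 = member of the functional class

    clf = SVC(kernel="rbf", gamma="scale")  # one choice of similarity function
    clf.fit(X_known, y_known)

    X_unknown = rng.normal(size=(5, 79))    # uncharacterized ORFs
    print(clf.predict(X_unknown))           # predicted functional membership
    ```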

  6. MytiBase: a knowledgebase of mussel (M. galloprovincialis) transcribed sequences

    PubMed Central

    Venier, Paola; De Pittà, Cristiano; Bernante, Filippo; Varotto, Laura; De Nardi, Barbara; Bovo, Giuseppe; Roch, Philippe; Novoa, Beatriz; Figueras, Antonio; Pallavicini, Alberto; Lanfranchi, Gerolamo

    2009-01-01

    Background Although bivalves are among the most studied marine organisms due to their ecological role, economic importance and use in pollution biomonitoring, very little information is available on the genome sequences of mussels. This study reports the functional analysis of a large-scale Expressed Sequence Tag (EST) sequencing effort covering different tissues of Mytilus galloprovincialis (the Mediterranean mussel) challenged with toxic pollutants, temperature and potentially pathogenic bacteria. Results We have constructed and sequenced seventeen cDNA libraries from different Mediterranean mussel tissues: gills, digestive gland, foot, anterior and posterior adductor muscle, mantle and haemocytes. A total of 24,939 clones were sequenced from these libraries, generating 18,788 high-quality ESTs which were assembled into 2,446 overlapping clusters and 4,666 singletons, resulting in a total of 7,112 non-redundant sequences. In particular, a high-quality normalized cDNA library (Nor01) was constructed, as determined by the high rate of gene discovery (65.6%). Bioinformatic screening of the non-redundant M. galloprovincialis sequences identified 159 microsatellite-containing ESTs. Clusters, consensuses, related similarities and gene ontology searches have been organized in a dedicated, searchable database. Conclusion We defined the first species-specific catalogue of M. galloprovincialis ESTs including 7,112 unique transcribed sequences. Putative microsatellite markers were identified. This annotated catalogue represents a valuable platform for expression studies, marker validation and genetic linkage analysis for investigations in the biology of Mediterranean mussels. PMID:19203376

  7. Knowledge-based changes to health systems: the Thai experience in policy development.

    PubMed Central

    Tangcharoensathien, Viroj; Wibulpholprasert, Suwit; Nitayaramphong, Sanguan

    2004-01-01

    Over the past two decades the government in Thailand has adopted an incremental approach to extending health-care coverage to the population. It first offered coverage to government employees and their dependents, and then introduced a scheme under which low-income people were exempt from charges for health care. This scheme was later extended to include elderly people, children younger than 12 years of age and disabled people. A voluntary public insurance scheme was implemented to cover those who could afford to pay for their own care. Private sector employees were covered by the Social Health Insurance scheme, which was implemented in 1991. Despite these efforts, 30% of the population remained uninsured in 2001. In October of that year, the new government decided to embark on a programme to provide universal health-care coverage. This paper describes how research into health systems and health policy contributed to the move towards universal coverage. Data on health systems financing and functioning had been gathered before and after the founding of the Health Systems Research Institute in early 1990. In 1991, a contract capitation model had been used to launch the Social Health Insurance scheme. The advantages of using a capitation model are that it contains costs and provides an acceptable quality of service as opposed to the cost escalation and inefficiency that occur under fee-for-service reimbursement models, such as the one used to provide medical benefits to civil servants. An analysis of the implementation of universal coverage found that politics moved universal coverage onto the policy agenda during the general election campaign in January 2001. The capacity for research on health systems and policy to generate evidence guided the development of the policy and the design of the system at a later stage. Because the reformists who sought to bring about universal coverage (who were mostly civil servants in the Ministry of Public Health and members of nongovernmental organizations) were able to bridge the gap between researchers and politicians, an evidence-based political decision was made. Additionally, the media played a part in shaping the societal consensus on universal coverage. PMID:15643796

  8. Knowledge-Based CAI: CINS for Individualized Curriculum Sequencing. Final Technical Report No. 290.

    ERIC Educational Resources Information Center

    Wescourt, Keith T.; And Others

    This report describes research on the Curriculum Information Network (CIN) paradigm for computer-assisted instruction (CAI) in technical subjects. The CIN concept was first conceived and implemented in the BASIC Instructional Program (BIP). The primary objective of CIN-based CAI and the BIP project has been to develop procedures for providing each…

  9. A knowledge-based system for monitoring the electrical power system of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Eddy, Pat

    1987-01-01

    The design and prototype of the expert system for the Hubble Space Telescope's electrical power system are discussed. This prototype demonstrated the capability to use real-time data from a 32k telemetry stream and to perform operational health and safety status monitoring, detect trends such as battery degradation, and detect anomalies such as solar array failures. This prototype, along with the pointing control system and data management system expert systems, forms the initial Telemetry Analysis for Lockheed Operated Spacecraft (TALOS) capability.
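
    The record above does not include the prototype's rule base, but as a loose illustration of one kind of check such a monitor performs, the sketch below flags a downward trend in a telemetry channel using a least-squares slope; the channel, sample data, and threshold are invented for illustration and are not taken from the TALOS prototype.

      # Illustrative trend check on a battery-voltage telemetry channel;
      # the channel, sample data, and threshold are invented examples.
      def degradation_trend(samples, threshold=-0.002):
          """samples: list of (time_s, value) pairs. Flags a persistent
          downward trend by testing the least-squares slope against a threshold."""
          n = len(samples)
          mean_t = sum(t for t, _ in samples) / n
          mean_v = sum(v for _, v in samples) / n
          slope = (sum((t - mean_t) * (v - mean_v) for t, v in samples)
                   / sum((t - mean_t) ** 2 for t, _ in samples))
          return slope < threshold, slope

      battery_volts = [(0, 28.0), (60, 27.8), (120, 27.5), (180, 27.1)]
      print(degradation_trend(battery_volts))   # (True, -0.005) -> raise an alert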

  10. Using Knowledge-Based Systems to Support Learning of Organizational Knowledge: A Case Study

    NASA Technical Reports Server (NTRS)

    Cooper, Lynne P.; Nash, Rebecca L.; Phan, Tu-Anh T.; Bailey, Teresa R.

    2003-01-01

    This paper describes the deployment of a knowledge system to support learning of organizational knowledge at the Jet Propulsion Laboratory (JPL), a US national research laboratory whose mission is planetary exploration and to 'do what no one has done before.' Data collected over 19 weeks of operation were used to assess system performance with respect to design considerations, participation, effectiveness of communication mechanisms, and individual-based learning. These results are discussed in the context of organizational learning research and implications for practice.

  11. Exploring Architecture Options for a Federated, Cloud-based System Biology Knowledgebase

    SciTech Connect

    Gorton, Ian; Liu, Yan; Yin, Jian

    2010-12-02

    This paper evaluates various cloud computing technologies and resources for building a systems biology knowledgebase. The system will host a large amount of data and provide a flexible set of workflows to operate on these data. It will enable systems biologists to share their data and algorithms so that research results can be reproduced, shared, and reused across the systems biology community.

  12. Constraint-based component-modeling for knowledge-based design

    NASA Technical Reports Server (NTRS)

    Kolb, Mark A.

    1992-01-01

    The paper describes the application of advanced programming techniques derived from artificial intelligence research to the development of flexible tools for conceptual design. Special attention is given to two techniques that appear readily applicable to such design tools: constraint propagation and object-oriented programming. The implementation of these techniques in a prototype computer tool, Rubber Airplane, is described.
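
    As a rough illustration of the constraint propagation idea named above (not the Rubber Airplane implementation itself), the following Python sketch propagates design-parameter values through a small constraint network; the Cell/Constraint classes and the wing-area example are invented for illustration.

      # Minimal constraint-propagation sketch (illustrative only; not the
      # Rubber Airplane implementation described in the record above).
      class Cell:
          """Holds one design parameter; notifies attached constraints when set."""
          def __init__(self, name):
              self.name, self.value, self.constraints = name, None, []

          def set(self, value):
              if self.value is None:
                  self.value = value
                  for c in self.constraints:
                      c.propagate()          # wake up attached constraints

      class Constraint:
          """When all but one connected cell is known, solves for the last one."""
          def __init__(self, cells, solver):
              self.cells, self.solver = cells, solver
              for cell in cells:
                  cell.constraints.append(self)

          def propagate(self):
              unknown = [c for c in self.cells if c.value is None]
              if len(unknown) == 1:
                  known = {c.name: c.value for c in self.cells if c.value is not None}
                  unknown[0].set(self.solver(unknown[0].name, known))

      # Hypothetical example: wing area S relates span b and mean chord c via S = b * c.
      def area_solver(target, known):
          if target == "S":
              return known["b"] * known["c"]
          if target == "b":
              return known["S"] / known["c"]
          return known["S"] / known["b"]

      b, c, S = Cell("b"), Cell("c"), Cell("S")
      Constraint([b, c, S], area_solver)
      b.set(10.0)
      c.set(1.5)
      print(S.value)   # 15.0, filled in by propagation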

  13. Knowledge-based decision support for Space Station assembly sequence planning

    NASA Technical Reports Server (NTRS)

    1991-01-01

    A complete Personal Analysis Assistant (PAA) for Space Station Freedom (SSF) assembly sequence planning consists of three software components: the system infrastructure, intra-flight value added, and inter-flight value added. The system infrastructure is the substrate on which software elements providing inter-flight and intra-flight value-added functionality are built. It provides the capability for building representations of assembly sequence plans and specification of constraints and analysis options. Intra-flight value-added provides functionality that will, given the manifest for each flight, define cargo elements, place them in the National Space Transportation System (NSTS) cargo bay, compute performance measure values, and identify violated constraints. Inter-flight value-added provides functionality that will, given major milestone dates and capability requirements, determine the number and dates of required flights and develop a manifest for each flight. The current project is Phase 1 of a projected two phase program and delivers the system infrastructure. Intra- and inter-flight value-added were to be developed in Phase 2, which has not been funded. Based on experience derived from hundreds of projects conducted over the past seven years, ISX developed an Intelligent Systems Engineering (ISE) methodology that combines the methods of systems engineering and knowledge engineering to meet the special systems development requirements posed by intelligent systems, systems that blend artificial intelligence and other advanced technologies with more conventional computing technologies. The ISE methodology defines a phased program process that begins with an application assessment designed to provide a preliminary determination of the relative technical risks and payoffs associated with a potential application, and then moves through requirements analysis, system design, and development.

  14. Implementing a Knowledge-Based Library Information System with Typed Horn Logic.

    ERIC Educational Resources Information Center

    Ait-Kaci, Hassan; And Others

    1990-01-01

    Describes a prototype library expert system called BABEL, which uses a new programming language, LOGIN, that combines the idea of attribute inheritance with logic programming. The use of a hierarchical classification of library objects to build a knowledge base for a library information system is explained, and further research is suggested. (11…

  15. An Architecture for Performance Optimization in a Collaborative Knowledge-Based Approach for Wireless Sensor Networks

    PubMed Central

    Gadeo-Martos, Manuel Angel; Fernandez-Prieto, Jose Angel; Canada-Bago, Joaquin; Velasco, Juan Ramon

    2011-01-01

    Over the past few years, Intelligent Spaces (ISs) have received the attention of many Wireless Sensor Network researchers. Recently, several studies have been devoted to identify their common capacities and to set up ISs over these networks. However, little attention has been paid to integrating Fuzzy Rule-Based Systems into collaborative Wireless Sensor Networks for the purpose of implementing ISs. This work presents a distributed architecture proposal for collaborative Fuzzy Rule-Based Systems embedded in Wireless Sensor Networks, which has been designed to optimize the implementation of ISs. This architecture includes the following: (a) an optimized design for the inference engine; (b) a visual interface; (c) a module to reduce the redundancy and complexity of the knowledge bases; (d) a module to evaluate the accuracy of the new knowledge base; (e) a module to adapt the format of the rules to the structure used by the inference engine; and (f) a communications protocol. As a real-world application of this architecture and the proposed methodologies, we show an application to the problem of modeling two plagues of the olive tree: prays (olive moth, Prays oleae Bern.) and repilo (caused by the fungus Spilocaea oleagina). The results show that the architecture presented in this paper significantly decreases the consumption of resources (memory, CPU and battery) without a substantial decrease in the accuracy of the inferred values. PMID:22163687
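
    The record above does not give its rule base, but as a loose sketch of how a fuzzy rule embedded in a sensor node can map local readings to an inferred risk value, consider the following; the membership functions, the single rule, and the numeric values are invented for illustration and are not taken from the olive-plague models.

      # Illustrative fuzzy-inference sketch for a sensor node; the membership
      # functions, rule, and numbers are assumptions, not the paper's values.
      def tri(x, a, b, c):
          """Triangular membership function peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def infer_risk(temperature, humidity):
          # Fuzzify the inputs (hypothetical 'high temperature' / 'high humidity' sets).
          high_temp = tri(temperature, 15.0, 25.0, 35.0)
          high_hum = tri(humidity, 60.0, 80.0, 100.0)
          # Single rule: IF temperature is high AND humidity is high THEN risk is high.
          firing = min(high_temp, high_hum)
          # Simplified (singleton) defuzzification: scale the output set's centre.
          return firing * 0.9   # 0.9 = centre of the hypothetical 'high risk' set

      print(infer_risk(24.0, 78.0))   # 0.81 -> high inferred plague risk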

  16. A knowledge-based system design/information tool for aircraft flight control systems

    NASA Technical Reports Server (NTRS)

    Mackall, Dale A.; Allen, James G.

    1989-01-01

    Research aircraft have become increasingly dependent on advanced control systems to accomplish program goals. These aircraft integrate multiple disciplines to improve performance and satisfy research objectives, and this integration is accomplished through electronic control systems. Because of the number of systems involved and the variety of engineering disciplines, systems design methods and information management have become essential to program success. The primary objective of the system design/information tool for aircraft flight control systems is to help transfer flight control system design knowledge to the flight test community. By providing all of the design information and covering multiple disciplines in a structured, graphical manner, flight control systems can more easily be understood by the test engineers. This gives the engineers the information needed to thoroughly ground test the system and thereby reduces the likelihood of serious design errors surfacing in flight. The secondary objective is to apply structured design techniques to all of the design domains. By using these techniques from the top-level system design down through the detailed hardware and software designs, it is hoped that fewer design anomalies will result. The flight test experiences of three highly complex, integrated aircraft programs are reviewed: the X-29 forward-swept wing, the advanced fighter technology integration (AFTI) F-16, and the highly maneuverable aircraft technology (HiMAT) program. Significant operating anomalies, and the design errors that caused them, are examined to help identify what functions a system design/information tool should provide to assist designers in avoiding errors.

  17. A knowledge-based shell for selecting a nondestructive evaluation technique

    SciTech Connect

    Roberge, P.R.

    1995-02-01

    The complexity of planning a nondestructive evaluation (NDE) program or an inspection schedule for specific problems and available NDE techniques can be drastically reduced by the creation of a knowledge-based system that balances the advantages and limitations of each technique for specific problems. Such a system could incorporate the fundamental knowledge derived from failure analysis and cover topics such as materials vs. defect size and type, probability of failure, and basic reliability information. In order to organize knowledge of materials degradation efficiently, the parameters that control various forms of failure must first be rationalized in a general framework. This framework and its factors would then constitute a quantitative and easily programmable description of the independent variables controlling the intensity of a failure. This article describes such a framework, which could guide the general selection of NDE for materials failure, with a particular emphasis on corrosion-related failures. The framework architecture itself was constructed using an object-oriented methodology for maximum flexibility, because it was anticipated that the materials parameters could easily be described as multidimensional objects.

  18. A knowledge-based structure to improve learning lessons in the military

    SciTech Connect

    Roberge, P.R.; Trethewey, K.R.

    1996-10-01

    Life extension of military systems poses a continuous risk of soaring costs due to corrosion control measures. In a context where the majority of corrosion problems are a direct result of compounded human and organizational errors, there is scope for considerable savings to budgets by improving the process by which one learns from the lessons of the past. In this paper the use of the latest computer technology is discussed in the context of transforming often sterile reports and documents into easily accessible information systems.

  19. Knowledge-based formant tracking with confidence measure using dynamic programming

    NASA Astrophysics Data System (ADS)

    Manocha, Sandeep; Espy-Wilson, Carol Y.

    2005-09-01

    In this study, we report on refinements to a formant tracking technique originally reported in [Xia et al., ICSLP (2000)]. The formant tracker operates in two phases. First, it finds optimal formant track estimates in oral sonorant regions by imposing frequency continuity constraints using dynamic programming. Second, post-processing is performed to make the estimates more robust and accurate, and to extend formant tracks in nasal and obstruent regions. In recent work, we have improved on our initial estimates of the formants by combining the outputs from a 12th-order LPC analysis and a 16th-order LPC analysis. Additionally, we have added a confidence measure for each formant track in each frame. The confidence measure is based on formant continuity, competing formants, short-time energy, context information, and formant information over the entire utterance. The experiments show that most of the tracking errors are associated with a low confidence value, while the correct formants have high confidence values. The performance of the algorithm in the sonorant regions was tested using randomly selected male and female speech from the TIMIT database. [Work supported by NSF Grant No. BCS0236707.]
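
    As a generic illustration of the dynamic-programming step described above (choosing, frame by frame, the formant candidate that best trades spectral evidence against frequency continuity), here is a small Viterbi-style track selection in Python; the candidate frequencies, costs, and the continuity weight are invented for illustration, not values from the paper.

      # Generic Viterbi-style candidate selection for one formant track.
      # Costs and candidate frequencies below are illustrative assumptions.
      def track_formant(candidates, continuity_weight=1.0):
          """candidates: list (one entry per frame) of (freq_hz, local_cost) pairs.
          Returns the frequency sequence minimising local cost plus a
          frequency-continuity penalty between consecutive frames."""
          n = len(candidates)
          # best[f][k] = (total cost, backpointer) for candidate k in frame f
          best = [[(cost, None) for _, cost in candidates[0]]]
          for f in range(1, n):
              row = []
              for freq, cost in candidates[f]:
                  options = [
                      (best[f - 1][k][0]
                       + cost
                       + continuity_weight * abs(freq - candidates[f - 1][k][0]) / 1000.0,
                       k)
                      for k in range(len(candidates[f - 1]))
                  ]
                  row.append(min(options))
              best.append(row)
          # Trace back the cheapest path.
          k = min(range(len(best[-1])), key=lambda i: best[-1][i][0])
          path = []
          for f in range(n - 1, -1, -1):
              path.append(candidates[f][k][0])
              k = best[f][k][1] if best[f][k][1] is not None else k
          return list(reversed(path))

      # Hypothetical F1 candidates (Hz, spectral cost) for three frames.
      frames = [[(500, 0.1), (750, 0.3)],
                [(520, 0.2), (900, 0.1)],
                [(530, 0.2), (950, 0.4)]]
      print(track_formant(frames))   # [500, 520, 530]: the smooth low track wins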

  20. Knowledge-based probabilistic representations of branching ratios in chemical networks: The case of dissociative recombinations

    SciTech Connect

    Plessis, Sylvain; Carrasco, Nathalie; Pernot, Pascal

    2010-10-07

    Experimental data about branching ratios for the products of dissociative recombination of polyatomic ions are presently the unique information source available to modelers of natural or laboratory chemical plasmas. Yet, because of limitations in the measurement techniques, data for many ions are incomplete. In particular, the repartition of hydrogen atoms among the fragments of hydrocarbon ions is often not available. A consequence is that proper implementation of dissociative recombination processes in chemical models is difficult, and many models ignore invaluable data. We propose a novel probabilistic approach based on Dirichlet-type distributions, enabling modelers to fully account for the available information. As an application, we consider the production rate of radicals through dissociative recombination in an ionospheric chemistry model of Titan, the largest moon of Saturn. We show how the complete scheme of dissociative recombination products derived with our method dramatically affects these rates in comparison with the simplistic H-loss mechanism implemented by default in all recent models.
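
    As a purely illustrative sketch of the underlying idea (representing an uncertain branching-ratio vector with a Dirichlet distribution, so that every draw sums to one, and then propagating the draws into production rates), consider the following Python snippet; the channels, concentration parameters, and rate coefficient are assumptions, not values from the paper.

      # Illustrative use of a Dirichlet distribution to represent uncertain
      # branching ratios; channels and parameters below are assumptions.
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical dissociative-recombination channels for an ion AB+ + e-.
      channels = ["A + B", "A + B (H loss)", "smaller fragments"]
      # Dirichlet concentration parameters encoding partial measurements:
      # larger values correspond to better-constrained channels.
      alpha = np.array([8.0, 3.0, 1.0])

      # Draw many plausible branching-ratio vectors (each row sums to 1).
      samples = rng.dirichlet(alpha, size=10000)

      k_total = 7.0e-7                   # hypothetical total rate coefficient (cm^3 s^-1)
      channel_rates = k_total * samples  # per-channel rate coefficients

      mean = channel_rates.mean(axis=0)
      lo, hi = np.percentile(channel_rates, [5, 95], axis=0)
      for name, m, l, h in zip(channels, mean, lo, hi):
          print(f"{name}: {m:.2e} (90% interval {l:.2e} - {h:.2e})")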

  1. miRegulome: a knowledge-base of miRNA regulomics and analysis

    PubMed Central

    Barh, Debmalya; Kamapantula, Bhanu; Jain, Neha; Nalluri, Joseph; Bhattacharya, Antaripa; Juneja, Lucky; Barve, Neha; Tiwari, Sandeep; Miyoshi, Anderson; Azevedo, Vasco; Blum, Kenneth; Kumar, Anil; Silva, Artur; Ghosh, Preetam

    2015-01-01

    miRNAs regulate gene expression post-transcriptionally by targeting multiple mRNAs and hence can modulate multiple signalling pathways, biological processes, and patho-physiologies. Therefore, an understanding of miRNA regulatory networks is essential in order to modulate the functions of a miRNA. The focus of several existing databases is to provide information on specific aspects of miRNA regulation. However, an integrated resource on the miRNA regulome is currently not available to facilitate the exploration and understanding of miRNA regulomics. miRegulome attempts to bridge this gap. The current version of miRegulome v1.0 provides details on the entire regulatory modules of miRNAs altered in response to chemical treatments and transcription factors, based on validated data manually curated from the published literature. Modules of miRegulome (upstream regulators, downstream targets, miRNA-regulated pathways, functions, diseases, etc.) are hyperlinked to appropriate external resources and are displayed visually to provide a comprehensive understanding. Four analysis tools are incorporated to identify relationships among different modules based on user-specified datasets. miRegulome and its tools are helpful in understanding the biology of miRNAs and will also facilitate the discovery of biomarkers and therapeutics. With added features in upcoming releases, miRegulome will be an essential resource for the scientific community. Availability: http://bnet.egr.vcu.edu/miRegulome. PMID:26243198

  2. Knowledge-based image data management - An expert front-end for the BROWSE facility

    NASA Technical Reports Server (NTRS)

    Stoms, David M.; Star, Jeffrey L.; Estes, John E.

    1988-01-01

    An intelligent user interface being added to the NASA-sponsored BROWSE testbed facility is described. BROWSE is a prototype system designed to explore issues involved in locating image data in distributed archives and displaying low-resolution versions of that imagery at a local terminal. For prototyping, the initial application is the remote sensing of forest and range land.

  3. Sensor explication: knowledge-based robotic plan execution through logical objects.

    PubMed

    Budenske, J; Gini, M

    1997-01-01

    Complex robot tasks are usually described as high level goals, with no details on how to achieve them. However, details must be provided to generate primitive commands to control a real robot. A sensor explication concept that makes details explicit from general commands is presented. We show how the transformation from high-level goals to primitive commands can be performed at execution time and we propose an architecture based on reconfigurable objects that contain domain knowledge and knowledge about the sensors and actuators available. Our approach is based on two premises: 1) plan execution is an information gathering process where determining what information is relevant is a great part of the process; and 2) plan execution requires that many details are made explicit. We show how our approach is used in solving the task of moving a robot to and through an unknown, and possibly narrow, doorway; where sonic range data is used to find the doorway, walls, and obstacles. We illustrate the difficulty of such a task using data from a large number of experiments we conducted with a real mobile robot. The laboratory results illustrate how the proper application of knowledge in the integration and utilization of sensors and actuators increases the robustness of plan execution. PMID:18255901

  4. Knowledge-based decision support for Space Station assembly sequence planning

    NASA Astrophysics Data System (ADS)

    1991-04-01

    A complete Personal Analysis Assistant (PAA) for Space Station Freedom (SSF) assembly sequence planning consists of three software components: the system infrastructure, intra-flight value added, and inter-flight value added. The system infrastructure is the substrate on which software elements providing inter-flight and intra-flight value-added functionality are built. It provides the capability for building representations of assembly sequence plans and specification of constraints and analysis options. Intra-flight value-added provides functionality that will, given the manifest for each flight, define cargo elements, place them in the National Space Transportation System (NSTS) cargo bay, compute performance measure values, and identify violated constraints. Inter-flight value-added provides functionality that will, given major milestone dates and capability requirements, determine the number and dates of required flights and develop a manifest for each flight. The current project is Phase 1 of a projected two phase program and delivers the system infrastructure. Intra- and inter-flight value-added were to be developed in Phase 2, which has not been funded. Based on experience derived from hundreds of projects conducted over the past seven years, ISX developed an Intelligent Systems Engineering (ISE) methodology that combines the methods of systems engineering and knowledge engineering to meet the special systems development requirements posed by intelligent systems, systems that blend artificial intelligence and other advanced technologies with more conventional computing technologies. The ISE methodology defines a phased program process that begins with an application assessment designed to provide a preliminary determination of the relative technical risks and payoffs associated with a potential application, and then moves through requirements analysis, system design, and development.

  5. Knowledge-based load leveling and task allocation in human-machine systems

    NASA Technical Reports Server (NTRS)

    Chignell, M. H.; Hancock, P. A.

    1986-01-01

    Conventional human-machine systems use task allocation policies that are based on the premise of a flexible human operator. This individual is most often required to compensate for and augment the capabilities of the machine. The development of artificial intelligence and improved technologies has allowed for a wider range of task allocation strategies. In response to these issues, a Knowledge-Based Adaptive Mechanism (KBAM) is proposed for assigning tasks to human and machine in real time, using a load leveling policy. This mechanism employs an online workload assessment and compensation system which is responsive to variations in load through an intelligent interface. This interface consists of a loading strategy reasoner which has access to information about the current status of the human-machine system as well as a database of admissible human/machine loading strategies. Difficulties standing in the way of successful implementation of the load leveling strategy are examined.

  6. A Knowledge-Based Approach to Describe and Adapt Learning Objects

    ERIC Educational Resources Information Center

    Bouzeghoub, Amel; Defude, Bruno; Duitama, John Freddy; Lecocq, Claire

    2006-01-01

    Our claim is that semantic metadata are required to allow real reuse and assembly of learning objects. Our system is based on three models used to describe the domain, learners, and learning objects. The learning object model is inspired by knowledge representation proposals. A learning object can be reused directly or can be combined with…

  7. Coordinated Solar Observation and Event Searches using the Heliophysics Events Knowledgebase (HEK)

    NASA Astrophysics Data System (ADS)

    Timmons, R.; Hurlburt, N. E.

    2014-12-01

    We present new capabilities of the HEK allowing for joint searches, returning overlapping data from multiple instruments (IRIS, SOT, XRT, EIS) that also include particular solar features and events (active regions, (large) flares, sunspots, etc.). The new search tools aid the process of finding particular observations from non-synoptic instruments.

  8. From Cues to Nudge: A Knowledge-Based Framework for Surveillance of Healthcare-Associated Infections.

    PubMed

    Shaban-Nejad, Arash; Mamiya, Hiroshi; Riazanov, Alexandre; Forster, Alan J; Baker, Christopher J O; Tamblyn, Robyn; Buckeridge, David L

    2016-01-01

    We propose an integrated semantic web framework consisting of formal ontologies, web services, a reasoner and a rule engine that together recommend an appropriate level of patient care based on defined semantic rules and guidelines. The classification of healthcare-associated infections within the HAIKU (Hospital Acquired Infections - Knowledge in Use) framework enables hospitals to consistently follow the standards along with their routine clinical practice and diagnosis coding to improve quality of care and patient safety. The HAI ontology (HAIO) groups thousands of codes into a consistent hierarchy of concepts, along with relationships and axioms that capture knowledge on hospital-associated infections and complications, with a focus on the four major types: surgical site infections (SSIs), catheter-associated urinary tract infections (CAUTIs), hospital-acquired pneumonia, and bloodstream infections. Using statistical inference, we define a set of heuristic rule axioms to improve SSI case detection. We also demonstrate how the occurrence of an SSI is identified using semantic e-triggers. The e-triggers will be used to improve our risk assessment of post-operative surgical site infections for patients undergoing certain types of surgery (e.g., coronary artery bypass graft surgery (CABG)). PMID:26537131
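
    The framework itself is ontology- and reasoner-based; as a loose, non-clinical illustration of how a semantic trigger for a suspected SSI might be expressed as a rule over coded patient events, consider the sketch below. The codes, the 30-day window, and the rule are invented for illustration and are not the HAIKU e-triggers.

      # Loose illustration of an SSI "e-trigger" as a rule over coded events.
      # Codes, the 30-day window, and the rule itself are invented examples.
      from datetime import date

      def ssi_trigger(events, window_days=30):
          """events: list of (date, code) tuples for one patient.
          Fires when a wound-infection code follows a surgery code within the window."""
          surgeries = [d for d, code in events if code == "PROC:CABG"]
          infections = [d for d, code in events if code == "DX:WOUND_INFECTION"]
          return any(0 <= (inf - surg).days <= window_days
                     for surg in surgeries for inf in infections)

      patient = [(date(2016, 1, 5), "PROC:CABG"),
                 (date(2016, 1, 20), "DX:WOUND_INFECTION")]
      print(ssi_trigger(patient))   # True -> flag for infection-control review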

  9. Learning and plan refinement in a knowledge-based system for automatic speech recognition

    SciTech Connect

    De Mori, R.; Lam, L.; Gilloux, M.

    1987-03-01

    This paper shows how a semiautomatic design of a speech recognition system can be done as a planning activity. Recognition performances are used for deciding plan refinement. Inductive learning is performed for setting action preconditions. Experimental results in the recognition of connected letters spoken by 100 speakers are presented.

  10. Knowledge-based geographic information systems (KBGIS): new analytic and data management tools

    SciTech Connect

    Albert, T.M.

    1988-11-01

    In its simplest form, a geographic information system (GIS) may be viewed as a data base management system in which most of the data are spatially indexed, and upon which sets of procedures operate to answer queries about spatial entities represented in the data base. Utilization of artificial intelligence (AI) techniques can enhance greatly the capabilities of a GIS, particularly in handling very large, diverse data bases involved in the earth sciences. A KBGIS has been developed by the US Geological Survey which incorporates AI techniques such as learning, expert systems, new data representation, and more. The system, which will be developed further and applied, is a prototype of the next generation of GIS's, an intelligent GIS, as well as an example of a general-purpose intelligent data handling system. The paper provides a description of KBGIS and its application, as well as the AI techniques involved.

  11. Issues in implementing a knowledge-based ECG analyzer for personal mobile health monitoring.

    PubMed

    Goh, K W; Kim, E; Lavanya, J; Kim, Y; Soh, C B

    2006-01-01

    Advances in sensor technology, personal mobile devices, and wireless broadband communications are enabling the development of an integrated personal mobile health monitoring system that can provide patients with a useful tool to assess their own health and manage their personal health information anytime and anywhere. Personal mobile devices, such as PDAs and mobile phones, are becoming more powerful integrated information management tools and play a major role in many people's lives. We focus on designing a health-monitoring system for people who suffer from cardiac arrhythmias. We have developed computer simulation models to evaluate the performance of appropriate electrocardiogram (ECG) analysis techniques that can be implemented on personal mobile devices. This paper describes an ECG analyzer to perform ECG beat and episode detection and classification. We have obtained promising preliminary results from our study. Also, we discuss several key considerations when implementing a mobile health monitoring solution. The mobile ECG analyzer would become a front-end patient health data acquisition module, which is connected to the Personal Health Information Management System (PHIMS) for data repository. PMID:17947185

  12. Architectural Design Decisions For A Knowledge-Based Distributed Systems Manager

    NASA Astrophysics Data System (ADS)

    Pasquale, Joseph

    1987-05-01

    A major fundamental problem in decentralized resource control in distributed systems is that in general, no decision-making node knows with complete certainty the current global state of the system. We present an architecture for an Expert Manager which provides a framework for dealing with this problem. Expert system techniques are used to infer the global system state using whatever partial state information is at hand, along with mechanisms for reasoning. Decision-making is enhanced by taking into account the uncertainty of observations, and that concurrent decisions made by many Expert Managers may conflict.

  13. Towards sustainable infrastructure management: knowledge-based service-oriented computing framework for visual analytics

    NASA Astrophysics Data System (ADS)

    Vatcha, Rashna; Lee, Seok-Won; Murty, Ajeet; Tolone, William; Wang, Xiaoyu; Dou, Wenwen; Chang, Remco; Ribarsky, William; Liu, Wanqiu; Chen, Shen-en; Hauser, Edd

    2009-05-01

    Infrastructure management and its associated processes are complex to understand and perform, which makes efficient, effective and informed decision making difficult. The management involves a multi-faceted operation that requires robust data fusion, visualization and decision making. In order to protect and build sustainable critical assets, we present our on-going multi-disciplinary large-scale project that establishes the Integrated Remote Sensing and Visualization (IRSV) system with a focus on supporting bridge structure inspection and management. This project involves specific expertise from civil engineers, computer scientists, geographers, and real-world practitioners from industry, local and federal government agencies. IRSV is being designed to accommodate the essential needs of the following aspects: 1) better understanding and enforcement of a complex inspection process that can bridge the gap between evidence gathering and decision making through the implementation of an ontological knowledge engineering system; 2) aggregation, representation and fusion of complex multi-layered heterogeneous data (e.g., infrared imaging, aerial photos and ground-mounted LIDAR) with domain application knowledge to support a machine-understandable recommendation system; 3) robust visualization techniques with large-scale analytical and interactive visualizations that support users' decision making; and 4) integration of these needs through a flexible Service-Oriented Architecture (SOA) framework to compose and provide services on demand. IRSV is expected to serve as a management and data visualization tool for construction deliverable assurance and infrastructure monitoring both periodically (annually, monthly, even daily if needed) as well as after extreme events.

  14. A knowledge-based approach to identification and adaptation in dynamical systems control

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Wong, C. M.

    1988-01-01

    Artificial intelligence techniques are applied to the problems of model form and parameter identification of large-scale dynamic systems. The object-oriented knowledge representation is discussed in the context of causal modeling and qualitative reasoning. Structured sets of rules are used for implementing qualitative component simulations, for catching qualitative discrepancies and quantitative bound violations, and for making reconfiguration and control decisions that affect the physical system. These decisions are executed by backward-chaining through a knowledge base of control action tasks. This approach was implemented for two examples: a triple quadrupole mass spectrometer and a two-phase thermal testbed. Results of tests with both of these systems demonstrate that the software replicates some or most of the functionality of a human operator, thereby reducing the need for a human-in-the-loop in the lower levels of control of these complex systems.
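
    As a generic illustration of backward-chaining through a knowledge base of control-action tasks, of the kind described above, the sketch below works backwards from a goal to known facts and records the actions it would execute; the rules, facts, and goal names are invented and are not those of the mass spectrometer or thermal testbed applications.

      # Generic backward-chaining sketch; rules, facts, and goals are invented.
      RULES = {
          "restore_flow": ["open_backup_valve", "pump_available"],
          "open_backup_valve": ["valve_power_ok"],
      }
      FACTS = {"pump_available", "valve_power_ok"}

      def achieve(goal, trace=None):
          """Backward-chain from a goal to known facts, recording actions taken."""
          trace = [] if trace is None else trace
          if goal in FACTS:
              return True, trace
          if goal not in RULES:           # no rule and not a known fact
              return False, trace
          for subgoal in RULES[goal]:     # all subgoals must hold
              ok, trace = achieve(subgoal, trace)
              if not ok:
                  return False, trace
          trace.append(goal)              # execute/assert the goal's action
          return True, trace

      print(achieve("restore_flow"))  # (True, ['open_backup_valve', 'restore_flow'])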

  15. Knowledge-based geographic information systems (KBGIS): New analytic and data management tools

    USGS Publications Warehouse

    Albert, T.M.

    1988-01-01

    In its simplest form, a geographic information system (GIS) may be viewed as a data base management system in which most of the data are spatially indexed, and upon which sets of procedures operate to answer queries about spatial entities represented in the data base. Utilization of artificial intelligence (AI) techniques can enhance greatly the capabilities of a GIS, particularly in handling very large, diverse data bases involved in the earth sciences. A KBGIS has been developed by the U.S. Geological Survey which incorporates AI techniques such as learning, expert systems, new data representation, and more. The system, which will be developed further and applied, is a prototype of the next generation of GIS's, an intelligent GIS, as well as an example of a general-purpose intelligent data handling system. The paper provides a description of KBGIS and its application, as well as the AI techniques involved. © 1988 International Association for Mathematical Geology.

  16. Design consideration in constructing high performance embedded Knowledge-Based Systems (KBS)

    NASA Technical Reports Server (NTRS)

    Dalton, Shelly D.; Daley, Philip C.

    1988-01-01

    As hardware trends for artificial intelligence (AI) involve more and more complexity, the process of optimizing the computer system design for a particular problem will also increase in complexity. Space applications of knowledge-based systems (KBS) will often require an ability to perform both numerically intensive vector computations and real-time symbolic computations. Although parallel machines can theoretically achieve the speeds necessary for most of these problems, if the application itself is not highly parallel, the machine's power cannot be utilized. A scheme is presented which will provide the computer systems engineer with a tool for analyzing machines with various configurations of array, symbolic, scalar, and multiprocessors. High speed networks and interconnections make customized, distributed, intelligent systems feasible for the application of AI in space. The method presented can be used to optimize such AI system configurations and to make comparisons between existing computer systems. It is an open question whether or not, for a given mission requirement, a suitable computer system design can be constructed for any amount of money.

  17. An “ADME Module” in the Adverse Outcome Pathway Knowledgebase

    EPA Science Inventory

    The Adverse Outcome Pathway (AOP) framework has generated intense interest for its utility to organize knowledge on the toxicity mechanisms, starting from a molecular initiating event (MIE) to an adverse outcome across various levels of biological organization. While the AOP fra...

  18. Final Report - Phylogenomic tools and web resources for the Systems Biology Knowledgebase

    SciTech Connect

    Sjolander, Kimmen

    2014-11-07

    The major advance during this last reporting period (8/15/12 to present) is our release of data on the PhyloFacts website: phylogenetic trees, multiple sequence alignments and other data for protein families are now available for download from http://phylogenomics.berkeley.edu/data/. This project as a whole aimed to develop high-throughput functional annotation systems that exploit information from protein 3D structure and evolution to provide highly precise inferences of various aspects of gene function, including molecular function, biological process, pathway association, Pfam domains, cellular localization and so on. We accomplished these aims by developing and testing different systems on a database of protein family trees: the PhyloFacts Phylogenomic Encyclopedia (at http://phylogenomics.berkeley.edu/phylofacts/ ).

  19. An evidence-based knowledgebase of metastasis suppressors to identify key pathways relevant to cancer metastasis

    PubMed Central

    Zhao, Min; Li, Zhe; Qu, Hong

    2015-01-01

    Metastasis suppressor genes (MS genes) are genes that play important roles in inhibiting the process of cancer metastasis without preventing growth of the primary tumor. Identification of these genes and understanding their functions are critical for investigation of cancer metastasis. Recent studies on cancer metastasis have identified many new susceptibility MS genes. However, a comprehensive illustration of the diverse cellular processes regulated by metastasis suppressors during the metastasis cascade is lacking. Thus, the relationship between MS genes and cancer risk is still unclear. To unveil the cellular complexity of MS genes, we have constructed MSGene (http://MSGene.bioinfo-minzhao.org/), the first literature-based gene resource for exploring human MS genes. In total, we manually curated 194 experimentally verified MS genes and mapped them to 1448 homologous genes from 17 model species. Follow-up functional analyses associated the 194 human MS genes with epithelium/tissue morphogenesis and epithelial cell proliferation. In addition, pathway analysis highlights the prominent role of MS genes in activation of platelets and the coagulation system in the tumor metastatic cascade. Moreover, the global mutation pattern of MS genes across multiple cancers may reveal common cancer metastasis mechanisms. All these results illustrate the importance of MSGene to our understanding of cell development and cancer metastasis. PMID:26486520

  20. A Knowledge-Based Approach for Item Exposure Control in Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Doong, Shing H.

    2009-01-01

    The purpose of this study is to investigate a functional relation between item exposure parameters (IEPs) and item parameters (IPs) over parallel pools. This functional relation is approximated by a well-known tool in machine learning. Let P and Q be parallel item pools and suppose IEPs for P have been obtained via a Sympson and Hetter-type…