Sample records for expert choice software

  1. Analysis of methods of processing of expert information by optimization of administrative decisions

    NASA Astrophysics Data System (ADS)

    Churakov, D. Y.; Tsarkova, E. G.; Marchenko, N. D.; Grechishnikov, E. V.

    2018-03-01

    The paper proposes a methodology for defining measures used in the expert estimation of the quality and reliability of application-oriented software products. Methods for aggregating expert estimates are described, using as an example a collective choice among candidate tool projects in the development of special-purpose software for institutional needs. Results from the operation of an interactive decision support system are presented, together with an algorithm for solving the choice problem based on the analytic hierarchy process. The developed algorithm can be applied in the development of expert systems to solve a wide class of problems involving multicriteria choice.

  2. ANALYSIS OF ALTERNATIVES FOR THE LONG TERM MANAGEMENT OF EXCESS MERCURY

    EPA Science Inventory

    This paper describes a systematic method for comparing options for the long-term management of surplus elemental mercury in the U.S., using the Analytic Hierarchy Process (AHP) as embodied in commercially available Expert Choice software. A limited scope multi-criteria decisionan...

  3. PRELIMINARY ANALYSIS OF ALTERNATIVES FOR THE LONG TERM MANAGEMENT OF EXCESS MERCURY

    EPA Science Inventory

    This report describes the use of a systematic method for comparing options for the long-term management and retirement of surplus mercury in the U.S. The method chosen is the Analytic Hierarchy Process (AHP) as embodied in the Expert Choice 2000 software. The goal, criteria, ...

  4. Virtual test: A student-centered software to measure student's critical thinking on human disease

    NASA Astrophysics Data System (ADS)

    Rusyati, Lilit; Firman, Harry

    2016-02-01

    The study "Virtual Test: A Student-Centered Software to Measure Student's Critical Thinking on Human Disease" is descriptive research. The background is importance of computer-based test that use element and sub element of critical thinking. Aim of this study is development of multiple choices to measure critical thinking that made by student-centered software. Instruments to collect data are (1) construct validity sheet by expert judge (lecturer and medical doctor) and professional judge (science teacher); and (2) test legibility sheet by science teacher and junior high school student. Participants consisted of science teacher, lecturer, and medical doctor as validator; and the students as respondent. Result of this study are describe about characteristic of virtual test that use to measure student's critical thinking on human disease, analyze result of legibility test by students and science teachers, analyze result of expert judgment by science teachers and medical doctor, and analyze result of trial test of virtual test at junior high school. Generally, result analysis shown characteristic of multiple choices to measure critical thinking was made by eight elements and 26 sub elements that developed by Inch et al.; complete by relevant information; and have validity and reliability more than "enough". Furthermore, specific characteristic of multiple choices to measure critical thinking are information in form science comic, table, figure, article, and video; correct structure of language; add source of citation; and question can guide student to critical thinking logically.

  5. Assessment of Leisure Preferences for Students with Severe Developmental Disabilities and Communication Difficulties

    ERIC Educational Resources Information Center

    Kreiner, Janice; Flexer, Robert

    2009-01-01

    The purpose of this study was to develop and to evaluate the Preferences for Leisure Attributes (PLA) Assessment, a forced-choice computer software program for students with severe disabilities and communication difficulties. In order to determine content validity of the PLA Assessment, four experts in related fields assigned critical attributes…

  6. APPLICATION OF THE ANALYTIC HIERARCHY PROCESS TO COMPARE ALTERNATIVES FOR THE LONG-TERM MANAGEMENT OF SURPLUS MERCURY

    EPA Science Inventory

    This paper describes a systematic method for comparing options for the long-term management of surplus elemental mercury in the U.S., using the Analytic Hierarchy Process (AHP) as embodied in commercially available Expert Choice software. A limited scope multi-criteria decision-a...

  7. Complex Investigations of Sapphire Crystals Production

    NASA Astrophysics Data System (ADS)

    Malyukov, S. P.; Klunnikova, Yu V.

    The problem of choosing optimum conditions for processing sapphire substrates was solved using optimization methods combined with analytical simulation, experiment, and expert system technology. The experimental results and the accompanying software give fairly complete information on the real structure of sapphire crystal substrates and can be used effectively to optimize the technology of substrate preparation for electronic devices.

  8. Documentation of Old Turkic Runic Inscriptions of the Altai Mountains Using Photogrammetric Technology

    NASA Astrophysics Data System (ADS)

    Vavulin, M. V.

    2017-11-01

    Old Turkic runic inscriptions of the Altai Mountains (8th-9th centuries AD) were digitized in the course of this project to be preserved in their current state on the Web and deciphered by linguistic experts. The way the inscriptions were made, as well as their location in hard-to-reach areas, required an inexpensive solution that would provide detailed 3D documentation of rock faces while at the same time offering mobility and autonomy. Digital photogrammetry proved to be an affordable and practical choice for obtaining high-quality results using inexpensive software, with further data processing done in free software.

  9. [To what extent do reviewers of multiple-choice questions need to be trained? A comparison between handing out information sheets and brief workshop sessions].

    PubMed

    Öchsner, Wolfgang; Böckers, Anja

    2016-01-01

    A competent review process is crucial to ensure the quality of multiple-choice (MC) questions. However, the acquisition of reviewing skills should not place any unnecessary additional burden on medical staff already facing heavy workloads. 100 MC questions, for which an expert review existed, were presented to 12 novices. In advance, six participants received a specific information sheet covering the information critical for a high-calibre review; the other six participants attended a 2.5-hour workshop covering the same information. The review results of both groups were analysed with a licensed version of the IBM software SPSS 19.0 (SPSS Inc., Chicago, IL). The results of the workshop group were distinctly closer to the experts' results (gold standard) than those of the information sheet group. For the quantitatively important category of medium-quality MC questions, the results of the workshop group did not differ significantly from the experts' results. In the information sheet group the results were significantly poorer than the experts', regardless of the quality of the questions. Distributing specific information sheets to MC question reviewers is therefore not sufficient to ensure the quality of the review; regardless of the increased effort involved, specific workshops are recommended. Copyright © 2014. Published by Elsevier GmbH.

  10. Assessment of ICount software, a precise and fast egg counting tool for the mosquito vector Aedes aegypti.

    PubMed

    Gaburro, Julie; Duchemin, Jean-Bernard; Paradkar, Prasad N; Nahavandi, Saeid; Bhatti, Asim

    2016-11-18

    Widespread in the tropics, the mosquito Aedes aegypti is an important vector of many viruses, posing a significant threat to human health. Vector monitoring often requires fecundity estimation by counting eggs laid by female mosquitoes. Traditionally, manual data analysis has been used, but this requires considerable effort and the method is prone to errors. An easy tool to assess the number of eggs laid would facilitate experimentation and vector control operations. This study introduces software called ICount that allows automatic counting of eggs of the mosquito vector Aedes aegypti. Egg counts obtained with ICount are statistically equivalent to manual counts, making the software effective for automatic and semi-automatic data analysis. The technique is also much faster than manual methods. Finally, the software was used to assess p-cresol oviposition choices under laboratory conditions in order to test the system with different egg densities. ICount is a powerful tool for fast and precise egg count analysis, freeing experimenters from manual data processing. Software access is free and its user-friendly interface allows easy use by non-experts. Its efficiency has been tested in our laboratory with dual-choice oviposition experiments with Aedes aegypti females. The next step will be the development of a mobile application, based on the ICount platform, for vector monitoring surveys in the field.
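
    The abstract above does not describe ICount's internal algorithm, so the following is only a generic, minimal sketch of how automatic egg counting is commonly approached (thresholding followed by connected-component labelling); the threshold, minimum blob size, and synthetic image are illustrative assumptions, not ICount parameters.

      import numpy as np
      from scipy import ndimage

      def count_eggs(image, threshold=0.5, min_pixels=5):
          """Count dark blobs (eggs) in a grayscale image with values in [0, 1].

          Generic threshold + connected-components sketch; not ICount's actual method."""
          mask = image < threshold                    # eggs appear darker than the substrate
          labels, n = ndimage.label(mask)             # connected components
          sizes = ndimage.sum(mask, labels, range(1, n + 1))
          return int(np.sum(sizes >= min_pixels))     # discard specks of noise

      # Synthetic image: light background with two dark "eggs"
      img = np.ones((50, 50))
      img[10:14, 10:13] = 0.1
      img[30:35, 40:44] = 0.2
      print(count_eggs(img))   # -> 2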

  11. Factors Affecting the Location of Road Emergency Bases in Iran Using Analytical Hierarchy Process (AHP).

    PubMed

    Bahadori, Mohammadkarim; Hajebrahimi, Ahmad; Alimohammadzadeh, Khalil; Ravangard, Ramin; Hosseini, Seyed Mojtaba

    2017-10-01

    To identify and prioritize factors affecting the location of road emergency bases in Iran using the Analytical Hierarchy Process (AHP). This was a mixed-method (quantitative-qualitative) study conducted in 2016. The participants included professionals and experts in pre-hospital and road emergency services working in the Health Deputy of the Iran Ministry of Health and Medical Education, who were selected using a purposive sampling method. First, the factors affecting the location of road emergency bases in Iran were identified through a literature review and interviews with the experts. Then, the identified factors were scored and prioritized from the viewpoints of the studied professionals and experts using the analytic hierarchy process (AHP) technique and its pairwise-comparison questionnaire. The collected data were analyzed using MAXQDA 10.0 software for the answers to the open question and Expert Choice 10.0 software to determine the weights and priorities of the identified factors. The results showed that eight factors were effective in locating road emergency bases in Iran from the viewpoints of the studied professionals and experts, in order of priority: distance from the next base, region population, topography and geographical situation of the region, the volume of road traffic, the existence of amenities such as water, electricity, and gas and proximity to a village, accident-prone sites, university ownership of the base site, and proximity to a toll-house. Among these eight factors, "distance from the next base" and "region population" were the most important, differing markedly from the other factors.

  12. An application of the AHP in water resources management: a case study on urban drainage rehabilitation in Medan City

    NASA Astrophysics Data System (ADS)

    Tarigan, A. P. M.; Rahmad, D.; Sembiring, R. A.; Iskandar, R.

    2018-02-01

    This paper illustrates an application of the Analytical Hierarchy Process (AHP) as a potential decision-making method in water resources management related to drainage rehabilitation. The prioritization of urban drainage rehabilitation in Medan City under a limited budget is used as a case study. A hierarchical structure is formed for the prioritization criteria and the alternative drainages to be rehabilitated. Based on the AHP, the prioritization criteria are ranked and a descending-order list of drainages is produced in order to select the most favorable drainages for rehabilitation. A sensitivity analysis is then conducted to check the consistency of the final decisions in case of minor changes in judgements. The results of the AHP computed manually are compared with those obtained using the Expert Choice software. It is observed that the top three ranked drainages are consistent, and both sets of AHP results, calculated manually and obtained with Expert Choice, are in agreement. It is hoped that the application of the AHP will help the decision-making process of the city government in the problem of urban drainage rehabilitation.
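
    Because several of the records above rely on AHP priorities computed either manually or with Expert Choice, a minimal sketch of the standard calculation may help: the principal eigenvector of a pairwise comparison matrix gives the priority weights, and the consistency ratio checks the judgements. The 3x3 matrix below is purely illustrative, not data from the paper.

      import numpy as np

      def ahp_priorities(A):
          """Priority weights and consistency ratio for a pairwise comparison matrix A."""
          n = A.shape[0]
          eigvals, eigvecs = np.linalg.eig(A)
          k = np.argmax(eigvals.real)                    # principal eigenvalue
          w = np.abs(eigvecs[:, k].real)
          w = w / w.sum()                                # normalized priority weights
          ci = (eigvals[k].real - n) / (n - 1)           # consistency index
          ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index
          return w, ci / ri                              # weights, consistency ratio

      # Illustrative comparison of three drainage channels on a single criterion
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])
      weights, cr = ahp_priorities(A)
      print(weights, cr)   # a CR below about 0.10 is conventionally taken as consistent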

  13. A software engineering approach to expert system design and verification

    NASA Technical Reports Server (NTRS)

    Bochsler, Daniel C.; Goodwin, Mary Ann

    1988-01-01

    Software engineering design and verification methods for developing expert systems are not yet well defined. Integration of expert system technology into software production environments will require effective software engineering methodologies to support the entire life cycle of expert systems. The software engineering methods used to design and verify an expert system, RENEX, are discussed. RENEX demonstrates autonomous rendezvous and proximity operations, including replanning of trajectory events and subsystem fault detection, onboard a space vehicle during flight. The RENEX designers utilized a number of software engineering methodologies to deal with the complex problems inherent in this system. An overview is presented of the methods utilized. Details of the verification process receive special emphasis. The benefits and weaknesses of the methods for supporting the development life cycle of expert systems are evaluated, and recommendations are made based on the overall experiences with the methods.

  14. SandiaMRCR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-01-05

    SandiaMCR was developed to identify pure components and their concentrations from spectral data. The software efficiently implements multivariate curve resolution by alternating least squares (MCR-ALS), principal component analysis (PCA), and singular value decomposition (SVD). Version 3.37 also includes the PARAFAC-ALS and Tucker-1 algorithms (for trilinear analysis). The alternating least squares methods can be used to determine the composition without, or with incomplete, prior information on the constituents and their concentrations. The software allows the specification of numerous preprocessing, initialization, data selection, and compression options for the efficient processing of large data sets. Additional options include the definition of equality and non-negativity constraints to realistically restrict the solution set, various normalization or weighting options based on the statistics of the data, several initialization choices, and data compression. The software has been designed to provide a practicing spectroscopist the tools required to routinely analyze data in a reasonable time and without requiring expert intervention.
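
    As a rough illustration of the alternating least squares idea behind MCR-ALS, the sketch below factors a data matrix into non-negative concentration and spectral profiles. It is a bare-bones approximation under simple assumptions (non-negativity enforced by clipping, random initialization); SandiaMCR's constraint handling, weighting, and compression options are far richer.

      import numpy as np

      def mcr_als(D, n_components, n_iter=200):
          """Approximate D (samples x channels) as C @ S.T with non-negative C and S."""
          rng = np.random.default_rng(0)
          S = np.abs(rng.standard_normal((D.shape[1], n_components)))   # initial spectra guess
          for _ in range(n_iter):
              C, *_ = np.linalg.lstsq(S, D.T, rcond=None)   # solve for concentrations
              C = np.clip(C.T, 0, None)                     # non-negativity constraint
              S, *_ = np.linalg.lstsq(C, D, rcond=None)     # solve for spectra
              S = np.clip(S.T, 0, None)
          return C, S                                       # concentrations, spectra

      # Synthetic two-component mixture data
      rng = np.random.default_rng(1)
      D = rng.random((20, 2)) @ rng.random((50, 2)).T
      C, S = mcr_als(D, n_components=2)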

  15. Getting expert systems off the ground: Lessons learned from integrating model-based diagnostics with prototype flight hardware

    NASA Technical Reports Server (NTRS)

    Stephan, Amy; Erikson, Carol A.

    1991-01-01

    As an initial attempt to introduce expert system technology into an onboard environment, a model-based diagnostic system using the TRW MARPLE software tool was integrated with prototype flight hardware and its corresponding control software. Because this experiment was designed primarily to test the effectiveness of the model-based reasoning technique used, the expert system ran on a separate hardware platform, and interactions between the control software and the model-based diagnostics were limited. While this project met its objective of showing that model-based reasoning can effectively isolate failures in flight hardware, it also identified the need for an integrated development path for expert system and control software for onboard applications. To develop expert systems that are ready for flight, developers must evaluate artificial intelligence techniques to determine whether they offer a real advantage onboard, identify which diagnostic functions should be performed by the expert system and which are better left to the procedural software, and work closely with both the hardware and the software developers from the beginning of a project to produce a well-designed and thoroughly integrated application.

  16. From expert-derived user needs to user-perceived ease of use and usefulness: a two-phase mixed-methods evaluation framework.

    PubMed

    Boland, Mary Regina; Rusanov, Alexander; So, Yat; Lopez-Jimenez, Carlos; Busacca, Linda; Steinman, Richard C; Bakken, Suzanne; Bigger, J Thomas; Weng, Chunhua

    2014-12-01

    Underspecified user needs and the frequent lack of a gold standard reference are typical barriers to technology evaluation. To address this problem, this paper presents a two-phase evaluation framework involving usability experts (phase 1) and end-users (phase 2). In phase 1, a cross-system functionality alignment between expert-derived user needs and system functions was performed to inform the choice of "the best available" comparison system, enabling a cognitive walkthrough in phase 1 and a comparative effectiveness evaluation in phase 2. During phase 2, five quantitative and qualitative evaluation methods are mixed to assess usability: time-motion analysis, software log analysis, questionnaires (the System Usability Scale and the Unified Theory of Acceptance and Use of Technology), think-aloud protocols, and unstructured interviews. Each method contributes data for a unique measure (e.g., time-motion analysis contributes task completion time; the software log contributes action transition frequency). The measures are triangulated to yield complementary insights regarding user-perceived ease of use, functionality integration, anxiety during use, and workflow impact. To illustrate its use, we applied this framework in a formative evaluation of software called the Integrated Model for Patient Care and Clinical Trials (IMPACT). We conclude that this mixed-methods evaluation framework enables an integrated assessment of user-needs satisfaction and user-perceived usefulness and usability of a novel design. The evaluation framework effectively bridges the gap between co-evolving user needs and technology designs during iterative prototyping and is particularly useful when it is difficult for users to articulate their needs for technology support due to the lack of a baseline. Copyright © 2013 Elsevier Inc. All rights reserved.

  17. An evaluation of the documented requirements of the SSP UIL and a review of commercial software packages for the development and testing of UIL prototypes

    NASA Technical Reports Server (NTRS)

    Gill, Esther Naomi

    1986-01-01

    A review was conducted of software packages currently on the market which might be integrated with the interface language and aid in reaching the objectives of customization, standardization, transparency, reliability, maintainability, language substitution, expandability, portability, and flexibility. Recommendations are given for the best choices in hardware and software acquisition for in-house testing of these possible integrations. Recommended acquisitions include tools to aid expert-system development and/or novice program development, artificial-intelligence voice technology, touch screen, joystick, or mouse input, and networking. Other recommendations concerned using the language Ada for the user interface language shell, because of its high level of standardization, its structure, its ability to accept and execute programs written in other programming languages, and its DOD ownership and control, and keeping the user interface language simple so that many users will find the commercialization of space within their realm of possibility, which is, after all, the purpose of the Space Station.

  18. A Novel Coupling Pattern in Computational Science and Engineering Software

    EPA Science Inventory

    Computational science and engineering (CSE) software is written by experts of certain area(s). Due to the specialization, existing CSE software may need to integrate other CSE software systems developed by different groups of experts. The coupling problem is one of the challenges...

  19. A Novel Coupling Pattern in Computational Science and Engineering Software

    EPA Science Inventory

    Computational science and engineering (CSE) software is written by experts of certain area(s). Due to the specialization, existing CSE software may need to integrate other CSE software systems developed by different groups of experts. The coupling problem is one of the challenges f...

  20. TES: A modular systems approach to expert system development for real-time space applications

    NASA Technical Reports Server (NTRS)

    Cacace, Ralph; England, Brenda

    1988-01-01

    A major goal of the Space Station era is to reduce reliance on support from ground-based experts. The development of software programs using expert systems technology is one means of reaching this goal without requiring crew members to become intimately familiar with the many complex spacecraft subsystems. Development of an expert systems program requires a validation of the software with actual flight hardware. By combining accurate hardware and software modelling techniques with a modular systems approach to expert systems development, the validation of these software programs can be successfully completed with minimum risk and effort. The TIMES Expert System (TES) is an application that monitors and evaluates real-time data to perform fault detection and fault isolation tasks as they would otherwise be carried out by a knowledgeable designer. The development process and primary features of TES, a modular systems approach, and the lessons learned are discussed.

  1. Executive system software design and expert system implementation

    NASA Technical Reports Server (NTRS)

    Allen, Cheryl L.

    1992-01-01

    The topics are presented in viewgraph form and include: software requirements; design layout of the automated assembly system; menu display for automated composite command; expert system features; complete robot arm state diagram and logic; and expert system benefits.

  2. ClassCompass: A Software Design Mentoring System

    ERIC Educational Resources Information Center

    Coelho, Wesley; Murphy, Gail

    2007-01-01

    Becoming a quality software developer requires practice under the guidance of an expert mentor. Unfortunately, in most academic environments, there are not enough experts to provide any significant design mentoring for software engineering students. To address this problem, we present a collaborative software design tool intended to maximize an…

  3. An expert system based software sizing tool, phase 2

    NASA Technical Reports Server (NTRS)

    Friedlander, David

    1990-01-01

    A software tool was developed for predicting the size of a future computer program at an early stage in its development. The system is intended to enable a user who is not expert in Software Engineering to estimate software size in lines of source code with an accuracy similar to that of an expert, based on the program's functional specifications. The project was planned as a knowledge based system with a field prototype as the goal of Phase 2 and a commercial system planned for Phase 3. The researchers used techniques from Artificial Intelligence and knowledge from human experts and existing software from NASA's COSMIC database. They devised a classification scheme for the software specifications, and a small set of generic software components that represent complexity and apply to large classes of programs. The specifications are converted to generic components by a set of rules and the generic components are input to a nonlinear sizing function which makes the final prediction. The system developed for this project predicted code sizes from the database with a bias factor of 1.06 and a fluctuation factor of 1.77, an accuracy similar to that of human experts but without their significant optimistic bias.
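
    The bias and fluctuation factors quoted above summarize how predicted sizes compare with actual sizes. The report's exact definitions are not given here, so the sketch below uses one common convention as an assumption: the bias factor as the geometric mean of the predicted-to-actual ratios and the fluctuation factor as the geometric spread of those ratios.

      import math

      def bias_and_fluctuation(predicted, actual):
          """Geometric-mean bias and geometric spread of size-estimate ratios (assumed definitions)."""
          logs = [math.log(p / a) for p, a in zip(predicted, actual)]
          mean = sum(logs) / len(logs)
          spread = math.sqrt(sum((x - mean) ** 2 for x in logs) / len(logs))
          return math.exp(mean), math.exp(spread)

      # Toy data: predicted vs. actual lines of source code
      predicted = [1200, 5400, 800, 15000]
      actual = [1000, 6000, 700, 11000]
      print(bias_and_fluctuation(predicted, actual))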

  4. What are they thinking? Automated analysis of student writing about acid-base chemistry in introductory biology.

    PubMed

    Haudek, Kevin C; Prevost, Luanna B; Moscarella, Rosa A; Merrill, John; Urban-Lurain, Mark

    2012-01-01

    Students' writing can provide better insight into their thinking than can multiple-choice questions. However, resource constraints often prevent faculty from using writing assessments in large undergraduate science courses. We investigated the use of computer software to analyze student writing and to uncover student ideas about chemistry in an introductory biology course. Students were asked to predict acid-base behavior of biological functional groups and to explain their answers. Student explanations were rated by two independent raters. Responses were also analyzed using SPSS Text Analysis for Surveys and a custom library of science-related terms and lexical categories relevant to the assessment item. These analyses revealed conceptual connections made by students, student difficulties explaining these topics, and the heterogeneity of student ideas. We validated the lexical analysis by correlating student interviews with the lexical analysis. We used discriminant analysis to create classification functions that identified seven key lexical categories that predict expert scoring (interrater reliability with experts = 0.899). This study suggests that computerized lexical analysis may be useful for automatically categorizing large numbers of student open-ended responses. Lexical analysis provides instructors unique insights into student thinking and a whole-class perspective that are difficult to obtain from multiple-choice questions or reading individual responses.

  5. What Are They Thinking? Automated Analysis of Student Writing about Acid–Base Chemistry in Introductory Biology

    PubMed Central

    Haudek, Kevin C.; Prevost, Luanna B.; Moscarella, Rosa A.; Merrill, John; Urban-Lurain, Mark

    2012-01-01

    Students’ writing can provide better insight into their thinking than can multiple-choice questions. However, resource constraints often prevent faculty from using writing assessments in large undergraduate science courses. We investigated the use of computer software to analyze student writing and to uncover student ideas about chemistry in an introductory biology course. Students were asked to predict acid–base behavior of biological functional groups and to explain their answers. Student explanations were rated by two independent raters. Responses were also analyzed using SPSS Text Analysis for Surveys and a custom library of science-related terms and lexical categories relevant to the assessment item. These analyses revealed conceptual connections made by students, student difficulties explaining these topics, and the heterogeneity of student ideas. We validated the lexical analysis by correlating student interviews with the lexical analysis. We used discriminant analysis to create classification functions that identified seven key lexical categories that predict expert scoring (interrater reliability with experts = 0.899). This study suggests that computerized lexical analysis may be useful for automatically categorizing large numbers of student open-ended responses. Lexical analysis provides instructors unique insights into student thinking and a whole-class perspective that are difficult to obtain from multiple-choice questions or reading individual responses. PMID:22949425
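
    The two records above describe counting terms from science-related lexical categories and feeding those counts to a discriminant analysis that predicts the expert score. The sketch below is only a toy version of that pipeline: the category names, member terms, and training responses are invented for illustration, and scikit-learn's LinearDiscriminantAnalysis stands in for the SPSS tools used in the study.

      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      # Hypothetical lexical categories and member terms (not the study's actual library)
      CATEGORIES = {
          "protonation": {"protonate", "donate", "hydrogen", "proton"},
          "charge":      {"charge", "positive", "negative", "ion"},
          "structure":   {"ring", "bond", "group", "carbon"},
      }

      def category_counts(response):
          """Count how many terms from each lexical category appear in a response."""
          words = set(response.lower().split())
          return [len(words & terms) for terms in CATEGORIES.values()]

      # Toy training data: free-text responses with expert ratings (1 = sound reasoning)
      responses = [
          "the amine group will protonate and carry a positive charge",
          "the ring structure has carbon bonds",
          "it will donate a proton and the ion becomes negative",
          "acids taste sour",
          "the carboxyl group loses a proton leaving a negative charge",
          "i am not sure",
      ]
      expert_scores = [1, 0, 1, 0, 1, 0]

      X = [category_counts(r) for r in responses]
      clf = LinearDiscriminantAnalysis().fit(X, expert_scores)
      print(clf.predict([category_counts("the group will protonate gaining positive charge")]))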

  6. Statistical evaluation of manual segmentation of a diffuse low-grade glioma MRI dataset.

    PubMed

    Ben Abdallah, Meriem; Blonski, Marie; Wantz-Mezieres, Sophie; Gaudeau, Yann; Taillandier, Luc; Moureaux, Jean-Marie

    2016-08-01

    Software-based manual segmentation is critical to the supervision of diffuse low-grade glioma patients and to the choice of optimal treatment. However, because manual segmentation is time-consuming, it is difficult to include in the clinical routine. An alternative that would circumvent the time cost of manual segmentation could be to share the task among different practitioners, provided it can be reproduced. The goal of our work is to assess the reproducibility of manual segmentation of diffuse low-grade gliomas on MRI scans with regard to practitioners, their experience, and their field of expertise. A panel of 13 experts manually segmented 12 diffuse low-grade glioma clinical MRI datasets using the OsiriX software. A statistical analysis gave promising results, as the practitioner factor, the medical specialty, and the years of experience seem to have no significant impact on the average values of the tumor volume variable.

  7. Code query by example

    NASA Astrophysics Data System (ADS)

    Vaucouleur, Sebastien

    2011-02-01

    We introduce code query by example for the customisation of evolvable software products in general and of enterprise resource planning systems (ERPs) in particular. The concept is based on an initial empirical study on practices around ERP systems. We motivate our design choices based on those empirical results, and we show how the proposed solution helps with respect to the infamous upgrade problem: the conflict between the need for customisation and the need for upgrades of ERP systems. We further show how code query by example can be used as a form of lightweight static analysis to automatically detect potential defects in large software products. Code query by example as a form of lightweight static analysis is particularly interesting in the context of ERP systems: programmers working in this field are often not computer science specialists but rather domain experts. Hence, they require a simple language to express custom rules.

  8. The recognition of potato varieties using of neural image analysis method

    NASA Astrophysics Data System (ADS)

    Przybył, K.; Górna, K.; Wojcieszak, D.; Czekała, W.; Ludwiczak, A.; Przybylak, A.; Boniecki, P.; Koszela, K.; Zaborowicz, M.; Janczak, D.; Lewicki, A.

    2015-07-01

    The aim of this paper was to extract representative features and generate an appropriate neural model for the classification of varieties of edible potato. Potatoes of the Vineta and Denar varieties were the empirical objects of this study. The main concept of the project was to develop and prepare an image database using computer image analysis software, and then to choose the neural model with the greatest ability to identify the selected variety. Ultimately, the aim of the project is to assist and accelerate the work of the expert who classifies and stores different varieties of potatoes in heaps.

  9. Uncertainty reasoning in expert systems

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik

    1993-01-01

    Intelligent control is a very successful way to transform the expert's knowledge of the type 'if the velocity is big and the distance from the object is small, hit the brakes and decelerate as fast as possible' into an actual control. To apply this transformation, one must choose appropriate methods for reasoning with uncertainty, i.e., one must: (1) choose the representation for words like 'small', 'big'; (2) choose operations corresponding to 'and' and 'or'; (3) choose a method that transforms the resulting uncertain control recommendations into a precise control strategy. The wrong choice can drastically affect the quality of the resulting control, so the problem of choosing the right procedure is very important. From a mathematical viewpoint these choice problems correspond to non-linear optimization and are therefore extremely difficult. In this project, a new mathematical formalism (based on group theory) is developed that allows us to solve the problem of optimal choice and thus: (1) explain why the existing choices are really the best (in some situations); (2) explain a rather mysterious fact that fuzzy control (i.e., control based on the experts' knowledge) is often better than the control by these same experts; and (3) give choice recommendations for the cases when traditional choices do not work.
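
    The three choices listed in this abstract (how to represent terms like 'small', how to interpret 'and'/'or', and how to turn fuzzy recommendations into a precise control value) can be made concrete with a small sketch. The trapezoidal output term, the min operation for 'and', and centroid defuzzification below are common textbook choices used only for illustration; the paper's group-theoretic analysis of which choices are optimal is not reproduced here.

      import numpy as np

      def trapmf(x, a, b, c, d):
          """Trapezoidal membership function rising on [a, b] and falling on [c, d]."""
          return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0, 1)

      x = np.linspace(0, 100, 501)       # discretized brake-pressure scale (illustrative)

      # Choice 1: membership degrees of the antecedent terms (here given directly)
      velocity_is_big = 0.8
      distance_is_small = 0.9

      # Choice 2: interpret 'and' as the minimum (product is another common option)
      firing_strength = min(velocity_is_big, distance_is_small)

      # Output term "press the brakes hard", clipped by the rule's firing strength
      hard = trapmf(x, 60.0, 80.0, 100.0, 120.0)
      clipped = np.minimum(hard, firing_strength)

      # Choice 3: defuzzify with the centroid of the clipped output membership function
      control = (x * clipped).sum() / clipped.sum()
      print(control)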

  10. Take-the-best in expert-novice decision strategies for residential burglary.

    PubMed

    Garcia-Retamero, Rocio; Dhami, Mandeep K

    2009-02-01

    We examined the decision strategies and cue use of experts and novices in a consequential domain: crime. Three participant groups decided which of two residential properties was more likely to be burgled, on the basis of eight cues such as location of the property. The two expert groups were experienced burglars and police officers, and the novice group was composed of graduate students. We found that experts' choices were best predicted by a lexicographic heuristic strategy called take-the-best that implies noncompensatory information processing, whereas novices' choices were best predicted by a weighted additive linear strategy that implies compensatory processing. The two expert groups, however, differed in the cues they considered important in making their choices, and the police officers were actually more similar to novices in this regard. These findings extend the literature on judgment, decision making, and expertise, and have implications for criminal justice policy.
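
    To make the contrast between the two strategies concrete, here is a minimal sketch: take-the-best decides on the first cue (in validity order) that discriminates between the two properties, while a weighted additive rule sums weighted cue values. The cue names, ordering, and weights are invented for illustration and are not the study's materials.

      # Cues ordered by assumed validity; a value of 1 means the cue favours burglary risk
      CUES = ["corner_location", "signs_of_occupancy", "alarm_absent", "hedges_for_cover"]
      WEIGHTS = [0.4, 0.3, 0.2, 0.1]   # illustrative compensatory weights

      def take_the_best(house_a, house_b):
          """Lexicographic (noncompensatory): decide on the first discriminating cue."""
          for cue in CUES:
              if house_a[cue] != house_b[cue]:
                  return "A" if house_a[cue] > house_b[cue] else "B"
          return "A"   # guess if no cue discriminates

      def weighted_additive(house_a, house_b):
          """Compensatory: sum weighted cue values and choose the larger total."""
          score = lambda h: sum(w * h[c] for w, c in zip(WEIGHTS, CUES))
          return "A" if score(house_a) >= score(house_b) else "B"

      a = {"corner_location": 0, "signs_of_occupancy": 1, "alarm_absent": 1, "hedges_for_cover": 1}
      b = {"corner_location": 1, "signs_of_occupancy": 0, "alarm_absent": 0, "hedges_for_cover": 0}
      print(take_the_best(a, b), weighted_additive(a, b))   # B A: the strategies disagree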

  11. The need for a comprehensive expert system development methodology

    NASA Technical Reports Server (NTRS)

    Baumert, John; Critchfield, Anna; Leavitt, Karen

    1988-01-01

    In a traditional software development environment, the introduction of standardized approaches has led to higher quality, maintainable products on the technical side and greater visibility into the status of the effort on the management side. This study examined expert system development to determine whether it differed enough from traditional systems to warrant a reevaluation of current software development methodologies. Its purpose was to identify areas of similarity with traditional software development and areas requiring tailoring to the unique needs of expert systems. A second purpose was to determine whether existing expert system development methodologies meet the needs of expert system development, management, and maintenance personnel. The study consisted of a literature search and personal interviews. It was determined that existing methodologies and approaches to developing expert systems are not comprehensive nor are they easily applied, especially to cradle to grave system development. As a result, requirements were derived for an expert system development methodology and an initial annotated outline derived for such a methodology.

  12. Research on an expert system for database operation of simulation-emulation math models. Volume 1, Phase 1: Results

    NASA Technical Reports Server (NTRS)

    Kawamura, K.; Beale, G. O.; Schaffer, J. D.; Hsieh, B. J.; Padalkar, S.; Rodriguez-Moscoso, J. J.

    1985-01-01

    The results of the first phase of Research on an Expert System for Database Operation of Simulation/Emulation Math Models are described. Techniques from artificial intelligence (AI) were brought to bear on task domains of interest to NASA Marshall Space Flight Center. One such domain is simulation of spacecraft attitude control systems. Two related software systems were developed and delivered to NASA. One was a generic simulation model for spacecraft attitude control, written in FORTRAN. The second was an expert system which understands the usage of a class of spacecraft attitude control simulation software and can assist the user in running the software. This NASA Expert Simulation System (NESS), written in LISP, contains general knowledge about digital simulation, specific knowledge about the simulation software, and self knowledge.

  13. Development of an expert system prototype for determining software functional requirements for command management activities at NASA Goddard

    NASA Technical Reports Server (NTRS)

    Liebowitz, J.

    1985-01-01

    The development of an expert system prototype for determining software functional requirements for NASA Goddard's Command Management System (CMS) is described. The role of the CMS is to transform general requests into specific spacecraft commands with command execution conditions. The CMS is part of the NASA Data System which entails the downlink of science and engineering data from NASA near-earth satellites to the user, and the uplink of command and control data to the spacecraft. Subjects covered include: the problem environment of determining CMS software functional requirements; the expert system approach for handling CMS requirements development; validation and evaluation procedures for the expert system.

  14. Expert system verification and validation study: ES V/V Workshop

    NASA Technical Reports Server (NTRS)

    French, Scott; Hamilton, David

    1992-01-01

    The primary purpose of this document is to build a foundation for applying principles of verification and validation (V&V) to expert systems. To achieve this, some background on V&V as applied to conventionally implemented software is required. Part one will discuss the background of V&V from the perspective of (1) what V&V of software is and (2) V&V's role in developing software. Part one will also overview some common analysis techniques that are applied when performing V&V of software. All of these materials will be presented on the assumption that the reader has little or no background in V&V or in developing procedural software. The primary purpose of part two is to explain the major techniques that have been developed for V&V of expert systems.

  15. The choice of primary energy source including PV installation for providing electric energy to a public utility building - a case study

    NASA Astrophysics Data System (ADS)

    Radomski, Bartosz; Ćwiek, Barbara; Mróz, Tomasz M.

    2017-11-01

    The paper presents a multicriteria decision aid analysis of the choice of a PV installation providing electric energy to a public utility building. From the energy management point of view, electricity obtained from solar radiation has become a crucial renewable energy source. PV installations may prove a profitable solution from the energy, economic, and ecological points of view for both existing and newly erected buildings. The featured variants of PV installations have been assessed by multicriteria analysis based on the ANP (Analytic Network Process) method. Technical, economic, energy, and environmental criteria have been identified as the main decision criteria. The defined set of decision criteria has an open character and can be modified in a dialogue between the decision-maker and the expert - in the present case, an expert in planning the development of energy supply systems. The proposed approach has been used to evaluate three variants of PV installation acceptable for an existing educational building located in Poznań, Poland - the building of the Faculty of Chemical Technology, Poznań University of Technology. The multi-criteria analysis based on the ANP method and the Super Decisions calculation software has proven to be an effective tool for energy planning, leading to the indication of the recommended variant of PV installation in existing and newly erected public buildings. The results show the prospects and possibilities for rational renewable energy use as a comprehensive solution for public utility buildings.

  16. ACES: Space shuttle flight software analysis expert system

    NASA Technical Reports Server (NTRS)

    Satterwhite, R. Scott

    1990-01-01

    The Analysis Criteria Evaluation System (ACES) is a knowledge based expert system that automates the final certification of the Space Shuttle onboard flight software. Guidance, navigation and control of the Space Shuttle through all its flight phases are accomplished by a complex onboard flight software system. This software is reconfigured for each flight to allow thousands of mission-specific parameters to be introduced and must therefore be thoroughly certified prior to each flight. This certification is performed in ground simulations by executing the software in the flight computers. Flight trajectories from liftoff to landing, including abort scenarios, are simulated and the results are stored for analysis. The current methodology of performing this analysis is repetitive and requires many man-hours. The ultimate goals of ACES are to capture the knowledge of the current experts and improve the quality and reduce the manpower required to certify the Space Shuttle onboard flight software.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M.; Kempner, L. Jr.; Mueller, W. III

    The concept of an Expert System is not new. It has been around since the days of the early computers, when scientists had dreams of robot automation to do everything from washing windows to automobile design. This paper discusses an application of an expert system and addresses software development issues and various levels of expert system development from a structural engineering viewpoint. An expert system designed to aid the structural engineer in first-order inelastic analysis of latticed steel transmission towers is presented. The utilization of expert systems with large numerical analysis programs is discussed, along with the software development of such a system.

  18. Testing expert systems

    NASA Technical Reports Server (NTRS)

    Chang, C. L.; Stachowitz, R. A.

    1988-01-01

    Software quality is of primary concern in all large-scale expert system development efforts. Building appropriate validation and test tools for ensuring the software reliability of expert systems is therefore required. The Expert Systems Validation Associate (EVA) is a validation system under development at the Lockheed Artificial Intelligence Center. EVA provides a wide range of validation and test tools to check the correctness, consistency, and completeness of an expert system. Testing is a major function of EVA. It means executing an expert system with test cases with the intent of finding errors. In this paper, we describe many different types of testing, such as function-based testing, structure-based testing, and data-based testing. We describe how appropriate test cases may be selected in order to perform good and thorough testing of an expert system.

  19. Rule groupings: A software engineering approach towards verification of expert systems

    NASA Technical Reports Server (NTRS)

    Mehrotra, Mala

    1991-01-01

    Currently, most expert system shells do not address software engineering issues for developing or maintaining expert systems. As a result, large expert systems tend to be incomprehensible, difficult to debug or modify and almost impossible to verify or validate. Partitioning rule based systems into rule groups which reflect the underlying subdomains of the problem should enhance the comprehensibility, maintainability, and reliability of expert system software. Attempts were made to semiautomatically structure a CLIPS rule base into groups of related rules that carry the same type of information. Different distance metrics that capture relevant information from the rules for grouping are discussed. Two clustering algorithms that partition the rule base into groups of related rules are given. Two independent evaluation criteria are developed to measure the effectiveness of the grouping strategies. Results of the experiment with three sample rule bases are presented.
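
    The grouping idea in this abstract can be illustrated with a small sketch: measure the distance between rules by how little their vocabularies overlap, then merge rules whose distance falls below a threshold. The Jaccard distance, the greedy single-link grouping, and the toy rules below are illustrative assumptions, not the paper's metrics or clustering algorithms.

      from itertools import combinations

      # Hypothetical rules mapped to the fact templates / terms they mention
      RULES = {
          "r1": {"valve", "pressure", "alarm"},
          "r2": {"valve", "pressure"},
          "r3": {"pump", "flow"},
          "r4": {"pump", "flow", "alarm"},
      }

      def jaccard_distance(a, b):
          """1 - |intersection| / |union|: rules sharing more terms are closer."""
          return 1 - len(a & b) / len(a | b)

      def group_rules(rules, threshold=0.6):
          """Greedy single-link grouping: merge groups whenever a pair is closer than the threshold."""
          groups = [{name} for name in rules]
          for r1, r2 in combinations(rules, 2):
              if jaccard_distance(rules[r1], rules[r2]) <= threshold:
                  g1 = next(g for g in groups if r1 in g)
                  g2 = next(g for g in groups if r2 in g)
                  if g1 is not g2:
                      g1 |= g2
                      groups.remove(g2)
          return groups

      print(group_rules(RULES))   # expected: [{'r1', 'r2'}, {'r3', 'r4'}]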

  20. The Relationship of Expert-System Scored Constrained Free-Response Items to Multiple-Choice and Open-Ended Items.

    ERIC Educational Resources Information Center

    Bennett, Randy Elliot; And Others

    1990-01-01

    The relationship of an expert-system-scored constrained free-response item type to multiple-choice and free-response items was studied using data for 614 students on the College Board's Advanced Placement Computer Science (APCS) Examination. Implications for testing and the APCS test are discussed. (SLD)

  1. Inherent health and environmental risk assessment of nanostructured metal oxide production processes.

    PubMed

    Torabifard, Mina; Arjmandi, Reza; Rashidi, Alimorad; Nouri, Jafar; Mohammadfam, Iraj

    2018-01-10

    The health and environmental effects of chemical processes can be assessed during the initial stage of their production. In this paper, the Chemical Screening Tool for Exposure and Environmental Release (ChemSTEER) software was used to compare the health and environmental risks of spray pyrolysis and wet chemical techniques for the fabrication of nanostructured metal oxide on a semi-industrial scale with a capacity of 300 kg/day in Iran. The pollution sources identified in each production process were pairwise compared in the Expert Choice software using indicators including respiratory damage, skin damage, and environmental damage to air, water, and soil. The synthesis of nanostructured zinc oxide using the wet chemical technique (with a priority weight of 0.523) leads to lower health and environmental risks than spray pyrolysis (0.477). The health and environmental risk assessment of nanomaterial production processes can help select safer processes, modify operating conditions, and select or modify raw materials in ways that help eliminate the risks.

  2. ARROWSMITH-P: A prototype expert system for software engineering management

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Ramsey, Connie Loggia

    1985-01-01

    Although the field of software engineering is relatively new, it can benefit from the use of expert systems. Two prototype expert systems were developed to aid in software engineering management. Given the values for certain metrics, these systems will provide interpretations which explain any abnormal patterns of these values during the development of a software project. The two systems, which solve the same problem, were built using different methods, rule-based deduction and frame-based abduction. A comparison was done to see which method was better suited to the needs of this field. It was found that both systems performed moderately well, but the rule-based deduction system using simple rules provided more complete solutions than did the frame-based abduction system.

  3. Development of a software for the design of custom-made hip prostheses using an open-source rapid application development environment.

    PubMed

    Viceconti, M; Testi, D; Gori, R; Zannoni, C

    2000-01-01

    The present work describes a technology transfer project called HIPCOM devoted to the re-engineering of the process used by a medical devices manufacturer to design custom-made hip prostheses. Although it started with insufficient support from the end-user management, a very tight schedule, and a moderate budget, the project developed into what is considered by all partners a success story. In particular, the development of the design software, called the HIPCOM Interactive Design Environment (HIDE), was completed in a shorter time than any optimistic expectation. The software was quite stable from its first beta version, and once introduced at the user site it fully replaced the original procedure in less than two months. One year after the early adoption, more than 80 custom-made prostheses had been designed with HIDE and the user had reported only two bugs, both cosmetic. The scope of the present work was to report the development experience and to investigate the reasons for these positive results, with particular reference to the development procedure and the software architecture. The choice of TCL/TK as the development language and the adoption of a well-defined software architecture were found to be the key success factors. Other important determinants were the adoption of an incremental software engineering strategy, well suited to small to medium projects, and the presence in the development staff of a technology transfer expert.

  4. Hearing the Signal in the Noise: A Software-Based Content Analysis of Patterns in Responses by Experts and Students to a New Venture Investment Proposal

    ERIC Educational Resources Information Center

    Hostager, Todd J.; Voiovich, Jason; Hughes, Raymond K.

    2013-01-01

    The authors apply a software-based content analysis method to uncover differences in responses by expert entrepreneurs and undergraduate entrepreneur majors to a new venture investment proposal. Data analyzed via the Leximancer software package yielded conceptual maps highlighting key differences in the nature of these responses. Study methods and…

  5. Power subsystem automation study

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.; Sewy, D.; Pickering, C.; Sauers, R.

    1984-01-01

    The purpose of phase 2 of the power subsystem automation study was to demonstrate the feasibility of using computer software to manage an aspect of the electrical power subsystem on a space station. The state of the art in expert systems software was investigated in this study. This effort resulted in the demonstration of prototype expert system software for managing one aspect of a simulated space station power subsystem.

  6. ART-Ada: An Ada-based expert system tool

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel; Allen, Bradley P.

    1990-01-01

    The Department of Defense mandate to standardize on Ada as the language for software systems development has resulted in an increased interest in making expert systems technology readily available in Ada environments. NASA's Space Station Freedom is an example of the large Ada software development projects that will require expert systems in the 1990's. Another large scale application that can benefit from Ada based expert system tool technology is the Pilot's Associate (PA) expert system project for military combat aircraft. The Automated Reasoning Tool-Ada (ART-Ada), an Ada expert system tool, is explained. ART-Ada allows applications of a C-based expert system tool called ART-IM to be deployed in various Ada environments. ART-Ada is being used to implement several prototype expert systems for NASA's Space Station Freedom program and the U.S. Air Force.

  7. ART-Ada: An Ada-based expert system tool

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel; Allen, Bradley P.

    1991-01-01

    The Department of Defense mandate to standardize on Ada as the language for software systems development has resulted in increased interest in making expert systems technology readily available in Ada environments. NASA's Space Station Freedom is an example of the large Ada software development projects that will require expert systems in the 1990's. Another large scale application that can benefit from Ada based expert system tool technology is the Pilot's Associate (PA) expert system project for military combat aircraft. The Automated Reasoning Tool-Ada (ART-Ada), an Ada expert system tool, is described. ART-Ada allows applications of a C-based expert system tool called ART-IM to be deployed in various Ada environments. ART-Ada is being used to implement several prototype expert systems for NASA's Space Station Freedom Program and the U.S. Air Force.

  8. An expert system as applied to bridges : software development phase.

    DOT National Transportation Integrated Search

    1989-01-01

    This report describes the results of the third of a four-part study dealing with the use of a computerized expert system to assist bridge engineers in their structures management program. In this phase of the study, software (called DOBES) was writte...

  9. Artificial intelligence and expert systems in-flight software testing

    NASA Technical Reports Server (NTRS)

    Demasie, M. P.; Muratore, J. F.

    1991-01-01

    The authors discuss the introduction of advanced information systems technologies such as artificial intelligence, expert systems, and advanced human-computer interfaces directly into Space Shuttle software engineering. The reconfiguration automation project (RAP) was initiated to coordinate this move towards 1990s software technology. The idea behind RAP is to automate several phases of the flight software testing procedure and to introduce AI and ES into space shuttle flight software testing. In the first phase of RAP, conventional tools to automate regression testing have already been developed or acquired. There are currently three tools in use.

  10. Evaluation of expert systems - An approach and case study. [of determining software functional requirements for command management of satellites

    NASA Technical Reports Server (NTRS)

    Liebowitz, J.

    1985-01-01

    Techniques that were applied in defining an expert system prototype for first-cut evaluations of the software functional requirements of NASA satellite command management activities are described. The prototype was developed using the Knowledge Engineering System. Criteria were selected for evaluating the satellite software before defining the expert system prototype. Application of the prototype system is illustrated in terms of the evaluation procedures used with the COBE satellite to be launched in 1988. The limited number of options which can be considered by the program mandates that biases in the system output must be well understood by the users.

  11. Software Analyzes Complex Systems in Real Time

    NASA Technical Reports Server (NTRS)

    2008-01-01

    Expert system software programs, also known as knowledge-based systems, are computer programs that emulate the knowledge and analytical skills of one or more human experts related to a specific subject. SHINE (Spacecraft Health Inference Engine) is one such program, a software inference engine (expert system) designed by NASA for the purpose of monitoring, analyzing, and diagnosing both real-time and non-real-time systems. It was developed to meet many of the Agency's demanding and rigorous artificial intelligence goals for current and future needs. NASA developed the sophisticated and reusable software based on the experience and requirements of its Jet Propulsion Laboratory's (JPL) Artificial Intelligence Research Group in developing expert systems for space flight operations, specifically the diagnosis of spacecraft health. It was designed to be efficient enough to operate in demanding real-time and limited-hardware environments, and to be utilized by non-expert systems applications written in conventional programming languages. The technology is currently used in several ongoing NASA applications, including the Mars Exploration Rovers and the Spacecraft Health Automatic Reasoning Pilot (SHARP) program for the diagnosis of telecommunication anomalies during the Neptune Voyager Encounter. It is also finding applications outside of the Space Agency.

  12. Nonlinear rescaling of control values simplifies fuzzy control

    NASA Technical Reports Server (NTRS)

    Vanlangingham, H.; Tsoukkas, A.; Kreinovich, V.; Quintana, C.

    1993-01-01

    Traditional control theory is well-developed mainly for linear control situations. In non-linear cases there is no general method of generating a good control, so we have to rely on the ability of the experts (operators) to control them. If we want to automate their control, we must acquire their knowledge and translate it into a precise control strategy. The experts' knowledge is usually represented in non-numeric terms, namely, in terms of uncertain statements of the type 'if the obstacle is straight ahead, the distance to it is small, and the velocity of the car is medium, press the brakes hard'. Fuzzy control is a methodology that translates such statements into precise formulas for control. The necessary first step of this strategy consists of assigning membership functions to all the terms that the expert uses in his rules (in our sample phrase these words are 'small', 'medium', and 'hard'). The appropriate choice of a membership function can drastically improve the quality of a fuzzy control. In the simplest cases, we can take functions whose domains have equally spaced endpoints. Because of that, many software packages for fuzzy control are based on this choice of membership functions. This choice is not very efficient in more complicated cases. Therefore, methods have been developed that use neural networks or genetic algorithms to 'tune' membership functions. But this tuning takes lots of time (for example, several thousand iterations are typical for neural networks). In some cases there are evident physical reasons why equally spaced domains do not work: e.g., if the control variable u is always positive (i.e., if we control temperature in a reactor), then negative values (that are generated by equal spacing) simply make no sense. In this case it sounds reasonable to choose another scale u' = f(u) to represent u, so that equal spacing will work fine for u'. In the present paper we formulate the problem of finding the best rescaling function, solve this problem, and show (on a real-life example) that after an optimal rescaling, the un-tuned fuzzy control can be as good as the best state-of-the-art traditional non-linear controls.
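
    A minimal sketch of the rescaling idea for a strictly positive control variable: place equally spaced triangular membership functions on a rescaled axis u' = f(u) rather than on u itself. The logarithm used below is an illustrative choice of f, not the optimal rescaling derived in the paper.

      import numpy as np

      def triangle(x, left, centre, right):
          """Triangular membership function peaking at `centre`."""
          return np.clip(np.minimum((x - left) / (centre - left),
                                    (right - x) / (right - centre)), 0, 1)

      # A strictly positive control variable, e.g. a reactor temperature scale
      u = np.linspace(1.0, 1000.0, 2000)
      u_prime = np.log(u)                       # rescaled variable u' = f(u)

      # Three equally spaced terms ("low", "medium", "high") on the rescaled axis
      centres = np.linspace(u_prime.min(), u_prime.max(), 3)
      step = centres[1] - centres[0]
      terms = {name: triangle(u_prime, c - step, c, c + step)
               for name, c in zip(["low", "medium", "high"], centres)}

      # On the original axis the terms are unevenly spaced but never cover negative values
      for name, mu in terms.items():
          print(name, "peaks near u =", round(float(u[np.argmax(mu)]), 1))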

  13. Users manual for an expert system (HSPEXP) for calibration of the Hydrological Simulation Program--Fortran

    USGS Publications Warehouse

    Lumb, A.M.; McCammon, R.B.; Kittle, J.L.

    1994-01-01

    Expert system software was developed to assist less experienced modelers with calibration of a watershed model and to provide the interaction between the modeler and the modeling process that mathematical optimization alone does not. A prototype was developed with artificial intelligence software tools, a knowledge engineer, and two domain experts. The manual procedures used by the domain experts were identified, and the prototype was then coded by the knowledge engineer. The expert system consists of a set of hierarchical rules designed to guide the calibration of the model through a systematic evaluation of model parameters. When the prototype was completed and tested, it was rewritten for portability and operational use and was named HSPEXP. The watershed model Hydrological Simulation Program--Fortran (HSPF) is used in the expert system. This report is the users manual for HSPEXP and contains a discussion of the concepts, along with detailed steps and examples for using the software. The system has been tested on watersheds in the States of Washington and Maryland, and in these tests the system correctly identified the model parameters to be adjusted, and the adjustments led to improved calibration.
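
    HSPEXP's actual rule base is documented in the manual rather than in this abstract, so the following Python sketch is only a hypothetical illustration of the style of advice rule involved: compare simulated and observed annual runoff and, if the error exceeds a tolerance, suggest adjusting a storage parameter. The 10% tolerance and the parameter name LZSN are assumptions made for illustration, not HSPEXP's rules.

```python
def water_balance_advice(simulated_runoff_mm, observed_runoff_mm, tolerance_pct=10.0):
    """Toy calibration rule: flag annual runoff errors and suggest a parameter change."""
    error_pct = 100.0 * (simulated_runoff_mm - observed_runoff_mm) / observed_runoff_mm
    if error_pct > tolerance_pct:
        return f"Runoff {error_pct:+.1f}%: too high, consider increasing LZSN."
    if error_pct < -tolerance_pct:
        return f"Runoff {error_pct:+.1f}%: too low, consider decreasing LZSN."
    return f"Runoff error {error_pct:+.1f}% is within tolerance; evaluate the next criterion."

print(water_balance_advice(simulated_runoff_mm=620.0, observed_runoff_mm=540.0))
```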

  14. Web-Based Environment for Maintaining Legacy Software

    NASA Technical Reports Server (NTRS)

    Tigges, Michael; Thompson, Nelson; Orr, Mark; Fox, Richard

    2007-01-01

    Advanced Tool Integration Environment (ATIE) is the name of both a software system and a Web-based environment created by the system for maintaining an archive of legacy software and of the expertise involved in developing the legacy software. ATIE can also be used in modifying legacy software and developing new software. The information that can be encapsulated in ATIE includes experts' documentation, input and output data of test cases, source code, and compilation scripts. All of this information is available within a common environment and retained in a database for ease of access and recovery by use of powerful search engines. ATIE also accommodates the embedding of supporting software that users require for their work, and even enables access to supporting commercial-off-the-shelf (COTS) software within the flow of the expert's work. The flow of work can be captured by saving the sequence of computer programs that the expert uses. A user gains access to ATIE via a Web browser. A modern Web-based graphical user interface promotes efficiency in the retrieval, execution, and modification of legacy code. Thus, ATIE saves time and money in the support of new and pre-existing programs.

  15. Guidance, navigation, and control subsystem equipment selection algorithm using expert system methods

    NASA Technical Reports Server (NTRS)

    Allen, Cheryl L.

    1991-01-01

    Enhanced engineering tools can be obtained through the integration of expert system methodologies and existing design software. The application of these methodologies to the spacecraft design and cost model (SDCM) software provides an improved technique for the selection of hardware for unmanned spacecraft subsystem design. The knowledge engineering system (KES) expert system development tool was used to implement a smarter equipment selection algorithm than that which is currently achievable through the use of a standard data base system. The guidance, navigation, and control subsystem of the SDCM software was chosen as the initial subsystem for implementation. The portions of the SDCM code which compute the selection criteria and constraints remain intact, and the expert system equipment selection algorithm is embedded within this existing code. The architecture of this new methodology is described and its implementation is reported. The project background and a brief overview of the expert system are described, and once the details of the design are characterized, an example of its implementation is demonstrated.
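
    The abstract does not list the selection criteria or constraints themselves, so the Python sketch below is a hypothetical illustration of the general pattern: filter a candidate equipment list against mass and power constraints computed elsewhere in the code, then rank the feasible candidates on a performance criterion. The component names and attribute values are invented.

```python
# Hypothetical constraint-filtered equipment selection; all values are invented.
candidates = [
    {"name": "StarTracker-A", "mass_kg": 4.2, "power_W": 12.0, "accuracy_arcsec": 10.0},
    {"name": "StarTracker-B", "mass_kg": 2.8, "power_W": 18.0, "accuracy_arcsec": 6.0},
    {"name": "StarTracker-C", "mass_kg": 1.9, "power_W": 25.0, "accuracy_arcsec": 4.0},
]

def select(candidates, max_mass_kg, max_power_W):
    """Keep candidates satisfying the constraints, then pick the most accurate one."""
    feasible = [c for c in candidates
                if c["mass_kg"] <= max_mass_kg and c["power_W"] <= max_power_W]
    return min(feasible, key=lambda c: c["accuracy_arcsec"]) if feasible else None

# Only StarTracker-B meets both constraints here, so it is selected.
print(select(candidates, max_mass_kg=3.0, max_power_W=20.0))
```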

  16. Statistical Validation for Clinical Measures: Repeatability and Agreement of Kinect™-Based Software.

    PubMed

    Lopez, Natalia; Perez, Elisa; Tello, Emanuel; Rodrigo, Alejandro; Valentinuzzi, Max E

    2018-01-01

    The rehabilitation process is a fundamental stage for recovery of people's capabilities. However, the evaluation of the process is performed by physiatrists and medical doctors, mostly based on their observations, that is, a subjective appreciation of the patient's evolution. This paper proposes a tracking platform for the movement made by an individual's upper limb using Kinect sensor(s), to be applied to the patient during the rehabilitation process. The main contribution is the development of quantifying software and the statistical validation of its performance, repeatability, and clinical use in the rehabilitation process. The software determines joint angles and upper limb trajectories for the construction of a specific rehabilitation protocol and quantifies the treatment evolution. In turn, the information is presented via a graphical interface that allows the recording, storage, and reporting of the patient's data. For clinical purposes, the software information is statistically validated with three different methodologies, comparing the measures with a goniometer in terms of agreement and repeatability. The agreement of joint angles measured with the proposed software and the goniometer is evaluated with Bland-Altman plots; all measurements fell well within the limits of agreement, meaning interchangeability of both techniques. Additionally, the results of Bland-Altman analysis of repeatability show 95% confidence. Finally, the physiotherapists' qualitative assessment shows encouraging results for clinical use. The main conclusion is that the software is capable of offering a clinical history of the patient and is useful for quantification of the rehabilitation success. The simplicity, low cost, and visualization possibilities enhance the use of the Kinect-based software for rehabilitation and other applications, and the expert's opinion endorses the choice of our approach for clinical practice. Comparison of the new measurement technique with established goniometric methods shows that the proposed software agrees sufficiently to be used interchangeably.
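
    The study's validation rests on Bland-Altman limits of agreement; a minimal Python sketch of that calculation for paired Kinect and goniometer angle measurements is shown below. The paired values are invented for illustration and are not data from the study.

```python
import statistics

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented paired elbow-flexion angles (degrees), Kinect-based software vs. goniometer.
kinect     = [88.0, 92.5, 101.0, 75.5, 110.0, 95.0]
goniometer = [90.0, 91.0, 103.0, 74.0, 112.0, 96.5]

bias, (low, high) = bland_altman(kinect, goniometer)
print(f"bias = {bias:.2f} deg, 95% limits of agreement = [{low:.2f}, {high:.2f}] deg")
```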

  17. A Model for the Development of Hospital Beds Using Fuzzy Analytical Hierarchy Process (Fuzzy AHP).

    PubMed

    Ravangard, Ramin; Bahadori, Mohammadkarim; Raadabadi, Mehdi; Teymourzadeh, Ehsan; Alimomohammadzadeh, Khalil; Mehrabian, Fardin

    2017-11-01

    This study aimed to identify and prioritize factors affecting the development of military hospital beds and provide a model using fuzzy analytical hierarchy process (Fuzzy AHP). This applied study was conducted in 2016 in Iran using a mixed method. The sample included experts in the field of military health care system. The MAXQDA 10.0 and Expert Choice 10.0 software were used for analyzing the collected data. Geographic situation, demographic status, economic status, health status, health care centers and organizations, financial and human resources, laws and regulations and by-laws, and the military nature of service recipients had effects on the development of military hospital beds. The military nature of service recipients (S=0.249) and economic status (S=0.040) received the highest and lowest priorities, respectively. Providing direct health care services to the military forces in order to maintain their dignity, and according to its effects in the crisis, as well as the necessity for maintaining the security of the armed forces, and the hospital beds per capita based on the existing laws, regulations and bylaws are of utmost importance.
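
    The study derives its priorities with fuzzy AHP in Expert Choice, which cannot be reproduced from the abstract alone; as a rough illustration of how AHP-style priority weights arise from a pairwise comparison matrix, the Python sketch below applies the row geometric-mean approximation to an invented 3x3 crisp matrix. It is not the fuzzy calculation, the criteria, or the data used in the paper.

```python
import math

def ahp_priorities(pairwise):
    """Approximate AHP priority weights via the row geometric-mean method."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Invented 3x3 pairwise comparison matrix (criterion i vs. criterion j, Saaty scale).
pairwise = [
    [1.0,   3.0,   5.0],
    [1 / 3, 1.0,   2.0],
    [1 / 5, 1 / 2, 1.0],
]
print([round(w, 3) for w in ahp_priorities(pairwise)])  # roughly [0.65, 0.23, 0.12]
```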

  18. Methodology capture: discriminating between the "best" and the rest of community practice

    PubMed Central

    Eales, James M; Pinney, John W; Stevens, Robert D; Robertson, David L

    2008-01-01

    Background The methodologies we use both enable and help define our research. However, as experimental complexity has increased the choice of appropriate methodologies has become an increasingly difficult task. This makes it difficult to keep track of available bioinformatics software, let alone the most suitable protocols in a specific research area. To remedy this we present an approach for capturing methodology from literature in order to identify and, thus, define best practice within a field. Results Our approach is to implement data extraction techniques on the full-text of scientific articles to obtain the set of experimental protocols used by an entire scientific discipline, molecular phylogenetics. Our methodology for identifying methodologies could in principle be applied to any scientific discipline, whether or not computer-based. We find a number of issues related to the nature of best practice, as opposed to community practice. We find that there is much heterogeneity in the use of molecular phylogenetic methods and software, some of which is related to poor specification of protocols. We also find that phylogenetic practice exhibits field-specific tendencies that have increased through time, despite the generic nature of the available software. We used the practice of highly published and widely collaborative researchers ("expert" researchers) to analyse the influence of authority on community practice. We find expert authors exhibit patterns of practice common to their field and therefore act as useful field-specific practice indicators. Conclusion We have identified a structured community of phylogenetic researchers performing analyses that are customary in their own local community and significantly different from those in other areas. Best practice information can help to bridge such subtle differences by increasing communication of protocols to a wider audience. We propose that the practice of expert authors from the field of evolutionary biology is the closest to contemporary best practice in phylogenetic experimental design. Capturing best practice is, however, a complex task and should also acknowledge the differences between fields such as the specific context of the analysis. PMID:18761740

  19. Virtual building environments (VBE) - Applying information modeling to buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bazjanac, Vladimir

    2004-06-21

    A Virtual Building Environment (VBE) is a 'place' where building industry project staffs can get help in creating Building Information Models (BIM) and in the use of virtual buildings. It consists of a group of industry software that is operated by industry experts who are also experts in the use of that software. The purpose of a VBE is to facilitate expert use of appropriate software applications in conjunction with each other to efficiently support multidisciplinary work. This paper defines BIM and virtual buildings, and describes VBE objectives, set-up and characteristics of operation. It informs about the VBE Initiative and the benefits from a couple of early VBE projects.

  20. An Investigation of Expert Systems Usage for Software Requirements Development in the Strategic Defense Initiative Environment.

    DTIC Science & Technology

    1986-06-10

    the solution of the base could be the solution of the target. If expert systems are to mimic humans, then they should inherently utilize analogy. In the ... expert systems environment, the theory of frames for representing knowledge developed partly because humans usually solve problems by first seeing if ... "Goals," Computer, May 1975, p. 17. 8. A.I. Wasserman, "Some Principles of User Software Engineering for Information Systems," Digest of Papers, COMPCON

  1. User Directed Tools for Exploiting Expert Knowledge in an Immersive Segmentation and Visualization Environment

    NASA Technical Reports Server (NTRS)

    Senger, Steven O.

    1998-01-01

    Volumetric data sets have become common in medicine and many sciences through technologies such as computed x-ray tomography (CT), magnetic resonance (MR), positron emission tomography (PET), confocal microscopy and 3D ultrasound. When presented with 2D images, humans immediately and unconsciously begin a visual analysis of the scene. The viewer surveys the scene, identifying significant landmarks and building an internal mental model of the presented information. The identification of features is strongly influenced by the viewer's expectations based upon their expert knowledge of what the image should contain. While not a conscious activity, the viewer makes a series of choices about how to interpret the scene. These choices occur in parallel with viewing the scene and effectively change the way the viewer sees the image. It is this interaction of viewing and choice which is the basis of many familiar visual illusions. This is especially important in the interpretation of medical images, where it is the expert knowledge of the radiologist that interprets the image. For 3D data sets this interaction of view and choice is frustrated because choices must precede the visualization of the data set. It is not possible to visualize the data set without making some initial choices which determine how the volume of data is presented to the eye. These choices include view point orientation, region identification, and color and opacity assignments. Further compounding the problem is the fact that these visualization choices are defined in terms of computer graphics as opposed to the language of the expert's knowledge. The long term goal of this project is to develop an environment where the user can interact with volumetric data sets using tools which promote the utilization of expert knowledge by incorporating visualization and choice into a tight computational loop. The tools will support activities involving the segmentation of structures, construction of surface meshes and local filtering of the data set. To conform to this environment, tools should have several key attributes. First, they should rely only on computations over a local neighborhood of the probe position. Second, they should operate iteratively over time, converging towards a limit behavior. Third, they should adapt to user input, modifying their operational parameters with time.

  2. Pathways to lean software development: An analysis of effective methods of change

    NASA Astrophysics Data System (ADS)

    Hanson, Richard D.

    This qualitative Delphi study explored the challenges that exist in delivering software on time, within budget, and with the original scope identified. The literature review identified many attempts over the past several decades to reform the methods used to develop software. These attempts found that the classical waterfall method, which is firmly entrenched in American business today, was to blame for this difficulty (Chatterjee, 2010). Each of these proponents of new methods sought to remove waste, lighten the process, and implement lean principles in software development. Through this study, the experts evaluated the barriers to effective development principles and defined the leadership qualities necessary to overcome these barriers. The barriers identified were resistance to change, risk and reward issues, and lack of management buy-in. Thirty experts in software development from several Fortune 500 companies across the United States explored each of these issues in detail. The conclusion reached by these experts was that visionary leadership is necessary to overcome these challenges.

  3. An expert system executive for automated assembly of large space truss structures

    NASA Technical Reports Server (NTRS)

    Allen, Cheryl L.

    1993-01-01

    Langley Research Center developed a unique test bed for investigating the practical problems associated with the assembly of large space truss structures using robotic manipulators. The test bed is the result of an interdisciplinary effort that encompasses the full spectrum of assembly problems - from the design of mechanisms to the development of software. The automated structures assembly test bed and its operation are described, the expert system executive and its development are detailed, and the planned system evolution is discussed. Emphasis is on the expert system implementation of the program executive. The executive program must direct and reliably perform complex assembly tasks with the flexibility to recover from realistic system errors. The employment of an expert system permits information that pertains to the operation of the system to be encapsulated concisely within a knowledge base. This consolidation substantially reduced code, increased flexibility, eased software upgrades, and realized a savings in software maintenance costs.

  4. Does reflection lead to wise choices?

    PubMed Central

    Bortolotti, Lisa

    2011-01-01

    Does conscious reflection lead to good decision-making? Whereas engaging in reflection is traditionally thought to be the best way to make wise choices, recent psychological evidence undermines the role of reflection in lay and expert judgement. The literature suggests that thinking about reasons does not improve the choices people make, and that experts do not engage in reflection, but base their judgements on intuition, often shaped by extensive previous experience. Can we square the traditional accounts of wisdom with the results of these empirical studies? Should we even attempt to? I shall defend the view that philosophy and cognitive sciences genuinely interact in tackling questions such as whether reflection leads to making wise choices. PMID:22408385

  5. Development of an expert system prototype for determining software functional requirements for command management activities at NASA Goddard

    NASA Technical Reports Server (NTRS)

    Liebowitz, J.

    1986-01-01

    The development of an expert system prototype for software functional requirement determination for NASA Goddard's Command Management System, as part of its process of transforming general requests into specific near-earth satellite commands, is described. The present knowledge base was formulated through interactions with domain experts, and was then linked to the existing Knowledge Engineering System (KES) expert system application generator. Steps in the knowledge-base development include problem-oriented attribute hierarchy development, knowledge management approach determination, and knowledge base encoding. The KES Parser and Inspector, in addition to backcasting and analogical mapping, were used to validate the expert system-derived requirements for one of the major functions of a spacecraft, the Solar Maximum Mission. Knowledge refinement, evaluation, and implementation procedures of the expert system were then accomplished.

  6. Pros and cons of conjoint analysis of discrete choice experiments to define classification and response criteria in rheumatology.

    PubMed

    Taylor, William J

    2016-03-01

    Conjoint analysis of choice or preference data has been used in marketing for over 40 years but has appeared in healthcare settings much more recently. It may be a useful technique for applications within the rheumatology field. Conjoint analysis in rheumatology contexts has mainly used the approaches implemented by 1000Minds Ltd (Dunedin, New Zealand) and Sawtooth Software (Orem, UT, USA). Examples include classification criteria, composite response criteria, service prioritization tools and utilities assessment. Limitations imposed by very large numbers of attributes can be managed using newer techniques. Conjoint analysis studies of classification and response criteria suggest that the assumption of equal weighting of attributes cannot be met, which challenges traditional approaches to composite criteria construction. Weights elicited through choice experiments with experts can yield more accurate classification criteria than unweighted criteria. Studies that find significant variation in attribute weights for composite response criteria for gout make construction of such criteria problematic. Better understanding of various multiattribute phenomena is likely to come with increased use of conjoint analysis, especially when the attributes concern individual perceptions or opinions. In addition to classification criteria, emerging applications for conjoint analysis in rheumatology include prioritization tools, remission criteria, and utilities for life areas.

  7. XMM-Newton Remote Interface to Science Analysis Software: First Public Version

    NASA Astrophysics Data System (ADS)

    Ibarra, A.; Gabriel, C.

    2011-07-01

    We present the first public beta release of the XMM-Newton Remote Interface to Science Analysis (RISA) software, available through the official XMM-Newton web pages. In a nutshell, RISA is a web-based application that encapsulates the XMM-Newton data analysis software. The client identifies observations and creates XMM-Newton workflows. The server processes the client request, creates job templates and sends the jobs to a computer. RISA has been designed to serve non-expert and professional XMM-Newton users alike. Thanks to the predefined threads, non-expert users can easily produce light curves and spectra, while expert users can use the full parameter interface to tune their own analysis. In both cases, the VO-compliant client/server design frees the users from having to install any specific software to analyze XMM-Newton data.

  8. Software For Design Of Life-Support Systems

    NASA Technical Reports Server (NTRS)

    Rudokas, Mary R.; Cantwell, Elizabeth R.; Robinson, Peter I.; Shenk, Timothy W.

    1991-01-01

    Design Assistant Workstation (DAWN) computer program is prototype of expert software system for analysis and design of regenerative, physical/chemical life-support systems that revitalize air, reclaim water, produce food, and treat waste. Incorporates both conventional software for quantitative mathematical modeling of physical, chemical, and biological processes and expert system offering user stored knowledge about materials and processes. Constructs task tree as it leads user through simulated process, offers alternatives, and indicates where alternative not feasible. Also enables user to jump from one design level to another.

  9. Probabilistic structural analysis methods for select space propulsion system components

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Cruse, T. A.

    1989-01-01

    The Probabilistic Structural Analysis Methods (PSAM) project developed at the Southwest Research Institute integrates state-of-the-art structural analysis techniques with probability theory for the design and analysis of complex large-scale engineering structures. An advanced, efficient software system (NESSUS) capable of performing complex probabilistic analysis has been developed. NESSUS contains a number of software components to perform probabilistic analysis of structures. These components include: an expert system, a probabilistic finite element code, a probabilistic boundary element code and a fast probability integrator. The NESSUS software system is shown. An expert system is included to capture and utilize PSAM knowledge and experience. NESSUS/EXPERT is an interactive menu-driven expert system that provides information to assist in the use of the probabilistic finite element code NESSUS/FEM and the fast probability integrator (FPI). The expert system menu structure is summarized. The NESSUS system contains a state-of-the-art nonlinear probabilistic finite element code, NESSUS/FEM, to determine the structural response and sensitivities. A broad range of analysis capabilities and an extensive element library are present.
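
    The fast probability integration algorithms themselves are not described in this abstract; as a rough, hedged illustration of the quantity such tools estimate, the Python sketch below computes a probability of failure by plain Monte Carlo sampling of a textbook stress-strength limit state g = strength - stress. The distributions and parameters are invented and unrelated to NESSUS or FPI.

```python
import random

def monte_carlo_pof(n_samples=100_000, seed=1):
    """Estimate P(failure) for the limit state g = strength - stress < 0."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        stress = rng.gauss(mu=300.0, sigma=30.0)    # applied stress, MPa (assumed)
        strength = rng.gauss(mu=400.0, sigma=25.0)  # material strength, MPa (assumed)
        if strength - stress < 0.0:
            failures += 1
    return failures / n_samples

print(f"Estimated probability of failure: {monte_carlo_pof():.4f}")
```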

  10. Proceedings of Tenth Annual Software Engineering Workshop

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Papers are presented on the following topics: measurement of software technology, recent studies of the Software Engineering Lab, software management tools, expert systems, error seeding as a program validation technique, software quality assurance, software engineering environments (including knowledge-based environments), the Distributed Computing Design System, and various Ada experiments.

  11. An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development

    PubMed Central

    Knauff, Markus; Nejasmic, Jelica

    2014-01-01

    The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often report enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors. PMID:25526083

  12. Inter- and intrarater reliability of the Chicago Classification in pediatric high-resolution esophageal manometry recordings.

    PubMed

    Singendonk, M M J; Smits, M J; Heijting, I E; van Wijk, M P; Nurko, S; Rosen, R; Weijenborg, P W; Abu-Assi, R; Hoekman, D R; Kuizenga-Wessel, S; Seiboth, G; Benninga, M A; Omari, T I; Kritas, S

    2015-02-01

    The Chicago Classification (CC) facilitates interpretation of high-resolution manometry (HRM) recordings. Application of this adult-based algorithm to the pediatric population is unknown. We therefore assessed intra- and interrater reliability of software-based CC diagnosis in a pediatric cohort. Thirty pediatric solid-state HRM recordings (13M; mean age 12.1 ± 5.1 years) assessing 10 liquid swallows per patient were analyzed twice by 11 raters (six experts, five non-experts). Software-placed anatomical landmarks required manual adjustment or removal. Integrated relaxation pressure (IRP4s), distal contractile integral (DCI), contractile front velocity (CFV), distal latency (DL) and break size (BS), and an overall CC diagnosis were software-generated. In addition, raters provided their subjective CC diagnosis. Reliability was calculated with Cohen's and Fleiss' kappa (κ) and the intraclass correlation coefficient (ICC). Intra- and interrater reliability of software-generated CC diagnosis after manual adjustment of landmarks was substantial (mean κ = 0.69 and 0.77, respectively) and moderate-substantial for subjective CC diagnosis (mean κ = 0.70 and 0.58, respectively). Reliability of both software-generated and subjective diagnosis of normal motility was high (κ = 0.81 and κ = 0.79). Intra- and interrater reliability were excellent for IRP4s, DCI, and BS. Experts had higher interrater reliability than non-experts for DL (ICC = 0.65 vs ICC = 0.36, respectively) and for the software-generated diagnosis of diffuse esophageal spasm (DES, κ = 0.64 vs κ = 0.30). Among experts, the reliability for the subjective diagnosis of achalasia and esophagogastric junction outflow obstruction was moderate-substantial (κ = 0.45-0.82). Inter- and intrarater reliability of software-based CC diagnosis of pediatric HRM recordings was high overall. However, experience was a factor influencing the diagnosis of some motility disorders, particularly DES and achalasia. © 2014 John Wiley & Sons Ltd.
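
    As a small, hedged illustration of the agreement statistic used for the paired comparisons, the Python sketch below computes Cohen's kappa for two repeated readings of the same studies. The diagnosis labels and ratings are invented, not data from the study, and the Fleiss' kappa and ICC used for the multi-rater and continuous measures are not shown.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two sets of categorical ratings of the same cases."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / (n * n)
    return (observed - expected) / (1.0 - expected)

# Invented diagnosis labels from two readings of the same eight recordings.
first  = ["normal", "normal", "DES", "achalasia", "normal", "EGJOO", "normal", "DES"]
second = ["normal", "normal", "DES", "achalasia", "DES",    "EGJOO", "normal", "normal"]
print(round(cohens_kappa(first, second), 2))
```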

  13. System of experts for intelligent data management (SEIDAM)

    NASA Technical Reports Server (NTRS)

    Goodenough, David G.; Iisaka, Joji; Fung, KO

    1993-01-01

    A proposal to conduct research and development on a system of expert systems for intelligent data management (SEIDAM) is being developed. CCRS has much expertise in developing systems for integrating geographic information with space and aircraft remote sensing data and in managing large archives of remotely sensed data. SEIDAM will be composed of expert systems grouped in three levels. At the lowest level, the expert systems will manage and integrate data from diverse sources, taking account of symbolic representation differences and varying accuracies. Existing software can be controlled by these expert systems, without rewriting existing software into an Artificial Intelligence (AI) language. At the second level, SEIDAM will take the interpreted data (symbolic and numerical) and combine these with data models. At the top level, SEIDAM will respond to user goals for predictive outcomes given existing data. The SEIDAM Project will address the research areas of expert systems, data management, storage and retrieval, and user access and interfaces.

  14. System of Experts for Intelligent Data Management (SEIDAM)

    NASA Technical Reports Server (NTRS)

    Goodenough, David G.; Iisaka, Joji; Fung, KO

    1992-01-01

    It is proposed to conduct research and development on a system of expert systems for intelligent data management (SEIDAM). CCRS has much expertise in developing systems for integrating geographic information with space and aircraft remote sensing data and in managing large archives of remotely sensed data. SEIDAM will be composed of expert systems grouped in three levels. At the lowest level, the expert systems will manage and integrate data from diverse sources, taking account of symbolic representation differences and varying accuracies. Existing software can be controlled by these expert systems, without rewriting existing software into an Artificial Intelligence (AI) language. At the second level, SEIDAM will take the interpreted data (symbolic and numerical) and combine these with data models. At the top level, SEIDAM will respond to user goals for predictive outcomes given existing data. The SEIDAM Project will address the research areas of expert systems, data management, storage and retrieval, and user access and interfaces.

  15. An SSME High Pressure Oxidizer Turbopump diagnostic system using G2 real-time expert system

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei

    1991-01-01

    An expert system which diagnoses various seal leakage faults in the High Pressure Oxidizer Turbopump of the SSME was developed using G2 real-time expert system. Three major functions of the software were implemented: model-based data generation, real-time expert system reasoning, and real-time input/output communication. This system is proposed as one module of a complete diagnostic system for the SSME. Diagnosis of a fault is defined as the determination of its type, severity, and likelihood. Since fault diagnosis is often accomplished through the use of heuristic human knowledge, an expert system based approach has been adopted as a paradigm to develop this diagnostic system. To implement this approach, a software shell which can be easily programmed to emulate the human decision process, the G2 Real-Time Expert System, was selected. Lessons learned from this implementation are discussed.

  16. An SSME high pressure oxidizer turbopump diagnostic system using G2(TM) real-time expert system

    NASA Technical Reports Server (NTRS)

    Guo, Ten-Huei

    1991-01-01

    An expert system which diagnoses various seal leakage faults in the High Pressure Oxidizer Turbopump of the SSME was developed using G2(TM) real-time expert system. Three major functions of the software were implemented: model-based data generation, real-time expert system reasoning, and real-time input/output communication. This system is proposed as one module of a complete diagnostic system for Space Shuttle Main Engine. Diagnosis of a fault is defined as the determination of its type, severity, and likelihood. Since fault diagnosis is often accomplished through the use of heuristic human knowledge, an expert system based approach was adopted as a paradigm to develop this diagnostic system. To implement this approach, a software shell which can be easily programmed to emulate the human decision process, the G2 Real-Time Expert System, was selected. Lessons learned from this implementation are discussed.

  17. The Management and Security Expert (MASE)

    NASA Technical Reports Server (NTRS)

    Miller, Mark D.; Barr, Stanley J.; Gryphon, Coranth D.; Keegan, Jeff; Kniker, Catherine A.; Krolak, Patrick D.

    1991-01-01

    The Management and Security Expert (MASE) is a distributed expert system that monitors the operating systems and applications of a network. It is capable of gleaning the information provided by the different operating systems in order to optimize hardware and software performance; recognize potential hardware and/or software failures, and either repair the problem before it becomes an emergency or notify the systems manager of the problem; and monitor applications and known security holes for indications of an intruder or virus. MASE can eradicate much of the guesswork of system management.

  18. International study of expert judgment on therapeutic use of benzodiazepines and other psychotherapeutic medications: VI. Trends in recommendations for the pharmacotherapy of anxiety disorders, 1992-1997.

    PubMed

    Uhlenhuth, E H; Balter, M B; Ban, T A; Yang, K

    1999-01-01

    To assemble expert clinical experience and judgment regarding the treatment of anxiety disorders in a systematic, quantitative manner, particularly with respect to changes during the preceding five years. A panel of 73 internationally recognized experts in the pharmacotherapy of anxiety and depression was constituted by multistage peer nomination. Sixty-six completed a questionnaire in 1992, and 51 of those completed a follow-up questionnaire in 1997. This report focuses on the experts' responses to questions about therapeutic options relevant to seven vignettes describing typical cases of different anxiety disorders. The preferred initial treatment strategy in 1992 was a combination of medication with a psychological therapy for all vignettes except simple phobia, where a psychological procedure alone was favored. There was little change in 1997, primarily some decrease in the choice of psychological therapy and some increase in the choice of medication for social phobia. Experts recommending a medication in 1992 most often chose as first-line treatment a benzodiazepine anxiolytic (BZ) for panic disorder (PD), generalized anxiety disorder (GAD), simple phobia, and adjustment disorder. They recommended a beta-blocker most often for social phobia and a tricyclic anti-depressant (TCA) for agoraphobia and obsessive-compulsive disorder (OCD). Nearly a fourth chose a combination of medications, usually a TCA plus a BZ. In 1997, the expert panel's most frequent recommendation for agoraphobia, PD, and OCD changed to a specific serotonin reuptake inhibitor (SSRI); and they also recommended these compounds more often for GAD, social phobia, and simple phobia. Fewer experts chose BZs or TCAs. However, in 1997 many again chose a combination of medications, often a BZ plus a SSRI, so that, overall, there was only a small decline in recommendations for BZs. As second-line medications (1997 only), the experts recommended SSRIs most often for most vignettes, but a TCA for PD and GAD. Recommendations for a combination of medications rose substantially for most vignettes, usually a BZ plus an antidepressant. Combined cognitive-behavioral therapy plus medication was highly favored by the experts as the initial treatment strategy for anxiety disorders. During the preceding five years, SSRIs displaced older antidepressants as the experts' first-line choices for the pharmacotherapy of anxiety disorders. In case of an unsatisfactory response, the experts' second-line choices more often were an older antidepressant or a combination of an antidepressant plus a BZ. According to the experts' judgements, the BZs, especially combined with an antidepressant, remain mainstays of pharmacotherapy for anxiety disorders.

  19. Application Reuse Library for Software, Requirements, and Guidelines

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Thronesbery, Carroll

    1994-01-01

    Better designs are needed for expert systems and other operations automation software, for more reliable, usable and effective human support. A prototype computer-aided Application Reuse Library shows feasibility of supporting concurrent development and improvement of advanced software by users, analysts, software developers, and human-computer interaction experts. Such a library expedites development of quality software, by providing working, documented examples, which support understanding, modification and reuse of requirements as well as code. It explicitly documents and implicitly embodies design guidelines, standards and conventions. The Application Reuse Library provides application modules with Demo-and-Tester elements. Developers and users can evaluate applicability of a library module and test modifications, by running it interactively. Sub-modules provide application code and displays and controls. The library supports software modification and reuse, by providing alternative versions of application and display functionality. Information about human support and display requirements is provided, so that modifications will conform to guidelines. The library supports entry of new application modules from developers throughout an organization. Example library modules include a timer, some buttons and special fonts, and a real-time data interface program. The library prototype is implemented in the object-oriented G2 environment for developing real-time expert systems.

  20. An expert system prototype for aiding in the development of software functional requirements for NASA Goddard's command management system: A case study and lessons learned

    NASA Technical Reports Server (NTRS)

    Liebowitz, Jay

    1986-01-01

    At NASA Goddard, the role of the command management system (CMS) is to transform general requests for spacecraft operations into detailed operational plans to be uplinked to the spacecraft. The CMS is part of the NASA Data System, which entails the downlink of science and engineering data from NASA near-earth satellites to the user, and the uplink of command and control data to the spacecraft. Presently, it takes one to three years, with meetings once or twice a week, to determine functional requirements for CMS software design. As an alternative approach to the present technique of developing CMS software functional requirements, an expert system prototype was developed to aid in this function. Specifically, the knowledge base was formulated through interactions with domain experts, and was then linked to an existing expert system application generator called 'Knowledge Engineering System (Version 1.3).' Knowledge base development focused on four major steps: (1) develop the problem-oriented attribute hierarchy; (2) determine the knowledge management approach; (3) encode the knowledge base; and (4) validate, test, certify, and evaluate the knowledge base and the expert system prototype as a whole. Backcasting was accomplished for validating and testing the expert system prototype. Knowledge refinement, evaluation, and implementation procedures of the expert system prototype were then accomplished.

  1. Design of an Ada expert system shell for the VHSIC avionic modular flight processor

    NASA Technical Reports Server (NTRS)

    Fanning, F. Jesse

    1992-01-01

    The Embedded Computer System Expert System Shell (ES Shell) is an Ada-based expert system shell developed at the Avionics Laboratory for use on the VHSIC Avionic Modular Processor (VAMP) running under the Ada Avionics Real-Time Software (AARTS) Operating System. The ES Shell provides the interface between the expert system and the avionics environment, and controls execution of the expert system. Testing of the ES Shell in the Avionics Laboratory's Integrated Test Bed (ITB) has demonstrated its ability to control a non-deterministic software application executing on the VAMPs, which can control the ITB's real-time closed-loop aircraft simulation. The results of these tests and the conclusions reached in the design and development of the ES Shell have played an important role in the formulation of the requirements for a production-quality expert system inference engine, an ingredient necessary for the successful use of expert systems on the VAMP embedded avionic flight processor.

  2. Formal verification of AI software

    NASA Technical Reports Server (NTRS)

    Rushby, John; Whitehurst, R. Alan

    1989-01-01

    The application of formal verification techniques to Artificial Intelligence (AI) software, particularly expert systems, is investigated. Constraint satisfaction and model inversion are identified as two formal specification paradigms for different classes of expert systems. A formal definition of consistency is developed, and the notion of approximate semantics is introduced. Examples are given of how these ideas can be applied in both declarative and imperative forms.

  3. Integration of an expert system into a user interface language demonstration

    NASA Technical Reports Server (NTRS)

    Stclair, D. C.

    1986-01-01

    The need for a User Interface Language (UIL) has been recognized by the Space Station Program Office as a necessary tool to aid in minimizing the cost of software generation by multiple users. Previous history in the Space Shuttle Program has shown that many different areas of software generation, such as operations, integration, and testing, have each used a different user command language although the types of operations being performed were similar in many respects. Since the Space Station represents a much more complex software task, a common user command language--a user interface language--is required to support the large spectrum of space station software developers and users. To assist in the selection of an appropriate set of definitions for a UIL, a series of demonstration programs was generated with which to test UIL concepts against specific Space Station scenarios, using operators from the astronaut and scientific communities. Because of the importance of expert systems in the Space Station, it was decided that an expert system should be embedded in the UIL. This would not only provide insight into the UIL components required but would also indicate the effectiveness with which an expert system could function in such an environment.

  4. Measuring University students' understanding of the greenhouse effect - a comparison of multiple-choice, short answer and concept sketch assessment tools with respect to students' mental models

    NASA Astrophysics Data System (ADS)

    Gold, A. U.; Harris, S. E.

    2013-12-01

    The greenhouse effect comes up in most discussions about climate and is a key concept related to climate change. Existing studies have shown that students and adults alike lack a detailed understanding of this important concept or might hold misconceptions. We studied the effectiveness of different interventions on university-level students' understanding of the greenhouse effect. Introductory-level science students were tested for their prior knowledge of the greenhouse effect using validated multiple-choice questions, short answers and concept sketches. All students participated in a common lesson about the greenhouse effect and were then randomly assigned to one of two lab groups. One group explored an existing simulation about the greenhouse effect (PhET lesson) and the other group worked with absorption spectra of different greenhouse gases (Data lesson) to deepen their understanding of the greenhouse effect. All students completed the same assessment, including multiple choice, short answers and concept sketches, after participation in their lab lesson. 164 students completed all the assessments, 76 completed the PhET lesson and 77 completed the data lesson. 11 students missed the contrasting lesson. In this presentation we show the comparison between the multiple-choice questions, short answer questions and the concept sketches of students. We explore how well each of these assessment types represents students' knowledge. We also identify items that are indicators of the level of understanding of the greenhouse effect, as measured by the correspondence of student answers to an expert mental model and expert responses. Preliminary data analysis shows that students who produce concept sketch drawings that come close to expert drawings also choose correct multiple-choice answers. However, correct multiple-choice answers are not necessarily an indicator that a student produces the corresponding expert-like concept sketch items. Multiple-choice questions that require detailed knowledge of the greenhouse effect (e.g. the direction of re-emission of infrared energy from greenhouse gases) are significantly more likely to be answered correctly by students who also produce expert-like concept sketch items than by students who don't include this aspect in their sketch and don't answer the multiple-choice questions correctly. This difference is not as apparent for less technical multiple-choice questions (e.g. the type of radiation emitted by the Sun). Our findings explore the formation of students' mental models throughout different interventions and how well the different assessment techniques used in this study represent student understanding of the overall concept.

  5. The Many Faces of a Software Engineer in a Research Community

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marinovici, Maria C.; Kirkham, Harold

    2013-10-14

    The ability to gather, analyze and make decisions based on real world data is changing nearly every field of human endeavor. These changes are particularly challenging for software engineers working in a scientific community, designing and developing large, complex systems. To avoid the creation of a communications gap (almost a language barrier), the software engineers should possess an 'adaptive' skill. In the science and engineering research community, the software engineers must be responsible for more than creating mechanisms for storing and analyzing data. They must also develop a fundamental scientific and engineering understanding of the data. This paper looks at the many faces that a software engineer should have: developer, domain expert, business analyst, security expert, project manager, tester, user experience professional, etc. Observations made during work on a power-systems scientific software development are analyzed and extended to describe more generic software development projects.

  6. Expert Financial Advice Neurobiologically “Offloads” Financial Decision-Making under Risk

    PubMed Central

    Engelmann, Jan B.; Capra, C. Monica; Noussair, Charles; Berns, Gregory S.

    2009-01-01

    Background Financial advice from experts is commonly sought during times of uncertainty. While the field of neuroeconomics has made considerable progress in understanding the neurobiological basis of risky decision-making, the neural mechanisms through which external information, such as advice, is integrated during decision-making are poorly understood. In the current experiment, we investigated the neurobiological basis of the influence of expert advice on financial decisions under risk. Methodology/Principal Findings While undergoing fMRI scanning, participants made a series of financial choices between a certain payment and a lottery. Choices were made in two conditions: 1) advice from a financial expert about which choice to make was displayed (MES condition); and 2) no advice was displayed (NOM condition). Behavioral results showed a significant effect of expert advice. Specifically, probability weighting functions changed in the direction of the expert's advice. This was paralleled by neural activation patterns. Brain activations showing significant correlations with valuation (parametric modulation by value of lottery/sure win) were obtained in the absence of the expert's advice (NOM) in intraparietal sulcus, posterior cingulate cortex, cuneus, precuneus, inferior frontal gyrus and middle temporal gyrus. Notably, no significant correlations with value were obtained in the presence of advice (MES). These findings were corroborated by region of interest analyses. Neural equivalents of probability weighting functions showed significant flattening in the MES compared to the NOM condition in regions associated with probability weighting, including anterior cingulate cortex, dorsolateral PFC, thalamus, medial occipital gyrus and anterior insula. Finally, during the MES condition, significant activations in temporoparietal junction and medial PFC were obtained. Conclusions/Significance These results support the hypothesis that one effect of expert advice is to “offload” the calculation of value of decision options from the individual's brain. PMID:19308261
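
    The flattening of the probability weighting function reported here can be illustrated with a standard one-parameter form from the decision-theory literature, the Prelec function w(p) = exp(-(-ln p)^gamma), in which a smaller exponent yields a flatter curve. The Python sketch below simply prints the weights for two illustrative exponents; these parameter values are assumptions and are not the values estimated in this study.

```python
import math

def prelec_weight(p, gamma):
    """Prelec probability weighting function: w(p) = exp(-(-ln p)**gamma)."""
    return math.exp(-((-math.log(p)) ** gamma))

probs = [0.01, 0.1, 0.3, 0.5, 0.7, 0.9, 0.99]
for gamma, label in [(0.9, "steeper curve (illustrating no-advice)"),
                     (0.4, "flatter curve (illustrating with-advice)")]:
    row = ", ".join(f"{prelec_weight(p, gamma):.2f}" for p in probs)
    print(f"gamma = {gamma}, {label}: {row}")
```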

  7. Expert systems applied to fault isolation and energy storage management, phase 2

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A user's guide for the Fault Isolation and Energy Storage (FIES) II system is provided. Included are a brief discussion of the background and scope of this project, a discussion of basic and advanced operating, installation, and problem determination procedures for the FIES II system, and information on hardware and software design and implementation. A number of appendices are provided, including a detailed specification for the microprocessor software, a detailed description of the expert system rule base, and a description and listings of the LISP interface software.

  8. What Is An Expert System? ERIC Digest.

    ERIC Educational Resources Information Center

    Boss, Richard W.

    This digest describes and defines the various components of an expert system, i.e., a computerized tool designed to enhance the quality and availability of knowledge required by decision makers. It is noted that expert systems differ from conventional applications software in the following areas: (1) the existence of the expert systems shell, or…

  9. Expert Systems: A Challenge for the Reading Profession.

    ERIC Educational Resources Information Center

    Balajthy, Ernest

    Expert systems are designed to imitate the reasoning of a human expert in a content-area field. Designed to be advisors, these software systems combine the content-area knowledge and decision-making ability of an expert with the user's understanding and knowledge of particular circumstances. The reading diagnosis system, the RD2P System…

  10. Expert system verification concerns in an operations environment

    NASA Technical Reports Server (NTRS)

    Goodwin, Mary Ann; Robertson, Charles C.

    1987-01-01

    The Space Shuttle community is currently developing a number of knowledge-based tools, primarily expert systems, to support Space Shuttle operations. It is proposed that anticipating and responding to the requirements of the operations environment will contribute to a rapid and smooth transition of expert systems from development to operations, and that the requirements for verification are critical to this transition. The paper identifies the requirements of expert systems to be used for flight planning and support and compares them to those of the existing procedural software used for flight planning and support. It then explores software engineering concepts and methodology that can be used to satisfy these requirements, to aid the transition from development to operations and to support the operations environment during the lifetime of expert systems. Many of these are similar to those used for conventional procedural software.

  11. Developing and Testing of a Software Prototype to Support Diagnostic Reasoning of Nursing Students.

    PubMed

    de Sousa, Vanessa Emille Carvalho; de Oliveira Lopes, Marcos Venícios; Keenan, Gail M; Lopez, Karen Dunn

    2018-04-01

    To design and test educational software to improve nursing students' diagnostic reasoning through NANDA-I-based clinical scenarios. A mixed method approach was used and included content validation by a panel of 13 experts and prototype testing by a sample of 56 students. Experts' suggestions included writing adjustments, new response options, and replacement of clinical information on the scenarios. Percentages of students' correct answers were 65.7%, 62.2%, and 60.5% for related factors, defining characteristics, and nursing diagnoses, respectively. Full development of this software shows strong potential for enhancing students' diagnostic reasoning. New graduates may be able to apply diagnostic reasoning more rapidly by exercising their diagnostic skills within this software. © 2016 NANDA International, Inc.

  12. The Effects of Word Processing Software on User Satisfaction: An Empirical Study of Micro, Mini, and Mainframe Computers Using an Interactive Artificial Intelligence Expert-System.

    ERIC Educational Resources Information Center

    Rushinek, Avi; Rushinek, Sara

    1984-01-01

    Describes results of a system rating study in which users responded to WPS (word processing software) questions. Study objectives were data collection and evaluation of variables; statistical quantification of WPS's contribution (along with other variables) to user satisfaction; design of an expert system to evaluate WPS; and database update and…

  13. Advanced automation of a prototypic thermal control system for Space Station

    NASA Technical Reports Server (NTRS)

    Dominick, Jeff

    1990-01-01

    Viewgraphs on advanced automation of a prototypic thermal control system for Space Station are presented. The Thermal Expert System (TEXSYS) was initiated in 1986 as a cooperative project between ARC and JSC as a way to leverage ongoing work at both centers. JSC contributed Thermal Control System (TCS) hardware and control software, TCS operational expertise, and integration expertise. ARC contributed expert system and display expertise. The first years of the project were dedicated to parallel development of expert system tools, displays, interface software, and TCS technology and procedures by a total of four organizations.

  14. Evaluation of Automated Fracture Risk Assessment Based on the Canadian Association of Radiologists and Osteoporosis Canada Assessment Tool.

    PubMed

    Allin, Sonya; Bleakney, Robert; Zhang, Julie; Munce, Sarah; Cheung, Angela M; Jaglal, Susan

    2016-01-01

    Fracture risk assessments are not always clearly communicated on bone mineral density (BMD) reports; evidence suggests that structured reporting (SR) tools may improve report clarity. The aim of this study is to compare fracture risk assessments automatically assigned by SR software in accordance with Canadian Association of Radiologists and Osteoporosis Canada (CAROC) recommendations to assessments from experts on narrative BMD reports. Charts for 500 adult patients who recently received a BMD exam were sampled from across the University of Toronto's Joint Department of Medical Imaging. BMD measures and clinical details were manually abstracted from charts and were used to create structured reports with assessments generated by a software implementation of CAROC recommendations. CAROC calculations were statistically compared to experts' original assessments using percentage agreement (PA) and Krippendorff's alpha. Canadian FRAX calculations were also compared to experts', where possible. A total of 25 (5.0%) reported assessments did not conform to categorizations recommended by Canadian guidelines. Across the remainder, the Krippendorff's alpha relating software-assigned assessments to physicians' assessments was high at 0.918; PA was 94.3%. Lower agreement was associated with reports for patients with documented modifying factors (alpha = 0.860, PA = 90.2%). Similar patterns of agreement related expert assessments to FRAX calculations, although the statistics of agreement were lower. Categories of disagreement were defined by (1) gray areas in current guidelines, (2) margins of assessment categorizations, (3) dictation/transcription errors, (4) patients on low doses of steroids, and (5) ambiguous documentation of modifying factors. Results suggest that SR software can produce fracture risk assessments that agree with experts on most routine, adult BMD exams. Results also highlight situations where experts tend to diverge from guidelines and illustrate the potential for SR software to (1) reduce variability in, (2) ameliorate errors in, and (3) improve clarity of routine adult BMD exam reports. Copyright © 2016 International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
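
    The abstract notes that disagreements clustered around documented modifying factors; the Python sketch below is a heavily hedged illustration of the category-adjustment step such SR software can automate. The encoded rule (a fragility fracture or prolonged glucocorticoid use raises the baseline CAROC risk category by one level, and both together indicate high risk) is stated here as an assumption about the Canadian guidance, and the baseline category itself would come from age, sex, and T-score tables that are not reproduced.

```python
CATEGORIES = ["low", "moderate", "high"]

def adjust_category(baseline, fragility_fracture=False, glucocorticoid_use=False):
    """Raise a baseline risk category for documented modifying factors (assumed rule)."""
    if fragility_fracture and glucocorticoid_use:
        return "high"
    bump = 1 if (fragility_fracture or glucocorticoid_use) else 0
    index = min(CATEGORIES.index(baseline) + bump, len(CATEGORIES) - 1)
    return CATEGORIES[index]

print(adjust_category("low", fragility_fracture=True))        # -> moderate
print(adjust_category("moderate", glucocorticoid_use=True))   # -> high
```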

  15. Software life cycle methodologies and environments

    NASA Technical Reports Server (NTRS)

    Fridge, Ernest

    1991-01-01

    Products of this project will significantly improve the quality and productivity of Space Station Freedom Program software processes by improving software reliability and safety and by broadening the range of problems that can be solved with computational solutions. The project brings in Computer Aided Software Engineering (CASE) technology for: environments, such as the Engineering Script Language/Parts Composition System (ESL/PCS) application generator, the Intelligent User Interface for cost avoidance in setting up operational computer runs, the Framework programmable platform for defining process and software development work flow control, a process for bringing CASE technology into an organization's culture, and the CLIPS/CLIPS Ada language for developing expert systems; and methodologies, such as a method for developing fault tolerant, distributed systems and a method for developing systems for common sense reasoning and for solving expert system problems when only approximate truths are known.

  16. A Model for the Development of Hospital Beds Using Fuzzy Analytical Hierarchy Process (Fuzzy AHP)

    PubMed Central

    RAVANGARD, Ramin; BAHADORI, Mohammadkarim; RAADABADI, Mehdi; TEYMOURZADEH, Ehsan; ALIMOMOHAMMADZADEH, Khalil; MEHRABIAN, Fardin

    2017-01-01

    Background: This study aimed to identify and prioritize factors affecting the development of military hospital beds and provide a model using fuzzy analytical hierarchy process (Fuzzy AHP). Methods: This applied study was conducted in 2016 in Iran using a mixed method. The sample included experts in the field of the military health care system. The MAXQDA 10.0 and Expert Choice 10.0 software were used for analyzing the collected data. Results: Geographic situation, demographic status, economic status, health status, health care centers and organizations, financial and human resources, laws, regulations and by-laws, and the military nature of service recipients had effects on the development of military hospital beds. The military nature of service recipients (S=0.249) and economic status (S=0.040) received the highest and lowest priorities, respectively. Conclusion: Providing direct health care services to military forces in order to maintain their dignity, given its importance in crises and the necessity of maintaining the security of the armed forces, and meeting the hospital beds per capita required by existing laws, regulations, and by-laws are of utmost importance. PMID:29167775
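
    The prioritization above rests on AHP-style pairwise comparisons. As a rough illustration of the crisp (non-fuzzy) core of that calculation, the sketch below derives priority weights from a pairwise comparison matrix with the row geometric-mean approximation and checks Saaty's consistency ratio; the matrix values and the use of plain AHP rather than the fuzzy extent analysis applied in the study are illustrative assumptions.

        import numpy as np

        def ahp_priorities(pairwise):
            # Row geometric-mean approximation of the principal-eigenvector priorities.
            A = np.asarray(pairwise, dtype=float)
            gm = np.prod(A, axis=1) ** (1.0 / A.shape[0])
            return gm / gm.sum()

        def consistency_ratio(pairwise):
            # Saaty consistency ratio; values below about 0.10 are conventionally acceptable.
            A = np.asarray(pairwise, dtype=float)
            n = A.shape[0]
            w = ahp_priorities(A)
            lambda_max = float(np.mean(A @ w / w))
            ci = (lambda_max - n) / (n - 1)
            random_index = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41}
            return ci / random_index[n]

        # Hypothetical 3-criterion comparison matrix (not the study's data).
        A = [[1, 3, 5],
             [1/3, 1, 2],
             [1/5, 1/2, 1]]
        print(ahp_priorities(A))
        print(consistency_ratio(A))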

  17. The Expert Mathematician. Revised. What Works Clearinghouse Intervention Report

    ERIC Educational Resources Information Center

    What Works Clearinghouse, 2006

    2006-01-01

    "The Expert Mathematician" is designed to help middle school students develop the thinking processes for mathematical applications and communication. A three-year program of instruction, "The Expert Mathematician" uses a software and consumable print materials package with 196 lessons that teach the "Logo" programming…

  18. Advanced Automation for Ion Trap Mass Spectrometry-New Opportunities for Real-Time Autonomous Analysis

    NASA Technical Reports Server (NTRS)

    Palmer, Peter T.; Wong, C. M.; Salmonson, J. D.; Yost, R. A.; Griffin, T. P.; Yates, N. A.; Lawless, James G. (Technical Monitor)

    1994-01-01

    The utility of MS/MS for both target compound analysis and the structure elucidation of unknowns has been described in a number of references. A broader acceptance of this technique has not yet been realized, as it requires large, complex, and costly instrumentation that has not been competitive with more conventional techniques. Recent advancements in ion trap mass spectrometry promise to change this situation. Although the ion trap's small size, sensitivity, and ability to perform multiple stages of mass spectrometry have made it eminently suitable for on-line, real-time monitoring applications, advanced automation techniques are required to make these capabilities more accessible to non-experts. Towards this end we have developed custom software for the design and implementation of MS/MS experiments. This software allows the user to take full advantage of the ion trap's versatility with respect to ionization techniques, scan proxies, and ion accumulation/ejection methods. Additionally, expert system software has been developed for autonomous target compound analysis. This software has been linked to ion trap control software and a commercial data system to bring all of the steps in the analysis cycle under control of the expert system. These software development efforts and their utilization for a number of trace analysis applications will be described.

  19. [Use of Adobe Photoshop software in medical criminology].

    PubMed

    Nikitin, S A; Demidov, I V

    2000-01-01

    Describes a method for the comparative analysis of various objects in practical medical criminology and for producing high-quality photographs using Adobe Photoshop software. The software options needed for expert evaluations are enumerated.

  20. Software Reviews: Programs Worth a Second Look.

    ERIC Educational Resources Information Center

    Classroom Computer Learning, 1989

    1989-01-01

    Reviews three software programs: (1) "Microsoft Works 2.0": word processing, data processing, and telecommunications, grades 7 and up; (2) "AppleWorks GS": word processor, database, spreadsheet, graphics, and telecommunications, grades 3-12, Apple IIGS; (3) "Choices, Choices: On the Playground, Taking Responsibility":…

  1. Desktop Publishing Choices: Making an Appropriate Decision.

    ERIC Educational Resources Information Center

    Crawford, Walt

    1991-01-01

    Discusses various choices available for desktop publishing systems. Four categories of software are described, including advanced word processing, graphics software, low-end desktop publishing, and mainstream desktop publishing; appropriate hardware is considered; and selection guidelines are offered, including current and future publishing needs,…

  2. Calculation Software

    NASA Technical Reports Server (NTRS)

    1994-01-01

    MathSoft Plus 5.0 is a calculation software package for electrical engineers and computer scientists who need advanced math functionality. It incorporates SmartMath, an expert system that determines a strategy for solving difficult mathematical problems. SmartMath was the result of the integration into Mathcad of CLIPS, a NASA-developed shell for creating expert systems. By using CLIPS, MathSoft, Inc. was able to save the time and money involved in writing the original program.

  3. Security System Software

    NASA Technical Reports Server (NTRS)

    1993-01-01

    C Language Integrated Production System (CLIPS), a NASA-developed expert systems program, has enabled a security systems manufacturer to design a new generation of hardware. C.CURESystem 1 Plus, manufactured by Software House, is a software-based system that is used with a variety of access control hardware at installations around the world. Users can manage large amounts of information, solve unique security problems, and control entry and time scheduling. CLIPS acts as an information management tool when accessed by C.CURESystem 1 Plus. It asks questions about the hardware and, when given the answers, recommends quick solutions that can be carried out by non-experts.

  4. Statistical Validation for Clinical Measures: Repeatability and Agreement of Kinect™-Based Software

    PubMed Central

    Tello, Emanuel; Rodrigo, Alejandro; Valentinuzzi, Max E.

    2018-01-01

    Background The rehabilitation process is a fundamental stage for recovery of people's capabilities. However, the evaluation of the process is performed by physiatrists and medical doctors, mostly based on their observations, that is, a subjective appreciation of the patient's evolution. This paper proposes a tracking platform of the movement made by an individual's upper limb using Kinect sensor(s) to be applied for the patient during the rehabilitation process. The main contribution is the development of quantifying software and the statistical validation of its performance, repeatability, and clinical use in the rehabilitation process. Methods The software determines joint angles and upper limb trajectories for the construction of a specific rehabilitation protocol and quantifies the treatment evolution. In turn, the information is presented via a graphical interface that allows the recording, storage, and report of the patient's data. For clinical purposes, the software information is statistically validated with three different methodologies, comparing the measures with a goniometer in terms of agreement and repeatability. Results The agreement of joint angles measured with the proposed software and the goniometer is evaluated with Bland-Altman plots; all measurements fell well within the limits of agreement, meaning interchangeability of both techniques. Additionally, the results of the Bland-Altman analysis of repeatability show 95% confidence. Finally, the physiotherapists' qualitative assessment shows encouraging results for clinical use. Conclusion The main conclusion is that the software is capable of offering a clinical history of the patient and is useful for quantification of rehabilitation success. The simplicity, low cost, and visualization possibilities enhance the use of Kinect-based software for rehabilitation and other applications, and the experts' opinion endorses the choice of our approach for clinical practice. Comparison of the new measurement technique with established goniometric methods determines that the proposed software agrees sufficiently to be used interchangeably. PMID:29750166
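
    Agreement with the goniometer is assessed above with Bland-Altman limits of agreement. A minimal sketch of that calculation for paired measurements (the function name and the sample joint-angle data are assumptions for illustration, not the study's code):

        import numpy as np

        def bland_altman_limits(method_a, method_b):
            # Bias (mean difference) and 95% limits of agreement between two methods.
            a = np.asarray(method_a, dtype=float)
            b = np.asarray(method_b, dtype=float)
            diff = a - b
            bias = diff.mean()
            sd = diff.std(ddof=1)
            return bias, bias - 1.96 * sd, bias + 1.96 * sd

        kinect     = [42.0, 90.5, 120.3, 61.2, 150.8]   # joint angles in degrees
        goniometer = [40.5, 92.0, 118.9, 63.0, 149.5]
        print(bland_altman_limits(kinect, goniometer))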

  5. Experiences of Structured Elicitation for Model-Based Cost-Effectiveness Analyses.

    PubMed

    Soares, Marta O; Sharples, Linda; Morton, Alec; Claxton, Karl; Bojke, Laura

    2018-06-01

    Empirical evidence supporting the cost-effectiveness estimates of particular health care technologies may be limited, or it may even be missing entirely. In these situations, additional information, often in the form of expert judgments, is needed to reach a decision. There are formal methods to quantify experts' beliefs, termed structured expert elicitation (SEE), but only limited research is available in support of methodological choices. Perhaps as a consequence, the use of SEE in the context of cost-effectiveness modelling is limited. This article reviews applications of SEE in cost-effectiveness modelling with the aim of summarizing the basis for methodological choices made in each application and recording the difficulties and challenges reported by the authors in the design, conduct, and analyses. The methods used in each application were extracted along with the criteria used to support methodological and practical choices and any issues or challenges discussed in the text. Issues and challenges were extracted using an open field, and then categorised and grouped for reporting. The review demonstrates considerable heterogeneity in methods used, and authors acknowledge great methodological uncertainty in justifying their choices. Specificities of the context area emerging as potentially important in determining further methodological research in elicitation are between-expert variation and its interpretation, the fact that substantive experts in the area may not be trained in quantitative subjects, that judgments are often needed on various parameter types, the need for some form of assessment of validity, and the need for more integration with behavioural research to devise relevant debiasing strategies. This review of experiences of SEE highlights a number of specificities/constraints that can shape the development of guidance and target future research efforts in this area. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  6. Artificial Intelligence: The Expert Way.

    ERIC Educational Resources Information Center

    Bitter, Gary G.

    1989-01-01

    Discussion of artificial intelligence (AI) and expert systems focuses on their use in education. Characteristics of good expert systems are explained; computer software programs that contain applications of AI are described, highlighting one used to help educators identify learning-disabled students; and the future of AI is discussed. (LRW)

  7. Techniques and implementation of the embedded rule-based expert system using Ada

    NASA Technical Reports Server (NTRS)

    Liberman, Eugene M.; Jones, Robert E.

    1991-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada with its portability, transportability, and maintainability lends itself well to today's complex programming environment. In addition, expert systems have also assumed a growing role in providing human-like reasoning capability and expertise for computer systems. The integration of expert system technology with the Ada programming language, specifically a rule-based expert system using an ART-Ada (Automated Reasoning Tool for Ada) system shell, is discussed. The NASA Lewis Research Center was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components, the rule-based expert system, a graphical user interface, and communications software, make up SMART-Ada (Systems fault Management with ART-Ada). The main objective, to conduct a beta test on the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

  8. A thematic analysis for how patients, prescribers, experts, and patient advocates view the prescription choice process.

    PubMed

    Schommer, Jon C; Worley, Marcia M; Kjos, Andrea L; Pakhomov, Serguei V S; Schondelmeyer, Stephen W

    2009-06-01

    Typically, patients are unaware of the cost consequences of prescribing decisions during their clinical encounter and rarely talk with their physicians about the costs of prescription drugs. Prescription medications that are deemed by patients to be too costly when the costs become known after purchase are discontinued or used at suboptimal doses compared to prescription medications that are deemed to be worth the cost. To learn more about the prescription choice process from several viewpoints, the purpose of this study was to uncover and describe how patients, prescribers, experts, and patient advocates view the prescription choice process. Data were collected via 9 focus group interviews held between April 24 and July 31, 2007 (3 with patients, 3 with prescribers, 2 with experts, and 1 with patient advocates). The interviews were audiotaped and transcribed. The resulting text was analyzed in a descriptive and interpretive manner. Theme extraction was based on convergence and external divergence; that is, identified themes were internally consistent but distinct from one another. To ensure quality and credibility of analysis, multiple analysts and multiple methods were used to provide a quality check on selective perception and blind interpretive bias that could occur through a single person doing all of the analysis or through employment of a single method. The findings revealed 5 overall themes related to the prescription choice process: (1) information, (2) relationship, (3) patient variation, (4) practitioner variation, and (5) role expectations. The results showed that patients, prescribers, experts, and patient advocates viewed the themes within differing contexts. It appears that the prescription choice process entails an interplay among information, relationship, patient variation, practitioner variation, and role expectations, with each viewed within different contexts by individuals engaged in such decision making.

  9. Real Time Data System (RTDS)

    NASA Technical Reports Server (NTRS)

    Muratore, John F.

    1991-01-01

    Lessons learned from operational real time expert systems are examined. The basic system architecture is discussed. An expert system is any software that performs tasks to a standard that would normally require a human expert. An expert system implies knowledge contained in data rather than code. And an expert system implies the use of heuristics as well as algorithms. The 15 top lessons learned by the operation of a real time data system are presented.

  10. The expert consensus guideline series. Optimizing pharmacologic treatment of psychotic disorders. Introduction: methods, commentary, and summary.

    PubMed

    Kane, John M; Leucht, Stefan; Carpenter, Daniel; Docherty, John P

    2003-01-01

    A growing number of atypical antipsychotics are available for clinicians to choose from in the treatment of psychotic disorders. However, a number of important questions concerning medication selection, dosing and dose equivalence, and the management of inadequate response, compliance problems, and relapse have not been adequately addressed by clinical trials. To aid clinical decision-making, a consensus survey of expert opinion on the pharmacologic treatment of psychotic disorders was undertaken to address questions not definitively answered in the research literature. Based on a literature review, a written survey was developed with 60 questions and 994 options. Approximately half of the options were scored using a modified version of the RAND 9-point scale for rating the appropriateness of medical decisions. For the other options, the experts were asked to write in answers (e.g., average doses) or check a box to indicate their preferred answer. The survey was sent to 50 national experts on the pharmacologic treatment of psychotic disorders, 47 (94%) of whom completed it. In analyzing the responses to items rated on the 9-point scale, consensus on each option was defined as a non-random distribution of scores by a chi-square goodness-of-fit test. We assigned a categorical rank (first line/preferred choice, second line/alternate choice, third line/usually inappropriate) to each option based on the 95% confidence interval around the mean rating. Guideline tables indicating preferred treatment strategies were then developed for key clinical situations. The expert panel reached consensus on 88% of the options rated on the 9-point scale. The experts overwhelmingly endorsed the atypical antipsychotics for the treatment of psychotic disorders. Risperidone was the top choice for first-episode and multi-episode patients, with the other newer atypicals rated first line or high second line depending on the clinical situation. Clozapine and a long-acting injectable atypical (when available) were other high second line options for multi-episode patients. The experts' dosing recommendations agreed closely with the package inserts for the drugs, and their estimates of dose equivalence among the antipsychotics followed a linear pattern. The experts considered 3-6 weeks an adequate antipsychotic trial, but would wait a little longer (4-10 weeks) before making a major change in treatment regimen if there is a partial response. The experts recommended trying to improve response by increasing the dose of atypical and depot antipsychotics before switching to a different agent; there was less agreement about increasing the dose of conventional antipsychotics before switching, probably because of concern about side effects at higher doses. If it is decided to switch because of inadequate response, risperidone was the experts' first choice to switch to, no matter what drug was initially tried. Although there was some disparity in the experts' recommendations concerning how many agents to try before switching to clozapine, the experts' responses suggest that switching to clozapine should be … Clozapine was also the antipsychotic of choice for patients with suicidal behavior. When switching oral antipsychotics, the experts considered cross-titration the preferred strategy. When switching to an injectable antipsychotic, the experts stressed the importance of continuing the oral antipsychotic until therapeutic levels of the injectable agent are achieved.
The experts considered psychosocial interventions the first-choice strategy for partially compliant patients, with pharmacologic interventions the first choice for patients with clear evidence of noncompliance. However, because it can be difficult to distinguish partially compliant from noncompliant patients, the editors recommended combining psychosocial and pharmacologic interventions to improve compliance whenever possible. When patients relapse because of compliance problems, or if there is any doubt about compliance, the experts recommended the use of a long-acting injectable antipsychotic and would select an injectable atypical when this option becomes available. The experts would also consider using an injectable atypical antipsychotic (when available) in many clinical situations that do not involve compliance problems. The experts stressed the importance of monitoring for health problems (especially obesity, diabetes, cardiovascular problems, HIV risk behaviors, medical complications of substance abuse, heavy smoking and its effects, hypertension, and amenorrhea) in patients being treated with antipsychotics. Although many patients are prescribed adjunctive treatments, multiple antipsychotics, and combinations of different classes of drugs (e.g., antipsychotics plus mood stabilizers or antidepressants) in an effort to enhance response, the experts gave little support to any of these strategies, with the exception of antidepressants for patients with dysphoria/depression, antidepressants or ECT for patients with suicidal behavior, and mood stabilizers for patients with aggression/violence. When asked about indicators of remission and recovery, the experts considered acute improvement in psychotic symptoms the most important indicator of remission, whereas they considered more sustained improvement in multiple outcome domains (e.g., occupational/educational functioning, peer relationships, independent living) important in assessing recovery. The experts reached a high level of consensus on many of the key treatment questions in the survey. Within the limits of expert opinion and with the expectation that future research data will take precedence, these guidelines provide direction for addressing common clinical dilemmas that arise in the pharmacologic treatment of psychotic disorders. They can be used to inform clinicians and educate patients regarding the relative merits of a variety of interventions. Clinicians should keep in mind that no guidelines can address the complexities involved in the care of each individual patient and that sound clinical judgment based on clinical experience should be used in applying these recommendations.
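
    The survey defines consensus on an option as a non-random distribution of the 9-point ratings, tested with a chi-square goodness-of-fit test. A minimal sketch of that check, assuming a uniform distribution as the "random" reference (the significance threshold and the rating data are illustrative assumptions; the first/second/third-line categorization based on confidence intervals is not reproduced here):

        from scipy.stats import chisquare

        def has_consensus(ratings, significance=0.05):
            # Consensus = the distribution of 1-9 ratings departs significantly
            # from a uniform ("random") distribution by chi-square goodness of fit.
            counts = [ratings.count(k) for k in range(1, 10)]
            _, p_value = chisquare(counts)   # expected frequencies default to uniform
            return p_value < significance

        expert_ratings = [9, 8, 9, 9, 7, 8, 9, 8, 9, 7, 8, 9]   # hypothetical panel responses
        print(has_consensus(expert_ratings))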

  11. Management Expert Systems (M.E.S.): A Framework for Development and Implementation.

    ERIC Educational Resources Information Center

    Moser, Jorge; Christoph, Richard

    1987-01-01

    This description of the development of expert systems designed to solve management problems focuses on the issue of corporate divestment as an example. Software needs are discussed, and an example of a management expert system for divestment analysis at James Madison University is briefly described. (Author/LRW)

  12. Design and implementation of a status at a glance user interface for a power distribution expert system

    NASA Technical Reports Server (NTRS)

    Liberman, Eugene M.; Manner, David B.; Dolce, James L.; Mellor, Pamela A.

    1993-01-01

    A user interface to the power distribution expert system for Space Station Freedom is discussed. The importance of features which simplify assessing system status and which minimize navigating through layers of information are examined. Design rationale and implementation choices are also presented. The amalgamation of such design features as message linking arrows, reduced information content screens, high salience anomaly icons, and color choices with failure detection and diagnostic explanation from an expert system is shown to provide an effective status-at-a-glance monitoring system for power distribution. This user interface design offers diagnostic reasoning without compromising the monitoring of current events. The display can convey complex concepts in terms that are clear to its users.

  13. Final Technical Report on Quantifying Dependability Attributes of Software Based Safety Critical Instrumentation and Control Systems in Nuclear Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smidts, Carol; Huang, Funqun; Li, Boyuan

    With the current transition from analog to digital instrumentation and control systems in nuclear power plants, the number and variety of software-based systems have significantly increased. The sophisticated nature and increasing complexity of software make trust in these systems a significant challenge. The trust placed in a software system is typically termed software dependability. Software dependability analysis faces uncommon challenges since software systems' characteristics differ from those of hardware systems. The lack of systematic science-based methods for quantifying the dependability attributes in software-based instrumentation as well as control systems in safety critical applications has proved itself to be a significant inhibitor to the expanded use of modern digital technology in the nuclear industry. Dependability refers to the ability of a system to deliver a service that can be trusted. Dependability is commonly considered a general concept that encompasses different attributes, e.g., reliability, safety, security, availability and maintainability. Dependability research has progressed significantly over the last few decades. For example, various assessment models and/or design approaches have been proposed for software reliability, software availability and software maintainability. Advances have also been made to integrate multiple dependability attributes, e.g., integrating security with other dependability attributes, measuring availability and maintainability, modeling reliability and availability, quantifying reliability and security, exploring the dependencies between security and safety and developing integrated analysis models. However, there is still a lack of understanding of the dependencies between various dependability attributes as a whole and of how such dependencies are formed. To address the need for quantification and give a more objective basis to the review process -- therefore reducing regulatory uncertainty -- measures and methods are needed to assess dependability attributes early on, as well as throughout the life-cycle process of software development. In this research, extensive expert opinion elicitation is used to identify the measures and methods for assessing software dependability. Semi-structured questionnaires were designed to elicit expert knowledge. A new notation system, Causal Mechanism Graphing, was developed to extract and represent such knowledge. The Causal Mechanism Graphs were merged, thus obtaining the consensus knowledge shared by the domain experts. In this report, we focus on how software contributes to dependability. However, software dependability is not discussed separately from the context of systems or socio-technical systems. Specifically, this report focuses on software dependability, reliability, safety, security, availability, and maintainability. Our research was conducted in the sequence of stages found below. Each stage is further examined in its corresponding chapter. Stage 1 (Chapter 2): Elicitation of causal maps describing the dependencies between dependability attributes. These causal maps were constructed using expert opinion elicitation. This chapter describes the expert opinion elicitation process, the questionnaire design, the causal map construction method and the causal maps obtained. Stage 2 (Chapter 3): Elicitation of the causal map describing the occurrence of the event of interest for each dependability attribute.
The causal mechanisms for the “event of interest” were extracted for each of the software dependability attributes. The “event of interest” for a dependability attribute is generally considered to be the “attribute failure”, e.g. security failure. The extraction was based on the analysis of expert elicitation results obtained in Stage 1. Stage 3 (Chapter 4): Identification of relevant measurements. Measures for the “events of interest” and their causal mechanisms were obtained from expert opinion elicitation for each of the software dependability attributes. The measures extracted are presented in this chapter. Stage 4 (Chapter 5): Assessment of the coverage of the causal maps via measures. Coverage was assessed to determine whether the measures obtained were sufficient to quantify software dependability, and what measures are further required. Stage 5 (Chapter 6): Identification of “missing” measures and measurement approaches for concepts not covered. New measures, for concepts that had not been covered sufficiently as determined in Stage 4, were identified using supplementary expert opinion elicitation as well as literature reviews. Stage 6 (Chapter 7): Building of a detailed quantification model based on the causal maps and measurements obtained. Ability to derive such a quantification model shows that the causal models and measurements derived from the previous stages (Stage 1 to Stage 5) can form the technical basis for developing dependability quantification models. Scope restrictions have led us to prioritize this demonstration effort. The demonstration was focused on a critical system, i.e. the reactor protection system. For this system, a ranking of the software dependability attributes by nuclear stakeholders was developed. As expected for this application, the stakeholder ranking identified safety as the most critical attribute to be quantified. A safety quantification model limited to the requirements phase of development was built. Two case studies were conducted for verification. A preliminary control gate for software safety for the requirements stage was proposed and applied to the first case study. The control gate allows a cost effective selection of the duration of the requirements phase.

  14. Microcomputer data acquisition and control.

    PubMed

    East, T D

    1986-01-01

    In medicine and biology there are many tasks that involve routine, well-defined procedures. These tasks are ideal candidates for computerized data acquisition and control. As the performance of microcomputers rapidly increases and cost continues to go down, the temptation to automate the laboratory becomes great. To the novice computer user, the choices of hardware and software are overwhelming, and sadly most computer sales persons are not at all familiar with real-time applications. If you want to bill your patients you have hundreds of packaged systems to choose from; however, if you want to do real-time data acquisition, the choices are very limited and confusing. The purpose of this chapter is to provide the novice computer user with the basics needed to set up a real-time data acquisition system with the common microcomputers. This chapter will cover the following issues necessary to establish a real-time data acquisition and control system: Analysis of the research problem: Definition of the problem; Description of data and sampling requirements; Cost/benefit analysis. Choice of microcomputer hardware and software: Choice of microprocessor and bus structure; Choice of operating system; Choice of layered software. Digital Data Acquisition: Parallel data transmission; Serial data transmission; Hardware and software available. Analog Data Acquisition: Description of amplitude and frequency characteristics of the input signals; Sampling theorem; Specification of the analog to digital converter; Hardware and software available; Interface to the microcomputer. Microcomputer Control: Analog output; Digital output; Closed-loop control. Microcomputer data acquisition and control in the 21st Century--What is in the future? High-speed digital medical equipment networks; Medical decision making and artificial intelligence.
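
    Two of the analog-acquisition choices listed above, sampling rate and converter resolution, reduce to simple arithmetic. A small sketch under assumed values (the 2.5x margin and the 10 V / 12-bit figures are illustrative, not taken from the chapter):

        def min_sampling_rate_hz(max_signal_freq_hz, margin=2.5):
            # Nyquist requires sampling faster than twice the highest frequency of
            # interest; a practical margin leaves room for anti-aliasing filter roll-off.
            return margin * max_signal_freq_hz

        def adc_step_volts(full_scale_volts, bits):
            # Quantization step (1 LSB) of an ideal analog-to-digital converter.
            return full_scale_volts / (2 ** bits)

        print(min_sampling_rate_hz(100.0))   # e.g. a 100 Hz physiological signal
        print(adc_step_volts(10.0, 12))      # 10 V range, 12-bit converter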

  15. Software Size Estimation Using Expert Estimation: A Fuzzy Logic Approach

    ERIC Educational Resources Information Center

    Stevenson, Glenn A.

    2012-01-01

    For decades software managers have been using formal methodologies such as the Constructive Cost Model and Function Points to estimate the effort of software projects during the early stages of project development. While some research shows these methodologies to be effective, many software managers feel that they are overly complicated to use and…

  16. Making objective decisions in mechanical engineering problems

    NASA Astrophysics Data System (ADS)

    Raicu, A.; Oanta, E.; Sabau, A.

    2017-08-01

    The decision-making process has a great influence on the development of a given project, the goal being to select an optimal choice in a given context. Because of its great importance, decision making has been studied using various scientific methods, eventually giving rise to game theory, which is considered the foundation of the science of logical decision making in various fields. The paper presents some basic ideas regarding game theory in order to offer the information necessary to understand multiple-criteria decision making (MCDM) problems in engineering. The solution is to transform the multiple-criteria problem into a single-criterion decision problem, using the notion of utility together with the weighted sum model or the weighted product model. The weighted importance of the criteria is computed using the so-called Step method applied to a relation of preferences between the criteria. Two relevant examples from engineering are also presented. Future directions of research include the use of other types of criteria, the development of computer-based instruments for general decision-making problems, and the design of a software module based on expert system principles to be included in the already operational Wiki software applications for polymeric materials.
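
    The route from a multiple-criteria problem to a single-criterion one described above is the weighted sum model over per-criterion utilities. A minimal sketch of that aggregation (the criteria, utilities, and weights are hypothetical, and the Step method that would derive the weights is not reproduced here):

        def weighted_sum_score(utilities, weights):
            # Aggregate per-criterion utilities (already normalized to a common 0-1
            # scale) into a single score for one alternative.
            return sum(u * w for u, w in zip(utilities, weights))

        weights = [0.5, 0.3, 0.2]                      # e.g. cost, mass, reliability
        alternatives = {
            "design A": [0.8, 0.5, 0.9],
            "design B": [0.6, 0.9, 0.7],
        }
        best = max(alternatives, key=lambda name: weighted_sum_score(alternatives[name], weights))
        print(best)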

  17. Expert systems for C3I. Volume 1. A user's introduction

    NASA Astrophysics Data System (ADS)

    Clapp, J. A.; Hockett, S. M.; Prelle, M. J.; Tallant, A. M.; Triant, D. D.

    1985-10-01

    There has been a tremendous burgeoning of interest in artificial intelligence (AI) over the last few years. Investments of commercial and government sponsors reflect a widespread belief that AI is now ready for practical applications. The area of AI currently receiving the greatest attention and investment is expert system technology. Most major high tech corporations have begun to develop expert systems, and many software houses specializing in expert system tools and applications have recently appeared. The defense community is one of the heaviest investors in expert system technology, and within this community one of the application areas receiving greatest attention is C3I. Many ESD programs are now beginning to ask whether expert system applications for C3I are ready for incorporation into ESD-developed systems, and, if so, what are the potential benefits and risks of doing so. This report was prepared to help ESD and MITRE personnel working on acquisition programs to address these issues and to gain a better understanding of what expert systems are all about. The primary intention of this report is to investigate what expert systems are and the advances that are being made in expert system technology for C3I applications. The report begins with a brief tutorial on expert systems, emphasizing how they differ from conventional software systems and what they are best at doing.

  18. Better Nutrition Every Day: How to Make Healthier Food Choices

    MedlinePlus

    September 2015 issue. "... cook at home,” says Dr. Adam Drewnowski, a nutrition expert at the University of Washington in Seattle. ...

  19. The relative importance of information sources in consumers' choice of hospitals.

    PubMed

    Gooding, S K

    1995-01-01

    The research presented focuses on an examination of the relative importance of word-of-mouth, expert opinion, external communication, and past experience in the context of hospital choice. Past research has examined the effect of each individually and various combinations of the four sources, but not all four simultaneously. Results of the present study suggest that past experience plays a greater role in hospital choice than other information sources, including expert opinion. The strength of word-of-mouth as a source of information is also verified. The implications of this research include the following: (1) health care researchers need to incorporate word-of-mouth when investigating information sources, and (2) local hospitals need to be aware of "negative perceptions" and strive for consumer satisfaction. Health care delivery systems incorporating consumer-based choice render these findings especially valuable as researchers and practitioners address the challenges that these evolving systems will bring.

  20. Special Report: Part One. New Tools for Professionals.

    ERIC Educational Resources Information Center

    Liskin, Miriam; And Others

    1984-01-01

    This collection of articles includes an examination of word-processing software; project management software; new expert systems that turn microcomputers into logical, well-informed consultants; simulated negotiation software; telephone management systems; and the physical design of an efficient microcomputer work space. (MBR)

  1. Living Design Memory: Framework, Implementation, Lessons Learned.

    ERIC Educational Resources Information Center

    Terveen, Loren G.; And Others

    1995-01-01

    Discusses large-scale software development and describes the development of the Designer Assistant to improve software development effectiveness. Highlights include the knowledge management problem; related work, including artificial intelligence and expert systems, software process modeling research, and other approaches to organizational memory;…

  2. Desiderata for product labeling of medical expert systems.

    PubMed

    Geissbühler, A; Miller, R A

    1997-12-01

    The proliferation and increasing complexity of medical expert systems raise ethical and legal concerns about the ability of practitioners to protect their patients from defective or misused software. Appropriate product labeling of expert systems can help clinical users to understand software indications and limitations. Mechanisms of action and knowledge representation schema should be explained in layperson's terminology. User qualifications and resources available for acquiring the skills necessary to understand and critique the system output should be listed. The processes used for building and maintaining the system's knowledge base are key determinants of the product's quality, and should be carefully documented. To meet these desiderata, a printed label is insufficient. The authors suggest a new, more active, model of product labeling for medical expert systems that involves embedding 'knowledge of the knowledge base', creating user-specific data, and sharing global information using the Internet.

  3. Open-Source web-based geographical information system for health exposure assessment

    PubMed Central

    2012-01-01

    This paper presents the design and development of an open source web-based Geographical Information System allowing users to visualise, customise and interact with spatial data within their web browser. The developed application shows that by using solely Open Source software it was possible to develop a customisable web based GIS application that provides functions necessary to convey health and environmental data to experts and non-experts alike without the requirement of proprietary software. PMID:22233606

  4. Confirmatory Factor Analysis Alternative: Free, Accessible CBID Software.

    PubMed

    Bott, Marjorie; Karanevich, Alex G; Garrard, Lili; Price, Larry R; Mudaranthakam, Dinesh Pal; Gajewski, Byron

    2018-02-01

    New software that performs Classical and Bayesian Instrument Development (CBID) is reported that seamlessly integrates expert (content validity) and participant data (construct validity) to produce entire reliability estimates with smaller sample requirements. The free CBID software can be accessed through a website and used by clinical investigators in new instrument development. Demonstrations are presented of the three approaches using the CBID software: (a) traditional confirmatory factor analysis (CFA), (b) Bayesian CFA using flat uninformative prior, and (c) Bayesian CFA using content expert data (informative prior). Outcomes of usability testing demonstrate the need to make the user-friendly, free CBID software available to interdisciplinary researchers. CBID has the potential to be a new and expeditious method for instrument development, adding to our current measurement toolbox. This allows for the development of new instruments for measuring determinants of health in smaller diverse populations or populations of rare diseases.

  5. Computer Series, 82. The Application of Expert Systems in the General Chemistry Laboratory.

    ERIC Educational Resources Information Center

    Settle, Frank A., Jr.

    1987-01-01

    Describes the construction of expert computer systems using artificial intelligence technology and commercially available software, known as an expert system shell. Provides two applications; a simple one, the identification of seven white substances, and a more complicated one involving the qualitative analysis of six metal ions. (TW)

  6. User Documentation; POTW EXPERT v1.1; An Advisory System for Improving the Performance of Wastewater Treatment Facilities

    EPA Science Inventory

    POTW Expert is a PC-based software program modeled after EPA's Handbook Retrofitting POTWs (EPA-625/6-89/020) (formerly, Handbook for Improving POTW Performance Using the Composite Correction Program Approach). POTW Expert assists POTW owners and operators, state and local regu...

  7. Bonneville Power Administration Communication Alarm Processor expert system:

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goeltz, R.; Purucker, S.; Tonn, B.

    This report describes the Communications Alarm Processor (CAP), a prototype expert system developed for the Bonneville Power Administration by Oak Ridge National Laboratory. The system is designed to receive and diagnose alarms from Bonneville's Microwave Communications System (MCS). The prototype encompasses one of seven branches of the communications network and a subset of alarm systems and alarm types from each system. The expert system employs a backward chaining approach to diagnosing alarms. Alarms are fed into the expert system directly from the communication system via RS232 ports and sophisticated alarm filtering and mailbox software. Alarm diagnoses are presented to operators for their review and concurrence before the diagnoses are archived. Statistical software is incorporated to allow analysis of archived data for report generation and maintenance studies. The delivered system resides on a Digital Equipment Corporation VAX 3200 workstation and utilizes Nexpert Object and SAS for the expert system and statistical analysis, respectively. 11 refs., 23 figs., 7 tabs.
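
    The CAP prototype diagnoses alarms by backward chaining over a rule base. A toy sketch of that control strategy follows; the alarm names and rules are hypothetical illustrations, not Bonneville's actual knowledge base or the Nexpert Object implementation.

        def backward_chain(goal, rules, facts):
            # A goal is proven if it is an observed fact, or if some rule concludes
            # it and every premise of that rule can itself be proven.
            if goal in facts:
                return True
            return any(conclusion == goal and
                       all(backward_chain(p, rules, facts) for p in premises)
                       for premises, conclusion in rules)

        # Hypothetical diagnosis rules: (set of premises, conclusion).
        rules = [
            ({"loss_of_signal", "power_alarm"}, "site_power_failure"),
            ({"loss_of_signal", "no_power_alarm"}, "microwave_path_fault"),
        ]
        facts = {"loss_of_signal", "power_alarm"}       # alarms received from the feed
        print(backward_chain("site_power_failure", rules, facts))   # True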

  8. Artificial Intelligence In Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Vogel, Alison Andrews

    1991-01-01

    Paper compares four first-generation artificial-intelligence (AI) software systems for computational fluid dynamics. Includes: Expert Cooling Fan Design System (EXFAN), PAN AIR Knowledge System (PAKS), grid-adaptation program MITOSIS, and Expert Zonal Grid Generation (EZGrid). Focuses on knowledge-based ("expert") software systems. Analyzes intended tasks, kinds of knowledge possessed, magnitude of effort required to codify knowledge, how quickly constructed, performances, and return on investment. On basis of comparison, concludes AI most successful when applied to well-formulated problems solved by classifying or selecting preenumerated solutions. In contrast, application of AI to poorly understood or poorly formulated problems generally results in long development time and large investment of effort, with no guarantee of success.

  9. TU-A-17A-02: In Memoriam of Ben Galkin: Virtual Tools for Validation of X-Ray Breast Imaging Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, K; Bakic, P; Abbey, C

    2014-06-15

    This symposium will explore simulation methods for the preclinical evaluation of novel 3D and 4D x-ray breast imaging systems – the subject of AAPM task group TG234. Given the complex design of modern imaging systems, simulations offer significant advantages over long and costly clinical studies in terms of reproducibility, reduced radiation exposures, a known reference standard, and the capability for studying patient and disease subpopulations through appropriate choice of simulation parameters. Our focus will be on testing the realism of software anthropomorphic phantoms and virtual clinical trials tools developed for the optimization and validation of breast imaging systems. The symposium will review the state-of-the-science, as well as the advantages and limitations of various approaches to testing realism of phantoms and simulated breast images. Approaches based upon the visual assessment of synthetic breast images by expert observers will be contrasted with approaches based upon comparing statistical properties between synthetic and clinical images. The role of observer models in the assessment of realism will be considered. Finally, an industry perspective will be presented, summarizing the role and importance of virtual tools and simulation methods in product development. The challenges and conditions that must be satisfied in order for computational modeling and simulation to play a significantly increased role in the design and evaluation of novel breast imaging systems will be addressed. Learning Objectives: Review the state-of-the-science in testing realism of software anthropomorphic phantoms and virtual clinical trials tools; Compare approaches based upon the visual assessment by expert observers vs. the analysis of statistical properties of synthetic images; Discuss the role of observer models in the assessment of realism; Summarize the industry perspective on virtual methods for breast imaging.

  10. Generic domain models in software engineering

    NASA Technical Reports Server (NTRS)

    Maiden, Neil

    1992-01-01

    This paper outlines three research directions related to domain-specific software development: (1) reuse of generic models for domain-specific software development; (2) empirical evidence to determine these generic models, namely elicitation of mental knowledge schema possessed by expert software developers; and (3) exploitation of generic domain models to assist modelling of specific applications. It focuses on knowledge acquisition for domain-specific software development, with emphasis on tool support for the most important phases of software development.

  11. Translating expert system rules into Ada code with validation and verification

    NASA Technical Reports Server (NTRS)

    Becker, Lee; Duckworth, R. James; Green, Peter; Michalson, Bill; Gosselin, Dave; Nainani, Krishan; Pease, Adam

    1991-01-01

    The purpose of this ongoing research and development program is to develop software tools which enable the rapid development, upgrading, and maintenance of embedded real-time artificial intelligence systems. The goals of this phase of the research were to investigate the feasibility of developing software tools which automatically translate expert system rules into Ada code and to develop methods for performing validation and verification testing of the resultant expert system. A prototype system which automatically translated rules from an Air Force expert system was demonstrated, along with prototype tools that detected errors in the execution of the resultant system. The method and prototype tools for converting AI representations into Ada code by converting the rules into Ada code modules and then linking them with an Activation Framework based run-time environment to form an executable load module are discussed. This method is based upon the use of Evidence Flow Graphs, which are a data flow representation for intelligent systems. The development of prototype test generation and evaluation software which was used to test the resultant code is discussed. This testing was performed automatically using Monte-Carlo techniques based upon a constraint-based description of the required performance for the system.

  12. Computer Sciences and Data Systems, volume 1

    NASA Technical Reports Server (NTRS)

    1987-01-01

    Topics addressed include: software engineering; university grants; institutes; concurrent processing; sparse distributed memory; distributed operating systems; intelligent data management processes; expert system for image analysis; fault tolerant software; and architecture research.

  13. Computer Software for Intelligent Systems.

    ERIC Educational Resources Information Center

    Lenat, Douglas B.

    1984-01-01

    Discusses the development and nature of computer software for intelligent systems, indicating that the key to intelligent problem-solving lies in reducing the random search for solutions. Formal reasoning methods, expert systems, and sources of power in problem-solving are among the areas considered. Specific examples of such software are…

  14. A four-alternative forced choice (4AFC) software for observer performance evaluation in radiology

    NASA Astrophysics Data System (ADS)

    Zhang, Guozhi; Cockmartin, Lesley; Bosmans, Hilde

    2016-03-01

    The four-alternative forced choice (4AFC) test is a psychophysical method that can be adopted for observer performance evaluation in radiological studies. While the concept of this method is well established, difficulties in handling large image data, performing unbiased sampling, and keeping track of the choices made by the observer have restricted its application in practice. In this work, we propose an easy-to-use software that can help perform 4AFC tests with DICOM images. The software suits any experimental design that follows the 4AFC approach. It has a powerful image viewing system that closely simulates the clinical reading environment. The graphical interface allows the observer to adjust various viewing parameters and perform the selection with very simple operations. The sampling process involved in 4AFC, as well as the speed and accuracy of the choices made by the observer, is precisely monitored in the background and can be easily exported for test analysis. The software also has a defensive mechanism for data management and operation control that minimizes the possibility of user mistakes during the test. This software can largely facilitate the use of the 4AFC approach in radiological observer studies and is expected to have widespread applicability.
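
    The core bookkeeping of a 4AFC trial, randomly placing the signal-present image among three signal-absent images and recording the observer's choice, latency, and correctness, can be sketched as follows. The callback-based interface and field names are assumptions for illustration, not the software's actual API.

        import random
        import time

        def run_4afc_trial(signal_image, noise_images, get_observer_choice):
            # One trial: insert the signal image at a random position among three
            # noise-only alternatives and record the observer's response.
            assert len(noise_images) == 3
            position = random.randrange(4)
            panel = list(noise_images)
            panel.insert(position, signal_image)
            start = time.monotonic()
            choice = get_observer_choice(panel)          # returns an index 0..3
            return {
                "signal_position": position,
                "choice": choice,
                "correct": choice == position,
                "latency_s": time.monotonic() - start,
            }

        # Proportion correct over many such trials estimates detectability; chance level is 0.25.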

  15. The impact of two multiple-choice question formats on the problem-solving strategies used by novices and experts.

    PubMed

    Coderre, Sylvain P; Harasym, Peter; Mandin, Henry; Fick, Gordon

    2004-11-05

    Pencil-and-paper examination formats, and specifically the standard five-option multiple-choice question, have often been questioned as a means of assessing higher-order clinical reasoning or problem solving. This study first investigated whether two paper formats with differing numbers of alternatives (standard five-option and extended-matching questions) can test problem-solving abilities. Second, the impact of the number of alternatives on psychometrics and problem-solving strategies was examined. Think-aloud protocols were collected to determine the problem-solving strategies used by experts and non-experts in answering gastroenterology questions across the two pencil-and-paper formats. The two formats demonstrated equal ability to test problem-solving abilities, while the number of alternatives did not significantly impact psychometrics or the problem-solving strategies utilized. These results support the notion that well-constructed multiple-choice questions can in fact test higher-order clinical reasoning. Furthermore, it can be concluded that in testing clinical reasoning, the question stem, or content, remains more important than the number of alternatives.

  16. Expert Systems for Libraries at SCIL [Small Computers in Libraries]'88.

    ERIC Educational Resources Information Center

    Kochtanek, Thomas R.; And Others

    1988-01-01

    Six brief papers on expert systems for libraries cover (1) a knowledge-based approach to database design; (2) getting started in expert systems; (3) using public domain software to develop a business reference system; (4) a music cataloging inquiry system; (5) linguistic analysis of reference transactions; and (6) a model of a reference librarian.…

  17. Perspective on intelligent avionics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, H.L.

    1987-01-01

    Technical issues which could potentially limit the capability and acceptability of expert system decision-making for avionics applications are addressed. These issues are: real-time AI, mission-critical software, conventional algorithms, pilot interface, knowledge acquisition, and distributed expert systems. Examples from on-going expert system development programs are presented to illustrate likely architectures and applications of future intelligent avionic systems. 13 references.

  18. The green choices project: integrating environmental health education into reproductive health care settings.

    PubMed

    Worthington, Sandra; Armstrong, Kay; Debevec, Elie

    2010-01-01

    A national reproductive health organization developed the Green Choices project to educate staff and clients about how to live in healthier environments by reducing potentially harmful environmental exposures to toxicants. An advisory group, comprised of experts in environmental and reproductive health and literacy, defined the project's scope and common environmental exposures to address. The following educational materials were developed: an online staff environmental health 101 curriculum, an environmental health assessment tool for clients to identify their potential risks, and information sheets for each environmental exposure that described potential risks and ways to reduce risks. Beta-testing methods included baseline and follow-up surveys, one-on-one interviews, focus groups, and recommendations from experts. Staff and client feedback on the educational materials resulted in increased clarity, sensitivity, relevancy, and appeal. Environmental health experts ensured accuracy of information, and reading experts lowered the reading level from 12th to 6th grade. A campaign to disseminate environmental health information and educational materials nationally is under way.

  19. Enhancements to highway construction scheduling expert system.

    DOT National Transportation Integrated Search

    2015-05-01

    This research was performed to enhance the software tool (Illinois Construction Scheduling Expert : System, ICSES) developed in Phase I of this project (ICT project R27-86) by mining data collected on : IDOT construction projects and differentiating ...

  20. The Delphi Method Online: Medical Expert Consensus Via the Internet

    PubMed Central

    Cam, Kenneth M.; McKnight, Patrick E.; Doctor, Jason N.

    2002-01-01

    Delphi is an expert consensus method. The theory behind the Delphi method is that the interaction of experts may lead to a reduction in individual bias. We have developed software that carries out all aspects of the Delphi method via the Internet. The Delphi method online consists of three components: 1) authorship, 2) interactive polling, and 3) reporting/results. We hope that researchers use this tool in future medical expert systems.

  1. A Data-Driven Solution for Performance Improvement

    NASA Technical Reports Server (NTRS)

    2002-01-01

    Marketed as the "Software of the Future," Optimal Engineering Systems P.I. EXPERT(TM) technology offers statistical process control and optimization techniques that are critical to businesses looking to restructure or accelerate operations in order to gain a competitive edge. Kennedy Space Center granted Optimal Engineering Systems the funding and aid necessary to develop a prototype of the process monitoring and improvement software. Completion of this prototype demonstrated that it was possible to integrate traditional statistical quality assurance tools with robust optimization techniques in a user- friendly format that is visually compelling. Using an expert system knowledge base, the software allows the user to determine objectives, capture constraints and out-of-control processes, predict results, and compute optimal process settings.

  2. Use of Software Tools in Teaching Relational Database Design.

    ERIC Educational Resources Information Center

    McIntyre, D. R.; And Others

    1995-01-01

    Discusses the use of state-of-the-art software tools in teaching a graduate, advanced, relational database design course. Results indicated a positive student response to the prototype of expert systems software and a willingness to utilize this new technology both in their studies and in future work applications. (JKP)

  3. Learning Content and Software Evaluation and Personalisation Problems

    ERIC Educational Resources Information Center

    Kurilovas, Eugenijus; Serikoviene, Silvija

    2010-01-01

The paper aims to analyse several scientific approaches to evaluating, implementing, or choosing learning content and software suited to personalised users'/learners' needs. The learning objects metadata customisation method, as well as the method of multiple criteria evaluation and optimisation of learning software represented by the experts' additive…

  4. High/Scope Buyer's Guide to Children's Software. 11th Edition.

    ERIC Educational Resources Information Center

    Hohmann, Charles; And Others

This 11th edition of the High/Scope Buyer's Guide to Children's Software was designed to help teachers, caregivers, and parents make good choices when purchasing software to enhance children's learning. The book consists of an introduction, a chapter on finding the best software, and software reviews of 48 different software products. The…

  5. A Thermal Expert System (TEXSYS) development overview - AI-based control of a Space Station prototype thermal bus

    NASA Technical Reports Server (NTRS)

    Glass, B. J.; Hack, E. C.

    1990-01-01

    A knowledge-based control system for real-time control and fault detection, isolation and recovery (FDIR) of a prototype two-phase Space Station Freedom external thermal control system (TCS) is discussed in this paper. The Thermal Expert System (TEXSYS) has been demonstrated in recent tests to be capable of both fault anticipation and detection and real-time control of the thermal bus. Performance requirements were achieved by using a symbolic control approach, layering model-based expert system software on a conventional numerical data acquisition and control system. The model-based capabilities of TEXSYS were shown to be advantageous during software development and testing. One representative example is given from on-line TCS tests of TEXSYS. The integration and testing of TEXSYS with a live TCS testbed provides some insight on the use of formal software design, development and documentation methodologies to qualify knowledge-based systems for on-line or flight applications.

  6. Hyperspectral Soil Mapper (HYSOMA) software interface: Review and future plans

    NASA Astrophysics Data System (ADS)

    Chabrillat, Sabine; Guillaso, Stephane; Eisele, Andreas; Rogass, Christian

    2014-05-01

With the upcoming launch of the next generation of hyperspectral satellites that will routinely deliver high spectral resolution images for the entire globe (e.g. EnMAP, HISUI, HyspIRI, HypXIM, PRISMA), an increasing demand for the availability/accessibility of hyperspectral soil products is coming from the geoscience community. Indeed, many robust methods for the prediction of soil properties based on imaging spectroscopy already exist and have been successfully used for a wide range of airborne soil mapping applications. Nevertheless, these methods require expert know-how and fine-tuning, which means they are used only sparingly. More developments are needed toward easy-to-access soil toolboxes as a major step toward the operational use of hyperspectral soil products for monitoring and modelling of Earth's surface processes, allowing non-experienced users to obtain new information from inexpensive software packages where repeatability of the results is an important prerequisite. In this frame, based on the EU-FP7 EUFAR (European Facility for Airborne Research) project and the EnMAP satellite science program, high-performing soil algorithms were developed at the GFZ German Research Center for Geosciences as demonstrators for end-to-end processing chains with harmonized quality measures. The algorithms were built into the HYSOMA (Hyperspectral SOil MApper) software interface, providing an experimental platform for soil mapping applications of hyperspectral imagery that gives the choice of multiple algorithms for each soil parameter. The software interface focuses on fully automatic generation of semi-quantitative soil maps such as soil moisture, soil organic matter, iron oxide, clay content, and carbonate content. Additionally, a field calibration option calculates fully quantitative soil maps provided ground truth soil data are available. Implemented soil algorithms have been tested and validated using extensive in-situ ground truth data sets. The HYSOMA code was developed as standalone IDL software to allow easy implementation in the hyperspectral and non-hyperspectral communities. Indeed, within the hyperspectral community the IDL language is very widely used, and for non-expert users who do not have an ENVI license, the software can be executed as a binary version using the free IDL virtual machine under various operating systems. Based on the growing interest of users in the software interface, the experimental software was adapted for a public release version in 2012, and since then ~80 users of hyperspectral soil products have downloaded the soil algorithms at www.gfz-potsdam.de/hysoma. The software interface is distributed for free as IDL plug-ins under the IDL virtual machine. Up to now, distribution of HYSOMA has been based on a closed-source license model, for non-commercial and educational purposes. Currently, HYSOMA is under further development in the context of the EnMAP satellite mission, for extension and implementation in the EnMAP Box as EnSoMAP (EnMAP SOil MAPper). The EnMAP Box is a freely available, platform-independent software package distributed under an open-source license. In the presentation we will focus on an update of the HYSOMA software interface status and the upcoming implementation in the EnMAP Box. Scientific software validation, the associated publication record and user responses, as well as software management and the transition to open source, will be discussed.

  7. An expert systems approach to automated fault management in a regenerative life support subsystem

    NASA Technical Reports Server (NTRS)

    Malin, J. T.; Lance, N., Jr.

    1986-01-01

    This paper describes FIXER, a prototype expert system for automated fault management in a regenerative life support subsystem typical of Space Station applications. The development project provided an evaluation of the use of expert systems technology to enhance controller functions in space subsystems. The software development approach permitted evaluation of the effectiveness of direct involvement of the expert in design and development. The approach also permitted intensive observation of the knowledge and methods of the expert. This paper describes the development of the prototype expert system and presents results of the evaluation.

  8. Presenting an evaluation model of the trauma registry software.

    PubMed

    Asadi, Farkhondeh; Paydar, Somayeh

    2018-04-01

Trauma accounts for roughly 10% of deaths worldwide and is considered a global concern. This problem has led healthcare policy makers and managers to adopt a basic strategy in this context. Trauma registries have an important and basic role in decreasing mortality and the disabilities caused by injuries resulting from trauma. Today, various software systems are designed for trauma registries. Evaluation of this software improves management and increases the efficiency and effectiveness of these systems. Therefore, the aim of this study is to present an evaluation model for trauma registry software. The present study is applied research. In this study, general and specific criteria of trauma registry software were identified by reviewing the literature, including books, articles, scientific documents, valid websites, and related software in this domain. According to the general and specific criteria and related software, a model for evaluating trauma registry software was proposed. Based on the proposed model, a checklist was designed and its validity and reliability evaluated. The model was presented, using the Delphi technique, to 12 experts and specialists. To analyze the results, an agreement coefficient of 75% was set as the threshold for applying changes. Finally, once the model was approved by the experts and professionals, the final version of the evaluation model for trauma registry software was presented. For evaluating trauma registry software, two groups of criteria were defined: (1) general criteria and (2) specific criteria. General criteria of trauma registry software were classified into four main categories: (1) usability, (2) security, (3) maintainability, and (4) interoperability. Specific criteria were divided into four main categories: (1) data submission and entry, (2) reporting, (3) quality control, and (4) decision and research support. The model presented in this research introduces important general and specific criteria of trauma registry software, with the subcriteria related to each main criterion given separately. This model was validated by experts in the field and can therefore be used as a comprehensive model and standard evaluation tool for measuring the efficiency, effectiveness, and performance improvement of trauma registry software.

  9. Conceptualizing physical activity parenting practices using expert informed concept mapping analysis.

    PubMed

    Mâsse, Louise C; O'Connor, Teresia M; Tu, Andrew W; Hughes, Sheryl O; Beauchamp, Mark R; Baranowski, Tom

    2017-06-14

Parents are widely recognized as playing a central role in the development of child behaviors such as physical activity. As there is little agreement as to the dimensions of physical activity-related parenting practices that should be measured or how they should be operationalized, this study engaged experts to develop an integrated conceptual framework for assessing parenting practices that influence multiple aspects of 5 to 12 year old children's participation in physical activity. The ultimate goal of this study is to inform the development of an item bank (repository of calibrated items) aimed at measuring physical activity parenting practices. Twenty-four experts from 6 countries (Australia, Canada, England, Scotland, the Netherlands, & United States (US)) sorted 77 physical activity parenting practice concepts identified from our previously published synthesis of the literature (74 measures) and survey of Canadian and US parents. Concept Mapping software was used to conduct the multi-dimensional scaling (MDS) analysis and a cluster analysis of the MDS solution of the experts' sorting, which was qualitatively reviewed and commented on by the experts. The conceptual framework includes 12 constructs which are presented using three main domains of parenting practices (neglect/control, autonomy support, and structure). The neglect/control domain includes two constructs: permissive and pressuring parenting practices. The autonomy supportive domain includes four constructs: encouragement, guided choice, involvement in child physical activities, and praise/rewards for their child's physical activity. Finally, the structure domain includes six constructs: co-participation, expectations, facilitation, modeling, monitoring, and restricting physical activity for safety or academic concerns. The concept mapping analysis provided a useful process to engage experts in re-conceptualizing physical activity parenting practices and identified key constructs to include in measures of physical activity parenting. While the constructs identified ought to be included in measures of physical activity parenting practices, it will be important to collect data among parents to further validate the content of these constructs. In conclusion, the method provided a roadmap for developing an item bank that captures key facets of physical activity parenting and ultimately serves to standardize how we operationalize measures of physical activity parenting.
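
    As an illustration of the computation behind such a concept-mapping study, the sketch below (Python, with invented concepts and sorts rather than the study's 77 concepts and 24 experts) pools expert sorts into a co-sorting dissimilarity matrix and then applies MDS and hierarchical clustering.

        # Minimal sketch of the concept-mapping pipeline described above:
        # pooled expert sorts -> co-sorting similarity -> MDS -> cluster analysis.
        # Concepts, sorts, and the number of clusters are all invented.
        import numpy as np
        from sklearn.manifold import MDS
        from scipy.cluster.hierarchy import linkage, fcluster

        concepts = ["praise", "rewards", "co-participation", "modeling", "monitoring", "pressure"]
        sorts = [  # each expert's sort: concept index -> pile label
            [0, 0, 1, 1, 2, 3],
            [0, 0, 1, 1, 1, 2],
            [0, 1, 2, 2, 3, 4],
        ]

        n = len(concepts)
        co = np.zeros((n, n))
        for sort in sorts:
            for i in range(n):
                for j in range(n):
                    co[i, j] += sort[i] == sort[j]
        dissimilarity = len(sorts) - co      # rarely sorted together = far apart

        coords = MDS(n_components=2, dissimilarity="precomputed",
                     random_state=0).fit_transform(dissimilarity)
        clusters = fcluster(linkage(coords, method="ward"), t=3, criterion="maxclust")
        for name, (x, y), c in zip(concepts, coords, clusters):
            print(f"{name:16s} cluster {c}  ({x:+.2f}, {y:+.2f})")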

  10. Bayesian methods in reliability

    NASA Astrophysics Data System (ADS)

    Sander, P.; Badoux, R.

    1991-11-01

The present proceedings from a course on Bayesian methods in reliability encompass Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
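
    As a small worked example of the kind of Bayesian reliability calculation such a course covers, the sketch below (Python, invented prior and data) performs a conjugate Gamma-Poisson update of a constant failure rate.

        # Minimal sketch of a conjugate Bayesian reliability update: a Gamma(a, b)
        # prior on a constant failure rate lambda, updated with observed failures
        # over a total exposure time. Prior parameters and data are invented.
        a_prior, b_prior = 2.0, 1000.0   # prior belief: ~2 failures per 1000 hours
        failures, hours = 3, 2500.0      # observed data

        a_post = a_prior + failures
        b_post = b_prior + hours
        post_mean = a_post / b_post      # posterior mean of lambda (failures/hour)
        print(f"posterior mean failure rate: {post_mean:.5f} per hour")
        print(f"posterior mean MTBF: {1 / post_mean:.1f} hours")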

  11. Multiple choice questions can be designed or revised to challenge learners' critical thinking.

    PubMed

    Tractenberg, Rochelle E; Gushta, Matthew M; Mulroney, Susan E; Weissinger, Peggy A

    2013-12-01

Multiple choice (MC) questions from a graduate physiology course were evaluated by cognitive-psychology (but not physiology) experts, and analyzed statistically, in order to test the independence of content expertise and cognitive complexity ratings of MC items. Integration of higher order thinking into MC exams is important, but widely known to be challenging, perhaps especially when content experts must think like novices. Expertise in the domain (content) may actually impede the creation of higher-complexity items. Three cognitive psychology experts independently rated cognitive complexity for 252 multiple-choice physiology items using a six-level cognitive complexity matrix that was synthesized from the literature. Rasch modeling estimated item difficulties. The complexity ratings and difficulty estimates were then analyzed together to determine the relative contributions (and independence) of complexity and difficulty to the likelihood of correct answers on each item. Cognitive complexity was found to be statistically independent of difficulty estimates for 88% of items. Using the complexity matrix, modifications were identified to increase some item complexities by one level, without affecting the items' difficulty. Cognitive complexity can effectively be rated by non-content experts. The six-level complexity matrix, if applied by faculty peer groups trained in cognitive complexity and without domain-specific expertise, could lead to improvements in the complexity targeted with item writing and revision. Targeting higher order thinking with MC questions can be achieved without changing item difficulties or other test characteristics, but this may be less likely if the content expert is left to assess items within their domain of expertise.
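
    To make the difficulty-versus-complexity analysis concrete, the sketch below (Python, simulated data, not the study's) approximates item difficulty with the logit of the proportion answering incorrectly, a crude stand-in for a full Rasch calibration, and correlates it with expert complexity ratings.

        # Minimal sketch of the difficulty-vs-complexity check, with invented data:
        # item difficulty is approximated by logit(proportion incorrect), then
        # correlated with expert complexity ratings that the model ignores.
        import math
        import random

        random.seed(0)
        n_students, n_items = 200, 10
        complexity = [1, 1, 2, 2, 3, 3, 4, 4, 5, 6]          # expert ratings (1-6)
        true_difficulty = [random.gauss(0, 1) for _ in range(n_items)]

        # simulate responses from a logistic model that is independent of complexity
        responses = [[1 if random.random() < 1 / (1 + math.exp(true_difficulty[j])) else 0
                      for j in range(n_items)] for _ in range(n_students)]

        eps = 1 / n_students
        p_correct = [min(max(sum(r[j] for r in responses) / n_students, eps), 1 - eps)
                     for j in range(n_items)]
        difficulty = [math.log((1 - p) / p) for p in p_correct]  # logit difficulty

        def pearson(x, y):
            mx, my = sum(x) / len(x), sum(y) / len(y)
            cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
            sx = math.sqrt(sum((a - mx) ** 2 for a in x))
            sy = math.sqrt(sum((b - my) ** 2 for b in y))
            return cov / (sx * sy)

        print("r(difficulty, complexity) =", round(pearson(difficulty, complexity), 3))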

  12. An hierarchical approach to performance evaluation of expert systems

    NASA Technical Reports Server (NTRS)

    Dominick, Wayne D. (Editor); Kavi, Srinu

    1985-01-01

The number and size of expert systems is growing rapidly. Formal evaluation of these systems - which is not performed for many systems - increases their acceptability to the user community and hence their success. Hierarchical evaluation, as previously conducted for computer systems, is applied to expert system performance evaluation. Expert systems are also evaluated by treating them as software systems (or programs). This paper reports many of the basic concepts and ideas in the Performance Evaluation of Expert Systems Study being conducted at the University of Southwestern Louisiana.

  13. Software cost/resource modeling: Software quality tradeoff measurement

    NASA Technical Reports Server (NTRS)

    Lawler, R. W.

    1980-01-01

    A conceptual framework for treating software quality from a total system perspective is developed. Examples are given to show how system quality objectives may be allocated to hardware and software; to illustrate trades among quality factors, both hardware and software, to achieve system performance objectives; and to illustrate the impact of certain design choices on software functionality.

  14. Can high quality overcome consumer resistance to restricted provider access? Evidence from a health plan choice experiment.

    PubMed

    Harris, Katherine M

    2002-06-01

    To investigate the impact of quality information on the willingness of consumers to enroll in health plans that restrict provider access. A survey administered to respondents between the ages of 25 and 64 in the West Los Angeles area with private health insurance. An experimental approach is used to measure the effect of variation in provider network features and information about the quality of network physicians on hypothetical plan choices. Conditional logit models are used to analyze the experimental choice data. Next, choice model parameter estimates are used to simulate the impact of changes in plan features on the market shares of competing health plans and to calculate the quality level required to make consumers indifferent to changes in provider access. The presence of quality information reduced the importance of provider network features in plan choices as hypothesized. However, there were not statistically meaningful differences by type of quality measure (i.e., consumer assessed versus expert assessed). The results imply that large quality differences are required to make consumers indifferent to changes in provider access. The impact of quality on plan choices depended more on the particular measure and less on the type of measure. Quality ratings based on the proportion of survey respondents "extremely satisfied with results of care" had the greatest impact on plan choice while the proportion of network doctors "affiliated with university medical centers" had the least. Other consumer and expert assessed measures had more comparable effects. Overall the results provide empirical evidence that consumers are willing to trade high quality for restrictions on provider access. This willingness to trade implies that relatively small plans that place restrictions on provider access can successfully compete against less restrictive plans when they can demonstrate high quality. However, the results of this study suggest that in many cases, the level of quality required for consumers to accept access restrictions may be so high as to be unattainable. The results provide empirical support for the current focus of decision support efforts on consumer assessed quality measures. At the same time, however, the results suggest that consumers would also value quality measures based on expert assessments. This finding is relevant given the lack of comparative quality information based on expert judgment and research suggesting that consumers have apprehensions about their ability to meaningfully interpret performance-based quality measures.
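
    For illustration, the sketch below (Python) shows the market-share simulation step described above: given conditional-logit coefficients, predicted shares are the softmax of each plan's utility and can be re-computed after changing an attribute. The coefficients and plan attributes are invented, not the study's estimates.

        # Minimal sketch of a conditional-logit market-share simulation with
        # invented coefficients and plan attributes.
        import math

        coef = {"premium": -0.02, "broad_network": 0.8, "high_quality": 1.1}

        plans = {
            "restrictive, high quality": {"premium": 60, "broad_network": 0, "high_quality": 1},
            "broad network, average":    {"premium": 80, "broad_network": 1, "high_quality": 0},
        }

        def shares(plans):
            utils = {name: sum(coef[k] * v for k, v in x.items()) for name, x in plans.items()}
            denom = sum(math.exp(u) for u in utils.values())
            return {name: round(math.exp(u) / denom, 3) for name, u in utils.items()}

        print(shares(plans))
        # simulate a policy change: the restrictive plan drops its premium by $20
        plans["restrictive, high quality"]["premium"] -= 20
        print(shares(plans))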

  15. Prioritizing public-private partnership models for public hospitals of Iran based on performance indicators.

    PubMed

    Gholamzadeh Nikjoo, Raana; Jabbari Beyrami, Hossein; Jannati, Ali; Asghari Jaafarabadi, Mohammad

    2012-01-01

The present study was conducted to scrutinize public-private partnership (PPP) models in public hospitals of different countries on the basis of performance indicators, in order to select appropriate models for Iranian hospitals. In this mixed (quantitative-qualitative) study, a systematic review and an expert panel were conducted to identify various PPP models as well as performance indicators. In the second step, we prioritized the performance indicators and the PPP models based on the selected performance indicators using the Analytic Hierarchy Process (AHP) technique. The data were analyzed with Excel 2007 and Expert Choice 11 software. In the quality-effectiveness area, indicators such as the hospital infection rate (100%), hospital accident prevalence rate (73%), net hospital mortality rate (63%), and patient satisfaction percentage (53%); in the accessibility-equity area, indicators such as average inpatient waiting time (100%) and average outpatient waiting time (74%); and in the financial-efficiency area, indicators including average length of stay (100%), bed occupancy ratio (99%), and the ratio of specific income to total cost (97%) were chosen as the key performance indicators. In the prioritization of the PPP models, the clinical outsourcing, management, privatization, BOO (build, own, operate) and non-clinical outsourcing models achieved high priority across the various performance indicator areas. This study provides the most common PPP options in the field of public hospitals and gathers suitable evidence from experts for choosing an appropriate PPP option for public hospitals. The effect of private-sector involvement on public hospital performance will differ depending on which PPP option is undertaken.
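
    For illustration, the sketch below (Python) shows the core AHP calculation that tools such as Expert Choice perform: deriving priority weights from a pairwise-comparison matrix via its principal eigenvector and checking the consistency ratio. The comparison matrix is invented, not taken from the study.

        # Minimal sketch of an AHP priority calculation with an invented
        # pairwise-comparison matrix over three criteria areas.
        import numpy as np

        # criteria: quality-effectiveness, accessibility-equity, financial-efficiency
        A = np.array([[1.0, 3.0, 2.0],
                      [1/3, 1.0, 1/2],
                      [1/2, 2.0, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        weights = w / w.sum()                      # priority vector

        n = A.shape[0]
        lambda_max = eigvals.real[k]
        ci = (lambda_max - n) / (n - 1)            # consistency index
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
        print("weights:", np.round(weights, 3), "CR:", round(ci / ri, 3))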

  16. Understanding Expertise-Based Training Effects on the Software Evaluation Process of Mathematics Education Teachers

    ERIC Educational Resources Information Center

    Incikabi, Lutfi; Sancar Tokmak, Hatice

    2012-01-01

    This case study examined the educational software evaluation processes of pre-service teachers who attended either expertise-based training (XBT) or traditional training in conjunction with a Software-Evaluation checklist. Forty-three mathematics teacher candidates and three experts participated in the study. All participants evaluated educational…

  17. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations

    NASA Astrophysics Data System (ADS)

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.

  18. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations

    PubMed Central

    Vergara-Perez, Sandra; Marucho, Marcelo

    2015-01-01

One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules. PMID:26924848

  19. MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations.

    PubMed

    Vergara-Perez, Sandra; Marucho, Marcelo

    2016-01-01

One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.
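
    To give a flavor of the kind of calculation such PB solvers automate, the sketch below (Python) solves the linearized (Debye-Huckel) Poisson-Boltzmann equation in one dimension with finite differences; values are dimensionless and invented, whereas tools such as APBS/MPBEC solve the full three-dimensional problem.

        # Minimal sketch: solve phi'' = kappa^2 * phi on [0, L] with phi(0) = phi0,
        # phi(L) = 0 by finite differences, and compare with the analytic solution.
        import numpy as np

        kappa, phi0, L, n = 1.0, 1.0, 10.0, 201
        x = np.linspace(0.0, L, n)
        h = x[1] - x[0]

        # tridiagonal system for interior nodes:
        # (phi[i-1] - 2 phi[i] + phi[i+1]) / h^2 = kappa^2 * phi[i]
        main = np.full(n - 2, -2.0 / h**2 - kappa**2)
        off = np.full(n - 3, 1.0 / h**2)
        A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
        b = np.zeros(n - 2)
        b[0] -= phi0 / h**2                  # left boundary value folded into RHS

        phi = np.zeros(n)
        phi[0] = phi0
        phi[1:-1] = np.linalg.solve(A, b)

        exact = phi0 * np.sinh(kappa * (L - x)) / np.sinh(kappa * L)
        print("max abs error vs analytic solution:", float(np.max(np.abs(phi - exact))))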

  20. Using Dissimilarity Metrics to Identify Interesting Designs

    NASA Technical Reports Server (NTRS)

    Feather, Martin; Kiper, James

    2006-01-01

A computer program helps to blend the power of automated-search software, which is able to generate large numbers of design solutions, with the insight of expert designers, who are able to identify preferred designs but do not have time to examine all the solutions. From among the many automated solutions to a given design problem, the program selects a smaller number of solutions that are worthy of scrutiny by the experts in the sense that they are sufficiently dissimilar from each other. The program makes the selection in an interactive process that involves a sequence of data-mining steps interspersed with visual displays of results of these steps to the experts. At crucial points between steps, the experts provide directives to guide the process. The program uses heuristic search techniques to identify nearly optimal design solutions and uses dissimilarity metrics defined by the experts to characterize the degree to which solutions are interestingly different. The search, data-mining, and visualization features of the program were derived from previously developed risk-management software used to support a risk-centric design methodology.
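
    For illustration, the sketch below (Python, invented design vectors) shows one simple way to pick a small, mutually dissimilar subset from a large pool of generated designs: a greedy max-min rule driven by a dissimilarity metric that the experts would supply; plain Euclidean distance stands in for that metric here.

        # Minimal sketch of greedy max-min selection of dissimilar designs.
        import random
        import math

        random.seed(1)
        designs = [[random.random() for _ in range(4)] for _ in range(500)]

        def dissimilarity(a, b):           # placeholder for an expert-defined metric
            return math.dist(a, b)

        def select_diverse(designs, k):
            chosen = [designs[0]]
            while len(chosen) < k:
                # pick the design whose nearest already-chosen neighbour is farthest away
                best = max((d for d in designs if d not in chosen),
                           key=lambda d: min(dissimilarity(d, c) for c in chosen))
                chosen.append(best)
            return chosen

        subset = select_diverse(designs, 5)
        print(len(subset), "designs selected for expert review")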

  1. Simulation Of Combat With An Expert System

    NASA Technical Reports Server (NTRS)

    Provenzano, J. P.

    1989-01-01

The proposed expert system predicts outcomes of combat situations. Called COBRA (combat outcome based on rules for attrition), the system selects rules for mathematical modeling of losses and discrete events in combat according to previous experience. It is used with another software module known as the "Game". The Game/COBRA software system, consisting of the Game and COBRA modules, provides for both quantitative and qualitative aspects in simulations of battles. While COBRA is intended for simulation of large-scale military exercises, the concepts embodied in it have much broader applicability. In industrial research, the knowledge-based system enables qualitative as well as quantitative simulations.

  2. Expert system verification and validation study. Delivery 3A and 3B: Trip summaries

    NASA Technical Reports Server (NTRS)

    French, Scott

    1991-01-01

Key results are documented from attending the 4th workshop on verification, validation, and testing. The most interesting part of the workshop was when representatives from the U.S., Japan, and Europe presented surveys of VV&T within their respective regions. Another interesting part focused on current efforts to define industry standards for artificial intelligence and how that might affect approaches to VV&T of expert systems. The next part of the workshop focused on VV&T methods for applying mathematical techniques to the verification of rule bases and on techniques for capturing information about the software development process. The final part focused on software tools. A summary is also presented of the EPRI conference on 'Methodologies, Tools, and Standards for Cost Effective Reliable Software Verification and Validation'. The conference was divided into discussion sessions on the following issues: development process, automated tools, software reliability, methods, standards, and cost/benefit considerations.

  3. Artificial intelligence against breast cancer (A.N.N.E.S-B.C.-Project).

    PubMed

    Parmeggiani, Domenico; Avenia, Nicola; Sanguinetti, Alessandro; Ruggiero, Roberto; Docimo, Giovanni; Siciliano, Mattia; Ambrosino, Pasquale; Madonna, Imma; Peltrini, Roberto; Parmeggiani, Umberto

    2012-01-01

Our preliminary study examined the development of an advanced innovative technology with the objectives of (1) developing methodologies and algorithms for an Artificial Neural Network (ANN) system to improve the interpretation of mammography and ultrasonography images, and (2) creating autonomous software as a diagnostic tool for physicians, allowing advanced application of databases using Artificial Intelligence (Expert System). Since 2004, 550 female patients over 40 years old were divided into two groups: (1) 310 patients underwent ultrasound every 6 months and mammography every year, read by expert radiologists; (2) 240 patients had the same screening program and were also examined by our diagnosis software, developed with ANN-ES technology by the Engineering Aircraft Research Project team. The information was continually updated and returned to the Expert System, defining the principal rules of automatic diagnosis. In the second group we compared the expert radiologist's decision, the ANN-ES decision, and the expert radiologists' decision supported by the ANN-ES. The second group had significantly better cancer diagnosis and better specificity for breast lesion risk, with the best results obtained when the radiologist's decision was aided by the ANN software. The ANN-ES group was able to select, by anamnestic, diagnostic, and genetic means, 8 patients for prophylactic surgery, finding 4 cancers at a very early stage. Although this is only a preliminary study, this innovative diagnostic tool seems to provide better positive and negative predictive value in cancer diagnosis as well as in the identification of breast risk lesions.

  4. A preference-based approach to deriving breeding objectives: applied to sheep breeding.

    PubMed

    Byrne, T J; Amer, P R; Fennessy, P F; Hansen, P; Wickham, B W

    2012-05-01

    Using internet-based software known as 1000Minds, choice-experiment surveys were administered to experts and farmers from the Irish sheep industry to capture their preferences with respect to the relative importance - represented by part-worth utilities - of target traits in the definition of a breeding objective for sheep in Ireland. Sheep production in Ireland can be broadly separated into lowland and hill farming systems; therefore, each expert was asked to answer the survey first as if he or she were a lowland farmer and second as a hill farmer. In addition to the experts, a group of lowland and a group of hill farmers were surveyed to assess whether, and to what extent, the groups' preferences differ from the experts' preferences. The part-worth utilities obtained from the surveys were converted into relative economic value terms per unit change in each trait. These measures - referred to as 'preference economic values' (pEVs) - were compared with economic values for the traits obtained from bio-economic models. The traits 'value per lamb at the meat processor' and 'lamb survival to slaughter' were revealed as being the two most important traits for the surveyed experts responding as lowland and hill farmers, respectively. In contrast, 'number of foot baths per year for ewes' and 'number of anthelmintic treatments per year for ewes' were the two least important traits. With the exception of 'carcase fat class' (P < 0.05), there were no statistically significant differences in the mean pEVs obtained from the surveyed experts under both the lowland and hill farming scenarios. Compared with the economic values obtained from bio-economic models, the pEVs for 'lambing difficulty' when the experts responded as lowland farmers were higher (P < 0.001); and they were lower (P < 0.001) for 'carcase conformation class', 'carcase fat class' (less negative) and 'ewe mature weight' (less negative) under both scenarios. Compared with surveyed experts, pEVs from lowland farmers differed significantly for 'lambing difficulty', 'lamb survival to slaughter', 'average days to slaughter of lambs', 'number of foot baths per year for ewes', 'number of anthelmintic treatments per year for ewes' and 'ewe mature weight'. Compared with surveyed experts, pEVs from hill farmers differed significantly for 'lambing difficulty', 'average days to slaughter of lambs' and 'number of foot baths per year for ewes'. This study indicates that preference-based tools have the potential to contribute to the definition of breeding objectives where production and price data are not available.

  5. Discrete Choice Experiments: A Guide to Model Specification, Estimation and Software.

    PubMed

    Lancsar, Emily; Fiebig, Denzil G; Hole, Arne Risa

    2017-07-01

    We provide a user guide on the analysis of data (including best-worst and best-best data) generated from discrete-choice experiments (DCEs), comprising a theoretical review of the main choice models followed by practical advice on estimation and post-estimation. We also provide a review of standard software. In providing this guide, we endeavour to not only provide guidance on choice modelling but to do so in a way that provides a 'way in' for researchers to the practicalities of data analysis. We argue that choice of modelling approach depends on the research questions, study design and constraints in terms of quality/quantity of data and that decisions made in relation to analysis of choice data are often interdependent rather than sequential. Given the core theory and estimation of choice models is common across settings, we expect the theoretical and practical content of this paper to be useful to researchers not only within but also beyond health economics.
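
    To make the estimation step concrete, the sketch below (Python, fully synthetic data) fits the baseline conditional logit that such guides cover by maximum likelihood; real analyses would normally use a dedicated package (e.g. biogeme or xlogit in Python, apollo in R) rather than this hand-rolled version.

        # Minimal sketch of conditional-logit estimation on synthetic choice data.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        n_obs, n_alts, n_attrs = 500, 3, 2
        true_beta = np.array([1.0, -0.5])

        X = rng.normal(size=(n_obs, n_alts, n_attrs))        # alternative attributes
        util = X @ true_beta + rng.gumbel(size=(n_obs, n_alts))
        choice = util.argmax(axis=1)                         # chosen alternative per task

        def neg_loglik(beta):
            v = X @ beta
            log_p = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
            return -log_p[np.arange(n_obs), choice].sum()

        result = minimize(neg_loglik, np.zeros(n_attrs), method="BFGS")
        print("estimated beta:", np.round(result.x, 2), "(true:", true_beta, ")")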

  6. Expert AIV: Study and Prototyping of an Expert System, To Support the Conceptual AIV Phases Of Space Programs

    NASA Astrophysics Data System (ADS)

    Andrina, G.; Basso, V.; Saitta, L.

    2004-08-01

The effort to optimise the AIV process has in recent years been focused mainly on the standardisation of approaches and on the application of new methodologies. But the earlier the intervention, the greater the benefits in terms of cost and schedule. The early phases of the AIV process have until now relied on standards that need to be tailored through company and personal expertise. A study has therefore been conducted to explore the possibility of developing an expert system that helps in making choices during the early, conceptual phase of Assembly, Integration and Verification, namely the model philosophy and the test definition. The work focused on a hybrid approach, allowing interaction between historical data and human expertise. The expert system that has been prototyped exploits both information elicited from domain experts and the results of a data-mining activity on existing databases of verification data from completed projects. The data-mining algorithms allow the extraction of past experience resident in the ESA/MATD database, which contains information in the form of statistical summaries, costs, and frequencies of on-ground and in-flight failures. Non-trivial associations found in this way can then be used by the experts to manage new decisions in a controlled, standards-driven way at the beginning of or during the AIV process. Moreover, Expert AIV could allow compilation of a set of feasible AIV schedules to support further programmatic-driven choices.

  7. A Bibliography of Externally Published Works by the SEI Engineering Techniques Program

    DTIC Science & Technology

    1992-08-01

media, and virtual reality * model-based engineering * programming languages * reuse * software architectures * software engineering as a discipline... Knowledge-Based Engineering Environments." IEEE Expert 3, 2 (May 1988): 18-23, 26-32. Audience: Practitioner [Klein89b] Klein, D.V. "Comparison of... Terms with Software Reuse Terminology: A Model-Based Approach." ACM SIGSOFT Software Engineering Notes 16, 2 (April 1991): 45-51. Audience: Practitioner

  8. Voices from the Field: 30 Expert Opinions on America 2000, The Bush Administration Strategy To "Reinvent" America's Schools.

    ERIC Educational Resources Information Center

    Institute for Educational Leadership, Washington, DC.

    "America 2000," President Bush's national strategy for "Reinventing America's Schools" is evaluated by 30 invited experts in the following papers: "Bottom-up Reform From the Top Down" (John E. Chubb); "Would Choice + Competition Yield Quality Education?" (Richard F. Elmore); "The Federal Education Role…

  9. Are Future Teachers Methodically Trained to Distinguish Good from Bad Educational Software?

    ERIC Educational Resources Information Center

    Pjanic, Karmelita; Hamzabegovic, Jasna

    2016-01-01

In the era of information technology and the general digitization of society, an invasion of every kind of software is evident. However laudable the existence and development of educational software may be, it is very important to take into account its role, its quality, and whether it achieves the desired goal. In addition to programming experts it is…

  10. Expert system verification and validation guidelines/workshop task. Deliverable no. 1: ES V/V guidelines

    NASA Technical Reports Server (NTRS)

    French, Scott W.

    1991-01-01

    The goals are to show that verifying and validating a software system is a required part of software development and has a direct impact on the software's design and structure. Workshop tasks are given in the areas of statistics, integration/system test, unit and architectural testing, and a traffic controller problem.

  11. The Coming of Digital Desktop Media.

    ERIC Educational Resources Information Center

    Galbreath, Jeremy

    1992-01-01

    Discusses the movement toward digital-based platforms including full-motion video for multimedia products. Hardware- and software-based compression techniques for digital data storage are considered, and a chart summarizes features of Digital Video Interactive, Moving Pictures Experts Group, P x 64, Joint Photographic Experts Group, Apple…

  12. NASA's Software Bank (CLIPS)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    C Language Integrated Production System (CLIPS) was used by Esse Systems to develop an expert system for clients who want to automate portions of their operations. The resulting program acts as a scheduling expert and automates routine, repetitive scheduling decisions, allowing employees to spend time on more creative projects.

  13. Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 2. Results

    NASA Technical Reports Server (NTRS)

    Glass, B. J. (Editor)

    1992-01-01

The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.

  14. Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 1. Overview

    NASA Technical Reports Server (NTRS)

    Glass, B. J. (Editor)

    1992-01-01

The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS test bed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.

  15. Thermal Expert System (TEXSYS): Systems autonomy demonstration project, volume 2. Results

    NASA Astrophysics Data System (ADS)

    Glass, B. J.

    1992-10-01

The Systems Autonomy Demonstration Project (SADP) produced a knowledge-based real-time control system for control and fault detection, isolation, and recovery (FDIR) of a prototype two-phase Space Station Freedom external active thermal control system (EATCS). The Thermal Expert System (TEXSYS) was demonstrated in recent tests to be capable of reliable fault anticipation and detection, as well as ordinary control of the thermal bus. Performance requirements were addressed by adopting a hierarchical symbolic control approach, layering model-based expert system software on a conventional, numerical data acquisition and control system. The model-based reasoning capabilities of TEXSYS were shown to be advantageous over typical rule-based expert systems, particularly for detection of unforeseen faults and sensor failures. Volume 1 gives a project overview and testing highlights. Volume 2 provides detail on the EATCS testbed, test operations, and online test results. Appendix A is a test archive, while Appendix B is a compendium of design and user manuals for the TEXSYS software.

  16. Eye-tracking of visual attention in web-based assessment using the Force Concept Inventory

    NASA Astrophysics Data System (ADS)

    Han, Jing; Chen, Li; Fu, Zhao; Fritchman, Joseph; Bao, Lei

    2017-07-01

This study used eye-tracking technology to investigate students’ visual attention while taking the Force Concept Inventory (FCI) in a web-based interface. Eighty-nine university students were randomly selected into a pre-test group and a post-test group. Students took the 30-question FCI on a computer equipped with an eye-tracker. There were seven weeks of instruction between the pre- and post-test data collection. Students’ performance on the FCI improved significantly from pre-test to post-test. Meanwhile, the eye-tracking results reveal that the time students spent on taking the FCI test was not affected by student performance and did not change from pre-test to post-test. Analysis of students’ attention to answer choices shows that on the pre-test students primarily focused on the naïve choices and ignored the expert choices. On the post-test, although students had shifted their primary attention to the expert choices, they still kept a high level of attention to the naïve choices, indicating significant conceptual mixing and competition during problem solving. Outcomes of this study provide new insights on students’ conceptual development in learning physics.

  17. Mapping analysis and planning system for the John F. Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Hall, C. R.; Barkaszi, M. J.; Provancha, M. J.; Reddick, N. A.; Hinkle, C. R.; Engel, B. A.; Summerfield, B. R.

    1994-01-01

Environmental management, impact assessment, research and monitoring are multidisciplinary activities which are ideally suited to incorporate a multi-media approach to environmental problem solving. Geographic information systems (GIS), simulation models, neural networks and expert-system software are some of the advancing technologies being used for data management, query, analysis and display. At the 140,000 acre John F. Kennedy Space Center, the Advanced Software Technology group has been supporting development and implementation of a program that integrates these and other rapidly evolving hardware and software capabilities into a comprehensive Mapping, Analysis and Planning System (MAPS) based in a workstation/local area network environment. An expert-system shell is being developed to link the various databases to guide users through the numerous stages of a facility siting and environmental assessment. The expert-system shell approach is appealing for its ease of data access by management-level decision makers while maintaining the involvement of the data specialists. This, as well as increased efficiency and accuracy in data analysis and report preparation, can benefit any organization involved in natural resources management.

  18. Rule groupings: An approach towards verification of expert systems

    NASA Technical Reports Server (NTRS)

    Mehrotra, Mala

    1991-01-01

    Knowledge-based expert systems are playing an increasingly important role in NASA space and aircraft systems. However, many of NASA's software applications are life- or mission-critical and knowledge-based systems do not lend themselves to the traditional verification and validation techniques for highly reliable software. Rule-based systems lack the control abstractions found in procedural languages. Hence, it is difficult to verify or maintain such systems. Our goal is to automatically structure a rule-based system into a set of rule-groups having a well-defined interface to other rule-groups. Once a rule base is decomposed into such 'firewalled' units, studying the interactions between rules would become more tractable. Verification-aid tools can then be developed to test the behavior of each such rule-group. Furthermore, the interactions between rule-groups can be studied in a manner similar to integration testing. Such efforts will go a long way towards increasing our confidence in the expert-system software. Our research efforts address the feasibility of automating the identification of rule groups, in order to decompose the rule base into a number of meaningful units.
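
    For illustration, the sketch below (Python, toy rule base) shows one simple way the kind of rule grouping described above could be computed: rules whose consequents feed other rules' antecedents are linked, and connected components become candidate "firewalled" groups. A real tool would parse the expert-system language itself.

        # Minimal sketch of rule grouping by shared facts, with an invented rule base.
        from collections import defaultdict

        rules = {  # rule name -> (antecedent facts, consequent facts)
            "R1": ({"valve_open", "pressure_high"}, {"alarm"}),
            "R2": ({"alarm"}, {"shutdown"}),
            "R3": ({"temp_low"}, {"heater_on"}),
            "R4": ({"heater_on"}, {"temp_rising"}),
        }

        # undirected adjacency: two rules interact if one's consequent overlaps the other's antecedent
        adj = defaultdict(set)
        for a, (ante_a, cons_a) in rules.items():
            for b, (ante_b, cons_b) in rules.items():
                if a != b and (cons_a & ante_b or cons_b & ante_a):
                    adj[a].add(b)
                    adj[b].add(a)

        def components(nodes, adj):
            seen, groups = set(), []
            for node in nodes:
                if node in seen:
                    continue
                stack, group = [node], set()
                while stack:
                    cur = stack.pop()
                    if cur in seen:
                        continue
                    seen.add(cur)
                    group.add(cur)
                    stack.extend(adj[cur] - seen)
                groups.append(sorted(group))
            return groups

        print(components(rules, adj))   # -> [['R1', 'R2'], ['R3', 'R4']]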

  19. Implementation of a data management software system for SSME test history data

    NASA Technical Reports Server (NTRS)

    Abernethy, Kenneth

    1986-01-01

The implementation of a software system for managing Space Shuttle Main Engine (SSME) test/flight historical data is presented. The software system uses the database management system RIM7 for primary data storage and routine data management, but includes several FORTRAN programs, described here, which provide customized access to the RIM7 database. The consolidation, modification, and transfer of data from the database THIST to the RIM7 database THISRM are discussed. The RIM7 utility modules for generating some standard reports from THISRM and performing some routine updating and maintenance are briefly described. The FORTRAN accessing programs described include programs for initial loading of large data sets into the database, capturing data from files for database inclusion, and producing specialized statistical reports which cannot be provided by the RIM7 report generator utility. An expert system tutorial, constructed using the expert system shell product INSIGHT2, is described. Finally, a potential expert system, which would analyze data in the database, is outlined. This system could use INSIGHT2 as well and would take advantage of RIM7's compatibility with the microcomputer database system RBase 5000.

  20. Knowledge-based control of an adaptive interface

    NASA Technical Reports Server (NTRS)

    Lachman, Roy

    1989-01-01

The analysis, development strategy, and preliminary design for an intelligent, adaptive interface is reported. The design philosophy couples knowledge-based system technology with standard human factors approaches to interface development for computer workstations. An expert system has been designed to drive the interface for application software. The intelligent interface will be linked to application packages, one at a time, that are planned for multiple-application workstations aboard Space Station Freedom. Current requirements call for most Space Station activities to be conducted at the workstation consoles. One set of activities will consist of standard data management services (DMS). DMS software includes text processing, spreadsheets, data base management, etc. Text processing was selected for the first intelligent interface prototype because text-processing software can be developed initially as fully functional but limited to a small set of commands. The program's complexity can then be increased incrementally. The operator's behavior and three types of instructions to the underlying application software are included in the rule base. A conventional expert-system inference engine searches the data base for antecedents to rules and sends the consequents of fired rules as commands to the underlying software. Plans for putting the expert system on top of a second application, a database management system, will be carried out following behavioral research on the first application. The intelligent interface design is suitable for use with ground-based workstations now common in government, industrial, and educational organizations.
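
    As an illustration of the inference loop described above, the sketch below (Python, invented rules and facts) matches rule antecedents against a small fact base describing operator behaviour and emits the consequents of fired rules as commands to the underlying application.

        # Minimal sketch of a forward-chaining rule engine driving an adaptive interface.
        facts = {"novice_user", "command_misspelled"}

        rules = [
            ({"novice_user"}, "enable_verbose_prompts"),
            ({"command_misspelled"}, "suggest_correction"),
            ({"expert_user", "repeated_command"}, "offer_macro"),
        ]

        commands = []
        changed = True
        while changed:                        # keep firing until nothing new is added
            changed = False
            for antecedents, command in rules:
                if antecedents <= facts and command not in commands:
                    commands.append(command)  # command sent to the application software
                    facts.add(command)        # fired consequents become new facts
                    changed = True

        print(commands)                       # -> ['enable_verbose_prompts', 'suggest_correction']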

  1. Two New Tools for Glycopeptide Analysis Researchers: A Glycopeptide Decoy Generator and a Large Data Set of Assigned CID Spectra of Glycopeptides.

    PubMed

    Lakbub, Jude C; Su, Xiaomeng; Zhu, Zhikai; Patabandige, Milani W; Hua, David; Go, Eden P; Desaire, Heather

    2017-08-04

    The glycopeptide analysis field is tightly constrained by a lack of effective tools that translate mass spectrometry data into meaningful chemical information, and perhaps the most challenging aspect of building effective glycopeptide analysis software is designing an accurate scoring algorithm for MS/MS data. We provide the glycoproteomics community with two tools to address this challenge. The first tool, a curated set of 100 expert-assigned CID spectra of glycopeptides, contains a diverse set of spectra from a variety of glycan types; the second tool, Glycopeptide Decoy Generator, is a new software application that generates glycopeptide decoys de novo. We developed these tools so that emerging methods of assigning glycopeptides' CID spectra could be rigorously tested. Software developers or those interested in developing skills in expert (manual) analysis can use these tools to facilitate their work. We demonstrate the tools' utility in assessing the quality of one particular glycopeptide software package, GlycoPep Grader, which assigns glycopeptides to CID spectra. We first acquired the set of 100 expert assigned CID spectra; then, we used the Decoy Generator (described herein) to generate 20 decoys per target glycopeptide. The assigned spectra and decoys were used to test the accuracy of GlycoPep Grader's scoring algorithm; new strengths and weaknesses were identified in the algorithm using this approach. Both newly developed tools are freely available. The software can be downloaded at http://glycopro.chem.ku.edu/GPJ.jar.

  2. Explanation Generation in Expert Systems (A Literature Review and Implementation)

    DTIC Science & Technology

    1989-01-01

Rubinoff. Explaining concepts in expert systems: The clear system. In Proceedings of the Second Conference on Artificial Intelligence Applications, pages... intelligent computer software systems are needed. The Expert System (ES) technology of Artificial Intelligence (AI) is one solution that is emerging to... The Random House College Dictionary defines explanation as: "to make plain, clear, or intelligible something that is not known or understood". [33] While

  3. An overview of expert systems. [artificial intelligence

    NASA Technical Reports Server (NTRS)

    Gevarter, W. B.

    1982-01-01

    An expert system is defined and its basic structure is discussed. The knowledge base, the inference engine, and uses of expert systems are discussed. Architecture is considered, including choice of solution direction, reasoning in the presence of uncertainty, searching small and large search spaces, handling large search spaces by transforming them and by developing alternative or additional spaces, and dealing with time. Existing expert systems are reviewed. Tools for building such systems, construction, and knowledge acquisition and learning are discussed. Centers of research and funding sources are listed. The state-of-the-art, current problems, required research, and future trends are summarized.

  4. Systemic Sclerosis Classification Criteria: Developing methods for multi-criteria decision analysis with 1000Minds

    PubMed Central

    Johnson, Sindhu R.; Naden, Raymond P.; Fransen, Jaap; van den Hoogen, Frank; Pope, Janet E.; Baron, Murray; Tyndall, Alan; Matucci-Cerinic, Marco; Denton, Christopher P.; Distler, Oliver; Gabrielli, Armando; van Laar, Jacob M.; Mayes, Maureen; Steen, Virginia; Seibold, James R.; Clements, Phillip; Medsger, Thomas A.; Carreira, Patricia E.; Riemekasten, Gabriela; Chung, Lorinda; Fessler, Barri J.; Merkel, Peter A.; Silver, Richard; Varga, John; Allanore, Yannick; Mueller-Ladner, Ulf; Vonk, Madelon C.; Walker, Ulrich A.; Cappelli, Susanna; Khanna, Dinesh

    2014-01-01

    Objective Classification criteria for systemic sclerosis (SSc) are being developed. The objectives were to: develop an instrument for collating case-data and evaluate its sensibility; use forced-choice methods to reduce and weight criteria; and explore agreement between experts on the probability that cases were classified as SSc. Study Design and Setting A standardized instrument was tested for sensibility. The instrument was applied to 20 cases covering a range of probabilities that each had SSc. Experts rank-ordered cases from highest to lowest probability; reduced and weighted the criteria using forced-choice methods; and re-ranked the cases. Consistency in rankings was evaluated using intraclass correlation coefficients (ICC). Results Experts endorsed clarity (83%), comprehensibility (100%), and face and content validity (100%). Criteria were weighted (points): finger skin thickening (14–22), finger-tip lesions (9–21), friction rubs (21), finger flexion contractures (16), pulmonary fibrosis (14), SSc-related antibodies (15), Raynaud’s phenomenon (13), calcinosis (12), pulmonary hypertension (11), renal crisis (11), telangiectasia (10), abnormal nailfold capillaries (10), esophageal dilation (7) and puffy fingers (5). The ICC across experts was 0.73 (95%CI 0.58,0.86) and improved to 0.80 (95%CI 0.68,0.90). Conclusions Using a sensible instrument and forced-choice methods, the number of criteria was reduced by 39% (from 23 to 14) and the criteria were weighted. Our methods reflect the rigors of measurement science and serve as a template for developing classification criteria. PMID:24721558

  5. Towards a Methodology for Identifying Program Constraints During Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Romo, Lilly; Gates, Ann Q.; Della-Piana, Connie Kubo

    1997-01-01

    Requirements analysis is the activity that involves determining the needs of the customer, identifying the services that the software system should provide, and understanding the constraints on the solution. The result of this activity is a natural language document, typically referred to as the requirements definition document. Some of the problems that exist in defining requirements in large-scale software projects include synthesizing knowledge from various domain experts and communicating this information across multiple levels of personnel. One approach that addresses part of this problem is called context monitoring and involves identifying the properties of and relationships between objects that the system will manipulate. This paper examines several software development methodologies, discusses the support that each provides for eliciting such information from experts and specifying the information, and suggests refinements to these methodologies.

  6. The integration of automated knowledge acquisition with computer-aided software engineering for space shuttle expert systems

    NASA Technical Reports Server (NTRS)

    Modesitt, Kenneth L.

    1990-01-01

    A prediction was made that the terms expert systems and knowledge acquisition would begin to disappear over the next several years. This is not because they are falling into disuse; it is rather that practitioners are realizing that they are valuable adjuncts to software engineering, in terms of problem domains addressed, user acceptance, and in development methodologies. A specific problem was discussed, that of constructing an automated test analysis system for the Space Shuttle Main Engine. In this domain, knowledge acquisition was part of requirements systems analysis, and was performed with the aid of a powerful inductive ESBT in conjunction with a computer aided software engineering (CASE) tool. The original prediction is not a very risky one -- it has already been accomplished.

  7. An expert system for the design of heating, ventilating, and air-conditioning systems

    NASA Astrophysics Data System (ADS)

    Camejo, Pedro Jose

    1989-12-01

    Expert systems are computer programs that seek to mimic human reason. An expert system shell, a software program commonly used for developing expert systems in a relatively short time, was used to develop a prototypical expert system for the design of heating, ventilating, and air-conditioning (HVAC) systems in buildings. Because HVAC design involves several related knowledge domains, developing an expert system for HVAC design requires the integration of several smaller expert systems known as knowledge bases. A menu program and several auxiliary programs for gathering data, completing calculations, printing project reports, and passing data between the knowledge bases are needed and have been developed to join the separate knowledge bases into one simple-to-use program unit.

  8. Space shuttle onboard navigation console expert/trainer system

    NASA Technical Reports Server (NTRS)

    Wang, Lui; Bochsler, Dan

    1987-01-01

    A software system for use in enhancing operational performance as well as training ground controllers in monitoring onboard Space Shuttle navigation sensors is described. The Onboard Navigation (ONAV) development reflects a trend toward following a structured and methodical approach to development. The ONAV system must deal with integrated conventional and expert system software, complex interfaces, and implementation limitations due to the target operational environment. An overview of the onboard navigation sensor monitoring function is presented, along with a description of guidelines driving the development effort, requirements that the system must meet, current progress, and future efforts.

  9. End effector monitoring system: An illustrated case of operational prototyping

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Land, Sherry A.; Thronesbery, Carroll

    1994-01-01

    Operational prototyping is introduced to help developers apply software innovations to real-world problems, to help users articulate requirements, and to help develop more usable software. Operational prototyping has been applied to an expert system development project. The expert system supports fault detection and management during grappling operations of the Space Shuttle payload bay arm. The dynamic exchanges among operational prototyping team members are illustrated in a specific prototyping session. We discuss the requirements for operational prototyping technology, types of projects for which operational prototyping is best suited and when it should be applied to those projects.

  10. Engineering monitoring expert system's developer

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.

    1991-01-01

    This research project is designed to apply artificial intelligence technology, including expert systems, dynamic interface of neural networks, and hypertext, to construct an expert system developer. The developer environment is specifically suited to building expert systems which monitor the performance of ground support equipment for propulsion systems and testing facilities. The expert system developer, through the use of a graphics interface and a rule network, will be transparent to the user during rule construction and data scanning of the knowledge base. The project will result in a software system that allows its user to build specific monitoring-type expert systems which monitor various equipment used for propulsion systems or ground testing facilities and accrue system performance information in a dynamic knowledge base.

  11. Development of a Spacecraft Materials Selector Expert System

    NASA Technical Reports Server (NTRS)

    Pippin, G.; Kauffman, W. (Technical Monitor)

    2002-01-01

    This report contains a description of the knowledge base tool and examples of its use. A downloadable version of the Spacecraft Materials Selector (SMS) knowledge base is available through the NASA Space Environments and Effects Program. The "Spacecraft Materials Selector" knowledge base is part of an electronic expert system. The expert system consists of an inference engine that contains the "decision-making" code and the knowledge base that contains the selected body of information. The inference engine is a software package previously developed at Boeing, called the Boeing Expert System Tool (BEST) kit.

  12. Fuzzy Logic Engine

    NASA Technical Reports Server (NTRS)

    Howard, Ayanna

    2005-01-01

    The Fuzzy Logic Engine is a software package that enables users to embed fuzzy-logic modules into their application programs. Fuzzy logic is useful as a means of formulating human expert knowledge and translating it into software to solve problems. Fuzzy logic provides flexibility for modeling relationships between input and output information and is distinguished by its robustness with respect to noise and variations in system parameters. In addition, linguistic fuzzy sets and conditional statements allow systems to make decisions based on imprecise and incomplete information. The user of the Fuzzy Logic Engine need not be an expert in fuzzy logic: it suffices to have a basic understanding of how linguistic rules can be applied to the user's problem. The Fuzzy Logic Engine is divided into two modules: (1) a graphical-interface software tool for creating linguistic fuzzy sets and conditional statements and (2) a fuzzy-logic software library for embedding fuzzy processing capability into current application programs. The graphical- interface tool was developed using the Tcl/Tk programming language. The fuzzy-logic software library was written in the C programming language.

  13. Teen Choices, an Online Stage-Based Program for Healthy, Nonviolent Relationships: Development and Feasibility Trial

    ERIC Educational Resources Information Center

    Levesque, Deborah A.; Johnson, Janet L.; Prochaska, Janice M.

    2017-01-01

    This article describes the theoretical foundation, development, and feasibility testing of an online, evidence-based intervention for teen dating violence prevention designed for dissemination. Teen Choices, a program for healthy, nonviolent relationships, relies on the transtheoretical model of behavior change and expert system technology to…

  14. Girls, Girls, Girls: Gender Composition and Female School Choice

    ERIC Educational Resources Information Center

    Schneeweis, Nicole; Zweimuller, Martina

    2012-01-01

    Gender segregation in employment may be explained by women's reluctance to choose technical occupations. However, the foundations for career choices are laid much earlier. Educational experts claim that female students are doing better in math and science and are more likely to choose these subjects if they are in single-sex classes. One possible…

  15. Online Patent Searching: Guided by an Expert System.

    ERIC Educational Resources Information Center

    Ardis, Susan B.

    1990-01-01

    Describes the development of an expert system for online patent searching that uses menu driven software to interpret the user's knowledge level and the general nature of the search problem. The discussion covers the rationale for developing such a system, current system functions, cost effectiveness, user reactions, and plans for future…

  16. New Web-Monitoring Service Worries Some Legal Experts

    ERIC Educational Resources Information Center

    Sander, Libby

    2008-01-01

    A software program that searches for offensive content on college athletes' social-networking sites has drawn skeptical reactions from legal experts, who say it could threaten students' constitutional rights. Billed as a "social-network monitoring service" and marketed exclusively to college athletics departments, YouDiligence was on display at…

  17. Automatic segmentation software in locally advanced rectal cancer: READY (REsearch program in Auto Delineation sYstem)-RECTAL 02: prospective study.

    PubMed

    Gambacorta, Maria A; Boldrini, Luca; Valentini, Chiara; Dinapoli, Nicola; Mattiucci, Gian C; Chiloiro, Giuditta; Pasini, Danilo; Manfrida, Stefania; Caria, Nicola; Minsky, Bruce D; Valentini, Vincenzo

    2016-07-05

    To validate autocontouring software (AS) in clinical practice, including a two-step delineation quality assurance (QA) procedure. The existing delineation agreement among experts for rectal cancer and the overlap and time criteria that have to be verified to allow the use of AS were defined. Median Dice Similarity Coefficient (MDSC), Mean Slicewise Hausdorff Distance (MSHD) and Total Time saving (TT) were analyzed. Two expert Radiation Oncologists reviewed CT scans of 44 patients and agreed on the reference CTV: the first 14 consecutive cases were used to populate the software Atlas and 30 were used as Test. Each expert performed a manual (group A) and an automatic delineation (group B) of 15 Test patients. The delineations were compared with the reference contours. The overlap between the manual and automatic delineations with MDSC and MSHD and the TT were analyzed. Three acceptance criteria were set: MDSC ≥ 0.75, MSHD ≤ 1 mm and TT saving ≥ 50%. At least 2 criteria had to be met, one of which had to be TT saving, to validate the system. The MDSC was 0.75, MSHD 2.00 mm and the TT saving 55.5% between group A and group B. MDSC among experts was 0.84. Autosegmentation systems in rectal cancer partially met the acceptability criteria with the present version.
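
    The first acceptance criterion above is the Dice Similarity Coefficient, DSC = 2|A ∩ B| / (|A| + |B|), computed between a reference contour and an automatic contour. The C sketch below is a minimal illustration of that calculation on two made-up binary masks; it is not the software evaluated in the study.

      /* Dice Similarity Coefficient between two binary masks:
       * DSC = 2 * |A intersect B| / (|A| + |B|).
       * The tiny 4x4 masks are invented purely to exercise the formula. */
      #include <stdio.h>

      static double dice(const int *a, const int *b, int n)
      {
          int overlap = 0, size_a = 0, size_b = 0;
          for (int i = 0; i < n; i++) {
              size_a += a[i];
              size_b += b[i];
              overlap += (a[i] && b[i]);
          }
          if (size_a + size_b == 0)
              return 1.0;    /* both masks empty: treat as perfect agreement */
          return 2.0 * overlap / (size_a + size_b);
      }

      int main(void)
      {
          /* Flattened 4x4 reference (manual) and test (automatic) contour masks. */
          int manual[16]    = { 0,1,1,0, 1,1,1,1, 0,1,1,0, 0,0,0,0 };
          int automatic[16] = { 0,1,1,0, 0,1,1,1, 0,1,1,1, 0,0,0,0 };

          printf("DSC = %.3f (study threshold: >= 0.75)\n",
                 dice(manual, automatic, 16));
          return 0;
      }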

  18. A usability evaluation of medical software at an expert conference setting.

    PubMed

    Bond, Raymond Robert; Finlay, Dewar D; Nugent, Chris D; Moore, George; Guldenring, Daniel

    2014-01-01

    A usability test was employed to evaluate two medical software applications at an expert conference setting. One software application is a medical diagnostic tool (electrocardiogram [ECG] viewer) and the other is a medical research tool (electrode misplacement simulator [EMS]). These novel applications have yet to be adopted by the healthcare domain; thus, (1) we wanted to determine the potential user acceptance of these applications and (2) we wanted to determine the feasibility of evaluating medical diagnostic and medical research software at a conference setting as opposed to the conventional laboratory setting. The medical diagnostic tool (ECG viewer) was evaluated using seven delegates and the medical research tool (EMS) was evaluated using 17 delegates who were recruited at the 2010 International Conference on Computing in Cardiology. Each delegate/participant was required to use the software and undertake a set of predefined tasks during the session breaks at the conference. User interactions with the software were recorded using screen-recording software. The 'think-aloud' protocol was also used to elicit verbal feedback from the participants whilst they attempted the pre-defined tasks. Before and after each session, participants completed a pre-test and a post-test questionnaire respectively. The average duration of a usability session at the conference was 34.69 min (SD=10.28). However, taking into account that 10 min was dedicated to the pre-test and post-test questionnaires, the average time dedicated to user interaction with the medical software was 24.69 min (SD=10.28). Given that we have shown that usability data can be collected at conferences, this paper details the advantages of conference-based usability studies over the laboratory-based approach. For example, given that delegates gather at one geographical location, a conference-based usability evaluation facilitates recruitment of a convenient sample of international subject experts. This would otherwise be very expensive to arrange. A conference-based approach also allows for data to be collected over a few days as opposed to months by avoiding administration duties normally involved in a laboratory-based approach, e.g. mailing invitation letters as part of a recruitment campaign. Following analysis of the user video recordings, 41 (previously unknown) use errors were identified in the advanced ECG viewer and 29 were identified in the EMS application. All use errors were given a consensus severity rating from two independent usability experts. On a rating scale of 4 (where 1=cosmetic and 4=critical), the average severity rating for the ECG viewer was 2.24 (SD=1.09) and the average severity rating for the EMS application was 2.34 (SD=0.97). We were also able to extract task completion rates and times from the video recordings to determine the effectiveness of the software applications. For example, six out of seven tasks were completed by all participants when using both applications. This statistic alone suggests both applications already have a high degree of usability. As well as extracting data from the video recordings, we were also able to extract data from the questionnaires. Using a semantic differential scale (where 1=poor and 5=excellent), delegates highly rated the 'responsiveness', 'usefulness', 'learnability' and the 'look and feel' of both applications. This study has shown the potential user acceptance and user-friendliness of the novel EMS and the ECG viewer applications within the healthcare domain. It has also shown that both medical diagnostic software and medical research software can be evaluated for their usability at an expert conference setting. The primary advantage of a conference-based usability evaluation over a laboratory-based evaluation is the high concentration of experts at one location, which is convenient, less time-consuming and less expensive. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Selection of infectious medical waste disposal firms by using the analytic hierarchy process and sensitivity analysis.

    PubMed

    Hsu, Pi-Fang; Wu, Cheng-Ru; Li, Ya-Ting

    2008-01-01

    While Taiwanese hospitals dispose of large amounts of medical waste to ensure sanitation and personal hygiene, doing so inefficiently creates potential environmental hazards and increases operational expenses. However, hospitals lack objective criteria to select the most appropriate waste disposal firm and evaluate its performance, instead relying on their own subjective judgment and previous experiences. Therefore, this work presents an analytic hierarchy process (AHP) method to objectively select medical waste disposal firms based on the results of interviews with experts in the field, thus reducing overhead costs and enhancing medical waste management. An appropriate weight criterion based on AHP is derived to assess the effectiveness of medical waste disposal firms. The proposed AHP-based method offers a more efficient and precise means of selecting medical waste firms than subjective assessment methods do, thus reducing the potential risks for hospitals. Analysis results indicate that the medical sector selects the most appropriate infectious medical waste disposal firm based on the following ranking of criteria: matching degree, contractor's qualifications, contractor's service capability, contractor's equipment and economic factors. By providing hospitals with an effective means of evaluating medical waste disposal firms, the proposed AHP method can reduce overhead costs and enable medical waste management to understand the market demand in the health sector. Moreover, sensitivity analysis, performed with the Expert Choice software, can examine the degree to which changes in the criterion weights influence the ranking of alternatives in the hierarchy.
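
    AHP, as applied in this record and elsewhere in these records, derives criterion weights from a pairwise-comparison matrix. A common approximation normalizes each column of the matrix and then averages across rows instead of solving exactly for the principal eigenvector. The C sketch below applies that approximation to a small, invented 3x3 matrix of three hypothetical criteria; it is not taken from the cited study or from the Expert Choice software.

      /* Approximate AHP priority weights from a pairwise-comparison matrix:
       * normalize each column, then average across each row.  The 3x3 matrix
       * of three hypothetical criteria is invented for illustration. */
      #include <stdio.h>

      #define N 3

      int main(void)
      {
          /* a[i][j]: how much more important criterion i is than criterion j. */
          double a[N][N] = {
              { 1.0,       3.0,       5.0 },
              { 1.0 / 3.0, 1.0,       2.0 },
              { 1.0 / 5.0, 1.0 / 2.0, 1.0 },
          };
          double w[N] = { 0.0 };

          for (int j = 0; j < N; j++) {
              double col_sum = 0.0;
              for (int i = 0; i < N; i++)
                  col_sum += a[i][j];
              for (int i = 0; i < N; i++)
                  w[i] += a[i][j] / col_sum / N;   /* accumulate row averages */
          }

          for (int i = 0; i < N; i++)
              printf("criterion %d weight: %.3f\n", i + 1, w[i]);
          return 0;
      }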

  20. Can High Quality Overcome Consumer Resistance to Restricted Provider Access? Evidence from a Health Plan Choice Experiment

    PubMed Central

    Harris, Katherine M

    2002-01-01

    Objective To investigate the impact of quality information on the willingness of consumers to enroll in health plans that restrict provider access. Data Sources and Setting A survey administered to respondents between the ages of 25 and 64 in the West Los Angeles area with private health insurance. Study Design An experimental approach is used to measure the effect of variation in provider network features and information about the quality of network physicians on hypothetical plan choices. Conditional logit models are used to analyze the experimental choice data. Next, choice model parameter estimates are used to simulate the impact of changes in plan features on the market shares of competing health plans and to calculate the quality level required to make consumers indifferent to changes in provider access. Principal Findings The presence of quality information reduced the importance of provider network features in plan choices as hypothesized. However, there were not statistically meaningful differences by type of quality measure (i.e., consumer assessed versus expert assessed). The results imply that large quality differences are required to make consumers indifferent to changes in provider access. The impact of quality on plan choices depended more on the particular measure and less on the type of measure. Quality ratings based on the proportion of survey respondents “extremely satisfied with results of care” had the greatest impact on plan choice while the proportion of network doctors “affiliated with university medical centers” had the least. Other consumer and expert assessed measures had more comparable effects. Conclusions Overall the results provide empirical evidence that consumers are willing to trade high quality for restrictions on provider access. This willingness to trade implies that relatively small plans that place restrictions on provider access can successfully compete against less restrictive plans when they can demonstrate high quality. However, the results of this study suggest that in many cases, the level of quality required for consumers to accept access restrictions may be so high as to be unattainable. The results provide empirical support for the current focus of decision support efforts on consumer assessed quality measures. At the same time, however, the results suggest that consumers would also value quality measures based on expert assessments. This finding is relevant given the lack of comparative quality information based on expert judgment and research suggesting that consumers have apprehensions about their ability to meaningfully interpret performance-based quality measures. PMID:12132595
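
    Under the conditional logit model used above, the probability that a respondent chooses plan i is exp(V_i) divided by the sum of exp(V_j) over all plans j, where V is the estimated systematic utility; simulated market shares follow directly from these probabilities. The C sketch below evaluates that formula for three hypothetical plans with invented utility values; none of the numbers come from the study.

      /* Conditional-logit choice shares: P(i) = exp(V_i) / sum over j of exp(V_j).
       * The plans and their systematic utilities are hypothetical. */
      #include <stdio.h>
      #include <math.h>

      int main(void)
      {
          const char *plan[] = { "restricted network, no quality information",
                                 "restricted network, high quality rating",
                                 "unrestricted network" };
          double v[] = { -0.8, 0.3, 0.5 };   /* invented systematic utilities */
          int n = 3;

          double denom = 0.0;
          for (int j = 0; j < n; j++)
              denom += exp(v[j]);

          for (int i = 0; i < n; i++)
              printf("%-45s predicted share: %4.1f%%\n",
                     plan[i], 100.0 * exp(v[i]) / denom);
          return 0;
      }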

  1. A knowledge based expert system for propellant system monitoring at the Kennedy Space Center

    NASA Technical Reports Server (NTRS)

    Jamieson, J. R.; Delaune, C.; Scarl, E.

    1985-01-01

    The Lox Expert System (LES) is the first attempt to build a realtime expert system capable of simulating the thought processes of NASA system engineers, with regard to fluids systems analysis and troubleshooting. An overview of the hardware and software describes the techniques used, and possible applications to other process control systems. LES is now in the advanced development stage, with a full implementation planned for late 1985.

  2. Digital casts in orthodontics: a comparison of 4 software systems.

    PubMed

    Westerlund, Anna; Tancredi, Weronika; Ransjö, Maria; Bresin, Andrea; Psonis, Spyros; Torgersson, Olof

    2015-04-01

    The introduction of digital cast models is inevitable in the otherwise digitized everyday life of orthodontics. The introduction of this new technology, however, is not straightforward, and selecting an appropriate system can be difficult. The aim of the study was to compare 4 orthodontic digital software systems regarding service, features, and usability. Information regarding service offered by the companies was obtained from questionnaires and Web sites. The features of each software system were collected by exploring the user manuals and the software programs. Replicas of pretreatment casts were sent to Cadent (OrthoCAD; Cadent, Carlstadt, NJ), OthoLab (O3DM; OrthoLab, Poznan, Poland), OrthoProof (DigiModel; OrthoProof, Nieuwegein, The Netherlands), and 3Shape (OrthoAnalyzer; 3Shape, Copenhagen, Denmark). The usability of the programs was assessed by experts in interaction design and usability using the "enhanced cognitive walkthrough" method: 4 tasks were defined and performed by a group of domain experts while they were observed by usability experts. The services provided by the companies were similar. Regarding the features, all 4 systems were able to perform basic measurements; however, not all provided the peer assessment rating index or the American Board of Orthodontics analysis, simulation of the treatment with braces, or digital articulation of the casts. All systems demonstrated weaknesses in usability. However, OrthoCAD and 03DM were considered to be easier to learn for first-time users. In general, the usability of these programs was poor and needs to be further developed. Hands-on training supervised by the program experts is recommended for beginners. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  3. Software Product Liability

    DTIC Science & Technology

    1993-08-01

    disclaimers should be a top priority. Contract law involves the Uniform Commercial Code (UCC). This is an agreement between all the states (except...to contract law than this, the basic issue with software is that the supplier is generally an expert on an arcane and sophisticated technology and

  4. An artificial intelligence tool for complex age-depth models

    NASA Astrophysics Data System (ADS)

    Bradley, E.; Anderson, K. A.; de Vesine, L. R.; Lai, V.; Thomas, M.; Nelson, T. H.; Weiss, I.; White, J. W. C.

    2017-12-01

    CSciBox is an integrated software system for age modeling of paleoenvironmental records. It incorporates an array of data-processing and visualization facilities, ranging from 14C calibrations to sophisticated interpolation tools. Using CSciBox's GUI, a scientist can build custom analysis pipelines by composing these built-in components or adding new ones. Alternatively, she can employ CSciBox's automated reasoning engine, Hobbes, which uses AI techniques to perform an in-depth, autonomous exploration of the space of possible age-depth models and presents the results—both the models and the reasoning that was used in constructing and evaluating them—to the user for her inspection. Hobbes accomplishes this using a rulebase that captures the knowledge of expert geoscientists, which was collected over the course of more than 100 hours of interviews. It works by using these rules to generate arguments for and against different age-depth model choices for a given core. Given a marine-sediment record containing uncalibrated 14C dates, for instance, Hobbes tries CALIB-style calibrations using a choice of IntCal curves, with reservoir age correction values chosen from the 14CHRONO database using the lat/long information provided with the core, and finally composes the resulting age points into a full age model using different interpolation methods. It evaluates each model—e.g., looking for outliers or reversals—and uses that information to guide the next steps of its exploration, and presents the results to the user in human-readable form. The most powerful of CSciBox's built-in interpolation methods is BACON, a Bayesian sedimentation-rate algorithm—a powerful but complex tool that can be difficult to use. Hobbes adjusts BACON's many parameters autonomously to match the age model to the expectations of expert geoscientists, as captured in its rulebase. It then checks the model against the data and iteratively re-calculates until it is a good fit to the data.

  5. Modeling the Theory of Planned Behavior from Survey Data for Action Choice in Social Simulations

    DTIC Science & Technology

    2010-03-01

    develop an argument for the benefits of informing action choice models such as TPB from representative survey data. 1. Introduction Icek Ajzen’s...matter expert (SME) input, such as the development of narrative ethnographies, and quantitative survey and polling data, such as the U.S. General...societies, a full description of the TPB implementation within an artificial society, and develop an argument for the benefits of informing action choice

  6. Differential neurobiological effects of expert advice on risky choice in adolescents and adults.

    PubMed

    Engelmann, Jan B; Moore, Sara; Monica Capra, C; Berns, Gregory S

    2012-06-01

    We investigated behavioral and neurobiological mechanisms by which risk-averse advice, provided by an expert, affected risky decisions across three developmental groups [early adolescents (12-14 years), late adolescents (15-17 years), adults (18+ years)]. Using cumulative prospect theory, we modeled choice behavior during a risky-choice task. Results indicate that advice had a significantly greater impact on risky choice in both adolescent groups than in adults. Using functional magnetic resonance imaging, we investigated the neural correlates of this behavioral effect. Developmental effects on correlations between brain activity and valuation parameters were obtained in regions that can be classified into (i) cognitive control regions, such as dorsolateral prefrontal cortex (DLPFC) and ventrolateral PFC; (ii) social cognition regions, such as posterior temporoparietal junction; and (iii) reward-related regions, such as ventromedial PFC (vmPFC) and ventral striatum. Within these regions, differential effects of advice on neural correlates of valuation were observed across development. Specifically, advice increased the correlation strength between brain activity and parameters reflective of safe choice options in adolescent DLPFC and decreased correlation strength between activity and parameters reflective of risky choice options in adult vmPFC. Taken together, results indicate that, across development, distinct brain systems involved in cognitive control and valuation mediate the risk-reducing effect of advice during decision making under risk via specific enhancements and reductions of the correlation strength between brain activity and valuation parameters.
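
    In the standard Tversky-Kahneman parameterization of cumulative prospect theory, as named above, outcomes are valued with v(x) = x^alpha for gains and v(x) = -lambda * (-x)^beta for losses, and probabilities are transformed with w(p) = p^gamma / (p^gamma + (1 - p)^gamma)^(1/gamma). The C sketch below evaluates a simple two-outcome gamble with textbook-style parameter values and a single weighting function applied to both outcomes; neither the gamble nor the parameters come from the cited study.

      /* Cumulative-prospect-theory valuation of a two-outcome gamble, using the
       * standard Tversky-Kahneman functional forms with a single weighting
       * function.  Parameters and the gamble are illustrative, not estimates. */
      #include <stdio.h>
      #include <math.h>

      static double value(double x, double alpha, double beta, double lambda)
      {
          return x >= 0.0 ? pow(x, alpha) : -lambda * pow(-x, beta);
      }

      static double weight(double p, double gamma)
      {
          double num = pow(p, gamma);
          return num / pow(num + pow(1.0 - p, gamma), 1.0 / gamma);
      }

      int main(void)
      {
          double alpha = 0.88, beta = 0.88, lambda = 2.25, gamma = 0.61;

          /* Gamble: win 10 with probability 0.4, otherwise lose 5. */
          double cpt = weight(0.4, gamma) * value(10.0, alpha, beta, lambda)
                     + weight(0.6, gamma) * value(-5.0, alpha, beta, lambda);

          printf("CPT value of the gamble: %.3f\n", cpt);
          return 0;
      }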

  7. Expert system for the design of heating, ventilating, and air-conditioning systems. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camejo, P.J.

    1989-12-01

    Expert systems are computer programs that seek to mimic human reason. An expert system shell, a software program commonly used for developing expert systems in a relatively short time, was used to develop a prototypical expert system for the design of heating, ventilating, and air-conditioning (HVAC) systems in buildings. Because HVAC design involves several related knowledge domains, developing an expert system for HVAC design requires the integration of several smaller expert systems known as knowledge bases. A menu program and several auxiliary programs for gathering data, completing calculations, printing project reports, and passing data between the knowledge bases are needed and have been developed to join the separate knowledge bases into one simple-to-use program unit.

  8. PSG-EXPERT. An expert system for the diagnosis of sleep disorders.

    PubMed

    Fred, A; Filipe, J; Partinen, M; Paiva, T

    2000-01-01

    This paper describes PSG-EXPERT, an expert system in the domain of sleep disorders exploring polysomnographic data. The developed software tool is addressed from two points of view: (1)--as an integrated environment for the development of diagnosis-oriented expert systems; (2)--as an auxiliary diagnosis tool in the particular domain of sleep disorders. Developed over a Windows platform, this software tool extends one of the most popular shells--CLIPS (C Language Integrated Production System) with the following features: backward chaining engine; graph-based explanation facilities; knowledge editor including a fuzzy fact editor and a rules editor, with facts-rules integrity checking; belief revision mechanism; built-in case generator and validation module. It therefore provides graphical support for knowledge acquisition, edition, explanation and validation. From an application domain point of view, PSG-Expert is an auxiliary diagnosis system for sleep disorders based on polysomnographic data, that aims at assisting the medical expert in his diagnosis task by providing automatic analysis of polysomnographic data, summarising the results of this analysis in terms of a report of major findings and possible diagnosis consistent with the polysomnographic data. Sleep disorders classification follows the International Classification of Sleep Disorders. Major features of the system include: browsing on patients data records; structured navigation on Sleep Disorders descriptions according to ASDA definitions; internet links to related pages; diagnosis consistent with polysomnographic data; graphical user-interface including graph-based explanatory facilities; uncertainty modelling and belief revision; production of reports; connection to remote databases.

  9. An Embedded Rule-Based Diagnostic Expert System in Ada

    NASA Technical Reports Server (NTRS)

    Jones, Robert E.; Liberman, Eugene M.

    1992-01-01

    Ada is becoming an increasingly popular programming language for large Government-funded software projects. Ada, with its portability, transportability, and maintainability, lends itself well to today's complex programming environment. In addition, expert systems have also assumed a growing role in providing human-like reasoning capability for computer systems. The integration of expert system technology with the Ada programming language is discussed, especially a rule-based expert system using the ART-Ada (Automated Reasoning Tool for Ada) system shell. NASA Lewis was chosen as a beta test site for ART-Ada. The test was conducted by implementing the existing Autonomous Power EXpert System (APEX), a Lisp-based power expert system, in ART-Ada. Three components -- the rule-based expert system, a graphics user interface, and communications software -- make up SMART-Ada (Systems fault Management with ART-Ada). The rules were written in the ART-Ada development environment and converted to Ada source code. The graphics interface was developed with the Transportable Application Environment (TAE) Plus, which generates Ada source code to control graphics images. SMART-Ada communicates with a remote host to obtain either simulated or real data. The Ada source code generated with ART-Ada, TAE Plus, and the communications code was incorporated into an Ada expert system that reads data from a power distribution test bed, applies the rules to determine whether a fault exists, and graphically displays it on the screen. The main objective, to conduct a beta test on the ART-Ada rule-based expert system shell, was achieved. The system is operational. New Ada tools will assist in future successful projects. ART-Ada is one such tool and is a viable alternative to straight Ada code when an application requires a rule-based or knowledge-based approach.

  10. Machine learning research 1989-90

    NASA Technical Reports Server (NTRS)

    Porter, Bruce W.; Souther, Arthur

    1990-01-01

    Multifunctional knowledge bases offer a significant advance in artificial intelligence because they can support numerous expert tasks within a domain. As a result they amortize the costs of building a knowledge base over multiple expert systems and they reduce the brittleness of each system. Due to the inevitable size and complexity of multifunctional knowledge bases, their construction and maintenance require knowledge engineering and acquisition tools that can automatically identify interactions between new and existing knowledge. Furthermore, their use requires software for accessing those portions of the knowledge base that coherently answer questions. Considerable progress was made in developing software for building and accessing multifunctional knowledge bases. A language was developed for representing knowledge, along with software tools for editing and displaying knowledge, a machine learning program for integrating new information into existing knowledge, and a question answering system for accessing the knowledge base.

  11. A Software Engine to Justify the Conclusions of an Expert System for Detecting Renal Obstruction on 99mTc-MAG3 Scans

    PubMed Central

    Garcia, Ernest V.; Taylor, Andrew; Manatunga, Daya; Folks, Russell

    2013-01-01

    The purposes of this study were to describe and evaluate a software engine to justify the conclusions reached by a renal expert system (RENEX) for assessing patients with suspected renal obstruction and to obtain from this evaluation new knowledge that can be incorporated into RENEX to attempt to improve diagnostic performance. Methods RENEX consists of 60 heuristic rules extracted from the rules used by a domain expert to generate the knowledge base and a forward-chaining inference engine to determine obstruction. The justification engine keeps track of the sequence of the rules that are instantiated to reach a conclusion. The interpreter can then request justification by clicking on the specific conclusion. The justification process then reports the English translation of all concatenated rules instantiated to reach that conclusion. The justification engine was evaluated with a prospective group of 60 patients (117 kidneys). After reviewing the standard renal mercaptoacetyltriglycine (MAG3) scans obtained before and after the administration of furosemide, a masked expert determined whether each kidney was obstructed, whether the results were equivocal, or whether the kidney was not obstructed and identified and ranked the main variables associated with each interpretation. Two parameters were then tabulated: the frequency with which the main variables associated with obstruction by the expert were also justified by RENEX and the frequency with which the justification rules provided by RENEX were deemed to be correct by the expert. Only when RENEX and the domain expert agreed on the diagnosis (87 kidneys) were the results used to test the justification. Results RENEX agreed with 91% (184/203) of the rules supplied by the expert for justifying the diagnosis. RENEX provided 103 additional rules justifying the diagnosis; the expert agreed that 102 (99%) were correct, although the rules were considered to be of secondary importance. Conclusion We have described and evaluated a software engine to justify the conclusions of RENEX for detecting renal obstruction with MAG3 renal scans obtained before and after the administration of furosemide. This tool is expected to increase physician confidence in the interpretations provided by RENEX and to assist physicians and trainees in gaining a higher level of expertise. PMID:17332625

  12. A software engine to justify the conclusions of an expert system for detecting renal obstruction on 99mTc-MAG3 scans.

    PubMed

    Garcia, Ernest V; Taylor, Andrew; Manatunga, Daya; Folks, Russell

    2007-03-01

    The purposes of this study were to describe and evaluate a software engine to justify the conclusions reached by a renal expert system (RENEX) for assessing patients with suspected renal obstruction and to obtain from this evaluation new knowledge that can be incorporated into RENEX to attempt to improve diagnostic performance. RENEX consists of 60 heuristic rules extracted from the rules used by a domain expert to generate the knowledge base and a forward-chaining inference engine to determine obstruction. The justification engine keeps track of the sequence of the rules that are instantiated to reach a conclusion. The interpreter can then request justification by clicking on the specific conclusion. The justification process then reports the English translation of all concatenated rules instantiated to reach that conclusion. The justification engine was evaluated with a prospective group of 60 patients (117 kidneys). After reviewing the standard renal mercaptoacetyltriglycine (MAG3) scans obtained before and after the administration of furosemide, a masked expert determined whether each kidney was obstructed, whether the results were equivocal, or whether the kidney was not obstructed and identified and ranked the main variables associated with each interpretation. Two parameters were then tabulated: the frequency with which the main variables associated with obstruction by the expert were also justified by RENEX and the frequency with which the justification rules provided by RENEX were deemed to be correct by the expert. Only when RENEX and the domain expert agreed on the diagnosis (87 kidneys) were the results used to test the justification. RENEX agreed with 91% (184/203) of the rules supplied by the expert for justifying the diagnosis. RENEX provided 103 additional rules justifying the diagnosis; the expert agreed that 102 (99%) were correct, although the rules were considered to be of secondary importance. We have described and evaluated a software engine to justify the conclusions of RENEX for detecting renal obstruction with MAG3 renal scans obtained before and after the administration of furosemide. This tool is expected to increase physician confidence in the interpretations provided by RENEX and to assist physicians and trainees in gaining a higher level of expertise.

  13. Multiple Choice Questions Can Be Designed or Revised to Challenge Learners' Critical Thinking

    ERIC Educational Resources Information Center

    Tractenberg, Rochelle E.; Gushta, Matthew M.; Mulroney, Susan E.; Weissinger, Peggy A.

    2013-01-01

    Multiple choice (MC) questions from a graduate physiology course were evaluated by cognitive-psychology (but not physiology) experts, and analyzed statistically, in order to test the independence of content expertise and cognitive complexity ratings of MC items. Integration of higher order thinking into MC exams is important, but widely known to…

  14. Eliciting preferences for medical devices in South Korea: A discrete choice experiment.

    PubMed

    Lee, Hye-Jae; Bae, Eun-Young

    2017-03-01

    This study aims to identify the attributes that contribute to the value of medical devices and quantify the relative importance of them using a discrete choice experiment. Based on a literature review and expert consultation, seven attributes and their levels were identified-severity of disease (2), availability of substitutes (2), improvement in procedure (3), improvement in clinical outcomes (2), increase in survival (2), improvement in quality of life (3), and cost (4). Among 576 hypothetical profiles, optimal choice sets with 20 choices were developed and experts experienced in health technology assessment and reimbursement decision making in South Korea were surveyed. A total of 102 respondents participated in the survey. The results of the random-effect probit model showed that among the seven attributes, six, except for improvement in procedure, had a significant impact on respondents' choices on medical devices. Respondents were willing to pay the highest amount for devices that provided substantial improvements in quality of life, followed by increased survival, improved clinical outcome, treatment without substitutes, and technology for treating severe diseases. The findings of this experiment will inform decision-makers of the relative importance of the criteria and help them in reimbursement decision making of medical devices. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. The integration of quantitative information with an intelligent decision support system for residential energy retrofits

    NASA Astrophysics Data System (ADS)

    Mo, Yunjeong

    The purpose of this research is to support the development of an intelligent Decision Support System (DSS) by integrating quantitative information with expert knowledge in order to facilitate effective retrofit decision-making. To achieve this goal, the Energy Retrofit Decision Process Framework is analyzed. Expert system shell software, a retrofit measure cost database, and energy simulation software are needed for developing the DSS; Exsys Corvid, the NREM database and BEopt were chosen for implementing an integration model. This integration model demonstrates the holistic function of a residential energy retrofit system for existing homes, by providing a prioritized list of retrofit measures with cost information, energy simulation and expert advice. The users, such as homeowners and energy auditors, can acquire all of the necessary retrofit information from this unified system without having to explore several separate systems. The integration model plays the role of a prototype for the finalized intelligent decision support system. It implements all of the necessary functions for the finalized DSS, including integration of the database, energy simulation and expert knowledge.

  16. AMPHION: Specification-based programming for scientific subroutine libraries

    NASA Technical Reports Server (NTRS)

    Lowry, Michael; Philpot, Andrew; Pressburger, Thomas; Underwood, Ian; Waldinger, Richard; Stickel, Mark

    1994-01-01

    AMPHION is a knowledge-based software engineering (KBSE) system that guides a user in developing a diagram representing a formal problem specification. It then automatically implements a solution to this specification as a program consisting of calls to subroutines from a library. The diagram provides an intuitive, domain-oriented notation for creating a specification that also facilitates reuse and modification. AMPHION's architecture is domain-independent. AMPHION is specialized to an application domain by developing a declarative domain theory. Creating a domain theory is an iterative process that currently requires the joint expertise of domain experts and experts in automated formal methods for software development.

  17. The Strategic Organization of Skill

    NASA Technical Reports Server (NTRS)

    Roberts, Ralph

    1996-01-01

    Eye-movement software was developed in addition to several studies that focused on expert-novice differences in the acquisition and organization of skill. These studies focused on how increasingly complex strategies utilize and incorporate visual look-ahead to calibrate action. Software for collecting, calibrating, and scoring eye-movements was refined and updated. Some new algorithms were developed for analyzing corneal-reflection eye movement data that detect the location of saccadic eye movements in space and time. Two full-scale studies were carried out which examined how experts use foveal and peripheral vision to acquire information about upcoming environmental circumstances in order to plan future action(s) accordingly.

  18. A software-based tool for video motion tracking in the surgical skills assessment landscape.

    PubMed

    Ganni, Sandeep; Botden, Sanne M B I; Chmarra, Magdalena; Goossens, Richard H M; Jakimowicz, Jack J

    2018-01-16

    The use of motion tracking has been shown to provide an objective assessment in surgical skills training. Current systems, however, require the use of additional equipment or specialised laparoscopic instruments and cameras to extract the data. The aim of this study was to determine the possibility of using a software-based solution to extract the data. Six expert and 23 novice participants performed a basic laparoscopic cholecystectomy procedure in the operating room. The recorded videos were analysed using Kinovea 0.8.15 and the following parameters were calculated: path length, average instrument movement, and number of sudden or extreme movements. The analysed data showed that experts had significantly shorter path length (median 127 cm vs. 187 cm, p = 0.01), smaller average movements (median 0.40 cm vs. 0.32 cm, p = 0.002) and fewer sudden movements (median 14.00 vs. 21.61, p = 0.001) than their novice counterparts. The use of software-based video motion tracking of laparoscopic cholecystectomy is a simple and viable method enabling objective assessment of surgical performance. It provides clear discrimination between expert and novice performance.
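
    The reported metrics -- total path length, average per-frame instrument movement, and a count of sudden movements -- can all be derived from a sequence of tracked instrument-tip positions. The C sketch below computes them for a short, invented 2D trajectory; the coordinates and the threshold that defines a "sudden" movement are assumptions for illustration, not values from the study.

      /* Path length, average per-frame movement, and sudden-movement count from
       * a tracked 2D instrument trajectory.  The coordinates (in cm) and the
       * sudden-movement threshold are invented for illustration. */
      #include <stdio.h>
      #include <math.h>

      int main(void)
      {
          double x[] = { 0.0, 0.4, 0.9, 2.6, 2.8, 3.0 };
          double y[] = { 0.0, 0.1, 0.3, 0.4, 0.5, 0.5 };
          int n = 6;
          double sudden_threshold = 1.0;   /* assumed: a jump of more than 1 cm per frame */

          double path = 0.0;
          int sudden = 0;
          for (int i = 1; i < n; i++) {
              double step = hypot(x[i] - x[i - 1], y[i] - y[i - 1]);
              path += step;
              if (step > sudden_threshold)
                  sudden++;
          }

          printf("path length:            %.2f cm\n", path);
          printf("average movement/frame: %.2f cm\n", path / (n - 1));
          printf("sudden movements:       %d\n", sudden);
          return 0;
      }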

  19. Current trends for customized biomedical software tools.

    PubMed

    Khan, Haseeb Ahmad

    2017-01-01

    In the past, biomedical scientists were solely dependent on expensive commercial software packages for various applications. However, the advent of user-friendly programming languages and open source platforms has revolutionized the development of simple and efficient customized software tools for solving specific biomedical problems. Many of these tools are designed and developed by biomedical scientists independently or with the support of computer experts and often made freely available for the benefit of scientific community. The current trends for customized biomedical software tools are highlighted in this short review.

  20. Quantitative Measures for Software Independent Verification and Validation

    NASA Technical Reports Server (NTRS)

    Lee, Alice

    1996-01-01

    As software is maintained or reused, it undergoes an evolution which tends to increase the overall complexity of the code. To understand the effects of this, we brought in statistics experts and leading researchers in software complexity, reliability, and their interrelationships. These experts' project has resulted in our ability to statistically correlate specific code complexity attributes, in orthogonal domains, to errors found over time in the HAL/S flight software which flies in the Space Shuttle. Although only a prototype-tools experiment, the result of this research appears to be extendable to all other NASA software, given appropriate data similar to that logged for the Shuttle onboard software. Our research has demonstrated that a more complete domain coverage can be mathematically demonstrated with the approach we have applied, thereby ensuring full insight into the cause-and-effect relationship between the complexity of a software system and the fault density of that system. By applying the operational profile we can characterize the dynamic effects of software path complexity under this same approach. We now have the ability to measure specific attributes which have been statistically demonstrated to correlate to increased error probability, and to know which actions to take, for each complexity domain. Shuttle software verifiers can now monitor the changes in the software complexity, assess the added or decreased risk of software faults in modified code, and determine necessary corrections. The reports, tool documentation, user's guides, and new approach that have resulted from this research effort represent advances in the state of the art of software quality and reliability assurance. Details describing how to apply this technique to other NASA code are contained in this document.

  1. Maximum entropy approach to fuzzy control

    NASA Technical Reports Server (NTRS)

    Ramer, Arthur; Kreinovich, Vladik YA.

    1992-01-01

    For the same expert knowledge, if one uses different &- and V-operations in a fuzzy control methodology, one ends up with different control strategies. Each choice of these operations restricts the set of possible control strategies. Since a wrong choice can lead to low-quality control, it is reasonable to try to lose as few possibilities as possible. This idea is formalized, and it is shown that it leads to the choice of min(a + b, 1) for V and min(a, b) for &. This choice was tried on the NASA Shuttle simulator; it leads to a maximally stable control.
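
    The recommended operations above are min(a + b, 1) for the fuzzy OR (V) and min(a, b) for the fuzzy AND (&). The C sketch below simply applies both to a pair of membership values, alongside the more common max-based OR for comparison; the input values are arbitrary.

      /* The two operations singled out above: OR(a,b) = min(a+b, 1) and
       * AND(a,b) = min(a,b), shown next to the common max-based OR.
       * The membership values are arbitrary. */
      #include <stdio.h>

      static double fuzzy_and(double a, double b) { return a < b ? a : b; }
      static double fuzzy_or(double a, double b)  { double s = a + b; return s < 1.0 ? s : 1.0; }
      static double max_or(double a, double b)    { return a > b ? a : b; }

      int main(void)
      {
          double a = 0.7, b = 0.4;   /* degrees to which two conditions hold */

          printf("AND = min(a, b)     = %.2f\n", fuzzy_and(a, b));
          printf("OR  = min(a + b, 1) = %.2f\n", fuzzy_or(a, b));
          printf("max-based OR        = %.2f\n", max_or(a, b));
          return 0;
      }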

  2. Demonstration of Multi- and Single-Reader Sample Size Program for Diagnostic Studies software.

    PubMed

    Hillis, Stephen L; Schartz, Kevin M

    2015-02-01

    The recently released software Multi- and Single-Reader Sample Size Program for Diagnostic Studies, written by Kevin Schartz and Stephen Hillis, performs sample size computations for diagnostic reader-performance studies. The program computes the sample size needed to detect a specified difference in a reader performance measure between two modalities, when using the analysis methods initially proposed by Dorfman, Berbaum, and Metz (DBM) and Obuchowski and Rockette (OR), and later unified and improved by Hillis and colleagues. A commonly used reader performance measure is the area under the receiver-operating-characteristic curve. The program can be used with common reader-performance measures, which can be estimated parametrically or nonparametrically. The program has an easy-to-use, step-by-step, intuitive interface that walks the user through the entry of the needed information. Features of the software include the following: (1) choice of several study designs; (2) choice of inputs obtained from either OR or DBM analyses; (3) choice of three different inference situations: both readers and cases random, readers fixed and cases random, and readers random and cases fixed; (4) choice of two types of hypotheses: equivalence or noninferiority; (5) choice of two output formats: power for specified case and reader sample sizes, or a listing of case-reader combinations that provide a specified power; (6) choice of single or multi-reader analyses; and (7) functionality in Windows, Mac OS, and Linux.

  3. Early-Stage Software Design for Usability

    ERIC Educational Resources Information Center

    Golden, Elspeth

    2010-01-01

    In spite of the goodwill and best efforts of software engineers and usability professionals, systems continue to be built and released with glaring usability flaws that are costly and difficult to fix after the system has been built. Although user interface (UI) designers, be they usability or design experts, communicate usability requirements to…

  4. Advanced technologies for Mission Control Centers

    NASA Technical Reports Server (NTRS)

    Dalton, John T.; Hughes, Peter M.

    1991-01-01

    Advanced technologies for Mission Control Centers are presented in the form of viewgraphs. The following subject areas are covered: technology needs; current technology efforts at GSFC (human-machine interface development, object oriented software development, expert systems, knowledge-based software engineering environments, and high performance VLSI telemetry systems); and test beds.

  5. NASA's Software Bank (CLIPS)

    NASA Technical Reports Server (NTRS)

    1991-01-01

    C Language Integrated Production System (CLIPS), a software shell for developing expert systems created at NASA Johnson Space Center, is used by researchers at Ohio State University to determine solid waste disposal sites and to assist in historic preservation. The program has various other applications and has even been included in a widely used textbook.

  6. Abstract for 1999 Rational Software User Conference

    NASA Technical Reports Server (NTRS)

    Dunphy, Julia; Rouquette, Nicolas; Feather, Martin; Tung, Yu-Wen

    1999-01-01

    We develop spacecraft fault-protection software at NASA/JPL. Challenges exemplified by our task: 1) high-quality systems - need for extensive validation & verification; 2) multi-disciplinary context - involves experts from diverse areas; 3) embedded systems - must adapt to external practices, notations, etc.; and 4) development pressures - NASA's mandate of "better, faster, cheaper".

  7. Collaborative Software and Focused Distraction in the Classroom

    ERIC Educational Resources Information Center

    Rhine, Steve; Bailey, Mark

    2011-01-01

    In search of strategies for increasing their pre-service teachers' thoughtful engagement with content and in an effort to model connection between choice of technology and pedagogical goals, the authors utilized collaborative software during class time. Collaborative software allows all students to write simultaneously on a single collective…

  8. A Knowledge Engineering Approach to Analysis and Evaluation of Construction Schedules

    DTIC Science & Technology

    1990-02-01

    software engineering discipline focusing on constructing KBSs. It is an incremental and cyclical process that requires the interaction of a domain expert(s...the U.S. Army Corps of Engineers; and (3) the project management software developer, represented by Pinnell Engineering, Inc. Since the primary...the programming skills necessary to convert the raw knowledge into a form a computer can understand. knowledge engineering: The software engineering

  9. Efficient processing of two-dimensional arrays with C or C++

    USGS Publications Warehouse

    Donato, David I.

    2017-07-20

    Because fast and efficient serial processing of raster-graphic images and other two-dimensional arrays is a requirement in land-change modeling and other applications, the effects of 10 factors on the runtimes for processing two-dimensional arrays with C and C++ are evaluated in a comparative factorial study. This study’s factors include the choice among three C or C++ source-code techniques for array processing; the choice of Microsoft Windows 7 or a Linux operating system; the choice of 4-byte or 8-byte array elements and indexes; and the choice of 32-bit or 64-bit memory addressing. This study demonstrates how programmer choices can reduce runtimes by 75 percent or more, even after compiler optimizations. Ten points of practical advice for faster processing of two-dimensional arrays are offered to C and C++ programmers. Further study and the development of a C and C++ software test suite are recommended. Key words: array processing, C, C++, compiler, computational speed, land-change modeling, raster-graphic image, two-dimensional array, software efficiency
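
    The memory-locality point underlying such results can be illustrated outside C as well. The sketch below, in Python with NumPy, times row-wise versus column-wise traversal of a C-ordered (row-major) array; it is an illustrative analogue of one of the study's factors, not a reproduction of its C/C++ benchmark.

      import time
      import numpy as np

      # The array is C-ordered (row-major), so iterating row by row touches
      # contiguous memory, while column-by-column traversal is strided.
      a = np.zeros((4000, 4000), dtype=np.float64)

      def timed_sum(views):
          start = time.perf_counter()
          total = sum(v.sum() for v in views)
          return total, time.perf_counter() - start

      _, t_rows = timed_sum(list(a))     # rows: contiguous slices
      _, t_cols = timed_sum(list(a.T))   # columns: strided slices
      print(f"row-wise: {t_rows:.3f} s   column-wise: {t_cols:.3f} s")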

  10. Nontargeted multicomponent analytical screening of plastic food contact materials using fast interpretation of deliverables via expert structure-activity relationship software.

    PubMed

    Rothenbacher, Thorsten; Schwack, Wolfgang

    2009-01-01

    Plastic packaging materials may release compounds into packed foodstuffs. To identify potential migrants of toxicological concern, resins and multilayer foils (mainly polyethylene) intended for the production of food contact materials were extracted and analyzed by GC/mass spectrometry. To identify even compounds at low concentrations, AMDIS software was used, and data evaluation was safeguarded by the Kovats retention index (RI) system. In this way, 46 compounds were identified as possible migrants. The expert structure-activity relationship software DEREK for Windows was utilized to evaluate all identified substances in terms of carcinogenicity, genotoxicity, thyroid toxicity, and miscellaneous endpoints for humans. Additionally, a literature search for these compounds was performed with SciFinder, but relevant data were missing for 28 substances. Seven compounds with adverse toxicological effects were identified. In addition, the RIs of 24 commercial additive standards, measured with a GC capillary column of intermediate polarity, are given.

  11. Automated Predictive Diagnosis (APD): A 3-tiered shell for building expert systems for automated predictions and decision making

    NASA Technical Reports Server (NTRS)

    Steib, Michael

    1991-01-01

    The APD software features include: on-line help; a three-level architecture (logic environments, setup/application environment, data environment); an explanation capability; and file handling. The kinds of experimentation and record keeping that lead to effective expert systems are facilitated by: (1) a library of inferencing modules (in the logic environment); (2) an explanation capability which reveals logic strategies to users; (3) automated file naming conventions; (4) an information retrieval system; and (5) on-line help. These aid effective use of knowledge, debugging, and experimentation. Since the APD software anticipates the logical rules becoming complicated, it is embedded in a production system language (CLIPS) to ensure the full power of the production system paradigm of CLIPS and the availability of the procedural language C. The development of the APD software is discussed, along with three example applications: a toy application, an experimental application, and an operational prototype for submarine maintenance predictions.

  12. Expert system development for commonality analysis in space programs

    NASA Technical Reports Server (NTRS)

    Yeager, Dorian P.

    1987-01-01

    This report is a combination of foundational mathematics and software design. A mathematical model of the Commonality Analysis problem was developed and some important properties discovered. The complexity of the problem is described herein and techniques, both deterministic and heuristic, for reducing that complexity are presented. Weaknesses are pointed out in the existing software (System Commonality Analysis Tool) and several improvements are recommended. It is recommended that: (1) an expert system for guiding the design of new databases be developed; (2) a distributed knowledge base be created and maintained for the purpose of encoding the commonality relationships between design items in commonality databases; (3) a software module be produced which automatically generates commonality alternative sets from commonality databases using the knowledge associated with those databases; and (4) a more complete commonality analysis module be written which is capable of generating any type of feasible solution.

  13. Research Needs for Human Factors

    DTIC Science & Technology

    1983-01-01

    their relative merits. Until such comparisons are made, practitioners will continue to advocate their own products without a basis for choice among ...judgments among a group of experts; (2) formulating questions for experts in a way that is compatible with their mental structures or "cognitive...system. Typically the operators work in teams and control computers, which in turn mediate information flow among various automatic components. Other

  14. [Translation and cultural adaptation of the questionnaire on the reason for food choices (Food Choice Questionnaire - FCQ) into Portuguese].

    PubMed

    Heitor, Sara Franco Diniz; Estima, Camilla Chermont Prochnik; das Neves, Fabricia Junqueira; de Aguiar, Aline Silva; Castro, Sybelle de Souza; Ferreira, Julia Elba de Souza

    2015-08-01

    The Food Choice Questionnaire (FCQ) assesses the importance that subjects attribute to nine factors related to food choices: health, mood, convenience, sensory appeal, natural content, price, weight control, familiarity and ethical concern. This study sought to assess the applicability of the FCQ in Brazil; it describes the translation and cultural adaptation from English into Portuguese of the FCQ via the following steps: independent translations, consensus, back-translation, evaluation by a committee of experts, semantic validation and pre-test. The pre-test was run with a randomly sampled group of 86 male and female college students from different courses with a median age of 19. Slight differences between the versions were observed and adjustments were made. After minor changes in the translation process, the committee of experts considered that the Brazilian Portuguese version was semantically and conceptually equivalent to the English original. Semantic validation showed that the questionnaire is easily understood. The instrument presented a high degree of internal consistency. The study is the first stage in the process of validating an instrument, which consists of face and content validity. Further stages, already underway, are needed before other researchers can use it.

  15. Computer-aided software development process design

    NASA Technical Reports Server (NTRS)

    Lin, Chi Y.; Levary, Reuven R.

    1989-01-01

    The authors describe an intelligent tool designed to aid managers of software development projects in planning, managing, and controlling the development process of medium- to large-scale software projects. Its purpose is to reduce uncertainties in the budget, personnel, and schedule planning of software development projects. It is based on a dynamic model of the software development and maintenance life-cycle process. This dynamic process is composed of a number of time-varying, interacting developmental phases, each characterized by its intended functions and requirements. System dynamics is used as a modeling methodology. The resulting Software LIfe-Cycle Simulator (SLICS) and the hybrid expert simulation system of which it is a subsystem are described.

  16. Distributed expert systems for ground and space applications

    NASA Technical Reports Server (NTRS)

    Buckley, Brian; Wheatcraft, Louis

    1992-01-01

    Presented here is the Spacecraft Command Language (SCL) concept of the unification of ground and space operations using a distributed approach. SCL is a hybrid software environment borrowing from expert system technology, fifth generation language development, and multitasking operating system environments. Examples of potential uses for the system and current distributed applications of SCL are given.

  17. Development of a comprehensive software engineering environment

    NASA Technical Reports Server (NTRS)

    Hartrum, Thomas C.; Lamont, Gary B.

    1987-01-01

    The generation of a set of tools for the software lifecycle is a recurring theme in the software engineering literature. The development of such tools and their integration into a software development environment is a difficult task because of the magnitude (number of variables) and the complexity (combinatorics) of the software lifecycle process. Development of a global approach was initiated in 1982 as the Software Development Workbench (SDW). Continuing efforts focus on tool development, tool integration, human interfacing, data dictionaries, and testing algorithms. Current efforts are emphasizing natural language interfaces, expert system software development associates, and distributed environments with Ada as the target language. The current implementation of the SDW is on a VAX-11/780. Other software development tools are being networked through engineering workstations.

  18. Design and Pedagogical Issues in the Development of the InSight Series of Instructional Software.

    ERIC Educational Resources Information Center

    Baro, John A.; Lehmkulke, Stephen

    1993-01-01

    Design issues in development of InSight software for optometric education include choice of hardware, identification of audience, definition of scope and limitations of content, selection of user interface and programing environment, obtaining user feedback, and software distribution. Pedagogical issues include practicality and improvement on…

  19. Examining Operational Software Influence on User Satisfaction within Small Manufacturing Businesses

    ERIC Educational Resources Information Center

    Frey, W. Bruce

    2010-01-01

    Managing a business requires vigilance and diligence. Small business owners are often ignored by IT vendors and inundated by the choices of software applications and therefore need help finding a viable operating software solution for small business decisions and development. The extent, if any, of a significant influence of operational software…

  20. Choices and Consequences.

    ERIC Educational Resources Information Center

    Thorp, Carmany

    1995-01-01

    Describes student use of HyperStudio computer software to create history adventure games. History came alive while students learned efficient writing skills; learned to understand and manipulate cause, effect, choice, and consequence; and learned to incorporate succinct locational, climatic, and historical detail. (ET)

  1. CytoSpectre: a tool for spectral analysis of oriented structures on cellular and subcellular levels.

    PubMed

    Kartasalo, Kimmo; Pölönen, Risto-Pekka; Ojala, Marisa; Rasku, Jyrki; Lekkala, Jukka; Aalto-Setälä, Katriina; Kallio, Pasi

    2015-10-26

    Orientation and the degree of isotropy are important in many biological systems, such as the sarcomeres of cardiomyocytes and other fibrillar structures of the cytoskeleton. Image-based analysis of such structures is often limited to qualitative evaluation by human experts, hampering the throughput, repeatability, and reliability of the analyses. Software tools are not readily available for this purpose, and the existing methods typically rely at least partly on manual operation. We developed CytoSpectre, an automated tool based on spectral analysis, allowing the quantification of orientation and also size distributions of structures in microscopy images. CytoSpectre utilizes the Fourier transform to estimate the power spectrum of an image and, based on the spectrum, computes parameter values describing, among other things, the mean orientation, isotropy, and size of target structures. The analysis can be further tuned to focus on targets of a particular size at cellular or subcellular scales. The software can be operated via a graphical user interface without any programming expertise. We analyzed the performance of CytoSpectre by extensive simulations using artificial images, by benchmarking against FibrilTool, and by comparisons with manual measurements performed for real images by a panel of human experts. The software was found to be tolerant against noise and blurring and superior to FibrilTool when analyzing realistic targets with degraded image quality. The analysis of real images indicated generally good agreement between computational and manual results while also revealing notable expert-to-expert variation. Moreover, the experiment showed that CytoSpectre can handle images of different cell types obtained using different microscopy techniques. Finally, we studied the effect of mechanical stretching on cardiomyocytes to demonstrate the software in an actual experiment and observed changes in cellular orientation in response to stretching. CytoSpectre, a versatile, easy-to-use software tool for spectral analysis of microscopy images, was developed. The tool is compatible with most 2D images and can be used to analyze targets at different scales. We expect the tool to be useful in diverse applications dealing with structures whose orientation and size distributions are of interest. While designed for the biological field, the software could also be useful in non-biological applications.
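
    A minimal sketch of the underlying spectral idea follows: the 2D Fourier power spectrum of an oriented pattern concentrates energy along the direction perpendicular to the structures, so the dominant structural orientation can be read from the angular distribution of spectral power. This only illustrates the principle and is not CytoSpectre's actual, more elaborate algorithm.

      import numpy as np

      def dominant_orientation(image, n_bins=180):
          """Estimate the dominant structure orientation (degrees) from the
          angular distribution of the 2D Fourier power spectrum."""
          img = image - image.mean()
          power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
          h, w = power.shape
          y, x = np.indices(power.shape)
          fy, fx = y - h // 2, x - w // 2
          angles = np.degrees(np.arctan2(fy, fx)) % 180.0
          radius = np.hypot(fx, fy)
          mask = radius > 2                      # ignore the DC neighbourhood
          hist, edges = np.histogram(angles[mask], bins=n_bins,
                                     range=(0, 180), weights=power[mask])
          peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
          return (peak + 90.0) % 180.0           # structures run perpendicular to the spectral peak

      # Synthetic check: horizontal stripes should give an orientation near 0 degrees.
      yy, xx = np.mgrid[0:256, 0:256]
      print(round(dominant_orientation(np.sin(2 * np.pi * yy / 16.0)), 1))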

  2. Computer Software: Does It Support a New View of Reading?

    ERIC Educational Resources Information Center

    Case, Carolyn J.

    A study examined commercially available computer software to ascertain its degree of congruency with current methods of reading instruction (the Interactive model) at the first and second grade levels. A survey was conducted of public school educators in Connecticut and experts in the field to determine their level of satisfaction with available…

  3. Social Software for Life-Long Learning

    ERIC Educational Resources Information Center

    Klamma, Ralf; Chatti, Mohamed Amine; Duval, Erik; Hummel, Hans; Hvannberg, Ebba Thora; Kravcik, Milos; Law, Effie; Naeve, Ambjorn; Scott, Peter

    2007-01-01

    Life-long learning is a key issue for our knowledge society. With social software systems new heterogeneous kinds of technology enhanced informal learning are now available to the life-long learner. Learners outside of learning institutions now have access to powerful social communities of experts and peers who are together forging a new web 2.0.…

  4. Expert System Software

    NASA Technical Reports Server (NTRS)

    1989-01-01

    C Language Integrated Production System (CLIPS) is a software shell for developing expert systems, designed to allow research and development of artificial intelligence on conventional computers. Originally developed by Johnson Space Center, it enables highly efficient pattern matching. A collection of conditions and actions to be taken if the conditions are met is built into a rule network. Additional pertinent facts are matched to the rule network. Using the program, E.I. DuPont de Nemours & Co. is monitoring chemical production machines; California Polytechnic State University is investigating artificial intelligence in computer-aided design; Mentor Graphics has built a new Circuit Synthesis system; and Brooke and Brooke, a law firm, can determine which facts from a file are most important.

  5. Integration of an expert teaching assistant with distance learning software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca, S.P.; Reed, N.E.

    1996-12-31

    The Remote Teaching Assistant (RTA) software currently under development at UC Davis allows students and Teaching Assistants (TAs) to interact through multimedia communication via the Internet. To resolve the problem of TA unavailability and limited knowledge, an Expert Teaching Assistant (ETA) module is being developed. When TAs are not on-line, students in need of help consult ETA. The focus of this research is the development and integration of ETA with RTA, the establishment of an architecture suitable for use with education (the domain) in any sub-domain (course), and the creation of a mechanism usable by non-technical personnel to maintain knowledge bases.

  6. Software Design for Interactive Graphic Radiation Treatment Simulation Systems*

    PubMed Central

    Kalet, Ira J.; Sweeney, Christine; Jacky, Jonathan

    1990-01-01

    We examine issues in the design of interactive computer graphic simulation programs for radiation treatment planning (RTP), as well as expert system programs that automate parts of the RTP process, in light of ten years of experience at designing, building and using such programs. An experiment in object-oriented design using standard Pascal shows that while some advantage is gained from the design, it is still difficult to achieve modularity and to integrate expert system components. A new design based on the Common LISP Object System (CLOS) is described. This series of designs for RTP software shows that this application benefits in specific ways from object-oriented design methods and appropriate languages and tools.

  7. Security. Review Software for Advanced CHOICE. CHOICE (Challenging Options in Career Education).

    ERIC Educational Resources Information Center

    Pitts, Ilse M.; And Others

    CHOICE Security is an Apple computer game activity designed to help secondary migrant students memorize their social security numbers and reinforce job and role information presented in "Career Notes, First Applications." The learner may choose from four time options and whether to have the social security number visible on the screen or…

  8. Diagnosis - Using automatic test equipment and artificial intelligence expert systems

    NASA Astrophysics Data System (ADS)

    Ramsey, J. E., Jr.

    Three expert systems (ATEOPS, ATEFEXPERS, and ATEFATLAS), which were created to direct automatic test equipment (ATE), are reviewed. The purpose of the project was to develop an expert system to troubleshoot the converter-programmer power supply card for the F-15 aircraft and have that expert system direct the automatic test equipment. Each expert system uses a different knowledge base or inference engine, basing the testing on the circuit schematic, test requirements document, or ATLAS code. Implementing generalized modules allows the expert systems to be used for any unit under test. Converting ATLAS to LISP code allows the expert system to direct any ATE that uses ATLAS. The constraint-propagated frame system allows for the expansion of control by creating the ATLAS code, checking the code for good software engineering techniques, directing the ATE, and changing the test sequence as needed (planning).

  9. An expert system to manage the operation of the Space Shuttle's fuel cell cryogenic reactant tanks

    NASA Technical Reports Server (NTRS)

    Murphey, Amy Y.

    1990-01-01

    This paper describes a rule-based expert system to manage the operation of the Space Shuttle's cryogenic fuel system. Rules are based on standard fuel tank operating procedures described in the EECOM Console Handbook. The problem of configuring the operation of the Space Shuttle's fuel tanks is well-bounded and well defined. Moreover, the solution of this problem can be encoded in a knowledge-based system. Therefore, a rule-based expert system is the appropriate paradigm. Furthermore, the expert system could be used in coordination with power system simulation software to design operating procedures for specific missions.

  10. SigmaCLIPSE = presentation management + NASA CLIPS + SQL

    NASA Technical Reports Server (NTRS)

    Weiss, Bernard P., Jr.

    1990-01-01

    SigmaCLIPSE provides an expert systems and 'intelligent' data base development program for diverse systems integration environments that require support for automated reasoning and expert systems technology, presentation management, and access to 'intelligent' SQL data bases. The SigmaCLIPSE technology and its integrated ability to access 4th generation application development and decision support tools through a portable SQL interface comprise a sophisticated software development environment for solving knowledge engineering and expert systems development problems in information-intensive commercial environments -- financial services, health care, and distributed process control -- where the expert system must be extendable -- a major architectural advantage of NASA CLIPS. SigmaCLIPSE is a research effort intended to test the viability of merging SQL data bases with expert systems technology.

  11. How to choose the right statistical software?-a method increasing the post-purchase satisfaction.

    PubMed

    Cavaliere, Roberto

    2015-12-01

    Nowadays, we live in the "data era," where the use of statistical or data analysis software is inevitable in any research field. This means that the choice of the right software tool or platform is a strategic issue for a research department. Nevertheless, in many cases decision makers do not pay enough attention to a comprehensive and appropriate evaluation of what the market offers. Indeed, the choice often still depends on a few factors such as the researcher's personal inclination, e.g., which software was used at the university or is already known. This is not wrong in principle, but in some cases it is not sufficient and may lead to a "dead end" situation, typically after months or years of investment in the wrong software. This article, far from being a full and complete guide to statistical software evaluation, aims to illustrate some key points of the decision process and to introduce an extended range of factors that can help in making the right choice. There is little literature on this topic, which is often underestimated, both in the traditional literature and in the so-called "gray literature," even though some documents or short pages can be found online. In any case, there appears to be no common, well-established standpoint on the process of software evaluation from the end user's perspective. We suggest a multi-factor analysis leading to an evaluation matrix, intended as a flexible and customizable tool that provides a clearer picture of the available software alternatives, not in the abstract but in relation to the researcher's own context and needs. This method is the result of about twenty years of the author's experience in evaluating and using technical-computing software, and it partially arises from research on these topics carried out as part of a project funded by the European Commission under the Lifelong Learning Programme 2011.
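
    A minimal sketch of such an evaluation matrix is shown below: each candidate package is scored against weighted criteria, and the weighted sums are compared. The criteria, weights, and scores are purely illustrative placeholders, not the article's recommended factor set.

      # Illustrative weighted evaluation matrix for choosing statistical software.
      criteria_weights = {"statistical coverage": 0.30, "usability": 0.20,
                          "licensing cost": 0.20, "support and training": 0.15,
                          "integration with existing tools": 0.15}

      scores = {  # 1 (poor) .. 5 (excellent), judged against the department's own needs
          "Package A": {"statistical coverage": 5, "usability": 3, "licensing cost": 2,
                        "support and training": 4, "integration with existing tools": 3},
          "Package B": {"statistical coverage": 4, "usability": 4, "licensing cost": 4,
                        "support and training": 3, "integration with existing tools": 4},
      }

      for name, s in scores.items():
          total = sum(criteria_weights[c] * s[c] for c in criteria_weights)
          print(f"{name}: weighted score {total:.2f}")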

  12. The Fiscal Impact of a Tuition Assistance Grant for Virginia's Special Education Students. Parent Choice Issues in the State

    ERIC Educational Resources Information Center

    Aud, Susan L.

    2007-01-01

    Parents of students with disabilities face a number of difficult choices in determining how to get the best education for their children. Too often, the special education system in public schools fails its students. Parents must become both experts and advocates for their children in order to navigate a burdensome maze of regulations to fight for…

  13. Biology Teacher and Expert Opinions about Computer Assisted Biology Instruction Materials: A Software Entitled Nucleic Acids and Protein Synthesis

    ERIC Educational Resources Information Center

    Hasenekoglu, Ismet; Timucin, Melih

    2007-01-01

    The aim of this study is to collect and evaluate opinions of CAI experts and biology teachers about a high school level Computer Assisted Biology Instruction Material presenting computer-made modelling and simulations. It is a case study. A material covering "Nucleic Acids and Protein Synthesis" topic was developed as the…

  14. Retinopathy of Prematurity-assist: Novel Software for Detecting Plus Disease

    PubMed Central

    Pour, Elias Khalili; Pourreza, Hamidreza; Zamani, Kambiz Ameli; Mahmoudi, Alireza; Sadeghi, Arash Mir Mohammad; Shadravan, Mahla; Karkhaneh, Reza; Pour, Ramak Rouhi

    2017-01-01

    Purpose: To design software with a novel algorithm that analyzes tortuosity and vascular dilatation in fundal images of retinopathy of prematurity (ROP) patients with acceptable accuracy for detecting plus disease. Methods: Eighty-seven well-focused fundal images taken with RetCam were classified into three groups of plus, non-plus, and pre-plus by agreement between three ROP experts. The automated algorithms in this study were designed based on two methods: a curvature measure and the distance transform, for assessment of tortuosity and vascular dilatation, respectively, as the two major parameters of plus disease detection. Results: Thirty-eight plus, 12 pre-plus, and 37 non-plus images classified by the three experts were tested by the automated algorithm, and the software's grouping of the images was evaluated against the expert vote using three different classifiers: k-nearest neighbor, support vector machine, and a multilayer perceptron network. The plus, pre-plus, and non-plus images were analyzed with 72.3%, 83.7%, and 84.4% accuracy, respectively. Conclusions: The new automated algorithm used in this pilot scheme for diagnosis and screening of patients with plus ROP has acceptable accuracy. With further improvements, it may become particularly useful, especially in centers without a person skilled in the ROP field. PMID:29022295
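
    The two quantities named above can be sketched simply. Below, tortuosity is approximated by the arc-to-chord ratio of a vessel centerline (a common stand-in for the paper's curvature-based measure), and vessel width is estimated from the Euclidean distance transform of a binary vessel mask; both are illustrative simplifications, not the published algorithm.

      import numpy as np
      from scipy.ndimage import distance_transform_edt

      def tortuosity(centerline_points):
          """Arc length divided by chord length of an ordered centerline (1.0 = straight)."""
          pts = np.asarray(centerline_points, dtype=float)
          arc = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
          chord = np.linalg.norm(pts[-1] - pts[0])
          return arc / chord

      def vessel_width(vessel_mask, centerline_idx):
          """Approximate vessel diameter at centerline pixels: twice the distance
          from the centerline to the background in a binary vessel mask."""
          dist = distance_transform_edt(vessel_mask)
          idx = np.asarray(centerline_idx)
          return 2.0 * dist[idx[:, 0], idx[:, 1]]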

  15. QUEST/Ada (Query Utility Environment for Software Testing of Ada): The development of a program analysis environment for Ada, task 1, phase 2

    NASA Technical Reports Server (NTRS)

    Brown, David B.

    1990-01-01

    The results of research and development efforts are described for Task 1, Phase 2 of a general project entitled The Development of a Program Analysis Environment for Ada. The scope of this task includes the design and development of a prototype system for testing Ada software modules at the unit level. The system is called the Query Utility Environment for Software Testing of Ada (QUEST/Ada). The prototype for condition coverage provides a platform that implements expert system interaction with program testing. The expert system can modify data in the instrumented source code in order to achieve coverage goals. Given this initial prototype, it is possible to evaluate the rule base in order to develop improved rules for test case generation. The goals of Phase 2 are the following: (1) to continue to develop and improve the current user interface to support the other goals of this research effort (i.e., those related to improved testing efficiency and increased code reliability); (2) to develop and empirically evaluate a succession of alternative rule bases for the test case generator such that the expert system achieves coverage in a more efficient manner; and (3) to extend the concepts of the current test environment to address the issues of Ada concurrency.

  16. The development of AR book for computer learning

    NASA Astrophysics Data System (ADS)

    Phadung, Muneeroh; Wani, Najela; Tongmnee, Nur-aiynee

    2017-08-01

    Educators need to provide alternative educational tools to foster students' learning outcomes. By using AR technology to create exciting edutainment experiences, this paper presents how augmented reality (AR) can be applied in education. This study aims to develop an AR book for tenth-grade students (age 15-16) and evaluate its quality. The AR book was developed following the ADDIE framework to provide computer learning on computer software knowledge. The content was aligned with the current Thai education curriculum. The AR book had 10 pages covering three topics (the first was "Introduction," the second was "System Software," and the third was "Application Software"). Each page contained markers that placed virtual objects (2D animations and video clips). The obtained data were analyzed in terms of average and standard deviation. The validity of the multimedia design of the AR book was assessed by three experts in multimedia design. A five-point Likert scale was used, and the values were X̄ = 4.84, S.D. = 1.27, which corresponded to a very high level. Moreover, three content experts who specialize in computer teaching evaluated the AR book's validity. The values determined by the experts were X̄ = 4.69, S.D. = 0.29, which also corresponded to a very high level. Implications for future study and education are discussed.

  17. Use of the Analytic Hierarchy Process for Medication Decision-Making in Type 2 Diabetes

    PubMed Central

    Maruthur, Nisa M.; Joy, Susan M.; Dolan, James G.; Shihab, Hasan M.; Singh, Sonal

    2015-01-01

    Aim To investigate the feasibility and utility of the Analytic Hierarchy Process (AHP) for medication decision-making in type 2 diabetes. Methods We conducted an AHP with nine diabetes experts using structured interviews to rank add-on therapies (to metformin) for type 2 diabetes. During the AHP, participants compared treatment alternatives relative to eight outcomes (hemoglobin A1c-lowering and seven potential harms) and the relative importance of the different outcomes. The AHP model and instrument were pre-tested and pilot-tested prior to use. Results were discussed and an evaluation of the AHP was conducted during a group session. We conducted the quantitative analysis using Expert Choice software with the ideal mode to determine the priority of treatment alternatives. Results Participants judged exenatide to be the best add-on therapy followed by sitagliptin, sulfonylureas, and then pioglitazone. Maximizing benefit was judged 21% more important than minimizing harm. Minimizing severe hypoglycemia was judged to be the most important harm to avoid. Exenatide was the best overall alternative if the importance of minimizing harms was prioritized completely over maximizing benefits. Participants reported that the AHP improved transparency, consistency, and an understanding of others’ perspectives and agreed that the results reflected the views of the group. Conclusions The AHP is feasible and useful to make decisions about diabetes medications. Future studies which incorporate stakeholder preferences should evaluate other decision contexts, objectives, and treatments. PMID:26000636

  18. Evaluating the Effect of Display Realism on Natural Resource Decision Making

    NASA Astrophysics Data System (ADS)

    Chong, Steven S.

    2018-05-01

    Geographic information systems (GIS) facilitate location-based decision making. Despite the improved availability of GIS software to non-professionals, training in cartographic design has not followed suit. Prior research indicates that when presented with map choices, users are influenced by naïve realism, a preference for realistic displays containing irrelevant, extraneous details, leading to decreased task efficiency. This study investigated the role of naïve realism in decision making for natural resource management, a field that often employs geospatial tools. Data were collected through a GIS user ability test, a questionnaire, and direct observation. Forty volunteer expert and non-expert resource managers evaluated the suitability of different sites for a land management scenario. Each participant was tested on two map display treatments containing different levels of realism - a simpler 2D display and a more complex 3D display - to compare task performance. Performance was measured by task accuracy and task completion time. User perceptions and preferences about the displays were also recorded. Display realism had an impact on performance, and there were indications that naïve realism was present. Users completed tasks significantly faster on the 2D display, and many individuals misjudged which display they were most accurate or fastest with. The results are informative for designing information systems containing interactive maps, particularly for resource management applications. The results also suggest that the order in which displays were presented had a significant effect and may have implications for teaching users map-based tasks.

  19. The Implementation of Analytical Hierarchy Process Method for Outstanding Achievement Scholarship Reception Selection at Universal University of Batam

    NASA Astrophysics Data System (ADS)

    Marfuah; Widiantoro, Suryo

    2017-12-01

    Universal University of Batam offers an outstanding-achievement scholarship to current students each new academic year. Given the large number of new students interested in receiving it, the selection team must be able to filter applicants and choose the eligible ones. The selection process starts with evaluation and judgment by experts. There were five criteria as the basis of selection, and each had three alternatives that had to be considered. Based on university policy, the maximum number of recipients is five for each of six study programs. Those programs are art of music, dance, industrial engineering, environmental engineering, telecommunication engineering, and software engineering. Because the experts' choices were subjective, the AHP method was used to support consistent decision making through a pairwise comparison matrix process between criteria and the selected alternatives, determining the priority order of the criteria and alternatives used. The results of these calculations were used as decision-making support to determine the eligible students receiving scholarships, based on the alternatives of the selected criteria as determined by the final AHP calculation, with criterion priorities A (0.37), C (0.23), E (0.21), D (0.14), and B (0.06) and a consistency ratio of 0.05. The alternative priorities were 1 (0.63), 2 (0.26), and 3 (0.11), with a consistency ratio of 0.03; each CR ≤ 0.1 indicates a consistent weighting preference.
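
    The core AHP computation used in studies such as this one - deriving priority weights from a pairwise comparison matrix and checking the consistency ratio against the CR ≤ 0.1 threshold - can be sketched as follows. The 5x5 judgment matrix below is an invented example, not the study's actual data.

      import numpy as np

      # Saaty's random consistency index for matrices of size n.
      RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

      def ahp_priorities(A):
          """Priority weights (principal eigenvector) and consistency ratio of a
          reciprocal pairwise comparison matrix."""
          A = np.asarray(A, dtype=float)
          n = A.shape[0]
          eigvals, eigvecs = np.linalg.eig(A)
          k = np.argmax(eigvals.real)
          w = np.abs(eigvecs[:, k].real)
          w /= w.sum()
          ci = (eigvals[k].real - n) / (n - 1)
          cr = ci / RI[n] if RI[n] > 0 else 0.0
          return w, cr

      A = [[1,   5,   3,   2,   2],
           [1/5, 1,   1/3, 1/2, 1/3],
           [1/3, 3,   1,   2,   1],
           [1/2, 2,   1/2, 1,   1/2],
           [1/2, 3,   1,   2,   1]]
      weights, cr = ahp_priorities(A)
      print(np.round(weights, 2), round(cr, 3))   # CR should be <= 0.1 for consistent judgments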

  20. Selection of infectious medical waste disposal firms by using the analytic hierarchy process and sensitivity analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, P.-F.; Wu, C.-R.; Li, Y.-T.

    2008-07-01

    While Taiwanese hospitals dispose of large amounts of medical waste to ensure sanitation and personal hygiene, doing so inefficiently creates potential environmental hazards and increases operational expenses. However, hospitals lack objective criteria to select the most appropriate waste disposal firm and evaluate its performance, instead relying on their own subjective judgment and previous experiences. Therefore, this work presents an analytic hierarchy process (AHP) method to objectively select medical waste disposal firms based on the results of interviews with experts in the field, thus reducing overhead costs and enhancing medical waste management. An appropriate criterion weighting based on AHP is derived to assess the effectiveness of medical waste disposal firms. The proposed AHP-based method offers a more efficient and precise means of selecting medical waste firms than subjective assessment methods do, thus reducing the potential risks for hospitals. Analysis results indicate that the medical sector selects the most appropriate infectious medical waste disposal firm based on the following ranking: matching degree, contractor's qualifications, contractor's service capability, contractor's equipment, and economic factors. By providing hospitals with an effective means of evaluating medical waste disposal firms, the proposed AHP method can reduce overhead costs and enable medical waste management to understand the market demand in the health sector. Moreover, sensitivity analysis performed with Expert Choice software can examine how changes in the criterion weights influence the ranking of alternatives in the hierarchy.
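
    The kind of sensitivity analysis mentioned at the end can be sketched by sweeping one criterion's weight, renormalizing the remaining weights, and re-ranking the alternatives; all weights and local priorities below are hypothetical placeholders, not values from the study.

      import numpy as np

      criteria = ["matching degree", "qualifications", "service capability",
                  "equipment", "economic factors"]
      weights = np.array([0.35, 0.25, 0.20, 0.12, 0.08])
      # Local priorities of three hypothetical firms under each criterion (rows = firms).
      local = np.array([[0.50, 0.40, 0.45, 0.30, 0.20],
                        [0.30, 0.35, 0.35, 0.45, 0.30],
                        [0.20, 0.25, 0.20, 0.25, 0.50]])

      def rank_alternatives(w):
          scores = local @ (w / w.sum())
          return np.argsort(-scores) + 1, scores

      for new_w in (0.10, 0.35, 0.60):              # sweep the weight of "matching degree"
          w = weights.copy()
          w[1:] = w[1:] / w[1:].sum() * (1.0 - new_w)
          w[0] = new_w
          order, scores = rank_alternatives(w)
          print(f"matching degree weight {new_w:.2f}: ranking {order}, scores {np.round(scores, 3)}")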

  1. The accuracy of a designed software for automated localization of craniofacial landmarks on CBCT images.

    PubMed

    Shahidi, Shoaleh; Bahrampour, Ehsan; Soltanimehr, Elham; Zamani, Ali; Oshagh, Morteza; Moattari, Marzieh; Mehdizadeh, Alireza

    2014-09-16

    Two-dimensional projection radiographs have traditionally been considered the modality of choice for cephalometric analysis. To overcome the shortcomings of two-dimensional images, three-dimensional computed tomography (CT) has been used to evaluate craniofacial structures. However, manual landmark detection depends on medical expertise, and the process is time-consuming. The present study was designed to produce software capable of automated localization of craniofacial landmarks on cone beam (CB) CT images based on image registration, and to evaluate its accuracy. The software was designed using the MATLAB programming language. The technique was a combination of feature-based (principal axes registration) and voxel similarity-based methods for image registration. A total of 8 CBCT images were selected as reference images for creating a head atlas. Then, 20 CBCT images were randomly selected as test images for evaluating the method. Three experts twice located 14 landmarks in all 28 CBCT images during two examinations set 6 weeks apart. The distances between the coordinates of each landmark obtained by the manual and automated detection methods were calculated for each image and reported as mean errors. The combined intraclass correlation coefficient was 0.89 for intraobserver reliability and 0.87 for interobserver reliability (95% confidence interval, 0.82 to 0.93). The mean errors of all 14 landmarks were <4 mm. Additionally, 63.57% of landmarks had a mean error of <3 mm compared with manual detection (the gold standard method). The accuracy of our approach for automated localization of craniofacial landmarks, based on combining feature-based and voxel similarity-based methods for image registration, was acceptable. Nevertheless, we recommend repetition of this study using other techniques, such as intensity-based methods.
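
    A rough sketch of the feature-based half of such a registration (principal axes alignment) is given below: the intensity-weighted centroid and the eigenvectors of the coordinate covariance define a rigid transform that brings a moving volume into approximate alignment with a fixed one. Eigenvector sign ambiguities and the subsequent voxel-similarity refinement are deliberately ignored; this is not the study's implementation.

      import numpy as np

      def principal_axes(volume, threshold=0.0):
          """Intensity-weighted centroid and principal axes (covariance eigenvectors)
          of the voxels above `threshold`."""
          coords = np.argwhere(volume > threshold).astype(float)
          w = volume[volume > threshold].astype(float)
          centroid = np.average(coords, axis=0, weights=w)
          centered = coords - centroid
          cov = (centered * w[:, None]).T @ centered / w.sum()
          eigvals, eigvecs = np.linalg.eigh(cov)
          return centroid, eigvecs[:, np.argsort(eigvals)[::-1]]

      def principal_axes_transform(moving, fixed):
          """Rotation R and translation t that align the moving volume's centroid
          and principal axes with those of the fixed volume."""
          c_m, ax_m = principal_axes(moving)
          c_f, ax_f = principal_axes(fixed)
          R = ax_f @ ax_m.T
          t = c_f - R @ c_m
          return R, t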

  2. Software Organization in Student Data Banks for Research and Evaluation: Four Institutional Models.

    ERIC Educational Resources Information Center

    Friedman, Charles P.

    Student data banks for ongoing research and evaluation have been implemented by a number of professional schools. Institutions selecting software designs for the establishment of such systems are often faced with making their choice before all the possible uses of the system are determined. Making software design decisions involves "rational"…

  3. Quality dependent fusion of intramodal and multimodal biometric experts

    NASA Astrophysics Data System (ADS)

    Kittler, J.; Poh, N.; Fatukasi, O.; Messer, K.; Kryszczuk, K.; Richiardi, J.; Drygajlo, A.

    2007-04-01

    We address the problem of score-level fusion of intramodal and multimodal experts in the context of biometric identity verification. We investigate the merits of confidence-based weighting of component experts. In contrast to the conventional approach, where confidence values are derived from scores, we instead use raw measures of biometric data quality to control the influence of each expert on the final fused score. We show that quality-based fusion gives better performance than quality-free fusion. The use of quality-weighted scores as features in the definition of the fusion functions leads to further improvements. We demonstrate that the achievable performance gain is also affected by the choice of fusion architecture. The evaluation of the proposed methodology involves six face experts and one speech verification expert. It is carried out on the XM2VTS database.
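
    A minimal sketch of quality-controlled score fusion follows: each expert's match score is weighted by a raw quality measure for its input sample (e.g., face detection confidence or speech signal-to-noise ratio) rather than by a confidence derived from the score itself. The paper goes further and trains fusion functions on quality-weighted scores; the linear combination and all numbers below are only illustrative.

      import numpy as np

      def quality_weighted_fusion(scores, qualities):
          """Fuse per-expert match scores using raw sample-quality measures as weights."""
          s = np.asarray(scores, dtype=float)
          q = np.asarray(qualities, dtype=float)
          return float(np.dot(q / q.sum(), s))

      # Hypothetical example: six face experts and one speech expert,
      # each with a match score and a quality measure on [0, 1].
      scores    = [0.82, 0.75, 0.64, 0.91, 0.58, 0.70, 0.66]
      qualities = [0.90, 0.85, 0.40, 0.95, 0.30, 0.80, 0.60]
      print(round(quality_weighted_fusion(scores, qualities), 3))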

  4. The weighted priors approach for combining expert opinions in logistic regression experiments

    DOE PAGES

    Quinlan, Kevin R.; Anderson-Cook, Christine M.; Myers, Kary L.

    2017-04-24

    When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal in this paper is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration. While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. Finally, we illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.
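
    The general idea - averaging a Bayesian design criterion over several weighted expert priors - can be sketched as below for a one-variable logistic regression. This is a rough Monte Carlo illustration under invented priors and candidate designs, not the authors' exact Weighted Priors procedure.

      import numpy as np

      rng = np.random.default_rng(0)

      def fisher_info(design_x, beta):
          """Fisher information for a logistic model p = 1/(1 + exp(-(b0 + b1*x)))
          evaluated at the test settings design_x."""
          X = np.column_stack([np.ones_like(design_x), design_x])
          p = 1.0 / (1.0 + np.exp(-X @ beta))
          return X.T @ np.diag(p * (1.0 - p)) @ X

      def weighted_bayesian_d(design_x, priors, prior_weights, n_draws=2000):
          """Weighted Bayesian D-type criterion: expected log-determinant of the
          information matrix under each expert's prior, combined with prior weights."""
          crit = 0.0
          for (mean, cov), w in zip(priors, prior_weights):
              draws = rng.multivariate_normal(mean, cov, size=n_draws)
              crit += w * np.mean([np.linalg.slogdet(fisher_info(design_x, b))[1] for b in draws])
          return crit

      # Two hypothetical, non-overlapping expert priors on (intercept, slope).
      priors = [(np.array([-1.0, 0.05]), np.diag([0.25, 0.0004])),
                (np.array([-3.0, 0.15]), np.diag([0.25, 0.0004]))]
      prior_weights = [0.5, 0.5]
      candidates = {"spread":    np.array([0.0, 10.0, 20.0, 30.0, 40.0]),
                    "clustered": np.array([18.0, 19.0, 20.0, 21.0, 22.0])}
      for name, d in candidates.items():
          print(name, round(weighted_bayesian_d(d, priors, prior_weights), 3))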

  5. Daily baseline skin care in the prevention, treatment, and supportive care of skin toxicity in oncology patients: recommendations from a multinational expert panel

    PubMed Central

    Bensadoun, René-Jean; Humbert, Phillipe; Krutman, Jean; Luger, Thomas; Triller, Raoul; Rougier, André; Seite, Sophie; Dreno, Brigitte

    2013-01-01

    Skin reactions due to radiotherapy and chemotherapy are a significant problem for an important number of cancer patients. While effective for treating cancer, they disturb cutaneous barrier function, causing a reaction soon after initiation of treatment that impacts patient quality of life. Managing these symptoms with cosmetics and nonpharmaceutical skin care products for camouflage or personal hygiene may be important for increasing patient self-esteem. However, inappropriate product choice or use could worsen side effects. Although recommendations exist for the pharmaceutical treatment of skin reactions, there are no recommendations for the choice or use of dermatologic skin care products for oncology patients. The present guidelines were developed by a board of European experts in dermatology and oncology to provide cancer care professionals with guidance for the appropriate use of non-pharmaceutical, dermocosmetic skin care management of cutaneous toxicities associated with radiotherapy and systemic chemotherapy, including epidermal growth factor inhibitors and monoclonal antibodies. The experts hope that these recommendations will improve the management of cutaneous side effects and hence quality of life for oncology patients. PMID:24353440

  6. The weighted priors approach for combining expert opinions in logistic regression experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quinlan, Kevin R.; Anderson-Cook, Christine M.; Myers, Kary L.

    When modeling the reliability of a system or component, it is not uncommon for more than one expert to provide very different prior estimates of the expected reliability as a function of an explanatory variable such as age or temperature. Our goal in this paper is to incorporate all information from the experts when choosing a design about which units to test. Bayesian design of experiments has been shown to be very successful for generalized linear models, including logistic regression models. We use this approach to develop methodology for the case where there are several potentially non-overlapping priors under consideration. While multiple priors have been used for analysis in the past, they have never been used in a design context. The Weighted Priors method performs well for a broad range of true underlying model parameter choices and is more robust when compared to other reasonable design choices. Finally, we illustrate the method through multiple scenarios and a motivating example. Additional figures for this article are available in the online supplementary information.

  7. Relationship between Effective Application of Machine Learning and Malware Detection: A Quantitative Study

    ERIC Educational Resources Information Center

    Enfinger, Kerry Wayne

    2016-01-01

    The number of malicious files present in the public domain continues to rise at a substantial rate. Current anti-malware software utilizes a signature-based method to detect the presence of malicious software. Generating these pattern signatures is time consuming due to malicious code complexity and the need for expert analysis, however, by making…

  8. Developing Software For Monitoring And Diagnosis

    NASA Technical Reports Server (NTRS)

    Edwards, S. J.; Caglayan, A. K.

    1993-01-01

    Expert-system software shell produces executable code. Report discusses beginning phase of research directed toward development of artificial intelligence for real-time monitoring of, and diagnosis of faults in, complicated systems of equipment. Motivated by need for onboard monitoring and diagnosis of electronic sensing and controlling systems of advanced aircraft. Also applicable to such equipment systems as refineries, factories, and powerplants.

  9. A Framework for Instrument Development of a Choice Experiment: An Application to Type 2 Diabetes.

    PubMed

    Janssen, Ellen M; Segal, Jodi B; Bridges, John F P

    2016-10-01

    Choice experiments are increasingly used to obtain patient preference information for regulatory benefit-risk assessments. Despite the importance of instrument design, there remains a paucity of literature applying good research principles. We applied a novel framework for instrument development of a choice experiment to measure type 2 diabetes mellitus treatment preferences. Applying the framework, we used evidence synthesis, expert consultation, stakeholder engagement, pretest interviews, and pilot testing to develop a best-worst scaling (BWS) and discrete choice experiment (DCE). We synthesized attributes from published DCEs for type 2 diabetes, consulted clinical experts, engaged a national advisory board, conducted local cognitive interviews, and pilot tested a national survey. From published DCEs (n = 17), ten attribute categories were extracted, with cost (n = 11) having the highest relative attribute importance (RAI) (range 6-10). Clinical consultation and stakeholder engagement identified six attributes for inclusion. Cognitive pretesting with local diabetes patients (n = 25) ensured comprehension of the choice experiment. Pilot testing with patients from a national sample (n = 50) identified nausea as most important (RAI for DCE: 10 [95% CI 8.5-11.5]; RAI for BWS: 10 [95% CI 8.9-11.1]). The developed choice experiment contained six attributes (A1c decrease, blood glucose stability, low blood glucose, nausea, additional medicine, and cost). The framework for instrument development of a choice experiment included five stages of development and incorporated multiple stakeholder perspectives. Further comparisons of instrument development approaches are needed to identify best practices. To facilitate comparisons, researchers need to be encouraged to publish or discuss their instrument development strategies and findings.

  10. Cheating experience: Guiding novices to adopt the gaze strategies of experts expedites the learning of technical laparoscopic skills.

    PubMed

    Vine, Samuel J; Masters, Rich S W; McGrath, John S; Bright, Elizabeth; Wilson, Mark R

    2012-07-01

    Previous research has demonstrated that trainees can be taught (via explicit verbal instruction) to adopt the gaze strategies of expert laparoscopic surgeons. The current study examined a software template designed to guide trainees to adopt expert gaze control strategies passively, without being provided with explicit instructions. We examined 27 novices (who had no laparoscopic training) performing 50 learning trials of a laparoscopic training task in either a discovery-learning (DL) group or a gaze-training (GT) group while wearing an eye tracker to assess gaze control. The GT group performed trials using a surgery-training template (STT); software that is designed to guide expert-like gaze strategies by highlighting the key locations on the monitor screen. The DL group had a normal, unrestricted view of the scene on the monitor screen. Both groups then took part in a nondelayed retention test (to assess learning) and a stress test (under social evaluative threat) with a normal view of the scene. The STT was successful in guiding the GT group to adopt an expert-like gaze strategy (displaying more target-locking fixations). Adopting expert gaze strategies led to an improvement in performance for the GT group, which outperformed the DL group in both retention and stress tests (faster completion time and fewer errors). The STT is a practical and cost-effective training interface that automatically promotes an optimal gaze strategy. Trainees who are trained to adopt the efficient target-locking gaze strategy of experts gain a performance advantage over trainees left to discover their own strategies for task completion. Copyright © 2012 Mosby, Inc. All rights reserved.

  11. Advanced software development workstation project: Engineering scripting language. Graphical editor

    NASA Technical Reports Server (NTRS)

    1992-01-01

    Software development is widely considered to be a bottleneck in the development of complex systems, both in terms of development and in terms of maintenance of deployed systems. Cost of software development and maintenance can also be very high. One approach to reducing costs and relieving this bottleneck is increasing the reuse of software designs and software components. A method for achieving such reuse is a software parts composition system. Such a system consists of a language for modeling software parts and their interfaces, a catalog of existing parts, an editor for combining parts, and a code generator that takes a specification and generates code for that application in the target language. The Advanced Software Development Workstation is intended to be an expert system shell designed to provide the capabilities of a software part composition system.

  12. On the Multilevel Nature of Meta-Analysis: A Tutorial, Comparison of Software Programs, and Discussion of Analytic Choices.

    PubMed

    Pastor, Dena A; Lazowski, Rory A

    2018-01-01

    The term "multilevel meta-analysis" is encountered not only in applied research studies, but also in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term "multilevel meta-analysis" is redundant, since all meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis, the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs, underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike, and differences are noted in the output provided and the estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.
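
    To make the multilevel framing concrete, the sketch below pools hypothetical effect sizes with a DerSimonian-Laird random-effects model, in which the between-study variance tau² plays the role of the level-2 variance. The tutorial itself works through metafor, SPSS macros, PROC MIXED, and HLM; this standalone computation is only an illustration of the common underlying model.

      import numpy as np

      def random_effects_meta(effects, variances):
          """DerSimonian-Laird random-effects pooling; tau2 is the between-study
          (level-2) variance in the multilevel formulation of meta-analysis."""
          y = np.asarray(effects, dtype=float)
          v = np.asarray(variances, dtype=float)
          w_fe = 1.0 / v
          y_fe = np.sum(w_fe * y) / np.sum(w_fe)
          Q = np.sum(w_fe * (y - y_fe) ** 2)
          c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
          tau2 = max(0.0, (Q - (len(y) - 1)) / c)
          w_re = 1.0 / (v + tau2)
          pooled = np.sum(w_re * y) / np.sum(w_re)
          se = np.sqrt(1.0 / np.sum(w_re))
          return pooled, se, tau2

      # Hypothetical standardized mean differences and their sampling variances.
      effects = [0.30, 0.12, 0.45, 0.25, 0.05]
      variances = [0.020, 0.015, 0.050, 0.010, 0.030]
      print(tuple(round(x, 3) for x in random_effects_meta(effects, variances)))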

  13. A rule-based expert system for generating control displays at the Advanced Photon Source

    NASA Astrophysics Data System (ADS)

    Coulter, Karen J.

    1994-12-01

    The integration of a rule-based expert system for generating screen displays for controlling and monitoring instrumentation under the Experimental Physics and Industrial Control System (EPICS) is presented. The expert system is implemented using CLIPS, an expert system shell from the Software Technology Branch at Lyndon B. Johnson Space Center. The user selects the hardware input and output to be displayed and the expert system constructs a graphical control screen appropriate for the data. Such a system provides a method for implementing a common look and feel for displays created by several different users and reduces the amount of time required to create displays for new hardware configurations. Users are able to modify the displays as needed using the EPICS display editor tool.

  14. Expert system development methodology and the transition from prototyping to operations: FIESTA, a case study

    NASA Technical Reports Server (NTRS)

    Happell, Nadine; Miksell, Steve; Carlisle, Candace

    1989-01-01

    A major barrier in taking expert systems from prototype to operational status involves instilling end-user confidence in the operational system. Different software life cycle models are examined, and the advantages and disadvantages of each when applied to expert system development are explored. The Fault Isolation Expert System for Tracking and data relay satellite system Applications (FIESTA) is presented as a case study of the development of an expert system. The end-user confidence necessary for operational use of this system is accentuated by the fact that it will handle real-time data in a secure environment, allowing little tolerance for errors. How FIESTA is dealing with transition problems as it moves from an off-line standalone prototype to an on-line real-time system is discussed.

  15. Expert system development methodology and the transition from prototyping to operations - Fiesta, a case study

    NASA Technical Reports Server (NTRS)

    Happell, Nadine; Miksell, Steve; Carlisle, Candace

    1989-01-01

    A major barrier in taking expert systems from prototype to operational status involves instilling end-user confidence in the operational system. Different software life cycle models are examined, and the advantages and disadvantages of each when applied to expert system development are explored. The Fault Isolation Expert System for Tracking and data relay satellite system Applications (FIESTA) is presented as a case study of the development of an expert system. The end-user confidence necessary for operational use of this system is accentuated by the fact that it will handle real-time data in a secure environment, allowing little tolerance for errors. How FIESTA is dealing with transition problems as it moves from an off-line standalone prototype to an on-line real-time system is discussed.

  16. Independent Verification and Validation of Complex User Interfaces: A Human Factors Approach

    NASA Technical Reports Server (NTRS)

    Whitmore, Mihriban; Berman, Andrea; Chmielewski, Cynthia

    1996-01-01

    The Usability Testing and Analysis Facility (UTAF) at the NASA Johnson Space Center has identified and evaluated a potential automated software interface inspection tool capable of assessing the degree to which space-related critical and high-risk software system user interfaces meet objective human factors standards across each NASA program and project. Testing consisted of two distinct phases. Phase 1 compared analysis times and similarity of results for the automated tool and for human-computer interface (HCI) experts. In Phase 2, HCI experts critiqued the prototype tool's user interface. Based on this evaluation, it appears that a more fully developed version of the tool will be a promising complement to a human factors-oriented independent verification and validation (IV&V) process.

  17. Heat exchanger expert system logic

    NASA Technical Reports Server (NTRS)

    Cormier, R.

    1988-01-01

    The reduction of the operation and fault diagnostics of a Deep Space Network heat exchanger to a rule base, by the application of propositional calculus to a set of logic statements, is described. The value of this approach lies in the ease of converting the logic and subsequently implementing it on a computer as an expert system. The rule base was written in Process Intelligent Control software.
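
    In outline, such a reduction turns each logic statement into an if-then rule over propositions and then chains over the rules. The sketch below shows the pattern with invented placeholder rules; it does not reproduce the heat exchanger's actual diagnostic logic or the Process Intelligent Control rule syntax.

      # Forward chaining over propositional if-then rules (illustrative placeholders).
      rules = [
          ({"pump_on", "low_flow"}, "suspect_blockage"),
          ({"suspect_blockage", "high_delta_p"}, "blocked_filter"),
          ({"pump_on", "normal_flow"}, "nominal_operation"),
      ]

      def forward_chain(facts, rules):
          facts = set(facts)
          changed = True
          while changed:                 # keep firing rules until no new facts are derived
              changed = False
              for conditions, conclusion in rules:
                  if conditions <= facts and conclusion not in facts:
                      facts.add(conclusion)
                      changed = True
          return facts

      print(forward_chain({"pump_on", "low_flow", "high_delta_p"}, rules))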

  18. Change in Microsoft's Licensing Prices Attracts Some Colleges and Worries Others.

    ERIC Educational Resources Information Center

    Olsen, Florence

    2002-01-01

    Discusses the difficult choices facing campus officials as Microsoft pressures colleges to sign lease agreements for desktop software rather than continue to buy licenses; the new leasing option saves money in the short term but might limit choices later. (EV)

  19. Prioritization of factors impacting on performance of power looms using AHP

    NASA Astrophysics Data System (ADS)

    Dulange, S. R.; Pundir, A. K.; Ganapathy, L.

    2014-08-01

    The purpose of this paper is to identify the critical success factors influencing the performance of power loom textiles, to evaluate their impact on organizational performance, and to determine the effect of these factors on the organizational performance of small and medium-sized enterprises (SMEs) in the Solapur (Maharashtra) industrial sector using AHP. In the methodology adopted, factors are identified through a literature survey and finalized by taking the opinion of experts in the Indian context. A cognitive map is used to determine the direct and indirect relations between these factors, and a cause-and-effect diagram is prepared. The factors are then arranged hierarchically in a tree diagram. A questionnaire was designed and distributed among the experts, and data were collected. The data were entered into Expert Choice software, the factors were quantified by pairwise comparison, and they were prioritized. The weights demonstrate several key findings: the local and global priorities reveal a substantial effect of human resources, product style, and volume on organizational performance; skills and technology upgradation also affect organizational performance; and maintenance plays an important role in improving the organizational performance of the SMEs. Overall, the results show the central role of the operational factors. The research is subject to the normal limitations of AHP. The study uses perceptual data provided by experts, which may not provide clear measures of the impact factors; this can be overcome by using more experts to collect data in future studies. Interestingly, the findings may be generalizable outside Solapur to centres such as Ichalkaranji, Malegaon, and Bhiwandi (Maharashtra). Solapur power loom SMEs should consider AHP as an innovative tool for quantifying the factors affecting performance and for improving operational and organizational performance in today's dynamic manufacturing environment. The findings suggest that these critical success factors (CSFs) should be studied carefully and that an improvement strategy should be developed. Moreover, the study emphasizes the need to link the priority of factors to organizational performance and improvement. The study integrates the CSFs of performance, their quantification using AHP, and their effect on the performance of power loom textiles. The indirect impacts of underlying and fundamental factors are considered. Very few studies have investigated this issue; therefore, the research can make a useful contribution.
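
    A minimal numerical sketch of the pairwise-comparison step that Expert Choice automates is given below; the judgment matrix and factor names are invented for illustration and are not the paper's data. The geometric-mean method approximates the principal eigenvector used to derive the priority weights.

        import numpy as np

        # Illustrative AHP priority calculation for three hypothetical factors;
        # the pairwise judgments are invented for demonstration only.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        # Geometric-mean approximation of the principal eigenvector.
        w = np.prod(A, axis=1) ** (1.0 / A.shape[0])
        w /= w.sum()
        print(dict(zip(["human resource", "technology", "maintenance"], w.round(3))))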

  20. Space Station Software Recommendations

    NASA Technical Reports Server (NTRS)

    Voigt, S. (Editor)

    1985-01-01

    Four panels of invited experts and NASA representatives focused on the following topics: software management, software development environment, languages, and software standards. Each panel deliberated in private, held two open sessions with audience participation, and developed recommendations for the NASA Space Station Program. The major thrusts of the recommendations were as follows: (1) The software management plan should establish policies, responsibilities, and decision points for software acquisition; (2) NASA should furnish a uniform modular software support environment and require its use for all space station software acquired (or developed); (3) The language Ada should be selected for space station software, and NASA should begin to address issues related to the effective use of Ada; and (4) The space station software standards should be selected (based upon existing standards where possible), and an organization should be identified to promulgate and enforce them. These and related recommendations are described in detail in the conference proceedings.

  1. An expert system for integrated structural analysis and design optimization for aerospace structures

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was developed first in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This will allow engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce time to completion of structural design. An extensive literature survey in the field of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and the conceptual design for the integrated 'intelligent' structural analysis and design optimization software was then developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach would improve the expressiveness for knowledge representation (especially for structural analysis and design applications), provide the ability to build very large and practical expert systems, and provide an efficient way for storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of modules of expert systems for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software, AutoDesign, so developed, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies, used by a range of engineers with different levels of background and expertise. Based on the feedback obtained from such users, conclusions were developed and are provided.

  2. An expert system for integrated structural analysis and design optimization for aerospace structures

    NASA Astrophysics Data System (ADS)

    1992-04-01

    The results of a research study on the development of an expert system for integrated structural analysis and design optimization are presented. An Object Representation Language (ORL) was developed first in conjunction with a rule-based system. This ORL/AI shell was then used to develop expert systems to provide assistance with a variety of structural analysis and design optimization tasks, in conjunction with procedural modules for finite element structural analysis and design optimization. The main goal of the research study was to provide expertise, judgment, and reasoning capabilities in the aerospace structural design process. This will allow engineers performing structural analysis and design, even without extensive experience in the field, to develop error-free, efficient and reliable structural designs very rapidly and cost-effectively. This would not only improve the productivity of design engineers and analysts, but also significantly reduce time to completion of structural design. An extensive literature survey in the field of structural analysis, design optimization, artificial intelligence, and database management systems and their application to the structural design process was first performed. A feasibility study was then performed, and the architecture and the conceptual design for the integrated 'intelligent' structural analysis and design optimization software was then developed. An Object Representation Language (ORL), in conjunction with a rule-based system, was then developed using C++. Such an approach would improve the expressiveness for knowledge representation (especially for structural analysis and design applications), provide the ability to build very large and practical expert systems, and provide an efficient way for storing knowledge. Functional specifications for the expert systems were then developed. The ORL/AI shell was then used to develop a variety of modules of expert systems for a variety of modeling, finite element analysis, and design optimization tasks in the integrated aerospace structural design process. These expert systems were developed to work in conjunction with procedural finite element structural analysis and design optimization modules (developed in-house at SAT, Inc.). The complete software, AutoDesign, so developed, can be used for integrated 'intelligent' structural analysis and design optimization. The software was beta-tested at a variety of companies, used by a range of engineers with different levels of background and expertise. Based on the feedback obtained from such users, conclusions were developed and are provided.

  3. Expert system technology

    NASA Technical Reports Server (NTRS)

    Prince, Mary Ellen

    1987-01-01

    The expert system is a computer program which attempts to reproduce the problem-solving behavior of an expert, who is able to view problems from a broad perspective and arrive at conclusions rapidly, using intuition, shortcuts, and analogies to previous situations. Expert systems are a departure from the usual artificial intelligence approach to problem solving. Researchers have traditionally tried to develop general modes of human intelligence that could be applied to many different situations. Expert systems, on the other hand, tend to rely on large quantities of domain specific knowledge, much of it heuristic. The reasoning component of the system is relatively simple and straightforward. For this reason, expert systems are often called knowledge based systems. The report expands on the foregoing. Section 1 discusses the architecture of a typical expert system. Section 2 deals with the characteristics that make a problem a suitable candidate for expert system solution. Section 3 surveys current technology, describing some of the software aids available for expert system development. Section 4 discusses the limitations of the latter. The concluding section makes predictions of future trends.

  4. Perspectives on NASA flight software development - Apollo, Shuttle, Space Station

    NASA Technical Reports Server (NTRS)

    Garman, John R.

    1990-01-01

    Flight data systems' software development is chronicled for the period encompassing NASA's Apollo, Space Shuttle, and (ongoing) Space Station Freedom programs, with attention to the methodologies and 'development tools' employed in each case and their mutual relationships. A dominant concern in all three programs has been the accommodation of software change; it has also been noted that any such long-term program carries the additional challenge of identifying which elements of its software-related 'institutional memory' are most critical, in order to preclude their loss through the retirement, promotion, or transfer of its 'last expert'.

  5. Interfacing An Intelligent Decision-Maker To A Real-Time Control System

    NASA Astrophysics Data System (ADS)

    Evers, D. C.; Smith, D. M.; Staros, C. J.

    1984-06-01

    This paper discusses some of the practical aspects of implementing expert systems in a real-time environment. There is a conflict between the needs of a process control system and the computational load imposed by intelligent decision-making software. The computation required to manage a real-time control problem is primarily concerned with routine calculations which must be executed in real time. On most current hardware, non-trivial AI software should not be forced to operate under real-time constraints. In order for the system to work efficiently, the two processes must be separated by a well-defined interface. Although the precise nature of the task separation will vary with the application, the definition of the interface will need to follow certain fundamental principles in order to provide functional separation. This interface was successfully implemented in the expert scheduling software currently running the automated chemical processing facility at Lockheed-Georgia. Potential applications of this concept in the areas of airborne avionics and robotics will be discussed.

  6. Analytic hierarchy process helps select site for limestone quarry expansion in Barbados.

    PubMed

    Dey, Prasanta Kumar; Ramcharan, Eugene K

    2008-09-01

    Site selection is a key activity for quarry expansion to support cement production, and is governed by factors such as resource availability, logistics, costs, and socio-economic-environmental factors. Adequate consideration of all the factors facilitates both industrial productivity and sustainable economic growth. This study illustrates the site selection process that was undertaken for the expansion of limestone quarry operations to support cement production in Barbados. First, alternative sites with adequate resources to support a 25-year development horizon were identified. Second, technical and socio-economic-environmental factors were identified. Third, a database was developed for each site with respect to each factor. Fourth, a hierarchical model was developed in the analytic hierarchy process (AHP) framework. Fifth, the relative ranking of the alternative sites was derived through pairwise comparisons at all levels and through subsequent synthesis of the results across the hierarchy using computer software (Expert Choice). The study reveals that an integrated framework using the AHP can help select a site for the quarry expansion project in Barbados.
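
    The synthesis step described in the fifth stage can be sketched as a weighted combination of local priorities; all numbers below are hypothetical placeholders rather than the Barbados study's values.

        import numpy as np

        # Criterion weights (resources, logistics, cost, environment) and the
        # local priorities of each candidate site under each criterion.
        criteria_weights = np.array([0.40, 0.25, 0.20, 0.15])
        local_priorities = np.array([
            [0.50, 0.30, 0.20, 0.40],   # site A
            [0.30, 0.50, 0.30, 0.35],   # site B
            [0.20, 0.20, 0.50, 0.25],   # site C
        ])

        # Global score of each site = weighted sum of its local priorities.
        global_scores = local_priorities @ criteria_weights
        for site, score in zip("ABC", global_scores):
            print(f"site {site}: {score:.3f}")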

  7. Designing an E-Learning Platform for Postoperative Arthroplasty Adverse Events.

    PubMed

    Krumsvik, Ole Andreas; Babic, Ankica

    2017-01-01

    This paper presents a mobile software application development for e-learning based on adverse events data within the field of arthroplasty. The application aims at providing a learning platform for physicians, patients, and medical students. The user interface design aims to meet the requirements of several user groups concerned with adverse events of knee and hip implants. Besides clinical patient data, the platform also aims to include electronic patient data collected through self-monitoring. Two different modules were created, one for medical staff and one for patients, both divided into knee and hip areas. Knowledge is represented in the form of statistics, treatment options, and detailed, actual adverse event reports. Patients are given recommendations for two main situations, 'about your diagnosis' and 'what if you get a problem', as advice and guidance during postoperative rehabilitation. Expert evaluation resulted in acceptance of the concept and provided feedback and ideas. The patient evaluation has also been positive. Implementation will mean that a high-fidelity prototype will be developed and tested in larger user groups (medical staff, patients).

  8. Performance evaluation of medical records departments by analytical hierarchy process (AHP) approach in the selected hospitals in Isfahan : medical records dep. & AHP.

    PubMed

    Ajami, Sima; Ketabi, Saeedeh

    2012-06-01

    Medical Records Department (MRD) is an important unit for evaluating and planning care services. The goal of this study was to evaluate the performance of the Medical Records Departments (MRDs) of selected hospitals in Isfahan, Iran, using the Analytical Hierarchy Process (AHP). This was an analytic, cross-sectional study carried out in spring 2008 in Isfahan, Iran. The statistical population consisted of the MRDs of the Alzahra, Kashani, and Khorshid Hospitals in Isfahan. Data were collected by forms and through a brainstorming technique. To analyze the data and perform the AHP, the researchers used Expert Choice software. The results showed that the archiving unit received the largest importance weight with respect to information management, whereas on the customer aspect the admission unit received the largest weight. The overall weights of the Medical Records Departments of the Alzahra, Kashani, and Khorshid Hospitals were 0.394, 0.342, and 0.264, respectively. It is useful for managers to allocate and prioritize resources according to the AHP-based ranking of the Medical Records Departments.

  9. A Matlab user interface for the statistically assisted fluid registration algorithm and tensor-based morphometry

    NASA Astrophysics Data System (ADS)

    Yepes-Calderon, Fernando; Brun, Caroline; Sant, Nishita; Thompson, Paul; Lepore, Natasha

    2015-01-01

    Tensor-Based Morphometry (TBM) is an increasingly popular method for group analysis of brain MRI data. The main steps in the analysis consist of a nonlinear registration to align each individual scan to a common space, and a subsequent statistical analysis to determine morphometric differences, or difference in fiber structure between groups. Recently, we implemented the Statistically-Assisted Fluid Registration Algorithm or SAFIRA [1], which is designed for tracking morphometric differences among populations. To this end, SAFIRA allows the inclusion of statistical priors extracted from the populations being studied as regularizers in the registration. This flexibility and degree of sophistication limit the tool to expert use, even more so considering that SAFIRA was initially implemented in command line mode. Here, we introduce a new, intuitive, easy to use, Matlab-based graphical user interface for SAFIRA's multivariate TBM. The interface also generates different choices for the TBM statistics, including both the traditional univariate statistics on the Jacobian matrix, and comparison of the full deformation tensors [2]. This software will be freely disseminated to the neuroimaging research community.
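
    The "traditional univariate statistics on the Jacobian matrix" can be illustrated with a toy calculation of the log-determinant of the Jacobian of a deformation field; the synthetic 2-D displacement field below stands in for a real registration output and is not SAFIRA code.

        import numpy as np

        # Synthetic 2-D displacement field u(x, y) standing in for a registration.
        ny, nx = 64, 64
        y, x = np.mgrid[0:ny, 0:nx].astype(float)
        ux = 0.02 * (x - nx / 2)          # hypothetical expansion in x
        uy = 0.01 * (y - ny / 2)          # hypothetical expansion in y

        # Deformation phi(x) = x + u(x); Jacobian J = I + grad(u).
        dux_dy, dux_dx = np.gradient(ux)
        duy_dy, duy_dx = np.gradient(uy)
        detJ = (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx

        log_jac = np.log(detJ)            # >0 local expansion, <0 local contraction
        print(log_jac.mean(), log_jac.max())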

  10. The effect of Cardiac Arrhythmias Simulation Software on the nurses' learning and professional development.

    PubMed

    Bazrafkan, Leila; Hemmati, Mehdi

    2018-04-01

    One of the important tasks of nurses in the intensive care unit is interpretation of the ECG. The use of training simulators is a new paradigm in the age of computers. This study was performed to evaluate the impact of cardiac arrhythmia simulator software on nurses' learning in the subspecialty Vali-Asr Hospital in 2016. The study was conducted as a quasi-experimental, randomized Solomon four-group design with the participation of 120 nurses in the subspecialty Vali-Asr Hospital in Tehran, Iran in 2016, who were selected purposefully and allocated to 4 groups. With this design, confounding factors such as prior information, maturation, and the role of sex and age were controlled. A valid and reliable multiple-choice test was used to gather information; the validity of the test was approved by experts and its reliability was established by a Cronbach's alpha coefficient of 0.89. At first, the knowledge and skills of the participants were assessed by a pre-test; following a 14-day educational intervention with the cardiac arrhythmia simulator software in the ICUs, the same factors were measured again by a post-test in the four groups. Data were analyzed using two-way ANOVA. The significance level was set at p<0.05. Based on the randomized Solomon four-group design and our test results, using the cardiac arrhythmia simulator software as an intervention was effective for the nurses' learning, since a significant difference was found between pre-test and post-test in the first group (p<0.05). Other comparisons by ANOVA showed that there was no interaction between pre-test and intervention in any of the three knowledge areas of cardiac arrhythmias, their treatment, and their diagnosis (p>0.05). The use of the software-based simulator for cardiac arrhythmias was effective for nurses' learning in light of its attractive components and interactive method. The intervention increased the nurses' knowledge in the cognitive domain of cardiac arrhythmias, in addition to their diagnosis and treatment. The package can also be used for training in other areas, such as continuing medical education.

  11. Current and future trends in marine image annotation software

    NASA Astrophysics Data System (ADS)

    Gomes-Pereira, Jose Nuno; Auger, Vincent; Beisiegel, Kolja; Benjamin, Robert; Bergmann, Melanie; Bowden, David; Buhl-Mortensen, Pal; De Leo, Fabio C.; Dionísio, Gisela; Durden, Jennifer M.; Edwards, Luke; Friedman, Ariell; Greinert, Jens; Jacobsen-Stout, Nancy; Lerner, Steve; Leslie, Murray; Nattkemper, Tim W.; Sameoto, Jessica A.; Schoening, Timm; Schouten, Ronald; Seager, James; Singh, Hanumant; Soubigou, Olivier; Tojeira, Inês; van den Beld, Inge; Dias, Frederico; Tempera, Fernando; Santos, Ricardo S.

    2016-12-01

    Given the need to describe, analyze and index large quantities of marine imagery data for exploration and monitoring activities, a range of specialized image annotation tools have been developed worldwide. Image annotation, the process of transposing objects or events represented in a video or still image to the semantic level, may involve human interaction and computer-assisted solutions. Marine image annotation software (MIAS) have enabled over 500 publications to date. We review the functioning, application trends and developments by comparing general and advanced features of 23 different tools utilized in underwater image analysis. MIAS requiring human input are basically a graphical user interface, with a video player or image browser that recognizes a specific time code or image code, allowing events to be logged in a time-stamped (and/or geo-referenced) manner. MIAS differ from similar software by their capability to integrate data associated with video collection, the simplest being the position coordinates of the video recording platform. MIAS offer three main capabilities: annotating events in real time, annotating after acquisition, and interacting with a database. These range from simple annotation interfaces to full onboard data management systems with a variety of toolboxes. Advanced packages allow input and display of data from multiple sensors or multiple annotators via intranet or internet. Posterior human-mediated annotation often includes tools for data display and image analysis, e.g. length, area, image segmentation and point counts, and in a few cases the possibility of browsing and editing previous dive logs or analyzing the annotations. The interaction with a database allows the automatic integration of annotations from different surveys, repeated annotation and collaborative annotation of shared datasets, and browsing and querying of data. Progress in the field of automated annotation is mostly in post-processing, for stable platforms or still images. Integration into available MIAS is currently limited to semi-automated processes of pixel recognition through computer-vision modules that compile expert-based knowledge. Important topics aiding the choice of a specific software are outlined, the ideal software is discussed, and future trends are presented.
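
    The time-stamped, geo-referenced event logging common to these tools can be pictured with a minimal record structure; the field names below are illustrative assumptions, not the schema of any of the 23 reviewed packages.

        from dataclasses import dataclass, asdict
        from datetime import datetime, timezone

        # Hypothetical annotation record of the kind a MIAS might log.
        @dataclass
        class Annotation:
            video_id: str
            timestamp: datetime          # time code within the survey
            label: str                   # e.g. taxon or substrate class
            annotator: str
            lat: float                   # position of the recording platform
            lon: float
            depth_m: float
            confidence: float            # annotator's confidence, 0-1

        a = Annotation("dive_042",
                       datetime(2016, 7, 1, 10, 15, 3, tzinfo=timezone.utc),
                       "Lophelia pertusa", "observer_1", 38.52, -28.63, 612.0, 0.9)
        print(asdict(a))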

  12. Webinar Software: A Tool for Developing More Effective Lectures (Online or In-Person)

    PubMed Central

    Mayorga, Eduardo P.; Bekerman, Jesica G.; Palis, Ana G.

    2014-01-01

    Purpose: To describe the use of online seminars (webinars) to improve the learning experience for medical residents and foster critical thinking. Materials and Methods: Sixty-one online seminars (webinars) for residents were developed from April 2012 to February 2013. Residents attended the lectures in the same room as the presenter or from distant locations. Residents interacted with the presenter using their personal computers, tablets, or smartphones. They were able to ask questions and answer the instructor's multiple-choice or open-ended questions. The lecture dynamics consisted of: (1) the presentation of a clinical case by an expert on the clinical topic; (2) the instructor asked open-ended and multiple-choice questions about the problem-resolution process; (3) participants answered questions individually; (4) participants received feedback on their answers; (5) a brief lecture was given on the learning objectives and the content, also fostering interactive participation; (6) lectures were complemented with work documents. Results: This method allowed for exploration of learning of scientific knowledge and the acquisition of other medical competences (such as patient care, interpersonal and communication skills, and professionalism). The question-and-answer activity and immediate feedback gave attendees the chance to participate actively in the conference, reflect on the topic, correct conceptual errors, and exercise critical thinking. All these factors are necessary for learning. Conclusions: This modality, which facilitates interaction, active participation, and immediate feedback, could allow learners to acquire knowledge more effectively. PMID:24791102

  13. A common distributed language approach to software integration

    NASA Technical Reports Server (NTRS)

    Antonelli, Charles J.; Volz, Richard A.; Mudge, Trevor N.

    1989-01-01

    An important objective in software integration is the development of techniques to allow programs written in different languages to function together. Several approaches are discussed toward achieving this objective and the Common Distributed Language Approach is presented as the approach of choice.

  14. Human Benchmarking of Expert Systems. Literature Review

    DTIC Science & Technology

    1990-01-01

    effectiveness of the development procedures used in order to predict whether the application of similar approaches will likely have effective and...they used in their learning and problem solving. We will describe these approaches later. Reasoning. Reasoning usually includes inference. Because to ... in the software engineering process. For example, existing approaches to software evaluation in the military are based on a model of conventional

  15. Multidisciplinary Tool for Systems Analysis of Planetary Entry, Descent, and Landing

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2011-01-01

    Systems analysis of planetary entry, descent, and landing (EDL) is by nature a multidisciplinary activity, and SAPE is a tool that supports it. SAPE improves the performance of the systems analysis team by automating and streamlining the process, and this improvement can reduce the errors that stem from manual data transfer among discipline experts. SAPE is a multidisciplinary tool for systems analysis of planetary EDL for Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune, and Titan. It performs EDL systems analysis for any planet, operates cross-platform (i.e., Windows, Mac, and Linux operating systems), uses existing software components and open-source software to avoid software licensing issues, performs low-fidelity systems analysis in one hour on a computer that is comparable to an average laptop, and keeps discipline experts in the analysis loop. SAPE uses Python, a platform-independent, open-source language, for integration and for the user interface. Development has relied heavily on the object-oriented programming capabilities that are available in Python. Modules are provided to interface with commercial and government off-the-shelf software components (e.g., thermal protection systems and finite-element analysis). SAPE currently includes the following analysis modules: geometry, trajectory, aerodynamics, aerothermal, thermal protection system, and interface for structural sizing.
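
    Since SAPE is described as a Python tool whose discipline modules run in sequence while keeping experts in the loop, a minimal sketch of such a module pipeline is shown below; the class names, methods, and placeholder physics are invented for illustration and are not SAPE's actual interfaces.

        # Hypothetical discipline-module pipeline in the spirit of the description above.
        class AnalysisModule:
            name = "base"
            def run(self, state: dict) -> dict:
                """Read inputs from the shared state, return updated entries."""
                raise NotImplementedError

        class Trajectory(AnalysisModule):
            name = "trajectory"
            def run(self, state):
                # placeholder physics: peak deceleration scales with entry speed
                return {"peak_g": 0.4 * state["entry_velocity_km_s"]}

        class Aerothermal(AnalysisModule):
            name = "aerothermal"
            def run(self, state):
                # placeholder physics: heat flux grows with the square of entry speed
                return {"peak_heat_flux_w_cm2": 3.0 * state["entry_velocity_km_s"] ** 2}

        def run_pipeline(state, modules):
            for m in modules:                 # low-fidelity modules run in sequence
                state.update(m.run(state))
            return state

        print(run_pipeline({"entry_velocity_km_s": 11.0}, [Trajectory(), Aerothermal()]))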

  16. Does Expert Advice Improve Educational Choice?

    PubMed Central

    2015-01-01

    This paper reports evidence that an individual meeting with a study counselor at high school significantly improves the quality of choice of tertiary educational field, as self-assessed 18 months after graduation from college. To address endogeneity, we explore the variation in study counseling practices between schools as an instrumental variable (IV). Following careful scrutiny of the validity of the IV, our results indicate a significant and positive influence of study counseling on the quality of educational choice, foremost among males and those with low educated parents. The overall result is stable across a number of robustness checks. PMID:26692388

  17. Spacecraft attitude control using a smart control system

    NASA Technical Reports Server (NTRS)

    Buckley, Brian; Wheatcraft, Louis

    1992-01-01

    Traditionally, spacecraft attitude control has been implemented using control loops written in native code for a space-hardened processor. The Naval Research Lab has taken this approach during the development of the Attitude Control Electronics (ACE) package. After the system was developed and delivered, NRL decided to explore alternate technologies to accomplish this same task more efficiently. The approach taken by NRL was to implement the ACE control loops using expert systems technologies. The purpose of this effort was to: (1) research the capabilities required of an expert system in processing a classic closed-loop control algorithm; (2) research the development environment required to design and test an embedded expert system; (3) research the complexity of design and development of expert systems versus a conventional approach; and (4) test the resulting systems against the flight acceptance test software for both response and accuracy. Two expert systems were selected to implement the control loops. Criteria used for the selection of the expert systems included that they had to run in both embedded and ground-based environments. Using two different expert systems allowed a comparison of the real-time capabilities, inferencing capabilities, and the ground-based development environment. The two expert systems chosen for the evaluation were the Spacecraft Command Language (SCL) and NEXPERT Object. SCL is a smart control system produced for the NRL by Interface and Control Systems (ICS); it was developed to be used for real-time command, control, and monitoring of a new generation of spacecraft. NEXPERT Object is a commercially available product developed by Neuron Data. Results of the effort were evaluated using the ACE test bed. The ACE test bed had been developed and used to test the original flight hardware and software using simulators and flight-like interfaces, and it was used for testing the expert systems in a 'near-flight' environment. The technical approach, the system architecture, the development environments, knowledge base development, and results of this effort are detailed.

  18. Top 10 metrics for life science software good practices.

    PubMed

    Artaza, Haydee; Chue Hong, Neil; Corpas, Manuel; Corpuz, Angel; Hooft, Rob; Jimenez, Rafael C; Leskošek, Brane; Olivier, Brett G; Stourac, Jan; Svobodová Vařeková, Radka; Van Parys, Thomas; Vaughan, Daniel

    2016-01-01

    Metrics for assessing adoption of good development practices are a useful way to ensure that software is sustainable, reusable and functional. Sustainability means that the software used today will be available - and continue to be improved and supported - in the future. We report here an initial set of metrics that measure good practices in software development. This initiative differs from previously developed efforts in being a community-driven grassroots approach where experts from different organisations propose good software practices that have reasonable potential to be adopted by the communities they represent. We not only focus our efforts on understanding and prioritising good practices, we assess their feasibility for implementation and publish them here.

  19. Top 10 metrics for life science software good practices

    PubMed Central

    2016-01-01

    Metrics for assessing adoption of good development practices are a useful way to ensure that software is sustainable, reusable and functional. Sustainability means that the software used today will be available - and continue to be improved and supported - in the future. We report here an initial set of metrics that measure good practices in software development. This initiative differs from previously developed efforts in being a community-driven grassroots approach where experts from different organisations propose good software practices that have reasonable potential to be adopted by the communities they represent. We not only focus our efforts on understanding and prioritising good practices, we assess their feasibility for implementation and publish them here. PMID:27635232

  20. Rhea: a transparent and modular R pipeline for microbial profiling based on 16S rRNA gene amplicons

    PubMed Central

    Fischer, Sandra; Kumar, Neeraj

    2017-01-01

    The importance of 16S rRNA gene amplicon profiles for understanding the influence of microbes in a variety of environments coupled with the steep reduction in sequencing costs led to a surge of microbial sequencing projects. The expanding crowd of scientists and clinicians wanting to make use of sequencing datasets can choose among a range of multipurpose software platforms, the use of which can be intimidating for non-expert users. Among available pipeline options for high-throughput 16S rRNA gene analysis, the R programming language and software environment for statistical computing stands out for its power and increased flexibility, and the possibility to adhere to most recent best practices and to adjust to individual project needs. Here we present the Rhea pipeline, a set of R scripts that encode a series of well-documented choices for the downstream analysis of Operational Taxonomic Unit (OTU) tables, including normalization steps, alpha- and beta-diversity analysis, taxonomic composition, statistical comparisons, and calculation of correlations. Rhea is primarily a straightforward starting point for beginners, but can also be a framework for advanced users who can modify and expand the tool. As the community standards evolve, Rhea will adapt to always represent the current state-of-the-art in microbial profile analysis in the clear and comprehensive way allowed by the R language. Rhea scripts and documentation are freely available at https://lagkouvardos.github.io/Rhea. PMID:28097056
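
    Rhea itself is written in R, but the normalization and alpha-diversity steps it performs can be sketched in a few lines of Python on a toy OTU table (the numbers below are made up):

        import numpy as np

        # Toy OTU table: rows = OTUs, columns = samples.
        otu = np.array([[120,  30,   0],
                        [ 40,  60,  10],
                        [  0,  10, 200]], dtype=float)

        rel = otu / otu.sum(axis=0)                 # normalize to relative abundances

        def shannon(p):
            """Shannon diversity of one sample's relative-abundance vector."""
            p = p[p > 0]
            return -(p * np.log(p)).sum()

        alpha = [shannon(rel[:, j]) for j in range(rel.shape[1])]
        print(np.round(alpha, 3))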

  1. Rhea: a transparent and modular R pipeline for microbial profiling based on 16S rRNA gene amplicons.

    PubMed

    Lagkouvardos, Ilias; Fischer, Sandra; Kumar, Neeraj; Clavel, Thomas

    2017-01-01

    The importance of 16S rRNA gene amplicon profiles for understanding the influence of microbes in a variety of environments coupled with the steep reduction in sequencing costs led to a surge of microbial sequencing projects. The expanding crowd of scientists and clinicians wanting to make use of sequencing datasets can choose among a range of multipurpose software platforms, the use of which can be intimidating for non-expert users. Among available pipeline options for high-throughput 16S rRNA gene analysis, the R programming language and software environment for statistical computing stands out for its power and increased flexibility, and the possibility to adhere to most recent best practices and to adjust to individual project needs. Here we present the Rhea pipeline, a set of R scripts that encode a series of well-documented choices for the downstream analysis of Operational Taxonomic Unit (OTU) tables, including normalization steps, alpha- and beta-diversity analysis, taxonomic composition, statistical comparisons, and calculation of correlations. Rhea is primarily a straightforward starting point for beginners, but can also be a framework for advanced users who can modify and expand the tool. As the community standards evolve, Rhea will adapt to always represent the current state-of-the-art in microbial profile analysis in the clear and comprehensive way allowed by the R language. Rhea scripts and documentation are freely available at https://lagkouvardos.github.io/Rhea.

  2. Expert Decision-Making in Naturalistic Environments: A Summary of Research

    DTIC Science & Technology

    2005-03-01

    a number of descriptive decision theories arose (Plous, 1993). One of these is the rational choice model of decision-making (Janis & Mann, 1977...possible association between time pressure and increased levels of emotion. To date, the role played by emotion in decision-making has not been given... rational choice model seems to describe some decision events and Janis and Mann (1977) have highlighted emotion as a potential influence on decision

  3. Space shuttle main engine anomaly data and inductive knowledge based systems: Automated corporate expertise

    NASA Technical Reports Server (NTRS)

    Modesitt, Kenneth L.

    1987-01-01

    Progress is reported on the development of SCOTTY, an expert knowledge-based system to automate the analysis procedure following test firings of the Space Shuttle Main Engine (SSME). The integration of a large-scale relational data base system, a computer graphics interface for experts and end-user engineers, potential extension of the system to flight engines, application of the system for training of newly-hired engineers, technology transfer to other engines, and the essential qualities of good software engineering practices for building expert knowledge-based systems are among the topics discussed.

  4. SUVI Thematic Maps: A new tool for space weather forecasting

    NASA Astrophysics Data System (ADS)

    Hughes, J. M.; Seaton, D. B.; Darnel, J.

    2017-12-01

    The new Solar Ultraviolet Imager (SUVI) instruments aboard NOAA's GOES-R series satellites collect continuous, high-quality imagery of the Sun in six wavelengths. SUVI imagers produce at least one image every 10 seconds, or 8,640 images per day, considerably more data than observers can digest in real time. Over the projected 20-year lifetime of the four GOES-R series spacecraft, SUVI will provide critical imagery for space weather forecasters and produce an extensive but unwieldy archive. In order to condense the database into a dynamic and searchable form we have developed solar thematic maps, maps of the Sun with key features, such as coronal holes, flares, bright regions, quiet corona, and filaments, identified. Thematic maps will be used in NOAA's Space Weather Prediction Center to improve forecaster response time to solar events and generate several derivative products. Likewise, scientists use thematic maps to find observations of interest more easily. Using an expert-trained, naive Bayesian classifier to label each pixel, we create thematic maps in real-time. We created software to collect expert classifications of solar features based on SUVI images. Using this software, we compiled a database of expert classifications, from which we could characterize the distribution of pixels associated with each theme. Given new images, the classifier assigns each pixel the most appropriate label according to the trained distribution. Here we describe the software to collect expert training and the successes and limitations of the classifier. The algorithm excellently identifies coronal holes but fails to consistently detect filaments and prominences. We compare the Bayesian classifier to an artificial neural network, one of our attempts to overcome the aforementioned limitations. These results are very promising and encourage future research into an ensemble classification approach.
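
    The per-pixel naive Bayesian labelling described above can be sketched as follows; the two feature channels, theme means, and standard deviations are synthetic stand-ins rather than SUVI training statistics.

        import numpy as np

        # Toy expert-trained, per-pixel naive Bayes labeller with Gaussian
        # class-conditionals per channel and flat priors.
        rng = np.random.default_rng(0)
        themes = ["coronal_hole", "quiet_corona", "bright_region"]
        means  = np.array([[0.2, 0.1], [0.5, 0.4], [0.9, 0.8]])   # per-theme channel means
        stds   = np.array([[0.05, 0.05], [0.10, 0.10], [0.08, 0.08]])

        def classify(pixels):
            # log-likelihood of each pixel under each theme, channels independent
            ll = -0.5 * (((pixels[:, None, :] - means) / stds) ** 2).sum(axis=2) \
                 - np.log(stds).sum(axis=1)
            return np.array(themes)[ll.argmax(axis=1)]

        # Draw three synthetic pixels from known themes and label them.
        test = rng.normal(means[[0, 2, 1]], stds[[0, 2, 1]])
        print(classify(test))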

  5. Linking medical records to an expert system

    NASA Technical Reports Server (NTRS)

    Naeymi-Rad, Frank; Trace, David; Desouzaalmeida, Fabio

    1991-01-01

    This presentation will be done using the IMR-Entry (Intelligent Medical Record Entry) system. IMR-Entry is a software program developed as a front end to our diagnostic consultant software MEDAS (Medical Emergency Decision Assistance System). MEDAS is a diagnostic consultant system using a multimembership Bayesian design for its inference engine and relational database technology for its knowledge base maintenance. Research on MEDAS began at the University of Southern California and the Institute of Critical Care in the mid-1970s with support from NASA and NSF. The MEDAS project moved to Chicago in 1982; its current progress is due to collaboration between the Illinois Institute of Technology, The Chicago Medical School, Lake Forest College, and NASA at KSC. Since the purpose of an expert system is to derive a hypothesis, its communication vocabulary is limited to the features used by its knowledge base. The development of a comprehensive problem-based medical record entry system that could handshake with an expert system while simultaneously creating an electronic medical record was studied. IMR-E is a computer-based patient record that serves as a front end to the expert system MEDAS. IMR-E is a graphically oriented, comprehensive medical record. The program's major components are demonstrated.
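
    The multimembership Bayesian idea, in which each disorder receives its own independent update so that several disorders can be present simultaneously, can be illustrated with an odds-form calculation; all priors and likelihoods below are invented and do not come from MEDAS.

        # Each disorder gets an independent Bayesian update from the observed features.
        disorders = {
            # name: (prior, P(feature|disorder), P(feature|not disorder)) for two features
            "pneumothorax":  (0.05, [0.80, 0.30], [0.05, 0.10]),
            "pulm_embolism": (0.03, [0.60, 0.70], [0.08, 0.15]),
        }
        observed = [True, False]   # e.g. [feature 1 present, feature 2 absent]

        for name, (prior, p_given_d, p_given_not) in disorders.items():
            odds = prior / (1 - prior)
            for present, pd, pn in zip(observed, p_given_d, p_given_not):
                # multiply by the likelihood ratio for the feature being present or absent
                odds *= (pd / pn) if present else ((1 - pd) / (1 - pn))
            post = odds / (1 + odds)
            print(f"{name}: posterior {post:.2f}")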

  6. Plug into a Network.

    ERIC Educational Resources Information Center

    Vander Linden, Doug; Clark, Larry

    1994-01-01

    Stand-alone, single-user software programs for classroom use can be prohibitively expensive, compared to information-sharing network systems. Based on a Kansas district's experience, this article explains three types of networks (device-sharing, operating-system-based, and client-server-based) and discusses network protocols, software choices,…

  7. When Blood Cells Bend: Understanding Sickle Cell Disease

    MedlinePlus

    April 2012 issue. Wise Choices, Living with Sickle Cell Disease: see a sickle cell disease expert regularly.

  8. How to choose the right statistical software?—a method increasing the post-purchase satisfaction

    PubMed Central

    2015-01-01

    Nowadays, we live in the “data era”, where the use of statistical or data analysis software is inevitable in any research field. This means that the choice of the right software tool or platform is a strategic issue for a research department. Nevertheless, in many cases decision makers do not pay enough attention to a comprehensive and appropriate evaluation of what the market offers. Indeed, the choice often still depends on a few factors such as the researcher’s personal inclination, e.g., which software was used at the university or is already known. This is not wrong in principle, but in some cases it is not enough and might lead to a “dead end” situation, typically after months or years of investment already made in the wrong software. This article, far from being a full and complete guide to statistical software evaluation, aims to illustrate some key points of the decision process and to introduce an extended range of factors which can help in making the right choice, at least in potential. There is not enough literature about this topic, which is underestimated most of the time, both in the traditional literature and in the so-called “gray literature”, even if some documents or short pages can be found online. In any case, there seems to be no common, well-known standpoint on the process of software evaluation from the final user’s perspective. We suggest a multi-factor analysis leading to an evaluation matrix, intended as a flexible and customizable tool, aimed at providing a clearer picture of the available software alternatives, not in the abstract but in relation to the researcher’s own context and needs. This method is the result of about twenty years of the author’s experience in evaluating and using technical-computing software, and partially arises from research on these topics carried out as part of a project funded by the European Commission under the Lifelong Learning Programme 2011. PMID:26793368
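
    The evaluation-matrix idea the article argues for can be sketched as a simple weighted-scoring table; the criteria, weights, and scores below are placeholders to be replaced by a department's own context and needs.

        # Hypothetical multi-factor evaluation matrix for statistical software.
        criteria = {          # weight (sums to 1)
            "statistical coverage": 0.30,
            "total cost of ownership": 0.25,
            "learning curve / existing skills": 0.20,
            "integration and automation": 0.15,
            "vendor support and longevity": 0.10,
        }
        candidates = {        # score 1-5 per criterion, same order as above
            "package A": [5, 2, 4, 3, 4],
            "package B": [4, 4, 3, 4, 3],
            "package C": [3, 5, 5, 2, 3],
        }
        weights = list(criteria.values())
        for name, scores in candidates.items():
            total = sum(w * s for w, s in zip(weights, scores))
            print(f"{name}: {total:.2f}")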

  9. Design and Development of a User Interface for the Dynamic Model of Software Project Management.

    DTIC Science & Technology

    1988-03-01

    rectory of the user’s choice for future...the last choice selected. Let us assume for the sake of this tour that the user has selected all eight choices . ESTIMATED ACTUAL PROJECT SIZE DEFINITION...manipulation of varaibles in the * •. TJin~ca model "h ... ser Inter ace for the Dynamica model was designed b in iterative process of prototyping

  10. Development of expert system for biobased polymer material selection: food packaging application.

    PubMed

    Sanyang, M L; Sapuan, S M

    2015-10-01

    Biobased food packaging materials are gaining more attention owing to their intrinsic biodegradable nature and renewability. Selection of suitable biobased polymers for food packaging applications can be a tedious task, with potential mistakes in choosing the best materials. In this paper, an expert system was developed using Exsys Corvid software to select suitable biobased polymer materials for packaging fruits, dry food, and dairy products. An If-Then rule-based system was utilized to accomplish the material selection process, whereas a scoring system was formulated to facilitate the ranking of the selected materials. The expert system selected materials that satisfied all constraints, and the selection results were presented in order of suitability according to their scores. The expert system selected polylactic acid (PLA) as the most suitable material.
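
    The combination of If-Then screening rules with a scoring system for ranking can be sketched as follows; the properties, thresholds, and scores are illustrative assumptions, not values from the Exsys Corvid knowledge base.

        # Hypothetical material properties on a 1-5 scale.
        materials = {
            "PLA":    {"water_vapour_barrier": 3, "oxygen_barrier": 4, "cost": 3, "compostable": True},
            "PHA":    {"water_vapour_barrier": 4, "oxygen_barrier": 3, "cost": 2, "compostable": True},
            "starch": {"water_vapour_barrier": 1, "oxygen_barrier": 2, "cost": 5, "compostable": True},
        }

        def select(materials, application):
            # If-Then constraint: discard anything that violates a hard requirement.
            if application == "dry food":
                ok = {m: p for m, p in materials.items() if p["water_vapour_barrier"] >= 3}
            else:
                ok = dict(materials)
            # Score the survivors and rank them (higher is better).
            ranked = sorted(ok.items(),
                            key=lambda kv: kv[1]["oxygen_barrier"] + kv[1]["cost"],
                            reverse=True)
            return [m for m, _ in ranked]

        print(select(materials, "dry food"))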

  11. Identification of features of electronic prescribing systems to support quality and safety in primary care using a modified Delphi process.

    PubMed

    Sweidan, Michelle; Williamson, Margaret; Reeve, James F; Harvey, Ken; O'Neill, Jennifer A; Schattner, Peter; Snowdon, Teri

    2010-04-15

    Electronic prescribing is increasingly being used in primary care and in hospitals. Studies on the effects of e-prescribing systems have found evidence for both benefit and harm. The aim of this study was to identify features of e-prescribing software systems that support patient safety and quality of care and that are useful to the clinician and the patient, with a focus on improving the quality use of medicines. Software features were identified by a literature review, key informants and an expert group. A modified Delphi process was used with a 12-member multidisciplinary expert group to reach consensus on the expected impact of the features in four domains: patient safety, quality of care, usefulness to the clinician and usefulness to the patient. The setting was electronic prescribing in general practice in Australia. A list of 114 software features was developed. Most of the features relate to the recording and use of patient data, the medication selection process, prescribing decision support, monitoring drug therapy and clinical reports. The expert group rated 78 of the features (68%) as likely to have a high positive impact in at least one domain, 36 features (32%) as medium impact, and none as low or negative impact. Twenty seven features were rated as high positive impact across 3 or 4 domains including patient safety and quality of care. Ten features were considered "aspirational" because of a lack of agreed standards and/or suitable knowledge bases. This study defines features of e-prescribing software systems that are expected to support safety and quality, especially in relation to prescribing and use of medicines in general practice. The features could be used to develop software standards, and could be adapted if necessary for use in other settings and countries.

  12. Identification of features of electronic prescribing systems to support quality and safety in primary care using a modified Delphi process

    PubMed Central

    2010-01-01

    Background Electronic prescribing is increasingly being used in primary care and in hospitals. Studies on the effects of e-prescribing systems have found evidence for both benefit and harm. The aim of this study was to identify features of e-prescribing software systems that support patient safety and quality of care and that are useful to the clinician and the patient, with a focus on improving the quality use of medicines. Methods Software features were identified by a literature review, key informants and an expert group. A modified Delphi process was used with a 12-member multidisciplinary expert group to reach consensus on the expected impact of the features in four domains: patient safety, quality of care, usefulness to the clinician and usefulness to the patient. The setting was electronic prescribing in general practice in Australia. Results A list of 114 software features was developed. Most of the features relate to the recording and use of patient data, the medication selection process, prescribing decision support, monitoring drug therapy and clinical reports. The expert group rated 78 of the features (68%) as likely to have a high positive impact in at least one domain, 36 features (32%) as medium impact, and none as low or negative impact. Twenty seven features were rated as high positive impact across 3 or 4 domains including patient safety and quality of care. Ten features were considered "aspirational" because of a lack of agreed standards and/or suitable knowledge bases. Conclusions This study defines features of e-prescribing software systems that are expected to support safety and quality, especially in relation to prescribing and use of medicines in general practice. The features could be used to develop software standards, and could be adapted if necessary for use in other settings and countries. PMID:20398294

  13. Inductive knowledge acquisition experience with commercial tools for space shuttle main engine testing

    NASA Technical Reports Server (NTRS)

    Modesitt, Kenneth L.

    1990-01-01

    Since 1984, an effort has been underway at Rocketdyne, manufacturer of the Space Shuttle Main Engine (SSME), to automate much of the analysis procedure conducted after engine test firings. Previously published articles at national and international conferences have contained the context of and justification for this effort. Here, progress is reported in building the full system, including the extensions of integrating large databases with the system, known as Scotty. Inductive knowledge acquisition has proven itself to be a key factor in the success of Scotty. The combination of a powerful inductive expert system building tool (ExTran), a relational data base management system (Reliance), and software engineering principles and Computer-Assisted Software Engineering (CASE) tools makes for a practical, useful and state-of-the-art application of an expert system.
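
    Inductive knowledge acquisition of the kind ExTran performs, learning classification rules from labelled examples, can be approximated with a decision-tree learner; the features and labels below are synthetic and are not SSME test data.

        from sklearn.tree import DecisionTreeClassifier, export_text

        # Synthetic "past test firings": feature vectors and their outcome labels.
        X = [  # [peak_turbine_temp, vibration_level, chamber_pressure_drop]
            [870, 1.2, 0.5], [900, 3.1, 0.4], [865, 1.0, 2.2],
            [880, 1.1, 0.6], [910, 3.4, 0.5], [872, 1.3, 2.0],
        ]
        y = ["nominal", "bearing_wear", "injector_anomaly",
             "nominal", "bearing_wear", "injector_anomaly"]

        # Induce a small tree and print it as readable If-Then rules.
        tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
        print(export_text(tree, feature_names=["turbine_temp", "vibration", "pressure_drop"]))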

  14. Quality assessment of Isfahan Medical Faculty web site electronic services and prioritizing solutions using analytic hierarchy process approach.

    PubMed

    Hajrahimi, Nafiseh; Dehaghani, Sayed Mehdi Hejazi; Hajrahimi, Nargess; Sarmadi, Sima

    2014-01-01

    Implementing information technology in the best possible way can bring many advantages, such as providing electronic services and facilitating tasks. Therefore, assessment of service-providing systems is a way to improve their quality; this includes e-commerce, e-government, e-banking, and e-learning systems. This study aimed to evaluate the electronic services on the website of Isfahan University of Medical Sciences in order to propose solutions to improve them, and to rank the solutions based on the factors that enhance the quality of electronic services using the analytic hierarchy process (AHP) method. A non-parametric test was used to assess the quality of the electronic services. The assessment of the proposed solutions was based on the Aqual model, and they were prioritized using the AHP approach. The AHP approach was used because it directly incorporates experts' judgments in the model and leads to more objective results in the analysis and in prioritizing the risks. After evaluating the quality of the electronic services, a multi-criteria decision-making framework was used to prioritize the proposed solutions, using non-parametric tests and the AHP approach with Expert Choice software. The results showed that students were satisfied with most of the indicators. Only a few indicators received low satisfaction from students, including design attractiveness, the amount of explanation and detail of information, honesty and responsiveness of authorities, and the role of e-services in the user's relationship with the university. After interviews with Information and Communications Technology (ICT) experts at the university, measurement criteria and solutions to improve the quality were collected, and the best solutions were selected using the Expert Choice software. According to the results, the solution "controlling and improving the process of handling users' complaints" is of the utmost importance, and authorities should implement it on the website and place great importance on keeping this process up to date. Although 4 out of the 22 indicators used to test the hypothesis were not confirmed, the results show that these assumptions are accepted at the 95% confidence level. To improve the quality of electronic services, special attention should be paid to "service interaction." As the results showed, having "controlling and improving the process of handling users' complaints" on the website is the first and most important solution, and the process of "changing brand/factory name/address in the text of the factory license/renewal or modification of manufacturing license/changing the formula" is the least important one.
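
    Alongside the priority weights, AHP tools such as Expert Choice report a consistency check on each judgment matrix; the calculation is sketched below on a made-up matrix.

        import numpy as np

        # Consistency index CI = (lambda_max - n) / (n - 1), compared with Saaty's
        # random index RI; the judgment matrix A is a made-up example.
        A = np.array([[1.0, 2.0, 4.0],
                      [0.5, 1.0, 3.0],
                      [0.25, 1/3, 1.0]])
        n = A.shape[0]
        eigvals, _ = np.linalg.eig(A)
        lam_max = eigvals.real.max()
        ci = (lam_max - n) / (n - 1)
        ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]     # Saaty's random indices
        cr = ci / ri
        print(f"lambda_max={lam_max:.3f}  CI={ci:.3f}  CR={cr:.3f}  (CR<0.10 acceptable)")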

  15. [4 years' experience with use of the "Ambulancia, Oddelenie, Sklad" software made by Softprogress Piestany installed on the local computer network in the Ophthalmology Department of the Trnava Hospital].

    PubMed

    Gavorník, P; Kristofovicová, A

    1997-10-01

    The authors describe their positive and negative experience with using computers connected to a local network in an ophthalmology department. They stress the need to respect occupational hygiene requirements from the beginning of the planning process when choosing suitable hardware, software, and workplace equipment. A sufficient number of workplace terminals and an appropriate software choice are considered necessary; the latter allows complex data processing and the creation of a common patient database for the outpatient department, the ward, and the storage files. They also point out the risks of connecting the local network to other workplaces, including the possibility of undesirable information leaks and the penetration of viruses into the network.

  16. Proceedings of the Workshop on software tools for distributed intelligent control systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herget, C.J.

    1990-09-01

    The Workshop on Software Tools for Distributed Intelligent Control Systems was organized by Lawrence Livermore National Laboratory for the United States Army Headquarters Training and Doctrine Command and the Defense Advanced Research Projects Agency. The goals of the workshop were to identify the current state of the art in tools which support control systems engineering design and implementation; identify research issues associated with writing software tools which would provide a design environment to assist engineers in multidisciplinary control design and implementation; formulate a potential investment strategy to resolve the research issues and develop public domain code which can form the core of more powerful engineering design tools; and recommend test cases to focus the software development process and test associated performance metrics. Recognizing that the development of software tools for distributed intelligent control systems will require a multidisciplinary effort, experts in systems engineering, control systems engineering, and computer science were invited to participate in the workshop. In particular, experts who could address the following topics were selected: operating systems, engineering data representation and manipulation, emerging standards for manufacturing data, mathematical foundations, coupling of symbolic and numerical computation, user interface, system identification, system representation at different levels of abstraction, system specification, system design, verification and validation, automatic code generation, and integration of modular, reusable code.

  17. Feedback in Technology-Based Instruction: Learner Preferences

    ERIC Educational Resources Information Center

    Lefevre, David; Cox, Benita

    2016-01-01

    This research investigates learner preferences for the format of feedback when using technology-based instruction (TBI). The primary method of data collection was to provide subjects with a range of options for TBI feedback following responses to multiple-choice questions and then observe their choices. A software tool both presented the feedback…

  18. Constructing Benchmark Databases and Protocols for Medical Image Analysis: Diabetic Retinopathy

    PubMed Central

    Kauppi, Tomi; Kämäräinen, Joni-Kristian; Kalesnykiene, Valentina; Sorri, Iiris; Uusitalo, Hannu; Kälviäinen, Heikki

    2013-01-01

    We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions. PMID:23956787
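
    The database's ground truth is built by combining manual segmentations from multiple experts. The paper defines its own combination method; the Python sketch below only illustrates one generic fusion rule (pixel-wise majority voting over binary masks) on invented toy masks, not the DiaRetDB1 algorithm.

        import numpy as np

        # Illustrative only: fuse binary lesion masks from several experts by
        # pixel-wise majority vote. This is a generic fusion rule, not the
        # specific combination method used for DiaRetDB1.
        def majority_vote(masks):
            """masks: list of equally sized 2-D arrays with values 0/1."""
            stack = np.stack(masks).astype(float)   # shape: (experts, H, W)
            votes = stack.mean(axis=0)              # fraction of experts marking each pixel
            return (votes >= 0.5).astype(np.uint8)  # lesion if at least half agree

        # Three toy 4x4 expert annotations of the same image region.
        expert_masks = [
            np.array([[0, 0, 1, 1], [0, 1, 1, 1], [0, 0, 1, 0], [0, 0, 0, 0]]),
            np.array([[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 0], [0, 0, 0, 0]]),
            np.array([[0, 1, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]]),
        ]
        print(majority_vote(expert_masks))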

  19. Network approaches for expert decisions in sports.

    PubMed

    Glöckner, Andreas; Heinen, Thomas; Johnson, Joseph G; Raab, Markus

    2012-04-01

    This paper focuses on a model comparison to explain choices based on gaze behavior via simulation procedures. We tested two classes of models, a parallel constraint satisfaction (PCS) artificial neuronal network model and an accumulator model in a handball decision-making task from a lab experiment. Both models predict action in an option-generation task in which options can be chosen from the perspective of a playmaker in handball (i.e., passing to another player or shooting at the goal). Model simulations are based on a dataset of generated options together with gaze behavior measurements from 74 expert handball players for 22 pieces of video footage. We implemented both classes of models as deterministic vs. probabilistic models including and excluding fitted parameters. Results indicated that both classes of models can fit and predict participants' initially generated options based on gaze behavior data, and that overall, the classes of models performed about equally well. Early fixations were thereby particularly predictive for choices. We conclude that the analyses of complex environments via network approaches can be successfully applied to the field of experts' decision making in sports and provide perspectives for further theoretical developments. Copyright © 2011 Elsevier B.V. All rights reserved.
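
    As a rough illustration of the accumulator idea, the Python sketch below lets each fixation add duration-weighted evidence to the fixated option and predicts the first option to cross a threshold. The parameter values and fixation data are invented; this is not the authors' fitted model.

        # Toy accumulator: each gaze fixation adds evidence (weighted by its
        # duration) to the fixated option; the first option whose accumulated
        # evidence crosses a threshold is predicted as the generated choice.
        # Threshold and fixation data are made up for illustration only.
        from collections import defaultdict

        def predict_choice(fixations, threshold=1.0):
            """fixations: sequence of (option, duration_in_seconds) in viewing order."""
            evidence = defaultdict(float)
            for option, duration in fixations:
                evidence[option] += duration
                if evidence[option] >= threshold:
                    return option, dict(evidence)   # early fixations can decide quickly
            # if no option crosses the threshold, fall back to the strongest one
            best = max(evidence, key=evidence.get)
            return best, dict(evidence)

        fixations = [("pass_to_left_wing", 0.4), ("shoot", 0.3), ("pass_to_left_wing", 0.7)]
        print(predict_choice(fixations))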

  20. Modernization of software quality assurance

    NASA Technical Reports Server (NTRS)

    Bhaumik, Gokul

    1988-01-01

    The customer's satisfaction depends not only on functional performance but also on the quality characteristics of the software products. An examination of this quality aspect of software products will provide a clear, well-defined framework for quality assurance functions, which improve the life-cycle activities of software development. Software developers must be aware of the following aspects which have been expressed by many quality experts: quality cannot be added on; the level of quality built into a program is a function of the quality attributes employed during the development process; and finally, quality must be managed. These concepts have guided our development of the following definition for a Software Quality Assurance function: Software Quality Assurance is a formal, planned approach of actions designed to evaluate the degree of an identifiable set of quality attributes present in all software systems and their products. This paper is an explanation of how this definition was developed and how it is used.

  1. Proposal for a CLIPS software library

    NASA Technical Reports Server (NTRS)

    Porter, Ken

    1991-01-01

    This paper is a proposal to create a software library for the C Language Integrated Production System (CLIPS) expert system shell developed by NASA. Many innovative ideas for extending CLIPS were presented at the First CLIPS Users Conference, including useful user and database interfaces. CLIPS developers would benefit from a software library of reusable code. The CLIPS Users Group should establish a software library-- a course of action to make that happen is proposed. Open discussion to revise this library concept is essential, since only a group effort is likely to succeed. A response form intended to solicit opinions and support from the CLIPS community is included.

  2. ElectroMagnetoEncephalography Software: Overview and Integration with Other EEG/MEG Toolboxes

    PubMed Central

    Peyk, Peter; De Cesarei, Andrea; Junghöfer, Markus

    2011-01-01

    EMEGS (electromagnetic encephalography software) is a MATLAB toolbox designed to provide novice as well as expert users in the field of neuroscience with a variety of functions to perform analysis of EEG and MEG data. The software consists of a set of graphical interfaces devoted to preprocessing, analysis, and visualization of electromagnetic data. Moreover, it can be extended using a plug-in interface. Here, an overview of the capabilities of the toolbox is provided, together with a simple tutorial for both a standard ERP analysis and a time-frequency analysis. Latest features and future directions of the software development are presented in the final section. PMID:21577273

  3. ElectroMagnetoEncephalography software: overview and integration with other EEG/MEG toolboxes.

    PubMed

    Peyk, Peter; De Cesarei, Andrea; Junghöfer, Markus

    2011-01-01

    EMEGS (electromagnetic encephalography software) is a MATLAB toolbox designed to provide novice as well as expert users in the field of neuroscience with a variety of functions to perform analysis of EEG and MEG data. The software consists of a set of graphical interfaces devoted to preprocessing, analysis, and visualization of electromagnetic data. Moreover, it can be extended using a plug-in interface. Here, an overview of the capabilities of the toolbox is provided, together with a simple tutorial for both a standard ERP analysis and a time-frequency analysis. Latest features and future directions of the software development are presented in the final section.

  4. Expert system decision support for low-cost launch vehicle operations

    NASA Technical Reports Server (NTRS)

    Szatkowski, G. P.; Levin, Barry E.

    1991-01-01

    Progress in assessing the feasibility, benefits, and risks associated with AI expert systems applied to low cost expendable launch vehicle systems is described. Part one identified potential application areas in vehicle operations and on-board functions, assessed measures of cost benefit, and identified key technologies to aid in the implementation of decision support systems in this environment. Part two of the program began the development of prototypes to demonstrate real-time vehicle checkout with controller and diagnostic/analysis intelligent systems and to gather true measures of cost savings vs. conventional software, verification and validation requirements, and maintainability improvement. The main objective of the expert advanced development projects was to provide a robust intelligent system for control/analysis that must be performed within a specified real-time window in order to meet the demands of the given application. The efforts to develop the two prototypes are described. Prime emphasis was on a controller expert system to show real-time performance in a cryogenic propellant loading application and safety validation implementation of this system experimentally, using commercial-off-the-shelf software tools and object-oriented programming techniques. This smart ground support equipment prototype is based in C with embedded expert system rules written in the CLIPS protocol. The relational database, ORACLE, provides non-real-time data support. The second demonstration develops the vehicle/ground intelligent automation concept, from phase one, to show cooperation between multiple expert systems. This automated test conductor (ATC) prototype utilizes a knowledge-bus approach for intelligent information processing by use of virtual sensors and blackboards to solve complex problems. It incorporates distributed processing of real-time data and object-oriented techniques for command, configuration control, and auto-code generation.

  5. Discus: investigating subjective judgment of optic disc damage.

    PubMed

    Denniss, Jonathan; Echendu, Damian; Henson, David B; Artes, Paul H

    2011-01-01

    To describe a software package (Discus) for investigating clinicians' subjective assessment of optic disc damage [diagnostic accuracy in detecting visual field (VF) damage, decision criteria, and agreement with a panel of experts] and to provide reference data from a group of expert observers. Optic disc images were selected from patients with manifest or suspected glaucoma or ocular hypertension who attended the Manchester Royal Eye Hospital. Eighty images came from eyes without evidence of VF loss in at least four consecutive tests (VF negatives), and 20 images from eyes with repeatable VF loss (VF positives). Software was written to display these images in randomized order, for up to 60 s. Expert observers (n = 12) rated optic disc damage on a 5-point scale (definitely healthy, probably healthy, not sure, probably damaged, and definitely damaged). Optic disc damage as determined by the expert observers predicted VF loss with less than perfect accuracy (mean area under receiver-operating characteristic curve, 0.78; range, 0.72 to 0.85). When the responses were combined across the panel of experts, the area under receiver-operating characteristic curve reached 0.87, corresponding to a sensitivity of ∼60% at 90% specificity. Although the observers' performances were similar, there were large differences between the criteria they adopted (p < 0.001), even though all observers had been given identical instructions. Discus provides a simple and rapid means for assessing important aspects of optic disc interpretation. The data from the panel of expert observers provide a reference against which students, trainees, and clinicians may compare themselves. The program and the analyses described in this article are freely accessible from http://www.discusproject.blogspot.com/.
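
    For readers unfamiliar with the analysis, the area under the ROC curve for ordinal ratings can be computed as the probability that a randomly chosen VF-positive eye receives a higher damage rating than a randomly chosen VF-negative eye, with ties counted as one half. The Python sketch below shows this calculation on invented ratings; these are not the Discus data.

        # Illustrative ROC analysis for ordinal ratings: the area under the ROC
        # curve equals the probability that a randomly chosen VF-positive eye
        # receives a higher damage rating than a randomly chosen VF-negative eye
        # (ties count 1/2). The ratings below are invented.
        def auc_from_ratings(pos_ratings, neg_ratings):
            wins = 0.0
            for p in pos_ratings:
                for n in neg_ratings:
                    if p > n:
                        wins += 1.0
                    elif p == n:
                        wins += 0.5
            return wins / (len(pos_ratings) * len(neg_ratings))

        vf_positive = [5, 4, 4, 3, 5, 2]            # ratings for eyes with repeatable VF loss
        vf_negative = [1, 2, 3, 1, 2, 4, 1, 3]      # ratings for eyes without VF loss
        print(f"AUC = {auc_from_ratings(vf_positive, vf_negative):.2f}")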

  6. Down syndrome: coercion and eugenics.

    PubMed

    McCabe, Linda L; McCabe, Edward R B

    2011-08-01

    Experts agree that coercion by insurance companies or governmental authorities to limit reproductive choice constitutes a eugenic practice. We discuss discrimination against families of children with Down syndrome who chose not to have prenatal testing or chose to continue a pregnancy after a prenatal diagnosis. We argue that this discrimination represents economic and social coercion to limit reproductive choice, and we present examples of governmental rhetoric and policies condoning eugenics and commercial policies meeting criteria established by experts for eugenics. Our purpose is to sensitize the clinical genetics community to these issues as we attempt to provide the most neutral nondirective prenatal genetic counseling we can, and as we provide postnatal care and counseling to children with Down syndrome and their families. We are concerned that if eugenic policies and practices targeting individuals with Down syndrome and their families are tolerated by clinical geneticists and the broader citizenry, then we increase the probability of eugenics directed toward other individuals and communities.

  7. Wireless smart meters and public acceptance: the environment, limited choices, and precautionary politics.

    PubMed

    Hess, David J; Coley, Jonathan S

    2014-08-01

    Wireless smart meters (WSMs) promise numerous environmental benefits, but they have been installed without full consideration of public acceptance issues. Although societal-implications research and regulatory policy have focused on privacy, security, and accuracy issues, our research indicates that health concerns have played an important role in the public policy debates that have emerged in California. Regulatory bodies do not recognize non-thermal health effects for non-ionizing electromagnetic radiation, but both homeowners and counter-experts have contested the official assurances that WSMs pose no health risks. Similarities and differences with the existing social science literature on mobile phone masts are discussed, as are the broader political implications of framing an alternative policy based on an opt-out choice. The research suggests conditions under which health-oriented precautionary politics can be particularly effective, namely, if there is a mandatory technology, a network of counter-experts, and a broader context of democratic contestation.

  8. Automated software development workstation

    NASA Technical Reports Server (NTRS)

    1986-01-01

    Engineering software development was automated using an expert system (rule-based) approach. The use of this technology offers benefits not available from current software development and maintenance methodologies. A workstation was built with a library or program data base with methods for browsing the designs stored; a system for graphical specification of designs including a capability for hierarchical refinement and definition in a graphical design system; and an automated code generation capability in FORTRAN. The workstation was then used in a demonstration with examples from an attitude control subsystem design for the space station. Documentation and recommendations are presented.

  9. IEEE/AIAA/NASA Digital Avionics Systems Conference, 9th, Virginia Beach, VA, Oct. 15-18, 1990, Proceedings

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The present conference on digital avionics discusses vehicle-management systems, spacecraft avionics, special vehicle avionics, communication/navigation/identification systems, software qualification and quality assurance, launch-vehicle avionics, Ada applications, sensor and signal processing, general aviation avionics, automated software development, design-for-testability techniques, and avionics-software engineering. Also discussed are optical technology and systems, modular avionics, fault-tolerant avionics, commercial avionics, space systems, data buses, crew-station technology, embedded processors and operating systems, AI and expert systems, data links, and pilot/vehicle interfaces.

  10. A parallel strategy for implementing real-time expert systems using CLIPS

    NASA Technical Reports Server (NTRS)

    Ilyes, Laszlo A.; Villaseca, F. Eugenio; Delaat, John

    1994-01-01

    As evidenced by current literature, there appears to be a continued interest in the study of real-time expert systems. It is generally recognized that speed of execution is only one consideration when designing an effective real-time expert system. Some other features one must consider are the expert system's ability to perform temporal reasoning, handle interrupts, prioritize data, contend with data uncertainty, and perform context focusing as dictated by the incoming data to the expert system. This paper presents a strategy for implementing a real time expert system on the iPSC/860 hypercube parallel computer using CLIPS. The strategy takes into consideration not only the execution time of the software, but also those features which define a true real-time expert system. The methodology is then demonstrated using a practical implementation of an expert system which performs diagnostics on the Space Shuttle Main Engine (SSME). This particular implementation uses an eight node hypercube to process ten sensor measurements in order to simultaneously diagnose five different failure modes within the SSME. The main program is written in ANSI C and embeds CLIPS to better facilitate and debug the rule based expert system.

  11. An expert-based model for selecting the most suitable substrate material type for antenna circuits

    NASA Astrophysics Data System (ADS)

    AL-Oqla, Faris M.; Omar, Amjad A.

    2015-06-01

    Quality and properties of microwave circuits depend on all the circuit components. One of these components is the substrate. The process of substrate material selection is a decision-making problem that involves multicriteria with objectives that are diverse and conflicting. The aim of this work was to select the most suitable substrate material type to be used in antennas in the microwave frequency range that gives best performance and reliability of the substrate. For this purpose, a model was built to ease the decision-making that includes hierarchical alternatives and criteria. The substrate material type options considered were limited to fiberglass-reinforced epoxy laminates (FR4 εr = 4.8), aluminium (III) oxide (alumina εr = 9.6), gallium arsenide III-V compound (GaAs εr = 12.8) and PTFE composites reinforced with glass microfibers (Duroid εr = 2.2-2.3). To assist in building the model and making decisions, the analytical hierarchy process (AHP) was used. The decision-making process revealed that alumina substrate material type was the most suitable choice for the antennas in the microwave frequency range that yields best performance and reliability. In addition, both the size of the circuit and the loss tangent of the substrates were found to be the most contributing subfactors in the antenna circuit specifications criterion. Experimental assessments were conducted utilising The Expert Choice™ software. The judgments were tested and found to be precise, consistent and justifiable, and the marginal inconsistency values were found to be very narrow. A sensitivity analysis was also presented to demonstrate the confidence in the drawn conclusions.

  12. Human vs. Computer Diagnosis of Students' Natural Selection Knowledge: Testing the Efficacy of Text Analytic Software

    NASA Astrophysics Data System (ADS)

    Nehm, Ross H.; Haertig, Hendrik

    2012-02-01

    Our study examines the efficacy of Computer Assisted Scoring (CAS) of open-response text relative to expert human scoring within the complex domain of evolutionary biology. Specifically, we explored whether CAS can diagnose the explanatory elements (or Key Concepts) that comprise undergraduate students' explanatory models of natural selection with equal fidelity as expert human scorers in a sample of >1,000 essays. We used SPSS Text Analysis 3.0 to perform our CAS and measure Kappa values (inter-rater reliability) of KC detection (i.e., computer-human rating correspondence). Our first analysis indicated that the text analysis functions (or extraction rules) developed and deployed in SPSSTA to extract individual Key Concepts (KCs) from three different items differing in several surface features (e.g., taxon, trait, type of evolutionary change) produced "substantial" (Kappa 0.61-0.80) or "almost perfect" (0.81-1.00) agreement. The second analysis explored the measurement of human-computer correspondence for KC diversity (the number of different accurate knowledge elements) in the combined sample of all 827 essays. Here we found outstanding correspondence; extraction rules generated using one prompt type are broadly applicable to other evolutionary scenarios (e.g., bacterial resistance, cheetah running speed, etc.). This result is encouraging, as it suggests that the development of new item sets may not necessitate the development of new text analysis rules. Overall, our findings suggest that CAS tools such as SPSS Text Analysis may compensate for some of the intrinsic limitations of currently used multiple-choice Concept Inventories designed to measure student knowledge of natural selection.
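
    The Kappa values reported above measure agreement between the computer and human scorers; for two raters this is commonly Cohen's kappa. The Python sketch below shows the basic kappa calculation for two raters scoring the presence or absence of a key concept; the labels are invented for illustration.

        # Cohen's kappa for two raters (e.g., human expert vs. text-analysis rules)
        # scoring the same essays for presence (1) or absence (0) of a key concept.
        # The labels below are invented for illustration.
        def cohens_kappa(rater_a, rater_b):
            assert len(rater_a) == len(rater_b)
            n = len(rater_a)
            observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
            categories = set(rater_a) | set(rater_b)
            expected = sum(
                (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
            )
            return (observed - expected) / (1 - expected)

        human    = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
        computer = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]
        print(f"kappa = {cohens_kappa(human, computer):.2f}")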

  13. Optimizing in a complex world: A statistician's role in decision making

    DOE PAGES

    Anderson-Cook, Christine M.

    2016-08-09

    As applied statisticians increasingly participate as active members of problem-solving and decision-making teams, our role continues to evolve. Historically, we may have been seen as those who can help with data collection strategies or answer a specific question from a set of data. Nowadays, we are or strive to be more deeply involved throughout the entire problem-solving process. An emerging role is to provide a set of leading choices from which subject matter experts and managers can choose to make informed decisions. A key to success is to provide vehicles for understanding the trade-offs between candidates and interpreting the merits of each choice in the context of the decision-makers' priorities. To achieve this objective, it is helpful to be able (a) to help subject matter experts identify quantitative criteria that match their priorities, (b) eliminate non-competitive choices through the use of a Pareto front, and (c) provide summary tools from which the trade-offs between alternatives can be quantitatively evaluated and discussed. A structured but flexible process for contributing to team decisions is described for situations when all choices can easily be enumerated as well as when a search algorithm to explore a vast number of potential candidates is required. In conclusion, a collection of diverse examples ranging from model selection, through multiple response optimization, to designing an experiment illustrates the approach.
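
    Step (b), eliminating non-competitive choices with a Pareto front, can be illustrated in a few lines of Python: a candidate is dropped only if some other candidate is at least as good on every criterion and strictly better on at least one. The candidates and criterion values below are invented.

        # Illustrative Pareto-front filter: keep only candidates that are not
        # dominated by any other candidate. Each candidate has two criteria that
        # are both to be maximized; the numbers are invented for illustration.
        def pareto_front(candidates):
            """candidates: dict name -> tuple of criterion values (larger is better)."""
            front = {}
            for name, scores in candidates.items():
                dominated = any(
                    all(o >= s for o, s in zip(other, scores)) and other != scores
                    for other_name, other in candidates.items() if other_name != name
                )
                if not dominated:
                    front[name] = scores
            return front

        candidates = {
            "design A": (0.90, 0.40),
            "design B": (0.70, 0.80),
            "design C": (0.60, 0.60),   # dominated by design B
            "design D": (0.50, 0.95),
        }
        print(pareto_front(candidates))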

  14. Optimizing in a complex world: A statistician's role in decision making

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-Cook, Christine M.

    As applied statisticians increasingly participate as active members of problem-solving and decision-making teams, our role continues to evolve. Historically, we may have been seen as those who can help with data collection strategies or answer a specific question from a set of data. Nowadays, we are or strive to be more deeply involved throughout the entire problem-solving process. An emerging role is to provide a set of leading choices from which subject matter experts and managers can choose to make informed decisions. A key to success is to provide vehicles for understanding the trade-offs between candidates and interpreting the merits of each choice in the context of the decision-makers' priorities. To achieve this objective, it is helpful to be able (a) to help subject matter experts identify quantitative criteria that match their priorities, (b) eliminate non-competitive choices through the use of a Pareto front, and (c) provide summary tools from which the trade-offs between alternatives can be quantitatively evaluated and discussed. A structured but flexible process for contributing to team decisions is described for situations when all choices can easily be enumerated as well as when a search algorithm to explore a vast number of potential candidates is required. In conclusion, a collection of diverse examples ranging from model selection, through multiple response optimization, to designing an experiment illustrates the approach.

  15. Expert system verification and validation study. Phase 2: Requirements Identification. Delivery 2: Current requirements applicability

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The second phase of a task is described which has the ultimate purpose of ensuring that adequate Expert Systems (ESs) Verification and Validation (V and V) tools and techniques are available for Space Station Freedom Program Knowledge Based Systems development. The purpose of this phase is to recommend modifications to current software V and V requirements which will extend the applicability of the requirements to NASA ESs.

  16. Robotic air vehicle. Blending artificial intelligence with conventional software

    NASA Technical Reports Server (NTRS)

    Mcnulty, Christa; Graham, Joyce; Roewer, Paul

    1987-01-01

    The Robotic Air Vehicle (RAV) system is described. The program's objectives were to design, implement, and demonstrate cooperating expert systems for piloting robotic air vehicles. The development of this system merges conventional programming used in passive navigation with Artificial Intelligence techniques such as voice recognition, spatial reasoning, and expert systems. The individual components of the RAV system are discussed as well as their interactions with each other and how they operate as a system.

  17. Design and Implementation of an Intelligent Cost Estimation Model for Decision Support System Software

    DTIC Science & Technology

    1990-09-01

    The COCOMO model, which stands for COnstructive COst MOdel, was developed by Barry W. Boehm. A cost estimation model is described which uses an expert system to automate the Intermediate COnstructive Cost Estimation MOdel (COCOMO).
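
    For context, the Intermediate COCOMO effort equation takes the form effort = a * KLOC^b * EAF, where EAF is the product of the cost-driver multipliers. The Python sketch below uses coefficients commonly cited for Boehm's intermediate model; the project size and cost-driver ratings are hypothetical, and the snippet does not reproduce the expert system described in the record.

        # Sketch of the Intermediate COCOMO effort equation:
        #   effort (person-months) = a * (KLOC ** b) * EAF
        # where EAF is the product of the cost-driver multipliers. Coefficients
        # below are commonly cited values for intermediate COCOMO; the size and
        # cost-driver ratings are hypothetical.
        MODES = {                 # (a, b) per development mode
            "organic":       (3.2, 1.05),
            "semi-detached": (3.0, 1.12),
            "embedded":      (2.8, 1.20),
        }

        def intermediate_cocomo(kloc, mode, cost_drivers):
            a, b = MODES[mode]
            eaf = 1.0
            for multiplier in cost_drivers.values():
                eaf *= multiplier
            return a * (kloc ** b) * eaf

        drivers = {"RELY": 1.15, "CPLX": 1.15, "ACAP": 0.86, "TOOL": 0.91}
        print(f"{intermediate_cocomo(32, 'organic', drivers):.1f} person-months")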

  18. [Research & development on computer expert system for forensic bones estimation].

    PubMed

    Zhao, Jun-ji; Zhang, Jan-zheng; Liu, Nin-guo

    2005-08-01

    To build an expert system for forensic bone estimation. By using the object-oriented method, employing statistical data from forensic anthropology, combining frame-based knowledge representation of the statistical data with production rules, and also using fuzzy matching and the DS (Dempster-Shafer) evidence theory method. Software for the forensic estimation of sex, age and height with an open knowledge base was designed. This system is reliable and effective, and it would be a good assistant for the forensic technician.
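
    The DS evidence theory mentioned above combines independent bodies of evidence with Dempster's rule. The Python sketch below applies the rule to two invented mass functions over a two-hypothesis frame (sex estimation); it is a generic illustration, not the published system's knowledge base.

        from itertools import product

        # Generic Dempster-Shafer combination of two bodies of evidence over a
        # small frame of discernment (here: sex estimation from two skeletal
        # traits). The mass values are invented and only illustrate Dempster's rule.
        def combine(m1, m2):
            """m1, m2: dicts mapping frozenset hypotheses to mass; returns combined masses."""
            combined = {}
            conflict = 0.0
            for (h1, v1), (h2, v2) in product(m1.items(), m2.items()):
                inter = h1 & h2
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + v1 * v2
                else:
                    conflict += v1 * v2   # mass assigned to contradictory hypotheses
            return {h: v / (1.0 - conflict) for h, v in combined.items()}

        FRAME = frozenset({"male", "female"})
        pelvis_evidence = {frozenset({"female"}): 0.7, FRAME: 0.3}
        skull_evidence  = {frozenset({"female"}): 0.5, frozenset({"male"}): 0.2, FRAME: 0.3}
        for hypothesis, mass in combine(pelvis_evidence, skull_evidence).items():
            print(set(hypothesis), round(mass, 3))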

  19. Use (and abuse) of expert elicitation in support of decision making for public policy

    PubMed Central

    Morgan, M. Granger

    2014-01-01

    The elicitation of scientific and technical judgments from experts, in the form of subjective probability distributions, can be a valuable addition to other forms of evidence in support of public policy decision making. This paper explores when it is sensible to perform such elicitation and how that can best be done. A number of key issues are discussed, including topics on which there are, and are not, experts who have knowledge that provides a basis for making informed predictive judgments; the inadequacy of only using qualitative uncertainty language; the role of cognitive heuristics and of overconfidence; the choice of experts; the development, refinement, and iterative testing of elicitation protocols that are designed to help experts to consider systematically all relevant knowledge when they make their judgments; the treatment of uncertainty about model functional form; diversity of expert opinion; and when it does or does not make sense to combine judgments from different experts. Although it may be tempting to view expert elicitation as a low-cost, low-effort alternative to conducting serious research and analysis, it is neither. Rather, expert elicitation should build on and use the best available research and analysis and be undertaken only when, given those, the state of knowledge will remain insufficient to support timely informed assessment and decision making. PMID:24821779

  20. An expert system for choosing the best combination of options in a general purpose program for automated design synthesis

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.; Barthelemy, J.-F. M.

    1986-01-01

    An expert system called EXADS has been developed to aid users of the Automated Design Synthesis (ADS) general purpose optimization program. ADS has approximately 100 combinations of strategy, optimizer, and one-dimensional search options from which to choose. It is difficult for a nonexpert to make this choice. This expert system aids the user in choosing the best combination of options based on the user's knowledge of the problem and the expert knowledge stored in the knowledge base. The knowledge base is divided into three categories: constrained problems, unconstrained problems, and constrained problems being treated as unconstrained problems. The inference engine and rules are written in LISP, comprise about 200 rules, and execute on DEC-VAX (with Franz-LISP) and IBM PC (with IQ-LISP) computers.

  1. Changes in Body Weight and Psychotropic Drugs: A Systematic Synthesis of the Literature

    PubMed Central

    Dent, Robert; Blackmore, Angelique; Peterson, Joan; Habib, Rami; Kay, Gary Peter; Gervais, Alan; Taylor, Valerie; Wells, George

    2012-01-01

    Introduction Psychotropic medication use is associated with weight gain. While there are studies and reviews comparing weight gain for psychotropics within some classes, clinicians frequently use drugs from different classes to treat psychiatric disorders. Objective To undertake a systematic review of all classes of psychotropics to provide an all-encompassing evidence-based tool that would allow clinicians to determine the risks of weight gain in making both intra-class and interclass choices of psychotropics. Methodology and Results We developed a novel hierarchical search strategy that made use of systematic reviews that were already available. When such evidence was not available, we went on to evaluate randomized controlled trials, followed by cohort and other clinical trials, narrative reviews, and, where necessary, clinical opinion and anecdotal evidence. The data from the publication with the highest level of evidence based on our hierarchical classification was presented. Recommendations from an expert panel supplemented the evidence used to rank these drugs within their respective classes. Approximately 9500 articles were identified in our literature search of which 666 citations were retrieved. We were able to rank most of the psychotropics based on the available evidence and recommendations from subject matter experts. There were few discrepancies between published evidence and the expert panel in ranking these drugs. Conclusion Potential for weight gain is an important consideration in choice of any psychotropic. This tool will help clinicians select psychotropics on a case-by-case basis in order to minimize the impact of weight gain when making both intra-class and interclass choices. PMID:22719834

  2. The decision-making capacity of elderly hospitalized patients: validation of a test on their choice of return home.

    PubMed

    Romdhani, Mouna; Abbas, Rachid; Peyneau, Cécile; Koskas, Pierre; Houenou Quenum, Nadège; Galleron, Sandrine; Drunat, Olivier

    2018-03-01

    Elderly hospitalized patients often have uncertain or questionable capacity to make decisions about their care. Determining whether an elderly patient possesses the decision-making capacity to return home is a major concern for geriatricians in everyday practice. The aim was to construct and internally validate a new tool, the dream of home test (DROM-test), as support for decisions about hospital discharge destination for the elderly in the acute or sub-acute care setting. The DROM-test consists of 10 questions and 4 vignettes based upon the 4 relevant criteria for decision-making: the capacity to understand information, to appreciate, to reason about medical risks, and to communicate a choice. A prospective observational study was conducted over 6 months in 2 geriatric care units of Bretonneau Hospital (Assistance Publique-Hôpitaux de Paris). We compared the patient's decision on the DROM-test regarding discharge recommendations with those of an expert committee and of the team in charge of the patient. 102 patients were included: mean age 83.1 ± 6.7 years [70; 97], 66.67% female. Principal components analysis revealed four dimensions: choice, understanding, reasoning, and apprehension. The area under the ROC curve was 0.64 for the choice dimension, 0.59 for understanding, 0.53 for reasoning, and 0.52 for apprehension. Only the choice dimension was statistically associated with the decision of the committee of experts (p=0.017). Even though the DROM-test has limitations, it provides an objective way to ascertain decision-making capacity for hospitalised elderly patients.

  3. Temporal and contextual knowledge in model-based expert systems

    NASA Technical Reports Server (NTRS)

    Toth-Fejel, Tihamer; Heher, Dennis

    1987-01-01

    A basic paradigm that allows representation of physical systems with a focus on context and time is presented. Paragon provides the capability to quickly capture an expert's knowledge in a cognitively resonant manner. From that description, Paragon creates a simulation model in LISP, which, when executed, verifies that the domain expert did not make any mistakes. The Achilles' heel of rule-based systems has been the lack of a systematic methodology for testing, and Paragon's developers are certain that the model-based approach overcomes that problem. The reason this testing is now possible is that software, which is very difficult to test, has in essence been transformed into hardware.

  4. A representational basis for the development of a distributed expert system for Space Shuttle flight control

    NASA Technical Reports Server (NTRS)

    Helly, J. J., Jr.; Bates, W. V.; Cutler, M.; Kelem, S.

    1984-01-01

    A new representation of malfunction procedure logic which permits the automation of these procedures using Boolean normal forms is presented. This representation is discussed in the context of the development of an expert system for space shuttle flight control including software and hardware implementation modes, and a distributed architecture. The roles and responsibility of the flight control team as well as previous work toward the development of expert systems for flight control support at Johnson Space Center are discussed. The notion of malfunction procedures as graphs is introduced as well as the concept of hardware-equivalence.

  5. Towards an operational fault isolation expert system for French telecommunication satellite Telecom 2

    NASA Astrophysics Data System (ADS)

    Haziza, M.

    1990-10-01

    The DIAMS satellite fault isolation expert system shell concept is described. The project, initiated in 1985, has led to the development of a prototype Expert System (ES) dedicated to the Telecom 1 attitude and orbit control system. The prototype ES has been installed in the Telecom 1 satellite control center and evaluated by Telecom 1 operations. The development of a fault isolation ES covering a whole spacecraft (the French telecommunication satellite Telecom 2) is currently being undertaken. Full scale industrial applications raise stringent requirements in terms of knowledge management and software development methodology. The approach used by MATRA ESPACE to face this challenge is outlined.

  6. EXADS - EXPERT SYSTEM FOR AUTOMATED DESIGN SYNTHESIS

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.

    1994-01-01

    The expert system called EXADS was developed to aid users of the Automated Design Synthesis (ADS) general purpose optimization program. Because of the general purpose nature of ADS, it is difficult for a nonexpert to select the best choice of strategy, optimizer, and one-dimensional search options from the one hundred or so combinations that are available. EXADS aids engineers in determining the best combination based on their knowledge of the problem and the expert knowledge previously stored by experts who developed ADS. EXADS is a customized application of the AESOP artificial intelligence program (the general version of AESOP is available separately from COSMIC; the ADS program is also available from COSMIC). The expert system consists of two main components. The knowledge base contains about 200 rules and is divided into three categories: constrained, unconstrained, and constrained treated as unconstrained. The EXADS inference engine is rule-based and makes decisions about a particular situation using hypotheses (potential solutions), rules, and answers to questions drawn from the rule base. EXADS is backward-chaining; that is, it works from hypotheses to facts. The rule base was compiled from sources such as literature searches, ADS documentation, and engineer surveys. EXADS will accept answers such as yes, no, maybe, likely, and don't know, or a certainty factor ranging from 0 to 10. When any hypothesis reaches a confidence level of 90% or more, it is deemed the best choice and displayed to the user. If no hypothesis is confirmed, the user can examine explanations of why the hypotheses failed to reach the 90% level. The IBM PC version of EXADS is written in IQ-LISP for execution under DOS 2.0 or higher with a central memory requirement of approximately 512K of 8-bit bytes. This program was developed in 1986.
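
    As a rough illustration of how answers with certainty values can push a hypothesis toward a 90% confirmation threshold, the Python sketch below accumulates MYCIN-style certainty factors for a single invented rule. The rule, weights, and answers are hypothetical and do not reproduce the EXADS rule base.

        # Toy certainty-factor accumulation in the spirit of a backward-chaining
        # shell: answers contribute evidence toward a hypothesis, and the
        # hypothesis is accepted once its combined confidence reaches 0.90.
        # Rules and values are invented, not the actual EXADS knowledge base.
        def combine_cf(cf, evidence_cf):
            """MYCIN-style combination for non-negative certainty factors."""
            return cf + evidence_cf * (1.0 - cf)

        def evaluate(hypothesis, rules, answers, threshold=0.90):
            cf = 0.0
            for question, weight in rules[hypothesis]:
                # user answers map onto a 0..1 certainty (e.g., "yes"=1.0, "likely"=0.8)
                cf = combine_cf(cf, weight * answers.get(question, 0.0))
            return cf, cf >= threshold

        rules = {
            "use feasible-directions optimizer": [
                ("problem is constrained", 0.8),
                ("gradients are available", 0.7),
                ("many design variables", 0.5),
            ],
        }
        answers = {"problem is constrained": 1.0, "gradients are available": 0.8, "many design variables": 0.6}
        print(evaluate("use feasible-directions optimizer", rules, answers))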

  7. UNESCO's HOPE Initiative—Providing Free and Open-Source Hydrologic Software for Effective and Sustainable Management of Africa's Water Resources

    NASA Astrophysics Data System (ADS)

    Barlow, P. M.; Filali-Meknassi, Y.; Sanford, W. E.; Winston, R. B.; Kuniansky, E.; Dawson, C.

    2015-12-01

    UNESCO's HOPE Initiative—the Hydro Free and (or) Open-source Platform of Experts—was launched in June 2013 as part of UNESCO's International Hydrological Programme. The Initiative arose in response to a recognized need to make free and (or) open-source water-resources software more widely accessible to Africa's water sector. A kit of software is being developed to provide African water authorities, teachers, university lecturers, and researchers with a set of programs that can be enhanced and (or) applied to the development of efficient and sustainable management strategies for Africa's water resources. The Initiative brings together experts from the many fields of water resources to identify software that might be included in the kit, to oversee an objective process for selecting software for the kit, and to engage in training and other modes of capacity building to enhance dissemination of the software. To date, teams of experts from the fields of wastewater treatment, groundwater hydrology, surface-water hydrology, and data management have been formed to identify relevant software from their respective fields. An initial version of the HOPE Software Kit was released in late August 2014 and consists of the STOAT model for wastewater treatment developed by the Water Research Center (United Kingdom) and the MODFLOW-2005 model for groundwater-flow simulation developed by the U.S. Geological Survey. The Kit is available on the UNESCO HOPE website (http://www.hope-initiative.net/). Training in the theory and use of MODFLOW-2005 is planned in southern Africa in conjunction with UNESCO's study of the Kalahari-Karoo/Stampriet Transboundary Aquifer, which extends over an area that includes parts of Botswana, Namibia, and South Africa, and in support of the European Commission's Horizon 2020 FREEWAT project (FREE and open source software tools for WATer resource management; see the UNESCO HOPE website).

  8. EHR Improvement Using Incident Reports.

    PubMed

    Teame, Tesfay; Stålhane, Tor; Nytrø, Øystein

    2017-01-01

    This paper discusses reactive improvement of clinical software using methods for incident analysis. We used the "Five Whys" method because we had only descriptive data and depended on a domain expert for the analysis. The analysis showed that there are two major root causes for EHR software failure, and that they are related to human and organizational errors. A main identified improvement is allocating more resources to system maintenance and user training.

  9. Software Engineering and Its Application to Avionics

    DTIC Science & Technology

    1988-01-01

    "Automated Software Development Methodology (ASDM): An Architecture of a Knowledge-Based Expert System," Masters Thesis, Florida Atlantic University, Boca... The operating system provides the control and application services within the multiprocessor system; these processes make up the application software... A high-value target may no longer be occupied by the time the film is processed and analyzed, given the high mobility of today's enemy forces.

  10. C++ and operating systems performance - A case study

    NASA Technical Reports Server (NTRS)

    Russo, Vincent F.; Madany, Peter W.; Campbell, Roy H.

    1990-01-01

    Object-oriented design and programming has many software engineering advantages. Its application to large systems, however, has previously been constrained by performance concerns. The Choices operating system, which has over 75,000 lines of code, is object-oriented and programmed in C++. This paper is a case study of the performance of Choices.

  11. Best Practices for Reduction of Uncertainty in CFD Results

    NASA Technical Reports Server (NTRS)

    Mendenhall, Michael R.; Childs, Robert E.; Morrison, Joseph H.

    2003-01-01

    This paper describes a proposed best-practices system that will present expert knowledge in the use of CFD. The best-practices system will include specific guidelines to assist the user in problem definition, input preparation, grid generation, code selection, parameter specification, and results interpretation. The goal of the system is to assist all CFD users in obtaining high quality CFD solutions with reduced uncertainty and at lower cost for a wide range of flow problems. The best-practices system will be implemented as a software product which includes an expert system made up of knowledge databases of expert information with specific guidelines for individual codes and algorithms. The process of acquiring expert knowledge is discussed, and help from the CFD community is solicited. Benefits and challenges associated with this project are examined.

  12. Expert system for web based collaborative CAE

    NASA Astrophysics Data System (ADS)

    Hou, Liang; Lin, Zusheng

    2006-11-01

    An expert system for web-based collaborative CAE was developed based on knowledge engineering, a relational database, and commercial FEA (finite element analysis) software. The architecture of the system is illustrated. In this system, the experts' experiences, theories, typical examples, and other related knowledge used in the pre-processing stage of FEA were categorized into analysis-process knowledge and object knowledge. Then, an integrated knowledge model based on the object-oriented method and the rule-based method is described. The integrated reasoning process based on CBR (case-based reasoning) and rule-based reasoning is presented. Finally, the analysis process of this expert system in a web-based CAE application is illustrated, and an analysis example of a machine tool column is presented to demonstrate the validity of the system.
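
    The CBR part of the reasoning process retrieves a stored analysis case similar to the new problem. The Python sketch below shows a generic retrieval step using a weighted distance over a few features; the features, weights, and cases are hypothetical and are not taken from the published system.

        # Generic case-based-reasoning retrieval sketch: pick the stored FEA case
        # whose weighted feature distance to the new problem is smallest, so that
        # its analysis settings can be reused or adapted. Features and weights
        # are hypothetical, not those of the published system.
        def retrieve(new_case, case_base, weights):
            def distance(a, b):
                return sum(w * abs(a[f] - b[f]) for f, w in weights.items())
            return min(case_base, key=lambda case: distance(new_case, case["features"]))

        case_base = [
            {"name": "machine tool column",
             "features": {"length_m": 2.0, "wall_mm": 20, "load_kn": 50},
             "settings": {"element": "shell", "mesh_mm": 15}},
            {"name": "spindle housing",
             "features": {"length_m": 0.6, "wall_mm": 35, "load_kn": 12},
             "settings": {"element": "solid", "mesh_mm": 8}},
        ]
        weights = {"length_m": 1.0, "wall_mm": 0.05, "load_kn": 0.02}
        new_part = {"length_m": 1.8, "wall_mm": 25, "load_kn": 40}
        print(retrieve(new_part, case_base, weights))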

  13. Using hybrid expert system approaches for engineering applications

    NASA Technical Reports Server (NTRS)

    Allen, R. H.; Boarnet, M. G.; Culbert, C. J.; Savely, R. T.

    1987-01-01

    In this paper, the use of hybrid expert system shells and hybrid (i.e., algorithmic and heuristic) approaches for solving engineering problems is reported. Aspects of various engineering problem domains are reviewed for a number of examples with specific applications made to recently developed prototype expert systems. Based on this prototyping experience, critical evaluations of and comparisons between commercially available tools, and some research tools, in the United States and Australia, and their underlying problem-solving paradigms are made. Characteristics of the implementation tool and the engineering domain are compared and practical software engineering issues are discussed with respect to hybrid tools and approaches. Finally, guidelines are offered with the hope that expert system development will be less time consuming, more effective, and more cost-effective than it has been in the past.

  14. Expert Recommender: Designing for a Network Organization

    NASA Astrophysics Data System (ADS)

    Reichling, Tim; Veith, Michael; Wulf, Volker

    Recent knowledge management initiatives focus on expertise sharing within formal organizational units and informal communities of practice. Expert recommender systems seem to be a promising tool in support of these initiatives. This paper presents experiences in designing an expert recommender system for a knowledge- intensive organization, namely the National Industry Association (NIA). Field study results provide a set of specific design requirements. Based on these requirements, we have designed an expert recommender system which is integrated into the specific software infrastructure of the organizational setting. The organizational setting is, as we will show, specific for historical, political, and economic reasons. These particularities influence the employees’ organizational and (inter-)personal needs within this setting. The paper connects empirical findings of a long-term case study with design experiences of an expertise recommender system.

  15. A toolbox for developing bioinformatics software

    PubMed Central

    Potrzebowski, Wojciech; Puton, Tomasz; Rother, Magdalena; Wywial, Ewa; Bujnicki, Janusz M.

    2012-01-01

    Creating useful software is a major activity of many scientists, including bioinformaticians. Nevertheless, software development in an academic setting is often unsystematic, which can lead to problems associated with maintenance and long-term availability. Unfortunately, well-documented software development methodology is difficult to adopt, and technical measures that directly improve bioinformatic programming have not been described comprehensively. We have examined 22 software projects and have identified a set of practices for software development in an academic environment. We found them useful to plan a project, to support the involvement of experts (e.g. experimentalists), and to promote higher quality and maintainability of the resulting programs. This article describes 12 techniques that facilitate a quick start into software engineering. We describe 3 of the 22 projects in detail and give many examples to illustrate the usage of particular techniques. We expect this toolbox to be useful for many bioinformatics programming projects and to the training of scientific programmers. PMID:21803787

  16. Knowledge Base Editor (SharpKBE)

    NASA Technical Reports Server (NTRS)

    Tikidjian, Raffi; James, Mark; Mackey, Ryan

    2007-01-01

    The SharpKBE software provides a graphical user interface environment for domain experts to build and manage knowledge base systems. Knowledge bases can be exported/translated to various target languages automatically, including customizable target languages.

  17. FTDD973: A multimedia knowledge-based system and methodology for operator training and diagnostics

    NASA Technical Reports Server (NTRS)

    Hekmatpour, Amir; Brown, Gary; Brault, Randy; Bowen, Greg

    1993-01-01

    FTDD973 (973 Fabricator Training, Documentation, and Diagnostics) is an interactive multimedia knowledge-based system and methodology for computer-aided training and certification of operators, as well as tool and process diagnostics in IBM's CMOS SGP fabrication line (building 973). FTDD973 is an example of what can be achieved with modern multimedia workstations. Knowledge-based systems, hypertext, hypergraphics, high-resolution images, audio, motion video, and animation are technologies that in synergy can be far more useful than each by itself. FTDD973's modular and object-oriented architecture is also an example of how improvements in software engineering are finally making it possible to combine many software modules into one application. FTDD973 is developed in ExperMedia/2, an OS/2 multimedia expert system shell for domain experts.

  18. Planning bioinformatics workflows using an expert system.

    PubMed

    Chen, Xiaoling; Chang, Jeffrey T

    2017-04-15

    Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprised of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. https://github.com/jefftc/changlab. jeffrey.t.chang@uth.tmc.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
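
    As a rough analogy to backwards chaining over encoded software capabilities, the Python sketch below states which data type each (hypothetical) tool produces and which inputs it needs, then recurses from the requested result back to available inputs to yield an ordered workflow. It is a toy illustration, not BETSY's data model or rule base.

        # Highly simplified backwards-chaining planner: each rule states which
        # data type a (hypothetical) tool produces and which data types it needs.
        # Starting from the requested output, the planner recurses until only
        # available inputs remain, yielding an ordered workflow.
        RULES = [
            {"tool": "align_reads",     "produces": "aligned_reads", "needs": ["fastq", "genome_index"]},
            {"tool": "count_features",  "produces": "count_matrix",  "needs": ["aligned_reads", "annotation"]},
            {"tool": "diff_expression", "produces": "de_genes",      "needs": ["count_matrix", "sample_groups"]},
        ]
        AVAILABLE = {"fastq", "genome_index", "annotation", "sample_groups"}

        def plan(goal, available, rules, workflow=None):
            workflow = [] if workflow is None else workflow
            if goal in available:
                return workflow
            for rule in rules:
                if rule["produces"] == goal:
                    for needed in rule["needs"]:
                        plan(needed, available, rules, workflow)   # satisfy prerequisites first
                    if rule["tool"] not in workflow:
                        workflow.append(rule["tool"])
                    return workflow
            raise ValueError(f"no rule produces {goal!r}")

        print(plan("de_genes", AVAILABLE, RULES))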

  19. Planning bioinformatics workflows using an expert system

    PubMed Central

    Chen, Xiaoling; Chang, Jeffrey T.

    2017-01-01

    Abstract Motivation: Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. Results: To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY) that includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprised of a data model that can capture the richness of biological data, and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability and Implementation: https://github.com/jefftc/changlab Contact: jeffrey.t.chang@uth.tmc.edu PMID:28052928

  20. Software Development Standard Processes (SDSP)

    NASA Technical Reports Server (NTRS)

    Lavin, Milton L.; Wang, James J.; Morillo, Ronald; Mayer, John T.; Jamshidian, Barzia; Shimizu, Kenneth J.; Wilkinson, Belinda M.; Hihn, Jairus M.; Borgen, Rosana B.; Meyer, Kenneth N.

    2011-01-01

    A JPL-created set of standard processes is to be used throughout the lifecycle of software development. These SDSPs cover a range of activities, from management and engineering activities, to assurance and support activities. These processes must be applied to software tasks per a prescribed set of procedures. JPL's Software Quality Improvement Project is currently working at the behest of the JPL Software Process Owner to ensure that all applicable software tasks follow these procedures. The SDSPs are captured as a set of 22 standards in JPL's software process domain. They were developed in-house at JPL by a number of Subject Matter Experts (SMEs) residing primarily within the Engineering and Science Directorate, but also from the Business Operations Directorate and Safety and Mission Success Directorate. These practices include not only currently performed best practices, but also JPL-desired future practices in key thrust areas like software architecting and software reuse analysis. Additionally, these SDSPs conform to many standards and requirements to which JPL projects are beholden.

  1. Automated Help System For A Supercomputer

    NASA Technical Reports Server (NTRS)

    Callas, George P.; Schulbach, Catherine H.; Younkin, Michael

    1994-01-01

    Expert-system software developed to provide automated system of user-helping displays in supercomputer system at Ames Research Center Advanced Computer Facility. Users located at remote computer terminals connected to supercomputer and each other via gateway computers, local-area networks, telephone lines, and satellite links. Automated help system answers routine user inquiries about how to use services of computer system. Available 24 hours per day and reduces burden on human experts, freeing them to concentrate on helping users with complicated problems.

  2. Use of artificial intelligence in analytical systems for the clinical laboratory

    PubMed Central

    Truchaud, Alain; Ozawa, Kyoichi; Pardue, Harry; Schnipelsky, Paul

    1995-01-01

    The incorporation of information-processing technology into analytical systems in the form of standard computing software has recently been advanced by the introduction of artificial intelligence (AI), both as expert systems and as neural networks. This paper considers the role of software in system operation, control and automation, and attempts to define intelligence. AI is characterized by its ability to deal with incomplete and imprecise information and to accumulate knowledge. Expert systems, building on standard computing techniques, depend heavily on the domain experts and knowledge engineers that have programmed them to represent the real world. Neural networks are intended to emulate the pattern-recognition and parallel processing capabilities of the human brain and are taught rather than programmed. The future may lie in a combination of the recognition ability of the neural network and the rationalization capability of the expert system. In the second part of the paper, examples are given of applications of AI in stand-alone systems for knowledge engineering and medical diagnosis and in embedded systems for failure detection, image analysis, user interfacing, natural language processing, robotics and machine learning, as related to clinical laboratories. It is concluded that AI constitutes a collective form of intellectual property, and that there is a need for better documentation, evaluation and regulation of the systems already being used in clinical laboratories. PMID:18924784

  3. Automated software development workstation

    NASA Technical Reports Server (NTRS)

    Prouty, Dale A.; Klahr, Philip

    1988-01-01

    A workstation is being developed that provides a computational environment for all NASA engineers across application boundaries, which automates reuse of existing NASA software and designs, and efficiently and effectively allows new programs and/or designs to be developed, catalogued, and reused. The generic workstation is made domain specific by specialization of the user interface, capturing engineering design expertise for the domain, and by constructing/using a library of pertinent information. The incorporation of software reusability principles and expert system technology into this workstation provide the obvious benefits of increased productivity, improved software use and design reliability, and enhanced engineering quality by bringing engineering to higher levels of abstraction based on a well tested and classified library.

  4. The CORALS Connection

    ERIC Educational Resources Information Center

    Plankis, Brian; Klein, Carolyn

    2010-01-01

    The Ocean, Reefs, Aquariums, Literacy, and Stewardship (CORALS) research program helps students connect global environmental issues to local concerns and personal choices. During the 18-week program, students strengthen their understanding of coral reef decline through a classroom aquarium activity, communicate with science experts, and create…

  5. The Transition to a Many-core World

    NASA Astrophysics Data System (ADS)

    Mattson, T. G.

    2012-12-01

    The need to increase performance within a fixed energy budget has pushed the computer industry to many-core processors. This is grounded in the physics of computing and is not a trend that will just go away. It is hard to overestimate the profound impact of many-core processors on software developers. Virtually every facet of the software development process will need to change to adapt to these new processors. In this talk, we will look at many-core hardware and consider its evolution from a perspective grounded in the CPU. We will show that the number of cores will inevitably increase, but in addition, a quest to maximize performance per watt will push these cores to be heterogeneous. We will show that the inevitable result of these changes is a computing landscape where the distinction between the CPU and the GPU is blurred. We will then consider the much more pressing problem of software in a many-core world. Writing software for heterogeneous many-core processors is well beyond the ability of current programmers. One solution is to support a software development process where programmer teams are split into two distinct groups: a large group of domain-expert productivity programmers and a much smaller team of computer-scientist efficiency programmers. The productivity programmers work in terms of high level frameworks to express the concurrency in their problems while avoiding any details of how that concurrency is exploited. The second group, the efficiency programmers, map applications expressed in terms of these frameworks onto the target many-core system. In other words, we can solve the many-core software problem by creating a software infrastructure that only requires a small subset of programmers to become master parallel programmers. This is different from the discredited dream of automatic parallelism. Note that productivity programmers still need to define the architecture of their software in a way that exposes the concurrency inherent in their problem. We submit that domain-expert programmers understand "what is concurrent". The parallel programming problem emerges from the complexity of "how that concurrency is utilized" on real hardware. The research described in this talk was carried out in collaboration with the ParLab at UC Berkeley. We use a design pattern language to define the high level frameworks exposed to domain-expert, productivity programmers. We then use tools from the SEJITS project (Selective Embedded Just-In-Time Specializers) to build the software transformation toolchains that turn these framework-oriented designs into highly efficient code. The final ingredient is a software platform to serve as a target for these tools. One such platform is the OpenCL industry standard for programming heterogeneous systems. We will briefly describe OpenCL and show how it provides a vendor-neutral software target for current and future many-core systems, whether CPU-based, GPU-based, or heterogeneous combinations of the two.
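
    As an illustration of the split described above, the hedged Python sketch below separates a productivity-programmer view (a plain data-parallel map) from an efficiency layer that decides how that map is executed. It stands in for, and is far simpler than, the SEJITS/OpenCL toolchain the talk actually uses; all function names here are invented for illustration.

        # Illustrative sketch (not the SEJITS/OpenCL toolchain itself): a domain
        # programmer expresses *what* is concurrent via a high-level "map" framework,
        # while an "efficiency layer" decides *how* that concurrency is executed.
        from multiprocessing import Pool

        def parallel_map(func, items, backend="serial", workers=4):
            """Efficiency layer: chooses an execution strategy for a data-parallel map."""
            if backend == "serial":
                return [func(x) for x in items]
            if backend == "processes":
                with Pool(workers) as pool:
                    return pool.map(func, items)
            raise ValueError(f"unknown backend: {backend}")

        # Productivity-programmer code: only states the concurrency pattern (a map),
        # never how it is scheduled on cores.
        def simulate_cell(temperature):
            return temperature * 0.5 + 1.0

        if __name__ == "__main__":
            print(parallel_map(simulate_cell, range(10), backend="processes"))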

  6. Design and implementation of a status at a glance user interface for a power distribution expert system

    NASA Technical Reports Server (NTRS)

    Liberman, Eugene M.; Manner, David B.; Dolce, James L.; Mellor, Pamela A.

    1993-01-01

    Expert systems are widely used in health monitoring and fault detection applications. One of the key features of an expert system is that it possesses a large body of knowledge about the application for which it was designed. When the user consults this knowledge base, it is essential that the expert system's reasoning process and its conclusions be as concise as possible. If, in addition, an expert system is part of a process monitoring system, the expert system's conclusions must be combined with current events of the process. Under these circumstances, it is difficult for a user to absorb and respond to all the available information. For example, a user can become distracted and confused if two or more unrelated devices in different parts of the system require attention. A human interface designed to integrate expert system diagnoses with process data and to focus the user's attention on the important matters provides a solution to the 'information overload' problem. This paper will discuss a user interface to the power distribution expert system for Space Station Freedom. The importance of features which simplify assessing system status and which minimize navigating through layers of information will be discussed. Design rationale and implementation choices will also be presented.

  7. Neural basis of nonanalytical reasoning expertise during clinical evaluation.

    PubMed

    Durning, Steven J; Costanzo, Michelle E; Artino, Anthony R; Graner, John; van der Vleuten, Cees; Beckman, Thomas J; Wittich, Christopher M; Roy, Michael J; Holmboe, Eric S; Schuwirth, Lambert

    2015-03-01

    Understanding clinical reasoning is essential for patient care and medical education. Dual-processing theory suggests that nonanalytic reasoning is an essential aspect of expertise; however, assessing nonanalytic reasoning is challenging because it is believed to occur on the subconscious level. This assumption makes concurrent verbal protocols less reliable assessment tools. Functional magnetic resonance imaging was used to explore the neural basis of nonanalytic reasoning in internal medicine interns (novices) and board-certified staff internists (experts) while completing United States Medical Licensing Examination and American Board of Internal Medicine multiple-choice questions. The results demonstrated that novices and experts share a common neural network in addition to nonoverlapping neural resources. However, experts manifested greater neural processing efficiency in regions such as the prefrontal cortex during nonanalytical reasoning. These findings reveal a multinetwork system that supports the dual-process mode of expert clinical reasoning during medical evaluation.

  8. a New Method for Fmeca Based on Fuzzy Theory and Expert System

    NASA Astrophysics Data System (ADS)

    Byeon, Yoong-Tae; Kim, Dong-Jin; Kim, Jin-O.

    2008-10-01

    Failure Mode Effects and Criticality Analysis (FMECA) is one of the most widely used methods in modern engineering systems for investigating potential failure modes and their severity for the system. FMECA evaluates the criticality and severity of each failure mode and visualizes the risk-level matrix by assigning those indices to the column and row variables, respectively. Generally, those indices are determined subjectively by experts and operators, so the process inevitably involves uncertainty. In this paper, a method for eliciting expert opinions that accounts for this uncertainty is proposed to evaluate criticality and severity. In addition, a fuzzy expert system is constructed to determine a crisp risk-level value for each failure mode. Finally, an illustrative example system is analyzed in the case study. The results are worth considering when deciding the proper policies for each component of the system.
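
    The following Python sketch illustrates, with invented membership functions and rule values, how a small fuzzy rule base can turn subjective severity and criticality scores into a crisp risk level. It is not the authors' system, only a minimal example of the technique the abstract names.

        # Minimal sketch of the idea (not the paper's system): expert-supplied severity
        # and criticality scores are mapped through triangular membership functions and
        # a fuzzy rule base, then defuzzified to a crisp risk level. All values invented.
        def tri(x, a, b, c):
            """Triangular membership function with support (a, c) and peak at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def risk_level(severity, criticality):
            """Crisp risk via weighted average of rule consequents (zero-order Sugeno)."""
            low, med, high = (-1, 0, 5), (2, 5, 8), (5, 10, 11)   # fuzzy sets on a 0-10 scale
            rules = [   # (severity set, criticality set, risk value of the consequent)
                (low, low, 1.0), (low, med, 3.0), (low, high, 5.0),
                (med, low, 3.0), (med, med, 5.0), (med, high, 7.0),
                (high, low, 5.0), (high, med, 7.0), (high, high, 9.0),
            ]
            num = den = 0.0
            for s_set, c_set, risk in rules:
                w = min(tri(severity, *s_set), tri(criticality, *c_set))  # firing strength
                num += w * risk
                den += w
            return num / den if den else 0.0

        print(risk_level(severity=7.0, criticality=6.5))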

  9. Collaboration in Global Software Engineering Based on Process Description Integration

    NASA Astrophysics Data System (ADS)

    Klein, Harald; Rausch, Andreas; Fischer, Edward

    Globalization is one of the big trends in software development. Development projects need a variety of different resources with appropriate expert knowledge to be successful. More and more of these resources are nowadays obtained from specialized organizations and countries all over the world, varying in development approaches, processes, and culture. As seen with early outsourcing attempts, collaboration may fail due to these differences. Hence, the major challenge in global software engineering is to streamline collaborating organizations towards a successful conjoint development. Based on typical collaboration scenarios, this paper presents a structured approach to integrate processes in a comprehensible way.

  10. Software engineering with application-specific languages

    NASA Technical Reports Server (NTRS)

    Campbell, David J.; Barker, Linda; Mitchell, Deborah; Pollack, Robert H.

    1993-01-01

    Application-Specific Languages (ASL's) are small, special-purpose languages that are targeted to solve a specific class of problems. Using ASL's on software development projects can provide considerable cost savings, reduce risk, and enhance quality and reliability. ASL's provide a platform for reuse within a project or across many projects and enable less-experienced programmers to tap into the expertise of application-area experts. ASL's have been used on several software development projects for the Space Shuttle Program. On these projects, the use of ASL's resulted in considerable cost savings over conventional development techniques. Two of these projects are described.

  11. Visualization support for risk-informed decision making when planning and managing software developments

    NASA Technical Reports Server (NTRS)

    Feather, Martin S.; Kiper, James D.; Menzies, Tim

    2005-01-01

    Key decisions are made in the early stages of planning and management of software developments. The information basis for these decisions is often a mix of analogy with past developments and the best judgments of domain experts. Visualization of this information can support such decision making by clarifying the status of the information and yielding insights into the ramifications of that information vis-a-vis decision alternatives.

  12. Measurement of students' perceptions of nursing as a career.

    PubMed

    Matutina, Robin E; Newman, Susan D; Jenkins, Carolyn M

    2010-09-01

    Middle school has been identified as the prime age group to begin nursing recruitment efforts because students have malleable perceptions about nursing as a future career choice. The purpose of this integrative review is to present a brief overview of research processes related to middle school students' perceptions of nursing as a future career choice and to critically evaluate the current instruments used to measure middle and high school students' perceptions of nursing as a career choice. An integrative review of the years 1989 to 2009 was conducted searching Cumulative Index to Nursing and Allied Health Literature (CINAHL), National Library of Medicine PubMed service (PubMed), and Ovid MEDLINE databases using the key words career, choice, future, ideal, nursing, and perception. Reference lists of retrieved studies were hand searched, yielding a total of 22 studies. Inclusion criteria were (a) sample of middle school students, (b) sample of high school students, (c) mixed sample including middle or high school students, and (d) samples other than middle or high school students if the instrument was tested with middle or high school students in a separate study. Ten studies met these criteria. Of the 10 studies, samples were 30% middle school students; 40% high school students; 10% mixed, including school-aged students; and 20% college students with an instrument tested in middle school students. Eighty percent of participants were White females. Overall, participants' socioeconomic status was not identified. A single study included a theoretical framework. Five instruments were identified and each could be completed in 15 to 30 min. The most commonly used instrument is available free of charge. Seventy percent of the studies used Cronbach's alpha to report instrument reliability (0.63 to 0.93), whereas 30% failed to report reliability. Fifty percent of the studies established validity via a "panel of experts," with three of those studies further describing the panel of experts. Samples of White females may hinder generalization. Socioeconomic status was not consistently reported and may be an important factor with regard to perceptions of nursing as a career choice. An overall absence of theoretical framework hinders empirical data from being applied to nursing theories that in turn may support nursing concepts. The reporting of reliability and validity may be improved by further defining the panel of experts and expanding the number of experts (more than seven). More in-depth evaluation of the psychometric properties of the instruments with more diverse populations is needed. Rigorously tested instruments may be useful in determining middle school students' perceptions about nursing. Therefore, future researchers should consider testing existing instruments in the middle school population, adhering to theoretical frameworks, diversifying the sample population, and clearly reporting reliability and validity to gain knowledge about middle school students' perceptions about a nursing career.

  13. Utilizing Expert Knowledge in Estimating Future STS Costs

    NASA Technical Reports Server (NTRS)

    Fortner, David B.; Ruiz-Torres, Alex J.

    2004-01-01

    A method of estimating the costs of future space transportation systems (STSs) involves classical activity-based cost (ABC) modeling combined with systematic utilization of the knowledge and opinions of experts to extend the process-flow knowledge of existing systems to systems that involve new materials and/or new architectures. The expert knowledge is particularly helpful in filling gaps that arise in computational models of processes because of inconsistencies in historical cost data. Heretofore, the costs of planned STSs have been estimated following a "top-down" approach that tends to force the architectures of new systems to incorporate process flows like those of the space shuttles. In this ABC-based method, one makes assumptions about the processes, but otherwise follows a "bottom-up" approach that does not force the new system architecture to incorporate a space-shuttle-like process flow. Prototype software has been developed to implement this method. Through further development of software, it should be possible to extend the method beyond the space program to almost any setting in which there is a need to estimate the costs of a new system and to extend the applicable knowledge base in order to make the estimate.

  14. An autonomous fault detection, isolation, and recovery system for a 20-kHz electric power distribution test bed

    NASA Technical Reports Server (NTRS)

    Quinn, Todd M.; Walters, Jerry L.

    1991-01-01

    Future space explorations will require long-term human presence in space. Space environments that provide working and living quarters for manned missions are becoming increasingly larger and more sophisticated. Monitoring and control of the space environment subsystems by expert system software, which emulates human reasoning processes, could maintain the health of the subsystems and help reduce the human workload. The autonomous power expert (APEX) system was developed to emulate a human expert's reasoning processes used to diagnose fault conditions in the domain of space power distribution. APEX is a fault detection, isolation, and recovery (FDIR) system, capable of autonomous monitoring and control of the power distribution system. APEX consists of a knowledge base, a data base, an inference engine, and various support and interface software. APEX provides the user with an easy-to-use interactive interface. When a fault is detected, APEX will inform the user of the detection. The user can direct APEX to isolate the probable cause of the fault. Once a fault has been isolated, the user can ask APEX to justify its fault isolation and to recommend actions to correct the fault. APEX implementation and capabilities are discussed.
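
    A drastically simplified Python sketch of the detection/isolation idea follows; the channel names, limits, causes, and actions are hypothetical and do not come from APEX.

        # Toy FDIR sketch (not APEX): sensor readings are checked against limits in a
        # small knowledge base; a violated limit is "detected", and the isolation step
        # maps the symptom to a probable cause and a recommended recovery action.
        KNOWLEDGE_BASE = {
            "bus_voltage":  {"low": 110.0, "high": 130.0,
                             "cause": "regulator drift", "action": "switch to backup regulator"},
            "load_current": {"low": 0.0,   "high": 45.0,
                             "cause": "overload or short", "action": "shed non-critical loads"},
        }

        def detect_and_isolate(telemetry):
            faults = []
            for channel, reading in telemetry.items():
                limits = KNOWLEDGE_BASE.get(channel)
                if limits is None:
                    continue
                if not (limits["low"] <= reading <= limits["high"]):
                    faults.append({"channel": channel, "value": reading,
                                   "probable_cause": limits["cause"],
                                   "recommended_action": limits["action"]})
            return faults

        print(detect_and_isolate({"bus_voltage": 134.2, "load_current": 12.0}))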

  15. Assistant for Analyzing Tropical-Rain-Mapping Radar Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2006-01-01

    A document describes an approach for a Tropical Rain Mapping Radar Data System (TDS). TDS is composed of software and hardware elements incorporating a two-frequency spaceborne radar system for measuring tropical precipitation. The TDS would be used primarily in generating data products for scientific investigations. The most novel part of the TDS would be expert-system software to aid in the selection of algorithms for converting raw radar-return data into such primary observables as rain rate, path-integrated rain rate, and surface backscatter. The expert-system approach would address the issue that selection of algorithms for processing the data requires a significant amount of preprocessing, non-intuitive reasoning, and heuristic application, making it infeasible, in many cases, to select the proper algorithm in real time. In the TDS, tentative selections would be made to enable conversions in real time. The expert system would remove straightforwardly convertible data from further consideration, and would examine ambiguous data, performing analysis in depth to determine which algorithms to select. Conversions performed by these algorithms, presumed to be correct, would be compared with the corresponding real-time conversions. Incorrect real-time conversions would be updated using the correct conversions.

  16. Using XML and XSLT for flexible elicitation of mental-health risk knowledge.

    PubMed

    Buckingham, C D; Ahmed, A; Adams, A E

    2007-03-01

    Current tools for assessing risks associated with mental-health problems require assessors to make high-level judgements based on clinical experience. This paper describes how new technologies can enhance qualitative research methods to identify lower-level cues underlying these judgements, which can be collected by people without a specialist mental-health background. Content analysis of interviews with 46 multidisciplinary mental-health experts exposed the cues and their interrelationships, which were represented by a mind map using software that stores maps as XML. All 46 mind maps were integrated into a single XML knowledge structure and analysed by a Lisp program to generate quantitative information about the numbers of experts associated with each part of it. The knowledge was refined by the experts, using software developed in Flash to record their collective views within the XML itself. These views specified how the XML should be transformed by XSLT, a technology for rendering XML, which resulted in a validated hierarchical knowledge structure associating patient cues with risks. Changing knowledge elicitation requirements were accommodated by flexible transformations of XML data using XSLT, which also facilitated generation of multiple data-gathering tools suiting different assessment circumstances and levels of mental-health knowledge.
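
    The snippet below is a small, self-contained illustration (using the lxml library) of rendering an XML knowledge structure with an XSLT stylesheet, the mechanism the paper relies on. The schema, cue names, and expert counts are invented and are not the authors' data.

        # Illustrative only: the XML schema and cue names below are invented, but the
        # mechanism mirrors the paper's approach of transforming an XML knowledge
        # structure with an XSLT stylesheet (here applied via lxml).
        from lxml import etree

        knowledge = etree.XML("""
        <risk-knowledge>
          <cue name="social withdrawal" experts="12"/>
          <cue name="recent loss" experts="7"/>
        </risk-knowledge>
        """)

        stylesheet = etree.XML("""
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="text"/>
          <xsl:template match="/risk-knowledge">
            <xsl:for-each select="cue">
              <xsl:value-of select="@name"/>: <xsl:value-of select="@experts"/> experts&#10;
            </xsl:for-each>
          </xsl:template>
        </xsl:stylesheet>
        """)

        transform = etree.XSLT(stylesheet)       # compile the stylesheet
        print(str(transform(knowledge)))         # render the knowledge structure as text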

  17. Second CLIPS Conference Proceedings, volume 2

    NASA Technical Reports Server (NTRS)

    Giarratano, Joseph (Editor); Culbert, Christopher J. (Editor)

    1991-01-01

    Papers presented at the 2nd C Language Integrated Production System (CLIPS) Conference held at the Lyndon B. Johnson Space Center (JSC) on 23-25 September 1991 are documented in these proceedings. CLIPS is an expert system tool developed by the Software Technology Branch at NASA JSC and is used at over 4000 sites by government, industry, and business. During the three days of the conference, over 40 papers were presented by experts from NASA, Department of Defense, other government agencies, universities, and industry.

  18. An Expert System for Searching in Full-Text

    DTIC Science & Technology

    1989-12-01

    An Expert System for Searching in Full-Text (TR89-043, December 1989). A dissertation by Susan Evalyn Gauch submitted to the faculty of The University of North Carolina at Chapel Hill in partial fulfillment of... The retrieval software, MICROARRAS, was developed at the University of North Carolina under the direction of John B. Smith and Stephen Weiss [Smith et

  19. Proceedings of the 1986 IEEE international conference on systems, man and cybernetics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1986-01-01

    This book presents the papers given at a conference on man-machine systems. Topics considered at the conference included neural model-based cognitive theory and engineering, user interfaces, adaptive and learning systems, human interaction with robotics, decision making, the testing and evaluation of expert systems, software development, international conflict resolution, intelligent interfaces, automation in man-machine system design aiding, knowledge acquisition in expert systems, advanced architectures for artificial intelligence, pattern recognition, knowledge bases, and machine vision.

  20. Integration of symbolic and algorithmic hardware and software for the automation of space station subsystems

    NASA Technical Reports Server (NTRS)

    Gregg, Hugh; Healey, Kathleen; Hack, Edmund; Wong, Carla

    1987-01-01

    Expert systems that require access to data bases, complex simulations, and real-time instrumentation have both symbolic and algorithmic computing needs. These needs could be met using either a general computing workstation running both symbolic and algorithmic code, or separate, specialized computers networked together. The latter approach was chosen to implement TEXSYS, the thermal expert system, developed to demonstrate the ability of an expert system to autonomously control the thermal control system of the space station. TEXSYS has been implemented on a Symbolics workstation, and will be linked to a microVAX computer that will control a thermal test bed. Integration options are explored and several possible solutions are presented.

  1. Automated Software Development Workstation (ASDW)

    NASA Technical Reports Server (NTRS)

    Fridge, Ernie

    1990-01-01

    Software development is a serious bottleneck in the construction of complex automated systems. Increased reuse of software designs and components has been viewed as a way to relieve this bottleneck. One approach to achieving software reusability is through the development and use of software parts composition systems. A software parts composition system is a software development environment composed of a parts description language for modeling parts and their interfaces, a catalog of existing parts, a composition editor that aids a user in the specification of a new application from existing parts, and a code generator that takes a specification and generates an implementation of a new application in a target language. The Automated Software Development Workstation (ASDW) is an expert system shell that provides the capabilities required to develop and manipulate these software parts composition systems. The ASDW is now in Beta testing at the Johnson Space Center. Future work centers on responding to user feedback for capability and usability enhancement, expanding the scope of the software lifecycle that is covered, and providing solutions for handling very large libraries of reusable components.

  2. Pointo - a Low Cost Solution to Point Cloud Processing

    NASA Astrophysics Data System (ADS)

    Houshiar, H.; Winkler, S.

    2017-11-01

    With advances in technology, access to data, especially 3D point cloud data, is becoming more and more an everyday task. 3D point clouds are usually captured with very expensive tools such as 3D laser scanners or with very time-consuming methods such as photogrammetry. Most of the available software for 3D point cloud processing is designed for experts and specialists in this field and usually comes as a very large package containing a variety of methods and tools. This results in software that is usually very expensive to acquire and also very difficult to use. The difficulty of use is caused by the complicated user interfaces required to accommodate a large list of features. The aim of these complex software packages is to provide a powerful tool for a specific group of specialists; however, they are not necessarily required by the majority of the upcoming average users of point clouds. In addition to their complexity and high costs, these packages generally rely on expensive, modern hardware and are compatible with only one specific operating system. Many point cloud customers are not point cloud processing experts, nor are they willing to pay the high acquisition costs of such expensive software and hardware. In this paper we introduce a solution for low-cost point cloud processing. Our approach is designed to accommodate the needs of the average point cloud user. To reduce the cost and complexity of the software, our approach focuses on one functionality at a time, in contrast with most available software and tools that aim to solve as many problems as possible at the same time. Our simple, user-oriented design improves the user experience and enables us to optimize our methods for the creation of efficient software. In this paper we introduce the Pointo family as a series of connected software tools that provide easy-to-use tools with a simple design for different point cloud processing requirements. PointoVIEWER and PointoCAD are introduced as the first components of the Pointo family, providing fast and efficient visualization with the ability to add annotation and documentation to the point clouds.

  3. Health software: a new CEI Guide for software management in medical environment.

    PubMed

    Giacomozzi, Claudia; Martelli, Francesco

    2016-01-01

    The increasing spread of software components in the healthcare context makes explanatory guides both relevant and necessary for interpreting laws and standards and for supporting the safe management of software products in healthcare. In 2012 a working group was established for these purposes at the Italian Electrotechnical Committee (CEI), composed of experts from the Italian National Institute of Health (ISS), representatives of industry, and representatives of healthcare organizations. As a first outcome of the group's activity, Guide CEI 62-237 was published in February 2015. The Guide incorporates an innovative approach based on the proper contextualization of software products, whether medical devices or not, to the specific healthcare scenario, and addresses the risk management of IT systems. The Guide provides operators and manufacturers with interpretative support and many detailed examples to facilitate the proper contextualization and management of health software, in compliance with related European and international regulations and standards.

  4. MVP-CA Methodology for the Expert System Advocate's Advisor (ESAA)

    DOT National Transportation Integrated Search

    1997-11-01

    The Multi-Viewpoint Clustering Analysis (MVP-CA) tool is a semi-automated tool to provide a valuable aid for comprehension, verification, validation, maintenance, integration, and evolution of complex knowledge-based software systems. In this report,...

  5. PC graphics generation and management tool for real-time applications

    NASA Technical Reports Server (NTRS)

    Truong, Long V.

    1992-01-01

    A graphics tool was designed and developed for easy generation and management of personal computer graphics. It also provides methods and 'run-time' software for many common artificial intelligence (AI) or expert system (ES) applications.

  6. Success Rates by Software Development Methodology in Information Technology Project Management: A Quantitative Analysis

    ERIC Educational Resources Information Center

    Wright, Gerald P.

    2013-01-01

    Despite over half a century of Project Management research, project success rates are still too low. Organizations spend a tremendous amount of valuable resources on Information Technology projects and seek to maximize the utility gained from their efforts. The author investigated the impact of software development methodology choice on ten…

  7. Design and Acquisition of Software for Defense Systems

    DTIC Science & Technology

    2018-02-14

    enterprise business systems and related information technology (IT) services, the role software plays in enabling and enhancing weapons systems often...3 The information in this chart was compiled from Christian Hagen, Jeff Sorenson, Steven Hurt...understanding to make an informed choice of final architecture. The Task Force found commercial practice starts with several competing architectures and

  8. VCFR: A package to manipulate and visualize variant call format data in R

    USDA-ARS?s Scientific Manuscript database

    Software to call single nucleotide polymorphisms or related genetic variants has converged on the variant call format (vcf) as their output format of choice. This has created a need for tools to work with vcf files. While an increasing number of software exists to read vcf data, many of them only ex...

  9. Relating Communications Mode Choice and Teamwork Quality: Conversational versus Textual Communication in IT System and Software Development Teams

    ERIC Educational Resources Information Center

    Smith, James Robert

    2012-01-01

    This cross-sectional study explored how IT system and software development team members communicated in the workplace and whether teams that used more verbal communication (and less text-based communication) experienced higher levels of collaboration as measured using the Teamwork Quality (TWQ) scale. Although computer-mediated communication tools…

  10. Factors that Influence First-Career Choice of Undergraduate Engineers in Software Services Companies: A South Indian Experience

    ERIC Educational Resources Information Center

    Gokuladas, V. K.

    2010-01-01

    Purpose: The purpose of this paper is to identify how undergraduate engineering students differ in their perception about software services companies in India based on variables like gender, locations of the college and branches of engineering. Design/methodology/approach: Data obtained from 560 undergraduate engineering students who had the…

  11. An expert system for prediction of chemical toxicity

    USGS Publications Warehouse

    Hickey, James P.; Aldridge, Andrew J.; Passino-Reader, Dora R.; Frank, Anthony M.

    1992-01-01

    The National Fisheries Research Center-Great Lakes has developed an interactive computer program that uses the structure of an organic molecule to predict its acute toxicity to four aquatic species. The expert system software, written in the muLISP language, identifies the skeletal structures and substituent groups of an organic molecule from a user-supplied standard chemical notation known as a SMILES string, and then generates values for four solvatochromic parameters. Multiple regression equations relate these parameters to the toxicities (expressed as log10LC50s and log10EC50s, along with 95% confidence intervals) for four species. The system is demonstrated by prediction of toxicity for anilide-type pesticides to the fathead minnow (Pimephales promelas). This software is designed for use on an IBM-compatible personal computer by personnel with minimal toxicology background for rapid estimation of chemical toxicity. The system has numerous applications, with much potential for use in the pharmaceutical industry.
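
    To make the regression step concrete, the hedged Python sketch below applies an invented multiple-regression equation to hypothetical solvatochromic parameter values; the coefficients and descriptor values are illustrative only and are not those used by the actual expert system.

        # Hedged sketch: coefficients and parameter values are invented. The real system
        # derives solvatochromic parameters from a SMILES string and applies
        # species-specific regression equations to estimate log10(LC50).
        import numpy as np

        # Hypothetical model: log10(LC50) = b0 + b1*V + b2*pi_star + b3*beta + b4*alpha
        coeffs = np.array([0.65, -3.2, 0.85, 2.1, -0.4])   # b0..b4, illustrative only

        def predict_log_lc50(volume, pi_star, beta, alpha):
            x = np.array([1.0, volume, pi_star, beta, alpha])
            return float(coeffs @ x)

        # Hypothetical descriptors for one compound:
        print(predict_log_lc50(volume=0.62, pi_star=0.73, beta=0.48, alpha=0.0))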

  12. A survey of Canadian medical physicists: software quality assurance of in-house software.

    PubMed

    Salomons, Greg J; Kelly, Diane

    2015-01-05

    This paper reports on a survey of medical physicists who write and use in-house written software as part of their professional work. The goal of the survey was to assess the extent of in-house software usage and the desire or need for related software quality guidelines. The survey contained eight multiple-choice questions, a ranking question, and seven free text questions. The survey was sent to medical physicists associated with cancer centers across Canada. The respondents to the survey expressed interest in having guidelines to help them in their software-related work, but also demonstrated extensive skills in the area of testing, safety, and communication. These existing skills form a basis for medical physicists to establish a set of software quality guidelines.

  13. Choice: 36 band feature selection software with applications to multispectral pattern recognition

    NASA Technical Reports Server (NTRS)

    Jones, W. C.

    1973-01-01

    Feature selection software was developed at the Earth Resources Laboratory that is capable of inputting up to 36 channels and selecting channel subsets according to several criteria based on divergence. One of the criteria used is compatible with the table look-up classifier requirements. The software indicates which channel subset best separates (based on average divergence) each class from all other classes. The software employs an exhaustive search technique, and computer time is not prohibitive. A typical task to select the best 4 of 22 channels for 12 classes takes 9 minutes on a Univac 1108 computer.
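
    A Python sketch of the exhaustive subset search is given below; the separability score is a simplified stand-in for the divergence measure used by the original software, and the class statistics are randomly generated toy data.

        # Sketch of the exhaustive subset search (not the ERL code): score every
        # k-channel combination by how well it separates the classes on average and
        # keep the best subset. The score here is a crude divergence-like quantity
        # computed from per-class channel means and variances.
        import itertools
        import numpy as np

        def pairwise_separability(mu_a, var_a, mu_b, var_b, subset):
            idx = list(subset)
            d = mu_a[idx] - mu_b[idx]
            v = var_a[idx] + var_b[idx]
            return float(np.sum(d * d / v))

        def best_channel_subset(means, variances, k):
            """means, variances: arrays of shape (n_classes, n_channels)."""
            n_classes, n_channels = means.shape
            best_score, best_subset = -np.inf, None
            for subset in itertools.combinations(range(n_channels), k):
                score = np.mean([pairwise_separability(means[i], variances[i],
                                                       means[j], variances[j], subset)
                                 for i in range(n_classes) for j in range(i + 1, n_classes)])
                if score > best_score:
                    best_score, best_subset = score, subset
            return best_subset, best_score

        rng = np.random.default_rng(0)
        means = rng.normal(size=(4, 8))                 # 4 classes, 8 channels (toy data)
        variances = rng.uniform(0.5, 2.0, size=(4, 8))
        print(best_channel_subset(means, variances, k=3))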

  14. A Clear and Present Choice: Global or Provincial Scholar?

    ERIC Educational Resources Information Center

    Tripses, Jenny S.

    2016-01-01

    Globalization provides rich opportunities for educational administration professors to teach and learn. This position paper explores globalization realities and role options for educational leadership professors: (1) to understand globalization implications for education, (2) to collaborate at multiple levels with like-minded educational experts, and…

  15. The tale of hearts and reason: the influence of mood on decision making.

    PubMed

    Laborde, Sylvain; Raab, Markus

    2013-08-01

    In decision-making research, one important aspect of real-life decisions has so far been neglected: the mood of the decision maker when generating options. The authors tested the use of the take-the-first (TTF) heuristic and extended the TTF model to understand how mood influences the option-generation process of individuals in two studies, the first using a between-subjects design (30 nonexperts, 30 near-experts, and 30 experts) and the second conceptually replicating the first using a within-subject design (30 nonexperts). Participants took part in an experimental option-generation task, with 31 three-dimensional videos of choices in team handball. Three moods were elicited: positive, neutral, and negative. The findings (a) replicate previous results concerning TTF and (b) show that the option-generation process was associated with the physiological component of mood, supporting the neurovisceral integration model. The extension of TTF to processing emotional factors is an important step forward in explaining fast choices in real-life situations.

  16. Using Ada to implement the operations management system in a community of experts

    NASA Technical Reports Server (NTRS)

    Frank, M. S.

    1986-01-01

    An architecture is described for the Space Station Operations Management System (OMS), consisting of a distributed expert system framework implemented in Ada. The motivation for such a scheme is based on the desire to integrate the very diverse elements of the OMS while taking maximum advantage of knowledge based systems technology. Part of the foundation of an Ada based distributed expert system was accomplished in the form of a proof of concept prototype for the KNOMES project (Knowledge-based Maintenance Expert System). This prototype successfully used concurrently active experts to accomplish monitoring and diagnosis for the Remote Manipulator System. The basic concept of this software architecture is named ACTORS for Ada Cognitive Task ORganization Scheme. It is when one considers the overall problem of integrating all of the OMS elements into a cooperative system that the AI solution stands out. By utilizing a distributed knowledge based system as the framework for OMS, it is possible to integrate those components which need to share information in an intelligent manner.

  17. Capital Expert System

    NASA Astrophysics Data System (ADS)

    Dowell, Laurie; Gary, Jack; Illingworth, Bill; Sargent, Tom

    1987-05-01

    Gathering information, necessary forms, and financial calculations needed to generate a "capital investment proposal" is an extremely complex and difficult process. The intent of the capital investment proposal is to ensure management that the proposed investment has been thoroughly investigated and will have a positive impact on corporate goals. Meeting this requirement typically takes four or five experts a total of 12 hours to generate a "Capital Package." A Capital Expert System was therefore developed using "Personal Consultant." The completed system is hybrid and as such does not depend solely on rules but incorporates several different software packages that communicate through variables and functions passed from one to another. This paper describes the use of expert system techniques, methodology in building the knowledge base, contexts, LISP functions, data base, and special challenges that had to be overcome to create this system. The Capital Expert System is the successful result of a unique integration of artificial intelligence with business accounting, financial forms generation, and investment proposal expertise.

  18. Expert System Development Methodology (ESDM)

    NASA Technical Reports Server (NTRS)

    Sary, Charisse; Gilstrap, Lewey; Hull, Larry G.

    1990-01-01

    The Expert System Development Methodology (ESDM) provides an approach to developing expert system software. Because of the uncertainty associated with this process, an element of risk is involved. ESDM is designed to address the issue of risk and to acquire the information needed for this purpose in an evolutionary manner. ESDM presents a life cycle in which a prototype evolves through five stages of development. Each stage consists of five steps, leading to a prototype for that stage. Development may proceed to a conventional development methodology (CDM) at any time if enough has been learned about the problem to write requirements. ESDM produces requirements so that a product may be built with a CDM. ESDM is considered preliminary because it has not yet been applied to actual projects. It has been retrospectively evaluated by comparing the methods used in two ongoing expert system development projects that did not explicitly choose to use this methodology but which provided useful insights into actual expert system development practices and problems.

  19. A Model-Based Expert System for Space Power Distribution Diagnostics

    NASA Technical Reports Server (NTRS)

    Quinn, Todd M.; Schlegelmilch, Richard F.

    1994-01-01

    When engineers diagnose system failures, they often use models to confirm system operation. This concept has produced a class of advanced expert systems that perform model-based diagnosis. A model-based diagnostic expert system for the Space Station Freedom electrical power distribution test bed is currently being developed at the NASA Lewis Research Center. The objective of this expert system is to autonomously detect and isolate electrical fault conditions. Marple, a software package developed at TRW, provides a model-based environment utilizing constraint suspension. Originally, constraint suspension techniques were developed for digital systems. However, Marple provides the mechanisms for applying this approach to analog systems such as the test bed, as well. The expert system was developed using Marple and Lucid Common Lisp running on a Sun Sparc-2 workstation. The Marple modeling environment has proved to be a useful tool for investigating the various aspects of model-based diagnostics. This report describes work completed to date and lessons learned while employing model-based diagnostics using constraint suspension within an analog system.

  20. A class Hierarchical, object-oriented approach to virtual memory management

    NASA Technical Reports Server (NTRS)

    Russo, Vincent F.; Campbell, Roy H.; Johnston, Gary M.

    1989-01-01

    The Choices family of operating systems exploits class hierarchies and object-oriented programming to facilitate the construction of customized operating systems for shared memory and networked multiprocessors. The software is being used in the Tapestry laboratory to study the performance of algorithms, mechanisms, and policies for parallel systems. Described here are the architectural design and class hierarchy of the Choices virtual memory management system. The software and hardware mechanisms and policies of a virtual memory system implement a memory hierarchy that exploits the trade-off between response times and storage capacities. In Choices, the notion of a memory hierarchy is captured by abstract classes. Concrete subclasses of those abstractions implement a virtual address space, segmentation, paging, physical memory management, secondary storage, and remote (that is, networked) storage. Captured in the notion of a memory hierarchy are classes that represent memory objects. These classes provide a storage mechanism that contains encapsulated data and have methods to read or write the memory object. Each of these classes provides specializations to represent the memory hierarchy.

  1. Nestly--a framework for running software with nested parameter choices and aggregating results.

    PubMed

    McCoy, Connor O; Gallagher, Aaron; Hoffman, Noah G; Matsen, Frederick A

    2013-02-01

    The execution of a software application or pipeline using various combinations of parameters and inputs is a common task in bioinformatics. In the absence of a specialized tool to organize, streamline and formalize this process, scientists must frequently write complex scripts to perform these tasks. We present nestly, a Python package to facilitate running tools with nested combinations of parameters and inputs. nestly provides three components. First, a module to build nested directory structures corresponding to choices of parameters. Second, the nestrun script to run a given command using each set of parameter choices. Third, the nestagg script to aggregate results of the individual runs into a CSV file, as well as support for more complex aggregation. We also include a module for easily specifying nested dependencies for the SCons build tool, enabling incremental builds. Source, documentation and tutorial examples are available at http://github.com/fhcrc/nestly. nestly can be installed from the Python Package Index via pip; it is open source (MIT license).
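
    The sketch below is not nestly's own API (its documentation covers that); it only illustrates the underlying idea of materializing one directory per combination of nested parameter choices, each holding a small control file for a later run-and-aggregate step.

        # Plain-Python sketch of the nested-parameter-directory idea; nestly itself
        # provides a richer interface plus the nestrun and nestagg helpers.
        import itertools, json, os

        params = {"mu": [0.1, 0.5], "model": ["JC", "GTR"], "replicate": [1, 2, 3]}

        def build_nests(root, params):
            names = list(params)
            for combo in itertools.product(*params.values()):
                control = dict(zip(names, combo))
                # One nested directory per parameter combination, e.g. runs/mu-0.1/model-JC/replicate-1
                nest_dir = os.path.join(root, *[f"{k}-{v}" for k, v in control.items()])
                os.makedirs(nest_dir, exist_ok=True)
                with open(os.path.join(nest_dir, "control.json"), "w") as fh:
                    json.dump(control, fh, indent=2)

        build_nests("runs", params)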

  2. Comparison of software packages for detecting differential expression in RNA-seq studies

    PubMed Central

    Seyednasrollah, Fatemeh; Laiho, Asta

    2015-01-01

    RNA-sequencing (RNA-seq) has rapidly become a popular tool to characterize transcriptomes. A fundamental research problem in many RNA-seq studies is the identification of reliable molecular markers that show differential expression between distinct sample groups. Together with the growing popularity of RNA-seq, a number of data analysis methods and pipelines have already been developed for this task. Currently, however, there is no clear consensus about the best practices yet, which makes the choice of an appropriate method a daunting task especially for a basic user without a strong statistical or computational background. To assist the choice, we perform here a systematic comparison of eight widely used software packages and pipelines for detecting differential expression between sample groups in a practical research setting and provide general guidelines for choosing a robust pipeline. In general, our results demonstrate how the data analysis tool utilized can markedly affect the outcome of the data analysis, highlighting the importance of this choice. PMID:24300110

  3. Comparison of software packages for detecting differential expression in RNA-seq studies.

    PubMed

    Seyednasrollah, Fatemeh; Laiho, Asta; Elo, Laura L

    2015-01-01

    RNA-sequencing (RNA-seq) has rapidly become a popular tool to characterize transcriptomes. A fundamental research problem in many RNA-seq studies is the identification of reliable molecular markers that show differential expression between distinct sample groups. Together with the growing popularity of RNA-seq, a number of data analysis methods and pipelines have already been developed for this task. Currently, however, there is no clear consensus about the best practices yet, which makes the choice of an appropriate method a daunting task especially for a basic user without a strong statistical or computational background. To assist the choice, we perform here a systematic comparison of eight widely used software packages and pipelines for detecting differential expression between sample groups in a practical research setting and provide general guidelines for choosing a robust pipeline. In general, our results demonstrate how the data analysis tool utilized can markedly affect the outcome of the data analysis, highlighting the importance of this choice. © The Author 2013. Published by Oxford University Press.

  4. A topological substructural molecular design approach for predicting mutagenesis end-points of alpha, beta-unsaturated carbonyl compounds.

    PubMed

    Pérez-Garrido, Alfonso; Helguera, Aliuska Morales; López, Gabriel Caravaca; Cordeiro, M Natália D S; Escudero, Amalio Garrido

    2010-01-31

    Chemically reactive, alpha, beta-unsaturated carbonyl compounds are common environmental pollutants able to produce a wide range of adverse effects, including, e.g., mutagenicity. This toxic property can often be related to chemical structure, in particular to specific molecular substructures or fragments (alerts), which can then be used in specialized software or expert systems for predictive purposes. In the past, there have been many attempts to predict the mutagenicity of alpha, beta-unsaturated carbonyl compounds through quantitative structure-activity relationships (QSAR), but considering only one exclusive endpoint: the Ames test. Besides, even though those studies give a comprehensive understanding of the phenomenon, they do not provide substructural information that could be useful for improving expert systems based on structural alerts (SAs). This work reports an evaluation of classification models to probe the mutagenic activity of alpha, beta-unsaturated carbonyl compounds over two endpoints--the Ames and mammalian cell gene mutation tests--based on linear discriminant analysis along with the topological substructural molecular design (TOPS-MODE) approach. The obtained results showed the better ability of the TOPS-MODE approach in flagging structural alerts for the mutagenicity of these compounds compared to the expert system TOXTREE. Thus, the application of the present QSAR models can aid toxicologists in risk assessment and in prioritizing testing, as well as in the improvement of expert systems, such as the TOXTREE software, where SAs are implemented. 2009 Elsevier Ireland Ltd. All rights reserved.
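
    As a hedged illustration of the classification step only, the Python sketch below fits a linear discriminant analysis model with scikit-learn on random stand-in descriptors and labels; it does not reproduce the TOPS-MODE descriptors or the paper's data.

        # Illustrative only: random numbers stand in for TOPS-MODE descriptors and the
        # labels stand in for Ames-test outcomes; the point is the linear discriminant
        # analysis classification step, not the chemistry.
        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(42)
        X = rng.normal(size=(200, 10))                 # hypothetical descriptors
        y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)  # hypothetical mutagenic / non-mutagenic

        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
        model = LinearDiscriminantAnalysis().fit(X_train, y_train)
        print("held-out accuracy:", model.score(X_test, y_test))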

  5. 12 CFR 517.1 - Purpose and scope.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., expert witnesses, customized training, relocation services, information systems technology (computer systems, database management, software and office automation), or micrographic services; or in support of...-Owned Businesses Outreach Program (Outreach Program) is to ensure that firms owned and operated by...

  6. 12 CFR 517.1 - Purpose and scope.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., expert witnesses, customized training, relocation services, information systems technology (computer systems, database management, software and office automation), or micrographic services; or in support of...-Owned Businesses Outreach Program (Outreach Program) is to ensure that firms owned and operated by...

  7. 12 CFR 517.1 - Purpose and scope.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., expert witnesses, customized training, relocation services, information systems technology (computer systems, database management, software and office automation), or micrographic services; or in support of...-Owned Businesses Outreach Program (Outreach Program) is to ensure that firms owned and operated by...

  8. Are Earth System model software engineering practices fit for purpose? A case study.

    NASA Astrophysics Data System (ADS)

    Easterbrook, S. M.; Johns, T. C.

    2009-04-01

    We present some analysis and conclusions from a case study of the culture and practices of scientists at the Met Office and Hadley Centre working on the development of software for climate and Earth System models using the MetUM infrastructure. The study examined how scientists think about software correctness, prioritize their requirements in making changes, and develop a shared understanding of the resulting models. We conclude that highly customized techniques driven strongly by scientific research goals have evolved for verification and validation of such models. In a formal software engineering context these represent costly, but invaluable, software integration tests with considerable benefits. The software engineering practices seen also exhibit recognisable features of both agile and open source software development projects - self-organisation of teams consistent with a meritocracy rather than top-down organisation, extensive use of informal communication channels, and software developers who are generally also users and science domain experts. We draw some general conclusions on whether these practices work well, and what new software engineering challenges may lie ahead as Earth System models become ever more complex and petascale computing becomes the norm.

  9. The Texas Children's Hospital immunization forecaster: conceptualization to implementation.

    PubMed

    Cunningham, Rachel M; Sahni, Leila C; Kerr, G Brady; King, Laura L; Bunker, Nathan A; Boom, Julie A

    2014-12-01

    Immunization forecasting systems evaluate patient vaccination histories and recommend the dates and vaccines that should be administered. We described the conceptualization, development, implementation, and distribution of a novel immunization forecaster, the Texas Children's Hospital (TCH) Forecaster. In 2007, TCH convened an internal expert team that included a pediatrician, immunization nurse, software engineer, and immunization subject matter experts to develop the TCH Forecaster. Our team developed the design of the model, wrote the software, populated the Excel tables, integrated the software, and tested the Forecaster. We created a table of rules that contained each vaccine's recommendations, minimum ages and intervals, and contraindications, which served as the basis for the TCH Forecaster. We created 15 vaccine tables that incorporated 79 unique dose states and 84 vaccine types to operationalize the entire United States recommended immunization schedule. The TCH Forecaster was implemented throughout the TCH system, the Indian Health Service, and the Virginia Department of Health. The TCH Forecast Tester is currently being used nationally. Immunization forecasting systems might positively affect adherence to vaccine recommendations. Efforts to support health care provider utilization of immunization forecasting systems and to evaluate their impact on patient care are needed.
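
    A deliberately tiny Python sketch of the rule-table idea follows; the minimum ages and intervals shown are illustrative placeholders, not the actual recommended immunization schedule encoded in the TCH tables.

        # Toy forecaster sketch: each series lists a minimum age for dose 1 and minimum
        # intervals to each later dose; the forecaster returns the earliest valid date
        # for the next dose. The real TCH tables cover 15 vaccines and dozens of dose
        # states; the values below are invented.
        from datetime import date, timedelta

        RULES = {
            "HepB": {"min_age_days": 0,  "min_intervals_days": [28, 56]},        # 3-dose series
            "DTaP": {"min_age_days": 42, "min_intervals_days": [28, 28, 168]},   # 4-dose series
        }

        def next_dose_due(vaccine, birth_date, doses_given):
            rule = RULES[vaccine]
            n = len(doses_given)
            if n > len(rule["min_intervals_days"]):
                return None  # series complete
            if n == 0:
                return birth_date + timedelta(days=rule["min_age_days"])
            earliest_by_interval = doses_given[-1] + timedelta(days=rule["min_intervals_days"][n - 1])
            earliest_by_age = birth_date + timedelta(days=rule["min_age_days"])
            return max(earliest_by_interval, earliest_by_age)

        print(next_dose_due("DTaP", date(2024, 1, 1), [date(2024, 2, 15)]))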

  10. Helping Students make the transition from novice learner of ground-water concepts to expert using the Plume Busters software

    USGS Publications Warehouse

    Macfarlane, P.A.; Bohling, G.; Thompson, K.W.; Townsend, M.

    2006-01-01

    Environmental and earth science students are novice learners and lack the experience needed to rise to the level of expert. To address this problem we have developed the prototype Plume Busters software as a capstone educational experience, in which students take on the role of an environmental consultant. Following a pipeline spill, the environmental consultant is hired by the pipeline owner to locate the resulting plume created by the spill and remediate the contaminated aquifer at minimum monetary and time cost. The contamination must be removed from the aquifer before it reaches the river and eventually a downstream public water supply. The software consists of an interactive Java application and accompanying HTML-linked pages. The application simulates movement of a plume from a pipeline break through a shallow alluvial aquifer towards the river. The accompanying web pages establish the simulated contamination scenario and provide students with background material on ground-water flow and transport principles. To make the role-play more realistic, the student must consider cost and time when making decisions about siting observation wells and wells for the pump-and-treat remediation system.

  11. Revealing the ISO/IEC 9126-1 Clique Tree for COTS Software Evaluation

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    2007-01-01

    Previous research has shown that acyclic dependency models, if they exist, can be extracted from software quality standards and that these models can be used to assess software safety and product quality. In the case of commercial off-the-shelf (COTS) software, the extracted dependency model can be used in a probabilistic Bayesian network context for COTS software evaluation. Furthermore, while experts typically employ Bayesian networks to encode domain knowledge, secondary structures (clique trees) from Bayesian network graphs can be used to determine the probabilistic distribution of any software variable (attribute) using any clique that contains that variable. Secondary structures, therefore, provide insight into the fundamental nature of graphical networks. This paper will apply secondary structure calculations to reveal the clique tree of the acyclic dependency model extracted from the ISO/IEC 9126-1 software quality standard. Suggestions will be provided to describe how the clique tree may be exploited to aid efficient transformation of an evaluation model.
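
    The Python sketch below shows the generic clique-tree (junction-tree) construction on a toy dependency graph, assuming the moralized graph is already chordal; the node names are invented and this is not the extracted ISO/IEC 9126-1 model.

        # Generic secondary-structure sketch: moralize a small DAG, find maximal
        # cliques, then connect cliques by a maximum-weight spanning tree whose edge
        # weights count shared variables. Toy graph only; assumes the moral graph is chordal.
        import itertools
        import networkx as nx

        dag = nx.DiGraph([("maintainability", "quality"), ("reliability", "quality"),
                          ("analysability", "maintainability"), ("changeability", "maintainability")])

        # Moralization: marry the parents of each node, then drop edge directions.
        moral = nx.Graph(dag.to_undirected())
        for node in dag:
            for p, q in itertools.combinations(dag.predecessors(node), 2):
                moral.add_edge(p, q)

        cliques = [frozenset(c) for c in nx.find_cliques(moral)]

        # Clique graph: edge weight = number of variables two cliques share.
        clique_graph = nx.Graph()
        clique_graph.add_nodes_from(cliques)
        for a, b in itertools.combinations(cliques, 2):
            if a & b:
                clique_graph.add_edge(a, b, weight=len(a & b))

        clique_tree = nx.maximum_spanning_tree(clique_graph)
        for a, b in clique_tree.edges():
            print(sorted(a), "--", sorted(b), "separator:", sorted(a & b))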

  12. The U.S./IAEA Workshop on Software Sustainability for Safeguards Instrumentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pepper S. E.; .; Worrall, L.

    2014-08-08

    The U.S. National Nuclear Security Administration's Next Generation Safeguards Initiative, the U.S. Department of State, and the International Atomic Energy Agency (IAEA) organized a workshop on the subject of "Software Sustainability for Safeguards Instrumentation." The workshop was held at the Vienna International Centre in Vienna, Austria, May 6-8, 2014. The workshop participants included software and hardware experts from national laboratories, industry, government, and IAEA member states who were specially selected by the workshop organizers based on their experience with software that is developed for the control and operation of safeguards instrumentation. The workshop included presentations to orient the participants to the IAEA Department of Safeguards software activities related to instrumentation data collection and processing, and case studies that were designed to inspire discussion of software development, use, maintenance, and upgrades in breakout sessions and to result in recommendations for effective software practices and management. This report summarizes the results of the workshop.

  13. Lessons learned in deploying software estimation technology and tools

    NASA Technical Reports Server (NTRS)

    Panlilio-Yap, Nikki; Ho, Danny

    1994-01-01

    Developing a software product involves estimating various project parameters. This is typically done in the planning stages of the project when there is much uncertainty and very little information. Coming up with accurate estimates of effort, cost, schedule, and reliability is a critical problem faced by all software project managers. The use of estimation models and commercially available tools in conjunction with the best bottom-up estimates of software-development experts enhances the ability of a product development group to derive reasonable estimates of important project parameters. This paper describes the experience of the IBM Software Solutions (SWS) Toronto Laboratory in selecting software estimation models and tools and deploying their use to the laboratory's product development groups. It introduces the SLIM and COSTAR products, the software estimation tools selected for deployment to the product areas, and discusses the rationale for their selection. The paper also describes the mechanisms used for technology injection and tool deployment, and concludes with a discussion of important lessons learned in the technology and tool insertion process.

  14. Effects of Medical Device Regulations on the Development of Stand-Alone Medical Software: A Pilot Study.

    PubMed

    Blagec, Kathrin; Jungwirth, David; Haluza, Daniela; Samwald, Matthias

    2018-01-01

    Medical device regulations, which aim to ensure safety standards, apply not only to hardware devices but also to standalone medical software, e.g. mobile apps. This study explored the effects of these regulations on the development and distribution of standalone medical software. We invited a convenience sample of 130 domain experts to participate in an online survey about the impact of current regulations on the development and distribution of standalone medical software; 21 respondents completed the questionnaire. Participants reported slight positive effects on the usability, reliability, and data security of their products, whereas the ability to modify already deployed software and customization by end users were negatively impacted. The additional time and costs needed to go through the regulatory process were perceived as the greatest obstacles to developing and distributing medical software. Further research is needed to compare positive effects on software quality with negative impacts on market access and innovation. Strategies for avoiding over-regulation while still ensuring safety standards need to be devised.

  15. A Fuzzy Expert System for Fault Management of Water Supply Recovery in the ALSS Project

    NASA Technical Reports Server (NTRS)

    Tohala, Vapsi J.

    1998-01-01

    Modeling with new software is a challenge. CONFIG is designed to work with many types of systems in which discrete and continuous processes occur. The CONFIG software was used to model the two subsystems of the Water Recovery system: ICB and TFB. The model worked manually only for water flows, with further implementation to be done in the future. Activities in the models still need to be implemented based on testing of the hardware for phase III. More improvements to CONFIG are in progress to make it more user-friendly software.

  16. Issues in Software System Safety: Polly Ann Smith Co. versus Ned I. Ludd

    NASA Technical Reports Server (NTRS)

    Holloway, C. Michael

    2002-01-01

    This paper is a work of fiction, but it is fiction with a very real purpose: to stimulate careful thought and friendly discussion about some questions for which thought is often careless and discussion is often unfriendly. To accomplish this purpose, the paper creates a fictional legal case. The most important issue in this fictional case is whether certain proffered expert testimony about software engineering for safety critical systems should be admitted. Resolving this issue requires deciding the extent to which current practices and research in software engineering, especially for safety-critical systems, can rightly be considered based on knowledge, rather than opinion.

  17. End-to-end observatory software modeling using domain specific languages

    NASA Astrophysics Data System (ADS)

    Filgueira, José M.; Bec, Matthieu; Liu, Ning; Peng, Chien; Soto, José

    2014-07-01

    The Giant Magellan Telescope (GMT) is a 25-meter extremely large telescope that is being built by an international consortium of universities and research institutions. Its software and control system is being developed using a set of Domain Specific Languages (DSLs) that supports a model-driven development methodology integrated with an Agile management process. This approach promotes the use of standardized models that capture the component architecture of the system, facilitate the construction of technical specifications in a uniform way, facilitate communication between developers and domain experts, and provide a framework to ensure the successful integration of the software subsystems developed by the GMT partner institutions.

  18. Directions in Rehabilitation Counseling, 1993.

    ERIC Educational Resources Information Center

    Directions in Rehabilitation Counseling, 1993

    1993-01-01

    This volume of 12 lessons--each written by either a medical or a mental health professional--provides expert information on a variety of medical and psychological issues in rehabilitative counseling. The lessons, each of which concludes with a few multiple-choice questions, are: (1) "Geriatric Alcoholism: Identification and Elder-Specific…

  19. catcher: A Software Program to Detect Answer Copying in Multiple-Choice Tests Based on Nominal Response Model

    ERIC Educational Resources Information Center

    Kalender, Ilker

    2012-01-01

    catcher is a software program designed to compute the [omega] index, a common statistical index for the identification of collusions (cheating) among examinees taking an educational or psychological test. It requires (a) responses and (b) ability estimations of individuals, and (c) item parameters to make computations and outputs the results of…
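
    For orientation, the ω index referenced above is usually defined in the answer-copying literature as a standardized difference between the observed and the model-expected number of identical responses for a suspected copier c and source s; the formulation below is offered as background and is not claimed to be catcher's exact implementation.

        \omega = \frac{h_{cs} - E[h_{cs}]}{\sigma_{h_{cs}}},
        \qquad
        E[h_{cs}] = \sum_{i=1}^{n} P\left(X_{ci} = x_{si} \mid \hat{\theta}_c\right),
        \qquad
        \sigma_{h_{cs}}^{2} = \sum_{i=1}^{n} P_i \left(1 - P_i\right)

    where h_cs is the number of items on which the suspected copier's responses match the source's, and the item-level match probabilities P_i come from the nominal response model evaluated at the copier's estimated ability.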

  20. The Development and Evaluation of Software to Foster Professional Development in Educational Assessment

    ERIC Educational Resources Information Center

    Benton, Morgan C.

    2008-01-01

    This dissertation sought to answer the question: Is it possible to build a software tool that will allow teachers to write better multiple-choice questions? The thesis proceeded from the finding that the quality of teaching is very influential in the amount that students learn. A basic premise of this research, then, is that improving teachers…

  1. Food Intake Recording Software System, version 4 (FIRSSt4): A self-completed 24-h dietary recall for children

    USDA-ARS?s Scientific Manuscript database

    The Food Intake Recording Software System, version 4 (FIRSSt4), is a web-based 24-h dietary recall (24 hdr) self-administered by children based on the Automated Self-Administered 24-h recall (ASA24) (a self-administered 24 hdr for adults). The food choices in FIRSSt4 are abbreviated to include only ...

  2. Integration of symbolic and algorithmic hardware and software for the automation of space station subsystems

    NASA Technical Reports Server (NTRS)

    Gregg, Hugh; Healey, Kathleen; Hack, Edmund; Wong, Carla

    1987-01-01

    Traditional expert systems, such as diagnostic and training systems, interact with users only through a keyboard and screen, and are usually symbolic in nature. Expert systems that require access to data bases, complex simulations and real-time instrumentation have both symbolic as well as algorithmic computing needs. These needs could both be met using a general purpose workstation running both symbolic and algorithmic code, or separate, specialized computers networked together. The latter approach was chosen to implement TEXSYS, the thermal expert system, developed by NASA Ames Research Center in conjunction with Johnson Space Center to demonstrate the ability of an expert system to autonomously monitor the thermal control system of the space station. TEXSYS has been implemented on a Symbolics workstation, and will be linked to a microVAX computer that will control a thermal test bed. This paper will explore the integration options, and present several possible solutions.

  3. Attacking the information access problem with expert systems

    NASA Technical Reports Server (NTRS)

    Ragusa, James M.; Orwig, Gary W.

    1991-01-01

    The results of applications research directed at finding an improved method of storing and accessing information are presented. Twelve microcomputer-based expert systems shells and five laser-optical formats have been studied, and the general and specific methods of interfacing these technologies are being tested in prototype systems. Shell features and interfacing capabilities are discussed, and results from the study of five laser-optical formats are recounted including the video laser, compact, and WORM disks, and laser cards and film. Interfacing, including laser disk device driver interfacing, is discussed and it is pointed out that in order to control the laser device from within the expert systems application, the expert systems shell must be able to access the device driver software. Potential integrated applications are investigated and an initial list is provided including consumer services, travel, law enforcement, human resources, marketing, and education and training.

  4. A framework for building real-time expert systems

    NASA Technical Reports Server (NTRS)

    Lee, S. Daniel

    1991-01-01

    The Space Station Freedom is an example of complex systems that require both traditional and artificial intelligence (AI) real-time methodologies. It was mandated that Ada should be used for all new software development projects. The station also requires distributed processing. Catastrophic failures on the station can cause the transmission system to malfunction for a long period of time, during which ground-based expert systems cannot provide any assistance to the crisis situation on the station. This is even more critical for other NASA projects that would have longer transmission delays (e.g., the lunar base, Mars missions, etc.). To address these issues, a distributed agent architecture (DAA) is proposed that can support a variety of paradigms based on both traditional real-time computing and AI. The proposed testbed for DAA is an autonomous power expert (APEX) which is a real-time monitoring and diagnosis expert system for the electrical power distribution system of the space station.

  5. Reaction time and anticipatory skill of athletes in open and closed skill-dominated sport.

    PubMed

    Nuri, Leila; Shadmehr, Azadeh; Ghotbi, Nastaran; Attarbashi Moghadam, Behrouz

    2013-01-01

    In sports, reaction time and anticipatory skill are critical aspects of perceptual abilities. To date, no study has compared the reaction time and anticipatory skill of athletes from open and closed skill-dominated sports. Accordingly, the present study investigated whether a difference exists in sensory-cognitive skills between these two sport domains. Eleven volleyball players and 11 sprinters participated in this experiment. Reaction time and anticipatory skill of both groups were recorded by custom-made software called SART (speed anticipation and reaction time test). This software consists of six sensory-cognitive tests that evaluate visual choice reaction time, visual complex choice reaction time, auditory choice reaction time, auditory complex choice reaction time, and anticipatory skill for the high and low speed of the ball. For each variable, an independent t-test was performed. Results suggested that sprinters were better in both auditory reaction times (P < 0.001 for both tests) and volleyball players were better in both anticipatory skill tests (P = 0.007 and P = 0.04 for anticipatory skill of the high and low speed of the ball, respectively). However, no significant differences were found in either visual choice reaction time test (P > 0.05 for both tests). It is concluded that athletes have greater sensory-cognitive skills related to their specific sport domain, whether open or closed.
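
    The group comparison described above reduces, per variable, to an independent-samples t-test. A minimal sketch with SciPy follows; the reaction-time arrays are made-up placeholder values, not the study's measurements.

        # Sketch: independent-samples t-test for one variable (auditory choice reaction time).
        # The millisecond values are placeholder data, not the published measurements.
        from scipy import stats

        sprinters  = [182, 175, 190, 178, 185, 172, 188, 180, 176, 184, 179]
        volleyball = [205, 198, 212, 201, 208, 195, 210, 203, 199, 207, 202]

        t_stat, p_value = stats.ttest_ind(sprinters, volleyball)
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}")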

  6. The MICA Case Conference Program at Tewksbury Hospital, Mass.: an integrated treatment model.

    PubMed

    Clodfelter, Reynolds C; Albanese, Mark J; Baker, Gregg; Domoto, Katherine; Gui, Amy L; Khantzian, Edward J

    2003-01-01

    This report describes the MICA (Mentally Ill Chemically Abusing) Program at the Tewksbury Hospital campus in Tewksbury, Massachusetts. Several campus facilities collaborate in the MICA Program. Through Expert Case Conferences, principles of integrated psychosocial treatment with dual diagnosis patients are demonstrated. An expert clinician focuses on the interplay between psychological pain, characterological traits, defenses, and the patient's drug of choice. Patients who have participated in the program have reported positive experiences. The staff reported that the program has resulted in facility improvement in assessment and treatment of complex dual diagnosis patients.

  7. Usability study of clinical exome analysis software: top lessons learned and recommendations.

    PubMed

    Shyr, Casper; Kushniruk, Andre; Wasserman, Wyeth W

    2014-10-01

    New DNA sequencing technologies have revolutionized the search for genetic disruptions. Targeted sequencing of all protein coding regions of the genome, called exome analysis, is actively used in research-oriented genetics clinics, with the transition to exomes as a standard procedure underway. This transition is challenging; identification of potentially causal mutation(s) amongst ~10^6 variants requires specialized computation in combination with expert assessment. This study analyzes the usability of user interfaces for clinical exome analysis software. There are two study objectives: (1) To ascertain the key features of successful user interfaces for clinical exome analysis software based on the perspective of expert clinical geneticists, (2) To assess user-system interactions in order to reveal strengths and weaknesses of existing software, inform future design, and accelerate the clinical uptake of exome analysis. Surveys, interviews, and cognitive task analysis were performed for the assessment of two next-generation exome sequence analysis software packages. The subjects included ten clinical geneticists who interacted with the software packages using the "think aloud" method. Subjects' interactions with the software were recorded in their clinical office within an urban research and teaching hospital. All major user interface events (from the user interactions with the packages) were time-stamped and annotated with coding categories to identify usability issues in order to characterize desired features and deficiencies in the user experience. We detected 193 usability issues, the majority of which concern interface layout and navigation, and the resolution of reports. Our study highlights gaps in specific software features typical within exome analysis. The clinicians perform best when the flow of the system is structured into well-defined yet customizable layers for incorporation within the clinical workflow. The results highlight opportunities to dramatically accelerate clinician analysis and interpretation of patient genomic data. We present the first application of usability methods to evaluate software interfaces in the context of exome analysis. Our results highlight how the study of user responses can lead to identification of usability issues and challenges and reveal software reengineering opportunities for improving clinical next-generation sequencing analysis. While the evaluation focused on two distinctive software tools, the results are general and should inform active and future software development for genome analysis software. As large-scale genome analysis becomes increasingly common in healthcare, it is critical that efficient and effective software interfaces are provided to accelerate clinical adoption of the technology. Implications for improved design of such applications are discussed. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  8. Automated Software Vulnerability Analysis

    NASA Astrophysics Data System (ADS)

    Sezer, Emre C.; Kil, Chongkyung; Ning, Peng

    Despite decades of research, software continues to have vulnerabilities. Successful exploitations of these vulnerabilities by attackers cost millions of dollars to businesses and individuals. Unfortunately, most effective defensive measures, such as patching and intrusion prevention systems, require an intimate knowledge of the vulnerabilities. Many systems for detecting attacks have been proposed. However, the analysis of the exploited vulnerabilities is left to security experts and programmers. Both the human effort involved and the slow analysis process are unfavorable for timely defensive measures to be deployed. The problem is exacerbated by zero-day attacks.

  9. Software for rapid prototyping in the pharmaceutical and biotechnology industries.

    PubMed

    Kappler, Michael A

    2008-05-01

    The automation of drug discovery methods continues to develop, especially techniques that process information, represent workflow and facilitate decision-making. The magnitude of data and the plethora of questions in pharmaceutical and biotechnology research give rise to the need for rapid prototyping software. This review describes the advantages and disadvantages of three solutions: Competitive Workflow, Taverna and Pipeline Pilot. Each of these systems processes large amounts of data, integrates diverse systems and assists novice programmers and human experts in critical decision-making steps.

  10. Developing multiple-choices test items as tools for measuring the scientific-generic skills on solar system

    NASA Astrophysics Data System (ADS)

    Bhakti, Satria Seto; Samsudin, Achmad; Chandra, Didi Teguh; Siahaan, Parsaoran

    2017-05-01

    The aim of this research is to develop multiple-choice test items as tools for measuring scientific generic skills on the solar system. To achieve this aim, the researchers used the ADDIE model, consisting of Analyzing, Design, Development, Implementation, and Evaluation, as the research method. The scientific generic skills were limited in this research to five indicators: (1) indirect observation, (2) awareness of scale, (3) inference logic, (4) causal relation, and (5) mathematical modeling. The participants were 32 students at a junior high school in Bandung. The results show that the constructed multiple-choice test items were declared valid by the expert validators and, after testing, proved able to measure scientific generic skills on the solar system.

  11. Participation rate or informed choice? Rethinking the European key performance indicators for mammography screening.

    PubMed

    Strech, Daniel

    2014-03-01

    Despite the intensive controversies about the likelihood of benefits and harms of mammography screening almost all experts conclude that the choice to screen or not to screen needs to be made by the individual patient who is adequately informed. However, the "European guideline for quality assurance in breast cancer screening and diagnosis" specifies a participation rate of 70% as the key performance indicator for mammography screening. This paper argues that neither the existing evidence on benefits and harms, nor survey research with women, nor compliance rates in clinical trials, nor cost-effectiveness ratios justify participation rates as a reasonable performance indicator for preference-sensitive condition such as mammography screening. In contrast, an informed choice rate would be more reasonable. Further research needs to address the practical challenges in assessing informed choice rates. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. Automated Diagnosis Of Conditions In A Plant-Growth Chamber

    NASA Technical Reports Server (NTRS)

    Clinger, Barry R.; Damiano, Alfred L.

    1995-01-01

    Biomass Production Chamber Operations Assistant software and hardware constitute expert system that diagnoses mechanical failures in controlled-environment hydroponic plant-growth chamber and recommends corrective actions to be taken by technicians. Subjects of continuing research directed toward development of highly automated closed life-support systems aboard spacecraft to process animal (including human) and plant wastes into food and oxygen. Uses Microsoft Windows interface to give technicians intuitive, efficient access to critical data. In diagnostic mode, system prompts technician for information. When expert system has enough information, it generates recovery plan.

  13. Improved Real-Time Monitoring Using Multiple Expert Systems

    NASA Technical Reports Server (NTRS)

    Schwuttke, Ursula M.; Angelino, Robert; Quan, Alan G.; Veregge, John; Childs, Cynthia

    1993-01-01

    Monitor/Analyzer of Real-Time Voyager Engineering Link (MARVEL) computer program implements combination of techniques of both conventional automation and artificial intelligence to improve monitoring of complicated engineering system. Designed to support ground-based operations of Voyager spacecraft, also adapted to other systems. Enables more-accurate monitoring and analysis of telemetry, enhances productivity of monitoring personnel, reduces required number of such personnel by performing routine monitoring tasks, and helps ensure consistency in face of turnover of personnel. Programmed in C language and includes commercial expert-system software shell also written in C.

  14. Directions in Rehabilitation Counseling, 1991.

    ERIC Educational Resources Information Center

    Directions in Rehabilitation Counseling, 1991

    1991-01-01

    This volume of 12 lessons--each one written by either a medical or a mental health professional--provides expert information on a variety of medical and psychological issues in rehabilitative counseling. The lessons, each of which concludes with a few multiple-choice questions, are as follows: (1) "An Update on Post-Traumatic Stress…

  15. Factors Influencing Choices of Contextualized versus Traditional Practices with Children and Adolescents Who Have Traumatic Brain Injury

    ERIC Educational Resources Information Center

    Koole, Heather; Nelson, Nickola W.; Curtis, Amy B.

    2015-01-01

    Purpose: This preliminary investigation examined speech-language pathologists' (SLPs') use of contextualized practices (i.e., functional, personally relevant, nonhierarchical, and collaborative) compared with traditional practices (i.e., clinical, generic, hierarchical, and expert driven) with school-age children and adolescents with traumatic…

  16. Directions in Rehabilitation Counseling, 1994.

    ERIC Educational Resources Information Center

    Directions in Rehabilitation Counseling, 1994

    1994-01-01

    This volume of 12 lessons--each one written by either a medical or mental health professional--provides expert information on a variety of medical and psychological issues in rehabilitative counseling. The lessons, each of which concludes with a few multiple-choice questions, are as follows: (1) "Behavioral Techniques for Treatment of…

  17. Directions in Rehabilitation Counseling, 1990.

    ERIC Educational Resources Information Center

    Directions in Rehabilitation Counseling, 1990

    1990-01-01

    This volume of 12 lessons--each one written by either a medical or a mental health professional--provides expert information on a variety of medical and psychological issues in rehabilitative counseling. The lessons, each of which concludes with a few multiple-choice questions, are as follows: (1) "Rehabilitation of the Seriously Mentally…

  18. Liberating Schools: Education in the Inner City.

    ERIC Educational Resources Information Center

    Boaz, David, Ed.

    This volume offers the analysis and suggestions for reform of leading educational experts on the topic of education in the inner cities. An introduction provides an overview of the problems of American education and a proposed solution: educational choice. The 12 chapters are as follows: (1) "The Public School Monopoly: America's Berlin…

  19. Adolescent Health: A Generation at Risk.

    ERIC Educational Resources Information Center

    Hechinger, Fred M.

    1994-01-01

    A 3-day conference brought together health and education experts to explore responses to adolescent health problems and to suggest ways to implement the recommendations put forward in "Fateful Choices: Healthy Youth for the 21st Century," by Fred M. Hechinger. Conference participants identified a number of adolescent health problems and the areas…

  20. Directions in Rehabilitation Counseling, 1992.

    ERIC Educational Resources Information Center

    Directions in Rehabilitation Counseling, 1992

    1992-01-01

    This volume of 12 separate lessons--each written by either a medical or mental health professional--provides expert information on a wide variety of medical and psychological issues in rehabilitative counseling. The lessons, each of which concludes with a few multiple-choice questions, are as follows: (1) "Adaptive Styles in the Etiology of…

  1. Counselor and Student at Talk: A Case Study.

    ERIC Educational Resources Information Center

    He, Agnes Weiyun; Keating, Elizabeth

    1991-01-01

    Explores ways in which expert and novice roles are constituted and maintained in an academic counseling encounter. Characterizes the meeting as a socializing, problem-solving event and uses functional linguistics and discourse analysis to describe how the counselor and student mark stance through linguistic choices such as polarity, modality,…

  2. Copyright, Public Policy, and the Scholarly Community.

    ERIC Educational Resources Information Center

    Matthews, Michael, Ed.; Brennan, Patricia, Ed.

    At the May 1995 Membership Meeting of the Association of Research Libraries (ARL), a panel of experts offered four perspectives on strategies and public policy choices involved in defining the rights and responsibilities of copyright owners, users, and the libraries in the networked environment. These perspectives, and an additional paper…

  3. Development and Validation of the Homeostasis Concept Inventory

    ERIC Educational Resources Information Center

    McFarland, Jenny L.; Price, Rebecca M.; Wenderoth, Mary Pat; Martinková, Patrícia; Cliff, William; Michael, Joel; Modell, Harold; Wright, Ann

    2017-01-01

    We present the Homeostasis Concept Inventory (HCI), a 20-item multiple-choice instrument that assesses how well undergraduates understand this critical physiological concept. We used an iterative process to develop a set of questions based on elements in the Homeostasis Concept Framework. This process involved faculty experts and undergraduate…

  4. Performance of e-ASPECTS software in comparison to that of stroke physicians on assessing CT scans of acute ischemic stroke patients.

    PubMed

    Herweh, Christian; Ringleb, Peter A; Rauch, Geraldine; Gerry, Steven; Behrens, Lars; Möhlenbruch, Markus; Gottorf, Rebecca; Richter, Daniel; Schieber, Simon; Nagel, Simon

    2016-06-01

    The Alberta Stroke Program Early CT score (ASPECTS) is an established 10-point quantitative topographic computed tomography scan score to assess early ischemic changes. We compared the performance of the e-ASPECTS software with that of stroke physicians at different professional levels. The baseline computed tomography scans of acute stroke patients, in whom computed tomography and diffusion-weighted imaging scans were obtained less than two hours apart, were retrospectively scored by e-ASPECTS as well as by three stroke experts and three neurology trainees blinded to any clinical information. The ground truth was defined as the ASPECTS on diffusion-weighted imaging scored by another two non-blinded independent experts on a consensus basis. Sensitivity and specificity in an ASPECTS region-based and an ASPECTS score-based analysis as well as receiver-operating characteristic curves, Bland-Altman plots with mean score error, and Matthews correlation coefficients were calculated. Comparisons were made between the human scorers and e-ASPECTS with diffusion-weighted imaging being the ground truth. Two methods for clustered data were used to estimate sensitivity and specificity in the region-based analysis. In total, 34 patients were included and 680 (34 × 20) ASPECTS regions were scored. Mean time from onset to computed tomography was 172 ± 135 min and mean time difference between computed tomography and magnetic resonance imaging was 41 ± 31 min. The region-based sensitivity (46.46% [CI: 30.8;62.1]) of e-ASPECTS was better than that of three trainees and one expert (p ≤ 0.01) and not statistically different from another two experts. Specificity (94.15% [CI: 91.7;96.6]) was lower than that of one expert and one trainee (p < 0.01) and not statistically different from the other four physicians. e-ASPECTS had the best Matthews correlation coefficient of 0.44 (experts: 0.38 ± 0.08 and trainees: 0.19 ± 0.05) and the lowest mean score error of 0.56 (experts: 1.44 ± 1.79 and trainees: 1.97 ± 2.12). e-ASPECTS showed a similar performance to that of stroke experts in the assessment of brain computed tomography scans of acute ischemic stroke patients with the Alberta Stroke Program Early CT score method. © 2016 World Stroke Organization.
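
    The region-based agreement metrics reported above (sensitivity, specificity, Matthews correlation coefficient) can all be derived from a confusion matrix of scorer decisions against the DWI ground truth. The sketch below uses scikit-learn with invented labels and ignores the clustered-data correction mentioned in the abstract; it illustrates the metrics only.

        # Sketch: region-based agreement of a scorer against the DWI consensus ground truth.
        # The label arrays are invented examples (1 = ASPECTS region scored as infarcted).
        import numpy as np
        from sklearn.metrics import confusion_matrix, matthews_corrcoef

        ground_truth = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0])   # DWI consensus
        rater_scores = np.array([1, 0, 0, 0, 1, 0, 1, 0, 1, 0])   # e-ASPECTS or a physician

        tn, fp, fn, tp = confusion_matrix(ground_truth, rater_scores).ravel()
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        mcc = matthews_corrcoef(ground_truth, rater_scores)
        print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, MCC={mcc:.2f}")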

  5. The role and choice criteria of antihistamines in allergy management – expert opinion

    PubMed Central

    Jurkiewicz, Dariusz; Czarnecka-Operacz, Magdalena M.; Pawliczak, Rafał; Woroń, Jarosław; Moniuszko, Marcin; Emeryk, Andrzej

    2016-01-01

    Allergic diseases are the most common chronic conditions lasting throughout the patient’s life. They not only cause significant deterioration in the quality of life of patients but also lead to significant absenteeism and reduced productivity, resulting in very high costs for society. Effective and safe treatment of allergic diseases is therefore one of the main challenges for public health and should be carried out by all the specialists in family medicine, internists and paediatricians in collaboration with allergists, otorhinolaryngologists and dermatologists. Antihistamines are most commonly used in the treatment of allergies. Several dozen drugs are available on the pharmaceutical market, and their generic forms are advertised widely as very effective drugs for the treatment of allergic diseases. What is the truth? What are the data from clinical trials and observational studies? Are all drugs equally effective and safe for the patient? According to a panel of experts representing various fields of medicine, inappropriate treatment of allergies can be very risky for patients, and seemingly equally acting medications may differ greatly. Therefore, a panel of experts gathered the latest data from the entire scientific literature and analysed the latest standards and recommendations prepared by scientific societies. This paper provides a summary of these studies and highlights the importance for the patient of the proper choice of drug to treat his allergies. PMID:28035215

  6. Developing a driving Safety Index using a Delphi stated preference experiment.

    PubMed

    Jamson, Samantha; Wardman, Mark; Batley, Richard; Carsten, Oliver

    2008-03-01

    Whilst empirical evidence is available concerning the effect of some aspects of driving behaviour on safety (e.g. speed choice), there is scant knowledge about safety thresholds, i.e. the point at which behaviour can be considered unsafe. Furthermore, it is almost impossible to ascertain the interaction between various aspects of driving behaviour. For example, how might drivers' lateral control of a vehicle be mediated by their speed choice? Are the effects additive, or do they cancel each other out? Complex experimental or observational studies would need to be undertaken to establish the nature of such effects. As an alternative, a Delphi study was undertaken to use expert judgement as a way of deriving a first approximation of these threshold and combinatory effects. Using a stated preference technique, road safety professionals made judgements about drivers' safe or unsafe behaviour. The aim was to understand the relative weightings that are assigned to a number of driver behaviours and thereby to construct a Safety Index. As expected, experts were able to establish thresholds, above (or below) which changes to the behavioural parameters had minimal impact on safety. This provided us with a Safety Index, based on a model that had face validity and a convincing range of values. However, the experts found the task of combining these driver behaviours more difficult, reflecting the elusive nature of safety estimates. Suggestions for future validation of our Safety Index are provided.

  7. The role and choice criteria of antihistamines in allergy management - expert opinion.

    PubMed

    Kuna, Piotr; Jurkiewicz, Dariusz; Czarnecka-Operacz, Magdalena M; Pawliczak, Rafał; Woroń, Jarosław; Moniuszko, Marcin; Emeryk, Andrzej

    2016-12-01

    Allergic diseases are the most common chronic conditions lasting throughout the patient's life. They not only cause significant deterioration in the quality of life of patients but also lead to significant absenteeism and reduced productivity, resulting in very high costs for society. Effective and safe treatment of allergic diseases is therefore one of the main challenges for public health and should be carried out by all the specialists in family medicine, internists and paediatricians in collaboration with allergists, otorhinolaryngologists and dermatologists. Antihistamines are most commonly used in the treatment of allergies. Several dozen drugs are available on the pharmaceutical market, and their generic forms are advertised widely as very effective drugs for the treatment of allergic diseases. What is the truth? What are the data from clinical trials and observational studies? Are all drugs equally effective and safe for the patient? According to a panel of experts representing various fields of medicine, inappropriate treatment of allergies can be very risky for patients, and seemingly equally acting medications may differ greatly. Therefore, a panel of experts gathered the latest data from the entire scientific literature and analysed the latest standards and recommendations prepared by scientific societies. This paper provides a summary of these studies and highlights the importance for the patient of the proper choice of drug to treat his allergies.

  8. Application of Data Provenance in Healthcare Analytics Software: Information Visualisation of User Activities

    PubMed Central

    Xu, Shen; Rogers, Toby; Fairweather, Elliot; Glenn, Anthony; Curran, James; Curcin, Vasa

    2018-01-01

    Data provenance is a technique that describes the history of digital objects. In health data settings, it can be used to deliver auditability and transparency, and to achieve trust in a software system. However, implementing data provenance in analytics software at an enterprise level presents a different set of challenges from the research environments where data provenance was originally devised. In this paper, the challenges of reporting provenance information to the user are presented. Provenance captured from analytics software can be large and complex, and visualizing a series of tasks over a long period can be overwhelming even for a domain expert, requiring visual aggregation mechanisms that fit with the complex human cognitive activities involved in the process. This research studied how provenance-based reporting can be integrated into health data analytics software, using the example of the Atmolytics visual reporting tool. PMID:29888084

  9. Webulous and the Webulous Google Add-On--a web service and application for ontology building from templates.

    PubMed

    Jupp, Simon; Burdett, Tony; Welter, Danielle; Sarntivijai, Sirarat; Parkinson, Helen; Malone, James

    2016-01-01

    Authoring bio-ontologies is a task that has traditionally been undertaken by skilled experts trained in understanding complex languages such as the Web Ontology Language (OWL), in tools designed for such experts. As requests for new terms are made, the need for expert ontologists represents a bottleneck in the development process. Furthermore, the ability to rigorously enforce ontology design patterns in large, collaboratively developed ontologies is difficult with existing ontology authoring software. We present Webulous, an application suite for supporting ontology creation by design patterns. Webulous provides infrastructure to specify templates for populating ontology design patterns that get transformed into OWL assertions in a target ontology. Webulous provides programmatic access to the template server and a client application has been developed for Google Sheets that allows templates to be loaded, populated and resubmitted to the Webulous server for processing. The development and delivery of ontologies to the community requires software support that goes beyond the ontology editor. Building ontologies by design patterns and providing simple mechanisms for the addition of new content helps reduce the overall cost and effort required to develop an ontology. The Webulous system provides support for this process and is used as part of the development of several ontologies at the European Bioinformatics Institute.

  10. An evaluation of a computer based education program for the diagnosis and management of dementia in primary care. An international study of the transcultural adaptations necessary for European dissemination.

    PubMed

    Degryse, J; De Lepeleire, J; Southgate, L; Vernooij-Dassen, M; Gay, B; Heyrman, J

    2009-05-01

    The aim of this study is to make an inventory of the changes that are needed to make an interactive computer-based training program (ICBT) with specific educational content acceptable to professional communities with different linguistic, cultural and health care backgrounds in different European countries. Existing educational software, written in two languages, was reviewed by GPs and primary care professionals in three different countries. Reviewers worked through the program using a structured critical reading grid. A 'simple' translation of the program is not sufficient. Minor changes are needed to take account of linguistic differences and medical semantics. Major changes are needed in respect of the existing clinical guidelines in every country, related to differences in the existing health care systems. ICBT programs cannot easily be used in different countries and cultures. The development of a structured educational program needs collaboration between educationalists, domain experts, information technology advisers and software engineers. Simple validation of the content by local expert groups will not guarantee the program's exportability. It is essential to involve different national expert groups at every phase of the development process in order to disseminate it in other countries.

  11. Advanced Computing Technologies for Rocket Engine Propulsion Systems: Object-Oriented Design with C++

    NASA Technical Reports Server (NTRS)

    Bekele, Gete

    2002-01-01

    This document explores the use of advanced computer technologies with an emphasis on object-oriented design to be applied in the development of software for a rocket engine to improve vehicle safety and reliability. The primary focus is on phase one of this project, the smart start sequence module. The objectives are: 1) To use current sound software engineering practices, object-orientation; 2) To improve on software development time, maintenance, execution and management; 3) To provide an alternate design choice for control, implementation, and performance.

  12. Computer-aided decision making.

    Treesearch

    Keith M. Reynolds; Daniel L. Schmoldt

    2006-01-01

    Several major classes of software technologies have been used in decisionmaking for forest management applications over the past few decades. These computer-based technologies include mathematical programming, expert systems, network models, multi-criteria decisionmaking, and integrated systems. Each technology possesses unique advantages and disadvantages, and has...

  13. Design Recovery for Software Library Population

    DTIC Science & Technology

    1992-12-01

    increase understandability, efficiency, and maintainability of the software and the design. A good representation choice will also aid in...required for a reengineering project. It details the analysis and planning phase and gives good criteria for determining the need for a reengineering...because it deals with all of these issues. With his complete description of the analysis and planning phase, Byrne has a good foundation for

  14. Weighing In: The Taste-Engineering Frame in Obesity Expert Discourse

    PubMed Central

    Zimmerman, Frederick J.; Gilliam, Franklin D.

    2015-01-01

    Objectives. We sought expert opinion on the problems with 2 dominant obesity-prevention discourse frames—personal responsibility and the environment—and examined alternative frames for understanding and addressing obesity. Methods. We conducted 60-minute, semistructured interviews with 15 US-based obesity experts. We manually coded and entered interview transcripts into software, generating themes and subthematic areas that captured the debate’s essence. Results. Although the environmental frame is the dominant model used in communications with the public and policymakers, several experts found that communicating key messages within this frame was difficult because of the enormity of the obesity problem. A subframe of the environmental frame—the taste-engineering frame—identifies food industry strategies to influence the overconsumption of certain foods and beverages. This emerging frame deconstructs the environmental frame so that causal attributes and responsible agents are more easily identifiable and proposed policies and public health interventions more salient. Conclusions. Expert interviews are an invaluable resource for understanding how experts use frames in discussing their work and in conversations with the public and policymakers. Future empirical studies testing the effectiveness of the taste-engineering frame on public opinion and support for structural-level health policies are needed. PMID:25602888

  15. Weighing in: the taste-engineering frame in obesity expert discourse.

    PubMed

    Ortiz, Selena E; Zimmerman, Frederick J; Gilliam, Franklin D

    2015-03-01

    We sought expert opinion on the problems with 2 dominant obesity-prevention discourse frames-personal responsibility and the environment-and examined alternative frames for understanding and addressing obesity. We conducted 60-minute, semistructured interviews with 15 US-based obesity experts. We manually coded and entered interview transcripts into software, generating themes and subthematic areas that captured the debate's essence. Although the environmental frame is the dominant model used in communications with the public and policymakers, several experts found that communicating key messages within this frame was difficult because of the enormity of the obesity problem. A subframe of the environmental frame--the taste-engineering frame--identifies food industry strategies to influence the overconsumption of certain foods and beverages. This emerging frame deconstructs the environmental frame so that causal attributes and responsible agents are more easily identifiable and proposed policies and public health interventions more salient. Expert interviews are an invaluable resource for understanding how experts use frames in discussing their work and in conversations with the public and policymakers. Future empirical studies testing the effectiveness of the taste-engineering frame on public opinion and support for structural-level health policies are needed.

  16. Influence of Professional Affiliation on Expert’s View on Welfare Measures

    PubMed Central

    Rousing, Tine; Forkman, Björn

    2017-01-01

    Simple Summary Animal welfare can be assessed from different ethical points of view, which may vary from one individual to another. This is often met by including different stakeholders’ opinions in the process of adding up welfare benefits and/or welfare risks. However, in order to obtain the most reliable results, these expert panels should be balanced, since experts’ professional affiliations can influence their judgment on different welfare aspects, as shown in the present study. Abstract The present study seeks to investigate the influence of expert affiliation in the weighing procedures within animal welfare assessments. Experts with different backgrounds and differing approaches to animal welfare are often gathered, posing a potential pitfall if affiliation groups are not balanced in numbers of experts. At two time points (2012 and 2016), dairy cattle and swine experts from four different stakeholder groups, namely researchers (RES), production advisors (CONS), practicing veterinarians (VET) and animal welfare control officers (AWC) were asked to weigh eight different welfare criteria: Hunger, Thirst, Resting comfort, Ease of movement, Injuries, Disease, Human-animal bond and Emotional state. A total of 54 dairy cattle experts (RES = 15%, CONS = 22%, VET = 35%, AWC = 28%) and 34 swine experts (RES = 24%, CONS = 35%, AWC = 41%) participated. Between- and within-group differences in the prioritization of criteria were assessed. AWC cattle experts differed consistently from the other cattle expert groups, but the difference was only significant for the criterion Hunger (p = 0.04), with a tendency towards significance for the criterion Thirst (p = 0.06). No significant differences were found between expert groups among swine experts. Inter-expert differences were more pronounced for both species. The results highlight the challenges of using expert weightings in aggregated welfare assessment models, as the choice of expert affiliation may play a confounding role in the final aggregation due to different prioritization of criteria. PMID:29140262

  17. Rules of thumb to increase the software quality through testing

    NASA Astrophysics Data System (ADS)

    Buttu, M.; Bartolini, M.; Migoni, C.; Orlati, A.; Poppi, S.; Righini, S.

    2016-07-01

    Software maintenance typically accounts for 40-80% of overall project costs, and this considerable variability mostly depends on the software's internal quality: the more the software is designed and implemented to constantly welcome new changes, the lower the maintenance costs will be. The internal quality is typically enforced through testing, which in turn also affects the development and maintenance costs. This is the reason why testing methodologies have become a major concern for any company that builds - or is involved in building - software. Although there is no testing approach that suits all contexts, we infer some general guidelines learned during the Development of the Italian Single-dish COntrol System (DISCOS), which is a project aimed at producing the control software for the three INAF radio telescopes (the Medicina and Noto dishes, and the newly-built SRT). These guidelines concern both the development and the maintenance phases, and their ultimate goal is to maximize the DISCOS software quality through a Behavior-Driven Development (BDD) workflow alongside a continuous delivery pipeline. We consider different topics and patterns; they involve the proper apportioning of tests (from end-to-end to low-level tests), the choice between hardware simulators and mockers, why and how to apply TDD and dependency injection to increase test coverage, the emerging technologies available for test isolation, bug fixing, how to protect the system from changes in external resources (firmware updates, hardware substitution, etc.) and, finally, how to accomplish BDD starting from functional tests and going through integration and unit tests. We discuss the pros and cons of each solution and point out the motivations for our choices, either as general rules or narrowed to the context of the DISCOS project.
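
    One of the patterns listed above, dependency injection combined with a mocked hardware interface, is easy to show in miniature. The sketch below is generic Python/unittest code; the class and method names are invented for illustration and are not part of the DISCOS code base.

        # Sketch: dependency injection makes a controller testable without real hardware.
        # Class and method names are invented for illustration, not DISCOS APIs.
        import unittest
        from unittest.mock import Mock

        class AntennaController:
            def __init__(self, servo):                   # the servo driver is injected
                self.servo = servo

            def slew_to(self, azimuth, elevation):
                if not 0 <= elevation <= 90:
                    raise ValueError("elevation out of range")
                self.servo.move(azimuth, elevation)
                return self.servo.position()

        class AntennaControllerTest(unittest.TestCase):
            def test_slew_commands_the_servo(self):
                servo = Mock()
                servo.position.return_value = (120.0, 45.0)
                controller = AntennaController(servo)
                self.assertEqual(controller.slew_to(120.0, 45.0), (120.0, 45.0))
                servo.move.assert_called_once_with(120.0, 45.0)

            def test_invalid_elevation_is_rejected(self):
                with self.assertRaises(ValueError):
                    AntennaController(Mock()).slew_to(0.0, 120.0)

        if __name__ == "__main__":
            unittest.main()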

  18. FIGENIX: Intelligent automation of genomic annotation: expertise integration in a new software platform

    PubMed Central

    Gouret, Philippe; Vitiello, Vérane; Balandraud, Nathalie; Gilles, André; Pontarotti, Pierre; Danchin, Etienne GJ

    2005-01-01

    Background Two of the main objectives of the genomic and post-genomic era are to structurally and functionally annotate genomes, which consists of detecting genes' positions and structures, and inferring their function (as well as other features of genomes). Structural and functional annotation both require the complex chaining of numerous different software tools, algorithms and methods under the supervision of a biologist. The automation of these pipelines is necessary to manage the huge amounts of data released by sequencing projects. Several pipelines already automate some of this complex chaining but still necessitate an important contribution from biologists for supervising and controlling the results at various steps. Results Here we propose an innovative automated platform, FIGENIX, which includes an expert system capable of substituting for human expertise at several key steps. FIGENIX currently automates complex pipelines of structural and functional annotation under the supervision of the expert system (which allows it, for example, to make key decisions, check intermediate results or refine the dataset). The quality of the results produced by FIGENIX is comparable to that obtained by expert biologists, with a drastic gain in terms of time costs and the avoidance of errors due to the human manipulation of data. Conclusion The core engine and expert system of the FIGENIX platform currently handle complex annotation processes of broad interest for the genomic community. They could easily be adapted to new or more specialized pipelines, such as the annotation of miRNAs, the classification of complex multigenic families, the annotation of regulatory elements and other genomic features of interest. PMID:16083500

  19. A flexible, interactive software tool for fitting the parameters of neuronal models.

    PubMed

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I; Freund, Tamás F; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool.
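
    As a rough illustration of the kind of cost-function-driven parameter search such a tool automates, the sketch below fits two passive membrane parameters of a toy RC model to a noisy target trace with SciPy. It is a generic example under stated assumptions, not Optimizer's actual interface or its NEURON integration.

        # Sketch: fit two passive membrane parameters (R_m, C_m) of a toy RC model so that
        # its step response matches a noisy target trace. Generic illustration only.
        import numpy as np
        from scipy.optimize import minimize

        t = np.linspace(0.0, 0.1, 200)        # seconds
        I_step = 100e-12                      # 100 pA current step

        def rc_response(params, t):
            r_m, c_m = params                 # membrane resistance (ohm) and capacitance (F)
            return I_step * r_m * (1.0 - np.exp(-t / (r_m * c_m)))

        # "Experimental" target generated from known parameters plus noise (placeholder data).
        rng = np.random.default_rng(0)
        target = rc_response((200e6, 150e-12), t) + rng.normal(0.0, 0.2e-3, t.size)

        def cost(params):
            return np.mean((rc_response(params, t) - target) ** 2)   # mean squared error

        result = minimize(cost, x0=(100e6, 100e-12), method="Nelder-Mead")
        print("fitted R_m, C_m:", result.x)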

  20. A flexible, interactive software tool for fitting the parameters of neuronal models

    PubMed Central

    Friedrich, Péter; Vella, Michael; Gulyás, Attila I.; Freund, Tamás F.; Káli, Szabolcs

    2014-01-01

    The construction of biologically relevant neuronal models as well as model-based analysis of experimental data often requires the simultaneous fitting of multiple model parameters, so that the behavior of the model in a certain paradigm matches (as closely as possible) the corresponding output of a real neuron according to some predefined criterion. Although the task of model optimization is often computationally hard, and the quality of the results depends heavily on technical issues such as the appropriate choice (and implementation) of cost functions and optimization algorithms, no existing program provides access to the best available methods while also guiding the user through the process effectively. Our software, called Optimizer, implements a modular and extensible framework for the optimization of neuronal models, and also features a graphical interface which makes it easy for even non-expert users to handle many commonly occurring scenarios. Meanwhile, educated users can extend the capabilities of the program and customize it according to their needs with relatively little effort. Optimizer has been developed in Python, takes advantage of open-source Python modules for nonlinear optimization, and interfaces directly with the NEURON simulator to run the models. Other simulators are supported through an external interface. We have tested the program on several different types of problems of varying complexity, using different model classes. As targets, we used simulated traces from the same or a more complex model class, as well as experimental data. We successfully used Optimizer to determine passive parameters and conductance densities in compartmental models, and to fit simple (adaptive exponential integrate-and-fire) neuronal models to complex biological data. Our detailed comparisons show that Optimizer can handle a wider range of problems, and delivers equally good or better performance than any other existing neuronal model fitting tool. PMID:25071540

  1. Using failure mode and effects analysis to plan implementation of smart i.v. pump technology.

    PubMed

    Wetterneck, Tosha B; Skibinski, Kathleen A; Roberts, Tanita L; Kleppin, Susan M; Schroeder, Mark E; Enloe, Myra; Rough, Steven S; Hundt, Ann Schoofs; Carayon, Pascale

    2006-08-15

    Failure mode and effects analysis (FMEA) was used to evaluate a smart i.v. pump as it was implemented into a redesigned medication-use process. A multidisciplinary team conducted a FMEA to guide the implementation of a smart i.v. pump that was designed to prevent pump programming errors. The smart i.v. pump was equipped with a dose-error reduction system that included a pre-defined drug library in which dosage limits were set for each medication. Monitoring for potential failures and errors occurred for three months postimplementation of FMEA. Specific measures were used to determine the success of the actions that were implemented as a result of the FMEA. The FMEA process at the hospital identified key failure modes in the medication process with the use of the old and new pumps, and actions were taken to avoid errors and adverse events. I.V. pump software and hardware design changes were also recommended. Thirteen of the 18 failure modes reported in practice after pump implementation had been identified by the team. A beneficial outcome of FMEA was the development of a multidisciplinary team that provided the infrastructure for safe technology implementation and effective event investigation after implementation. With the continual updating of i.v. pump software and hardware after implementation, FMEA can be an important starting place for safe technology choice and implementation and can produce site experts to follow technology and process changes over time. FMEA was useful in identifying potential problems in the medication-use process with the implementation of new smart i.v. pumps. Monitoring for system failures and errors after implementation remains necessary.
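
    FMEA teams typically rank candidate failure modes by a risk priority number (RPN), the product of severity, occurrence, and detectability ratings, and focus mitigation on the highest-ranked modes. The sketch below shows that bookkeeping in Python; the example failure modes and ratings are invented, not the hospital's actual findings.

        # Sketch: rank FMEA failure modes by risk priority number (RPN).
        # The entries and ratings are invented illustrations, not the hospital's findings.
        from dataclasses import dataclass

        @dataclass
        class FailureMode:
            description: str
            severity: int      # 1 (negligible) .. 10 (catastrophic)
            occurrence: int    # 1 (rare) .. 10 (frequent)
            detection: int     # 1 (always detected) .. 10 (undetectable)

            @property
            def rpn(self) -> int:
                return self.severity * self.occurrence * self.detection

        modes = [
            FailureMode("Wrong drug selected from library", 9, 3, 4),
            FailureMode("Dose limit override without review", 8, 4, 3),
            FailureMode("Pump programmed outside drug library", 7, 5, 6),
        ]

        for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
            print(f"RPN={m.rpn:3d}  {m.description}")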

  2. [Nursing physical examination of the full-term neonate: self-instructional software].

    PubMed

    Fernandes, Maria das Graças de Oliveira; Barbosa, Vera Lucia; Naganuma, Masuco

    2006-01-01

    The purpose of this research is to develop software on the physical examination of full-term newborns (TNB) for neonatal nursing teaching at the undergraduate level. The software was developed according to the phases of planning, content development and evaluation. The construction of the modules was based on Gagné's modern learning theory and structured on the Keller Plan, in line with the systemic approach. The objectives were to develop and evaluate the contents of the self-instructional modules, to be used as a teaching strategy in the undergraduate course. After being structured, the material was reviewed and analyzed by 11 neonatal nursing experts, who rated the 42 items presented as good or excellent.

  3. Managing MDO Software Development Projects

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.; Salas, A. O.

    2002-01-01

    Over the past decade, the NASA Langley Research Center developed a series of 'grand challenge' applications demonstrating the use of parallel and distributed computation and multidisciplinary design optimization. All but the last of these applications were focused on the high-speed civil transport vehicle; the final application focused on reusable launch vehicles. Teams of discipline experts developed these multidisciplinary applications by integrating legacy engineering analysis codes. As teams became larger and the application development became more complex with increasing levels of fidelity and numbers of disciplines, the need for applying software engineering practices became evident. This paper briefly introduces the application projects and then describes the approaches taken in project management and software engineering for each project; lessons learned are highlighted.

  4. Classification of voting algorithms for N-version software

    NASA Astrophysics Data System (ADS)

    Tsarev, R. Yu; Durmuş, M. S.; Üstoglu, I.; Morozov, V. A.

    2018-05-01

    A voting algorithm in N-version software is a crucial component that evaluates the execution of each of the N versions and determines the correct result. Obviously, the result of the voting algorithm determines the outcome of the N-version software in general. Thus, the choice of the voting algorithm is a vital issue. Many voting algorithms have already been developed, and one may be selected for implementation based on the specifics of the input data. However, the voting algorithms applied in N-version software have not been classified. This article presents an overview of classic and recent voting algorithms used in N-version software and the authors' classification of the voting algorithms. Moreover, the steps of the voting algorithms are presented and the distinctive features of the voting algorithms in N-version software are defined.
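
    As a concrete reference point, the sketch below implements the simplest classic voter, an exact-match majority vote over the N version outputs; it is a generic illustration under that assumption, not one of the article's specific algorithms (consensus and weighted voters differ in their agreement criteria).

      from collections import Counter

      def majority_vote(outputs):
          # Exact-match majority voter over the outputs of N software versions.
          value, votes = Counter(outputs).most_common(1)[0]
          if votes > len(outputs) // 2:
              return value
          raise RuntimeError("no majority among the N versions")

      print(majority_vote([42, 42, 41]))   # agreeing majority -> 42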

  5. A survey of Canadian medical physicists: software quality assurance of in‐house software

    PubMed Central

    Kelly, Diane

    2015-01-01

    This paper reports on a survey of medical physicists who write and use in‐house written software as part of their professional work. The goal of the survey was to assess the extent of in‐house software usage and the desire or need for related software quality guidelines. The survey contained eight multiple‐choice questions, a ranking question, and seven free text questions. The survey was sent to medical physicists associated with cancer centers across Canada. The respondents to the survey expressed interest in having guidelines to help them in their software‐related work, but also demonstrated extensive skills in the area of testing, safety, and communication. These existing skills form a basis for medical physicists to establish a set of software quality guidelines. PACS number: 87.55.Qr PMID:25679168

  6. Generic Safety Requirements for Developing Safe Insulin Pump Software

    PubMed Central

    Zhang, Yi; Jetley, Raoul; Jones, Paul L; Ray, Arnab

    2011-01-01

    Background The authors previously introduced a highly abstract generic insulin infusion pump (GIIP) model that identified common features and hazards shared by most insulin pumps on the market. The aim of this article is to extend our previous work on the GIIP model by articulating safety requirements that address the identified GIIP hazards. These safety requirements can be validated by manufacturers, and may ultimately serve as a safety reference for insulin pump software. Together, these two publications can serve as a basis for discussing insulin pump safety in the diabetes community. Methods In our previous work, we established a generic insulin pump architecture that abstracts functions common to many insulin pumps currently on the market and near-future pump designs. We then carried out a preliminary hazard analysis based on this architecture that included consultations with many domain experts. Further consultation with domain experts resulted in the safety requirements used in the modeling work presented in this article. Results Generic safety requirements for the GIIP model are presented, as appropriate, in parameterized format to accommodate clinical practices or specific insulin pump criteria important to safe device performance. Conclusions We believe that there is considerable value in having the diabetes, academic, and manufacturing communities consider and discuss these generic safety requirements. We hope that the communities will extend and revise them, make them more representative and comprehensive, experiment with them, and use them as a means for assessing the safety of insulin pump software designs. One potential use of these requirements is to integrate them into model-based engineering (MBE) software development methods. We believe, based on our experiences, that implementing safety requirements using MBE methods holds promise in reducing design/implementation flaws in insulin pump development and evolutionary processes, therefore improving overall safety of insulin pump software. PMID:22226258
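
    As an example of the parameterized format, the sketch below turns one generic requirement ("a commanded bolus must not exceed the configured maximum single bolus") into an executable check; the requirement wording, function name and limit value are illustrative assumptions, not items taken from the GIIP requirement set.

      # One parameterized safety requirement expressed as an executable check.
      # The limit below is a placeholder parameter to be set per clinical practice,
      # not a clinical recommendation.
      MAX_SINGLE_BOLUS_UNITS = 10.0

      def bolus_command_allowed(requested_units: float) -> bool:
          return 0.0 < requested_units <= MAX_SINGLE_BOLUS_UNITS

      assert bolus_command_allowed(4.5)
      assert not bolus_command_allowed(25.0)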

  7. Intelligent fault management for the Space Station active thermal control system

    NASA Technical Reports Server (NTRS)

    Hill, Tim; Faltisco, Robert M.

    1992-01-01

    The Thermal Advanced Automation Project (TAAP) approach and architecture is described for automating the Space Station Freedom (SSF) Active Thermal Control System (ATCS). The baseline functionality and advanced automation techniques for Fault Detection, Isolation, and Recovery (FDIR) will be compared and contrasted. Advanced automation techniques such as rule-based systems and model-based reasoning should be utilized to efficiently control, monitor, and diagnose this extremely complex physical system. TAAP is developing advanced FDIR software for use on the SSF thermal control system. The goal of TAAP is to join Knowledge-Based System (KBS) technology, using a combination of rules and model-based reasoning, with conventional monitoring and control software in order to maximize autonomy of the ATCS. TAAP's predecessor was NASA's Thermal Expert System (TEXSYS) project which was the first large real-time expert system to use both extensive rules and model-based reasoning to control and perform FDIR on a large, complex physical system. TEXSYS showed that a method is needed for safely and inexpensively testing all possible faults of the ATCS, particularly those potentially damaging to the hardware, in order to develop a fully capable FDIR system. TAAP therefore includes the development of a high-fidelity simulation of the thermal control system. The simulation provides realistic, dynamic ATCS behavior and fault insertion capability for software testing without hardware related risks or expense. In addition, thermal engineers will gain greater confidence in the KBS FDIR software than was possible prior to this kind of simulation testing. The TAAP KBS will initially be a ground-based extension of the baseline ATCS monitoring and control software and could be migrated on-board as additional computation resources are made available.
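
    A toy illustration of the rule-based side of such an FDIR layer is sketched below: telemetry values are checked against threshold rules, each asserting a fault hypothesis. The sensor names and thresholds are invented for illustration, and a real KBS would combine such rules with model-based reasoning and recovery logic.

      # Invented threshold rules standing in for the rule-based part of an FDIR layer.
      def detect_faults(telemetry):
          faults = []
          if telemetry["pump_outlet_pressure_kpa"] < 150:
              faults.append("suspected pump cavitation or loop leak")
          if telemetry["evaporator_outlet_temp_c"] > 25:
              faults.append("insufficient heat rejection")
          return faults

      print(detect_faults({"pump_outlet_pressure_kpa": 120,
                           "evaporator_outlet_temp_c": 22}))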

  8. First Annual Workshop on Space Operations Automation and Robotics (SOAR 87)

    NASA Technical Reports Server (NTRS)

    Griffin, Sandy (Editor)

    1987-01-01

    Several topics relative to automation and robotics technology are discussed. Automation of checkout, ground support, and logistics; automated software development; man-machine interfaces; neural networks; systems engineering and distributed/parallel processing architectures; and artificial intelligence/expert systems are among the topics covered.

  9. Automated Induction Of Rule-Based Neural Networks

    NASA Technical Reports Server (NTRS)

    Smyth, Padhraic J.; Goodman, Rodney M.

    1994-01-01

    Prototype expert systems, implemented in software and functionally equivalent to neural networks, are set up automatically and placed into operation within minutes, following an information-theoretic approach to the automated acquisition of knowledge from large example databases. The approach is based largely on use of the ITRULE computer program.

  10. Exploring Planet PDA: The Librarian as Astronaut, Innovator, and Expert.

    ERIC Educational Resources Information Center

    Galganski, Carol; Peters, Tom; Bell, Lori

    2002-01-01

    Describes the integration of personal digital assistants into a medical center library's services in Illinois. Discusses training for users; hardware selection; software selection and content; technical support; the role of libraries, including the creation of policies and procedures; and future challenges. (LRW)

  11. Using ArchE in the Classroom: One Experience

    DTIC Science & Technology

    2007-09-01

    The Architecture Expert (ArchE) tool serves as a software architecture design assistant. It embodies knowledge of quality attributes and the relation...between the achievement of quality attribute requirements and architecture design. This technical note describes the use of a pre-alpha release of

  12. A Starter's Guide to Artificial Intelligence.

    ERIC Educational Resources Information Center

    McConnell, Barry A.; McConnell, Nancy J.

    1988-01-01

    Discussion of the history and development of artificial intelligence (AI) highlights a bibliography of introductory books on various aspects of AI, including AI programming; problem solving; automated reasoning; game playing; natural language; expert systems; machine learning; robotics and vision; critics of AI; and representative software. (LRW)

  13. Warpage investigation on side arms using response surface methodology (RSM) and glow-worm swarm optimizations (GSO)

    NASA Astrophysics Data System (ADS)

    Sow, C. K.; Fathullah, M.; Nasir, S. M.; Shayfull, Z.; Shazzuan, S.

    2017-09-01

    This paper discusses an analysis of the injection moulding process to determine the optimum processing parameters for manufacturing the side arms of catheters while minimizing warpage. The optimization method used was RSM. This research also tries to identify the most significant factors affecting warpage. From a previous literature review, the parameters with the most significant effect on warpage were selected: melt temperature, packing time, packing pressure, mould temperature and cooling time. First, the side arm was drawn using the CATIA V5 software. Then, the Mouldflow and Design Expert software packages were employed to analyse the common warpage issues. After that, the GSO artificial intelligence method was applied, using the mathematical model from Design Expert, to further optimize the RSM result. Recommended parameter settings from the simulation work were then compared with the optimization results of RSM and GSO. The results show that the warpage on the side arm was improved by 3.27%.
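
    As a simplified illustration of the response-surface idea, the sketch below fits a quadratic model of warpage against a single factor and locates its stationary minimum; the data points and the choice of packing pressure as the variable are invented, and the study itself fitted a multi-parameter surface in Design Expert before refining the result with GSO.

      import numpy as np

      # Invented single-factor data: warpage (mm) measured at several packing pressures (MPa).
      x = np.array([60.0, 70.0, 80.0, 90.0, 100.0])
      y = np.array([0.42, 0.35, 0.31, 0.33, 0.40])

      # Fit a quadratic response surface y ~ a*x^2 + b*x + c and locate its minimum.
      a, b, c = np.polyfit(x, y, 2)
      x_opt = -b / (2.0 * a)
      print(f"estimated optimum packing pressure: {x_opt:.1f} MPa, "
            f"predicted warpage: {np.polyval([a, b, c], x_opt):.3f} mm")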

  14. A validation framework for brain tumor segmentation.

    PubMed

    Archip, Neculai; Jolesz, Ferenc A; Warfield, Simon K

    2007-10-01

    We introduce a validation framework for the segmentation of brain tumors from magnetic resonance (MR) images. A novel unsupervised semiautomatic brain tumor segmentation algorithm is also presented. The proposed framework consists of 1) T1-weighted MR images of patients with brain tumors, 2) segmentation of brain tumors performed by four independent experts, 3) segmentation of brain tumors generated by a semiautomatic algorithm, and 4) a software tool that estimates the performance of segmentation algorithms. We demonstrate the validation of the novel segmentation algorithm within the proposed framework. We show its performance and compare it with existing segmentation methods. The image datasets and software are available at http://www.brain-tumor-repository.org/. We present an Internet resource that provides access to MR brain tumor image data and segmentation that can be openly used by the research community. Its purpose is to encourage the development and evaluation of segmentation methods by providing raw test and image data, human expert segmentation results, and methods for comparing segmentation results.
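
    The sketch below shows one common way such a tool can score agreement between an algorithm's segmentation and an expert's: the Dice overlap coefficient on binary masks. The metric choice and the toy masks are illustrative assumptions, not necessarily the framework's exact performance estimator.

      import numpy as np

      def dice(a: np.ndarray, b: np.ndarray) -> float:
          # Dice overlap between two binary segmentation masks.
          a, b = a.astype(bool), b.astype(bool)
          denom = a.sum() + b.sum()
          return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

      expert = np.zeros((8, 8), dtype=bool); expert[2:6, 2:6] = True
      algo = np.zeros((8, 8), dtype=bool); algo[3:7, 2:6] = True
      print(f"Dice overlap: {dice(expert, algo):.2f}")   # 0.75 for these toy masks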

  15. Utility of non-rule-based visual matching as a strategy to allow novices to achieve skin lesion diagnosis.

    PubMed

    Aldridge, R Benjamin; Glodzik, Dominik; Ballerini, Lucia; Fisher, Robert B; Rees, Jonathan L

    2011-05-01

    Non-analytical reasoning is thought to play a key role in dermatology diagnosis. Considering its potential importance, surprisingly little work has been done to research whether similar identification processes can be supported in non-experts. We describe here a prototype diagnostic support software, which we have used to examine the ability of medical students (at the beginning and end of a dermatology attachment) and lay volunteers, to diagnose 12 images of common skin lesions. Overall, the non-experts using the software had a diagnostic accuracy of 98% (923/936) compared with 33% for the control group (215/648) (Wilcoxon p < 0.0001). We have demonstrated, within the constraints of a simplified clinical model, that novices' diagnostic scores are significantly increased by the use of a structured image database coupled with matching of index and referent images. The novices achieve this high degree of accuracy without any use of explicit definitions of likeness or rule-based strategies.
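
    The reported group comparison is a rank-based (Wilcoxon-type) test on diagnostic accuracy; the sketch below shows such a comparison with SciPy's Mann-Whitney U implementation on invented per-participant accuracy scores, as an assumption about the form of the analysis rather than a reproduction of it.

      from scipy.stats import mannwhitneyu

      # Invented per-participant diagnostic accuracies for the two groups.
      software_group = [0.95, 1.00, 0.98, 0.92, 1.00]
      control_group = [0.30, 0.35, 0.25, 0.40, 0.33]

      stat, p = mannwhitneyu(software_group, control_group, alternative="two-sided")
      print(f"U = {stat}, p = {p:.4f}")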

  16. Sweden: no easy choices.

    PubMed

    Calltorp, J

    1995-10-01

    This paper describes some characteristic aspects of the Swedish health care model which can explain why choices and prioritizing have been difficult to discuss officially until very recently. Health care is an important symbol and cornerstone of the welfare society and it is therefore difficult to admit and formulate the concept of limits regarding this part of society. Parallel to a considerable decrease in the health care sector's fraction of GDP during the last 10 years, where real cuts have been more and more visible, a public discussion on choices has emerged. A parliamentary committee of politicians and experts has addressed the issue and published a final report in the spring of 1995. It proposes an 'ethical platform' as a base for addressing the issue and describes guidelines for prioritizing on a political-administrative level and clinical level.

  17. A Knowledge-Based System Developer for aerospace applications

    NASA Technical Reports Server (NTRS)

    Shi, George Z.; Wu, Kewei; Fensky, Connie S.; Lo, Ching F.

    1993-01-01

    A prototype Knowledge-Based System Developer (KBSD) has been developed for aerospace applications by utilizing artificial intelligence technology. The KBSD directly acquires knowledge from domain experts through a graphical interface and then builds expert systems from that knowledge. This raises the state of the art of knowledge acquisition/expert system technology to a new level by lessening the need for skilled knowledge engineers. The feasibility, applicability, and efficiency of the proposed concept were established, making a continuation that would develop the prototype into a full-scale general-purpose knowledge-based system developer justifiable. The KBSD has great commercial potential. It will provide a marketable software shell which alleviates the need for knowledge engineers and increases productivity in the workplace. The KBSD will therefore make knowledge-based systems available to a large portion of industry.

  18. Integration of symbolic and algorithmic hardware and software for the automation of space station subsystems

    NASA Technical Reports Server (NTRS)

    Gregg, Hugh; Healey, Kathleen; Hack, Edmund; Wong, Carla

    1988-01-01

    Expert systems that require access to data bases, complex simulations and real time instrumentation have both symbolic and algorithmic needs. Both of these needs could be met using a general purpose workstation running both symbolic and algorithmic codes, or separate, specialized computers networked together. The latter approach was chosen to implement TEXSYS, the thermal expert system, developed by the NASA Ames Research Center in conjunction with the Johnson Space Center to demonstrate the ability of an expert system to autonomously monitor the thermal control system of the space station. TEXSYS has been implemented on a Symbolics workstation, and will be linked to a microVAX computer that will control a thermal test bed. The integration options and several possible solutions are presented.

  19. ExpertEyes: open-source, high-definition eyetracking.

    PubMed

    Parada, Francisco J; Wyatte, Dean; Yu, Chen; Akavipat, Ruj; Emerick, Brandi; Busey, Thomas

    2015-03-01

    ExpertEyes is a low-cost, open-source package of hardware and software that is designed to provide portable high-definition eyetracking. The project involves several technological innovations, including portability, high-definition video recording, and multiplatform software support. It was designed for challenging recording environments, and all processing is done offline to allow for optimization of parameter estimation. The pupil and corneal reflection are estimated using a novel forward eye model that simultaneously fits both the pupil and the corneal reflection with full ellipses, addressing a common situation in which the corneal reflection sits at the edge of the pupil and therefore breaks the contour of the ellipse. The accuracy and precision of the system are comparable to or better than what is available in commercial eyetracking systems, with a typical accuracy of less than 0.4° and best accuracy below 0.3°, and with a typical precision (SD method) around 0.3° and best precision below 0.2°. Part of the success of the system comes from a high-resolution eye image. The high image quality results from uncasing common digital camcorders and recording directly to SD cards, which avoids the limitations of the analog NTSC format. The software is freely downloadable, and complete hardware plans are available, along with sources for custom parts.

  20. Virtual Character Animation Based on Affordable Motion Capture and Reconfigurable Tangible Interfaces.

    PubMed

    Lamberti, Fabrizio; Paravati, Gianluca; Gatteschi, Valentina; Cannavo, Alberto; Montuschi, Paolo

    2018-05-01

    Software for computer animation is generally characterized by a steep learning curve, due to the entanglement of both sophisticated techniques and interaction methods required to control 3D geometries. This paper proposes a tool designed to support computer animation production processes by leveraging the affordances offered by articulated tangible user interfaces and motion capture retargeting solutions. To this aim, orientations of an instrumented prop are recorded together with animator's motion in the 3D space and used to quickly pose characters in the virtual environment. High-level functionalities of the animation software are made accessible via a speech interface, thus letting the user control the animation pipeline via voice commands while focusing on his or her hands and body motion. The proposed solution exploits both off-the-shelf hardware components (like the Lego Mindstorms EV3 bricks and the Microsoft Kinect, used for building the tangible device and tracking animator's skeleton) and free open-source software (like the Blender animation tool), thus representing an interesting solution also for beginners approaching the world of digital animation for the first time. Experimental results in different usage scenarios show the benefits offered by the designed interaction strategy with respect to a mouse & keyboard-based interface both for expert and non-expert users.

  1. Models of expert assessments and their study in problems of choice and decision-making in management of motor transport processes

    NASA Astrophysics Data System (ADS)

    Belokurov, V. P.; Belokurov, S. V.; Korablev, R. A.; Shtepa, A. A.

    2018-05-01

    The article deals with decision making for transport tasks based on search iterations in the management of motor transport processes. An optimal selection of the best option for specific situations in the management of complex, multi-criteria transport processes is suggested.

  2. Aligning Items and Achievement Levels: A Study Comparing Expert Judgments

    ERIC Educational Resources Information Center

    Kaliski, Pamela; Huff, Kristen; Barry, Carol

    2011-01-01

    For educational achievement tests that employ multiple-choice (MC) items and aim to reliably classify students into performance categories, it is critical to design MC items that are capable of discriminating student performance according to the stated achievement levels. This is accomplished, in part, by clearly understanding how item design…

  3. Literature Study Groups: Literacy Learning "with Legs"

    ERIC Educational Resources Information Center

    Parsons, Sue Christian; Mokhtari, Kouider; Yellin, David; Orwig, Ryan

    2011-01-01

    Literature study groups help promote critical thinking and improve reading skills. These groups, in general, are characterized by: (1) a flexible grouping--usually determined by a reader's choice of a given book at a given time; (2) participant-centered dialogue, where the teacher takes on the role of facilitator and expert participant rather than…

  4. Humility, Will, and Level 5 Leadership: An Interview with Jim Collins

    ERIC Educational Resources Information Center

    Brosnan, Michael

    2015-01-01

    Organizational expert, Jim Collins, is the author of "Good to Great" (2001) and "How the Mighty Fall" (2009) and coauthor of "Great by Choice" (2011). Collins also authored a monograph entitled, "Good to Great and the Social Sectors," and presented his findings at the 2007 NAIS Conference. Recently, Collins…

  5. Students Alerted to Loan Debt

    ERIC Educational Resources Information Center

    Adams, Caralee J.

    2011-01-01

    Students are taking on more college debt in this struggling economy, often without the knowledge to make wise choices. To help students better manage their debt, some college campuses and high schools are ramping up their financial-literacy efforts, where experts say such education should begin. But a squeeze on K-12 resources has hampered the…

  6. Expertise-Based Differences in Search and Option-Generation Strategies

    ERIC Educational Resources Information Center

    Raab, Markus; Johnson, Joseph G.

    2007-01-01

    The current work builds on option-generation research using experts of various skill levels in a realistic task. We extend previous findings that relate an athlete's performance strategy to generated options and subsequent choices in handball. In a 2-year longitudinal study, we present eye-tracking data to independently verify decision strategies…

  7. Accountability and the Federal Role: A Third Way on ESEA

    ERIC Educational Resources Information Center

    Darling-Hammond, Linda; Hill, Paul T.

    2015-01-01

    In summer of 2014, two groups of scholars and policy experts met separately to rethink educational accountability. These groups came from what most would consider different "camps" on school reform--one focused on transforming teaching for "deeper learning" and the other focused on choice as a means for leveraging school…

  8. Integrated Testlets: A New Form of Expert-Student Collaborative Testing

    ERIC Educational Resources Information Center

    Shiell, Ralph C.; Slepkov, Aaron D.

    2015-01-01

    Integrated testlets are a new assessment tool that encompass the procedural benefits of multiple-choice testing, the pedagogical advantages of free-response-based tests, and the collaborative aspects of a viva voce or defence examination format. The result is a robust assessment tool that provides a significant formative aspect for students.…

  9. Development and Validation of the Conceptual Assessment of Natural Selection (CANS)

    ERIC Educational Resources Information Center

    Kalinowski, Steven T.; Leonard, Mary J.; Taper, Mark L.

    2016-01-01

    We developed and validated the Conceptual Assessment of Natural Selection (CANS), a multiple-choice test designed to assess how well college students understand the central principles of natural selection. The expert panel that reviewed the CANS concluded its questions were relevant to natural selection and generally did a good job sampling the…

  10. Development of a fuzzy logic expert system for pile selection. Master's thesis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulshafer, M.L.

    1989-01-01

    This thesis documents the development of a prototype expert system for pile selection for use on microcomputers. It concerns the initial selection of a pile foundation, taking into account parameters such as soil condition, pile length, loading scenario, material availability, contractor experience, and noise or vibration constraints. The prototype expert system, called Pile Selection, version 1 (PS1), was developed using the expert system shell FLOPS. FLOPS is a shell based on the AI language OPS5 with many unique features. The system PS1 utilizes all of these unique features. Among the features used are approximate reasoning with fuzzy set theory, the blackboard architecture, and the emulated parallel processing of fuzzy production rules. A comprehensive review of the parameters used in selecting a pile was made, and the effects of the uncertainties associated with the vagueness of these parameters were examined in detail. Fuzzy set theory was utilized to deal with such uncertainties and provides the basis for developing a method for determining the best possible choice of piles for a given situation. Details of the development of PS1, including documenting and collating pile information for use in the expert knowledge data bases, are discussed.
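
    To give a flavour of how fuzzy sets handle vague parameters, the sketch below grades a crisp soil-strength reading against overlapping linguistic categories using trapezoidal membership functions; the categories, breakpoints and the use of SPT blow count are invented for illustration and are not taken from PS1.

      def trapezoid(x, a, b, c, d):
          # Trapezoidal membership: rises over [a, b], flat over [b, c], falls over [c, d].
          if x < a or x > d:
              return 0.0
          if b <= x <= c:
              return 1.0
          return (x - a) / (b - a) if x < b else (d - x) / (d - c)

      spt_blow_count = 18   # crisp soil-strength input from a site investigation
      print("soft  :", trapezoid(spt_blow_count, 0, 0, 8, 15))
      print("medium:", trapezoid(spt_blow_count, 10, 15, 25, 30))
      print("dense :", trapezoid(spt_blow_count, 25, 35, 60, 60))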

  11. Guidance and Control System for an Autonomous Vehicle

    DTIC Science & Technology

    1990-06-01

    implementing an appropriate computer architecture in support of these goals is also discussed and detailed, along with the choice of associated computer hardware and real-time operating system software. (rh)

  12. Calypso: a user-friendly web-server for mining and visualizing microbiome-environment interactions.

    PubMed

    Zakrzewski, Martha; Proietti, Carla; Ellis, Jonathan J; Hasan, Shihab; Brion, Marie-Jo; Berger, Bernard; Krause, Lutz

    2017-03-01

    Calypso is an easy-to-use online software suite that allows non-expert users to mine, interpret and compare taxonomic information from metagenomic or 16S rDNA datasets. Calypso has a focus on multivariate statistical approaches that can identify complex environment-microbiome associations. The software enables quantitative visualizations, statistical testing, multivariate analysis, supervised learning, factor analysis, multivariable regression, network analysis and diversity estimates. Comprehensive help pages, tutorials and videos are provided via a wiki page. The web-interface is accessible via http://cgenome.net/calypso/ . The software is programmed in Java, PERL and R and the source code is available from Zenodo ( https://zenodo.org/record/50931 ). The software is freely available for non-commercial users. l.krause@uq.edu.au. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
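
    As one small, concrete example of the diversity estimates such a suite reports, the sketch below computes the Shannon index from a vector of taxon counts; the counts are invented and this is not Calypso's own code.

      import numpy as np

      def shannon_diversity(counts):
          # Shannon index H' = -sum(p_i * ln p_i) over taxa with nonzero counts.
          p = np.asarray(counts, dtype=float)
          p = p[p > 0] / p.sum()
          return float(-(p * np.log(p)).sum())

      sample_counts = [120, 80, 40, 10, 5]   # invented reads-per-taxon for one sample
      print(f"Shannon H' = {shannon_diversity(sample_counts):.3f}")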

  13. Accelerating artificial intelligence with reconfigurable computing

    NASA Astrophysics Data System (ADS)

    Cieszewski, Radoslaw

    Reconfigurable computing is emerging as an important area of research in computer architectures and software systems. Many algorithms can be greatly accelerated by placing the computationally intense portions of an algorithm into reconfigurable hardware. Reconfigurable computing combines many benefits of both software and ASIC implementations. Like software, the mapped circuit is flexible, and can be changed over the lifetime of the system. Similar to an ASIC, reconfigurable systems provide a method to map circuits into hardware. Reconfigurable systems therefore have the potential to achieve far greater performance than software as a result of bypassing the fetch-decode-execute operations of traditional processors, and possibly exploiting a greater level of parallelism. One such field, with many different algorithms that can be accelerated, is artificial intelligence. This paper presents example hardware implementations of Artificial Neural Networks, Genetic Algorithms and Expert Systems.

  14. Is decentralization good for logistics systems? Evidence on essential medicine logistics in Ghana and Guatemala.

    PubMed

    Bossert, Thomas J; Bowser, Diana M; Amenyah, Johnnie K

    2007-03-01

    Efficient logistics systems move essential medicines down the supply chain to the service delivery point, and then to the end user. Experts on logistics systems tend to see the supply chain as requiring centralized control to be most effective. However, many health reforms have involved decentralization, which experts fear has disrupted the supply chain and made systems less effective. There is no consensus on an appropriate methodology for assessing the effectiveness of decentralization in general, and only a few studies have attempted to address decentralization of logistics systems. This paper sets out a framework and methodology of a pioneering exploratory study that examines the experiences of decentralization in two countries, Guatemala and Ghana, and presents suggestive results of how decentralization affected the performance of their logistics systems. The analytical approach assessed decentralization using the principal author's 'decision space' approach, which defines decentralization as the degree of choice that local officials have over different health system functions. In this case the approach focused on 15 different logistics functions and measured the relationship between the degree of choice and indicators of performance for each of the functions. The results of both studies indicate that less choice (i.e. more centralized) was associated with better performance for two key functions (inventory control and information systems), while more choice (i.e. more decentralized) over planning and budgeting was associated with better performance. With different systems of procurement in Ghana and Guatemala, we found that a system with some elements of procurement that are centralized (selection of firms and prices fixed by national tender) was positively related in Guatemala but negatively related in Ghana, where a system of 'cash and carry' cost recovery allowed more local choice. The authors conclude that logistics systems can be effectively decentralized for some functions while others should remain centralized. These preliminary findings, however, should be subject to alternative methodologies to confirm the findings.

  15. Software Tools for Emittance Measurement and Matching for 12 GeV CEBAF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turner, Dennis L.

    2016-05-01

    This paper discusses model-driven setup of the Continuous Electron Beam Accelerator Facility (CEBAF) for the 12 GeV era, focusing on qsUtility. qsUtility is a set of software tools created to perform emittance measurements, analyze those measurements, and compute optics corrections based upon the measurements. qsUtility was developed as a toolset to reduce machine configuration time and improve reproducibility by way of an accurate accelerator model, and to provide Operations staff with tools to measure and correct machine optics with little or no assistance from optics experts.

  16. DataHub knowledge based assistance for science visualization and analysis using large distributed databases

    NASA Technical Reports Server (NTRS)

    Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.

    1991-01-01

    Viewgraphs on DataHub knowledge based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.

  17. Development of Computer-Based Resources for Textile Education.

    ERIC Educational Resources Information Center

    Hopkins, Teresa; Thomas, Andrew; Bailey, Mike

    1998-01-01

    Describes the production of computer-based resources for students of textiles and engineering in the United Kingdom. Highlights include funding by the Teaching and Learning Technology Programme (TLTP), courseware author/subject expert interaction, usage test and evaluation, authoring software, graphics, computer-aided design simulation, self-test…

  18. Multi-viewpoint clustering analysis

    NASA Technical Reports Server (NTRS)

    Mehrotra, Mala; Wild, Chris

    1993-01-01

    In this paper, we address the feasibility of partitioning rule-based systems into a number of meaningful units to enhance the comprehensibility, maintainability and reliability of expert systems software. Preliminary results have shown that no single structuring principle or abstraction hierarchy is sufficient to understand complex knowledge bases. We therefore propose the Multi View Point Clustering Analysis (MVP-CA) methodology to provide multiple views of the same expert system. We present the results of using this approach to partition a deployed knowledge-based system that navigates the Space Shuttle's entry. We also discuss the impact of this approach on verification and validation of knowledge-based systems.

  19. An expert system for municipal solid waste management simulation analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsieh, M.C.; Chang, N.B.

    1996-12-31

    Optimization techniques have usually been used to model the complicated metropolitan solid waste management system and to search for the best dynamic combination of waste recycling, facility siting, and system operation, but sophisticated and well-defined interrelationships are required in the modeling process. This paper instead applied Concurrent Object-Oriented Simulation (COOS), a new simulation software construction method, to bridge the gap between the physical system and its computer representation. A case study of the Kaohsiung solid waste management system in Taiwan is presented to illustrate the analytical methodology of COOS and its implementation in the creation of an expert system.

  20. Knowledge Engineering as a Component of the Curriculum for Medical Cybernetists.

    PubMed

    Karas, Sergey; Konev, Arthur

    2017-01-01

    According to a new state educational standard, students who have chosen medical cybernetics as their major must develop a knowledge engineering competency. Previously, in the course "Clinical cybernetics", while practicing project-based learning, students designed automated workstations for medical personnel using client-server technology. The purpose of the article is to give insight into the project of a new educational module, "Knowledge engineering". Students will acquire expert knowledge by holding interviews and conducting surveys, and then they will formalize it. After that, students will form declarative expert knowledge in a network model and analyze the knowledge graph. Expert decision making methods will be applied in software on the basis of a production model of knowledge. Project implementation will result not only in the development of analytical competencies among students, but also in the creation of a practically useful expert system based on student models to support medical decisions. Nowadays, this module is being tested in the educational process.
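
    As a minimal illustration of the production (IF-THEN) model of knowledge such a module builds on, the sketch below forward-chains a few invented clinical rules over a set of facts; the rules and facts are placeholders, not content from the course.

      # Invented IF-THEN production rules: a rule fires when all its conditions
      # are present in the working memory of facts, adding its conclusion.
      rules = [
          ({"fever", "cough"}, "suspect_respiratory_infection"),
          ({"suspect_respiratory_infection", "dyspnea"}, "recommend_chest_xray"),
      ]

      def forward_chain(facts):
          facts = set(facts)
          changed = True
          while changed:
              changed = False
              for conditions, conclusion in rules:
                  if conditions <= facts and conclusion not in facts:
                      facts.add(conclusion)
                      changed = True
          return facts

      print(sorted(forward_chain({"fever", "cough", "dyspnea"})))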
