Requirements analysis, domain knowledge, and design
NASA Technical Reports Server (NTRS)
Potts, Colin
1988-01-01
Two improvements to current requirements analysis practices are suggested: domain modeling, and the systematic application of analysis heuristics. Domain modeling is the representation of relevant application knowledge prior to requirements specification. Artificial intelligence techniques may eventually be applicable for domain modeling. In the short term, however, restricted domain modeling techniques, such as that in JSD, will still be of practical benefit. Analysis heuristics are standard patterns of reasoning about the requirements. They usually generate questions of clarification or issues relating to completeness. Analysis heuristics can be represented and therefore systematically applied in an issue-based framework. This is illustrated by an issue-based analysis of JSD's domain modeling and functional specification heuristics. They are discussed in the context of the preliminary design of simple embedded systems.
NASA Technical Reports Server (NTRS)
Nieten, Joseph L.; Seraphine, Kathleen M.
1991-01-01
Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language and must therefore be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source-level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.
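To make the conversion idea concrete, here is a minimal Python sketch (all names invented, not from the paper) of such a reactive knowledge network: each elementary calculation becomes a rule, and changing any input re-fires dependent rules until the network settles.

```python
# Hypothetical sketch: a procedural model decomposed into elementary
# calculations becomes a reactive "knowledge network" in which changing
# any input automatically re-fires dependent rules.

class KnowledgeNetwork:
    def __init__(self):
        self.values = {}   # variable name -> current value
        self.rules = []    # (inputs, output, function)

    def add_rule(self, inputs, output, fn):
        self.rules.append((inputs, output, fn))

    def set(self, name, value):
        self.values[name] = value
        self._propagate()

    def _propagate(self):
        # Naive forward chaining: fire every rule whose inputs are known
        # until no output changes (the "reactive" behaviour of a rule base).
        changed = True
        while changed:
            changed = False
            for inputs, output, fn in self.rules:
                if all(i in self.values for i in inputs):
                    new = fn(*(self.values[i] for i in inputs))
                    if self.values.get(output) != new:
                        self.values[output] = new
                        changed = True

net = KnowledgeNetwork()
net.add_rule(["a", "b"], "sum", lambda a, b: a + b)   # sum = a + b
net.add_rule(["sum"], "half", lambda s: s / 2.0)      # half = sum / 2
net.set("a", 4.0)
net.set("b", 6.0)
print(net.values["half"])  # 5.0; updating "a" later would re-derive it
```

A source-level compiler of the kind described would emit rules like these from an existing procedural model file rather than from hand-written lambdas.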
Construction of dynamic stochastic simulation models using knowledge-based techniques
NASA Technical Reports Server (NTRS)
Williams, M. Douglas; Shiva, Sajjan G.
1990-01-01
Over the past three decades, computer-based simulation models have proven themselves to be cost-effective alternatives to the more structured deterministic methods of systems analysis. During this time, many techniques, tools and languages for constructing computer-based simulation models have been developed. More recently, advances in knowledge-based system technology have led many researchers to note the similarities between knowledge-based programming and simulation technologies and to investigate the potential application of knowledge-based programming techniques to simulation modeling. The integration of conventional simulation techniques with knowledge-based programming techniques is discussed to provide a development environment for constructing knowledge-based simulation models. A comparison of the techniques used in the construction of dynamic stochastic simulation models and those used in the construction of knowledge-based systems provides the requirements for the environment. This leads to the design and implementation of a knowledge-based simulation development environment. These techniques were used in the construction of several knowledge-based simulation models including the Advanced Launch System Model (ALSYM).
The study on knowledge transferring incentive for information system requirement development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Yang
2015-03-10
Information system requirement development is a process of knowledge sharing and transfer among users. Tacit requirements, however, are a central problem in the requirement development process because they are difficult to encode, express, and communicate. Discovering tacit requirements demands knowledge fusion and cooperative effort. Against this background, this paper seeks the rule governing the dynamic evolution of effort by software developers and users by building an evolutionary game model under an incentive system, and closes with an in-depth discussion of the results.
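As a rough illustration of the kind of evolutionary game model the abstract describes, the following sketch runs two-population replicator dynamics for a developer (effort vs. no effort) and a user (share vs. withhold tacit knowledge); the payoff matrices and the incentive parameter are invented placeholders, not the paper's model.

```python
import numpy as np

def simulate(bonus=1.0, steps=5000, dt=0.01):
    # Developer payoff A[i, j], user payoff B[i, j]
    # i: 0 = effort, 1 = no effort; j: 0 = share, 1 = withhold
    A = np.array([[3.0 + bonus, 0.5], [1.0, 1.0]])
    B = np.array([[2.0 + bonus, 1.0], [0.0, 1.0]])
    x, y = 0.3, 0.3  # initial fractions playing (effort, share)
    for _ in range(steps):
        fx = A @ np.array([y, 1 - y])   # developer fitness per strategy
        fy = np.array([x, 1 - x]) @ B   # user fitness per strategy
        # Replicator dynamics: growth proportional to excess fitness
        x += dt * x * (fx[0] - (x * fx[0] + (1 - x) * fx[1]))
        y += dt * y * (fy[0] - (y * fy[0] + (1 - y) * fy[1]))
    return x, y

print(simulate(bonus=0.0))  # weak incentive: effort/sharing can die out
print(simulate(bonus=2.0))  # stronger incentive pushes both toward 1.0
```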
Some requirements and suggestions for a methodology to develop knowledge based systems.
Green, D W; Colbert, M; Long, J
1989-11-01
This paper describes an approach to the creation of a methodology for the development of knowledge based systems. It specifies some requirements and suggests how these requirements might be met. General requirements can be satisfied using a systems approach. More specific ones can be met by viewing an organization as a network of consultations for coordinating expertise. The nature of consultations is described and the form of a possible cognitive model using a blackboard architecture is outlined. The value of the approach is illustrated in terms of certain knowledge elicitation methods.
A Rapid Approach to Modeling Species-Habitat Relationships
NASA Technical Reports Server (NTRS)
Carter, Geoffrey M.; Breininger, David R.; Stolen, Eric D.
2005-01-01
A growing number of species require conservation or management efforts, and the success of these activities requires knowledge of each species' occurrence pattern. Species-habitat models developed from GIS data sources are commonly used to predict species occurrence, but those data sources are often developed for other purposes and at inappropriate scales, and the techniques used to extract predictor variables are often time consuming, cannot be repeated easily, and thus cannot efficiently reflect changing conditions. We used digital orthophotographs and a grid cell classification scheme to develop an efficient technique to extract predictor variables. We combined our classification scheme with a priori hypothesis development using expert knowledge and a previously published habitat suitability index, and used an objective model selection procedure to choose candidate models. We were able to classify a large area (57,000 ha) in a fraction of the time that would be required to map vegetation and were able to test models at varying scales using a windowing process. Interpretation of the selected models confirmed existing knowledge of factors important to Florida scrub-jay habitat occupancy. The potential uses and advantages of using a grid cell classification scheme in conjunction with expert knowledge or a habitat suitability index (HSI) and an objective model selection procedure are discussed.
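A hedged sketch of the objective model-selection step, assuming the statsmodels package is available and using invented grid-cell predictor names: a priori candidate occupancy models are fit by logistic regression and ranked by AIC.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy grid-cell data (values invented): fraction of each cell in two
# cover classes, plus observed occupancy.
df = pd.DataFrame({
    "occupied":  [1, 0, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0],
    "oak_scrub": [0.8, 0.1, 0.7, 0.25, 0.2, 0.65, 0.6, 0.3, 0.8, 0.2, 0.7, 0.1],
    "open_sand": [0.3, 0.1, 0.4, 0.0, 0.4, 0.2, 0.5, 0.1, 0.3, 0.1, 0.4, 0.0],
})

# Candidate models specified a priori from expert knowledge / an HSI.
candidates = [
    "occupied ~ oak_scrub",
    "occupied ~ open_sand",
    "occupied ~ oak_scrub + open_sand",
]
fits = {f: smf.logit(f, data=df).fit(disp=0) for f in candidates}
for f, m in sorted(fits.items(), key=lambda kv: kv[1].aic):
    print(f"AIC={m.aic:7.2f}  {f}")  # lowest AIC = preferred candidate
```

The windowing step described above would simply recompute the predictor columns at several cell-aggregation scales and repeat this ranking.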
A Multidisciplinary Model for Development of Intelligent Computer-Assisted Instruction.
ERIC Educational Resources Information Center
Park, Ok-choon; Seidel, Robert J.
1989-01-01
Proposes a schematic multidisciplinary model to help developers of intelligent computer-assisted instruction (ICAI) identify the types of required expertise and integrate them into a system. Highlights include domain types and expertise; knowledge acquisition; task analysis; knowledge representation; student modeling; diagnosis of learning needs;…
Error-associated behaviors and error rates for robotic geology
NASA Technical Reports Server (NTRS)
Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin
2004-01-01
This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill-based decisions require the least cognitive effort and knowledge-based decisions require the greatest. Errors can occur at any of the cognitive levels.
ERIC Educational Resources Information Center
Yanchinda, Jirawit; Chakpitak, Nopasit; Yodmongkol, Pitipong
2015-01-01
Knowledge of the appropriate technologies for sustainable development projects has encouraged grass-roots development, which has in turn promoted sustainable and successful community development, for which a key requirement is the effective sharing and reuse of this knowledge. This research aims to propose a tutorial ontology effectiveness modeling on organic…
Data identification for improving gene network inference using computational algebra.
Dimitrova, Elena; Stigler, Brandilyn
2014-11-01
Identification of models of gene regulatory networks is sensitive to the amount of data used as input. Considering the substantial costs in conducting experiments, it is of value to have an estimate of the amount of data required to infer the network structure. To minimize wasted resources, it is also beneficial to know which data are necessary to identify the network. Knowledge of the data and knowledge of the terms in polynomial models are often required a priori in model identification. In applications, it is unlikely that the structure of a polynomial model will be known, which may force data sets to be unnecessarily large in order to identify a model. Furthermore, none of the known results provides any strategy for constructing data sets to uniquely identify a model. We provide a specialization of an existing criterion for deciding when a set of data points identifies a minimal polynomial model when its monomial terms have been specified. Then, we relax the requirement of the knowledge of the monomials and present results for model identification given only the data. Finally, we present a method for constructing data sets that identify minimal polynomial models.
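The "monomials specified" criterion can be illustrated numerically: a data set identifies the coefficients of a polynomial model uniquely exactly when the monomial-evaluation matrix built from the points has full column rank. The sketch below uses real arithmetic for simplicity; gene-network applications would instead compute rank over a finite field.

```python
import numpy as np

def evaluation_matrix(points, monomials):
    # monomials: list of exponent tuples, e.g. (1, 0) means x1
    return np.array([[np.prod([p[i] ** e for i, e in enumerate(mono)])
                      for mono in monomials] for p in points])

points = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]      # 3 data points
monomials = [(0, 0), (1, 0), (0, 1)]               # 1, x1, x2
M = evaluation_matrix(points, monomials)
print(np.linalg.matrix_rank(M) == len(monomials))  # True: model identified

# Adding the monomial x1*x2 makes 3 points insufficient:
monomials.append((1, 1))
M = evaluation_matrix(points, monomials)
print(np.linalg.matrix_rank(M) == len(monomials))  # False: need more data
```

Constructing identifying data sets amounts to choosing points that keep this matrix full rank, which is the spirit of the paper's final method.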
The Gain-Loss Model: A Probabilistic Skill Multimap Model for Assessing Learning Processes
ERIC Educational Resources Information Center
Robusto, Egidio; Stefanutti, Luca; Anselmi, Pasquale
2010-01-01
Within the theoretical framework of knowledge space theory, a probabilistic skill multimap model for assessing learning processes is proposed. The learning process of a student is modeled as a function of the student's knowledge and of an educational intervention on the attainment of specific skills required to solve problems in a knowledge…
Translating three states of knowledge--discovery, invention, and innovation
2010-01-01
Background: Knowledge Translation (KT) has historically focused on the proper use of knowledge in healthcare delivery. A knowledge base has been created through empirical research and resides in scholarly literature. Some knowledge is amenable to direct application by stakeholders who are engaged during or after the research process, as shown by the Knowledge to Action (KTA) model. Other knowledge requires multiple transformations before achieving utility for end users. For example, conceptual knowledge generated through science or engineering may become embodied as a technology-based invention through development methods. The invention may then be integrated within an innovative device or service through production methods. To what extent is KT relevant to these transformations? How might the KTA model accommodate these additional development and production activities while preserving the KT concepts?

Discussion: Stakeholders adopt and use knowledge that has perceived utility, such as a solution to a problem. Achieving a technology-based solution involves three methods that generate knowledge in three states, analogous to the three classic states of matter. Research activity generates discoveries that are intangible and highly malleable like a gas; development activity transforms discoveries into inventions that are moderately tangible yet still malleable like a liquid; and production activity transforms inventions into innovations that are tangible and immutable like a solid. The paper demonstrates how the KTA model can accommodate all three types of activity and address all three states of knowledge. Linking the three activities in one model also illustrates the importance of engaging the relevant stakeholders prior to initiating any knowledge-related activities.

Summary: Science and engineering focused on technology-based devices or services change the state of knowledge through three successive activities. Achieving knowledge implementation requires methods that accommodate these three activities and knowledge states. Accomplishing beneficial societal impacts from technology-based knowledge involves the successful progression through all three activities, and the effective communication of each successive knowledge state to the relevant stakeholders. The KTA model appears suitable for structuring and linking these processes. PMID:20205873
Using diagnostic experiences in experience-based innovative design
NASA Astrophysics Data System (ADS)
Prabhakar, Sattiraju; Goel, Ashok K.
1992-03-01
Designing a novel class of devices requires innovation. Often, the design knowledge of these devices does not identify and address the constraints that are required for their performance in the real-world operating environment, so any new design adapted from these devices tends to be similarly sketchy. To address this problem, we propose a case-based reasoning method called performance driven innovation (PDI). We model the design as a dynamic process, arrive at a design by adaptation from known designs, generate failures for this design under some new constraints, and then use this failure knowledge to generate the design knowledge required for the new constraints. In this paper, we discuss two aspects of PDI: the representation of PDI cases and the translation of failure knowledge into design knowledge for a constraint. Each case in PDI has two components: design knowledge and failure knowledge. Both are represented using a substance-behavior-function model. Failure knowledge comprises internal device failure behaviors and external environmental behaviors. The environmental behavior for a constraint, interacting with the design behaviors, results in the internal failure behavior. The failure adaptation strategy generates functions, from the failure knowledge, which can be addressed using routine design methods. These ideas are illustrated using a coffee-maker example.
Adaptive cyber-attack modeling system
NASA Astrophysics Data System (ADS)
Gonsalves, Paul G.; Dougherty, Edward T.
2006-05-01
The pervasiveness of software and networked information systems is evident across a broad spectrum of business and government sectors. Such reliance provides an ample opportunity not only for the nefarious exploits of lone-wolf computer hackers, but for more systematic software attacks from organized entities. Much effort and focus has been placed on preventing and ameliorating network and OS attacks; a concomitant emphasis is required to address protection of mission-critical software. Typical software protection technique and methodology evaluation, and verification and validation (V&V), involves the use of a team of subject matter experts (SMEs) to mimic potential attackers or hackers. This manpower-intensive, time-consuming, and potentially cost-prohibitive approach is not amenable to performing the necessary multiple non-subjective analyses required to support quantifying software protection levels. To facilitate the evaluation and V&V of software protection solutions, we have designed and developed a prototype adaptive cyber-attack modeling system. Our approach integrates an off-line mechanism for rapid construction of Bayesian belief network (BN) attack models with an on-line model instantiation, adaptation and knowledge acquisition scheme. Off-line model construction is supported via a knowledge elicitation approach for identifying key domain requirements and a process for translating these requirements into a library of BN-based cyber-attack models. On-line attack modeling and knowledge acquisition is supported via BN evidence propagation and model parameter learning.
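A toy illustration of the kind of BN evidence propagation described (structure and probabilities invented: a single attack node with two observable children, inference by direct enumeration):

```python
# Posting evidence on the observable nodes updates belief in the attack.

def posterior_attack(probe_seen, alarm):
    p_attack = 0.05                       # prior P(attack)
    p_probe = {True: 0.7, False: 0.1}     # P(probe_seen | attack?)
    p_alarm = {True: 0.6, False: 0.05}    # P(alarm | attack?)

    def joint(attack):
        prior = p_attack if attack else 1 - p_attack
        l1 = p_probe[attack] if probe_seen else 1 - p_probe[attack]
        l2 = p_alarm[attack] if alarm else 1 - p_alarm[attack]
        return prior * l1 * l2

    num = joint(True)
    return num / (num + joint(False))

print(posterior_attack(probe_seen=False, alarm=False))  # ~0.007
print(posterior_attack(probe_seen=True, alarm=True))    # ~0.82: belief jumps
```

The on-line parameter learning the abstract mentions would amount to re-estimating tables like `p_probe` from observed attack traces.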
Gaussian Processes for Data-Efficient Learning in Robotics and Control.
Deisenroth, Marc Peter; Fox, Dieter; Rasmussen, Carl Edward
2015-02-01
Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning reduces the amount of engineering knowledge that is otherwise required. However, autonomous reinforcement learning (RL) approaches typically require many interactions with the system to learn controllers, which is a practical limitation in real systems such as robots, where many interactions can be impractical and time consuming. To address this problem, current learning approaches typically require task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or specific knowledge about the underlying dynamics. In this paper, we follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effects of model errors, a key problem in model-based learning. Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning in real robot and control tasks.
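The core idea, a probabilistic transition model that reports its own uncertainty, can be sketched in a few lines of numpy: ordinary GP regression on one-dimensional dynamics, with predictive variance growing away from the training data. Kernel and noise settings below are illustrative, not the paper's.

```python
import numpy as np

def rbf(A, B, ell=0.5, sf=1.0):
    # Squared-exponential kernel on 1-D inputs
    d = A[:, None] - B[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

f = lambda x: 0.9 * x + 0.2 * np.sin(3 * x)    # "unknown" true dynamics
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, 15)                      # observed states
y = f(X) + 0.01 * rng.normal(size=X.size)       # observed next states

K = rbf(X, X) + 1e-4 * np.eye(X.size)           # kernel matrix + noise
Xs = np.linspace(-3, 3, 7)                      # test states
Ks = rbf(Xs, X)
mean = Ks @ np.linalg.solve(K, y)               # GP predictive mean
var = np.diag(rbf(Xs, Xs) - Ks @ np.linalg.solve(K, Ks.T))

for x, m, v in zip(Xs, mean, var):
    print(f"x={x:5.2f}  pred={m:6.3f}  std={np.sqrt(max(v, 0)):.3f}")
# std grows outside the sampled range: the model "knows what it doesn't know",
# which is what long-term planning exploits to stay robust to model errors.
```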
Core competencies in clinical neuropsychology training across the world.
Hessen, Erik; Hokkanen, Laura; Ponsford, Jennie; van Zandvoort, Martine; Watts, Ann; Evans, Jonathan; Haaland, Kathleen Y
2018-05-01
This work aimed to review main competency requirements from training models in countries with well-established specialties in clinical neuropsychology and to extract core competencies that likely will apply to clinical neuropsychologists regardless of regional and cultural context. We reviewed standards for post-graduate training in clinical neuropsychology from countries in Europe, Australia, and North America based on existing literature, presentations at international conferences, and from description of the training models from national psychological or neuropsychological associations. Despite differences, the reviewed models share similar core competencies considered necessary for a specialty in clinical neuropsychology: (1) In-depth knowledge of general psychology including clinical psychology (post-graduate level), ethical, and legal standards. (2) Expert knowledge about clinically relevant brain-behavioral relationships. (3) Comprehensive knowledge about, and skills in, related clinical disciplines. (4) In-depth knowledge about and skills in neuropsychological assessment, including decision-making and diagnostic competency according to current classification of diseases. (5) Competencies in the area of diversity and culture in relation to clinical neuropsychology. (6) Communication competency of neuropsychological findings and test results to relevant and diverse audiences. (7) Knowledge about and skills in psychological and neuropsychological intervention, including treatment and rehabilitation. All the models have undergone years of development in accordance with requirements of national health care systems in different parts of the world. Despite differences, the common core competency requirements across different regions of the world suggest generalizability of these competencies. We hope this summary can be useful as countries with less established neuropsychology training programs develop their models.
NASA Technical Reports Server (NTRS)
Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui; Yenne, Britt; Vansickle, Larry; Ballantyne, Michael
1992-01-01
Domain-specific knowledge is required to create specifications, generate code, and understand existing systems. Our approach to automating software design is based on instantiating an application domain model with industry-specific knowledge and then using that model to achieve the operational goals of specification elicitation and verification, reverse engineering, and code generation. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model.
Automated extraction of knowledge for model-based diagnostics
NASA Technical Reports Server (NTRS)
Gonzalez, Avelino J.; Myler, Harley R.; Towhidnejad, Massood; Mckenzie, Frederic D.; Kladke, Robin R.
1990-01-01
The concept of accessing computer-aided design (CAD) databases and extracting a process model automatically is investigated as a possible source for the generation of knowledge bases for model-based reasoning systems. The resulting system, referred to as automated knowledge generation (AKG), uses an object-oriented programming structure and constraint techniques as well as an internal database of component descriptions to generate a frame-based structure that describes the model. The procedure has been designed to be general enough to be easily coupled to CAD systems that feature a database capable of providing label and connectivity data from the drawn system. The AKG system is capable of defining knowledge bases in formats required by various model-based reasoning tools.
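A hypothetical miniature of the AKG pipeline: CAD-style label/connectivity records are combined with a component-description library to emit frame-like structures. The record format and library contents are invented for illustration.

```python
# Component descriptions a model-based reasoner could draw on.
COMPONENT_LIBRARY = {
    "valve": {"ports": ["in", "out"], "failure_modes": ["stuck_closed", "leak"]},
    "pump":  {"ports": ["in", "out"], "failure_modes": ["no_flow"]},
}

# CAD export: (label, component_type, {port: connected_label})
cad_records = [
    ("P1", "pump",  {"out": "V1"}),
    ("V1", "valve", {"in": "P1", "out": "TANK"}),
]

def to_frames(records):
    # Merge drawing connectivity with library knowledge into frames.
    frames = {}
    for label, ctype, links in records:
        desc = COMPONENT_LIBRARY[ctype]
        frames[label] = {
            "is_a": ctype,
            "ports": desc["ports"],
            "failure_modes": desc["failure_modes"],
            "connections": dict(links),
        }
    return frames

for name, frame in to_frames(cad_records).items():
    print(name, frame)
```

A final serialization step would then write these frames out in whatever format a given model-based reasoning tool requires.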
NASA Technical Reports Server (NTRS)
Kim, Jonnathan H.
1995-01-01
Humans can perform many complicated tasks without explicit rules. This inherent and advantageous capability becomes a hurdle when a task is to be automated. Modern computers and numerical calculations require explicit rules and discrete numerical values. In order to bridge the gap between human knowledge and automating tools, a knowledge model is proposed. Knowledge modeling techniques are discussed and utilized to automate a labor and time intensive task of detecting anomalous bearing wear patterns in the Space Shuttle Main Engine (SSME) High Pressure Oxygen Turbopump (HPOTP).
Knowledge network model of the energy consumption in discrete manufacturing system
NASA Astrophysics Data System (ADS)
Xu, Binzi; Wang, Yan; Ji, Zhicheng
2017-07-01
Discrete manufacturing system generates a large amount of data and information because of the development of information technology. Hence, a management mechanism is urgently required. In order to incorporate knowledge generated from manufacturing data and production experience, a knowledge network model of the energy consumption in the discrete manufacturing system was put forward based on knowledge network theory and multi-granularity modular ontology technology. This model could provide a standard representation for concepts, terms and their relationships, which could be understood by both human and computer. Besides, the formal description of energy consumption knowledge elements (ECKEs) in the knowledge network was also given. Finally, an application example was used to verify the feasibility of the proposed method.
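For flavor, an invented miniature of such a network: energy-consumption knowledge elements (ECKEs) as typed nodes with named relations. A real system would encode this as a multi-granularity modular ontology (e.g. in OWL) rather than Python dictionaries.

```python
# ECKE nodes with invented attributes and granularity levels.
eckes = {
    "spindle_power_model": {"kind": "formula", "granularity": "machine"},
    "idle_energy":         {"kind": "measurement", "granularity": "machine"},
    "batch_energy":        {"kind": "aggregate", "granularity": "workshop"},
}

# Typed edges: (target, relation, source)
relations = [
    ("batch_energy", "derived_from", "spindle_power_model"),
    ("batch_energy", "derived_from", "idle_energy"),
]

def sources(node):
    # Follow "derived_from" edges to find the knowledge a concept depends on.
    return [s for (t, rel, s) in relations if t == node and rel == "derived_from"]

print(sources("batch_energy"))  # ['spindle_power_model', 'idle_energy']
```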
Learning to Teach: Pedagogical Content Knowledge in Adventure-Based Learning
ERIC Educational Resources Information Center
Sutherland, Sue; Stuhr, Paul T.; Ayvazo, Shiri
2016-01-01
Background: Many alternative curricular models exist in physical education to better meet the needs of students than the multi-activity team sports curriculum that dominates in the USA. These alternative curricular models typically require different content knowledge (CK) and pedagogical CK (PCK) to implement successfully. One of the complexities…
ERIC Educational Resources Information Center
Meehan, Peter M.; Beal, George M.
The objective of this monograph is to contribute to the further understanding of the knowledge-production-and-utilization process. Its primary focus is on a model both general and detailed enough to provide a comprehensive overview of the diverse functions, roles, and processes required to understand the flow of knowledge from its point of origin…
Multi-model-based interactive authoring environment for creating shareable medical knowledge.
Ali, Taqdir; Hussain, Maqbool; Ali Khan, Wajahat; Afzal, Muhammad; Hussain, Jamil; Ali, Rahman; Hassan, Waseem; Jamshed, Arif; Kang, Byeong Ho; Lee, Sungyoung
2017-10-01
Technologically integrated healthcare environments can be realized if physicians are encouraged to use smart systems for the creation and sharing of knowledge used in clinical decision support systems (CDSS). While CDSSs are heading toward smart environments, they lack support for abstraction of technology-oriented knowledge from physicians. Therefore, abstraction in the form of a user-friendly and flexible authoring environment is required in order for physicians to create shareable and interoperable knowledge for CDSS workflows. Our proposed system provides a user-friendly authoring environment to create Arden Syntax MLMs (Medical Logic Modules) as shareable knowledge rules for intelligent decision-making by CDSS. Existing systems are not physician friendly and lack interoperability and shareability of knowledge. In this paper, we propose the Intelligent Knowledge Authoring Tool (I-KAT), a knowledge authoring environment that overcomes the above-mentioned limitations. Shareability is achieved by creating a knowledge base from MLMs using Arden Syntax. Interoperability is enhanced using standard data models and terminologies. However, creation of shareable and interoperable knowledge using Arden Syntax without abstraction increases complexity, which ultimately makes it difficult for physicians to use the authoring environment. Therefore, physician friendliness is provided by abstraction at the application layer to reduce complexity. This abstraction is regulated by mappings created between legacy system concepts, which are modeled as a domain clinical model (DCM), and decision support standards such as the virtual medical record (vMR) and the Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT). We represent these mappings with a semantic reconciliation model (SRM). The objective of the study is the creation of shareable and interoperable knowledge using a user-friendly and flexible I-KAT. We therefore evaluated our system using completeness and user-satisfaction criteria, assessed through system- and user-centric evaluation processes. For system-centric evaluation, we compared the implementation of clinical information modelling system requirements in our proposed system and in existing systems. The results suggested that 82.05% of the requirements were fully supported, 7.69% were partially supported, and 10.25% were not supported by our system. In the existing systems, 35.89% of requirements were fully supported, 28.20% were partially supported, and 35.89% were not supported. For user-centric evaluation, the assessment criterion was ease of use. Our proposed system showed 15 times better results with respect to MLM creation time than the existing systems. Moreover, on average, the participants made only one error per MLM created using our proposed system, but 13 errors per MLM using the existing systems. We provide a user-friendly authoring environment for creation of shareable and interoperable knowledge for CDSS to overcome knowledge acquisition complexity. The authoring environment uses state-of-the-art decision support-related clinical standards with increased ease of use.
Structuring and extracting knowledge for the support of hypothesis generation in molecular biology
Roos, Marco; Marshall, M Scott; Gibson, Andrew P; Schuemie, Martijn; Meij, Edgar; Katrenko, Sophia; van Hage, Willem Robert; Krommydas, Konstantinos; Adriaans, Pieter W
2009-01-01
Background: Hypothesis generation in molecular and cellular biology is an empirical process in which knowledge derived from prior experiments is distilled into a comprehensible model. The requirement of automated support is exemplified by the difficulty of considering all relevant facts that are contained in the millions of documents available from PubMed. Semantic Web provides tools for sharing prior knowledge, while information retrieval and information extraction techniques enable its extraction from literature. Their combination makes prior knowledge available for computational analysis and inference. While some tools provide complete solutions that limit the control over the modeling and extraction processes, we seek a methodology that supports control by the experimenter over these critical processes.

Results: We describe progress towards automated support for the generation of biomolecular hypotheses. Semantic Web technologies are used to structure and store knowledge, while a workflow extracts knowledge from text. We designed minimal proto-ontologies in OWL for capturing different aspects of a text mining experiment: the biological hypothesis, text and documents, text mining, and workflow provenance. The models fit a methodology that allows focus on the requirements of a single experiment while supporting reuse and posterior analysis of extracted knowledge from multiple experiments. Our workflow is composed of services from the 'Adaptive Information Disclosure Application' (AIDA) toolkit as well as a few others. The output is a semantic model with putative biological relations, with each relation linked to the corresponding evidence.

Conclusion: We demonstrated a 'do-it-yourself' approach for structuring and extracting knowledge in the context of experimental research on biomolecular mechanisms. The methodology can be used to bootstrap the construction of semantically rich biological models using the results of knowledge extraction processes. Models specific to particular experiments can be constructed that, in turn, link with other semantic models, creating a web of knowledge that spans experiments. Mapping mechanisms can link to other knowledge resources such as OBO ontologies or SKOS vocabularies. AIDA Web Services can be used to design personalized knowledge extraction procedures. In our example experiment, we found three proteins (NF-Kappa B, p21, and Bax) potentially playing a role in the interplay between nutrients and epigenetic gene regulation. PMID:19796406
Scalable Learning for Geostatistics and Speaker Recognition
2011-01-01
of prior knowledge of the model or due to improved robustness requirements). Both these methods have their own advantages and disadvantages. The use...application. If the data is well-correlated and low-dimensional, any prior knowledge available on the data can be used to build a parametric model. In the...absence of prior knowledge, non-parametric methods can be used. If the data is high-dimensional, PCA-based dimensionality reduction is often the first
MRAC Control with Prior Model Knowledge for Asymmetric Damaged Aircraft
Zhang, Jing
2015-01-01
This paper develops a novel state-tracking multivariable model reference adaptive control (MRAC) technique utilizing prior knowledge of plant models to recover control performance of an asymmetric structural damaged aircraft. A modification of linear model representation is given. With prior knowledge on structural damage, a polytope linear parameter varying (LPV) model is derived to cover all concerned damage conditions. An MRAC method is developed for the polytope model, of which the stability and asymptotic error convergence are theoretically proved. The proposed technique reduces the number of parameters to be adapted and thus decreases computational cost and requires less input information. The method is validated by simulations on NASA generic transport model (GTM) with damage. PMID:26180839
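For intuition, here is a classical scalar MRAC simulation, not the paper's polytope-LPV design: Lyapunov-based adaptive laws drive the plant to track a reference model, with sign(b) standing in for the kind of prior plant knowledge the paper exploits. All parameter values are illustrative.

```python
import numpy as np

# Plant x' = a*x + b*u with a, b unknown to the controller;
# reference model x_m' = am*x_m + bm*r.
a, b = 1.0, 3.0
am, bm = -2.0, 2.0
gamma, dt = 5.0, 1e-3
x = xm = 0.0
kx = kr = 0.0                   # adaptive feedback / feedforward gains
for step in range(int(20 / dt)):
    r = 1.0 if (step * dt) % 10 < 5 else -1.0   # square-wave command
    u = kx * x + kr * r
    e = x - xm                                  # tracking error
    x += dt * (a * x + b * u)
    xm += dt * (am * xm + bm * r)
    kx += dt * (-gamma * e * x * np.sign(b))    # adaptation laws use only
    kr += dt * (-gamma * e * r * np.sign(b))    # the sign of b as prior
print(f"final tracking error: {abs(x - xm):.4f}")
print(f"learned kx={kx:.3f} (ideal {(am - a) / b:.3f}), "
      f"kr={kr:.3f} (ideal {bm / b:.3f})")
```

The paper's contribution replaces the single plant (a, b) with a polytope of damage-condition models, shrinking the set of parameters that must be adapted on line.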
1995-09-01
vital processes of a business. Keywords: process, IDEF, method, methodology, modeling, knowledge acquisition, requirements definition, information systems...knowledge resources. Like manpower, materials, and machines, information and knowledge assets are recognized as vital resources that can be leveraged to...integrated enterprise. These technologies are designed to leverage information and knowledge resources as the key enablers for high-quality systems
From Data to Knowledge: GEOSS experience and the GEOSS Knowledge Base contribution to the GCI
NASA Astrophysics Data System (ADS)
Santoro, M.; Nativi, S.; Mazzetti, P., Sr.; Plag, H. P.
2016-12-01
According to systems theory, data is raw: it simply exists and has no significance beyond its existence, while information is data that has been given meaning by way of relational connection. The appropriate collection of information, such that it contributes to understanding, is a process of knowledge creation. The Global Earth Observation System of Systems (GEOSS) developed by the Group on Earth Observations (GEO) is a set of coordinated, independent Earth observation, information and processing systems that interact and provide access to diverse information for a broad range of users in both public and private sectors. GEOSS links these systems to strengthen the monitoring of the state of the Earth. In the past ten years, the development of GEOSS has taught several lessons dealing with the need to move from (open) data to information and knowledge sharing. Advanced user-focused services require moving from a data-driven framework to a knowledge-sharing platform. Such a platform needs to manage information and knowledge, in addition to the datasets linked to them. For this purpose, GEO has launched a specific task called the "GEOSS Knowledge Base", which deals with resources such as user requirements, Sustainable Development Goals (SDGs), observation and processing ontologies, publications, guidelines, best practices, business processes/algorithms, and definitions of advanced concepts like Essential Variables (EVs), indicators, and strategic goals. In turn, information and knowledge (e.g. guidelines, best practices, user requirements, business processes, algorithms) can be used to generate additional information and knowledge from shared datasets. To fully utilize and leverage the GEOSS Knowledge Base, the current GEOSS Common Infrastructure (GCI) model will be extended and advanced to consider important concepts and implementation artifacts, such as data processing services and environmental/economic models as well as EVs, Primary Indicators, and SDGs. The new GCI model will link these concepts to the present dataset, observation and sensor concepts, enabling a set of very important new capabilities to be offered to GEOSS users.
Knowledge into action - supporting the implementation of evidence into practice in Scotland.
Davies, Sandra; Herbert, Paul; Wales, Ann; Ritchie, Karen; Wilson, Suzanne; Dobie, Laura; Thain, Annette
2017-03-01
The knowledge into action model for NHS Scotland provides a framework for librarians and health care staff to support getting evidence into practice. Central to this model is the development of a network of knowledge brokers to facilitate identification, use, creation and sharing of knowledge. The aim was to translate the concepts described in the model into tangible activities, with the intention of supporting better use of evidence in health care and subsequently improving patient outcomes. Four areas of activity were addressed by small working groups comprising knowledge services staff in local and national boards: defining existing and required capabilities and developing learning opportunities for the knowledge broker network; establishing national search and summarising services; developing actionable knowledge tools; and supporting person-to-person knowledge sharing. This work presents the development of practical tools and support to translate a conceptual model for getting knowledge into action into a series of activities and outputs to support better use of evidence in health care and subsequently improved patient outcomes.
Enhancing Users' Participation in Business Process Modeling through Ontology-Based Training
NASA Astrophysics Data System (ADS)
Macris, A.; Malamateniou, F.; Vassilacopoulos, G.
Successful business process design requires active participation of users who are familiar with organizational activities and business process modelling concepts. Hence, there is a need to provide users with reusable, flexible, agile and adaptable training material in order to enable them to instil their knowledge and expertise in business process design and automation activities. Knowledge reusability is of paramount importance in designing training material on process modelling, since it enables users to participate actively in process design/redesign activities stimulated by the changing business environment. This paper presents a prototype approach for the design and use of training material that provides significant advantages to both the designer (knowledge-content reusability and semantic web enabling) and the user (semantic search, knowledge navigation and knowledge dissemination). The approach is based on externalizing domain knowledge in the form of ontology-based knowledge networks (i.e. training scenarios serving specific training needs) so that it is made reusable.
Flagg, Jennifer L; Lane, Joseph P; Lockett, Michelle M
2013-02-15
Traditional government policies suggest that upstream investment in scientific research is necessary and sufficient to generate technological innovations. The expected downstream beneficial socio-economic impacts are presumed to occur through non-government market mechanisms. However, there is little quantitative evidence for such a direct and formulaic relationship between public investment at the input end and marketplace benefits at the impact end. Instead, the literature demonstrates that the technological innovation process involves a complex interaction between multiple sectors, methods, and stakeholders. The authors theorize that accomplishing the full process of technological innovation in a deliberate and systematic manner requires an operational-level model encompassing three underlying methods, each designed to generate knowledge outputs in different states: scientific research generates conceptual discoveries; engineering development generates prototype inventions; and industrial production generates commercial innovations. Given the critical roles of engineering and business, the entire innovation process should continuously consider the practical requirements and constraints of the commercial marketplace. The Need to Knowledge (NtK) Model encompasses the activities required to successfully generate innovations, along with associated strategies for effectively communicating knowledge outputs in all three states to the various stakeholders involved. It is intentionally grounded in evidence drawn from academic analysis to facilitate objective and quantitative scrutiny, and industry best practices to enable practical application. The Need to Knowledge (NtK) Model offers a practical, market-oriented approach that avoids the gaps, constraints and inefficiencies inherent in undirected activities and disconnected sectors. The NtK Model is a means to realizing increased returns on public investments in those science and technology programs expressly intended to generate beneficial socio-economic impacts.
A Knowledge Conversion Model Based on the Cognitive Load Theory for Architectural Design Education
ERIC Educational Resources Information Center
Wu, Yun-Wu; Liao, Shin; Wen, Ming-Hui; Weng, Kuo-Hua
2017-01-01
The education of architectural design requires balanced curricular arrangements of theoretical knowledge and practical skills to really help students build their knowledge structures, particularly helping them solve the problems of cognitive load. The purpose of this study is to establish an architectural design knowledge…
Security Modeling and Correctness Proof Using Specware and Isabelle
2008-12-01
proving requires substantial knowledge and experience in logical calculus. Subject terms: Formal Method, Theorem... "...formal language and provides tools for proving those formulas in a logical calculus" [5]. We are demonstrating in this thesis that a specification in
Knowledge Management: A Model to Enhance Combatant Command Effectiveness
2011-02-15
implementing the change that is required to achieve the knowledge management vision. The Chief Knowledge Management Officer (KMO) is overall responsible for...the processes, people/culture and technology in the organization. The Chief KMO develops policy and leads the organization's knowledge management...integrates team. Reporting directly to the Chief KMO is the Chief Process Manager, Chief Learning Manager and Chief Technology Officer
NASA Astrophysics Data System (ADS)
Tucker, Deborah L.
Purpose. The purpose of this grounded theory study was to refine, using a Delphi study process, the four categories of the theoretical model of the comprehensive knowledge base required by providers of professional development for K-12 teachers of science generated from a review of the literature.

Methodology. This grounded theory study used data collected through a modified Delphi technique and interviews to refine and validate the literature-based knowledge base required by providers of professional development for K-12 teachers of science. Twenty-three participants, experts in the fields of science education, how people learn, instructional and assessment strategies, and learning contexts, responded to the study's questions.

Findings. By "densifying" the four categories of the knowledge base, this study determined the causal conditions (the science subject matter knowledge), the intervening conditions (how people learn), the strategies (the effective instructional and assessment strategies), and the context (the context and culture of formal learning environments) surrounding the science professional development process. Eight sections were added to the literature-based knowledge base; the final model comprised forty-nine sections. The average length of the operational definitions increased nearly threefold, and the number of citations per operational definition increased more than twofold.

Conclusions. A four-category comprehensive model that can serve as the foundation for the knowledge base required by science professional developers now exists. Subject matter knowledge includes science concepts, inquiry, the nature of science, and scientific habits of mind. How people learn includes the principles of learning, active learning, andragogy, variations in learners, neuroscience and cognitive science, and change theory. Effective instructional and assessment strategies include constructivist learning and inquiry-based teaching, differentiation of instruction, making knowledge and thinking accessible to learners, automatic and fluent retrieval of nonscience-specific skills, science assessment and assessment strategies, science-specific instructional strategies, and safety within a learning environment. Contextual knowledge includes curriculum selection and implementation strategies and knowledge of building program coherence.

Recommendations. Further research is recommended on which specific instructional strategies identified in the refined knowledge base have positive, significant effect sizes for adult learners.
Ahmed, Wamiq M; Lenz, Dominik; Liu, Jia; Paul Robinson, J; Ghafoor, Arif
2008-03-01
High-throughput biological imaging uses automated imaging devices to collect a large number of microscopic images for analysis of biological systems and validation of scientific hypotheses. Efficient manipulation of these datasets for knowledge discovery requires high-performance computational resources, efficient storage, and automated tools for extracting and sharing such knowledge among different research sites. Newly emerging grid technologies provide powerful means for exploiting the full potential of these imaging techniques. Efficient utilization of grid resources requires the development of knowledge-based tools and services that combine domain knowledge with analysis algorithms. In this paper, we first investigate how grid infrastructure can facilitate high-throughput biological imaging research, and present an architecture for providing knowledge-based grid services for this field. We identify two levels of knowledge-based services. The first level provides tools for extracting spatiotemporal knowledge from image sets and the second level provides high-level knowledge management and reasoning services. We then present cellular imaging markup language, an extensible markup language-based language for modeling of biological images and representation of spatiotemporal knowledge. This scheme can be used for spatiotemporal event composition, matching, and automated knowledge extraction and representation for large biological imaging datasets. We demonstrate the expressive power of this formalism by means of different examples and extensive experimental results.
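The flavor of such XML-based spatiotemporal modeling and event matching can be suggested with a tiny invented fragment (tag and attribute names are illustrative, not the paper's actual markup language):

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<experiment id="exp42">
  <event type="division"  t="12.5" x="100" y="220" cell="c7"/>
  <event type="apoptosis" t="14.0" x="102" y="225" cell="c7a"/>
  <event type="division"  t="30.1" x="400" y="80"  cell="c9"/>
</experiment>""")

# Composite query: a division followed within 5 time units, and within
# 10 pixels (Manhattan distance), by an apoptosis -- a candidate
# "divide-then-die" spatiotemporal pattern.
divisions = [e for e in doc if e.get("type") == "division"]
deaths = [e for e in doc if e.get("type") == "apoptosis"]
for d in divisions:
    for k in deaths:
        dt_ = float(k.get("t")) - float(d.get("t"))
        dist = abs(float(k.get("x")) - float(d.get("x"))) + \
               abs(float(k.get("y")) - float(d.get("y")))
        if 0 < dt_ <= 5 and dist <= 10:
            print("matched:", d.get("cell"), "->", k.get("cell"))
```

Grid services of the kind described would run such composition and matching over image-derived event streams far too large for a single workstation.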
Data to knowledge: how to get meaning from your result.
Berman, Helen M; Gabanyi, Margaret J; Groom, Colin R; Johnson, John E; Murshudov, Garib N; Nicholls, Robert A; Reddy, Vijay; Schwede, Torsten; Zimmerman, Matthew D; Westbrook, John; Minor, Wladek
2015-01-01
Structural and functional studies require the development of sophisticated 'Big Data' technologies and software to increase the knowledge derived and ensure reproducibility of the data. This paper presents summaries of the Structural Biology Knowledge Base, the VIPERdb Virus Structure Database, evaluation of homology modeling by the Protein Model Portal, the ProSMART tool for conformation-independent structure comparison, the LabDB 'super' laboratory information management system and the Cambridge Structural Database. These techniques and technologies represent important tools for the transformation of crystallographic data into knowledge and information, in an effort to address the problem of non-reproducibility of experimental results.
2016-01-01
Observations of individual organisms (data) can be combined with expert ecological knowledge of species, especially causal knowledge, to model and extract from flower–visiting data useful information about behavioral interactions between insect and plant organisms, such as nectar foraging and pollen transfer. We describe and evaluate a method to elicit and represent such expert causal knowledge of behavioral ecology, and discuss the potential for wider application of this method to the design of knowledge-based systems for knowledge discovery in biodiversity and ecosystem informatics. PMID:27851814
eClims: An Extensible and Dynamic Integration Framework for Biomedical Information Systems.
Savonnet, Marinette; Leclercq, Eric; Naubourg, Pierre
2016-11-01
Biomedical information systems (BIS) require consideration of three types of variability: data variability induced by new high throughput technologies, schema or model variability induced by large scale studies or new fields of research, and knowledge variability resulting from new discoveries. Beyond data heterogeneity, managing variabilities in the context of BIS requires extensible and dynamic integration process. In this paper, we focus on data and schema variabilities and we propose an integration framework based on ontologies, master data, and semantic annotations. The framework addresses issues related to: 1) collaborative work through a dynamic integration process; 2) variability among studies using an annotation mechanism; and 3) quality control over data and semantic annotations. Our approach relies on two levels of knowledge: BIS-related knowledge is modeled using an application ontology coupled with UML models that allow controlling data completeness and consistency, and domain knowledge is described by a domain ontology, which ensures data coherence. A system build with the eClims framework has been implemented and evaluated in the context of a proteomic platform.
Knowledge management for efficient quantitative analyses during regulatory reviews.
Krudys, Kevin; Li, Fang; Florian, Jeffry; Tornoe, Christoffer; Chen, Ying; Bhattaram, Atul; Jadhav, Pravin; Neal, Lauren; Wang, Yaning; Gobburu, Joga; Lee, Peter I D
2011-11-01
Knowledge management comprises the strategies and methods employed to generate and leverage knowledge within an organization. This report outlines the activities within the Division of Pharmacometrics at the US FDA to effectively manage knowledge with the ultimate goal of improving drug development and advancing public health. The infrastructure required for pharmacometric knowledge management includes provisions for data standards, queryable databases, libraries of modeling tools, archiving of analysis results and reporting templates for effective communication. Two examples of knowledge management systems developed within the Division of Pharmacometrics are used to illustrate these principles. The benefits of sound knowledge management include increased productivity, allowing reviewers to focus on research questions spanning new drug applications, such as improved trial design and biomarker development. The future of knowledge management depends on the collaboration between the FDA and industry to implement data and model standards to enhance sharing and dissemination of knowledge.
Requirements for color technology
NASA Astrophysics Data System (ADS)
Campbell, Ronald B., Jr.
1993-06-01
The requirements for color technology in the general office are reviewed. The two most salient factors driving the requirements for color are the information explosion and the virtually negligible growth in white-collar productivity in the recent past. Accordingly, the business requirement upon color technology is that it be utilized in an effective and efficient manner to increase office productivity. Recent research on productivity and growth has moved beyond the classical two-factor productivity model of labor and capital to explicitly include knowledge as a third and vital factor. Documents are agents of knowledge in the general office. Documents articulate, express, disseminate, and communicate knowledge. The central question addressed here is how can color, in conjunction with other techniques such as graphics and document design, improve the growth of knowledge? The central thesis is that the effective use of color to convert information into knowledge is one of the most powerful ways to increase office productivity. Material on the value of color is reviewed. This material is related to the role of documents. Document services are the way in which users access and utilize color technology. The requirements for color technology are then defined against the services taxonomy.
Issues in Quantitative Analysis of Ultraviolet Imager (UVI) Data: Airglow
NASA Technical Reports Server (NTRS)
Germany, G. A.; Richards, P. G.; Spann, J. F.; Brittnacher, M. J.; Parks, G. K.
1999-01-01
The GGS Ultraviolet Imager (UVI) has proven to be especially valuable in correlative substorm, auroral morphology, and extended statistical studies of the auroral regions. Such studies are based on knowledge of the location, spatial, and temporal behavior of auroral emissions. More quantitative studies, based on absolute radiometric intensities from UVI images, require a more intimate knowledge of the instrument behavior and data processing requirements and are inherently more difficult than studies based on relative knowledge of the oval location. In this study, UVI airglow observations are analyzed and compared with model predictions to illustrate issues that arise in quantitative analysis of UVI images. These issues include instrument calibration, long term changes in sensitivity, and imager flat field response as well as proper background correction. Airglow emissions are chosen for this study because of their relatively straightforward modeling requirements and because of their implications for thermospheric compositional studies. The analysis issues discussed here, however, are identical to those faced in quantitative auroral studies.
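The flat-field and background corrections mentioned above can be sketched generically; the arrays and estimates below are synthetic stand-ins, not UVI calibration data.

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.zeros((8, 8))
signal[2:6, 2:6] = 40.0                                 # synthetic airglow patch
flat = np.clip(rng.normal(1.0, 0.05, size=(8, 8)), 0.8, 1.2)  # pixel response
raw = flat * signal + rng.poisson(lam=10, size=(8, 8))  # counts + background

background = np.median(raw[signal == 0])  # estimate from emission-free pixels
corrected = (raw - background) / flat     # background-subtract, then flat-field

print(corrected[4, 4], corrected[0, 0])   # ~40 inside the patch, ~0 outside
```

Errors in either term bias absolute radiometric intensities directly, which is why they matter far more here than in studies that only locate the auroral oval.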
Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations
NASA Astrophysics Data System (ADS)
Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad
2012-11-01
This research paper is focused on a pragmatic qualitative analysis of various models and approaches to requirements negotiation (a sub-process of the requirements management plan, which is an output of scope management's collect requirements process) and studies stakeholder collaboration methodologies (i.e., from within the communication management knowledge area). The experiential analysis encompasses two tiers: the first tier applies a weighted scoring model, while the second tier develops SWOT matrices on the basis of the weighted scoring model's findings for selecting an appropriate requirements negotiation model. Finally, the results are simulated with the help of statistical pie charts. On the basis of the simulated results for prevalent models and approaches to negotiation, a unified approach for requirements negotiation and stakeholder collaboration is proposed, in which the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process, alongside some external required parameters like MBTI and opportunity analysis.
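A minimal sketch of the first tier, with invented criteria, weights, and ratings: each candidate negotiation model receives a weighted sum, and the resulting ranking would feed the second-tier SWOT matrices.

```python
# Weights must sum to 1; ratings are on an illustrative 1..5 scale.
criteria_weights = {"stakeholder_coverage": 0.4,
                    "tool_support": 0.25,
                    "ease_of_adoption": 0.35}

scores = {  # candidate negotiation model -> criterion -> rating
    "WinWin":     {"stakeholder_coverage": 5, "tool_support": 4, "ease_of_adoption": 3},
    "EasyWinWin": {"stakeholder_coverage": 4, "tool_support": 5, "ease_of_adoption": 4},
    "AHP-based":  {"stakeholder_coverage": 3, "tool_support": 3, "ease_of_adoption": 2},
}

totals = {m: sum(criteria_weights[c] * v for c, v in s.items())
          for m, s in scores.items()}
for model, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{model:12s} {total:.2f}")
```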
William J. Zielinski; Andrew N. Gray; Jeffrey R. Dunk; Joseph W. Sherlock; Gary E. Dixon
2010-01-01
New knowledge from wildlife-habitat relationship models is often difficult to implement in a management context. This can occur because researchers do not always consider whether managers have access to information about environmental covariates that permit the models to be applied. Moreover, ecosystem management requires knowledge about the condition of habitats over...
Quality assurance paradigms for artificial intelligence in modelling and simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oren, T.I.
1987-04-01
New classes of quality assurance concepts and techniques are required for the advanced knowledge-processing paradigms (such as artificial intelligence, expert systems, or knowledge-based systems) and the complex problems that only simulative systems can cope with. A systematization of quality assurance problems as well as examples are given to traditional and cognizant quality assurance techniques in traditional and cognizant modelling and simulation.
Data to knowledge: how to get meaning from your result
Berman, Helen M.; Gabanyi, Margaret J.; Groom, Colin R.; Johnson, John E.; Murshudov, Garib N.; Nicholls, Robert A.; Reddy, Vijay; Schwede, Torsten; Zimmerman, Matthew D.; Westbrook, John; Minor, Wladek
2015-01-01
Structural and functional studies require the development of sophisticated ‘Big Data’ technologies and software to increase the knowledge derived and ensure reproducibility of the data. This paper presents summaries of the Structural Biology Knowledge Base, the VIPERdb Virus Structure Database, evaluation of homology modeling by the Protein Model Portal, the ProSMART tool for conformation-independent structure comparison, the LabDB ‘super’ laboratory information management system and the Cambridge Structural Database. These techniques and technologies represent important tools for the transformation of crystallographic data into knowledge and information, in an effort to address the problem of non-reproducibility of experimental results. PMID:25610627
Incorporating Resilience into Dynamic Social Models
2016-07-20
solved by simply using the information provided by the scenario. Instead, additional knowledge is required from relevant fields that study these... resilience function by leveraging Bayesian Knowledge Bases (BKBs), a probabilistic reasoning network framework [5],[6]. BKBs allow for inferencing... reasoning network framework based on Bayesian Knowledge Bases (BKBs). BKBs are central to our social resilience framework as they are used to
NASA Astrophysics Data System (ADS)
Nývlt, Vladimír; Prušková, Kristýna
2017-10-01
BIM today is much more than 3D drafting alone, and it continues to challenge project participants, which is the topic of both this paper and further research. Knowledge of objects, their behaviour, and other characteristics has a high impact on the whole building life cycle. Other structured and unstructured knowledge is rightfully added as well (e.g. historically based experience, the needs and requirements of users and investors, and the need for project and object revisions), so that all attributes are grasped in a system for the collection, management, and time control of knowledge. A further important finding lies in the necessity of understanding how to manage knowledge needs in diverse and variable ways as BIM maturity levels advance, as defined by Bew and Richards (2008). All decisions made would always rely on good, timely, and correct data. The use of BIM models for Building Information Management can support all decisions through data gathering, sharing, and use across all disciplines and all life-cycle steps; in particular, it significantly improves the possibilities and level of life-cycle costing. Experience and knowledge stored in BIM data models, describing user requirements and best practices derived from other projects and/or research outputs, will help in understanding sustainability in its complexity and wholeness.
Precision GPS ephemerides and baselines
NASA Technical Reports Server (NTRS)
1992-01-01
The required knowledge of the Global Positioning System (GPS) satellite position accuracy can vary depending on the particular application. Application to relative positioning of receiver locations on the ground to infer Earth's tectonic plate motion requires the most accurate knowledge of the GPS satellite orbits. Research directed towards improving and evaluating the accuracy of GPS satellite orbits was conducted at the University of Texas Center for Space Research (CSR). Understanding and modeling the forces acting on the satellites was a major focus of the research. Other aspects of orbit determination, such as the reference frame, time system, measurement modeling, and parameterization, were also investigated. Gravitational forces were modeled by truncated versions of extant gravity fields, such as the Goddard Earth Models (GEM-L2 and GEM-T1) and TEG-2, together with third-body perturbations due to the Sun and Moon. Nongravitational forces considered were solar radiation pressure and perturbations due to thermal venting and thermal imbalance. At the GPS satellite orbit accuracy level required for crustal dynamics applications, models for the nongravitational perturbations play a critical role, since the gravitational forces are well understood and are modeled adequately for GPS satellite orbits.
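Of the nongravitational forces mentioned, solar radiation pressure is often approximated by a simple "cannonball" model. A minimal sketch follows; the reflectivity coefficient, cross-sectional area, and mass are illustrative placeholders, not CSR values:

```python
import numpy as np

P_SUN = 4.56e-6        # solar radiation pressure at 1 AU [N/m^2]
AU = 1.495978707e11    # astronomical unit [m]

def srp_accel(r_sun_to_sat, cr=1.3, area=13.0, mass=800.0):
    """Cannonball solar-radiation-pressure acceleration [m/s^2].
    r_sun_to_sat: vector from the Sun to the satellite [m]."""
    r = np.linalg.norm(r_sun_to_sat)
    # pressure falls off as 1/r^2; acceleration points away from the Sun
    return cr * (area / mass) * P_SUN * (AU / r) ** 2 * (r_sun_to_sat / r)
```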
Rate determination from vector observations
NASA Technical Reports Server (NTRS)
Weiss, Jerold L.
1993-01-01
Vector observations are a common class of attitude data provided by a wide variety of attitude sensors. Attitude determination from vector observations is a well-understood process, and numerous algorithms such as the TRIAD algorithm exist. These algorithms require measurement of the line-of-sight (LOS) vector to reference objects and knowledge of the LOS directions in some predetermined reference frame. Once attitude is determined, it is a simple matter to synthesize vehicle rate using some form of lead-lag filter and then use it for vehicle stabilization. Many situations arise, however, in which rate knowledge is required but knowledge of the nominal LOS directions is not available. This paper presents two methods for determining spacecraft angular rates from vector observations without a priori knowledge of the vector directions. The first approach uses an extended Kalman filter with a spacecraft dynamic model and a kinematic model representing the motion of the observed LOS vectors. The second approach uses a 'differential' TRIAD algorithm to compute the incremental direction cosine matrix, from which vehicle rate is then derived.
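A compact sketch of the TRIAD construction, and of the "differential" idea of applying it between two successive measurement epochs to obtain an incremental direction cosine matrix and hence a rate estimate. This illustrates the concept only; the paper's exact formulation and sign conventions may differ:

```python
import numpy as np

def triad(v1_b, v2_b, v1_r, v2_r):
    """TRIAD: direction cosine matrix from two vector observations
    (body-frame measurements v*_b, reference-frame directions v*_r)."""
    def frame(u, v):
        t1 = u / np.linalg.norm(u)
        t2 = np.cross(u, v); t2 /= np.linalg.norm(t2)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(v1_b, v2_b) @ frame(v1_r, v2_r).T

def rate_from_vectors(v1_now, v2_now, v1_prev, v2_prev, dt):
    """'Differential' TRIAD sketch: treat the previous body-frame vectors
    as the reference frame, giving the incremental DCM over one step."""
    dC = triad(v1_now, v2_now, v1_prev, v2_prev)
    # small-angle approximation: dC ~ I - [w]x * dt
    return np.array([dC[1, 2] - dC[2, 1],
                     dC[2, 0] - dC[0, 2],
                     dC[0, 1] - dC[1, 0]]) / (2.0 * dt)
```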
The Joint Venture Model of Knowledge Utilization: a guide for change in nursing.
Edgar, Linda; Herbert, Rosemary; Lambert, Sylvie; MacDonald, Jo-Ann; Dubois, Sylvie; Latimer, Margot
2006-05-01
Knowledge utilization (KU) is an essential component of today's nursing practice and healthcare system. Despite advances in knowledge generation, the gap in knowledge transfer from research to practice continues. KU models have moved beyond factors affecting the individual nurse to a broader perspective that includes the practice environment and the socio-political context. This paper proposes one such theoretical model: the Joint Venture Model of Knowledge Utilization (JVMKU). Key components of the JVMKU that emerged from an extensive multidisciplinary review of the literature include leadership, emotional intelligence, person, message, empowered workplace and the socio-political environment. The model has a broad and practical application and is not specific to one type of KU or one population. This paper provides a description of the JVMKU, its development and suggested uses at both local and organizational levels. Nurses in both leadership and point-of-care positions will recognize the concepts identified and will be able to apply this model for KU in their own workplace for assessment of areas requiring strengthening and support.
Haueisen, J; Ramon, C; Eiselt, M; Brauer, H; Nowak, H
1997-08-01
Modeling in magnetoencephalography (MEG) and electroencephalography (EEG) requires knowledge of the in vivo tissue resistivities of the head. The aim of this paper is to examine the influence of tissue resistivity changes on the neuromagnetic field and the electric scalp potential. A high-resolution finite element method (FEM) model (452,162 elements, 2-mm resolution) of the human head with 13 different tissue types is employed for this purpose. Our main finding was that the magnetic fields are sensitive to changes in the tissue resistivity in the vicinity of the source. In comparison, the electric surface potentials are sensitive to changes in the tissue resistivity in the vicinity of the source and in the vicinity of the position of the electrodes. The magnitude (strength) of magnetic fields and electric surface potentials is strongly influenced by tissue resistivity changes, while the topography is not as strongly influenced. Therefore, an accurate modeling of magnetic field and electric potential strength requires accurate knowledge of tissue resistivities, while for source localization procedures this knowledge might not be a necessity.
Orthonormal filters for identification in active control systems
NASA Astrophysics Data System (ADS)
Mayer, Dirk
2015-12-01
Many active noise and vibration control systems require models of the control paths. When the controlled system changes slightly over time, adaptive digital filters for the identification of the models are useful. This paper investigates a special class of adaptive digital filters: orthonormal filter banks, which possess the robust and simple adaptation of the widely applied finite impulse response (FIR) filters, but at a lower model order, which is important when considering implementation on embedded systems. However, the filter banks require prior knowledge about the resonance frequencies and damping of the structure. This knowledge must be assumed to be of limited precision, since in many practical systems uncertainties in the structural parameters exist. In this work, a procedure using a number of training systems to find the fixed parameters for the filter banks is applied. The effect of uncertainties in the prior knowledge on the model error is examined both with a basic example and in an experiment. Furthermore, the possibility of compensating for imprecise prior knowledge by a higher filter order is investigated. Comparisons with FIR filters are also made in order to assess the possible advantages of the orthonormal filter banks. Numerical and experimental investigations show that significantly lower computational effort can be reached by the filter banks under certain conditions.
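To make the orthonormal filter-bank idea concrete, the sketch below implements a discrete Laguerre filter bank (one common orthonormal basis, parameterized by a single pole that encodes the prior knowledge of the dominant dynamics) with NLMS adaptation of the output weights. The pole value, filter count, and step size are illustrative, not taken from the paper:

```python
import numpy as np

def laguerre_nlms(u, d, n_filters=8, a=0.9, mu=0.5, eps=1e-6):
    """Identify a model from input u and desired output d with a discrete
    Laguerre filter bank (pole a = prior knowledge of the dominant
    dynamics) and NLMS adaptation of the output weights."""
    w = np.zeros(n_filters)
    l = np.zeros(n_filters)                  # filter-bank states
    g = np.sqrt(1.0 - a * a)
    y = np.zeros(len(u))
    for n in range(len(u)):
        l_new = np.empty(n_filters)
        l_new[0] = a * l[0] + g * u[n]       # low-pass first stage
        for k in range(1, n_filters):
            # all-pass chain: l_k[n] = a*l_k[n-1] + l_{k-1}[n-1] - a*l_{k-1}[n]
            l_new[k] = a * l[k] + l[k - 1] - a * l_new[k - 1]
        l = l_new
        y[n] = w @ l
        w += mu * (d[n] - y[n]) * l / (eps + l @ l)   # NLMS update
    return w, y

# usage: identify a toy first-order plant from noise-driven data
rng = np.random.default_rng(0)
u = rng.standard_normal(2000)
d = np.zeros_like(u)
for n in range(1, len(u)):
    d[n] = 0.6 * d[n - 1] + 0.4 * u[n]       # "unknown" system
w, y = laguerre_nlms(u, d)
```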
Hubble Space Telescope Design Engineering Knowledgebase (HSTDEK)
NASA Technical Reports Server (NTRS)
Johannes, James D.; Everetts, Clark
1989-01-01
The research covered here pays specific attention to the development of tools to assist knowledge engineers in acquiring knowledge and to assist other technical, engineering, and management personnel in automatically performing knowledge capture as part of their everyday work without adding any extra work to what they already do. Requirements for data products, the knowledge base, and methods for mapping knowledge in the documents onto the knowledge representations are discussed, as are some of the difficulties of capturing in the knowledge base the structure of the design process itself, along with a model of the system designed. The capture of knowledge describing the interactions of different components is also discussed briefly.
Manpower, Personnel, and Training Assessment (MPTA) Handbook
2015-11-01
Occupational Specialty (MOS), any Additional Skill Identifier (ASI) required, core knowledge, skills, and abilities (KSAs) required for the job... of training, usability assessments, interviews with Soldiers, and manpower modeling. Some guidelines on the type of questions to ask in this portion... modeling, and simulation activities into an efficient continuum. COICs are the operational effectiveness and operational suitability issues (not
ERIC Educational Resources Information Center
Kali, Yael; Sagy, Ornit; Kuflik, Tsvi; Mogilevsky, Orit; Maayan-Fanar, Emma
2015-01-01
We report on the development and evaluation of an innovative instructional model, which harnesses advanced technologies and local resources (an in-campus museum), to support undergraduate-level art history students in developing the skills required for analyzing artwork. Theory suggests that analyzing artwork requires theoretical knowledge and…
A research perspective on white-tailed deer overabundance in the northeastern United States
William M. Healy; David S. deCalesta; Susan L. Stout
1997-01-01
Resolving issues of deer (Odocoileus spp.) overabundance will require gaining more reliable knowledge about their role in ecosystem dynamics. Science can contribute by advancing knowledge in 4 overlapping spheres of research: model development, measurement techniques, population management, and human behavior.
Causal Reasoning in Medicine: Analysis of a Protocol.
ERIC Educational Resources Information Center
Kuipers, Benjamin; Kassirer, Jerome P.
1984-01-01
Describes the construction of a knowledge representation from the identification of the problem (nephrotic syndrome) to a running computer simulation of causal reasoning to provide a vertical slice of the construction of a cognitive model. Interactions between textbook knowledge, observations of human experts, and computational requirements are…
A Methodology for Multiple Rule System Integration and Resolution Within a Singular Knowledge Base
NASA Technical Reports Server (NTRS)
Kautzmann, Frank N., III
1988-01-01
Expert Systems which support knowledge representation by qualitative modeling techniques experience problems when called upon to support integrated views embodying description and explanation, especially when other factors such as multiple causality, competing rule model resolution, and multiple uses of knowledge representation are included. A series of prototypes are being developed to demonstrate the feasibility of automating the process of systems engineering, design and configuration, and diagnosis and fault management. Such a study involves not only a generic knowledge representation; it must also support multiple views at varying levels of description and interaction between physical elements, systems, and subsystems. Moreover, it will involve models of description and explanation for each level. This multiple-model feature requires the development of control methods between rule systems and heuristics on a meta-level for each expert system involved in an integrated and larger class of expert system. The broadest possible category of interacting expert systems is described along with a general methodology for the knowledge representation and control of mutually exclusive rule systems.
Linking knowledge and action through mental models of sustainable agriculture.
Hoffman, Matthew; Lubell, Mark; Hillis, Vicken
2014-09-09
Linking knowledge to action requires understanding how decision-makers conceptualize sustainability. This paper empirically analyzes farmer "mental models" of sustainability from three winegrape-growing regions of California where local extension programs have focused on sustainable agriculture. The mental models are represented as networks where sustainability concepts are nodes, and links are established when a farmer mentions two concepts in their stated definition of sustainability. The results suggest that winegrape grower mental models of sustainability are hierarchically structured, relatively similar across regions, and strongly linked to participation in extension programs and adoption of sustainable farm practices. We discuss the implications of our findings for the debate over the meaning of sustainability, and the role of local extension programs in managing knowledge systems.
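The network construction described here is straightforward to reproduce: each farmer's coded definition contributes one link for every pair of concepts it co-mentions. A minimal sketch with invented interview data:

```python
from collections import Counter
from itertools import combinations

def mental_model_network(definitions):
    """Co-mention network: nodes are sustainability concepts; an edge is
    counted whenever one farmer's definition mentions both concepts."""
    edges = Counter()
    for concepts in definitions:             # one set of coded concepts each
        for a, b in combinations(sorted(set(concepts)), 2):
            edges[(a, b)] += 1
    return edges

farmers = [                                  # invented coded interview data
    {"soil health", "profitability", "water use"},
    {"soil health", "biodiversity", "profitability"},
]
print(mental_model_network(farmers).most_common(3))
```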
Knowledge management model for teleconsulting in telemedicine.
Pico, Lilia Edith Aparicio; Cuenca, Orlando Rodriguez; Alvarez, Daniel José Salas; Salgado, Piere Augusto Peña
2008-01-01
The present article shows a study of requirements for teleconsulting in a telemedicine solution in order to create a knowledge management system. Several concepts related to the term teleconsulting in telemedicine have been found, which serve to clarify their corresponding applications, potentialities, and scope. Afterwards, different theories about the state of the art in knowledge management have been considered by exploring methodologies and architectures to establish the trends of knowledge management and the possibilities of using them in teleconsulting. Furthermore, local and international experiences have been examined to assess knowledge management systems focused on telemedicine. The objective of this study is to obtain a model for developing teleconsulting systems in Colombia, because the country has many health-information management systems but they don't offer telemedicine services for remote areas. In Colombia there are many people in rural areas with different necessities who don't have access to medical services; teleconsulting would be a good solution to this problem. Lastly, a model of a knowledge system is proposed for teleconsulting in telemedicine. The model has philosophical principles and an architecture that shows the fundamental layers for its development.
Formal Representations of Eligibility Criteria: A Literature Review
Weng, Chunhua; Tu, Samson W.; Sim, Ida; Richesson, Rachel
2010-01-01
Standards-based, computable knowledge representations for eligibility criteria are increasingly needed to provide computer-based decision support for automated research participant screening, clinical evidence application, and clinical research knowledge management. We surveyed the literature and identified five aspects of eligibility criteria knowledge representations that contribute to the various research and clinical applications: the intended use of computable eligibility criteria, the classification of eligibility criteria, the expression language for representing eligibility rules, the encoding of eligibility concepts, and the modeling of patient data. We consider three of them (expression language, codification of eligibility concepts, and patient data modeling), to be essential constructs of a formal knowledge representation for eligibility criteria. The requirements for each of the three knowledge constructs vary for different use cases, which therefore should inform the development and choice of the constructs toward cost-effective knowledge representation efforts. We discuss the implications of our findings for standardization efforts toward sharable knowledge representation of eligibility criteria. PMID:20034594
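As a toy illustration of the three essential constructs, an eligibility rule can be reduced to a coded concept, a comparison operator, and a threshold, evaluated against a patient-data record. The concept names and thresholds below are invented for illustration, not drawn from any standard cited in the paper:

```python
from dataclasses import dataclass
from operator import ge, le, eq

OPS = {">=": ge, "<=": le, "==": eq}

@dataclass
class Criterion:
    concept: str      # a coded concept (hypothetical identifiers here)
    op: str
    value: float

def eligible(patient, criteria):
    """A patient qualifies when every criterion holds for their coded data."""
    return all(OPS[c.op](patient[c.concept], c.value) for c in criteria)

trial = [Criterion("age_years", ">=", 18), Criterion("hba1c_pct", "<=", 9.0)]
print(eligible({"age_years": 54, "hba1c_pct": 7.2}, trial))   # True
```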
Theoretical model to explain the problem-solving process in physics
NASA Astrophysics Data System (ADS)
Lopez, Carlos
2011-03-01
This work reports a theoretical model developed to explain the mental mechanisms of knowledge building during the problem-solving process in physics, using a hybrid assimilation/concept-formation approach. The model has been termed conceptual chains and is represented by graphic diagrams of conceptual dependency, which have yielded information about the background knowledge required during the learning process, as well as about the formation of diverse structures that correspond to distinct forms of networking concepts. Additionally, the conceptual constructs of the model have been classified according to five types of knowledge. Evidence was found of the influence of these structures, as well as of the distinct types of knowledge, on the degree of difficulty of the problems. I am grateful to Laureate International Universities, Baltimore, MD, USA, for financing this work.
Report of the panel on geopotential fields: Gravity field, section 8
NASA Technical Reports Server (NTRS)
Anderson, Allen Joel; Kaula, William M.; Lazarewics, Andrew R.; Lefebvre, Michel; Phillips, Roger J.; Rapp, Richard H.; Rummel, Reinhard F.; Smith, David E.; Tapley, Byron D.; Zlotnick, Victor
1991-01-01
The objective of the Geopotential Panel was to develop a program of data acquisition and model development for the Earth's gravity and magnetic fields that meet the basic science requirements of the solid Earth and ocean studies. Presented here are the requirements for gravity information and models through the end of the century, the present status of our knowledge, data acquisition techniques, and an outline of a program to meet the requirements.
21st century environmental problems are wicked and require holistic systems thinking and solutions that integrate social and economic knowledge with knowledge of the environment. Computer-based technologies are fundamental to our ability to research and understand the relevant sy...
Mapping University Students' Epistemic Framing of Computational Physics Using Network Analysis
ERIC Educational Resources Information Center
Bodin, Madelen
2012-01-01
Solving physics problem in university physics education using a computational approach requires knowledge and skills in several domains, for example, physics, mathematics, programming, and modeling. These competences are in turn related to students' beliefs about the domains as well as about learning. These knowledge and beliefs components are…
ERIC Educational Resources Information Center
Smaby, Marlowe H.; Maddux, Cleborne D.; Richmond, Aaron S.; Lepkowski, William J.; Packman, Jill
2005-01-01
The authors investigated whether undergraduates' scores on the Verbal and Quantitative tests of the Graduate Record Examinations and their undergraduate grade point average can be used to predict knowledge, personal development, and skills of graduates of counseling programs. Multiple regression analysis produced significant models predicting…
Abidi, Samina
2017-10-26
Clinical management of comorbidities is a challenge, especially in a clinical decision support setting, as it requires the safe and efficient reconciliation of multiple disease-specific clinical procedures to formulate a comorbid therapeutic plan that is both effective and safe for the patient. In this paper we pursue the integration of multiple disease-specific Clinical Practice Guidelines (CPG) in order to manage comorbidities within a computerized Clinical Decision Support System (CDSS). We present a CPG integration framework, termed COMET (Comorbidity Ontological Modeling & ExecuTion), that manifests a knowledge management approach to model, computerize, and integrate multiple CPG to yield a comorbid CPG knowledge model that, upon execution, can provide evidence-based recommendations for handling comorbid patients. COMET exploits semantic web technologies to achieve (a) CPG knowledge synthesis to translate a paper-based CPG to disease-specific clinical pathways (CP) that include specialized comorbidity management procedures based on input from domain experts; (b) CPG knowledge modeling to computerize the disease-specific CP using a Comorbidity CPG ontology; (c) CPG knowledge integration by aligning multiple ontologically-modeled CP to develop a unified comorbid CPG knowledge model; and (d) CPG knowledge execution using reasoning engines to derive CPG-mediated recommendations for managing patients with comorbidities. We present a web-accessible COMET CDSS that provides family physicians with CPG-mediated comorbidity decision support to manage Atrial Fibrillation and Chronic Heart Failure. We present our qualitative and quantitative analysis of the knowledge content and usability of the COMET CDSS.
Representation and presentation of requirements knowledge
NASA Technical Reports Server (NTRS)
Johnson, W. L.; Feather, Martin S.; Harris, David R.
1992-01-01
An approach to the representation and presentation of knowledge used in ARIES, an experimental requirements/specification environment, is described. The approach applies the notion of a representation architecture to the domain of software engineering and incorporates a strong coupling to a transformation system. It is characterized by a single highly expressive underlying representation, interfaced simultaneously to multiple presentations, each with notations of differing degrees of expressivity. This enables analysts to use multiple languages for describing systems and have these descriptions yield a single consistent model of the system.
Khan, Taimoor; De, Asok
2014-01-01
In the last decade, artificial neural networks have become very popular techniques for computing different performance parameters of microstrip antennas. The proposed work illustrates a knowledge-based neural network model for predicting the appropriate shape and accurate size of the slot introduced on the radiating patch for achieving the desired level of resonance, gain, directivity, antenna efficiency, and radiation efficiency for dual-frequency operation. By incorporating prior knowledge in the neural model, the number of required training patterns is drastically reduced. Further, the neural model incorporating prior knowledge can be used for predicting the response in the extrapolation region beyond the training-pattern region. For validation, a prototype is also fabricated and its performance parameters are measured. Very good agreement is attained between measured, simulated, and predicted results.
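The prior-knowledge idea can be sketched with a simple surrogate: a coarse analytical estimate enters the data-driven model as an extra input, so few training patterns are needed and extrapolation inherits the coarse model's trend. Here a linear least-squares fit stands in for the neural model, and the "measured" data and coarse formula are entirely synthetic:

```python
import numpy as np

def coarse_model(freq):
    # hypothetical analytical estimate relating slot length to frequency
    return 30.0 / freq

rng = np.random.default_rng(1)
f = rng.uniform(2.0, 6.0, 12)              # only a dozen training patterns
slot_len = 30.0 / f + 0.3 * np.sin(f)      # synthetic "measured" sizes

# prior knowledge enters as an extra input feature alongside frequency;
# a linear least-squares fit stands in for the neural model here
X = np.column_stack([np.ones_like(f), f, coarse_model(f)])
w, *_ = np.linalg.lstsq(X, slot_len, rcond=None)

f_test = 7.0                               # beyond the training range
x_test = np.array([1.0, f_test, coarse_model(f_test)])
print(x_test @ w, 30.0 / f_test + 0.3 * np.sin(f_test))  # prediction vs truth
```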
Knowledge Management through the Equilibrium Pattern Model for Learning
NASA Astrophysics Data System (ADS)
Sarirete, Akila; Noble, Elizabeth; Chikh, Azeddine
Contemporary students are characterized by very applied learning styles and methods of acquiring knowledge. This behavior is consistent with constructivist models in which students are co-partners in the learning process. In the present work the authors developed a new model of learning based on constructivist theory coupled with the cognitive development theory of Piaget. The model considers learning to proceed through several stages, and the move from one stage to another requires a challenge to the learner. Each time a new concept is introduced, it creates a disequilibrium that needs to be worked out in order to return to the equilibrium stage. This process of "disequilibrium/equilibrium" has been analyzed and validated using a course in computer networking, part of the Cisco Networking Academy Program at Effat College, a women's college in Saudi Arabia. The model provides a theoretical foundation for teaching, especially in a complex knowledge domain such as engineering, and can be used in a knowledge economy.
NASA Technical Reports Server (NTRS)
Wade, Rose C.
1989-01-01
The NASA Controlled Ecological Life Support System (CELSS) Program is involved in developing a biogenerative life support system that will supply food, air, and water to space crews on long-duration missions. An important part of this effort is in development of the knowledge and technological capability of producing and processing foods to provide optimal diets for space crews. This involves such interrelated factors as determination of the diet, based on knowledge of nutrient needs of humans and adjustments in those needs that may be required as a result of the conditions of long-duration space flight; determination of the optimal mixture of crops required to provide nutrients at levels that are sufficient but not excessive or toxic; and consideration of the critical issues of spacecraft space and power limitations, which impose a phytomass minimization requirement. The complex interactions among these factors are examined with the goal of supplying a diet that will satisfy human needs while minimizing the total phytomass requirement. The approach taken was to collect plant nutritional composition and phytomass production data, identify human nutritional needs and estimate the adjustments to the nutrient requirements likely to result from space flight, and then to generate mathematical models from these data.
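Minimizing total phytomass subject to nutrient requirements is naturally posed as a linear program (the classic diet problem). A sketch using scipy follows; the crop yields and crew requirements are entirely hypothetical, not CELSS data:

```python
from scipy.optimize import linprog

# hypothetical nutrients provided per m^2 of each candidate crop
#            protein  vitamin_c  energy   (arbitrary consistent units)
crops = {"wheat":   [30,  0, 900],
         "soybean": [90,  5, 600],
         "lettuce": [ 5, 40,  50]}
needs = [70, 60, 1000]      # minimum daily crew requirements (same units)

# A_ub x <= b_ub with negated rows encodes "nutrients delivered >= needs"
A = [[-crops[c][i] for c in crops] for i in range(len(needs))]
b = [-n for n in needs]
# minimize total growing area, a stand-in for phytomass
res = linprog(c=[1.0] * len(crops), A_ub=A, b_ub=b, bounds=(0, None))
print(dict(zip(crops, res.x.round(2))), "m^2 of each crop")
```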
Sticky knowledge: A possible model for investigating implementation in healthcare contexts
Elwyn, Glyn; Taubert, Mark; Kowalczuk, Jenny
2007-01-01
Background In health care, a well recognized gap exists between what we know should be done based on accumulated evidence and what we actually do in practice. A body of empirical literature shows organizations, like individuals, are difficult to change. In the business literature, knowledge management and transfer has become an established area of theory and practice, whilst in healthcare it is only starting to establish a firm footing. Knowledge has become a business resource, and knowledge management theorists and practitioners have examined how knowledge moves in organisations, how it is shared, and how the return on knowledge capital can be maximised to create competitive advantage. New models are being considered, and we wanted to explore the applicability of one of these conceptual models to the implementation of evidence-based practice in healthcare systems. Methods The application of a conceptual model called sticky knowledge, based on an integration of communication theory and knowledge transfer milestones, into a scenario of attempting knowledge transfer in primary care. Results We describe Szulanski's model, the empirical work he conducted, and illustrate its potential applicability with a hypothetical healthcare example based on improving palliative care services. We follow a doctor through two different posts and analyse aspects of knowledge transfer in different primary care settings. The factors included in the sticky knowledge model include: causal ambiguity, unproven knowledge, motivation of source, credibility of source, recipient motivation, recipient absorptive capacity, recipient retentive capacity, barren organisational context, and arduous relationship between source and recipient. We found that we could apply all these factors to the difficulty of implementing new knowledge into practice in primary care settings. Discussion Szulanski argues that knowledge factors play a greater role in the success or failure of a knowledge transfer than has been suspected, and we consider that this conjecture requires further empirical work in healthcare settings. PMID:18096040
Towards a Food Safety Knowledge Base Applicable in Crisis Situations and Beyond
Falenski, Alexander; Weiser, Armin A.; Thöns, Christian; Appel, Bernd; Käsbohrer, Annemarie; Filter, Matthias
2015-01-01
In case of contamination in the food chain, fast action is required in order to reduce the numbers of affected people. In such situations, being able to predict the fate of agents in foods would help risk assessors and decision makers in assessing the potential effects of a specific contamination event and thus enable them to deduce the appropriate mitigation measures. One efficient strategy supporting this is using model based simulations. However, application in crisis situations requires ready-to-use and easy-to-adapt models to be available from the so-called food safety knowledge bases. Here, we illustrate this concept and its benefits by applying the modular open source software tools PMM-Lab and FoodProcess-Lab. As a fictitious sample scenario, an intentional ricin contamination at a beef salami production facility was modelled. Predictive models describing the inactivation of ricin were reviewed, relevant models were implemented with PMM-Lab, and simulations on residual toxin amounts in the final product were performed with FoodProcess-Lab. Due to the generic and modular modelling concept implemented in these tools, they can be applied to simulate virtually any food safety contamination scenario. Apart from the application in crisis situations, the food safety knowledge base concept will also be useful in food quality and safety investigations. PMID:26247028
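Predictive inactivation models of the kind implemented in PMM-Lab are often log-linear in time. A minimal sketch of such a model; the D-value below is a placeholder, not a reviewed ricin parameter:

```python
def residual_fraction(d_value_min, t_min):
    """Classical log-linear inactivation: log10(N/N0) = -t/D,
    where D is the time for a tenfold reduction at the hold temperature."""
    return 10.0 ** (-t_min / d_value_min)

# hypothetical process step: 30 min at a temperature where D = 12 min
print(residual_fraction(12.0, 30.0))   # ~0.3% of initial activity remains
```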
The Applicant Based Training Model Setting Conditions for Recruiting Success
2002-07-01
the RS XO is another critical function that falls into the scope of their responsibility and requires specific training in marketing and advertising. During... Phase I require a solid working knowledge of marketing and advertising. OpsO: Phase II actions require the OpsO receive advanced training in data
The dynamics of meaningful social interactions and the emergence of collective knowledge
Dankulov, Marija Mitrović; Melnik, Roderick; Tadić, Bosiljka
2015-01-01
Collective knowledge as a social value may arise in cooperation among actors whose individual expertise is limited. The process of knowledge creation requires meaningful, logically coordinated interactions, which represents a challenging problem for physics and social dynamics modeling. By combining a two-scale dynamics model with empirical data analysis from the well-known Questions & Answers system Mathematics, we show that this process occurs as a collective phenomenon in an enlarged network (of actors and their artifacts) where the cognitive recognition interactions are properly encoded. The emergent behavior is quantified by the information divergence and innovation advancing of knowledge over time and the signatures of self-organization and knowledge sharing communities. These measures elucidate the impact of each cognitive element and the individual actor's expertise in the collective dynamics. The results are relevant to stochastic processes involving smart components and to collaborative social endeavors, for instance, crowdsourcing scientific knowledge production with online games. PMID:26174482
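One of the quantifiers mentioned, information divergence, can be computed directly from activity distributions at successive times. A minimal sketch using the Kullback-Leibler divergence over invented tag-usage counts (the authors' exact estimator may differ):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) in bits between two discrete distributions."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()          # normalize counts
    return float(np.sum(p * np.log2((p + eps) / (q + eps))))

# divergence of this month's (invented) tag-usage counts from last month's
print(kl_divergence([40, 30, 20, 10], [35, 35, 15, 15]))
```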
A Study of the Efficacy of Project-Based Learning Integrated with Computer-Based Simulation--STELLA
ERIC Educational Resources Information Center
Eskrootchi, Rogheyeh; Oskrochi, G. Reza
2010-01-01
Incorporating computer-simulation modelling into project-based learning may be effective but requires careful planning and implementation. Teachers, especially, need pedagogical content knowledge which refers to knowledge about how students learn from materials infused with technology. This study suggests that students learn best by actively…
ERIC Educational Resources Information Center
Hodges, Michael; Kulinna, Pamela Hodges; Lee, Chong; Kwon, Ja Youn
2017-01-01
Students of all ages have documented a deficiency in health-related fitness knowledge (HRFK). However, improving students' HRFK may require a change in teacher practices and professional development (PD). Purpose: This study, framed by Guskey's Model of Teacher Change (GMTC; Guskey, 2002), sought to assist teachers' HRFK instruction as part of…
Modelling Truck Camper Production
ERIC Educational Resources Information Center
Kramlich, G. R., II; Kobylski, G.; Ahner, D.
2008-01-01
This note describes an interdisciplinary project designed to enhance students' knowledge of the basic techniques taught in a multivariable calculus course. The note discusses the four main requirements of the project and then the solutions for each requirement. Concepts covered include differentials, gradients, Lagrange multipliers, constrained…
A new flexible plug and play scheme for modeling, simulating, and predicting gastric emptying
2014-01-01
Background In-silico models that attempt to capture and describe the physiological behavior of biological organisms, including humans, are intrinsically complex and time-consuming to build and simulate in a computing environment. The level of descriptive detail incorporated in the model depends on the knowledge of the system's behavior at that level. This knowledge is gathered from the literature and/or improved by knowledge obtained from new experiments. Thus model development is an iterative procedure. The objective of this paper is to describe a new plug and play scheme that offers increased flexibility and ease-of-use for modeling and simulating physiological behavior of biological organisms. Methods This scheme requires the modeler (user) first to supply the structure of the interacting components and experimental data in a tabular format. The behavior of the components, described in a mathematical form also provided by the modeler, is externally linked during simulation. The advantage of the plug and play scheme for modeling is that it requires less programming effort and can be quickly adapted to newer modeling requirements while also paving the way for dynamic model building. Results As an illustration, the paper models the dynamics of gastric emptying behavior experienced by humans. The flexibility to adapt the model to predict the gastric emptying behavior under varying types of nutrient infusion in the intestine (ileum) is demonstrated. The predictions were verified with a human intervention study. The error in predicting the half-emptying time was found to be less than 6%. Conclusions A new plug-and-play scheme for biological systems modeling was developed that allows changes to the modeled structure and behavior with reduced programming effort, by abstracting the biological system into a network of smaller sub-systems with independent behavior. In the new scheme, modeling and simulation become an automatic machine-readable and executable task. PMID:24917054
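The plug-and-play scheme can be caricatured in a few lines: structure supplied as a table, behaviors registered by name and linked at simulation time. The component, rate law, and constants below are illustrative stand-ins for the paper's scheme:

```python
# model structure supplied as a table: (component, behavior-name)
STRUCTURE = [("stomach", "gastric_emptying")]

BEHAVIORS = {}
def behavior(name):
    """Register a rate function so it can be plugged in by name."""
    def register(fn):
        BEHAVIORS[name] = fn
        return fn
    return register

@behavior("gastric_emptying")
def gastric_emptying(state, t, k=0.05):
    # hypothetical first-order emptying: dV/dt = -k * V
    return {"stomach_volume": -k * state["stomach_volume"]}

def simulate(state, t_end, dt=1.0):
    """Link registered behaviors to the structure table and integrate."""
    t = 0.0
    while t < t_end:
        for _, name in STRUCTURE:
            for key, rate in BEHAVIORS[name](state, t).items():
                state[key] += dt * rate       # explicit Euler step
        t += dt
    return state

print(simulate({"stomach_volume": 500.0}, 60.0))   # volume after 60 min
```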
Natural Resources Research Program. Annotated Bibliography for Regional Recreation Demand Models
1991-01-01
to gain decreases in hunter density. A study of cross-country skiers in Colorado established willingness-to-pay values for a day of skiing, and...people derive from the use of public beaches? How can knowledge of these relationships improve public beach management? A model was developed and...commodity. The pilot calculation ascertained what basic data are required. A knowledge of a county's population, population density, distance from the
Norfolk, Tim; Siriwardena, A Niroshan
2013-01-01
This discussion paper describes a new and comprehensive model for diagnosing the causes of individual medical performance problems: SKIPE (skills, knowledge, internal, past and external factors). This builds on a previous paper describing a unifying theory of clinical practice, the RDM-p model, which captures the primary skill sets required for effective medical performance (relationship, diagnostics and management), and the professionalism that needs to underpin them. The SKIPE model is currently being used, in conjunction with the RDM-p model, for the in-depth assessment and management of doctors whose performance is a cause for concern.
An Enabling Technology for New Planning and Scheduling Paradigms
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth
2004-01-01
The Flight Projects Directorate at NASA's Marshall Space Flight Center is developing a new planning and scheduling environment and a new scheduling algorithm to enable a paradigm shift in planning and scheduling concepts. Over the past 33 years Marshall has developed and evolved a paradigm for generating payload timelines for Skylab, Spacelab, various other Shuttle payloads, and the International Space Station. The current paradigm starts by collecting the requirements, called "task models," from the scientists and technologists for the tasks that are to be scheduled. Because of shortcomings in the current modeling schema, some requirements are entered as notes. Next, a cadre with knowledge of the vehicle and hardware modifies these models to encompass and be compatible with the hardware model; again, notes are added when the modeling schema does not provide a better way to represent the requirements. Finally, the models are modified to be compatible with the scheduling engine. Then the models are submitted to the scheduling engine for automatic scheduling or, when requirements are expressed in notes, the timeline is built manually. A future paradigm would provide a scheduling engine that accepts separate science models and hardware models. The modeling schema would have the capability to represent all the requirements without resorting to notes. Furthermore, the scheduling engine would not require that the models be modified to account for the capabilities (limitations) of the scheduling engine. The enabling technology under development at Marshall has three major components: (1) A new modeling schema allows expressing all the requirements of the tasks without resorting to notes or awkward contrivances. The chosen modeling schema is both maximally expressive and easy to use. It utilizes graphical methods to show hierarchies of task constraints and networks of temporal relationships. (2) A new scheduling algorithm automatically schedules the models without the intervention of a scheduling expert. The algorithm is tuned for the constraint hierarchies and the complex temporal relationships provided by the modeling schema. It has an extensive search algorithm that can exploit timing flexibilities and constraint and relationship options. (3) An innovative architecture allows multiple remote users to simultaneously model science and technology requirements and other users to model vehicle and hardware characteristics. The architecture allows the remote users to submit scheduling requests directly to the scheduling engine and immediately see the results. These three components are integrated so that science and technology experts with no knowledge of the vehicle or hardware subsystems and no knowledge of the internal workings of the scheduling engine have the ability to build and submit scheduling requests and see the results. The immediate feedback will hone the users' modeling skills and ultimately enable them to produce the desired timeline. This paper summarizes the three components of the enabling technology and describes how this technology would make a new paradigm possible.
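The temporal-relationship checking at the heart of such a scheduling engine can be illustrated in miniature: given a candidate timeline, verify a network of "A must end before B starts" relations. This is a toy sketch, not Marshall's algorithm:

```python
def violations(schedule, relations):
    """Return every 'a must end before b starts' relation that fails.
    schedule maps task -> (start, end); relations is a list of (a, b)."""
    return [(a, b) for a, b in relations if schedule[a][1] > schedule[b][0]]

sched = {"warmup": (0, 10), "experiment": (10, 40), "downlink": (35, 50)}
print(violations(sched, [("warmup", "experiment"),
                         ("experiment", "downlink")]))
# -> [('experiment', 'downlink')]: the downlink starts too early
```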
Enabling a New Planning and Scheduling Paradigm
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth
2004-01-01
The Flight Projects Directorate at NASA's Marshall Space Flight Center is developing a new planning and scheduling environment and a new scheduling algorithm to enable a paradigm shift in planning and scheduling concepts. Over the past 33 years Marshall has developed and evolved a paradigm for generating payload timelines for Skylab, Spacelab, various other Shuttle payloads, and the International Space Station. The current paradigm starts by collecting the requirements, called "tasks models," from the scientists and technologists for the tasks that they want to be done. Because of shortcomings in the current modeling schema, some requirements are entered as notes. Next a cadre with knowledge of vehicle and hardware modifies these models to encompass and be compatible with the hardware model; again, notes are added when the modeling schema does not provide a better way to represent the requirements. Finally, another cadre further modifies the models to be compatible with the scheduling engine. This last cadre also submits the models to the scheduling engine or builds the timeline manually to accommodate requirements that are expressed in notes. A future paradigm would provide a scheduling engine that accepts separate science models and hardware models. The modeling schema would have the capability to represent all the requirements without resorting to notes. Furthermore, the scheduling engine would not require that the models be modified to account for the capabilities (limitations) of the scheduling engine. The enabling technology under development at Marshall has three major components. (1) A new modeling schema allows expressing all the requirements of the tasks without resorting to notes or awkward contrivances. The chosen modeling schema is both maximally expressive and easy to use. It utilizes graphics methods to show hierarchies of task constraints and networks of temporal relationships. (2) A new scheduling algorithm automatically schedules the models without the intervention of a scheduling expert. The algorithm is tuned for the constraint hierarchies and the complex temporal relationships provided by the modeling schema. It has an extensive search algorithm which can exploit timing flexibilities and constraint and relationship options. (3) A web-based architecture allows multiple remote users to simultaneously model science and technology requirements and other users to model vehicle and hardware characteristics. The architecture allows the users to submit scheduling requests directly to the scheduling engine and immediately see the results. These three components are integrated so that science and technology experts with no knowledge of the vehicle or hardware subsystems and no knowledge of the internal workings of the scheduling engine have the ability to build and submit scheduling requests and see the results. The immediate feedback will hone the users' modeling skills and ultimately enable them to produce the desired timeline. This paper summarizes the three components of the enabling technology and describes how this technology would make a new paradigm possible.
Sharing Responsibility for Data Stewardship Between Scientists and Curators
NASA Astrophysics Data System (ADS)
Hedstrom, M. L.
2012-12-01
Data stewardship is becoming increasingly important to support accurate conclusions from new forms of data, integration of and computation across heterogeneous data types, interactions between models and data, replication of results, data governance and long-term archiving. In addition to increasing recognition of the importance of data management, data science, and data curation by US and international scientific agencies, the National Academies of Science Board on Research Data and Information is sponsoring a study on Data Curation Education and Workforce Issues. Effective data stewardship requires a distributed effort among scientists who produce data, IT staff and/or vendors who provide data storage and computational facilities and services, and curators who enhance data quality, manage data governance, provide access to third parties, and assume responsibility for long-term archiving of data. The expertise necessary for scientific data management includes a mix of knowledge of the scientific domain; an understanding of domain data requirements, standards, ontologies and analytical methods; facility with leading edge information technology; and knowledge of data governance, standards, and best practices for long-term preservation and access that rarely are found in a single individual. Rather than developing data science and data curation as new and distinct occupations, this paper examines the set of tasks required for data stewardship. The paper proposes an alternative model that embeds data stewardship in scientific workflows and coordinates hand-offs between instruments, repositories, analytical processing, publishers, distributors, and archives. This model forms the basis for defining knowledge and skill requirements for specific actors in the processes required for data stewardship and the corresponding educational and training needs.
Knowledge engineering in volcanology: Practical claims and general approach
NASA Astrophysics Data System (ADS)
Pshenichny, Cyril A.
2014-10-01
Knowledge engineering, being a branch of artificial intelligence, offers a variety of methods for elicitation and structuring of knowledge in a given domain. Only a few of them (ontologies and semantic nets, event/probability trees, Bayesian belief networks and event bushes) are known to volcanologists. Meanwhile, the tasks faced by volcanology and the solutions found so far favor a much wider application of knowledge engineering, especially tools for handling dynamic knowledge. This raises some fundamental logical and mathematical problems and requires an organizational effort, but may strongly improve panel discussions, enhance decision support, optimize physical modeling and support scientific collaboration.
NASA Astrophysics Data System (ADS)
Ferguson-Hessler, Monica G. M.; de Jong, Ton
This study aims at giving a systematic description of the cognitive activities involved in teaching physics. Such a description of instruction in physics requires a basis in two models, that is, the cognitive activities involved in learning physics and the knowledge base that is the foundation of expertise in that subject. These models have been provided by earlier research. The model of instruction distinguishes three main categories of instruction process: presenting new information, integrating (i.e., bringing structure into) new knowledge, and connecting elements of new knowledge to prior knowledge. Each of the main categories has been divided into a number of specific instruction processes. In this way, any limited and specific cognitive teacher activity can be described along two dimensions: process and type of knowledge. The model was validated by application to lectures and problem-solving classes of first-year university courses. These were recorded and analyzed as to instruction process and type of knowledge. Results indicate that teachers are indeed involved in the various types of instruction processes defined. The importance of this study lies in the creation of a terminology that makes it possible to discuss instruction in an explicit and specific way.
A Three-Level Hierarchical Linear Model Using Student Growth Curve Modeling and Contextual Data
ERIC Educational Resources Information Center
Giorgio, Dorian
2012-01-01
Educational experts have criticized status models of school accountability, as required by the No Child Left Behind Act (NCLB), describing them as ineffectual in measuring achievement because their one-time assessment of student knowledge ignores student growth. Research on student achievement has instead identified growth models as superior…
The Power of Proofs-of-Possession: Securing Multiparty Signatures against Rogue-Key Attacks
NASA Astrophysics Data System (ADS)
Ristenpart, Thomas; Yilek, Scott
Multiparty signature protocols need protection against rogue-key attacks, made possible whenever an adversary can choose its public key(s) arbitrarily. For many schemes, provable security has only been established under the knowledge of secret key (KOSK) assumption where the adversary is required to reveal the secret keys it utilizes. In practice, certifying authorities rarely require the strong proofs of knowledge of secret keys required to substantiate the KOSK assumption. Instead, proofs of possession (POPs) are required and can be as simple as just a signature over the certificate request message. We propose a general registered key model, within which we can model both the KOSK assumption and in-use POP protocols. We show that simple POP protocols yield provable security of Boldyreva's multisignature scheme [11], the LOSSW multisignature scheme [28], and a 2-user ring signature scheme due to Bender, Katz, and Morselli [10]. Our results are the first to provide formal evidence that POPs can stop rogue-key attacks.
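A POP of the simple kind described, a signature over the certificate request under the key being certified, can be sketched with a Schnorr signature. The group parameters below are tiny toy values chosen for illustration and are completely insecure:

```python
import hashlib, secrets

# Toy Schnorr group: q divides p-1 and g has prime order q mod p
p, q, g = 607, 101, 64

def H(*parts):
    data = b"|".join(str(x).encode() for x in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    x = secrets.randbelow(q - 1) + 1        # secret key
    return x, pow(g, x, p)                  # (sk, pk)

def pop_sign(x, pk, cert_request):
    """Proof of possession: a Schnorr signature over the certificate
    request (which includes the public key) under the key being certified."""
    k = secrets.randbelow(q - 1) + 1
    r = pow(g, k, p)
    c = H(pk, r, cert_request)
    return r, (k + c * x) % q

def pop_verify(pk, cert_request, sig):
    r, s = sig
    c = H(pk, r, cert_request)
    return pow(g, s, p) == (r * pow(pk, c, p)) % p   # g^s = r * pk^c

x, y = keygen()
req = f"CSR: CN=alice, key={y}"
print(pop_verify(y, req, pop_sign(x, y, req)))   # True
```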
C-Language Integrated Production System, Version 6.0
NASA Technical Reports Server (NTRS)
Riley, Gary; Donnell, Brian; Ly, Huyen-Anh Bebe; Ortiz, Chris
1995-01-01
C Language Integrated Production System (CLIPS) computer programs are specifically intended to model human expertise or other knowledge. CLIPS is designed to enable research on, and development and delivery of, artificial intelligence on conventional computers. CLIPS 6.0 provides a cohesive software tool for handling a wide variety of knowledge, with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming: representation of knowledge as heuristics - essentially, rules of thumb that specify a set of actions to be performed in a given situation. Object-oriented programming: modeling of complex systems comprised of modular components easily reused to model other systems or create new components. Procedural programming: representation of knowledge in ways similar to those of such languages as C, Pascal, Ada, and LISP. The version of CLIPS 6.0 for IBM PC-compatible computers requires DOS v3.3 or later and/or Windows 3.1 or later.
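The rule-based paradigm can be illustrated outside CLIPS syntax with a few lines of Python: a forward-chaining loop that fires any rule whose conditions are satisfied until no new facts appear. The facts and rules are invented:

```python
def forward_chain(facts, rules):
    """Minimal forward-chaining inference in the rule-based style:
    fire rules whose conditions hold until no new facts are derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [(("duck-shaped", "quacks"), "is-duck"),
         (("is-duck",), "can-swim")]
print(forward_chain({"duck-shaped", "quacks"}, rules))
```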
The Exploration-Exploitation Dilemma: A Multidisciplinary Framework
Berger-Tal, Oded; Meron, Ehud; Saltz, David
2014-01-01
The trade-off between the need to obtain new knowledge and the need to use that knowledge to improve performance is one of the most basic trade-offs in nature, and optimal performance usually requires some balance between exploratory and exploitative behaviors. Researchers in many disciplines have been searching for the optimal solution to this dilemma. Here we present a novel model in which the exploration strategy itself is dynamic and varies with time in order to optimize a definite goal, such as the acquisition of energy, money, or prestige. Our model produced four very distinct phases: Knowledge establishment, Knowledge accumulation, Knowledge maintenance, and Knowledge exploitation, giving rise to a multidisciplinary framework that applies equally to humans, animals, and organizations. The framework can be used to explain a multitude of phenomena in various disciplines, such as the movement of animals in novel landscapes, the most efficient resource allocation for a start-up company, or the effects of old age on knowledge acquisition in humans. PMID:24756026
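A minimal numerical sketch of the dilemma the abstract formalizes: an epsilon-greedy bandit whose exploration rate decays over time, so early steps establish and accumulate knowledge while later steps mostly exploit it. The decay schedule and payoffs are illustrative assumptions, not the authors' model.

```python
# Two-armed bandit with a time-decaying exploration rate: behavior shifts
# from mostly exploratory (learning arm values) to mostly exploitative.
import random

true_means = [1.0, 1.5]          # illustrative payoffs; arm 1 is better
est, counts = [0.0, 0.0], [0, 0]
total = 0.0

for t in range(1, 2001):
    eps = 1.0 / t**0.5           # assumed decay schedule: explore less over time
    if random.random() < eps:
        arm = random.randrange(2)                 # explore
    else:
        arm = max((0, 1), key=lambda a: est[a])   # exploit current knowledge
    reward = random.gauss(true_means[arm], 1.0)
    counts[arm] += 1
    est[arm] += (reward - est[arm]) / counts[arm]  # incremental mean estimate
    total += reward

print(f"estimates={est}, pulls={counts}, avg reward={total/2000:.2f}")
```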
Conservation planning for a species requires knowledge of the species’ population status and distribution. An important step in obtaining this information for many species is the development of models that predict the habitat distribution for the species. Such models can be usef...
An, Gary; Christley, Scott
2012-01-01
Given the panoply of system-level diseases that result from disordered inflammation, such as sepsis, atherosclerosis, cancer, and autoimmune disorders, understanding and characterizing the inflammatory response is a key target of biomedical research. Untangling the complex behavioral configurations associated with a process as ubiquitous as inflammation represents a prototype of the translational dilemma: the ability to translate mechanistic knowledge into effective therapeutics. A critical failure point in the current research environment is a throughput bottleneck at the level of evaluating hypotheses of mechanistic causality; these hypotheses represent the key step toward the application of knowledge for therapy development and design. Addressing the translational dilemma will require utilizing the ever-increasing power of computers and computational modeling to increase the efficiency of the scientific method in the identification and evaluation of hypotheses of mechanistic causality. More specifically, development needs to focus on facilitating the ability of non-computer trained biomedical researchers to utilize and instantiate their knowledge in dynamic computational models. This is termed "dynamic knowledge representation." Agent-based modeling is an object-oriented, discrete-event, rule-based simulation method that is well suited for biomedical dynamic knowledge representation. Agent-based modeling has been used in the study of inflammation at multiple scales. The ability of agent-based modeling to encompass multiple scales of biological process as well as spatial considerations, coupled with an intuitive modeling paradigm, suggest that this modeling framework is well suited for addressing the translational dilemma. This review describes agent-based modeling, gives examples of its applications in the study of inflammation, and introduces a proposed general expansion of the use of modeling and simulation to augment the generation and evaluation of knowledge by the biomedical research community at large.
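For a flavor of agent-based dynamic knowledge representation, the sketch below encodes a single mechanistic rule ("macrophages move randomly and clear damage they encounter") on a small grid. It is a generic toy illustrating the paradigm, not any of the inflammation models this review covers.

```python
# Toy agent-based model: macrophage agents random-walk on a grid and
# remove 'damage' sites they land on; the rule set is the encoded hypothesis.
import random

SIZE, STEPS = 20, 200
damage = {(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(60)}
macrophages = [(random.randrange(SIZE), random.randrange(SIZE)) for _ in range(5)]

for step in range(STEPS):
    moved = []
    for (x, y) in macrophages:
        dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        x, y = (x + dx) % SIZE, (y + dy) % SIZE   # toroidal grid
        damage.discard((x, y))                    # rule: clear damage on contact
        moved.append((x, y))
    macrophages = moved

print(f"damage sites remaining after {STEPS} steps: {len(damage)}")
```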
Chaudhri, Vinay K; Elenius, Daniel; Goldenkranz, Andrew; Gong, Allison; Martone, Maryann E; Webb, William; Yorke-Smith, Neil
2014-01-01
Using knowledge representation for biomedical projects is now commonplace. In previous work, we represented the knowledge found in a college-level biology textbook in a fashion useful for answering questions. We showed that embedding the knowledge representation and question-answering abilities in an electronic textbook helped to engage student interest and improve learning. A natural question that arises from this success, and this paper's primary focus, is whether a similar approach is applicable across a range of life science textbooks. To answer that question, we considered four different textbooks, ranging from a below-introductory college biology text to an advanced, graduate-level neuroscience textbook. For these textbooks, we investigated the following questions: (1) To what extent is knowledge shared between the different textbooks? (2) To what extent can the same upper ontology be used to represent the knowledge found in different textbooks? (3) To what extent can the questions of interest for a range of textbooks be answered by using the same reasoning mechanisms? Our existing modeling and reasoning methods apply especially well both to a textbook that is comparable in level to the text studied in our previous work (i.e., an introductory-level text) and to a textbook at a lower level, suggesting potential for a high degree of portability. Even for the overlapping knowledge found across the textbooks, the level of detail covered in each textbook was different, which requires that the representations must be customized for each textbook. We also found that for advanced textbooks, representing models and scientific reasoning processes was particularly important. With some additional work, our representation methodology would be applicable to a range of textbooks. The requirements for knowledge representation are common across textbooks, suggesting that a shared semantic infrastructure for the life sciences is feasible. Because our representation overlaps heavily with those already being used for biomedical ontologies, this work suggests a natural pathway to include such representations as part of the life sciences curriculum at different grade levels.
ERIC Educational Resources Information Center
Jones, Sarah-Louise; Procter, Richard; Younie, Sarah
2015-01-01
Research alone does not inform practice, rather a process of knowledge translation is required to enable research findings to become meaningful for practitioners in their contextual settings. However, the translational process needs to be an iterative cycle so that the practice itself can be reflected upon and thereby inform the ongoing research…
ERIC Educational Resources Information Center
Yendol-Hoppey, Diane; Dana, Nancy Fichtman
2006-01-01
Effective mentoring requires planned and mindful attention to the ways in which one's knowledge, skills, and experience can be passed on to new teachers. Stressing the importance of deep reflection on one's mentoring practice, the award-winning authors offer eight models/metaphors that mentors can customize to meet the individual needs of their…
The Polygonal Model: A Simple Representation of Biomolecules as a Tool for Teaching Metabolism
ERIC Educational Resources Information Center
Bonafe, Carlos Francisco Sampaio; Bispo, Jose Ailton Conceição; de Jesus, Marcelo Bispo
2018-01-01
Metabolism involves numerous reactions and organic compounds that the student must master to understand adequately the processes involved. Part of biochemical learning should include some knowledge of the structure of biomolecules, although the acquisition of such knowledge can be time-consuming and may require significant effort from the student.…
ERIC Educational Resources Information Center
Li, Yanyan; Dong, Mingkai; Huang, Ronghuai
2011-01-01
The knowledge society requires life-long learning and a flexible learning environment that enables fast, just-in-time and relevant learning, aiding the development of communities of knowledge and linking learners and practitioners with experts. Based upon a semantic wiki, a combination of wiki and Semantic Web technology, this paper designs and develops…
Knowledge-acquisition tools for medical knowledge-based systems.
Lanzola, G; Quaglini, S; Stefanelli, M
1995-03-01
Knowledge-based systems (KBS) have been proposed to solve a large variety of medical problems. A strategic issue for KBS development and maintenance is the effort required from both knowledge engineers and domain experts. The proposed solution is building efficient knowledge acquisition (KA) tools. This paper presents a set of KA tools we are developing within a European Project called GAMES II. They have been designed after the formulation of an epistemological model of medical reasoning. The main goal is to develop a computational framework that allows knowledge engineers and domain experts to interact cooperatively in developing a medical KBS. To this aim, a set of reusable software components is highly recommended. Their design was facilitated by the development of a methodology for KBS construction. It views this process as comprising two activities: the tailoring of the epistemological model to the specific medical task to be executed, and the subsequent translation of this model into a computational architecture so that the connections between computational structures and their knowledge-level counterparts are maintained. The KA tools we developed are illustrated with examples from the behavior of a KBS we are building for the management of children with acute myeloid leukemia.
Purchase, Sharon; Vickery, Alistair; Garton-Smith, Jacquie; O'Leary, Peter; Sullivan, David; Slattery, Mark; Playford, David; Watts, Gerald
2014-12-01
This study analyzed various business models for improving the diagnosis and treatment of familial hypercholesterolaemia. Five different strategies were analyzed, and data were collected through documentary analysis and structured interviews. Interviewees included professionals from universities, the Western Australia Department of Health, private medical practitioners and not-for-profit organizations. Two business models are recommended: an alliance with general practitioners and primary health care organizations, and a joint venture model between private cardiology clinics and lipid disorder clinics in the public sector. Primary care providers are in a good position to co-ordinate across the multi-disciplinary health services required to treat familial hypercholesterolaemia within the population. Devolution of knowledge on treatment of familial hypercholesterolaemia from centralized specialist hospital clinics to primary care services is required to improve the rate of detection of this condition in the community. An International Classification of Disease (ICD)-10 and/or a Diagnosis-Related Group (DRG) code is required to codify, catalogue and document new cases and treatment, as well as to facilitate research and reimbursement strategies. Primary Health Care Organizations can usefully facilitate the transfer of knowledge on best standard of care to general practice, but the best model of care will require close integration of care with specialist and academic centres.
Progress in modeling hypersonic turbulent boundary layers
NASA Technical Reports Server (NTRS)
Zeman, Otto
1993-01-01
A good knowledge of the turbulence structure, wall heat transfer, and friction in turbulent boundary layers (TBL) at high speeds is required for the design of hypersonic air breathing airplanes and reentry space vehicles. This work reports on recent progress in the modeling of high speed TBL flows. The specific research goal described here is the development of a second order closure model for zero pressure gradient TBL's for the range of Mach numbers up to hypersonic speeds with arbitrary wall cooling requirements.
IPSE: A knowledge-based system for fluidization studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, S.
1991-01-01
Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all specified "goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at the Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
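The multistep procedure in this abstract is essentially a feedback loop, sketched below in Python. The helpers `run_simulation`, `goals_met`, and `revise_inputs` are hypothetical placeholders standing in for invoking a simulator such as ASPEN, checking the stated goals, and editing the input deck; the numbers are invented.

```python
# Skeleton of the prepare/execute/analyze cycle an intelligent assistant
# like IPSE automates. All three helpers are hypothetical placeholders.
def run_simulation(input_deck):       # e.g., write input file, invoke simulator
    return {"conversion": input_deck["temperature"] / 1000}

def goals_met(results):               # analyze results against stated goals
    return results["conversion"] >= 0.85

def revise_inputs(input_deck):        # knowledge-based revision of the deck
    input_deck["temperature"] += 25
    return input_deck

deck = {"temperature": 750}
for iteration in range(100):          # repeat until satisfactory results
    results = run_simulation(deck)
    if goals_met(results):
        print(f"goals met after {iteration + 1} runs: {results}")
        break
    deck = revise_inputs(deck)
```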
NASA Astrophysics Data System (ADS)
Davidowitz, Bette; Potgieter, Marietjie
2016-06-01
Research has shown that a high level of content knowledge (CK) is necessary but not sufficient to develop the special knowledge base of expert teachers known as pedagogical content knowledge (PCK). This study contributes towards research to quantify the relationship between CK and PCK in science. In order to determine the proportion of the variance in PCK accounted for by the variance in CK, instruments are required that are valid, reliable and unidimensional for measuring person abilities in CK and PCK. An instrument consisting of two paper-and-pencil tests was designed to assess Grade 12 teachers' CK and PCK in organic chemistry. We used the Rasch measurement model to convert raw score data into interval measures and to provide empirical evidence for the validity, reliability and unidimensionality of the tests. The correlation between CK and PCK was estimated as r = .66 (p < .001). We found evidence to suggest that while topic-specific PCK (TSPCK) develops with increasing teaching experience, high levels of CK can be acquired with limited teaching experience. These findings support the hypothesis that CK is a requirement for the development of TSPCK; proficiency in CK is, however, not necessarily associated with high levels of TSPCK.
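For reference, the dichotomous Rasch model used to convert raw scores into interval measures gives the probability that person n answers item i correctly in terms of person ability theta_n and item difficulty delta_i:

```latex
P(X_{ni} = 1 \mid \theta_n, \delta_i) = \frac{e^{\theta_n - \delta_i}}{1 + e^{\theta_n - \delta_i}}
```

On this reading, the reported correlation of r = .66 implies that the variance in CK accounts for roughly r^2 = .44, i.e. about 44% of the variance in the PCK measures.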
Lockwood, Craig; Stephenson, Matthew; Lizarondo, Lucylynn; van Den Hoek, Joan; Harrison, Margaret
2016-08-01
This paper describes an online facilitation for operationalizing the knowledge-to-action (KTA) model. The KTA model incorporates implementation planning that is optimally suited to the information needs of clinicians. The can-implement© is an evidence implementation process informed by the KTA model. An online counterpart, the can-implement.pro©, was developed to enable greater dissemination and utilization of the can-implement© process. The driver for this work was health professionals' need for facilitation that is iterative, informed by context and localized to the specific needs of users. The literature supporting this paper includes evaluation studies and theoretical concepts relevant to the KTA model, evidence implementation and facilitation. Nursing and other health disciplines require a skill set and resources to successfully navigate the complexity of organizational requirements, inter-professional leadership and day-to-day practical management to implement evidence into clinical practice. The can-implement.pro© provides an accessible, inclusive system for evidence implementation projects. There is empirical support for evidence implementation informed by the KTA model, which in this phase of work has been developed for online uptake. Nurses and other clinicians seeking to implement evidence could benefit from the directed actions, planning advice and information embedded in the phases and steps of can-implement.pro©.
Effective behavioral modeling and prediction even when few exemplars are available
NASA Astrophysics Data System (ADS)
Goan, Terrance; Kartha, Neelakantan; Kaneshiro, Ryan
2006-05-01
While great progress has been made in the lowest levels of data fusion, practical advances in behavior modeling and prediction remain elusive. The most critical limitation of existing approaches is their inability to support the required knowledge modeling and continuing refinement under realistic constraints (e.g., few historic exemplars, the lack of knowledge engineering support, and the need for rapid system deployment). This paper reports on our ongoing efforts to develop Propheteer, a system which will address these shortcomings through two primary techniques. First, with Propheteer we abandon the typical consensus-driven modeling approaches that involve infrequent group decision making sessions in favor of an approach that solicits asynchronous knowledge contributions (in the form of alternative future scenarios and indicators) without burdening the user with endless certainty or probability estimates. Second, we enable knowledge contributions by personnel beyond the typical core decision making group, thereby casting light on blind spots, mitigating human biases, and helping maintain the currency of the developed behavior models. We conclude with a discussion of the many lessons learned in the development of our prototype Propheteer system.
Cardiac nursing: achieving competent practitioners.
Riley, Jillian; Brodie, Lyndell; Shuldham, Caroline
2005-03-01
This paper describes how competency statements were integrated into an academic framework to provide a transparent yet flexible career pathway for the nurse working in acute cardiac care. Nurses are expanding and developing their roles and use wide-ranging skills and knowledge to care for patients. Additionally, models of care delivery are changing and patients are cared for in a variety of settings. Where evidence exists, these models demonstrate improvement in the provision and quality of services and contribute to improved quality of life, maximise medication and therapy and reduce waiting times for investigations. However, whilst many studies have demonstrated benefit, translating these results into routine practice requires skilled nurses who are "fit for purpose", and to support this, professional competencies can be used to measure competence in practice whilst informing educational initiatives. This paper outlines the development of competency statements that identify the knowledge and skills required for safe, effective and competent care and direct the cardiac nurse to acquire skills and knowledge in a focused and coherent way.
Literature Mining and Knowledge Discovery Tools for Virtual Tissues
Virtual Tissues (VTs) are in silico models that simulate the cellular fabric of tissues to analyze complex relationships and predict multicellular behaviors in specific biological systems such as the mature liver (v-Liver™) or developing embryo (v-Embryo™). VT models require inpu...
A Knowledge-Base for a Personalized Infectious Disease Risk Prediction System.
Vinarti, Retno; Hederman, Lucy
2018-01-01
We present a knowledge-base to represent collated infectious disease risk (IDR) knowledge. The knowledge concerns personal and contextual risk of contracting an infectious disease, obtained from declarative sources (e.g. the Atlas of Human Infectious Diseases). Automated prediction requires encoding this knowledge in a form that can produce risk probabilities (e.g. a Bayesian Network - BN). The knowledge-base presented in this paper feeds an algorithm that can auto-generate the BN. Knowledge from 234 infectious diseases was compiled. From this compilation, we designed an ontology and five rule types for modelling IDR knowledge in general. The evaluation aims to assess whether the knowledge-base structure, and its application to three disease-country contexts, meets the needs of a personalized IDR prediction system. The evaluation results show that the knowledge-base conforms to the system's purpose: personalization of infectious disease risk prediction.
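A highly simplified sketch of turning rule-type knowledge into risk probabilities: a base prevalence is updated by independent risk-factor likelihood ratios, a naive-Bayes-style stand-in for the auto-generated Bayesian network. The diseases, factors, and numbers below are invented for illustration and do not come from the paper's knowledge-base.

```python
# Naive-Bayes-style risk update: start from a base prevalence and apply
# likelihood ratios for each risk factor present. All numbers invented.
def posterior(prevalence, likelihood_ratios):
    odds = prevalence / (1 - prevalence)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Hypothetical rule-derived knowledge: factor -> likelihood ratio
rules = {"rainy_season": 2.0, "rural_area": 1.5, "no_bednet": 3.0}
person_context = ["rainy_season", "no_bednet"]

risk = posterior(0.01, [rules[f] for f in person_context])
print(f"personalized risk: {risk:.3f}")   # 0.01 base -> ~0.057
```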
NASA Astrophysics Data System (ADS)
Bamberger, Yael M.; Davis, Elizabeth A.
2013-01-01
This paper focuses on students' ability to transfer modelling performances across content areas, taking into consideration their improvement of content knowledge as a result of a model-based instruction. Sixty-five sixth grade students of one science teacher in an urban public school in the Midwestern USA engaged in scientific modelling practices that were incorporated into a curriculum focused on the nature of matter. Concept-process models were embedded in the curriculum, as well as emphasis on meta-modelling knowledge and modelling practices. Pre-post test items that required drawing scientific models of smell, evaporation, and friction were analysed. The level of content understanding was coded and scored, as were the following elements of modelling performance: explanation, comparativeness, abstraction, and labelling. Paired t-tests were conducted to analyse differences in students' pre-post tests scores on content knowledge and on each element of the modelling performances. These are described in terms of the amount of transfer. Students significantly improved in their content knowledge for the smell and the evaporation models, but not for the friction model, which was expected as that topic was not taught during the instruction. However, students significantly improved in some of their modelling performances for all the three models. This improvement serves as evidence that the model-based instruction can help students acquire modelling practices that they can apply in a new content area.
Spatial modelling of disease using data- and knowledge-driven approaches.
Stevens, Kim B; Pfeiffer, Dirk U
2011-09-01
The purpose of spatial modelling in animal and public health is three-fold: describing existing spatial patterns of risk, attempting to understand the biological mechanisms that lead to disease occurrence, and predicting what will happen in the medium to long-term future (temporal prediction) or in different geographical areas (spatial prediction). Traditional methods for temporal and spatial predictions include general and generalized linear models (GLM), generalized additive models (GAM) and Bayesian estimation methods. However, such models require both disease presence and absence data, which are not always easy to obtain. Novel spatial modelling methods such as maximum entropy (MAXENT) and the genetic algorithm for rule set production (GARP) require only disease presence data and have been used extensively in the fields of ecology and conservation to model species distribution and habitat suitability. Other methods, such as multicriteria decision analysis (MCDA), use knowledge of the causal factors of disease occurrence to identify areas potentially suitable for disease. In addition to their less restrictive data requirements, some of these novel methods have been shown to outperform traditional statistical methods in predictive ability (Elith et al., 2006). This review paper provides details of some of these novel methods for mapping disease distribution, highlights their advantages and limitations, and identifies studies which have used the methods to model various aspects of disease distribution.
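For contrast with the presence-only methods mentioned, here is a minimal data-driven baseline: a logistic-regression GLM fit to synthetic presence/absence records with two environmental covariates. scikit-learn is assumed available, and the data are fabricated for illustration.

```python
# Logistic GLM on synthetic presence/absence data: the traditional
# data-driven approach that needs both presences and absences.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))                   # e.g., temperature, rainfall (standardized)
logit = 1.2 * X[:, 0] - 0.8 * X[:, 1] - 0.5   # assumed true risk surface
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # presence (True) / absence (False)

glm = LogisticRegression().fit(X, y)
print("coefficients:", glm.coef_, "intercept:", glm.intercept_)
print("predicted risk at (1, 0):", glm.predict_proba([[1.0, 0.0]])[0, 1])
```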
ERIC Educational Resources Information Center
Hutchison, Amy C.; Woodward, Lindsay
2018-01-01
Background: Presently, models of professional development aimed at supporting teachers' technology integration efforts are often short and decontextualized. With many schools across the country utilizing standards that require students to engage with digital tools, a situative model that supports building teachers' knowledge within their…
Conservation of design knowledge. [of large complex spaceborne systems
NASA Technical Reports Server (NTRS)
Sivard, Cecilia; Zweben, Monte; Cannon, David; Lakin, Fred; Leifer, Larry
1989-01-01
This paper presents an approach for acquiring knowledge about a design during the design process. The objective is to increase the efficiency of the lifecycle management of a space-borne system by providing operational models of the system's structure and behavior, as well as the design rationale, to human and automated operators. A design knowledge acquisition system is under development that compares how two alternative design versions meet the system requirements as a means for automatically capturing rationale for design changes.
ERIC Educational Resources Information Center
Laswadi; Kusumah, Yaya S.; Darwis, Sutawanir; Afgani, Jarnawi D.
2016-01-01
Conceptual understanding (CU) and procedural fluency (PF) are two important mathematical competencies required by students. CU helps students organizing their knowledge into a coherent whole, and PF helps them to find the right solution of a problem. In order to enhance CU and PF, students need learning experiences in constructing knowledge and…
Towards new approaches in phenological modelling
NASA Astrophysics Data System (ADS)
Chmielewski, Frank-M.; Götz, Klaus-P.; Rawel, Harshard M.; Homann, Thomas
2014-05-01
Modelling of phenological stages has been based on temperature sums for many decades, describing both the chilling and the forcing requirement of woody plants until the beginning of leafing or flowering. Parts of this approach go back to Reaumur (1735), who originally proposed the concept of growing degree-days. There is now a growing body of opinion calling for new methods in phenological modelling and more in-depth studies of dormancy release in woody plants. This requirement is easily understandable if we consider the wide application of phenological models, which can even affect the results of climate models. To this day, a number of parameters in phenological models still need to be optimised against observations, although some basic physiological knowledge of the chilling and forcing requirement of plants is already considered in these approaches (semi-mechanistic models). The main limitation to a fundamental improvement of these models is the lack of knowledge about the course of dormancy in woody plants, which cannot be directly observed and which is also insufficiently described in the literature. Modern metabolomic methods provide a solution to this problem and allow both the validation of currently used phenological models and the development of mechanistic approaches. In order to develop this kind of model, changes in metabolites (concentration, temporal course) must be set in relation to the variability of environmental (steering) parameters (weather, day length, etc.). This necessarily requires multi-year (3-5 yr.) high-resolution data (weekly samples between autumn and spring). The feasibility of this approach has already been tested in a 3-year pilot study on sweet cherries. The suggested methodology is not limited to the flowering of fruit trees; it can also be applied to tree species of the natural vegetation, where even greater deficits in phenological modelling exist.
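The Reaumur-style thermal-time core of such models is compact enough to state directly: a forcing sum of growing degree-days predicts the phenological stage once it exceeds a critical value. The sketch below shows this classic semi-mechanistic form; the base temperature and forcing requirement are illustrative values, not calibrated parameters.

```python
# Growing degree-day (thermal-time) model: bud burst is predicted on the
# first day the forcing sum exceeds F_CRIT. T_BASE and F_CRIT are
# illustrative, not calibrated to any species.
T_BASE, F_CRIT = 5.0, 120.0   # deg C, deg C * days

def predict_budburst(daily_mean_temps, start_day=1):
    forcing = 0.0
    for day, t in enumerate(daily_mean_temps, start=start_day):
        forcing += max(0.0, t - T_BASE)   # daily growing degree-days
        if forcing >= F_CRIT:
            return day                    # predicted day of bud burst
    return None                           # forcing requirement not met

temps = [2, 4, 7, 9, 11, 13, 12, 14, 15, 16, 15, 17, 18, 16, 17, 19, 18, 20]
print("predicted bud burst on day:", predict_budburst(temps))
```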
Xia, Yin; Liu, Dianfeng; Liu, Yaolin; He, Jianhua; Hong, Xiaofeng
2014-01-01
Alternative land use zoning scenarios provide guidance for sustainable land use controls. This study focused on an ecologically vulnerable catchment on the Loess Plateau in China, proposed a novel land use zoning model, and generated alternative zoning solutions to satisfy the various requirements of land use stakeholders and managers. This model combined multiple zoning objectives, i.e., maximum zoning suitability, maximum planning compatibility and maximum spatial compactness, with land use constraints by using goal programming technique, and employed a modified simulated annealing algorithm to search for the optimal zoning solutions. The land use zoning knowledge was incorporated into the initialisation operator and neighbourhood selection strategy of the simulated annealing algorithm to improve its efficiency. The case study indicates that the model is both effective and robust. Five optimal zoning scenarios of the study area were helpful for satisfying the requirements of land use controls in loess hilly regions, e.g., land use intensification, agricultural protection and environmental conservation. PMID:25170679
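A stripped-down sketch of the search strategy described: simulated annealing over a cell-to-zone assignment, with an objective mixing suitability and spatial compactness. The grid, weights, and suitability scores are invented for illustration; the paper's knowledge-guided initialisation and neighbourhood selection strategy are omitted.

```python
# Simulated annealing for a toy 2-zone assignment on a small grid:
# maximize suitability plus a compactness bonus for like neighbours.
import math, random

N = 10
suit = [[random.random() for _ in range(N)] for _ in range(N)]   # suitability for zone 1
zones = [[random.randrange(2) for _ in range(N)] for _ in range(N)]

def score(z):
    s = sum(suit[i][j] if z[i][j] == 1 else 1 - suit[i][j]
            for i in range(N) for j in range(N))
    compact = sum(z[i][j] == z[i][j+1] for i in range(N) for j in range(N-1)) \
            + sum(z[i][j] == z[i+1][j] for i in range(N-1) for j in range(N))
    return s + 0.1 * compact          # assumed weighting of the two objectives

temp, cur = 1.0, score(zones)
for step in range(20000):
    i, j = random.randrange(N), random.randrange(N)
    zones[i][j] ^= 1                  # propose flipping one cell's zone
    new = score(zones)
    if new < cur and random.random() >= math.exp((new - cur) / temp):
        zones[i][j] ^= 1              # reject worse move: revert the flip
    else:
        cur = new                     # accept (always if better)
    temp *= 0.9997                    # geometric cooling schedule

print(f"final objective: {cur:.1f}")
```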
FGMReview: design of a knowledge management tool on female genital mutilation.
Martínez Pérez, Guillermo; Turetsky, Risa
2015-11-01
Web-based literature search engines may not be user-friendly for some readers searching for information on female genital mutilation, a traditional practice that has no health benefits and that about 140 million girls and women worldwide have undergone. In 2012, the website FGMReview was created with the aim of offering a user-friendly, accessible, scalable, and innovative knowledge management tool specialized in female genital mutilation. The design of this website was guided by a conceptual model based on benchmarking techniques and requirements engineering, a discipline from the field of informatics, influenced by the Transcultural Nursing model. The purpose of this article is to describe this conceptual model. Nurses and other health care providers can use this conceptual model to guide their methodological approach to designing and launching other eHealth projects.
49 CFR 383.111 - Required knowledge.
Code of Federal Regulations, 2011 CFR
2011-10-01
49 CFR § 383.111 (Required Knowledge and Skills) - Required knowledge. (a) All CMV operators must have knowledge of the following 20 general areas: (1) Safe operations regulations...
49 CFR 383.111 - Required knowledge.
Code of Federal Regulations, 2010 CFR
2010-10-01
49 CFR § 383.111 (Required Knowledge and Skills) - Required knowledge. All commercial motor vehicle operators must have knowledge of the following general areas: (a) Safe operations...
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2003-08-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, that is, an interpretation of visual information in terms of such knowledge models. The human brain has been found to emulate knowledge structures in the form of network-symbolic models, which marks an important paradigm shift in our knowledge about the brain: from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping, and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easier to analyze with higher-level knowledge structures; higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotic and defense industries.
Vali, Leila; Izadi, Azar; Jahani, Yunes; Okhovati, Maryam
2016-08-01
Education and research are two major functions of universities, which require proper and systematic exploitation of available knowledge and information. Therefore, it is necessary to investigate the knowledge management status of an education system by considering the role of faculty members in the creation and dissemination of knowledge. This study was conducted to investigate the knowledge management status among faculty members of the Kerman University of Medical Sciences, based on the Nonaka and Takeuchi model, in 2015. This was a descriptive-analytical and cross-sectional study. It was conducted on 165 faculty members at the Kerman University of Medical Sciences, selected from seven faculties using a weighted, stratified random sampling method. The Nonaka and Takeuchi knowledge management questionnaire consists of 26 questions in four dimensions: socialization, externalization, internalization, and combination. Questions were scored on a five-point Likert scale. To analyze the data, the independent t-test, one-way ANOVA, Pearson correlation coefficients, and the Kruskal-Wallis test were employed. Relative to the optimal indicator (3.5) for the four dimensions of the Nonaka and Takeuchi model, combination and externalization ranked highest with an average of 3.3, while internalization and socialization had averages of 3.1 and 3.0, respectively. According to the findings of this study, average knowledge management among faculty members of the Kerman University of Medical Sciences was estimated to be 3.1, slightly below the optimal indicator. According to the results of the t-tests, there was no significant relationship between gender and the various dimensions of knowledge management (p>0.05). The Kruskal-Wallis findings showed no significant relationship between the variables of age, academic rank, and type of faculty and the dimensions of knowledge management (p>0.05). In addition, according to the results of the Pearson tests, there is no significant relation between employment history and the dimensions of knowledge management (p>0.05). Considering the function and importance of knowledge management in education and research organizations, including universities, it is recommended to pay comprehensive attention to the establishment of knowledge management and knowledge sharing in universities and to provide the background required to form research teams and communication networks inside and outside universities.
Nickerson, David; Atalag, Koray; de Bono, Bernard; Geiger, Jörg; Goble, Carole; Hollmann, Susanne; Lonien, Joachim; Müller, Wolfgang; Regierer, Babette; Stanford, Natalie J; Golebiewski, Martin; Hunter, Peter
2016-04-06
Reconstructing and understanding the Human Physiome virtually is a complex mathematical problem, and a highly demanding computational challenge. Mathematical models spanning from the molecular level through to whole populations of individuals must be integrated, then personalized. This requires interoperability with multiple disparate and geographically separated data sources, and myriad computational software tools. Extracting and producing knowledge from such sources, even when the databases and software are readily available, is a challenging task. Despite the difficulties, researchers must frequently perform these tasks so that available knowledge can be continually integrated into the common framework required to realize the Human Physiome. Software and infrastructures that support the communities that generate these, together with their underlying standards to format, describe and interlink the corresponding data and computer models, are pivotal to the Human Physiome being realized. They provide the foundations for integrating, exchanging and re-using data and models efficiently, and correctly, while also supporting the dissemination of growing knowledge in these forms. In this paper, we explore the standards, software tooling, repositories and infrastructures that support this work, and detail what makes them vital to realizing the Human Physiome.
Adaptive Modeling of the International Space Station Electrical Power System
NASA Technical Reports Server (NTRS)
Thomas, Justin Ray
2007-01-01
Software simulations provide NASA engineers the ability to experiment with spacecraft systems in a computer-imitated environment. Engineers currently develop software models that encapsulate spacecraft system behavior. These models can be inaccurate due to invalid assumptions, erroneous operation, or system evolution. Increasing accuracy requires manual calibration and domain-specific knowledge. This thesis presents a method for automatically learning system models without any assumptions regarding system behavior. Data stream mining techniques are applied to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). We also explore a knowledge fusion approach that uses traditional engineered EPS models to supplement the learned models. We observed that these engineered EPS models provide useful background knowledge to reduce predictive error spikes when confronted with making predictions in situations that are quite different from the training scenarios used when learning the model. Evaluations using ISS sensor data and existing EPS models demonstrate the success of the adaptive approach. Our experimental results show that adaptive modeling provides reductions in model error anywhere from 80% to 96% over these existing models. Final discussions include impending use of adaptive modeling technology for ISS mission operations and the need for adaptive modeling in future NASA lunar and Martian exploration.
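A minimal sketch of the knowledge-fusion idea: an incrementally learned model serves predictions, but the blend shifts toward an engineered model when the learned model's recent error grows, signalling conditions unlike the training data. The blending rule and both models are invented placeholders, not the thesis implementation.

```python
# Online learned model with engineered-model fallback: blend weight shifts
# toward the engineered model when the learned model's recent error rises.
def engineered_model(x):            # placeholder physics-based prediction
    return 2.0 * x + 1.0

w_learned, b_learned = 0.0, 0.0     # learned linear model, trained online
recent_err, alpha, lr = 1.0, 0.1, 0.01

def predict(x):
    blend = 1.0 / (1.0 + recent_err)          # assumed fusion rule
    return blend * (w_learned * x + b_learned) + (1 - blend) * engineered_model(x)

for t in range(1000):
    x = (t % 50) / 10.0
    y = 2.1 * x + 0.9                          # 'sensor' ground truth
    err = (w_learned * x + b_learned) - y
    recent_err = (1 - alpha) * recent_err + alpha * abs(err)  # EWMA of error
    w_learned -= lr * err * x                  # SGD update
    b_learned -= lr * err

print(f"learned: y = {w_learned:.2f}x + {b_learned:.2f}")
print("fused prediction at x=2:", predict(2.0))
```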
A hierarchical modeling methodology for the definition and selection of requirements
NASA Astrophysics Data System (ADS)
Dufresne, Stephane
This dissertation describes the development of a requirements analysis methodology that takes into account the concept of operations and the hierarchical decomposition of aerospace systems. At the core of the methodology, the Analytic Network Process (ANP) is used to ensure traceability between the qualitative and quantitative information present in the hierarchical model. The proposed methodology is applied to the requirements definition of a hurricane tracker Unmanned Aerial Vehicle. Three research objectives are identified in this work: (1) improve the requirements mapping process by matching the stakeholder expectations with the concept of operations, systems and available resources; (2) reduce the epistemic uncertainty surrounding the requirements and requirements mapping; and (3) improve the requirements down-selection process by taking into account the level of importance of the criteria and the available resources. Several challenges are associated with the identification and definition of requirements. The complexity of the system implies that a large number of requirements are needed to define the systems. These requirements are defined early in conceptual design, when the level of knowledge is relatively low and the level of uncertainty is large. The proposed methodology intends to increase the level of knowledge and reduce the level of uncertainty by guiding the design team through a structured process. To address these challenges, a new methodology is created to flow down the requirements from the stakeholder expectations to the system alternatives. A taxonomy of requirements is created to classify the information gathered during problem definition. Subsequently, the operational and system functions and measures of effectiveness are integrated into a hierarchical model to allow traceability of the information. Monte Carlo methods are used to evaluate the variations of the hierarchical model elements and consequently reduce the epistemic uncertainty. The proposed methodology is applied to the design of a hurricane tracker Unmanned Aerial Vehicle to demonstrate the origin and impact of requirements on the concept of operations and system alternatives. This research demonstrates that the hierarchical modeling methodology provides a traceable flow-down of the requirements from the problem definition to the system alternatives phases of conceptual design.
A Knowledge-Based and Model-Driven Requirements Engineering Approach to Conceptual Satellite Design
NASA Astrophysics Data System (ADS)
Dos Santos, Walter A.; Leonor, Bruno B. F.; Stephany, Stephan
Satellite systems are becoming even more complex, making technical issues a significant cost driver. The increasing complexity of these systems makes requirements engineering activities both more important and difficult. Additionally, today's competitive pressures and other market forces drive manufacturing companies to improve the efficiency with which they design and manufacture space products and systems. This imposes a heavy burden on systems-of-systems engineering skills and particularly on requirements engineering which is an important phase in a system's life cycle. When this is poorly performed, various problems may occur, such as failures, cost overruns and delays. One solution is to underpin the preliminary conceptual satellite design with computer-based information reuse and integration to deal with the interdisciplinary nature of this problem domain. This can be attained by taking a model-driven engineering approach (MDE), in which models are the main artifacts during system development. MDE is an emergent approach that tries to address system complexity by the intense use of models. This work outlines the use of SysML (Systems Modeling Language) and a novel knowledge-based software tool, named SatBudgets, to deal with these and other challenges confronted during the conceptual phase of a university satellite system, called ITASAT, currently being developed by INPE and some Brazilian universities.
Signal timing on a shoestring.
DOT National Transportation Integrated Search
2005-03-01
The conventional approach to signal timing optimization and field deployment requires current traffic flow data, experience with optimization models, familiarity with the signal controller hardware, and knowledge of field operations including signal ...
An adaptive inverse kinematics algorithm for robot manipulators
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Glass, K.; Seraji, H.
1990-01-01
An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.
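The abstract's MRAC scheme learns the kinematics online; for contrast, the conventional model-based baseline it avoids needs the forward kinematics explicitly. Below is a standard Jacobian-transpose iteration for a 2-link planar arm, a simple non-adaptive instance of inverse kinematics; link lengths, gain, and target are illustrative.

```python
# Jacobian-transpose inverse kinematics for a 2-link planar arm.
# Unlike the paper's adaptive scheme, this requires the kinematic model.
import numpy as np

L1, L2 = 1.0, 0.8                     # illustrative link lengths

def fk(q):                            # forward kinematics: joint angles -> tip
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

target = np.array([1.2, 0.7])
q = np.array([0.3, 0.3])
for _ in range(500):                  # q <- q + a * J^T (x* - f(q))
    e = target - fk(q)
    q = q + 0.1 * jacobian(q).T @ e
print("joint angles:", q, "tip error:", np.linalg.norm(target - fk(q)))
```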
Conceptual Models and Guidelines for Clinical Assessment of Financial Capacity
Marson, Daniel
2016-01-01
The ability to manage financial affairs is a life skill of critical importance, and neuropsychologists are increasingly asked to assess financial capacity across a variety of settings. Sound clinical assessment of financial capacity requires knowledge and appreciation of applicable clinical conceptual models and principles. However, the literature has presented relatively little conceptual guidance for clinicians concerning financial capacity and its assessment. This article seeks to address this gap. The article presents six clinical models of financial capacity : (1) the early gerontological IADL model of Lawton, (2) the clinical skills model and (3) related cognitive psychological model developed by Marson and colleagues, (4) a financial decision-making model adapting earlier decisional capacity work of Appelbaum and Grisso, (5) a person-centered model of financial decision-making developed by Lichtenberg and colleagues, and (6) a recent model of financial capacity in the real world developed through the Institute of Medicine. Accompanying presentation of the models is discussion of conceptual and practical perspectives they represent for clinician assessment. Based on the models, the article concludes by presenting a series of conceptually oriented guidelines for clinical assessment of financial capacity. In summary, sound assessment of financial capacity requires knowledge and appreciation of clinical conceptual models and principles. Awareness of such models, principles and guidelines will strengthen and advance clinical assessment of financial capacity. PMID:27506235
An, Gary C
2010-01-01
The greatest challenge facing the biomedical research community is the effective translation of basic mechanistic knowledge into clinically effective therapeutics. This challenge is most evident in attempts to understand and modulate "systems" processes/disorders, such as sepsis, cancer, and wound healing. Formulating an investigatory strategy for these issues requires the recognition that these are dynamic processes. Representation of the dynamic behavior of biological systems can aid in the investigation of complex pathophysiological processes by augmenting existing discovery procedures by integrating disparate information sources and knowledge. This approach is termed Translational Systems Biology. Focusing on the development of computational models capturing the behavior of mechanistic hypotheses provides a tool that bridges gaps in the understanding of a disease process by visualizing "thought experiments" to fill those gaps. Agent-based modeling is a computational method particularly well suited to the translation of mechanistic knowledge into a computational framework. Utilizing agent-based models as a means of dynamic hypothesis representation will be a vital means of describing, communicating, and integrating community-wide knowledge. The transparent representation of hypotheses in this dynamic fashion can form the basis of "knowledge ecologies," where selection between competing hypotheses will apply an evolutionary paradigm to the development of community knowledge.
ERIC Educational Resources Information Center
Mavor, A. S.; And Others
Part of a sustained program that has involved the design of personally tailored information systems responsive to the needs of scientists performing common research and teaching tasks, this project focuses on the procedural and content requirements for accomplishing need diagnosis and presents these requirements as specifications for an…
Hydroacoustic propagation grids for the CTBT knowledge database: BBN technical memorandum W1303
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. Angell
1998-05-01
The Hydroacoustic Coverage Assessment Model (HydroCAM) has been used to develop components of the hydroacoustic knowledge database required by operational monitoring systems, particularly the US National Data Center (NDC). The database, which consists of travel time, amplitude correction and travel time standard deviation grids, is planned to support source location, discrimination and estimation functions of the monitoring network. The grids will also be used under the current BBN subcontract to support an analysis of the performance of the International Monitoring System (IMS) and national sensor systems. This report describes the format and contents of the hydroacoustic knowledge database grids, and the procedures and model parameters used to generate these grids. Comparisons between the knowledge grids, measured data and other modeled results are presented to illustrate the strengths and weaknesses of the current approach. A recommended approach for augmenting the knowledge database with a database of expected spectral/waveform characteristics is provided in the final section of the report.
A Review of Flood Loss Models as Basis for Harmonization and Benchmarking.
Gerl, Tina; Kreibich, Heidi; Franco, Guillermo; Marechal, David; Schröter, Kai
2016-01-01
Risk-based approaches have been increasingly accepted and operationalized in flood risk management during recent decades. For instance, commercial flood risk models are used by the insurance industry to assess potential losses, establish the pricing of policies and determine reinsurance needs. Despite considerable progress in the development of loss estimation tools since the 1980s, loss estimates still reflect high uncertainties and disparities that often lead to questioning their quality. This requires an assessment of the validity and robustness of loss models, as it affects prioritization and investment decisions in flood risk management as well as regulatory requirements and business decisions in the insurance industry. Hence, more effort is needed to quantify uncertainties and undertake validations. Because detailed and reliable flood loss data are lacking, first-order validations are difficult to accomplish, so model comparisons in terms of benchmarking are essential. Benchmarking checks whether the models are informed by existing data and knowledge, and whether the assumptions made in the models are aligned with that knowledge. When this alignment is confirmed through validation or benchmarking exercises, the user gains confidence in the models. Before such benchmarking exercises are feasible, however, a cohesive survey of existing knowledge needs to be undertaken. With that aim, this work presents a review of flood loss (or flood vulnerability) relationships collected from the public domain and some professional sources. Our survey analyses 61 sources consisting of publications or software packages, of which 47 are reviewed in detail. This exercise results in probably the most complete review of flood loss models to date, containing nearly a thousand vulnerability functions. These functions are highly heterogeneous, and only about half of the loss models were accompanied by explicit validation at the time of their proposal. This paper exemplarily presents an approach for a quantitative comparison of disparate models via reduction to the joint input variables of all models. Harmonization of models for benchmarking and comparison requires profound insight into the model structures, mechanisms and underlying assumptions. Possibilities and challenges that exist in model harmonization and in the application of the inventory in a benchmarking framework are discussed.
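The paper's quantitative comparison reduces disparate models to their joint input variables; in the simplest case, that means evaluating each depth-damage curve on a common grid of water depths. The two curves below are invented stand-ins for surveyed vulnerability functions, shown only to illustrate the harmonization step.

```python
# Compare two (invented) depth-damage functions on a shared depth grid,
# the joint input variable, as a minimal harmonization/benchmark step.
import numpy as np

depths = np.linspace(0.0, 3.0, 13)              # water depth above floor [m]

def model_a(d):                                 # e.g., square-root-type curve
    return np.clip(0.5 * np.sqrt(d), 0, 1)      # relative damage in [0, 1]

def model_b(d):                                 # e.g., piecewise-linear curve
    return np.clip(0.3 * d, 0, 1)

da, db = model_a(depths), model_b(depths)
print("depth  modelA  modelB  |diff|")
for d, a, b in zip(depths, da, db):
    print(f"{d:5.2f}  {a:6.3f}  {b:6.3f}  {abs(a - b):6.3f}")
print("mean absolute disparity:", np.mean(np.abs(da - db)).round(3))
```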
A synopsis of test results and knowledge gained from the Phase-0 CSI evolutionary model
NASA Technical Reports Server (NTRS)
Belvin, W. Keith; Elliott, Kenny B.; Horta, Lucas G.
1993-01-01
The Phase-0 CSI Evolutionary Model (CEM) is a testbed for the study of space platform global line-of-sight (LOS) pointing. Now that the tests have been completed, a summary of hardware and closed-loop test experiences is necessary to ensure timely dissemination of the knowledge gained. The testbed is described and modeling experiences are presented, followed by a summary of the research performed by various investigators. Some early lessons on implementing the closed-loop controllers are described, with particular emphasis on real-time computing requirements. A summary of closed-loop studies and a synopsis of test results are presented. Plans for evolving the CEM from phase 0 to phases 1 and 2 are also described. Finally, the knowledge gained from the design and testing of the Phase-0 CEM is summarized.
Blackboard architecture for medical image interpretation
NASA Astrophysics Data System (ADS)
Davis, Darryl N.; Taylor, Christopher J.
1991-06-01
There is a growing interest in using sophisticated knowledge-based systems for biomedical image interpretation. We present a principled attempt to use artificial intelligence methodologies in interpreting lateral skull x-ray images. Such radiographs are routinely used in cephalometric analysis to provide quantitative measurements useful to clinical orthodontists. Manual and interactive methods of analysis are known to be error prone and previous attempts to automate this analysis typically fail to capture the expertise and adaptability required to cope with the variability in biological structure and image quality. An integrated model-based system has been developed which makes use of a blackboard architecture and multiple knowledge sources. A model definition interface allows quantitative models, of feature appearance and location, to be built from examples as well as more qualitative modelling constructs. Visual task definition and blackboard control modules allow task-specific knowledge sources to act on information available to the blackboard in a hypothesise and test reasoning cycle. Further knowledge-based modules include object selection, location hypothesis, intelligent segmentation, and constraint propagation systems. Alternative solutions to given tasks are permitted.
Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting
Carlberg, Kevin; Ray, Jaideep; van Bloemen Waanders, Bart
2015-02-14
Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of the system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equations at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. The goal is to generate an initial guess accurate enough that the Newton solver requires far fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.
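The forecast-as-initial-guess idea can be illustrated in a few lines. The Python sketch below seeds each backward-Euler nonlinear solve with a low-order polynomial extrapolation of recent states; this is a simplified stand-in for the Gappy POD forecast the abstract describes, and the ODE, step size, and forecast order are illustrative choices, not taken from the paper.

```python
import numpy as np
from scipy import optimize

# Toy backward-Euler integration of dy/dt = -k*y + sin(t); each step requires
# a nonlinear solve whose iteration count depends on the initial guess.
k, dt = 50.0, 0.01

def residual(y_new, y_old, t_new):
    # Backward-Euler residual whose root is the next state
    return y_new - y_old - dt * (-k * y_new + np.sin(t_new))

def forecast(history, order=2):
    # Extrapolate recent states with a low-order polynomial fit, a simple
    # stand-in for the paper's Gappy-POD temporal forecast.
    ts = np.arange(len(history))
    return np.polyval(np.polyfit(ts, history, order), len(history))

y, t, iters = 1.0, 0.0, []
history = [y]
for _ in range(200):
    t += dt
    # Initial guess: forecast once enough history exists, else previous state
    x0 = forecast(history[-4:]) if len(history) >= 4 else y
    y, info = optimize.newton(residual, x0, args=(y, t), full_output=True)
    iters.append(info.iterations)
    history.append(y)
print(np.mean(iters))  # mean nonlinear iterations per step
```

In this toy problem, seeding with the forecast typically converges in fewer iterations per step than reusing the previous state, which is the effect the paper exploits to cut linear-system solves.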
New, Leslie F.; Moretti, David J.; Hooker, Sascha K.; Costa, Daniel P.; Simmons, Samantha E.
2013-01-01
Mass stranding of several species of beaked whales (family Ziphiidae) associated with exposure to anthropogenic sounds has raised concern for the conservation of these species. However, little is known about the species’ life histories, prey or habitat requirements. Without this knowledge, it becomes difficult to assess the effects of anthropogenic sound, since there is no way to determine whether the disturbance is impacting the species’ physical or environmental requirements. Here we take a bioenergetics approach to address this gap in our knowledge, as the elusive, deep-diving nature of beaked whales has made it hard to study these effects directly. We develop a model for Ziphiidae linking feeding energetics to the species’ requirements for survival and reproduction, since these life history traits would be the most likely to be impacted by non-lethal disturbances. Our models suggest that beaked whale reproduction requires energy dense prey, and that poor resource availability would lead to an extension of the inter-calving interval. Further, given current information, it seems that some beaked whale species require relatively high quality habitat in order to meet their requirements for survival and reproduction. As a result, even a small non-lethal disturbance that results in displacement of whales from preferred habitats could potentially impact a population if a significant proportion of that population was affected. We explored the impact of varying ecological parameters and model assumptions on survival and reproduction, and find that calf and fetus survival appear more readily affected than the survival of adult females. PMID:23874737
ERIC Educational Resources Information Center
de La Torre, Jimmy; Karelitz, Tzur M.
2009-01-01
Compared to unidimensional item response models (IRMs), cognitive diagnostic models (CDMs) based on latent classes represent examinees' knowledge and item requirements using discrete structures. This study systematically examines the viability of retrofitting CDMs to IRM-based data with a linear attribute structure. The study utilizes a procedure…
Clinical professional governance for detailed clinical models.
Goossen, William; Goossen-Baremans, Anneke
2013-01-01
This chapter describes the need for Detailed Clinical Models for contemporary Electronic Health Systems, data exchange and data reuse. It starts with an explanation of the components related to Detailed Clinical Models, with a brief summary of knowledge representation, including terminologies representing clinically relevant "things" in the real world, and information models that abstract these in order to let computers process data about these things. Next, Detailed Clinical Models are defined and their purpose is described. The chapter builds on existing developments around the world and culminates in current work to create a technical specification at the level of the International Standards Organization. The core components of properly expressed Detailed Clinical Models are illustrated, including clinical knowledge and context, data element specification, code bindings to terminologies and meta-information about authors and versioning, among others. Detailed Clinical Models to date are heavily based on user requirements and specify the conceptual and logical levels of modelling. This level of specification is not precise enough for specific implementations, which require an additional step. However, this allows Detailed Clinical Models to serve as specifications for many different kinds of implementations. Examples of Detailed Clinical Models are presented both in text and in Unified Modelling Language. Detailed Clinical Models can be positioned in health information architectures, where they serve at the most detailed granular level. The chapter ends with examples of projects that create and deploy Detailed Clinical Models. All have in common that they can often reuse materials from earlier projects, and that strict governance of these models is essential to use them safely in health care information and communication technology. Clinical validation is one point of such governance, and model testing another. The Plan Do Check Act cycle can be applied for governance of Detailed Clinical Models. Finally, collections of clinical models do require a repository in which they can be stored, searched, and maintained. Governance of Detailed Clinical Models is required at local, national, and international levels.
From scenarios to domain models: processes and representations
NASA Astrophysics Data System (ADS)
Haddock, Gail; Harbison, Karan
1994-03-01
The domain specific software architectures (DSSA) community has defined a philosophy for the development of complex systems. This philosophy improves productivity and efficiency by increasing the user's role in the definition of requirements, increasing the systems engineer's role in the reuse of components, and decreasing the software engineer's role to the development of new components and component modifications only. The scenario-based engineering process (SEP), the first instantiation of the DSSA philosophy, has been adopted by the next generation controller project. It is also the chosen methodology of the trauma care information management system project, and the surrogate semi-autonomous vehicle project. SEP uses scenarios from the user to create domain models and define the system's requirements. Domain knowledge is obtained from a variety of sources including experts, documents, and videos. This knowledge is analyzed using three techniques: scenario analysis, task analysis, and object-oriented analysis. Scenario analysis results in formal representations of selected scenarios. Task analysis of the scenario representations results in descriptions of tasks necessary for object-oriented analysis and also subtasks necessary for functional system analysis. Object-oriented analysis of task descriptions produces domain models and system requirements. This paper examines the representations that support the DSSA philosophy, including reference requirements, reference architectures, and domain models. The processes used to create and use the representations are explained through use of the scenario-based engineering process. Selected examples are taken from the next generation controller project.
ISPE: A knowledge-based system for fluidization studies. 1990 Annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reddy, S.
1991-01-01
Chemical engineers use mathematical simulators to design, model, optimize and refine various engineering plants/processes. This procedure requires the following steps: (1) preparation of an input data file according to the format required by the target simulator; (2) executing the simulation; and (3) analyzing the results of the simulation to determine if all "specified goals" are satisfied. If the goals are not met, the input data file must be modified and the simulation repeated. This multistep process is continued until satisfactory results are obtained. This research was undertaken to develop a knowledge-based system, IPSE (Intelligent Process Simulation Environment), that can enhance the productivity of chemical engineers/modelers by serving as an intelligent assistant to perform a variety of tasks related to process simulation. ASPEN, a simulator widely used by the US Department of Energy (DOE) at Morgantown Energy Technology Center (METC), was selected as the target process simulator in the project. IPSE, written in the C language, was developed using a number of knowledge-based programming paradigms: object-oriented knowledge representation that uses inheritance and methods, rule-based inferencing (including processing and propagation of probabilistic information) and data-driven programming using demons. It was implemented using the knowledge-based environment LASER. The relationship of IPSE with the user, ASPEN, LASER and the C language is shown in Figure 1.
Bias-dependent hybrid PKI empirical-neural model of microwave FETs
NASA Astrophysics Data System (ADS)
Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera
2011-10-01
Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point. Bias-dependent analysis requires repeated extractions of the model parameters for each bias point. In order to make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model including noise developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid in the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron mobility transistor device.
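The prior-knowledge-input idea can be sketched compactly: the network receives the bias point and frequency together with the fixed-bias empirical model's estimate, and learns the bias-dependent correction. The functions, parameter ranges, and data below are invented for illustration, not taken from the article.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical task: predict |S21| over bias (Vgs, Vds) and frequency.
def empirical_s21(freq):
    # Equivalent-circuit estimate, valid at one reference bias point
    return 10.0 / (1.0 + (freq / 20e9) ** 2)

def measured_s21(vgs, vds, freq):
    # Stand-in for bias-dependent measurements
    gain = 8.0 + 4.0 * vgs + 0.5 * vds
    return gain / (1.0 + (freq / 18e9) ** 2)

vgs = rng.uniform(-1.0, 0.0, 500)
vds = rng.uniform(1.0, 5.0, 500)
freq = rng.uniform(1e9, 40e9, 500)
prior = empirical_s21(freq)                      # the PKI feature
X = np.column_stack([vgs, vds, freq / 1e9, prior])
y = measured_s21(vgs, vds, freq) + rng.normal(0, 0.05, 500)

# The ANN learns the bias-dependent deviation from the prior-knowledge input
net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
net.fit(X, y)
```

Feeding the empirical model's output as an extra input, rather than learning the S-parameters from scratch, is what lets a modest network generalize across the bias plane.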
Faculty role modeling of professional writing: one baccalaureate nursing program's experience.
Newton, Sarah E
2008-01-01
According to The Essentials of Baccalaureate Education for Professional Nursing Practice (American Association of Colleges of Nursing, 1998), professional writing is an important outcome of baccalaureate nursing education. Most baccalaureate nursing programs in the United States expect formally written student papers to adhere to the style requirements outlined in the Publication Manual of the American Psychological Association (APA, 2001). It is essential for the baccalaureate nursing faculty members who evaluate student papers to be role models for the desired writing behaviors to facilitate student attainment of professional writing outcomes. However, to what extent nursing faculty members' writing behaviors and knowledge of the APA style requirements impact student writing outcomes is not known because the issue has not been addressed in the literature. The purpose of this article is to describe one Midwestern baccalaureate nursing program's faculty development efforts to assess faculty familiarity with the APA style requirements and how such knowledge may impact baccalaureate nursing students' writing outcomes.
Computer-Mediated Assessment of Higher-Order Thinking Development
ERIC Educational Resources Information Center
Tilchin, Oleg; Raiyn, Jamal
2015-01-01
Solving complicated problems in a contemporary knowledge-based society requires higher-order thinking (HOT). The most productive way to encourage development of HOT in students is through use of the Problem-based Learning (PBL) model. This model organizes learning by solving corresponding problems relative to study courses. Students are directed…
The development of physiologically based toxicokinetic (PBTK) models for hydrophobic chemicals in fish requires: 1) an understanding of chemical efflux at fish gills; 2) knowledge of the factors that limit chemical exchange between blood and tissues; and, 3) a mechanistic descrip...
ERIC Educational Resources Information Center
Chatti, Mohamed Amine; Jarke, Matthias; Specht, Marcus
2010-01-01
Recognizing the failures of traditional Technology Enhanced Learning (TEL) initiatives to achieve performance improvement, we need to rethink how we design new TEL models that can respond to the learning requirements of the 21st century and mirror the characteristics of knowledge and learning which are fundamentally personal, social, distributed,…
Modeling the South American range of the cerulean warbler
S. Barker; S. Benítez; J. Baldy; D. Cisneros Heredia; G. Colorado Zuluaga; F. Cuesta; I. Davidson; D. Díaz; A. Ganzenmueller; S. García; M. K. Girvan; E. Guevara; P. Hamel; A. B. Hennessey; O. L. Hernández; S. Herzog; D. Mehlman; M. I. Moreno; E. Ozdenerol; P. Ramoni-Perazzi; M. Romero; D. Romo; P. Salaman; T. Santander; C. Tovar; M. Welton; T. Will; C. Pedraza; G. Galindo
2006-01-01
Successful conservation of rare species requires detailed knowledge of the species' distribution. Modeling spatial distribution is an efficient means of locating potential habitats. Cerulean Warbler (Dendroica cerulea, Parulidae) was listed as a Vulnerable Species by the International Union for the Conservation of Nature and Natural Resources in...
OʼHara, Susan
2014-01-01
Nurses have increasingly been regarded as critical members of the planning team as architects recognize their knowledge and value. But the nurses' role as knowledge experts can be expanded to leading efforts to integrate the clinical, operational, and architectural expertise through simulation modeling. Simulation modeling allows for the optimal merge of multifactorial data to understand the current state of the intensive care unit and predict future states. Nurses can champion the simulation modeling process and reap the benefits of a cost-effective way to test new designs, processes, staffing models, and future programming trends prior to implementation. Simulation modeling is an evidence-based planning approach, a standard, for integrating the sciences with real client data, to offer solutions for improving patient care.
Methods Beyond Methods: A Model for Africana Graduate Methods Training.
Best, Latrica E; Byrd, W Carson
2014-06-01
A holistic graduate education can impart not just tools and knowledge, but critical positioning to fulfill many of the original missions of Africana Studies programs set forth in the 1960s and 1970s. As an interdisciplinary field with many approaches to examining the African Diaspora, the methodological training of graduate students can vary across graduate programs. Although qualitative methods courses are often required of graduate students in Africana Studies programs, and these programs offer such courses, rarely if ever are graduate students in these programs required to take quantitative methods courses, let alone have these courses offered in-house. These courses can offer Africana Studies graduate students new tools for their own research, but more importantly, improve their knowledge of quantitative research on diasporic communities. These tools and knowledge can assist with identifying flawed arguments about African-descended communities and their members. This article explores the importance of requiring and offering critical quantitative methods courses in graduate programs in Africana Studies, and discusses the methods requirements of one graduate program in the field as an example of more rigorous training that other programs could offer graduate students.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arion is a library and tool set that enables researchers to holistically define test system models. Defining a complex system for testing an algorithm or control requires expertise across multiple domains. Simulating a complex system requires the integration of multiple simulators and test hardware, each with their own specification languages and concepts, which demands an extensive set of knowledge and capabilities. Arion was developed to alleviate this challenge. Arion is a set of Java libraries that abstracts the concepts from supported simulators into a cohesive model language, allowing users to build models to their needed level of fidelity and expertise. Arion is also a software tool that translates the user's model back into the specification languages of the simulators and test hardware needed for execution.
Prietula, M J; Feltovich, P J; Marchak, F
2000-01-01
We propose that considering four categories of task factors can facilitate knowledge elicitation efforts in the analysis of complex cognitive tasks: materials, strategies, knowledge characteristics, and goals. A study was conducted to examine the effects of altering aspects of two of these task categories on problem-solving behavior across skill levels: materials and goals. Two versions of an applied engineering problem were presented to expert, intermediate, and novice participants. Participants were to minimize the cost of running a steam generation facility by adjusting steam generation levels and flows. One version was cast in the form of a dynamic, computer-based simulation that provided immediate feedback on flows, costs, and constraint violations, thus incorporating key variable dynamics of the problem context. The other version was cast as a static computer-based model, with no dynamic components, cost feedback, or constraint checking. Experts performed better than the other groups across material conditions, and, when required, the presentation of the goal assisted the experts more than the other groups. The static group generated richer protocols than the dynamic group, but the dynamic group solved the problem in significantly less time. Little effect of feedback was found for intermediates, and none for novices. We conclude that demonstrating differences in performance in this task requires different materials than explicating underlying knowledge that leads to performance. We also conclude that substantial knowledge is required to exploit the information yielded by the dynamic form of the task or the explicit solution goal. This simple model can help to identify the contextual factors that influence elicitation and specification of knowledge, which is essential in the engineering of joint cognitive systems.
WE-A-BRD-01: Innovation in Radiation Therapy Planning I: Knowledge Guided Treatment Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Q; Olsen, L
2014-06-15
Intensity modulated radiation therapy (IMRT) and Volumetric Modulated Arc Therapy (VMAT) offer the capability of sparing normal tissues and organs. However, the exact amount of sparing is often unknown until the plan is complete. This lack of prior guidance has led to the iterative, trial-and-error approach in current planning practice. Even with this effort, the search for patient-specific optimal organ sparing is still strongly influenced by the planner's experience. While experience generally helps in maximizing the dosimetric advantages of IMRT/VMAT, there have been several reports showing an unnecessarily high degree of plan quality variability at individual institutions and amongst different institutions, even with a large amount of experience and the best available tools. Further, when physician and physicist evaluate a plan, the dosimetric quality of the plan is often compared with a standard protocol that ignores individual patient anatomy and tumor characteristic variations. In recent years, developments of knowledge models for clinical IMRT/VMAT planning guidance have shown promising clinical potential. These knowledge models distill past expert clinical experience into mathematical models that predict dose sparing references at the patient-specific level. For physicians and planners, these references provide objective values that reflect the best achievable dosimetric constraints. For quality assurance, applying patient-specific dosimetry requirements will enable more quantitative and objective assessment of protocol compliance for complex IMRT planning. Learning Objectives: Modeling and representation of knowledge for knowledge-guided treatment planning. Demonstrations of knowledge-guided treatment planning with a few clinical cases across anatomical sites. Validation and evaluation of knowledge models for cost- and quality-effective standardization of plan optimization.
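The core of such a knowledge model can be sketched as a regression from anatomical features of prior plans to the achieved organ-at-risk (OAR) dose, whose prediction then serves as the patient-specific sparing reference. The features, coefficients, and data below are hypothetical placeholders; practical knowledge models (for example, those built on overlap-volume histograms) are considerably more elaborate.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical training set: anatomy features and achieved OAR mean dose
# from previously approved plans.
n_plans = 120
overlap = rng.uniform(0, 0.5, n_plans)    # fraction of OAR overlapping the target
distance = rng.uniform(0, 4, n_plans)     # cm from OAR to target surface
achieved_dose = 20 + 60 * overlap - 4 * distance + rng.normal(0, 2, n_plans)

features = np.column_stack([overlap, distance])
knowledge_model = LinearRegression().fit(features, achieved_dose)

# For a new patient, the prediction is the achievable-sparing reference used
# to guide plan optimization and to flag outlier plans in QA.
new_patient = np.array([[0.15, 2.5]])
print(knowledge_model.predict(new_patient))  # predicted mean dose (Gy)
```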
Chiogna, Gabriele; Marcolini, Giorgia; Liu, Wanying; Pérez Ciria, Teresa; Tuo, Ye
2018-08-15
Water management in the alpine region has an important impact on streamflow. In particular, hydropower production is known to cause hydropeaking, i.e., sudden fluctuations in river stage caused by the release or storage of water in artificial reservoirs. Modeling hydropeaking with hydrological models, such as the Soil Water Assessment Tool (SWAT), requires knowledge of reservoir management rules. These data are often not available since they are sensitive information belonging to hydropower production companies. In this short communication, we propose to couple the results of a calibrated hydrological model with a machine learning method to reproduce hydropeaking without requiring knowledge of the actual reservoir management operation. We trained a support vector machine (SVM) with SWAT model outputs, the day of the week and the energy price. We tested the model for the Upper Adige river basin in North-East Italy. A wavelet analysis showed that energy price has a significant influence on river discharge, and a wavelet coherence analysis demonstrated the improved performance of the SVM model in comparison to the SWAT model alone. The SVM model was also able to capture the fluctuations in streamflow caused by hydropeaking when both energy price and river discharge displayed a complex temporal dynamic. Copyright © 2018 Elsevier B.V. All rights reserved.
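A rough sketch of the proposed coupling follows, with synthetic data standing in for the SWAT output, energy prices, and observed discharge; all numbers and the kernel settings below are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in data: the hydrological model captures the natural
# (unregulated) discharge; hydropeaking adds weekday/price-driven swings.
days = np.arange(730)
natural_q = 30 + 10 * np.sin(2 * np.pi * days / 365)        # SWAT-like baseline
weekday = days % 7
price = 50 + 20 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 3, days.size)
peaking = 8 * (weekday < 5) * (price - price.mean()) / price.std()
observed_q = natural_q + peaking + rng.normal(0, 1, days.size)

# Features mirror the paper's inputs: model output, day of week, energy price
X = np.column_stack([natural_q, weekday, price])
svm = SVR(kernel="rbf", C=10.0, epsilon=0.5)
svm.fit(X[:500], observed_q[:500])       # train on the first period

# Reproduce hydropeaking on unseen days without any reservoir-operation data
predicted_q = svm.predict(X[500:])
```

The appeal of the design is that the proprietary management rules never enter the model explicitly; their footprint is learned from proxies (weekday, price) that are publicly available.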
Automatic programming of arc welding robots
NASA Astrophysics Data System (ADS)
Padmanabhan, Srikanth
Automatic programming of arc welding robots requires the geometric description of a part from a solid modeling system, expert weld process knowledge, and the kinematic arrangement of the robot and positioner. Current commercial solid modelers are incapable of storing explicitly the product and process definitions of weld features. This work presents a paradigm to develop a computer-aided engineering environment that supports complete weld feature information in a solid model and to create an automatic programming system for robotic arc welding. In the first part, welding features are treated as properties or attributes of an object, features which are portions of the object surface, i.e., the topological boundary. The structure for representing the features and attributes is a graph called the Welding Attribute Graph (WAGRAPH). The method associates appropriate weld features to geometric primitives, adds welding attributes, and checks the validity of welding specifications. A systematic structure is provided to incorporate welding attributes and coordinate system information in a CSG tree. The specific implementation of this structure using a hybrid solid modeler (IDEAS) and an object-oriented programming paradigm is described. The second part provides a comprehensive methodology to acquire and represent the weld process knowledge required for the proper selection of welding schedules. A methodology of knowledge acquisition using statistical methods is proposed. It is shown that these procedures did little to capture the private knowledge of experts (heuristics), but helped in determining general dependencies and trends. A need was established for building the knowledge-based system from handbook knowledge and for allowing experts to extend it further. A methodology to check the consistency and validity of such knowledge additions is proposed. A mapping shell designed to transform design features to application-specific weld process schedules is described. In the final section, a new approach using fixed-path modified continuation methods is proposed to continuously plan the trajectory of weld seams in an integrated welding robot and positioner environment. The joint displacement, velocity, and acceleration histories along the path, as functions of the path parameter for the best possible welding condition, are provided for the robot and the positioner to track the various paths normally encountered in arc welding.
NSF's Perspective on Space Weather Research for Building Forecasting Capabilities
NASA Astrophysics Data System (ADS)
Bisi, M. M.; Pulkkinen, A. A.; Bisi, M. M.; Pulkkinen, A. A.; Webb, D. F.; Oughton, E. J.; Azeem, S. I.
2017-12-01
Space weather research at the National Science Foundation (NSF) is focused on scientific discovery and on deepening knowledge of the Sun-Geospace system. Maturation of this knowledge base is a prerequisite for the development of improved space weather forecast models and for the accurate assessment of potential mitigation strategies. Progress in space weather forecasting requires advancing in-depth understanding of the underlying physical processes, developing better instrumentation and measurement techniques, and capturing the advances in understanding in large-scale physics-based models that span the entire chain of events from the Sun to the Earth. This presentation will provide an overview of current and planned programs pertaining to space weather research at NSF and discuss the recommendations of the Geospace Section portfolio review panel within the context of space weather forecasting capabilities.
Web-video-mining-supported workflow modeling for laparoscopic surgeries.
Liu, Rui; Zhang, Xiaoli; Zhang, Hao
2016-11-01
Because quality assurance is a strong concern in advanced surgeries, intelligent surgical systems are expected to have knowledge such as the surgical workflow model (SWM) to support intuitive cooperation with surgeons. Generating a robust and reliable SWM requires a large amount of training data. However, training data collected by physically recording surgical operations is often limited, and such data collection is time-consuming and labor-intensive, severely limiting the knowledge scalability of surgical systems. The objective of this research is to solve the knowledge scalability problem in surgical workflow modeling in a low-cost and labor-efficient way. A novel web-video-mining-supported surgical workflow modeling (webSWM) method is developed. A video quality analysis method based on topic analysis and sentiment analysis techniques is developed to select high-quality videos from abundant and noisy web videos. A statistical learning method is then used to build the workflow model from the selected videos. To test the effectiveness of the webSWM method, 250 web videos were mined to generate a surgical workflow for robotic cholecystectomy. The generated workflow was evaluated with 4 web-retrieved videos and 4 operating-room-recorded videos, respectively. The evaluation results (video selection consistency n-index ≥0.60; surgical workflow matching degree ≥0.84) proved the effectiveness of the webSWM method in generating robust and reliable SWM knowledge by mining web videos. With the webSWM method, abundant web videos were selected and a reliable SWM was modeled in a short time with low labor cost. Satisfactory performance in mining web videos and learning surgery-related knowledge shows that the webSWM method is promising for scaling knowledge for intelligent surgical systems. Copyright © 2016 Elsevier B.V. All rights reserved.
Discrimination of dynamical system models for biological and chemical processes.
Lorenz, Sönke; Diederichs, Elmar; Telgmann, Regina; Schütte, Christof
2007-06-01
In technical chemistry, systems biology and biotechnology, the construction of predictive models has become an essential step in process design and product optimization. Accurate modelling of the reactions requires detailed knowledge about the processes involved. However, when concerned with the development of new products and production techniques, for example, this knowledge often is not available due to the lack of experimental data. Thus, when one has to work with a selection of proposed models, a main task of early development is to discriminate among these models. In this article, a new statistical approach to model discrimination is described that ranks models with respect to the probability with which they reproduce the given data. The article introduces the new approach, discusses its statistical background, presents numerical techniques for its implementation and illustrates the application to examples from biokinetics.
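The ranking idea can be illustrated with information-criterion weights, one common way to turn goodness-of-fit into approximate model probabilities; the paper's own statistical machinery may differ. The toy example below fits two candidate kinetic models to the same synthetic data and computes Akaike weights.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two candidate kinetic models for the same substrate-rate data
def michaelis_menten(s, vmax, km):
    return vmax * s / (km + s)

def first_order(s, k):
    return k * s

rng = np.random.default_rng(1)
s = np.linspace(0.1, 10, 40)
y = michaelis_menten(s, 2.0, 1.5) + rng.normal(0, 0.05, s.size)

def aic(model, p0):
    # AIC from the residual sum of squares of a least-squares fit
    popt, _ = curve_fit(model, s, y, p0=p0)
    rss = np.sum((y - model(s, *popt)) ** 2)
    return s.size * np.log(rss / s.size) + 2 * len(popt)

aics = np.array([aic(michaelis_menten, [1, 1]), aic(first_order, [1])])
weights = np.exp(-0.5 * (aics - aics.min()))
weights /= weights.sum()   # ~ probability each model best reproduces the data
print(dict(zip(["michaelis_menten", "first_order"], weights.round(3))))
```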
To ontologise or not to ontologise: An information model for a geospatial knowledge infrastructure
NASA Astrophysics Data System (ADS)
Stock, Kristin; Stojanovic, Tim; Reitsma, Femke; Ou, Yang; Bishr, Mohamed; Ortmann, Jens; Robertson, Anne
2012-08-01
A geospatial knowledge infrastructure consists of a set of interoperable components, including software, information, hardware, procedures and standards, that work together to support advanced discovery and creation of geoscientific resources, including publications, data sets and web services. The focus of the work presented is the development of such an infrastructure for resource discovery. Advanced resource discovery is intended to support scientists in finding resources that meet their needs, and focuses on representing the semantic details of the scientific resources, including the detailed aspects of the science that led to the resource being created. This paper describes an information model for a geospatial knowledge infrastructure that uses ontologies to represent these semantic details, including knowledge about domain concepts, the scientific elements of the resource (analysis methods, theories and scientific processes) and web services. This semantic information can be used to enable more intelligent search over scientific resources, and to support new ways to infer and visualise scientific knowledge. The work describes the requirements for semantic support of a knowledge infrastructure, and analyses the different options for information storage based on the twin goals of semantic richness and syntactic interoperability to allow communication between different infrastructures. Such interoperability is achieved by the use of open standards, and the architecture of the knowledge infrastructure adopts such standards, particularly from the geospatial community. The paper then describes an information model that uses a range of different types of ontologies, explaining those ontologies and their content. The information model was successfully implemented in a working geospatial knowledge infrastructure, but the evaluation identified some issues in creating the ontologies.
When craft and science collide: Improving therapeutic practices through evidence-based innovations.
Justice, Laura M
2010-04-01
Evidence-based practice (EBP) is a model of clinical decision-making that is increasingly being advocated for use in the field of speech-language pathology. With the increased emphasis on scientific evidence as a form of knowledge important to EBP, clinicians may wonder whether their craft-based knowledge (i.e., knowledge derived from theory and practice) remains a legitimate form of knowledge for use in clinical decisions. This article describes forms of knowledge that may be used to address clinical questions, including both craft and science. Additionally, the steps used when engaging in EBP are described so that clinicians understand when and how craft comes into play. The major premise addressed within this article is that craft is a legitimate form of knowledge and that engagement in EBP requires one to employ craft-based knowledge.
James T. Peterson; Jason Dunham
2003-01-01
Effective conservation efforts for at-risk species require knowledge of the locations of existing populations. Species presence can be estimated directly by conducting field-sampling surveys or alternatively by developing predictive models. Direct surveys can be expensive and inefficient, particularly for rare and difficult- to-sample species, and models of species...
A Linked Model for Simulating Stand Development and Growth Processes of Loblolly Pine
V. Clark Baldwin; Phillip M. Dougherty; Harold E. Burkhart
1998-01-01
Linking models of different scales (e.g., process, tree-stand-ecosystem) is essential for furthering our understanding of stand, climatic, and edaphic effects on tree growth and forest productivity. Moreover, linking existing models that differ in scale and levels of resolution quickly identifies knowledge gaps in information required to scale from one level to another...
Treating Depression in Staff-Model Versus Network-Model Managed Care Organizations
Meredith, Lisa S; Rubenstein, Lisa V; Rost, Kathryn; Ford, Daniel E; Gordon, Nancy; Nutting, Paul; Camp, Patti; Wells, Kenneth B
1999-01-01
OBJECTIVE To compare primary care providers’ depression-related knowledge, attitudes, and practices and to understand how these reports vary for providers in staff or group-model managed care organizations (MCOs) compared with network-model MCOs including independent practice associations and preferred provider organizations. DESIGN Survey of primary care providers’ depression-related practices in 1996. SETTING AND PARTICIPANTS We surveyed 410 providers, from 80 outpatient clinics, in 11 MCOs participating in four studies designed to improve the quality of depression care in primary care. MEASUREMENTS AND MAIN RESULTS We measured knowledge based on depression guidelines, attitudes (beliefs about burden, skill, and barriers) related to depression, and reported behavior. Providers in both types of MCO are equally knowledgeable about treating depression (better knowledge of pharmacologic than psychotherapeutic treatments) and perceive equivalent skills in treating depression. However, compared with network-model providers, staff/group-model providers have stronger beliefs that treating depression is burdensome to their practice. While more staff/group-model providers reported time limitations as a barrier to optimal depression treatment, more network-model providers reported limited access to mental health specialty referral as a barrier. Accordingly, these staff/group-model providers are more likely to treat patients with major depression through referral (51% vs 38%) or to assess but not treat (17% vs 7%), and network-model providers are more likely to prescribe antidepressants (57% vs 6%) as first-line treatment. CONCLUSIONS Whereas the providers from staff/group-model MCOs had greater access to and relied more on referral, the providers from network-model organizations were more likely to treat depression themselves. Given varying attitudes and behaviors, improving primary care for the treatment of depression will require unique strategies beyond enhancing technical knowledge for the two types of MCOs. PMID:9893090
1991-02-01
[Garbled front matter from report ERL-0520-RR; the recoverable content indicates sections on propositional logic (the simplest form of production rules), hybrid rule/fact schemas (also known as predicate calculus), and the limitations of rule-based knowledge representation, noting requirements that may lead to poor system performance.]
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.
1992-01-01
Fuzzy logic and neural networks provide new methods for designing control systems. Fuzzy logic controllers do not require a complete analytical model of a dynamic system and can provide knowledge-based heuristic controllers for ill-defined and complex systems. Neural networks can be used for learning control. In this chapter, we discuss hybrid methods using fuzzy logic and neural networks which can start with an approximate control knowledge base and refine it through reinforcement learning.
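A toy illustration of the chapter's theme, starting from an approximate fuzzy knowledge base and refining it from a reward signal, is sketched below. The plant, membership functions, and hill-climbing update are minimal placeholders chosen for brevity, not the chapter's actual reinforcement-learning method.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzy_control(error, params):
    # Two Gaussian membership functions ("negative", "positive") with rule
    # consequents u_neg, u_pos; output is the weighted average of the rules.
    c_neg, c_pos, u_neg, u_pos = params
    w_neg = np.exp(-((error - c_neg) ** 2))
    w_pos = np.exp(-((error - c_pos) ** 2))
    return (w_neg * u_neg + w_pos * u_pos) / (w_neg + w_pos)

def episode_cost(params, setpoint=1.0, steps=50):
    # First-order plant x' = -x + u, integrated with forward Euler;
    # cost is the accumulated squared tracking error (negative reward).
    x, cost = 0.0, 0.0
    for _ in range(steps):
        u = fuzzy_control(setpoint - x, params)
        x += 0.1 * (-x + u)
        cost += (setpoint - x) ** 2
    return cost

# Start from an approximate knowledge base and refine it by hill climbing
params = np.array([-1.0, 1.0, -0.5, 0.5])
best = episode_cost(params)
for _ in range(500):
    trial = params + rng.normal(0, 0.05, 4)
    cost = episode_cost(trial)
    if cost < best:          # keep perturbations that improve the reward
        params, best = trial, cost
print(params, best)
```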
Turon, Clàudia; Comas, Joaquim; Torrens, Antonina; Molle, Pascal; Poch, Manel
2008-01-01
With the aim of improving effluent quality of waste stabilization ponds, different designs of vertical flow constructed wetlands and intermittent sand filters were tested on an experimental full-scale plant within the framework of a European project. The information extracted from this study was completed and updated with heuristic and bibliographic knowledge. The data and knowledge acquired were difficult to integrate into mathematical models because they involve qualitative information and expert reasoning. Therefore, it was decided to develop an environmental decision support system (EDSS-Filter-Design) as a tool to integrate mathematical models and knowledge-based techniques. This paper describes the development of this support tool, emphasizing the collection of data and knowledge and representation of this information by means of mathematical equations and a rule-based system. The developed support tool provides the main design characteristics of filters: (i) required surface, (ii) media type, and (iii) media depth. These design recommendations are based on wastewater characteristics, applied load, and required treatment level data provided by the user. The results of the EDSS-Filter-Design provide appropriate and useful information and guidelines on how to design filters, according to the expert criteria. The encapsulation of the information into a decision support system reduces the design period and provides a feasible, reasoned, and positively evaluated proposal.
Koehler Leman, Julia; Bonneau, Richard
2018-04-03
Membrane proteins composed of soluble and membrane domains are often studied one domain at a time. However, to understand the biological function of entire protein systems and their interactions with each other and drugs, knowledge of full-length structures or models is required. Although few computational methods exist that could potentially be used to model full-length constructs of membrane proteins, none of these methods are perfectly suited for the problem at hand. Existing methods require an interface or knowledge of the relative orientations of the domains or are not designed for domain assembly, and none of them are developed for membrane proteins. Here we describe the first domain assembly protocol specifically designed for membrane proteins that assembles intra- and extracellular soluble domains and the transmembrane domain into models of the full-length membrane protein. Our protocol does not require an interface between the domains and samples possible domain orientations based on backbone dihedrals in the flexible linker regions, created via fragment insertion, while keeping the transmembrane domain fixed in the membrane. For five examples tested, our method mp_domain_assembly, implemented in RosettaMP, samples domain orientations close to the known structure and is best used in conjunction with experimental data to reduce the conformational search space.
Proactive learning for artificial cognitive systems
NASA Astrophysics Data System (ADS)
Lee, Soo-Young
2010-04-01
The Artificial Cognitive Systems (ACS) will be developed for human-like functions such as vision, audition, inference, and behavior. In particular, computational models and artificial HW/SW systems will be devised for Proactive Learning (PL) and Self-Identity (SI). The PL model provides bilateral interactions between the robot and an unknown environment (people, other robots, cyberspace). Situation awareness in an unknown environment requires receiving audiovisual signals and accumulating knowledge. If the knowledge is not sufficient, the PL should improve it by itself through the internet and other sources. For human-oriented decision making, the robot is also required to have self-identity and emotion. Finally, the developed models and system will be mounted on a robot for a human-robot co-existing society. The developed ACS will be tested against a new Turing Test for situation awareness. The test problems will consist of several video clips, and the performance of the ACSs will be compared against that of humans at several levels of cognitive ability.
Modeling patient safety incidents knowledge with the Categorial Structure method.
Souvignet, Julien; Bousquet, Cédric; Lewalle, Pierre; Trombert-Paviot, Béatrice; Rodrigues, Jean Marie
2011-01-01
Following the WHO initiative named World Alliance for Patient Safety (PS) launched in 2004, a conceptual framework developed by PS national reporting experts has summarized the knowledge available. As a second step, the Department of Public Health team at the University of Saint Etienne elaborated a Categorial Structure (a semi-formal structure not related to an upper-level ontology) identifying the elements of the semantic structure underpinning the broad concepts contained in the framework for patient safety. This knowledge engineering method has been developed to enable modeling patient safety information as a prerequisite for subsequent full ontology development. The present article describes the semantic dissection of the concepts, the elicitation of the ontology requirements and the domain constraints of the conceptual framework. This ontology includes 134 concepts and 25 distinct relations and will serve as the basis for an Information Model for Patient Safety.
NASA Technical Reports Server (NTRS)
Killough, Brian; Stover, Shelley
2008-01-01
The Committee on Earth Observation Satellites (CEOS) provides a brief to the Goddard Institute for Space Studies (GISS) regarding the CEOS Systems Engineering Office (SEO) and current work on climate requirements and analysis. A "system framework" is provided for the Global Earth Observation System of Systems (GEOSS). SEO climate-related tasks are outlined including the assessment of essential climate variable (ECV) parameters, use of the "systems framework" to determine relevant informational products and science models and the performance of assessments and gap analyses of measurements and missions for each ECV. Climate requirements, including instruments and missions, measurements, knowledge and models, and decision makers, are also outlined. These requirements would establish traceability from instruments to products and services allowing for benefit evaluation of instruments and measurements. Additionally, traceable climate requirements would provide a better understanding of global climate models.
Rütten, A; Wolff, A; Streber, A
2016-03-01
This article discusses 2 current issues in the field of public health research: (i) transfer of scientific knowledge into practice and (ii) sustainable implementation of good practice projects. It also supports integration of scientific and practice-based evidence production. Furthermore, it supports utilisation of interactive models that transcend deductive approaches to the process of knowledge transfer. Existing theoretical approaches, pilot studies and thoughtful conceptual considerations are incorporated into a framework showing the interplay of science, politics and prevention practice, which fosters a more sustainable implementation of health promotion programmes. The framework depicts 4 key processes of interaction between science and prevention practice: interactive knowledge to action, capacity building, programme adaptation and adaptation of the implementation context. Ensuring sustainability of health promotion programmes requires a concentrated process of integrating scientific and practice-based evidence production in the context of implementation. Central to the integration process is the approach of interactive knowledge to action, which especially benefits from capacity building processes that facilitate participation and systematic interaction between relevant stakeholders. Intense cooperation also induces a dynamic interaction between multiple actors and components such as health promotion programmes, target groups, relevant organisations and social, cultural and political contexts. The reciprocal adaptation of programmes and key components of the implementation context can foster effectiveness and sustainability of programmes. Sustainable implementation of evidence-based health promotion programmes requires alternatives to recent deductive models of knowledge transfer. Interactive approaches prove to be promising alternatives. Simultaneously, they change the responsibilities of science, policy and public health practice. Existing boundaries within disciplines and sectors are overcome by arranging transdisciplinary teams as well as by developing common agendas and procedures. Such approaches also require adaptations of the structure of research projects such as extending the length of funding. © Georg Thieme Verlag KG Stuttgart · New York.
Assessing the impact of modeling limits on intelligent systems
NASA Technical Reports Server (NTRS)
Rouse, William B.; Hammer, John M.
1990-01-01
The knowledge bases underlying intelligent systems are validated. A general conceptual framework is provided for considering the roles in intelligent systems of models of physical, behavioral, and operational phenomena. A methodology is described for identifying limits in particular intelligent systems, and the use of the methodology is illustrated via an experimental evaluation of the pilot-vehicle interface within the Pilot's Associate. The requirements and functionality are outlined for a computer based knowledge engineering environment which would embody the approach advocated and illustrated in earlier discussions. Issues considered include the specific benefits of this functionality, the potential breadth of applicability, and technical feasibility.
Probability density function of non-reactive solute concentration in heterogeneous porous formations
Alberto Bellin; Daniele Tonina
2007-01-01
Available models of solute transport in heterogeneous formations lack in providing complete characterization of the predicted concentration. This is a serious drawback especially in risk analysis where confidence intervals and probability of exceeding threshold values are required. Our contribution to fill this gap of knowledge is a probability distribution model for...
Fostering Radical Conceptual Change through Dual-Situated Learning Model
ERIC Educational Resources Information Center
She, Hsiao-Ching
2004-01-01
This article examines how the Dual-Situated Learning Model (DSLM) facilitates a radical change of concepts that involve the understanding of matter, process, and hierarchical attributes. The DSLM requires knowledge of students' prior beliefs of science concepts and the nature of these concepts. In addition, DSLM also serves two functions: it…
Instructional Strategies to Promote Student Strategic Thinking When Using SolidWorks
ERIC Educational Resources Information Center
Toto, Roxanne; Colledge, Thomas; Frederick, David; Pung, Wik Hung
2014-01-01
Reflective of current trends in industry, engineering design professionals are expected to have knowledge of 3D modeling software. Responding to this need, engineering curricula seek to effectively prepare students for the workforce by requiring instruction in the use of 3D parametric solid modeling. Recent literature contains many examples that…
ERIC Educational Resources Information Center
Hudson, Peter; Usak, Muhammet; Savran-Gencer, Ayse
2009-01-01
Primary science education is a concern around the world and quality mentoring within schools can develop pre-service teachers' practices. A five-factor model for mentoring has been identified, namely, personal attributes, system requirements, pedagogical knowledge, modelling, and feedback. Final-year pre-service teachers (mentees, n = 211) from…
Modeling the South American range of the cerulean warbler
S. Barker; S. Benítez; J. Baldy; D. Cisneros Heredia; G. Colorado Zuluaga; F. Cuesta; I. Davidson; D. Díaz; A. Ganzenmueller; S. García; M. K. Girvan; E. Guevara; P. Hamel; A. B. Hennessey; O. L. Hernández; S. Herzog; D. Mehlman; M. I. Moreno; E. Ozdenerol; P. Ramoni-Perazzi; M. Romero; D. Romo; P. Salaman; T. Santander; C. Tovar; M. Welton; T. Will; C. Galindo Pedraza
2007-01-01
Successful conservation of rare species requires detailed knowledge of the speciesâ distribution. Modeling spatial distribution is an efficient means of locating potential habitats. Cerulean Warbler (Dendroica cerulea, Parulidae) was listed as a Vulnerable Species by the International Union for the Conservation of Nature and Natural Resources in 2004...
Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B
2013-01-01
The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point-of-care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method, which allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing the knowledge required by specific agents to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE, a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model and interacts with hospital information systems. The proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.
EliXR-TIME: A Temporal Knowledge Representation for Clinical Research Eligibility Criteria.
Boland, Mary Regina; Tu, Samson W; Carini, Simona; Sim, Ida; Weng, Chunhua
2012-01-01
Effective clinical text processing requires accurate extraction and representation of temporal expressions. Multiple temporal information extraction models were developed but a similar need for extracting temporal expressions in eligibility criteria (e.g., for eligibility determination) remains. We identified the temporal knowledge representation requirements of eligibility criteria by reviewing 100 temporal criteria. We developed EliXR-TIME, a frame-based representation designed to support semantic annotation for temporal expressions in eligibility criteria by reusing applicable classes from well-known clinical temporal knowledge representations. We used EliXR-TIME to analyze a training set of 50 new temporal eligibility criteria. We evaluated EliXR-TIME using an additional random sample of 20 eligibility criteria with temporal expressions that have no overlap with the training data, yielding 92.7% (76 / 82) inter-coder agreement on sentence chunking and 72% (72 / 100) agreement on semantic annotation. We conclude that this knowledge representation can facilitate semantic annotation of the temporal expressions in eligibility criteria.
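The frame-based idea can be sketched as a small data structure. The class and slot names below are hypothetical, chosen for illustration; the article's actual EliXR-TIME classes are not reproduced here.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical frame for a temporal eligibility criterion such as
# "myocardial infarction within 6 months prior to enrollment".
@dataclass
class TemporalFrame:
    event: str                          # clinical event or observation
    relation: str                       # e.g. "before", "after", "within"
    anchor: str                         # reference time point, e.g. "enrollment"
    window_value: Optional[float] = None
    window_unit: Optional[str] = None   # e.g. "months", "days"

criterion = TemporalFrame(event="myocardial infarction", relation="within",
                          anchor="enrollment", window_value=6,
                          window_unit="months")
print(criterion)
```

Representing each temporal expression as a filled frame is what makes downstream eligibility checks computable: the frame slots map directly onto queries against time-stamped patient data.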
Symbolic modeling of human anatomy for visualization and simulation
NASA Astrophysics Data System (ADS)
Pommert, Andreas; Schubert, Rainer; Riemer, Martin; Schiemann, Thomas; Tiede, Ulf; Hoehne, Karl H.
1994-09-01
Visualization of human anatomy in a 3D atlas requires both spatial and more abstract symbolic knowledge. Within our 'intelligent volume' model which integrates these two levels, we developed and implemented a semantic network model for describing human anatomy. Concepts for structuring (abstraction levels, domains, views, generic and case-specific modeling, inheritance) are introduced. Model, tools for generation and exploration and applications in our 3D anatomical atlas are presented and discussed.
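A minimal sketch of such a semantic network over spatial anatomy data follows, assuming simple is-a inheritance and part-of links; the concept names, attributes, and voxel label are illustrative placeholders, not the actual model.

```python
# Semantic-network layer linking symbolic anatomy knowledge to spatial data
class Concept:
    def __init__(self, name, is_a=None, part_of=None, **attrs):
        self.name, self.is_a, self.part_of = name, is_a, part_of
        self.attrs = attrs

    def lookup(self, key):
        # Attribute lookup with inheritance along the 'is_a' hierarchy
        if key in self.attrs:
            return self.attrs[key]
        return self.is_a.lookup(key) if self.is_a else None

bone = Concept("bone", tissue="osseous")
skull = Concept("skull", is_a=bone)
mandible = Concept("mandible", is_a=bone, part_of=skull, voxel_label=17)

print(mandible.lookup("tissue"))   # "osseous", inherited from 'bone'
print(mandible.part_of.name)       # part-of link used for atlas navigation
```

The voxel_label slot stands in for the coupling to the volume data: queries answered at the symbolic level (e.g., "all parts of the skull") resolve to label sets that drive the 3D visualization.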
Clinical Knowledge Governance Framework for Nationwide Data Infrastructure Projects.
Wulff, Antje; Haarbrandt, Birger; Marschollek, Michael
2018-01-01
The availability of semantically enriched and interoperable clinical information models is crucial for reusing once-collected data across institutions, as aspired to in the German HiGHmed project. Funded by the Federal Ministry of Education and Research, this nationwide data infrastructure project adopts the openEHR approach for semantic modelling. Here, strong governance is required to define high-quality and reusable models. The objective of this work is the design of a clinical knowledge governance framework for openEHR modelling in cross-institutional settings like HiGHmed, based on an analysis of successful practices from international projects, published ideas on archetype governance and our own modelling experiences, as well as modelling of BPMN processes. We designed a framework by specifying archetype variations, roles and responsibilities, IT support and modelling workflows. The framework has great potential to make openEHR modelling efforts manageable. Because practical experiences are rare, our work is well suited to prospectively evaluate the benefits of such structured governance approaches.
Damage identification using inverse methods.
Friswell, Michael I
2007-02-15
This paper gives an overview of the use of inverse methods in damage detection and location, using measured vibration data. Inverse problems require the use of a model and the identification of uncertain parameters of this model. Damage is often local in nature and although the effect of the loss of stiffness may require only a small number of parameters, the lack of knowledge of the location means that a large number of candidate parameters must be included. This paper discusses a number of problems that exist with this approach to health monitoring, including modelling error, environmental effects, damage localization and regularization.
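The core difficulty the abstract names, many candidate damage parameters but few measurements, can be illustrated with a linearized sensitivity model and Tikhonov regularization. The sensitivity matrix, noise level, and regularization weight below are synthetic placeholders, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearized inverse problem: changes in measured natural frequencies (r)
# versus stiffness reductions in each candidate element (theta): r = S @ theta
n_modes, n_elems = 6, 30            # few measurements, many candidate locations
S = rng.normal(size=(n_modes, n_elems))
theta_true = np.zeros(n_elems)
theta_true[12] = 0.2                # damage is local: one element loses 20%
r = S @ theta_true + rng.normal(0, 0.01, n_modes)

# Ordinary least squares is ill-posed (more unknowns than measurements);
# Tikhonov regularization stabilizes the estimate.
lam = 0.1
theta_hat = np.linalg.solve(S.T @ S + lam * np.eye(n_elems), S.T @ r)
print(int(np.argmax(np.abs(theta_hat))))   # most likely damage location
```

The regularization term is exactly the trade-off the paper discusses: it suppresses the noise amplification caused by the unknown damage location, at the cost of smearing the estimate across neighboring candidate parameters.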
Use of occupancy models to evaluate expert knowledge-based species-habitat relationships
Iglecia, Monica N.; Collazo, Jaime A.; McKerrow, Alexa
2012-01-01
Expert knowledge-based species-habitat relationships are used extensively to guide conservation planning, particularly when data are scarce. Purported relationships describe the initial state of knowledge, but are rarely tested. We assessed support in the data for suitability rankings of vegetation types based on expert knowledge for three terrestrial avian species in the South Atlantic Coastal Plain of the United States. Experts used published studies, natural history, survey data, and field experience to rank vegetation types as optimal, suitable, and marginal. We used single-season occupancy models, coupled with land cover and Breeding Bird Survey data, to examine the hypothesis that patterns of occupancy conformed to the species-habitat suitability rankings purported by experts. Purported habitat suitability was validated for two of three species. As predicted for the Eastern Wood-Pewee (Contopus virens) and Brown-headed Nuthatch (Sitta pusilla), occupancy was strongly influenced by vegetation types classified as “optimal habitat” by the suitability rankings for those species. Contrary to predictions, Red-headed Woodpecker (Melanerpes erythrocephalus) models that included vegetation types as covariates received support from the data similar to models without vegetation types. For all three species, occupancy was also related to sampling latitude. Our results suggest that covariates representing other habitat requirements might be necessary to model occurrence of generalist species like the woodpecker. The modeling approach described herein provides a means to test expert knowledge-based species-habitat relationships, and hence, help guide conservation planning.
Teaching Children to Use Databases through Direct Instruction.
ERIC Educational Resources Information Center
Rooze, Gene E.
1988-01-01
Provides a direct instruction strategy for teaching skills and concepts required for database use. Creates an interactive environment which motivates, provides a model, imparts information, allows active student participation, gives knowledge of results, and presents guidance. (LS)
NASA Technical Reports Server (NTRS)
Ott, C. M.; Mena, K. D.; Nickerson, C.A.; Pierson, D. L.
2009-01-01
Historically, microbiological spaceflight requirements have been established in a subjective manner based upon expert opinion of both environmental and clinical monitoring results and the incidence of disease. The limited amount of data, especially from long-duration missions, has created very conservative requirements based primarily on the concentration of microorganisms. Periodic reevaluations of new data from later missions have allowed some relaxation of these stringent requirements. However, the requirements remain very conservative and subjective in nature, and the risk of crew illness due to infectious microorganisms is not well defined. The use of modeling techniques for microbial risk has been applied in the food and potable water industries and has exceptional potential for spaceflight applications. From a productivity standpoint, this type of modeling can (1) decrease unnecessary costs and resource usage and (2) prevent inadequate or inappropriate data for health assessment. In addition, a quantitative model has several advantages for risk management and communication. By identifying the variable components of the model and the knowledge associated with each component, this type of modeling can: (1) systematically identify and close knowledge gaps, (2) systematically identify acceptable and unacceptable risks, (3) improve communication with stakeholders as to the reasons for resource use, and (4) facilitate external scientific approval of the NASA requirements. The modeling of microbial risk involves the evaluation of several key factors, including hazard identification, crew exposure assessment, dose-response assessment, and risk characterization. Many of these factors are similar to conditions found on Earth; however, the spaceflight environment is very specialized, as the inhabitants live in a small, semi-closed environment that is often dependent on regenerative life support systems. To further complicate modeling efforts, microbial dose-response characteristics may be affected by a potentially dysfunctional crew immune system during a mission. In addition, microbial virulence has been shown to change under certain conditions during spaceflight, further complicating dose-response characterization. An initial study of the applicability of microbial risk assessment techniques was performed using Crew Health Care System (CHeCS) operational data from the International Space Station potable water systems. The risk of infection from potable water was selected because the flight systems and microbial ecology are well defined. This initial study confirmed the feasibility of using microbial risk assessment modeling for spaceflight systems. While no immediate threat was detected, the study identified several medically significant microorganisms that could pose a health risk if uncontrolled. The study also identified several specific knowledge gaps in making a risk assessment and noted that filling these knowledge gaps is essential, as the risk estimates may change by orders of magnitude depending on the answers. The current phase of the microbial risk assessment studies addresses the dose-response relationships of specific infectious agents (Salmonella enterica serovar Typhimurium, Pseudomonas spp., and Escherichia coli), as their evaluation will provide a better baseline for determining the overall hazard characterization. These organisms were chosen because they either have been isolated on spacecraft or have an identified route of infection during a mission.
The characterization will utilize dose-response models selected from the peer-reviewed literature and/or derived using statistical approaches. Development of these modeling and risk assessment techniques will help to optimize flight requirements and to protect the safety, health, and performance of the crew.
Managing the Alert Process at NewYork-Presbyterian Hospital
Kuperman, Gilad J; Diamente, Rosanna; Khatu, Vrinda; Chan-Kraushar, Terri; Stetson, Pete; Boyer, Aurelia; Cooper, Mary
2005-01-01
Clinical decision support can improve the quality of care, but requires substantial knowledge management activities. At NewYork-Presbyterian Hospital in New York City, we have implemented a formal alert management process whereby only hospital committees and departments can request alerts. An explicit requestor, who will help resolve the details of the alert logic and the alert message, must be identified. Alerts must be requested in writing using a structured alert request form. Alert requests are reviewed by the Alert Committee and then forwarded to the Information Systems department for a software development estimate. The model required that clinical committees and departments become more actively involved in the development of alerts than had previously been necessary. In the 12 months following implementation, 10 alert requests were received. The model has been well received. Much of the knowledge engineering work has been distributed, and burden has been removed from scarce medical informatics resources. PMID:16779073
Fostering radical conceptual change through dual-situated learning model
NASA Astrophysics Data System (ADS)
She, Hsiao-Ching
2004-02-01
This article examines how the Dual-Situated Learning Model (DSLM) facilitates a radical change of concepts that involve the understanding of matter, process, and hierarchical attributes. The DSLM requires knowledge of students' prior beliefs about science concepts and of the nature of these concepts. In addition, DSLM serves two functions: it creates dissonance with students' prior knowledge by challenging their epistemological and ontological beliefs about science concepts, and it provides essential mental sets for students to reconstruct a more scientific view of the concepts. In this study, the concept of heat transfer (heat conduction and convection), which requires an understanding of matter, process, and hierarchical attributes, was chosen to examine how DSLM can facilitate radical conceptual change among students. Results show that DSLM has great potential to foster a radical conceptual change process in learning heat transfer. Radical conceptual change can definitely be achieved and does not necessarily involve a slow or gradual process.
Predicting Mycobacterium tuberculosis Complex Clades Using Knowledge-Based Bayesian Networks
Bennett, Kristin P.
2014-01-01
We develop a novel approach for incorporating expert rules into Bayesian networks for classification of Mycobacterium tuberculosis complex (MTBC) clades. The proposed knowledge-based Bayesian network (KBBN) treats sets of expert rules as prior distributions on the classes. Unlike prior knowledge-based support vector machine approaches which require rules expressed as polyhedral sets, KBBN directly incorporates the rules without any modification. KBBN uses data to refine rule-based classifiers when the rule set is incomplete or ambiguous. We develop a predictive KBBN model for 69 MTBC clades found in the SITVIT international collection. We validate the approach using two testbeds that model knowledge of the MTBC obtained from two different experts and large DNA fingerprint databases to predict MTBC genetic clades and sublineages. These models represent strains of MTBC using high-throughput biomarkers called spacer oligonucleotide types (spoligotypes), since these are routinely gathered from MTBC isolates of tuberculosis (TB) patients. Results show that incorporating rules into problems can drastically increase classification accuracy if data alone are insufficient. The SITVIT KBBN is publicly available for use on the World Wide Web. PMID:24864238
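To make the rules-as-priors idea concrete, the toy sketch below (illustrative names and counts, not the SITVIT KBBN itself) treats expert rules that match a spoligotype pattern as Dirichlet pseudo-counts on the class prior, which observed training labels then refine:

```python
import numpy as np

# Hypothetical sketch: expert rules matching a spoligotype pattern contribute
# pseudo-counts to the class prior; training data for the same pattern refine
# the rule-based classifier via a Dirichlet-multinomial update.
classes = ["LAM", "Beijing", "Haarlem"]      # illustrative clade labels
rule_pseudo = np.array([5.0, 1.0, 1.0])      # expert rules favor LAM here
data_counts = np.array([2.0, 8.0, 0.0])      # observed labels disagree

posterior = rule_pseudo + data_counts
posterior /= posterior.sum()
for clade, p in zip(classes, posterior):
    print(f"P({clade} | pattern) = {p:.2f}")  # mass shifts toward Beijing
```

When the rule set is ambiguous or incomplete, the data counts dominate; when data are sparse, the rule-derived prior carries the prediction, mirroring the behavior described above.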
Polyenergetic known-component reconstruction without prior shape models
NASA Astrophysics Data System (ADS)
Zhang, C.; Zbijewski, W.; Zhang, X.; Xu, S.; Stayman, J. W.
2017-03-01
Purpose: Previous work has demonstrated that structural models of surgical tools and implants can be integrated into model-based CT reconstruction to greatly reduce metal artifacts and improve image quality. This work extends a polyenergetic formulation of known-component reconstruction (Poly-KCR) by removing the requirement that a physical model (e.g. CAD drawing) be known a priori, permitting much more widespread application. Methods: We adopt a single-threshold segmentation technique with the help of morphological structuring elements to build a shape model of metal components in a patient scan based on an initial filtered-backprojection (FBP) reconstruction. This shape model is used as an input to Poly-KCR, a formulation of known-component reconstruction that does not require prior knowledge of beam quality or component material composition. An investigation of performance as a function of segmentation thresholds is performed in simulation studies, and qualitative comparisons to Poly-KCR with an a priori shape model are made using physical CBCT data of an implanted cadaver and patient data from a prototype extremities scanner. Results: We find that model-free Poly-KCR (MF-Poly-KCR) provides much better image quality compared to conventional reconstruction techniques (e.g. FBP). Moreover, the performance closely approximates that of Poly-KCR with an a priori shape model. In simulation studies, we find that imaging performance generally follows segmentation accuracy with slight under- or over-estimation based on the shape of the implant. In both simulation and physical data studies we find that the proposed approach can remove most of the blooming and streak artifacts around the component, permitting visualization of the surrounding soft tissues. Conclusion: This work shows that it is possible to perform known-component reconstruction without prior knowledge of the known component. In conjunction with the Poly-KCR technique that does not require knowledge of beam quality or material composition, very little needs to be known about the metal implant and system beforehand. These generalizations will allow more widespread application of KCR techniques in real patient studies where information about surgical tools and implants is limited or unavailable.
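The segmentation step described in the Methods can be sketched with standard tools; the threshold and structuring-element radius below are illustrative placeholders, not the paper's values:

```python
import numpy as np
from scipy import ndimage

def metal_shape_model(fbp_volume, threshold=3000.0, radius=1):
    """Single-threshold segmentation of bright metal in an initial FBP
    reconstruction, cleaned up with morphological closing and opening
    (illustrative parameter values)."""
    mask = fbp_volume > threshold
    struct = ndimage.iterate_structure(
        ndimage.generate_binary_structure(3, 1), radius)
    mask = ndimage.binary_closing(mask, structure=struct)  # fill small gaps
    mask = ndimage.binary_opening(mask, structure=struct)  # remove speckle
    return mask

# Synthetic test volume containing one bright rod-like "implant":
vol = np.zeros((32, 32, 32))
vol[8:24, 13:19, 13:19] = 4000.0
print(metal_shape_model(vol).sum(), "voxels in the shape model")
```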
Kim, Eun A; Choi, So Eun
2015-12-01
The purpose of this study was to test and validate a model to predict living and brain death organ donation intention in nursing students. The conceptual model was based on the theory of planned behavior. Quota sampling methodology was used to recruit 921 nursing students from all over the country, and data collection was done from October 1 to December 20, 2013. The model fit indices for the hypothetical model were within the recommended levels. Knowledge, attitude, subjective norm, and perceived behavioral control explained 40.2% and 40.1% of the variance in living and brain death organ donation intention, respectively. Subjective norm was the most direct influential factor for organ donation intention. Knowledge had a significant direct effect on attitude and indirect effects on subjective norm and perceived behavioral control. These effects were higher for brain death organ donation intention than for living donation intention. The overall findings of this study suggest the need to develop systematic education programs to increase knowledge about brain death organ donation. The development, application, and evaluation of intervention programs are required to improve subjective norm.
Model-based diagnostics for Space Station Freedom
NASA Technical Reports Server (NTRS)
Fesq, Lorraine M.; Stephan, Amy; Martin, Eric R.; Lerutte, Marcel G.
1991-01-01
An innovative approach to fault management was recently demonstrated for the NASA LeRC Space Station Freedom (SSF) power system testbed. This project capitalized on research in model-based reasoning, which uses knowledge of a system's behavior to monitor its health. The fault management system (FMS) can isolate failures online or in a post-analysis mode, and requires no knowledge of failure symptoms to perform its diagnostics. An in-house tool called MARPLE was used to develop and run the FMS. MARPLE's capabilities are similar to those available from commercial expert system shells, although MARPLE is designed to build model-based as opposed to rule-based systems. These capabilities include functions for capturing behavioral knowledge, a reasoning engine that implements a model-based technique known as constraint suspension, and a tool for quickly generating new user interfaces. The prototype produced by applying MARPLE to SSF not only demonstrated that model-based reasoning is a valuable diagnostic approach, but it also suggested several new applications of MARPLE, including an integration and testing aid, and a complement to state estimation.
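Constraint suspension itself is easy to illustrate on the classic 'polybox' circuit from the model-based diagnosis literature: suspend one component's constraint at a time and keep that component as a fault candidate if the remaining constraints stay consistent with the observations. The sketch below is a generic illustration of the technique, not MARPLE code:

```python
# Polybox: M1 = a*c, M2 = b*d, M3 = c*e; Add1: F = M1 + M2; Add2: G = M2 + M3.
def suspects(a, b, c, d, e, F_obs, G_obs):
    m1, m2, m3 = a * c, b * d, c * e
    candidates = []
    # Suspend M1: some output x satisfies x + m2 == F_obs; Add2 must still hold.
    if m2 + m3 == G_obs:
        candidates.append("M1")
    # Suspend M2: one x must satisfy both m1 + x == F_obs and x + m3 == G_obs.
    if F_obs - m1 == G_obs - m3:
        candidates.append("M2")
    # Suspend M3: some x satisfies x + m2 == G_obs; Add1 must still hold.
    if m1 + m2 == F_obs:
        candidates.append("M3")
    # Suspend Add1: F is unconstrained; Add2 must hold from healthy parts.
    if m2 + m3 == G_obs:
        candidates.append("Add1")
    # Suspend Add2: G is unconstrained; Add1 must hold.
    if m1 + m2 == F_obs:
        candidates.append("Add2")
    return candidates

# Expected F = G = 12 for these inputs; observing F = 10 implicates M1 or Add1.
print(suspects(3, 2, 2, 3, 3, F_obs=10, G_obs=12))  # ['M1', 'Add1']
```

No fault symptoms are encoded anywhere; the candidates fall out of the behavioral model alone, which is the property highlighted in the abstract above.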
Identification of Boolean Network Models From Time Series Data Incorporating Prior Knowledge.
Leifeld, Thomas; Zhang, Zhihua; Zhang, Ping
2018-01-01
Motivation: Mathematical models take an important place in science and engineering. A model can help scientists to explain the dynamic behavior of a system and to understand the functionality of system components. Since the length of a time series and the number of replicates are limited by the cost of experiments, Boolean networks, as a structurally simple and parameter-free logical model for gene regulatory networks, have attracted the interest of many scientists. In order to fit the biological context and to lower the data requirements, biological prior knowledge is taken into consideration during the inference procedure. In the literature, the existing identification approaches can only deal with a subset of possible types of prior knowledge. Results: We propose a new approach to identify Boolean networks from time series data incorporating prior knowledge, such as partial network structure, canalizing property, and positive and negative unateness. Using the vector form of Boolean variables and applying a generalized matrix multiplication called the semi-tensor product (STP), each Boolean function can be equivalently converted into a matrix expression. Based on this, the identification problem is reformulated as an integer linear programming problem to reveal the system matrix of the Boolean model in a computationally efficient way, whose dynamics are consistent with the important dynamics captured in the data. By using prior knowledge, the number of candidate functions can be reduced during the inference. Hence, identification incorporating prior knowledge is especially suitable for the case of small-size time series data and data without sufficient stimuli. The proposed approach is illustrated with the help of a biological model of the network of oxidative stress response. Conclusions: The combination of an efficient reformulation of the identification problem with the possibility to incorporate various types of prior knowledge enables the application of computational model inference to systems with a limited amount of time series data. The general applicability of this methodological approach makes it suitable for a variety of biological systems and of general interest for biological and medical research.
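As a concrete illustration of the vector form and the semi-tensor product (a generic sketch, not the authors' code): Boolean True and False become the unit vectors [1,0]ᵀ and [0,1]ᵀ, and an n-input Boolean function becomes a 2 x 2^n structure matrix applied via the STP:

```python
import numpy as np
from math import lcm

# Vector form of Boolean values (standard STP convention):
T = np.array([[1], [0]])   # True
F = np.array([[0], [1]])   # False

def stp(A, B):
    """Semi-tensor product: (A kron I_{t/n}) @ (B kron I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# Structure matrix of AND; columns correspond to inputs (T,T),(T,F),(F,T),(F,F).
M_and = np.array([[1, 0, 0, 0],
                  [0, 1, 1, 1]])

# AND(True, False) in matrix form:
print(stp(stp(M_and, T), F).ravel())  # [0. 1.], i.e. the vector form of False
```

Because every Boolean function reduces to one such matrix, searching for network functions consistent with the data (and with structural prior knowledge) becomes a search over integer matrix entries, which is what enables the integer linear programming reformulation described above.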
Ohl, Cornelia; Moser, Frank
2007-08-01
Chemicals indisputably contribute greatly to the well-being of modern societies. Apart from such benefits, however, chemicals often pose serious threats to human health and the environment when improperly handled. Therefore, the European Commission has proposed a regulatory framework for the Registration, Evaluation and Authorization of Chemicals (REACH) that requires companies using chemicals to gather pertinent information on the properties of these substances. In this article, we argue that the crucial aspect of this information management may be the honesty and accuracy of the transfer of relevant knowledge from the producer of a chemical to its user. This may be particularly true if the application of potentially hazardous chemicals is not part of the user's core competency. Against this background, we maintain that the traditional sales concept provides no incentives for transferring this knowledge. The reason is that increased user knowledge of a chemical's properties may raise the efficiency of its application. That is, excessive and unnecessary usage will be eliminated. This, in turn, would lower the amount of chemicals sold and in competitive markets directly decrease profits of the producer. Through the introduction of chemical leasing business models, we attempt to present a strategy to overcome the incentive structure of classical sales models, which is counterproductive for the transfer of knowledge. By introducing two models (a Model A that differs least and a Model B that differs most from traditional sales concepts), we demonstrate that chemical leasing business models are capable of accomplishing the goal of Registration, Evaluation and Authorization of Chemicals: to effectively manage the risk of chemicals by reducing the total quantity of chemicals used, either by a transfer of applicable knowledge from the lessor to the lessee (Model A) or by efficient application of the chemical by the lessor him/herself (Model B).
Methods Beyond Methods: A Model for Africana Graduate Methods Training
Best, Latrica E.; Byrd, W. Carson
2018-01-01
A holistic graduate education can impart not just tools and knowledge, but critical positioning to fulfill many of the original missions of Africana Studies programs set forth in the 1960s and 1970s. As an interdisciplinary field with many approaches to examining the African Diaspora, the methodological training of graduate students can vary across graduate programs. Although taking qualitative methods courses is often required of graduate students in Africana Studies programs, and these programs offer such courses, rarely if ever are graduate students in these programs required to take quantitative methods courses, let alone have these courses offered in-house. These courses can offer Africana Studies graduate students new tools for their own research but, more importantly, improve their knowledge of quantitative research on diasporic communities. These tools and knowledge can assist with identifying flawed arguments about African-descended communities and their members. This article explores the importance of requiring and offering critical quantitative methods courses in graduate programs in Africana Studies, and discusses the methods requirements of one graduate program in the field as an example of more rigorous training that other programs could offer graduate students. PMID:29710883
The smooth (tractor) operator: insights of knowledge engineering.
Cullen, Ralph H; Smarr, Cory-Ann; Serrano-Baquero, Daniel; McBride, Sara E; Beer, Jenay M; Rogers, Wendy A
2012-11-01
The design of and training for complex systems requires in-depth understanding of task demands imposed on users. In this project, we used the knowledge engineering approach (Bowles et al., 2004) to assess the task of mowing in a citrus grove. Knowledge engineering is divided into four phases: (1) Establish goals. We defined specific goals based on the stakeholders involved. The main goal was to identify operator demands to support improvement of the system. (2) Create a working model of the system. We reviewed product literature, analyzed the system, and conducted expert interviews. (3) Extract knowledge. We interviewed tractor operators to understand their knowledge base. (4) Structure knowledge. We analyzed and organized operator knowledge to inform project goals. We categorized the information and developed diagrams to display the knowledge effectively. This project illustrates the benefits of knowledge engineering as a qualitative research method to inform technology design and training.
Two Approaches to Calibration in Metrology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campanelli, Mark
2014-04-01
Inferring mathematical relationships with quantified uncertainty from measurement data is common to computational science and metrology. Sufficient knowledge of measurement process noise enables Bayesian inference. Otherwise, an alternative approach is required, here termed compartmentalized inference, because collection of uncertain data and model inference occur independently. Bayesian parameterized model inference is compared to a Bayesian-compatible compartmentalized approach for ISO-GUM compliant calibration problems in renewable energy metrology. In either approach, model evidence can help reduce model discrepancy.
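A minimal sketch of the Bayesian approach, assuming a straight-line instrument calibration with known Gaussian measurement noise (all numbers illustrative): because the noise level is known, the posterior over the calibration parameters has a closed conjugate form.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 20)           # reference standard values
sigma = 0.1                          # known measurement noise (enables Bayes)
y = 0.98 * x + 0.3 + rng.normal(0, sigma, x.size)  # instrument readings

# Gaussian prior theta ~ N(0, tau^2 I) on theta = (intercept, slope);
# conjugate posterior: S = (X'X/sigma^2 + I/tau^2)^-1, m = S X'y / sigma^2.
X = np.column_stack([np.ones_like(x), x])
tau2 = 100.0
S_post = np.linalg.inv(X.T @ X / sigma**2 + np.eye(2) / tau2)
m_post = S_post @ (X.T @ y) / sigma**2

print("posterior mean (intercept, slope):", np.round(m_post, 3))
print("posterior sd:", np.round(np.sqrt(np.diag(S_post)), 4))
```

When the noise variance is not known well enough to write such a likelihood, the compartmentalized alternative described above treats the uncertain data as given and fits the model in a separate step.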
2009-09-01
...models are explored using the Knowledge Value Added (KVA) methodology, and the most efficient model is developed and validated by applying it to the current IA C&A process flow at the TSO-KC. ...models require only one available actor from their respective group, rather than all actors in the group, to...
ERIC Educational Resources Information Center
Paquette, Luc; Lebeau, Jean-François; Beaulieu, Gabriel; Mayers, André
2015-01-01
Model-tracing tutors (MTTs) have proven effective for the tutoring of well-defined tasks, but the pedagogical interventions they produce are limited and usually require the inclusion of pedagogical content, such as text message templates, in the model of the task. The capability to generate pedagogical content would be beneficial to MTT…
ERIC Educational Resources Information Center
El Hadidi, Hala; Kirby, David A.
2015-01-01
In the modern knowledge economy universities are being required to operate more entrepreneurially, commercializing the results of their research and spinning out new ventures. The literature on the Triple Helix model (of academic-industry-government relations) is outlined, emphasizing--as does the model--the enhanced role that the modern…
Multi-Grade Teaching: A Review of Research and Practice. Education Research Paper.
ERIC Educational Resources Information Center
Little, Angela
The single-grade model of education, based on the division of labor in industry, has come to dominate the school, class, and curriculum organization used by central authorities. Although the multi-grade model is common in developing countries and in rural areas of industrialized countries, the knowledge required for effective multi-grade teaching…
Improving Students' Critical Thinking Skills through Remap NHT in Biology Classroom
ERIC Educational Resources Information Center
Mahanal, Susriyati; Zubaidah, Siti; Bahri, Arsad; Syahadatud Dinnurriya, Maratusy
2016-01-01
Previous studies in Malang, Indonesia, showed that failures in biology learning were caused not only by students' low prior knowledge, but also by a biology learning model that had not yet improved students' critical thinking skills, which resulted in low cognitive learning outcomes. A learning model is required to improve students'…
A mathematical model for predicting fire spread in wildland fuels
Richard C. Rothermel
1972-01-01
A mathematical fire model for predicting rate of spread and intensity that is applicable to a wide range of wildland fuels and environments is presented. Methods of incorporating mixtures of fuel sizes are introduced by weighting input parameters by surface area. The input parameters do not require prior knowledge of the burning characteristics of the fuel.
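For reference, the central rate-of-spread relation of Rothermel's model, in the notation commonly used for it (stated here from general knowledge of the model, not quoted from the abstract), balances the propagating heat flux against the heat required to ignite fuel ahead of the fire:

```latex
% R: rate of spread; I_R: reaction intensity; \xi: propagating flux ratio;
% \phi_w, \phi_s: wind and slope factors; \rho_b: fuel bed bulk density;
% \varepsilon: effective heating number; Q_{ig}: heat of preignition.
R = \frac{I_R \, \xi \, (1 + \phi_w + \phi_s)}{\rho_b \, \varepsilon \, Q_{ig}}
```

The surface-area weighting mentioned in the abstract enters through the fuel-bed terms (I_R, rho_b, epsilon) when mixed fuel sizes are combined into effective parameters.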
Big data, big knowledge: big data for personalized healthcare.
Viceconti, Marco; Hunter, Peter; Hose, Rod
2015-07-01
The idea that the purely phenomenological knowledge that we can extract by analyzing large amounts of data can be useful in healthcare seems to contradict the desire of VPH researchers to build detailed mechanistic models for individual patients. But in practice no model is ever entirely phenomenological or entirely mechanistic. We propose in this position paper that big data analytics can be successfully combined with VPH technologies to produce robust and effective in silico medicine solutions. In order to do this, big data technologies must be further developed to cope with some specific requirements that emerge from this application. Such requirements are: working with sensitive data; analytics of complex and heterogeneous data spaces, including nontextual information; distributed data management under security and performance constraints; specialized analytics to integrate bioinformatics and systems biology information with clinical observations at tissue, organ and organisms scales; and specialized analytics to define the "physiological envelope" during the daily life of each patient. These domain-specific requirements suggest a need for targeted funding, in which big data technologies for in silico medicine becomes the research priority.
Prediction of Fatigue Crack Growth in Rail Steels.
DOT National Transportation Integrated Search
1981-10-01
Measures to prevent derailments due to fatigue failures of rails require adequate knowledge of the rate of propagation of fatigue cracks under service loading. The report presents a computational model for the prediction of crack growth in rails. The...
NASA Astrophysics Data System (ADS)
Hafner, Robert; Stewart, Jim
Past problem-solving research has provided a basis for helping students structure their knowledge and apply appropriate problem-solving strategies to solve problems for which their knowledge (or mental models) of scientific phenomena is adequate (model-using problem solving). This research examines how problem solving in the domain of Mendelian genetics proceeds in situations where solvers' mental models are insufficient to solve the problems at hand (model-revising problem solving). Such situations require solvers to use existing models to recognize anomalous data and to revise those models to accommodate the data. The study was conducted in the context of a 9-week high school genetics course and addressed: the heuristics characteristic of successful model-revising problem solving; the nature of the model revisions made by students, as well as the nature of model development across problem types; and the basis upon which solvers decide that a revised model is sufficient (that it has both predictive and explanatory power).
Access to finance from different finance provider types: Farmer knowledge of the requirements.
Wulandari, Eliana; Meuwissen, Miranda P M; Karmana, Maman H; Oude Lansink, Alfons G J M
2017-01-01
Analysing farmer knowledge of the requirements of finance providers can provide valuable insights to policy makers about ways to improve farmers' access to finance. This study compares farmer knowledge of the requirements to obtain finance with the actual requirements set by different finance provider types, and investigates the relation between demographic and socioeconomic factors and farmer knowledge of finance requirements. We use a structured questionnaire to collect data from a sample of finance providers and farmers in Java Island, Indonesia. We find that the most important requirements to acquire finance vary among different finance provider types. We also find that farmers generally have little knowledge of the requirements, which are important to each type of finance provider. Awareness campaigns are needed to increase farmer knowledge of the diversity of requirements among the finance provider types.
A Curriculum Model for Graduate Specialization in Nursing Informatics
Romano, C.A.; Heller, B.R.
1988-01-01
The purpose of this paper is to describe the emerging role of the nurse as Information Systems Specialist and to delineate a prototype educational curriculum in Nursing Informatics that is designed to prepare nurses for this role. The major duties, knowledge required, and resulting interactions related to the role are discussed. Program objectives, admission requirements, and a description of the major areas of coursework are also outlined. The impact of this model program in strengthening the organization and management of nursing services in the health care system is also emphasized.
Band, Leah R.; Fozard, John A.; Godin, Christophe; Jensen, Oliver E.; Pridmore, Tony; Bennett, Malcolm J.; King, John R.
2012-01-01
Over recent decades, we have gained detailed knowledge of many processes involved in root growth and development. However, with this knowledge come increasing complexity and an increasing need for mechanistic modeling to understand how those individual processes interact. One major challenge is in relating genotypes to phenotypes, requiring us to move beyond the network and cellular scales, to use multiscale modeling to predict emergent dynamics at the tissue and organ levels. In this review, we highlight recent developments in multiscale modeling, illustrating how these are generating new mechanistic insights into the regulation of root growth and development. We consider how these models are motivating new biological data analysis and explore directions for future research. This modeling progress will be crucial as we move from a qualitative to an increasingly quantitative understanding of root biology, generating predictive tools that accelerate the development of improved crop varieties. PMID:23110897
2013-01-01
Background Knowledge translation strategies are an approach to increase the use of evidence within policy and practice decision-making contexts. In clinical and health service contexts, knowledge translation strategies have focused on individual behavior change, however the multi-system context of public health requires a multi-level, multi-strategy approach. This paper describes the design of and implementation plan for a knowledge translation intervention for public health decision making in local government. Methods Four preliminary research studies contributed findings to the design of the intervention: a systematic review of knowledge translation intervention effectiveness research, a scoping study of knowledge translation perspectives and relevant theory literature, a survey of the local government public health workforce, and a study of the use of evidence-informed decision-making for public health in local government. A logic model was then developed to represent the putative pathways between intervention inputs, processes, and outcomes operating between individual-, organizational-, and system-level strategies. This formed the basis of the intervention plan. Results The systematic and scoping reviews identified that effective and promising strategies to increase access to research evidence require an integrated intervention of skill development, access to a knowledge broker, resources and tools for evidence-informed decision making, and networking for information sharing. Interviews and survey analysis suggested that the intervention needs to operate at individual and organizational levels, comprising workforce development, access to evidence, and regular contact with a knowledge broker to increase access to intervention evidence; develop skills in appraisal and integration of evidence; strengthen networks; and explore organizational factors to build organizational cultures receptive to embedding evidence in practice. The logic model incorporated these inputs and strategies with a set of outcomes to measure the intervention’s effectiveness based on the theoretical frameworks, evaluation studies, and decision-maker experiences. Conclusion Documenting the design of and implementation plan for this knowledge translation intervention provides a transparent, theoretical, and practical approach to a complex intervention. It provides significant insights into how practitioners might engage with evidence in public health decision making. While this intervention model was designed for the local government context, it is likely to be applicable and generalizable across sectors and settings. Trial registration Australia New Zealand Clinical Trials Register ACTRN12609000953235. PMID:24107358
Selective Cooperation in Early Childhood – How to Choose Models and Partners
Hermes, Jonas; Behne, Tanya; Studte, Kristin; Zeyen, Anna-Maria; Gräfenhain, Maria; Rakoczy, Hannes
2016-01-01
Cooperation is essential for human society, and children engage in cooperation from early on. It is unclear, however, how children select their partners for cooperation. We know that children choose selectively whom to learn from (e.g. preferring reliable over unreliable models) on a rational basis. The present study investigated whether children (and adults) also choose their cooperative partners selectively and what model characteristics they regard as important for cooperative partners and for informants about novel words. Three- and four-year-old children (N = 64) and adults (N = 14) saw contrasting pairs of models differing either in physical strength or in accuracy (in labeling known objects). Participants then performed different tasks (cooperative problem solving and word learning) requiring the choice of a partner or informant. Both children and adults chose their cooperative partners selectively. Moreover, they showed the same pattern of selective model choice, regarding a wide range of model characteristics as important for cooperation (preferring both the strong and the accurate model for a strength-requiring cooperation task), but only prior knowledge as important for word learning (preferring the knowledgeable but not the strong model for word-learning tasks). Young children’s selective model choice thus reveals an early rational competence: They infer characteristics from past behavior and flexibly consider what characteristics are relevant for certain tasks. PMID:27505043
Motion Planning in a Society of Intelligent Mobile Agents
NASA Technical Reports Server (NTRS)
Esterline, Albert C.; Shafto, Michael (Technical Monitor)
2002-01-01
The majority of the work on this grant involved formal modeling of human-computer integration. We conceptualize computer resources as a multiagent system so that these resources and human collaborators may be modeled uniformly. In previous work we had used modal logic for this uniform modeling, and we had developed a process-algebraic agent abstraction. In this work, we applied this abstraction (using CSP) in uniformly modeling agents and users, which allowed us to use tools for investigating CSP models. This work revealed the power of process-algebraic handshakes in modeling face-to-face conversation. We also investigated specifications of human-computer systems in the style of algebraic specification. This involved specifying the common knowledge required for coordination and process-algebraic patterns of communication actions intended to establish the common knowledge. We investigated the conditions for agents endowed with perception to gain common knowledge and implemented a prototype neural-network system that allows agents to detect when such conditions hold. The literature on multiagent systems conceptualizes communication actions as speech acts. We implemented a prototype system that infers the deontic effects (obligations, permissions, prohibitions) of speech acts and detects violations of these effects. A prototype distributed system was developed that allows users to collaborate in moving proxy agents; it was designed to exploit handshakes and common knowledge. Finally, in work carried over from a previous NASA ARC grant, about fifteen undergraduates developed and presented projects on multiagent motion planning.
NASA Technical Reports Server (NTRS)
Throop, David R.
1992-01-01
The paper examines the requirements for the reuse of computational models employed in model-based reasoning (MBR) to support automated inference about mechanisms. Areas in which the theory of MBR is not yet completely adequate for using the information that simulations can yield are identified, and recent work in these areas is reviewed. It is argued that using MBR along with simulations forces the use of specific fault models. Fault models are used so that a particular fault can be instantiated into the model and run. This in turn implies that the component specification language needs to be capable of encoding any fault that might need to be sensed or diagnosed. It also means that the simulation code must anticipate all these faults at the component level.
A Fast Variant of 1H Spectroscopic U-FLARE Imaging Using Adjusted Chemical Shift Phase Encoding
NASA Astrophysics Data System (ADS)
Ebel, Andreas; Dreher, Wolfgang; Leibfritz, Dieter
2000-02-01
So far, fast spectroscopic imaging (SI) using the U-FLARE sequence has provided metabolic maps indirectly via Fourier transformation (FT) along the chemical shift (CS) dimension and subsequent peak integration. However, a large number of CS encoding steps Nω is needed to cover the spectral bandwidth and to achieve sufficient spectral resolution for peak integration even if the number of resonance lines is small compared to Nω and even if only metabolic images are of interest and not the spectra in each voxel. Other reconstruction algorithms require extensive prior knowledge, starting values, and/or model functions. An adjusted CS phase encoding scheme (APE) can be used to overcome these drawbacks. It incorporates prior knowledge only about the resonance frequencies present in the sample. Thus, Nω can be reduced by a factor of 4 for many 1H in vivo studies while no spectra have to be reconstructed, and no additional user interaction, prior knowledge, starting values, or model function are required. Phantom measurements and in vivo experiments on rat brain have been performed at 4.7 T to test the feasibility of the method for proton SI.
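The gain from adjusted phase encoding can be sketched as a small linear inverse problem (illustrative numbers, not the paper's acquisition parameters): once the resonance frequencies are known, a handful of CS encoding steps determines the line amplitudes by least squares, with no FT or peak integration required.

```python
import numpy as np

# Known resonance frequencies (Hz offsets) -- the only prior knowledge assumed:
freqs = np.array([60.0, 190.0, 305.0])       # hypothetical metabolite lines
t = np.arange(4) * 1e-3                      # only N_omega = 4 encoding steps

# Each encoding step samples s(t_k) = sum_j a_j * exp(2i*pi*f_j*t_k).
E = np.exp(2j * np.pi * np.outer(t, freqs))

true_amps = np.array([1.0, 0.5, 2.0])
s = E @ true_amps                            # simulated acquired signal

amps, *_ = np.linalg.lstsq(E, s, rcond=None)
print(np.round(amps.real, 3))                # recovers [1.  0.5  2.]
```

With conventional FT reconstruction, resolving these three lines would instead require enough encoding steps to cover the spectral bandwidth at sufficient resolution, which is the cost the APE scheme avoids.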
Dynamic updating of hippocampal object representations reflects new conceptual knowledge
Mack, Michael L.; Love, Bradley C.; Preston, Alison R.
2016-01-01
Concepts organize the relationship among individual stimuli or events by highlighting shared features. Often, new goals require updating conceptual knowledge to reflect relationships based on different goal-relevant features. Here, our aim is to determine how hippocampal (HPC) object representations are organized and updated to reflect changing conceptual knowledge. Participants learned two classification tasks in which successful learning required attention to different stimulus features, thus providing a means to index how representations of individual stimuli are reorganized according to changing task goals. We used a computational learning model to capture how people attended to goal-relevant features and organized object representations based on those features during learning. Using representational similarity analyses of functional magnetic resonance imaging data, we demonstrate that neural representations in left anterior HPC correspond with model predictions of concept organization. Moreover, we show that during early learning, when concept updating is most consequential, HPC is functionally coupled with prefrontal regions. Based on these findings, we propose that when task goals change, object representations in HPC can be organized in new ways, resulting in updated concepts that highlight the features most critical to the new goal. PMID:27803320
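The representational similarity logic used here can be sketched in a few lines on synthetic data (illustrative of RSA in general, not the study's pipeline): build a dissimilarity matrix over stimuli from the model's representations, another from the neural patterns, and rank-correlate them.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
model_repr = rng.normal(size=(16, 8))    # 16 stimuli x 8 model feature dims
# Synthetic "voxel patterns" that partially reflect the model geometry:
neural_repr = model_repr @ rng.normal(size=(8, 50)) + rng.normal(size=(16, 50))

# Representational dissimilarity matrices (condensed upper triangles):
rdm_model = pdist(model_repr, metric="correlation")
rdm_neural = pdist(neural_repr, metric="correlation")

rho, p = spearmanr(rdm_model, rdm_neural)
print(f"model-neural RSA: rho = {rho:.2f} (p = {p:.1e})")
```

A reliable positive rho is the kind of evidence reported above for left anterior HPC: stimuli the learning model treats as similar evoke similar neural patterns.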
Discrete Event Simulation Modeling and Analysis of Key Leader Engagements
2012-06-01
...to offer. GreenPlayer agents require four parameters, pC, pKLK, pTK, and pRK, which give probabilities for being corrupt, having key leader knowledge, ... HandleMessageRequest component. The same parameter constraints apply to these four parameters. The parameter pRK is the same parameter from the CreatePlayers component ... whether the local Green player has resource-critical knowledge by using the parameter pRK. It schedules an EndResourceKnowledgeRequest event, passing...
van der Linden, Helma; Austin, Tony; Talmon, Jan
2009-09-01
Future-proof EHR systems must be capable of interpreting information structures for medical concepts that were not available at the build time of the system. The two-model approach of CEN 13606/openEHR using archetypes achieves this by separating generic clinical knowledge from domain-related knowledge. The presentation of this information can either itself be generic or require design-time awareness of the domain knowledge being employed. Our objective was to develop a Graphical User Interface (GUI) capable of displaying previously unencountered clinical data structures in a meaningful way. Through "reasoning by analogy" we defined an approach for the representation and implementation of "presentational knowledge". A proof-of-concept implementation was built to validate its implementability and to test for unanticipated issues. A two-model approach to specifying and generating a screen representation for archetype-based information, inspired by the two-model approach of archetypes, was developed. There is a separation between software-related display knowledge and domain-related display knowledge, and the toolkit is designed with the reuse of components in mind. The approach leads to a flexible GUI that can adapt not only to information structures that had not been predefined within the receiving system, but also to novel ways of displaying the information. We also found that, ideally, the openEHR Archetype Definition Language should receive minor adjustments to allow for generic binding.
ERIC Educational Resources Information Center
Millstone, Rachel Diana
2010-01-01
The current conceptualization of science set forth by the National Research Council (2008) is one of science as a social activity, rather than a view of science as a fixed body of knowledge. This requires teachers to consider how communication, processing, and meaning-making contribute to science learning. It also requires teachers to think deeply…
Boilermodel: A Qualitative Model-Based Reasoning System Implemented in Ada
1991-09-01
...complement to shipboard engineering training. ...investment (in terms of man-hours lost, equipment maintenance, materials, etc.) for initial training. Ongoing training is also required to sustain a... REASONING FROM MODELS: Model-based expert systems have been written in many languages and for many different architectures. Knowledge representation also...
[The Role of Nursing Education in the Advancement of the Nursing Profession].
Chang Yeh, Mei
2017-02-01
The present article discusses the role of nursing education in the advancement of the nursing profession in the context of the three facets of knowledge: generation, dissemination, and application. Nursing is an applied science and the application of knowledge in practice is the ultimate goal of the nursing profession. The reform of the healthcare delivery model requires that nurses acquire and utilize evidence-based clinical knowledge, critical thinking, effective communication, and team collaboration skills in order to ensure the quality of patient care and safety. Therefore, baccalaureate education has become the minimal requirement for pre-licensure nursing education. Schools of nursing are responsible to cultivate competent nurses to respond to the demands on the nursing workforce from the healthcare system. Attaining a master's education in nursing helps cultivate Advanced Practice Registered Nurses (APRNs) to further expand the roles and functions of the nursing profession in order to promote the quality of care in clinical practice. Nursing faculty and scholars of higher education institutions generate nursing knowledge and develop professional scholarship through research. Attaining a doctoral education in nursing cultivates faculties and scholars who will continually generate and disseminate nursing knowledge into the future.
Conceptual Models and Guidelines for Clinical Assessment of Financial Capacity.
Marson, Daniel
2016-09-01
The ability to manage financial affairs is a life skill of critical importance, and neuropsychologists are increasingly asked to assess financial capacity across a variety of settings. Sound clinical assessment of financial capacity requires knowledge and appreciation of applicable clinical conceptual models and principles. However, the literature has presented relatively little conceptual guidance for clinicians concerning financial capacity and its assessment. This article seeks to address this gap. The article presents six clinical models of financial capacity: (1) the early gerontological IADL model of Lawton, (2) the clinical skills model and (3) related cognitive psychological model developed by Marson and colleagues, (4) a financial decision-making model adapting earlier decisional capacity work of Appelbaum and Grisso, (5) a person-centered model of financial decision-making developed by Lichtenberg and colleagues, and (6) a recent model of financial capacity in the real world developed through the Institute of Medicine. Accompanying the presentation of the models is a discussion of the conceptual and practical perspectives they represent for clinician assessment. Based on the models, the article concludes by presenting a series of conceptually oriented guidelines for clinical assessment of financial capacity. In summary, sound assessment of financial capacity requires knowledge and appreciation of clinical conceptual models and principles. Awareness of such models, principles, and guidelines will strengthen and advance clinical assessment of financial capacity.
The KATE shell: An implementation of model-based control, monitor and diagnosis
NASA Technical Reports Server (NTRS)
Cornell, Matthew
1987-01-01
The conventional control and monitor software currently used by the Space Center for Space Shuttle processing has many limitations, such as high maintenance costs, limited diagnostic capabilities, and limited simulation support. These limitations motivated the development of a knowledge-based (or model-based) shell to generically control and monitor electro-mechanical systems. The knowledge base describes the system's structure and function and is used by a software shell to do real-time constraint checking, low-level control of components, diagnosis of detected faults, sensor validation, automatic generation of schematic diagrams, and automatic recovery from failures. This approach is more versatile and more powerful than the conventional hard-coded approach and offers many advantages over it, although, for systems which require high-speed reaction times or are not well understood, knowledge-based control and monitor systems may not be appropriate.
Schulthess, Pascal; van Wijk, Rob C; Krekels, Elke H J; Yates, James W T; Spaink, Herman P; van der Graaf, Piet H
2018-04-25
To advance the systems approach in pharmacology, experimental models and computational methods need to be integrated from early drug discovery onward. Here, we propose outside-in model development, a model identification technique to understand and predict the dynamics of a system without requiring prior biological and/or pharmacological knowledge. The advanced data required could be obtained by whole vertebrate, high-throughput, low-resource dose-exposure-effect experimentation with the zebrafish larva. Combinations of these innovative techniques could improve early drug discovery.
Training tomorrow's environmental problem-solvers: an integrative approach to graduate education
USDA-ARS?s Scientific Manuscript database
Environmental problems are generally complex and blind to disciplinary boundaries. Efforts to devise long-term solutions require collaborative research that integrates knowledge across historically disparate fields, yet the traditional model for training new scientists emphasizes personal independe...
van den Heuvel, Charles; Weingart, Scott B; Spelt, Nils; Nellen, Henk
2016-01-01
Science in the early modern world depended on openness in scholarly communication. On the other hand, a web of commercial, political, and religious conflicts required broad measures of secrecy and confidentiality; similar measures were integral to scholarly rivalries and plagiarism. This paper analyzes confidentiality and secrecy in intellectual and technological knowledge exchange via letters and drawings. We argue that existing approaches to understanding knowledge exchange in early modern Europe--which focus on the Republic of Letters as a unified entity of corresponding scholars--can be improved upon by analyzing multilayered networks of communication. We describe a data model to analyze circles of confidence and cultures of secrecy in intellectual and technological knowledge exchanges. Finally, we discuss the outcomes of a first experiment focusing on the question of how personal and professional/official relationships interact with confidentiality and secrecy, based on a case study of the correspondence of Hugo Grotius.
López-Gil, Juan-Miguel; Gil, Rosa; García, Roberto
2016-01-01
This work presents a Web ontology for modeling and representing the emotional, cognitive, and motivational state of online learners interacting with university systems for distance or blended education. The ontology is understood as a way to provide the required mechanisms to model reality and associate it with emotional responses, without committing to a particular way of organizing these emotional responses. Knowledge representation for the contributed ontology is performed using the Web Ontology Language (OWL), a semantic web language designed to represent rich and complex knowledge about things, groups of things, and relations between things. Because OWL is a computational logic-based language, computer programs can exploit knowledge expressed in OWL; it also facilitates sharing and reusing knowledge using the global infrastructure of the Web. The proposed ontology has been tested in the field of Massive Open Online Courses (MOOCs) to check whether it is capable of representing the emotions and motivation of students in this context of use. PMID:27199796
Developing an ontological explosion knowledge base for business continuity planning purposes.
Mohammadfam, Iraj; Kalatpour, Omid; Golmohammadi, Rostam; Khotanlou, Hasan
2013-01-01
Industrial accidents are among the most known challenges to business continuity. Many organisations have lost their reputation following devastating accidents. To manage the risks of such accidents, it is necessary to accumulate sufficient knowledge regarding their roots, causes and preventive techniques. The required knowledge might be obtained through various approaches, including databases. Unfortunately, many databases are hampered by (among other things) static data presentations, a lack of semantic features, and the inability to present accident knowledge as discrete domains. This paper proposes the use of Protégé software to develop a knowledge base for the domain of explosion accidents. Such a structure has a higher capability to improve information retrieval compared with common accident databases. To accomplish this goal, a knowledge management process model was followed. The ontological explosion knowledge base (EKB) was built for further applications, including process accident knowledge retrieval and risk management. The paper will show how the EKB has a semantic feature that enables users to overcome some of the search constraints of existing accident databases.
Instructable autonomous agents. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Huffman, Scott Bradley
1994-01-01
In contrast to current intelligent systems, which must be laboriously programmed for each task they are meant to perform, instructable agents can be taught new tasks and associated knowledge. This thesis presents a general theory of learning from tutorial instruction and its use to produce an instructable agent. Tutorial instruction is a particularly powerful form of instruction, because it allows the instructor to communicate whatever kind of knowledge a student needs at whatever point it is needed. To exploit this broad flexibility, however, a tutorable agent must support a full range of interaction with its instructor to learn a full range of knowledge. Thus, unlike most machine learning tasks, which target deep learning of a single kind of knowledge from a single kind of input, tutorability requires a breadth of learning from a broad range of instructional interactions. The theory of learning from tutorial instruction presented here has two parts. First, a computational model of an intelligent agent, the problem space computational model, indicates the types of knowledge that determine an agent's performance, and thus, that should be acquirable via instruction. Second, a learning technique, called situated explanation, specifies how the agent learns general knowledge from instruction. The theory is embodied by an implemented agent, Instructo-Soar, built within the Soar architecture. Instructo-Soar is able to learn hierarchies of completely new tasks, to extend task knowledge to apply in new situations, and in fact to acquire every type of knowledge it uses during task performance (control knowledge, knowledge of operators' effects, state inferences, etc.) from interactive natural language instructions. This variety of learning occurs by applying the situated explanation technique to a variety of instructional interactions involving a variety of types of instructions (commands, statements, conditionals, etc.). By taking seriously the requirements of flexible tutorial instruction, Instructo-Soar demonstrates a breadth of interaction and learning capabilities that goes beyond previous instructable systems, such as learning apprentice systems. Instructo-Soar's techniques could form the basis for future 'instructable technologies' that come equipped with basic capabilities, and can be taught by novice users to perform any number of desired tasks.
Rasmussen's model of human behavior in laparoscopy training.
Wentink, M; Stassen, L P S; Alwayn, I; Hosman, R J A W; Stassen, H G
2003-08-01
Compared to aviation, where virtual reality (VR) training has been standardized and simulators have proven their benefits, the objectives, needs, and means of VR training in minimally invasive surgery (MIS) still have to be established. The aim of the study presented is to introduce Rasmussen's model of human behavior as a practical framework for the definition of the training objectives, needs, and means in MIS. Rasmussen distinguishes three levels of human behavior: skill-, rule-, and knowledge-based behavior. The training needs of a laparoscopic novice can be determined by identifying the specific skill-, rule-, and knowledge-based behavior that is required for performing safe laparoscopy. Future objectives of VR laparoscopy trainers should address all three levels of behavior. Although most commercially available laparoscopy simulators aim at training skill-based behavior, it is especially the training of knowledge-based behavior during surgical complications that will improve safety levels. However, the cost and complexity of a training means increase when the training objectives proceed from the training of skill-based behavior to the training of complex knowledge-based behavior. In aviation, human behavior models have been used successfully to integrate the training of skill-, rule-, and knowledge-based behavior in a full flight simulator. Understanding surgeon behavior is one of the first steps towards a future full-scale laparoscopy simulator.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulvatunyou, Boonserm; Wysk, Richard A.; Cho, Hyunbo
2004-06-01
In today's global manufacturing environment, manufacturing functions are distributed as never before. Design, engineering, fabrication, and assembly of new products are done routinely in many different enterprises scattered around the world. Successful business transactions require the sharing of design and engineering data on an unprecedented scale. This paper describes a framework that facilitates the collaboration of engineering tasks, particularly process planning and analysis, to support such globalized manufacturing activities. The information models of data and the software components that integrate those information models are described. The integration framework uses an Integrated Product and Process Data (IPPD) representation called a Resource Independent Operation Summary (RIOS) to facilitate the communication of business and manufacturing requirements. Hierarchical process modeling, process planning decomposition and an augmented AND/OR directed graph are used in this representation. The Resource Specific Process Planning (RSPP) module assigns required equipment and tools, selects process parameters, and determines manufacturing costs based on two-level hierarchical RIOS data. The shop floor knowledge (resource and process knowledge) and a hybrid approach (heuristic and linear programming) to linearize the AND/OR graph provide the basis for the planning. Finally, a prototype system is developed and demonstrated with an exemplary part. Java and XML (Extensible Markup Language) are used to ensure software and information portability.
Galiano, Daniel; Bernardo-Silva, Jorge; de Freitas, Thales R. O.
2014-01-01
Conservation of small mammals requires knowledge of the genetically and ecologically meaningful spatial scales at which species respond to habitat modifications. Conservation strategies can be improved through the use of ecological niche models and genetic data to classify areas of high environmental suitability. In this study, we applied a Maxent model integrated with genetic information (nucleotide diversity, haplotype diversity and Fu's Fs neutrality tests) to evaluate potential genetic pool populations with highly suitable areas for two parapatric endangered species of tuco-tucos (Ctenomys minutus and C. lami). Our results demonstrated that both species were largely influenced by vegetation and soil variables at a landscape scale and inhabit a highly specific niche. Ctenomys minutus was also influenced by altitude; the species was associated with low altitudes (sea level). Our model of genetic data associated with environmental suitability indicates that the genetic pool data were associated with highly suitable areas for C. minutus. This pattern was not evident for C. lami, but this outcome could be a consequence of the restricted range of the species. The preservation of species requires not only detailed knowledge of their natural history and genetic structure but also information on the availability of suitable areas where species can survive, and such knowledge can aid significantly in conservation planning. This finding reinforces the use of these two techniques for planning conservation actions. PMID:24819251
NASA Astrophysics Data System (ADS)
Chang, Daniel Y.; Rowe, Neil C.
2013-05-01
While conducting cutting-edge research in a specific domain, we realize that (1) requirements clarity and correctness are crucial to our success [1], (2) hardware is hard to change, so most work is in software requirements development, coding and testing [2], (3) requirements are constantly changing, so that configurability, reusability, scalability, adaptability, modularity and testability are important non-functional attributes [3], (4) cross-domain knowledge is necessary for complex systems [4], and (5) if our research is successful, the results could be applied to other domains with similar problems. In this paper, we propose to use model-driven requirements engineering (MDRE) to model and guide our requirements and development, since models are easy to understand, execute, and modify. The domain for our research is Electronic Warfare (EW) real-time ultra-wide instantaneous bandwidth (IBW) signal simulation. The proposed four MDRE models are (1) Switch-and-Filter architecture, (2) multiple parallel data bit streams alignment, (3) post-ADC and pre-DAC bits re-mapping, and (4) Discrete Fourier Transform (DFT) filter bank. This research is unique in that the instantaneous bandwidth we are dealing with is in the gigahertz range instead of the conventional megahertz range.
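As a sketch of the fourth model, the snippet below implements a minimal (non-polyphase) DFT filter bank in numpy that splits a signal into k subband channels; the signal, channel count and windowing are illustrative assumptions, not parameters from the paper.

```python
# Minimal DFT filter bank sketch: windowed, critically sampled block FFTs.
import numpy as np

def dft_filter_bank(x, k=8):
    """Split signal x into k subband channels, one FFT bin per channel."""
    n_blocks = len(x) // k
    blocks = x[: n_blocks * k].reshape(n_blocks, k)
    window = np.hanning(k)
    # Row i holds the k channel outputs at block time i.
    return np.fft.fft(blocks * window, axis=1)

fs = 1000.0                                  # illustrative sample rate (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 300 * t)
channels = dft_filter_bank(x, k=8)
print(channels.shape)                        # (125, 8): 125 blocks x 8 channels
```

A production EW channelizer for gigahertz bandwidths would use a polyphase implementation; this block only shows the DFT-filter-bank principle.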
A knowledge based search tool for performance measures in health care systems.
Beyan, Oya D; Baykal, Nazife
2012-02-01
Performance measurement is vital for improving health care systems. However, we are still far from having accepted performance measurement models, and researchers and developers are seeking comparable performance indicators. We developed an intelligent search tool to identify appropriate measures for specific requirements by matching diverse care settings. We reviewed the literature and analyzed 229 performance measurement studies published after 2000. These studies were evaluated with an original theoretical framework and stored in a database. A semantic network was designed for representing domain knowledge and supporting reasoning, and we applied knowledge-based decision support techniques to cope with uncertainty problems. The result is a tool that simplifies the performance indicator search process and provides the most relevant indicators by employing knowledge-based systems.
AI and simulation: What can they learn from each other
NASA Technical Reports Server (NTRS)
Colombano, Silvano P.
1988-01-01
Simulation and Artificial Intelligence share a fertile common ground, both from a practical and from a conceptual point of view. Strengths and weaknesses of both Knowledge Based Systems and Modeling and Simulation are examined, and three types of systems that combine the strengths of both technologies are discussed. These types of systems are a practical starting point; however, the real strengths of both technologies will be exploited only when they are combined in a common knowledge representation paradigm. From a deeper conceptual point of view, one might argue that the ability to reason from a set of facts (i.e., an expert system) is less representative of human reasoning than the ability to make a model of the world, change it as required, and derive conclusions about the expected behavior of world entities. This is a fundamental problem in AI, and Modeling Theory can contribute to its solution. The application of Knowledge Engineering technology to a Distributed Processing Network Simulator (DPNS) is discussed.
Predicting Correctness of Problem Solving from Low-Level Log Data in Intelligent Tutoring Systems
ERIC Educational Resources Information Center
Cetintas, Suleyman; Si, Luo; Xin, Yan Ping; Hord, Casey
2009-01-01
This paper proposes a learning based method that can automatically determine how likely a student is to give a correct answer to a problem in an intelligent tutoring system. Only log files that record students' actions with the system are used to train the model, therefore the modeling process doesn't require expert knowledge for identifying…
A conceptual model of plant responses to climate with implications for monitoring ecosystem change
C. David Bertelsen
2013-01-01
Climate change is affecting natural systems on a global scale and is particularly rapid in the Southwest. It is important to identify impacts of a changing climate before ecosystems become unstable. Recognizing plant responses to climate change requires knowledge of both species present and plant responses to variable climatic conditions. A conceptual model derived...
Model-centric approaches for the development of health information systems.
Tuomainen, Mika; Mykkänen, Juha; Luostarinen, Heli; Pöyhölä, Assi; Paakkanen, Esa
2007-01-01
Modeling is used increasingly in healthcare to increase shared knowledge, to improve processes, and to document the requirements of solutions related to health information systems (HIS). There are numerous modeling approaches which aim to support these goals, but a careful assessment of their strengths, weaknesses and deficiencies is needed. In this paper, we compare three model-centric approaches in the context of HIS development: the Model-Driven Architecture, Business Process Modeling with BPMN and BPEL, and the HL7 Development Framework. The comparison reveals that all these approaches are viable candidates for the development of HIS. However, they have distinct strengths and abstraction levels, they require local and project-specific adaptation, and they offer varying levels of automation. In addition, illustration of the solutions to the end users must be improved.
Horne, Avril C; Szemis, Joanna M; Webb, J Angus; Kaur, Simranjit; Stewardson, Michael J; Bond, Nick; Nathan, Rory
2018-03-01
One important aspect of adaptive management is the clear and transparent documentation of hypotheses, together with the use of predictive models (complete with any assumptions) to test those hypotheses. Documentation of such models can improve the ability to learn from management decisions and supports dialog between stakeholders. A key challenge is how best to represent the existing scientific knowledge to support decision-making. Such challenges are currently emerging in the field of environmental water management in Australia, where managers are required to prioritize the delivery of environmental water on an annual basis, using a transparent and evidence-based decision framework. We argue that the development of models of ecological responses to environmental water use needs to support both the planning and implementation cycles of adaptive management. Here we demonstrate an approach based on the use of Conditional Probability Networks to translate existing ecological knowledge into quantitative models that include temporal dynamics to support adaptive environmental flow management. It equally extends to other applications where knowledge is incomplete, but decisions must still be made.
The application of SSADM to modelling the logical structure of proteins.
Saldanha, J; Eccles, J
1991-10-01
A logical design that describes the overall structure of proteins, together with a more detailed design describing secondary and some supersecondary structures, has been constructed using the computer-aided software engineering (CASE) tool, Auto-mate. Auto-mate embodies the philosophy of the Structured Systems Analysis and Design Method (SSADM) which enables the logical design of computer systems. Our design will facilitate the building of large information systems, such as databases and knowledge bases in the field of protein structure, by the derivation of system requirements from our logical model prior to producing the final physical system. In addition, the study has highlighted the ease of employing SSADM as a formalism in which to conduct the transferral of concepts from an expert into a design for a knowledge-based system that can be implemented on a computer (the knowledge-engineering exercise). It has been demonstrated how SSADM techniques may be extended for the purpose of modelling the constituent Prolog rules. This facilitates the integration of the logical system design model with the derived knowledge-based system.
Using robust Bayesian network to estimate the residuals of fluoroquinolone antibiotic in soil.
Li, Xuewen; Xie, Yunfeng; Li, Lianfa; Yang, Xunfeng; Wang, Ning; Wang, Jinfeng
2015-11-01
Prediction of antibiotic pollution and its consequences is difficult, due to the uncertainties and complexities associated with multiple related factors. This article employed domain knowledge and spatial data to construct a Bayesian network (BN) model to assess fluoroquinolone antibiotic (FQs) pollution in the soil of an intensive vegetable cultivation area. The results show: (1) the relationships between FQs pollution and contributory factors: three factors (cultivation methods, crop rotations, and chicken manure types) were consistently identified as predictors in the topological structures of the three FQs, indicating their importance in FQs pollution; deduced with domain knowledge, the cultivation methods are determined by the crop rotations, which require different nutrients (derived from the manure) according to different plant biomass; (2) the performance of the BN model: the integrative robust Bayesian network model achieved the highest detection probability (pd) of high-risk areas and the largest receiver operating characteristic (ROC) area, since it incorporates domain knowledge and model uncertainty. These encouraging findings have implications for the use of BNs as a robust approach to assessing FQs pollution and for informing decisions on appropriate remedial measures.
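A hedged sketch of the kind of discrete Bayesian network the article describes is given below using the pgmpy library. The structure follows the abstract (crop rotation drives cultivation method; cultivation method and manure type drive FQ residuals), but every probability value is invented for illustration.

```python
# Toy BN with the abstract's predictors; all CPD numbers are made up.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([
    ("Rotation", "Cultivation"),     # rotations determine cultivation method
    ("Cultivation", "FQ_high"),
    ("Manure", "FQ_high"),
])
model.add_cpds(
    TabularCPD("Rotation", 2, [[0.6], [0.4]]),
    TabularCPD("Manure", 2, [[0.5], [0.5]]),
    TabularCPD("Cultivation", 2, [[0.8, 0.3], [0.2, 0.7]],
               evidence=["Rotation"], evidence_card=[2]),
    TabularCPD("FQ_high", 2,
               [[0.9, 0.6, 0.5, 0.1],    # P(low residual | parents)
                [0.1, 0.4, 0.5, 0.9]],   # P(high residual | parents)
               evidence=["Cultivation", "Manure"], evidence_card=[2, 2]),
)
assert model.check_model()

# Posterior risk of a high FQ residual given the observed cultivation method.
print(VariableElimination(model).query(["FQ_high"], evidence={"Cultivation": 1}))
```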
Customer-centered careflow modeling based on guidelines.
Huang, Biqing; Zhu, Peng; Wu, Cheng
2012-10-01
In contemporary society, customer-centered health care, which stresses customer participation and long-term tailored care, is inevitably becoming a trend. Compared with the hospital- or physician-centered healthcare process, the customer-centered healthcare process requires more knowledge, and modeling such a process is extremely complex. Thus, building a care process model for an individual customer is cost prohibitive. In addition, during the execution of a care process model, the information system should have the flexibility to modify the model so that it adapts to changes in the healthcare process. Therefore, supporting the process in a flexible, cost-effective way is a key challenge for information technology. To meet this challenge, first, we analyze the various kinds of knowledge used in process modeling, illustrate their characteristics, and detail their roles and effects in careflow modeling. Secondly, we propose a methodology to manage the lifecycle of healthcare process modeling, with which models can be built gradually with convenience and efficiency. In this lifecycle, different levels of process models are established based on the kinds of knowledge involved, and a diffusion strategy for these process models is designed. Thirdly, the architecture and a prototype of the system supporting the process modeling and its lifecycle are given. This careflow system also considers the compatibility of legacy systems and authority problems. Finally, an example is provided to demonstrate implementation of the careflow system.
Systematic analysis of signaling pathways using an integrative environment.
Visvanathan, Mahesh; Breit, Marc; Pfeifer, Bernhard; Baumgartner, Christian; Modre-Osprian, Robert; Tilg, Bernhard
2007-01-01
Understanding the biological processes of signaling pathways as a whole system requires an integrative software environment with comprehensive capabilities: tools for pathway design, visualization and simulation, together with a knowledge base concerning signaling pathways. In this paper we introduce a new integrative environment for the systematic analysis of signaling pathways. This system includes environments for pathway design, visualization and simulation, and a knowledge base that combines biological and modeling information concerning signaling pathways, providing a basic understanding of the biological system, its structure and functioning. The system is designed with a client-server architecture. It contains a pathway designing environment and a simulation environment as upper layers, with a relational knowledge base as the underlying layer. The TNFa-mediated NF-kB signal transduction pathway model was designed and tested using our integrative framework, and was also useful in defining the structure of the knowledge base. Sensitivity analysis of this specific pathway was performed, providing simulation data; the model was then extended, showing promising initial results. The proposed system offers a holistic view of pathways containing biological and modeling data. It will help us to perform biological interpretation of the simulation results and thus contribute to a better understanding of the biological system for drug identification.
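To make the simulation layer concrete, here is a deliberately tiny ODE caricature of TNFa-driven NF-kB activation with a one-parameter sensitivity probe, written with scipy; the equations and rate constants are invented and are not the pathway model used in the paper.

```python
# Toy two-state pathway: ligand decays, active NF-kB fraction rises and relaxes.
import numpy as np
from scipy.integrate import solve_ivp

def pathway(t, y, k_act, k_deact):
    tnf, nfkb = y                                  # ligand, active fraction
    d_tnf = -0.1 * tnf                             # first-order ligand decay
    d_nfkb = k_act * tnf * (1 - nfkb) - k_deact * nfkb
    return [d_tnf, d_nfkb]

t_eval = np.linspace(0, 100, 200)
base = solve_ivp(pathway, (0, 100), [1.0, 0.0], args=(0.5, 0.05), t_eval=t_eval)

# Crude local sensitivity: perturb the activation rate by 1% and compare peaks.
pert = solve_ivp(pathway, (0, 100), [1.0, 0.0], args=(0.505, 0.05), t_eval=t_eval)
print(base.y[1].max(), pert.y[1].max())
```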
Modeling to Mars: a NASA Model Based Systems Engineering Pathfinder Effort
NASA Technical Reports Server (NTRS)
Phojanamongkolkij, Nipa; Lee, Kristopher A.; Miller, Scott T.; Vorndran, Kenneth A.; Vaden, Karl R.; Ross, Eric P.; Powell, Bobby C.; Moses, Robert W.
2017-01-01
The NASA Engineering Safety Center (NESC) Systems Engineering (SE) Technical Discipline Team (TDT) initiated the Model Based Systems Engineering (MBSE) Pathfinder effort in FY16. The goals and objectives of the MBSE Pathfinder include developing and advancing MBSE capability across NASA, applying MBSE to real NASA issues, and capturing issues and opportunities surrounding MBSE. The Pathfinder effort consisted of four teams, with each team addressing a particular focus area. This paper focuses on Pathfinder team 1 with the focus area of architectures and mission campaigns. These efforts covered the timeframe of February 2016 through September 2016. The team comprised eight members from seven NASA Centers (Glenn Research Center, Langley Research Center, Ames Research Center, Goddard Space Flight Center IV&V Facility, Johnson Space Center, Marshall Space Flight Center, and Stennis Space Center). Collectively, the team had varying levels of knowledge, skills and expertise in systems engineering and MBSE. The team applied their existing and newly acquired system modeling knowledge and expertise to develop modeling products for a campaign (Program) of crew and cargo missions (Projects) to establish a human presence on Mars utilizing In-Situ Resource Utilization (ISRU). Pathfinder team 1 developed a subset of modeling products that are required for a Program System Requirement Review (SRR)/System Design Review (SDR) and Project Mission Concept Review (MCR)/SRR as defined in NASA Procedural Requirements. Additionally, Team 1 was able to perform and demonstrate some trades and constraint analyses. At the end of these efforts, over twenty lessons learned and recommended next steps have been identified.
Semantic Modeling of Requirements: Leveraging Ontologies in Systems Engineering
ERIC Educational Resources Information Center
Mir, Masood Saleem
2012-01-01
The interdisciplinary nature of "Systems Engineering" (SE), having "stakeholders" from diverse domains with orthogonal facets, and need to consider all stages of "lifecycle" of system during conception, can benefit tremendously by employing "Knowledge Engineering" (KE) to achieve semantic agreement among all…
Agoncillo, A V; Mejino, J L; Rosse, C
1999-01-01
A principled and logical representation of the structure of the human body has led to conflicts with traditional representations of the same knowledge by anatomy textbooks. The examples which illustrate resolution of these conflicts suggest that stricter requirements must be met for semantic consistency, expressivity and specificity by knowledge sources intended to support inference than by textbooks and term lists. These next-generation resources should influence traditional concept representation, rather than be constrained by convention.
Perspectives on scientific and technological literacy in Tonga: Moving forward in the 21st century
NASA Astrophysics Data System (ADS)
Palefau, Tevita Hala
Tonga has undergone complex changes in the last three decades. Disturbing numbers of young Tongans have inadequate knowledge of traditional science and technology and are ill-equipped to work in, contribute to and profit from our society. In short, they lack sufficient background knowledge to acquire the training, skills and understanding that are needed in the 21st century. The purpose of this research is to assist the formulation of a national science and technology curriculum. Hence, views of life in Tonga and opinions about Tonga's needs held by three stakeholder groups (traditional, workplaces, public) were paramount in this study. How these stakeholders see Tonga in terms of science and technology needs will contribute substantially to the Ministry of Education's decisions for this century. Based on critical evaluation of international literature and how scientific and technological literacy (STL) is crucial to Tongan society, a model 'TAP-STL' is established as the study framework: 'TAP' for Traditional, Academic and Public STL, to promote national development. This qualitative case study employs an interview method to collect data from twelve knowledgeable participants selected by reputational sampling from across the kingdom. By exploring their understanding of STL requirements, the study sought to identify any shortfall between the science and technology provided in school and that needed for maintenance of traditional culture, effective participation in Tonga's workplaces and public understanding. The study produced findings under these categories: understanding of traditional knowledge and skills needed to preserve Tongan cultural identity; understanding needed for fishing, handicrafts and everyday maintenance, together with essential health knowledge and skills; and required understanding of public information campaigns related to health, domestic goods, drugs and environment that contribute to responsible citizenship. The study also identified personal qualities, safety policies, and market, management and budget skills required for national development. This STL knowledge and these skills are translated into an appropriate Model for the Tonga Science and Technology Curriculum. The thesis concludes with a proposal to reorganize the science and technology curriculum: the establishment of two streams, an academic stream for university preparation and a vocational stream for workplace and citizenship preparation, and of two purpose-built programs, community involvement and workplace apprenticeship, for all students.
NASA Astrophysics Data System (ADS)
Bandaragoda, C.; Castronova, A. M.; Phuong, J.; Istanbulluoglu, E.; Strauch, R. L.; Nudurupati, S. S.; Tarboton, D. G.; Wang, S. W.; Yin, D.; Barnhart, K. R.; Tucker, G. E.; Hutton, E.; Hobley, D. E. J.; Gasparini, N. M.; Adams, J. M.
2017-12-01
The ability to test hypotheses about hydrology, geomorphology and atmospheric processes is invaluable to research in the era of big data. Although community resources are available, there remain significant educational, logistical and time-investment barriers to their use. Knowledge infrastructure is an emerging intellectual framework for understanding how people create, share and distribute knowledge - activities that have been dramatically transformed by Internet technologies. In addition to the technical and social components of a cyberinfrastructure system, knowledge infrastructure considers the educational, institutional, and open-source governance components required to advance knowledge. We are designing an infrastructure environment that lowers common barriers to reproducing modeling experiments for earth surface investigation. Landlab is an open-source modeling toolkit for building, coupling, and exploring two-dimensional numerical models. HydroShare is an online collaborative environment for sharing hydrologic data and models. CyberGIS-Jupyter is an innovative cyberGIS framework for achieving data-intensive, reproducible, and scalable geospatial analytics using the Jupyter Notebook, based on ROGER - the first cyberGIS supercomputer - so that models can be elastically reproduced through cloud computing approaches. Our team of geomorphologists, hydrologists, and computer geoscientists has created a new infrastructure environment that combines these three pieces of software to enable knowledge discovery. Through this novel integration, any user can interactively execute and explore their shared data and model resources. Landlab on HydroShare with CyberGIS-Jupyter supports the modeling continuum from fully developed modeling applications, through prototyping new science tools and hands-on research demonstrations for training workshops, to classroom applications. Computational geospatial models based on big data and high-performance computing can now be more efficiently developed, improved, scaled, and seamlessly reproduced among multidisciplinary users, thereby expanding the active learning curriculum and research opportunities for students in earth surface modeling and informatics.
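As a taste of the modeling continuum described above, the following minimal Landlab sketch builds a raster grid and runs a standard hillslope-diffusion component; the grid size, diffusivity and time stepping are illustrative choices, not values from any shared resource.

```python
# Minimal Landlab model: linear hillslope diffusion on a raster grid.
from landlab import RasterModelGrid
from landlab.components import LinearDiffuser

grid = RasterModelGrid((25, 40), xy_spacing=10.0)          # 25 x 40 nodes, 10 m
z = grid.add_zeros("topographic__elevation", at="node")
z += grid.x_of_node * 0.01                                 # gentle initial slope

diffuser = LinearDiffuser(grid, linear_diffusivity=0.01)   # m^2/yr, illustrative
for _ in range(500):
    diffuser.run_one_step(100.0)                           # 100-yr steps

print(z.reshape(grid.shape).max())
```

Run inside a Jupyter notebook on HydroShare, a script like this is the unit of work that the CyberGIS-Jupyter integration makes reproducible for other users.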
49 CFR 383.117 - Requirements for passenger endorsement.
Code of Federal Regulations, 2011 CFR
2011-10-01
... COMMERCIAL DRIVER'S LICENSE STANDARDS; REQUIREMENTS AND PENALTIES Required Knowledge and Skills § 383.117... following additional knowledge and skills test requirements. (a) Knowledge test. All applicants for the... procedures not otherwise specified. (b) Skills test. To obtain a passenger endorsement applicable to a...
Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F.; Musen, Mark A.
2015-01-01
The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with baseline approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) to the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation, and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks. PMID:26568745
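The decentralized-search half of OBDS can be sketched in a few lines: a greedy walker that always hops to the neighbor whose concept lies closest to the target in a background taxonomy. The graphs and the use of tree distance as the semantic distance below are toy stand-ins for the paper's Wikipedia networks and biomedical ontologies.

```python
# Toy ontology-guided decentralized (greedy) search with networkx.
import networkx as nx

def decentralized_search(network, taxonomy, start, target, max_hops=50):
    path, current = [start], start
    while current != target and len(path) <= max_hops:
        # Background knowledge: taxonomy distance stands in for semantic distance.
        current = min(network.neighbors(current),
                      key=lambda n: nx.shortest_path_length(taxonomy, n, target))
        path.append(current)
    return path

network = nx.watts_strogatz_graph(15, 4, 0.2, seed=42)   # toy article network
taxonomy = nx.balanced_tree(2, 3)                        # toy ontology, same 15 labels
print(decentralized_search(network, taxonomy, start=0, target=14))
```

Swapping in a different taxonomy changes the walker's hop choices, which is how the method represents users with different background knowledge.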
International Space Station Human Behavior and Performance Competency Model: Volume II
NASA Technical Reports Server (NTRS)
Schmidt, Lacey
2008-01-01
This document further defines the behavioral markers identified in the document "Human Behavior and Performance Competency Model" Vol. I. The Human Behavior and Performance (HBP) competencies were recommended as requirements to participate in international long duration missions, and form the basis for determining the HBP training curriculum for long duration crewmembers. This document provides details, examples, knowledge areas, and affective skills to support the use of the HBP competencies in training and evaluation. This document lists examples and details specific to HBP competencies required of astronauts/cosmonauts who participate in ISS expedition and other international long-duration missions. Please note that this model does not encompass all competencies required. While technical competencies are critical for crewmembers, they are beyond the scope of this document. Additionally, the competencies in this model (and subsequent objectives) are not intended to limit the internal activities or training programs of any international partner.
An AI-based approach to structural damage identification by modal analysis
NASA Technical Reports Server (NTRS)
Glass, B. J.; Hanagud, S.
1990-01-01
Flexible-structure damage is presently addressed by a combined model- and parameter-identification approach which employs the AI methodologies of classification, heuristic search, and object-oriented model knowledge representation. The conditions for model-space search convergence to the best model are discussed in terms of search-tree organization and initial model parameter error. In the illustrative example of a truss structure presented, the use of both model and parameter identification is shown to lead to smaller parameter corrections than would be required by parameter identification alone.
Linking Earth Observations and Models to Societal Information Needs: The Case of Coastal Flooding
NASA Astrophysics Data System (ADS)
Buzzanga, B. A.; Plag, H. P.
2016-12-01
Coastal flooding is expected to increase in many areas due to sea level rise (SLR). Many societal applications such as emergency planning and designing public services depend on information on how the flooding spectrum may change as a result of SLR. To identify the societal information needs a conceptual model is needed that identifies the key stakeholders, applications, and information and observation needs. In the context of the development of the Global Earth Observation System of Systems (GEOSS), which is implemented by the Group on Earth Observations (GEO), the Socio-Economic and Environmental Information Needs Knowledge Base (SEE-IN KB) is developed as part of the GEOSS Knowledge Base. A core function of the SEE-IN KB is to facilitate the linkage of societal information needs to observations, models, information and knowledge. To achieve this, the SEE-IN KB collects information on objects such as user types, observational requirements, societal goals, models, and datasets. Comprehensive information concerning the interconnections between instances of these objects is used to capture the connectivity and to establish a conceptual model as a network of networks. The captured connectivity can be used in searches to allow users to discover products and services for their information needs, and providers to search for users and applications benefiting from their products. It also allows users to answer "What if?" questions and supports knowledge creation. We have used the SEE-IN KB to develop a conceptual model capturing the stakeholders in coastal flooding and their information needs, and to link these elements to objects. We show how the knowledge base enables the transition of scientific data to usable information by connecting individuals such as city managers to flood maps. Within the knowledge base, these same users can request information that improves their ability to make specific planning decisions. These needs are linked to entities within research institutions that have the capabilities to meet them. Further, current research such as that investigating precipitation-induced flooding under different SLR scenarios is linked to the users who benefit from the knowledge, effectively creating a bi-directional channel between science and society that increases knowledge and improves foresight.
Hunt, C Anthony; Kennedy, Ryan C; Kim, Sean H J; Ropella, Glen E P
2013-01-01
A crisis continues to brew within the pharmaceutical research and development (R&D) enterprise: productivity continues declining as costs rise, despite ongoing, often dramatic scientific and technical advances. To reverse this trend, we offer various suggestions for both the expansion and broader adoption of modeling and simulation (M&S) methods. We suggest strategies and scenarios intended to enable new M&S use cases that directly engage R&D knowledge generation and build actionable mechanistic insight, thereby opening the door to enhanced productivity. What M&S requirements must be satisfied to access and open the door, and begin reversing the productivity decline? Can current methods and tools fulfill the requirements, or are new methods necessary? We draw on the relevant, recent literature to provide and explore answers. In so doing, we identify essential, key roles for agent-based and other methods. We assemble a list of requirements necessary for M&S to meet the diverse needs distilled from a collection of research, review, and opinion articles. We argue that to realize its full potential, M&S should be actualized within a larger information technology framework—a dynamic knowledge repository—wherein models of various types execute, evolve, and increase in accuracy over time. We offer some details of the issues that must be addressed for such a repository to accrue the capabilities needed to reverse the productivity decline. PMID:23737142
Building beef cow nutritional programs with the 1996 NRC beef cattle requirements model.
Lardy, G P; Adams, D C; Klopfenstein, T J; Patterson, H H
2004-01-01
Designing a sound cow-calf nutritional program requires knowledge of nutrient requirements, diet quality, and intake. Effectively using the NRC (1996) beef cattle requirements model (1996NRC) also requires knowledge of dietary degradable intake protein (DIP) and microbial efficiency. Objectives of this paper are to 1) describe a framework in which 1996NRC-applicable data can be generated, 2) describe seasonal changes in nutrients on native range, 3) use the 1996NRC to predict nutrient balance for cattle grazing these forages, and 4) make recommendations for using the 1996NRC for forage-fed cattle. Extrusa samples were collected over 2 yr on native upland range and subirrigated meadow in the Nebraska Sandhills. Samples were analyzed for CP, in vitro OM digestibility (IVOMD), and DIP. Regression equations to predict nutrients were developed from these data. The 1996NRC was used to predict nutrient balances based on the dietary nutrient analyses. Recommendations for model users were also developed. On subirrigated meadow, CP and IVOMD increased rapidly during March and April. On native range, CP and IVOMD increased from April through June but decreased rapidly from August through September. Degradable intake protein (DM basis) followed trends similar to CP for both native range and subirrigated meadow. Predicted nutrient balances for spring- and summer-calving cows agreed with reported values in the literature, provided that IVOMD values were converted to DE before use in the model (1.07 x IVOMD - 8.13). When the IVOMD-to-DE conversion was not used, the model gave unrealistically high NE(m) balances. To effectively use the 1996NRC to estimate protein requirements, users should focus on three key estimates: DIP, microbial efficiency, and TDN intake. Consequently, efforts should be focused on adequately describing seasonal changes in forage nutrient content. In order to increase use of the 1996NRC, research is needed in the following areas: 1) cost-effective and accurate commercial laboratory procedures to estimate DIP, 2) reliable estimates or indicators of microbial efficiency for various forage types and qualities, 3) improved estimates of dietary TDN for forage-based diets, 4) validation work to improve estimates of DIP and MP requirements, and 5) incorporation of nitrogen recycling estimates.
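The one quantitative relation the abstract reports, the IVOMD-to-DE conversion, is simple enough to encode directly; the helper below is illustrative scaffolding around the published regression, with the unit interpretation (both quantities as percentages) an assumption on our part.

```python
# DE from in vitro OM digestibility, per the regression quoted above:
# DE = 1.07 x IVOMD - 8.13 (both treated as percentages here; an assumption).
def ivomd_to_de(ivomd_pct: float) -> float:
    if not 0.0 <= ivomd_pct <= 100.0:
        raise ValueError("IVOMD must be a percentage")
    return 1.07 * ivomd_pct - 8.13

print(round(ivomd_to_de(62.0), 2))   # 58.21, e.g. early-summer native range
```

As the abstract notes, skipping this conversion and feeding IVOMD to the model directly yields unrealistically high NE(m) balances.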
Garber, Susan L
Every day, in clinics and hospitals around the world, occupational therapists care for patients with serious problems requiring viable solutions. Each patient is unique, and his or her problem does not necessarily correspond to existing practice models. Practitioners must adapt standard approaches to provide effective outcomes, yet problems exist for which few or no beneficial approaches have been identified. Such clinical issues require solutions to be generated de novo from the practitioner's body of knowledge and past experience. Yet, no single new intervention can be used without prior validation of its efficacy. Only a therapist with a prepared mind can accept such challenges, recognize what is known and not yet known, design studies to acquire that needed knowledge, and translate it into successful clinical treatment strategies. The occupational therapist with a prepared mind is one willing to seize unexpected opportunities and construct new paradigms of practice. Innovation through scientific inquiry requires a prepared mind.
Global tropospheric chemistry: A plan for action
NASA Technical Reports Server (NTRS)
1984-01-01
Prompted by an increasing awareness of the influence of human activity on the chemistry of the global troposphere, a panel was formed to (1) assess the requirement for a global study of the chemistry of the troposphere; (2) develop a scientific strategy for a comprehensive plan taking into account the existing and projected programs of the government; (3) assess the requirements of a global study in terms of theoretical knowledge, numerical modeling, instrumentation, observing platforms, ground-level observational techniques, and other related needs; and (4) outline the appropriate sequence and coordination required to achieve the most effective utilization of available resources. Part 1 presents a coordinated national blueprint for scientific investigations of biogeochemical cycles in the global troposphere. Part 2 presents much of the background information on present knowledge and gaps in the understanding of tropospheric chemical cycles and processes from which the proposed program was developed.
User's manual for the Simulated Life Analysis of Vehicle Elements (SLAVE) model
NASA Technical Reports Server (NTRS)
Paul, D. D., Jr.
1972-01-01
The simulated life analysis of vehicle elements model was designed to perform statistical simulation studies for any constant loss rate. The outputs of the model consist of the total number of stages required, stages successfully completing their lifetime, and average stage flight life. This report contains a complete description of the model. Users' instructions and interpretation of input and output data are presented such that a user with little or no prior programming knowledge can successfully implement the program.
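A hedged Monte Carlo sketch of what such a constant-loss-rate simulation computes is shown below: stages are flown until a mission total is met, each flight risks loss at a constant rate, and the run reports the total stages required, stages completing their design lifetime, and average stage flight life. All parameter values are invented; they are not taken from the SLAVE manual.

```python
# Constant-loss-rate stage-life simulation (illustrative parameters only).
import random

def simulate_stages(flights_needed=200, design_life=10, loss_rate=0.02, seed=1):
    random.seed(seed)
    stages_built, survivors, lives, flown = 0, 0, [], 0
    while flown < flights_needed:
        stages_built += 1
        life, lost = 0, False
        while life < design_life and flown < flights_needed:
            if random.random() < loss_rate:   # constant per-flight loss rate
                lost = True
                break
            life += 1
            flown += 1
        if not lost and life == design_life:
            survivors += 1                    # stage retired at full lifetime
        lives.append(life)
    return stages_built, survivors, sum(lives) / len(lives)

print(simulate_stages())  # (stages required, lifetime completions, avg flight life)
```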
Single unit approaches to human vision and memory.
Kreiman, Gabriel
2007-08-01
Research on the visual system focuses on using electrophysiology, pharmacology and other invasive tools in animal models. Non-invasive tools such as scalp electroencephalography and imaging allow examining humans but show a much lower spatial and/or temporal resolution. Under special clinical conditions, it is possible to monitor single-unit activity in humans when invasive procedures are required due to particular pathological conditions including epilepsy and Parkinson's disease. We review our knowledge about the visual system and visual memories in the human brain at the single neuron level. The properties of the human brain seem to be broadly compatible with the knowledge derived from animal models. The possibility of examining high-resolution brain activity in conscious human subjects allows investigators to ask novel questions that are challenging to address in animal models.
Requirements for the formal representation of pathophysiology mechanisms by clinicians
Helvensteijn, M.; Kokash, N.; Martorelli, I.; Sarwar, D.; Islam, S.; Grenon, P.; Hunter, P.
2016-01-01
Knowledge of multiscale mechanisms in pathophysiology is the bedrock of clinical practice. If quantitative methods, predicting patient-specific behaviour of these pathophysiology mechanisms, are to be brought to bear on clinical decision-making, the Human Physiome community and Clinical community must share a common computational blueprint for pathophysiology mechanisms. A number of obstacles stand in the way of this sharing—not least the technical and operational challenges that must be overcome to ensure that (i) the explicit biological meanings of the Physiome's quantitative methods to represent mechanisms are open to articulation, verification and study by clinicians, and that (ii) clinicians are given the tools and training to explicitly express disease manifestations in direct contribution to modelling. To this end, the Physiome and Clinical communities must co-develop a common computational toolkit, based on this blueprint, to bridge the representation of knowledge of pathophysiology mechanisms (a) that is implicitly depicted in electronic health records and the literature, with (b) that found in mathematical models explicitly describing mechanisms. In particular, this paper makes use of a step-wise description of a specific disease mechanism as a means to elicit the requirements of representing pathophysiological meaning explicitly. The computational blueprint developed from these requirements addresses the Clinical community goals to (i) organize and manage healthcare resources in terms of relevant disease-related knowledge of mechanisms and (ii) train the next generation of physicians in the application of quantitative methods relevant to their research and practice. PMID:27051514
Operations Assessment of Launch Vehicle Architectures using Activity Based Cost Models
NASA Technical Reports Server (NTRS)
Ruiz-Torres, Alex J.; McCleskey, Carey
2000-01-01
The growing emphasis on affordability for space transportation systems requires the assessment of new space vehicles for all life cycle activities, from design and development, through manufacturing and operations. This paper addresses the operational assessment of launch vehicles, focusing on modeling the ground support requirements of a vehicle architecture, and estimating the resulting costs and flight rate. This paper proposes the use of Activity Based Costing (ABC) modeling for this assessment. The model uses expert knowledge to determine the activities, the activity times and the activity costs based on vehicle design characteristics. The approach provides several advantages over current approaches to vehicle architecture assessment, including easier validation and allowing vehicle designers to understand the cost and cycle-time drivers.
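The essence of the ABC approach is a table of expert-estimated ground activities rolled up into cost per flight and an achievable flight rate; the sketch below shows that roll-up with invented activities, durations and labor rates.

```python
# Illustrative activity-based cost roll-up for one serial processing flow.
activities = [
    # (activity, serial hours per flow, cost per hour in dollars)
    ("vehicle_safing_and_inspection", 120.0,  900.0),
    ("payload_integration",            80.0, 1100.0),
    ("propellant_servicing",           40.0, 1500.0),
    ("launch_countdown",               24.0, 2000.0),
]

cycle_hours = sum(hours for _, hours, _ in activities)
cost_per_flight = sum(hours * rate for _, hours, rate in activities)
flights_per_year = (24 * 365) / cycle_hours        # single serial flow, no margin

print(f"cycle: {cycle_hours:.0f} h, cost/flight: ${cost_per_flight:,.0f}, "
      f"max rate: {flights_per_year:.1f} flights/yr")
```

In a real assessment each activity's hours and cost would be driven by vehicle design characteristics (propellant type, part count, access provisions) rather than fixed constants.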
Naranjo, Ramon C.
2017-01-01
Groundwater-flow models are often calibrated using a limited number of observations relative to the unknown inputs required for the model. This is especially true for models that simulate groundwater surface-water interactions. In this case, subsurface temperature sensors can be an efficient means for collecting long-term data that capture the transient nature of physical processes such as seepage losses. A continuous and spatially dense network of diverse observation data can be used to improve knowledge of important physical drivers and to conceptualize and calibrate variably saturated groundwater-flow models. An example is presented in which the results of such an analysis were used to help guide irrigation districts and water-management decisions on costly upgrades to conveyance systems to improve water usage and farm productivity, and on restoration efforts to improve downstream water quality and ecosystems.
Stone, Vathsala I; Lane, Joseph P
2012-05-16
Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aspects of public policies, in addition to expanding the knowledge base. For example, beneficial healthcare services and devices are expected to result from investments in research and development (R&D) programs, which assume a causal link to commercial innovation. Such programs are increasingly held accountable for evidence of impact - that is, innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics about research outputs (published discoveries), with less focus on transfer metrics about development outputs (patented prototypes) and almost none on econometrics related to production outputs (commercial innovations). This disparity is particularly problematic for the expressed intent of such programs, as most measurable socioeconomic benefits result from the last category of outputs. This paper proposes a conceptual framework integrating all three knowledge-generating methods into a logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, the integration of the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor. The resulting logic model framework explicitly traces the progress of knowledge from inputs, following it through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), as it generates the intended socio-beneficial impacts. It is a hybrid model for generating technology-based innovations, where best practices in new product development merge with a widely accepted knowledge-translation approach. Given the emphasis on evidence-based practice in the medical and health fields and "bench to bedside" expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes. High-cost/high-risk industries like healthcare require the market deployment of technology-based innovations to improve domestic society in a global economy. An appropriate balance of relevance and rigor in research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model to effectively allocate public funds and thereby deliberately and systematically accomplish socioeconomic benefits.
Considerations on Educating Engineers in Sustainability
ERIC Educational Resources Information Center
Boyle, Carol
2004-01-01
The teaching of sustainability to engineers will follow similar paths to that of environmental engineering. There is a strong feeling that environmental engineering is a discipline unto itself, requiring knowledge of chemistry, physics, biology, hydrology, toxicology, modelling and law. However, environmental engineering can also be encompassed…
Informatics Approach to Improving Surgical Skills Training
ERIC Educational Resources Information Center
Islam, Gazi
2013-01-01
Surgery as a profession requires significant training to improve both clinical decision making and psychomotor proficiency. In the medical knowledge domain, tools have been developed, validated, and accepted for evaluation of surgeons' competencies. However, assessment of the psychomotor skills still relies on the Halstedian model of…
Recent advances in estimating protein and energy requirements of ruminants
USDA-ARS?s Scientific Manuscript database
Considerable efforts have been made in gathering scientific data and developing feeding systems for ruminant animals in the last 50 years. Future endeavours should target the assessment, interpretation, and integration of the accumulated knowledge to develop nutrition models in a holistic and pragma...
Simulation of an Asynchronous Machine by using a Pseudo Bond Graph
NASA Astrophysics Data System (ADS)
Romero, Gregorio; Felez, Jesus; Maroto, Joaquin; Martinez, M. Luisa
2008-11-01
For engineers, computer simulation is a basic tool, since it enables them to understand how systems work without actually needing to see them. They can learn how systems behave in different circumstances and optimize their design at considerably less cost in time and money than if they had to carry out tests on a physical system. However, if computer simulation is to be reliable, it is essential that the simulation model be validated. There is a wide range of commercial products on the market for electrical-domain simulation (SPICE, LabVIEW, PSCAD, Dymola, Simulink, Simplorer, ...). These are powerful tools, but they require the engineer to have thorough knowledge of the electrical field. This paper shows an alternative methodology for simulating an asynchronous machine using the multidomain Bond Graph technique, applicable in any program that supports the simulation of models based on this technique; no extraordinary knowledge of bond graphs or of the electrical field is required to understand the process.
Volume and Value of Big Healthcare Data
Dinov, Ivo D.
2016-01-01
Modern scientific inquiries require significant data-driven evidence and trans-disciplinary expertise to extract valuable information and gain actionable knowledge about natural processes. Effective evidence-based decisions require collection, processing and interpretation of vast amounts of complex data. The Moore's and Kryder's laws of exponential increase of computational power and information storage, respectively, dictate the need for rapid trans-disciplinary advances, technological innovation and effective mechanisms for managing and interrogating Big Healthcare Data. In this article, we review important aspects of Big Data analytics and discuss important questions like: What are the challenges and opportunities associated with this biomedical, social, and healthcare data avalanche? Are there innovative statistical computing strategies to represent, model, analyze and interpret Big heterogeneous data? We present the foundation of a new compressive big data analytics (CBDA) framework for representation, modeling and inference of large, complex and heterogeneous datasets. Finally, we consider specific directions likely to impact the process of extracting information from Big healthcare data, translating that information to knowledge, and deriving appropriate actions. PMID:26998309
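The CBDA framework itself is not specified in this abstract; the following Python sketch only illustrates the general subsample-and-aggregate idea behind compressive analytics. All data and parameter choices below are invented for the example:

```python
# Minimal sketch of a compressive, subsample-and-aggregate analytic loop.
# This is NOT the authors' CBDA implementation; it only illustrates the
# general idea of fitting many cheap models on random case/feature subsets
# of a large dataset and aggregating the results.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 200))              # stand-in "big" dataset
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic outcome

n_draws, row_frac, col_frac = 50, 0.05, 0.10
importance = np.zeros(X.shape[1])
counts = np.zeros(X.shape[1])

for _ in range(n_draws):
    rows = rng.choice(len(X), size=int(row_frac * len(X)), replace=False)
    cols = rng.choice(X.shape[1], size=int(col_frac * X.shape[1]), replace=False)
    model = LogisticRegression(max_iter=200).fit(X[np.ix_(rows, cols)], y[rows])
    importance[cols] += np.abs(model.coef_[0])   # accumulate evidence per feature
    counts[cols] += 1

scores = np.divide(importance, counts, out=np.zeros_like(importance), where=counts > 0)
print("top features:", np.argsort(scores)[::-1][:5])
```

Each draw touches only a small fraction of the rows and columns, so the full dataset never has to fit in one model's working set; the aggregation step recovers a global ranking.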
Introducing Seismic Tomography with Computational Modeling
NASA Astrophysics Data System (ADS)
Neves, R.; Neves, M. L.; Teodoro, V.
2011-12-01
Learning seismic tomography principles and techniques involves advanced physical and computational knowledge. In-depth learning of such computational skills is a difficult cognitive process that requires a strong background in physics, mathematics and computer programming. The corresponding learning environments and pedagogic methodologies should then involve sets of computational modelling activities with computer software systems which allow students to improve their mathematical or programming knowledge and simultaneously focus on the learning of seismic wave propagation and inverse theory. To reduce the level of cognitive opacity associated with mathematical or programming knowledge, several computer modelling systems have already been developed (Neves & Teodoro, 2010). Among such systems, Modellus is particularly well suited to achieve this goal because it is a domain-general environment for explorative and expressive modelling with the following main advantages: 1) an easy and intuitive creation of mathematical models using just standard mathematical notation; 2) the simultaneous exploration of images, tables, graphs and object animations; 3) the attribution of mathematical properties expressed in the models to animated objects; and finally 4) the computation and display of mathematical quantities obtained from the analysis of images and graphs. Here we describe virtual simulations and educational exercises which give students an easy grasp of the fundamentals of seismic tomography. The simulations make the lecture more interactive and allow students to overcome their lack of advanced mathematical or programming knowledge and focus on the learning of seismological concepts and processes, taking advantage of basic scientific computation methods and tools.
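As an illustration of the inverse-theory content such activities target, here is a minimal straight-ray travel-time tomography example with synthetic data. It is unrelated to Modellus itself (a graphical modelling environment); the path-length matrix below is a random stand-in:

```python
# Minimal straight-ray travel-time tomography sketch (illustrative only).
# Travel time along ray i:  t_i = sum_j L[i, j] * s[j], where L[i, j] is the
# length of ray i inside cell j and s[j] is the slowness (1/velocity) of
# cell j. Given many rays, recover s by damped least squares.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_rays = 25, 80                             # 5x5 grid, 80 rays
L = rng.uniform(0.0, 1.0, size=(n_rays, n_cells))    # stand-in path lengths
s_true = 0.5 + 0.1 * rng.random(n_cells)             # true slowness model
t_obs = L @ s_true + rng.normal(0, 1e-3, n_rays)     # noisy travel times

# Damped least squares: minimize ||L s - t||^2 + lam * ||s||^2
lam = 1e-3
s_est = np.linalg.solve(L.T @ L + lam * np.eye(n_cells), L.T @ t_obs)
print("max slowness error:", np.abs(s_est - s_true).max())
```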
Huang, Hui-Ru; Chen, Chi-Wen; Chen, Chin-Mi; Yang, Hsiao-Ling; Su, Wen-Jen; Wang, Jou-Kou; Tsai, Pei-Kwei
2018-03-01
Health-promoting behaviors could serve as a major strategy to optimize long-term outcomes for adolescents with congenital heart disease. Associations assessed from the positive perspective of the knowledge, attitudes, and practice model could potentially cultivate health-promoting behaviors during adolescence. The purpose of this study was to examine the relationships between disease knowledge, resilience, family functioning, and health-promoting behaviors in adolescents with congenital heart disease. A total of 320 adolescents with congenital heart disease aged 12-18 years were recruited from pediatric cardiology outpatient departments and participated in a cross-sectional survey. The participants completed the Leuven Knowledge Questionnaire for Congenital Heart Disease; Haase Adolescent Resilience in Illness Scale; Family Adaptability, Partnership, Growth, Affection, and Resolve; and Adolescent Health Promotion scales. The collected data were analyzed using descriptive statistics and three multiple regression models. Greater knowledge of prevention of complications and higher resilience had a more powerful effect in enhancing health-promoting behaviors. Having symptoms and moderate or severe family dysfunction were significantly more negatively predictive of health-promoting behaviors than not having symptoms and having positive family functioning. The third model explained 40% of the variance in engaging in health-promoting behaviors among adolescents with congenital heart disease. The findings of this study provide new insights into the role of disease knowledge, resilience, and family functioning in the health-promoting behavior of adolescents with congenital heart disease. Continued efforts are required to plan family care programs that promote the acquisition of sufficient disease knowledge and the development of resilience for adolescents with congenital heart disease.
Brad C. Timm; Kevin McGarigal; Samuel A. Cushman; Joseph L. Ganey
2016-01-01
Efficacy of future habitat selection studies will benefit by taking a multi-scale approach. In addition to potentially providing increased explanatory power and predictive capacity, multi-scale habitat models enhance our understanding of the scales at which species respond to their environment, which is critical knowledge required to implement effective...
NASA Astrophysics Data System (ADS)
Elag, M.; Goodall, J. L.
2013-12-01
Hydrologic modeling often requires the re-use and integration of models from different disciplines to simulate complex environmental systems. Component-based modeling introduces a flexible approach for integrating physical-based processes across disciplinary boundaries. Several hydrologic-related modeling communities have adopted the component-based approach for simulating complex physical systems by integrating model components across disciplinary boundaries in a workflow. However, it is not always straightforward to create these interdisciplinary models due to the lack of sufficient knowledge about a hydrologic process. This shortcoming is a result of using informal methods for organizing and sharing information about a hydrologic process. A knowledge-based ontology provides such standards and is considered the ideal approach for overcoming this challenge. The aims of this research are to present the methodology used in analyzing the basic hydrologic domain in order to identify hydrologic processes, the ontology itself, and how the proposed ontology is integrated with the Water Resources Component (WRC) ontology. The proposed ontology standardizes the definitions of a hydrologic process, the relationships between hydrologic processes, and their associated scientific equations. The objective of the proposed Hydrologic Process (HP) Ontology is to advance the idea of creating a unified knowledge framework for components' metadata by introducing a domain-level ontology for hydrologic processes. The HP ontology is a step toward an explicit and robust domain knowledge framework that can be evolved through the contribution of domain users. Analysis of the hydrologic domain is accomplished using Formal Concept Analysis (FCA), in which the infiltration process, an important hydrologic process, is examined. Two infiltration methods, the Green-Ampt and Philip methods, were used to demonstrate the implementation of information in the HP ontology. Furthermore, a SPARQL service is provided for semantic-based querying of the ontology.
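The Green-Ampt method mentioned above has a compact implicit equation, so a small worked example can show the kind of scientific equation the HP ontology associates with the infiltration process. Parameter values below are illustrative, not taken from the paper:

```python
# Minimal worked example of the Green-Ampt equations referenced above
# (parameter values are illustrative, not taken from the paper).
# Cumulative infiltration F(t) solves the implicit Green-Ampt relation
#   F = Ks*t + psi*d_theta*ln(1 + F/(psi*d_theta)),
# solved here by fixed-point iteration; the infiltration capacity then
# follows as f = Ks*(1 + psi*d_theta/F).
import math

Ks = 1.0        # saturated hydraulic conductivity [cm/h]
psi = 11.0      # wetting-front suction head [cm]
d_theta = 0.3   # soil moisture deficit [-]

def green_ampt(t, iters=50):
    """Return (F, f): cumulative infiltration [cm] and capacity [cm/h] at time t [h]."""
    F = Ks * t or 1e-9                      # starting guess, nonzero for t = 0
    for _ in range(iters):                  # contraction mapping, converges for F > 0
        F = Ks * t + psi * d_theta * math.log(1.0 + F / (psi * d_theta))
    f = Ks * (1.0 + psi * d_theta / F)
    return F, f

for t in (0.25, 0.5, 1.0, 2.0):
    F, f = green_ampt(t)
    print(f"t = {t:4.2f} h: F = {F:5.2f} cm, capacity f = {f:5.2f} cm/h")
```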
Elicitation of neurological knowledge with argument-based machine learning.
Groznik, Vida; Guid, Matej; Sadikov, Aleksander; Možina, Martin; Georgiev, Dejan; Kragelj, Veronika; Ribarič, Samo; Pirtošek, Zvezdan; Bratko, Ivan
2013-02-01
The paper describes the use of an expert's knowledge in practice and the efficiency of a recently developed technique called argument-based machine learning (ABML) in the knowledge elicitation process. We are developing a neurological decision support system to help neurologists differentiate between three types of tremors: Parkinsonian, essential, and mixed tremor (comorbidity). The system is intended to act as a second opinion for the neurologists, and most importantly to help them reduce the number of patients in the "gray area" that require a very costly further examination (DaTSCAN). We strive to elicit comprehensible and medically meaningful knowledge in such a way that it does not come at the cost of diagnostic accuracy. To alleviate the difficult problem of knowledge elicitation from data and domain experts, we used ABML. ABML guides the expert to explain critical special cases which cannot be handled automatically by machine learning. This very efficiently reduces the expert's workload, and combines the expert's knowledge with learning data. 122 patients were enrolled in the study. The classification accuracy of the final model was 91%. Equally important, the initial and final models were also evaluated for their comprehensibility by the neurologists. All 13 rules of the final model were deemed appropriate and able to support its decisions with good explanations. The paper demonstrates ABML's advantage in combining machine learning and expert knowledge. The accuracy of the system is very high with respect to the current state of the art in clinical practice, and the system's knowledge base is assessed to be very consistent from a medical point of view. This opens up the possibility of using the system also as a teaching tool. Copyright © 2012 Elsevier B.V. All rights reserved.
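A schematic of the ABML interaction loop described above may help. The helper functions (learn_rules, find_critical_example, ask_expert_for_argument) are hypothetical placeholders, not the authors' actual implementation:

```python
# Schematic sketch of the ABML interaction loop described above. The helper
# functions are hypothetical placeholders, not the authors' ABML/Orange API.
def abml_loop(data, max_rounds=10):
    arguments = {}                         # example id -> expert-given argument
    model = learn_rules(data, arguments)   # rule learner honoring arguments
    for _ in range(max_rounds):
        critical = find_critical_example(model, data)  # hardest misclassified case
        if critical is None:
            break                          # nothing left the expert must explain
        # The expert explains the critical case, e.g. "tremor is Parkinsonian
        # BECAUSE the resting component is dominant"; the argument constrains
        # the rules that may cover this example in the next learning round.
        arguments[critical.id] = ask_expert_for_argument(critical)
        model = learn_rules(data, arguments)
    return model
```

The key efficiency gain is that the expert is only asked about cases the learner cannot resolve, rather than being asked to articulate a complete knowledge base up front.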
49 CFR 383.110 - General requirement.
Code of Federal Regulations, 2011 CFR
2011-10-01
... STANDARDS; REQUIREMENTS AND PENALTIES Required Knowledge and Skills § 383.110 General requirement. All drivers of CMVs must have the knowledge and skills necessary to operate a CMV safely as contained in this subpart. The specific types of items that a State must include in the knowledge and skills tests that it...
NASA Astrophysics Data System (ADS)
Rajabi, Mohammad Mahdi; Ataie-Ashtiani, Behzad
2016-05-01
Bayesian inference has traditionally been conceived as the proper framework for the formal incorporation of expert knowledge in parameter estimation of groundwater models. However, conventional Bayesian inference is incapable of taking into account the imprecision inherently embedded in expert-provided information. In order to solve this problem, a number of extensions to conventional Bayesian inference have been introduced in recent years. One of these extensions is 'fuzzy Bayesian inference', which is the result of integrating fuzzy techniques into Bayesian statistics. Fuzzy Bayesian inference has a number of desirable features which make it an attractive approach for incorporating expert knowledge in the parameter estimation process of groundwater models: (1) it is well adapted to the nature of expert-provided information, (2) it allows both uncertainty and imprecision to be modelled distinctly, and (3) it presents a framework for fusing expert-provided information regarding the various inputs of the Bayesian inference algorithm. However, an important obstacle to employing fuzzy Bayesian inference in groundwater numerical modeling applications is the computational burden, as the required number of numerical model simulations often becomes prohibitively large and computationally infeasible. In this paper, a novel approach to accelerating the fuzzy Bayesian inference algorithm is proposed, based on using approximate posterior distributions derived from surrogate modeling as a screening tool in the computations. The proposed approach is first applied to a synthetic test case of seawater intrusion (SWI) in a coastal aquifer. It is shown that for this synthetic test case, the proposed approach decreases the number of required numerical simulations by an order of magnitude. Then the proposed approach is applied to a real-world test case involving three-dimensional numerical modeling of SWI in Kish Island, located in the Persian Gulf. An expert elicitation methodology is developed and applied to the real-world test case in order to provide a road map for the use of fuzzy Bayesian inference in groundwater modeling applications.
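The screening idea can be illustrated generically with a delayed-acceptance style Metropolis-Hastings sketch, in which a cheap surrogate filters proposals before the expensive model is run. This is a simplified stand-in for, not a reproduction of, the authors' fuzzy Bayesian algorithm; both log-posterior functions below are toy placeholders:

```python
# Generic sketch of surrogate-based screening inside Metropolis-Hastings
# (the spirit of the acceleration above, NOT the authors' fuzzy Bayesian
# algorithm). expensive_log_post stands in for a full numerical model run;
# surrogate_log_post is a cheap, slightly biased approximation.
import numpy as np

rng = np.random.default_rng(2)

def expensive_log_post(theta):
    return -0.5 * np.sum((theta - 1.0) ** 2)

def surrogate_log_post(theta):
    return -0.5 * np.sum((theta - 1.05) ** 2)

theta = np.zeros(2)
lp = expensive_log_post(theta)
model_calls = 0
for _ in range(5000):
    prop = theta + 0.3 * rng.normal(size=2)
    # Stage 1: screen with the surrogate; a cheap rejection avoids a model run.
    if np.log(rng.random()) >= surrogate_log_post(prop) - surrogate_log_post(theta):
        continue
    # Stage 2: only survivors pay for the expensive model evaluation.
    model_calls += 1
    lp_prop = expensive_log_post(prop)
    # The correction term keeps the two-stage chain targeting the true posterior.
    correction = surrogate_log_post(theta) - surrogate_log_post(prop)
    if np.log(rng.random()) < lp_prop - lp + correction:
        theta, lp = prop, lp_prop
print("expensive model calls:", model_calls, "of 5000 proposals")
```

With this construction the stage-2 correction makes the sampler exact; the surrogate only changes how often the expensive model is consulted, which is the same economy the paper reports (an order of magnitude fewer simulations).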
Design and Analysis Techniques for Concurrent Blackboard Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Mcmanus, John William
1992-01-01
Blackboard systems are a natural progression of knowledge-based systems into a more powerful problem-solving technique. They provide a way for several highly specialized knowledge sources to cooperate to solve large, complex problems. Blackboard systems incorporate the concepts developed by rule-based and expert systems programmers and include the ability to add conventionally coded knowledge sources. The small and specialized knowledge sources are easier to develop and test, and can be hosted on hardware specifically suited to the task that they are solving. The Formal Model for Blackboard Systems was developed to provide a consistent method for describing a blackboard system. A set of blackboard system design tools has been developed and validated for implementing systems that are expressed using the Formal Model. The tools are used to test and refine a proposed blackboard system design before the design is implemented. My research has shown that the level of independence and specialization of the knowledge sources directly affects the performance of blackboard systems. Using the design, simulation, and analysis tools, I developed a concurrent object-oriented blackboard system that is faster, more efficient, and more powerful than existing systems. The use of the design and analysis tools provided the highly specialized and independent knowledge sources required for my concurrent blackboard system to achieve its design goals.
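To make the architecture concrete, here is a toy blackboard in Python: independent knowledge sources monitor a shared data store and contribute partial results under a simple control loop. This is illustrative only and unrelated to the thesis's concurrent implementation:

```python
# Minimal blackboard sketch: knowledge sources fire opportunistically on a
# shared blackboard until no source can contribute anything new.
class Blackboard:
    def __init__(self):
        self.data = {"raw": [3, 1, 2], "sorted": None, "summary": None}

def sorter(bb):                      # knowledge source 1
    if bb.data["sorted"] is None:
        bb.data["sorted"] = sorted(bb.data["raw"])
        return True
    return False

def summarizer(bb):                  # knowledge source 2, depends on sorter
    if bb.data["sorted"] is not None and bb.data["summary"] is None:
        s = bb.data["sorted"]
        bb.data["summary"] = {"min": s[0], "max": s[-1]}
        return True
    return False

bb = Blackboard()
sources = [summarizer, sorter]       # deliberately out of order
while any(ks(bb) for ks in sources): # control loop: fire any applicable source
    pass
print(bb.data["summary"])
```

Note that the sources never call each other; ordering emerges from what is on the blackboard, which is what lets specialized sources be developed and tested independently.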
Generating target system specifications from a domain model using CLIPS
NASA Technical Reports Server (NTRS)
Sugumaran, Vijayan; Gomaa, Hassan; Kerschberg, Larry
1991-01-01
The quest for reuse in software engineering is still being pursued and researchers are actively investigating the domain modeling approach to software construction. There are several domain modeling efforts reported in the literature and they all agree that the components that are generated from domain modeling are more conducive to reuse. Once a domain model is created, several target systems can be generated by tailoring the domain model or by evolving the domain model and then tailoring it according to the specified requirements. This paper presents the Evolutionary Domain Life Cycle (EDLC) paradigm in which a domain model is created using multiple views, namely, aggregation hierarchy, generalization/specialization hierarchies, object communication diagrams and state transition diagrams. The architecture of the Knowledge Based Requirements Elicitation Tool (KBRET) which is used to generate target system specifications is also presented. The preliminary version of KBRET is implemented in the C Language Integrated Production System (CLIPS).
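KBRET itself is written in CLIPS; the toy Python forward-chaining loop below merely illustrates the production-rule style of execution that CLIPS provides. The facts and rules are invented for the example:

```python
# Toy forward-chaining production loop, only to illustrate the rule-firing
# style CLIPS provides (this is not CLIPS, nor KBRET itself).
facts = {"target_system_requested", "view:aggregation_hierarchy"}
rules = [
    # (name, preconditions, fact to assert)
    ("select-objects",
     {"target_system_requested", "view:aggregation_hierarchy"},
     "objects_selected"),
    ("emit-spec", {"objects_selected"}, "target_spec_generated"),
]

fired = True
while fired:                       # keep firing until no rule adds a new fact
    fired = False
    for name, pre, post in rules:
        if pre <= facts and post not in facts:
            facts.add(post)
            print("fired:", name)
            fired = True
print("target_spec_generated" in facts)
```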
A Working Framework for Enabling International Science Data System Interoperability
NASA Astrophysics Data System (ADS)
Hughes, J. Steven; Hardman, Sean; Crichton, Daniel J.; Martinez, Santa; Law, Emily; Gordon, Mitchell K.
2016-07-01
For diverse scientific disciplines to interoperate they must be able to exchange information based on a shared understanding. To capture this shared understanding, we have developed a knowledge representation framework that leverages ISO level reference models for metadata registries and digital archives. This framework provides multi-level governance, evolves independent of the implementation technologies, and promotes agile development, namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. The knowledge representation is captured in an ontology through a process of knowledge acquisition. Discipline experts in the role of stewards at the common, discipline, and project levels work to design and populate the ontology model. The result is a formal and consistent knowledge base that provides requirements for data representation, integrity, provenance, context, identification, and relationship. The contents of the knowledge base are translated and written to files in suitable formats to configure system software and services, provide user documentation, validate input, and support data analytics. This presentation will provide an overview of the framework, present a use case that has been adopted by an entire science discipline at the international level, and share some important lessons learned.
NASA Astrophysics Data System (ADS)
Rao, Dhananjai M.; Chernyakhovsky, Alexander; Rao, Victoria
2008-05-01
Humanity is facing an increasing number of highly virulent and communicable diseases such as avian influenza. Researchers believe that avian influenza has potential to evolve into one of the deadliest pandemics. Combating these diseases requires in-depth knowledge of their epidemiology. An effective methodology for discovering epidemiological knowledge is to utilize a descriptive, evolutionary, ecological model and use bio-simulations to study and analyze it. These types of bio-simulations fall under the category of computational evolutionary methods because the individual entities participating in the simulation are permitted to evolve in a natural manner by reacting to changes in the simulated ecosystem. This work describes the application of the aforementioned methodology to discover epidemiological knowledge about avian influenza using a novel eco-modeling and bio-simulation environment called SEARUMS. The mathematical principles underlying SEARUMS, its design, and the procedure for using SEARUMS are discussed. The bio-simulations and multi-faceted case studies conducted using SEARUMS elucidate its ability to pinpoint timelines, epicenters, and socio-economic impacts of avian influenza. This knowledge is invaluable for proactive deployment of countermeasures in order to minimize negative socioeconomic impacts, combat the disease, and avert a pandemic.
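SEARUMS is an agent-based eco-simulation; as a much simpler illustration of the epidemic dynamics such tools study, here is a compartmental SIR integration in Python (all rates are illustrative, not avian-influenza parameters):

```python
# Toy SIR epidemic integration (illustrative only; SEARUMS itself is an
# agent-based eco-simulation, not this compartmental model).
beta, gamma = 0.3, 0.1              # transmission and recovery rates [1/day]
S, I, R = 0.999, 0.001, 0.0         # population fractions
dt, days = 0.1, 200
peak_I, peak_t = I, 0.0
for step in range(int(days / dt)):
    new_inf = beta * S * I * dt     # dS/dt = -beta*S*I
    new_rec = gamma * I * dt        # dR/dt = gamma*I
    S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
    if I > peak_I:
        peak_I, peak_t = I, step * dt
print(f"epidemic peak: {100 * peak_I:.1f}% infected at day {peak_t:.0f}")
```

Pinpointing timelines and epicenters, as SEARUMS does, additionally requires spatial structure and evolving agents, which this toy deliberately omits.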
MDA-based EHR application security services.
Blobel, Bernd; Pharow, Peter
2004-01-01
Component-oriented, distributed, virtual EHR systems have to meet enhanced security and privacy requirements. In the context of advanced architectural paradigms such as component orientation, model-driven development, and knowledge-based systems, the standardised security services needed have to be specified and implemented in an integrated way following the same paradigms. This concerns the deployment of formal models, meta-languages, reference models such as the ISO RM-ODP, and development as well as implementation tools. Results from international projects presented here proceed along these lines.
Improving plant bioaccumulation science through consistent reporting of experimental data.
Fantke, Peter; Arnot, Jon A; Doucette, William J
2016-10-01
Experimental data and models for plant bioaccumulation of organic contaminants play a crucial role for assessing the potential human and ecological risks associated with chemical use. Plants are receptor organisms and direct or indirect vectors for chemical exposures to all other organisms. As new experimental data are generated they are used to improve our understanding of plant-chemical interactions that in turn allows for the development of better scientific knowledge and conceptual and predictive models. The interrelationship between experimental data and model development is an ongoing, never-ending process needed to advance our ability to provide reliable quality information that can be used in various contexts including regulatory risk assessment. However, relatively few standard experimental protocols for generating plant bioaccumulation data are currently available and because of inconsistent data collection and reporting requirements, the information generated is often less useful than it could be for direct applications in chemical assessments and for model development and refinement. We review existing testing guidelines, common data reporting practices, and provide recommendations for revising testing guidelines and reporting requirements to improve bioaccumulation knowledge and models. This analysis provides a list of experimental parameters that will help to develop high quality datasets and support modeling tools for assessing bioaccumulation of organic chemicals in plants and ultimately addressing uncertainty in ecological and human health risk assessments. Copyright © 2016 Elsevier Ltd. All rights reserved.
Using a cloud to replenish parched groundwater modeling efforts.
Hunt, Randall J; Luchette, Joseph; Schreuder, Willem A; Rumbaugh, James O; Doherty, John; Tonkin, Matthew J; Rumbaugh, Douglas B
2010-01-01
Groundwater models can be improved by introduction of additional parameter flexibility and simultaneous use of soft-knowledge. However, these sophisticated approaches have high computational requirements. Cloud computing provides unprecedented access to computing power via the Internet to facilitate the use of these techniques. A modeler can create, launch, and terminate "virtual" computers as needed, paying by the hour, and save machine images for future use. Such cost-effective and flexible computing power empowers groundwater modelers to routinely perform model calibration and uncertainty analysis in ways not previously possible.
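The pattern such on-demand machines enable is embarrassingly parallel model execution. In the sketch below, local processes stand in for cloud instances, and run_model is a placeholder for the real groundwater simulation:

```python
# Sketch of the embarrassingly parallel pattern cloud instances enable:
# independent model runs farmed out to workers. Local processes stand in
# for cloud machines; run_model is a placeholder for the real simulation.
from multiprocessing import Pool

def run_model(params):
    k, storage = params                 # hypothetical aquifer parameters
    # A real run would launch the groundwater model; here a toy objective.
    return {"k": k, "storage": storage, "objective": (k - 2.0) ** 2 + storage}

if __name__ == "__main__":
    trials = [(k, s) for k in (1.0, 2.0, 3.0) for s in (0.1, 0.2)]
    with Pool(processes=4) as pool:
        results = pool.map(run_model, trials)   # one "machine" per run
    best = min(results, key=lambda r: r["objective"])
    print("best parameter set:", best)
```

Calibration and uncertainty analysis are dominated by exactly this kind of independent run, which is why paying per machine-hour maps so well onto them.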
46 CFR 11.713 - Requirements for maintaining current knowledge of waters to be navigated.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 1 2010-10-01 2010-10-01 false Requirements for maintaining current knowledge of waters... § 11.713 Requirements for maintaining current knowledge of waters to be navigated. (a) If a first class... current knowledge of the route. Persons using this method of re-familiarization shall certify, when...
Influenza pathogenicity during pregnancy in women and animal models.
van Riel, Debby; Mittrücker, Hans-Willi; Engels, Geraldine; Klingel, Karin; Markert, Udo R; Gabriel, Gülsah
2016-11-01
Pregnant women are at the highest risk to develop severe and even fatal influenza. The high vulnerability of women against influenza A virus infections during pregnancy was repeatedly highlighted during influenza pandemics including the pandemic of this century. In 2009, mortality rates were particularly high among otherwise healthy pregnant women. However, our current understanding of the molecular mechanisms involved in severe disease development during pregnancy is still very limited. In this review, we summarize the knowledge on the clinical observations in influenza A virus-infected pregnant women. In addition, knowledge obtained from few existing experimental infections in pregnant animal models is discussed. Since clinical data do not provide in-depth information on the pathogenesis of severe influenza during pregnancy, adequate animal models are urgently required that mimic clinical findings. Studies in pregnant animal models will allow the dissection of involved molecular disease pathways that are key to improve patient management and care.
Marrying Hydrological Modelling and Integrated Assessment for the needs of Water Resource Management
NASA Astrophysics Data System (ADS)
Croke, B. F. W.; Blakers, R. S.; El Sawah, S.; Fu, B.; Guillaume, J. H. A.; Kelly, R. A.; Patrick, M. J.; Ross, A.; Ticehurst, J.; Barthel, R.; Jakeman, A. J.
2014-09-01
This paper discusses the integration of hydrology with other disciplines using an Integrated Assessment (IA) and modelling approach to the management and allocation of water resources. Recent developments in the field of socio-hydrology aim to develop stronger relationships between hydrology and the human dimensions of Water Resource Management (WRM). This should build on an existing wealth of knowledge and experience of coupled human-water systems. To further strengthen this relationship and contribute to this broad body of knowledge, we propose a strong and durable "marriage" between IA and hydrology. The foundation of this marriage requires engagement with appropriate concepts, model structures, scales of analyses, performance evaluation and communication - and the associated tools and models that are needed for pragmatic deployment or operation. To gain insight into how this can be achieved, an IA case study in water allocation in the Lower Namoi catchment, NSW, Australia is presented.
Evaluation of PROforma as a language for implementing medical guidelines in a practical context
Sutton, David R; Taylor, Paul; Earle, Kenneth
2006-01-01
Background: PROforma is one of several languages that allow clinical guidelines to be expressed in a computer-interpretable manner. How these languages should be compared, and what requirements they should meet, are questions that are being actively addressed by a community of interested researchers.
Methods: We have developed a system to allow hypertensive patients to be monitored and assessed without visiting their GPs (except in the most urgent cases). Blood pressure measurements are performed at the patients' pharmacies and a web-based system, created using PROforma, makes recommendations for continued monitoring, and/or changes in medication. The recommendations and measurements are transmitted electronically to a practitioner with authority to issue and change prescriptions. We evaluated the use of PROforma during the knowledge acquisition, analysis, design and implementation of this system. The analysis focuses on the logical adequacy, heuristic power, notational convenience, and explanation support provided by the PROforma language.
Results: PROforma proved adequate as a language for the implementation of the clinical reasoning required by this project. However a lack of notational convenience led us to use UML activity diagrams, rather than PROforma process descriptions, to create the models that were used during the knowledge acquisition and analysis phases of the project. These UML diagrams were translated into PROforma during the implementation of the project.
Conclusion: The experience accumulated during this study highlighted the importance of structure-preserving design, that is to say that the models used in the design and implementation of a knowledge-based system should be structurally similar to those created during knowledge acquisition and analysis. Ideally the same language should be used for all of these models. This means that great importance has to be attached to the notational convenience of these languages, by which we mean the ease with which they can be read, written, and understood by human beings. The importance of notational convenience arises from the fact that a language used during knowledge acquisition and analysis must be intelligible to the potential users of a system, and to the domain experts who provide the knowledge that will be used in its construction.
PMID:16597341
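For a flavor of the kind of decision logic such a guideline encodes, here is an illustrative plain-Python rule. The thresholds are invented for the sketch and are not the study's clinical logic, which is expressed in PROforma:

```python
# Illustrative decision task mimicking the kind of rule a PROforma guideline
# encodes (thresholds are made up for this sketch, NOT the study's logic).
def bp_recommendation(systolic: int, diastolic: int) -> str:
    if systolic >= 180 or diastolic >= 110:
        return "urgent: refer to GP immediately"
    if systolic >= 140 or diastolic >= 90:
        return "notify practitioner: consider medication change"
    return "continue routine pharmacy monitoring"

print(bp_recommendation(150, 95))
```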
Framing a Knowledge Base for a Legal Expert System Dealing with Indeterminate Concepts.
Araszkiewicz, Michał; Łopatkiewicz, Agata; Zienkiewicz, Adam; Zurek, Tomasz
2015-01-01
Despite decades of development of formal tools for modelling legal knowledge and reasoning, the creation of a fully fledged legal decision support system remains challenging. Among those challenges, such system requires an enormous amount of commonsense knowledge to derive legal expertise. This paper describes the development of a negotiation decision support system (the Parenting Plan Support System or PPSS) to support parents in drafting an agreement (the parenting plan) for the exercise of parental custody of minor children after a divorce is granted. The main objective here is to discuss problems of framing an intuitively appealing and computationally efficient knowledge base that can adequately represent the indeterminate legal concept of the well-being of the child in the context of continental legal culture and of Polish law in particular. In addition to commonsense reasoning, interpretation of such a concept demands both legal expertise and significant professional knowledge from other domains.
Improving Cyber-Security of Smart Grid Systems via Anomaly Detection and Linguistic Domain Knowledge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ondrej Linda; Todd Vollmer; Milos Manic
The planned large scale deployment of smart grid network devices will generate a large amount of information exchanged over various types of communication networks. The implementation of these critical systems will require appropriate cyber-security measures. A network anomaly detection solution is considered in this work. In common network architectures multiple communications streams are simultaneously present, making it difficult to build an anomaly detection solution for the entire system. In addition, common anomaly detection algorithms require specification of a sensitivity threshold, which inevitably leads to a tradeoff between false positive and false negative rates. In order to alleviate these issues, this paper proposes a novel anomaly detection architecture. The designed system applies the previously developed network security cyber-sensor method to individual selected communication streams, allowing accurate normal network behavior models to be learned. Furthermore, the developed system dynamically adjusts the sensitivity threshold of each anomaly detection algorithm based on domain knowledge about the specific network system. It is proposed to model this domain knowledge using Interval Type-2 Fuzzy Logic rules, which linguistically describe the relationship between various features of the network communication and the possibility of a cyber attack. The proposed method was tested on an experimental smart grid system, demonstrating enhanced cyber-security.
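The threshold-adaptation idea can be sketched crudely in Python, with a scalar domain-risk factor standing in for the paper's Interval Type-2 Fuzzy Logic rules. All values are illustrative:

```python
# Sketch of a per-stream anomaly monitor whose sensitivity threshold is
# modulated by a domain-knowledge factor (a crude scalar stand-in for the
# paper's Interval Type-2 fuzzy rules; values are illustrative).
import numpy as np

rng = np.random.default_rng(3)

def monitor(stream, domain_risk):
    # domain_risk in [0, 1]: riskier streams get a tighter threshold.
    mu, sigma = np.mean(stream), np.std(stream)
    k = 4.0 - 2.0 * domain_risk            # threshold multiplier: 4 -> 2 sigma
    return [i for i, x in enumerate(stream) if abs(x - mu) > k * sigma]

scada = np.concatenate([rng.normal(0, 1, 500), [3.5]])   # one injected spike
# A tighter (high-risk) threshold catches the spike but also raises more
# false alarms, which is exactly the tradeoff the fuzzy rules arbitrate.
print("alerts (low risk): ", len(monitor(scada, 0.1)))
print("alerts (high risk):", len(monitor(scada, 0.9)))
```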
Zein, Rizqy Amelia; Suhariadi, Fendy; Hendriani, Wiwin
2017-01-01
The research aimed to investigate the effect of lay knowledge of pulmonary tuberculosis (TB) and prior contact with pulmonary TB patients on a health-belief model (HBM), as well as to identify the social determinants that affect lay knowledge. A survey research design was used, in which participants filled in a questionnaire measuring HBM constructs and lay knowledge of pulmonary TB. Research participants were 500 residents of Semampir, Asemrowo, Bubutan, Pabean Cantian, and Simokerto districts, where the risk of pulmonary TB transmission is higher than in other districts of Surabaya. Being female, older in age, and having prior contact with pulmonary TB patients significantly increase the likelihood of having a higher level of lay knowledge. Lay knowledge is a substantial determinant for estimating belief in the effectiveness of health behavior and in the personal health threat. Prior contact with pulmonary TB patients explains belief in the effectiveness of a health behavior, yet fails to estimate participants' belief in the personal health threat. Health authorities should prioritize males and young people as the main target groups in a pulmonary TB awareness campaign. The campaign should be able to correct people's misconceptions about pulmonary TB, thereby reshaping health-risk perception rather than focusing solely on improving lay knowledge.
Real-Time Multimedia on the Internet: What Will It Take?
ERIC Educational Resources Information Center
Sodergren, Mike
1998-01-01
Considers the requirements for real-time, interactive multimedia over the Internet. Topics include demand for interactivity; new pricing models for Internet service; knowledgeable suppliers; consumer education on standards; enhanced infrastructure, including bandwidth; and new technology, including RSVP, and end-to-end Internet-working protocol.…
Elaborating on Threshold Concepts
ERIC Educational Resources Information Center
Rountree, Janet; Robins, Anthony; Rountree, Nathan
2013-01-01
We propose an expanded definition of Threshold Concepts (TCs) that requires the successful acquisition and internalisation not only of knowledge, but also its practical elaboration in the domains of applied strategies and mental models. This richer definition allows us to clarify the relationship between TCs and Fundamental Ideas, and to account…
eQETIC: A Maturity Model for Online Education
ERIC Educational Resources Information Center
Rossi, Rogério; Mustaro, Pollyana Notargiacomo
2015-01-01
Digital solutions have substantially contributed to the growth and dissemination of education. The distance education modality has been presented as an opportunity for worldwide students in many types of courses. However, projects of digital educational platforms require different expertise including knowledge areas such as pedagogy, psychology,…
Auditing the Numeracy Demands of the Middle Years Curriculum
ERIC Educational Resources Information Center
Goos, Merrilyn; Geiger, Vince; Dole, Shelley
2010-01-01
The "National Numeracy Review" recognised that numeracy development requires an across the curriculum commitment. To explore the nature of this commitment we conducted a numeracy audit of the South Australian Middle Years curriculum, using a numeracy model that incorporates mathematical knowledge, dispositions, tools, contexts, and a…
Liaw, Siaw-Teng; Deveny, Elizabeth; Morrison, Iain; Lewis, Bryn
2006-09-01
Using a factorial vignette survey and modeling methodology, we developed clinical and information models - incorporating evidence base, key concepts, relevant terms, decision-making and workflow needed to practice safely and effectively - to guide the development of an integrated rule-based knowledge module to support prescribing decisions in asthma. We identified workflows, decision-making factors, factor use, and clinician information requirements. The Unified Modeling Language (UML) and public domain software and knowledge engineering tools (e.g. Protégé) were used, with the Australian GP Data Model as the starting point for expressing information needs. A Web Services service-oriented architecture approach was adopted within which to express functional needs, and clinical processes and workflows were expressed in the Business Process Execution Language (BPEL). This formal analysis and modeling methodology to define and capture the process and logic of prescribing best practice in a reference implementation is fundamental to tackling deficiencies in prescribing decision support software.
Modeling and Improving Information Flows in the Development of Large Business Applications
NASA Astrophysics Data System (ADS)
Schneider, Kurt; Lübke, Daniel
Designing a good architecture for an application is a wicked problem. Therefore, experience and knowledge are considered crucial for informing work in software architecture. However, many organizations do not pay sufficient attention to experience exploitation and architectural learning. Many users of information systems are not aware of the options and the needs to report problems and requirements. They often do not have time to describe a problem encountered in sufficient detail for developers to remove it. And there may be a lengthy process for providing feedback. Hence, the knowledge about problems and potential solutions is not shared effectively. Architectural knowledge needs to include evaluative feedback as well as decisions and their reasons (rationale).
WFIRST: Update on the Coronagraph Science Requirements
NASA Astrophysics Data System (ADS)
Douglas, Ewan S.; Cahoy, Kerri; Carlton, Ashley; Macintosh, Bruce; Turnbull, Margaret; Kasdin, Jeremy; WFIRST Coronagraph Science Investigation Teams
2018-01-01
The WFIRST Coronagraph instrument (CGI) will enable direct imaging and low resolution spectroscopy of exoplanets in reflected light and imaging polarimetry of circumstellar disks. The CGI science investigation teams were tasked with developing a set of science requirements which advance our knowledge of exoplanet occurrence and atmospheric composition, as well as the composition and morphology of exozodiacal debris disks, cold Kuiper Belt analogs, and protoplanetary systems. We present the initial content, rationales, validation, and verification plans for the WFIRST CGI, informed by detailed and still-evolving instrument and observatory performance models. We also discuss our approach to the requirements development and management process, including the collection and organization of science inputs, open source approach to managing the requirements database, and the range of models used for requirements validation. These tools can be applied to requirements development processes for other astrophysical space missions, and may ease their management and maintenance. These WFIRST CGI science requirements allow the community to learn about and provide insights and feedback on the expected instrument performance and science return.
Towards an Age-Phenome Knowledge-base
2011-01-01
Background: Currently, data about age-phenotype associations are not systematically organized and cannot be studied methodically. Searching for scientific articles describing phenotypic changes reported as occurring at a given age is not possible for most ages.
Results: Here we present the Age-Phenome Knowledge-base (APK), in which knowledge about age-related phenotypic patterns and events can be modeled and stored for retrieval. The APK contains evidence connecting specific ages or age groups with phenotypes, such as disease and clinical traits. Using a simple text mining tool developed for this purpose, we extracted instances of age-phenotype associations from journal abstracts related to non-insulin-dependent Diabetes Mellitus. In addition, links between age and phenotype were extracted from clinical data obtained from the NHANES III survey. The knowledge stored in the APK is made available for the relevant research community in the form of 'Age-Cards', each card holding the collection of all the information stored in the APK about a particular age. These Age-Cards are presented in a wiki, allowing community review, amendment and contribution of additional information. In addition to the wiki interaction, complex searches can also be conducted, which require the user to have some knowledge of database query construction.
Conclusions: The combination of a knowledge-model-based repository with community participation in the evolution and refinement of the knowledge-base makes the APK a useful and valuable environment for collecting and curating existing knowledge of the connections between age and phenotypes.
PMID:21651792
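As a toy version of the text-mining step, a regular expression can pull age-phenotype co-mentions from abstract sentences. The pattern and sentences below are invented for illustration and are not the APK's actual extraction rules:

```python
# Toy version of the age-phenotype text-mining step: a regular expression
# pulls "age ... phenotype" co-mentions from abstract sentences (pattern and
# sentences are illustrative, not the APK's actual extraction rules).
import re

sentences = [
    "Prevalence of type 2 diabetes increased sharply after age 45.",
    "Hypertension was most common in patients aged 60 to 69 years.",
]
pattern = re.compile(
    r"(?P<phenotype>diabetes|hypertension).*?"          # phenotype mention
    r"(?:age[d]?\s+)(?P<age>\d+)"                       # single age ...
    r"(?:\s*(?:to|-)\s*(?P<age_hi>\d+))?",              # ... or an age range
    re.IGNORECASE | re.DOTALL,
)
for s in sentences:
    m = pattern.search(s)
    if m:
        print(m.group("phenotype"), "->", m.group("age"), m.group("age_hi"))
```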
Detecting misinformation and knowledge conflicts in relational data
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Jackobsen, Matthew; Riordan, Brian
2014-06-01
Information fusion is required for many mission-critical intelligence analysis tasks. Using knowledge extracted from various sources, including entities, relations, and events, intelligence analysts respond to commanders' information requests, integrate facts into summaries about current situations, augment existing knowledge with inferred information, make predictions about the future, and develop action plans. However, information fusion solutions often fail because of conflicting and redundant knowledge contained in multiple sources. Most knowledge conflicts in the past were due to translation errors and reporter bias, and thus could be managed. Current and future intelligence analysis, especially in denied areas, must deal with open source data processing, where there is a much greater presence of intentional misinformation. In this paper, we describe a model for detecting conflicts in multi-source textual knowledge. Our model is based on constructing semantic graphs representing patterns of multi-source knowledge conflicts and anomalies, and detecting these conflicts by matching pattern graphs against a data graph constructed using soft co-reference between entities and events in multiple sources. The conflict detection process maintains uncertainty throughout all phases, providing full traceability and enabling incremental updates of the detection results as new knowledge or modifications to previously analyzed information are obtained. Detected conflicts are presented to analysts for further investigation. In an experimental study with the SYNCOIN dataset, our algorithms achieved perfect conflict detection in the ideal situation (no missing data), and 82% recall with 90% precision in a realistic noise situation (15% of attributes missing).
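The pattern-matching core can be illustrated with networkx subgraph isomorphism. The toy below omits the paper's soft co-reference and uncertainty handling, and the graphs are invented for the example:

```python
# Sketch of conflict detection as graph pattern matching using networkx
# (illustrative; the paper's system adds soft co-reference and uncertainty
# handling that this toy omits). Pattern: two sources report different
# locations for the same person at the same time.
import networkx as nx
from networkx.algorithms import isomorphism

data = nx.DiGraph()                      # multi-source knowledge graph
data.add_node("p1", type="person")
data.add_node("loc_a", type="location")
data.add_node("loc_b", type="location")
data.add_edge("p1", "loc_a", rel="seen_at", time="t1", source="report_1")
data.add_edge("p1", "loc_b", rel="seen_at", time="t1", source="report_2")

pattern = nx.DiGraph()                   # conflict pattern graph
pattern.add_node("P", type="person")
pattern.add_node("L1", type="location")
pattern.add_node("L2", type="location")
pattern.add_edge("P", "L1", rel="seen_at")
pattern.add_edge("P", "L2", rel="seen_at")

matcher = isomorphism.DiGraphMatcher(
    data, pattern,
    node_match=isomorphism.categorical_node_match("type", None),
    edge_match=isomorphism.categorical_edge_match("rel", None),
)
for mapping in matcher.subgraph_isomorphisms_iter():
    print("potential conflict instance:", mapping)
```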
Camerini, Luca; Schulz, Peter Johannes
2012-07-18
The effectiveness of eHealth interventions in terms of reach and outcomes is now well documented. However, there is a need to understand not only whether eHealth interventions work, but also what kind of functions and mechanisms enhance their effectiveness. The present investigation contributes to tackling these challenges by investigating the role played by functional interactivity on patients' knowledge, empowerment, and health outcomes. To test whether health knowledge and empowerment mediate a possible relationship between the availability of interactive features on an eHealth application and individuals' health outcomes. We present an empirical, model-driven evaluation of the effects of functional interactivity implemented in an eHealth application, based on a brief theoretical review of the constructs of interactivity, health knowledge, empowerment, and health outcomes. We merged these constructs into a theoretical model of interactivity effects that we tested on an eHealth application for patients with fibromyalgia syndrome (FMS). This study used a pretest-posttest experimental design. We recruited 165 patients and randomly assigned them to three study groups, corresponding to different levels of functional interactivity. Eligibility to participate in the study required that patients (1) be fluent in Italian, (2) have access to the Internet, (3) report confidence in how to use a computer, and (4) have received a diagnosis of FMS from a doctor. We used structural equation modeling techniques to analyze changes between the pretest and the posttest results. The main finding was that functional interactivity had no impact on empowerment dimensions, nor direct observable effects on knowledge. However, knowledge positively affected health outcomes (b = -.12, P = .02), as did the empowerment dimensions of meaning (b = -.49, P < .001) and impact (b = -.25, P < .001). The theoretical model was partially confirmed, but only as far as the effects of knowledge and empowerment were concerned. The differential effect of interactive functions was by far weaker than expected. The strong impact of knowledge and empowerment on health outcomes suggests that these constructs should be targeted and enhanced by eHealth applications.
Representing Operational Modes for Situation Awareness
NASA Astrophysics Data System (ADS)
Kirchhübel, Denis; Lind, Morten; Ravn, Ole
2017-01-01
Operating complex plants is an increasingly demanding task for human operators. Diagnosis of and reaction to on-line events requires the interpretation of real-time data. Vast amounts of sensor data as well as operational knowledge about the state and design of the plant are necessary to deduce reasonable reactions to abnormal situations. Intelligent computational support tools can make the operator's task easier, but they require knowledge about the overall system in the form of a model. While tools for fault-tolerant control design based on physical principles and relations are valuable for designing robust systems, the models become too complex when considering the interactions on a plant-wide level. The alarm systems meant to support human operators in the diagnosis of the plant-wide situation, on the other hand, fail regularly in situations where these interactions of systems lead to many related alarms, overloading the operator with alarm floods. Functional modelling can provide a middle way to reduce the complexity of plant-wide models by abstracting from physical details to more general functions and behaviours. Based on functional models, the propagation of failures through the interconnected systems can be inferred and alarm floods can potentially be reduced to their root cause. However, the desired behaviour of a complex system changes due to operating procedures that require more than one physical and functional configuration. In this paper, a consistent representation of possible configurations is deduced from the functional-model analysis of an exemplary start-up procedure. The proposed interpretation of the modelling concepts simplifies the functional modelling of distinct modes. The analysis further reveals relevant links between the quantitative sensor data and the qualitative perspective of the diagnostics tool based on functional models. This will form the basis for the ongoing development of a novel real-time diagnostics system based on the on-line adaptation of the underlying MFM model.
The future of climate science analysis in a coming era of exascale computing
NASA Astrophysics Data System (ADS)
Bates, S. C.; Strand, G.
2013-12-01
Projections of Community Earth System Model (CESM) output based on the growth of data archived over 2000-2012 at all of our computing sites (NCAR, NERSC, ORNL) show that we can expect to reach 1,000 PB (1 EB) sometime in the next decade or so. The current paradigm, in which site-based archival systems hold these data that are then accessed via portals or gateways, downloaded to a local system, and finally processed and analyzed, will be irretrievably broken before then. From a climate modeling perspective, the expertise involved in making climate models themselves efficient on HPC systems will need to be applied to the data as well, providing fast parallel analysis tools co-resident in memory with the data, because disk I/O bandwidth simply will not keep up with the expected arrival of exaflop systems. The ability of scientists, analysts, stakeholders and others to use climate model output to turn these data into understanding and knowledge will require significant advances in the analysis tools and packages typically used today. Allowing data users to enact their own analyses on model output is virtually a requirement as well: climate modelers cannot anticipate all the analyses that users may want to perform. In addition, the expertise of data scientists, with their knowledge of the model output and of best practices in data management (metadata, curation, provenance and so on), will need to be rewarded and exploited to gain the most understanding possible from these volumes of data. In response to growing data size, demand, and future projections, the CESM output has undergone a structural evolution and the data management plan has been reevaluated and updated. The major evolution of the CESM data structure is presented here, along with the CESM experience and its role within CMIP3 and CMIP5.
14 CFR 63.53 - Knowledge requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Knowledge requirements. 63.53 Section 63.53 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN CERTIFICATION: FLIGHT CREWMEMBERS OTHER THAN PILOTS Flight Navigators § 63.53 Knowledge requirements. (a) An...
14 CFR 65.75 - Knowledge requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 2 2011-01-01 2011-01-01 false Knowledge requirements. 65.75 Section 65.75 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN CERTIFICATION: AIRMEN OTHER THAN FLIGHT CREWMEMBERS Mechanics § 65.75 Knowledge requirements. (a) Each applicant...
14 CFR 65.75 - Knowledge requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Knowledge requirements. 65.75 Section 65.75 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN CERTIFICATION: AIRMEN OTHER THAN FLIGHT CREWMEMBERS Mechanics § 65.75 Knowledge requirements. (a) Each applicant...
14 CFR 65.55 - Knowledge requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 2 2011-01-01 2011-01-01 false Knowledge requirements. 65.55 Section 65.55 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN CERTIFICATION: AIRMEN OTHER THAN FLIGHT CREWMEMBERS Aircraft Dispatchers § 65.55 Knowledge requirements. (a) A...
14 CFR 63.53 - Knowledge requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 2 2011-01-01 2011-01-01 false Knowledge requirements. 63.53 Section 63.53 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN CERTIFICATION: FLIGHT CREWMEMBERS OTHER THAN PILOTS Flight Navigators § 63.53 Knowledge requirements. (a) An...
14 CFR 63.35 - Knowledge requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Knowledge requirements. 63.35 Section 63.35 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN CERTIFICATION: FLIGHT CREWMEMBERS OTHER THAN PILOTS Flight Engineers § 63.35 Knowledge requirements. (a) An...
Francis, Claire E; Longmuir, Patricia E; Boyer, Charles; Andersen, Lars Bo; Barnes, Joel D; Boiarskaia, Elena; Cairney, John; Faigenbaum, Avery D; Faulkner, Guy; Hands, Beth P; Hay, John A; Janssen, Ian; Katzmarzyk, Peter T; Kemper, Han C; Knudson, Duane; Lloyd, Meghann; McKenzie, Thomas L; Olds, Tim S; Sacheck, Jennifer M; Shephard, Roy J; Zhu, Weimo; Tremblay, Mark S
2016-02-01
The Canadian Assessment of Physical Literacy (CAPL) was conceptualized as a tool to monitor children's physical literacy. The original model (fitness, activity behavior, knowledge, motor skill) required revision, and relative weights for calculating and interpreting scores were needed. Nineteen childhood physical activity/fitness experts completed a 3-round Delphi process. Round 1 consisted of open-ended questions; subsequent rounds rated statements using a 5-point Likert scale. Recommendations were sought regarding protocol inclusion, relative importance within composite scores, and score interpretation. Delphi participant consensus was achieved for 64% (47/73) of statement topics, including a revised conceptual model, specific assessment protocols, the importance of longitudinal tracking, and the relative importance of individual protocols and composite scores. Divergent opinions remained regarding the inclusion of sleep time, assessment/scoring of the obstacle course assessment of motor skill, and the need for an overall physical literacy classification. The revised CAPL model (overlapping domains of physical competence, motivation, and knowledge, encompassed by daily behavior) is appropriate for monitoring the physical literacy of children aged 8 to 12 years. Objectively measured domains (daily behavior, physical competence) have higher relative importance. The interpretation of CAPL results should be reevaluated as more data become available.
Requirements and Solutions for Personalized Health Systems.
Blobel, Bernd; Ruotsalainen, Pekka; Lopez, Diego M; Oemig, Frank
2017-01-01
Organizational, methodological and technological paradigm changes enable a precise, personalized, predictive, preventive and participative approach to health and social services supported by multiple actors from different domains at diverse levels of knowledge and skills. Interoperability has to advance beyond Information and Communication Technologies (ICT) concerns to include the real-world business domains and their processes, as well as the individual context of all actors involved. The paper introduces and compares personalized health definitions, summarizes requirements and principles for pHealth systems, and considers intelligent interoperability. It addresses knowledge representation and harmonization, decision intelligence, and usability as crucial issues in pHealth. On this basis, a system-theoretical, ontology-based, policy-driven reference architecture model for open and intelligent pHealth ecosystems and its transformation into an appropriate ICT design and implementation is proposed.
NASA Astrophysics Data System (ADS)
Hong, Yoon-Seok; Rosen, Michael R.
2002-03-01
An urban fractured-rock aquifer system, where disposal of storm water is via 'soak holes' drilled directly into the top of fractured-rock basalt, is highly dynamic, and the theory and knowledge needed to model it are still incomplete and insufficient. Formulating an accurate mechanistic model, usually based on first principles (physical and chemical laws, mass balance, diffusion and transport, etc.), is therefore a time-consuming and costly task. Instead of a human developing the mechanistic model, this paper presents an approach to automatic model evolution using genetic programming (GP) to model the dynamic behaviour of groundwater level fluctuations affected by storm water infiltration. The GP automatically evolves mathematical models with an understandable structure, using a function-tree representation and natural selection ('survival of the fittest') applied through genetic operators (reproduction, crossover, and mutation). The simulation results show that GP is not only capable of predicting the groundwater level fluctuation due to storm water infiltration but also provides insight into the dynamic behaviour of a partially known urban fractured-rock aquifer system by allowing knowledge extraction from the evolved models. Our results show that GP can work as a cost-effective modelling tool, enabling us to create prototype models quickly and inexpensively, and assists us in developing accurate models in less time, even with limited experience and incomplete knowledge of an urban fractured-rock aquifer system affected by storm water infiltration.
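To illustrate the flavor of the approach (not the authors' implementation), the sketch below evolves expression trees against toy input-output data; the operator set, the mutation-only variation (the paper's GP also uses reproduction and crossover), and the synthetic target h = 0.5u + 1 are all assumptions made for brevity:

```python
import random
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def random_tree(depth=3):
    """Grow a random expression tree over the input u and constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["u", round(random.uniform(-2, 2), 2)])
    return [random.choice(list(OPS)), random_tree(depth - 1), random_tree(depth - 1)]

def evaluate(tree, u):
    if tree == "u":
        return u
    if isinstance(tree, float):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, u), evaluate(right, u))

def fitness(tree, data):
    return sum((evaluate(tree, u) - h) ** 2 for u, h in data)

def mutate(tree):
    """Variation by subtree replacement (left-biased, kept simple)."""
    if not isinstance(tree, list) or random.random() < 0.2:
        return random_tree(2)
    return [tree[0], mutate(tree[1]), tree[2]]

# Toy data standing in for observed groundwater level responses.
data = [(u, 0.5 * u + 1.0) for u in range(10)]
pop = [random_tree() for _ in range(200)]
for _ in range(30):
    pop.sort(key=lambda t: fitness(t, data))                  # selection
    pop = pop[:50] + [mutate(random.choice(pop[:50])) for _ in range(150)]

best = min(pop, key=lambda t: fitness(t, data))
print(best, fitness(best, data))   # the evolved tree stays human-readable
```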
Generic Educational Knowledge Representation for Adaptive and Cognitive Systems
ERIC Educational Resources Information Center
Caravantes, Arturo; Galan, Ramon
2011-01-01
The interoperability of educational systems, encouraged by the development of specifications, standards and tools related to the Semantic Web, is limited to the exchange of information in domain and student models. High system interoperability requires that a common framework be defined that represents the functional essence of educational systems.…
Supporting the Educational Needs of Students with Orthopedic Impairments.
ERIC Educational Resources Information Center
Heller, Kathryn Wolff; Swinehart-Jones, Dawn
2003-01-01
This article provides information on orthopedic impairments and the unique knowledge and skills required to provide these students with an appropriate education. Information on current practice is provided, as well as training and technical assistance models that can be used to help provide teachers with the necessary training. (Contains…
A Simple Relativistic Bohr Atom
ERIC Educational Resources Information Center
Terzis, Andreas F.
2008-01-01
A simple concise relativistic modification of the standard Bohr model for hydrogen-like atoms with circular orbits is presented. As the derivation requires basic knowledge of classical and relativistic mechanics, it can be taught in standard courses in modern physics and introductory quantum mechanics. In addition, it can be shown in a class that…
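The truncated record omits the result itself; for reference (a standard result of this type of derivation, not text quoted from the article), imposing the Bohr quantization condition with relativistic momentum, $\gamma m v r = n\hbar$, on circular orbits yields the Sommerfeld-type energy levels

```latex
E_n = m c^2 \sqrt{1 - \left(\frac{Z\alpha}{n}\right)^2},
\qquad
\alpha = \frac{e^2}{4\pi\varepsilon_0 \hbar c} \approx \frac{1}{137},
```

which for $Z\alpha \ll n$ expands to $E_n \approx m c^2 - \frac{(Z\alpha)^2 m c^2}{2 n^2}$, recovering the nonrelativistic Bohr levels.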
Model Railroading and Computer Fundamentals
ERIC Educational Resources Information Center
McCormick, John W.
2007-01-01
Less than one half of one percent of all processors manufactured today end up in computers. The rest are embedded in other devices such as automobiles, airplanes, trains, satellites, and nearly every modern electronic device. Developing software for embedded systems requires a greater knowledge of hardware than developing for a typical desktop…
Quantifier Comprehension in Corticobasal Degeneration
ERIC Educational Resources Information Center
McMillan, Corey T.; Clark, Robin; Moore, Peachie; Grossman, Murray
2006-01-01
In this study, we investigated patients with focal neurodegenerative diseases to examine a formal linguistic distinction between classes of generalized quantifiers, like "some X" and "less than half of X." Our model of quantifier comprehension proposes that number knowledge is required to understand both first-order and higher-order quantifiers.…
Understanding transport of volatile contaminants in soil gas and ground water, particularly those associated with underground storage tanks (USTs), requires a detailed knowledge about the depth-dependent distribution of chemical species in the subsurface. A risk assessment of the...
Observations and Modeling of the Green Ocean Amazon 2014/15. CHUVA Field Campaign Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Machado, L. A. T.
2016-03-01
The physical processes inside clouds are one of the most unknown components of weather and climate systems. A description of cloud processes through the use of standard meteorological parameters in numerical models has to be strongly improved to accurately describe the characteristics of hydrometeors, latent heating profiles, radiative balance, air entrainment, and cloud updrafts and downdrafts. Numerical models have been improved to run at higher spatial resolutions where it is necessary to explicitly describe these cloud processes. For instance, to analyze the effects of global warming in a given region it is necessary to perform simulations taking into account all of the cloud processes described above. Another important application that requires this knowledge is satellite precipitation estimation. The analysis will be performed focusing on the microphysical evolution and cloud life cycle, different precipitation estimation algorithms, the development of thunderstorms and lightning formation, processes in the boundary layer, and cloud microphysical modeling. This project intends to extend the knowledge of these cloud processes to reduce the uncertainties in precipitation estimation, mainly from warm clouds, and, consequently, improve knowledge of the water and energy budget and cloud microphysics.
A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model
NASA Technical Reports Server (NTRS)
Mathe, Nathalie; Chen, James
1994-01-01
Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant to an individual user's context can be automatically supplied to that user. However, most of this knowledge about contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
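A minimal sketch of the idea, assuming a toy in-memory store; the generalization rule (term overlap between queries) and all names are invented for illustration, and the paper's relevance network is more elaborate:

```python
from collections import defaultdict

class RelevanceNetwork:
    """Record user feedback per (profile, query) context and generalize."""

    def __init__(self):
        # (profile, query) -> reference -> accumulated relevance score
        self.scores = defaultdict(lambda: defaultdict(float))

    def feedback(self, profile, query, reference, relevant):
        self.scores[(profile, query)][reference] += 1.0 if relevant else -1.0

    def rank(self, profile, query):
        """Exact context counts fully; similar queries contribute less."""
        merged = defaultdict(float)
        terms = set(query.split())
        for (prof, q), refs in self.scores.items():
            if prof != profile:
                continue
            overlap = len(terms & set(q.split())) / max(len(terms), 1)
            weight = 1.0 if q == query else 0.5 * overlap
            for ref, score in refs.items():
                merged[ref] += weight * score
        return sorted(merged, key=merged.get, reverse=True)

net = RelevanceNetwork()
net.feedback("engineer", "thermal limits", "doc42", relevant=True)
print(net.rank("engineer", "thermal limits battery"))  # ['doc42']
```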
The practice of agent-based model visualization.
Dorin, Alan; Geard, Nicholas
2014-01-01
We discuss approaches to agent-based model visualization. Agent-based modeling has its own requirements for visualization, some shared with other forms of simulation software, and some unique to this approach. In particular, agent-based models are typified by complexity, dynamism, nonequilibrium and transient behavior, heterogeneity, and a researcher's interest in both individual- and aggregate-level behavior. These are all traits requiring careful consideration in the design, experimentation, and communication of results. In the case of all but final communication for dissemination, researchers may not make their visualizations public. Hence, the knowledge of how to visualize during these earlier stages is unavailable to the research community in a readily accessible form. Here we explore means by which all phases of agent-based modeling can benefit from visualization, and we provide examples from the available literature and online sources to illustrate key stages and techniques.
Modeling software systems by domains
NASA Technical Reports Server (NTRS)
Dippolito, Richard; Lee, Kenneth
1992-01-01
The Software Architectures Engineering (SAE) Project at the Software Engineering Institute (SEI) has developed engineering modeling techniques that both reduce the complexity of software for domain-specific computer systems and result in systems that are easier to build and maintain. These techniques allow maximum freedom for system developers to apply their domain expertise to software. We have applied these techniques to several types of applications, including training simulators operating in real time, engineering simulators operating in non-real time, and real-time embedded computer systems. Our modeling techniques result in software that mirrors both the complexity of the application and the domain knowledge requirements. We submit that the proper measure of software complexity reflects neither the number of software component units nor the code count, but the locus of and amount of domain knowledge. As a result of using these techniques, domain knowledge is isolated by fields of engineering expertise and removed from the concern of the software engineer. In this paper, we will describe kinds of domain expertise, describe engineering by domains, and provide relevant examples of software developed for simulator applications using the techniques.
Robinson, Vivian; Tugwell, Peter; Walker, Peter; Ter Kuile, Aleida A; Neufeld, Vic; Hatcher-Roberts, Janet; Amaratunga, Carol; Andersson, Neil; Doull, Marion; Labonte, Ron; Muckle, Wendy; Murangira, Felicite; Nyamai, Caroline; Ralph-Robinson, Dawn; Simpson, Don; Sitthi-Amorn, Chitr; Turnbull, Jeff; Walker, Joelle; Wood, Chris
2007-08-01
Collaborative action is required to address persistent and systematic health inequities which exist for most diseases in most countries of the world. The Academic NGO initiative (ACANGO) described in this paper was set up as a focused network giving priority to twinned partnerships between Academic research centres and community-based NGOs. ACANGO aims to capture the strengths of both in order to build consensus among stakeholders, engage the community, focus on leadership training, shared management and resource development and deployment. A conceptual model was developed through a series of community consultations. This model was tested with four academic-community challenge projects based in Kenya, Canada, Thailand and Rwanda and an online forum and coordinating hub based at the University of Ottawa. Between February 2005 and February 2007, each of the four challenge projects was able to show specific outputs, outcomes and impacts related to enhancing health equity through the relevant production and application of knowledge. The ACANGO initiative model and network has demonstrated success in enhancing the production and use of knowledge in program design and implementation for vulnerable populations.
Structural analysis consultation using artificial intelligence
NASA Technical Reports Server (NTRS)
Melosh, R. J.; Marcal, P. V.; Berke, L.
1978-01-01
The primary goal of consultation is definition of the best strategy to deal with a structural engineering analysis objective. The knowledge base designed to meet this need identifies the type of numerical analysis, the needed modeling detail, and the specific analysis data required. Decisions are constructed on the basis of the data in the knowledge base (material behavior, relations between geometry and structural behavior, measures of the importance of time and temperature changes, and characteristics of the spectrum of analysis types, including the relation between accuracy and model detail) and user-supplied specifics about the structure, its mechanical loadings, and its temperature states. Existing software demonstrated the feasibility of the approach, encompassing the 36 analysis classes spanning nonlinear, temperature-affected, incremental analyses that track the behavior of structural systems.
High-Reproducibility and High-Accuracy Method for Automated Topic Classification
NASA Astrophysics Data System (ADS)
Lancichinetti, Andrea; Sirer, M. Irmak; Wang, Jane X.; Acuna, Daniel; Körding, Konrad; Amaral, Luís A. Nunes
2015-01-01
Much of human knowledge sits in large databases of unstructured text. Leveraging this knowledge requires algorithms that extract and record metadata on unstructured text documents. Assigning topics to documents will enable intelligent searching, statistical characterization, and meaningful classification. Latent Dirichlet allocation (LDA) is the state of the art in topic modeling. Here, we perform a systematic theoretical and numerical analysis that demonstrates that current optimization techniques for LDA often yield results that are not accurate in inferring the most suitable model parameters. Adapting approaches from community detection in networks, we propose a new algorithm that displays high reproducibility and high accuracy and also has high computational efficiency. We apply it to a large set of documents in the English Wikipedia and reveal its hierarchical structure.
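The instability the authors analyze is easy to reproduce with a standard LDA implementation; the sketch below (a toy corpus and scikit-learn's variational LDA, which is a stand-in baseline rather than the authors' network-based algorithm) shows how different random initializations can reach different local optima:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "galaxy star telescope orbit", "star orbit planet telescope",
    "neuron brain cortex synapse", "brain synapse memory neuron",
]
X = CountVectorizer().fit_transform(docs)

# Rerun LDA with different seeds: the attained perplexity (and hence the
# inferred topics) can differ, which is the failure mode the paper targets.
for seed in (0, 1, 2):
    lda = LatentDirichletAllocation(n_components=2, max_iter=100,
                                    random_state=seed).fit(X)
    print(seed, round(lda.perplexity(X), 2))
```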
A Learning-Based Approach to Reactive Security
NASA Astrophysics Data System (ADS)
Barth, Adam; Rubinstein, Benjamin I. P.; Sundararajan, Mukund; Mitchell, John C.; Song, Dawn; Bartlett, Peter L.
Despite the conventional wisdom that proactive security is superior to reactive security, we show that reactive security can be competitive with proactive security as long as the reactive defender learns from past attacks instead of myopically overreacting to the last attack. Our game-theoretic model follows common practice in the security literature by making worst-case assumptions about the attacker: we grant the attacker complete knowledge of the defender's strategy and do not require the attacker to act rationally. In this model, we bound the competitive ratio between a reactive defense algorithm (which is inspired by online learning theory) and the best fixed proactive defense. Additionally, we show that, unlike proactive defenses, this reactive strategy is robust to a lack of information about the attacker's incentives and knowledge.
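A hedged sketch of the flavor of defense the paper analyzes: a multiplicative-weights-style defender that shifts budget toward attack surfaces that incur loss. The surfaces, loss model, and learning rate here are invented; the paper states its algorithm and guarantees game-theoretically, not as this exact code:

```python
import math
import random

surfaces = ["web", "vpn", "email"]          # illustrative attack surfaces
weights = {s: 1.0 for s in surfaces}
eta = 0.5                                   # learning rate (illustrative)

for _ in range(100):
    total = sum(weights.values())
    defense = {s: w / total for s, w in weights.items()}    # budget split
    attacked = random.choice(surfaces)      # adversarial in the paper's model
    loss = max(0.0, 1.0 - 2.0 * defense[attacked])          # weak spot -> loss
    # Learn from the attack instead of myopically overreacting to it:
    weights[attacked] *= math.exp(eta * loss)

print({s: round(w / sum(weights.values()), 2) for s, w in weights.items()})
```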
Semantic Clinical Guideline Documents
Eriksson, Henrik; Tu, Samson W.; Musen, Mark
2005-01-01
Decision-support systems based on clinical practice guidelines can support physicians and other health-care personnel in the process of following best practice consistently. A knowledge-based approach to representing guidelines makes it possible to encode computer-interpretable guidelines in a formal manner, perform consistency checks, and use the guidelines directly in decision-support systems. Decision-support authors and guideline users require guidelines in human-readable formats in addition to computer-interpretable ones (e.g., for guideline review and quality assurance). We propose a new document-oriented information architecture that combines knowledge-representation models with electronic and paper documents. The approach integrates decision-support models with standard document formats to create a combined clinical-guideline model that supports on-line viewing, printing, and decision support. PMID:16779037
ERIC Educational Resources Information Center
Morse, Emile L.; Schmidt, Heidi; Butter, Karen; Rider, Cynthia; Hickey, Thomas B.; O'Neill, Edward T.; Toves, Jenny; Green, Marlan; Soy, Sue; Gunn, Stan; Galloway, Patricia
2002-01-01
Includes four articles that discuss evaluation methods for information management systems under the Defense Advanced Research Projects Agency; building digital libraries at the University of California San Francisco's Tobacco Control Archives; IFLA's Functional Requirements for Bibliographic Records; and designing the Texas email repository model…
NASA Technical Reports Server (NTRS)
Barro, E.; Delbufalo, A.; Rossi, F.
1993-01-01
The definition of some modern, highly demanding space systems requires a different approach to system definition and design from that adopted for traditional missions. System functionality is strongly coupled to the operational analysis, which aims to characterize the dynamic interactions of the flight element with its surrounding environment and its ground control segment. Unambiguous functional, operational and performance requirements are to be defined for the system, thus also improving the successive development stages. This paper proposes a Petri-net-based methodology and two related prototype applications (to ARISTOTELES orbit control and to Hermes telemetry generation) for the operational analysis of space systems through the dynamic modeling of their functions, together with a computer-aided environment (ISIDE) able to execute the dynamic model, thus enabling an early validation of the system functional representation, and to provide a structured system requirements database: the shared knowledge base interconnecting static and dynamic applications, fully traceable to the models and interfaceable with the external world.
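A minimal sketch of the kind of executable Petri-net model such a methodology relies on; the two-transition telemetry fragment is invented for illustration and is far simpler than the Hermes telemetry model described:

```python
class PetriNet:
    """Places hold tokens; a transition fires when its inputs are covered."""

    def __init__(self, marking):
        self.marking = dict(marking)      # place -> token count
        self.transitions = {}             # name -> (input arcs, output arcs)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking.get(p, 0) >= w for p, w in inputs.items())

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"transition {name} is not enabled"
        for p, w in inputs.items():
            self.marking[p] -= w
        for p, w in outputs.items():
            self.marking[p] = self.marking.get(p, 0) + w

# Hypothetical telemetry fragment: a frame is assembled, then downlinked.
net = PetriNet({"data_ready": 1, "frame_buffer": 0, "sent": 0})
net.add_transition("build_frame", {"data_ready": 1}, {"frame_buffer": 1})
net.add_transition("downlink", {"frame_buffer": 1}, {"sent": 1})
for t in ("build_frame", "downlink"):
    if net.enabled(t):
        net.fire(t)
print(net.marking)   # {'data_ready': 0, 'frame_buffer': 0, 'sent': 1}
```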
Reduced-order modeling for hyperthermia: an extended balanced-realization-based approach.
Mattingly, M; Bailey, E A; Dutton, A W; Roemer, R B; Devasia, S
1998-09-01
Accurate thermal models are needed in hyperthermia cancer treatments for such tasks as actuator and sensor placement design, parameter estimation, and feedback temperature control. The complexity of the human body produces full-order models which are too large for effective execution of these tasks, making use of reduced-order models necessary. However, standard balanced-realization (SBR)-based model reduction techniques require a priori knowledge of the particular placement of actuators and sensors for model reduction. Since placement design is intractable (computationally) on the full-order models, SBR techniques must use ad hoc placements. To alleviate this problem, an extended balanced-realization (EBR)-based model-order reduction approach is presented. The new technique allows model order reduction to be performed over all possible placement designs and does not require ad hoc placement designs. It is shown that models obtained using the EBR method are more robust to intratreatment changes in the placement of the applied power field than those models obtained using the SBR method.
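For readers unfamiliar with SBR reduction, the sketch below implements textbook balanced truncation for a stable LTI system (A, B, C) with SciPy; it is the standard method the paper extends, not the EBR algorithm itself, and it assumes A is Hurwitz so both Gramians are positive definite:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    # Gramians: A Wc + Wc A' + B B' = 0 and A' Wo + Wo A + C' C = 0.
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, hsv, Vt = svd(Lo.T @ Lc)               # hsv = Hankel singular values
    S = np.diag(hsv ** -0.5)
    T, Tinv = Lc @ Vt.T @ S, S @ U.T @ Lo.T   # balancing transformation
    Ab, Bb, Cb = Tinv @ A @ T, Tinv @ B, C @ T
    return Ab[:r, :r], Bb[:r, :], Cb[:, :r], hsv

# Toy 3-state stable system reduced to 2 states.
A = np.array([[-1.0, 0.2, 0.0], [0.0, -2.0, 0.1], [0.0, 0.0, -5.0]])
B = np.array([[1.0], [0.5], [0.1]])
C = np.array([[1.0, 0.0, 1.0]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print(hsv)   # small trailing values justify the truncation
```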
Koo, Chulmo; Wati, Yulia; Park, Keeho
2011-01-01
Background The fact that patient satisfaction with primary care clinical practices and physician-patient communications has decreased gradually has brought a new opportunity to the online channel as a supplementary service to provide additional information. Objective In this study, our objectives were to examine the process of cognitive knowledge expectation-confirmation from eHealth users and to recommend the attributes of a “knowledge-intensive website.” Knowledge expectation can be defined as users’ existing attitudes or beliefs regarding expected levels of knowledge they may gain by accessing the website. Knowledge confirmation is the extent to which a user’s knowledge expectation of information systems use is realized during actual use. In our hypothesized research model, perceived information quality, presentation and attractiveness as well as knowledge expectation influence knowledge confirmation, which in turn influences perceived usefulness and end user satisfaction, which feeds back to knowledge expectation. Methods An empirical study was conducted at the National Cancer Center (NCC), Republic of Korea (South Korea), by evaluating its official website. A user survey was administered containing items to measure subjectively perceived website quality and expectation-confirmation attributes. A study sample of 198 usable responses was used for further analysis. We used the structural equation model to test the proposed research model. Results Knowledge expectation exhibited a positive effect on knowledge confirmation (beta = .27, P < .001). The paths from information quality, information presentation, and website attractiveness to knowledge confirmation were also positive and significant (beta = .24, P < .001; beta = .29, P < .001; beta = .18, P < .001, respectively). Moreover, the effect of knowledge confirmation on perceived usefulness was also positively significant (beta = .64, P < .001). Knowledge expectation together with knowledge confirmation and perceived usefulness also significantly affected end user satisfaction (beta = .22, P < .001; beta = .39, P < .001; beta = .25, P < .001, respectively). Conclusions Theoretically, this study has (1) identified knowledge-intensive website attributes, (2) enhanced the theoretical foundation of eHealth from the information systems (IS) perspective by adopting the expectation-confirmation theory (ECT), and (3) examined the importance of information and knowledge attributes and explained their impact on user satisfaction. Practically, our empirical results suggest that perceived website quality (ie, information quality, information presentation, and website attractiveness) is a core requirement for knowledge building. In addition, our study has also shown that knowledge confirmation has a greater effect on satisfaction than both knowledge expectation and perceived usefulness. PMID:22047810
Concepts, Control, and Context: A Connectionist Account of Normal and Disordered Semantic Cognition
2018-01-01
Semantic cognition requires conceptual representations shaped by verbal and nonverbal experience and executive control processes that regulate activation of knowledge to meet current situational demands. A complete model must also account for the representation of concrete and abstract words, of taxonomic and associative relationships, and for the role of context in shaping meaning. We present the first major attempt to assimilate all of these elements within a unified, implemented computational framework. Our model combines a hub-and-spoke architecture with a buffer that allows its state to be influenced by prior context. This hybrid structure integrates the view, from cognitive neuroscience, that concepts are grounded in sensory-motor representation with the view, from computational linguistics, that knowledge is shaped by patterns of lexical co-occurrence. The model successfully codes knowledge for abstract and concrete words, associative and taxonomic relationships, and the multiple meanings of homonyms, within a single representational space. Knowledge of abstract words is acquired through (a) their patterns of co-occurrence with other words and (b) acquired embodiment, whereby they become indirectly associated with the perceptual features of co-occurring concrete words. The model accounts for executive influences on semantics by including a controlled retrieval mechanism that provides top-down input to amplify weak semantic relationships. The representational and control elements of the model can be damaged independently, and the consequences of such damage closely replicate effects seen in neuropsychological patients with loss of semantic representation versus control processes. Thus, the model provides a wide-ranging and neurally plausible account of normal and impaired semantic cognition. PMID:29733663
Mobile, Collaborative Situated Knowledge Creation for Urban Planning
Zurita, Gustavo; Baloian, Nelson
2012-01-01
Geo-collaboration is an emerging research area in computer science studying the way spatial, geographically referenced information and communication technologies can support collaborative activities. Scenarios in which information associated with its physical location is of paramount importance are often referred to as Situated Knowledge Creation scenarios. To date there are few computer systems supporting knowledge creation that explicitly incorporate physical context as part of the knowledge being managed in mobile face-to-face scenarios. This work presents a collaborative software application supporting visually geo-referenced knowledge creation in mobile working scenarios while the users are interacting face-to-face. The system allows managing data and information associated with specific physical locations for knowledge creation processes in the field, such as urban planning, identifying specific physical locations, territorial management, etc., using Tablet-PCs and GPS to geo-reference data and information. It presents a model for developing mobile applications supporting situated knowledge creation in the field, introducing the requirements for such an application and the functionalities it should have in order to fulfill them. The paper also presents the results of utility and usability evaluations. PMID:22778639
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharp, J.K.
1997-11-01
This seminar describes a process and methodology that uses structured natural language to enable the construction of precise information requirements directly from users, experts, and managers. The main focus of this natural language approach is to create the precise information requirements and to do so in such a way that the business and technical experts are fully accountable for the results. These requirements can then be implemented using appropriate tools and technology. This requirement set is also a universal learning tool because it has all of the knowledge that is needed to understand a particular process (e.g., expense vouchers, project management, budget reviews, tax laws, machine function).
Hoskinson, Anne-Marie
2010-01-01
Biological problems in the twenty-first century are complex and require mathematical insight, often resulting in mathematical models of biological systems. Building mathematical-biological models requires cooperation among biologists and mathematicians, and mastery of building models. A new course in mathematical modeling presented the opportunity to build both content and process learning of mathematical models, the modeling process, and the cooperative process. There was little guidance from the literature on how to build such a course. Here, I describe the iterative process of developing such a course, beginning with objectives and choosing content and process competencies to fulfill the objectives. I include some inductive heuristics for instructors seeking guidance in planning and developing their own courses, and I illustrate with a description of one instructional model cycle. Students completing this class reported gains in learning of modeling content, the modeling process, and cooperative skills. Student content and process mastery increased, as assessed on several objective-driven metrics in many types of assessments.
Capturing security requirements for software systems.
El-Hadary, Hassan; El-Kassas, Sherif
2014-07-01
Security is often an afterthought during software development. Realizing security early, especially in the requirement phase, is important so that security problems can be tackled early enough before going further in the process, avoiding rework. A more effective approach for security requirement engineering is needed to provide a more systematic way of eliciting adequate security requirements. This paper proposes a methodology for security requirement elicitation based on problem frames. The methodology aims at early integration of security with software development. The main goal of the methodology is to assist developers in eliciting adequate security requirements in a more systematic way during the requirement engineering process. A security catalog, based on the problem frames, is constructed in order to help identify security requirements with the aid of previous security knowledge. Abuse frames are used to model threats, while security problem frames are used to model security requirements. We have made use of evaluation criteria to evaluate the resulting security requirements, concentrating on the identification of conflicts among requirements. We have shown that more complete security requirements can be elicited by such a methodology, in addition to the assistance offered to developers to elicit security requirements in a more systematic way.
Shiffman, Richard N; Michel, George; Essaihi, Abdelwaheb; Thornquist, Elizabeth
2004-01-01
A gap exists between the information contained in published clinical practice guidelines and the knowledge and information that are necessary to implement them. This work describes a process to systematize and make explicit the translation of document-based knowledge into workflow-integrated clinical decision support systems. This approach uses the Guideline Elements Model (GEM) to represent the guideline knowledge. Implementation requires a number of steps to translate the knowledge contained in guideline text into a computable format and to integrate the information into clinical workflow. The steps include: (1) selection of a guideline and specific recommendations for implementation, (2) markup of the guideline text, (3) atomization, (4) deabstraction and (5) disambiguation of recommendation concepts, (6) verification of rule set completeness, (7) addition of explanations, (8) building executable statements, (9) specification of origins of decision variables and insertions of recommended actions, (10) definition of action types and selection of associated beneficial services, (11) choice of interface components, and (12) creation of requirement specification. The authors illustrate these component processes using examples drawn from recent experience translating recommendations from the National Heart, Lung, and Blood Institute's guideline on management of chronic asthma into a workflow-integrated decision support system that operates within the Logician electronic health record system. Using the guideline document as a knowledge source promotes authentic translation of domain knowledge and reduces the overall complexity of the implementation task. From this framework, we believe that a better understanding of activities involved in guideline implementation will emerge.
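To make step 8 concrete, here is a hedged illustration of an "executable statement" derived from atomized recommendation text; the field names, thresholds, and actions are invented for illustration and are not drawn from the NHLBI guideline:

```python
def asthma_recommendations(patient):
    """Toy executable statements of the if-then form produced in step 8."""
    actions = []
    if patient.get("daytime_symptoms_per_week", 0) > 2:
        actions.append("step up controller therapy")          # illustrative
    if patient.get("on_inhaled_corticosteroid") and \
            patient.get("well_controlled_months", 0) >= 3:
        actions.append("consider stepping down therapy")      # illustrative
    return actions

print(asthma_recommendations({"daytime_symptoms_per_week": 4}))
```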
A Computational Workflow for the Automated Generation of Models of Genetic Designs.
Misirli, Göksel; Nguyen, Tramy; McLaughlin, James Alastair; Vaidyanathan, Prashant; Jones, Timothy S; Densmore, Douglas; Myers, Chris; Wipat, Anil
2018-06-05
Computational models are essential to engineer predictable biological systems and to scale up this process for complex systems. Computational modeling often requires expert knowledge and data to build models. Clearly, manual creation of models is not scalable for large designs. Despite several automated model construction approaches, computational methodologies to bridge knowledge in design repositories and the process of creating computational models have still not been established. This paper describes a workflow for automatic generation of computational models of genetic circuits from data stored in design repositories using existing standards. This workflow leverages the software tool SBOLDesigner to build structural models that are then enriched by the Virtual Parts Repository API using Systems Biology Open Language (SBOL) data fetched from the SynBioHub design repository. The iBioSim software tool is then utilized to convert this SBOL description into a computational model encoded using the Systems Biology Markup Language (SBML). Finally, this SBML model can be simulated using a variety of methods. This workflow provides synthetic biologists with easy to use tools to create predictable biological systems, hiding away the complexity of building computational models. This approach can further be incorporated into other computational workflows for design automation.
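The final SBML-generation step can be pictured with python-libsbml (a real library, though the fragment below is a generic sketch rather than the iBioSim converter, and the model and species names are placeholders):

```python
import libsbml

# Build a skeletal SBML model such as a converter might emit.
document = libsbml.SBMLDocument(3, 2)        # SBML Level 3 Version 2
model = document.createModel()
model.setId("genetic_circuit")               # placeholder id

compartment = model.createCompartment()
compartment.setId("cell")
compartment.setSize(1.0)
compartment.setConstant(True)

species = model.createSpecies()              # e.g., a repressor protein
species.setId("TetR")
species.setCompartment("cell")
species.setInitialAmount(0.0)
species.setConstant(False)
species.setBoundaryCondition(False)
species.setHasOnlySubstanceUnits(False)

print(libsbml.writeSBMLToString(document))   # serialized model
```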
The relationship between chiropractor required and current level of business knowledge.
Ciolfi, Michael Anthony; Kasen, Patsy Anne
2017-01-01
Chiropractors frequently practice within health care systems requiring the business acumen of an entrepreneur. However, some chiropractors do not know the relationship between the level of business knowledge required for practice success and their current level of business knowledge. The purpose of this quantitative study was to examine the relationship between chiropractors' perceived level of business knowledge required and their perceived level of current business knowledge. Two hundred and seventy-four participants completed an online survey (Health Care Training and Education Needs Survey) which included eight key business items. Participants rated the level of perceived business knowledge required (Part I) and their current perceived level of knowledge (Part II) for the same eight items. Data were collected from November 27, 2013 to December 18, 2013. Data were analyzed using Spearman's ranked correlation to determine the statistically significant relationships between the perceived level of knowledge required and the perceived current level of knowledge for each of the paired eight items from Parts I and II of the survey. Wilcoxon signed-rank tests were performed to determine the statistical difference between the paired items. The results of Spearman's correlation testing indicated a statistically significant (p < 0.01) positive correlation between the perceived level of knowledge required and the perceived current level of knowledge for six variables: (a) organizational behavior, (b) strategic management, (c) marketing, (d) legal and ethical, (e) managerial decisions, and (f) operations. Wilcoxon signed-rank testing indicated a significant difference for three paired items: strategic management; marketing; and legal and ethical. The results suggest that relationships exist for the majority of business items (6 of 8); however, a statistically significant difference was demonstrated in only three of the paired business items. The implications of this study for social change include the potential to improve chiropractors' business knowledge and skills, enable practice success, enhance health services delivery, and positively influence the profession as a viable career.
A Complex Network Approach to Distributional Semantic Models
Utsumi, Akira
2015-01-01
A number of studies on network analysis have focused on language networks based on free word association, which reflects human lexical knowledge, and have demonstrated the small-world and scale-free properties in the word association network. Nevertheless, there have been very few attempts at applying network analysis to distributional semantic models, despite the fact that these models have been studied extensively as computational or cognitive models of human lexical knowledge. In this paper, we analyze three network properties, namely, small-world, scale-free, and hierarchical properties, of semantic networks created by distributional semantic models. We demonstrate that the created networks generally exhibit the same properties as word association networks. In particular, we show that the distribution of the number of connections in these networks follows the truncated power law, which is also observed in an association network. This indicates that distributional semantic models can provide a plausible model of lexical knowledge. Additionally, the observed differences in the network properties of various implementations of distributional semantic models are consistently explained or predicted by considering the intrinsic semantic features of a word-context matrix and the functions of matrix weighting and smoothing. Furthermore, to simulate a semantic network with the observed network properties, we propose a new growing network model based on the model of Steyvers and Tenenbaum. The idea underlying the proposed model is that both preferential and random attachments are required to reflect different types of semantic relations in network growth process. We demonstrate that this model provides a better explanation of network behaviors generated by distributional semantic models. PMID:26295940
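A sketch of the gist of the proposed growth rule, assuming networkx; the mixing probability, network size, and the edge-endpoint trick used to approximate degree-proportional choice are illustrative, not the paper's exact parameterization:

```python
import random
import networkx as nx

def grow(n_nodes=2000, m=3, p_random=0.3, seed=0):
    """Attach each new node by a mix of random and preferential choices."""
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)
    for new in range(m + 1, n_nodes):
        targets = set()
        while len(targets) < m:
            if rng.random() < p_random:
                targets.add(rng.randrange(new))       # uniform attachment
            else:
                # Picking an endpoint of a random edge is degree-proportional.
                u, v = rng.choice(list(G.edges()))
                targets.add(rng.choice((u, v)))
        G.add_edges_from((new, t) for t in targets)
    return G

G = grow()
print(nx.average_clustering(G))
```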
A Sensitivity Analysis of fMRI Balloon Model.
Zayane, Chadia; Laleg-Kirati, Taous Meriem
2015-01-01
Functional magnetic resonance imaging (fMRI) allows the mapping of brain activation through measurements of the Blood Oxygenation Level Dependent (BOLD) contrast. The characterization of the pathway from the input stimulus to the output BOLD signal requires the selection of an adequate hemodynamic model and the satisfaction of some specific conditions while conducting the experiment and calibrating the model. This paper focuses on the identifiability of the Balloon hemodynamic model. By identifiability, we mean the ability to estimate the model parameters accurately given the input and the output measurement. Previous studies of the Balloon model have somehow added knowledge, either by choosing prior distributions for the parameters, freezing some of them, or looking for the solution as a projection on a natural basis of some vector space. In these studies, the identification was generally assessed using event-related paradigms. This paper justifies the reasons behind the need for adding knowledge and choosing certain paradigms, and it completes the few existing identifiability studies through a global sensitivity analysis of the Balloon model in the case of a blocked-design experiment.
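For orientation, the Balloon model being analyzed can be written as the ODE system below (a common parameterization in the Friston/Buxton tradition; the parameter values and boxcar stimulus are typical literature placeholders, not the paper's calibration):

```python
import numpy as np
from scipy.integrate import solve_ivp

# States: s (flow-inducing signal), f (inflow), v (blood volume), q (dHb).
eps, tau_s, tau_f, tau_0, alpha, E0, V0 = 0.5, 0.8, 0.4, 1.0, 0.2, 0.4, 0.02

def balloon(t, y):
    s, f, v, q = y
    u = 1.0 if 1.0 <= t <= 3.0 else 0.0            # toy boxcar stimulus
    E = 1.0 - (1.0 - E0) ** (1.0 / f)              # oxygen extraction
    ds = eps * u - s / tau_s - (f - 1.0) / tau_f
    df = s
    dv = (f - v ** (1.0 / alpha)) / tau_0
    dq = (f * E / E0 - v ** (1.0 / alpha - 1.0) * q) / tau_0
    return [ds, df, dv, dq]

sol = solve_ivp(balloon, (0.0, 20.0), [0.0, 1.0, 1.0, 1.0],
                t_eval=np.linspace(0.0, 20.0, 200))
v, q = sol.y[2], sol.y[3]
k1, k2, k3 = 7.0 * E0, 2.0, 2.0 * E0 - 0.2         # common BOLD coefficients
bold = V0 * (k1 * (1.0 - q) + k2 * (1.0 - q / v) + k3 * (1.0 - v))
print(bold.max())
```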
Development of the IMB Model and an Evidence-Based Diabetes Self-management Mobile Application.
Jeon, Eunjoo; Park, Hyeoun-Ae
2018-04-01
This study developed a diabetes self-management mobile application based on the information-motivation-behavioral skills (IMB) model, evidence extracted from clinical practice guidelines, and requirements identified through focus group interviews (FGIs) with diabetes patients. We developed a diabetes self-management (DSM) app in accordance with the following four stages of the system development life cycle. The functional and knowledge requirements of the users were extracted through FGIs with 19 diabetes patients. A system diagram, data models, a database, an algorithm, screens, and menus were designed. An Android app and server with an SSL protocol were developed. The DSM app algorithm and heuristics, as well as the usability of the DSM app, were evaluated, and then the DSM app was modified based on the heuristic and usability evaluations. A total of 11 requirement themes were identified through the FGIs. Sixteen functions and 49 knowledge rules were extracted. The system diagram consisted of a client part and server part, 78 data models, a database with 10 tables, an algorithm, and a menu structure with 6 main menus, and 40 user screens were developed. The DSM app required Android version 4.4 or higher for Bluetooth connectivity. The proficiency and efficiency scores of the algorithm were 90.96% and 92.39%, respectively. Fifteen issues were revealed through the heuristic evaluation, and the app was modified to address three of these issues. It was also modified to address five comments received by the researchers through the usability evaluation. The DSM app was developed based on behavioral change theory through IMB models. It was designed to be evidence-based, user-centered, and effective. It remains necessary to fully evaluate the effect of the DSM app on the DSM behavior changes of diabetes patients.
An information model to support user-centered design of medical devices.
Hagedorn, Thomas J; Krishnamurty, Sundar; Grosse, Ian R
2016-08-01
The process of engineering design requires the product development team to balance the needs and limitations of many stakeholders, including those of the user, regulatory organizations, and the designing institution. This is particularly true in medical device design, where additional consideration must be given to a much more complex user base that can only be accessed on a limited basis. Given this inherent challenge, few projects exist that simultaneously consider design domain concepts, such as aspects of a detailed design and a detailed view of various stakeholders and their capabilities, along with the user needs. In this paper, we present a novel information model approach that combines a detailed model of design elements with a model of the design itself, customer requirements, and the capabilities of the customers themselves. The information model is used to facilitate knowledge capture and automated reasoning across domains with a minimal set of rules by adopting a terminology that treats customer- and design-specific factors identically, thus enabling straightforward assessments. A uniqueness of this approach is that it systematically provides an integrated perspective on the key usability information that drives design decisions towards more universal or effective outcomes, alongside the very design information impacted by that usability information. This can lead to cost-efficient, optimal designs based on a direct inclusion of the needs of customers alongside those of business, marketing, and engineering requirements. Two case studies are presented to show the method's potential as a more effective knowledge management tool with built-in automated inferences that provide design insight, as well as its overall effectiveness as a platform to develop and execute medical device design from a holistic perspective.
Assessing the convergence of LHS Monte Carlo simulations of wastewater treatment models.
Benedetti, Lorenzo; Claeys, Filip; Nopens, Ingmar; Vanrolleghem, Peter A
2011-01-01
Monte Carlo (MC) simulation appears to be the only currently adopted tool to estimate global sensitivities and uncertainties in wastewater treatment modelling. Such models are highly complex, dynamic and non-linear, requiring long computation times, especially in the scope of MC simulation, due to the large number of simulations usually required. However, no stopping rule to decide on the number of simulations required to achieve a given confidence in the MC simulation results has been adopted so far in the field. In this work, a pragmatic method is proposed to minimize the computation time by using a combination of several criteria. It makes no use of prior knowledge about the model, is very simple, intuitive and can be automated: all convenient features in engineering applications. A case study is used to show an application of the method, and the results indicate that the required number of simulations strongly depends on the model output(s) selected, and on the type and desired accuracy of the analysis conducted. Hence, no prior indication is available regarding the necessary number of MC simulations, but the proposed method is capable of dealing with these variations and stopping the calculations after convergence is reached.
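A pragmatic version of such a stopping rule can be sketched as follows; the stand-in model, batch size, monitored statistics (mean and 95th percentile), and tolerance are illustrative assumptions, and the paper uses LHS rather than the plain random sampling shown here:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Stand-in for an expensive wastewater-treatment simulation output."""
    return np.sin(3.0 * x[:, 0]) + 0.1 * x[:, 1] ** 2

outputs, prev, tol = [], None, 0.01
for batch in range(1, 201):
    x = rng.uniform(0.0, 1.0, size=(50, 2))        # one batch of runs
    outputs.append(model(x))
    y = np.concatenate(outputs)
    stats = np.array([y.mean(), np.percentile(y, 95)])
    # Stop when both monitored statistics move less than 1% between batches.
    if prev is not None and np.all(np.abs(stats - prev) <= tol * np.abs(prev)):
        break
    prev = stats

print(f"converged after {batch} batches: mean={stats[0]:.3f}, p95={stats[1]:.3f}")
```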
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyung Lee; Rich Johnson, Ph.D.; Kimberlyn C. Moussesau
2011-12-01
The Nuclear Energy - Knowledge base for Advanced Modeling and Simulation (NE-KAMS) is being developed at the Idaho National Laboratory in conjunction with Bettis Laboratory, Sandia National Laboratories, Argonne National Laboratory, Oak Ridge National Laboratory, Utah State University and others. The objective of this consortium is to establish a comprehensive knowledge base to provide Verification and Validation (V&V), Uncertainty Quantification (UQ) and other resources for advanced modeling and simulation (M&S) in nuclear reactor design and analysis. NE-KAMS will become a valuable resource for the nuclear industry, the national laboratories, the U.S. NRC and the public to help ensure the safe operation of existing and future nuclear reactors. A survey and evaluation of the state-of-the-art of existing V&V and M&S databases, including Department of Energy and commercial databases, has been performed to ensure that the NE-KAMS effort will not duplicate existing resources and capabilities and to assess the scope of the effort required to develop and implement NE-KAMS. The survey and evaluation have indeed highlighted the unique set of value-added functionality and services that NE-KAMS will provide to its users. Additionally, the survey has helped develop a better understanding of the architecture and functionality of these data and knowledge bases, which can be used to leverage the development of NE-KAMS.
Automated Knowledge Discovery from Simulators
NASA Technical Reports Server (NTRS)
Burl, Michael C.; DeCoste, D.; Enke, B. L.; Mazzoni, D.; Merline, W. J.; Scharenbroich, L.
2006-01-01
In this paper, we explore one aspect of knowledge discovery from simulators, the landscape characterization problem, where the aim is to identify regions in the input/parameter/model space that lead to a particular output behavior. Large-scale numerical simulators are in widespread use by scientists and engineers across a range of government agencies, academia, and industry; in many cases, simulators provide the only means to examine processes that are infeasible or impossible to study otherwise. However, the cost of simulation studies can be quite high, both in terms of the time and computational resources required to conduct the trials and the manpower needed to sift through the resulting output. Thus, there is strong motivation to develop automated methods that enable more efficient knowledge extraction.
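One simple way to realize landscape characterization, sketched here under obvious assumptions (a cheap stand-in simulator, a binary notion of "interesting" output, and scikit-learn's decision tree as the region describer rather than the authors' method):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Sample the simulator's input/parameter space and label each run by
# whether it exhibits the output behavior of interest.
X = rng.uniform(0.0, 1.0, size=(500, 2))            # two input parameters
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)     # stand-in simulator

# A shallow tree yields human-readable descriptions of the regions.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["p1", "p2"]))
```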
NASA Astrophysics Data System (ADS)
Sirirojvisuth, Apinut
In complex aerospace system design, making an effective design decision requires multidisciplinary knowledge from both product and process perspectives. Integrating manufacturing considerations into the design process is most valuable during the early design stages, since designers have more freedom to integrate new ideas when changes are relatively inexpensive in terms of time and effort. Several metrics related to manufacturability are cost, time, and manufacturing readiness level (MRL). Yet there is a lack of structured methodology that quantifies how changes in design decisions impact these metrics. As a result, a new set of integrated cost analysis tools is proposed in this study to quantify the impacts. Equally important is the capability to integrate this new cost tool into existing design methodologies without sacrificing the agility and flexibility required during the early design phases. To demonstrate the applicability of this concept, a ModelCenter environment is used to develop a software architecture that represents the Integrated Product and Process Development (IPPD) methodology used in several aerospace system designs. The environment seamlessly integrates product and process analysis tools and makes an effective transition from one design phase to the next while retaining knowledge gained a priori. Then an advanced cost estimating tool called the Hybrid Lifecycle Cost Estimating Tool (HLCET), a hybrid combination of weight-, process-, and activity-based estimating techniques, is integrated with the design framework. A new weight-based lifecycle cost model is created based on Tailored Cost Model (TCM) equations [3]. This lifecycle cost tool estimates the program cost based on vehicle component weights and programmatic assumptions. Additional high-fidelity cost tools like process-based and activity-based cost analysis methods can be used to modify the baseline TCM result as more knowledge is accumulated over design iterations. Therefore, with this concept, the additional manufacturing knowledge can be used to identify a more accurate lifecycle cost and facilitate higher-fidelity tradeoffs during conceptual and preliminary design. The Advanced Composite Cost Estimating Model (ACCEM) is employed as a process-based cost component to replace the original TCM result for the composite part production cost. The reason for the replacement is that TCM estimates production costs from part weights as a result of subtractive manufacturing of metallic origin, such as casting, forging, and machining processes. A complexity factor can sometimes be adjusted to reflect different types of metal and machine settings. The TCM assumption, however, gives erroneous results when applied to additive processes like those of composite manufacturing. Another innovative aspect of this research is the introduction of a work measurement technique called the Maynard Operation Sequence Technique (MOST) to be used, similarly to the Activity-Based Costing (ABC) approach, to estimate the manufacturing time of a part by breaking down the operations that occur during its production. ABC allows a realistic determination of the cost incurred in each activity, as opposed to a traditional method of time estimation by analogy or by response surface equations from historical process data. The MOST concept provides a tailored study of an individual process typically required for a new, innovative design. Nevertheless, the MOST idea has some challenges, one of which is its requirement to build a new process from the ground up.
The process development requires a Subject Matter Expert (SME) in the manufacturing method of the particular design. The SME must also have a comprehensive understanding of the MOST system so that the correct parameters are chosen. In practice, these knowledge requirements may demand people from outside the design discipline and prior training in MOST. To relieve this constraint, this study includes an entirely new sub-system architecture that comprises 1) a knowledge-based system to provide the required knowledge during process selection; and 2) a new user interface to guide parameter selection when building the process using MOST. Also included in this study is a demonstration of how HLCET and its constituents can be integrated with Georgia Tech's Integrated Product and Process Development (IPPD) methodology. The applicability of this work is shown through a complex aerospace design example to gain insights into how manufacturing knowledge helps make better design decisions during the early stages. The setup process is explained, with an example of its utility demonstrated in a hypothetical fighter aircraft wing redesign. An evaluation of the system's effectiveness against existing methodologies concludes the thesis.
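The hybrid weight/process costing idea lends itself to a small illustration. The sketch below uses invented coefficients and component names (none are actual TCM or ACCEM values): a weight-based cost estimating relationship provides the baseline, and any component with a higher-fidelity process-based estimate overrides it.

```python
# Hypothetical sketch of a hybrid lifecycle cost estimate in the spirit of
# HLCET: weight-based CER baseline with process-based overrides.
def weight_based_cost(weight_lb, a=2500.0, b=0.95):
    """Generic weight-based cost estimating relationship: cost = a * W**b.
    Coefficients a and b are illustrative placeholders, not TCM values."""
    return a * weight_lb ** b

def hybrid_cost(component_weights, process_based_overrides):
    """Sum weight-based estimates, replacing components (e.g. composite parts)
    whose production cost has a higher-fidelity process-based estimate."""
    total = 0.0
    for name, weight in component_weights.items():
        if name in process_based_overrides:
            total += process_based_overrides[name]  # e.g. an ACCEM-style result
        else:
            total += weight_based_cost(weight)
    return total

# Example: composite wing skin estimated by process, the rest by weight.
weights = {"wing_skin": 1200.0, "fuselage": 5400.0, "empennage": 800.0}
overrides = {"wing_skin": 3.1e6}  # illustrative process-based cost, USD
print(f"Hybrid cost estimate: ${hybrid_cost(weights, overrides):,.0f}")
```

The override dictionary is the point where manufacturing knowledge accumulated over design iterations would flow into the baseline estimate.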
NASA Astrophysics Data System (ADS)
Nawani, Jigna; Rixius, Julia; Neuhaus, Birgit J.
2016-08-01
Empirical analysis of secondary biology classrooms revealed that, on average, 68% of teaching time in Germany revolved around processing tasks. Quality of instruction can thus be assessed by analyzing the quality of tasks used in classroom discourse. This quasi-experimental study analyzed how teachers used tasks in 38 videotaped biology lessons on the topic 'blood and circulatory system'. Tasks were analyzed along two fundamental characteristics: (1) the required cognitive level of processing (e.g., low-level information processing such as repetition, summarizing, defining, and classifying versus high-level information processing such as interpreting and analyzing data or formulating hypotheses) and (2) the complexity of task content (e.g., whether tasks require the use of factual, linking, or concept-level content). Additionally, students' cognitive knowledge structure about the topic 'blood and circulatory system' was measured using student-drawn concept maps (N = 970 students). Finally, linear multilevel models were created with high-level cognitive processing tasks and higher content complexity tasks as class-level predictors and students' prior knowledge, students' interest in biology, and students' interest in biology activities as control covariates. Results showed a positive influence of high-level cognitive processing tasks (β = 0.07; p < .01) on students' cognitive knowledge structure. However, there was no observed effect of higher content complexity tasks on students' cognitive knowledge structure. The presented findings encourage the use of high-level cognitive processing tasks in biology instruction.
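For readers unfamiliar with class-level predictors, the following sketch (synthetic data and variable names, not the study's) shows how a comparable two-level model, with students nested in classes, can be fitted with statsmodels.

```python
# Illustrative two-level model: students nested in classes, a class-level
# task measure as predictor, a student-level covariate as control.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_classes, n_students = 38, 25
classes = np.repeat(np.arange(n_classes), n_students)
high_tasks = rng.uniform(0, 1, n_classes)[classes]   # class-level predictor
prior = rng.normal(0, 1, classes.size)               # student covariate
score = 0.07 * high_tasks + 0.3 * prior + rng.normal(0, 1, classes.size)

df = pd.DataFrame({"score": score, "high_tasks": high_tasks,
                   "prior": prior, "class_id": classes})
# Random intercept per class captures class-level clustering.
fit = smf.mixedlm("score ~ high_tasks + prior", df, groups=df["class_id"]).fit()
print(fit.summary())
```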
Immunosuppression for in vivo research: state-of-the-art protocols and experimental approaches
Diehl, Rita; Ferrara, Fabienne; Müller, Claudia; Dreyer, Antje Y; McLeod, Damian D; Fricke, Stephan; Boltze, Johannes
2017-01-01
Almost every experimental treatment strategy using non-autologous cell, tissue or organ transplantation is tested in small and large animal models before clinical translation. Because these strategies require immunosuppression in most cases, immunosuppressive protocols are a key element in transplantation experiments. However, standard immunosuppressive protocols are often applied without detailed knowledge regarding their efficacy within the particular experimental setting and in the chosen model species. Optimization of such protocols is pertinent to the translation of experimental results to human patients and thus warrants further investigation. This review summarizes current knowledge regarding immunosuppressive drug classes as well as their dosages and application regimens with consideration of species-specific drug metabolization and side effects. It also summarizes contemporary knowledge of novel immunomodulatory strategies, such as the use of mesenchymal stem cells or antibodies. Thus, this review is intended to serve as a state-of-the-art compendium for researchers to refine applied experimental immunosuppression and immunomodulation strategies to enhance the predictive value of preclinical transplantation studies. PMID:27721455
NASA Astrophysics Data System (ADS)
Roy, Sayan
This research presents a real-time adaptive phase correction technique for flexible phased array antennas on conformal surfaces of variable shape. Previously reported pattern correction methods for flexible phased array antennas require prior knowledge of the possible non-planar shapes the array may assume in conformal applications. For the first time, this initial requirement of shape curvature knowledge is no longer needed; instead, instantaneous information on the relative location of the array elements is used to develop a geometrical model based on a set of Bezier curves. Specifically, by using an array of inclinometer sensors and an adaptive phase-correction algorithm, it is shown that the proposed geometrical model can successfully predict different conformal orientations of a 1-by-4 antenna array in real time without knowledge of the shape-changing characteristics of the surface the array is attached to. Moreover, the phase correction technique is validated by determining the field patterns and broadside gain of the 1-by-4 antenna array on four different conformal surfaces with multiple points of curvature. Throughout this work, measurements are shown to agree with the analytical solutions and full-wave simulations.
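A minimal sketch of the geometric idea follows, under assumed numbers and a simplified broadside-only correction; the dissertation's actual algorithm and sensor processing are not reproduced here.

```python
# Sketch: place array elements on a cubic Bezier curve (stand-in for a curve
# fitted from inclinometer data), then compute compensating phases so the
# broadside wavefront stays coherent despite surface bending.
import numpy as np

def bezier(t, p0, p1, p2, p3):
    """Cubic Bezier curve point for parameter t in [0, 1]."""
    return (1-t)**3*p0 + 3*(1-t)**2*t*p1 + 3*(1-t)*t**2*p2 + t**3*p3

def correction_phases(ts, ctrl, freq_hz=2.45e9):
    """Phase shifts (radians) cancelling each element's z-displacement
    relative to a planar reference, for a broadside (z-directed) beam."""
    c = 3e8
    k = 2*np.pi*freq_hz/c                         # free-space wavenumber
    pts = np.array([bezier(t, *ctrl) for t in ts])  # element xyz positions
    dz = pts[:, 2] - pts[:, 2].min()              # path-length error toward broadside
    return -k*dz                                  # advance the displaced elements

# 1-by-4 array on a gently bent surface (control points are illustrative).
ctrl = [np.array(p, float) for p in
        [(0, 0, 0), (0.03, 0, 0.01), (0.06, 0, 0.01), (0.09, 0, 0)]]
print(correction_phases(np.linspace(0, 1, 4), ctrl))
```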
Statistical Inference in the Learning of Novel Phonetic Categories
ERIC Educational Resources Information Center
Zhao, Yuan
2010-01-01
Learning a phonetic category (or any linguistic category) requires integrating different sources of information. A crucial unsolved problem for phonetic learning is how this integration occurs: how can we update our previous knowledge about a phonetic category as we hear new exemplars of the category? One model of learning is Bayesian Inference,…
ERIC Educational Resources Information Center
Kollasch, Aurelia Wiktoria
2012-01-01
Today large research projects require substantial involvement of researchers from different organizations, disciplines, or cultures working in groups or teams to accomplish a common goal of producing, sharing, and disseminating scientific knowledge. This study focuses on the international research team that was launched in response to pressing…
Online Language Teacher Education: TESOL Perspectives
ERIC Educational Resources Information Center
England, Liz, Ed.
2012-01-01
More and more, ESL/EFL teachers are required by their employers to obtain a Master's degree in TESOL. Thousands of ESL/EFL teachers are acquiring professional skills and knowledge through online and distance education instructional models. Filling a growing need and making an important contribution, this book is a forerunner in addressing some of…
Neural Representations of Location Outside the Hippocampus
ERIC Educational Resources Information Center
Knierim, James J.
2006-01-01
Place cells of the rat hippocampus are a dominant model system for understanding the role of the hippocampus in learning and memory at the level of single-unit and neural ensemble responses. A complete understanding of the information processing and computations performed by the hippocampus requires detailed knowledge about the properties of the…
Teaching Business Ethics: A Quandary for Accounting Educators
ERIC Educational Resources Information Center
Frank, Gary; Ofobike, Emeka; Gradisher, Suzanne
2010-01-01
The authors discuss the pressures that accounting educators face in meeting expectations to include ethics in the accounting curriculum. Most schools still do not require discrete ethics courses for accounting students; ethics coverage is on a course-by-course basis. However, not all professors are equally comfortable or knowledgeable of models of…
ERIC Educational Resources Information Center
Evans, Barbara; Honour, Leslie
1997-01-01
Reports on a study that required student teachers training in business education to produce open learning materials on intercultural communication. Analysis of stages and responses to this assignment revealed a distinction between "deep" and "surface" learning. Includes charts delineating the characteristics of these two types…
Ontological Relations and the Capability Maturity Model Applied in Academia
ERIC Educational Resources Information Center
de Oliveira, Jerônimo Moreira; Campoy, Laura Gómez; Vilarino, Lilian
2015-01-01
This work presents a new approach to the discovery, identification and connection of ontological elements within the domain of characterization in learning organizations. In particular, the study can be applied to contexts where organizations require planning, logic, balance, and cognition in knowledge creation scenarios, which is the case for the…
Conveying Clinical Reasoning Based on Visual Observation via Eye-Movement Modelling Examples
ERIC Educational Resources Information Center
Jarodzka, Halszka; Balslev, Thomas; Holmqvist, Kenneth; Nystrom, Marcus; Scheiter, Katharina; Gerjets, Peter; Eika, Berit
2012-01-01
Complex perceptual tasks, like clinical reasoning based on visual observations of patients, require not only conceptual knowledge about diagnostic classes but also the skills to visually search for symptoms and interpret these observations. However, medical education so far has focused very little on how visual observation skills can be…
Skill Sets Required for Environmental Engineering and Where They Are Learned
ERIC Educational Resources Information Center
Reed, Kathaleen
2010-01-01
The purpose of this study was to investigate the knowledge, skills, abilities and traits environmental engineers need. Two questions were asked: what skills are considered important, and where are they learned? Dreyfus and Dreyfus' novice-to-expert model, which describes a progressive, five-step process of skill development that occurs over time…
Analysis of the impact of sources on indoor pollutant concentrations and occupant exposure to indoor pollutants requires knowledge of the emission rates from the sources. Emission rates are often determined by chamber testing and the data from the chamber test are fitted to an em...
An Integrated Model for the Adoption of Information Technologies in U.S. Colleges and Universities
ERIC Educational Resources Information Center
Garcia Molina, Pablo
2013-01-01
This thesis fulfills the requirements of a Doctor of Liberal Studies degree at Georgetown University. It advances our knowledge of the rationale and mechanisms surrounding the spread, adoption and abandonment of information and communication technologies in tertiary education institutions in the United States. This interdisciplinary thesis…
The Research of the Personality Qualities of Future Educational Psychologists
ERIC Educational Resources Information Center
Dolgova, V. I.; Salamatov, A. A.; Potapova, M. V.; Yakovleva, N. O.
2016-01-01
In this article, the authors substantiate the existence of the personality qualities of future educational psychologists (PQFEP), which are, in fact, a sum of knowledge, skills, abilities, and socially required personality qualities that allow the psychologist to solve problems in all fields of professional activity. A model of PQFEP predicts the…
Rebecca G. Peak; Frank R. Thompson III
2014-01-01
Knowledge of the demography and habitat requirements of the endangered Golden-cheeked Warbler (Setophaga chrysoparia) is needed for its recovery, including measures of productivity instead of reproductive indices. We report on breeding phenology and demography, calculate model-based estimates of nest survival and seasonal productivity and evaluate...
ERIC Educational Resources Information Center
Doherty-Restrepo, Jennifer L.; Hughes, Brian J.; Del Rossi, Gianluca; Pitney, William A.
2009-01-01
Objective: Although continuing education is required for athletic trainers (AT) to maintain their Board of Certification credential, little is known regarding its efficacy for advancing knowledge and improving patient care. Continuing professional education (CPE) is designed to provide professionals with important practical learning opportunities.…
Curriculum Studies in Initial Teacher Education: The Importance of Holism and Project 2061
ERIC Educational Resources Information Center
Clark, John
2005-01-01
Initial teacher education programmes, in order to comply with the requirements for teacher registration, are usually expected to introduce student teachers to the mandated curriculum. Often this is done uncritically, so students tend to accept rather than examine the underlying epistemological model which partitions knowledge into distinct…
Teaching Future Teachers: A Model Workshop for Doctoral Education
ERIC Educational Resources Information Center
Pryce, Julia M.; Ainbinder, Alisa; Werner-Lin, Allison V.; Browne, Teri A.; Smithgall, Cheryl
2011-01-01
Doctoral student training has become focused in recent years on acquiring subject-area knowledge and research skills, rather than on teaching. This shift often leaves aspiring junior faculty feeling unprepared to address the demanding pedagogical requirements of the professoriate. In the area of social work, few programs contain a structured,…
Toward New Data and Information Management Solutions for Data-Intensive Ecological Research
ERIC Educational Resources Information Center
Laney, Christine Marie
2013-01-01
Ecosystem health is deteriorating in many parts of the world due to direct and indirect anthropogenic pressures. Generating accurate, useful, and impactful models of past, current, and future states of ecosystem structure and function is a complex endeavor that often requires vast amounts of data from multiple sources and knowledge from…
Heterosexual Anal Intercourse: A Neglected Risk Factor for HIV?
Baggaley, Rebecca F.; Dimitrov, Dobromir; Owen, Branwen N.; Pickles, Michael; Butler, Ailsa R.; Masse, Ben; Boily, Marie-Claude
2014-01-01
Heterosexual anal intercourse confers a much greater risk of HIV transmission than vaginal intercourse, yet its contribution to heterosexual HIV epidemics has been under-researched. In this article we review the current state of knowledge of heterosexual anal intercourse practice worldwide and identify the information required to assess its role in HIV transmission within heterosexual populations, including the input measures required to inform mathematical models. We then discuss the evidence relating anal intercourse and HIV with sexual violence. PMID:23279040
Autophagy in Drosophila: From Historical Studies to Current Knowledge
Mulakkal, Nitha C.; Nagy, Peter; Takats, Szabolcs; Tusco, Radu; Juhász, Gábor; Nezis, Ioannis P.
2014-01-01
The discovery of evolutionarily conserved Atg genes required for autophagy in yeast truly revolutionized this research field and made it possible to carry out functional studies on model organisms. Insects including Drosophila are classical and still popular models to study autophagy, starting from the 1960s. This review aims to summarize past achievements and our current knowledge about the role and regulation of autophagy in Drosophila, with an outlook to yeast and mammals. The basic mechanisms of autophagy in fruit fly cells appear to be quite similar to those of other eukaryotes, and the role that this lysosomal self-degradation process plays in Drosophila models of various diseases has already made it possible to recognize certain aspects of human pathologies. Future studies in this complete animal hold great promise for the better understanding of such processes and may also help find new research avenues for the treatment of disorders with misregulated autophagy. PMID:24949430
TARGET: Rapid Capture of Process Knowledge
NASA Technical Reports Server (NTRS)
Ortiz, C. J.; Ly, H. V.; Saito, T.; Loftin, R. B.
1993-01-01
TARGET (Task Analysis/Rule Generation Tool) represents a new breed of tool that blends graphical process flow modeling capabilities with the function of a top-down reporting facility. Since NASA personnel frequently perform tasks that are primarily procedural in nature, TARGET models mission or task procedures and generates hierarchical reports as part of the process capture and analysis effort. Historically, capturing knowledge has proven to be one of the greatest barriers to the development of intelligent systems. Current practice generally requires lengthy interactions between the expert whose knowledge is to be captured and the knowledge engineer whose responsibility is to acquire and represent the expert's knowledge in a useful form. Although much research has been devoted to the development of methodologies and computer software to aid in the capture and representation of some types of knowledge, procedural knowledge has received relatively little attention. In essence, TARGET is one of the first tools of its kind, commercial or institutional, that is designed to support this type of knowledge capture undertaking. This paper will describe the design and development of TARGET for the acquisition and representation of procedural knowledge. The strategies employed by TARGET to support use by knowledge engineers, subject matter experts, programmers and managers will be discussed. This discussion includes the method by which the tool employs its graphical user interface to generate a task hierarchy report. Next, the approach used to generate production rules for incorporation into a CLIPS-based expert system will be elaborated. TARGET also permits experts to visually describe procedural tasks, providing a common medium for knowledge refinement by the expert community and the knowledge engineer and making knowledge consensus possible. The paper briefly touches on the verification and validation issues facing the CLIPS rule generation aspects of TARGET. A description of efforts to address TARGET's interoperability on PCs, Macintoshes and UNIX workstations concludes the paper.
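As an illustration of the kind of translation such a tool performs, the hypothetical sketch below emits CLIPS-style production rules from a toy task hierarchy. The task names and rule format are invented for illustration and are not TARGET's actual output.

```python
# Toy generator: procedural steps, each with a precondition and an effect,
# rendered as CLIPS defrules that chain states forward.
tasks = {
    "verify-power": {"precond": "start", "effect": "power-ok"},
    "load-software": {"precond": "power-ok", "effect": "software-loaded"},
}

def to_clips(name, spec):
    """Render one procedural step as a CLIPS rule."""
    return (f"(defrule {name}\n"
            f"   (state {spec['precond']})\n"
            f"   =>\n"
            f"   (assert (state {spec['effect']})))")

for name, spec in tasks.items():
    print(to_clips(name, spec), "\n")
```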
Novel methodologies in marine fish larval nutrition.
Conceição, Luis E C; Aragão, Cláudia; Richard, Nadège; Engrola, Sofia; Gavaia, Paulo; Mira, Sara; Dias, Jorge
2010-03-01
Major gaps in knowledge of fish larval nutritional requirements remain. Small larval size and difficulties in the acceptance of inert microdiets make progress slow and cumbersome. This lack of knowledge of fish larval nutritional requirements is one of the causes of the high mortalities and quality problems commonly observed in marine larviculture. In recent years, several novel methodologies have contributed to significant progress in fish larval nutrition. Others are emerging and are likely to bring further insight into larval nutritional physiology and requirements. This paper reviews a range of new tools and some examples of their present use, as well as potential future applications in the study of fish larval nutrition. Tube-feeding and incorporation into Artemia of ¹⁴C-labelled amino acids and lipids allowed the study of Artemia intake, digestion, absorption and utilisation of these nutrients. Diet selection by fish larvae has been studied with diets containing different natural stable isotope signatures or diets to which different rare metal oxides were added. Mechanistic modelling has been used as a tool to integrate existing knowledge and reveal gaps, and also to better understand results obtained in tracer studies. Population genomics may assist in assessing genotype effects on nutritional requirements, by using progeny testing in fish reared in the same tanks, and also in identifying QTLs for larval stages. Functional genomics and proteomics enable the study of gene and protein expression under various dietary conditions, and thereby identify the metabolic pathways that are affected by a given nutrient. Promising results were obtained using the metabolic programming concept in early life to facilitate the utilisation of certain nutrients at later stages. Altogether, these methodologies have made decisive contributions, and are expected to do even more in the near future, toward building a knowledge basis for the development of optimised diets and feeding regimes for different species of larval fish.
Gauterin, Eckhard; Kammerer, Philipp; Kühn, Martin; Schulte, Horst
2016-05-01
Advanced model-based control of wind turbines requires knowledge of the states and the wind speed. This paper benchmarks a nonlinear Takagi-Sugeno observer for wind speed estimation against enhanced Kalman Filter techniques: the performance and the robustness towards model-structure uncertainties of the Takagi-Sugeno observer and of Linear, Extended and Unscented Kalman Filters are assessed. The Takagi-Sugeno observer and the enhanced Kalman Filter techniques are compared on reduced-order models of a reference wind turbine with different modelling details. The objective is a systematic comparison under different design assumptions and requirements, and a numerical evaluation of the reconstruction quality of the wind speed. Exemplified by a feedforward loop employing the reconstructed wind speed, the benefit of wind speed estimation within wind turbine control is illustrated.
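To make the estimation task concrete, here is a minimal linear Kalman filter sketch that treats the unmeasured wind speed as a random-walk state driving a toy drivetrain model. All dynamics and noise figures are invented, and none of the benchmarked observers is reproduced.

```python
# Estimate an unmeasured disturbance (wind speed) from rotor-speed
# measurements with a linear Kalman filter on an augmented state.
import numpy as np

A = np.array([[0.98, 0.05],   # rotor speed responds to wind (assumed gains)
              [0.0,  1.0 ]])  # wind speed modelled as a random walk
H = np.array([[1.0, 0.0]])    # only rotor speed is measured
Q = np.diag([1e-6, 1e-3])     # wind varies faster than the drivetrain
R = np.array([[1e-4]])

rng = np.random.default_rng(0)
x_true = np.array([1.0, 8.0])              # true rotor speed, wind speed
x_est, P = np.array([1.0, 5.0]), np.eye(2)  # deliberately wrong wind guess

for _ in range(500):
    x_true = A @ x_true + rng.multivariate_normal([0.0, 0.0], Q)
    z = H @ x_true + rng.normal(0.0, R[0, 0]**0.5, 1)   # noisy measurement
    x_est = A @ x_est                                   # predict
    P = A @ P @ A.T + Q
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
    x_est = x_est + K @ (z - H @ x_est)                 # update
    P = (np.eye(2) - K @ H) @ P

print(f"estimated wind speed: {x_est[1]:.2f} m/s (true {x_true[1]:.2f})")
```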
Supersonic projectile models for asynchronous shooter localization
NASA Astrophysics Data System (ADS)
Kozick, Richard J.; Whipps, Gene T.; Ash, Joshua N.
2011-06-01
In this work we consider the localization of a gunshot using a distributed sensor network measuring time differences of arrival between a firearm's muzzle blast and the shockwave induced by a supersonic bullet. This so-called MB-SW approach is desirable because time synchronization is not required between the sensors; however, it suffers from increased computational complexity and requires knowledge of the bullet's velocity at all points along its trajectory. While the actual velocity profile of a particular gunshot is unknown, one may use a parameterized model for the velocity profile and simultaneously fit the model and localize the shooter. In this paper we study efficient solutions for the localization problem and identify deceleration models that trade off localization accuracy and computational complexity. We also develop a statistical analysis that includes bias due to mismatch between the true and assumed deceleration models and covariance due to additive noise.
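The forward model underlying the MB-SW time difference can be sketched compactly. The code below assumes a straight-line trajectory and linear deceleration (the paper's candidate models differ) and exploits the envelope property that the shockwave arrival time equals the earliest acoustic arrival over all supersonic emission points on the bullet path.

```python
# Forward model sketch for one sensor (assumed geometry, not the paper's).
import numpy as np

C = 343.0  # speed of sound, m/s

def shockwave_arrival(shooter, u, v0, a, sensor, t_max=1.0, n=4000):
    """Earliest acoustic arrival over all supersonic emission points on the
    bullet path; this envelope property gives the shockwave arrival time."""
    t = np.linspace(0.0, t_max, n)
    t = t[v0 - a*t >= C]                 # shock exists only while supersonic
    s = v0*t - 0.5*a*t**2                # distance travelled along heading u
    pts = shooter + np.outer(s, u)       # candidate emission points
    return np.min(t + np.linalg.norm(pts - sensor, axis=1)/C)

def mb_sw_time_difference(shooter, u, v0, a, sensor):
    """Shockwave-minus-muzzle-blast time difference at one sensor."""
    t_mb = np.linalg.norm(sensor - shooter)/C
    return shockwave_arrival(shooter, u, v0, a, sensor) - t_mb

shooter, u = np.array([0.0, 0.0]), np.array([1.0, 0.0])
sensor = np.array([200.0, 30.0])
print(mb_sw_time_difference(shooter, u, v0=900.0, a=400.0, sensor=sensor))
```

In a full localizer, this time difference would serve as the residual in a nonlinear least-squares fit over shooter position, heading, and deceleration parameters.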
Homo Ignorans: Deliberately Choosing Not to Know.
Hertwig, Ralph; Engel, Christoph
2016-05-01
Western history of thought abounds with claims that knowledge is valued and sought. Yet people often choose not to know. We call the conscious choice not to seek or use knowledge (or information) deliberate ignorance. Using examples from a wide range of domains, we demonstrate that deliberate ignorance has important functions. We systematize types of deliberate ignorance, describe their functions, discuss their normative desirability, and consider how they can be modeled. To date, psychologists have paid relatively little attention to the study of ignorance, let alone the deliberate kind. Yet the desire not to know is no anomaly. It is a choice to seek rather than reduce uncertainty, whose reasons require nuanced cognitive and economic theories and whose consequences, for the individual and for society, require analyses of both actor and environment.
14 CFR 121.919 - Certification.
Code of Federal Regulations, 2013 CFR
2013-01-01
... shows competence in required technical knowledge and skills (e.g., piloting or other) and crew resource management (e.g., CRM or DRM) knowledge and skills in scenarios (i.e., LOE) that test both types of knowledge... evaluation of required knowledge and skills under the AQP must meet minimum certification and rating criteria...
14 CFR 121.919 - Certification.
Code of Federal Regulations, 2014 CFR
2014-01-01
... shows competence in required technical knowledge and skills (e.g., piloting or other) and crew resource management (e.g., CRM or DRM) knowledge and skills in scenarios (i.e., LOE) that test both types of knowledge... evaluation of required knowledge and skills under the AQP must meet minimum certification and rating criteria...
14 CFR 121.919 - Certification.
Code of Federal Regulations, 2012 CFR
2012-01-01
... shows competence in required technical knowledge and skills (e.g., piloting or other) and crew resource management (e.g., CRM or DRM) knowledge and skills in scenarios (i.e., LOE) that test both types of knowledge... evaluation of required knowledge and skills under the AQP must meet minimum certification and rating criteria...
What Math Knowledge Does Teaching Require?
ERIC Educational Resources Information Center
Thames, Mark Hoover; Ball, Deborah Loewenberg
2010-01-01
No one would argue with the claim that teaching mathematics requires mathematics knowledge. However, a clear description of such knowledge needed for teaching has been surprisingly elusive. To differentiate teachers' levels of mathematical knowledge, numerous studies have examined whether a teacher has a certification in math or a degree as well…
Chaouiya, Claudine; Keating, Sarah M; Berenguier, Duncan; Naldi, Aurélien; Thieffry, Denis; van Iersel, Martijn P; Le Novère, Nicolas; Helikar, Tomáš
2015-09-04
Quantitative methods for modelling biological networks require an in-depth knowledge of the biochemical reactions and their stoichiometric and kinetic parameters. In many practical cases, this knowledge is missing. This has led to the development of several qualitative modelling methods using information such as gene expression data coming from functional genomic experiments. The SBML Level 3 Version 1 Core specification does not provide a mechanism for explicitly encoding qualitative models, but it does provide a mechanism for SBML packages to extend the Core specification and add additional syntactical constructs. The SBML Qualitative Models package for SBML Level 3 adds features so that qualitative models can be directly and explicitly encoded. The approach taken in this package is essentially based on the definition of regulatory or influence graphs. The SBML Qualitative Models package defines the structure and syntax necessary to describe qualitative models that associate discrete levels of activities with entity pools, and the transitions between states that describe the processes involved. This is particularly suited to logical models (Boolean or multi-valued), and some classes of Petri net models can be encoded with the approach.
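The modelling style targeted by the qualitative package can be illustrated with a toy synchronous Boolean network. The sketch shows the semantics of such a model, not the SBML serialization itself, and the gene names are invented.

```python
# Toy logical (Boolean) model of the kind SBML-qual is designed to encode:
# each entity's next discrete level is a function of the current levels.
rules = {
    "geneA": lambda s: s["signal"],                    # activated by an input
    "geneB": lambda s: s["geneA"] and not s["geneC"],  # activation + inhibition
    "geneC": lambda s: s["geneB"],
}

def step(state):
    """Synchronous update: compute every entity's next level at once."""
    nxt = dict(state)
    nxt.update({k: int(f(state)) for k, f in rules.items()})
    return nxt

state = {"signal": 1, "geneA": 0, "geneB": 0, "geneC": 0}
for _ in range(4):
    state = step(state)
    print(state)
```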
Hill, Mary C.; Faunt, Claudia C.; Belcher, Wayne; Sweetkind, Donald; Tiedeman, Claire; Kavetski, Dmitri
2013-01-01
This work demonstrates how available knowledge can be used to build more transparent and refutable computer models of groundwater systems. The Death Valley regional groundwater flow system, which surrounds a proposed site for a high-level nuclear waste repository of the United States of America and the Nevada National Security Site (NNSS), where nuclear weapons were tested, is used to explore model adequacy, identify parameters important to (and informed by) observations, and identify existing and potential new observations important to predictions. Model development is pursued using a set of fundamental questions addressed with carefully designed metrics. Critical methods include using a hydrogeologic model, managing model nonlinearity by designing models that are robust while maintaining realism, using error-based weighting to combine disparate types of data, and identifying important and unimportant parameters and observations and optimizing parameter values with computationally frugal schemes. The frugal schemes employed in this study require relatively few (tens to thousands of) parallelizable model runs. This is beneficial because models able to approximate the complex site geology defensibly tend to have high computational cost. The issue of model defensibility is particularly important given the contentious political issues involved.
Sokolova, Ekaterina; Petterson, Susan R; Dienus, Olaf; Nyström, Fredrik; Lindgren, Per-Eric; Pettersson, Thomas J R
2015-09-01
Norovirus contamination of drinking water sources is an important cause of waterborne disease outbreaks. Knowledge of pathogen concentrations in source water is needed to assess the ability of a drinking water treatment plant (DWTP) to provide safe drinking water. However, pathogen enumeration in source water samples is often not sufficient to describe the source water quality. In this study, the norovirus concentrations were characterised at the contamination source, i.e. in sewage discharges. Then, the transport of norovirus within the water source (the river Göta älv in Sweden) under different loading conditions was simulated using a hydrodynamic model. Based on the estimated concentrations in source water, the required reduction of norovirus at the DWTP was calculated using quantitative microbial risk assessment (QMRA). The required reduction was compared with the estimated treatment performance at the DWTP. The average estimated concentration in source water varied between 4.8×10² and 7.5×10³ genome equivalents L⁻¹, and the average required reduction by treatment was between 7.6 and 8.8 Log10. The treatment performance at the DWTP was estimated to be adequate under all tested loading conditions, but was heavily dependent on chlorine disinfection, with a risk of poor reduction by conventional treatment and slow sand filtration. To our knowledge, this is the first article to employ discharge-based QMRA, combined with hydrodynamic modelling, in the context of drinking water.
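The "required reduction" logic of such a QMRA can be sketched in a few lines. All parameter values below are illustrative placeholders, not those used in the study, and the exponential dose-response is just one common single-hit choice.

```python
# Back-of-envelope sketch of discharge-based QMRA "required reduction".
import numpy as np

c_source = 7.5e3      # norovirus genome equivalents per litre in raw water
consumption = 1.0     # litres of unboiled drinking water per person per day
p_target = 1e-4       # tolerable annual infection risk per person
r = 0.2               # illustrative single-hit dose-response parameter

# Exponential dose-response P_inf = 1 - exp(-r * dose), ~365 exposures/year.
p_daily = 1 - (1 - p_target)**(1 / 365)
dose_max = -np.log(1 - p_daily) / r         # tolerable daily dose
c_treated_max = dose_max / consumption      # tolerable finished-water concentration
log_reduction = np.log10(c_source / c_treated_max)
print(f"required treatment performance: {log_reduction:.1f} Log10")
```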
Ecological issues related to ozone: agricultural issues.
Fuhrer, Jürg; Booker, Fitzgerald
2003-06-01
Research on the effects of ozone on agricultural crops and agro-ecosystems is needed for the development of regional emission reduction strategies, to underpin practical recommendations aiming to increase the sustainability of agricultural land management in a changing environment, and to secure food supply in regions with rapidly growing populations. Major limitations in current knowledge exist in several areas: (1) Modelling of ozone transfer, and specifically of stomatal ozone uptake under variable environmental conditions, lacks robust and well-validated dynamic models that can be linked to large-scale photochemical models. (2) The processes involved in the initial reactions of ozone with extracellular and cellular components after entry through the stomata, and the identification of key chemical species and their role in detoxification, require additional study. (3) Scaling the effects from the level of individual cells to the whole plant requires, for instance, a better understanding of the effects of ozone on carbon transport within the plant. (4) The implications of long-term ozone effects for community and whole-ecosystem level processes, with an emphasis on crop quality, element cycling and carbon sequestration, and the biodiversity of pastures and rangelands, require renewed efforts. The UNECE Convention on Long-Range Transboundary Air Pollution shows, for example, that policy decisions may require the use of integrated assessment models. These models depend on quantitative exposure-response information to link quantitative effects at each level of organization to an effective ozone dose (i.e., the balance between the rate of ozone uptake by the foliage and the rate of ozone detoxification). In order to be effective in a policy or technological context, results from future research must be funnelled into an appropriate knowledge transfer scheme. This requires data synthesis, up-scaling, and spatial aggregation. At the research level, interactions must be considered between the effects of ozone and factors that are either directly manipulated by man through crop management or indirectly changed. The latter include elevated atmospheric CO₂, particulate matter, other pollutants such as nitrogen oxides, UV-B radiation, climate, and associated soil moisture conditions.
Graphical explanation in an expert system for Space Station Freedom rack integration
NASA Technical Reports Server (NTRS)
Craig, F. G.; Cutts, D. E.; Fennel, T. R.; Purves, B.
1990-01-01
The rationale and methodology used to incorporate graphics into explanations provided by an expert system for Space Station Freedom rack integration are examined. The rack integration task is typical of a class of constraint satisfaction problems for large programs where expertise from several areas is required. Graphically oriented approaches are used to explain the conclusions made by the system, the knowledge base content, and, at more abstract levels, the control strategies employed by the system. The implemented architecture combines hypermedia and inference engine capabilities. The advantages of this architecture include: closer integration of the user interface, explanation system, and knowledge base; the ability to embed links to deeper knowledge underlying the compiled knowledge used in the knowledge base; and more direct control of explanation depth and duration by the user. The graphical techniques employed range from simple static presentation of schematics to dynamic creation of a series of pictures presented in motion-picture style. User models control the type, amount, and order of information presented.
14 CFR 65.75 - Knowledge requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... CERTIFICATION: AIRMEN OTHER THAN FLIGHT CREWMEMBERS Mechanics § 65.75 Knowledge requirements. (a) Each applicant for a mechanic certificate or rating must, after meeting the applicable experience requirements of...
14 CFR 65.75 - Knowledge requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... CERTIFICATION: AIRMEN OTHER THAN FLIGHT CREWMEMBERS Mechanics § 65.75 Knowledge requirements. (a) Each applicant for a mechanic certificate or rating must, after meeting the applicable experience requirements of...
14 CFR 65.75 - Knowledge requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... CERTIFICATION: AIRMEN OTHER THAN FLIGHT CREWMEMBERS Mechanics § 65.75 Knowledge requirements. (a) Each applicant for a mechanic certificate or rating must, after meeting the applicable experience requirements of...
Tetroe, Jacqueline M; Graham, Ian D; Scott, Vicky
2011-12-01
The concept of knowledge translation as defined by the Canadian Institutes for Health Research, and the Knowledge to Action Cycle described by Graham et al. (2006), are used to make a case for the importance of using a conceptual model to describe moving knowledge into action in the area of falls prevention. There is a large body of research in the area of falls prevention. It would seem that in many areas it is clear what is needed to prevent falls, and further syntheses can determine where the evidence is sufficiently robust to warrant implementation as well as where the gaps are that require further basic research. The phases of the action cycle highlight seven areas that should be attended to in order to maximize the chances of successful implementation.
So what exactly is nursing knowledge?
Clarke, L
2011-06-01
This paper aims to present a discussion of intrinsic nursing knowledge. The paper stems from the author's study of knowledge claims enshrined in nursing journal articles, books and conference speeches. It is argued that claims by academic nurses have largely depended on principles drawn from continental rather than Analytic (British-American) philosophy. Thus, claims are credible only insofar as they defer to propositional logic. This is problematic inasmuch as nursing is a practice-based activity usually carried out in medical settings. Transpersonal nursing models are particularly open to criticism in respect of their unworldly character, as are concepts based on shallow usages of physics or mathematics. I argue that sensible measurements of the 'real world' are possible, without endorsing positivism, and that nursing requires little recourse to logically unsustainable claims. The paper concludes with an analysis of a recent review of nursing knowledge, which indicates the circularity that attends many discussions on the topic.
Lexical is as lexical does: computational approaches to lexical representation
Woollams, Anna M.
2015-01-01
In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct actually denotes. Within current computational models of word recognition, there are a number of different approaches to the representation of lexical knowledge. Structural lexical representations, found in original theories of word recognition, have been instantiated in modern localist models. However, such a representational scheme lacks neural plausibility in terms of economy and flexibility. Connectionist models have therefore adopted distributed representations of form and meaning. Semantic representations in connectionist models necessarily encode lexical knowledge. Yet when equipped with recurrent connections, connectionist models can also develop attractors for familiar forms that function as lexical representations. Current behavioural, neuropsychological and neuroimaging evidence shows a clear role for semantic information, but also suggests some modality- and task-specific lexical representations. A variety of connectionist architectures could implement these distributed functional representations, and further experimental and simulation work is required to discriminate between these alternatives. Future conceptualisations of lexical representations will therefore emerge from a synergy between modelling and neuroscience. PMID:25893204
Rapid performance modeling and parameter regression of geodynamic models
NASA Astrophysics Data System (ADS)
Brown, J.; Duplyakin, D.
2016-12-01
Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and on scientifically relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian Process Regression to automatically select experiments to map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
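A minimal active-learning loop in the spirit described above might look as follows. The objective function, candidate space, and kernel choices are stand-ins for illustration, not the authors' framework.

```python
# Fit a GP to observed run times, then repeatedly query the candidate
# configuration with the largest predictive uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def run_experiment(x):
    """Stand-in for launching a geodynamics run; returns a fake runtime."""
    return float(5 + 3 * np.sin(2 * x[0]) + 0.5 * x[1] ** 2)

candidates = np.random.default_rng(1).uniform(0, 3, size=(200, 2))
X = candidates[:5].tolist()                  # small initial design
y = [run_experiment(x) for x in candidates[:5]]

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
for _ in range(15):
    gp.fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(sigma)]       # most uncertain configuration
    X.append(nxt.tolist())
    y.append(run_experiment(nxt))

print("final max predictive std:", gp.predict(candidates, return_std=True)[1].max())
```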
Using archetypes for defining CDA templates.
Moner, David; Moreno, Alberto; Maldonado, José A; Robles, Montserrat; Parra, Carlos
2012-01-01
While HL7 CDA is a widely adopted standard for the documentation of clinical information, the archetype approach proposed by CEN/ISO 13606 and openEHR is gaining recognition as a means of describing domain models and medical knowledge. This paper describes our efforts in combining both standards. Using archetypes as an alternative for defining CDA templates permits new possibilities, all based on the formal nature of archetypes and their ability to merge into the same artifact both medical knowledge and the technical requirements for semantic interoperability of electronic health records. We describe the process followed for the normalization of existing legacy data in a hospital environment, from the importation of the HL7 CDA model into an archetype editor, through the definition of CDA archetypes, to the application of those archetypes to obtain normalized CDA data instances.
Inductive System Health Monitoring
NASA Technical Reports Server (NTRS)
Iverson, David L.
2004-01-01
The Inductive Monitoring System (IMS) software was developed to provide a technique to automatically produce health monitoring knowledge bases for systems that are either difficult to model (simulate) with a computer or that require computer models too complex to use for real-time monitoring. IMS uses nominal data sets, collected either directly from the system or from simulations, to build a knowledge base that can be used to detect anomalous behavior in the system. Machine learning and data mining techniques are used to characterize typical system behavior by extracting general classes of nominal data from archived data sets. IMS is able to monitor the system by comparing real-time operational data with these classes. We present a description of the learning and monitoring method used by IMS and summarize some recent IMS results.
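One simple way to realize the learn-nominal-classes-then-compare idea is a clustering envelope; IMS's actual clustering and distance definitions are not reproduced here, so treat this as a generic sketch.

```python
# Learn clusters of nominal sensor vectors, then flag live vectors that
# fall outside every learned cluster envelope.
import numpy as np
from sklearn.cluster import KMeans

nominal = np.random.default_rng(0).normal(0, 1, size=(1000, 4))  # archived data
km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(nominal)

# Per-cluster radius: a slightly inflated envelope around typical behaviour.
d = np.linalg.norm(nominal - km.cluster_centers_[km.labels_], axis=1)
radius = np.array([d[km.labels_ == i].max() for i in range(8)]) * 1.1

def anomalous(x):
    """True if x lies outside every learned nominal class."""
    dist = np.linalg.norm(km.cluster_centers_ - x, axis=1)
    return bool((dist > radius).all())

print(anomalous(np.zeros(4)), anomalous(np.full(4, 8.0)))  # expect False, True
```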
Development of a Knowledge Base for Enduser Consultation of AAL-Systems.
Röll, Natalie; Stork, Wilhelm; Rosales, Bruno; Stephan, René; Knaup, Petra
2016-01-01
Manufacturer information, user experiences and the product availability of assistive living technologies are usually not known to citizens or consultation centers. The differing levels of knowledge concerning the availability of technology show the need for building up a knowledge base. The aim of this contribution is the definition of requirements for the development of knowledge bases for AAL consultations. The major requirements, such as a maintainable and easy-to-use structure, were implemented in a web-based knowledge base, which went into production and was used in ~3700 consulting interviews at municipal technology information centers. Within this field phase, the implementation of the requirements for a knowledge base in the field of AAL consulting was evaluated and further developed.
Intelligent Command and Control Systems for Satellite Ground Operations
NASA Technical Reports Server (NTRS)
Mitchell, Christine M.
1999-01-01
This grant, Intelligent Command and Control Systems for Satellite Ground Operations, funded by NASA Goddard Space Flight Center, has spanned almost a decade. During this time, it has supported a broad range of research addressing the changing needs of NASA operations. It is important to note that many of NASA's evolving needs, for example, the use of automation to drastically reduce (e.g., by 70%) operations costs, mirror requirements in both the government and private sectors. Initially the research addressed the appropriate use of emerging and inexpensive computational technologies, such as X Windows, graphics, and color, together with COTS (commercial-off-the-shelf) hardware and software such as standard Unix workstations, to re-engineer satellite operations centers. The first phase of research supported by this grant explored the development of principled design methodologies to make effective use of emerging and inexpensive technologies. The ultimate performance measures for new designs were whether or not they increased system effectiveness while decreasing costs. GT-MOCA (the Georgia Tech Mission Operations Cooperative Associate) and GT-VITA (the Georgia Tech Visual and Inspectable Tutor and Assistant), whose latter stages were supported by this research, explored model-based design of collaborative operations teams and the design of intelligent tutoring systems, respectively. Implemented in proof-of-concept form for satellite operations, empirical evaluations of both, using satellite operators for the former and personnel involved in satellite control operations for the latter, demonstrated unequivocally the feasibility and effectiveness of the proposed modeling and design strategy underlying both research efforts. The proof-of-concept implementation of GT-MOCA showed that the methodology could specify software requirements that enabled a human-computer operations team to perform without any significant performance differences from the standard two-person satellite operations team. GT-VITA, using the same underlying methodology, the operator function model (OFM), and its computational implementation, OFMspert, successfully taught satellite control knowledge required by flight operations team members. The tutor structured knowledge in three ways: declarative knowledge (e.g., What is this? What does it do?), procedural knowledge, and operational skill. Operational skill is essential in real-time operations. It combines the two former knowledge types, assisting a student in using them effectively in a dynamic, multi-tasking, real-time operations environment. A high-fidelity simulator of the operator interface to the ground control system, including an almost full replication of both the human-computer interface and human interaction with the dynamic system, was used in the GT-MOCA and GT-VITA evaluations. The GT-VITA empirical evaluation, conducted with a range of 'novices' that included GSFC operations management, GSFC operations software developers, and new flight operations team members, demonstrated that GT-VITA effectively taught a wide range of knowledge in a succinct and engaging manner.
ERIC Educational Resources Information Center
Zeng, Qingtian; Zhao, Zhongying; Liang, Yongquan
2009-01-01
User's knowledge requirement acquisition and analysis are very important for a personalized or user-adaptive learning system. Two approaches to capture user's knowledge requirement about course content within an e-learning system are proposed and implemented in this paper. The first approach is based on the historical data accumulated by an…
Bazargan, M.; Kelly, E. M.; Stein, J. A.; Husaini, B. A.; Bazargan, S. H.
2000-01-01
This study identifies theoretically based predictors of condom use in a sample of 253 sexually active African-American college students recruited from two historically African-American colleges. The Information-Motivation-Behavioral (IMB) skills model of AIDS-preventive behavior was employed to delineate the roles of HIV/AIDS knowledge, experiences with and attitudes toward condom use, peer influences, perceived vulnerability, monogamy, and behavioral skills. A predictive structural equation model revealed significant predictors of more condom use including: male gender, more sexual HIV knowledge, positive experiences and attitudes about condom use, nonmonogamy, and greater behavioral skills. Results imply that attention to behavioral skills for negotiating safer sex and training in the proper use of condoms are key elements in reducing high risk behaviors. Increasing the specific knowledge level of college students regarding the subtleties of sexual transmission of HIV is important and should be addressed. Heightening students' awareness of the limited protection of serial monogamy, and the need to address gender-specific training regarding required behavior change to reduce transmission of HIV should be an additional goal of college health professionals. PMID:10992684
Surgical anatomy of the liver, hepatic vasculature and bile ducts in the rat.
Martins, Paulo Ney Aguiar; Neuhaus, Peter
2007-04-01
The rat is the most widely used experimental model in surgical research. Virtually all procedures in clinical liver surgery can be performed in the rat. However, the use of the rat model in liver surgery is limited by its small size and by limited knowledge of the rat's liver anatomy. As in humans, the rat liver vasculature and biliary system have many anatomical variations. The development of surgical techniques and the study of liver function and diseases require detailed knowledge of the regional anatomy. The objective of this study was to describe and systematically illustrate the surgical anatomy of the rat liver to facilitate the planning and performance of studies in this animal. Knowledge of the diameter and length of liver vessels is also important for the selection of catheters and perivascular devices. Twelve Wistar rat livers were dissected using a surgical microscope. Hepatic and extrahepatic anatomical structures were measured under magnification with a millimeter scale. In this study, we describe the rat liver topographical anatomy, compare it with the human liver and review the literature. Increased knowledge of the rat liver anatomy and microsurgical skills permit individualized dissection, parenchymal section, embolization and ligature of vascular and biliary branches.
ModeLang: a new approach for experts-friendly viral infections modeling.
Wasik, Szymon; Prejzendanc, Tomasz; Blazewicz, Jacek
2013-01-01
Computational modeling is an important element of systems biology. One of its important applications is modeling complex, dynamical, and biological systems, including viral infections. This type of modeling usually requires close cooperation between biologists and mathematicians. However, such cooperation often faces communication problems because biologists do not have sufficient knowledge to understand mathematical description of the models, and mathematicians do not have sufficient knowledge to define and verify these models. In many areas of systems biology, this problem has already been solved; however, in some of these areas there are still certain problematic aspects. The goal of the presented research was to facilitate this cooperation by designing seminatural formal language for describing viral infection models that will be easy to understand for biologists and easy to use by mathematicians and computer scientists. The ModeLang language was designed in cooperation with biologists and its computer implementation was prepared. Tests proved that it can be successfully used to describe commonly used viral infection models and then to simulate and verify them. As a result, it can make cooperation between biologists and mathematicians modeling viral infections much easier, speeding up computational verification of formulated hypotheses.
NASA Astrophysics Data System (ADS)
Rawlusyk, Kevin James
Test items used to assess learners' knowledge on high-stakes science examinations contain contextualized questions that unintentionally assess reading skill along with conceptual knowledge. Students who are not proficient readers are therefore unable to comprehend the text within a test item well enough to demonstrate their level of science knowledge effectively. The purpose of this quantitative study was to understand what reading attributes are required to successfully answer the Biology 30 Diploma Exam. Furthermore, the research sought to understand the cognitive relationships among the reading attributes through quantitative analysis structured by the Attribute Hierarchy Model (AHM). The research consisted of two phases: (1) cognitive development, in which the cognitive attributes of the Biology 30 Exam were specified and hierarchy structures were developed; and (2) psychometric analysis, which statistically tested the attribute hierarchies using the Hierarchy Consistency Index (HCI) and calculated attribute probabilities. Phase one of the research used the January 2011 Biology 30 Diploma Exam, while phase two accessed archival data for the 9985 examinees who took the assessment on January 24th, 2011. Phase one identified ten specific reading attributes, of which five were identified as unique subsets of vocabulary, two as reading visual representations, and three as general reading skills. Four hierarchical cognitive models were proposed and then analyzed using the HCI as a mechanism to explain the relationships among the attributes. Model A had the highest HCI value (0.337), indicating an overall poor data fit; yet for the top-achieving examinees the model had an excellent fit, with an HCI value of 0.888, and for examinees who scored over 60% there was a moderate fit (HCI = 0.592). Linear regressions of the attribute probability estimates suggest that there is a cognitive relationship among six of the ten reading attributes (R² = 0.958 and 0.922). The results indicate that the Biology 30 Diploma Exam requires examinees to command specific reading attributes to answer test items successfully. Knowing the specific reading attributes associated with the Biology 30 Diploma Exam allows teachers and test developers to better assess learners and to be aware that cognitive processes other than the examinees' science knowledge influence test results.
Method of Individual Forecasting of Technical State of Logging Machines
NASA Astrophysics Data System (ADS)
Kozlov, V. G.; Gulevsky, V. A.; Skrypnikov, A. V.; Logoyda, V. S.; Menzhulova, A. S.
2018-03-01
Developing a model that evaluates the possibility of failure requires knowledge of the regularities governing changes in the technical-condition parameters of machines in use. To study these regularities, there arose the need to develop stochastic models that take into account the physical essence of the destruction processes of the machines' structural elements, the technology of their production, their degradation, the stochastic properties of the technical-state parameters, and the conditions and modes of operation.
Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition and projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055
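The projection-based reduction ingredient can be illustrated generically with proper orthogonal decomposition (POD) on snapshot data. The matrices below are random stand-ins, and the paper's domain-partitioning strategy is not reproduced.

```python
# Generic POD/Galerkin reduction sketch: build a reduced basis from snapshots,
# then solve a small projected system instead of the full one.
import numpy as np

snapshots = np.random.default_rng(2).normal(size=(500, 40))  # dofs x snapshots
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.9999)) + 1   # modes capturing ~99.99% energy
V = U[:, :k]                                   # reduced basis

K = np.diag(np.linspace(1.0, 2.0, 500))        # stand-in stiffness matrix
f = np.ones(500)                               # stand-in load vector
u_reduced = np.linalg.solve(V.T @ K @ V, V.T @ f)  # Galerkin-reduced solve
u_full = V @ u_reduced                         # lift back to full space
print("reduced dimension:", k, "reconstructed dof count:", u_full.size)
```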
Geerts, Hugo; Hofmann-Apitius, Martin; Anastasio, Thomas J
2017-11-01
Neurodegenerative diseases such as Alzheimer's disease (AD) follow a slowly progressing dysfunctional trajectory, with a large presymptomatic component and many comorbidities. Using preclinical models and large-scale omics studies ranging from genetics to imaging, a large number of processes that might be involved in AD pathology at different stages and levels have been identified. The sheer number of putative hypotheses makes it almost impossible to estimate their contribution to the clinical outcome and to develop a comprehensive view on the pathological processes driving the clinical phenotype. Traditionally, bioinformatics approaches have provided correlations and associations between processes and phenotypes. Focusing on causality, a new breed of advanced and more quantitative modeling approaches that use formalized domain expertise offer new opportunities to integrate these different modalities and outline possible paths toward new therapeutic interventions. This article reviews three different computational approaches and their possible complementarities. Process algebras, implemented using declarative programming languages such as Maude, facilitate simulation and analysis of complicated biological processes on a comprehensive but coarse-grained level. A model-driven Integration of Data and Knowledge, based on the OpenBEL platform and using reverse causative reasoning and network jump analysis, can generate mechanistic knowledge and a new, mechanism-based taxonomy of disease. Finally, Quantitative Systems Pharmacology is based on formalized implementation of domain expertise in a more fine-grained, mechanism-driven, quantitative, and predictive humanized computer model. We propose a strategy to combine the strengths of these individual approaches for developing powerful modeling methodologies that can provide actionable knowledge for rational development of preventive and therapeutic interventions. Development of these computational approaches is likely to be required for further progress in understanding and treating AD.
The role of global cloud climatologies in validating numerical models
NASA Technical Reports Server (NTRS)
HARSHVARDHAN
1991-01-01
The net upward longwave surface radiation is exceedingly difficult to measure from space. A hybrid method using General Circulation Model (GCM) simulations and satellite data from the Earth Radiation Budget Experiment (ERBE) and the International Satellite Cloud Climatology Project (ISCCP) was used to produce global maps of this quantity over oceanic areas. An advantage of this technique is that no independent knowledge or assumptions regarding cloud cover for a particular month are required. The only information required is a relationship between the cloud radiation forcing (CRF) at the top of the atmosphere and that at the surface, which is obtained from the GCM simulation. A flow diagram of the technique and results are given.
Requirements and design aspects of a data model for a data dictionary in paediatric oncology.
Merzweiler, A; Knaup, P; Creutzig, U; Ehlerding, H; Haux, R; Mludek, V; Schilling, F H; Weber, R; Wiedemann, T
2000-01-01
German children suffering from cancer are mostly treated within the framework of multicentre clinical trials. An important task in conducting these trials is an extensive information and knowledge exchange, which has to be based on standardised documentation. To support this effort, it is the aim of a nationwide project to define a standardised terminology that should be used by clinical trials for therapy documentation. In order to support terminology maintenance we are currently developing a data dictionary. In this paper we describe requirements and design aspects of the data model used for the data dictionary as the first results of our research. We compare it with other terminology systems.
Semantic Document Model to Enhance Data and Knowledge Interoperability
NASA Astrophysics Data System (ADS)
Nešić, Saša
To enable document data and knowledge to be efficiently shared and reused across application, enterprise, and community boundaries, desktop documents should be completely open and queryable resources, whose data and knowledge are represented in a form understandable to both humans and machines. At the same time, these are the requirements that desktop documents need to satisfy in order to contribute to the visions of the Semantic Web. With the aim of achieving this goal, we have developed the Semantic Document Model (SDM), which turns desktop documents into Semantic Documents: uniquely identified and semantically annotated composite resources that can be instantiated into human-readable (HR) and machine-processable (MP) forms. In this paper, we present the SDM along with an RDF and ontology-based solution for the MP document instance. Moreover, on top of the proposed model, we have built the Semantic Document Management System (SDMS), which provides a set of services that exploit the model. As an application example that takes advantage of SDMS services, we have extended MS Office with a set of tools that enables users to transform MS Office documents (e.g., MS Word and MS PowerPoint) into Semantic Documents, and to search local and distant semantic document repositories for document content units (CUs) over Semantic Web protocols.
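As a rough illustration of what a machine-processable (MP) document instance can look like, the sketch below annotates one document content unit in RDF using rdflib and queries it with SPARQL; the namespace and term names are hypothetical stand-ins, not the SDM's actual ontology.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

# Hypothetical vocabulary standing in for the SDM ontology.
SD = Namespace("http://example.org/semantic-document#")
g = Graph()

doc = URIRef("urn:example:doc-42")            # uniquely identified document
cu = URIRef("urn:example:doc-42#cu-1")        # one content unit (CU)
g.add((doc, RDF.type, SD.SemanticDocument))
g.add((cu, RDF.type, SD.ContentUnit))
g.add((cu, DC.title, Literal("Quarterly results summary")))
g.add((doc, SD.hasContentUnit, cu))

# Query the machine-processable instance over SPARQL.
q = """SELECT ?t WHERE {
         ?d <http://example.org/semantic-document#hasContentUnit> ?cu .
         ?cu <http://purl.org/dc/elements/1.1/title> ?t }"""
for row in g.query(q):
    print(row.t)
```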
Energy Auditor and Quality Control Inspector Competency Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Head, Heather R.; Kurnik, Charles W.; Schroeder, Derek
The Energy Auditor (EA) and Quality Control Inspector (QCI) Competency Model was developed to identify the soft skills and foundational competencies and to define the levels of Knowledge, Skills, and Abilities (KSAs) required to successfully perform the tasks defined in the EA and QCI Job Task Analyses (JTAs). The U.S. Department of Energy (DOE) used the U.S. Department of Labor's (DOL) Competency Model Clearinghouse resources to develop the QCI and EA Competency Model. To keep the QCI and EA competency model consistent with other construction and energy management competency models, DOE and the National Renewable Energy Laboratory used the existing 'Residential Construction Competency Model' and the 'Advanced Commercial Building Competency Model' where appropriate.
Learning and inference using complex generative models in a spatial localization task.
Bejjanki, Vikranth R; Knill, David C; Aslin, Richard N
2016-01-01
A large body of research has established that, under relatively simple task conditions, human observers integrate uncertain sensory information with learned prior knowledge in an approximately Bayes-optimal manner. However, in many natural tasks, observers must perform this sensory-plus-prior integration when the underlying generative model of the environment consists of multiple causes. Here we ask if the Bayes-optimal integration seen with simple tasks also applies to such natural tasks when the generative model is more complex, or whether observers rely instead on a less efficient set of heuristics that approximate ideal performance. Participants localized a "hidden" target whose position on a touch screen was sampled from a location-contingent bimodal generative model with different variances around each mode. Over repeated exposure to this task, participants learned the a priori locations of the target (i.e., the bimodal generative model), and integrated this learned knowledge with uncertain sensory information on a trial-by-trial basis in a manner consistent with the predictions of Bayes-optimal behavior. In particular, participants rapidly learned the locations of the two modes of the generative model, but the relative variances of the modes were learned much more slowly. Taken together, our results suggest that human performance in a more complex localization task, which requires the integration of sensory information with learned knowledge of a bimodal generative model, is consistent with the predictions of Bayes-optimal behavior, but involves a much longer time-course than in simpler tasks.
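As a concrete illustration of the ideal-observer benchmark against which the participants above were compared, here is a minimal sketch of Bayes-optimal integration of a noisy sensory cue with a learned bimodal prior (a two-component Gaussian mixture); all parameter values are illustrative, not the study's.

```python
import numpy as np

def bayes_optimal_estimate(x_sensed, sigma_s, modes, sigmas, weights):
    """Posterior-mean target estimate: Gaussian sensory likelihood combined
    with a bimodal Gaussian-mixture prior (illustrative sketch)."""
    modes, sigmas, weights = map(np.asarray, (modes, sigmas, weights))
    # Per component, the product of two Gaussians is again Gaussian.
    post_var = 1.0 / (1.0 / sigma_s**2 + 1.0 / sigmas**2)
    post_mean = post_var * (x_sensed / sigma_s**2 + modes / sigmas**2)
    # Component responsibilities follow from the marginal likelihood of x.
    marg_var = sigma_s**2 + sigmas**2
    marg = weights * np.exp(-0.5 * (x_sensed - modes)**2 / marg_var) \
           / np.sqrt(2.0 * np.pi * marg_var)
    resp = marg / marg.sum()
    return float(np.dot(resp, post_mean))     # posterior mean across components

# Example: prior modes at -5 and +5 with different variances around each mode.
print(bayes_optimal_estimate(x_sensed=3.0, sigma_s=2.0,
                             modes=[-5.0, 5.0], sigmas=[1.0, 3.0],
                             weights=[0.5, 0.5]))
```

Note how the estimate is pulled toward the prior mode favored by the sensory sample, with the pull strongest when sensory noise is large relative to the mode's variance.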
Extended professional development for systemic curriculum reform
NASA Astrophysics Data System (ADS)
Kubitskey, Mary Elizabeth
Education standards call for adopting inquiry science instruction. Successful adoption requires professional development (PD) to support teachers, increasing the need for research on PD. This dissertation examines the question: What is the influence of high-quality, curriculum-aligned, long-term group workshops and related practice on teacher learning? I focus on the following subquestions: (1) What is the influence of high-quality, curriculum-aligned, long-term group workshops on teacher knowledge and beliefs? (2) What is the impact of the workshops on teacher practice? (3) What is the influence of practice on student response? (4) What is the impact of practice and student response on teacher knowledge and beliefs? I focus on an instance of PD nested within a long-term systemic change initiative, tracing eleven science teachers' learning from workshops and associated enactments. The data included pre- and post-unit interviews (n=22), two post-workshop interviews (n=17), workshop observations (n=2), classroom observations (n=24) and student work (n=351). I used mixed-methods analysis. Quantitative analysis measured teacher learning by comparing pre- and post-unit interview ratings. Qualitative components included two case study approaches, the logic model technique and cross-case synthesis, examining teacher learning within and across teachers. The findings suggested a teacher-learning model incorporating PD, teacher knowledge, beliefs, practice and student response. PD impacts teachers' knowledge by providing teachers with new knowledge, adapting previous knowledge, or convincing them to value existing knowledge they chose not to use. The workshops can influence beliefs, providing teachers with the confidence and motivation to adopt the practice. Beliefs can mediate how knowledge manifests itself in practice, which, in turn, impacts student response. Student response influences the teachers' beliefs, either reinforcing or motivating change. This teacher-learning model suggests a PD design model for long-term systemic change, incorporating teacher practice and student response and providing guidance for teachers making adaptations that maintain reform. This dissertation responds to the call for empirical research linking PD to learning outcomes. These models are unique because practice becomes a continuum of PD rather than an outcome, and because they stress the importance of addressing teachers' beliefs. This PD design provides mechanisms for maintaining equivalence between the written and enacted curriculum, sustaining the integrity of the reform.
Janulis, Patrick; Newcomb, Michael E; Sullivan, Patrick; Mustanski, Brian
2018-01-01
Knowledge about the transmission, prevention, and treatment of HIV remains a critical element in psychosocial models of HIV risk behavior and is commonly used as an outcome in HIV prevention interventions. However, most HIV knowledge questions have not undergone rigorous psychometric testing such as using item response theory. The current study used data from six studies of men who have sex with men (MSM; n = 3565) to (1) examine the item properties of HIV knowledge questions, (2) test for differential item functioning on commonly studied characteristics (i.e., age, race/ethnicity, and HIV risk behavior), (3) select items with the optimal item characteristics, and (4) leverage this combined dataset to examine the potential moderating effect of age on the relationship between condomless anal sex (CAS) and HIV knowledge. Findings indicated that existing questions tend to poorly differentiate those with higher levels of HIV knowledge, but items were relatively robust across diverse individuals. Furthermore, age moderated the relationship between CAS and HIV knowledge with older MSM having the strongest association. These findings suggest that additional items are required in order to capture a more nuanced understanding of HIV knowledge and that the association between CAS and HIV knowledge may vary by age.
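For readers unfamiliar with the item response theory machinery referenced above, the sketch below shows the standard two-parameter logistic (2PL) item model and its Fisher information; the parameter values describe an easy, weakly discriminating item of the kind the study reports, but are illustrative rather than estimates from the paper.

```python
import numpy as np

def irt_2pl(theta, a, b):
    """Two-parameter logistic IRT: probability of answering a knowledge item
    correctly given latent trait theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item; it peaks at theta == b, so an easy
    item (b << 0) tells us little about high-knowledge respondents."""
    p = irt_2pl(theta, a, b)
    return a**2 * p * (1.0 - p)

theta = np.linspace(-3, 3, 7)                 # latent HIV-knowledge levels
print(irt_2pl(theta, a=0.8, b=-1.5))          # item characteristic curve
print(item_information(theta, a=0.8, b=-1.5)) # where the item is informative
```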
Dynamic information processing states revealed through neurocognitive models of object semantics
Clarke, Alex
2015-01-01
Recognising objects relies on highly dynamic, interactive brain networks to process multiple aspects of object information. To fully understand how different forms of information about objects are represented and processed in the brain requires a neurocognitive account of visual object recognition that combines a detailed cognitive model of semantic knowledge with a neurobiological model of visual object processing. Here we ask how specific cognitive factors are instantiated in our mental processes and how they dynamically evolve over time. We suggest that coarse semantic information, based on generic shared semantic knowledge, is rapidly extracted from visual inputs and is sufficient to drive rapid category decisions. Subsequent recurrent neural activity between the anterior temporal lobe and posterior fusiform supports the formation of object-specific semantic representations – a conjunctive process primarily driven by the perirhinal cortex. These object-specific representations require the integration of shared and distinguishing object properties and support the unique recognition of objects. We conclude that a valuable way of understanding the cognitive activity of the brain is through testing the relationship between specific cognitive measures and dynamic neural activity. This kind of approach allows us to move towards uncovering the information processing states of the brain and how they evolve over time. PMID:25745632
Unsupervised domain adaptation for early detection of drought stress in hyperspectral images
NASA Astrophysics Data System (ADS)
Schmitter, P.; Steinrücken, J.; Römer, C.; Ballvora, A.; Léon, J.; Rascher, U.; Plümer, L.
2017-09-01
Hyperspectral images can be used to uncover physiological processes in plants if interpreted properly. Machine learning methods such as Support Vector Machines (SVM) and Random Forests have been applied to estimate the development of biomass and to detect and predict plant diseases and drought stress. A basic requirement of machine learning is that training and testing data come from the same domain and the same distribution. Different genotypes, environmental conditions, illumination and sensors violate this requirement in most practical circumstances. Here, we present an approach which enables the detection of physiological processes by transferring the prior knowledge within an existing model into a related target domain where no label information is available. We propose a two-step transformation of the target features which enables a direct application of an existing model. The transformation is evaluated by an objective function including additional prior knowledge about classification and physiological processes in plants. We have applied the approach to three sets of hyperspectral images, which were acquired with different plant species in different environments observed with different sensors. It is shown that a classification model derived from one of the sets delivers satisfactory classification results on the transformed features of the other data sets. Furthermore, in all cases early non-invasive detection of drought stress was possible.
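The paper's two-step target transformation is tied to its specific objective function; as a hedged stand-in, the sketch below shows the simplest version of the underlying idea: moment matching of unlabelled target features to source statistics so a classifier trained on the source domain can be applied directly. All data, names and parameters here are synthetic.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic stand-ins for hyperspectral features: a labelled source domain
# and an unlabelled target domain with a shifted, rescaled distribution.
X_src = rng.normal(0.0, 1.0, (200, 50))
y_src = (X_src[:, 0] > 0).astype(int)         # e.g., drought-stressed vs. control
X_tgt = 0.5 * rng.normal(0.0, 1.0, (100, 50)) + 2.0

clf = SVC(kernel="rbf").fit(X_src, y_src)     # model trained on source only

def align_to_source(X_target, X_source):
    # Illustrative stand-in for the paper's two-step transformation: match
    # per-band mean and variance of the target spectra to source statistics.
    mu_t, sd_t = X_target.mean(0), X_target.std(0) + 1e-12
    mu_s, sd_s = X_source.mean(0), X_source.std(0)
    return (X_target - mu_t) / sd_t * sd_s + mu_s

y_pred = clf.predict(align_to_source(X_tgt, X_src))
print("predicted stressed fraction:", y_pred.mean())
```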
Lightweight Adaptation of Classifiers to Users and Contexts: Trends of the Emerging Domain
Vildjiounaite, Elena; Gimel'farb, Georgy; Kyllönen, Vesa; Peltola, Johannes
2015-01-01
Intelligent computer applications need to adapt their behaviour to contexts and users, but conventional classifier adaptation methods require long data collection and/or training times. Therefore classifier adaptation is often performed as follows: at design time application developers define typical usage contexts and provide reasoning models for each of these contexts, and then at runtime an appropriate model is selected from available ones. Typically, definition of usage contexts and reasoning models heavily relies on domain knowledge. However, in practice many applications are used in so diverse situations that no developer can predict them all and collect for each situation adequate training and test databases. Such applications have to adapt to a new user or unknown context at runtime just from interaction with the user, preferably in fairly lightweight ways, that is, requiring limited user effort to collect training data and limited time of performing the adaptation. This paper analyses adaptation trends in several emerging domains and outlines promising ideas, proposed for making multimodal classifiers user-specific and context-specific without significant user efforts, detailed domain knowledge, and/or complete retraining of the classifiers. Based on this analysis, this paper identifies important application characteristics and presents guidelines to consider these characteristics in adaptation design. PMID:26473165
Applying Evidence-Based Medicine in Telehealth: An Interactive Pattern Recognition Approximation
Fernández-Llatas, Carlos; Meneu, Teresa; Traver, Vicente; Benedi, José-Miguel
2013-01-01
Born in the early nineteen nineties, evidence-based medicine (EBM) is a paradigm intended to promote the integration of biomedical evidence into physicians' daily practice. This paradigm requires the continuous study of diseases to provide the best scientific knowledge for closely supporting physicians in their diagnoses and treatments. Within this paradigm, health experts usually create and publish clinical guidelines, which provide holistic guidance for the care of a certain disease. The creation of these clinical guidelines requires demanding iterative processes, in which each iteration represents scientific progress in the knowledge of the disease. To perform this guidance through telehealth, the use of formal clinical guidelines allows the building of care processes that can be interpreted and executed directly by computers. In addition, the formalization of clinical guidelines makes it possible to build automatic methods, using pattern recognition techniques, to estimate the proper models, as well as the mathematical models for optimizing the iterative cycle for the continuous improvement of the guidelines. However, to ensure the efficiency of the system, it is necessary to build a probabilistic model of the problem. In this paper, an interactive pattern recognition approach to support professionals in evidence-based medicine is formalized. PMID:24185841
Lee, Kyubum; Kim, Byounggun; Jeon, Minji; Kim, Jihye; Tan, Aik Choon
2018-01-01
Background With the development of artificial intelligence (AI) technology centered on deep-learning, the computer has evolved to a point where it can read a given text and answer a question based on the context of the text. Such a specific task is known as the task of machine comprehension. Existing machine comprehension tasks mostly use datasets of general texts, such as news articles or elementary school-level storybooks. However, no attempt has been made to determine whether an up-to-date deep learning-based machine comprehension model can also process scientific literature containing expert-level knowledge, especially in the biomedical domain. Objective This study aims to investigate whether a machine comprehension model can process biomedical articles as well as general texts. Since there is no dataset for the biomedical literature comprehension task, our work includes generating a large-scale question answering dataset using PubMed and manually evaluating the generated dataset. Methods We present an attention-based deep neural model tailored to the biomedical domain. To further enhance the performance of our model, we used a pretrained word vector and biomedical entity type embedding. We also developed an ensemble method of combining the results of several independent models to reduce the variance of the answers from the models. Results The experimental results showed that our proposed deep neural network model outperformed the baseline model by more than 7% on the new dataset. We also evaluated human performance on the new dataset. The human evaluation result showed that our deep neural model outperformed humans in comprehension by 22% on average. Conclusions In this work, we introduced a new task of machine comprehension in the biomedical domain using a deep neural model. Since there was no large-scale dataset for training deep neural models in the biomedical domain, we created the new cloze-style datasets Biomedical Knowledge Comprehension Title (BMKC_T) and Biomedical Knowledge Comprehension Last Sentence (BMKC_LS) (together referred to as BioMedical Knowledge Comprehension) using the PubMed corpus. The experimental results showed that the performance of our model is much higher than that of humans. We observed that our model performed consistently better regardless of the degree of difficulty of a text, whereas humans have difficulty when performing biomedical literature comprehension tasks that require expert level knowledge. PMID:29305341
Discovery learning model with geogebra assisted for improvement mathematical visual thinking ability
NASA Astrophysics Data System (ADS)
Juandi, D.; Priatna, N.
2018-05-01
The main goal of this study is to improve the mathematical visual thinking ability of high school students through implementation of the Discovery Learning Model with GeoGebra assistance. The study used a quasi-experimental method with a non-random pretest-posttest control design. The sample consisted of 62 grade XI senior high school students at one school in Bandung district. The required data were collected through documentation, observation, written tests, interviews, daily journals, and student worksheets. The results of this study are: 1) the improvement in the mathematical visual thinking ability of students who learned with the GeoGebra-assisted Discovery Learning Model is significantly higher than that of students who received conventional learning; 2) there is a difference in the improvement of students' mathematical visual thinking ability between groups based on prior mathematical knowledge (high, medium, and low) that received the treatment; 3) the improvement in the high group is significantly higher than in the medium and low groups; 4) the quality of the improvement for students with high and low prior knowledge is in the moderate category, while students with medium prior knowledge achieved improvement in the high category.
Collaborative mining of graph patterns from multiple sources
NASA Astrophysics Data System (ADS)
Levchuk, Georgiy; Colonna-Romano, John
2016-05-01
Intelligence analysts require automated tools to mine multi-source data, including answering queries, learning patterns of life, and discovering malicious or anomalous activities. Graph mining algorithms have recently attracted significant attention in the intelligence community, because text-derived knowledge can be efficiently represented as graphs of entities and relationships. However, graph mining models are limited to use-cases involving collocated data, and often make restrictive assumptions about the types of patterns that need to be discovered, the relationships between individual sources, and the availability of accurate data segmentation. In this paper we present a model to learn graph patterns from multiple relational data sources, when each source might have only a fragment (or subgraph) of the knowledge that needs to be discovered, and segmentation of data into training or testing instances is not available. Our model is based on distributed collaborative graph learning, and is effective in situations when the data is kept locally and cannot be moved to a centralized location. Our experiments show that the proposed collaborative learning achieves better learning quality than aggregated centralized graph learning, with learning time comparable to traditional distributed learning, in which knowledge of data segmentation is needed.
Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott
2011-07-28
Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.
On the Need to Establish an International Soil Modeling Consortium
NASA Astrophysics Data System (ADS)
Vereecken, H.; Vanderborght, J.; Schnepf, A.
2014-12-01
Soil is one of the most critical life-supporting compartments of the Biosphere. Soil provides numerous ecosystem services such as a habitat for biodiversity, water and nutrients, as well as producing food, feed, fiber and energy. To feed the rapidly growing world population in 2050, agricultural food production must be doubled using the same land resources footprint. At the same time, soil resources are threatened due to improper management and climate change. Despite the many important functions of soil, many fundamental knowledge gaps remain, regarding the role of soil biota and biodiversity on ecosystem services, the structure and dynamics of soil communities, the interplay between hydrologic and biotic processes, the quantification of soil biogeochemical processes and soil structural processes, the resilience and recovery of soils from stress, as well as the prediction of soil development and the evolution of soils in the landscape, to name a few. Soil models have long played an important role in quantifying and predicting soil processes and related ecosystem services. However, a new generation of soil models based on a whole systems approach comprising all physical, mechanical, chemical and biological processes is now required to address these critical knowledge gaps and thus contribute to the preservation of ecosystem services, improve our understanding of climate-change-feedback processes, bridge basic soil science research and management, and facilitate the communication between science and society. To meet these challenges an international community effort is required, similar to initiatives in systems biology, hydrology, and climate and crop research. Our consortium will bring together modelers and experimental soil scientists at the forefront of new technologies and approaches to characterize soils. By addressing these aims, the consortium will contribute to improve the role of soil modeling as a knowledge dissemination instrument in addressing key global issues and stimulate the development of translational research activities. This presentation will provide a compelling case for this much-needed effort, with a focus on tangible benefits to the scientific and food security communities.
Chapter 1: Biomedical Knowledge Integration
Payne, Philip R. O.
2012-01-01
The modern biomedical research and healthcare delivery domains have seen an unparalleled increase in the rate of innovation and novel technologies over the past several decades. Catalyzed by paradigm-shifting public and private programs focusing upon the formation and delivery of genomic and personalized medicine, the need for high-throughput and integrative approaches to the collection, management, and analysis of heterogeneous data sets has become imperative. This need is particularly pressing in the translational bioinformatics domain, where many fundamental research questions require the integration of large scale, multi-dimensional clinical phenotype and bio-molecular data sets. Modern biomedical informatics theory and practice has demonstrated the distinct benefits associated with the use of knowledge-based systems in such contexts. A knowledge-based system can be defined as an intelligent agent that employs a computationally tractable knowledge base or repository in order to reason upon data in a targeted domain and reproduce expert performance relative to such reasoning operations. The ultimate goal of the design and use of such agents is to increase the reproducibility, scalability, and accessibility of complex reasoning tasks. Examples of the application of knowledge-based systems in biomedicine span a broad spectrum, from the execution of clinical decision support, to epidemiologic surveillance of public data sets for the purposes of detecting emerging infectious diseases, to the discovery of novel hypotheses in large-scale research data sets. In this chapter, we will review the basic theoretical frameworks that define core knowledge types and reasoning operations with particular emphasis on the applicability of such conceptual models within the biomedical domain, and then go on to introduce a number of prototypical data integration requirements and patterns relevant to the conduct of translational bioinformatics that can be addressed via the design and use of knowledge-based systems. PMID:23300416
Concepts, challenges, and successes in modeling thermodynamics of metabolism.
Cannon, William R
2014-01-01
The modeling of the chemical reactions involved in metabolism is a daunting task. Ideally, the modeling of metabolism would use kinetic simulations, but these simulations require knowledge of the thousands of rate constants involved in the reactions. The measurement of rate constants is very labor intensive, and hence rate constants for most enzymatic reactions are not available. Consequently, constraint-based flux modeling has been the method of choice because it does not require the use of the rate constants of the law of mass action. However, this convenience also limits the predictive power of constraint-based approaches in that the law of mass action is used only as a constraint, making it difficult to predict metabolite levels or energy requirements of pathways. An alternative to both of these approaches is to model metabolism using simulations of states rather than simulations of reactions, in which the state is defined as the set of all metabolite counts or concentrations. While kinetic simulations model reactions based on the likelihood of the reaction derived from the law of mass action, states are modeled based on likelihood ratios of mass action. Both approaches provide information on the energy requirements of metabolic reactions and pathways. However, modeling states rather than reactions has the advantage that the parameters needed to model states (chemical potentials) are much easier to determine than the parameters needed to model reactions (rate constants). Herein, we discuss recent results, assumptions, and issues in using simulations of state to model metabolism.
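To make the article's contrast concrete, here is a toy sketch (assumed values, a single reversible reaction rather than a metabolic network) of the two routes it describes: the kinetic route needs both rate constants, while the state-based route recovers the same equilibrium from the free-energy difference alone.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy reaction A <-> B. The kinetic route needs both rate constants; the
# state/thermodynamic route needs only the chemical-potential difference.
k_f, k_r = 2.0, 0.5                            # illustrative rate constants
RT = 8.314e-3 * 298                            # kJ/mol at 298 K
dG = -RT * np.log(k_f / k_r)                   # consistent standard free energy

def mass_action(t, y):
    a, b = y                                   # concentrations of A and B
    return [-k_f * a + k_r * b, k_f * a - k_r * b]

sol = solve_ivp(mass_action, (0.0, 10.0), [1.0, 0.0])
print("kinetic steady state:", sol.y[:, -1])

# State-based route: equilibrium ratio from K = exp(-dG/RT) alone.
K = np.exp(-dG / RT)
a_eq = 1.0 / (1.0 + K)
print("state-based equilibrium:", [a_eq, 1.0 - a_eq])
```

Both routes converge on the same composition (A:B = 0.2:0.8 here), but the state-based calculation never required the individual rate constants, which is the practical advantage the article emphasizes.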
Advanced Diagnostics for Reacting Flows
2006-06-01
TECHNICAL DISCUSSION: 1. Infrared-PLIF Imaging Diagnostics using Vibrational Transitions IR-PLIF allows for imaging a group of molecular species important...excitation of IR-active vibrational modes with imaging of the subsequent vibrational fluorescence. Quantitative interpretation requires knowledge of...the vibrational energy transfer processes, and hence in recent years we have been developing models for infrared fluorescence. During the past year
Teachers' Perceptions of Professional Development: An Exploration of Delivery Models
ERIC Educational Resources Information Center
Casale, MaryAnn
2011-01-01
In order to teach students the knowledge and skills that are required to be successful in the 21st century, teachers must change the way they have traditionally taught. A focus on problem solving, critical thinking, creative thinking, and effective communication skills is necessary for students to learn in a complex society (Darling-Hammond &…
ERIC Educational Resources Information Center
Gladman, Justin; Perkins, David
2013-01-01
Context and Objective: Australian rural general practitioners (GPs) require public health knowledge. This study explored the suitability of teaching complex public health issues related to Aboriginal health by way of a hybrid problem-based learning (PBL) model within an intensive training retreat for GP registrars, when numerous trainees have no…
The Support of Student Articulation of Reasoning, Student Reflection and Tutor Feedback
ERIC Educational Resources Information Center
Garner, Stuart
2007-01-01
Learning theory suggests that student learning can be improved if students are required to articulate and reflect about work that they have done. This process helps students think more clearly about their work and such articulation also enables tutors to better assess student knowledge and mental models. There are various electronic tools…
ERIC Educational Resources Information Center
Coetzee, Serena; Rautenbach, Victoria; du Plessis, Heindrich
2015-01-01
The South African Spatial Data Infrastructure (SASDI) was established in 2003. Registration of geographical information science (GISc) practitioners by the South African geomatics professional body followed in 2004 and accreditation of university GISc programmes in 2012. In 2010, the Committee for Spatial Information identified inadequate knowledge…
Teaching Time Value of Money Using an Excel Retirement Model
ERIC Educational Resources Information Center
Arellano, Fernando; Mulig, Liz; Rhame, Susan
2012-01-01
The time value of money (TVM) is required knowledge for all business students. It is traditionally taught in finance and accounting classes for use in various applications in the business curriculum. These concepts are also very useful in real life situations such as calculating the amount to save for retirement. This paper details a retirement…
USDA-ARS?s Scientific Manuscript database
Knowledge of the microbial quality of irrigation waters is extremely limited. For this reason, the US FDA has promulgated the Produce Rule, mandating the testing of irrigation water sources for many farms. The rule requires the collection and analysis of at least 20 water samples over two to four ye...
A Metric Model for Intranet Portal Business Requirements
2003-12-01
A means to calculate return on intranet metrics investment (ROIMI) with a common unit of analysis for both aggregate and sub-corporate levels, through forms of the Knowledge Value Added (KVA) and Activity Based...
Fuzzy logic of Aristotelian forms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perlovsky, L.I.
1996-12-31
Model-based approaches to pattern recognition and machine vision have been proposed to overcome the exorbitant training requirements of earlier computational paradigms. However, uncertainties in data were found to lead to a combinatorial explosion of the computational complexity. This issue is related here to the roles of a priori knowledge vs. adaptive learning. What is the a priori knowledge representation that supports learning? I introduce Modeling Field Theory (MFT), a model-based neural network whose adaptive learning is based on a priori models. These models combine deterministic, fuzzy, and statistical aspects to account for a priori knowledge, its fuzzy nature, and data uncertainties. In the process of learning, a priori fuzzy concepts converge to crisp or probabilistic concepts. The MFT is a convergent dynamical system of only linear computational complexity. Fuzzy logic turns out to be essential for reducing the combinatorial complexity to a linear one. I will discuss the relationship of the new computational paradigm to two theories due to Aristotle: the theory of Forms and logic. While the theory of Forms argued that the mind cannot be based on ready-made a priori concepts, Aristotelian logic operated with just such concepts. I discuss an interpretation of MFT suggesting that its fuzzy logic, combining apriority and adaptivity, implements the Aristotelian theory of Forms (theory of mind). Thus, 2300 years after Aristotle, a logic is developed suitable for his theory of mind.
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.; Ifanti, Konstantina
2012-12-01
Process simulation models are usually empirical; therefore there is an inherent difficulty in using them as carriers for knowledge acquisition and technology transfer, since their parameters have no physical meaning that would facilitate verification of their dependence on the production conditions. In such a case, a 'black box' regression model or a neural network might be used to simply connect input-output characteristics. In several cases, scientific/mechanismic models may be proved valid, in which case parameter identification is required to find the independent/explanatory variables and parameters on which each parameter depends. This is a difficult task, since the phenomenological level at which each parameter is defined is different. In this paper, we have developed a methodological framework in the form of an algorithmic procedure to solve this problem. The main parts of this procedure are: (i) stratification of relevant knowledge in discrete layers immediately adjacent to the layer that the initial model under investigation belongs to, (ii) design of the ontology corresponding to these layers, (iii) elimination of the less relevant parts of the ontology by thinning, (iv) retrieval of the stronger interrelations between the remaining nodes within the revised ontological network, and (v) parameter identification taking into account the most influential interrelations revealed in (iv). The functionality of this methodology is demonstrated with two representative case examples on wastewater treatment.
Shiffman, Richard N.; Michel, George; Essaihi, Abdelwaheb; Thornquist, Elizabeth
2004-01-01
Objective: A gap exists between the information contained in published clinical practice guidelines and the knowledge and information that are necessary to implement them. This work describes a process to systematize and make explicit the translation of document-based knowledge into workflow-integrated clinical decision support systems. Design: This approach uses the Guideline Elements Model (GEM) to represent the guideline knowledge. Implementation requires a number of steps to translate the knowledge contained in guideline text into a computable format and to integrate the information into clinical workflow. The steps include: (1) selection of a guideline and specific recommendations for implementation, (2) markup of the guideline text, (3) atomization, (4) deabstraction and (5) disambiguation of recommendation concepts, (6) verification of rule set completeness, (7) addition of explanations, (8) building executable statements, (9) specification of origins of decision variables and insertions of recommended actions, (10) definition of action types and selection of associated beneficial services, (11) choice of interface components, and (12) creation of requirement specification. Results: The authors illustrate these component processes using examples drawn from recent experience translating recommendations from the National Heart, Lung, and Blood Institute's guideline on management of chronic asthma into a workflow-integrated decision support system that operates within the Logician electronic health record system. Conclusion: Using the guideline document as a knowledge source promotes authentic translation of domain knowledge and reduces the overall complexity of the implementation task. From this framework, we believe that a better understanding of activities involved in guideline implementation will emerge. PMID:15187061
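To give a flavour of step 8 above (building executable statements), here is a minimal hypothetical sketch: one atomized, disambiguated recommendation rendered as an executable rule with an attached explanation (step 7). The clinical variable names and the threshold are illustrative inventions, not the NHLBI guideline's actual logic.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    # Decision variables whose origins would be specified in step 9
    # (names and threshold below are illustrative, not NHLBI content).
    daytime_symptoms_per_week: int
    on_inhaled_corticosteroid: bool

def recommend(p: Patient) -> list[str]:
    """One recommendation rendered executable, with its explanation."""
    actions = []
    if p.daytime_symptoms_per_week > 2 and not p.on_inhaled_corticosteroid:
        actions.append("Suggest initiating low-dose inhaled corticosteroid")
        actions.append("Explain: symptom frequency exceeds the "
                       "intermittent-asthma threshold")
    return actions

print(recommend(Patient(daytime_symptoms_per_week=4,
                        on_inhaled_corticosteroid=False)))
```

In a workflow-integrated system, the rule's inputs would be drawn from the electronic health record and its actions surfaced through the interface components chosen in step 11.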
How models can support ecosystem-based management of coral reefs
NASA Astrophysics Data System (ADS)
Weijerman, Mariska; Fulton, Elizabeth A.; Janssen, Annette B. G.; Kuiper, Jan J.; Leemans, Rik; Robson, Barbara J.; van de Leemput, Ingrid A.; Mooij, Wolf M.
2015-11-01
Despite the importance of coral reef ecosystems to the social and economic welfare of coastal communities, the condition of these marine ecosystems has generally degraded over the past decades. With an increased knowledge of coral reef ecosystem processes and a rise in computer power, dynamic models are useful tools in assessing the synergistic effects of local and global stressors on ecosystem functions. We review representative approaches for dynamically modeling coral reef ecosystems and categorize them as minimal, intermediate and complex models. The categorization was based on the leading principle for model development and their level of realism and process detail. This review aims to improve the knowledge of concurrent approaches in coral reef ecosystem modeling and highlights the importance of choosing an appropriate approach based on the type of question(s) to be answered. We contend that minimal and intermediate models are generally valuable tools to assess the response of key states to main stressors and, hence, contribute to understanding ecological surprises. As has been shown in freshwater resources management, insight into these conceptual relations profoundly influences how natural resource managers perceive their systems and how they manage ecosystem recovery. We argue that adaptive resource management requires integrated thinking and decision support, which demands a diversity of modeling approaches. Integration can be achieved through complementary use of models or through integrated models that systemically combine all relevant aspects in one model. Such whole-of-system models can be useful tools for quantitatively evaluating scenarios. These models allow an assessment of the interactive effects of multiple stressors on various, potentially conflicting, management objectives. All models simplify reality and, as such, have their weaknesses. While minimal models lack multidimensionality, system models are likely difficult to interpret as they require considerable effort to decipher the numerous interactions and feedback loops. Given the breadth of questions to be tackled when dealing with coral reefs, the best practice approach uses multiple model types and thus benefits from the strengths of different model types.
Probabilistic load simulation: Code development status
NASA Astrophysics Data System (ADS)
Newell, J. F.; Ho, H.
1991-05-01
The objective of the Composite Load Spectra (CLS) project is to develop generic load models to simulate the composite load spectra that are induced in space propulsion system components. The probabilistic loads thus generated are part of the probabilistic design analysis (PDA) of a space propulsion system that also includes probabilistic structural analyses, reliability, and risk evaluations. Probabilistic load simulation for space propulsion systems demands sophisticated probabilistic methodology and requires large amounts of load information and engineering data. The CLS approach is to implement a knowledge based system coupled with a probabilistic load simulation module. The knowledge base manages and furnishes load information and expertise and sets up the simulation runs. The load simulation module performs the numerical computation to generate the probabilistic loads with load information supplied from the CLS knowledge base.
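A minimal sketch of the kind of probabilistic load simulation described above: a toy Monte Carlo composite load built from a deterministic mean plus independent stochastic components. The component distributions, units and limit values are invented for illustration and are not CLS knowledge-base content.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000                                    # Monte Carlo samples

# Toy composite load for a propulsion component (illustrative values only).
mean_load = 100.0                              # kN, deterministic component
thermal = rng.normal(0.0, 5.0, n)              # thermal transient scatter
vibration = rng.gumbel(8.0, 2.0, n)            # peak vibratory load
pressure = rng.lognormal(1.0, 0.3, n)          # duct pressure fluctuation

composite = mean_load + thermal + vibration + pressure
limit = 130.0                                  # illustrative design limit
print("P(load > limit):", (composite > limit).mean())
print("99.9th-percentile load:", np.percentile(composite, 99.9))
```

In the CLS architecture, the distributions and their parameters would be supplied by the knowledge base rather than hard-coded as here, with the simulation module performing the numerical sampling.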
Tropical forests and global change: filling knowledge gaps.
Zuidema, Pieter A; Baker, Patrick J; Groenendijk, Peter; Schippers, Peter; van der Sleen, Peter; Vlam, Mart; Sterck, Frank
2013-08-01
Tropical forests will experience major changes in environmental conditions this century. Understanding their responses to such changes is crucial to predicting global carbon cycling. Important knowledge gaps exist: the causes of recent changes in tropical forest dynamics remain unclear and the responses of entire tropical trees to environmental changes are poorly understood. In this Opinion article, we argue that filling these knowledge gaps requires a new research strategy, one that focuses on trees instead of leaves or communities, on long-term instead of short-term changes, and on understanding mechanisms instead of documenting changes. We propose the use of tree-ring analyses, stable-isotope analyses, manipulative field experiments, and well-validated simulation models to improve predictions of forest responses to global change. Copyright © 2013 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanfilippo, Antonio P.
2010-05-23
The increasing asymmetric nature of threats to the security, health and sustainable growth of our society requires that anticipatory reasoning become an everyday activity. Currently, the use of anticipatory reasoning is hindered by the lack of systematic methods for combining knowledge- and evidence-based models, integrating modeling algorithms, and assessing model validity, accuracy and utility. The workshop addresses these gaps with the intent of fostering the creation of a community of interest on model integration and evaluation that may serve as an aggregation point for existing efforts and a launch pad for new approaches.
Shape: A 3D Modeling Tool for Astrophysics.
Steffen, Wolfgang; Koning, Nicholas; Wenger, Stephan; Morisset, Christophe; Magnor, Marcus
2011-04-01
We present a flexible interactive 3D morpho-kinematical modeling application for astrophysics. Compared to other systems, our application reduces the restrictions on the physical assumptions, data type, and amount that is required for a reconstruction of an object's morphology. It is one of the first publicly available tools to apply interactive graphics to astrophysical modeling. The tool allows astrophysicists to provide a priori knowledge about the object by interactively defining 3D structural elements. By direct comparison of model prediction with observational data, model parameters can then be automatically optimized to fit the observation. The tool has already been successfully used in a number of astrophysical research projects.
Do Performance-Based Codes Support Universal Design in Architecture?
Grangaard, Sidse; Frandsen, Anne Kathrine
2016-01-01
The research project 'An analysis of the accessibility requirements' studies how Danish architectural firms experience the accessibility requirements of the Danish Building Regulations and examines their opinions on how future regulative models can support innovative and inclusive design - Universal Design (UD). The empirical material consists of input from six workshops to which all 700 Danish architectural firms were invited, as well as eight group interviews. The analysis shows that the current prescriptive requirements are criticized for being too homogeneous, and that possibilities for differentiation and zoning are requested. Therefore, a majority of professionals are interested in a performance-based model, because they think that such a model will support 'accessibility zoning', achieving flexibility through different levels of accessibility in a building according to its performance. The common understanding of accessibility and UD is directly related to buildings like hospitals and care centers. When the objective is both innovative and inclusive architecture, the request for a performance-based model should be followed up by a knowledge enhancement effort in the building sector. Bloom's taxonomy of educational objectives is suggested as a tool for such a boost. The research project has been financed by the Danish Transport and Construction Agency.
Vlasakakis, G; Comets, E; Keunecke, A; Gueorguieva, I; Magni, P; Terranova, N; Della Pasqua, O; de Lange, E C; Kloft, C
2013-01-01
Pharmaceutical sciences experts and regulators acknowledge that pharmaceutical development as well as drug usage requires more than scientific advancements to cope with current attrition rates/therapeutic failures. Drug disease modeling and simulation (DDM&S) creates a paradigm to enable an integrated and higher-level understanding of drugs, (diseased) systems, and their interactions (systems pharmacology) through mathematical/statistical models (pharmacometrics), hence facilitating decision making during drug development and therapeutic usage of medicines. To identify gaps and challenges in DDM&S, an inventory of skills and competencies currently available in academia, industry, and clinical practice was obtained through a survey. The survey outcomes revealed benefits, weaknesses, and hurdles for the implementation of DDM&S. In addition, the survey indicated that no consensus exists about the knowledge, skills, and attributes required to perform DDM&S activities effectively. Hence, a landscape of technical and conceptual requirements for DDM&S was identified and serves as a basis for developing a framework of competencies to guide future education and training in DDM&S. PMID:23887723
Theory and ontology for sharing temporal knowledge
NASA Technical Reports Server (NTRS)
Loganantharaj, Rasiah
1996-01-01
Using current technology, the sharing or re-using of knowledge bases is very difficult, if not impossible. ARPA has correctly recognized the problem and funded a knowledge sharing initiative. One of the outcomes of this project is a formal language called Knowledge Interchange Format (KIF) for representing knowledge that can be translated into other languages. Capturing and representing design knowledge and reasoning with it have become very important for NASA, which is a pioneer of innovative design of unique products. For upgrading an existing design for changing technology, needs, or requirements, it is essential to understand the design rationale, design choices, options and other relevant information associated with the design. Capturing such information and presenting it in the appropriate form are part of the ongoing Design Knowledge Capture project of NASA. The behavior of an object and various other aspects related to time are captured by the appropriate temporal knowledge. The captured design knowledge will be represented in such a way that the various groups at NASA who are interested in various aspects of the design cycle are able to access and use the design knowledge effectively. To facilitate knowledge sharing among these groups, one has to develop a well-defined ontology. An ontology is a specification of a conceptualization. In the literature, several specific domains were studied and some well-defined ontologies were developed for such domains. However, very little or no work has been done in the area of representing temporal knowledge to facilitate sharing. During the ASEE summer program, I investigated several temporal models and proposed a theory of time that is flexible enough to accommodate time elements such as points and intervals, and is capable of handling qualitative and quantitative temporal constraints. I also proposed a primitive temporal ontology from which other relevant temporal ontologies can be built. I investigated various issues of sharing knowledge and proposed a formal framework for modeling the concept of knowledge sharing. This work may be implemented and tested in the software environment supplied by Knowledge Based System, Inc.
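As one hedged illustration of a temporal formalism that accommodates both points and intervals with qualitative constraints, the sketch below classifies a small subset of Allen-style interval relations, treating a time point as a zero-length interval. This is a generic device for exposition, not the report's actual theory or ontology.

```python
def allen_relation(a, b):
    """Qualitative relation between intervals a=(start, end) and b=(start, end),
    covering a small subset of Allen's 13 relations; a time point is modelled
    as a zero-length interval (start == end)."""
    (s1, e1), (s2, e2) = a, b
    if e1 < s2:
        return "before"
    if e1 == s2:
        return "meets"
    if s1 == s2 and e1 == e2:
        return "equal"
    if s1 >= s2 and e1 <= e2:
        return "during (or shares a start/finish)"
    if s1 < s2 < e1 < e2:
        return "overlaps"
    return "other (inverse or remaining relations)"

print(allen_relation((1, 3), (3, 5)))   # meets
print(allen_relation((2, 2), (1, 5)))   # a point falling inside an interval
```

A quantitative constraint (e.g., "B starts at least 2 units after A ends") can then be layered on top by comparing the same endpoints numerically.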
Learning about knowledge: A complex network approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fontoura Costa, Luciano da
2006-08-15
An approach to modeling knowledge acquisition in terms of walks along complex networks is described. Each subset of knowledge is represented as a node, and relations between such knowledge are expressed as edges. Two types of edges are considered, corresponding to free and conditional transitions. The latter case implies that a node can only be reached after visiting previously a set of nodes (the required conditions). The process of knowledge acquisition can then be simulated by considering the number of nodes visited as a single agent moves along the network, starting from its lowest layer. It is shown that hierarchical networks--i.e., networks composed of successive interconnected layers--are related to compositions of the prerequisite relationships between the nodes. In order to avoid deadlocks--i.e., unreachable nodes--the subnetwork in each layer is assumed to be a connected component. Several configurations of such hierarchical knowledge networks are simulated and the performance of the moving agent quantified in terms of the percentage of visited nodes after each movement. The Barabasi-Albert and random models are considered for the layer and interconnecting subnetworks. Although all subnetworks in each realization have the same number of nodes, several interconnectivities, defined by the average node degree of the interconnection networks, have been considered. Two visiting strategies are investigated: random choice among the existing edges and preferential choice to so far untracked edges. A series of interesting results are obtained, including the identification of a series of plateaus of knowledge stagnation in the case of the preferential movement strategy in the presence of conditional edges.
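The simulation described above lends itself to a compact reimplementation. Below is a toy version with an illustrative six-node network rather than Barabasi-Albert layers, using free and conditional edges and favouring so-far-unvisited neighbours as a simple stand-in for the preferential (untracked-edge) strategy.

```python
import random

# Toy knowledge network: nodes are knowledge units, "free" edges are always
# passable, "conditional" edges require that all prerequisite nodes have
# been visited first (structure here is illustrative).
free = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
prereq = {4: {1, 2}, 5: {3, 4}}     # node -> set of required prior visits

random.seed(0)
visited, node, trace = {0}, 0, []
for step in range(200):
    # A neighbour is reachable if its prerequisites are satisfied
    # (or it has been visited before, so the edge is already open).
    options = [m for m in free[node]
               if prereq.get(m, set()) <= visited or m in visited]
    new = [m for m in options if m not in visited]
    node = random.choice(new or options)      # prefer unvisited neighbours
    visited.add(node)
    trace.append(len(visited) / len(free))    # knowledge acquired so far

print("fraction of knowledge acquired:", trace[-1])
```

Plotting `trace` against the step index for larger networks reproduces the qualitative behaviour the paper reports: plateaus of stagnation while the agent waits for conditional prerequisites to be satisfied.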
Diagnosis: Reasoning from first principles and experiential knowledge
NASA Technical Reports Server (NTRS)
Williams, Linda J. F.; Lawler, Dennis G.
1987-01-01
Completeness, efficiency, and autonomy are requirements for future diagnostic reasoning systems. Methods for automating diagnostic reasoning include diagnosis from first principles (i.e., reasoning from a thorough description of structure and behavior) and diagnosis from experiential knowledge (i.e., reasoning from a set of examples obtained from experts). However, implementation of either as a single reasoning method fails to meet these requirements. Combining reasoning from first principles with reasoning from experiential knowledge does address the requirements discussed above and can ease some of the difficulties associated with knowledge acquisition by allowing developers to systematically enumerate a portion of the knowledge necessary to build the diagnosis program. The ability to enumerate knowledge systematically facilitates defining the program's scope, completeness, and competence, and assists in bounding, controlling, and guiding the knowledge acquisition process.
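A hedged sketch of how the two reasoning styles might be combined: experiential (rule) knowledge is tried first, with a fallback to reasoning over a structural model. The device model, rule, and names below are invented for illustration and are not taken from the paper.

```python
# Hybrid diagnosis: experiential rules first, first principles as fallback.
EXPERIENTIAL_RULES = [
    # (observed symptom set, diagnosis learned from experts)
    ({"no_output", "hot_casing"}, "failed regulator"),
]

def model_based(observations, structure):
    """First principles: flag components whose predicted output disagrees."""
    return [name for name, predict in structure.items()
            if predict() != observations.get(name)]

def diagnose(symptoms, observations, structure):
    for pattern, diagnosis in EXPERIENTIAL_RULES:
        if pattern <= symptoms:        # all rule symptoms observed
            return diagnosis           # fast experiential match
    return f"suspect components: {model_based(observations, structure)}"

# hypothetical device: expected outputs from a structural/behavioral model
structure = {"amplifier": lambda: 5.0, "filter": lambda: 1.2}
print(diagnose({"no_output"}, {"amplifier": 0.0, "filter": 1.2}, structure))
# no experiential rule matches -> first principles flags ['amplifier']
```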
Iommarini, Luisa; Peralta, Susana; Torraco, Alessandra; Diaz, Francisca
2015-01-01
Mitochondrial disorders are defined as defects that affect the oxidative phosphorylation (OXPHOS) system. They are characterized by a heterogeneous array of clinical presentations, due in part to the wide variety of factors required for proper function of the components of the OXPHOS system. There is no cure for these disorders, owing to our poor knowledge of the pathogenic mechanisms of disease. To understand the mechanisms of human disease, numerous mouse models have been developed in recent years. Here we summarize the features of several mouse models of mitochondrial diseases directly related to factors affecting mtDNA maintenance, replication, transcription, and translation, as well as to other proteins involved in mitochondrial dynamics and quality control, which affect mitochondrial OXPHOS function without being intrinsic components of the system. We discuss how these models have contributed to our understanding of mitochondrial diseases and their pathogenic mechanisms. PMID:25640959
Utility of Small Animal Models of Developmental Programming.
Reynolds, Clare M; Vickers, Mark H
2018-01-01
Any effective strategy to tackle the global obesity and rising noncommunicable disease epidemic requires an in-depth understanding of the mechanisms that underlie these conditions, which manifest as a consequence of complex gene-environment interactions. In this context, it is now well established that alterations in the early life environment, including suboptimal nutrition, can result in an increased risk for a range of metabolic, cardiovascular, and behavioral disorders in later life, a process preferentially termed developmental programming. To date, most of the mechanistic knowledge around the processes underpinning developmental programming has been derived from preclinical research performed mostly, but not exclusively, in laboratory mouse and rat strains. This review will cover the utility of small animal models in developmental programming, the limitations of such models, and potential future directions required to fully maximize the information derived from preclinical models in order to translate effectively to clinical use.
Monteiro, Kristina A; George, Paul; Dollase, Richard; Dumenco, Luba
2017-01-01
The use of multiple academic indicators to identify students at risk of experiencing difficulty completing licensure requirements provides an opportunity to increase support services prior to high-stakes licensure examinations, including the United States Medical Licensing Examination (USMLE) Step 2 clinical knowledge (CK). Step 2 CK is becoming increasingly important in decision-making by residency directors because of increasing undergraduate medical enrollment and limited available residency vacancies. We created and validated a regression equation to predict students' Step 2 CK scores from previous academic indicators, to identify students at risk with sufficient time to intervene with additional support services as necessary. Data from three cohorts of students (N=218) with preclinical mean course exam scores, National Board of Medical Examiners (NBME) subject examinations, and USMLE Step 1 and Step 2 CK between 2011 and 2013 were used in the analyses. The authors created models capable of predicting Step 2 CK scores from academic indicators to identify at-risk students. In model 1, preclinical mean course exam score and Step 1 score accounted for 56% of the variance in Step 2 CK score. The second series of models included mean preclinical course exam score, Step 1 score, and scores on three NBME subject exams, and accounted for 67%-69% of the variance in Step 2 CK score. The authors validated the findings on the most recent cohort of graduating students (N=89) and predicted Step 2 CK scores within a mean of four points (SD=8). The authors suggest using the first model as a needs assessment to gauge the level of future support required after completion of preclinical course requirements, and rescreening after three of six clerkships to identify students who might benefit from additional support before taking USMLE Step 2 CK.
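For readers unfamiliar with this kind of model, the sketch below fits an ordinary-least-squares analogue of model 1 (Step 2 CK regressed on preclinical mean exam score and Step 1 score). The data and fitted coefficients are synthetic placeholders, not the authors' values.

```python
# OLS analogue of "model 1": Step 2 CK ~ intercept + mean exam + Step 1.
import numpy as np

# columns: preclinical mean exam score, Step 1 score (synthetic cohort)
X = np.array([[82.0, 225], [75.5, 210], [88.0, 242], [79.0, 231], [91.0, 250]])
y = np.array([238, 221, 251, 240, 258])          # Step 2 CK scores

A = np.column_stack([np.ones(len(X)), X])        # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # least-squares fit

def predict_step2ck(mean_exam, step1):
    """Predict Step 2 CK from the two academic indicators."""
    return coef @ [1.0, mean_exam, step1]

# screen a hypothetical student against a support threshold (illustrative)
print(round(float(predict_step2ck(76.0, 212)), 1))
```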
McNamara, J P; Hanigan, M D; White, R R
2016-12-01
The National Animal Nutrition Program "National Research Support Project 9" supports efforts in livestock nutrition, including the National Research Council's committees on the nutrient requirements of animals. Our objective was to review the status of experimentation and data reporting in animal nutrition literature and to provide suggestions for the advancement of animal nutrition research and the ongoing improvement of field-applied nutrient requirement models. Improved data reporting consistency and completeness represent a substantial opportunity to improve nutrition-related mathematical models. We reviewed a body of nutrition research; recorded common phrases used to describe diets, animals, housing, and environmental conditions; and proposed equivalent numerical data that could be reported. With the increasing availability of online supplementary material sections in journals, we developed a comprehensive checklist of data that should be included in publications. To continue to improve our research effectiveness, studies utilizing multiple research methodologies to address complex systems and measure multiple variables will be necessary. From the current body of animal nutrition literature, we identified a series of opportunities to integrate research focuses (nutrition, reproduction and genetics) to advance the development of nutrient requirement models. From our survey of current experimentation and data reporting in animal nutrition, we identified 4 key opportunities to advance animal nutrition knowledge: (1) coordinated experiments should be designed to employ multiple research methodologies; (2) systems-oriented research approaches should be encouraged and supported; (3) publication guidelines should be updated to encourage and support sharing of more complete data sets; and (4) new experiments should be more rapidly integrated into our knowledge bases, research programs and practical applications. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Group Contribution Methods for Phase Equilibrium Calculations.
Gmehling, Jürgen; Constantinescu, Dana; Schmid, Bastian
2015-01-01
The development and design of chemical processes are carried out by solving the balance equations of a mathematical model for sections of, or the whole, chemical plant with the help of process simulators. For process simulation, besides kinetic data for the chemical reaction, various pure component and mixture properties are required. Because of the great importance of separation processes for a chemical plant in particular, reliable knowledge of the phase equilibrium behavior is required. The phase equilibrium behavior can be calculated with the help of modern equations of state or g(E)-models using only binary parameters. Unfortunately, only a very small part of the experimental data needed for fitting the required binary model parameters is available, so these models often cannot be applied directly. To solve this problem, powerful predictive thermodynamic models have been developed. Group contribution methods allow the prediction of the required phase equilibrium data using only a limited number of group interaction parameters. A prerequisite for fitting the required group interaction parameters is a comprehensive database. That is why, for the development of powerful group contribution methods, almost all published pure component properties, phase equilibrium data, excess properties, etc., were stored in computerized form in the Dortmund Data Bank. In this review, the present status, weaknesses, advantages and disadvantages, possible applications, and typical results of the different group contribution methods for the calculation of phase equilibria are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foy, J; Marsh, R; Owen, D
2015-06-15
Purpose: Creating high quality SBRT treatment plans for the spine is often tedious and time consuming. In addition, the quality of treatment plans can vary greatly between treatment facilities due to inconsistencies in planning methods. This study investigates the performance of knowledge-based planning (KBP) for spine SBRT. Methods: Treatment plans were created for 28 spine SBRT patients. Each case was planned to meet strict dose objectives and guidelines. After physician and physicist approval, the plans were added to a custom model in a KBP system (RapidPlan, Varian Eclipse v13.5). The model was then trained to predict estimated DVHs and provide starting objective functions for future patients based on both generated and manual objectives. To validate the model, ten additional spine SBRT cases were planned manually as well as using the model objectives. Plans were compared based on planning time and quality (ability to meet the plan objectives, including dose metrics and conformity). Results: The average dose to the spinal cord and the cord PRV differed between the validation and control plans by <0.25%, demonstrating iso-toxicity. Six out of 10 validation plans met all dose objectives without the need for modifications, and overall, target dose coverage was increased by about 4.8%. If the validation plans did not meet the dose requirements initially, only 1–2 iterations of modifying the planning parameters were required before an acceptable plan was achieved. While manually created plans usually required 30 minutes to 3 hours to create, KBP can be used to create similar quality plans in 15–20 minutes. Conclusion: KBP for spinal tumors has been shown to greatly decrease the time required to achieve high quality treatment plans with minimal human intervention and could feasibly be used to standardize plan quality between institutions. Supported by Varian Medical Systems.
Operator agency in process intervention: tampering versus application of tacit knowledge
NASA Astrophysics Data System (ADS)
Van Gestel, P.; Pons, D. J.; Pulakanam, V.
2015-09-01
Statistical process control (SPC) theory takes a negative view of adjustment of process settings, which is termed tampering. In contrast, quality and lean programmes actively encourage acts of intervention and personal agency by operators in the improvement of production outcomes. This creates a conflict that requires operator judgement: How does one differentiate between unnecessary tampering and needful intervention? A further difficulty is that operators apply tacit knowledge to such judgements. There is a need to determine where in a given production process the operators are applying tacit knowledge, and whether this is hindering or aiding quality outcomes. The work involved the conjoint application of systems engineering, statistics, and knowledge management principles, in the context of a case study. Systems engineering was used to create a functional model of a real plant. Actual plant data were analysed with the statistical methods of ANOVA, feature selection, and link analysis. This identified the variables to which the output quality was most sensitive. These key variables were mapped back to the functional model. Fieldwork was then directed to those areas to prospect for operator judgement activities. A natural conversational approach was used to determine where and how operators were applying judgement, in contrast to the interrogative approach of conventional knowledge management. Data are presented for a case study of a meat rendering plant. The results identify specific areas where operators' tacit knowledge and mental models contribute to quality outcomes, and untangle the motivations behind their agency. Also evident is how novice and expert operators apply their knowledge differently. Novices were focussed on meeting throughput objectives, and their incomplete understanding of the plant characteristics led them to inadvertently sacrifice quality in the pursuit of productivity in certain situations. Operators' responses to the plant are affected by their individual mental models of the plant, which differ between operators and have variable validity. Their behaviour is also affected by differing interpretations of how their personal agency should be applied to the achievement of production objectives. The methodology developed here is an integration of systems engineering, statistical analysis, and knowledge management. It shows how to determine where in a given production process operator intervention is occurring, how it affects quality outcomes, and what tacit knowledge operators are using. It thereby assists continuous quality improvement processes in a different way to SPC. A second contribution is the provision of a novel methodology for knowledge management, one that circumvents the usual codification barriers to knowledge management.
Planning bioinformatics workflows using an expert system.
Chen, Xiaoling; Chang, Jeffrey T
2017-04-15
Bioinformatic analyses are becoming formidably more complex due to the increasing number of steps required to process the data, as well as the proliferation of methods that can be used in each step. To alleviate this difficulty, pipelines are commonly employed. However, pipelines are typically implemented to automate a specific analysis, and thus are difficult to use for exploratory analyses requiring systematic changes to the software or parameters used. To automate the development of pipelines, we have investigated expert systems. We created the Bioinformatics ExperT SYstem (BETSY), which includes a knowledge base where the capabilities of bioinformatics software are explicitly and formally encoded. BETSY is a backwards-chaining rule-based expert system comprising a data model that can capture the richness of biological data and an inference engine that reasons on the knowledge base to produce workflows. Currently, the knowledge base is populated with rules to analyze microarray and next generation sequencing data. We evaluated BETSY and found that it could generate workflows that reproduce and go beyond previously published bioinformatics results. Finally, a meta-investigation of the workflows generated from the knowledge base produced a quantitative measure of the technical burden imposed by each step of bioinformatics analyses, revealing the large number of steps devoted to the pre-processing of data. In sum, an expert system approach can facilitate exploratory bioinformatic analysis by automating the development of workflows, a task that requires significant domain expertise. Availability: https://github.com/jefftc/changlab. Contact: jeffrey.t.chang@uth.tmc.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
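The following minimal backward-chaining sketch illustrates the general technique BETSY is built on: rules declare what each tool produces and requires, and the planner chains backwards from a requested data type to an ordered workflow. The tool and data-type names are invented and do not reflect BETSY's actual knowledge base.

```python
# Backward-chaining planner over declarative tool rules (illustrative).
RULES = [
    # (produced data type, required data types, tool that produces it)
    ("alignments", ["fastq_reads", "genome_index"], "aligner"),
    ("genome_index", ["reference_fasta"], "indexer"),
    ("expression_matrix", ["alignments"], "quantifier"),
]

def plan(goal, available, steps=None):
    """Return an ordered list of tool invocations that yields `goal`."""
    steps = [] if steps is None else steps
    if goal in available:
        return steps
    for produced, required, tool in RULES:
        if produced == goal:
            for req in required:         # recursively satisfy prerequisites
                if plan(req, available, steps) is None:
                    break
            else:
                steps.append(tool)
                available.add(produced)
                return steps
    return None                          # goal cannot be derived

print(plan("expression_matrix", {"fastq_reads", "reference_fasta"}))
# -> ['indexer', 'aligner', 'quantifier']
```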
Gawande, Nitin A; Reinhart, Debra R; Yeh, Gour-Tsyh
2010-02-01
Biodegradation process modeling of municipal solid waste (MSW) bioreactor landfills requires knowledge of the various process reactions and the corresponding kinetic parameters. Mechanistic models available to date are able to simulate biodegradation processes with the help of pre-defined species and reactions. Some of these models consider the effect of critical parameters such as moisture content, pH, and temperature. Biomass concentration is a vital parameter for any biomass growth model, yet it is often not compared with field and laboratory results. A more complex biodegradation model includes a large number of chemical and microbiological species, and increasing the number of species and user-defined process reactions in the simulation requires a robust numerical tool. A generalized microbiological and chemical model, BIOKEMOD-3P, was developed to simulate biodegradation processes in three phases (Gawande et al. 2009). This paper presents the application of this model to simulate laboratory-scale MSW bioreactors under anaerobic conditions. BIOKEMOD-3P was able to closely simulate the experimental data. The results from this study may help in applying the model to full-scale landfill operation.
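As an illustration of the kind of kinetics such biodegradation models typically build on (an assumption on our part; BIOKEMOD-3P's actual reaction set is user-defined), here is a Monod biomass-growth sketch with substrate consumption, integrated by forward Euler with placeholder parameters.

```python
# Monod growth with substrate consumption; parameters are illustrative.
mu_max, Ks, Y, kd = 0.3, 50.0, 0.4, 0.02   # 1/d, mg/L, g biomass/g substrate, 1/d

def step(X, S, dt=0.1):
    mu = mu_max * S / (Ks + S)               # Monod specific growth rate
    X_new = X + (mu - kd) * X * dt           # growth minus endogenous decay
    S_new = max(0.0, S - (mu / Y) * X * dt)  # substrate consumed by growth
    return X_new, S_new

X, S = 10.0, 500.0                           # initial biomass, substrate (mg/L)
for _ in range(600):                         # simulate 60 days
    X, S = step(X, S)
print(round(X, 1), round(S, 1))              # final biomass and substrate
```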
Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P
2011-05-19
There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, current use is limited, and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them, or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and the four input resources required (time, money, knowledge and data). The characterisation is presented in matrix form to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills, let alone money and time, are scarce. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses which method is most appropriate to which specific health services management problem, what the user might expect to obtain from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on method comparison and selection.
From consensus to action: knowledge transfer, education and influencing policy on sports concussion.
Provvidenza, Christine; Engebretsen, Lars; Tator, Charles; Kissick, Jamie; McCrory, Paul; Sills, Allen; Johnston, Karen M
2013-04-01
To: (1) provide a review of knowledge transfer (KT) and related concepts; (2) look at the impact of traditional and emerging KT strategies on concussion knowledge and education; (3) discuss the value and impact of KT for organisations and concussion-related decision-making and (4) make recommendations for the future of concussion education. Qualitative literature review of the KT and concussion education literature. PubMed, Medline and Sport Discus databases were reviewed and an internet search was conducted. The literature search was restricted to articles published in the English language, but not restricted to any particular years. Altogether, 67 journal articles, 21 websites, 1 book and 1 report were reviewed. The value of KT as part of concussion education is increasingly being recognised. Target audiences benefit from specific learning strategies. Concussion tools exist, but their effectiveness and impact require further evaluation. The media are valuable in drawing attention to concussion, but efforts need to ensure that the public receives the right information. Social media as a concussion education tool is becoming more prominent. Implementation of KT models is one approach which organisations can use to assess knowledge gaps; identify, develop and evaluate education strategies; and use the outcomes to facilitate decision-making. Implementing KT strategies requires a defined plan. Identifying the needs, learning styles and preferred learning strategies of target audiences, coupled with evaluation, should be a piece of the overall concussion education puzzle in order to have an impact on enhancing knowledge and awareness.
Livingston, Patricia; Evans, Faye; Nsereko, Etienne; Nyirigira, Gaston; Ruhato, Paulin; Sargeant, Joan; Chipp, Megan; Enright, Angela
2014-11-01
High rates of maternal mortality remain a widespread problem in the developing world. Skilled anesthesia providers are required for the safe conduct of Cesarean delivery and resuscitation during obstetrical crises. Few anesthesia providers in low-resource settings have access to continuing education. In Rwanda, anesthesia technicians with only three years of post-secondary training must manage complex maternal emergencies in geographically isolated areas. The purpose of this special article is to describe the implementation of the SAFE (Safer Anesthesia From Education) Obstetric Anesthesia course in Rwanda, a three-day refresher course designed to improve obstetrical anesthesia knowledge and skills for practitioners in low-resource areas. In addition, we describe how the course facilitated the knowledge-to-action (KTA) cycle, whereby a series of steps is followed to promote the uptake of new knowledge into clinical practice. The KTA cycle requires locally relevant teaching interventions and reinforcement of knowledge after the intervention. In Rwanda, this meant carefully considering educational needs, revising curricula to suit the local context, employing active experiential learning during the SAFE Obstetric Anesthesia course, encouraging supportive relationships with peers and mentors, and using participant action plans for change, post-course logbooks, and follow-up interviews with participants six months after the course. During those interviews, participants reported improvements in clinical practice and greater confidence in coordinating team activities. Anesthesia safety remains challenged by resource limitations and by resistance to change among health care providers who did not attend the course. Future teaching interventions will address the need for team training.
14 CFR 65.35 - Knowledge requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Knowledge requirements. 65.35 Section 65.35 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN CERTIFICATION: AIRMEN OTHER THAN FLIGHT CREWMEMBERS Air Traffic Control Tower Operators § 65.35 Knowledge...
14 CFR 65.35 - Knowledge requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 2 2011-01-01 2011-01-01 false Knowledge requirements. 65.35 Section 65.35 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF TRANSPORTATION (CONTINUED) AIRMEN CERTIFICATION: AIRMEN OTHER THAN FLIGHT CREWMEMBERS Air Traffic Control Tower Operators § 65.35 Knowledge...
EMPIRE: A Reaction Model Code for Nuclear Astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palumbo, A., E-mail: apalumbo@bnl.gov; Herman, M.; Capote, R.
The correct modeling of abundances requires knowledge of nuclear cross sections for a variety of neutron-, charged particle- and γ-induced reactions. These involve targets far from stability and are therefore difficult (or currently impossible) to measure. Nuclear reaction theory provides the only way to estimate values of such cross sections. In this paper we present the application of the EMPIRE reaction code to nuclear astrophysics. Recent measurements are compared to the calculated cross sections, showing consistent agreement for n-, p- and α-induced reactions of astrophysical relevance.
Determination of debris albedo from visible and infrared brightnesses
NASA Astrophysics Data System (ADS)
Lambert, John V.; Osteen, Thomas J.; Kraszewski, Butch
1993-09-01
The Air Force Phillips Laboratory is conducting measurements to characterize the orbital debris environment using wide-field optical systems located at the Air Force's Maui, Hawaii, Space Surveillance Site. Conversion of the observed visible brightnesses of detected debris objects to physical sizes requires knowledge of the albedo (reflectivity). A thermal model for small debris objects has been developed and is used to calculate albedos from simultaneous visible and thermal infrared observations of catalogued debris objects. The model and initial results are discussed.
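A hedged sketch of the inversion idea: if the visible signal scales with albedo times cross-sectional area while the thermal emission scales with the absorbed fraction times area, two simultaneous measurements determine both unknowns. The linear forms and calibration constants below are illustrative simplifications, not the Phillips Laboratory thermal model.

```python
# Toy two-band inversion: solve for albedo and area from visible + IR fluxes.
def albedo_and_area(f_vis, f_ir, c_vis=1.0, c_ir=1.0):
    """Solve f_vis = c_vis*rho*A and f_ir = c_ir*(1 - rho)*A for rho and A."""
    vis_term = f_vis / c_vis               # proportional to rho * A
    ir_term = f_ir / c_ir                  # proportional to (1 - rho) * A
    area = vis_term + ir_term              # the rho terms cancel in the sum
    albedo = vis_term / area
    return albedo, area

rho, A = albedo_and_area(f_vis=0.2, f_ir=0.8)  # placeholder calibrated fluxes
print(round(rho, 2), round(A, 2))              # 0.2, 1.0
```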
MODELING MICROBUBBLE DYNAMICS IN BIOMEDICAL APPLICATIONS*
CHAHINE, Georges L.; HSIAO, Chao-Tsung
2012-01-01
Controlling microbubble dynamics to produce desirable biomedical outcomes when and where necessary, and to avoid deleterious effects, requires advanced knowledge, which can be achieved only through a combination of experimental and numerical/analytical techniques. The present communication presents a multi-physics approach to studying these dynamics that combines viscous-inviscid effects, liquid and structure dynamics, and multi-bubble interaction. While complex numerical tools are developed and used, the study aims at identifying the key parameters influencing the dynamics, which need to be included in simpler models. PMID:22833696
Online information search behaviour of physicians.
Mikalef, Patrick; Kourouthanassis, Panos E; Pateli, Adamantia G
2017-03-01
Although doctors increasingly engage in online information seeking to complement their medical practice, little is known regarding which online information sources are used and how effective they are. Grounded in self-determination and needs theory, this study posits that doctors tend to use online information sources to fulfil their information requirements in three pre-defined areas: patient care, knowledge development and research activities. Fulfilling these information needs is argued to improve doctors' perceived medical practice competence. Performing PLS-SEM analysis on primary survey data from 303 medical doctors practicing in four major Greek hospitals, a conceptual model is empirically tested. Using authoritative online information sources was found to fulfil all types of information needs. By contrast, using non-authoritative information sources had no significant effect. Satisfying information requirements relating to patient care and research activities enhanced doctors' perceptions of their medical practice competence. In contrast, meeting knowledge development information needs had the opposite result. Consistent with past studies, the outcomes indicate that doctors tend to use non-authoritative online information sources; yet their use was found to have no significant value in fulfilling their information requirements. Authoritative online information sources improve perceived medical practice competence by satisfying doctors' diverse information requirements. © 2017 Health Libraries Group.
Warnke, Ingeborg; Gamma, Alex; Buadze, Anna; Schleifer, Roman; Canela, Carlos; Rüsch, Nicolas; Rössler, Wulf; Strebel, Bernd; Tényi, Tamás; Liebrenz, Michael
While forensic psychiatry is of increasing importance in mental health care, the limited available evidence shows that attitudes toward the discipline are contradictory and that knowledge about it seems to be limited among medical students. We aimed to shed light on this subject by analyzing medical students' central attitudes toward forensic psychiatry and their association with knowledge about the discipline, as well as with socio-demographic and education-specific predictor variables. We recruited N = 1345 medical students from 45 universities with a German-language curriculum across four European countries (Germany, Switzerland, Austria and Hungary) using an innovative approach, namely snowball sampling via Facebook. Students completed an online questionnaire, and data were analyzed descriptively and multivariably by linear mixed effects models and multinomial regression. The results showed overall neutral to positive attitudes toward forensic psychiatry, with indifferent attitudes toward the treatment of sex offenders and toward forensic psychiatrists' expertise in the media. Whereas medical students knew the term 'forensic psychiatry', they showed a lack of specific medico-legal knowledge. Multivariable models of predictor variables revealed statistically significant findings, albeit with small estimates and little explained variance. Therefore, further research is required, along with the development of a refined assessment instrument for medical students to explore both attitudes and knowledge in forensic psychiatry. Copyright © 2018 Elsevier Ltd. All rights reserved.
Modelling the social and structural determinants of tuberculosis: opportunities and challenges
Boccia, D.; Dodd, P. J.; Lönnroth, K.; Dowdy, D. W.; Siroka, A.; Kimerling, M. E.; White, R. G.; Houben, R. M. G. J.
2017-01-01
INTRODUCTION: Despite the close link between tuberculosis (TB) and poverty, most mathematical models of TB have not addressed underlying social and structural determinants. OBJECTIVE: To review studies employing mathematical modelling to evaluate the epidemiological impact of the structural determinants of TB. METHODS: We systematically searched PubMed and personal libraries to identify eligible articles. We extracted data on the modelling techniques employed, research question, types of structural determinants modelled and setting. RESULTS: From 232 records identified, we included eight articles published between 2008 and 2015; six employed population-based dynamic TB transmission models and two non-dynamic analytic models. Seven studies focused on proximal TB determinants (four on nutritional status, one on wealth, one on indoor air pollution, and one examined overcrowding, socioeconomic and nutritional status), and one focused on macro-economic influences. CONCLUSIONS: Few modelling studies have attempted to evaluate structural determinants of TB, resulting in key knowledge gaps. Despite the challenges of modelling such a complex system, models must broaden their scope to remain useful for policy making. Given the intersectoral nature of the interrelations between structural determinants and TB outcomes, this work will require multidisciplinary collaborations. A useful starting point would be to focus on developing relatively simple models that can strengthen our knowledge regarding the potential effect of the structural determinants on TB outcomes. PMID:28826444
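As a pointer to what such modelling involves, the sketch below shows one illustrative way a structural determinant could enter a dynamic transmission model: a relative risk for undernutrition scales the latent-to-active progression rate. The compartments, rates, and the determinant's point of action are placeholder assumptions, not taken from the reviewed studies.

```python
# Minimal compartmental sketch: susceptible -> latent -> infectious, with
# cure returning infectious individuals to susceptible. The undernutrition
# relative risk multiplies the progression rate. All values are illustrative.
def run(undernutrition_rr=1.0, years=100, dt=0.01):
    S, L, I = 0.99, 0.0, 0.01            # population fractions
    beta, recover = 8.0, 1.0             # transmission and cure rates (per year)
    prog = 0.01 * undernutrition_rr      # latent-to-active progression (per year)
    for _ in range(int(years / dt)):
        foi = beta * I                   # force of infection
        dS = -foi * S + recover * I
        dL = foi * S - prog * L
        dI = prog * L - recover * I
        S, L, I = S + dS * dt, L + dL * dt, I + dI * dt
    return I                             # prevalence after the simulated horizon

print(run(undernutrition_rr=1.0), run(undernutrition_rr=2.0))
# the doubled progression risk produces markedly higher TB prevalence
```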
Knowledge-based requirements analysis for automating software development
NASA Technical Reports Server (NTRS)
Markosian, Lawrence Z.
1988-01-01
We present a new software development paradigm that automates the derivation of implementations from requirements. In this paradigm, informally stated requirements are expressed in a domain-specific requirements specification language. This language is machine-understandable, and requirements expressed in it are captured in a knowledge base. Once the requirements are captured, more detailed specifications and eventually implementations are derived by the system using transformational synthesis. A key characteristic of the process is that the required human intervention is in the form of providing problem- and domain-specific engineering knowledge, not in writing detailed implementations. We describe a prototype system that applies the paradigm in the realm of communication engineering: the prototype automatically generates implementations of buffers following analysis of the requirements on each buffer.
Consequences of Switching to Blended Learning: The Grenoble Medical School Key Elements.
Houssein, Sahal; Di Marco, Lionel; Schwebel, Carole; Luengo, Vanda; Morand, Patrice; Gillois, Pierre
2018-01-01
In 2006, the Grenoble-Alpes University Medical School decided to switch the learning paradigm of the first year to a blended learning model based on a flipped classroom with a continuous dual assessment system providing personal follow-up. We report a descriptive analysis of the two pedagogical models. The innovative blended learning model is divided into 5 week-sequences of learning, starting with a series of knowledge capsules, followed by Interactive On Line Questions, Interactive On Site Training, and an Explanation Meeting. The fourth and final steps are the dual assessment system that prepares for the final contest and the personal weekly follow-up. The data were extracted from the information systems over 17 years, during which the same learning model was applied. With the same student workload, the hourly knowledge/skills ratio decreased to approximately 50% with the blended learning model. The teachers' workload increased significantly in the first year (+70%), and then decreased each year (reaching -20%). Furthermore, the type of education has also changed for the teacher, from an initial hourly knowledge/skill ratio of 3, to a ratio of 1/3 with the new model after a few years. The institution also needed to resize the classroom from a large amphitheatre to small interactive learning spaces. A significant initial effort is required to establish this model, both for the teachers and for the institution, which have different needs and costs. However, the satisfaction rates and the demand for extension to the other curriculums from medical and paramedical learners indicate that this model provides the enhanced learning paradigm of the future.
Comments on the MIT Assessment of the Mars One Plan
NASA Technical Reports Server (NTRS)
Jones, Harry W.
2015-01-01
The MIT assessment of the Mars One mission plan reveals design assumptions that would cause significant difficulties. Growing crops in the crew chamber produces excessive oxygen levels. The assumed in-situ resource utilization (ISRU) equipment has too low a Technology Readiness Level (TRL). The required spare parts cause a large and increasing launch mass logistics burden. The assumed International Space Station (ISS) Environmental Control and Life Support (ECLS) technologies were developed for microgravity and therefore are not suitable for Mars gravity. Growing food requires more mass than sending food from Earth. The large number of spares is due to the relatively low reliability of ECLS and the low TRL of ISRU. The Mars One habitat design is similar to past concepts but does not incorporate current knowledge. The MIT architecture analysis tool for long-term settlements on the Martian surface includes an ECLS system simulation, an ISRU sizing model, and an analysis of required spares. The MIT tool showed the need for separate crop and crew chambers, the large spare parts logistics, that crops require more mass than Earth food, and that more spares are needed if reliability is lower. That ISRU has low TRL and ISS ECLS was designed for microgravity are well known. Interestingly, the results produced by the architecture analysis tool - separate crop chamber, large spares mass, large crop chamber mass, and low reliability requiring more spares - were also well known. A common approach to ECLS architecture analysis is to build a complex model that is intended to be all-inclusive and is hoped will help solve all design problems. Such models can struggle to replicate obvious and well-known results and are often unable to answer unanticipated new questions. A better approach would be to survey the literature for background knowledge and then directly analyze the important problems.
2018-01-01
We review key mathematical models of the South African human immunodeficiency virus (HIV) epidemic from the early 1990s onwards. In our descriptions, we sometimes differentiate between the concepts of a model world and its mathematical or computational implementation. The model world is the conceptual realm in which we explicitly declare the rules – usually some simplification of ‘real world’ processes as we understand them. Computing details of informative scenarios in these model worlds is a task requiring specialist knowledge, but all other aspects of the modelling process, from describing the model world to identifying the scenarios and interpreting model outputs, should be understandable to anyone with an interest in the epidemic. PMID:29568647
Education and health knowledge: evidence from UK compulsory schooling reform.
Johnston, David W; Lordan, Grace; Shields, Michael A; Suziedelyte, Agne
2015-02-01
We investigate if there is a causal link between education and health knowledge using data from the 1984/85 and 1991/92 waves of the UK Health and Lifestyle Survey (HALS). Uniquely, the survey asks respondents what they think are the main causes of ten common health conditions, and we compare these answers to those given by medical professionals to form an index of health knowledge. For causal identification we use increases in the UK minimum school leaving age in 1947 (from 14 to 15) and 1972 (from 15 to 16) to provide exogenous variation in education. These reforms predominantly induced adolescents who would have left school to stay for one additionally mandated year. OLS estimates suggest that education significantly increases health knowledge, with a one-year increase in schooling increasing the health knowledge index by 15% of a standard deviation. In contrast, estimates from instrumental-variable models show that increased schooling due to the education reforms did not significantly affect health knowledge. This main result is robust to numerous specification tests and alternative formulations of the health knowledge index. Further research is required to determine whether there is also no causal link between higher levels of education - such as post-school qualifications - and health knowledge. Copyright © 2014 Elsevier Ltd. All rights reserved.
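The identification strategy can be illustrated with synthetic data: a reform dummy instruments years of schooling in a two-stage least squares estimator, and, mirroring the paper's finding, naive OLS shows an association while the instrumented estimate is near zero. Everything below is simulated; the HALS analysis is considerably richer.

```python
# Two-stage least squares with a schooling-reform instrument (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
n = 5000
reform = rng.integers(0, 2, n)                  # exposed to a raised leaving age
ability = rng.normal(size=n)                    # unobserved confounder
schooling = 10 + reform + 0.5 * ability + rng.normal(size=n)
knowledge = 0.0 * schooling + 0.8 * ability + rng.normal(size=n)  # no true effect

def two_sls(y, x, z):
    Z = np.column_stack([np.ones(n), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]    # first stage
    X = np.column_stack([np.ones(n), x_hat])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]      # second-stage slope

ols = np.linalg.lstsq(np.column_stack([np.ones(n), schooling]), knowledge,
                      rcond=None)[0][1]
print(round(ols, 2), round(two_sls(knowledge, schooling, reform), 2))
# OLS is biased upward by unobserved ability; 2SLS recovers roughly zero
```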
Simulation-based medical education: time for a pedagogical shift.
Kalaniti, Kaarthigeyan; Campbell, Douglas M
2015-01-01
The purpose of medical education at all levels is to prepare physicians with the knowledge and comprehensive skills, required to deliver safe and effective patient care. The traditional 'apprentice' learning model in medical education is undergoing a pedagogical shift to a 'simulation-based' learning model. Experiential learning, deliberate practice and the ability to provide immediate feedback are the primary advantages of simulation-based medical education. It is an effective way to develop new skills, identify knowledge gaps, reduce medical errors, and maintain infrequently used clinical skills even among experienced clinical teams, with the overall goal of improving patient care. Although simulation cannot replace clinical exposure as a form of experiential learning, it promotes learning without compromising patient safety. This new paradigm shift is revolutionizing medical education in the Western world. It is time that the developing countries embrace this new pedagogical shift.
Fungal model systems and the elucidation of pathogenicity determinants
Perez-Nadales, Elena; Almeida Nogueira, Maria Filomena; Baldin, Clara; Castanheira, Sónia; El Ghalid, Mennat; Grund, Elisabeth; Lengeler, Klaus; Marchegiani, Elisabetta; Mehrotra, Pankaj Vinod; Moretti, Marino; Naik, Vikram; Oses-Ruiz, Miriam; Oskarsson, Therese; Schäfer, Katja; Wasserstrom, Lisa; Brakhage, Axel A.; Gow, Neil A.R.; Kahmann, Regine; Lebrun, Marc-Henri; Perez-Martin, José; Di Pietro, Antonio; Talbot, Nicholas J.; Toquin, Valerie; Walther, Andrea; Wendland, Jürgen
2014-01-01
Fungi have the capacity to cause devastating diseases of both plants and animals, causing significant harvest losses that threaten food security and human mycoses with high mortality rates. As a consequence, there is a critical need to promote development of new antifungal drugs, which requires a comprehensive molecular knowledge of fungal pathogenesis. In this review, we critically evaluate current knowledge of seven fungal organisms used as major research models for fungal pathogenesis. These include pathogens of both animals and plants; Ashbya gossypii, Aspergillus fumigatus, Candida albicans, Fusarium oxysporum, Magnaporthe oryzae, Ustilago maydis and Zymoseptoria tritici. We present key insights into the virulence mechanisms deployed by each species and a comparative overview of key insights obtained from genomic analysis. We then consider current trends and future challenges associated with the study of fungal pathogenicity. PMID:25011008
[Impact of a training model for the Child Development Evaluation Test in primary care].
Rizzoli-Córdoba, Antonio; Delgado-Ginebra, Ismael; Cruz-Ortiz, Leopoldo Alfonso; Baqueiro-Hernández, César Iván; Martain-Pérez, Itzamara Jacqueline; Palma-Tavera, Josuha Alexander; Villasís-Keever, Miguel Ángel; Reyes-Morales, Hortensia; O'Shea-Cuevas, Gabriel; Aceves-Villagrán, Daniel; Carrasco-Mendoza, Joaquín; Antillón-Ocampo, Fátima Adriana; Villagrán-Muñoz, Víctor Manuel; Halley-Castillo, Elizabeth; Vargas-López, Guillermo; Muñoz-Hernández, Onofre
The Child Development Evaluation (CDE) Test is a screening tool designed and validated in Mexico for the early detection of child developmental problems. Professionals who will administer the test in primary care facilities must first acquire knowledge about the test in order to generate reliable results. The aim of this work was to evaluate the impact of a training model for primary care workers from different professions through the comparison of knowledge acquired during the training course. The study design was a before/after type, with participation in a training course for the CDE Test as the intervention. The course took place in six different Mexican states from October to December 2013. The same questions were used before and after. There were 394 participants included. Distribution according to professional profile was as follows: general physicians 73.4%, nursing 7.7%, psychology 7.1%, nutrition 6.1%, and other professions 5.6%. The questions with the lowest correct-answer rates were associated with the scoring of the CDE Test. In the initial evaluation, 64.9% obtained a grade lower than 20, compared with 1.8% in the final evaluation. In the initial evaluation only 1.8% passed, compared with 75.15% in the final evaluation. The proposed model allows the participants to acquire general knowledge about the CDE Test. To improve the general results in future training courses, the scoring and interpretation of the test should be reinforced during training, together with prior reading of the material by the participants. Copyright © 2015 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved.
Coherent anti-Stokes Raman spectroscopic modeling for combustion diagnostics
NASA Technical Reports Server (NTRS)
Hall, R. J.
1983-01-01
The status of modelling the coherent anti-Stokes Raman spectroscopy (CARS) spectra of molecules important in combustion, such as N2, H2O, and CO2, is reviewed. It is shown that accurate modelling generally requires highly precise knowledge of line positions and reasonable estimates of Raman linewidths, and the sources of these data are discussed. CARS technique and theory are reviewed, and the status of modelling the phenomenon of collisional narrowing at pressures well above atmospheric for N2, H2O, and CO2 is described. It is shown that good agreement with experiment can be achieved using either the Gordon rotational diffusion model or phenomenological models for inelastic energy transfer rates.
Visualization of the variability of 3D statistical shape models by animation.
Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter
2004-01-01
Models of the 3D shape of anatomical objects, and knowledge about their statistical variability, are of great benefit in many computer-assisted medical applications such as image analysis, therapy, or surgery planning. Statistical shape models have successfully been applied to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points on two shapes. This remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate the variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.
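Once point correspondences are available, building such a model is essentially principal component analysis over stacked landmark coordinates, and its variability can be "animated" by sweeping mode weights. The sketch below uses synthetic shapes; it is a generic illustration of the technique, not the authors' pipeline.

```python
# PCA-based statistical shape model over corresponded 3D landmarks.
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 20, 100
base = rng.normal(size=(n_points, 3))           # mean-like template shape
mode = rng.normal(size=(n_points, 3))           # one synthetic deformation mode
shapes = np.stack([(base + rng.normal() * mode).ravel()
                   for _ in range(n_shapes)])   # training set, rows = shapes

mean = shapes.mean(axis=0)
U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
eigvecs = Vt                                    # principal modes of variation
sd = s / np.sqrt(n_shapes - 1)                  # per-mode standard deviations

def instance(weights):
    """Generate a shape at the given mode weights (in standard deviations)."""
    w = np.asarray(weights, dtype=float)
    return (mean + (w * sd[:len(w)]) @ eigvecs[:len(w)]).reshape(n_points, 3)

# 'animation': sweep the first mode from -2 SD to +2 SD
for w in (-2, 0, 2):
    print(w, instance([w])[0])                  # first landmark moves along the mode
```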
Standard model of knowledge representation
NASA Astrophysics Data System (ADS)
Yin, Wensheng
2016-09-01
Knowledge representation is the core of artificial intelligence research. Knowledge representation methods include predicate logic, semantic networks, computer programming languages, databases, mathematical models, graphics languages, natural language, etc. To establish the intrinsic link between the various knowledge representation methods, a unified knowledge representation model is necessary. Drawing on ontology, system theory, and control theory, a standard model of knowledge representation that reflects change in the objective world is proposed. The model is composed of input, processing, and output. This knowledge representation method does not contradict traditional knowledge representation methods: it can express knowledge in multivariate and multidimensional terms, it can express process knowledge, and at the same time it has a strong problem-solving ability. In addition, the standard model of knowledge representation provides a way to handle imprecise and inconsistent knowledge.
MO-DE-BRA-05: Developing Effective Medical Physics Knowledge Structures: Models and Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprawls, P
Purpose: Develop a method and supporting online resources to be used by medical physics educators for teaching medical imaging professionals and trainees so they develop highly effective physics knowledge structures that can contribute to improved diagnostic image quality on a global basis. Methods: The different types of mental knowledge structures were analyzed and modeled with respect to both the learning and teaching process for their development and the functions or tasks that can be performed with the knowledge. While symbolic verbal and mathematical knowledge structures are very important in medical physics for many purposes, the task of applying physics in clinical imaging--especially to optimize image quality and diagnostic accuracy--requires a sensory conceptual knowledge structure, specifically, an interconnected network of visually based concepts. This type of knowledge supports tasks such as analysis, evaluation, problem solving, interacting, and creating solutions. Traditional educational methods, including lectures, online modules, and many texts, are serial procedures and limited with respect to developing interconnected conceptual networks. A method consisting of the synergistic combination of on-site medical physics teachers and the online resource CONET (Concept network developer) has been developed and made available for the topic Radiographic Image Quality. This was selected as the inaugural topic, with others to follow, because it can be used by medical physicists teaching the large population of medical imaging professionals, such as radiology residents, who can apply the knowledge. Results: Tutorials for medical physics educators on developing effective knowledge structures are being presented and published, and CONET is available with open access for all to use. Conclusion: An adjunct to traditional medical physics educational methods with an added focus on sensory concept development provides opportunities for medical physics teachers to share their knowledge and experience at a higher cognitive level and produce medical professionals with an enhanced ability to apply physics to clinical procedures.
NASA Technical Reports Server (NTRS)
Lewis, Clayton; Wilde, Nick
1989-01-01
Space construction will require heavy investment in the development of a wide variety of user interfaces for the computer-based tools that will be involved at every stage of construction operations. Using today's technology, user interface development is very expensive for two reasons: (1) specialized and scarce programming skills are required to implement the necessary graphical representations and complex control regimes for high-quality interfaces; (2) iteration on prototypes is required to meet user and task requirements, since these are difficult to anticipate with current (and foreseeable) design knowledge. We are attacking this problem by building a user interface development tool based on extensions to the spreadsheet model of computation. The tool provides high-level support for graphical user interfaces and permits dynamic modification of interfaces, without requiring conventional programming concepts and skills.
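A toy sketch of the spreadsheet model of computation the tool extends: cells hold constants or formulas over other cells and are re-evaluated when inputs change, which is what lets interface behaviour be modified dynamically without conventional programming. The naive full-recalculation engine and the widget names are illustrative assumptions, not the prototype's design.

```python
# Minimal reactive "spreadsheet" binding a UI label to a slider cell.
class Sheet:
    def __init__(self):
        self.formulas, self.values = {}, {}

    def set(self, name, formula):
        """formula: a constant, or a callable taking the sheet."""
        self.formulas[name] = formula
        self.recalc()

    def get(self, name):
        return self.values[name]

    def recalc(self):
        # naive fixed-point recalculation; real engines track dependencies
        for _ in range(len(self.formulas)):
            for name, f in self.formulas.items():
                self.values[name] = f(self) if callable(f) else f

ui = Sheet()
ui.set("slider", 30)                              # an input widget's state
ui.set("label", lambda s: f"{s.get('slider')}%")  # a display bound to it
ui.set("slider", 55)                              # the user drags the slider...
print(ui.get("label"))                            # ...and the label shows 55%
```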
ERIC Educational Resources Information Center
Beauregard, Caroline; Rousseau, Cécile; Mustafa, Sally
2015-01-01
Because they provide a form of modeling, videos have been recognised as useful for transferring knowledge about practices that require teachers to adopt a different role. This paper describes the results of a satisfaction survey with 98 teachers, school administrators and professionals regarding their appreciation of training videos showing teacher-led…
Knowledge, Skills, and Resources for Pharmacy Informatics Education
Fox, Brent I.; Flynn, Allen J.; Fortier, Christopher R.; Clauson, Kevin A.
2011-01-01
Pharmacy has an established history of technology use to support business processes. Pharmacy informatics education within doctor of pharmacy programs, however, is inconsistent, despite its inclusion as a requirement in the 2007 Accreditation Council for Pharmacy Education Standards and Guidelines. This manuscript describes pharmacy informatics knowledge and skills that all graduating pharmacy students should possess, conceptualized within the framework of the medication use process. Additionally, we suggest core source materials and specific learning activities to support pharmacy informatics education. We conclude with a brief discussion of emerging changes in the practice model. These changes are facilitated by pharmacy informatics and will inevitably become commonplace in our graduates’ practice environment. PMID:21829267
Cope, Vicki; Murray, Melanie
2017-06-21
Nurses are often asked to think about leadership, particularly in times of rapid change in healthcare and when questions have been raised about whether leaders and managers have adequate insight into the requirements of care. This article discusses several leadership styles relevant to contemporary healthcare and nursing practice. Nurses who are aware of leadership styles may find this knowledge useful in maintaining a cohesive working environment. Leadership knowledge and skills can be improved through training: rather than having to undertake formal leadership roles without adequate preparation, nurses are able to learn, nurture, model and develop effective leadership behaviours, ultimately improving nursing staff retention and enhancing the delivery of safe and effective care.