Questioning and Experimentation
ERIC Educational Resources Information Center
Mutanen, Arto
2014-01-01
The paper is a philosophical analysis of experimentation. The philosophical framework of the analysis is the interrogative model of inquiry developed by Hintikka. The basis of the model is an explicit and well-formed logic of questions and answers. The framework allows us to formulate a flexible logic of experimentation. In particular, the formulated…
Specification and Verification of Web Applications in Rewriting Logic
NASA Astrophysics Data System (ADS)
Alpuente, María; Ballis, Demis; Romero, Daniel
This paper presents a Rewriting Logic framework that formalizes the interactions between Web servers and Web browsers through a communicating protocol abstracting HTTP. The proposed framework includes a scripting language that is powerful enough to model the dynamics of complex Web applications by encompassing the main features of the most popular Web scripting languages (e.g. PHP, ASP, Java Servlets). We also provide a detailed characterization of browser actions (e.g. forward/backward navigation, page refresh, and new window/tab openings) via rewrite rules, and show how our models can be naturally model-checked by using the Linear Temporal Logic of Rewriting (LTLR), which is a Linear Temporal Logic specifically designed for model-checking rewrite theories. Our formalization is particularly suitable for verification purposes, since it allows one to perform in-depth analyses of many subtle aspects related to Web interaction. Finally, the framework has been completely implemented in Maude, and we report on some successful experiments that we conducted by using the Maude LTLR model-checker.
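The framework itself is implemented in Maude; as a language-neutral illustration of the core idea (a Web session as a transition system whose reachable states are exhaustively checked against a property), here is a minimal Python sketch. The page graph, the browser actions, and the safety property are all invented for this example and are not taken from the paper.

```python
from collections import deque

# States: (current page, history stack). Actions mirror the browser moves
# the paper formalizes: link navigation, backward navigation, page refresh.
LINKS = {"login": ["account"], "account": ["logout"], "logout": []}

def successors(state):
    page, hist = state
    for nxt in LINKS[page]:                      # follow a link
        yield (nxt, hist + (page,))
    if hist:                                     # backward navigation
        yield (hist[-1], hist[:-1])
    yield (page, hist)                           # page refresh (self-loop)

def check_invariant(init, prop):
    """BFS over reachable states; returns a violating state or None."""
    seen, queue = {init}, deque([init])
    while queue:
        s = queue.popleft()
        if not prop(s):
            return s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return None

# Safety property: the 'account' page is never reached after 'logout'
# appears in the history (a typical subtle Web-interaction bug).
bad = check_invariant(("login", ()),
                      lambda s: not (s[0] == "account" and "logout" in s[1]))
print("property holds" if bad is None else f"counterexample: {bad}")
```

A real LTLR check operates on rewrite theories and temporal formulas rather than a hand-coded invariant, but the exhaustive exploration of browser actions is the same in spirit.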
Wong, Sabrina T; Yin, Delu; Bhattacharyya, Onil; Wang, Bin; Liu, Liqun; Chen, Bowen
2010-11-18
China has had no effective and systematic information system to provide guidance for strengthening PHC (Primary Health Care) or account to citizens on progress. We report on the development of the China results-based Logic Model for Community Health Facilities and Stations (CHS) and a set of relevant PHC indicators intended to measure CHS priorities. We adapted the PHC Results Based Logic Model developed in Canada and current work conducted in the community health system in China to create the China CHS Logic Model framework. We used a staged approach by first constructing the framework and indicators and then validating their content through an interactive process involving policy analysis, critical review of relevant literature and multiple stakeholder consultation. The China CHS Logic Model includes inputs, activities, outputs and outcomes with a total of 287 detailed performance indicators. In these indicators, 31 indicators measure inputs, 64 measure activities, 105 measure outputs, and 87 measure immediate (n = 65), intermediate (n = 15), or final (n = 7) outcomes. A Logic Model framework can be useful in planning, implementation, analysis and evaluation of PHC at a system and service level. The development and content validation of the China CHS Logic Model and subsequent indicators provides a means for stronger accountability and a clearer sense of overall direction and purpose needed to renew and strengthen the PHC system in China. Moreover, this work will be useful in moving towards developing a PHC information system and performance measurement across districts in urban China, and guiding the pursuit of quality in PHC.
ERIC Educational Resources Information Center
Zantal-Wiener, Kathy; Horwood, Thomas J.
2010-01-01
The authors propose a comprehensive evaluation framework to prepare for evaluating school emergency management programs. This framework involves a logic model that incorporates Government Performance and Results Act (GPRA) measures as a foundation for comprehensive evaluation that complements performance monitoring used by the U.S. Department of…
Logical Modeling and Dynamical Analysis of Cellular Networks
Abou-Jaoudé, Wassim; Traynard, Pauline; Monteiro, Pedro T.; Saez-Rodriguez, Julio; Helikar, Tomáš; Thieffry, Denis; Chaouiya, Claudine
2016-01-01
The logical (or logic) formalism is increasingly used to model regulatory and signaling networks. Complementing these applications, several groups contributed various methods and tools to support the definition and analysis of logical models. After an introduction to the logical modeling framework and to several of its variants, we review here a number of recent methodological advances to ease the analysis of large and intricate networks. In particular, we survey approaches to determine model attractors and their reachability properties, to assess the dynamical impact of variations of external signals, and to consistently reduce large models. To illustrate these developments, we further consider several published logical models for two important biological processes, namely the differentiation of T helper cells and the control of mammalian cell cycle. PMID:27303434
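For readers unfamiliar with the formalism, the attractors of a Boolean network can be found by brute force on small models. The following Python sketch enumerates the full state space of an invented three-node network; it is not one of the published T-helper or cell-cycle models, and the tools surveyed above use far more scalable methods.

```python
from itertools import product

# Toy 3-node Boolean network with synchronous updates (illustrative only).
rules = {
    "A": lambda s: s["B"] and not s["C"],
    "B": lambda s: s["A"],
    "C": lambda s: not s["A"],
}

def step(state):
    named = {"A": state[0], "B": state[1], "C": state[2]}
    return tuple(rules[n](named) for n in ("A", "B", "C"))

def attractor_of(state):
    """Iterate until a state repeats; return the cycle as a frozenset."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return frozenset(seen[seen.index(state):])

# Exhaustive enumeration over the 2^3 state space: every initial state
# is mapped to its attractor (a fixed point or a limit cycle).
attractors = {attractor_of(s) for s in product([False, True], repeat=3)}
for a in attractors:
    print(sorted(a))
```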
Qualitative models and experimental investigation of chaotic NOR gates and set/reset flip-flops
NASA Astrophysics Data System (ADS)
Rahman, Aminur; Jordan, Ian; Blackmore, Denis
2018-01-01
It has been observed through experiments and SPICE simulations that logical circuits based upon Chua's circuit exhibit complex dynamical behaviour. This behaviour can be used to design analogues of more complex logic families and some properties can be exploited for electronics applications. Some of these circuits have been modelled as systems of ordinary differential equations. However, as the number of components in newer circuits increases so does the complexity. This renders continuous dynamical systems models impractical and necessitates new modelling techniques. In recent years, some discrete dynamical models have been developed using various simplifying assumptions. To create a robust modelling framework for chaotic logical circuits, we developed both deterministic and stochastic discrete dynamical models, which exploit the natural recurrence behaviour, for two chaotic NOR gates and a chaotic set/reset flip-flop. This work presents a complete applied mathematical investigation of logical circuits. Experiments on our own designs of the above circuits are modelled and the models are rigorously analysed and simulated showing surprisingly close qualitative agreement with the experiments. Furthermore, the models are designed to accommodate dynamics of similarly designed circuits. This will allow researchers to develop ever more complex chaotic logical circuits with a simple modelling framework.
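The paper's deterministic and stochastic recurrence models are specific to the circuits studied there; as a generic illustration of how a single chaotic-map iteration can implement a NOR gate, here is a sketch in the style of threshold-referenced chaos computing. The map, the input shift, and the readout threshold are assumptions chosen so the logistic map reproduces the NOR truth table; none of these values come from the paper.

```python
def logistic(x):
    """One iteration of the logistic map, a standard chaotic recurrence."""
    return 4.0 * x * (1.0 - x)

def chaotic_nor(a, b, x0=0.5, delta=0.25, xstar=0.75):
    """Inputs shift the initial state; the logic value is read off by
    comparing the iterated state to the reference level x*."""
    x = x0 + delta * (a + b)
    return int(logistic(x) > xstar)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"NOR({a},{b}) = {chaotic_nor(a, b)}")
```

With these parameters the map output exceeds x* only for the (0,0) input, yielding the NOR truth table; different thresholds select different gates, which is the reconfigurability such circuits exploit.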
Hayes, Holly; Parchman, Michael L.; Howard, Ray
2012-01-01
Evaluating effective growth and development of a Practice-Based Research Network (PBRN) can be challenging. The purpose of this article is to describe the development of a logic model and how the framework has been used for planning and evaluation in a primary care PBRN. An evaluation team was formed consisting of the PBRN directors, staff and its board members. After the mission and the target audience were determined, facilitated meetings and discussions were held with stakeholders to identify the assumptions, inputs, activities, outputs, outcomes and outcome indicators. The long-term outcomes outlined in the final logic model are two-fold: 1.) Improved health outcomes of patients served by PBRN community clinicians; and 2.) Community clinicians are recognized leaders of quality research projects. The Logic Model proved useful in identifying stakeholder interests and dissemination activities as an area that required more attention in the PBRN. The logic model approach is a useful planning tool and project management resource that increases the probability that the PBRN mission will be successfully implemented. PMID:21900441
Detection of epistatic effects with logic regression and a classical linear regression model.
Malina, Magdalena; Ickstadt, Katja; Schwender, Holger; Posch, Martin; Bogdan, Małgorzata
2014-02-01
To locate multiple interacting quantitative trait loci (QTL) influencing a trait of interest within experimental populations, methods such as Cockerham's model are usually applied. Within this framework, interactions are understood as the part of the joint effect of several genes that cannot be explained as the sum of their additive effects. However, if a change in the phenotype (such as disease) is caused by Boolean combinations of genotypes of several QTLs, Cockerham's approach is often not capable of identifying them properly. To detect such interactions more efficiently, we propose a logic regression framework. Even though a larger number of models has to be considered with the logic regression approach (requiring more stringent multiple testing correction), the efficient representation of higher-order logic interactions in logic regression models leads to a significant increase in power to detect such interactions compared with Cockerham's approach. The increase in power is demonstrated analytically for a simple two-way interaction model and illustrated in more complex settings with a simulation study and real data analysis.
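A minimal numerical illustration of the point, with simulated data: when the phenotype is driven by a Boolean AND of two loci, a single logic-regression-style predictor fits markedly better than purely additive terms. Sample size, effect size, and noise level are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400
# Binary genotype indicators for two loci (e.g. "carries the risk allele").
g1, g2 = rng.integers(0, 2, n), rng.integers(0, 2, n)
# Phenotype driven by a Boolean combination, not by additive effects.
y = 2.0 * np.logical_and(g1, g2) + rng.normal(0, 1, n)

def rss(X, y):
    """Residual sum of squares of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

additive = rss(np.column_stack([g1, g2]), y)             # additive terms only
logic    = rss(np.logical_and(g1, g2).astype(float), y)  # one Boolean term
print(f"RSS additive: {additive:.1f}   RSS logic AND: {logic:.1f}")
```

The single Boolean predictor captures the epistatic signal with one parameter, which is the parsimony argument the abstract makes for logic regression.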
Counter Unmanned Aerial System Decision-Aid Logic Process (C-UAS DALP)
…decision-aid or logic process that bridges the middle elements of the kill chain between detection and countermeasure response. This capstone project creates the logic for a decision process that transitions from the… …of use, location, general logic process, and reference mission. This is the framework for the IDEF0 functional architecture diagrams, decision-aid diagrams, logic process, and modeling and simulation…
A logical foundation for representation of clinical data.
Campbell, K E; Das, A K; Musen, M A
1994-01-01
OBJECTIVE: A general framework for representation of clinical data that provides a declarative semantics of terms and that allows developers to define explicitly the relationships among both terms and combinations of terms. DESIGN: Use of conceptual graphs as a standard representation of logic and of an existing standardized vocabulary, the Systematized Nomenclature of Medicine (SNOMED International), for lexical elements. Concepts such as time, anatomy, and uncertainty must be modeled explicitly in a way that allows relation of these foundational concepts to surface-level clinical descriptions in a uniform manner. RESULTS: The proposed framework was used to model a simple radiology report, which included temporal references. CONCLUSION: Formal logic provides a framework for formalizing the representation of medical concepts. Actual implementations will be required to evaluate the practicality of this approach. PMID:7719805
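A toy rendering of the idea, assuming nothing about the paper's actual conceptual-graph notation or real SNOMED codes: a clinical finding is represented as a small graph whose relations make anatomy and time explicit rather than burying them in a term string. All names below are illustrative.

```python
# "Opacity in the left lower lobe, prior to admission" as concept and
# relation assertions (identifiers invented, not actual SNOMED codes).
finding = {
    "concepts": {
        "f1": ("Finding", "opacity"),
        "a1": ("Anatomy", "left lower lobe"),
        "t1": ("TimeInterval", "before admission"),
    },
    "relations": [
        ("f1", "location", "a1"),   # anatomy modeled explicitly
        ("f1", "during",   "t1"),   # time modeled explicitly
    ],
}

def describe(graph):
    """Print each relation as a readable triple."""
    for subj, rel, obj in graph["relations"]:
        print(graph["concepts"][subj][1], f"--{rel}-->",
              graph["concepts"][obj][1])

describe(finding)
```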
Sherman, Paul David
2016-04-01
This article presents a framework to identify key mechanisms for developing a logic model blueprint that can be used for an impending comprehensive evaluation of an undergraduate degree program in a Canadian university. The evaluation is a requirement of a comprehensive quality assurance process mandated by the university. A modified RUFDATA (Saunders, 2000) evaluation model is applied as an initiating framework to assist in decision making to provide a guide for conceptualizing a logic model for the quality assurance process. This article will show how an educational evaluation is strengthened by employing a RUFDATA reflective process in exploring key elements of the evaluation process, and then translating this information into a logic model format that could serve to offer a more focussed pathway for the quality assurance activities. Using preliminary program evaluation data from two key stakeholders of the undergraduate program as well as an audit of the curriculum's course syllabi, a case is made for, (1) the importance of inclusivity of key stakeholders participation in the design of the evaluation process to enrich the authenticity and accuracy of program participants' feedback, and (2) the diversification of data collection methods to ensure that stakeholders' narrative feedback is given ample exposure. It is suggested that the modified RUFDATA/logic model framework be applied to all academic programs at the university undergoing the quality assurance process at the same time so that economies of scale may be realized. Copyright © 2015 Elsevier Ltd. All rights reserved.
Response-Time Tests of Logical-Rule Models of Categorization
ERIC Educational Resources Information Center
Little, Daniel R.; Nosofsky, Robert M.; Denton, Stephen E.
2011-01-01
A recent resurgence in logical-rule theories of categorization has motivated the development of a class of models that predict not only choice probabilities but also categorization response times (RTs; Fific, Little, & Nosofsky, 2010). The new models combine mental-architecture and random-walk approaches within an integrated framework and…
Morris, Melody K; Shriver, Zachary; Sasisekharan, Ram; Lauffenburger, Douglas A
2012-03-01
Mathematical models have substantially improved our ability to predict the response of a complex biological system to perturbation, but their use is typically limited by difficulties in specifying model topology and parameter values. Additionally, incorporating entities across different biological scales ranging from molecular to organismal in the same model is not trivial. Here, we present a framework called "querying quantitative logic models" (Q2LM) for building and asking questions of constrained fuzzy logic (cFL) models. cFL is a recently developed modeling formalism that uses logic gates to describe influences among entities, with transfer functions to describe quantitative dependencies. Q2LM does not rely on dedicated data to train the parameters of the transfer functions, and it permits straightforward incorporation of entities at multiple biological scales. The Q2LM framework can be employed to ask questions such as: Which therapeutic perturbations accomplish a designated goal, and under what environmental conditions will these perturbations be effective? We demonstrate the utility of this framework for generating testable hypotheses in two examples: (i) an intracellular signaling network model; and (ii) a model for pharmacokinetics and pharmacodynamics of cell-cytokine interactions; in the latter, we validate hypotheses concerning molecular design of granulocyte colony stimulating factor. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
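A minimal sketch of the cFL flavour of modeling, with an invented two-input network: logic gates (here AND as min) combine quantitative activities produced by transfer functions (here a Hill curve), and a Q2LM-style question is answered by sweeping perturbations across environmental conditions. None of the species, parameters, or thresholds come from the paper.

```python
import numpy as np

def hill(x, k=0.5, n=2):
    """Transfer function mapping an input level in [0, 1] to a
    downstream activity in [0, 1] (stand-in for a cFL transfer fn)."""
    return x**n / (k**n + x**n)

# Toy influence network: drug D inhibits kinase K; K and environmental
# signal E jointly (AND gate = min) activate response R.
def response(drug, env):
    k_act = hill(1.0 - drug)            # inhibition: activity falls with dose
    return min(hill(k_act), hill(env))  # AND gate over quantitative inputs

# Q2LM-style question: which dose keeps R low across all environments?
for dose in (0.0, 0.5, 0.9):
    worst = max(response(dose, e) for e in np.linspace(0, 1, 11))
    print(f"dose={dose:.1f}  worst-case R={worst:.2f}")
```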
NASA Astrophysics Data System (ADS)
Sheehan, T.; Baker, B.; Degagne, R. S.
2015-12-01
With the abundance of data sources, analytical methods, and computer models, land managers are faced with the overwhelming task of making sense of a profusion of data of wildly different types. Luckily, fuzzy logic provides a method to work with different types of data using language-based propositions such as "the landscape is undisturbed," and a simple set of logic constructs. Just as many surveys allow different levels of agreement with a proposition, fuzzy logic allows values reflecting different levels of truth for a proposition. Truth levels fall within a continuum ranging from Fully True to Fully False. Hence a fuzzy logic model produces continuous results. The Environmental Evaluation Modeling System (EEMS) is a platform-independent, tree-based, fuzzy logic modeling framework. An EEMS model provides a transparent definition of an evaluation model and is commonly developed as a collaborative effort among managers, scientists, and GIS experts. Managers specify a set of evaluative propositions used to characterize the landscape. Scientists, working with managers, formulate functions that convert raw data values into truth values for the propositions and produce a logic tree to combine results into a single metric used to guide decisions. Managers, scientists, and GIS experts then work together to implement and iteratively tune the logic model and produce final results. We present examples of two successful EEMS projects that provided managers with map-based results suitable for guiding decisions: sensitivity and climate change exposure in Utah and the Colorado Plateau modeled for the Bureau of Land Management; and terrestrial ecological intactness in the Mojave and Sonoran region of southern California modeled for the Desert Renewable Energy Conservation Plan.
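A minimal sketch of the tree-based evaluation idea for a single landscape cell, with invented input layers and breakpoints; EEMS itself reads raster data and has its own conversion functions and truth scale, so this is only the shape of the computation.

```python
def ramp(x, lo, hi):
    """Convert a raw value to a truth level in [0, 1] by linear
    interpolation (0 = fully false, 1 = fully true; illustrative scale)."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def fuzzy_and(*t): return min(t)   # all propositions must hold
def fuzzy_or(*t):  return max(t)   # any proposition suffices

# One landscape cell with illustrative raw inputs and breakpoints.
road_density_km_per_km2 = 0.4
canopy_cover_pct        = 62.0
fire_regime_departure   = 1.2

undisturbed = fuzzy_and(
    1.0 - ramp(road_density_km_per_km2, 0.0, 2.0),  # few roads -> undisturbed
    ramp(canopy_cover_pct, 20.0, 80.0),
)
intact = fuzzy_or(undisturbed, 1.0 - ramp(fire_regime_departure, 0.0, 3.0))
print(f"truth('the landscape is intact') = {intact:.2f}")
```

In an actual EEMS model this tree is applied cell-by-cell across the map, so the single metric at the root becomes a continuous surface for decision support.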
A framework to find the logic backbone of a biological network.
Maheshwari, Parul; Albert, Réka
2017-12-06
Cellular behaviors are governed by interaction networks among biomolecules, for example gene regulatory and signal transduction networks. An often used dynamic modeling framework for these networks, Boolean modeling, can obtain their attractors (which correspond to cell types and behaviors) and their trajectories from an initial state (e.g. a resting state) to the attractors, for example in response to an external signal. The existing methods, however, do not elucidate the causal relationships between distant nodes in the network. In this work, we propose a simple logic framework, based on categorizing causal relationships as sufficient or necessary, as a complement to Boolean networks. We identify and explore the properties of complex subnetworks that are distillable into a single logic relationship. We also identify cyclic subnetworks that ensure the stabilization of the state of participating nodes regardless of the rest of the network. We identify the logic backbone of biomolecular networks, consisting of external signals, self-sustaining cyclic subnetworks (stable motifs), and output nodes. Furthermore, we use the logic framework to identify crucial nodes whose override can drive the system from one steady state to another. We apply these techniques to two biological networks: the epithelial-to-mesenchymal transition network corresponding to a developmental process exploited in tumor invasion, and the network of abscisic acid induced stomatal closure in plants. We find interesting subnetworks with logical implications in these networks. Using these subgraphs and motifs, we efficiently reduce both networks to succinct backbone structures. The logic representation identifies the causal relationships between distant nodes and subnetworks. This knowledge can form the basis of network control or be used in the reverse engineering of networks.
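The sufficient/necessary categorization can be illustrated directly on Boolean update functions. In this sketch (toy functions, not the EMT or stomatal-closure networks analysed in the paper), a regulator is sufficient if turning it on forces the target on regardless of the other inputs, and necessary if the target cannot be on without it.

```python
from itertools import product

# Boolean update functions of a toy network (illustrative only).
functions = {
    "B": lambda A, C: A or C,
    "D": lambda A, C: A and C,
}

def edge_type(target, source, funcs, inputs=("A", "C")):
    """Classify regulator 'source' of 'target' as sufficient and/or
    necessary by checking the update function over all other inputs."""
    f = funcs[target]
    others = [i for i in inputs if i != source]
    combos = list(product([False, True], repeat=len(others)))
    call = lambda val, rest: f(**{source: val, **dict(zip(others, rest))})
    sufficient = all(call(True, rest) for rest in combos)
    necessary  = all(not call(False, rest) for rest in combos)
    return sufficient, necessary

for tgt in functions:
    for src in ("A", "C"):
        s, n = edge_type(tgt, src, functions)
        print(f"{src} -> {tgt}: sufficient={s}, necessary={n}")
```

For the OR gate each regulator is sufficient but not necessary; for the AND gate the reverse holds, which is exactly the distinction the backbone construction builds on.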
Fuzzy branching temporal logic.
Moon, Seong-ick; Lee, Kwang H; Lee, Doheon
2004-04-01
Intelligent systems require a systematic way to represent and handle temporal information containing uncertainty. In particular, a logical framework is needed that can represent uncertain temporal information and its relationships with logical formulae. Fuzzy linear temporal logic (FLTL), a generalization of propositional linear temporal logic (PLTL) with fuzzy temporal events and fuzzy temporal states defined on a linear time model, was previously proposed for this purpose. However, many systems are best represented by branching time models in which each state can have more than one possible future path. In this paper, fuzzy branching temporal logic (FBTL) is proposed to address this problem. FBTL adopts and generalizes computation tree logic (CTL*), a classical branching temporal logic. The temporal model of FBTL is capable of representing fuzzy temporal events and fuzzy temporal states, and the order relation among them is represented as a directed graph. The utility of FBTL is demonstrated using a fuzzy job shop scheduling problem as an example.
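As a rough illustration of branching-time semantics with fuzzy truth values (the tree, the numbers, and the min/max operators are invented stand-ins, not FBTL's actual definitions): path quantification becomes max or min over branches, and temporal operators become max or min along a path.

```python
# Branching time model: each node holds a fuzzy truth value for the
# atomic event "task finished"; children are the possible futures.
tree = {
    "s0": (0.1, ["s1", "s2"]),
    "s1": (0.6, ["s3"]),
    "s2": (0.2, ["s4"]),
    "s3": (0.9, []),
    "s4": (0.4, []),
}

def eventually(state, phi):
    """Fuzzy 'on some path, eventually phi': max over branches/positions."""
    truth, children = tree[state]
    return max([phi(truth)] + [eventually(c, phi) for c in children])

def always_all_paths(state, phi):
    """Fuzzy 'on every path, always phi': min along and across paths."""
    truth, children = tree[state]
    here = phi(truth)
    if not children:
        return here
    return min(here, min(always_all_paths(c, phi) for c in children))

phi = lambda t: t  # the atomic proposition's own truth value
print("E F finished:", eventually("s0", phi))        # 0.9 via s1 -> s3
print("A G finished:", always_all_paths("s0", phi))  # 0.1, bounded by s0
```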
Wasserman, Deborah L
2010-05-01
This paper offers a framework for using a systems orientation and "foundational theory" to enhance theory-driven evaluations and logic models. The framework guides the process of identifying and explaining operative relationships and perspectives within human service program systems. Self-Determination Theory exemplifies how a foundational theory can be used to support the framework in a wide range of program evaluations. Two examples illustrate how applications of the framework have improved the evaluators' abilities to observe and explain program effect. In both exemplars, improvements involved addressing, and organizing into a single logic model, heretofore seemingly disparate evaluation issues: valuing (by whose values), the role of organizational and program context, and evaluation anxiety and utilization. Copyright 2009 Elsevier Ltd. All rights reserved.
Usage Intention Framework Model: A Fuzzy Logic Interpretation of the Classical Utaut Model
ERIC Educational Resources Information Center
Sandaire, Johnny
2009-01-01
A fuzzy conjoint analysis (FCA: Turksen, 1992) model for enhancing management decision in the technology adoption domain was implemented as an extension to the UTAUT model (Venkatesh, Morris, Davis, & Davis, 2003). Additionally, a UTAUT-based Usage Intention Framework Model (UIFM) introduced a closed-loop feedback system. The empirical evidence…
Modular Knowledge Representation and Reasoning in the Semantic Web
NASA Astrophysics Data System (ADS)
Serafini, Luciano; Homola, Martin
Construction of modular ontologies by combining different modules is becoming a necessity in ontology engineering in order to cope with the increasing complexity of the ontologies and the domains they represent. The modular ontology approach takes inspiration from software engineering, where modularization is a widely acknowledged feature. Distributed reasoning is the other side of the coin of modular ontologies: given an ontology comprising a set of modules, it is desired to perform reasoning by combining multiple reasoning processes performed locally on each of the modules. In the last ten years, a number of approaches for combining logics have been developed in order to formalize modular ontologies. In this chapter, we survey and compare the main formalisms for modular ontologies and distributed reasoning in the Semantic Web. We select four formalisms built on the formal logical grounds of Description Logics: Distributed Description Logics, ℰ-connections, Package-based Description Logics and Integrated Distributed Description Logics. We concentrate on expressivity and distinctive modeling features of each framework. We also discuss reasoning capabilities of each framework.
Towards An Engineering Discipline of Computational Security
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mili, Ali; Sheldon, Frederick T; Jilani, Lamia Labed
2007-01-01
George Boole ushered in the era of modern logic by arguing that logical reasoning does not fall in the realm of philosophy, as it was considered up to his time, but in the realm of mathematics. As such, logical propositions and logical arguments are modeled using algebraic structures. Likewise, we submit that security attributes must be modeled as formal mathematical propositions that are subject to mathematical analysis. In this paper, we approach this problem by attempting to model security attributes in a refinement-like framework that has traditionally been used to represent reliability and safety claims. Keywords: Computable security attributes, survivability, integrity, dependability, reliability, safety, security, verification, testing, fault tolerance.
Mediation Analysis in a Latent Growth Curve Modeling Framework
ERIC Educational Resources Information Center
von Soest, Tilmann; Hagtvet, Knut A.
2011-01-01
This article presents several longitudinal mediation models in the framework of latent growth curve modeling and provides a detailed account of how such models can be constructed. Logical and statistical challenges that might arise when such analyses are conducted are also discussed. Specifically, we discuss how the initial status (intercept) and…
Disability Policy Evaluation: Combining Logic Models and Systems Thinking.
Claes, Claudia; Ferket, Neelke; Vandevelde, Stijn; Verlet, Dries; De Maeyer, Jessica
2017-07-01
Policy evaluation focuses on the assessment of policy-related personal, family, and societal changes or benefits that follow as a result of the interventions, services, and supports provided to those persons to whom the policy is directed. This article describes a systematic approach to policy evaluation based on an evaluation framework and an evaluation process that combine the use of logic models and systems thinking. The article also includes an example of how the framework and process have recently been used in policy development and evaluation in Flanders (Belgium), as well as four policy evaluation guidelines based on relevant published literature.
Alternating-Offers Protocol for Multi-issue Bilateral Negotiation in Semantic-Enabled Marketplaces
NASA Astrophysics Data System (ADS)
Ragone, Azzurra; di Noia, Tommaso; di Sciascio, Eugenio; Donini, Francesco M.
We present a semantic-based approach to multi-issue bilateral negotiation for e-commerce. We use Description Logics to model advertisements, and relations among issues as axioms in a TBox. We then introduce a logic-based alternating-offers protocol, able to handle conflicting information, that merges non-standard reasoning services in Description Logics with utility theory to find the most suitable agreements. We illustrate and motivate the theoretical framework, the logical language, and the negotiation protocol.
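A stripped-down alternating-offers loop over two issues conveys the protocol's shape; utilities, value sets, reservation levels, and the one-step concession strategy are all invented for the sketch, and the paper's version additionally reasons over Description Logic representations of offers and TBox axioms, which this sketch omits.

```python
import itertools

# Two issues with discrete value sets (invented for the example).
PRICES, WARRANTIES = [80, 90, 100], [1, 2, 3]
OFFERS = [{"price": p, "warranty": w}
          for p, w in itertools.product(PRICES, WARRANTIES)]

def u_buyer(o):  return (100 - o["price"]) / 20 + o["warranty"] / 3
def u_seller(o): return (o["price"] - 80) / 20 + (3 - o["warranty"]) / 3

def negotiate(res_buyer=1.0, res_seller=1.0, max_rounds=9):
    utils = {"buyer": u_buyer, "seller": u_seller}
    res = {"buyer": res_buyer, "seller": res_seller}
    # Each agent concedes by walking down its own preference ranking.
    ranked = {a: sorted(OFFERS, key=utils[a], reverse=True) for a in utils}
    steps = {"buyer": 0, "seller": 0}
    for r in range(max_rounds):
        me, other = ("buyer", "seller") if r % 2 == 0 else ("seller", "buyer")
        offer = ranked[me][steps[me]]
        steps[me] += 1
        print(f"round {r}: {me} proposes {offer}")
        if utils[other](offer) >= res[other]:   # responder accepts
            return offer
    return None                                 # deadline, no agreement

print("agreement:", negotiate())
```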
Harmonising Nursing Terminologies Using a Conceptual Framework.
Jansen, Kay; Kim, Tae Youn; Coenen, Amy; Saba, Virginia; Hardiker, Nicholas
2016-01-01
The International Classification for Nursing Practice (ICNP®) and the Clinical Care Classification (CCC) System are standardised nursing terminologies that identify discrete elements of nursing practice, including nursing diagnoses, interventions, and outcomes. While CCC uses a conceptual framework or model with 21 Care Components to classify these elements, ICNP, built on a formal Web Ontology Language (OWL) description logic foundation, uses a logical hierarchical framework that is useful for computing and maintenance of ICNP. Since the logical framework of ICNP may not always align with the needs of nursing practice, an informal framework may be a more useful organisational tool to represent nursing content. The purpose of this study was to classify ICNP nursing diagnoses using the 21 Care Components of the CCC as a conceptual framework to facilitate usability and inter-operability of nursing diagnoses in electronic health records. Findings resulted in all 521 ICNP diagnoses being assigned to one of the 21 CCC Care Components. Further research is needed to validate the resulting product of this study with practitioners and develop recommendations for improvement of both terminologies.
Depicting the logic of three evaluation theories.
Hansen, Mark; Alkin, Marvin C; Wallace, Tanner Lebaron
2013-06-01
Here, we describe the development of logic models depicting three theories of evaluation practice: Practical Participatory (Cousins & Whitmore, 1998), Values-engaged (Greene, 2005a, 2005b), and Emergent Realist (Mark et al., 1998). We begin with a discussion of evaluation theory and the particular theories that were chosen for our analysis. We then outline the steps involved in constructing the models. The theoretical prescriptions and claims represented here follow a logic model template developed at the University of Wisconsin-Extension (Taylor-Powell & Henert, 2008), which also closely aligns with Mark's (2008) framework for research on evaluation. Copyright © 2012 Elsevier Ltd. All rights reserved.
Jafari, Mohieddin; Ansari-Pour, Naser; Azimzadeh, Sadegh; Mirzaie, Mehdi
2017-01-01
It is nearly half a century since the introduction of the Central Dogma (CD) of molecular biology. This biological axiom has been developed and currently appears to be all the more complex. In this study, we modified CD by adding further species to the CD information flow and mathematically expressed CD within a dynamic framework by using Boolean networks based on its present-day and 1965 editions. We show that the enhancement of the Dogma not only now entails a higher level of complexity, but it also shows a higher level of robustness, thus far more consistent with the nature of biological systems. Using this mathematical modeling approach, we put forward a logic-based expression of our conceptual view of molecular biology. Finally, we show that such biological concepts can be converted into dynamic mathematical models using a logic-based approach and thus may be useful as a framework for improving static conceptual models in biology. PMID:29267315
Technical Assistance Model for Long-Term Systems Change: Three State Examples
ERIC Educational Resources Information Center
Kasprzak, Christina; Hurth, Joicey; Lucas, Anne; Marshall, Jacqueline; Terrell, Adriane; Jones, Elizabeth
2010-01-01
The National Early Childhood Technical Assistance Center (NECTAC) Technical Assistance (TA) Model for Long-Term Systems Change (LTSC) is grounded in conceptual frameworks in the literature on systems change and systems thinking. The NECTAC conceptual framework uses a logic model approach to change developed specifically for states' infant and…
Query Expansion and Query Translation as Logical Inference.
ERIC Educational Resources Information Center
Nie, Jian-Yun
2003-01-01
Examines query expansion during query translation in cross language information retrieval and develops a general framework for inferential information retrieval in two particular contexts: using fuzzy logic and probability theory. Obtains evaluation formulas that are shown to strongly correspond to those used in other information retrieval models.…
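A minimal sketch of query translation treated as inference: the probability of a query term given a document is computed by marginalizing over translations, echoing the probabilistic instantiation of the framework described above. The translation table, documents, and smoothing constants are invented for the example.

```python
# P(target term | source term): invented translation probabilities.
translations = {
    "maison":  {"house": 0.7, "home": 0.3},
    "blanche": {"white": 0.9, "blank": 0.1},
}

docs = {
    "d1": {"white": 2, "house": 1, "senate": 1},
    "d2": {"home": 1, "blank": 1, "page": 2},
}

def p_term_given_doc(term, doc, mu=0.5, bg=0.01):
    """Smoothed unigram language model for a document."""
    counts = docs[doc]
    total = sum(counts.values())
    return (1 - mu) * counts.get(term, 0) / total + mu * bg

def score(query_terms, doc):
    """P(query | doc) with each query term inferred via translation:
    P(q|D) = sum_t P(q|t) * P(t|D)."""
    s = 1.0
    for q in query_terms:
        s *= sum(p * p_term_given_doc(t, doc)
                 for t, p in translations[q].items())
    return s

for d in docs:
    print(d, f"{score(['maison', 'blanche'], d):.6f}")
```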
Use of program logic models in the Southern Rural Access Program evaluation.
Pathman, Donald; Thaker, Samruddhi; Ricketts, Thomas C; Albright, Jennifer B
2003-01-01
The Southern Rural Access Program (SRAP) evaluation team used program logic models to clarify grantees' activities, objectives, and timelines. This information was used to benchmark data from grantees' progress reports to assess the program's successes. This article presents a brief background on the use of program logic models--essentially charts or diagrams specifying a program's planned activities, objectives, and goals--for evaluating and managing a program. It discusses the structure of the logic models chosen for the SRAP and how the model concept was introduced to the grantees to promote acceptance and use of the models. The article describes how the models helped clarify the program's objectives and helped lead agencies plan and manage the many program initiatives and subcontractors in their states. Models also provided a framework for grantees to report their progress to the National Program Office and evaluators and promoted the evaluators' visibility and acceptance by the grantees. Program logics, however, increased grantees' reporting requirements and demanded substantial time of the evaluators. Program logic models, on balance, proved their merit in the SRAP through their contributions to its management and evaluation and by providing a better understanding of the program's initiatives, successes, and potential impact.
Kneale, Dylan; Thomas, James; Harris, Katherine
2015-01-01
Logic models are becoming an increasingly common feature of systematic reviews, as is the use of programme theory more generally in systematic reviewing. Logic models offer a framework to help reviewers 'think' conceptually at various points during the review, and can be a useful tool in defining study inclusion and exclusion criteria, guiding the search strategy, identifying relevant outcomes, identifying mediating and moderating factors, and communicating review findings. In this paper we critique the use of logic models in systematic reviews and protocols drawn from two databases representing reviews of health interventions and international development interventions. Programme theory featured only in a minority of the reviews and protocols included. Despite drawing from different disciplinary traditions, reviews and protocols from both sources shared several limitations in their use of logic models and theories of change, and these were used almost unanimously to solely depict pictorially the way in which the intervention worked. Logic models and theories of change were consequently rarely used to communicate the findings of the review. Logic models have the potential to be an aid integral throughout the systematic reviewing process. The absence of good practice around their use and development may be one reason for the apparent limited utility of logic models in many existing systematic reviews. These concerns are addressed in the second half of this paper, where we offer a set of principles in the use of logic models and an example of how we constructed a logic model for a review of school-based asthma interventions.
A Logic Model for Evaluating the Academic Health Department.
Erwin, Paul Campbell; McNeely, Clea S; Grubaugh, Julie H; Valentine, Jennifer; Miller, Mark D; Buchanan, Martha
2016-01-01
Academic Health Departments (AHDs) are collaborative partnerships between academic programs and practice settings. While case studies have informed our understanding of the development and activities of AHDs, there has been no formal published evaluation of AHDs, either singularly or collectively. Developing a framework for evaluating AHDs has potential to further aid our understanding of how these relationships may matter. In this article, we present a general theory of change, in the form of a logic model, for how AHDs impact public health at the community level. We then present a specific example of how the logic model has been customized for a specific AHD. Finally, we end with potential research questions on the AHD based on these concepts. We conclude that logic models are valuable tools, which can be used to assess the value and ultimate impact of the AHD.
Modelling default and likelihood reasoning as probabilistic reasoning
NASA Technical Reports Server (NTRS)
Buntine, Wray
1990-01-01
A probabilistic analysis of plausible reasoning about defaults and about likelihood is presented. Likely and by default are in fact treated as duals in the same sense as possibility and necessity. To model these four forms probabilistically, a qualitative default probabilistic (QDP) logic and its quantitative counterpart DP are derived that allow qualitative and corresponding quantitative reasoning. Consistency and consequence results for subsets of the logics are given that require at most a quadratic number of satisfiability tests in the underlying propositional logic. The quantitative logic shows how to track the propagation error inherent in these reasoning forms. The methodology and sound framework of the system highlight their approximate nature, the dualities, and the need for complementary reasoning about relevance.
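A crude numerical reading of the quantitative side: if each default rule holds with probability at least p_i, chaining the rules yields a conclusion whose guaranteed probability degrades by roughly the sum of the individual error terms, which is the kind of propagation error the quantitative logic tracks. The rules, strengths, and 'likely' threshold below are invented for the sketch.

```python
# Default rules with assumed lower-bound probabilities (illustrative).
rules = [
    ("bird(x) -> flies(x)",        0.95),
    ("flies(x) -> nests_high(x)",  0.90),
]

def chain_lower_bound(rule_probs):
    """Lower bound on P(conclusion | premise) after chaining defaults:
    1 minus the sum of the individual error terms (union bound)."""
    eps = sum(1.0 - p for p in rule_probs)
    return max(0.0, 1.0 - eps)

lb = chain_lower_bound([p for _, p in rules])
print(f"P(nests_high | bird) >= {lb:.2f}")   # 0.85: errors compound
threshold = 0.8                              # 'likely' cut-off (assumed)
print("conclusion still 'likely':", lb >= threshold)
```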
Making It Logical: Implementation of Inclusive Education Using a Logic Model Framework
ERIC Educational Resources Information Center
Stegemann, Kim Calder; Jaciw, Andrew P.
2018-01-01
Educational inclusion of children with special learning needs is a philosophy and movement with an international presence. Though Canada is a leader in educational inclusion, many would claim that our public educational systems have not yet fully realized the dream of inclusive education. As other countries have noted, making full-fledged changes…
A Framework for Building and Reasoning with Adaptive and Interoperable PMESII Models
2007-11-01
…: Description Logic; SOA: Service Oriented Architecture; SPARQL: Simple Protocol And RDF Query Language; SQL: Standard Query Language; SROM: Stability and… …another by providing a more expressive ontological structure for one of the models, e.g., semantic networks can be mapped to first-order logical… Pellet is an open-source reasoner that works with OWL-DL. It accepts the SPARQL protocol and RDF query language (SPARQL) and provides a Java API to…
The BMW Model: A New Framework for Teaching Monetary Economics
ERIC Educational Resources Information Center
Bofinger, Peter; Mayer, Eric; Wollmershauser, Timo
2006-01-01
Although the IS/LM-AS/AD model is still the central tool of macroeconomic teaching in most macroeconomic textbooks, it has been criticized by several economists. Colander (1995) demonstrated that the framework is logically inconsistent, Romer (2000) showed that it is unable to deal with a monetary policy that uses the interest rate as its…
Das, Bhibha M; Petruzzello, Steven J; Ryan, Katherine E
2014-07-17
Transportation workers, who constitute a large sector of the workforce, have worksite factors that harm their health. Worksite wellness programs must target this at-risk population. Although physical activity is often a component of worksite wellness logic models, we consider it the cornerstone for improving the health of mass transit employees. Program theory was based on in-person interviews and focus groups of employees. We identified 4 short-term outcome categories, which provided a chain of responses based on the program activities that should lead to the desired end results. This logic model may have significant public health impact, because it can serve as a framework for other US mass transit districts and worksite populations that face similar barriers to wellness, including truck drivers, railroad employees, and pilots. The objective of this article is to discuss the development of a logic model for a physical activity-based mass-transit employee wellness program by describing the target population, program theory, the components of the logic model, and the process of its development.
Knebel, Ann R.; Sharpe, Virginia A.; Danis, Marion; Toomey, Lauren M.; Knickerbocker, Deborah K.
2017-01-01
During catastrophic disasters, government leaders must decide how to efficiently and effectively allocate scarce public health and medical resources. The literature about triage decision making at the individual patient level is substantial, and the National Response Framework provides guidance about the distribution of responsibilities between federal and state governments. However, little has been written about the decision-making process of federal leaders in disaster situations when resources are not sufficient to meet the needs of several states simultaneously. We offer an ethical framework and logic model for decision making in such circumstances. We adapted medical triage and the federalism principle to the decision-making process for allocating scarce federal public health and medical resources. We believe that the logic model provides a values-based framework that can inform the gestalt during the iterative decision process used by federal leaders as they allocate scarce resources to states during catastrophic disasters. PMID:24612854
Evaluation of properties over phylogenetic trees using stochastic logics.
Requeno, José Ignacio; Colom, José Manuel
2016-06-14
Model checking has recently been introduced as an integrated framework for extracting information from phylogenetic trees using temporal logics as a querying language; temporal logics extend modal logics by imposing restrictions of a boolean formula along a path of events. The phylogenetic tree is considered a transition system modeling evolution as a sequence of genomic mutations (we understand mutation as the different ways in which DNA can be changed), while this kind of logic is suitable for traversing the tree in a strict and exhaustive way. Given a biological property that we desire to inspect over the phylogeny, the verifier returns true if the specification is satisfied, or a counterexample that falsifies it. However, this approach has so far been applied only to qualitative aspects of the phylogeny. In this paper, we remove the limitations of the previous framework by including and handling quantitative information such as explicit time or probability. To this end, we apply current probabilistic continuous-time extensions of model checking to phylogenetics. We reinterpret a catalog of qualitative properties in a numerical way, and we also present new properties that could not be analyzed before. For instance, we obtain the likelihood of a tree topology according to a mutation model. As a case study, we analyze several phylogenies in order to obtain the maximum likelihood with the model checking tool PRISM. In addition, we have adapted the software to optimize the computation of maximum likelihoods. We have shown that probabilistic model checking is a competitive framework for describing and analyzing quantitative properties of phylogenetic trees. This formalism adds soundness and readability to the definition of models and specifications. Besides, the existence of model checking tools hides the underlying technology, sparing biologists the extension, upgrade, debugging, and maintenance of a software tool. A set of benchmarks justifies the feasibility of our approach.
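The "likelihood of a tree topology according to a mutation model" mentioned above is classically computed by Felsenstein's pruning algorithm. Here is a self-contained Python sketch for a two-state symmetric mutation model on a three-leaf tree; the topology, branch lengths, rates, and tip states are invented, and PRISM itself works with probabilistic temporal-logic specifications rather than this direct recursion.

```python
import numpy as np

def pmatrix(t, r=1.0):
    """Transition matrix of a two-state symmetric CTMC over time t:
    P(stay) = (1 + exp(-2rt)) / 2."""
    a = 0.5 * (1 + np.exp(-2 * r * t))
    return np.array([[a, 1 - a], [1 - a, a]])

def likelihood(tree, branch, tips):
    """Felsenstein pruning: bottom-up partial likelihoods per node."""
    def L(node):
        if node in tips:                        # leaf: indicator vector
            v = np.zeros(2); v[tips[node]] = 1.0
            return v
        out = np.ones(2)
        for child in tree[node]:
            out *= pmatrix(branch[child]) @ L(child)
        return out
    return float(L("root") @ np.array([0.5, 0.5]))  # uniform root prior

tree = {"root": ["AB", "C"], "AB": ["A", "B"]}      # topology ((A,B),C)
branch = {"AB": 0.3, "A": 0.1, "B": 0.1, "C": 0.4}  # branch lengths
tips = {"A": 0, "B": 0, "C": 1}                     # observed states
print(f"tree likelihood: {likelihood(tree, branch, tips):.4f}")
```

Maximizing this quantity over branch lengths (or topologies) gives the maximum-likelihood analyses the abstract refers to.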
Learning Probabilistic Logic Models from Probabilistic Examples
Chen, Jianzhong; Muggleton, Stephen; Santos, José
2009-01-01
We revisit an application developed originally using abductive Inductive Logic Programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches - abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical modeling (PRISM) - to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and the PILP models learned from probabilistic examples lead to a significant decrease in error accompanied by improved insight from the learned results compared with the PILP models learned from non-probabilistic examples. PMID:19888348
Assessment of groundwater vulnerability using supervised committee to combine fuzzy logic models.
Nadiri, Ata Allah; Gharekhani, Maryam; Khatibi, Rahman; Moghaddam, Asghar Asghari
2017-03-01
Vulnerability indices of an aquifer assessed by different fuzzy logic (FL) models often give rise to differing values with no theoretical or empirical basis to establish a validated baseline or to develop a comparison basis between the modeling results and baselines, if any. Therefore, this research presents a supervised committee fuzzy logic (SCFL) method, which uses artificial neural networks to overarch and combine a selection of FL models. The indices are expressed by the widely used DRASTIC framework, which include geological, hydrological, and hydrogeological parameters often subject to uncertainty. DRASTIC indices represent collectively intrinsic (or natural) vulnerability and give a sense of contaminants, such as nitrate-N, percolating to aquifers from the surface. The study area is an aquifer in Ardabil plain, the province of Ardabil, northwest Iran. Improvements on vulnerability indices are achieved by FL techniques, which comprise Sugeno fuzzy logic (SFL), Mamdani fuzzy logic (MFL), and Larsen fuzzy logic (LFL). As the correlation between estimated DRASTIC vulnerability index values and nitrate-N values is as low as 0.4, it is improved significantly by FL models (SFL, MFL, and LFL), which perform in similar ways but have differences. Their synergy is exploited by SCFL and uses the FL modeling results "conditioned" by nitrate-N values to raise their correlation to higher than 0.9.
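The committee idea can be illustrated with a linear least-squares combiner standing in for the artificial neural network used in SCFL: three imperfect model outputs are combined under supervision by the observed nitrate-N values, and the combination correlates better with the target than any single model. All data below are simulated; the SFL/MFL/LFL outputs are generic noisy stand-ins, not the study's indices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 60
truth = rng.uniform(0, 1, n)                  # stand-in for nitrate-N levels

# Three imperfect "fuzzy model" outputs: each a noisy, biased view of
# the target, as the individual FL vulnerability indices are.
sfl = truth + rng.normal(0, 0.10, n)
mfl = 0.8 * truth + 0.1 + rng.normal(0, 0.12, n)
lfl = truth**1.2 + rng.normal(0, 0.15, n)

# Supervised committee: learn combination weights against the observed
# values (least squares here; SCFL uses a neural network).
X = np.column_stack([np.ones(n), sfl, mfl, lfl])
w, *_ = np.linalg.lstsq(X, truth, rcond=None)
committee = X @ w

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

for name, est in [("SFL", sfl), ("MFL", mfl), ("LFL", lfl),
                  ("committee", committee)]:
    print(f"{name:9s} r = {corr(est, truth):.3f}")
```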
ERIC Educational Resources Information Center
Stenning, Keith; van Lambalgen, Michiel
2004-01-01
Modern logic provides accounts of both interpretation and derivation which work together to provide abstract frameworks for modelling the sensitivity of human reasoning to task, context and content. Cognitive theories have underplayed the importance of interpretative processes. We illustrate, using Wason's [Q. J. Exp. Psychol. 20 (1968) 273]…
'Healthy Eating and Lifestyle in Pregnancy (HELP)' trial: Process evaluation framework.
Simpson, Sharon A; Cassidy, Dunla; John, Elinor
2014-07-01
We developed and tested in a cluster RCT a theory-driven group-based intervention for obese pregnant women. It was designed to support women to moderate weight gain during pregnancy and reduce BMI one year after birth, in addition to targeting secondary health and wellbeing outcomes. In line with MRC guidance on developing and evaluating complex interventions in health, we conducted a process evaluation alongside the trial. This paper describes the development of the process evaluation framework. This cluster RCT recruited 598 pregnant women. Women in the intervention group were invited to attend a weekly weight-management group. Following a review of relevant literature, we developed a process evaluation framework which outlined key process indicators that we wanted to address and how we would measure these. Central to the process evaluation was to understand the mechanism of effect of the intervention. We utilised a logic-modelling approach to describe the intervention which helped us focus on what potential mediators of intervention effect to measure, and how. The resulting process evaluation framework was designed to address 9 core elements; context, reach, exposure, recruitment, fidelity, recruitment, retention, contamination and theory-testing. These were assessed using a variety of qualitative and quantitative approaches. The logic model explained the processes by which intervention components bring about change in target outcomes through various mediators and theoretical pathways including self-efficacy, social support, self-regulation and motivation. Process evaluation is a key element in assessing the effect of any RCT. We developed a process evaluation framework and logic model, and the results of analyses using these will offer insights into why the intervention is or is not effective. Copyright © 2014.
Discovering Knowledge from Noisy Databases Using Genetic Programming.
ERIC Educational Resources Information Center
Wong, Man Leung; Leung, Kwong Sak; Cheng, Jack C. Y.
2000-01-01
Presents a framework that combines Genetic Programming and Inductive Logic Programming, two approaches in data mining, to induce knowledge from noisy databases. The framework is based on a formalism of logic grammars and is implemented as a data mining system called LOGENPRO (Logic Grammar-based Genetic Programming System). (Contains 34…
McDonald, Steve; Turner, Tari; Chamberlain, Catherine; Lumbiganon, Pisake; Thinkhamrop, Jadsada; Festin, Mario R; Ho, Jacqueline J; Mohammad, Hakimi; Henderson-Smart, David J; Short, Jacki; Crowther, Caroline A; Martis, Ruth; Green, Sally
2010-07-01
Rates of maternal and perinatal mortality remain high in developing countries despite the existence of effective interventions. Efforts to strengthen evidence-based approaches to improve health in these settings are partly hindered by restricted access to the best available evidence, limited training in evidence-based practice and concerns about the relevance of existing evidence. South East Asia--Optimising Reproductive and Child Health in Developing Countries (SEA-ORCHID) was a five-year project that aimed to determine whether a multifaceted intervention designed to strengthen the capacity for research synthesis, evidence-based care and knowledge implementation improved clinical practice and led to better health outcomes for mothers and babies. This paper describes the development and design of the SEA-ORCHID intervention plan using a logical framework approach. SEA-ORCHID used a before-and-after design to evaluate the impact of a multifaceted tailored intervention at nine sites across Thailand, Malaysia, Philippines and Indonesia, supported by three centres in Australia. We used a logical framework approach to systematically prepare and summarise the project plan in a clear and logical way. The development and design of the SEA-ORCHID project was based around the three components of a logical framework (problem analysis, project plan and evaluation strategy). The SEA-ORCHID logical framework defined the project's goal and purpose (To improve the health of mothers and babies in South East Asia and To improve clinical practice in reproductive health in South East Asia), and outlined a series of project objectives and activities designed to achieve these. The logical framework also established outcome and process measures appropriate to each level of the project plan, and guided project work in each of the participating countries and hospitals. Development of a logical framework in the SEA-ORCHID project enabled a reasoned, logical approach to the project design that ensured the project activities would achieve the desired outcomes and that the evaluation plan would assess both the process and outcome of the project. The logical framework was also valuable over the course of the project to facilitate communication, assess progress and build a shared understanding of the project activities, purpose and goal.
Agent Based Modeling and Simulation Framework for Supply Chain Risk Management
2012-03-01
Indexed excerpts from this report reference risk taxonomies (Christopher and Peck 2004; Ghoshal 1987; Shi 2004), clustering algorithms in agent logic to protect company privacy (da Silva et al. 2006), aggregation of domain context in agent data-analysis logic (Xiang), and readiness measures such as Operational Availability (OA) for FMC and PMC and Mission Capable (MICAP) hours.
Burnett, E; Curran, E; Loveday, H P; Kiernan, M A; Tannahill, M
2014-01-01
Healthcare is delivered in a dynamic environment with frequent changes in populations, methods, equipment and settings. Infection prevention and control practitioners (IPCPs) must ensure that they are competent in addressing the challenges they face and are equipped to develop infection prevention and control (IPC) services in line with a changing world of healthcare provision. A multifaceted Framework was developed to assist IPCPs to enhance competence at an individual, team and organisational level to enable quality performance and improved quality of care. However, if these aspirations are to be met, it is vital that competency frameworks are fit for purpose or they risk being ignored. The aim of this unique study was to evaluate short and medium term outcomes as set out in the Outcome Logic Model to assist with the evaluation of the impact and success of the Framework. This study found that while the Framework is being used effectively in some areas, it is not being used as much or in the ways that were anticipated. The findings will enable future work on revision, communication and dissemination, and will provide intelligence to those initiating education and training in the utilisation of the competences.
Oral health disparities and the workforce: a framework to guide innovation.
Hilton, Irene V; Lester, Arlene M
2010-06-01
Oral health disparities currently exist in the United States, and workforce innovations have been proposed as one strategy to address these disparities. A framework is needed to logically assess the possible role of the workforce as a contributor to oral health disparities and to analyze workforce strategies addressing them. Using an existing framework, A Strategic Framework for Improving Racial/Ethnic Minority Health and Eliminating Racial/Ethnic Health Disparities, workforce was sequentially applied across individual, environmental/community, and system levels to identify long-term problems, contributing factors, strategies/innovation, measurable outcomes/impacts, and long-term goals. Examples of current workforce innovations were applied to the framework. Contributing factors to oral health disparities included lack of racial/ethnic diversity of the workforce, lack of appropriate training, provider distribution, and a nonuser-centered system. The framework was applied to selected workforce innovation models, delineating the potential impact on contributing factors across the individual, environmental/community, and system levels. The framework helps to define the expected outcomes from workforce models that would contribute to the goal of reducing oral health disparities, and to examine impacts across multiple levels. However, the contributing factors to oral health disparities cannot be addressed by workforce innovation alone. The Strategic Framework is a logical approach to guide workforce innovation and solutions, and to identify other aspects of the oral healthcare delivery system that need innovation in order to reduce oral health disparities.
LEGO-MM: LEarning structured model by probabilistic loGic Ontology tree for MultiMedia.
Tang, Jinhui; Chang, Shiyu; Qi, Guo-Jun; Tian, Qi; Rui, Yong; Huang, Thomas S
2016-09-22
Recent advances in Multimedia ontology have resulted in a number of concept models, e.g., LSCOM and Mediamill 101, which are accessible and public to other researchers. However, most current research effort still focuses on building new concepts from scratch; little work explores appropriate methods for constructing new concepts upon the existing models already in the warehouse. To address this issue, we propose a new framework in this paper, termed LEGO-MM, which can seamlessly integrate both the new target training examples and the existing primitive concept models to infer more complex concept models. LEGO-MM treats the primitive concept models as Lego bricks with which to construct a potentially unlimited vocabulary of new concepts. Specifically, we first formulate logic operations as the Lego connectors that combine existing concept models hierarchically in probabilistic logic ontology trees. Then, we incorporate new target training information simultaneously to efficiently disambiguate the underlying logic tree and correct error propagation. Extensive experiments are conducted on a large vehicle-domain data set from ImageNet. The results demonstrate that LEGO-MM has significantly superior performance over existing state-of-the-art methods, which build new concept models from scratch.
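The core trick of composing existing concept detectors with logic connectors can be illustrated with elementary probability: treating detector scores as independent probabilities, a conjunction becomes a product and a disjunction a noisy-OR. This is a hedged sketch of the general idea only, not LEGO-MM's ontology-tree machinery; the detector names and scores are invented.

```python
# Compose primitive concept scores with probabilistic logic connectors.
# Independence of the primitive detectors is assumed purely for illustration.

def p_and(p1: float, p2: float) -> float:
    """Probability that both concepts hold (product rule under independence)."""
    return p1 * p2

def p_or(p1: float, p2: float) -> float:
    """Noisy-OR: probability that at least one concept holds."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

def p_not(p: float) -> float:
    return 1.0 - p

# Hypothetical primitive detectors composed into a new concept "police car".
p_car = 0.85        # score from an existing "car" model
p_emergency = 0.6   # score from an existing "emergency vehicle" model
print(p_and(p_car, p_emergency))  # composed concept score
```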
An Argumentation Framework based on Paraconsistent Logic
NASA Astrophysics Data System (ADS)
Umeda, Yuichi; Takahashi, Takehisa; Sawamura, Hajime
Argumentation is one of the most representative intelligent activities of humans. It is therefore natural to think that it could have many implications for artificial intelligence and computer science as well. Specifically, argumentation may be considered a most primitive capability for interaction among computational agents. In this paper we present an argumentation framework based on four-valued paraconsistent logic. The tolerance and acceptance of inconsistency that this logic has as its logical feature allow for arguments on the inconsistent knowledge bases with which we are often confronted. We introduce various concepts for argumentation, such as arguments, attack relations, argument justification, preferential criteria of arguments based on social norms, and so on, in a way proper to the four-valued paraconsistent logic. Then, we provide the fixpoint semantics and dialectical proof theory for our argumentation framework. We also give proofs of soundness and completeness.
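One standard way to realize a four-valued paraconsistent logic is Belnap-style: each value records independently whether truth and falsity are supported, so contradictions ("both") and gaps ("neither") are first-class values and a contradiction does not entail everything. The sketch below encodes that representation generically; it is an illustration of four-valued connectives, not the authors' specific argumentation semantics.

```python
# Belnap-style four values as (truth-supported, falsity-supported) pairs:
# T = (1, 0), F = (0, 1), B = "both" = (1, 1), N = "neither" = (0, 0).
VALUES = {"T": (1, 0), "F": (0, 1), "B": (1, 1), "N": (0, 0)}
NAMES = {v: k for k, v in VALUES.items()}

def v_not(a):
    t, f = a
    return (f, t)  # negation swaps the evidence for truth and falsity

def v_and(a, b):
    return (a[0] & b[0], a[1] | b[1])

def v_or(a, b):
    return (a[0] | b[0], a[1] & b[1])

# A contradiction does not explode: B AND T stays B rather than collapsing to F.
print(NAMES[v_and(VALUES["B"], VALUES["T"])])  # -> B
print(NAMES[v_or(VALUES["B"], VALUES["N"])])   # -> T
```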
Navigating a Mobile Robot Across Terrain Using Fuzzy Logic
NASA Technical Reports Server (NTRS)
Seraji, Homayoun; Howard, Ayanna; Bon, Bruce
2003-01-01
A strategy for autonomous navigation of a robotic vehicle across hazardous terrain involves the use of a measure of terrain traversability within a fuzzy-logic conceptual framework. This navigation strategy requires no a priori information about the environment. Fuzzy logic was selected as a basic element of this strategy because it provides a formal methodology for representing and implementing a human driver's heuristic knowledge and operational experience. Within a fuzzy-logic framework, the attributes of human reasoning and decision-making can be formulated by simple IF (antecedent), THEN (consequent) rules coupled with easily understandable and natural linguistic representations. The linguistic values in the rule antecedents convey the imprecision associated with measurements taken by sensors onboard a mobile robot, while the linguistic values in the rule consequents represent the vagueness inherent in the reasoning processes that generate the control actions. The operational strategies of the human expert driver can be transferred, via fuzzy logic, to a robot-navigation strategy in the form of a set of simple conditional statements composed of linguistic variables. These linguistic variables are defined by fuzzy sets in accordance with user-defined membership functions. The main advantages of a fuzzy navigation strategy lie in the ability to extract heuristic rules from human experience and to obviate the need for an analytical model of the robot navigation process.
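The flavor of such linguistic rules is easy to make concrete. The sketch below implements a toy two-rule base, IF roughness is HIGH THEN speed is LOW and IF roughness is LOW THEN speed is HIGH, with triangular memberships and weighted-average defuzzification. The membership breakpoints and rule consequents are invented for illustration and are not the strategy actually fielded on the robot.

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def speed_command(roughness: float) -> float:
    """Map terrain roughness in [0, 1] to a commanded speed (illustrative m/s)."""
    low_rough = tri(roughness, -0.5, 0.0, 0.6)
    high_rough = tri(roughness, 0.4, 1.0, 1.5)
    # Rule consequents as singleton speeds: LOW speed = 0.2, HIGH speed = 1.0.
    rules = [(high_rough, 0.2), (low_rough, 1.0)]
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.0

print(speed_command(0.8))  # rough terrain -> slow
print(speed_command(0.1))  # smooth terrain -> fast
```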
Birnbaum, Marvin L; Daily, Elaine K; O'Rourke, Ann P; Kushner, Jennifer
2016-04-01
Disaster-related interventions are actions or responses undertaken during any phase of a disaster to change the current status of an affected community or a Societal System. Interventional disaster research aims to evaluate the results of such interventions in order to develop standards and best practices in Disaster Health that can be applied to disaster risk reduction. Considering interventions as production functions (transformation processes) structures the analyses and cataloguing of interventions/responses that are implemented prior to, during, or following a disaster or other emergency. Since it is not currently possible to conduct randomized, controlled studies of disasters, validation of the derived standards and best practices requires that the results of studies be compared and synthesized with results from other studies (i.e., systematic reviews). Such reviews will be facilitated by the selected studies being structured using accepted frameworks. A logic model is a graphic representation of the transformation processes of a program [project] that shows the intended relationships between investments and results. Logic models are used to describe a program and its theory of change, and they provide a method for analyzing and evaluating interventions. The Disaster Logic Model (DLM) is an adaptation of a logic model used for the evaluation of educational programs and provides the structure required for the analysis of disaster-related interventions. It incorporates: a definition of the current functional status of a community or Societal System, identification of needs, definition of goals, selection of objectives, implementation of the intervention(s), and evaluation of the effects, outcomes, costs, and impacts of the interventions. It is useful for determining the value of an intervention and also provides the structure for analyzing the processes used in providing the intervention according to the Relief/Recovery and Risk-Reduction Frameworks.
Assessment of Evidence-based Management Training Program: Application of a Logic Model.
Guo, Ruiling; Farnsworth, Tracy J; Hermanson, Patrick M
2016-06-01
The purposes of this study were to apply a logic model to plan and implement an evidence-based management (EBMgt) educational training program for healthcare administrators and to examine whether a logic model is a useful tool for evaluating the outcomes of the educational program. The logic model was used as a conceptual framework to guide the investigators in developing an EBMgt educational training program and evaluating the outcomes of the program. The major components of the logic model were constructed as inputs, outputs, and outcomes/impacts. The investigators delineated the logic model based on the results of the needs assessment survey. Two 3-hour training workshops were delivered to 30 participants. To assess the outcomes of the EBMgt educational program, pre- and post-tests and self-reflection surveys were conducted. The data were collected and analyzed descriptively and inferentially, using the IBM Statistical Package for the Social Sciences (SPSS) 22.0. A paired sample t-test was performed to compare the differences in participants' EBMgt knowledge and skills prior to and after the training. The assessment results showed that there was a statistically significant difference in participants' EBMgt knowledge and information searching skills before and after the training (p < 0.001). Participants' confidence in using the EBMgt approach for decision-making was significantly increased after the training workshops (p < 0.001). Eighty-three percent of participants indicated that the knowledge and skills they gained through the training program could be used for future management decision-making in their healthcare organizations. The overall evaluation results of the program were positive. It is suggested that the logic model is a useful tool for program planning, implementation, and evaluation, and it also improves the outcomes of the educational program.
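The pre/post comparison reported here is a standard paired-samples t-test. A minimal reproduction of the analysis pattern, with made-up scores rather than the study's data, looks like this:

```python
from scipy import stats

# Hypothetical pre- and post-training EBMgt knowledge scores for the same participants.
pre  = [55, 60, 48, 62, 58, 51, 66, 59, 53, 61]
post = [70, 72, 63, 75, 69, 68, 80, 71, 65, 74]

t_stat, p_value = stats.ttest_rel(post, pre)  # paired-samples t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p -> significant gain
```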
POLE.VAULT: A Semantic Framework for Health Policy Evaluation and Logical Testing.
Shaban-Nejad, Arash; Okhmatovskaia, Anya; Shin, Eun Kyong; Davis, Robert L; Buckeridge, David L
2017-01-01
The major goal of our study is to provide an automatic evaluation framework that aligns the results generated through semantic reasoning with the best available evidence regarding effective interventions to support the logical evaluation of public health policies. To this end, we have designed the POLicy EVAlUation & Logical Testing (POLE.VAULT) Framework to assist different stakeholders and decision-makers in making informed decisions about different health-related interventions, programs and ultimately policies, based on the contextual knowledge and the best available evidence at both individual and aggregate levels.
Cognitive pathways and historical research.
Sutherland, J A
1997-01-01
The nursing literature is replete with articles detailing the logical reasoning processes required by the individual scientist to implement the rigors of research and theory development. Much less attention has been focused on creative and critical thinking as modes for deriving explanations, inferences, and conclusions essential to science as a product. Historical research, as a particular kind of qualitative research, is dependent on and compatible with such mental strategies as logical, creative, and critical thinking. These strategies depict an intellectual framework for the scientist examining archival data and offer a structure for such inquiry. A model for analyzing historical data delineating the cognitive pathways of logical reasoning, creative processing, and critical thinking is proposed.
The motor theory of speech perception revisited.
Massaro, Dominic W; Chen, Trevor H
2008-04-01
Galantucci, Fowler, and Turvey (2006) have claimed that perceiving speech is perceiving gestures and that the motor system is recruited for perceiving speech. We make the counterargument that perceiving speech is not perceiving gestures, that the motor system is not recruited for perceiving speech, and that speech perception can be adequately described by a prototypical pattern recognition model, the fuzzy logical model of perception (FLMP). Empirical evidence taken as support for gesture and motor theory is reconsidered in more detail and in the framework of the FLMP. Additional theoretical and logical arguments are made to challenge gesture and motor theory.
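The FLMP's integration rule is multiplicative: continuous truth values from independent sources (e.g., auditory and visual) are multiplied and then normalized across the response alternatives. A minimal two-alternative sketch of that rule, with invented feature supports, is below; it is a schematic rendering, not Massaro's full fitting procedure.

```python
def flmp_two_alternatives(a: float, v: float) -> float:
    """FLMP decision rule for two alternatives, given auditory support a and
    visual support v (each in [0, 1]) for the first alternative."""
    support_1 = a * v                         # multiplicative integration
    support_2 = (1.0 - a) * (1.0 - v)         # support for the competitor
    return support_1 / (support_1 + support_2)  # relative goodness of match

# Ambiguous audio (0.6) paired with clear visual information (0.9):
print(flmp_two_alternatives(0.6, 0.9))  # the clearer source dominates the percept
```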
A conceptual review of decision making in social dilemmas: applying a logic of appropriateness.
Weber, J Mark; Kopelman, Shirli; Messick, David M
2004-01-01
Despite decades of experimental social dilemma research, "theoretical integration has proven elusive" (Smithson & Foddy, 1999, p. 14). To advance a theory of decision making in social dilemmas, this article provides a conceptual review of the literature that applies a "logic of appropriateness" (March, 1994) framework. The appropriateness framework suggests that people making decisions ask themselves (explicitly or implicitly), "What does a person like me do in a situation like this?" This question identifies three significant factors: recognition and classification of the kind of situation encountered, the identity of the individual making the decision, and the application of rules or heuristics in guiding behavioral choice. In contrast with dominant rational choice models, the proposed appropriateness framework accommodates the inherently social nature of social dilemmas and the role of rule- and heuristic-based processing. Implications for the interpretation of past findings and the direction of future research are discussed.
Theory Learning as Stochastic Search in the Language of Thought
ERIC Educational Resources Information Center
Ullman, Tomer D.; Goodman, Noah D.; Tenenbaum, Joshua B.
2012-01-01
We present an algorithmic model for the development of children's intuitive theories within a hierarchical Bayesian framework, where theories are described as sets of logical laws generated by a probabilistic context-free grammar. We contrast our approach with connectionist and other emergentist approaches to modeling cognitive development. While…
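The generative idea, logical laws sampled from a probabilistic context-free grammar, can be shown in miniature. The toy grammar below is invented for illustration and is far simpler than the hierarchical Bayesian model the paper describes.

```python
import random

# A toy PCFG over logical laws: each nonterminal maps to (probability, expansion) pairs.
GRAMMAR = {
    "LAW":  [(1.0, ["PRED", "(X) -> ", "PRED", "(X)"])],
    "PRED": [(0.4, ["magnetic"]), (0.3, ["attracts"]), (0.3, ["conducts"])],
}

def sample(symbol: str) -> str:
    """Recursively expand a symbol; unknown symbols are terminals."""
    if symbol not in GRAMMAR:
        return symbol
    r, acc = random.random(), 0.0
    for p, expansion in GRAMMAR[symbol]:
        acc += p
        if r <= acc:
            return "".join(sample(s) for s in expansion)
    return ""

random.seed(1)
for _ in range(3):
    print(sample("LAW"))  # e.g. "attracts(X) -> magnetic(X)"
```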
The Trimeric Model: A New Model of Periodontal Treatment Planning
Tarakji, Bassel
2014-01-01
Treatment of periodontal disease is a complex and multidisciplinary procedure, requiring periodontal, surgical, restorative, and orthodontic treatment modalities. Several authors have attempted to formulate models for periodontal treatment that order the treatment steps in a logical and easy-to-remember manner. In this article, we discuss two models of periodontal treatment planning from two of the most well-known international textbooks in the specialty of periodontics. We then modify them to arrive at a new model of periodontal treatment planning, the Trimeric Model. Adding restorative and orthodontic interrelationships with periodontal treatment allows us to expand this model into the Extended Trimeric Model of periodontal treatment planning. These models will provide a logical framework and a clear order for the treatment of periodontal disease for general practitioners and periodontists alike. PMID:25177662
Cell-to-Cell Communication Circuits: Quantitative Analysis of Synthetic Logic Gates
Hoffman-Sommer, Marta; Supady, Adriana; Klipp, Edda
2012-01-01
One of the goals in the field of synthetic biology is the construction of cellular computation devices that could function in a manner similar to electronic circuits. To this end, attempts are made to create biological systems that function as logic gates. In this work we present a theoretical quantitative analysis of a synthetic cellular logic-gates system, which has been implemented in cells of the yeast Saccharomyces cerevisiae (Regot et al., 2011). It exploits endogenous MAP kinase signaling pathways. The novelty of the system lies in the compartmentalization of the circuit where all basic logic gates are implemented in independent single cells that can then be cultured together to perform complex logic functions. We have constructed kinetic models of the multicellular IDENTITY, NOT, OR, and IMPLIES logic gates, using both deterministic and stochastic frameworks. All necessary model parameters are taken from literature or estimated based on published kinetic data, in such a way that the resulting models correctly capture important dynamic features of the included mitogen-activated protein kinase pathways. We analyze the models in terms of parameter sensitivity and we discuss possible ways of optimizing the system, e.g., by tuning the culture density. We apply a stochastic modeling approach, which simulates the behavior of whole populations of cells and allows us to investigate the noise generated in the system; we find that the gene expression units are the major sources of noise. Finally, the model is used for the design of system modifications: we show how the current system could be transformed to operate on three discrete values. PMID:22934039
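Although the paper's kinetic models resolve full MAPK pathway dynamics, the steady-state logic of a two-input gate can be caricatured with Hill functions: each input must exceed its activation threshold for the output to switch on. The parameters below are illustrative defaults, not values fitted to the yeast data.

```python
def hill(x: float, K: float = 1.0, n: float = 4.0) -> float:
    """Hill activation function: fraction of maximal response at input level x."""
    return x**n / (K**n + x**n)

def and_gate(x: float, y: float) -> float:
    """Steady-state output of an idealized two-input AND gate: both pathway
    inputs must be high for appreciable reporter expression."""
    return hill(x) * hill(y)

for x, y in [(0.1, 0.1), (3.0, 0.1), (0.1, 3.0), (3.0, 3.0)]:
    print(f"inputs ({x}, {y}) -> output {and_gate(x, y):.2f}")
```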
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ondrej Linda; Todd Vollmer; Jim Alves-Foss
2011-08-01
Resiliency and cyber security of modern critical infrastructures is becoming increasingly important with the growing number of threats in the cyber-environment. This paper proposes an extension to a previously developed fuzzy logic based anomaly detection network security cyber sensor via incorporating Type-2 Fuzzy Logic (T2 FL). In general, fuzzy logic provides a framework for system modeling in linguistic form capable of coping with imprecise and vague meanings of words. T2 FL is an extension of Type-1 FL which proved to be successful in modeling and minimizing the effects of various kinds of dynamic uncertainties. In this paper, T2 FL provides a basis for robust anomaly detection and cyber security state awareness. In addition, the proposed algorithm was specifically developed to comply with the constrained computational requirements of low-cost embedded network security cyber sensors. The performance of the system was evaluated on a set of network data recorded from an experimental cyber-security test-bed.
On the formalization and reuse of scientific research.
King, Ross D; Liakata, Maria; Lu, Chuan; Oliver, Stephen G; Soldatova, Larisa N
2011-10-07
The reuse of scientific knowledge obtained from one investigation in another investigation is basic to the advance of science. Scientific investigations should therefore be recorded in ways that promote the reuse of the knowledge they generate. The use of logical formalisms to describe scientific knowledge has potential advantages in facilitating such reuse. Here, we propose a formal framework for using logical formalisms to promote reuse. We demonstrate the utility of this framework by using it in a worked example from biology: demonstrating cycles of investigation formalization [F] and reuse [R] to generate new knowledge. We first used logic to formally describe a Robot scientist investigation into yeast (Saccharomyces cerevisiae) functional genomics [f(1)]. With Robot scientists, unlike human scientists, the production of comprehensive metadata about their investigations is a natural by-product of the way they work. We then demonstrated how this formalism enabled the reuse of the research in investigating yeast phenotypes [r(1) = R(f(1))]. This investigation found that the removal of non-essential enzymes generally resulted in enhanced growth. The phenotype investigation was then formally described using the same logical formalism as the functional genomics investigation [f(2) = F(r(1))]. We then demonstrated how this formalism enabled the reuse of the phenotype investigation to investigate yeast systems-biology modelling [r(2) = R(f(2))]. This investigation found that yeast flux-balance analysis models fail to predict the observed changes in growth. Finally, the systems biology investigation was formalized for reuse in future investigations [f(3) = F(r(2))]. These cycles of reuse are a model for the general reuse of scientific knowledge.
Rule-Based Runtime Verification
NASA Technical Reports Server (NTRS)
Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik
2003-01-01
We present a rule-based framework for defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics, interval logics, forms of quantified temporal logics, and so on. Our logic, EAGLE, is implemented as a Java library and involves novel techniques for rule definition, manipulation and execution. Monitoring is done on a state-by-state basis, without storing the execution trace.
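State-by-state monitoring without storing the trace can be illustrated for a single property. The sketch below hand-codes a monitor for G(p -> F q), "every p is eventually followed by q", over a finite trace; it shows the general technique only and uses none of EAGLE's actual rule syntax or Java API.

```python
def monitor_response(trace):
    """Monitor G(p -> F q) over a finite trace, one state at a time.
    Each state is a dict of proposition names to booleans; the trace itself
    is never stored, only a single bit of obligation state."""
    pending = False  # an unmatched p is awaiting its q
    for state in trace:
        if state.get("q"):
            pending = False   # outstanding obligation discharged
        elif state.get("p"):
            pending = True    # new obligation opened
    # On a finite trace, an open obligation at the end means non-satisfaction.
    return not pending

trace = [{"p": True}, {}, {"q": True}, {"p": True}]
print(monitor_response(trace))  # False: the final p was never answered by q
```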
Notes on stochastic (bio)-logic gates: computing with allosteric cooperativity
Agliari, Elena; Altavilla, Matteo; Barra, Adriano; Dello Schiavo, Lorenzo; Katz, Evgeny
2015-01-01
Recent experimental breakthroughs have finally made it possible to implement in-vitro reaction kinetics (so-called enzyme-based logic) that code for two-input logic gates and mimic the stochastic AND (and NAND) as well as the stochastic OR (and NOR). This accomplishment, together with the already-known single-input gates (performing as YES and NOT), provides a logic base and paves the way to the development of powerful biotechnological devices. However, as biochemical systems are always affected by the presence of noise (e.g., thermal), standard logic is not the correct theoretical reference framework; rather, we show that statistical mechanics can serve this purpose. Here we formulate a complete statistical mechanical description of the Monod-Wyman-Changeux allosteric model for both single- and double-ligand systems, with the purpose of exploring their practical capabilities to express noisy logical operators and/or perform stochastic logical operations. Mixing statistical mechanics with logic, and testing the resulting findings quantitatively on the available biochemical data, we successfully revise the concept of cooperativity (and anti-cooperativity) for allosteric systems, with particular emphasis on its computational capabilities, the related ranges and scaling of the involved parameters, and its differences from classical cooperativity (and anti-cooperativity). PMID:25976626
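The single-ligand case of the Monod-Wyman-Changeux model already shows where the logic comes from: ligand binding shifts the R/T conformational balance, and thresholding the resulting sigmoidal activation yields a noisy YES gate. The sketch below uses textbook MWC algebra with invented parameter values; the paper's double-ligand formulation and full statistical-mechanical treatment go well beyond it.

```python
def mwc_active_fraction(x: float, L: float = 1000.0,
                        K_R: float = 1.0, K_T: float = 100.0, n: int = 4) -> float:
    """Fraction of MWC proteins in the active R state at ligand concentration x.
    L is the T/R equilibrium constant without ligand; K_R and K_T are the
    dissociation constants of the two conformations; n is the number of sites."""
    r = (1 + x / K_R) ** n
    t = L * (1 + x / K_T) ** n
    return r / (r + t)

def yes_gate(x: float, threshold: float = 0.5) -> int:
    """Stochastic-logic reading: threshold the sigmoidal response."""
    return int(mwc_active_fraction(x) >= threshold)

for x in [0.1, 1.0, 10.0]:
    print(x, round(mwc_active_fraction(x), 3), yes_gate(x))
```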
Reasoning about real-time systems with temporal interval logic constraints on multi-state automata
NASA Technical Reports Server (NTRS)
Gabrielian, Armen
1991-01-01
Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high-level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to formally model a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises rather than merely from what is true in a system.
Person-centered nursing practice with older people in Ireland.
Landers, Margaret G; McCarthy, Geraldine M
2007-01-01
This column presents an analysis of McCormack's conceptual framework for person-centered practice with older people as a theoretical basis for the delivery of care of older adults in an Irish context. The evaluative process is guided by the framework proposed by Fawcett (2000) for the analysis and evaluation of conceptual models of nursing. The historical evolution, philosophical claims, and an overview of the content of the model are addressed. The following criteria are then applied: logical congruence, the generation of the theory, the credibility of the model, and the contribution of the model to the discipline of nursing.
Implementing Eco-Logical 2014-2015 Annual Report
DOT National Transportation Integrated Search
2015-12-01
The Eco-Logical approach offers an ecosystem-based framework for integrated infrastructure and natural resource planning, project development, and delivery. The 2014/2015 Implementing Eco-Logical Program Annual Report provides updates on the Federal ...
Logics of Business Education for Sustainability
ERIC Educational Resources Information Center
Andersson, Pernilla; Öhman, Johan
2016-01-01
This paper explores various kinds of logics of "business education for sustainability" and how these "logics" position the business person as a subject, based on eight teachers' reasoning about their own practices. The concept of logics developed within a discourse-theoretical framework is employed to analyse the teachers' reasoning.…
Teaching Religious Doubt with Toulmin's Model of Reasoning
ERIC Educational Resources Information Center
Horne, Milton P.
2008-01-01
Teaching students to doubt, that is, to "test," theological arguments as one might test any other kind of knowledge is challenging in that the warrant for such testing is not immediately clear. Stephen Toulmin, Richard Rieke, and Allan Janik's model of reasoning provides a conceptual framework that demonstrates the logical relationships between a…
2013/2014 Eco-Logical program annual report
DOT National Transportation Integrated Search
2014-12-01
The Eco-Logical approach offers an ecosystem-based framework for integrated infrastructure and natural resource planning, project development, and delivery. The 2013/2014 Eco-Logical Program Annual Report provides updates on the Federal Highway Admin...
Conceptualising the effectiveness of impact assessment processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chanchitpricha, Chaunjit, E-mail: chaunjit@g.sut.ac.th; Bond, Alan, E-mail: alan.bond@uea.ac.uk; Unit for Environmental Sciences and Management, School of Geo and Spatial Sciences, North West University
2013-11-15
This paper aims at conceptualising the effectiveness of impact assessment processes through the development of a literature-based framework of criteria to measure impact assessment effectiveness. Four categories of effectiveness were established: procedural, substantive, transactive and normative, each containing a number of criteria; no studies have previously brought together all four of these categories into such a comprehensive, criteria-based framework and undertaken systematic evaluation of practice. The criteria can be mapped within a cycle (or cycles) of evaluation, based on the 'logic model', at the stages of input, process, output and outcome to enable the identification of connections between the criteria across the categories of effectiveness. This framework is considered to have potential application in measuring the effectiveness of many impact assessment processes, including strategic environmental assessment (SEA), environmental impact assessment (EIA), social impact assessment (SIA) and health impact assessment (HIA). -- Highlights: • Conceptualising effectiveness of impact assessment processes. • Identification of factors influencing effectiveness of impact assessment processes. • Development of criteria within a framework for evaluating IA effectiveness. • Applying the logic model to examine connections between effectiveness criteria.
Ostrowski, M; Paulevé, L; Schaub, T; Siegel, A; Guziolowski, C
2016-11-01
Boolean networks (and more general logic models) are useful frameworks for studying signal transduction across multiple pathways. Logic models can be learned from a prior knowledge network structure and multiplex phosphoproteomics data. However, most efficient and scalable training methods focus on the comparison of two time-points and assume that the system has reached an early steady state. In this paper, we generalize such a learning procedure to take into account the time series traces of phosphoproteomics data in order to discriminate Boolean networks according to their transient dynamics. To that end, we identify a necessary condition that must be satisfied by the dynamics of a Boolean network to be consistent with a discretized time series trace. Based on this condition, we use Answer Set Programming to compute an over-approximation of the set of Boolean networks which fit best with experimental data, and we provide the corresponding encodings. Combined with model-checking approaches, we end up with a global learning algorithm. Our approach is able to learn logic models with a true positive rate higher than 78% in two case studies of mammalian signaling networks; for a larger case study, our method provides optimal answers after 7 min of computation. We quantified the gain in prediction precision of our method compared to learning approaches based on static data. Finally, as an application, our method proposes erroneous time-points in the time series data with respect to the optimal learned logic models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
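The necessary condition exploited here can be illustrated in miniature: simulate a candidate Boolean network and ask whether each successive discretized measurement follows from its predecessor. The toy network and trace below are invented, use simple synchronous updates, and do not reproduce the paper's Answer Set Programming encodings or its treatment of update semantics.

```python
# Toy Boolean network: each node's update is a function of the current state.
rules = {
    "A": lambda s: s["C"],                 # A(t+1) = C(t)
    "B": lambda s: s["A"] and not s["C"],  # B(t+1) = A(t) AND NOT C(t)
    "C": lambda s: not s["B"],             # C(t+1) = NOT B(t)
}

def step(state):
    """One synchronous update of every node."""
    return {node: f(state) for node, f in rules.items()}

def consistent_with_trace(trace):
    """Check that each observation follows from the previous one under the rules."""
    return all(step(trace[i]) == trace[i + 1] for i in range(len(trace) - 1))

# Hypothetical discretized phosphoproteomics trace (True = phosphorylated).
trace = [
    {"A": True, "B": False, "C": True},
    {"A": True, "B": False, "C": True},
]
print(consistent_with_trace(trace))  # True: this network can explain the trace
```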
Mixing Categories and Modal Logics in the Quantum Setting
NASA Astrophysics Data System (ADS)
Cinà, Giovanni
The study of the foundations of Quantum Mechanics, especially after the advent of Quantum Computation and Information, has benefited from the application of category-theoretic tools and modal logics to the analysis of Quantum processes: we witness a wealth of theoretical frameworks cast in either of the two languages. This paper explores the interplay of the two formalisms in the peculiar context of Quantum Theory. After a review of some influential abstract frameworks, we show how different modal logic frames can be extracted from the category of finite dimensional Hilbert spaces, connecting the Categorical Quantum Mechanics approach to some modal logics that have been proposed for Quantum Computing. We then apply a general version of the same technique to two other categorical frameworks, the 'topos approach' of Doering and Isham and the sheaf-theoretic work on contextuality by Abramsky and Brandenburger, suggesting how some key features can be expressed with modal languages.
Diehl, Glen; Major, Solomon
2015-01-01
Measuring the effectiveness of military Global Health Engagements (GHEs) has become an area of increasing interest to the military medical field. As a result, there have been efforts to more logically and rigorously evaluate GHE projects and programs; many of these have been based on the Logic and Results Frameworks. However, while these Frameworks are apt and appropriate planning tools, they are not ideally suited to measuring programs' effectiveness. This article introduces military medicine professionals to the Measures of Effectiveness for Defense Engagement and Learning (MODEL) program, which implements a new method of assessment, one that seeks to rigorously use Measures of Effectiveness (vs. Measures of Performance) to gauge programs' and projects' success and fidelity to Theater Campaign goals. While the MODEL method draws on the Logic and Results Frameworks where appropriate, it goes beyond their planning focus by using the latest social scientific and econometric evaluation methodologies to link on-the-ground GHE "lines of effort" to the realization of national and strategic goals and end-states. It is hoped these methods will find use beyond the MODEL project itself, and will catalyze a new body of rigorous, empirically based work, which measures the effectiveness of a broad spectrum of GHE and security cooperation activities. We based our strategies on the principle that it is much more cost-effective to prevent conflicts than it is to stop one once it's started. I cannot overstate the importance of our theater security cooperation programs as the centerpiece to securing our Homeland from the irregular and catastrophic threats of the 21st Century.-GEN James L. Jones, USMC (Ret.). Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
S3DB core: a framework for RDF generation and management in bioinformatics infrastructures
2010-01-01
Background Biomedical research is set to greatly benefit from the use of semantic web technologies in the design of computational infrastructure. However, beyond well-defined research initiatives, substantial issues of data heterogeneity, source distribution, and privacy currently stand in the way of the personalization of Medicine. Results A computational framework for bioinformatic infrastructure was designed to deal with the heterogeneous data sources and the sensitive mixture of public and private data that characterizes the biomedical domain. This framework consists of a logical model built with semantic web tools, coupled with a Markov process that propagates user operator states. An accompanying open source prototype was developed to meet a series of applications that range from collaborative multi-institution data acquisition efforts to data analysis applications that need to quickly traverse complex data structures. This report describes the two abstractions underlying the S3DB-based infrastructure, logical and numerical, and discusses its generality beyond the immediate confines of existing implementations. Conclusions The emergence of the "web as a computer" requires a formal model for the different functionalities involved in reading and writing to it. The S3DB core model proposed was found to address the design criteria of biomedical computational infrastructure, such as those supporting large scale multi-investigator research, clinical trials, and molecular epidemiology. PMID:20646315
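The "RDF generation" half of such an infrastructure reduces, at its simplest, to emitting subject-predicate-object triples. Below is a minimal sketch using the rdflib library (version 6 or later is assumed so that serialize returns a string); the namespace, resource names, and predicates are invented placeholders, and none of S3DB's operator-state model is represented.

```python
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/study/")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Assert a few triples about a fictional clinical-trial sample.
sample = URIRef(EX["sample42"])
g.add((sample, EX["collectedAt"], Literal("2009-11-03")))
g.add((sample, EX["belongsToCohort"], EX["cohortA"]))

print(g.serialize(format="turtle"))  # emit the graph in Turtle syntax
```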
EAGLE can do Efficient LTL Monitoring
NASA Technical Reports Server (NTRS)
Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik
2003-01-01
We briefly present a rule-based framework, called EAGLE, that has been shown to be capable of defining and implementing finite trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time logics, interval logics, forms of quantified temporal logics, and so on. In this paper we show how EAGLE can do linear temporal logic (LTL) monitoring in an efficient way. We give an upper bound on the space and time complexity of this monitoring.
From complexity to reality: providing useful frameworks for defining systems of care.
Levison-Johnson, Jody; Wenz-Gross, Melodie
2010-02-01
Because systems of care are not uniform across communities, there is a need to better document the process of system development, define the complexity, and describe the development of the structures, processes, and relationships within communities engaged in system transformation. By doing so, we begin to identify the necessary and sufficient components that, at minimum, move us from usual care within a naturally occurring system to a true system of care. Further, by documenting and measuring the degree to which key components are operating, we may be able to identify the most successful strategies in creating system reform. The theory of change and logic model offer a useful framework for communities to begin the adaptive work necessary to effect true transformation. Using the experience of two system of care communities, this new definition and the utility of a theory of change and logic model framework for defining local system transformation efforts will be discussed. Implications for the field, including the need to further examine the natural progression of systems change and to create quantifiable measures of transformation, will be raised as new challenges for the evolving system of care movement.
Polynomial algebra of discrete models in systems biology.
Veliz-Cuba, Alan; Jarrah, Abdul Salam; Laubenbacher, Reinhard
2010-07-01
An increasing number of discrete mathematical models are being published in Systems Biology, ranging from Boolean network models to logical models and Petri nets. They are used to model a variety of biochemical networks, such as metabolic networks, gene regulatory networks and signal transduction networks. There is increasing evidence that such models can capture key dynamic features of biological networks and can be used successfully for hypothesis generation. This article provides a unified framework that can aid the mathematical analysis of Boolean network models, logical models and Petri nets. They can be represented as polynomial dynamical systems, which allows the use of a variety of mathematical tools from computer algebra for their analysis. Algorithms are presented for the translation into polynomial dynamical systems. Examples are given of how polynomial algebra can be used for the model analysis. Contact: alanavc@vt.edu. Supplementary data are available at Bioinformatics online.
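The translation at the heart of this framework is elementary: over the field F2, NOT x = x + 1, x AND y = xy, and x OR y = x + y + xy, so any Boolean update rule becomes a polynomial. A small sketch of the encoding follows; the example update rule is invented.

```python
# Encode Boolean connectives as polynomial arithmetic over F2 = {0, 1}.
def NOT(x): return (x + 1) % 2
def AND(x, y): return (x * y) % 2
def OR(x, y): return (x + y + x * y) % 2

# Verify the OR encoding against its truth table.
for x in (0, 1):
    for y in (0, 1):
        assert OR(x, y) == int(bool(x) or bool(y))

# A Boolean update rule f(x1, x2) = x1 OR (NOT x2) as a polynomial map on F2^2.
def f(x1, x2):
    return OR(x1, NOT(x2))  # expands to x2 + x1*x2 + 1 (mod 2)

print([f(x1, x2) for x1 in (0, 1) for x2 in (0, 1)])
```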
Weighted Description Logics Preference Formulas for Multiattribute Negotiation
NASA Astrophysics Data System (ADS)
Ragone, Azzurra; di Noia, Tommaso; Donini, Francesco M.; di Sciascio, Eugenio; Wellman, Michael P.
We propose a framework to compute the utility of an agreement with respect to a preference set in a negotiation process. In particular, we refer to preferences expressed as weighted formulas in a decidable fragment of First-order Logic, and agreements expressed as a formula. We ground our framework in Description Logics (DL) endowed with disjunction, to be compliant with Semantic Web technologies. A logic-based approach to preference representation allows one, when a background knowledge base is exploited, to relax the often unrealistic assumption of additive independence among attributes. We provide suitable definitions of the problem and present algorithms to compute utility in our setting. We also validate our approach through an experimental evaluation.
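Stripped of the Description Logic machinery, the utility computation is a weighted count of satisfied preference formulas. The propositional sketch below conveys the shape of the computation; the attribute names and weights are invented, and the DL reasoning over a background ontology that the paper relies on is not modeled.

```python
# Preferences as (predicate over an agreement, weight) pairs: a propositional
# stand-in for weighted DL formulas.
preferences = [
    (lambda a: a["warranty_years"] >= 2, 0.5),
    (lambda a: a["color"] == "black",    0.2),
    (lambda a: a["price"] <= 1000,       0.3),
]

def utility(agreement) -> float:
    """Sum the weights of the preference formulas the agreement satisfies."""
    return sum(w for formula, w in preferences if formula(agreement))

offer = {"warranty_years": 3, "color": "silver", "price": 950}
print(utility(offer))  # 0.5 + 0.3 = 0.8
```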
NASA Technical Reports Server (NTRS)
Guarro, Sergio B.
2010-01-01
This report validates and documents the detailed features and practical application of the framework for software-intensive digital systems risk assessment and risk-informed safety assurance presented in the NASA PRA Procedures Guide for Managers and Practitioners. This framework, called herein the "Context-based Software Risk Model" (CSRM), enables the assessment of the contribution of software and software-intensive digital systems to overall system risk, in a manner which is entirely compatible and integrated with the format of a "standard" Probabilistic Risk Assessment (PRA), as currently documented and applied for NASA missions and applications. The CSRM also provides a risk-informed path and criteria for conducting organized and systematic digital system and software testing so that, within this risk-informed paradigm, the achievement of a quantitatively defined level of safety and mission success assurance may be targeted and demonstrated. The framework is based on the concept of context-dependent software risk scenarios and on the modeling of such scenarios via the use of traditional PRA techniques - i.e., event trees and fault trees - in combination with more advanced modeling devices such as the Dynamic Flowgraph Methodology (DFM) or other dynamic logic-modeling representations. The scenarios can be synthesized and quantified in a conditional logic and probabilistic formulation. The application of the CSRM method documented in this report refers to the MiniAERCam system designed and developed by the NASA Johnson Space Center.
Developing a theoretical framework for complex community-based interventions.
Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana
2014-01-01
Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions, using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating mechanisms by which the independent variables lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.
Toward a comprehensive areal model of earthquake-induced landslides
Miles, S.B.; Keefer, D.K.
2009-01-01
This paper provides a review of regional-scale modeling of earthquake-induced landslide hazard with respect to the needs for disaster risk reduction and sustainable development. Based on this review, it sets out important research themes and suggests computing with words (CW), a methodology that includes fuzzy logic systems, as a fruitful modeling methodology for addressing many of these research themes. A range of research, reviewed here, has been conducted applying CW to various aspects of earthquake-induced landslide hazard zonation, but none facilitates comprehensive modeling of all types of earthquake-induced landslides. A new comprehensive areal model of earthquake-induced landslides (CAMEL) is introduced here that was developed using fuzzy logic systems. CAMEL provides an integrated framework for modeling all types of earthquake-induced landslides using geographic information systems. CAMEL is designed to facilitate quantitative and qualitative representation of terrain conditions and knowledge about these conditions on the likely areal concentration of each landslide type. CAMEL is highly modifiable and adaptable; new knowledge can be easily added, while existing knowledge can be changed to better match local knowledge and conditions. As such, CAMEL should not be viewed as a complete alternative to other earthquake-induced landslide models. CAMEL provides an open framework for incorporating other models, such as Newmark's displacement method, together with previously incompatible empirical and local knowledge. © 2009 ASCE.
Improving ontology matching with propagation strategy and user feedback
NASA Astrophysics Data System (ADS)
Li, Chunhua; Cui, Zhiming; Zhao, Pengpeng; Wu, Jian; Xin, Jie; He, Tianxu
2015-07-01
Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. The existing approach requires a threshold to produce matching candidates and uses a small set of constraints acting as a filter to select the final alignments. We introduce a novel match propagation strategy to model the influences between potential entity mappings across ontologies, which can help to identify correct correspondences and produce missed correspondences. The estimation of an appropriate threshold is a difficult task. We propose an interactive method for threshold selection through which we obtain an additional measurable improvement. Experiments on a public dataset demonstrate the effectiveness of the proposed approach in terms of the quality of the resulting alignment.
A Logical Framework for Distributed Data
1990-11-01
Broome, Paul; Broome, Barbara (U.S. Army Ballistic Research Laboratory)
Outcomes-Balanced Framework for Emergency Management: A Predictive Model for Preparedness
2013-09-01
Total Quality Management (TQM) was developed by W. Edwards Deming in the post-World War II reconstruction period in Japan. Surviving front matter lists figures including "From Total Quality Management Principles" and an "Outcomes Logic Model", and defines the acronyms THIRA (Threat and Hazard Identification and Risk Assessment) and UTL (Universal Task List).
Nishimura, Stephanie T; Hishinuma, Earl S; Goebert, Deborah A; Onoye, Jane M M; Sugimoto-Matsuda, Jeanelle J
2018-02-01
To provide one model for evaluating academic research centers, given their vital role in addressing public health issues. A theoretical framework is described for a comprehensive evaluation plan for research centers. This framework is applied to one specific center by describing the center's Logic Model and Evaluation Plan, including a sample of the center's activities. Formative and summative evaluation information is summarized. In addition, a summary of outcomes is provided: improved practice and policy; reduction of risk factors and increase in protective factors; reduction of interpersonal youth violence in the community; and national prototype for prevention of interpersonal youth violence. Research centers are important mechanisms to advance science and improve people's quality of life. Because of their more infrastructure-intensive and comprehensive approach, they also require substantial resources for success, and thus, also require careful accountability. It is therefore important to comprehensively evaluate these centers. As provided herein, a more systematic and structured approach utilizing logic models, an evaluation plan, and successful processes can provide research centers with a functionally useful method in their evaluation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Understanding the dynamic effects of returning patients toward emergency department density
NASA Astrophysics Data System (ADS)
Ahmad, Norazura; Zulkepli, Jafri; Ramli, Razamin; Ghani, Noraida Abdul; Teo, Aik Howe
2017-11-01
This paper presents the development of a dynamic hypothesis for the effect of returning patients on emergency department (ED) density. A logical tree from the Theory of Constraints known as the Current Reality Tree was used to identify the key variables. Then, a hypothetical framework portraying the interrelated variables and their influencing relationships was developed using causal loop diagrams (CLD). The conceptual framework was designed as the basis for the development of a system dynamics model.
Metalevel programming in robotics: Some issues
NASA Technical Reports Server (NTRS)
Kumarn, A.; Parameswaran, N.
1987-01-01
Computing in robotics has two important requirements: efficiency and flexibility. Algorithms for robot actions are usually implemented in procedural languages such as VAL and AL. But, since their excessive bindings create inflexible structures of computation, it is proposed that Logic Programming is a more suitable language for robot programming due to its non-determinism, declarative nature, and provision for metalevel programming. Logic Programming, however, results in inefficient computations. As a solution to this problem, the authors discuss a framework in which controls can be described to improve efficiency. They divide controls into (1) in-code and (2) metalevel controls, and discuss them with reference to rule selection and dataflow. The merit of Logic Programming is illustrated by modelling the motion of a robot from one point to another while avoiding obstacles.
An Automated Design Framework for Multicellular Recombinase Logic.
Guiziou, Sarah; Ulliana, Federico; Moreau, Violaine; Leclere, Michel; Bonnet, Jerome
2018-05-18
Tools to systematically reprogram cellular behavior are crucial to address pressing challenges in manufacturing, environment, or healthcare. Recombinases can very efficiently encode Boolean and history-dependent logic in many species, yet current designs are performed on a case-by-case basis, limiting their scalability and requiring time-consuming optimization. Here we present an automated workflow for designing recombinase logic devices executing Boolean functions. Our theoretical framework uses a reduced library of computational devices distributed into different cellular subpopulations, which are then composed in various manners to implement all desired logic functions at the multicellular level. Our design platform called CALIN (Composable Asynchronous Logic using Integrase Networks) is broadly accessible via a web server, taking truth tables as inputs and providing corresponding DNA designs and sequences as outputs (available at http://synbio.cbs.cnrs.fr/calin ). We anticipate that this automated design workflow will streamline the implementation of Boolean functions in many organisms and for various applications.
NASA Astrophysics Data System (ADS)
Bandrowski, D.; Lai, Y.; Bradley, N.; Gaeuman, D. A.; Murauskas, J.; Som, N. A.; Martin, A.; Goodman, D.; Alvarez, J.
2014-12-01
In the field of river restoration sciences there is a growing need for analytical modeling tools and quantitative processes to help identify and prioritize project sites. 2D hydraulic models have become more common in recent years and, with the availability of robust data sets and computing technology, it is now possible to evaluate large river systems at the reach scale. The Trinity River Restoration Program is now analyzing a 40 mile segment of the Trinity River to determine priority and implementation sequencing for its Phase II rehabilitation projects. A comprehensive approach and quantitative tool has recently been developed to analyze this complex river system, referred to as 2D-Hydrodynamic Based Logic Modeling (2D-HBLM). This tool utilizes various hydraulic output parameters combined with biological, ecological, and physical metrics at user-defined spatial scales. These metrics and their associated algorithms are the underpinnings of the 2D-HBLM habitat module used to evaluate geomorphic characteristics, riverine processes, and habitat complexity. The habitat metrics are further integrated into a comprehensive Logic Model framework to perform statistical analyses to assess project prioritization. The Logic Model will analyze various potential project sites by evaluating connectivity using principal component methods. The 2D-HBLM tool will help inform management and decision makers by using a quantitative process to optimize desired response variables while balancing important limiting factors in determining the highest priority locations within the river corridor to implement restoration projects. Effective river restoration prioritization starts with well-crafted goals that identify the biological objectives, address underlying causes of habitat change, and recognize that social, economic, and land use limiting factors may constrain restoration options (Bechie et al. 2008). Applying natural resources management actions, like restoration prioritization, is essential for successful project implementation (Conroy and Peterson, 2013). Evaluating tradeoffs and examining alternatives to improve fish habitat through optimization modeling is not just a trend but rather a scientific strategy that management should embrace and apply in its decision framework.
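A hedged sketch of the prioritization step: score candidate sites on habitat metrics and rank them along the first principal component. Site names, metric values, and the PC1-as-priority choice are assumptions for illustration, not outputs of 2D-HBLM:

```python
# Hypothetical sketch: rank candidate restoration sites by projecting
# standardized habitat metrics onto the first principal component.
import numpy as np

sites = ["RM-72", "RM-81", "RM-95"]        # hypothetical river-mile sites
metrics = np.array([                        # rows: sites; columns: metrics
    [0.42, 1.8, 0.65],   # habitat complexity, depth variance, connectivity
    [0.71, 2.6, 0.80],
    [0.30, 1.1, 0.40],
])

Z = (metrics - metrics.mean(0)) / metrics.std(0)   # standardize each metric
_, _, Vt = np.linalg.svd(Z, full_matrices=False)   # PCA via SVD
load = Vt[0] if Vt[0].sum() > 0 else -Vt[0]        # orient: higher = better
scores = Z @ load                                  # site scores on PC1

for site, score in sorted(zip(sites, scores), key=lambda t: -t[1]):
    print(f"{site}: priority score {score:+.2f}")
```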
Automated Analysis of Stateflow Models
NASA Technical Reports Server (NTRS)
Bourbouh, Hamza; Garoche, Pierre-Loic; Garion, Christophe; Gurfinkel, Arie; Kahsaia, Temesghen; Thirioux, Xavier
2017-01-01
Stateflow is a widely used modeling framework for embedded and cyber-physical systems in which control software interacts with physical processes. In this work, we present a fully automated safety verification technique for Stateflow models. Our approach is twofold: (i) we faithfully compile Stateflow models into hierarchical state machines, and (ii) we use an automated logic-based verification engine to decide the validity of safety properties. The starting point of our approach is a denotational semantics of Stateflow. We propose a compilation process using continuation-passing style (CPS) denotational semantics. Our compilation technique preserves the structural and modal behavior of the system. The overall approach is implemented as an open-source toolbox that can be integrated into the existing MathWorks Simulink/Stateflow modeling framework. We present preliminary experimental evaluations that illustrate the effectiveness of our approach in code generation and safety verification of industrial-scale Stateflow models.
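Once a chart is compiled to a flat transition system, checking a safety property reduces to asking whether a bad state is reachable. A toy sketch follows; the chart and property are invented, and the paper's engine is a logic-based verifier rather than this explicit search:

```python
# Toy sketch: a flattened state machine and an explicit reachability check
# of a safety property ("Fault is never reached").
from collections import deque

transitions = {            # state -> successor states, already flattened
    "Init": ["Ready"],
    "Ready": ["Running", "Fault"],
    "Running": ["Ready", "Done"],
    "Fault": ["Fault"],
    "Done": [],
}

def violates_safety(initial, bad_states):
    """Return a counterexample path to a bad state, or None if safe."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if s in bad_states:                 # reconstruct the trace
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for t in transitions[s]:
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None

print(violates_safety("Init", {"Fault"}))   # ['Init', 'Ready', 'Fault']
```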
Brousselle, Astrid; Lamothe, Lise; Mercier, Céline; Perreault, Michel
2007-02-01
The co-occurrence of mental health and substance use disorders is becoming increasingly recognized as a single problem, and professionals recognize that both should be addressed at the same time. Medical best practices recommend integrated treatment. However, criticisms have arisen, particularly concerning the difficulty of implementing integrated teams in specific health-care contexts and the appropriateness of the proposed model for certain populations. Using logic analysis, we identify the key clinical and organizational factors that contribute to successful implementation. Building on both the professional and organizational literatures on integrated services, we propose a conceptual model that makes it possible to analyze integration processes and places integrated treatment within an interpretative framework. Using this model, it becomes possible to identify key factors necessary to support service integration, and suggest new models of practice adapted to particular contexts.
Terfve, Camille; Cokelaer, Thomas; Henriques, David; MacNamara, Aidan; Goncalves, Emanuel; Morris, Melody K; van Iersel, Martijn; Lauffenburger, Douglas A; Saez-Rodriguez, Julio
2012-10-18
Cells process signals using complex and dynamic networks. Studying how this is performed in a context and cell type specific way is essential to understand signaling both in physiological and diseased situations. Context-specific medium/high throughput proteomic data measured upon perturbation is now relatively easy to obtain but formalisms that can take advantage of these features to build models of signaling are still comparatively scarce. Here we present CellNOptR, an open-source R software package for building predictive logic models of signaling networks by training networks derived from prior knowledge to signaling (typically phosphoproteomic) data. CellNOptR features different logic formalisms, from Boolean models to differential equations, in a common framework. These different logic model representations accommodate state and time values with increasing levels of detail. We provide in addition an interface via Cytoscape (CytoCopteR) to facilitate use and integration with Cytoscape network-based capabilities. Models generated with this pipeline have two key features. First, they are constrained by prior knowledge about the network but trained to data. They are therefore context and cell line specific, which results in enhanced predictive and mechanistic insights. Second, they can be built using different logic formalisms depending on the richness of the available data. Models built with CellNOptR are useful tools to understand how signals are processed by cells and how this is altered in disease. They can be used to predict the effect of perturbations (individual or in combinations), and potentially to engineer therapies that have differential effects/side effects depending on the cell type or context.
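The core training idea can be caricatured in a few lines: enumerate candidate-edge subsets, simulate each resulting Boolean network to a steady state, and keep the subset that best fits the measured readouts, with a penalty for model size. Everything below (network, data, scoring) is invented, and CellNOptR itself is an R package with far richer formalisms:

```python
# Invented toy of the training idea behind logic-model fitting: search over
# candidate-edge subsets, simulate the Boolean network, score against data.
from itertools import product

candidate_edges = [("EGF", "Raf"), ("Raf", "ERK"), ("TNF", "ERK")]
data = {                      # stimuli (EGF, TNF) -> measured readout
    (1, 0): {"ERK": 1},       # EGF alone activates ERK
    (0, 1): {"ERK": 0},       # TNF alone does not
}

def simulate(edges, egf, tnf, steps=4):
    """Synchronous Boolean updates (OR over incoming active edges)."""
    state = {"EGF": egf, "TNF": tnf, "Raf": 0, "ERK": 0}
    for _ in range(steps):
        nxt = dict(state)
        for node in ("Raf", "ERK"):
            nxt[node] = int(any(state[s] for s, t in edges if t == node))
        state = nxt
    return state

def cost(selection):
    edges = [e for e, keep in zip(candidate_edges, selection) if keep]
    fit = sum(abs(simulate(edges, *stim)["ERK"] - obs["ERK"])
              for stim, obs in data.items())
    return fit + 0.1 * sum(selection)     # penalize model size

best = min(product([0, 1], repeat=len(candidate_edges)), key=cost)
print([e for e, keep in zip(candidate_edges, best) if keep])
# -> [('EGF', 'Raf'), ('Raf', 'ERK')]
```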
2012-01-01
Background Cells process signals using complex and dynamic networks. Studying how this is performed in a context and cell type specific way is essential to understand signaling both in physiological and diseased situations. Context-specific medium/high throughput proteomic data measured upon perturbation is now relatively easy to obtain but formalisms that can take advantage of these features to build models of signaling are still comparatively scarce. Results Here we present CellNOptR, an open-source R software package for building predictive logic models of signaling networks by training networks derived from prior knowledge to signaling (typically phosphoproteomic) data. CellNOptR features different logic formalisms, from Boolean models to differential equations, in a common framework. These different logic model representations accommodate state and time values with increasing levels of detail. We provide in addition an interface via Cytoscape (CytoCopteR) to facilitate use and integration with Cytoscape network-based capabilities. Conclusions Models generated with this pipeline have two key features. First, they are constrained by prior knowledge about the network but trained to data. They are therefore context and cell line specific, which results in enhanced predictive and mechanistic insights. Second, they can be built using different logic formalisms depending on the richness of the available data. Models built with CellNOptR are useful tools to understand how signals are processed by cells and how this is altered in disease. They can be used to predict the effect of perturbations (individual or in combinations), and potentially to engineer therapies that have differential effects/side effects depending on the cell type or context. PMID:23079107
Hallinan, Christine M
2010-01-01
In this paper, program logic will be used to 'map out' the planning, development and evaluation of the general practice Pap nurse program in the Australian general practice arena. The incorporation of program logic into the evaluative process supports a greater appreciation of the theoretical assumptions and external influences that underpin general practice Pap nurse activity. The creation of a program logic model is a conscious strategy that results in an explicit understanding of the challenges ahead, the resources available and the time frames for outcomes. Program logic also enables a recognition that all players in the general practice arena need to be acknowledged by policy makers, bureaucrats and program designers when addressing, through policy, issues relating to equity and accessibility of health initiatives. Logic modelling allows decision makers to consider the complexities of causal associations when developing health care proposals and programs. It enables the Pap nurse in general practice program to be represented diagrammatically by linking outcomes (short, medium and long term) with both the program activities and program assumptions. The research methodology used in the evaluation of the Pap nurse in general practice program includes a descriptive study design and the incorporation of program logic, with a retrospective analysis of Australian data from 2001 to 2009. For the purposes of gaining both empirical and contextual data for this paper, a data set analysis and literature review were performed. The application of program logic as an evaluative tool for analysis of the Pap PN incentive program facilitates a greater understanding of complex general practice activity triggers, and allows this greater understanding to be incorporated into policy to facilitate Pap PN activity, increase general practice cervical smear rates and ultimately decrease the burden of disease.
Framework for analysis of guaranteed QOS systems
NASA Astrophysics Data System (ADS)
Chaudhry, Shailender; Choudhary, Alok
1997-01-01
Multimedia data is isochronous in nature and entails managing and delivering high volumes of data. Multiprocessors, with their large processing power, vast memory, and fast interconnects, are an ideal candidate for the implementation of multimedia applications. Initially, multiprocessors were designed to execute scientific programs and thus their architecture was optimized to provide low message latency and efficiently support regular communication patterns. Hence, they have a regular network topology and most use wormhole routing. The design offers the benefits of a simple router, small buffer size, and network latency that is almost independent of path length. Among the various multimedia applications, a video on demand (VOD) server is well-suited for implementation using parallel multiprocessors. Logical models for VOD servers are presently mapped onto multiprocessors. Our paper provides a framework for calculating bounds on utilization of system resources with which QoS parameters for each isochronous stream can be guaranteed. Effects of the architecture of multiprocessors, and the efficiency of various logical models and mappings on particular architectures, can be investigated within our framework. Our framework is based on rigorous proofs and provides tight bounds. The results obtained may be used as the basis for admission control tests. To illustrate the versatility of our framework, we provide bounds on utilization for various logical models applied to mesh-connected architectures for a video on demand server. Our results show that wormhole routing can lead to packets waiting for transmission of other packets that apparently share no common resources. This situation is analogous to head-of-the-line blocking. We find that the provision of multiple VCs per link and multiple flit buffers improves utilization (even under guaranteed QoS parameters). This is analogous to parallel iterative matching.
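A minimal sketch of the admission-control test such utilization bounds enable, assuming a per-link bound has already been derived; the capacity, bound value, and routes are invented:

```python
# Invented admission-control test built on a per-link utilization bound of
# the kind the framework derives; capacity, bound, and routes are assumed.

LINK_CAPACITY = 1.0e9      # bits/s per interconnect link (assumption)
UTIL_BOUND = 0.6           # assumed bound under which QoS stays guaranteed

reserved = {}              # link -> bandwidth already reserved (bits/s)

def admit(route, rate_bps):
    """Admit an isochronous stream iff every link on its route stays bounded."""
    if any(reserved.get(link, 0.0) + rate_bps > UTIL_BOUND * LINK_CAPACITY
           for link in route):
        return False
    for link in route:
        reserved[link] = reserved.get(link, 0.0) + rate_bps
    return True

print(admit(["n0-n1", "n1-n2"], 300e6))   # True: both links under the bound
print(admit(["n1-n2"], 350e6))            # False: n1-n2 would exceed 600 Mb/s
```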
EMDS 3.0: A modeling framework for coping with complexity in environmental assessment and planning.
K.M. Reynolds
2006-01-01
EMDS 3.0 is implemented as an ArcMap® extension and integrates the logic engine of NetWeaver® to perform landscape evaluations, and the decision modeling engine of Criterium DecisionPlus® for evaluating management priorities. Key features of the system's evaluation component include abilities to (1) reason about large, abstract, multifaceted ecosystem management...
Mathematics and morphogenesis of cities: A geometrical approach
NASA Astrophysics Data System (ADS)
Courtat, Thomas; Gloaguen, Catherine; Douady, Stephane
2011-03-01
Cities are living organisms. They are out of equilibrium, open systems that never stop developing and sometimes die. The local geography can be compared to a shell constraining its development. In brief, a city’s current layout is a step in a running morphogenesis process. Thus cities display a huge diversity of shapes and none of the traditional models, from random graphs, complex networks theory, or stochastic geometry, takes into account the geometrical, functional, and dynamical aspects of a city in the same framework. We present here a global mathematical model dedicated to cities that permits describing, manipulating, and explaining cities’ overall shape and layout of their street systems. This street-based framework conciliates the topological and geometrical sides of the problem. From the static analysis of several French towns (topology of first and second order, anisotropy, streets scaling) we make the hypothesis that the development of a city follows a logic of division or extension of space. We propose a dynamical model that mimics this logic and that, from simple general rules and a few parameters, succeeds in generating a large diversity of cities and in reproducing the general features the static analysis has pointed out.
Disability Policy Evaluation: Combining Logic Models and Systems Thinking
ERIC Educational Resources Information Center
Claes, Claudia; Ferket, Neelke; Vandevelde, Stijn; Verlet, Dries; De Maeyer, Jessica
2017-01-01
Policy evaluation focuses on the assessment of policy-related personal, family, and societal changes or benefits that follow as a result of the interventions, services, and supports provided to those persons to whom the policy is directed. This article describes a systematic approach to policy evaluation based on an evaluation framework and an…
On Enhancing On-Line Collaboration Using Fuzzy Logic Modeling
ERIC Educational Resources Information Center
Hadjileontiadou, Sofia J.; Nikolaidou, Georgia N.; Hadjileontiadis, Leontios J.; Balafoutas, George N.
2004-01-01
Web-based collaboration calls for professional skills and competences to the benefit of the quality of the collaboration and its output. Within this framework, educational virtual environments may provide a means for training upon these skills and in particular the collaborative ones. On the basis of the existing technological means such training…
ERIC Educational Resources Information Center
Faw, Leyla; Hogue, Aaron; Liddle, Howard A.
2005-01-01
The authors applied contemporary methods from the evaluation literature to measure implementation in a residential treatment program for adolescent substance abuse. A logic model containing two main components was measured. Program structure (adherence to the intended framework of service delivery) was measured using data from daily activity logs…
A Strategic Quality Assurance Framework in an African Higher Education Context
ERIC Educational Resources Information Center
Ansah, Francis
2015-01-01
This study is based on a pragmatist analysis of selected international accounts on quality assurance in higher education. A pragmatist perspective was used to conceptualise a logical internal quality assurance model to embed and support the alignment of graduate competencies in curriculum and assessment of Ghanaian polytechnics. Through focus…
ERIC Educational Resources Information Center
Yu, Chong Ho
Although quantitative research methodology is widely applied by psychological researchers, there is a common misconception that quantitative research is based on logical positivism. This paper examines the relationship between quantitative research and eight major notions of logical positivism: (1) verification; (2) pro-observation; (3)…
An interval logic for higher-level temporal reasoning
NASA Technical Reports Server (NTRS)
Schwartz, R. L.; Melliar-Smith, P. M.; Vogt, F. H.; Plaisted, D. A.
1983-01-01
Prior work explored temporal logics, based on classical modal logics, as a framework for specifying and reasoning about concurrent programs, distributed systems, and communications protocols, and reported on efforts using temporal reasoning primitives to express very high level abstract requirements that a program or system is to satisfy. Based on experience with those primitives, this report describes an Interval Logic that is more suitable for expressing such higher level temporal properties. The report provides a formal semantics for the Interval Logic, and several examples of its use. A description of decision procedures for the logic is also included.
A Policy Language for Modelling Recommendations
NASA Astrophysics Data System (ADS)
Abou El Kalam, Anas; Balbiani, Philippe
As current and emergent applications become more and more complex, most existing security policies and models only consider a yes/no response to access requests. Consequently, modelling, formalizing and implementing permissions, obligations and prohibitions do not cover the richness of all the possible scenarios. In fact, several applications have access rules with the recommendation access modality. In this paper we focus on the problem of formalizing security policies with recommendation needs. The aim is to provide a generic, domain-independent formal system for modelling not only permissions, prohibitions and obligations, but also recommendations. In this respect, we present our logic-based language, its semantics and truth conditions, and our axioms as well as inference rules. We also give a representative use case with our specification of recommendation requirements. Finally, we explain how our logical framework could be used to query the security policy and to check its consistency.
Brousselle, Astrid; Lamothe, Lise; Mercier, Céline; Perreault, Michel
2012-01-01
The co-occurrence of mental health and substance use disorders is becoming increasingly recognized as a single problem, and professionals recognize that both should be addressed at the same time. Medical best practices recommend integrated treatment. However, criticisms have arisen, particularly concerning the difficulty of implementing integrated teams in specific health-care contexts and the appropriateness of the proposed model for certain populations. Using logic analysis, we identify the key clinical and organizational factors that contribute to successful implementation. Building on both the professional and organizational literatures on integrated services, we propose a conceptual model that makes it possible to analyze integration processes and places integrated treatment within an interpretative framework. Using this model, it becomes possible to identify key factors necessary to support service integration, and suggest new models of practice adapted to particular contexts. PMID:17689316
Obfuscation Framework Based on Functionally Equivalent Combinatorial Logic Families
2008-03-01
United States policy strongly encourages the sale and transfer of some military equipment to foreign governments and makes it easier for...
The Flow Engine Framework: A Cognitive Model of Optimal Human Experience
Šimleša, Milija; Guegan, Jérôme; Blanchard, Edouard; Tarpin-Bernard, Franck; Buisine, Stéphanie
2018-01-01
Flow is a well-known concept in the fields of positive and applied psychology. Examination of a large body of flow literature suggests there is a need for a conceptual model rooted in a cognitive approach to explain how this psychological phenomenon works. In this paper, we propose the Flow Engine Framework, a theoretical model explaining dynamic interactions between rearranged flow components and fundamental cognitive processes. Using an IPO framework (Inputs – Processes – Outputs) including a feedback process, we organize flow characteristics into three logically related categories: inputs (requirements for flow), mediating and moderating cognitive processes (attentional and motivational mechanisms) and outputs (subjective and objective outcomes), describing the process of flow. Comparing flow with an engine, inputs are depicted as fuel, core processes as cylinder strokes, and outputs as the power created to provide motion. PMID:29899807
A Current Logical Framework: The Propositional Fragment
2003-01-01
Under the Curry-Howard isomorphism, M can also be read as a proof term, and A as a proposition of intuitionistic linear logic in its formulation as DILL...the obligation to ensure that the underlying logic (via the Curry-Howard isomorphism, if you like) is sensible. In particular, the principles of...
Baxter, S; Killoran, A; Kelly, M P; Goyder, E
2010-02-01
The nature of public health evidence presents challenges for conventional systematic review processes, with increasing recognition of the need to include a broader range of work including observational studies and qualitative research, yet with methods to combine diverse sources remaining underdeveloped. The objective of this paper is to report the application of a new approach for review of evidence in the public health sphere. The method enables a diverse range of evidence types to be synthesized in order to examine potential relationships between a public health environment and outcomes. The study drew on previous work by the National Institute for Health and Clinical Excellence on conceptual frameworks. It applied and further extended this work to the synthesis of evidence relating to one particular public health area: the enhancement of employee mental well-being in the workplace. The approach utilized thematic analysis techniques from primary research, together with conceptual modelling, to explore potential relationships between factors and outcomes. The method enabled a logic framework to be built from a diverse document set that illustrates how elements and associations between elements may impact on the well-being of employees. Whilst recognizing potential criticisms of the approach, it is suggested that logic models can be a useful way of examining the complexity of relationships between factors and outcomes in public health, and of highlighting potential areas for interventions and further research. The use of techniques from primary qualitative research may also be helpful in synthesizing diverse document types. Copyright 2010 The Royal Society for Public Health. Published by Elsevier Ltd. All rights reserved.
Retro-causation, Minimum Contradictions and Non-locality
NASA Astrophysics Data System (ADS)
Kafatos, Menas; Nassikas, Athanassios A.
2011-11-01
Retro-causation has been experimentally verified by Bem and proposed by Kafatos in the form of space-time non-locality in the quantum framework. Every theory includes, beyond its specific axioms, the principles of logical communication (logical language) through which it is defined. This communication obeys Aristotelian logic (classical logic), the Leibniz Sufficient Reason Principle, and a hidden axiom, which basically states that there is an anterior-posterior relationship everywhere in communication. By means of a theorem discussed here, it can be proved that the communication mentioned implies contradictory statements, which can only be transcended through silence, i.e. the absence of any statements. Moreover, the breaking of silence is meaningful through the claim for minimum contradictions, which implies the existence of both a logical and an illogical dimension; contradictions refer to causality, implying its opposite, namely retro-causation, and to the anterior-posterior axiom, implying space-time non-locality. The purpose of this paper is to outline a framework accounting for retro-causation, through both purely theoretical and reality-based points of view.
Three Sets of Case Studies Suggest Logic and Consistency Challenges with Value Frameworks.
Cohen, Joshua T; Anderson, Jordan E; Neumann, Peter J
2017-02-01
To assess the logic and consistency of three prominent value frameworks. We reviewed the value frameworks from three organizations: the Memorial Sloan Kettering Cancer Center (DrugAbacus), the American Society of Clinical Oncologists, and the Institute for Clinical and Economic Review. For each framework, we developed case studies to explore the degree to which the frameworks have face validity in the sense that they are consistent with four important principles: value should be proportional to a therapy's benefit; components of value should matter to framework users (patients and payers); attribute weights should reflect user preferences; and value estimates used to inform therapy prices should reflect per-person benefit. All three frameworks can aid decision making by elucidating factors not explicitly addressed by conventional evaluation techniques (in particular, cost-effectiveness analyses). Our case studies identified four challenges: 1) value is not always proportional to benefit; 2) value reflects factors that may not be relevant to framework users (patients or payers); 3) attribute weights do not necessarily reflect user preferences or relate to value in ways that are transparent; and 4) value does not reflect per-person benefit. Although the value frameworks we reviewed capture value in a way that is important to various audiences, they are not always logical or consistent. Because these frameworks may have a growing influence on therapy access, it is imperative that analytic challenges be further explored. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Crossing the Virtual World Barrier with OpenAvatar
NASA Technical Reports Server (NTRS)
Joy, Bruce; Kavle, Lori; Tan, Ian
2012-01-01
There are multiple standards and formats for 3D models in virtual environments. The problem is that there is no open source platform for generating models out of discrete parts; this results in the process of having to "reinvent the wheel" when new games, virtual worlds and simulations want to enable their users to create their own avatars or easily customize in-world objects. OpenAvatar is designed to provide a framework to allow artists and programmers to create reusable assets which can be used by end users to generate vast numbers of complete models that are unique and functional. OpenAvatar serves as a framework which facilitates the modularization of 3D models allowing parts to be interchanged within a set of logical constraints.
A relevance theory of induction.
Medin, Douglas L; Coley, John D; Storms, Gert; Hayes, Brett K
2003-09-01
A framework theory, organized around the principle of relevance, is proposed for category-based reasoning. According to the relevance principle, people assume that premises are informative with respect to conclusions. This idea leads to the prediction that people will use causal scenarios and property reinforcement strategies in inductive reasoning. These predictions are contrasted with both existing models and normative logic. Judgments of argument strength were gathered in three different countries, and the results showed the importance of both causal scenarios and property reinforcement in category-based inferences. The relation between the relevance framework and existing models of category-based inductive reasoning is discussed in the light of these findings.
Stability analysis of host dynamics for hiv
NASA Astrophysics Data System (ADS)
Geetha, V.; Balamuralitharan, S.
2018-04-01
Disease transmission can be represented readily within a mathematical framework. In this paper the transmission of disease in humans is represented mathematically as a nonlinear system. We consider the dynamics of the Human Immunodeficiency Virus (HIV) during the early stages of infection. We derive the analytical representation of a three-compartment HIV model and evaluate its stability. We also explore the qualitative behavior of the model and obtain the steady states for the disease-free and endemic equilibria. The system is characterized by the basic reproduction number R0. We also describe the numerical simulations and their results.
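For concreteness, a sketch of a standard three-compartment within-host HIV model (target cells T, infected cells I, free virus V) with its basic reproduction number and endemic equilibrium; the parameter values are invented and the paper's exact model may differ:

```python
# Standard three-compartment within-host HIV model; parameters are invented.
s, d = 10.0, 0.01      # target-cell production and natural death rates
beta = 2.4e-5          # infection rate
delta = 0.7            # infected-cell death rate
p, c = 150.0, 3.0      # virion production and clearance rates

# Basic reproduction number for dT/dt = s - d*T - beta*T*V,
# dI/dt = beta*T*V - delta*I, dV/dt = p*I - c*V:
R0 = beta * (s / d) * p / (delta * c)
print(f"R0 = {R0:.2f}")

if R0 > 1:             # endemic equilibrium exists when R0 > 1
    T = delta * c / (beta * p)          # from dI/dt = 0 and dV/dt = 0
    V = (s / T - d) / beta              # from dT/dt = 0
    I = c * V / p                       # from dV/dt = 0
    print(f"endemic equilibrium: T={T:.1f}, I={I:.2f}, V={V:.1f}")
else:
    print("disease-free equilibrium (T=s/d, I=0, V=0) is stable")
```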
Automating Access Control Logics in Simple Type Theory with LEO-II
NASA Astrophysics Data System (ADS)
Benzmüller, Christoph
Garg and Abadi recently proved that prominent access control logics can be translated in a sound and complete way into modal logic S4. We have previously outlined how normal multimodal logics, including monomodal logics K and S4, can be embedded in simple type theory and we have demonstrated that the higher-order theorem prover LEO-II can automate reasoning in and about them. In this paper we combine these results and describe a sound (and complete) embedding of different access control logics in simple type theory. Employing this framework we show that the off the shelf theorem prover LEO-II can be applied to automate reasoning in and about prominent access control logics.
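For intuition, a generic LaTeX sketch of the Kripke-style embedding this line of work builds on, with worlds of type ι and modal propositions as predicates of type ι→o; the notation is generic and may differ in detail from the paper's:

```latex
% Generic Kripke-style embedding of a (mono)modal logic into simple type
% theory: worlds have type \iota, modal propositions are predicates of type
% \iota \to o, and R is the accessibility relation.
\[
\begin{aligned}
\lceil \neg \varphi \rceil &= \lambda w{:}\iota.\ \neg (\lceil \varphi \rceil\, w)\\
\lceil \varphi \lor \psi \rceil &= \lambda w{:}\iota.\ \lceil \varphi \rceil\, w \lor \lceil \psi \rceil\, w\\
\lceil \Box \varphi \rceil &= \lambda w{:}\iota.\ \forall v{:}\iota.\ R\, w\, v \rightarrow \lceil \varphi \rceil\, v\\
\mathrm{valid}(\varphi) &= \forall w{:}\iota.\ \lceil \varphi \rceil\, w
\end{aligned}
\]
```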
A framework for the selection and ensemble development of flood vulnerability models
NASA Astrophysics Data System (ADS)
Figueiredo, Rui; Schröter, Kai; Kreibich, Heidi; Martina, Mario
2017-04-01
Effective understanding and management of flood risk requires comprehensive risk assessment studies that consider not only the hazard component, but also the impacts that the phenomena may have on the built environment, economy and society. This integrated approach has gained importance over recent decades, and with it so has the scientific attention given to flood vulnerability models describing the relationships between flood intensity metrics and damage to physical assets, also known as flood loss models. Despite considerable progress in this field, many challenges persist. Flood damage mechanisms are complex and depend on multiple variables, which can have different degrees of importance depending on the application setting. In addition, data required for the development and validation of such models tend to be scarce, particularly in data poor regions. These issues are reflected in the large amount of flood vulnerability models that are available in the literature today, as well as in their high heterogeneity: they are built with different modelling approaches, in different geographic contexts, utilizing different explanatory variables, and with varying levels of complexity. Notwithstanding recent developments in this area, uncertainty remains high, and large disparities exist among models. For these reasons, identifying which model or models, given their properties, are appropriate for a given context is not straightforward. In the present study, we propose a framework that guides the structured selection of flood vulnerability models and enables ranking them according to their suitability for a certain application, based on expert judgement. The approach takes advantage of current state of the art and most up-to-date knowledge on flood vulnerability processes. Given the heterogeneity and uncertainty currently present in flood vulnerability models, we propose the use of a model ensemble. With this in mind, the proposed approach is based on a weighting scheme within a logic-tree framework that enables the generation of such ensembles in a logically consistent manner. We test and discuss the results by applying the framework to the case study of the 2002 floods along the Mulde River in Germany. Applications of individual models and model ensembles are compared and discussed.
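A minimal sketch of the resulting model ensemble: each logic-tree branch carries a depth-damage function and an expert-derived weight, and the ensemble loss is the weighted mean. The three toy functions and the weights below are assumptions, standing in for the framework's expert-judgement scores:

```python
# Invented logic-tree ensemble of depth-damage functions.
import math

models = {
    "linear":   lambda depth: min(1.0, 0.25 * depth),
    "sqrt":     lambda depth: min(1.0, 0.45 * math.sqrt(depth)),
    "logistic": lambda depth: 1.0 / (1.0 + math.exp(-(depth - 2.0))),
}
weights = {"linear": 0.2, "sqrt": 0.5, "logistic": 0.3}   # sum to 1

def ensemble_damage(depth_m):
    """Weighted mean damage ratio across the logic-tree branches."""
    return sum(w * models[name](depth_m) for name, w in weights.items())

for depth in (0.5, 1.5, 3.0):
    print(f"depth {depth} m -> damage ratio {ensemble_damage(depth):.2f}")
```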
The Quantification of Consistent Subjective Logic Tree Branch Weights for PSHA
NASA Astrophysics Data System (ADS)
Runge, A. K.; Scherbaum, F.
2012-04-01
The development of quantitative models for the rate of exceedance of seismically generated ground motion parameters is the target of probabilistic seismic hazard analysis (PSHA). In regions of low to moderate seismicity, the selection and evaluation of source- and/or ground-motion models is often a major challenge to hazard analysts and is affected by large epistemic uncertainties. In PSHA this type of uncertainty is commonly treated within a logic tree framework in which the branch weights express the degree-of-belief values of an expert in the corresponding set of models. For the calculation of the distribution of hazard curves, these branch weights are subsequently used as subjective probabilities. However, the quality of the results depends strongly on the quality of the expert knowledge. A major challenge for experts in this context is to provide weight estimates which are logically consistent (in the sense of Kolmogorov's axioms) and to be aware of, and to deal with, the multitude of heuristics and biases which affect human judgment under uncertainty. For example, people tend to give smaller weights to each branch of a logic tree the more branches it has, starting with equal weights for all branches and then adjusting this uniform distribution based on their beliefs about how the branches differ. This effect is known as pruning bias. A similar unwanted effect, which may even wrongly suggest robustness of the corresponding hazard estimates, will appear in cases where all models are first judged according to some numerical quality measure and the resulting weights are subsequently normalized to sum up to one. To address these problems, we have developed interactive graphical tools for the determination of logic tree branch weights in the form of logically consistent subjective probabilities, based on the concepts suggested by Curtis and Wood (2004). Instead of determining the set of weights for all the models in a single step, the computer-driven elicitation process is performed as a sequence of evaluations of relative weights for small subsets of models which are presented to the analyst. From these, the distribution of logic tree weights for the whole model set is determined as the solution of an optimization problem. The model subset presented to the analyst in each step is designed to maximize the expected information. The result of this process is a set of logically consistent weights, together with a measure of confidence determined from the amount of conflicting information provided by the expert during the relative weighting process.
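One way to phrase the optimization step is a log-space least-squares fit that turns possibly inconsistent pairwise judgements into a consistent normalized weight vector; the judgements below are invented and the tool's actual objective may differ:

```python
# Invented sketch: derive logically consistent, normalized branch weights
# from (possibly inconsistent) pairwise credibility judgements.
import numpy as np

n = 3                                      # models A, B, C
judgements = [
    (0, 1, 2.0),    # "A is twice as credible as B"
    (1, 2, 1.5),    # "B is 1.5x as credible as C"
    (0, 2, 2.5),    # "A is 2.5x as credible as C" (slightly inconsistent)
]

rows, rhs = [], []
for i, j, ratio in judgements:             # log w_i - log w_j = log ratio
    row = np.zeros(n)
    row[i], row[j] = 1.0, -1.0
    rows.append(row)
    rhs.append(np.log(ratio))

logw, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
w = np.exp(logw)
w /= w.sum()                               # Kolmogorov-consistent weights
print(np.round(w, 3))                      # ~[0.52 0.28 0.20], summing to 1
```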
An Evaluation of a School-Based Teenage Pregnancy Prevention Program Using a Logic Model Framework
ERIC Educational Resources Information Center
Hulton, Linda J.
2007-01-01
Teenage pregnancy and the subsequent social morbidities associated with unintended pregnancies are complex issues facing school nurses in their daily work. In contemporary practice, school nurses are being held to higher standards of accountability and being asked to demonstrate the effective outcomes of their interventions. The purpose of this…
Verifying the Modal Logic Cube Is an Easy Task (For Higher-Order Automated Reasoners)
NASA Astrophysics Data System (ADS)
Benzmüller, Christoph
Prominent logics, including quantified multimodal logics, can be elegantly embedded in simple type theory (classical higher-order logic). Furthermore, off-the-shelf reasoning systems for simple type theory exist that can be uniformly employed for reasoning within and about embedded logics. In this paper we focus on reasoning about modal logics and exploit our framework for the automated verification of inclusion and equivalence relations between them. Related work has applied first-order automated theorem provers for the task. Our solution achieves significant improvements, most notably with respect to elegance and simplicity of the problem encodings as well as with respect to automation performance.
RPD-based Hypothesis Reasoning for Cyber Situation Awareness
NASA Astrophysics Data System (ADS)
Yen, John; McNeese, Michael; Mullen, Tracy; Hall, David; Fan, Xiaocong; Liu, Peng
Intelligence workers such as analysts, commanders, and soldiers often need a hypothesis reasoning framework to gain improved situation awareness of the highly dynamic cyber space. The development of such a framework requires the integration of interdisciplinary techniques, including supports for distributed cognition (human-in-the-loop hypothesis generation), supports for team collaboration (identification of information for hypothesis evaluation), and supports for resource-constrained information collection (hypotheses competing for information collection resources). We here describe a cognitively-inspired framework that is built upon Klein’s recognition-primed decision model and integrates the three components of Endsley’s situation awareness model. The framework naturally connects the logic world of tools for cyber situation awareness with the mental world of human analysts, enabling the perception, comprehension, and prediction of cyber situations for better prevention, survival, and response to cyber attacks by adapting missions at the operational, tactical, and strategic levels.
Assessment of Seismic Damage on The Exist Buildings Using Fuzzy Logic
NASA Astrophysics Data System (ADS)
Pınar, USTA; Nihat, MOROVA; EVCİ, Ahmet; ERGÜN, Serap
2018-01-01
Earthquakes, as natural disasters, can damage the lives of many people and buildings all over the world. The seismic vulnerability of buildings therefore needs to be evaluated. Accurate evaluation of damage sustained by buildings during natural disaster events is critical to determining the buildings' safety and their suitability for future occupancy. The earthquake is one of the disasters that structures face the most, so there is a need to evaluate the seismic damage and vulnerability of buildings in order to protect them. These days, fuzzy systems are widely used in different fields of science because of their simplicity and efficiency. Fuzzy logic provides a suitable framework for reasoning, deduction, and decision making under fuzzy conditions. In this paper, studies in the literature on earthquake hazard evaluation of buildings using fuzzy logic modeling concepts are investigated and evaluated as a whole.
An autonomous satellite architecture integrating deliberative reasoning and behavioural intelligence
NASA Technical Reports Server (NTRS)
Lindley, Craig A.
1993-01-01
This paper describes a method for the design of autonomous spacecraft, based upon behavioral approaches to intelligent robotics. First, a number of previous spacecraft automation projects are reviewed. A methodology for the design of autonomous spacecraft is then presented, drawing upon both the European Space Agency technological center (ESTEC) automation and robotics methodology and the subsumption architecture for autonomous robots. A layered competency model for autonomous orbital spacecraft is proposed. A simple example of low level competencies and their interaction is presented in order to illustrate the methodology. Finally, the general principles adopted for the control hardware design of the AUSTRALIS-1 spacecraft are described. This system will provide an orbital experimental platform for spacecraft autonomy studies, supporting the exploration of different logical control models, different computational metaphors within the behavioral control framework, and different mappings from the logical control model to its physical implementation.
Evolutionary fuzzy modeling human diagnostic decisions.
Peña-Reyes, Carlos Andrés
2004-05-01
Fuzzy CoCo is a methodology, combining fuzzy logic and evolutionary computation, for constructing systems able to accurately predict the outcome of a human decision-making process, while providing an understandable explanation of the underlying reasoning. Fuzzy logic provides a formal framework for constructing systems exhibiting both good numeric performance (accuracy) and linguistic representation (interpretability). However, fuzzy modeling--meaning the construction of fuzzy systems--is an arduous task, demanding the identification of many parameters. To solve it, we use evolutionary computation techniques (specifically cooperative coevolution), which are widely used to search for adequate solutions in complex spaces. We have successfully applied the algorithm to model the decision processes involved in two breast cancer diagnostic problems, the WBCD problem and the Catalonia mammography interpretation problem, obtaining systems both of high performance and high interpretability. For the Catalonia problem, an evolved system was embedded within a Web-based tool-called COBRA-for aiding radiologists in mammography interpretation.
OpCost: an open-source system for estimating costs of stand-level forest operations
Conor K. Bell; Robert F. Keefe; Jeremy S. Fried
2017-01-01
This report describes and documents the OpCost forest operations cost model, a key component of the BioSum analysis framework. OpCost is available in two editions: as a callable module for use with BioSum, and in a stand-alone edition that can be run directly from R. OpCost model logic and assumptions for this open-source tool are explained, references to the...
Designing an evaluation framework for WFME basic standards for medical education.
Tackett, Sean; Grant, Janet; Mmari, Kristin
2016-01-01
To create an evaluation plan for the World Federation for Medical Education (WFME) accreditation standards for basic medical education. We conceptualized the 100 basic standards from "Basic Medical Education: WFME Global Standards for Quality Improvement: The 2012 Revision" as medical education program objectives. Standards were simplified into evaluable items, which were then categorized as inputs, processes, outputs and/or outcomes to generate a logic model and corresponding plan for data collection. WFME standards posed significant challenges to evaluation due to complex wording, inconsistent formatting and lack of existing assessment tools. Our resulting logic model contained 244 items. Standard B 5.1.1 separated into 24 items, the most for any single standard. A large proportion of items (40%) required evaluation of more than one input, process, output and/or outcome. Only one standard (B 3.2.2) was interpreted as requiring evaluation of a program outcome. Current WFME standards are difficult to use for evaluation planning. Our analysis may guide adaptation and revision of standards to make them more evaluable. Our logic model and data collection plan may be useful to medical schools planning an institutional self-review and to accrediting authorities wanting to provide guidance to schools under their purview.
2015-03-01
a hotel and a hospital. 2. Event handler for emergency policies (item 2 above): this has been implemented in two UG projects, one project developed a...
Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R.
2015-01-01
Motivation: Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge or new experimental data, contradictory with a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. This search is usually not limited to finding a single regulation link, but rather a combination of links subject to great uncertainty or no information about the kinetic parameters. Results: In this work, we combine a logic-based formalism, to describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative to this would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method presented here is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary information: Supplementary data are available at Bioinformatics online. Contact: julio@iim.csic.es or saezrodriguez@ebi.ac.uk PMID:26002881
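A toy caricature of the joint structure/parameter search: binary parameters gate candidate regulatory terms in an ODE, and a brute-force sweep stands in for the MIDO solver. The model, data, and grids below are invented:

```python
# Invented toy of simultaneous structure + parameter selection for a
# logic-gated ODE: binaries switch regulatory terms on/off; brute force
# replaces a real mixed-integer dynamic optimization solver.
from itertools import product

t_obs = [1.0, 2.0]
x_obs = [0.45, 0.70]                       # synthetic readout

def simulate(b_act, b_inh, k, u=1.0, dt=0.01, t_end=2.0):
    """dx/dt = k*(b_act*u)*(1 - b_inh*x) - 0.5*x, forward-Euler."""
    x, traj = 0.0, {}
    steps = int(t_end / dt)
    for i in range(1, steps + 1):
        x += dt * (k * (b_act * u) * (1.0 - b_inh * x) - 0.5 * x)
        traj[round(i * dt, 2)] = x
    return traj

def sse(cand):
    traj = simulate(*cand)
    return sum((traj[t] - x) ** 2 for t, x in zip(t_obs, x_obs))

candidates = product([0, 1], [0, 1], [0.2, 0.5, 1.0])   # binaries x k-grid
b_act, b_inh, k = min(candidates, key=sse)
print(f"activation={b_act}, inhibition={b_inh}, gain k={k}")
```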
Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R
2015-09-15
Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge or new experimental data, contradictory with a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. This search is usually not limited to finding a single regulation link, but rather a combination of links subject to great uncertainty or no information about the kinetic parameters. In this work, we combine a logic-based formalism, to describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative to this would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method presented here is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary data are available at Bioinformatics online. julio@iim.csic.es or saezrodriguez@ebi.ac.uk. © The Author 2015. Published by Oxford University Press.
Improving the use of health data for health system strengthening.
Nutley, Tara; Reynolds, Heidi W
2013-02-13
Good quality and timely data from health information systems are the foundation of all health systems. However, too often data sit in reports, on shelves or in databases and are not sufficiently utilised in policy and program development, improvement, strategic planning and advocacy. Without specific interventions aimed at improving the use of data produced by information systems, health systems will never fully be able to meet the needs of the populations they serve. To employ a logic model to describe a pathway of how specific activities and interventions can strengthen the use of health data in decision making to ultimately strengthen the health system. A logic model was developed to provide a practical strategy for developing, monitoring and evaluating interventions to strengthen the use of data in decision making. The model draws on the collective strengths and similarities of previous work and adds to those previous works by making specific recommendations about interventions and activities that are most proximate to affect the use of data in decision making. The model provides an organizing framework for how interventions and activities work to strengthen the systematic demand, synthesis, review, and use of data. The logic model and guidance are presented to facilitate its widespread use and to enable improved data-informed decision making in program review and planning, advocacy, policy development. Real world examples from the literature support the feasible application of the activities outlined in the model. The logic model provides specific and comprehensive guidance to improve data demand and use. It can be used to design, monitor and evaluate interventions, and to improve demand for, and use of, data in decision making. As more interventions are implemented to improve use of health data, those efforts need to be evaluated.
Property Specification Patterns for intelligence building software
NASA Astrophysics Data System (ADS)
Chun, Seungsu
2018-03-01
In this paper, through research on property specification patterns for modal mu (μ) logic, we present a single framework based on patterns for intelligence-building software. In this study, Dwyer's property specification pattern classification is broken down by state (S) and action (A), and each is subdivided again into strong (A) and weak (E) variants. Based on this hierarchical pattern classification, the μ-logic analysis was applied to classify the examples used in an actual model checker. As a result, the classification is not only more accurate than existing classification systems, but the specified properties are also easier to create and understand.
Ecological resilience in lakes and the conjunction fallacy.
Spears, Bryan M; Futter, Martyn N; Jeppesen, Erik; Huser, Brian J; Ives, Stephen; Davidson, Thomas A; Adrian, Rita; Angeler, David G; Burthe, Sarah J; Carvalho, Laurence; Daunt, Francis; Gsell, Alena S; Hessen, Dag O; Janssen, Annette B G; Mackay, Eleanor B; May, Linda; Moorhouse, Heather; Olsen, Saara; Søndergaard, Martin; Woods, Helen; Thackeray, Stephen J
2017-11-01
There is a pressing need to apply stability and resilience theory to environmental management to restore degraded ecosystems effectively and to mitigate the effects of impending environmental change. Lakes represent excellent model case studies in this respect and have been used widely to demonstrate theories of ecological stability and resilience that are needed to underpin preventative management approaches. However, we argue that this approach is not yet fully developed because the pursuit of empirical evidence to underpin such theoretically grounded management continues in the absence of an objective probability framework. This has blurred the lines between intuitive logic (based on the elementary principles of probability) and extensional logic (based on assumption and belief) in this field.
Wang, Edwin; Zaman, Naif; Mcgee, Shauna; Milanese, Jean-Sébastien; Masoudi-Nejad, Ali; O'Connor-McCourt, Maureen
2015-02-01
Tumor genome sequencing leads to documenting thousands of DNA mutations and other genomic alterations. At present, these data cannot be analyzed adequately to aid in the understanding of tumorigenesis and its evolution. Moreover, we have little insight into how to use these data to predict clinical phenotypes and tumor progression to better design patient treatment. To meet these challenges, we discuss a cancer hallmark network framework for modeling genome sequencing data to predict cancer clonal evolution and associated clinical phenotypes. The framework includes: (1) cancer hallmarks that can be represented by a few molecular/signaling networks. 'Network operational signatures', which represent gene regulatory logics/strengths, enable quantification of state transitions and measures of hallmark traits. Thus, sets of genomic alterations which are associated with network operational signatures could be linked to the state/measure of hallmark traits. The network operational signature transforms genotypic data (i.e., genomic alterations) to regulatory phenotypic profiles (i.e., regulatory logics/strengths), then to cellular phenotypic profiles (i.e., hallmark traits), which lead to clinical phenotypic profiles (i.e., a collection of hallmark traits). Furthermore, the framework considers regulatory logics of the hallmark networks under tumor evolutionary dynamics and therefore also includes: (2) a self-promoting positive feedback loop that is dominated by a genomic instability network and a cell survival/proliferation network is the main driver of tumor clonal evolution. Surrounding tumor stroma and its host immune systems shape the evolutionary paths; (3) cell motility initiating metastasis is a byproduct of the above self-promoting loop activity during tumorigenesis; (4) an emerging hallmark network which triggers genome duplication dominates a feed-forward loop which in turn could act as a rate-limiting step for tumor formation; (5) mutations and other genomic alterations have specific patterns and tissue-specificity, which are driven by aging and other cancer-inducing agents. This framework represents the logics of complex cancer biology as a myriad of phenotypic complexities governed by a limited set of underlying organizing principles. It therefore adds to our understanding of tumor evolution and tumorigenesis and, moreover, offers potential usefulness in predicting tumors' evolutionary paths and clinical phenotypes. Strategies for using this framework in conjunction with genome sequencing data in an attempt to predict personalized drug targets, drug resistance, and metastasis for cancer patients, as well as cancer risks for healthy individuals, are discussed. Accurate prediction of cancer clonal evolution and clinical phenotypes will have substantial impact on timely diagnosis, personalized treatment and personalized prevention of cancer. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Instantons in Self-Organizing Logic Gates
NASA Astrophysics Data System (ADS)
Bearden, Sean R. B.; Manukian, Haik; Traversa, Fabio L.; Di Ventra, Massimiliano
2018-03-01
Self-organizing logic is a recently suggested framework that allows the solution of Boolean truth tables "in reverse"; i.e., it is able to satisfy the logical proposition of gates regardless of which terminal(s) the truth value is assigned to ("terminal-agnostic logic"). It can be realized if time nonlocality (memory) is present. A practical realization of self-organizing logic gates (SOLGs) can be done by combining circuit elements with and without memory. By employing one such realization, we show, numerically, that SOLGs exploit elementary instantons to reach equilibrium points. Instantons are classical trajectories of the nonlinear equations of motion describing SOLGs and connect topologically distinct critical points in the phase space. By linear analysis at those points, we show that these instantons connect the initial critical point of the dynamics, with at least one unstable direction, directly to the final fixed point. We also show that the memory content of these gates affects only the relaxation time to reach the logically consistent solution. Finally, we demonstrate, by solving the corresponding stochastic differential equations, that, since instantons connect critical points, noise and perturbations may change the instanton trajectory in the phase space but not the initial and final critical points. Therefore, even for extremely large noise levels, the gates self-organize to the correct solution. Our work provides a physical understanding of, and can serve as an inspiration for, models of bidirectional logic gates that are emerging as important tools in physics-inspired, unconventional computing.
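To make the dynamical picture concrete, the following is a minimal, hypothetical sketch (not the paper's actual SOLG equations): a one-dimensional gradient system whose critical points play the roles described above, integrated with and without noise to show that the endpoints of the trajectory are fixed by the critical points.

```python
import random

# Toy gradient system dx/dt = -dV/dx for V(x) = (x^2 - 1)^2 / 4:
# critical points at x = -1, 0, +1; x = 0 is unstable,
# x = +/-1 are stable minima ("logically consistent solutions").
def force(x):
    return -x * (x * x - 1.0)

def run(x0, noise=0.0, dt=1e-2, steps=20000, seed=0):
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        # Euler-Maruyama step: deterministic drift plus optional noise
        x += force(x) * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
    return x

# Start just off the unstable critical point: the trajectory (a crude
# instanton analogue) relaxes to a stable fixed point; moderate noise
# perturbs the path but the endpoints remain the critical points.
print(run(0.01, noise=0.0))   # -> close to +1.0
print(run(0.01, noise=0.2))   # -> fluctuates near one of +/-1.0
```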
ERIC Educational Resources Information Center
Kimpel, Lucina
2010-01-01
This research consisted of a case study conducted at Grand View University to determine faculty perceptions and perspectives of outcomes related to a Title III grant-funded professional development program. The conceptual framework for the study was based on a systematic process called the logic model (W. K. Kellogg Foundation, 2004). A…
ERIC Educational Resources Information Center
Futris, Ted G.; Schramm, David G.
2015-01-01
What goes into designing and implementing a successful program? How do both research and practice inform program development? In this article, the process through which a federally funded training curriculum was developed and pilot tested is described. Using a logic model framework, important lessons learned are shared in defining the situation,…
Xu, Haiyang; Wang, Ping
2016-01-01
In order to verify the real-time reliability of an unmanned aerial vehicle (UAV) flight control system and comply with the airworthiness certification standard, we propose a model-based integration framework for the modeling and verification of time properties. Combining the advantages of MARTE, this framework uses class diagrams to create the static model of the software system and state charts to create the dynamic model. In terms of the defined transformation rules, the MARTE model can be transformed into a formal integrated model, and the different parts of the model can also be verified using existing formal tools. For the real-time specifications of the software system, we also propose an algorithm for generating temporal logic formulas, which can automatically extract real-time properties from time-sensitive live sequence charts (TLSC). Finally, we modeled the simplified flight control system of a UAV to check its real-time properties. The results show that the framework can be used to create the system model, as well as to precisely analyze and verify the real-time reliability of the UAV flight control system.
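As a hedged illustration of the kind of real-time property such generated temporal logic formulas express, the sketch below checks a bounded-response requirement, G(trigger -> F<=d reaction), over a timestamped event trace; the event names and deadline are hypothetical, not taken from the paper.

```python
def holds_bounded_response(trace, trigger, reaction, deadline):
    """Check G(trigger -> F<=deadline reaction) on a list of
    (timestamp, event) pairs sorted by timestamp."""
    for i, (t, ev) in enumerate(trace):
        if ev != trigger:
            continue
        # the trigger must be followed by the reaction within the deadline
        if not any(ev2 == reaction and t <= t2 <= t + deadline
                   for t2, ev2 in trace[i:]):
            return False
    return True

trace = [(0.00, "cmd_roll"), (0.02, "ack_roll"),
         (0.10, "cmd_pitch"), (0.19, "ack_pitch")]
print(holds_bounded_response(trace, "cmd_roll", "ack_roll", 0.05))    # True
print(holds_bounded_response(trace, "cmd_pitch", "ack_pitch", 0.05))  # False (0.09 s)
```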
Yeung, Daniel; Boes, Peter; Ho, Meng Wei; Li, Zuofeng
2015-05-08
Image-guided radiotherapy (IGRT), based on radiopaque markers placed in the prostate gland, was used for proton therapy of prostate patients. Orthogonal X-rays and the IBA Digital Image Positioning System (DIPS) were used for setup correction prior to treatment and were repeated after treatment delivery. Following a rationale for margin estimates similar to that of van Herk,(1) the daily post-treatment DIPS data were analyzed to determine if an adaptive radiotherapy plan was necessary. A Web application using ASP.NET MVC5, Entity Framework, and an SQL database was designed to automate this process. The designed features included state-of-the-art Web technologies, a domain model closely matching the workflow, a database supporting concurrency and data mining, access to the DIPS database, secured user access and roles management, and graphing and analysis tools. The Model-View-Controller (MVC) paradigm allowed clean domain logic, unit testing, and extensibility. Client-side technologies, such as jQuery, jQuery Plug-ins, and Ajax, were adopted to achieve a rich user environment and fast response. Data models included patients, staff, treatment fields and records, correction vectors, DIPS images, and association logics. Data entry, analysis, workflow logics, and notifications were implemented. The system effectively modeled the clinical workflow and IGRT process.
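For context on the margin rationale the abstract cites, the following sketch computes the systematic (Sigma) and random (sigma) setup-error components from per-patient correction values and applies the widely used van Herk margin recipe 2.5*Sigma + 0.7*sigma; the numbers are hypothetical and the code is illustrative, not the paper's implementation.

```python
from statistics import mean, pstdev

# Daily post-treatment correction values (mm) along one axis,
# one list per patient (hypothetical numbers).
corrections = {
    "pat01": [1.2, 0.8, 1.5, 1.1, 0.9],
    "pat02": [-0.4, 0.1, -0.6, -0.2, -0.5],
    "pat03": [0.3, 0.7, 0.2, 0.5, 0.4],
}

patient_means = [mean(v) for v in corrections.values()]
sigma_random = mean(pstdev(v) for v in corrections.values())  # random (sigma)
Sigma_systematic = pstdev(patient_means)                      # systematic (Sigma)

# van Herk margin recipe: 2.5 * Sigma + 0.7 * sigma
margin = 2.5 * Sigma_systematic + 0.7 * sigma_random
print(f"Sigma={Sigma_systematic:.2f} mm, sigma={sigma_random:.2f} mm, "
      f"margin={margin:.2f} mm")
```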
Fuzzy and process modelling of contour ridge water dynamics
NASA Astrophysics Data System (ADS)
Mhizha, Alexander; Ndiritu, John
2018-05-01
Contour ridges are an in-situ rainwater harvesting technology developed initially for soil erosion control but currently also widely promoted for rainwater harvesting. The effectiveness of contour ridges depends on geophysical, hydro-climatic and socio-economic factors that are highly varied in time and space. Furthermore, field-scale data on these factors are often unavailable. This, together with the complexity of hydrological processes at field scale, limits the application of classical distributed process modelling to highly-instrumented experimental fields. This paper presents a framework that combines fuzzy logic and a process-based approach for modelling contour ridges for rainwater harvesting where detailed field data are not available. Water balance for a representative contour-ridged field incorporating the water flow processes across the boundaries is integrated with fuzzy logic to account for the uncertainties in estimating runoff. The model is tested using data collected during the 2009/2010 and 2010/2011 rainfall seasons from two contour-ridged fields in Zhulube, located in the semi-arid parts of Zimbabwe. The model is found to replicate soil moisture in the root zone reasonably well (NSE = 0.55 to 0.66 and PBIAS = -1.3 to 6.1%). The results show that combining fuzzy logic and process-based approaches can adequately model soil moisture in a contour-ridged field and could help to assess the water dynamics in contour-ridged fields.
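The goodness-of-fit statistics quoted above have standard definitions; a minimal sketch (with hypothetical soil-moisture values) is:

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean."""
    m = sum(obs) / len(obs)
    return 1.0 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
                  / sum((o - m) ** 2 for o in obs))

def pbias(obs, sim):
    # percent bias; positive values indicate average underestimation
    return 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

observed  = [0.21, 0.24, 0.30, 0.27, 0.22, 0.19]   # hypothetical soil moisture
simulated = [0.20, 0.26, 0.28, 0.28, 0.21, 0.20]
print(f"NSE = {nse(observed, simulated):.2f}, "
      f"PBIAS = {pbias(observed, simulated):.1f} %")
```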
A Complex Systems Model Approach to Quantified Mineral Resource Appraisal
Gettings, M.E.; Bultman, M.W.; Fisher, F.S.
2004-01-01
For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.
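A minimal sketch of the first modeling stage described above, a cognitive map with (+1, 0, -1) link values iterated under a scenario, might look as follows; the concepts, links, and squashing function are hypothetical stand-ins, not the authors' model.

```python
import math

concepts = ["deposit", "alteration", "structure", "geochem_anomaly"]
# links[i][j] = influence of concept i on concept j, in {+1, 0, -1}
links = [
    [0, 0, 0, 0],   # deposit
    [1, 0, 0, 1],   # alteration -> deposit, -> geochem anomaly
    [1, 1, 0, 0],   # structure  -> deposit, -> alteration
    [1, 0, 0, 0],   # geochem anomaly -> deposit
]

def step(act, clamp):
    nxt = []
    for j in range(len(concepts)):
        s = sum(links[i][j] * act[i] for i in range(len(concepts)))
        nxt.append(1.0 / (1.0 + math.exp(-s)))   # squash to (0, 1)
    for k, v in clamp.items():                   # hold scenario inputs fixed
        nxt[k] = v
    return nxt

# Scenario: strong structural control present; iterate to a fixed point
act = [0.5] * 4
for _ in range(25):
    act = step(act, clamp={concepts.index("structure"): 1.0})
print(dict(zip(concepts, (round(a, 2) for a in act))))
```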
Logic Models for Program Design, Implementation, and Evaluation: Workshop Toolkit. REL 2015-057
ERIC Educational Resources Information Center
Shakman, Karen; Rodriguez, Sheila M.
2015-01-01
The Logic Model Workshop Toolkit is designed to help practitioners learn the purpose of logic models, the different elements of a logic model, and the appropriate steps for developing and using a logic model for program evaluation. Topics covered in the sessions include an overview of logic models, the elements of a logic model, an introduction to…
The engineering of cybernetic systems
NASA Astrophysics Data System (ADS)
Fry, Robert L.
2002-05-01
This tutorial develops a logical basis for the engineering of systems that operate cybernetically. The term cybernetic system has a clear quantitative definition. It is a system that dynamically matches acquired information to selected actions relative to a computational issue that defines the essential purpose of the system or machine. This notion requires that information and control be further quantified. The logic of questions and assertions as developed by Cox provides one means of doing this. The design and operation of cybernetic systems can be understood by contrasting these kinds of systems with communication systems and information theory as developed by Shannon. The joint logic of questions and assertions can be seen to underlie and be common to both information theory as applied to the design of discrete communication systems and to a theory of discrete general systems. The joint logic captures a natural complementarity between systems that transmit and receive information and those that acquire and act on it. Specific comparisons and contrasts are made between the source rate and channel capacity of a communication system and the acquisition rate and control capacity of a general system. An overview is provided of the joint logic of questions and assertions and the ties that this logic has to both conventional information theory and to a general theory of systems. I-diagrams, the interrogative complement of Venn diagrams, are described as providing valuable reasoning tools. An initial framework is suggested for the design of cybernetic systems. Two examples are given to illustrate this framework as applied to discrete cybernetic systems. These examples include a predator-prey problem as illustrated through "The Dog Chrysippus Pursuing its Prey," and the derivation of a single-neuron system that operates cybernetically and is biologically plausible. Future areas of research are highlighted which require development for a mature engineering framework.
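On the communication-system side of the analogy drawn above, the basic Shannon quantities are easy to compute; a minimal sketch of source entropy and binary-symmetric-channel capacity (standard formulas, not from the tutorial) is:

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def bsc_capacity(eps):
    # capacity of a binary symmetric channel with crossover probability eps
    return 1.0 - entropy([eps, 1.0 - eps])

print(entropy([0.5, 0.25, 0.25]))  # source rate bound: 1.5 bits/symbol
print(bsc_capacity(0.11))          # ~0.5 bits per channel use
```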
Research Ethics Review: Identifying Public Policy and Program Gaps
Strosberg, Martin A.; Gefenas, Eugenijus; Famenka, Andrei
2014-01-01
We present an analytical framework for use by fellows of the Fogarty International Center–sponsored Advanced Certificate Program in Research Ethics for Central and Eastern Europe to identify gaps in the public policies establishing research ethics review systems that impede them from doing their job of protecting human research subjects. The framework, illustrated by examples from post-Communist countries, employs a logic model based on the public policy and public management literature. This paper is part of a collection of papers analyzing the Fogarty International Center's International Research Ethics Education and Curriculum program. PMID:24782068
NASA Technical Reports Server (NTRS)
Lowrie, J. W.; Fermelia, A. J.; Haley, D. C.; Gremban, K. D.; Vanbaalen, J.; Walsh, R. W.
1982-01-01
A variety of artificial intelligence techniques that could be applied to NASA space applications and robotics were evaluated. The techniques studied were decision tree manipulators, problem solvers, rule-based systems, logic programming languages, representation language languages, and expert systems. The overall structure of a robotic simulation tool was defined and a framework for that tool developed. Nonlinear and linearized dynamics equations were formulated for n-link manipulator configurations. A framework for the robotic simulation was established which uses validated manipulator component models connected according to a user-defined configuration.
Lamort-Bouché, Marion; Sarnin, Philippe; Kok, Gerjo; Rouat, Sabrina; Péron, Julien; Letrilliart, Laurent; Fassier, Jean-Baptiste
2018-04-01
The Intervention Mapping (IM) protocol provides a structured framework to develop, implement, and evaluate complex interventions. The main objective of this review was to identify and describe the content of the interventions developed in the field of cancer with the IM protocol. Secondary objectives were to assess their fidelity to the IM protocol and to review their theoretical frameworks. Medline, Web of Science, PsycINFO, PASCAL, FRANCIS, and BDSP databases were searched. All titles and abstracts were reviewed. A standardized extraction form was developed. All included studies were reviewed by 2 reviewers blinded to each other. Sixteen studies were identified, and these reported 15 interventions. The objectives were to increase cancer screening participation (n = 7), early consultation (n = 1), and aftercare/quality of life among cancer survivors (n = 7). Six studies reported a complete participatory planning group, and 7 described a complete logic model of the problem. Ten studies described a complete logic model of change. The main theoretical frameworks used were the theory of planned behaviour (n = 8), the transtheoretical model (n = 6), the health belief model (n = 6), and the social cognitive theory (n = 6). The environment was rarely integrated in the interventions (n = 4). Five interventions were reported as effective. Culturally relevant interventions developed with the IM protocol were effective in increasing cancer screening and reducing social disparities, particularly when they were developed through a participative approach and integrated the environment. Stakeholders' involvement and the role of the environment were heterogeneously integrated in the interventions. Copyright © 2017 John Wiley & Sons, Ltd.
A fuzzy logic expert system for evaluating policy progress towards sustainability goals.
Cisneros-Montemayor, Andrés M; Singh, Gerald G; Cheung, William W L
2017-12-16
Evaluating progress towards environmental sustainability goals can be difficult due to a lack of measurable benchmarks and insufficient or uncertain data. Marine settings are particularly challenging, as stakeholders and objectives tend to be less well defined and ecosystem components have high natural variability and are difficult to observe directly. Fuzzy logic expert systems are useful analytical frameworks to evaluate such systems, and we develop such a model here to formally evaluate progress towards sustainability targets based on diverse sets of indicators. Evaluation criteria include recent (since policy enactment) and historical (from earliest known state) change, type of indicators (state, benefit, pressure, response), time span and spatial scope, and the suitability of an indicator in reflecting progress toward a specific objective. A key aspect of the framework is that all assumptions are transparent and modifiable to fit different social and ecological contexts. We test the method by evaluating progress towards four Aichi Biodiversity Targets in Canadian oceans, including quantitative progress scores, information gaps, and the sensitivity of results to model and data assumptions. For Canadian marine systems, national protection plans and biodiversity awareness show good progress, but species and ecosystem states overall do not show strong improvement. Well-defined goals are vital for successful policy implementation, as ambiguity allows for conflicting potential indicators, which in natural systems increases uncertainty in progress evaluations. Importantly, our framework can be easily adapted to assess progress towards policy goals with different themes, globally or in specific regions.
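A minimal sketch of the fuzzy-expert-system mechanics described above, with triangular memberships over an indicator's recent change and a suitability discount, might look as follows; the membership breakpoints, rule outputs, and example values are hypothetical, not the authors' calibration.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def progress_score(recent_change, indicator_suitability):
    # fuzzify the observed indicator change (fraction, -1..1)
    declining = tri(recent_change, -1.0, -0.5, 0.0)
    stable    = tri(recent_change, -0.5,  0.0, 0.5)
    improving = tri(recent_change,  0.0,  0.5, 1.0)
    # Sugeno-style rules: output levels weighted by rule strength,
    # discounted by how well the indicator reflects the objective (0..1)
    rules = [(declining, 0.0), (stable, 0.5), (improving, 1.0)]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) or 1.0
    return indicator_suitability * num / den

# e.g. a protected-area indicator improved 30% since policy enactment,
# and experts judge it fairly suitable (0.8) for the target
print(round(progress_score(0.3, 0.8), 2))   # -> 0.64
```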
Enrollment Logics and Discourses: Toward Developing an Enrollment Knowledge Framework
ERIC Educational Resources Information Center
Snowden, Monique L.
2013-01-01
This article brings attention to a typology of enrollment knowledge possessed and enacted by contemporary chief enrollment officers. Interview narratives are used to reveal enrollment principles and associated actions--enrollment logics--that form enrollment discourses, which in turn shape the institutionalized presence of strategic enrollment…
Tan, Chao; Xu, Rongxin; Wang, Zhongbin; Si, Lei; Liu, Xinhua
2016-01-01
In order to reduce the enlargement of coal floor deformation and the frequency of manual adjustment of rocker arms, an improved approach integrating an improved genetic algorithm with fuzzy logic control (GFLC) is proposed. The enlargement of coal floor deformation is analyzed and a model is built. Then, the framework of the proposed approach is built. Moreover, GA constituents such as tangent-function roulette wheel selection (Tan-RWS), uniform crossover, and nonuniform mutation are employed to enhance the performance of GFLC. Finally, two simulation examples and an industrial application example are carried out, and the results indicate that the proposed method is feasible and efficient. PMID:27217824
Fuzzy Modelling for Human Dynamics Based on Online Social Networks
Cuenca-Jara, Jesus; Valdes-Vela, Mercedes; Skarmeta, Antonio F.
2017-01-01
Human mobility mining has attracted a lot of attention in the research community due to its multiple implications in the provisioning of innovative services for large metropolises. In this scope, Online Social Networks (OSN) have arisen as a promising source of location data to come up with new mobility models. However, the human nature of this data makes it rather noisy and inaccurate. In order to deal with such limitations, the present work introduces a framework for human mobility mining based on fuzzy logic. Firstly, a fuzzy clustering algorithm extracts the most active OSN areas at different time periods. Next, such clusters are the building blocks to compose mobility patterns. Furthermore, a location prediction service based on a fuzzy rule classifier has been developed on top of the framework. Finally, both the framework and the predictor have been tested with Twitter and Flickr datasets in two large cities. PMID:28837120
Fuzzy Modelling for Human Dynamics Based on Online Social Networks.
Cuenca-Jara, Jesus; Terroso-Saenz, Fernando; Valdes-Vela, Mercedes; Skarmeta, Antonio F
2017-08-24
Human mobility mining has attracted a lot of attention in the research community due to its multiple implications in the provisioning of innovative services for large metropolises. In this scope, Online Social Networks (OSN) have arisen as a promising source of location data to come up with new mobility models. However, the human nature of this data makes it rather noisy and inaccurate. In order to deal with such limitations, the present work introduces a framework for human mobility mining based on fuzzy logic. Firstly, a fuzzy clustering algorithm extracts the most active OSN areas at different time periods. Next, such clusters are the building blocks to compose mobility patterns. Furthermore, a location prediction service based on a fuzzy rule classifier has been developed on top of the framework. Finally, both the framework and the predictor have been tested with Twitter and Flickr datasets in two large cities.
A Framework to Debug Diagnostic Matrices
NASA Technical Reports Server (NTRS)
Kodal, Anuradha; Robinson, Peter; Patterson-Hine, Ann
2013-01-01
Diagnostics is an important concept in system health and monitoring of space operations. Many existing diagnostic algorithms utilize system knowledge in the form of a diagnostic matrix (D-matrix, also popularly known as a diagnostic dictionary, fault signature matrix, or reachability matrix) gleaned from physical models. Sometimes, however, this matrix is not coherent enough to obtain high diagnostic performance. In such a case, it is important to modify the D-matrix based on knowledge obtained from other sources, such as time-series data streams (simulated or maintenance data), within the context of a framework that includes the diagnostic/inference algorithm. A systematic and sequential update procedure, the diagnostic modeling evaluator (DME), is proposed to modify the D-matrix and wrapper logic, considering the least expensive solution first. This iterative procedure includes conditions ranging from modifying 0s and 1s in the matrix to adding/removing rows (failure sources) and columns (tests). We experiment with this framework on datasets from the DX Challenge 2009.
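To fix ideas, here is a hedged sketch of a D-matrix and a single-fault diagnosis step, plus the kind of one-entry update a DME-style procedure might try first; the failure sources, tests, and matrix entries are invented for illustration.

```python
# D-matrix: rows = failure sources, columns = tests.
# D[i][j] = 1 if failure i is expected to trip test j.
failure_sources = ["pump", "valve", "sensor"]
tests = ["t_pressure", "t_flow", "t_voltage"]
D = [
    [1, 1, 0],   # pump fault trips pressure and flow tests
    [0, 1, 0],   # valve fault trips flow test only
    [0, 0, 1],   # sensor fault trips voltage test only
]

def consistent_diagnoses(D, outcomes):
    """Single-fault diagnosis: failure sources whose row matches
    the observed pass(0)/fail(1) vector."""
    return [failure_sources[i] for i, row in enumerate(D) if row == outcomes]

print(consistent_diagnoses(D, [1, 1, 0]))   # -> ['pump']

# If maintenance data repeatedly shows a valve fault also tripping
# t_pressure, the least expensive fix is flipping that single entry:
D[1][0] = 1
print(consistent_diagnoses(D, [1, 1, 0]))   # -> ['pump', 'valve'] (ambiguity surfaced)
```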
Symbolic interactionism: a framework for the care of parents of preterm infants.
Edwards, L D; Saunders, R B
1990-04-01
Because of stressors surrounding preterm birth, parents can be expected to have difficulty in early interactions with their preterm infants. Care givers who work with preterm infants and their parents can positively affect the early parental experiences of these mothers and fathers. If care givers are consciously guided by a conceptual model, therapeutic care for distressed parents is more likely to be provided. A logical framework, such as symbolic interactionism, helps care givers to proceed systematically in assessing parental behaviors, in intervening appropriately, and in evaluating both the process and outcome of the care. Selected aspects of the symbolic interaction model are described in this article and applied to the care of parents of preterm infants.
Logical Demonomy Among the Ewe in West Africa
ERIC Educational Resources Information Center
Dzobo, N. K.
1974-01-01
Examines the indigenous pattern of moral behavior among the Ewe, an ethnic and linguistic group in West Africa, and assesses its role in moral education within the African context. The author develops a conceptual framework he calls "logical demonomy" that he uses to define the Ewe system of moral laws. (JT)
Organizational Politics in Schools: Micro, Macro, and Logics of Action.
ERIC Educational Resources Information Center
Bacharach, Samuel B.; Mundell, Bryan L.
1993-01-01
Develops a framework for analyzing the politics of school organizations, affirming a Weberian perspective as most appropriate. Develops "logic of action" (the implicit relationship between means and goals) as the focal point of organizational politics. Underlines the importance of analyzing interest groups and their strategies. Political…
Björck-Åkesson, Eva; Wilder, Jenny; Granlund, Mats; Pless, Mia; Simeonsson, Rune; Adolfsson, Margareta; Almqvist, Lena; Augustine, Lilly; Klang, Nina; Lillvist, Anne
2010-01-01
Early childhood intervention and habilitation services for children with disabilities operate on an interdisciplinary basis. This requires a common language between professionals and a shared framework for intervention goals and intervention implementation. The International Classification of Functioning, Disability and Health (ICF) and the version for children and youth (ICF-CY) may serve as this common framework and language. This overview of studies implemented by our research group is based on three research questions: Does the ICF-CY conceptual model have valid content, and is it logically coherent when investigated empirically? Is the ICF-CY classification useful for documenting child characteristics in services? What difficulties and benefits are related to using the ICF-CY model as a basis for intervention when it is implemented in services? A series of studies undertaken by the CHILD researchers is analysed. The analysis is based on data sets from published studies or master theses. The results and conclusion show that the ICF-CY has useful content and is logically coherent at the model level. Professionals find it useful for documenting children's body functions and activities. Guidelines for separating activity and participation are needed. The ICF-CY is a complex classification, and implementing it in services is a long-term project.
Architecting a Simulation Framework for Model Rehosting
NASA Technical Reports Server (NTRS)
Madden, Michael M.
2004-01-01
The utility of vehicle math models extends beyond human-in-the-loop simulation. It is desirable to deploy a given model across a multitude of applications that target design, analysis, and research. However, the vehicle model alone represents an incomplete simulation. One must also replicate the environment models (e.g., atmosphere, gravity, terrain) to achieve identical vehicle behavior across all applications. Environment models are increasing in complexity and represent a substantial investment to re-engineer for a new application. A software component that can be rehosted in each application is one solution to the deployment problem. The component must encapsulate both the vehicle and environment models. The component must have a well-defined interface that abstracts the bulk of the logic to operate the models. This paper examines the characteristics of a rehostable modeling component from the perspective of a human-in-the-loop simulation framework. The Langley Standard Real-Time Simulation in C++ (LaSRS++) is used as an example. LaSRS++ was recently redesigned to transform its modeling package into a rehostable component.
Breton, Mylaine; Green, Michael; Kreindler, Sara; Sutherland, Jason; Jbilou, Jalila; Wong, Sabrina T; Shaw, Jay; Crooks, Valorie A; Contandriopoulos, Damien; Smithman, Mélanie Ann; Brousselle, Astrid
2017-01-21
Having a regular primary care provider (i.e., family physician or nurse practitioner) is widely considered to be a prerequisite for obtaining healthcare that is timely, accessible, continuous, comprehensive, and well-coordinated with other parts of the healthcare system. Yet, 4.6 million Canadians, approximately 15% of Canada's population, are unattached; that is, they do not have a regular primary care provider. To address the critical need for attachment, especially for more vulnerable patients, six Canadian provinces have implemented centralized waiting lists for unattached patients. These waiting lists centralize unattached patients' requests for a primary care provider in a given territory and match patients with providers. From the little information we have on each province's centralized waiting list, we know that the way they work varies significantly from province to province. The main objective of this study is to compare the different models of centralized waiting lists for unattached patients implemented in six provinces of Canada with each other and with available scientific knowledge, in order to make recommendations on ways to improve their design and thereby increase attachment of patients to a primary care provider. A logic analysis approach developed in three steps will be used. Step 1: build logic models that describe each province's centralized waiting list through interviews with key stakeholders in each province; Step 2: develop a conceptual framework, separate from the provincially informed logic models, that identifies key characteristics of centralized waiting lists for unattached patients and factors influencing their implementation, through a literature review and interviews with experts; Step 3: compare the logic models to the conceptual framework to make recommendations to improve centralized waiting lists in different provinces during a pan-Canadian face-to-face exchange with decision-makers, clinicians and researchers. This study is based on an inter-provincial learning exchange approach in which we propose to compare centralized waiting lists and analyze variations in the strategies used to increase attachment to a regular primary care provider. Fostering inter-provincial healthcare system connectivity to improve centralized waiting list practices across Canada can act as a lever for attachment to a regular provider, enabling timely access to continuous, comprehensive and coordinated healthcare for all Canadians, and particularly for those who are vulnerable.
Logic regression and its extensions.
Schwender, Holger; Ruczinski, Ingo
2010-01-01
Logic regression is an adaptive classification and regression procedure, initially developed to reveal interacting single nucleotide polymorphisms (SNPs) in genetic association studies. In general, this approach can be used in any setting with binary predictors, when the interaction of these covariates is of primary interest. Logic regression searches for Boolean (logic) combinations of binary variables that best explain the variability in the outcome variable, and thus, reveals variables and interactions that are associated with the response and/or have predictive capabilities. The logic expressions are embedded in a generalized linear regression framework, and thus, logic regression can handle a variety of outcome types, such as binary responses in case-control studies, numeric responses, and time-to-event data. In this chapter, we provide an introduction to the logic regression methodology, list some applications in public health and medicine, and summarize some of the direct extensions and modifications of logic regression that have been proposed in the literature. Copyright © 2010 Elsevier Inc. All rights reserved.
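A minimal sketch of the central object, a Boolean logic tree over binary predictors whose truth value enters a logistic model as a derived covariate, is given below; the tree, coefficients, and data are hypothetical, and the simulated-annealing search over trees is omitted.

```python
import math

# A logic tree: ("and"/"or", left, right) over binary predictors;
# leaves are ("var", index) or ("not", index).
def eval_tree(tree, x):
    op = tree[0]
    if op == "var":
        return x[tree[1]]
    if op == "not":
        return 1 - x[tree[1]]
    l, r = eval_tree(tree[1], x), eval_tree(tree[2], x)
    return l & r if op == "and" else l | r

# e.g. L = X1 AND (X3 OR NOT X4): one interacting SNP combination
L = ("and", ("var", 0), ("or", ("var", 2), ("not", 3)))

def predict_prob(beta0, beta1, tree, x):
    """Logistic model: logit P(Y=1) = beta0 + beta1 * L(x)."""
    eta = beta0 + beta1 * eval_tree(tree, x)
    return 1.0 / (1.0 + math.exp(-eta))

print(predict_prob(-1.0, 2.0, L, [1, 0, 0, 0]))  # L=1 -> p ~ 0.73
print(predict_prob(-1.0, 2.0, L, [0, 1, 1, 1]))  # L=0 -> p ~ 0.27
```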
ERIC Educational Resources Information Center
Wang, Zhijun; Anderson, Terry; Chen, Li; Barbera, Elena
2017-01-01
Connectivist learning is interaction-centered learning. A framework describing interaction and cognitive engagement in connectivist learning was constructed using logical reasoning techniques. The framework and analysis was designed to help researchers and learning designers understand and adapt the characteristics and principles of interaction in…
Multilayer Stock Forecasting Model Using Fuzzy Time Series
Javedani Sadaei, Hossein; Lee, Muhammad Hisyam
2014-01-01
A review of the vast body of literature on using fuzzy time series (FTS) in stock market forecasting reveals certain deficiencies in the hybridization of findings. In addition, the lack of a constructive systematic framework that could indicate the direction of growth across FTS forecasting systems is conspicuous. In this study, we propose a multilayer model for stock market forecasting comprising five logically significant layers. Each layer has its own detailed concern and assists forecast development by addressing certain problems exclusively. To verify the model, a large dataset comprising the Taiwan Stock Index (TAIEX), the National Association of Securities Dealers Automated Quotations (NASDAQ), the Dow Jones Industrial Average (DJI), and the S&P 500 was chosen for the experiments. The results indicate that the proposed methodology has the potential to be accepted as a framework for model development in stock market forecasting using FTS. PMID:24605058
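For readers unfamiliar with the underlying machinery, a basic Chen-style fuzzy time series forecast step (fuzzify into intervals, collect fuzzy logical relationships, forecast from group midpoints) can be sketched as follows; the price series and interval choices are hypothetical, and the paper's multilayer model adds several layers on top of this.

```python
# Minimal Chen-style fuzzy time series forecast (illustrative only).
prices = [96, 101, 105, 103, 108, 112, 109, 115]  # hypothetical index levels
lo, hi, n_sets = 90, 120, 6
width = (hi - lo) / n_sets
mid = lambda k: lo + (k + 0.5) * width            # interval midpoint
fuzzify = lambda v: min(int((v - lo) // width), n_sets - 1)

# Build fuzzy logical relationship groups A_i -> {A_j, ...}
groups = {}
states = [fuzzify(v) for v in prices]
for a, b in zip(states, states[1:]):
    groups.setdefault(a, set()).add(b)

def forecast(value):
    s = fuzzify(value)
    nxt = groups.get(s, {s})            # unseen state: persist
    return sum(mid(k) for k in nxt) / len(nxt)

print(forecast(prices[-1]))   # forecast for the next period
```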
Applying Toulmin: Teaching Logical Reasoning and Argumentative Writing
ERIC Educational Resources Information Center
Rex, Lesley A.; Thomas, Ebony Elizabeth; Engel, Steven
2010-01-01
To learn to write well-reasoned persuasive arguments, students need in situ help thinking through the complexity and complications of an issue, making inferences based on evidence, and hierarchically grouping and logically sequencing ideas. They rely on teachers to make this happen. In this article, the authors explain the framework they used and…
Multidimensional Simulation Applied to Water Resources Management
NASA Astrophysics Data System (ADS)
Camara, A. S.; Ferreira, F. C.; Loucks, D. P.; Seixas, M. J.
1990-09-01
A framework for an integrated decision aiding simulation (IDEAS) methodology using numerical, linguistic, and pictorial entities and operations is introduced. IDEAS relies upon traditional numerical formulations, logical rules to handle linguistic entities with linguistic values, and a set of pictorial operations. Pictorial entities are defined by their shape, size, color, and position. Pictorial operators include reproduction (copy of a pictorial entity), mutation (expansion, rotation, translation, change in color), fertile encounters (intersection, reunion), and sterile encounters (absorption). Interaction between numerical, linguistic, and pictorial entities is handled through logical rules or a simplified vector calculus operation. This approach is shown to be applicable to various environmental and water resources management analyses using a model to assess the impacts of an oil spill. Future developments, including IDEAS implementation on parallel processing machines, are also discussed.
A beginner's guide to writing the nursing conceptual model-based theoretical rationale.
Gigliotti, Eileen; Manister, Nancy N
2012-10-01
Writing the theoretical rationale for a study can be a daunting prospect for novice researchers. Nursing's conceptual models provide excellent frameworks for placement of study variables, but moving from the very abstract concepts of the nursing model to the less abstract concepts of the study variables is difficult. Similar to the five-paragraph essay used by writing teachers to assist beginning writers to construct a logical thesis, the authors of this column present guidelines that beginners can follow to construct their theoretical rationale. This guide can be used with any nursing conceptual model but Neuman's model was chosen here as the exemplar.
Use of Knowledge Base Systems (EMDS) in Strategic and Tactical Forest Planning
NASA Astrophysics Data System (ADS)
Jensen, M. E.; Reynolds, K.; Stockmann, K.
2008-12-01
The USDA Forest Service 2008 Planning Rule requires Forest plans to provide a strategic vision for maintaining the sustainability of ecological, economic, and social systems across USFS lands through the identification of desired conditions and objectives. In this paper we show how knowledge-based systems can be efficiently used to evaluate disparate natural resource information to assess desired conditions and related objectives in Forest planning. We use the Ecosystem Management Decision Support (EMDS) system (http://www.institute.redlands.edu/emds/), which facilitates development of both logic-based models for evaluating ecosystem sustainability (desired conditions) and decision models to identify priority areas for integrated landscape restoration (objectives). The study area for our analysis spans 1,057 subwatersheds within western Montana and northern Idaho. Results of our study suggest that knowledge-based systems such as EMDS are well suited to both strategic and tactical planning and that the following points merit consideration in future National Forest (and other land management) planning efforts: 1) Logic models provide a consistent, transparent, and reproducible method for evaluating broad propositions about ecosystem sustainability such as: are watershed integrity, ecosystem and species diversity, social opportunities, and economic integrity in good shape across a planning area? The ability to evaluate such propositions in a formal logic framework also allows users the opportunity to evaluate statistical changes in outcomes over time, which could be very useful for regional and national reporting purposes and for addressing litigation; 2) The use of logic and decision models in strategic and tactical Forest planning provides a repository for expert knowledge (corporate memory) that is critical to the evaluation and management of ecosystem sustainability over time. This is especially true for the USFS and other federal resource agencies, which are likely to experience rapid turnover in tenured resource specialist positions within the next five years due to retirements; 3) Use of logic model output in decision models is an efficient method for synthesizing the typically large amounts of information needed to support integrated landscape restoration. Moreover, use of logic and decision models to design customized scenarios for integrated landscape restoration, as we have demonstrated with EMDS, offers substantial improvements to traditional GIS-based procedures such as suitability analysis. To our knowledge, this study represents the first attempt to link evaluations of desired conditions for ecosystem sustainability in strategic planning to tactical planning regarding the location of subwatersheds that best meet the objectives of integrated landscape restoration. The basic knowledge-based approach implemented in EMDS, with its logic (NetWeaver) and decision (Criterion Decision Plus) engines, is well suited both to multi-scale strategic planning and to multi-resource tactical planning.
Framework for a clinical information system.
Van De Velde, R; Lansiers, R; Antonissen, G
2002-01-01
The design and implementation of a Clinical Information System architecture is presented. This architecture has been developed and implemented based on components, following a strong underlying conceptual and technological model. Common Object Request Broker (CORBA) and n-tier technology are used, featuring centralised and departmental clinical information systems as the back-end store for all clinical data. Servers located in the "middle" tier apply the clinical (business) model and application rules. The main characteristics are the focus on modelling and the reuse of both data and business logic. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for this approach.
Program Monitoring with LTL in EAGLE
NASA Technical Reports Server (NTRS)
Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik
2004-01-01
We briefly present a rule-based framework called EAGLE, shown to be capable of defining and implementing finite-trace monitoring logics, including future- and past-time temporal logic, extended regular expressions, real-time and metric temporal logics (MTL), interval logics, forms of quantified temporal logics, and so on. In this paper we focus on a linear temporal logic (LTL) specialization of EAGLE. For an initial formula of size m, we establish upper bounds of O(m^2 2^m log m) and O(m^4 2^(2m) log^2 m) for the space and time complexity, respectively, of single-step evaluation over an input trace. This is close to the lower bound O(2^sqrt(m)) presented in the literature for future-time LTL. EAGLE has been successfully used, in both LTL and metric LTL forms, to test a real-time controller of an experimental NASA planetary rover.
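Single-step evaluation over a trace can be illustrated by the standard formula-progression (rewriting) technique for a small LTL fragment; this sketch is a generic illustration of the idea, not EAGLE's actual rule engine.

```python
# Formulas: "true", "false", ("ap", name), ("not", f), ("and", f, g),
# ("or", f, g), ("next", f), ("until", f, g).  Eventually p == true U p.
def prog(f, state):
    """Progress (rewrite) formula f through one trace state."""
    if f in ("true", "false"):
        return f
    op = f[0]
    if op == "ap":
        return "true" if f[1] in state else "false"
    if op == "not":
        g = prog(f[1], state)
        return {"true": "false", "false": "true"}.get(g, ("not", g))
    if op in ("and", "or"):
        l, r = prog(f[1], state), prog(f[2], state)
        if op == "and":
            if "false" in (l, r): return "false"
            return r if l == "true" else l if r == "true" else ("and", l, r)
        if "true" in (l, r): return "true"
        return r if l == "false" else l if r == "false" else ("or", l, r)
    if op == "next":
        return f[1]                      # obligation deferred to next state
    if op == "until":                    # f U g == g or (f and X(f U g))
        return prog(("or", f[2], ("and", f[1], ("next", f))), state)

# Monitor "eventually ack" == true U ack over a finite trace
phi = ("until", "true", ("ap", "ack"))
for state in [{"req"}, {"busy"}, {"ack"}]:
    phi = prog(phi, state)
print(phi)   # 'true': the property was satisfied on this trace
```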
NASA Astrophysics Data System (ADS)
Wan, Danny; Manfrini, Mauricio; Vaysset, Adrien; Souriau, Laurent; Wouters, Lennaert; Thiam, Arame; Raymenants, Eline; Sayan, Safak; Jussot, Julien; Swerts, Johan; Couet, Sebastien; Rassoul, Nouredine; Babaei Gavan, Khashayar; Paredis, Kristof; Huyghebaert, Cedric; Ercken, Monique; Wilson, Christopher J.; Mocuta, Dan; Radu, Iuliana P.
2018-04-01
Magnetic tunnel junctions (MTJs) interconnected via a continuous ferromagnetic free layer were fabricated for spin torque majority gate (STMG) logic. The MTJs are biased independently and show magnetoelectric response under spin transfer torque. The electrical control of these devices paves the way to future spin logic devices based on domain wall (DW) motion. In particular, it is a significant step towards the realization of a majority gate. To our knowledge, this is the first fabrication of a cross-shaped free layer shared by several perpendicular MTJs. The fabrication process can be generalized to any geometry and any number of MTJs. Thus, this framework can be applied to other spin logic concepts based on magnetic interconnect. Moreover, it allows exploration of spin dynamics for logic applications.
Implementing neural nets with programmable logic
NASA Technical Reports Server (NTRS)
Vidal, Jacques J.
1988-01-01
Networks of Boolean programmable logic modules are presented as one purely digital class of artificial neural nets. The approach contrasts with the continuous analog framework usually suggested. Programmable logic networks are capable of handling many neural-net applications. They avoid some of the limitations of threshold logic networks and present distinct opportunities. The network nodes are called dynamically programmable logic modules. They can be implemented with digitally controlled demultiplexers. Each node performs a Boolean function of its inputs which can be dynamically assigned. The overall network is therefore a combinational circuit and its outputs are Boolean global functions of the network's input variables. The approach offers definite advantages for VLSI implementation, namely, a regular architecture with limited connectivity, simplicity of the control machinery, natural modularity, and the support of a mature technology.
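A minimal sketch of such a dynamically programmable logic module, a k-input node whose Boolean function is a writable truth table selected demultiplexer-style by the inputs, is:

```python
class ProgrammableLogicModule:
    """A k-input Boolean node whose function is a writable truth table,
    mimicking a digitally controlled demultiplexer selecting one table
    entry per input combination."""
    def __init__(self, k):
        self.k = k
        self.table = [0] * (1 << k)   # function assignment (control word)

    def program(self, outputs):
        assert len(outputs) == 1 << self.k
        self.table = list(outputs)

    def __call__(self, *bits):
        idx = 0
        for b in bits:                # inputs select the table entry
            idx = (idx << 1) | b
        return self.table[idx]

node = ProgrammableLogicModule(2)
node.program([0, 1, 1, 0])            # XOR assignment
print(node(1, 0), node(1, 1))         # 1 0
node.program([1, 1, 1, 0])            # dynamically reassign: NAND
print(node(1, 1))                     # 0
```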
Boes, Peter; Ho, Meng Wei; Li, Zuofeng
2015-01-01
Image-guided radiotherapy (IGRT), based on radiopaque markers placed in the prostate gland, was used for proton therapy of prostate patients. Orthogonal X-rays and the IBA Digital Image Positioning System (DIPS) were used for setup correction prior to treatment and were repeated after treatment delivery. Following a rationale for margin estimates similar to that of van Herk,(1) the daily post-treatment DIPS data were analyzed to determine if an adaptive radiotherapy plan was necessary. A Web application using ASP.NET MVC5, Entity Framework, and an SQL database was designed to automate this process. The designed features included state-of-the-art Web technologies, a domain model closely matching the workflow, a database supporting concurrency and data mining, access to the DIPS database, secured user access and roles management, and graphing and analysis tools. The Model-View-Controller (MVC) paradigm allowed clean domain logic, unit testing, and extensibility. Client-side technologies, such as jQuery, jQuery Plug-ins, and Ajax, were adopted to achieve a rich user environment and fast response. Data models included patients, staff, treatment fields and records, correction vectors, DIPS images, and association logics. Data entry, analysis, workflow logics, and notifications were implemented. The system effectively modeled the clinical workflow and IGRT process. PACS number: 87 PMID:26103504
An iLab for Teaching Advanced Logic Concepts with Hardware Descriptive Languages
ERIC Educational Resources Information Center
Ayodele, Kayode P.; Inyang, Isaac A.; Kehinde, Lawrence O.
2015-01-01
One of the more interesting approaches to teaching advanced logic concepts is the use of online laboratory frameworks to provide student access to remote field-programmable devices. There is as yet, however, no conclusive evidence of the effectiveness of such an approach. This paper presents the Advanced Digital Lab, a remote laboratory based on…
ERIC Educational Resources Information Center
Zarcone, Alessandra; Padó, Sebastian; Lenci, Alessandro
2014-01-01
Logical metonymy resolution ("begin a book" → "begin reading a book" or "begin writing a book") has traditionally been explained either through complex lexical entries (qualia structures) or through the integration of the implicit event via post-lexical access to world knowledge. We propose that recent work within the…
Methods of Product Evaluation. Guide Number 10. Evaluation Guides Series.
ERIC Educational Resources Information Center
St. John, Mark
In this guide the logic of product evaluation is described in a framework that is meant to be general and adaptable to all kinds of evaluations. Evaluators should consider using the logic and methods of product evaluation when (1) the purpose of the evaluation is to aid evaluators in making a decision about purchases; (2) a comprehensive…
Interpreting Abstract Interpretations in Membership Equational Logic
NASA Technical Reports Server (NTRS)
Fischer, Bernd; Rosu, Grigore
2001-01-01
We present a logical framework in which abstract interpretations can be naturally specified and then verified. Our approach is based on membership equational logic, which extends equational logics by membership axioms asserting that a term has a certain sort. We represent an abstract interpretation as a membership equational logic specification, usually as an overloaded order-sorted signature with membership axioms. It turns out that, for any term, its least sort over this specification corresponds to its most concrete abstract value. Maude implements membership equational logic and provides mechanisms to calculate the least sort of a term efficiently. We first show how Maude can be used to get prototyping of abstract interpretations "for free." Building on the meta-logic facilities of Maude, we further develop a tool that automatically checks an abstract interpretation against a set of user-defined properties. This can be used to select an appropriate abstract interpretation, to characterize the specified loss of information during abstraction, and to compare different abstractions with each other.
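As a rough Python analogue of "least sort = most concrete abstract value", the sketch below runs a sign-domain abstract interpretation of arithmetic expressions, falling back to a top value when the domain cannot decide; in Maude this is what the least-sort computation provides, and the encoding here is only illustrative.

```python
# Sign abstraction: each expression gets its most precise abstract
# value from {Neg, Zero, Pos, Int} (Int = "don't know", the top sort).
NEG, ZERO, POS, INT = "Neg", "Zero", "Pos", "Int"

MUL = {(NEG, NEG): POS, (NEG, POS): NEG, (POS, NEG): NEG, (POS, POS): POS}
ADD = {(NEG, NEG): NEG, (POS, POS): POS}

def alpha(n):
    """Abstract a concrete integer to its sign."""
    return ZERO if n == 0 else POS if n > 0 else NEG

def absint(e):
    """e is an int literal or ('+'/'*', left, right)."""
    if isinstance(e, int):
        return alpha(e)
    op, l, r = e[0], absint(e[1]), absint(e[2])
    if op == "*" and ZERO in (l, r):
        return ZERO
    table = MUL if op == "*" else ADD
    return table.get((l, r), INT)       # fall back to the top sort

# (-3) * (4 + 5) is provably negative without computing it
print(absint(("*", -3, ("+", 4, 5))))   # Neg
# 4 + (-5): sign unknown in this domain
print(absint(("+", 4, -5)))             # Int
```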
Evaluation of Model-Based Training for Vertical Guidance Logic
NASA Technical Reports Server (NTRS)
Feary, Michael; Palmer, Everett; Sherry, Lance; Polson, Peter; Alkin, Marty; McCrobie, Dan; Kelley, Jerry; Rosekind, Mark (Technical Monitor)
1997-01-01
This paper summarizes the results of a study which introduces a structured, model-based approach to learning how the automated vertical guidance system works on a modern commercial air transport. The study proposes a framework to provide accurate and complete information in an attempt to eliminate confusion about 'what the system is doing'. It examines a structured methodology for organizing the ideas on which the system was designed, communicating this information through the training material, and displaying it in the airplane. Previous research on model-based, computer-aided instructional technology has shown reductions in the amount of time needed to reach a specified level of competence. The lessons learned from the development of these technologies are well suited for use with the design methodology that was used to develop the vertical guidance logic for a large commercial air transport. The design methodology presents the model from which to derive the training material and the content of information to be displayed to the operator. The study consists of a 2 x 2 factorial experiment comparing a new method of training vertical guidance logic and a new type of display. The format of the material used to derive both the training and the display is provided by the Operational Procedure Methodology. The training condition compares current training material to the new structured format. The display condition involves a change in the content of the information displayed into pieces that agree with the concepts with which the system was designed.
New fundamental evidence of non-classical structure in the combination of natural concepts.
Aerts, D; Sozzo, S; Veloz, T
2016-01-13
We recently performed cognitive experiments on conjunctions and negations of two concepts with the aim of investigating the combination problem of concepts. Our experiments confirmed the deviations (conceptual vagueness, underextension, overextension, etc.) from the rules of classical (fuzzy) logic and probability theory observed by several scholars in concept theory, while our data were successfully modelled in a quantum-theoretic framework developed by ourselves. In this paper, we isolate a new, very stable and systematic pattern of violation of classicality that occurs in concept combinations. In addition, the strength and regularity of this non-classical effect leads us to believe that it occurs at a more fundamental level than the deviations observed up to now. It is our opinion that we have identified a deep non-classical mechanism determining not only how concepts are combined but, rather, how they are formed. We show that this effect can be faithfully modelled in a two-sector Fock space structure, and that it can be exactly explained by assuming that human thought is the superposition of two processes, a 'logical reasoning', guided by 'logic', and a 'conceptual reasoning', guided by 'emergence', and that the latter generally prevails over the former. All these findings provide new fundamental support to our quantum-theoretic approach to human cognition. © 2015 The Author(s).
Fuzzy logic-based assessment for mapping potential infiltration areas in low-gradient watersheds.
Quiroz Londoño, Orlando Mauricio; Romanelli, Asunción; Lima, María Lourdes; Massone, Héctor Enrique; Martínez, Daniel Emilio
2016-07-01
This paper gives an account of the design of a logic-based approach for identifying potential infiltration areas in low-gradient watersheds based on remote sensing data. This methodological framework is applied in a sector of the Pampa Plain, Argentina, which has a high level of agricultural activity and large demands for groundwater supplies. Potential infiltration sites are assessed as a function of two primary topics: hydrologic and soil conditions. The model shows the state of each evaluated subwatershed with respect to its potential contribution to infiltration, based mainly on easily measurable and commonly used parameters: drainage density, geomorphologic units, soil media, land cover, slope and aspect (slope orientation). Mapped outputs from the logic model displayed 42% very low to low, 16% moderate, and 41% high to very high contribution to potential infiltration in the whole watershed. Subwatersheds in the upper and lower sections were identified as areas with high to very high potential infiltration according to the following media features: low drainage density (<1.5 km/km(2)), arable land and pastures as the main land-cover categories, sandy clay loam to loam-clay loam soils, and the geomorphological units named poorly drained plain, channelized drainage plain, and dunes and beaches. Copyright © 2016 Elsevier Ltd. All rights reserved.
Command Disaggregation Attack and Mitigation in Industrial Internet of Things
Zhu, Pei-Dong; Hu, Yi-Fan; Cui, Peng-Shuai; Zhang, Yan
2017-01-01
A cyber-physical attack in the industrial Internet of Things can cause severe damage to the physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding on command aggregators like programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different types of command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to wrong actuators. We describe three attack models that implement these modes while going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of the central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We have designed components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate different levels of impact from various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework. PMID:29065461
Command Disaggregation Attack and Mitigation in Industrial Internet of Things.
Xun, Peng; Zhu, Pei-Dong; Hu, Yi-Fan; Cui, Peng-Shuai; Zhang, Yan
2017-10-21
A cyber-physical attack in the industrial Internet of Things can cause severe damage to the physical system. In this paper, we focus on the command disaggregation attack, wherein attackers modify disaggregated commands by intruding on command aggregators like programmable logic controllers, and then maliciously manipulate the physical process. It is necessary to investigate these attacks, analyze their impact on the physical process, and seek effective detection mechanisms. We depict two different types of command disaggregation attack modes: (1) the command sequence is disordered and (2) disaggregated sub-commands are allocated to wrong actuators. We describe three attack models that implement these modes while going undetected by existing detection methods. A novel and effective framework is provided to detect command disaggregation attacks. The framework utilizes the correlations among two-tier command sequences, including commands from the output of the central controller and sub-commands from the input of actuators, to detect attacks before disruptions occur. We have designed components of the framework and explain how to mine and use these correlations to detect attacks. We present two case studies to validate different levels of impact from various attack models and the effectiveness of the detection framework. Finally, we discuss how to enhance the detection framework.
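A hedged sketch of the detection idea, learning the expected (actuator, sub-command) sequences per aggregate command from clean traffic and flagging disorder or misallocation, might look as follows; the command names and traces are invented, and the paper's correlation mining is richer than this.

```python
# Learn expected (actuator, sub-command) sequences per aggregate command
# from clean traffic, then flag disorder or misallocation online.
from collections import defaultdict

baseline = defaultdict(set)
clean_traces = [
    ("open_intake", (("valve_a", "open"), ("pump_1", "start"))),
    ("open_intake", (("valve_a", "open"), ("pump_1", "start"))),
    ("shutdown",    (("pump_1", "stop"), ("valve_a", "close"))),
]
for cmd, subs in clean_traces:
    baseline[cmd].add(subs)

def check(cmd, subs):
    if subs in baseline[cmd]:
        return "ok"
    # same multiset but different order -> sequence-disorder attack
    if any(sorted(subs) == sorted(ref) for ref in baseline[cmd]):
        return "ALERT: sub-commands disordered"
    return "ALERT: sub-command/actuator mismatch"

print(check("open_intake", (("valve_a", "open"), ("pump_1", "start"))))
print(check("shutdown",    (("valve_a", "close"), ("pump_1", "stop"))))  # disordered
print(check("open_intake", (("valve_b", "open"), ("pump_1", "start"))))  # wrong actuator
```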
Dal Palù, Alessandro; Pontelli, Enrico; He, Jing; Lu, Yonggang
2007-01-01
The paper describes a novel framework, constructed using Constraint Logic Programming (CLP) and parallelism, to determine the association between parts of the primary sequence of a protein and alpha-helices extracted from 3D low-resolution descriptions of large protein complexes. The association is determined by extracting constraints from the 3D information, regarding length, relative position and connectivity of helices, and solving these constraints with the guidance of a secondary structure prediction algorithm. Parallelism is employed to enhance performance on large proteins. The framework provides a fast, inexpensive alternative to determine the exact tertiary structure of unknown proteins.
NASA Astrophysics Data System (ADS)
Mezentsev, Yu A.; Baranova, N. V.
2018-05-01
A universal economic-mathematical model for determining optimal strategies for managing the production and logistics subsystems (and their components) of enterprises is considered. The declared universality makes it possible to take into account, at the system level, both production components, including limitations on the ways raw materials and components are converted into sold goods, and resource and logical restrictions on input and output material flows. The presented model and the generated control problems are developed within a unified framework that allows logical conditions of any complexity to be implemented and the corresponding formal optimization tasks to be defined. The conceptual meaning of the criteria and constraints used is explained. The generated mixed-programming tasks are shown to belong to the class NP. An approximate polynomial algorithm is proposed for solving the posed mixed-programming optimization tasks of realistic dimension and high computational complexity. Results of testing the algorithm on tasks across a wide range of dimensions are presented.
Fault trees for decision making in systems analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Howard E.
1975-10-09
The application of fault tree analysis (FTA) to system safety and reliability is presented within the framework of system safety analysis. The concepts and techniques involved in manual and automated fault tree construction are described and their differences noted. The theory of mathematical reliability pertinent to FTA is presented with emphasis on engineering applications. An outline of the quantitative reliability techniques of the Reactor Safety Study is given. Concepts of probabilistic importance are presented within the fault tree framework and applied to the areas of system design, diagnosis and simulation. The computer code IMPORTANCE ranks basic events and cut sets according to a sensitivity analysis. A useful feature of the IMPORTANCE code is that it can accept relative failure data as input. The output of the IMPORTANCE code can assist an analyst in finding weaknesses in system design and operation, suggest the best course of system upgrade, and determine the optimal location of sensors within a system. A general simulation model of system failure in terms of fault tree logic is described. The model is intended for efficient diagnosis of the causes of system failure in the event of a system breakdown. It can also be used to assist an operator in making decisions under a time constraint regarding the future course of operations. The model is well suited for computer implementation. New results incorporated in the simulation model include an algorithm to generate repair checklists on the basis of fault tree logic and a one-step-ahead optimization procedure that minimizes the expected time to diagnose system failure.
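As a rough illustration of the kind of ranking such a code performs, the following Python sketch computes the top-event probability from minimal cut sets under the rare-event approximation and ranks basic events by a numerically estimated Birnbaum importance; all cut sets and failure probabilities are invented.

```python
# Toy importance ranking over minimal cut sets. Probabilities and cut sets
# are illustrative, not from the Reactor Safety Study.
p = {"A": 0.01, "B": 0.02, "C": 0.005}          # basic-event failure probabilities
cut_sets = [{"A", "B"}, {"A", "C"}, {"B", "C"}] # minimal cut sets

def top_probability(probs):
    # rare-event approximation: sum of cut-set probabilities
    total = 0.0
    for cs in cut_sets:
        prob = 1.0
        for event in cs:
            prob *= probs[event]
        total += prob
    return total

def birnbaum(event, eps=1e-6):
    # Birnbaum importance: sensitivity of top-event probability to p[event]
    hi = dict(p); hi[event] += eps
    lo = dict(p); lo[event] -= eps
    return (top_probability(hi) - top_probability(lo)) / (2 * eps)

for event in sorted(p, key=birnbaum, reverse=True):
    print(event, round(birnbaum(event), 4))
```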
A case study of evaluating informatics impact on diffusion of scientific knowledge.
Katz, Susan B
2008-11-06
This case study poster uses a newly developed framework to evaluate an informatics effort in its public health context. The electronic clearance system being evaluated provides the potential for increasing the speed and quality of scientific diffusion of knowledge, and thus translation of research into practice. A graphical logic model and tabular results of the evaluation are presented. Public health history suggests potential benefits of more timely and coordinated diffusion of scientific information.
Mohammed, Abdul-Wahid; Xu, Yang; Hu, Haixiao; Agyemang, Brighter
2016-09-21
In novel collaborative systems, cooperative entities coordinate services to achieve local and global objectives. With the growing pervasiveness of cyber-physical systems, however, such collaboration is hampered by differences in the operations of the cyber and physical objects, and the dynamic formation of collaborative functionality from high-level system goals has become a practical need. In this paper, we propose a cross-layer automation and management model for cyber-physical systems. The model casts the dynamic formation of collaborative services pursuing laid-down system goals as an ontology-oriented hierarchical task network. Ontological intelligence provides the semantic technology of this model, and through semantic reasoning, primitive tasks can be dynamically composed from high-level system goals. In dealing with uncertainty, we further propose a novel bridge between hierarchical task networks and Markov logic networks, called the Markov task network. This leverages the efficient inference algorithms of Markov logic networks to reduce both the computational and inferential loads of task decomposition. Our experimental results show that high-precision service composition under uncertainty can be achieved using this approach.
Organized Cognition: Theoretical Framework for Future C2 Research and Implementation
2011-06-01
Urban Growth Modeling Using Anfis Algorithm: a Case Study for Sanandaj City, Iran
NASA Astrophysics Data System (ADS)
Mohammady, S.; Delavar, M. R.; Pijanowski, B. C.
2013-10-01
Global urban population increased from 22.9% in 1985 to 47% in 2010. In spite of the worldwide tendency toward urbanization, only about 2% of Earth's land surface is covered by cities. The urban population of Iran is increasing due to social and economic development. The proportion of the population living in Iranian urban areas has consistently increased, from about 31% in 1956 to 68.4% in 2006. Migration of the rural population to cities and population growth within cities have caused many problems, such as irregular growth of cities and improper placement of infrastructure and urban services. Air and environmental pollution, resource degradation, and insufficient infrastructure are results of poor urban planning that negatively impact the environment and the livelihoods of people living in cities. These issues are a consequence of improper land use planning. Models have been employed to assist our understanding of the relations between land use and its subsequent effects, and different models for urban growth modeling have been developed. Computational intelligence methods have contributed greatly across application domains, and hybrid-algorithm research has become a major trend within computational intelligence. An Artificial Neural Network (ANN) can deal with imprecise data through training, while fuzzy logic can deal with the uncertainty of human cognition. An ANN learns from scratch by adjusting the interconnections between layers, and the Fuzzy Inference System (FIS) is a popular computing framework based on fuzzy set theory, fuzzy logic, and fuzzy reasoning. Fuzzy logic has many advantages, such as flexibility; on the other hand, one of the biggest problems in applying fuzzy logic is choosing the location and shape of the membership function for each fuzzy variable, which is generally solved by trial and error. In contrast, numerical computation and learning are the strengths of neural networks, although it is not easy to obtain the optimal network structure. Since a neural network is embedded in this type of fuzzy system, a learning algorithm can adjust the parameters until the optimal solution is reached. The Adaptive Neuro-Fuzzy Inference System (ANFIS), owing to its ability to capture nonlinear structure, is thus a popular framework for solving complex problems. The fusion of ANN and FIS has attracted growing interest among researchers in various scientific and engineering areas due to the growing need for adaptive intelligent systems that solve real-world problems. In this research, an ANFIS method was developed for modeling land use change and interpreting the relationships among the drivers of urbanization. Our study area is the city of Sanandaj in the west of Iran. Landsat images acquired in 2000 and 2006 were used for model development and calibration. The parameters used in this study include distance to major roads, distance to residential regions, elevation, number of urban pixels in a 3-by-3 neighborhood, and distance to green space. The Percent Correct Match (PCM) and Figure of Merit, used to assess model goodness of fit, were 93.77% and 64.30%, respectively.
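For readers unfamiliar with the two goodness-of-fit metrics, the following Python sketch computes them for a toy binary change map, assuming the common land-change definitions (PCM as the fraction of correctly classified cells; Figure of Merit as hits divided by hits plus misses plus false alarms). The arrays are invented.

```python
# Goodness-of-fit metrics for a binary urban/non-urban change map (toy data).
observed_change  = [1, 1, 0, 0, 1, 0, 1, 0]   # 1 = cell urbanized in reality
predicted_change = [1, 0, 0, 1, 1, 0, 1, 0]   # 1 = model predicts urbanization

pairs = list(zip(observed_change, predicted_change))
hits         = sum(o == 1 and p == 1 for o, p in pairs)
misses       = sum(o == 1 and p == 0 for o, p in pairs)
false_alarms = sum(o == 0 and p == 1 for o, p in pairs)
correct      = sum(o == p for o, p in pairs)

pcm = correct / len(pairs)                    # Percent Correct Match
fom = hits / (hits + misses + false_alarms)   # Figure of Merit
print(f"PCM = {pcm:.2%}, Figure of Merit = {fom:.2%}")
```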
Knowledge representation in fuzzy logic
NASA Technical Reports Server (NTRS)
Zadeh, Lotfi A.
1989-01-01
The author presents a summary of the basic concepts and techniques underlying the application of fuzzy logic to knowledge representation. He then describes a number of examples relating to its use as a computational system for dealing with uncertainty and imprecision in the context of knowledge, meaning, and inference. It is noted that one of the basic aims of fuzzy logic is to provide a computational framework for knowledge representation and inference in an environment of uncertainty and imprecision. In such environments, fuzzy logic is effective when the solutions need not be precise and/or it is acceptable for a conclusion to have a dispositional rather than categorical validity. The importance of fuzzy logic derives from the fact that there are many real-world applications which fit these conditions, especially in the realm of knowledge-based systems for decision-making and control.
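A minimal Python sketch of these ideas, with invented linguistic variables and rules: membership functions represent imprecise concepts, fuzzy OR is taken as a maximum, and the conclusion is a graded (dispositional) degree rather than a categorical verdict.

```python
# Minimal fuzzy-rule sketch: linguistic variables as membership functions and
# a graded conclusion. The temperature/fan example and all numbers are invented.
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function rising on [a, b], flat on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def warm(t):  return trapezoid(t, 15, 20, 25, 30)
def hot(t):   return trapezoid(t, 25, 30, 45, 60)

def fan_speed_high(t):
    # Rule base: IF hot THEN high; IF warm THEN high with weaker support.
    # max acts as fuzzy OR over the rule contributions.
    return max(hot(t), 0.5 * warm(t))

for t in (18, 24, 28, 35):
    print(t, "degree of 'fan speed high':", round(fan_speed_high(t), 2))
```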
EAGLE Monitors by Collecting Facts and Generating Obligations
NASA Technical Reports Server (NTRS)
Barringer, Howard; Goldberg, Allen; Havelund, Klaus; Sen, Koushik
2003-01-01
We present a rule-based framework, called EAGLE, that has been shown to be capable of defining and implementing a range of finite-trace monitoring logics, including future and past time temporal logic, extended regular expressions, real-time and metric temporal logics, interval logics, forms of quantified temporal logics, and so on. A monitor for an EAGLE formula checks whether a finite trace of states satisfies the given formula. We present, in detail, an algorithm for the synthesis of monitors for EAGLE. The algorithm is implemented as a Java application and involves novel techniques for rule definition, manipulation and execution. Monitoring is achieved on a state-by-state basis, avoiding any need to store the input trace of states. Our initial experiments have been successful: EAGLE detected a previously unknown bug while testing a planetary rover controller.
Molecular system identification for enzyme directed evolution and design
NASA Astrophysics Data System (ADS)
Guan, Xiangying; Chakrabarti, Raj
2017-09-01
The rational design of chemical catalysts requires methods for the measurement of free energy differences in the catalytic mechanism for any given catalyst Hamiltonian. The scope of experimental learning algorithms that can be applied to catalyst design would also be expanded by the availability of such methods. Methods for catalyst characterization typically either estimate apparent kinetic parameters that do not necessarily correspond to free energy differences in the catalytic mechanism or measure individual free energy differences that are not sufficient for establishing the relationship between the potential energy surface and catalytic activity. Moreover, in order to enhance the duty cycle of catalyst design, statistically efficient methods for the estimation of the complete set of free energy differences relevant to the catalytic activity based on high-throughput measurements are preferred. In this paper, we present a theoretical and algorithmic system identification framework for the optimal estimation of free energy differences in solution phase catalysts, with a focus on one- and two-substrate enzymes. This framework, which can be automated using programmable logic, prescribes a choice of feasible experimental measurements and manipulated input variables that identify the complete set of free energy differences relevant to the catalytic activity and minimize the uncertainty in these free energy estimates for each successive Hamiltonian design. The framework also employs decision-theoretic logic to determine when model reduction can be applied to improve the duty cycle of high-throughput catalyst design. Automation of the algorithm using fluidic control systems is proposed, and applications of the framework to the problem of enzyme design are discussed.
NASA Astrophysics Data System (ADS)
Antle, J. M.; Valdivia, R. O.; Claessens, L.; Nelson, G. C.; Rosenzweig, C.; Ruane, A. C.; Vervoort, J.
2013-12-01
The global change research community has recognized that new pathway and scenario concepts are needed to implement impact and vulnerability assessment that is logically consistent across local, regional and global scales. For impact and vulnerability assessment, new socio-economic pathway and scenario concepts are being developed. Representative Agricultural Pathways (RAPs) are designed to extend global pathways to provide the detail needed for global and regional assessment of agricultural systems. In addition, research by the Agricultural Model Inter-comparison and Improvement Project (AgMIP) shows that RAPs provide a powerful way to engage stakeholders in climate-related research throughout the research process and in communication of research results. RAPs are based on the integrated assessment framework developed by AgMIP. This framework shows that both bio-physical and socio-economic drivers are essential components of agricultural pathways and logically precede the definition of adaptation and mitigation scenarios that embody associated capabilities and challenges. This approach is based on a trans-disciplinary process for designing pathways and then translating them into parameter sets for bio-physical and economic models that are components of agricultural integrated assessments of climate impact, adaptation and mitigation. RAPs must be designed to be part of a logically consistent set of drivers and outcomes from global to regional and local. Global RAPs are designed to be consistent with higher-level global socio-economic pathways, but add key agricultural drivers such as agricultural growth trends that are not specified in more general pathways, as illustrated in a recent inter-comparison of global agricultural models. To create pathways at regional or local scales, further detail is needed. At this level, teams of scientists and other experts with knowledge of the agricultural systems and regions work together through a step-wise process. Experiences from AgMIP Regional Teams, and from the project on Regional Approaches to Climate Change in the Pacific Northwest, are used to discuss how the RAPs procedures can be further developed and improved, and how RAPs can help engage stakeholders in climate-related research throughout the research process and in communication of research results.
NASA Astrophysics Data System (ADS)
Irawan, Adi; Mardiyana; Retno Sari Saputro, Dewi
2017-06-01
This research aimed to determine the effect of learning model on learning achievement in terms of students' logical-mathematical intelligence. The learning models compared were NHT with Concept Maps, TGT with Concept Maps, and the Direct Learning model. The research was quasi-experimental with a 3×3 factorial design. The population was all students of class XI Natural Sciences of senior high schools in Karanganyar regency in the academic year 2016/2017. The conclusions were: 1) students taught with the NHT model with Concept Maps achieved better than students taught with the TGT model with Concept Maps or the Direct Learning model, and students taught with the TGT model with Concept Maps achieved better than students taught with the Direct Learning model; 2) students with high logical-mathematical intelligence achieved better than students with medium or low logical-mathematical intelligence, and students with medium logical-mathematical intelligence achieved better than students with low logical-mathematical intelligence; 3) at each level of logical-mathematical intelligence, students taught with the NHT model with Concept Maps achieved better than students taught with the TGT model with Concept Maps, students taught with the NHT model with Concept Maps achieved better than students taught with the Direct Learning model, and students taught with the TGT model with Concept Maps achieved better than students taught with the Direct Learning model; 4) within each learning model, students with high logical-mathematical intelligence achieved better than students with medium logical-mathematical intelligence, and students with medium logical-mathematical intelligence achieved better than students with low logical-mathematical intelligence.
Expanding a First-Order Logic Mitigation Framework to Handle Multimorbid Patient Preferences
Michalowski, Martin; Wilk, Szymon; Rosu, Daniela; Kezadri, Mounira; Michalowski, Wojtek; Carrier, Marc
2015-01-01
The increasing prevalence of multimorbidity is a challenge for physicians who have to manage a constantly growing number of patients with simultaneous diseases. Adding to this challenge is the need to incorporate patient preferences as key components of the care process, thanks in part to the emergence of personalized and participatory medicine. In our previous work we proposed a framework employing first-order logic to represent clinical practice guidelines (CPGs) and to mitigate possible adverse interactions when concurrently applying multiple CPGs to a multimorbid patient. In this paper, we describe extensions to our methodological framework that (1) broaden our definition of revision operators to support required and desired types of revisions defined in secondary knowledge sources, and (2) expand the mitigation algorithm to apply revisions based on their type. We illustrate the capabilities of the expanded framework using a clinical case study of a multimorbid patient with stable cardiac artery disease who suffers a sudden onset of deep vein thrombosis. PMID:26958226
An Assessment of Agency Theory as a Framework for the Government-University Relationship
ERIC Educational Resources Information Center
Kivisto, Jussi
2008-01-01
The aim of this paper is to use agency theory as the theoretical framework for an examination of the government-university relationship and to assess the main strengths and weaknesses of the theory in this context. Because of its logically consistent framework, agency theory is able to manifest many of the complexities and difficulties that…
A Data Model Framework for the Characterization of a Satellite Data Handling Software
NASA Astrophysics Data System (ADS)
Camatto, Gianluigi; Tipaldi, Massimo; Bothmer, Wolfgang; Ferraguto, Massimo; Bruenjes, Bernhard
2014-08-01
This paper describes an approach for modelling the characterization and configuration data produced when developing Satellite Data Handling Software (DHSW). The model can then be used as an input for preparing the logical and physical representation of the Satellite Reference Database (SRDB) contents and the related SW suite, an essential product that allows transferring information between the different system stakeholders, and also for producing part of the DHSW documentation and artefacts. Special attention is given to the shaping of the general Parameter concept, which is shared by a number of different entities within a Space System.
Generalized Symbolic Execution for Model Checking and Testing
NASA Technical Reports Server (NTRS)
Khurshid, Sarfraz; Pasareanu, Corina; Visser, Willem; Kofmeyer, David (Technical Monitor)
2003-01-01
Modern software systems, which often are concurrent and manipulate complex data structures, must be extremely reliable. We present a novel framework, based on symbolic execution, for the automated checking of such systems. We provide a two-fold generalization of traditional symbolic execution based approaches: first, we define a program instrumentation that enables standard model checkers to perform symbolic execution; second, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity of lists), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore program heap configurations (using a systematic treatment of aliasing) and to manipulate logical formulae on program data values (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure, and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.
Model Checking Degrees of Belief in a System of Agents
NASA Technical Reports Server (NTRS)
Raimondi, Franco; Primiero, Giuseppe; Rungta, Neha
2014-01-01
Reasoning about degrees of belief has been investigated in the past by a number of authors and has a number of practical applications in real life. In this paper we present a unified framework to model and verify degrees of belief in a system of agents. In particular, we describe an extension of the temporal-epistemic logic CTLK and we introduce a semantics based on interpreted systems for this extension. In this way, degrees of beliefs do not need to be provided externally, but can be derived automatically from the possible executions of the system, thereby providing a computationally grounded formalism. We leverage the semantics to (a) construct a model checking algorithm, (b) investigate its complexity, (c) provide a Java implementation of the model checking algorithm, and (d) evaluate our approach using the standard benchmark of the dining cryptographers. Finally, we provide a detailed case study: using our framework and our implementation, we assess and verify the situational awareness of the pilot of Air France 447 flying in off-nominal conditions.
Runtime Analysis of Linear Temporal Logic Specifications
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus
2001-01-01
This report presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Büchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
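The report's observer automata are generated automatically from LTL; as a hand-built stand-in, the following Python sketch monitors the finite-trace reading of G(request -> F ack) state by state, storing only automaton state rather than the trace, as described above. Event names are illustrative.

```python
# Hand-built finite-trace monitor for G(request -> F ack): every request must
# eventually be acknowledged before the trace ends. The observer keeps only
# one bit of automaton state, never the trace itself.
def monitor(trace):
    pending = False                 # automaton state: unacknowledged request?
    for state in trace:             # state-by-state, no trace storage
        if state.get("request"):
            pending = True
        if state.get("ack"):
            pending = False
    return not pending              # finite-trace verdict at end of execution

ok_trace  = [{"request": True}, {}, {"ack": True}]
bad_trace = [{"request": True}, {}, {}]
print(monitor(ok_trace), monitor(bad_trace))   # True False
```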
Applying gene regulatory network logic to the evolution of social behavior.
Baran, Nicole M; McGrath, Patrick T; Streelman, J Todd
2017-06-06
Animal behavior is ultimately the product of gene regulatory networks (GRNs) for brain development and neural networks for brain function. The GRN approach has advanced the fields of genomics and development, and we identify organizational similarities between networks of genes that build the brain and networks of neurons that encode brain function. In this perspective, we engage the analogy between developmental networks and neural networks, exploring the advantages of using GRN logic to study behavior. Applying the GRN approach to the brain and behavior provides a quantitative and manipulative framework for discovery. We illustrate features of this framework using the example of social behavior and the neural circuitry of aggression.
Devil is in the details: Using logic models to investigate program process.
Peyton, David J; Scicchitano, Michael
2017-12-01
Theory-based logic models are commonly developed as part of the requirements for grant funding. As a tool for communicating complex social programs, theory-based logic models are an effective visual communication device. However, after initial development, they are often abandoned and remain in their initial form despite changes in the program process. This paper examines the potential benefits of committing time and resources to revising the initial theory-driven logic model and developing detailed logic models that describe key activities, to accurately reflect the program and assist in effective program management. The authors use a funded special education teacher preparation program to exemplify the utility of drill-down logic models. The paper concludes with lessons learned from the iterative revision process and suggests how the process can lead to more flexible and calibrated program management. Copyright © 2017 Elsevier Ltd. All rights reserved.
Logic-Based Models for the Analysis of Cell Signaling Networks
2010-01-01
Computational models are increasingly used to analyze the operation of complex biochemical networks, including those involved in cell signaling networks. Here we review recent advances in applying logic-based modeling to mammalian cell biology. Logic-based models represent biomolecular networks in a simple and intuitive manner without describing the detailed biochemistry of each interaction. A brief description of several logic-based modeling methods is followed by six case studies that demonstrate biological questions recently addressed using logic-based models and point to potential advances in model formalisms and training procedures that promise to enhance the utility of logic-based methods for studying the relationship between environmental inputs and phenotypic or signaling state outputs of complex signaling networks. PMID:20225868
The Illness Narratives of Health Managers: Developing an Analytical Framework
ERIC Educational Resources Information Center
Exworthy, Mark
2011-01-01
This paper examines the personal experience of illness and healthcare by health managers through their illness narratives. By synthesising a wider literature of illness narratives and health management, an analytical framework is presented, which considers the impact of illness narratives, comprising the logic of illness narratives, the actors…
Framework for Transforming Departmental Culture to Support Educational Innovation
ERIC Educational Resources Information Center
Corbo, Joel C.; Reinholz, Daniel L.; Dancy, Melissa H.; Deetz, Stanley; Finkelstein, Noah
2016-01-01
This paper provides a research-based framework for promoting institutional change in higher education. To date, most educational change efforts have focused on relatively narrow subsets of the university system (e.g., faculty teaching practices or administrative policies) and have been largely driven by implicit change logics; both of these…
Contandriopoulos, Damien; Brousselle, Astrid; Dubois, Carl-Ardy; Perroux, Mélanie; Beaulieu, Marie-Dominique; Brault, Isabelle; Kilpatrick, Kelley; D'Amour, Danielle; Sangster-Gormley, Esther
2015-02-27
Integrating nurse practitioners into primary care teams is a process that involves significant challenges. To be successful, nurse practitioner integration into primary care teams requires, among other things, a redefinition of professional boundaries, in particular those of medicine and nursing, a coherent model of inter- and intra-professional collaboration, and team-based work processes that make the best use of the subsidiarity principle. There have been numerous studies on nurse practitioner integration, and the literature provides a comprehensive list of barriers to, and facilitators of, integration. However, this literature is much less prolific in discussing the operational-level implications of those barriers and facilitators and in offering practical recommendations. In the context of a large-scale research project on the introduction of nurse practitioners in Quebec (Canada), we relied on a logic-analysis approach based, on the one hand, on a realist review of the literature and, on the other, on qualitative case studies in six primary healthcare teams in rural and urban areas of Quebec. Five core themes that need to be taken into account when integrating nurse practitioners into primary care teams were identified. Those themes are: planning, role definition, practice model, collaboration, and team support. The present paper has two objectives: to present the methods used to develop the themes, and to discuss an integrative model of nurse practitioner integration support centered on these themes. It concludes with a discussion of how this framework contributes to existing knowledge and some ideas for future avenues of study.
Linguistic Summarization of Video for Fall Detection Using Voxel Person and Fuzzy Logic
Anderson, Derek; Luke, Robert H.; Keller, James M.; Skubic, Marjorie; Rantz, Marilyn; Aud, Myra
2009-01-01
In this paper, we present a method for recognizing human activity from linguistic summarizations of temporal fuzzy inference curves representing the states of a three-dimensional object called voxel person. A hierarchy of fuzzy logic is used, where the output from each level is summarized and fed into the next level. We present a two level model for fall detection. The first level infers the states of the person at each image. The second level operates on linguistic summarizations of voxel person’s states and inference regarding activity is performed. The rules used for fall detection were designed under the supervision of nurses to ensure that they reflect the manner in which elders perform these activities. The proposed framework is extremely flexible. Rules can be modified, added, or removed, allowing for per-resident customization based on knowledge about their cognitive and physical ability. PMID:20046216
Dal Palú, Alessandro; Spyrakis, Francesca; Cozzini, Pietro
2012-03-01
We describe the potential of a novel method, based on Constraint Logic Programming (CLP), developed for an exhaustive sampling of protein conformational space. The CLP framework proposed here has been tested and applied to the estrogen receptor, whose activity and function are strictly related to its intrinsic, and well known, dynamics. We investigated in particular the flexibility of H12, focusing on the pathways followed by the helix when moving from one stable crystallographic conformation to the others. Millions of geometrically feasible conformations were generated and selected, and the traces connecting the different forms were determined using a shortest-path algorithm. The preliminary analyses showed a marked agreement between the crystallographic agonist-like, antagonist-like and hypothetical apo forms and the corresponding conformations identified by the CLP framework. These promising results, together with the short computational time required to perform the analyses, make this constraint-based approach a valuable tool for the study of protein folding prediction. The CLP framework enables one to consider various structural and energetic scenarios without changing the core algorithm. To show the feasibility of the method, we intentionally chose a pure geometric setting, neglecting the energetic evaluation of the poses, in order to be independent of a specific force field and to allow comparison of the different behaviours associated with various energy models. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
Knijnenburg, Theo A.; Klau, Gunnar W.; Iorio, Francesco; Garnett, Mathew J.; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F. A.
2016-01-01
Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present ‘Logic Optimization for Binary Input to Continuous Output’ (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models. PMID:27876821
Knijnenburg, Theo A; Klau, Gunnar W; Iorio, Francesco; Garnett, Mathew J; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F A
2016-11-23
Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present 'Logic Optimization for Binary Input to Continuous Output' (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models.
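LOBICO itself solves this inference exactly via optimization; the following Python sketch conveys the idea by brute force, enumerating two-gene AND/OR models over invented binary mutation data and scoring them against a continuous response. Gene names and values are illustrative, not from the cancer cell line panel.

```python
# Toy version of small-logic-model inference: enumerate AND/OR combinations of
# binary mutation features and keep the one best explaining a continuous output.
from itertools import combinations

features = {"BRAF": [1, 1, 0, 0, 1, 0],
            "TP53": [1, 0, 1, 0, 1, 0],
            "PTEN": [0, 1, 1, 0, 0, 1]}
response = [0.9, 0.8, 0.2, 0.1, 0.95, 0.15]   # continuous drug-response values

def score(pred):
    # squared error of predicting the mean response within each logic class
    groups = {0: [], 1: []}
    for cls, y in zip(pred, response):
        groups[cls].append(y)
    means = {c: (sum(v) / len(v) if v else 0.0) for c, v in groups.items()}
    return sum((y - means[c]) ** 2 for c, y in zip(pred, response))

best = None
for g1, g2 in combinations(features, 2):
    for name, op in (("AND", min), ("OR", max)):
        pred = [op(a, b) for a, b in zip(features[g1], features[g2])]
        s = score(pred)
        if best is None or s < best[0]:
            best = (s, f"{g1} {name} {g2}")
print("best 2-gene logic model:", best[1], "error:", round(best[0], 3))
```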
Interpretation of IEEE-854 floating-point standard and definition in the HOL system
NASA Technical Reports Server (NTRS)
Carreno, Victor A.
1995-01-01
The ANSI/IEEE Standard 854-1987 for floating-point arithmetic is interpreted by converting the lexical descriptions in the standard into mathematical conditional descriptions organized in tables. The standard is represented in higher-order logic within the framework of the HOL (Higher Order Logic) system. The paper is divided into two parts: the first covers the interpretation, and the second the description in HOL.
Bayne, Jay S
2008-06-01
In support of a generalization of systems theory, this paper introduces a new approach to modeling complex distributed systems. It offers an analytic framework for describing the behavior of interactive cyberphysical systems (CPSs), which are networked stationary or mobile information systems responsible for the real-time governance of physical processes whose behaviors unfold in cyberspace. The framework is predicated on a cyberspace-time reference model comprising three spatial dimensions plus time. The spatial domains include geospatial, infospatial, and sociospatial references, the latter describing relationships among sovereign enterprises (rational agents) that choose voluntarily to organize and interoperate for individual and mutual benefit through geospatial (physical) and infospatial (logical) transactions. Of particular relevance to CPSs are notions of timeliness and value, particularly as they relate to the real-time governance of physical processes and engagements with other cooperating CPSs. Our overarching interest, as with celestial mechanics, is in the formation and evolution of clusters of cyberspatial objects and the federated systems they form.
Nazzi, Francesco; Pennacchio, Francesco
2018-03-30
Any attempt to outline a logical framework in which to interpret the honey bee health decline and its contribution to elevated colony losses should recognize the importance of the multifactorial nature of the responsible syndrome and provide a functional model as a basis for defining and testing working hypotheses. We propose that covert infections by deformed wing virus (DWV) represent a sword of Damocles permanently threatening the survival of honey bee colonies and suggest that any factor affecting the honey bee’s antiviral defenses can turn this pathogen into a killer. Here we discuss the available experimental evidence in the framework of a model based on honey bee immune competence as affected by multiple stress factors that is proposed as a conceptual tool for analyzing bee mortality and its underlying mechanisms.
NASA Astrophysics Data System (ADS)
Argoneto, Pierluigi; Renna, Paolo
2016-02-01
This paper proposes a Framework for Capacity Sharing in Cloud Manufacturing (FCSCM) able to support capacity sharing among independent firms. The success of geographically distributed plants depends strongly on the use of suitable tools to integrate their resources and demand forecasts in order to meet a specific production objective. The proposed framework is based on two different tools: a cooperative game algorithm, based on the Gale-Shapley model, and a fuzzy engine. The capacity allocation policy takes into account the utility functions of the firms involved. It is shown how the proposed capacity allocation policy induces all firms to report their requirement information truthfully. A discrete event simulation environment has been developed to test the proposed FCSCM. The numerical results show a drastic reduction of unsatisfied capacity under the cooperation model implemented in this work.
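For reference, here is a compact Python sketch of the Gale-Shapley deferred-acceptance procedure that underlies the cooperative matching step; the firm names, roles (capacity requesters versus providers), and preferences are invented for illustration.

```python
# Gale-Shapley deferred acceptance: requesters propose to providers in order
# of preference; providers tentatively accept their best proposal so far.
def gale_shapley(requesters, providers):
    """requesters/providers: dicts mapping name -> preference-ordered list."""
    free = list(requesters)
    match = {}                                    # provider -> requester
    next_choice = {r: 0 for r in requesters}
    while free:
        r = free.pop(0)
        p = requesters[r][next_choice[r]]         # best provider not yet tried
        next_choice[r] += 1
        if p not in match:
            match[p] = r
        else:
            current = match[p]
            prefs = providers[p]
            if prefs.index(r) < prefs.index(current):   # p prefers newcomer
                match[p] = r
                free.append(current)              # displaced requester retries
            else:
                free.append(r)
    return match

requesters = {"plantA": ["hub1", "hub2"], "plantB": ["hub1", "hub2"]}
providers  = {"hub1": ["plantB", "plantA"], "hub2": ["plantA", "plantB"]}
print(gale_shapley(requesters, providers))   # {'hub1': 'plantB', 'hub2': 'plantA'}
```

The stability of the resulting matching (no requester-provider pair would both prefer each other over their assigned partners) is what makes deferred acceptance attractive for voluntary capacity-sharing schemes.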
NASA Astrophysics Data System (ADS)
Kostyuchenko, Yuriy V.; Sztoyka, Yulia; Kopachevsky, Ivan; Artemenko, Igor; Yuschenko, Maxim
2017-10-01
A multi-model approach for remote sensing data processing and interpretation is described. The problem of satellite data utilization in the multi-model approach for socio-ecological risk assessment is formally defined. A method for utilizing observation, measurement, and modeling data within the multi-model approach is described. The methodology and models of risk assessment within a decision support approach are defined and described. A method of water quality assessment using satellite observation data is described. The method is based on analysis of the spectral reflectance of aquifers. Spectral signatures of freshwater bodies and offshores are analyzed. Correlations between spectral reflectance, pollution, and selected water quality parameters are analyzed and quantified. Data from the MODIS, MISR, AIRS, and Landsat sensors received in 2002-2014 were utilized and verified by in-field spectrometry and laboratory measurements. A fuzzy logic based approach for decision support regarding water quality degradation risk is discussed. The decision on water quality category is made by a fuzzy algorithm using a limited set of uncertain parameters. Data from satellite observations, field measurements, and modeling are utilized within the proposed approach. It is shown that this algorithm allows estimating the water quality degradation rate and pollution risks. Problems of constructing spatial and temporal distributions of the calculated parameters, as well as the problem of data regularization, are discussed. Using the proposed approach, maps of surface water pollution risk from point and diffuse sources are calculated and discussed.
Comprehensive Fault Tolerance and Science-Optimal Attitude Planning for Spacecraft Applications
NASA Astrophysics Data System (ADS)
Nasir, Ali
Spacecraft operate in a harsh environment, are costly to launch, and experience unavoidable communication delay and bandwidth constraints. These factors motivate the need for effective onboard mission and fault management. This dissertation presents an integrated framework to optimize science goal achievement while identifying and managing encountered faults. Goal-related tasks are defined by pointing the spacecraft instrumentation toward distant targets of scientific interest. The relative value of science data collection is traded against the risk of failures to determine an optimal policy for mission execution. Our major innovation in fault detection and reconfiguration is to incorporate fault information obtained from two types of spacecraft models: one based on the dynamics of the spacecraft and the second based on the internal composition of the spacecraft. For fault reconfiguration, we consider possible changes in both the dynamics-based control law configuration and the composition-based switching configuration. We formulate our problem as a stochastic sequential decision problem, or Markov Decision Process (MDP). To avoid the computational complexity involved in a fully integrated MDP, we decompose our problem into multiple MDPs: planning MDPs for different fault scenarios; a fault detection MDP based on a logic-based model of spacecraft component and system functionality; an MDP for resolving conflicts between fault information from the logic-based model and the dynamics-based spacecraft models; and a reconfiguration MDP that generates a policy optimized over the relative importance of the mission objectives versus spacecraft safety. Approximate Dynamic Programming (ADP) methods are applied for the decomposition of the planning and fault detection MDPs. To show the performance of the MDP-based frameworks and ADP methods, a suite of spacecraft attitude planning case studies is described. These case studies are used to analyze the content and behavior of the computed policies in response to changes in design parameters. A primary case study is built from the Far Ultraviolet Spectroscopic Explorer (FUSE) mission, for which component models and their probabilities of failure are based on realistic mission data. A comparison of our approach with an alternative framework for spacecraft task planning and fault management is presented in the context of the FUSE mission.
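A tiny value-iteration example in Python indicates the flavor of this policy computation: a two-state model where pointing at a science target earns reward but risks a fault, and safe mode enables recovery. States, probabilities, and rewards are invented, not taken from the FUSE case study.

```python
# Value iteration on an invented two-state spacecraft MDP trading science
# value against fault risk.
states = ["nominal", "fault"]
actions = ["point_at_target", "safe_mode"]

# P[s][a] = list of (next_state, probability); R[s][a] = immediate reward
P = {"nominal": {"point_at_target": [("nominal", 0.95), ("fault", 0.05)],
                 "safe_mode":       [("nominal", 1.0)]},
     "fault":   {"point_at_target": [("fault", 1.0)],
                 "safe_mode":       [("nominal", 0.6), ("fault", 0.4)]}}
R = {"nominal": {"point_at_target": 10.0, "safe_mode": 0.0},
     "fault":   {"point_at_target": -50.0, "safe_mode": -1.0}}

gamma, V = 0.9, {s: 0.0 for s in states}
for _ in range(200):   # Bellman backups until (approximate) convergence
    V = {s: max(R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a])
                for a in actions) for s in states}
policy = {s: max(actions, key=lambda a: R[s][a] + gamma *
                 sum(p * V[t] for t, p in P[s][a])) for s in states}
print(policy)   # point at the target when nominal, recover when faulty
```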
C code generation from Petri-net-based logic controller specification
NASA Astrophysics Data System (ADS)
Grobelny, Michał; Grobelna, Iwona; Karatkevich, Andrei
2017-08-01
The article focuses on the programming of logic controllers. It is important that the program code of a logic controller execute flawlessly according to the primary specification. In the presented approach, we generate C code for an AVR microcontroller from a rule-based logical model of a control process derived from a control interpreted Petri net. The same logical model is also used for formal verification of the specification by means of the model checking technique. The proposed rule-based logical model and formal transformation rules ensure that the obtained implementation is consistent with the already verified specification. The approach is validated by practical experiments.
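The paper's tool chain emits C for an AVR target; as a language-neutral illustration, here is the same kind of rule-based logical model rendered in Python, with an invented two-transition net: each rule fires a transition when its input places are marked and its guard input holds.

```python
# Illustrative rule-based logical model of a control interpreted Petri net.
# Places, transitions, and input signals are invented for the example.
marking = {"idle": 1, "filling": 0, "full": 0}
inputs = {"start_button": True, "level_sensor": False}

rules = [
    # (name, input places, guard input, output places)
    ("t1", ["idle"],    "start_button", ["filling"]),
    ("t2", ["filling"], "level_sensor", ["full"]),
]

def step(marking, inputs):
    """One controller cycle: fire each transition that is enabled."""
    for name, pre, guard, post in rules:
        enabled = all(marking[p] > 0 for p in pre) and inputs.get(guard, False)
        if enabled:
            for p in pre:
                marking[p] -= 1   # consume tokens from input places
            for p in post:
                marking[p] += 1   # produce tokens in output places
    return marking

print(step(marking, inputs))   # token moves from 'idle' to 'filling'
```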
Size reduction techniques for vital compliant VHDL simulation models
Rich, Marvin J.; Misra, Ashutosh
2006-08-01
A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. The system then collects all the delay values of the selected instance and builds super generics for the rise time and fall time of that instance. The system repeats this process for every delay value in the standard delay file that corresponds to every instance of every logic gate in the logic model. The system then outputs a reduced-size standard delay file containing the super generics for every instance of every logic gate in the logic model.
A computational framework for prime implicants identification in noncoherent dynamic systems.
Di Maio, Francesco; Baronchelli, Samuele; Zio, Enrico
2015-01-01
Dynamic reliability methods aim at complementing the capability of traditional static approaches (e.g., event trees [ETs] and fault trees [FTs]) by accounting for the system dynamic behavior and its interactions with the system state transition process. For this, the system dynamics is here described by a time-dependent model that includes the dependencies with the stochastic transition events. In this article, we present a novel computational framework for dynamic reliability analysis whose objectives are i) accounting for discrete stochastic transition events and ii) identifying the prime implicants (PIs) of the dynamic system. The framework entails adopting a multiple-valued logic (MVL) to consider stochastic transitions at discretized times. Then, PIs are originally identified by a differential evolution (DE) algorithm that looks for the optimal MVL solution of a covering problem formulated for MVL accident scenarios. For testing the feasibility of the framework, a dynamic noncoherent system composed of five components that can fail at discretized times has been analyzed, showing the applicability of the framework to practical cases. © 2014 Society for Risk Analysis.
Physical constraints determine the logic of bacterial promoter architectures
Ezer, Daphne; Zabet, Nicolae Radu; Adryan, Boris
2014-01-01
Site-specific transcription factors (TFs) bind to their target sites on the DNA, where they regulate the rate at which genes are transcribed. Bacterial TFs undergo facilitated diffusion (a combination of 3D diffusion around, and 1D random walk on, the DNA) when searching for their target sites. Using computer simulations of this search process, we show that the organization of the binding sites, in conjunction with TF copy number and binding site affinity, plays an important role in determining not only the steady state of promoter occupancy, but also the order in which TFs bind. These effects can be captured by facilitated diffusion-based models, but not by standard thermodynamics. We show that the spacing of binding sites encodes complex logic, which can be derived from combinations of three basic building blocks: switches, barriers and clusters, whose responses, alone and in higher orders of organization, we characterize in detail. Effective promoter organizations are commonly found in the E. coli genome and are highly conserved between strains. This will allow studies of gene regulation at an unprecedented level of detail, where our framework can generate testable hypotheses of promoter logic. PMID:24476912
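A stripped-down Python simulation of facilitated diffusion conveys the basic search process: 1D sliding interleaved with 3D relocations. Parameters are arbitrary, and the paper's simulations are far richer (multiple TFs, affinities, and promoter layouts).

```python
# Minimal facilitated-diffusion sketch: a TF alternates 1D sliding on DNA with
# 3D excursions that rebind at a random position; we record the time to reach
# the target site. All parameters are invented.
import random

def search_time(dna_length=1000, site=500, slide_prob=0.99, seed=1):
    random.seed(seed)
    pos, t = random.randrange(dna_length), 0
    while pos != site:
        if random.random() < slide_prob:
            pos += random.choice((-1, 1))         # 1D random walk on the DNA
            pos = max(0, min(dna_length - 1, pos))
        else:
            pos = random.randrange(dna_length)    # 3D excursion, rebind anywhere
        t += 1
    return t

times = [search_time(seed=s) for s in range(20)]
print("mean search time:", sum(times) / len(times))
```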
In search of tools to aid logical thinking and communicating about medical decision making.
Hunink, M G
2001-01-01
To have real-time impact on medical decision making, decision analysts need a wide variety of tools to aid logical thinking and communication. Decision models provide a formal framework to integrate evidence and values, but they are commonly perceived as complex and difficult to understand by those unfamiliar with the methods, especially in the context of clinical decision making. The theory of constraints, introduced by Eliyahu Goldratt in the business world, provides a set of tools for logical thinking and communication that could potentially be useful in medical decision making. The author used the concept of a conflict resolution diagram to analyze the decision to perform carotid endarterectomy prior to coronary artery bypass grafting in a patient with both symptomatic coronary and asymptomatic carotid artery disease. The method enabled clinicians to visualize and analyze the issues, identify and discuss the underlying assumptions, search for the best available evidence, and use the evidence to make a well-founded decision. The method also facilitated communication among those involved in the care of the patient. Techniques from fields other than decision analysis can potentially expand the repertoire of tools available to support medical decision making and to facilitate communication in decision consults.
Lai, Agnes Y; Stewart, Sunita M; Mui, Moses W; Wan, Alice; Yew, Carol; Lam, Tai Hing; Chan, Sophia S
2017-01-01
Evaluation studies on train-the-trainer workshops (TTTs) to develop family well-being interventions are limited in the literature. The Logic Model offers a framework to place some important concepts and tools of intervention science in the hands of frontline service providers. This paper reports on the evaluation of a TTT for a large community-based program to enhance family well-being in Hong Kong. The 2-day TTT introduced positive psychology themes (relevant to the programs that the trainees would deliver) and the Logic Model (which provides a framework to guide intervention development and evaluation) for social service workers to guide their community-based family interventions. The effectiveness of the TTT was examined by self-administered questionnaires that assessed trainees' changes in learning (perceived knowledge, self-efficacy, attitude, and intention), trainees' reactions to training content, knowledge sharing, and benefits to their service organizations before and after the training and then 6 months and 1 year later. Missing data were replaced by baseline values in an intention-to-treat analysis. Focus group interviews were conducted approximately 6 months after training. Fifty-six trainees (79% women) joined the TTT. Forty-four and 31 trainees completed the 6-month and 1-year questionnaires, respectively. The trainees indicated that the workshop was informative and well organized. The TTT enhanced trainees' perceived knowledge, self-efficacy, and attitudes toward the application of the Logic Model and positive psychology constructs in program design. These changes were present with small to large effect sizes and persisted to the 1-year follow-up. The skills learned were used to develop 31 family interventions that were delivered to about 1,000 families. Qualitative feedback supported the quantitative results. This TTT offers a practical example of academic-community partnerships that promote capacity among community social service workers. Goals included sharing basic tools of intervention development and evaluation, and the TTT therefore offered the potential of learning skills that extend beyond the lifetime of a single program. The research protocol was registered at the National Institutes of Health (identifier number: NCT01796275).
Intelligent manipulation technique for multi-branch robotic systems
NASA Technical Reports Server (NTRS)
Chen, Alexander Y. K.; Chen, Eugene Y. S.
1990-01-01
New analytical developments in kinematics planning are reported. The INtelligent KInematics Planner (INKIP) consists of the kinematics spline theory and an adaptive logic annealing process. A novel framework for a robot learning mechanism is also introduced. The FUzzy LOgic Self Organized Neural Networks (FULOSONN) architecture integrates fuzzy logic for commands, control, searching, and reasoning; an embedded expert system for nominal robotics knowledge implementation; and self-organized neural networks for the dynamic knowledge evolution process. Progress on the mechanical construction of the SRA Advanced Robotic System (SRAARS) and the real-time robot vision system is also reported. A decision was made to incorporate Local Area Network (LAN) technology in the overall communication system.
Neural networks and logical reasoning systems: a translation table.
Martins, J; Mendes, R V
2001-04-01
A correspondence is established between the basic elements of logic reasoning systems (knowledge bases, rules, inference and queries) and the structure and dynamical evolution laws of neural networks. The correspondence is pictured as a translation dictionary that might allow one to go back and forth between symbolic and network formulations, a desirable step in learning-oriented systems and multicomputer networks. In the framework of Horn clause logics, it is found that atomic propositions with n arguments correspond to nodes with nth-order synapses, rules to synaptic intensity constraints, forward chaining to synaptic dynamics, and queries either to simple node activation or to a query tensor dynamics.
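The dictionary's central entries can be exercised in a few lines of Python: propositions as nodes, Horn rules as connection constraints, forward chaining as activation dynamics run to a fixed point, and a query as a node-activation test. The knowledge base is invented.

```python
# Forward chaining over Horn clauses as node-activation dynamics.
rules = [(("bird", "alive"), "can_fly"),       # bird AND alive -> can_fly
         (("penguin",), "bird"),
         (("can_fly",), "can_travel")]
facts = {"penguin", "alive"}                   # initially activated nodes

def forward_chain(facts):
    active = set(facts)
    changed = True
    while changed:                             # iterate network updates to a fixed point
        changed = False
        for body, head in rules:
            if head not in active and all(b in active for b in body):
                active.add(head)               # node fires: all synaptic inputs active
                changed = True
    return active

print(forward_chain(facts))
print("can_travel" in forward_chain(facts))    # query = node-activation test
```

(Note the example knowledge base is deliberately naive: a real system would block the bird-implies-can_fly default for penguins, which is exactly the kind of nonmonotonic subtlety the translation-dictionary view must eventually address.)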
Reasoning on Weighted Delegatable Authorizations
NASA Astrophysics Data System (ADS)
Ruan, Chun; Varadharajan, Vijay
This paper studies logic-based methods for representing and evaluating complex access control policies needed by modern database applications. In our framework, authorization and delegation rules are specified in a Weighted Delegatable Authorization Program (WDAP), which is an extended logic program. We show how extended logic programs can be used to specify complex security policies that support weighted administrative privilege delegation, weighted positive and negative authorizations, and weighted authorization propagation. We also propose a conflict resolution method that enables flexible delegation control by considering the priorities of authorization grantors and the weights of authorizations. A number of rules are provided to achieve delegation depth control, conflict resolution, and authorization and delegation propagation.
Huser, Vojtech; Rasmussen, Luke V; Oberg, Ryan; Starren, Justin B
2011-04-10
Workflow engine technology represents a new class of software with the ability to graphically model step-based knowledge. We present an application of this novel technology to the domain of clinical decision support. Successful implementation of decision support within an electronic health record (EHR) remains an unsolved research challenge. Previous research efforts were mostly based on healthcare-specific representation standards and execution engines and did not reach wide adoption. We focus on two challenges in decision support systems: the ability to test decision logic on retrospective data prior to prospective deployment, and the challenge of user-friendly representation of clinical logic. We present our implementation of a workflow engine technology that addresses these two challenges in delivering clinical decision support. Our system is based on a cross-industry standard of XML (extensible markup language) process definition language (XPDL). The core components of the system are a workflow editor for modeling clinical scenarios and a workflow engine for execution of those scenarios. We demonstrate, with an open-source and publicly available workflow suite, that clinical decision support logic can be executed on retrospective data. The same flowchart-based representation can also function in a prospective mode, where the system can be integrated with an EHR system and respond to real-time clinical events. We limit the scope of our implementation to decision support content generation (which can be EHR system vendor independent). We do not focus on supporting complex decision support content delivery mechanisms due to the lack of standardization of EHR systems in this area. We present the results of our evaluation of the flowchart-based graphical notation as well as an architectural evaluation of our implementation using an established evaluation framework for clinical decision support architecture. We describe an implementation of a free workflow technology software suite (available at http://code.google.com/p/healthflow) and its application in the domain of clinical decision support. Our implementation seamlessly supports clinical logic testing on retrospective data and offers a user-friendly knowledge representation paradigm. With the presented software implementation, we demonstrate that workflow engine technology can provide a decision support platform which evaluates well against an established clinical decision support architecture evaluation framework. Due to cross-industry usage of workflow engine technology, we can expect significant future functionality enhancements that will further improve the technology's capacity to serve as a clinical decision support platform.
Jung, Hyesil; Park, Hyeoun-Ae; Song, Tae-Min
2017-07-24
Social networking services (SNSs) contain abundant information about the feelings, thoughts, interests, and patterns of behavior of adolescents that can be obtained by analyzing SNS postings. An ontology that expresses the shared concepts and their relationships in a specific field could be used as a semantic framework for social media data analytics. The aim of this study was to refine an adolescent depression ontology and terminology as a framework for analyzing social media data and to evaluate description logics between classes and the applicability of this ontology to sentiment analysis. The domain and scope of the ontology were defined using competency questions. The concepts constituting the ontology and terminology were collected from clinical practice guidelines, the literature, and social media postings on adolescent depression. Class concepts, their hierarchy, and the relationships among class concepts were defined. An internal structure of the ontology was designed using the entity-attribute-value (EAV) triplet data model, and superclasses of the ontology were aligned with the upper ontology. Description logics between classes were evaluated by mapping concepts extracted from the answers to frequently asked questions (FAQs) onto the ontology concepts derived from description logic queries. The applicability of the ontology was validated by examining the representability of 1358 sentiment phrases using the ontology EAV model and conducting sentiment analyses of social media data using ontology class concepts. We developed an adolescent depression ontology that comprised 443 classes and 60 relationships among the classes; the terminology comprised 1682 synonyms of the 443 classes. In the description logics test, no error in relationships between classes was found, and about 89% (55/62) of the concepts cited in the answers to FAQs mapped onto the ontology class. Regarding applicability, the EAV triplet models of the ontology class represented about 91.4% of the sentiment phrases included in the sentiment dictionary. In the sentiment analyses, "academic stresses" and "suicide" contributed negatively to the sentiment of adolescent depression. The ontology and terminology developed in this study provide a semantic foundation for analyzing social media data on adolescent depression. To be useful in social media data analysis, the ontology, especially the terminology, needs to be updated constantly to reflect rapidly changing terms used by adolescents in social media postings. In addition, more attributes and value sets reflecting depression-related sentiments should be added to the ontology. ©Hyesil Jung, Hyeoun-Ae Park, Tae-Min Song. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 24.07.2017.
A Component-Based FPGA Design Framework for Neuronal Ion Channel Dynamics Simulations
Mak, Terrence S. T.; Rachmuth, Guy; Lam, Kai-Pui; Poon, Chi-Sang
2008-01-01
Neuron-machine interfaces such as dynamic clamp and brain-implantable neuroprosthetic devices require real-time simulations of neuronal ion channel dynamics. The Field Programmable Gate Array (FPGA) has emerged as a high-speed digital platform ideal for such application-specific computations. We propose an efficient and flexible component-based FPGA design framework for neuronal ion channel dynamics simulations, which overcomes certain limitations of the recently proposed memory-based approach. A parallel processing strategy is used to minimize computational delay, and a hardware-efficient factoring approach for calculating exponential and division functions in neuronal ion channel models is used to conserve resource consumption. The performance of the various FPGA design approaches is compared theoretically and experimentally in corresponding implementations of the AMPA and NMDA synaptic ion channel models. Our results suggest that the component-based design framework provides a more memory-economical solution, as well as more efficient logic utilization for large word lengths, whereas the memory-based approach may be suitable for time-critical applications where a higher throughput rate is desired. PMID:17190033
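The kind of per-time-step arithmetic such an FPGA pipeline performs can be sketched in software. The following Python reference model updates the open fraction of a two-state AMPA-like channel with an exponential-Euler step; the rate constants and time step are illustrative assumptions, and the exponential is exactly the operation a factored hardware approximation would replace.

```python
import math

# ds/dt = alpha*T*(1 - s) - beta*s : two-state AMPA-like kinetics.
alpha, beta, dt = 1.1, 0.19, 1e-3   # /(mM*ms), /ms, ms (illustrative values)

def ampa_step(s: float, transmitter: float) -> float:
    """Exponential-Euler update of the open fraction s over one dt."""
    a = alpha * transmitter
    s_inf = a / (a + beta)           # steady state for this input
    tau = 1.0 / (a + beta)           # time constant
    return s_inf + (s - s_inf) * math.exp(-dt / tau)

s = 0.0
for _ in range(1000):                # 1 ms transmitter pulse
    s = ampa_step(s, transmitter=1.0)
print(f"open fraction after pulse: {s:.3f}")
```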
The development of a digital logic concept inventory
NASA Astrophysics Data System (ADS)
Herman, Geoffrey Lindsay
Instructors in electrical and computer engineering and in computer science have developed innovative methods to teach digital logic circuits. These methods attempt to increase student learning, satisfaction, and retention. Although there are readily accessible and accepted means for measuring satisfaction and retention, there are no widely accepted means for assessing student learning. Rigorous assessment of learning is elusive because differences in topic coverage, curriculum and course goals, and exam content prevent direct comparison of two teaching methods when using tools such as final exam scores or course grades. Because of these difficulties, computing educators have issued a general call for the adoption of assessment tools to critically evaluate and compare the various teaching methods. Science, Technology, Engineering, and Mathematics (STEM) education researchers commonly measure students' conceptual learning to compare how much different pedagogies improve learning. Conceptual knowledge is often preferred because all engineering courses should teach a fundamental set of concepts even if they emphasize design or analysis to different degrees. Increasing conceptual learning is also important, because students who can organize facts and ideas within a consistent conceptual framework are able to learn new information quickly and can apply what they know in new situations. If instructors can accurately assess their students' conceptual knowledge, they can target instructional interventions to remedy common problems. To properly assess conceptual learning, several researchers have developed concept inventories (CIs) for core subjects in engineering sciences. CIs are multiple-choice assessment tools that evaluate how well a student's conceptual framework matches the accepted conceptual framework of a discipline or common faulty conceptual frameworks. We present how we created and evaluated the digital logic concept inventory (DLCI). We used a Delphi process to identify the important and difficult concepts to include on the DLCI. To discover and describe common student misconceptions, we interviewed students who had completed a digital logic course. Students vocalized their thoughts as they solved digital logic problems. We analyzed the interview data using a qualitative grounded theory approach. We have administered the DLCI at several institutions and have checked the validity, reliability, and bias of the DLCI with classical testing theory procedures. These procedures consisted of follow-up interviews with students, analysis of administration results with statistical procedures, and expert feedback. We discuss these results and present the DLCI's potential for providing a meaningful tool for comparing student learning at different institutions.
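As an illustration of the classical-test-theory procedures mentioned, the following Python sketch computes two standard item statistics, difficulty and corrected point-biserial discrimination, on a fabricated response matrix; it is not the DLCI's actual data or analysis code.

```python
import numpy as np

responses = np.array([  # rows = students, cols = items (1 = correct)
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])

difficulty = responses.mean(axis=0)          # proportion correct per item
total = responses.sum(axis=1)
discrimination = np.array([                  # item vs rest-of-test score
    np.corrcoef(responses[:, i], total - responses[:, i])[0, 1]
    for i in range(responses.shape[1])
])
print(difficulty, discrimination.round(2))
```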
Scarinci, Isabel C; Moore, Artisha; Benjamin, Regina; Vickers, Selwyn; Shikany, James; Fouad, Mona
2017-02-01
We describe the formulation and implementation of a participatory evaluation plan for three Transdisciplinary Collaborative Centers for Health Disparities Research funded by the National Institute on Minority Health and Health Disparities. Although different in scope of work, all three centers share a common goal of establishing sustainable centers in health disparities science in three priority areas - social determinants of health, men's health research, and health policy research. The logic model guides the process, impact, and outcome evaluation. Emphasis is placed on process evaluation in order to establish a "blueprint" that can guide other efforts as well as assure that activities are being implemented as planned. We have learned three major lessons in this process: (1) Significant engagement, participation, and commitment of all involved is critical for the evaluation process; (2) Having a "roadmap" (logic model) and "directions" (evaluation worksheets) are instrumental in getting members from different backgrounds to follow the same path; and (3) Participation of the evaluator in the leadership and core meetings facilitates continuous feedback. Copyright © 2016 Elsevier Ltd. All rights reserved.
Reactive system verification case study: Fault-tolerant transputer communication
NASA Technical Reports Server (NTRS)
Crane, D. Francis; Hamory, Philip J.
1993-01-01
A reactive program is one which engages in an ongoing interaction with its environment. A system which is controlled by an embedded reactive program is called a reactive system. Examples of reactive systems are aircraft flight management systems, bank automatic teller machine (ATM) networks, airline reservation systems, and computer operating systems. Reactive systems are often naturally modeled (for logical design purposes) as a composition of autonomous processes which progress concurrently and which communicate to share information and/or to coordinate activities. Formal (i.e., mathematical) frameworks for system verification are tools used to increase the users' confidence that a system design satisfies its specification. A framework for reactive system verification includes formal languages for system modeling and behavior specification, together with decision procedures and/or proof systems for verifying that the system model satisfies the system specifications. Using the Ostroff framework for reactive system verification, an approach to achieving fault-tolerant communication between transputers was shown to be effective. The key components of the design, the decoupler processes, may be viewed as discrete-event controllers introduced to constrain system behavior such that system specifications are satisfied. The Ostroff framework was also effective. The expressiveness of the modeling language permitted construction of a faithful model of the transputer network. The relevant specifications were readily expressed in the specification language. The set of decision procedures provided was adequate to verify the specifications of interest. The need for improved support for system behavior visualization is emphasized.
A VGI data integration framework based on linked data model
NASA Astrophysics Data System (ADS)
Wan, Lin; Ren, Rongrong
2015-12-01
This paper addresses geographic data integration and sharing for multiple online VGI data sets. We propose a semantic-enabled framework for an online VGI-source cooperative application environment to solve a target class of geospatial problems. Based on linked data technologies, one of the core components of the Semantic Web, we construct relationship links among geographic features distributed across diverse VGI platforms using linked data modeling methods, deploy these semantic-enabled entities on the web, and eventually form an interconnected geographic data network that supports cooperative geospatial applications across multiple VGI data sources. The mapping and transformation from VGI sources to the RDF linked data model is presented to guarantee a uniform data representation model among different online social geographic data sources. We propose a mixed strategy that combines spatial distance similarity and feature name attribute similarity as the measure for comparing and matching geographic features in different VGI data sets. Our work focuses on how to apply Markov logic networks to interlink representations of the same entity across different VGI-based linked data sets. In our method, the automatic generation of a co-reference object identification model for geographic linked data is discussed in detail. The result is a large geographic linked data network spanning loosely coupled VGI web sites. Experiments built on our framework and an evaluation of our method show that the framework is reasonable and practicable.
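A stripped-down Python sketch of the mixed similarity strategy follows; the weights, distance scale, threshold, and feature records are illustrative assumptions rather than the paper's calibrated values.

```python
import math
from difflib import SequenceMatcher

def spatial_sim(p, q, scale_m=100.0):
    """Map planar distance (meters) into (0, 1]; closer features score higher."""
    d = math.hypot(p[0] - q[0], p[1] - q[1])
    return 1.0 / (1.0 + d / scale_m)

def name_sim(a: str, b: str) -> float:
    """String similarity of feature name attributes, case-insensitive."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(feat_a, feat_b, w_space=0.6, w_name=0.4):
    return (w_space * spatial_sim(feat_a["xy"], feat_b["xy"])
            + w_name * name_sim(feat_a["name"], feat_b["name"]))

osm = {"name": "Central Park Cafe", "xy": (0.0, 0.0)}
wikimapia = {"name": "central park café", "xy": (12.0, 5.0)}
if match_score(osm, wikimapia) > 0.8:     # threshold is an assumption
    print("link as co-reference (owl:sameAs) candidates")
```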
A Verification System for Distributed Objects with Asynchronous Method Calls
NASA Astrophysics Data System (ADS)
Ahrendt, Wolfgang; Dylla, Maximilian
We present a verification system for Creol, an object-oriented modeling language for concurrent distributed applications. The system is an instance of KeY, a framework for object-oriented software verification, which has so far been applied foremost to sequential Java. Building on KeY's characteristic concepts, like dynamic logic, sequent calculus, explicit substitutions, and the taclet rule language, the system presented in this paper addresses functional correctness of Creol models featuring local cooperative thread parallelism and global communication via asynchronous method calls. The calculus heavily operates on communication histories which describe the interfaces of Creol units. Two example scenarios demonstrate the usage of the system.
A formal language for the specification and verification of synchronous and asynchronous circuits
NASA Technical Reports Server (NTRS)
Russinoff, David M.
1993-01-01
A formal hardware description language for the intended application of verifiable asynchronous communication is described. The language is developed within the logical framework of the Nqthm system of Boyer and Moore and is based on the event-driven behavioral model of VHDL, including the basic VHDL signal propagation mechanisms, the notion of simulation deltas, and the VHDL simulation cycle. A core subset of the language corresponds closely with a subset of VHDL and is adequate for the realistic gate-level modeling of both combinational and sequential circuits. Various extensions to this subset provide means for convenient expression of behavioral circuit specifications.
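To make the event-driven semantics concrete, here is a small Python sketch of the VHDL-style delta-cycle idea the language models: assignments scheduled in one delta take effect together, and deltas repeat until the signals stabilize. The two-gate netlist is invented for illustration.

```python
def delta_cycle(signals, processes):
    """Run simulation deltas until the signal values reach a fixed point."""
    for _ in range(100):                      # guard against oscillation
        scheduled = {}
        for proc in processes:                # evaluate with OLD values
            scheduled.update(proc(signals))
        changed = {k: v for k, v in scheduled.items() if signals.get(k) != v}
        if not changed:
            return signals
        signals.update(changed)               # one delta: update together

    raise RuntimeError("no fixed point (combinational loop?)")

# Two processes, an inverter and an AND gate, as signal -> new-value maps.
processes = [
    lambda s: {"n1": not s["a"]},
    lambda s: {"y": s["n1"] and s["b"]},
]
print(delta_cycle({"a": False, "b": True, "n1": None, "y": None}, processes))
# {'a': False, 'b': True, 'n1': True, 'y': True}
```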
Cumulative biological impacts framework for solar energy projects in the California Desert
Davis, Frank W.; Kreitler, Jason R.; Soong, Oliver; Stoms, David M.; Dashiell, Stephanie; Hannah, Lee; Wilkinson, Whitney; Dingman, John
2013-01-01
This project developed analytical approaches, tools and geospatial data to support conservation planning for renewable energy development in the California deserts. Research focused on geographical analysis to avoid, minimize and mitigate the cumulative biological effects of utility-scale solar energy development. A hierarchical logic model was created to map the compatibility of new solar energy projects with current biological conservation values. The research indicated that the extent of compatible areas is much greater than the estimated land area required to achieve 2040 greenhouse gas reduction goals. Species distribution models were produced for 65 animal and plant species that were of potential conservation significance to the Desert Renewable Energy Conservation Plan process. These models mapped historical and projected future habitat suitability using 270 meter resolution climate grids. The results were integrated into analytical frameworks to locate potential sites for offsetting project impacts and evaluating the cumulative effects of multiple solar energy projects. Examples applying these frameworks in the Western Mojave Desert ecoregion show the potential of these publicly-available tools to assist regional planning efforts. Results also highlight the necessity to explicitly consider projected land use change and climate change when prioritizing areas for conservation and mitigation offsets. Project data, software and model results are all available online.
Rothgangel, Andreas; Braun, Susy; de Witte, Luc; Beurskens, Anna; Smeets, Rob
2016-04-01
To describe the development and content of a clinical framework for mirror therapy (MT) in patients with phantom limb pain (PLP) following amputation. Based on an a priori formulated theoretical model, 3 sources of data collection were used to develop the clinical framework. First, a review of the literature took place on important clinical aspects and the evidence on the effectiveness of MT in patients with phantom limb pain. In addition, questionnaires and semi-structured interviews were used to analyze clinical experiences and preferences of physical and occupational therapists and patients suffering from PLP regarding the application of MT. All data were finally clustered into main and subcategories and were used to complement and refine the theoretical model. For every main category of the a priori formulated theoretical model, several subcategories emerged from the literature search, patient, and therapist interviews. Based on these categories, we developed a clinical flowchart that incorporates the main and subcategories in a logical way according to the phases in methodical intervention defined by the Royal Dutch Society for Physical Therapy. In addition, we developed a comprehensive booklet that illustrates the individual steps of the clinical flowchart. In this study, a structured clinical framework for the application of MT in patients with PLP was developed. This framework is currently being tested for its effectiveness in a multicenter randomized controlled trial. © 2015 World Institute of Pain.
Logic models as a tool for sexual violence prevention program development.
Hawkins, Stephanie R; Clinton-Sherrod, A Monique; Irvin, Neil; Hart, Laurie; Russell, Sarah Jane
2009-01-01
Sexual violence is a growing public health problem, and there is an urgent need to develop sexual violence prevention programs. Logic models have emerged as a vital tool in program development. The Centers for Disease Control and Prevention funded an empowerment evaluation designed to work with programs focused on the prevention of first-time male perpetration of sexual violence, and it included, as one of its goals, the development of program logic models. Two case studies are presented that describe how significant positive changes can be made to programs as a result of their developing logic models that accurately describe desired outcomes. The first case study describes how the logic model development process made an organization aware of the importance of a program's environmental context for program success; the second case study demonstrates how developing a program logic model can elucidate gaps in organizational programming and suggest ways to close those gaps.
Using logic models in a community-based agricultural injury prevention project.
Helitzer, Deborah; Willging, Cathleen; Hathorn, Gary; Benally, Jeannie
2009-01-01
The National Institute for Occupational Safety and Health has long promoted the logic model as a useful tool in an evaluator's portfolio. Because a logic model supports a systematic approach to designing interventions, it is equally useful for program planners. Undertaken with community stakeholders, a logic model process articulates the underlying foundations of a particular programmatic effort and enhances program design and evaluation. Most often presented as sequenced diagrams or flow charts, logic models demonstrate relationships among the following components: statement of a problem, various causal and mitigating factors related to that problem, available resources to address the problem, theoretical foundations of the selected intervention, intervention goals and planned activities, and anticipated short- and long-term outcomes. This article describes a case example of how a logic model process was used to help community stakeholders on the Navajo Nation conceive, design, implement, and evaluate agricultural injury prevention projects.
Guidance for modeling causes and effects in environmental problem solving
Armour, Carl L.; Williamson, Samuel C.
1988-01-01
Environmental problems are difficult to solve because their causes and effects are not easily understood. When attempts are made to analyze causes and effects, the principal challenge is organization of information into a framework that is logical, technically defensible, and easy to understand and communicate. When decisionmakers attempt to solve complex problems before an adequate cause and effect analysis is performed, there are serious risks. These risks include: greater reliance on subjective reasoning, lessened chance for scoping an effective problem solving approach, impaired recognition of the need for supplemental information to attain understanding, increased chance for making unsound decisions, and lessened chance for gaining approval and financial support for a program. Cause and effect relationships can be modeled. This type of modeling has been applied to various environmental problems, including cumulative impact assessment (Dames and Moore 1981; Meehan and Weber 1985; Williamson et al. 1987; Raley et al. 1988) and evaluation of effects of quarrying (Sheate 1986). This guidance for field users was written because of the current interest in documenting cause-effect logic as a part of ecological problem solving. Principal literature sources relating to the modeling approach are: Riggs and Inouye (1975a, b), Erickson (1981), and United States Office of Personnel Management (1986).
ERIC Educational Resources Information Center
Rudd, Tim
2017-01-01
This paper offers conceptual and theoretical insights relating to the Teaching Excellence Framework (TEF), highlighting a range of potential systemic and institutional outcomes and issues. The paper is organised around three key areas of discussion that are often under-explored in debates. Firstly, after considering the TEF in the wider context of…
Joining the Club: The Ideology of Quality and Business School Badging
ERIC Educational Resources Information Center
Bell, Emma; Taylor, Scott
2005-01-01
The ideology of quality and the frameworks used to measure it can profoundly affect academic identity. This article explores the role of quality frameworks in UK business schools, focusing on the way that individuals confront the logic of accreditation when they are subject to its discipline. By defining business schools as an institutional field,…
Corral framework: Trustworthy and fully functional data intensive parallel astronomical pipelines
NASA Astrophysics Data System (ADS)
Cabral, J. B.; Sánchez, B.; Beroiz, M.; Domínguez, M.; Lares, M.; Gurovich, S.; Granitto, P.
2017-07-01
Data processing pipelines represent an important slice of the astronomical software library, comprising chains of processes that transform raw data into valuable information via data reduction and analysis. In this work we present Corral, a Python framework for astronomical pipeline generation. Corral features a Model-View-Controller design pattern on top of an SQL relational database capable of handling custom data models, processing stages, and communication alerts, and also provides automatic quality and structural metrics based on unit testing. The Model-View-Controller pattern provides concept separation between the user logic and the data models, delivering at the same time multi-processing and distributed computing capabilities. Corral represents an improvement over commonly found data processing pipelines in astronomy, since the design pattern frees the programmer from dealing with processing flow and parallelization issues, allowing them to focus on the specific algorithms needed for the successive data transformations, and at the same time provides a broad measure of quality over the created pipeline. Corral and working examples of pipelines that use it are available to the community at https://github.com/toros-astro.
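The flavor of the stage separation can be suggested with a short Python sketch; the class and method names here are illustrative stand-ins, not Corral's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:            # the "model": one row in the SQL database
    id: int
    raw_counts: float
    calibrated: Optional[float] = None

class CalibrationStep:        # a "controller": one processing stage
    """Selects uncalibrated observations and writes calibrated values back."""
    def filter(self, obs: Observation) -> bool:
        return obs.calibrated is None

    def process(self, obs: Observation) -> Observation:
        obs.calibrated = obs.raw_counts * 1.07    # placeholder zero-point
        return obs

step = CalibrationStep()
batch = [Observation(1, 100.0), Observation(2, 250.0)]
done = [step.process(o) for o in batch if step.filter(o)]
print([o.calibrated for o in done])
```

Because each stage declares only what it selects and what it writes, stages remain independently testable, which fits the unit-testing-based quality metrics described above.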
NASA Technical Reports Server (NTRS)
Lee, Taesik; Jeziorek, Peter
2004-01-01
Large complex projects cost large sums of money throughout their life cycle for a variety of reasons and causes. For such large programs, the credible estimation of the project cost, a quick assessment of the cost of making changes, and the management of the project budget with effective cost reduction determine the viability of the project. Cost engineering that deals with these issues requires a rigorous method and systematic processes. This paper introduces a logical framework to achieve effective cost engineering. The framework is built upon the Axiomatic Design process. The structure in the Axiomatic Design process provides a good foundation to closely tie engineering design and cost information together. The cost framework presented in this paper is a systematic link between the functional domain (FRs), physical domain (DPs), cost domain (CUs), and a task/process-based model. The FR-DP map relates a system's functional requirements to design solutions across all levels and branches of the decomposition hierarchy. DPs are mapped into CUs, which provides a means to estimate the cost of design solutions - DPs - from the cost of the physical entities in the system - CUs. The task/process model describes the iterative process of developing each of the CUs, and is used to estimate the cost of CUs. By linking the four domains, this framework provides superior traceability from requirements to cost information.
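A toy Python sketch of the domain linkage may help; the FR, DP, and CU entries and costs are entirely invented for illustration.

```python
# Traceability chain: functional requirements (FRs) map to design
# parameters (DPs), and DPs map to cost units (CUs) with costs.
fr_to_dp = {"FR1 hold cryogenic fuel": ["DP1 insulated tank"],
            "FR2 feed fuel to engine": ["DP2 turbopump", "DP3 feed lines"]}
dp_to_cu = {"DP1 insulated tank": {"CU tank shell": 1.2e6, "CU insulation": 0.4e6},
            "DP2 turbopump":      {"CU pump assembly": 2.5e6},
            "DP3 feed lines":     {"CU piping": 0.3e6}}

def cost_of_fr(fr: str) -> float:
    """Roll CU costs up through the DP mapping to price one requirement."""
    return sum(cost
               for dp in fr_to_dp[fr]
               for cost in dp_to_cu[dp].values())

# Change impact: re-pricing one FR walks only its branch of the hierarchy.
print(f"FR2 costs ${cost_of_fr('FR2 feed fuel to engine'):,.0f}")  # $2,800,000
```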
The shuttle main engine: A first look
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1996-01-01
Anyone entering the Space Shuttle Main Engine (SSME) team attends a two-week course to become familiar with the design and workings of the engine. This course provides intensive coverage of the individual hardware items and their functions. Some individuals, particularly those involved with software maintenance and development, have felt overwhelmed by this volume of material and their lack of a logical framework in which to place it. To provide this logical framework, it was decided that a brief self-taught introduction to the overall operation of the SSME should be designed. To aid new team members with an interest in the software, this new course should also explain the structure and functioning of the controller and its software. This paper describes the resulting presentation.
Agent Based Modeling of Air Carrier Behavior for Evaluation of Technology Equipage and Adoption
NASA Technical Reports Server (NTRS)
Horio, Brant M.; DeCicco, Anthony H.; Stouffer, Virginia L.; Hasan, Shahab; Rosenbaum, Rebecca L.; Smith, Jeremy C.
2014-01-01
As part of ongoing research, the National Aeronautics and Space Administration (NASA) and LMI developed a research framework to assist policymakers in identifying impacts on the U.S. air transportation system (ATS) of potential policies and technology related to the implementation of the Next Generation Air Transportation System (NextGen). This framework, called the Air Transportation System Evolutionary Simulation (ATS-EVOS), integrates multiple models into a single process flow to best simulate responses by U.S. commercial airlines and other ATS stakeholders to NextGen-related policies, and in turn, how those responses impact the ATS. Development of this framework required NASA and LMI to create an agent-based model of airline and passenger behavior. This Airline Evolutionary Simulation (AIRLINE-EVOS) models airline decisions about tactical airfare and schedule adjustments, and strategic decisions related to fleet assignments, market prices, and equipage. AIRLINE-EVOS models its own heterogeneous population of passenger agents that interact with airlines; this interaction allows the model to simulate the cycle of action-reaction as airlines compete with each other and engage passengers. We validated a baseline configuration of AIRLINE-EVOS against Airline Origin and Destination Survey (DB1B) data and subject matter expert opinion, and we verified the ATS-EVOS framework and agent behavior logic through scenario-based experiments. These experiments demonstrated AIRLINE-EVOS's capabilities in responding to a shock in fuel prices, and to equipage challenges in a series of analyses based on potential incentive policies for best-equipped, best-served; optimal-wind routing; and traffic management initiative exemption concepts.
NASA Astrophysics Data System (ADS)
Niazi, M. Khalid Khan; Beamer, Gillian; Gurcan, Metin N.
2017-03-01
Accurate detection and quantification of normal lung tissue in the context of Mycobacterium tuberculosis infection is of interest from a biological perspective. The automatic detection and quantification of normal lung will allow biologists to focus more intensely on regions of interest within normal and infected tissues. We present a computational framework to extract individual tissue sections from whole slide images having multiple tissue sections. It automatically detects the background, red blood cells, and handwritten digits to bring efficiency as well as accuracy to the quantification of tissue sections. For efficiency, we model our framework with logical and morphological operations, as they can be performed in linear time. We further divide these individual tissue sections into normal and infected areas using a deep neural network. The computational framework was trained on 60 whole slide images. The proposed computational framework resulted in an overall accuracy of 99.2% when extracting individual tissue sections from 120 whole slide images in the test dataset. The framework resulted in a relatively higher accuracy (99.7%) while classifying individual lung sections into normal and infected areas. Our preliminary findings suggest that the proposed framework has good agreement with biologists on how to define normal and infected lung areas.
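A miniature Python version of such a logical/morphological pass is sketched below on a synthetic image; the threshold and structuring choices are illustrative, not the paper's tuned pipeline.

```python
import numpy as np
from scipy import ndimage

slide = np.full((200, 300), 250, dtype=np.uint8)   # bright background
slide[20:90, 30:120] = 120                         # synthetic tissue section 1
slide[110:180, 160:270] = 130                      # synthetic tissue section 2

tissue = slide < 200                               # logical op: drop background
tissue = ndimage.binary_opening(tissue, iterations=2)   # remove small debris
tissue = ndimage.binary_fill_holes(tissue)

labels, n = ndimage.label(tissue)                  # one label per section
print(f"{n} tissue sections")                      # 2
for box in ndimage.find_objects(labels):
    section = slide[box]                           # crop to feed a classifier
    print(section.shape)
```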
Using Mobile TLA as a Logic for Dynamic I/O Automata
NASA Astrophysics Data System (ADS)
Kapus, Tatjana
Input/Output (I/O) automata and the Temporal Logic of Actions (TLA) are two well-known techniques for the specification and verification of concurrent systems. Over the past few years, they have been extended to the so-called dynamic I/O automata and, respectively, Mobile TLA (MTLA) in order to be more appropriate for mobile agent systems. Dynamic I/O automata is just a mathematical model, whereas MTLA is a logic with a formally defined language. In this paper, therefore, we investigate how MTLA could be used as a formal language for the specification of dynamic I/O automata. We do this by writing an MTLA specification of a travel agent system which has been specified semi-formally in the literature on that model. In this specification, we deal with always existing agents as well as with an initially unknown number of dynamically created agents, with mobile and non-mobile agents, with I/O-automata-style communication, and with the changing communication capabilities of mobile agents. We have previously written a TLA specification of this system. This paper shows that an MTLA specification of such a system can be more elegant and faithful to the dynamic I/O automata definition because the agent existence and location can be expressed directly by using agent and location names instead of special variables as in TLA. It also shows how the reuse of names for dynamically created and destroyed agents within the dynamic I/O automata framework can be specified in MTLA.
Syllogistic reasoning in fuzzy logic and its application to usuality and reasoning with dispositions
NASA Technical Reports Server (NTRS)
Zadeh, L. A.
1985-01-01
A fuzzy syllogism in fuzzy logic is defined to be an inference schema in which the major premise, the minor premise and the conclusion are propositions containing fuzzy quantifiers. A basic fuzzy syllogism in fuzzy logic is the intersection/product syllogism. Several other basic syllogisms are developed that may be employed as rules of combination of evidence in expert systems. Among these is the consequent conjunction syllogism. Furthermore, it is shown that syllogistic reasoning in fuzzy logic provides a basis for reasoning with dispositions; that is, with propositions that are preponderantly but not necessarily always true. It is also shown that the concept of dispositionality is closely related to the notion of usuality and serves as a basis for what might be called a theory of usuality - a theory which may eventually provide a computational framework for commonsense reasoning.
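For reference, the intersection/product syllogism has the following schematic form, rendered here in LaTeX; the concrete reading afterwards is a plausible illustration rather than an example from the paper.

```latex
% Intersection/product syllogism: the fuzzy quantifiers combine by
% fuzzy-number multiplication (\otimes).
\begin{array}{l}
Q_1 \ A\text{'s are } B\text{'s} \\
Q_2 \ (A \wedge B)\text{'s are } C\text{'s} \\ \hline
(Q_1 \otimes Q_2) \ A\text{'s are } (B \wedge C)\text{'s}
\end{array}
```

Read with Q1 = Q2 = "most", the conclusion quantifier "most ⊗ most" is a fuzzy number smaller than "most", capturing the intuitive erosion of certainty across chained generalizations.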
Huang, Jian-hua; Li, Wen-wei; Bian, Qin; Shen, Zi-yin
2011-09-01
The true meanings of the terms of traditional Chinese medicine (TCM) need to be analyzed on a logical basis. It is not suitable to use a new term to interpret an old term of TCM, or to arbitrarily specify that a special term of TCM corresponds to some substance of modern medicine. In the philosophy of language, language has a logical structure which reflects the structure of the world; that is to say, language is the picture of the world in a logical sense. Using this idea, the authors collected the ancient literature on "kidney essence" and extracted each necessary condition for "kidney essence". All necessary conditions together formed a sufficient condition to define the term "kidney essence". It is expected that this example can show the effectiveness of the philosophy of language in analyzing the terms of TCM.
2014-01-01
Background This paper describes the development of a model of Comprehensive Primary Health Care (CPHC) applicable to the Australian context. CPHC holds promise as an effective model of health system organization able to improve population health and increase health equity. However, there is little literature that describes and evaluates CPHC as a whole, with most evaluation focusing on specific programs. The lack of a consensus on what constitutes CPHC, and the complex and context-sensitive nature of CPHC are all barriers to evaluation. Methods The research was undertaken in partnership with six Australian primary health care services: four state government funded and managed services, one sexual health non-government organization, and one Aboriginal community controlled health service. A draft model was crafted combining program logic and theory-based approaches, drawing on relevant literature, 68 interviews with primary health care service staff, and researcher experience. The model was then refined through an iterative process involving two to three workshops at each of the six participating primary health care services, engaging health service staff, regional health executives and central health department staff. Results The resultant Southgate Model of CPHC in Australia model articulates the theory of change of how and why CPHC service components and activities, based on the theory, evidence and values which underpin a CPHC approach, are likely to lead to individual and population health outcomes and increased health equity. The model captures the importance of context, the mechanisms of CPHC, and the space for action services have to work within. The process of development engendered and supported collaborative relationships between researchers and stakeholders and the product provided a description of CPHC as a whole and a framework for evaluation. The model was endorsed at a research symposium involving investigators, service staff, and key stakeholders. Conclusions The development of a theory-based program logic model provided a framework for evaluation that allows the tracking of progress towards desired outcomes and exploration of the particular aspects of context and mechanisms that produce outcomes. This is important because there are no existing models which enable the evaluation of CPHC services in their entirety. PMID:24885812
The Use of a Predictive Habitat Model and a Fuzzy Logic Approach for Marine Management and Planning
Hattab, Tarek; Ben Rais Lasram, Frida; Albouy, Camille; Sammari, Chérif; Romdhane, Mohamed Salah; Cury, Philippe; Leprieur, Fabien; Le Loc’h, François
2013-01-01
Bottom trawl survey data are commonly used as a sampling technique to assess the spatial distribution of commercial species. However, this sampling technique does not always correctly detect a species even when it is present, and this can create significant limitations when fitting species distribution models. In this study, we aim to test the relevance of a mixed methodological approach that combines presence-only and presence-absence distribution models. We illustrate this approach using bottom trawl survey data to model the spatial distributions of 27 commercially targeted marine species. We use an environmentally- and geographically-weighted method to simulate pseudo-absence data. The species distributions are modelled using regression kriging, a technique that explicitly incorporates spatial dependence into predictions. Model outputs are then used to identify areas that met the conservation targets for the deployment of artificial anti-trawling reefs. To achieve this, we propose the use of a fuzzy logic framework that accounts for the uncertainty associated with different model predictions. For each species, the predictive accuracy of the model is classified as ‘high’. A better result is observed when a large number of occurrences are used to develop the model. The map resulting from the fuzzy overlay shows that three main areas have a high level of agreement with the conservation criteria. These results align with expert opinion, confirming the relevance of the proposed methodology in this study. PMID:24146867
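The fuzzy-overlay step can be illustrated in a few lines of Python: per-cell membership grids, one per conservation criterion, are combined with the fuzzy AND (minimum) operator. The grids, criterion names, and threshold below are fabricated for illustration.

```python
import numpy as np

# Membership grids in [0, 1], one per criterion (fabricated 2x2 maps).
habitat_suitability = np.array([[0.9, 0.2], [0.7, 0.1]])
depth_ok            = np.array([[0.95, 0.4], [0.8, 0.2]])
pressure_ok         = np.array([[0.8, 0.9], [0.3, 0.6]])

# Fuzzy AND across criteria: a cell scores only as well as its worst criterion.
agreement = np.minimum.reduce([habitat_suitability, depth_ok, pressure_ok])
candidates = agreement >= 0.7     # reef-deployment threshold (an assumption)
print(agreement)                  # [[0.8 0.2] [0.3 0.1]]
print(candidates)                 # [[ True False] [False False]]
```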
Formal Specification of Information Systems Requirements.
ERIC Educational Resources Information Center
Kampfner, Roberto R.
1985-01-01
Presents a formal model for specification of logical requirements of computer-based information systems that incorporates structural and dynamic aspects based on two separate models: the Logical Information Processing Structure and the Logical Information Processing Network. The model's role in systems development is discussed. (MBR)
ERIC Educational Resources Information Center
Martin, Ian; Carey, John
2014-01-01
A logic model was developed based on an analysis of the 2012 American School Counselor Association (ASCA) National Model in order to provide direction for program evaluation initiatives. The logic model identified three outcomes (increased student achievement/gap reduction, increased school counseling program resources, and systemic change and…
Rigorous Science: a How-To Guide.
Casadevall, Arturo; Fang, Ferric C
2016-11-08
Proposals to improve the reproducibility of biomedical research have emphasized scientific rigor. Although the word "rigor" is widely used, there has been little specific discussion as to what it means and how it can be achieved. We suggest that scientific rigor combines elements of mathematics, logic, philosophy, and ethics. We propose a framework for rigor that includes redundant experimental design, sound statistical analysis, recognition of error, avoidance of logical fallacies, and intellectual honesty. These elements lead to five actionable recommendations for research education. Copyright © 2016 Casadevall and Fang.
Quantum field theory and coalgebraic logic in theoretical computer science.
Basti, Gianfranco; Capolupo, Antonio; Vitiello, Giuseppe
2017-11-01
We suggest that in the framework of Category Theory it is possible to demonstrate the mathematical and logical dual equivalence between the category of the q-deformed Hopf Coalgebras and the category of the q-deformed Hopf Algebras in quantum field theory (QFT), interpreted as a thermal field theory. Each algebra-coalgebra pair characterizes a QFT system and its mirroring thermal bath, respectively, so as to model dissipative quantum systems in far-from-equilibrium conditions, with evident significance also for the biological sciences. Our study is in fact inspired by applications to neuroscience, where the brain memory capacity, for instance, has been modeled by using the QFT unitarily inequivalent representations. The q-deformed Hopf Coalgebras and the q-deformed Hopf Algebras constitute two dual categories because they are characterized by the same functor T, related to the Bogoliubov transform, and by its contravariant application T^op, respectively. The q-deformation parameter is related to the Bogoliubov angle, and it is effectively a thermal parameter. Therefore, the different values of q identify univocally, and label, the vacua appearing in the foliation process of the quantum vacuum. This means that, in the framework of Universal Coalgebra, as a general theory of dynamic and computing systems ("labelled state-transition systems"), the so-labelled infinitely many quantum vacua can be interpreted as the Final Coalgebra of an "Infinite State Black-Box Machine". All this opens the way to the possibility of designing a new class of universal quantum computing architectures based on this coalgebraic QFT formulation, as its ability to naturally generate a Fibonacci progression demonstrates. Copyright © 2017 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Pearce, Kimber Charles; Fadely, Dean
1992-01-01
Analyzes the quasi-logical argumentative framework of George Bush's address in which he endeavored to gain compliance and justify his actions at the beginning of the Persian Gulf War. Identifies arguments of comparison and sacrifice within that framework and examines the role of justice in the speech. (TB)
ERIC Educational Resources Information Center
Black, Beth; Suto, Irenka; Bramley, Tom
2011-01-01
In this paper we develop an evidence-based framework for considering many of the factors affecting marker agreement in GCSEs and A levels. A logical analysis of the demands of the marking task suggests a core grouping comprising: (i) question features; (ii) mark scheme features; and (iii) examinee response features. The framework synthesises…
Fulmer, Erika; Rogers, Todd; Glasgow, LaShawn; Brown, Susan; Kuiper, Nicole
2018-03-01
The outcome indicator framework helps tobacco prevention and control programs (TCPs) plan and implement theory-driven evaluations of their efforts to reduce and prevent tobacco use. Tobacco use is the single-most preventable cause of morbidity and mortality in the United States. The implementation of public health best practices by comprehensive state TCPs has been shown to prevent the initiation of tobacco use, reduce tobacco use prevalence, and decrease tobacco-related health care expenditures. Achieving and sustaining program goals require TCPs to evaluate the effectiveness and impact of their programs. To guide evaluation efforts by TCPs, the Centers for Disease Control and Prevention's Office on Smoking and Health developed an outcome indicator framework that includes a high-level logic model and evidence-based outcome indicators for each tobacco prevention and control goal area. In this article, we describe how TCPs and other community organizations can use the outcome indicator framework in their evaluation efforts. We also discuss how the framework is used at the national level to unify tobacco prevention and control efforts across varying state contexts, identify promising practices, and expand the public health evidence base.
Kull, Kalevi
2015-12-01
We suggest here a model of the origin of the phenomenal world via the naturalization of logical conflict or incompatibility (which is broader than, but includes logical contradiction). Physics rules out the reality of meaning because of the method of formalization, which requires that logical conflicts cannot be part of the model. We argue that (a) meaning-making requires a logical conflict; (b) logical conflict assumes a phenomenal present; (c) phenomenological specious present occurs in living systems as widely as meaning-making; (d) it is possible to provide a physiological description of a system in which the phenomenal present appears and choices are made; (e) logical conflict, or incompatibility itself, is the mechanism of intentionality; (f) meaning-making is assured by scaffolding, which is a product of earlier choices, or decision-making, or interpretation. This model can be seen as a model of semiosis. It also allows putting physiology and phenomenology (or physics and semiotics) into a natural connection. Copyright © 2015. Published by Elsevier Ltd.
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
Designing Experiments to Discriminate Families of Logic Models.
Videla, Santiago; Konokotina, Irina; Alexopoulos, Leonidas G; Saez-Rodriguez, Julio; Schaub, Torsten; Siegel, Anne; Guziolowski, Carito
2015-01-01
Logic models of signaling pathways are a promising way of building effective in silico functional models of a cell, in particular of signaling pathways. The automated learning of Boolean logic models describing signaling pathways can be achieved by training to phosphoproteomics data, which is particularly useful if it is measured upon different combinations of perturbations in a high-throughput fashion. However, in practice, the number and type of allowed perturbations are not exhaustive. Moreover, experimental data are unavoidably subjected to noise. As a result, the learning process results in a family of feasible logical networks rather than in a single model. This family is composed of logic models implementing different internal wirings for the system and therefore the predictions of experiments from this family may present a significant level of variability, and hence uncertainty. In this paper, we introduce a method based on Answer Set Programming to propose an optimal experimental design that aims to narrow down the variability (in terms of input-output behaviors) within families of logical models learned from experimental data. We study how the fitness with respect to the data can be improved after an optimal selection of signaling perturbations and how we learn optimal logic models with minimal number of experiments. The methods are applied on signaling pathways in human liver cells and phosphoproteomics experimental data. Using 25% of the experiments, we obtained logical models with fitness scores (mean square error) 15% close to the ones obtained using all experiments, illustrating the impact that our approach can have on the design of experiments for efficient model calibration.
A Pilot Study on Modeling of Diagnostic Criteria Using OWL and SWRL.
Hong, Na; Jiang, Guoqian; Pathak, Jyotishiman; Chute, Christopher G
2015-01-01
The objective of this study is to describe our efforts in a pilot study on modeling diagnostic criteria using a Semantic Web-based approach. We reused the basic framework of the ICD-11 content model and refined it into an operational model in the Web Ontology Language (OWL). The refinement is based on a bottom-up analysis method, in which we analyzed data elements (including value sets) in a collection (n=20) of randomly selected diagnostic criteria. We also performed a case study to formalize rule logic in the diagnostic criteria of metabolic syndrome using the Semantic Web Rule Language (SWRL). The results demonstrated that it is feasible to use OWL and SWRL to formalize the diagnostic criteria knowledge, and to execute the rules through reasoning.
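To show the kind of rule logic involved, here is a plain-Python rendering of a metabolic syndrome check (at least 3 of 5 criteria met); the thresholds follow one common formulation and should be confirmed against the actual guideline, and the field names are invented.

```python
def metabolic_syndrome(p: dict) -> bool:
    """At least 3 of 5 criteria; thresholds per one common formulation."""
    waist_limit = 102 if p["sex"] == "male" else 88      # cm
    hdl_limit = 40 if p["sex"] == "male" else 50         # mg/dL
    criteria = [
        p["waist_cm"] > waist_limit,
        p["triglycerides"] >= 150,                       # mg/dL
        p["hdl"] < hdl_limit,
        p["systolic"] >= 130 or p["diastolic"] >= 85,    # mmHg
        p["fasting_glucose"] >= 100,                     # mg/dL
    ]
    return sum(criteria) >= 3

patient = {"sex": "male", "waist_cm": 108, "triglycerides": 180,
           "hdl": 45, "systolic": 128, "diastolic": 80, "fasting_glucose": 105}
print(metabolic_syndrome(patient))   # True: 3 of 5 criteria met
```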
Pentecost, Claire; Richards, David A; Frost, Julia
2017-11-28
To investigate the components of the Amalgamation of Marginal Gains (AMG) performance system to identify a set of principles that can be built into an innovative fundamental nursing care protocol. Nursing is urged to refocus on its fundamental care activities, but little evidence exists to guide practising nurses. Fundamental care is a combination of many small behaviours aimed at meeting a person's care needs. AMG is a successful system of performance management that focusses on small (or marginal) gains, and might provide a new delivery framework for fundamental nursing care. Qualitative interview study. We undertook in-depth interviews with healthcare and sports professionals experienced in AMG. We analysed data using open coding in a framework analysis, and then interrogated the data using Normalisation Process Theory (NPT). We triangulated findings with AMG literature to develop an intervention logic model. We interviewed 20 AMG practitioners. AMG processes were as follows: focusing on many details to optimise performance, identification of marginal gains using different sources, understanding current versus optimum performance, monitoring at micro and macro level and strong leadership. Elements of normalisation were as follows: whole team belief in AMG to improve performance, a collective desire for excellence using evidence-based actions, whole team engagement to identify choose and implement changes, and individual and group responsibility for monitoring performance. We have elicited the processes described by AMG innovators in health care and sport and have mapped the normalisation potential and work required to embed such a system into nursing practice. The development of our logic model based on AMG and NPT may provide a practical framework for improving fundamental nursing care and is ripe for further development and testing in clinical trials. © 2017 The Authors Journal of Clinical Nursing Published by John Wiley & Sons Ltd.
Framework for End-User Programming of Cross-Smart Space Applications
Palviainen, Marko; Kuusijärvi, Jarkko; Ovaska, Eila
2012-01-01
Cross-smart space applications are specific types of software services that enable users to share information, monitor the physical and logical surroundings, and control them in a way that is meaningful for the user's situation. For developing cross-smart space applications, this paper makes two main contributions: it introduces (i) a component design and scripting method for end-user programming of cross-smart space applications and (ii) a backend framework of components that interwork to support the brunt of the RDFScript translation, and the use and execution of ontology models. Before end-user programming activities, the software professionals must develop easy-to-apply Driver components for the APIs of existing software systems. Thereafter, end-users are able to create applications from the commands of the Driver components with the help of the provided toolset. The paper also introduces the reference implementation of the framework, tools for the Driver component development and end-user programming of cross-smart space applications, and the first evaluation results on their application. PMID:23202169
Parametric design and gridding through relational geometry
NASA Technical Reports Server (NTRS)
Letcher, John S., Jr.; Shook, D. Michael
1995-01-01
Relational Geometric Synthesis (RGS) is a new logical framework for building up precise definitions of complex geometric models from points, curves, surfaces and solids. RGS achieves unprecedented design flexibility by supporting a rich variety of useful curve and surface entities. During the design process, many qualitative and quantitative relationships between elementary objects may be captured and retained in a data structure equivalent to a directed graph, such that they can be utilized for automatically updating the complete model geometry following changes in the shape or location of an underlying object. Capture of relationships enables many new possibilities for parametric variations and optimization. Examples are given of panelization applications for submarines, sailing yachts, offshore structures, and propellers.
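The update mechanism can be sketched in a few lines of Python: each derived object records its parents and a construction rule, and moving an underlying object re-derives its dependents through the directed graph. Entities are simplified to 2-D points; this illustrates the idea, not the RGS implementation.

```python
class Point:
    def __init__(self, xy=None, rule=None, parents=()):
        self.xy, self.rule, self.parents = xy, rule, tuple(parents)
        self.children = []
        for p in self.parents:
            p.children.append(self)      # record the dependency edge

    def update(self):
        if self.rule:                    # derived object: recompute from parents
            self.xy = self.rule(*[p.xy for p in self.parents])
        for child in self.children:      # propagate down the digraph
            child.update()

a = Point(xy=(0.0, 0.0))
b = Point(xy=(4.0, 0.0))
mid = Point(rule=lambda p, q: ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2),
            parents=(a, b))
mid.update()
print(mid.xy)        # (2.0, 0.0)

a.xy = (2.0, 2.0)    # designer drags an underlying point...
a.update()           # ...and dependents re-derive automatically
print(mid.xy)        # (3.0, 1.0)
```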
Authoring and verification of clinical guidelines: a model driven approach.
Pérez, Beatriz; Porres, Ivan
2010-08-01
The goal of this research is to provide a framework to enable authoring and verification of clinical guidelines. The framework is part of a larger research project aimed at improving the representation, quality and application of clinical guidelines in daily clinical practice. The verification process of a guideline is based on (1) model checking techniques to verify guidelines against semantic errors and inconsistencies in their definition, combined with (2) Model Driven Development (MDD) techniques, which enable us to automatically process manually created guideline specifications and the temporal-logic statements to be checked and verified against these specifications, making the verification process faster and more cost-effective. In particular, we use UML statecharts to represent the dynamics of guidelines and, based on these manually defined guideline specifications, we use an MDD-based tool chain to automatically process them to generate the input model of a model checker. The model checker takes the resulting model together with the specific guideline requirements, and verifies whether the guideline fulfils such properties. The overall framework has been implemented as an Eclipse plug-in named GBDSSGenerator which, in particular, starting from the UML statechart representing a guideline, allows the verification of the guideline against specific requirements. Additionally, we have established a pattern-based approach for defining commonly occurring types of requirements in guidelines. We have successfully validated our overall approach by verifying properties in different clinical guidelines, resulting in the detection of some inconsistencies in their definition. The proposed framework allows (1) the authoring and (2) the verification of clinical guidelines against specific requirements defined on the basis of a set of property specification patterns, enabling non-experts to easily write formal specifications and thus easing the verification process. Copyright 2010 Elsevier Inc. All rights reserved.
Predicting recycling behaviour: Comparison of a linear regression model and a fuzzy logic model.
Vesely, Stepan; Klöckner, Christian A; Dohnal, Mirko
2016-03-01
In this paper we demonstrate that fuzzy logic can provide a better tool for predicting recycling behaviour than the customarily used linear regression. To show this, we take a set of empirical data on recycling behaviour (N=664), which we randomly divide into two halves. The first half is used to estimate a linear regression model of recycling behaviour, and to develop a fuzzy logic model of recycling behaviour. As the first comparison, the fit of both models to the data included in estimation of the models (N=332) is evaluated. As the second comparison, predictive accuracy of both models for "new" cases (hold-out data not included in building the models, N=332) is assessed. In both cases, the fuzzy logic model significantly outperforms the regression model in terms of fit. To conclude, when accurate predictions of recycling and possibly other environmental behaviours are needed, fuzzy logic modelling seems to be a promising technique. Copyright © 2015 Elsevier Ltd. All rights reserved.
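The comparison protocol itself is easy to reproduce in outline. The Python sketch below fits a linear model and a crude three-rule fuzzy model to one fabricated predictor and compares held-out squared error; the data, memberships, and rule consequents are illustrative assumptions, far simpler than the study's multi-predictor models.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(1, 5, 200)                       # e.g., attitude score
y = np.clip(0.5 + 1.5 * np.tanh(x - 3) + rng.normal(0, 0.3, 200), -2, 2)
x_tr, x_te, y_tr, y_te = x[:100], x[100:], y[:100], y[100:]

# Linear regression via least squares.
b1, b0 = np.polyfit(x_tr, y_tr, 1)
mse_lin = np.mean((b0 + b1 * x_te - y_te) ** 2)

def tri(x, a, b, c):                             # triangular membership
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0)

# Rules: low/medium/high predictor -> consequent = weighted mean of the
# training outcomes in that fuzzy region (a crude Sugeno-style model).
mfs = [(0, 1, 3), (1, 3, 5), (3, 5, 6)]
cons = [np.average(y_tr, weights=tri(x_tr, *m) + 1e-9) for m in mfs]
w = np.array([tri(x_te, *m) for m in mfs])
y_hat = np.sum(w * np.array(cons)[:, None], 0) / w.sum(0)
mse_fuz = np.mean((y_hat - y_te) ** 2)
print(f"linear MSE {mse_lin:.3f} vs fuzzy MSE {mse_fuz:.3f}")
```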
Diagnostic Reasoning using Prognostic Information for Unmanned Aerial Systems
NASA Technical Reports Server (NTRS)
Schumann, Johann; Roychoudhury, Indranil; Kulkarni, Chetan
2015-01-01
With the increasing popularity of unmanned aircraft, continuous monitoring of their systems, software, and health status is becoming more and more important to ensure safe, correct, and efficient operation and fulfillment of missions. The paper presents the integration of prognosis models and prognostic information with the R2U2 (REALIZABLE, RESPONSIVE, and UNOBTRUSIVE Unit) monitoring and diagnosis framework. This integration makes statistically reliable predictions of future health information available at a much earlier time, enabling autonomous decision making. The prognostic information can be used in the R2U2 model to improve diagnostic accuracy and enable decisions to be made at the present time to deal with events in the future. This is an advancement over the current state of the art, where temporal logic observers can only do such valuation at the end of the time interval. The usefulness and effectiveness of this integrated diagnostics and prognostics framework were demonstrated using simulation experiments with the NASA Dragon Eye electric unmanned aircraft.
Characteristics Of Ferroelectric Logic Gates Using a Spice-Based Model
NASA Technical Reports Server (NTRS)
MacLeod, Todd C.; Phillips, Thomas A.; Ho, Fat D.
2005-01-01
A SPICE-based model of an n-channel ferroelectric field effect transistor has been developed based on both theoretical and empirical data. This model was used to generate the I-V characteristics of several logic gates. The use of ferroelectric field effect transistors in memory circuits is being developed by several organizations. The use of FFETs in other circuits, both analog and digital, needs to be better understood. The ability of FFETs to have different characteristics depending on the initial polarization can be used to create logic gates. These gates can have properties not available to standard CMOS logic gates, such as memory and reconfigurability. This paper investigates basic properties of FFET logic gates. It models an FFET inverter, a NAND gate, and a multi-input NAND gate. The I-V characteristics of the gates are presented, as well as transfer characteristics and timing. The model used is a SPICE-based model developed from empirical data from actual ferroelectric transistors. It simulates all major characteristics of the ferroelectric transistor, including polarization, hysteresis and decay. Contrasts are made of the differences between FFET logic gates and CMOS logic gates. FFET parameters are varied to show the effect on the overall gate. A reconfigurable gate is investigated, which is not possible with CMOS circuits. The paper concludes that FFETs can be used in logic gates and have several advantages over standard CMOS gates.
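A toy illustration of the reconfigurability property, emphatically not the paper's SPICE model: an inverter whose switching threshold depends on the stored polarization state, so the same hardware exhibits two different transfer characteristics. All numbers are arbitrary.

```python
# Toy illustration (not the paper's SPICE model): an inverter whose
# switching threshold depends on the stored ferroelectric
# polarization, so the same gate realizes two different logic
# behaviours. Voltages and thresholds are arbitrary.

def ffet_inverter(vin, polarization, vdd=3.3):
    # Polarization "up" lowers the effective threshold,
    # "down" raises it -- the source of reconfigurability.
    vth = 1.0 if polarization == "up" else 2.2
    return vdd if vin < vth else 0.0

for vin in (0.5, 1.5, 2.5):
    print(vin, ffet_inverter(vin, "up"), ffet_inverter(vin, "down"))
```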
ERIC Educational Resources Information Center
Losada, David E.; Barreiro, Alvaro
2003-01-01
Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…
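A minimal sketch of the two ingredients named in the abstract, inverse document frequency and a term-similarity-weighted match score, follows. The toy corpus and similarity table are invented, and this is not the authors' logical retrieval model.

```python
# Sketch of two classic ingredients: inverse document frequency and
# a term-similarity-weighted match score. Corpus is illustrative.
import math

docs = [{"retrieval", "logic", "model"},
        {"retrieval", "ranking"},
        {"logic", "semantics"}]

def idf(term):
    n = sum(term in d for d in docs)
    return math.log(len(docs) / n) if n else 0.0

# A toy term-similarity table (symmetric); in practice this would
# come from a thesaurus or co-occurrence statistics.
sim = {frozenset({"logic", "semantics"}): 0.7}

def term_sim(a, b):
    return 1.0 if a == b else sim.get(frozenset({a, b}), 0.0)

def score(query, doc):
    # Each query term contributes its best match in the document,
    # weighted by idf -- a soft, logic-flavoured relevance measure.
    return sum(idf(q) * max(term_sim(q, t) for t in doc) for q in query)

print(score({"logic"}, docs[2]))      # exact match
print(score({"semantics"}, docs[0]))  # partial credit via similarity
```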
Fuzzy logic based sensor performance evaluation of vehicle mounted metal detector systems
NASA Astrophysics Data System (ADS)
Abeynayake, Canicious; Tran, Minh D.
2015-05-01
Vehicle Mounted Metal Detector (VMMD) systems are widely used for detection of threat objects in humanitarian demining and military route clearance scenarios. Due to the diverse nature of such operational conditions, operational use of VMMD without a proper understanding of its capability boundaries may lead to heavy casualties. Multi-criteria fitness evaluations are crucial for determining the capability boundaries of any sensor-based demining equipment. Evaluation of sensor-based military equipment is a multi-disciplinary topic combining the efforts of researchers, operators, managers and commanders having different professional backgrounds and knowledge profiles. Information acquired through field tests usually involves uncertainty, vagueness and imprecision due to variations in test and evaluation conditions during a single test or series of tests. This report presents a fuzzy logic based methodology for experimental data analysis and performance evaluation of VMMD. This data evaluation methodology has been developed to evaluate sensor performance by consolidating expert knowledge with experimental data. A case study is presented by implementing the proposed data analysis framework in a VMMD evaluation scenario. The results of this analysis confirm the accuracy, practicability and reliability of the fuzzy logic based sensor performance evaluation framework.
Fu, Si-Yao; Yang, Guo-Sheng; Kuai, Xin-Kai
2012-01-01
In this paper, we present a quantitative, highly structured cortex-simulated model, which can be simply described as feedforward, hierarchical simulation of ventral stream of visual cortex using biologically plausible, computationally convenient spiking neural network system. The motivation comes directly from recent pioneering works on detailed functional decomposition analysis of the feedforward pathway of the ventral stream of visual cortex and developments on artificial spiking neural networks (SNNs). By combining the logical structure of the cortical hierarchy and computing power of the spiking neuron model, a practical framework has been presented. As a proof of principle, we demonstrate our system on several facial expression recognition tasks. The proposed cortical-like feedforward hierarchy framework has the merit of capability of dealing with complicated pattern recognition problems, suggesting that, by combining the cognitive models with modern neurocomputational approaches, the neurosystematic approach to the study of cortex-like mechanism has the potential to extend our knowledge of brain mechanisms underlying the cognitive analysis and to advance theoretical models of how we recognize face or, more specifically, perceive other people's facial expression in a rich, dynamic, and complex environment, providing a new starting point for improved models of visual cortex-like mechanism. PMID:23193391
About, for, in or through entrepreneurship in engineering education
NASA Astrophysics Data System (ADS)
Mäkimurto-Koivumaa, Soili; Belt, Pekka
2016-09-01
Engineering competences form a potential basis for entrepreneurship. There are pressures to find new approaches to entrepreneurship education (EE) in engineering education, as the traditional analytical logic of engineering does not match the modern view of entrepreneurship. Since previous models do not give sufficiently tangible tools for organising EE in practice, this article aims to develop a new framework for EE at the university level. We approach this aim by analysing the existing scientific literature, complemented by long-term practical observations, enabling a fruitful interplay between theory and practice. The developed framework recommends aspects of EE to be emphasised during each year of the study process. Action-based learning methods are highlighted in the beginning of studies to support students' personal growth. Explicit business knowledge is to be gradually increased only once professional, field-specific knowledge has been adequately accumulated.
Prototype software model for designing intruder detection systems with simulation
NASA Astrophysics Data System (ADS)
Smith, Jeffrey S.; Peters, Brett A.; Curry, James C.; Gupta, Dinesh
1998-08-01
This article explores using discrete-event simulation for the design and control of defence-oriented, fixed-sensor-based detection systems in a facility housing items of significant interest to enemy forces. The key issues discussed include software development, simulation-based optimization within a modeling framework, and the expansion of the framework to create real-time control tools and training simulations. The software discussed in this article is a flexible simulation environment in which the data for the simulation are stored in an external database and the simulation logic is implemented using a commercial simulation package. The simulation assesses the overall security level of a building against various intruder scenarios. A series of simulation runs with different inputs can determine the change in security level with changes in the sensor configuration, building layout, and intruder/guard strategies. In addition, the simulation model developed for the design stage of the project can be modified to produce a control tool for the testing, training, and real-time control of systems with humans and sensor hardware in the loop.
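A minimal Monte Carlo sketch of the underlying idea: estimate a facility's detection probability for a random intruder path against a fixed sensor layout. The geometry, sensor model, and parameters are invented; a production tool would use a full discrete-event engine and an external database as described above.

```python
# Minimal Monte Carlo flavour of the idea: estimate the detection
# probability of a random intruder walk against a sensor layout.
# Geometry and sensor parameters are invented illustrations.
import random

SENSORS = [(2, 3, 0.8), (5, 5, 0.9), (8, 2, 0.7)]  # x, y, p_detect
RADIUS = 2.0

def run_once(rng):
    x, y = 0.0, rng.uniform(0, 6)          # intruder entry point
    while x < 10.0:                        # walk across the site
        x += 1.0
        y = min(max(y + rng.uniform(-1, 1), 0.0), 6.0)
        for sx, sy, p in SENSORS:
            if (x - sx) ** 2 + (y - sy) ** 2 <= RADIUS ** 2:
                if rng.random() < p:
                    return True            # detected
    return False                           # reached target unseen

rng = random.Random(42)
runs = 10_000
detected = sum(run_once(rng) for _ in range(runs))
print(f"security level ~ {detected / runs:.2%} detection")
```

Re-running with a different SENSORS list is the software analogue of the sensitivity experiments described in the abstract.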
Applying differential dynamic logic to reconfigurable biological networks.
Figueiredo, Daniel; Martins, Manuel A; Chaves, Madalena
2017-09-01
Qualitative and quantitative modeling frameworks are widely used for analysis of biological regulatory networks, the former giving a preliminary overview of the system's global dynamics and the latter providing more detailed solutions. Another approach is to model biological regulatory networks as hybrid systems, i.e., systems which can display both continuous and discrete dynamic behaviors. Actually, the development of synthetic biology has shown that this is a suitable way to think about biological systems, which can often be constructed as networks with discrete controllers, and present hybrid behaviors. In this paper we discuss this approach as a special case of the reconfigurability paradigm, well studied in Computer Science (CS). In CS there are well developed computational tools to reason about hybrid systems. We argue that it is worth applying such tools in a biological context. One interesting tool is differential dynamic logic (dL), which has recently been developed by Platzer and applied to many case-studies. In this paper we discuss some simple examples of biological regulatory networks to illustrate how dL can be used as an alternative, or also as a complement to methods already used. Copyright © 2017 Elsevier Inc. All rights reserved.
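A minimal sketch of the kind of hybrid model in question: one species with continuous production and decay dynamics, plus a discrete controller that switches production on and off with hysteresis. Parameters are invented, and dL would be used to prove properties of such a system rather than merely simulate one run, as done here.

```python
# Sketch of a hybrid (piecewise-continuous) regulatory model: one
# protein with continuous decay and a discrete controller that
# switches production on below a threshold. Parameters invented.

def simulate(x0=0.2, k_prod=1.0, k_deg=0.5, thr=1.0, dt=0.01, t_end=20.0):
    x, t, mode = x0, 0.0, "ON"
    trace = []
    while t < t_end:
        if mode == "ON" and x >= thr * 1.2:    # discrete jumps with
            mode = "OFF"                       # a hysteresis band
        elif mode == "OFF" and x <= thr * 0.8:
            mode = "ON"
        prod = k_prod if mode == "ON" else 0.0
        x += (prod - k_deg * x) * dt           # continuous flow (Euler)
        t += dt
        trace.append((round(t, 2), mode, round(x, 3)))
    return trace

print(simulate()[-1])  # the state settles into a sustained oscillation
```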
Rethinking Social Barriers to Effective Adaptive Management.
West, Simon; Schultz, Lisen; Bekessy, Sarah
2016-09-01
Adaptive management is an approach to environmental management based on learning-by-doing, where complexity, uncertainty, and incomplete knowledge are acknowledged and management actions are treated as experiments. However, while adaptive management has received significant uptake in theory, it remains elusively difficult to enact in practice. Proponents have blamed social barriers and have called for social science contributions. We address this gap by adopting a qualitative approach to explore the development of an ecological monitoring program within an adaptive management framework in a public land management organization in Australia. We ask what practices are used to enact the monitoring program and how they shape learning. We elicit a rich narrative through extensive interviews with a key individual, and analyze the narrative using thematic analysis. We discuss our results in relation to the concept of 'knowledge work' and Westley's (2002) framework for interpreting the strategies of adaptive managers: 'managing through, in, out and up.' We find that enacting the program is conditioned by distinct and sometimes competing logics: scientific logics prioritizing experimentation and learning, public logics emphasizing accountability and legitimacy, and corporate logics demanding efficiency and effectiveness. In this context, implementing adaptive management entails practices of translation to negotiate tensions between objective and situated knowledge, external experts and organizational staff, and collegiate and hierarchical norms. Our contribution embraces the 'doing' of learning-by-doing and marks a shift from conceptualizing the social as an external barrier to adaptive management to be removed, to an approach that situates adaptive management as social knowledge practice.
Framework for a clinical information system.
Van de Velde, R
2000-01-01
The current status of our work towards the design and implementation of a reference architecture for a Clinical Information System is presented. This architecture has been developed and implemented based on components, following a strong underlying conceptual and technological model. Common Object Request Broker and n-tier technology are used, featuring centralised and departmental clinical information systems as the back-end store for all clinical data. Servers located in the 'middle' tier apply the clinical (business) model and application rules to communicate with so-called 'thin client' workstations. The main characteristics are the focus on modelling and reuse of both data and business logic, as there is a shift away from data and functional modelling towards object modelling. Scalability, as well as adaptability to constantly changing requirements via component-driven computing, are the main reasons for that approach.
Telerobotic control of a mobile coordinated robotic server. M.S. Thesis Annual Technical Report
NASA Technical Reports Server (NTRS)
Lee, Gordon
1993-01-01
The annual report on telerobotic control of a mobile coordinated robotic server is presented. The goal of this effort is to develop advanced control methods for flexible space manipulator systems. To this end, an adaptive fuzzy logic controller was developed in which neither a model structure nor parameter constraints are required for compensation. The work builds upon previous work on fuzzy logic controllers. Fuzzy logic controllers have been growing in importance in the field of automatic feedback control. Hardware controllers using fuzzy logic have become available as an alternative to traditional PID controllers. Software has also been introduced to aid in the development of fuzzy logic rule-bases. The advantages of using fuzzy logic controllers include the ability to merge the experience and intuition of expert operators into the rule-base, and the fact that a model of the system is not required to construct the controller. A drawback of the classical fuzzy logic controller, however, is the many parameters that need to be tuned off-line prior to application in the closed loop. In this report, an adaptive fuzzy logic controller is developed requiring no system model or model structure. The rule-base is defined to approximate a state-feedback controller, while a second fuzzy logic algorithm varies the parameters of the defining controller on-line. Results indicate the approach is viable for on-line adaptive control of systems when the model is too complex or uncertain for application of other, more classical control techniques.
2011-01-01
Background Workflow engine technology represents a new class of software with the ability to graphically model step-based knowledge. We present an application of this novel technology to the domain of clinical decision support. Successful implementation of decision support within an electronic health record (EHR) remains an unsolved research challenge. Previous research efforts were mostly based on healthcare-specific representation standards and execution engines and did not reach wide adoption. We focus on two challenges in decision support systems: the ability to test decision logic on retrospective data prior to prospective deployment, and the challenge of user-friendly representation of clinical logic. Results We present our implementation of a workflow engine technology that addresses the two above-described challenges in delivering clinical decision support. Our system is based on a cross-industry standard, the XML (extensible markup language) process definition language (XPDL). The core components of the system are a workflow editor for modeling clinical scenarios and a workflow engine for execution of those scenarios. We demonstrate, with an open-source and publicly available workflow suite, that clinical decision support logic can be executed on retrospective data. The same flowchart-based representation can also function in a prospective mode, where the system can be integrated with an EHR system and respond to real-time clinical events. We limit the scope of our implementation to decision support content generation (which can be EHR system vendor independent). We do not focus on supporting complex decision support content delivery mechanisms, due to the lack of standardization of EHR systems in this area. We present the results of our evaluation of the flowchart-based graphical notation, as well as an architectural evaluation of our implementation using an established evaluation framework for clinical decision support architecture. Conclusions We describe an implementation of a free workflow technology software suite (available at http://code.google.com/p/healthflow) and its application in the domain of clinical decision support. Our implementation seamlessly supports clinical logic testing on retrospective data and offers a user-friendly knowledge representation paradigm. With the presented software implementation, we demonstrate that workflow engine technology can provide a decision support platform which evaluates well against an established clinical decision support architecture evaluation framework. Due to cross-industry usage of workflow engine technology, we can expect significant future functionality enhancements that will further improve the technology's capacity to serve as a clinical decision support platform. PMID:21477364
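A minimal sketch of the retrospective-testing idea: the same flowchart-like decision logic is executed over historical records before being connected to live events. The field names and the rule are hypothetical, and this is not the XPDL engine described in the article.

```python
# Sketch of retrospective testing: a flowchart-like rule is executed
# over historical records before prospective deployment. Fields and
# thresholds are hypothetical, not from any real guideline.

RULE = [  # a two-step "flowchart": condition -> next step
    ("diabetic",  lambda r: r["diabetes"],   "check_a1c"),
    ("check_a1c", lambda r: r["a1c"] >= 8.0, "alert"),
]

def run_flowchart(record, start="diabetic"):
    step = start
    for name, cond, nxt in RULE:
        if step == name:
            step = nxt if cond(record) else "stop"
    return step

retrospective = [
    {"id": 1, "diabetes": True,  "a1c": 9.1},
    {"id": 2, "diabetes": True,  "a1c": 6.9},
    {"id": 3, "diabetes": False, "a1c": 7.5},
]
alerts = [r["id"] for r in retrospective if run_flowchart(r) == "alert"]
print("would have fired for patients:", alerts)   # [1]
```

The same `run_flowchart` function could be invoked from a live event stream, which is the dual prospective mode the abstract describes.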
California Geriatric Education Center Logic Model: An Evaluation and Communication Tool
ERIC Educational Resources Information Center
Price, Rachel M.; Alkema, Gretchen E.; Frank, Janet C.
2009-01-01
A logic model is a communications tool that graphically represents a program's resources, activities, priority target audiences for change, and the anticipated outcomes. This article describes the logic model development process undertaken by the California Geriatric Education Center in spring 2008. The CGEC is one of 48 Geriatric Education…
Afifi, Rema A; Makhoul, Jihad; El Hajj, Taghreed; Nakkash, Rima T
2011-01-01
Although logic models are now touted as an important component of health promotion planning, implementation and evaluation, there are few published manuscripts that describe the process of logic model development, and fewer which do so with community involvement, despite the increasing emphasis on participatory research. This paper describes a process leading to the development of a logic model for a youth mental health promotion intervention using a participatory approach in a Palestinian refugee camp in Beirut, Lebanon. First, a needs assessment, including quantitative and qualitative data collection was carried out with children, parents and teachers. The second phase was identification of a priority health issue and analysis of determinants. The final phase in the construction of the logic model involved development of an intervention. The process was iterative and resulted in a more grounded depiction of the pathways of influence informed by evidence. Constructing a logic model with community input ensured that the intervention was more relevant to community needs, feasible for implementation and more likely to be sustainable. PMID:21278370
Texas traffic thermostat software tool.
DOT National Transportation Integrated Search
2013-04-01
The traffic thermostat decision tool is built to help guide the user through a logical, step-wise process of examining potential changes to their Managed Lane/toll facility. **NOTE: Project Title: Application of the Traffic Thermostat Framework. Ap...
Texas traffic thermostat marketing package.
DOT National Transportation Integrated Search
2013-04-01
The traffic thermostat decision tool is built to help guide the user through a logical, step-wise process of examining potential changes to their Managed Lane/toll facility. **NOTE: Project Title: Application of the Traffic Thermostat Framework. Ap...
Application of linear logic to simulation
NASA Astrophysics Data System (ADS)
Clarke, Thomas L.
1998-08-01
Linear logic, since its introduction by Girard in 1987, has proven expressive and powerful. Linear logic has provided natural encodings of Turing machines, Petri nets and other computational models. Linear logic is also capable of naturally modeling resource-dependent aspects of reasoning. The distinguishing characteristic of linear logic is that it accounts for resources; two instances of the same variable are considered differently from a single instance. Linear logic thus must obey a form of the linear superposition principle. A proposition can be reasoned with only once, unless a special operator is applied. Informally, linear logic distinguishes two kinds of conjunction, two kinds of disjunction, and also introduces a modal storage operator that explicitly indicates propositions that can be reused. This paper discusses the application of linear logic to simulation. A wide variety of logics have been developed; in addition to classical logic, there are fuzzy logics, affine logics, quantum logics, etc. All of these have found application in simulations of one sort or another. The special characteristics of linear logic and its benefits for simulation will be discussed. Of particular interest is a connection that can be made between linear logic and simulated dynamics by using the concepts of Lie algebras and Lie groups. Lie groups provide the connection between the exponential modal storage operators of linear logic and the eigenfunctions of dynamic differential operators. Particularly suggestive are possible relations between complexity results for linear logic and non-computability results for dynamical systems.
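A toy illustration of the resource sensitivity described above, in a Petri-net-flavoured reading (the abstract notes that linear logic naturally encodes Petri nets): premises form a multiset that inference consumes, while propositions stored under the '!' modality remain reusable. The vending-machine example is a standard textbook illustration, not from the paper.

```python
# Toy illustration of linear logic's resource sensitivity: premises
# form a multiset that inference consumes; '!'-stored propositions
# are reusable. Example is a standard textbook vending machine.
from collections import Counter

def fire(resources, consume, produce):
    """Fire the rule `consume -o produce` if possible. Resources
    prefixed with '!' are under the storage modality and are not
    consumed; everything else is used up exactly once."""
    linear = Counter(r for r in consume if not r.startswith("!"))
    banged = [r for r in consume if r.startswith("!")]
    if any(resources[r] < n for r, n in linear.items()):
        return None
    if any(resources[r] == 0 for r in banged):
        return None
    out = resources - linear
    out.update(produce)
    return out

state = Counter({"coin": 1, "!vending_machine": 1})
state = fire(state, ["coin", "!vending_machine"], ["coffee"])
print(state)   # coin consumed, machine still there, coffee produced
print(fire(state, ["coin", "!vending_machine"], ["coffee"]))  # None
```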
ERIC Educational Resources Information Center
Hamilton, Jenny; Bronte-Tinkew, Jacinta
2007-01-01
A logic model, also called a conceptual model and theory-of-change model, is a visual representation of how a program is expected to "work." It relates resources, activities, and the intended changes or impacts that a program is expected to create. Typically, logic models are diagrams or flow charts with illustrations, text, and arrows that…
A General Water Resources Regulation Software System in China
NASA Astrophysics Data System (ADS)
LEI, X.
2017-12-01
To avoid repeated development of core modules for normal and emergency water resource regulation, and to improve the maintainability and upgradability of regulation models and business logic, a general water resources regulation software framework was developed based on the collection and analysis of common demands for water resources regulation and emergency management. It provides a customizable and extensible software framework, open to secondary development, for the three-level platform "MWR-Basin-Province". Meanwhile, this general software system can realize business collaboration and information sharing of water resources regulation schemes among the three-level platforms, so as to improve the decision-making ability of national water resources regulation. There are four main modules involved in the general software system: 1) a complete set of general water resources regulation modules that allows secondary developers to custom-develop water resources regulation decision-making systems; 2) a complete set of model bases and model computing software released in the form of cloud services; 3) a complete set of tools to build the concept map and model system of basin water resources regulation, as well as a model management system to calibrate and configure model parameters; 4) a database that satisfies the business functions and functional requirements of general water resources regulation software, providing technical support for building basin or regional water resources regulation models.
ERIC Educational Resources Information Center
Sammis, Theodore W.; Shukla, Manoj K.; Mexal, John G.; Wang, Junming; Miller, David R.
2013-01-01
Universities develop strategic planning documents, and as part of that planning process, logic models are developed for specific programs within the university. This article examines the long-standing pecan program at New Mexico State University and the deficiencies and successes in the evolution of its logic model. The university's agricultural…
NASA Technical Reports Server (NTRS)
Pitts, E. R.
1981-01-01
Program converts cell-net data into logic-gate models for use in test and simulation programs. Input consists of either Place, Route, and Fold (PRF) or Place-and-Route-in-Two-Dimensions (PR2D) layout data deck. Output consists of either Test Pattern Generator (TPG) or Logic-Simulation (LOGSIM) logic circuitry data deck. Designer needs to build only logic-gate-model circuit description since program acts as translator. Language is FORTRAN IV.
Image/video understanding systems based on network-symbolic models
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2004-03-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding that is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks. The ability of the human brain to emulate similar graph/network models has been found. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure, where nodes are cortical columns. Spatial logic and topology are naturally present in such structures. Mid-level vision processes, like perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract ones, which represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures. Higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy together with higher-level model-based reasoning into a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representation. Based on such principles, an Image/Video Understanding system can convert images into knowledge models, and resolve uncertainty and ambiguity. This allows creating intelligent computer vision systems for design and manufacturing.
Chol understandings of suicide and human agency.
Imberton, Gracia
2012-06-01
According to ethnographic material collected since 2003, the Chol Mayan indigenous people in southern Mexico have different causal explanations for suicide. It can be attributed to witchcraft that forces victims to take their lives against their own will, to excessive drinking, or to fate determined by God. However, it can also be conceived of as a conscious decision made by a person overwhelmed by daily problems. Drawing from the theoretical framework developed by Laura M. Ahearn, inspired by practice theory, the paper contends that these different explanations operate within two different logics or understandings of human agency. The first logic attributes responsibility to supernatural causes such as witchcraft or divine destiny, and reflects Chol notions of personhood. The second logic accepts personal responsibility for suicide, and is related to processes of social change such as the introduction of wage labor, education and a market economy. The contemporary Chol resort to both logics to make sense of the human drama of suicide.
Nature and place of crime scene management within forensic sciences.
Crispino, Frank
2008-03-01
This short paper presents the preliminary results of a recent study aimed at appreciating the relevant parameters required to qualify forensic science as a science through an epistemological analysis. The reader is invited to reflect upon references within a historical and logical framework which assert that forensic science is based upon two fundamental principles (those of Locard and Kirk). The basis of the assertion that forensic science is indeed a science should be appreciated not only on one epistemological criterion (as Popper's falsification, raised by the Daubert hearing, was), but also on the logical frameworks used by the individuals involved (investigator, expert witness and trier of fact) from the crime scene examination to the final interpretation of the evidence. Hence, it can be argued that the management of the crime scene should be integrated into the scientific way of thinking rather than remain a technical discipline, as recently suggested by Harrison.
An acceleration framework for synthetic aperture radar algorithms
NASA Astrophysics Data System (ADS)
Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.
2017-04-01
Algorithms for radar signal processing, such as Synthetic Aperture Radar (SAR), are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup from adding reasonably small processing elements in a Field Programmable Gate Array (FPGA), as opposed to using a software implementation running on a typical general purpose processor.
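A pure-software sketch of the homomorphic-filtering step mentioned above: move to the log domain (the natural-logarithm kernel the FPGA accelerates), smooth, then exponentiate back, so multiplicative noise is treated additively. The data, kernel size, and mean filter are illustrative stand-ins for the paper's hardware implementation.

```python
# Sketch of homomorphic filtering: log domain -> smooth -> exp back,
# so multiplicative speckle becomes additive. Numpy stand-in for the
# FPGA kernel; data and kernel size are illustrative.
import numpy as np

def homomorphic_smooth(image, k=3):
    log_img = np.log(image + 1e-6)        # the ln() step accelerated
    pad = k // 2                          # in hardware in the paper
    padded = np.pad(log_img, pad, mode="edge")
    out = np.empty_like(log_img)
    h, w = log_img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return np.exp(out)

speckled = np.random.default_rng(1).gamma(4.0, 0.25, size=(64, 64))
print(homomorphic_smooth(speckled).std() < speckled.std())  # True
```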
Hierarchical semantic structures for medical NLP.
Taira, Ricky K; Arnold, Corey W
2013-01-01
We present a framework for building a medical natural language processing (NLP) system capable of deep understanding of clinical text reports. The framework helps developers understand how various NLP-related efforts and knowledge sources can be integrated. The aspects considered include: 1) computational issues, dealing with defining layers of intermediate semantic structures to reduce the dimensionality of the NLP problem; 2) algorithmic issues, in which we survey the NLP literature and discuss state-of-the-art procedures used to map between various levels of the hierarchy; and 3) implementation issues for software developers with available resources. The objective of this poster is to educate readers about the various levels of semantic representation (e.g., word-level concepts, ontological concepts, logical relations, logical frames, discourse structures, etc.). The poster presents an architecture within which diverse efforts and resources in medical NLP can be integrated in a principled way.
Nonmonotonic Logic for Use in Information Retrieval: An Exploratory Paper.
ERIC Educational Resources Information Center
Hurt, C. D.
1998-01-01
Monotonic logic requires reexamination of the entire logic string when there is a contradiction. Nonmonotonic logic allows the user to withdraw conclusions in the face of contradiction without harm to the logic string, which has considerable application to the field of information searching. Artificial intelligence models and neural networks based…
Lessons learned in detailed clinical modeling at Intermountain Healthcare
Oniki, Thomas A; Coyle, Joseph F; Parker, Craig G; Huff, Stanley M
2014-01-01
Background and objective Intermountain Healthcare has a long history of using coded terminology and detailed clinical models (DCMs) to govern storage of clinical data to facilitate decision support and semantic interoperability. The latest iteration of DCMs at Intermountain is called the clinical element model (CEM). We describe the lessons learned from our CEM efforts with regard to subjective decisions a modeler frequently needs to make in creating a CEM. We present insights and guidelines, but also describe situations in which use cases conflict with the guidelines. We propose strategies that can help reconcile the conflicts. The hope is that these lessons will be helpful to others who are developing and maintaining DCMs in order to promote sharing and interoperability. Methods We have used the Clinical Element Modeling Language (CEML) to author approximately 5000 CEMs. Results Based on our experience, we have formulated guidelines to lead our modelers through the subjective decisions they need to make when authoring models. Reported here are guidelines regarding precoordination/postcoordination, dividing content between the model and the terminology, modeling logical attributes, and creating iso-semantic models. We place our lessons in context, exploring the potential benefits of an implementation layer, an iso-semantic modeling framework, and ontologic technologies. Conclusions We assert that detailed clinical models can advance interoperability and sharing, and that our guidelines, an implementation layer, and an iso-semantic framework will support our progress toward that goal. PMID:24993546
Conceptual Modeling via Logic Programming
1990-01-01
Steps include: 3. Define User Interface and Query Language; 4. Define Procedures for Specifying Output; 5. Select Logic Programming Language; 6. Develop Methodology for C3I Users. [The remainder of this record is two-column extraction residue; it references session and baseline change models and cites "Conceptual Modeling via Logic Programming," Marina del Rey, Calif.]
Logic Modeling in Quantitative Systems Pharmacology
Traynard, Pauline; Tobalina, Luis; Eduati, Federica; Calzone, Laurence
2017-01-01
Here we present logic modeling as an approach to understand deregulation of signal transduction in disease and to characterize a drug's mode of action. We discuss how to build a logic model from the literature and experimental data and how to analyze the resulting model to obtain insights of relevance for systems pharmacology. Our workflow uses the free tools OmniPath (network reconstruction from the literature), CellNOpt (model fit to experimental data), MaBoSS (model analysis), and Cytoscape (visualization). PMID:28681552
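A minimal sketch of what a logic model of signal transduction looks like in code: a Boolean network updated synchronously, with a "drug" node clamped to probe a mode of action. The toy topology is invented, not drawn from OmniPath, and real analyses would use the CellNOpt/MaBoSS tools named in the abstract.

```python
# Minimal logic model of signaling: a Boolean network updated
# synchronously, with a clamped "drug" node modeling a perturbation.
# Toy topology, invented for illustration.

RULES = {
    "RAF":  lambda s: s["EGFR"] and not s["drug"],  # drug inhibits RAF
    "MEK":  lambda s: s["RAF"],
    "ERK":  lambda s: s["MEK"],
    "EGFR": lambda s: s["EGFR"],                    # held as input
    "drug": lambda s: s["drug"],
}

def simulate(state, steps=10):
    for _ in range(steps):
        state = {n: rule(state) for n, rule in RULES.items()}
    return state

base = {"EGFR": True, "RAF": False, "MEK": False, "ERK": False}
untreated = simulate(dict(base, drug=False))
treated   = simulate(dict(base, drug=True))
print(untreated["ERK"], treated["ERK"])  # True False: mode of action
```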
Langer, Erika M; Gifford, Allen L; Chan, Kee
2011-01-01
Objective Logic models have been used to evaluate policy programs, plan projects, and allocate resources. Logic Modeling for policy analysis has been used rarely in health services research but can be helpful in evaluating the content and rationale of health policies. Comparative Logic Modeling is used here on human immunodeficiency virus (HIV) policy statements from the Department of Veterans Affairs (VA) and Centers for Disease Control and Prevention (CDC). We created visual representations of proposed HIV screening policy components in order to evaluate their structural logic and research-based justifications. Data Sources and Study Design We performed content analysis of VA and CDC HIV testing policy documents in a retrospective case study. Data Collection Using comparative Logic Modeling, we examined the content and primary sources of policy statements by the VA and CDC. We then quantified evidence-based causal inferences within each statement. Principal Findings VA HIV testing policy structure largely replicated that of the CDC guidelines. Despite similar design choices, chosen research citations did not overlap. The agencies used evidence to emphasize different components of the policies. Conclusion Comparative Logic Modeling can be used by health services researchers and policy analysts more generally to evaluate structural differences in health policies and to analyze research-based rationales used by policy makers. PMID:21689094
Model Checking Temporal Logic Formulas Using Sticker Automata
Feng, Changwei; Wu, Huanmei
2017-01-01
As an important complex problem, the temporal logic model checking problem is still far from being fully resolved in the setting of DNA computing, especially for Computation Tree Logic (CTL), Interval Temporal Logic (ITL), and Projection Temporal Logic (PTL), because there is still a lack of approaches for DNA model checking. To address this challenge, a model checking method is proposed for checking the basic formulas in the above three temporal logic types with DNA molecules. First, one type of single-stranded DNA molecule is employed to encode the Finite State Automaton (FSA) model of the given basic formula, so that a sticker automaton is obtained. Then, other single-stranded DNA molecules are employed to encode the given system model, so that the input strings of the sticker automaton are obtained. Next, a series of biochemical reactions is conducted between the above two types of single-stranded DNA molecules. It can then be decided whether the system satisfies the formula or not. As a result, we have developed a DNA-based approach for checking all the basic formulas of CTL, ITL, and PTL. The simulated results demonstrate the effectiveness of the new method. PMID:29119114
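Abstracting away the biochemistry, the computation a sticker automaton performs is ordinary finite-automaton acceptance. The sketch below simulates that in software, with a toy obligation ("every req is immediately followed by ack") standing in for the paper's temporal-logic formulas.

```python
# Software stand-in for the wet-lab procedure: the sticker automaton
# is abstracted as an ordinary FSA, and the biochemical reactions as
# a transition-table walk. Toy property, not from the paper.

TRANSITIONS = {           # (state, symbol) -> state
    ("ok", "idle"): "ok",
    ("ok", "req"): "wait",
    ("wait", "ack"): "ok",
}
ACCEPTING = {"ok"}

def accepts(word, start="ok"):
    state = start
    for symbol in word:
        state = TRANSITIONS.get((state, symbol))
        if state is None:
            return False      # no matching "sticker" -> reject
    return state in ACCEPTING

print(accepts(["idle", "req", "ack", "idle"]))  # True
print(accepts(["req", "idle"]))                 # False: ack missing
```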
Comparison of learning models based on mathematics logical intelligence in affective domain
NASA Astrophysics Data System (ADS)
Widayanto, Arif; Pratiwi, Hasih; Mardiyana
2018-04-01
The purpose of this study was to examine the presence or absence of different effects of multiple treatments (the learning models used and logical-mathematical intelligence) on the dependent variable (the affective domain of mathematics). This research was quasi-experimental, using a 3×3 factorial design. The population of this research was grade VIII students of junior high school in Karanganyar in the academic year 2017/2018. Data collected in this research were analyzed by two-way analysis of variance with unequal cells at the 5% significance level. The results of the research were as follows: (1) teaching and learning with model TS leads to better achievement in the affective domain than QSH, and teaching and learning with model QSH leads to better achievement in the affective domain than DI; (2) students with high mathematics logical intelligence have better achievement in the affective domain than students with low mathematics logical intelligence; (3) in teaching and learning mathematics using learning model TS, students with moderate mathematics logical intelligence have better achievement in the affective domain than with DI; and (4) in teaching and learning mathematics using learning model TS, students with low mathematics logical intelligence have better achievement in the affective domain than with QSH and DI.
Segal, Leonie; Sara Opie, Rachelle; Dalziel, Kim
2012-01-01
Context Home-visiting programs have been offered for more than sixty years to at-risk families of newborns and infants. But despite decades of experience with program delivery, more than sixty published controlled trials, and more than thirty published literature reviews, there is still uncertainty surrounding the performance of these programs. Our particular interest was the performance of home visiting in reducing child maltreatment. Methods We developed a program logic framework to assist in understanding the neonate/infant home-visiting literature, identified through a systematic literature review. We tested whether success could be explained by the logic model using descriptive synthesis and statistical analysis. Findings Having a stated objective of reducing child maltreatment—a theory or mechanism of change underpinning the home-visiting program consistent with the target population and their needs and program components that can deliver against the nominated theory of change—considerably increased the chance of success. We found that only seven of fifty-three programs demonstrated such consistency, all of which had a statistically significant positive outcome, whereas of the fifteen that had no match, none was successful. Programs with a partial match had an intermediate success rate. The relationship between program success and full, partial or no match was statistically significant. Conclusions Employing a theory-driven approach provides a new way of understanding the disparate performance of neonate/infant home-visiting programs. Employing a similar theory-driven approach could also prove useful in the review of other programs that embody a diverse set of characteristics and may apply to diverse populations and settings. A program logic framework provides a rigorous approach to deriving policy-relevant meaning from effectiveness evidence of complex programs. For neonate/infant home-visiting programs, it means that in developing these programs, attention to consistency of objectives, theory of change, target population, and program components is critical. PMID:22428693
Segal, Leonie; Sara Opie, Rachelle; Dalziel, Kim
2012-03-01
Home-visiting programs have been offered for more than sixty years to at-risk families of newborns and infants. But despite decades of experience with program delivery, more than sixty published controlled trials, and more than thirty published literature reviews, there is still uncertainty surrounding the performance of these programs. Our particular interest was the performance of home visiting in reducing child maltreatment. We developed a program logic framework to assist in understanding the neonate/infant home-visiting literature, identified through a systematic literature review. We tested whether success could be explained by the logic model using descriptive synthesis and statistical analysis. Having a stated objective of reducing child maltreatment, a theory or mechanism of change underpinning the home-visiting program consistent with the target population and their needs, and program components that can deliver against the nominated theory of change considerably increased the chance of success. We found that only seven of fifty-three programs demonstrated such consistency, all of which had a statistically significant positive outcome, whereas of the fifteen that had no match, none was successful. Programs with a partial match had an intermediate success rate. The relationship between program success and full, partial or no match was statistically significant. Employing a theory-driven approach provides a new way of understanding the disparate performance of neonate/infant home-visiting programs. Employing a similar theory-driven approach could also prove useful in the review of other programs that embody a diverse set of characteristics and may apply to diverse populations and settings. A program logic framework provides a rigorous approach to deriving policy-relevant meaning from effectiveness evidence of complex programs. For neonate/infant home-visiting programs, it means that in developing these programs, attention to consistency of objectives, theory of change, target population, and program components is critical. © 2012 Milbank Memorial Fund.
Analysis of individual risk belief structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tonn, B.E.; Travis, C.B.; Arrowood, L.
An interactive computer program developed at Oak Ridge National Laboratory is presented as a methodology to model individualized belief structures. The logic and general strategy of the model are presented for two risk topics: AIDS and toxic waste. Subjects identified desirable and undesirable consequences for each topic and formulated an associative rule linking topic and consequence in either a causal or correlational framework. Likelihood estimates, generated by subjects in several formats (probability, odds statements, etc.), constituted one outcome measure. Additionally, source of belief (personal experience, news media, etc.) and perceived personal and societal impact are reviewed. Briefly, subjects believe that AIDS causes significant emotional problems and, to a lesser degree, physical health problems, whereas toxic waste causes significant environmental problems.
Diekman, Amanda B; Steinberg, Mia; Brown, Elizabeth R; Belanger, Aimee L; Clark, Emily K
2017-05-01
The goal congruity perspective provides a theoretical framework to understand how motivational processes influence and are influenced by social roles. In particular, we invoke this framework to understand communal goal processes as proximal motivators of decisions to engage in science, technology, engineering, and mathematics (STEM). STEM fields are not perceived as affording communal opportunities to work with or help others, and understanding these perceived goal affordances can inform knowledge about differences between (a) STEM and other career pathways and (b) women's and men's choices. We review the patterning of gender disparities in STEM that leads to a focus on communal goal congruity (Part I), provide evidence for the foundational logic of the perspective (Part II), and explore the implications for research and policy (Part III). Understanding and transmitting the opportunities for communal goal pursuit within STEM can reap widespread benefits for broadening and deepening participation.
NASA Astrophysics Data System (ADS)
Amyay, Omar
A method defined in terms of synthesis and verification steps is presented. The specification of the services and protocols of communication within a multilayered architecture of the Open Systems Interconnection (OSI) type is an essential issue for the design of computer networks. The aim is to obtain an operational specification of the service/protocol couple of a given layer. The planned synthesis and verification steps constitute a specification trajectory. The latter is based on the progressive integration of the 'initial data' constraints and verification of the specification originating from each synthesis step, through validity constraints that characterize an admissible solution. Two types of trajectories are proposed according to the style of the initial specification of the service/protocol couple: an operational type, reflecting the service supplier viewpoint, and a knowledge-property-oriented type, reflecting the service viewpoint. Synthesis and verification activities were developed and formalized in terms of labeled transition systems, temporal logic and epistemic logic. The originality of the second specification trajectory and the use of epistemic logic are shown. An 'artificial intelligence' approach enables a conceptual model to be defined for a knowledge base system implementing the proposed method. It is structured in three levels of representation: knowledge relating to the domain, the reasoning characterizing synthesis and verification activities, and the planning of the steps of a specification trajectory.
NASA Astrophysics Data System (ADS)
Lanzalaco, Felix; Pissanetzky, Sergio
2013-12-01
A recent theory of physical information based on the fundamental principles of causality and thermodynamics has proposed that a large number of observable life and intelligence signals can be described in terms of the Causal Mathematical Logic (CML), which is proposed to encode the natural principles of intelligence across any physical domain and substrate. We attempt to expound the current definition of CML, the "action functional", as a theory, in terms of its ability to provide superior explanatory power for the current neuroscientific data we use to measure the mammalian brain's "intelligence" processes at their most general biophysical level. Brain simulation projects define their success partly in terms of the emergence of "non-explicitly programmed" complex biophysical signals such as self-oscillation and spreading cortical waves. Here we propose to extend the causal theory to predict and guide the understanding of these more complex emergent "intelligence signals". To achieve this, we review whether causal logic is consistent with, can explain, and can predict the function of complete perceptual processes associated with intelligence. Primarily, those are defined as the range of Event Related Potentials (ERP), which include the primary subcomponents Event Related Desynchronization (ERD) and Event Related Synchronization (ERS). This approach aims at a universal and predictive logic for neurosimulation and AGI. The result of this investigation is a general "information engine" model derived from translation of the ERD and ERS. The CML algorithm, run in terms of action cost, predicts ERP signal contents and is consistent with the fundamental laws of thermodynamics. A working substrate-independent natural information logic would be a major asset. An information theory consistent with fundamental physics can be an AGI. It can also operate within genetic information space, and provides a roadmap to understand the live biophysical operation of the phenotype.
NASA Astrophysics Data System (ADS)
Kamaruddin, Saadi Bin Ahmad; Marponga Tolos, Siti; Hee, Pah Chin; Ghani, Nor Azura Md; Ramli, Norazan Mohamed; Nasir, Noorhamizah Binti Mohamed; Ksm Kader, Babul Salam Bin; Saiful Huq, Mohammad
2017-03-01
Neural networks have long been known for their ability to handle complex nonlinear systems without an analytical model, and can learn refined nonlinear associations. Theoretically, the best-known algorithm to train the network is the backpropagation (BP) algorithm, which relies upon the minimization of the mean square error (MSE). However, this algorithm is not totally efficient in the presence of outliers, which usually exist in dynamic data. This paper presents the modelling of the quadriceps muscle by utilizing artificial intelligence techniques, namely combined backpropagation neural network nonlinear autoregressive (BPNN-NAR) and backpropagation neural network nonlinear autoregressive moving average (BPNN-NARMA) models, based on functional electrical stimulation (FES). We adapted the particle swarm optimization (PSO) approach to enhance the performance of the backpropagation algorithm. In this research, a series of tests utilizing FES was conducted. The data obtained are used to develop the quadriceps muscle model. 934 training, 200 testing, and 200 validation data sets are used in the development of the muscle model. It was found that both BPNN-NAR and BPNN-NARMA performed well in modelling this type of data. In conclusion, the neural network time series models performed reasonably efficiently for nonlinear modelling such as the active properties of the quadriceps muscle, with one input and one output, namely muscle force.
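A minimal sketch of the NAR idea on synthetic data: predict a force-like signal from its own lagged values with a small feedforward network. The signal, lag order, and network size are assumptions, and the paper's PSO-enhanced backpropagation variants are not reproduced here.

```python
# Sketch of a nonlinear autoregressive (NAR) model: predict a signal
# from its own lagged values with a small feedforward network.
# Synthetic data; not the paper's PSO-tuned variant.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.arange(1500)
force = np.sin(t / 30.0) + 0.1 * rng.standard_normal(t.size)

LAGS = 5
X = np.column_stack([force[i:-(LAGS - i)] for i in range(LAGS)])
y = force[LAGS:]

train, test = slice(0, 1100), slice(1100, None)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                     random_state=0).fit(X[train], y[train])
pred = model.predict(X[test])
print("test RMSE:", float(np.sqrt(np.mean((y[test] - pred) ** 2))))
```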
Gomez, Fernando; Curcio, Carmen Lucia
2013-01-01
The underlying rationale to support interdisciplinary collaboration in geriatrics and gerontology is based on the complexity of elderly care. The most important characteristic of interdisciplinary health care teams for older people in Latin America is their subjective-basis framework. In other regions, teams are organized according to a theoretical knowledge basis with well-justified priorities, functions, and long-term goals; in Latin America, teams are arranged according to subjective interests in solving their problems. Three distinct approaches to interdisciplinary collaboration in gerontology are proposed. The first approach is grounded in the scientific rationalism of European origin. Denominated the "logical-rational approach," its core is to identify the significance of knowledge. The second approach is grounded in pragmatism and is more associated with a North American tradition. The core of this approach, denominated the "logical-instrumental approach," consists in enhancing the skills and competences of each participant. The third approach, denominated the "logical-subjective approach," has a Latin American origin. Its core consists in taking into account the internal and emotional dimensions of the team. These conceptual frameworks, based in geographical contexts, will permit establishing the differences and shared characteristics of interdisciplinary collaboration in geriatrics and gerontology, in order to look for operational answers to solve the "complex problems" of older adults.
Logic Model Checking of Unintended Acceleration Claims in Toyota Vehicles
NASA Technical Reports Server (NTRS)
Gamble, Ed
2012-01-01
Part of the US Department of Transportation investigation of Toyota sudden unintended acceleration (SUA) involved analysis of the throttle control software. The JPL Laboratory for Reliable Software applied several techniques, including static analysis and logic model checking, to the software. A handful of logic models were built. Some weaknesses were identified; however, no cause for SUA was found. The full NASA report includes numerous other analyses.
NASA Technical Reports Server (NTRS)
Gamble, Ed; Holzmann, Gerard
2011-01-01
Part of the US DOT investigation of Toyota SUA involved analysis of the throttle control software. JPL LaRS applied several techniques, including static analysis and logic model checking, to the software. A handful of logic models were built. Some weaknesses were identified; however, no cause for SUA was found. The full NASA report includes numerous other analyses.
Rodríguez, Daniela C; Peterson, Lauren A
2016-05-06
Factors that influence the performance of community health workers (CHWs) delivering health services are not well understood. A recent logic model proposed categories of support from both the health sector and communities that influence CHW performance and program outcomes. This logic model has been used to review a growth monitoring program delivered by CHWs in Honduras, known as Atención Integral a la Niñez en la Comunidad (AIN-C). A retrospective review of AIN-C was conducted through a document desk review and supplemented with in-depth interviews. Documents were systematically coded using the categories from the logic model, and gaps were addressed through interviews. The authors reviewed coded data for each category to analyze program details and outcomes as well as to identify potential issues and gaps in the logic model. Categories from the logic model were inconsistently represented, with more information available for the health sector than for the community. Context and input activities were not well documented. Information on health sector systems-level activities was available for governance but limited for other categories, while not much was found for community systems-level activities. Most available information focused on program-level activities, with substantial data on technical support. Output, outcome, and impact data were drawn from various resources and suggest mixed results of AIN-C on indicators of interest. Assessing CHW performance through a desk review left gaps about the relationship between activities and performance that could not be addressed. There were critical characteristics of program design that made it contextually appropriate; however, it was difficult to identify clear links between AIN-C and malnutrition indicators. Regarding the logic model, several categories were too broad (e.g., technical support, context) and some aspects of AIN-C did not fit neatly in logic model categories (e.g., political commitment, equity, flexibility in implementation). The CHW performance logic model has potential as a tool for program planning and evaluation but would benefit from additional supporting tools and materials to facilitate and operationalize its use.
NASA Astrophysics Data System (ADS)
Bosse, Stefan
2013-05-01
Sensorial materials consisting of high-density, miniaturized, and embedded sensor networks require new robust and reliable data processing and communication approaches. Structural health monitoring is one major field of application for sensorial materials. Each sensor node provides some kind of sensor, electronics, data processing, and communication, with a strong focus on microchip-level implementation to meet the goals of miniaturization and low-power energy environments, a prerequisite for autonomous behaviour and operation. Reliability requires robustness of the entire system in the presence of node, link, data processing, and communication failures. Interaction between nodes is required to manage and distribute information. One common interaction model is the mobile agent. An agent approach provides stronger autonomy than a traditional object or remote-procedure-call based approach. Agents can decide for themselves which actions are performed, and they are capable of flexible behaviour, reacting to the environment and other agents, providing some degree of robustness. Traditionally, multi-agent systems are abstract programming models which are implemented in software and executed on program-controlled computer architectures. This approach does not scale well to the microchip level; it requires fully equipped computers and communication structures, and the hardware architecture does not consider or reflect the requirements for agent processing and interaction. We propose and demonstrate a novel design paradigm for reliable distributed data processing systems and a synthesis methodology and framework for multi-agent systems implementable entirely on the microchip level with resource- and power-constrained digital logic, supporting Agent-On-Chip (AoC) architectures. The agent behaviour and mobility are fully integrated on the microchip using pipelined communicating processes implemented with finite-state machines and register-transfer logic. The agent behaviour, interaction (communication), and mobility features are modelled and specified on a machine-independent abstract programming level using a state-based agent behaviour language (APL). With this APL, a high-level agent compiler is able to synthesize a hardware model (RTL, VHDL), a software model (C, ML), or a simulation model (XML) suitable for simulating a multi-agent system using the SeSAm simulator framework. Agent communication is provided by a simple tuple-space database implemented on the node level, providing fault-tolerant access to global data. A novel synthesis development kit (SynDK), based on a graph-structured database approach, is introduced to support the rapid development of compilers and synthesis tools, used for example for the design and implementation of the APL compiler.
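A minimal software sketch of the agent model described: behaviour as a finite-state machine stepping through sense/report/migrate states and writing to a node-local tuple space. State names and tuples are invented, and the actual APL-to-RTL synthesis flow is not reproduced.

```python
# Sketch of a state-based agent: a finite-state machine that reads a
# sensor and writes to a node-local tuple space. States and tuples
# are invented; this is not the APL language itself.

class MonitorAgent:
    def __init__(self):
        self.state = "SENSE"

    def step(self, tuple_space, reading):
        if self.state == "SENSE":
            if reading > 0.8:                      # local anomaly
                tuple_space.add(("anomaly", reading))
                self.state = "REPORT"
        elif self.state == "REPORT":
            self.state = "MIGRATE"                 # move to neighbour
        elif self.state == "MIGRATE":
            self.state = "SENSE"
        return self.state

space = set()
agent = MonitorAgent()
for r in (0.2, 0.9, 0.1, 0.3):
    print(agent.step(space, r), sorted(space))
```

Because the behaviour is a plain state machine, the same specification can plausibly be lowered to register-transfer logic, which is the point of the RTL synthesis path the abstract describes.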
Logic integer programming models for signaling networks.
Haus, Utz-Uwe; Niermann, Kathrin; Truemper, Klaus; Weismantel, Robert
2009-05-01
We propose a static and a dynamic approach to model biological signaling networks, and show how each can be used to answer relevant biological questions. For this, we use the two different mathematical tools of Propositional Logic and Integer Programming. The power of discrete mathematics for handling qualitative as well as quantitative data has so far not been exploited in molecular biology, which is mostly driven by experimental research relying on first-order or statistical models. The arising logic statements and integer programs are analyzed and can be solved with standard software. For a restricted class of problems, the logic models reduce to a satisfiability problem solvable in polynomial time. Additionally, a more dynamic model enables enumeration of possible time resolutions in poly-logarithmic time. Computational experiments are included.
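As an illustration of the kind of static encoding described above, the sketch below (our construction, with invented species names) writes a conjunction and a negation from a toy signaling network as 0/1 integer constraints and enumerates the feasible states by brute force; a real application would hand such constraints to an integer-programming solver.

```python
# Illustrative sketch (not the authors' code): rules C = A AND B and
# D = NOT C encoded over 0/1 variables. In integer arithmetic the
# conjunction c = a AND b becomes: c <= a, c <= b, c >= a + b - 1.
from itertools import product

def feasible(a, b, c, d):
    conj = c <= a and c <= b and c >= a + b - 1
    neg = d == 1 - c
    return conj and neg

states = [s for s in product((0, 1), repeat=4) if feasible(*s)]
for a, b, c, d in states:
    print(f"A={a} B={b} C={c} D={d}")
```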
Keller, Adrienne; Bauerle, Jennifer A
2009-01-01
Logic models are a ubiquitous tool for specifying the tactics--including implementation and evaluation--of interventions in the public health, health and social behaviors arenas. Similarly, social norms interventions are a common strategy, particularly in college settings, to address hazardous drinking and other dangerous or asocial behaviors. This paper illustrates an extension of logic models to include strategic as well as tactical components, using a specific example developed for social norms interventions. Placing the evaluation of projects within the context of this kind of logic model addresses issues related to the lack of a research design to evaluate effectiveness.
Institutional logic in self-management support: coexistence and diversity.
Bossy, Dagmara; Knutsen, Ingrid Ruud; Rogers, Anne; Foss, Christina
2016-11-01
The prevalence of chronic conditions in Europe has been the subject of health-political reforms that have increasingly targeted collaboration between public, private and voluntary organisations for the purpose of supporting self-management of long-term diseases. The international literature describes collaboration across sectors as challenging, which implies that their respective logics are conflicting or incompatible. In line with the European context, recent Norwegian health policy advocates inter-sectorial partnerships. The aim of this policy is to create networks supporting better self-management for people with chronic conditions. The purpose of our qualitative study was to map different understandings of self-management support in private for-profit, volunteer and public organisations. These organisations are seen as potential self-management support networks for individuals with chronic conditions in Norway. From December 2012 to April 2013, we conducted 50 semi-structured interviews with representatives from relevant health and well-being organisations in different parts of Norway. According to the theoretical framework of institutional logic, representatives' statements are embedded with organisational understandings. In the analysis, we systematically assessed the representatives' different understandings of self-management support. The institutional logic we identified revealed traits of organisational historical backgrounds, and transitions in understanding. We found that the merging of individualism and fellowship in contemporary health policy generates different types of logic in different organisational contexts. The private for-profit organisations were concerned with the logic of a healthy appearance and mindset, whereas the private non-profit organisations emphasised fellowship and moral responsibility. Finally, the public, illness-oriented organisations tended to highlight individual conditions for illness management. Different types of logic may attract different users, and simultaneously, a diversity of logic types may challenge collaboration at the user's expense. Moral implications embed institutional logic implying a change towards individual responsibility for disease. Policy makers ought to consider complexities of logic in order to tailor the different needs of users. © 2015 John Wiley & Sons Ltd.
A psychometric evaluation of the digital logic concept inventory
NASA Astrophysics Data System (ADS)
Herman, Geoffrey L.; Zilles, Craig; Loui, Michael C.
2014-10-01
Concept inventories hold tremendous promise for promoting the rigorous evaluation of teaching methods that might remedy common student misconceptions and promote deep learning. The measurements from concept inventories can be trusted only if the concept inventories are evaluated both by expert feedback and statistical scrutiny (psychometric evaluation). Classical Test Theory and Item Response Theory provide two psychometric frameworks for evaluating the quality of assessment tools. We discuss how these theories can be applied to assessment tools generally and then apply them to the Digital Logic Concept Inventory (DLCI). We demonstrate that the DLCI is sufficiently reliable for research purposes when used in its entirety and as a post-course assessment of students' conceptual understanding of digital logic. The DLCI can also discriminate between students across a wide range of ability levels, providing the most information about weaker students' ability levels.
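For readers unfamiliar with the Classical Test Theory quantities such an evaluation rests on, the following sketch computes two of them, Cronbach's alpha (reliability) and corrected item-total discrimination, on fabricated response data; it is illustrative only and unrelated to the actual DLCI item set.

```python
import numpy as np

scores = np.array([  # rows = students, columns = items (1 = correct)
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 0, 1, 1],
])

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
k = scores.shape[1]
item_var = scores.var(axis=0, ddof=1).sum()
total_var = scores.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_var / total_var)

# Discrimination: correlate each item with the total of the other items.
totals = scores.sum(axis=1)
discrimination = [
    np.corrcoef(scores[:, j], totals - scores[:, j])[0, 1] for j in range(k)
]
print(f"alpha = {alpha:.2f}, discrimination = {np.round(discrimination, 2)}")
```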
[New horizons in medicine. The application of "fuzzy logic" in clinical and experimental medicine].
Guarini, G
1994-06-01
In medicine, the study of physiological and physiopathological problems is generally programmed by elaborating models that respond to the principles of formal logic. This has the advantage of favouring the transformation of the formal model into a mathematical model of reference that responds to the principles of set theory, all in the utopian wish to obtain from each piece of research a clear-cut answer, whether positive or negative, according to the Aristotelian principle of tertium non datur. Taking this into consideration, the author briefly traces the principles of modal logic and, in particular, those of fuzzy logic, proposing that the latter's current definition of "logic with more truth values" be replaced with the perhaps more pertinent "logic of conditioned possibilities". After a brief synthesis of the state of the art in the application of fuzzy logic, the author reports an example of the graphic expression of fuzzy logic by demonstrating how the basic glycemic data (expressed by vector magnitudes) recorded in a sample of healthy individuals constitute, on the whole, an unbroken continuous stream of partial sets. The author calls attention to fuzzy logic as a useful instrument for elaborating in a new way the scenario analyses needed to acquire the information necessary to single out the critical points that characterize the potential development of any biological phenomenon.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cintuglu, Mehmet Hazar; Youssef, Tarek; Mohammed, Osama A.
This article presents the development and application of a real-time testbed for multiagent system interoperability. As utility independent private microgrids are installed constantly, standardized interoperability frameworks are required to define behavioral models of the individual agents for expandability and plug-and-play operation. In this paper, we propose a comprehensive hybrid agent framework combining the foundation for intelligent physical agents (FIPA), IEC 61850, and data distribution service (DDS) standards. The IEC 61850 logical node concept is extended using FIPA based agent communication language (ACL) with application specific attributes and deliberative behavior modeling capability. The DDS middleware is adopted to enable a real-time publisher-subscriber interoperability mechanism between platforms. The proposed multi-agent framework was validated in a laboratory based testbed involving developed intelligent electronic device (IED) prototypes and actual microgrid setups. Experimental results were demonstrated for both decentralized and distributed control approaches. Secondary and tertiary control levels of a microgrid were demonstrated for decentralized hierarchical control case study. A consensus-based economic dispatch case study was demonstrated as a distributed control example. Finally, it was shown that the developed agent platform is industrially applicable for actual smart grid field deployment.
NASA Astrophysics Data System (ADS)
Lee, K. David; Wiesenfeld, Eric; Gelfand, Andrew
2007-04-01
One of the greatest challenges in modern combat is maintaining a high level of timely Situational Awareness (SA). In many situations, computational complexity and accuracy considerations make the development and deployment of real-time, high-level inference tools very difficult. An innovative hybrid framework that combines Bayesian inference, in the form of Bayesian Networks, and Possibility Theory, in the form of Fuzzy Logic systems, has recently been introduced to provide a rigorous framework for high-level inference. Previous research developed the theoretical basis and benefits of the hybrid approach; what has been lacking is a concrete experimental comparison of the hybrid framework with traditional fusion methods to demonstrate and quantify this benefit. The goal of this research, therefore, is to provide a statistical comparison of the accuracy and performance of hybrid network theory with pure Bayesian and Fuzzy systems and with an inexact Bayesian system approximated using Particle Filtering. To accomplish this task, domain-specific models will be developed under these different theoretical approaches and then evaluated, via Monte Carlo simulation, against situational ground truth to measure accuracy and fidelity. A rigorous statistical analysis of the performance results will then be performed to quantify the benefit of hybrid inference over other fusion tools.
Cintuglu, Mehmet Hazar; Youssef, Tarek; Mohammed, Osama A.
2016-08-10
This article presents the development and application of a real-time testbed for multiagent system interoperability. As utility independent private microgrids are installed constantly, standardized interoperability frameworks are required to define behavioral models of the individual agents for expandability and plug-and-play operation. In this paper, we propose a comprehensive hybrid agent framework combining the foundation for intelligent physical agents (FIPA), IEC 61850, and data distribution service (DDS) standards. The IEC 61850 logical node concept is extended using FIPA based agent communication language (ACL) with application specific attributes and deliberative behavior modeling capability. The DDS middleware is adopted to enable a real-time publisher-subscriber interoperability mechanism between platforms. The proposed multi-agent framework was validated in a laboratory based testbed involving developed intelligent electronic device (IED) prototypes and actual microgrid setups. Experimental results were demonstrated for both decentralized and distributed control approaches. Secondary and tertiary control levels of a microgrid were demonstrated for decentralized hierarchical control case study. A consensus-based economic dispatch case study was demonstrated as a distributed control example. Finally, it was shown that the developed agent platform is industrially applicable for actual smart grid field deployment.
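A minimal caricature of the DDS-style publish-subscribe interaction the testbed standardizes is sketched below in generic Python; it does not use an actual DDS or IEC 61850 API, and all topic names are invented.

```python
from collections import defaultdict

class Bus:
    """Toy topic bus: subscribers register callbacks, publishers push samples."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, sample):
        for callback in self.subscribers[topic]:
            callback(sample)

bus = Bus()
# A secondary-control agent reacts to measurements another agent publishes.
bus.subscribe("feeder1/voltage", lambda v: print(f"agent saw {v} pu"))
bus.publish("feeder1/voltage", 0.98)
```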
Adapting current Arden Syntax knowledge for an object oriented event monitor.
Choi, Jeeyae; Lussier, Yves A; Mendoça, Eneida A
2003-01-01
Arden Syntax for Medical Logic Modules (MLMs) was designed for writing and sharing task-specific health knowledge in 1989. Several researchers have developed frameworks to improve the sharability and adaptability of Arden Syntax MLMs, an issue known as the "curly braces" problem. Karadimas et al proposed an Arden Syntax MLM-based decision support system that uses an object oriented model and the dynamic linking features of the Java platform. Peleg et al proposed creating a Guideline Expression Language (GEL) based on Arden Syntax's logic grammar. The New York Presbyterian Hospital (NYPH) has a collection of about 200 MLMs. In the process of adapting the current MLMs for an object-oriented event monitor, we identified two problems that may contribute to the "curly braces" one: (1) the query expressions within the curly braces of Arden Syntax used in our institution are cryptic to physicians, institution-dependent, and written ineffectively (unpublished results), and (2) the events are coded individually within curly braces, sometimes resulting in a large number of events - up to 200.
A framework for qualitative reasoning about solid objects
NASA Technical Reports Server (NTRS)
Davis, E.
1987-01-01
Predicting the behavior of a qualitatively described system of solid objects requires a combination of geometrical, temporal, and physical reasoning. Methods based upon formulating and solving differential equations are not adequate for robust prediction, since the behavior of a system over extended time may be much simpler than its behavior over local time. A first-order logic, in which one can state simple physical problems and derive their solution deductively, without recourse to solving the differential equations, is discussed. This logic is substantially more expressive and powerful than any previous AI representational system in this domain.
Automata-Based Verification of Temporal Properties on Running Programs
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus; Lan, Sonie (Technical Monitor)
2001-01-01
This paper presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Buchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
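The finite-trace semantics that such observer automata implement can also be stated directly as a recursive check; the sketch below is our simplified illustration of that semantics (not the tool's translation algorithm), with formulas encoded as nested tuples.

```python
# Hedged sketch: evaluate an LTL formula over a finite trace, where a trace
# is a list of sets of atomic propositions and phi is a nested tuple.

def holds(phi, trace, i=0):
    if i >= len(trace):            # past the end of a finite trace
        return False
    op = phi[0]
    if op == "ap":                 # atomic proposition
        return phi[1] in trace[i]
    if op == "not":
        return not holds(phi[1], trace, i)
    if op == "and":
        return holds(phi[1], trace, i) and holds(phi[2], trace, i)
    if op == "next":
        return holds(phi[1], trace, i + 1)
    if op == "until":              # phi[1] U phi[2]
        return any(
            holds(phi[2], trace, j)
            and all(holds(phi[1], trace, k) for k in range(i, j))
            for j in range(i, len(trace))
        )
    raise ValueError(op)

# "request holds until grant is eventually observed"
trace = [{"req"}, {"req"}, {"grant"}]
print(holds(("until", ("ap", "req"), ("ap", "grant")), trace))  # True
```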
Ball, Lauren; Ball, Dianne; Leveritt, Michael; Ray, Sumantra; Collins, Clare; Patterson, Elizabeth; Ambrosini, Gina; Lee, Patricia; Chaboyer, Wendy
2017-04-01
The methodological designs underpinning many primary health-care interventions are not rigorous. Logic models can be used to support intervention planning, implementation and evaluation in the primary health-care setting. Logic models provide a systematic and visual way of facilitating shared understanding of the rationale for the intervention, the planned activities, expected outcomes, evaluation strategy and required resources. This article provides guidance for primary health-care practitioners and researchers on the use of logic models for enhancing methodological rigour of interventions. The article outlines the recommended steps in developing a logic model using the 'NutriCare' intervention as an example. The 'NutriCare' intervention is based in the Australian primary health-care setting and promotes nutrition care by general practitioners and practice nurses. The recommended approach involves canvassing the views of all stakeholders who have valuable and informed opinions about the planned project. The following four targeted, iterative steps are recommended: (1) confirm situation, intervention aim and target population; (2) document expected outcomes and outputs of the intervention; (3) identify and describe assumptions, external factors and inputs; and (4) confirm intervention components. Over a period of 2 months, three primary health-care researchers and one health-services consultant led the collaborative development of the 'NutriCare' logic model. Primary health-care practitioners and researchers are encouraged to develop a logic model when planning interventions to maximise the methodological rigour of studies, confirm that data required to answer the question are captured and ensure that the intervention meets the project goals.
Extending XNAT Platform with an Incremental Semantic Framework
Timón, Santiago; Rincón, Mariano; Martínez-Tomás, Rafael
2017-01-01
Informatics increases the yield from neuroscience due to improved data. Data sharing and accessibility enable joint efforts between different research groups, as well as replication studies, which are pivotal for progress in the field. Research data archiving solutions are evolving rapidly to address these necessities; however, distributed data integration is still difficult because of the need for explicit agreements on disparate data models. To address these problems, ontologies are widely used in biomedical research to obtain common vocabularies and logical descriptions, but their application may suffer from scalability issues, domain bias, and loss of low-level data access. With the aim of improving the application of semantic models in biobanking systems, an incremental semantic framework that takes advantage of the latest advances in biomedical ontologies and the XNAT platform is designed and implemented. We follow a layered architecture that allows the alignment of multi-domain biomedical ontologies to manage data at different levels of abstraction. To illustrate this approach, the development is integrated in the JPND (EU Joint Program for Neurodegenerative Disease) APGeM project, focused on finding early biomarkers for Alzheimer's and other dementia-related diseases. PMID:28912709
Extending XNAT Platform with an Incremental Semantic Framework.
Timón, Santiago; Rincón, Mariano; Martínez-Tomás, Rafael
2017-01-01
Informatics increases the yield from neuroscience due to improved data. Data sharing and accessibility enable joint efforts between different research groups, as well as replication studies, which are pivotal for progress in the field. Research data archiving solutions are evolving rapidly to address these necessities; however, distributed data integration is still difficult because of the need for explicit agreements on disparate data models. To address these problems, ontologies are widely used in biomedical research to obtain common vocabularies and logical descriptions, but their application may suffer from scalability issues, domain bias, and loss of low-level data access. With the aim of improving the application of semantic models in biobanking systems, an incremental semantic framework that takes advantage of the latest advances in biomedical ontologies and the XNAT platform is designed and implemented. We follow a layered architecture that allows the alignment of multi-domain biomedical ontologies to manage data at different levels of abstraction. To illustrate this approach, the development is integrated in the JPND (EU Joint Program for Neurodegenerative Disease) APGeM project, focused on finding early biomarkers for Alzheimer's and other dementia-related diseases.
Effects of aging on audio-visual speech integration.
Huyse, Aurélie; Leybaert, Jacqueline; Berthommier, Frédéric
2014-10-01
This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group.
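The integration rule of the fuzzy-logical model of perception referred to above combines the unimodal supports multiplicatively and renormalizes; a short sketch with invented support values follows (our illustration of the standard rule, not the study's analysis code).

```python
# Sketch of the fuzzy-logical model of perception's integration rule.
def flmp(a, v):
    """P(response) given auditory support a and visual support v in [0, 1]."""
    return (a * v) / (a * v + (1 - a) * (1 - v))

# Strong auditory evidence combined with weak (degraded) visual evidence:
print(round(flmp(a=0.9, v=0.4), 3))  # audio dominates: ~0.857
```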
Logic-centered architecture for ubiquitous health monitoring.
Lewandowski, Jacek; Arochena, Hisbel E; Naguib, Raouf N G; Chao, Kuo-Ming; Garcia-Perez, Alexeis
2014-09-01
One of the key points to maintain and boost research and development in the area of smart wearable systems (SWS) is the development of integrated architectures for intelligent services, as well as wearable systems and devices for health and wellness management. This paper presents such a generic architecture for multiparametric, intelligent and ubiquitous wireless sensing platforms. It is a transparent, smartphone-based sensing framework with customizable wireless interfaces and plug'n'play capability to easily interconnect third party sensor devices. It caters to wireless body, personal, and near-me area networks. A pivotal part of the platform is the integrated inference engine/runtime environment that allows the mobile device to serve as a user-adaptable personal health assistant. The novelty of this system lies in its rapid visual development and remote deployment model. The complementary visual Inference Engine Editor that comes with the package enables artificial intelligence specialists, alongside medical experts, to build data processing models by assembling different components and instantly deploying them (remotely) on patient mobile devices. In this paper, the new logic-centered software architecture for ubiquitous health monitoring applications is described, followed by a discussion of how it helps to shift focus from software and hardware development to medical and health process-centered design of new SWS applications.
Runtime verification of embedded real-time systems.
Reinbacher, Thomas; Függer, Matthias; Brauer, Jörg
We present a runtime verification framework that allows on-line monitoring of past-time Metric Temporal Logic (ptMTL) specifications in a discrete time setting. We design observer algorithms for the time-bounded modalities of ptMTL, which take advantage of the highly parallel nature of hardware designs. The algorithms can be translated into efficient hardware blocks, which are designed for reconfigurability and thus facilitate applications of the framework in both the prototyping and the post-deployment phases of embedded real-time systems. We provide formal correctness proofs for all presented observer algorithms and analyze their time and space complexity. For example, for the most general operator considered, the time-bounded Since operator, we obtain a time complexity that is doubly logarithmic both in the point in time the operator is executed and in the operator's time bounds. This result is promising with respect to a self-contained, non-interfering monitoring approach that evaluates real-time specifications in parallel to the system-under-test. We implement our framework on a Field Programmable Gate Array platform and use extensive simulation and logic synthesis runs to assess the benefits of the approach in terms of resource usage and operating frequency.
Forbes, David; Lewis, Virginia; Varker, Tracey; Phelps, Andrea; O'Donnell, Meaghan; Wade, Darryl J; Ruzek, Josef I; Watson, Patricia; Bryant, Richard A; Creamer, Mark
2011-01-01
International clinical practice guidelines for the management of psychological trauma recommend Psychological First Aid (PFA) as an early intervention for survivors of potentially traumatic events. These recommendations are consensus-based, and there is little published evidence assessing the effectiveness of PFA. This is not surprising given the nature of the intervention and the complicating factors involved in any evaluation of PFA. There is, nevertheless, an urgent need for stronger evidence evaluating its effectiveness. The current paper posits that the implementation and evaluation of PFA within high risk organizational settings is an ideal place to start. The paper provides a framework for a phasic approach to implementing PFA within such settings and presents a model for evaluating its effectiveness using a logic- or theory-based approach which considers both pre-event and post-event factors. Phases 1 and 2 of the PFA model are pre-event actions, and phases 3 and 4 are post-event actions. It is hoped that by using the Phased PFA model and evaluation method proposed in this paper, future researchers will begin to undertake the important task of building the evidence about the most effective approach to providing PFA in high risk organizational and community disaster settings.
2011-01-01
Background The National Institute for Health Research (NIHR) was established in 2006 with the aim of creating an applied health research system embedded within the English National Health Service (NHS). NIHR sought to implement an approach for monitoring its performance that effectively linked early indicators of performance with longer-term research impacts. We attempted to develop and apply a conceptual framework for defining appropriate key performance indicators for NIHR. Method Following a review of relevant literature, a conceptual framework for defining performance indicators for NIHR was developed, based on a hybridisation of the logic model and balanced scorecard approaches. This framework was validated through interviews with key NIHR stakeholders and a pilot in one division of NIHR, before being refined and applied more widely. Indicators were then selected and aggregated to create a basket of indicators aligned to NIHR's strategic goals, which could be reported to NIHR's leadership team on a quarterly basis via an oversight dashboard. Results Senior health research system managers and practitioners endorsed the conceptual framework developed and reported satisfaction with the breadth and balance of indicators selected for reporting. Conclusions The use of the hybrid conceptual framework provides a pragmatic approach to defining performance indicators that are aligned to the strategic aims of a health research system. The particular strength of this framework is its capacity to provide an empirical link, over time, between upstream activities of a health research system and its long-term strategic objectives. PMID:21435265
Characterizing the EPODE logic model: unravelling the past and informing the future.
Van Koperen, T M; Jebb, S A; Summerbell, C D; Visscher, T L S; Romon, M; Borys, J M; Seidell, J C
2013-02-01
EPODE ('Ensemble Prévenons l'Obésité De Enfants' or 'Together let's Prevent Childhood Obesity') is a large-scale, centrally coordinated, capacity-building approach for communities to implement effective and sustainable strategies to prevent childhood obesity. Since 2004, EPODE has been implemented in over 500 communities in six countries. Although based on emergent practice and scientific knowledge, EPODE, as many community programs, lacks a logic model depicting key elements of the approach. The objective of this study is to gain insight in the dynamics and key elements of EPODE and to represent these in a schematic logic model. EPODE's process manuals and documents were collected and interviews were held with professionals involved in the planning and delivery of EPODE. Retrieved data were coded, themed and placed in a four-level logic model. With input from international experts, this model was scaled down to a concise logic model covering four critical components: political commitment, public and private partnerships, social marketing and evaluation. The EPODE logic model presented here can be used as a reference for future and follow-up research; to support future implementation of EPODE in communities; as a tool in the engagement of stakeholders; and to guide the construction of a locally tailored evaluation plan. © 2012 The Authors. obesity reviews © 2012 International Association for the Study of Obesity.
Ratcliffe, Michelle M
2012-08-01
Farm to School programs hold promise to address childhood obesity. These programs may increase students’ access to healthier foods, increase students’ knowledge of and desire to eat these foods, and increase their consumption of them. Implementing Farm to School programs requires the involvement of multiple people, including nutrition services, educators, and food producers. Because these groups have not traditionally worked together and each has different goals, it is important to demonstrate how Farm to School programs that are designed to decrease childhood obesity may also address others’ objectives, such as academic achievement and economic development. A logic model is an effective tool to help articulate a shared vision for how Farm to School programs may work to accomplish multiple goals. Furthermore, there is evidence that programs based on theory are more likely to be effective at changing individuals’ behaviors. Logic models based on theory may help to explain how a program works, aid in efficient and sustained implementation, and support the development of a coherent evaluation plan. This article presents a sample theory-based logic model for Farm to School programs. The presented logic model is informed by the polytheoretical model for food and garden-based education in school settings (PMFGBE). The logic model has been applied to multiple settings, including Farm to School program development and evaluation in urban and rural school districts. This article also includes a brief discussion on the development of the PMFGBE, a detailed explanation of how Farm to School programs may enhance the curricular, physical, and social learning environments of schools, and suggestions for the applicability of the logic model for practitioners, researchers, and policy makers.
Learning a Markov Logic network for supervised gene regulatory network inference
2013-01-01
Background Gene regulatory network inference remains a challenging problem in systems biology despite the numerous approaches that have been proposed. When substantial knowledge on a gene regulatory network is already available, supervised network inference is appropriate. Such a method builds a binary classifier able to assign a class (Regulation/No regulation) to an ordered pair of genes. Once learnt, the pairwise classifier can be used to predict new regulations. In this work, we explore the framework of Markov Logic Networks (MLN), which combine features of probabilistic graphical models with the expressivity of first-order logic rules. Results We propose to learn a Markov Logic network, i.e. a set of weighted rules that conclude on the predicate “regulates”, starting from a known gene regulatory network involved in the switch proliferation/differentiation of keratinocyte cells, a set of experimental transcriptomic data and various descriptions of genes all encoded into first-order logic. As training data are unbalanced, we use asymmetric bagging to learn a set of MLNs. The prediction of a new regulation can then be obtained by averaging predictions of individual MLNs. As a side contribution, we propose three in silico tests to assess the performance of any pairwise classifier in various network inference tasks on real datasets. A first test consists of measuring the average performance on a balanced edge prediction problem; a second one deals with the ability of the classifier, once enhanced by asymmetric bagging, to update a given network. Finally, our main result concerns a third test that measures the ability of the method to predict regulations with a new set of genes. As expected, MLN, when provided with only numerical discretized gene expression data, does not perform as well as a pairwise SVM in terms of AUPR. However, when a more complete description of gene properties is provided by heterogeneous sources, MLN achieves the same performance as a black-box model such as a pairwise SVM while providing relevant insights on the predictions. Conclusions The numerical studies show that MLN achieves very good predictive performance while opening the door to some interpretability of the decisions. Besides the ability to suggest new regulations, such an approach allows one to cross-validate experimental data with existing knowledge. PMID:24028533
Learning a Markov Logic network for supervised gene regulatory network inference.
Brouard, Céline; Vrain, Christel; Dubois, Julie; Castel, David; Debily, Marie-Anne; d'Alché-Buc, Florence
2013-09-12
Gene regulatory network inference remains a challenging problem in systems biology despite the numerous approaches that have been proposed. When substantial knowledge on a gene regulatory network is already available, supervised network inference is appropriate. Such a method builds a binary classifier able to assign a class (Regulation/No regulation) to an ordered pair of genes. Once learnt, the pairwise classifier can be used to predict new regulations. In this work, we explore the framework of Markov Logic Networks (MLN), which combine features of probabilistic graphical models with the expressivity of first-order logic rules. We propose to learn a Markov Logic network, i.e. a set of weighted rules that conclude on the predicate "regulates", starting from a known gene regulatory network involved in the switch proliferation/differentiation of keratinocyte cells, a set of experimental transcriptomic data and various descriptions of genes all encoded into first-order logic. As training data are unbalanced, we use asymmetric bagging to learn a set of MLNs. The prediction of a new regulation can then be obtained by averaging predictions of individual MLNs. As a side contribution, we propose three in silico tests to assess the performance of any pairwise classifier in various network inference tasks on real datasets. A first test consists of measuring the average performance on a balanced edge prediction problem; a second one deals with the ability of the classifier, once enhanced by asymmetric bagging, to update a given network. Finally, our main result concerns a third test that measures the ability of the method to predict regulations with a new set of genes. As expected, MLN, when provided with only numerical discretized gene expression data, does not perform as well as a pairwise SVM in terms of AUPR. However, when a more complete description of gene properties is provided by heterogeneous sources, MLN achieves the same performance as a black-box model such as a pairwise SVM while providing relevant insights on the predictions. The numerical studies show that MLN achieves very good predictive performance while opening the door to some interpretability of the decisions. Besides the ability to suggest new regulations, such an approach allows one to cross-validate experimental data with existing knowledge.
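To illustrate the prediction step described in both records above, the following sketch (all rule weights and gene features invented) scores an ordered gene pair with two mock weighted-rule models, as if each had been learnt on a different asymmetric bag, and averages their outputs.

```python
import math

def mln_score(weighted_rules, facts):
    """Logistic score from the weights of first-order-style rules that fire."""
    total = sum(w for w, rule in weighted_rules if rule(facts))
    return 1 / (1 + math.exp(-total))

# Two mock models, as if learnt on different asymmetric bags (all positives
# kept, negatives resampled); the final prediction averages their scores.
model_a = [(1.2, lambda f: f["is_tf"] and f["coexpressed"]),
           (-0.7, lambda f: f["same_complex"])]
model_b = [(0.9, lambda f: f["coexpressed"])]

facts = {"is_tf": True, "coexpressed": True, "same_complex": False}
prediction = sum(mln_score(m, facts) for m in (model_a, model_b)) / 2
print(round(prediction, 3))  # averaged probability that "regulates" holds
```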
Boegl, Karl; Adlassnig, Klaus-Peter; Hayashi, Yoichi; Rothenfluh, Thomas E; Leitich, Harald
2004-01-01
This paper describes the fuzzy knowledge representation framework of the medical computer consultation system MedFrame/CADIAG-IV as well as the specific knowledge acquisition techniques that have been developed to support the definition of knowledge concepts and inference rules. As in its predecessor system CADIAG-II, fuzzy medical knowledge bases are used to model the uncertainty and the vagueness of medical concepts, and fuzzy logic reasoning mechanisms provide the basic inference processes. The elicitation and acquisition of medical knowledge from domain experts has often been described as the most difficult and time-consuming task in knowledge-based system development in medicine. It comes as no surprise that this is even more so when unfamiliar representations like fuzzy membership functions are to be acquired. From previous projects we have learned that a user-centered approach is mandatory in complex and ill-defined knowledge domains such as internal medicine. This paper describes the knowledge acquisition framework that has been developed to make three main tasks easier and more accessible: (a) defining medical concepts; (b) providing appropriate interpretations for patient data; and (c) constructing inferential knowledge in a fuzzy knowledge representation framework. Special emphasis is laid on the motivations for some system design and data modeling decisions. The theoretical framework has been implemented in a software package, the Knowledge Base Builder Toolkit. The conception and the design of this system reflect the need for a user-centered, intuitive, and easy-to-handle tool. First results gained from pilot studies have shown that our approach can be successfully implemented in the context of a complex fuzzy theoretical framework. As a result, this critical aspect of knowledge-based system development can be accomplished more easily.
Nonlinear Aerodynamic Modeling From Flight Data Using Advanced Piloted Maneuvers and Fuzzy Logic
NASA Technical Reports Server (NTRS)
Brandon, Jay M.; Morelli, Eugene A.
2012-01-01
Results of the Aeronautics Research Mission Directorate Seedling Project Phase I research project entitled "Nonlinear Aerodynamics Modeling using Fuzzy Logic" are presented. Efficient and rapid flight test capabilities were developed for estimating highly nonlinear models of airplane aerodynamics over a large flight envelope. Results showed that the flight maneuvers developed, used in conjunction with the fuzzy-logic system identification algorithms, produced very good model fits of the data, with no model structure inputs required, for flight conditions ranging from cruise to departure and spin conditions.
A novel logic-based approach for quantitative toxicology prediction.
Amini, Ata; Muggleton, Stephen H; Lodhi, Huma; Sternberg, Michael J E
2007-01-01
There is a pressing need for accurate in silico methods to predict the toxicity of molecules that are being introduced into the environment or are being developed into new pharmaceuticals. Predictive toxicology is in the realm of structure activity relationships (SAR), and many approaches have been used to derive such SAR. Previous work has shown that inductive logic programming (ILP) is a powerful approach that circumvents several major difficulties, such as molecular superposition, faced by some other SAR methods. The ILP approach reasons with chemical substructures within a relational framework and yields chemically understandable rules. Here, we report a general new approach, support vector inductive logic programming (SVILP), which extends the essentially qualitative ILP-based SAR to quantitative modeling. First, ILP is used to learn rules, the predictions of which are then used within a novel kernel to derive a support-vector generalization model. For a highly heterogeneous dataset of 576 molecules with known fathead minnow fish toxicity, the cross-validated correlation coefficients (R2CV) from a chemical descriptor method (CHEM) and SVILP are 0.52 and 0.66, respectively. The ILP, CHEM, and SVILP approaches correctly predict 55, 58, and 73%, respectively, of toxic molecules. In a set of 165 unseen molecules, the R2 values from the commercial software TOPKAT and SVILP are 0.26 and 0.57, respectively. In all calculations, SVILP showed significant improvements in comparison with the other methods. The SVILP approach has a major advantage in that it uses ILP automatically and consistently to derive rules, mostly novel, describing fragments that are toxicity alerts. The SVILP is a general machine-learning approach and has the potential of tackling many problems relevant to chemoinformatics including in silico drug design.
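The core of the SVILP construction, ILP-learnt rules reused as boolean features inside a kernel, can be sketched in a few lines; the rules and molecules below are invented stand-ins, and a real system would plug the kernel into an SVM trainer.

```python
import numpy as np

# Mock "ILP-learnt" rules: each is a predicate over a molecule description.
rules = [
    lambda m: "halogen" in m["groups"],   # e.g. "has a halogen substituent"
    lambda m: m["logp"] > 3.0,            # e.g. "is lipophilic"
    lambda m: "nitro" in m["groups"],
]

def rule_vector(molecule):
    """Boolean feature vector: which rules fire for this molecule."""
    return np.array([1.0 if r(molecule) else 0.0 for r in rules])

def rule_kernel(m1, m2):
    """Kernel counting the rules two molecules satisfy in common."""
    return float(rule_vector(m1) @ rule_vector(m2))

mol_a = {"groups": {"halogen", "nitro"}, "logp": 3.4}
mol_b = {"groups": {"halogen"}, "logp": 1.2}
print(rule_kernel(mol_a, mol_b))  # 1.0: one shared rule fires
```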
1986-03-21
…itative frameworks (e.g., Doyle, Toulmin, P. Cohen), and efforts to synthesize logic and probability (Nilsson… logic allows for provisional acceptance of uncertain premises, which may later be retracted when they lead to contradictory conclusions. Toulmin (1958… [AI researchers] have accepted without hesitation as impeccable." The basic framework of an argument, according to Toulmin, is as follows (Toulmin…
An efficient current-based logic cell model for crosstalk delay analysis
NASA Astrophysics Data System (ADS)
Nazarian, Shahin; Das, Debasish
2013-04-01
Logic cell modelling is an important component in the analysis and design of CMOS integrated circuits, mostly due to the nonlinear behaviour of CMOS cells with respect to the voltage signals at their input and output pins. A current-based model for CMOS logic cells is presented, which can be used for effective crosstalk noise and delta delay analysis in CMOS VLSI circuits. Existing current source models are expensive and need a new set of Spice-based characterisation, which is not compatible with typical EDA tools. In this article we present Imodel, a simple nonlinear logic cell model that can be derived from typical cell libraries such as NLDM, with accuracy much higher than that of NLDM-based cell delay models. In fact, our experiments show an average error of 3% compared to Spice. This level of accuracy comes with a maximum runtime penalty of 19% compared to NLDM-based cell delay models on medium-sized industrial designs.
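For context, the NLDM tables such a model is derived from tabulate cell delay over input slew and output load, with intermediate points interpolated; the sketch below shows that standard bilinear lookup on invented table values (edge clamping omitted for brevity).

```python
import numpy as np

slews = np.array([0.01, 0.05, 0.20])       # ns, table axis 1
loads = np.array([0.001, 0.010, 0.050])    # pF, table axis 2
delay = np.array([                          # ns, delay[slew_i, load_j]
    [0.020, 0.045, 0.120],
    [0.030, 0.060, 0.150],
    [0.055, 0.090, 0.210],
])

def nldm_delay(slew, load):
    """Bilinear interpolation between the four surrounding table points."""
    i = np.searchsorted(slews, slew) - 1
    j = np.searchsorted(loads, load) - 1
    ts = (slew - slews[i]) / (slews[i + 1] - slews[i])
    tl = (load - loads[j]) / (loads[j + 1] - loads[j])
    return ((1 - ts) * (1 - tl) * delay[i, j] + ts * (1 - tl) * delay[i + 1, j]
            + (1 - ts) * tl * delay[i, j + 1] + ts * tl * delay[i + 1, j + 1])

print(f"{nldm_delay(slew=0.03, load=0.005):.4f} ns")
```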
Personal Epistemology of Urban Elementary School Teachers
ERIC Educational Resources Information Center
Pearrow, Melissa; Sanchez, William
2008-01-01
Personal epistemology, originating from social construction theory, provides a framework for researchers to understand how individuals view their world. The Attitudes About Reality (AAR) scale is one survey method that qualitatively assesses personal epistemology along the logical positivist and social constructionist continuum; however, the…
Using a logical information model-driven design process in healthcare.
Cheong, Yu Chye; Bird, Linda; Tun, Nwe Ni; Brooks, Colleen
2011-01-01
A hybrid standards-based approach has been adopted in Singapore to develop a Logical Information Model (LIM) for healthcare information exchange. The Singapore LIM uses a combination of international standards, including ISO13606-1 (a reference model for electronic health record communication), ISO21090 (healthcare datatypes), SNOMED CT (healthcare terminology) and HL7 v2 (healthcare messaging). This logic-based design approach also incorporates mechanisms for achieving bi-directional semantic interoperability.
A Fuzzy Logic Optimal Control Law Solution to the CMMCA Tracking Problem
1993-03-01
or from a transfer function. Many times, however, the resulting algorithms are so complex as to be completely or essentially useless. Applications… implemented in a nearly real-time computer simulation. Located within the LQ framework are all the performance data for both the CMMCA and the CX… required nor desired. A more general and less exacting framework was used. In order to concentrate on the theory and problem solution, it was
A Rewriting Logic Approach to Type Inference
NASA Astrophysics Data System (ADS)
Ellison, Chucky; Şerbănuţă, Traian Florin; Roşu, Grigore
Meseguer and Roşu proposed rewriting logic semantics (RLS) as a programming language definitional framework that unifies operational and algebraic denotational semantics. RLS has already been used to define a series of didactic and real languages, but its benefits in connection with defining and reasoning about type systems have not been fully investigated. This paper shows how the same RLS style employed for giving formal definitions of languages can be used to define type systems. The same term-rewriting mechanism used to execute RLS language definitions can now be used to execute type systems, giving type checkers or type inferencers. The proposed approach is exemplified by defining the Hindley-Milner polymorphic type inferencer W as a rewrite logic theory and using this definition to obtain a type inferencer by executing it in a rewriting logic engine. The inferencer obtained this way compares favorably with other definitions or implementations of W. The performance of the executable definition is within an order of magnitude of that of highly optimized implementations of type inferencers, such as that of OCaml.
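As a taste of the kind of computation such an executable definition performs, here is a toy fragment of Hindley-Milner style inference, first-order unification applied to a function application, in plain Python; this is our illustration, not the paper's rewrite theory, and the occurs check is omitted.

```python
# Type terms: lowercase strings are type variables, capitalized strings are
# base types, tuples are constructors, e.g. ("->", "a", "a") is a -> a.

def walk(t, subst):
    """Chase a variable through the substitution to its current binding."""
    while isinstance(t, str) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str) and t1.islower():      # bind a type variable
        return {**subst, t1: t2}
    if isinstance(t2, str) and t2.islower():
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and len(t1) == len(t2):
        for a, b in zip(t1, t2):
            subst = unify(a, b, subst)
        return subst
    raise TypeError(f"cannot unify {t1} with {t2}")

# Infer the result type of applying f : a -> a to an Int:
subst = unify(("->", "a", "a"), ("->", "Int", "r"), {})
print(walk("r", subst))  # Int
```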
Minimally inconsistent reasoning in Semantic Web.
Zhang, Xiaowang
2017-01-01
Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. For this, different paraconsistent approaches, due to their capacity to draw nontrivial conclusions by tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. To this end, this paper presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, where the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, which shows that the new semantics inherits advantages of both non-monotonic reasoning and paraconsistent reasoning. A complete and sound tableau-based algorithm, called multi-valued tableaux, is developed to capture the minimally inconsistent reasoning. In fact, the tableaux algorithm is designed as a framework for multi-valued DL that allows for different underlying paraconsistent semantics, differing only in the clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as (classical) description logic reasoning.
Minimally inconsistent reasoning in Semantic Web
Zhang, Xiaowang
2017-01-01
Reasoning with inconsistencies is an important issue for the Semantic Web, as imperfect information is unavoidable in real applications. For this, different paraconsistent approaches, due to their capacity to draw nontrivial conclusions by tolerating inconsistencies, have been proposed to reason with inconsistent description logic knowledge bases. However, existing paraconsistent approaches are often criticized for being too skeptical. To this end, this paper presents a non-monotonic paraconsistent version of description logic reasoning, called minimally inconsistent reasoning, where the inconsistencies tolerated in the reasoning are minimized so that more reasonable conclusions can be inferred. Some desirable properties are studied, which shows that the new semantics inherits advantages of both non-monotonic reasoning and paraconsistent reasoning. A complete and sound tableau-based algorithm, called multi-valued tableaux, is developed to capture the minimally inconsistent reasoning. In fact, the tableaux algorithm is designed as a framework for multi-valued DL that allows for different underlying paraconsistent semantics, differing only in the clash conditions. Finally, the complexity of minimally inconsistent description logic reasoning is shown to be on the same level as (classical) description logic reasoning. PMID:28750030
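One common substrate for such paraconsistent semantics is four-valued (Belnap-style) evaluation, in which each assertion carries independent evidence for and against; the encoding below is our illustrative sketch, not the paper's tableaux algorithm.

```python
# Truth values as (evidence_for, evidence_against) pairs.
TRUE, FALSE, BOTH, NEITHER = (1, 0), (0, 1), (1, 1), (0, 0)

def conj(x, y):
    # A conjunction is supported when both are; denied when either is.
    return (min(x[0], y[0]), max(x[1], y[1]))

def neg(x):
    # Negation swaps the evidence components.
    return (x[1], x[0])

kb = {"Bird(tweety)": BOTH, "Flies(tweety)": TRUE}
print(conj(kb["Bird(tweety)"], kb["Flies(tweety)"]))  # (1, 1): still BOTH
print(neg(kb["Bird(tweety)"]))                        # BOTH is its own negation
```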
Quantum Weak Values and Logic: An Uneasy Couple
NASA Astrophysics Data System (ADS)
Svensson, Bengt E. Y.
2017-03-01
Quantum mechanical weak values of projection operators have been used to answer which-way questions, e.g. to trace which arms in a multiple Mach-Zehnder setup a particle may have traversed from a given initial to a prescribed final state. I show that this procedure might lead to logical inconsistencies in the sense that different methods used to answer composite questions, like "Has the particle traversed the way X or the way Y?", may result in different answers depending on which methods are used to find the answer. I illustrate the problem by considering some examples: the "quantum pigeonhole" framework of Aharonov et al., the three-box problem, and Hardy's paradox. To prepare the ground for my main conclusion on the incompatibility in certain cases of weak values and logic, I study the corresponding situation for strong/projective measurements. In this case, no logical inconsistencies occur provided one is always careful in specifying exactly to which ensemble or sample space one refers. My results cast doubts on the utility of quantum weak values in treating cases like the examples mentioned.
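For reference, the weak-value expression the discussion relies on, for an observable with pre-selected state |ψ_i⟩ and post-selected state |ψ_f⟩, is the standard one:

```latex
% Standard definition of the weak value of an observable \hat{A}:
\[
  A_w \;=\; \frac{\langle \psi_f | \hat{A} | \psi_i \rangle}
                 {\langle \psi_f | \psi_i \rangle}.
\]
% For a projector \hat{\Pi}_X onto "the particle took way X", the weak value
% (\Pi_X)_w is what gets read as the answer to the which-way question.
```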
Model checking for linear temporal logic: An efficient implementation
NASA Technical Reports Server (NTRS)
Sherman, Rivi; Pnueli, Amir
1990-01-01
This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was done with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
Design of a Ferroelectric Programmable Logic Gate Array
NASA Technical Reports Server (NTRS)
MacLeod, Todd C.; Ho, Fat Duen
2003-01-01
A programmable logic gate array has been designed utilizing ferroelectric field effect transistors (FFETs). The design has only a small number of gates, but it could be scaled up to a more useful size. Using FFETs in a logic array gives several advantages. First, it allows real-time programming of the array for high-speed reconfiguration. It also allows the array to be reconfigured a nearly unlimited number of times, unlike a FLASH FPGA. Finally, the Ferroelectric Programmable Logic Gate Array (FPLGA) can be implemented using a smaller number of transistors because of the inherent logic characteristics of an FFET. The device was only designed and modeled using Spice models of the circuit, including the FFET; the actual device was not produced. The design consists of a small array of NAND and NOR logic gates. Other gates could easily be produced. They are linked by FFETs that control the logic flow. Timing and logic tables have been produced showing that the array can produce a variety of logic combinations at a usable, real-time speed. This device could be a prototype for embedded systems that need both the high speed of a hardware implementation of logic and the flexibility to change the logic algorithm. Because of the non-volatile nature of the FFET, it would also be useful in situations where a logic array must be programmed once and used repeatedly after power has been shut off.
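The reconfiguration idea can be caricatured in a few lines: a gate whose function is selected by a stored, nonvolatile configuration bit; the sketch below is a behavioural toy of ours, not a model of the actual FFET circuit.

```python
def programmable_gate(a, b, config_bit):
    """config_bit = 0 -> NAND, 1 -> NOR (as if stored in a nonvolatile FFET)."""
    return int(not (a and b)) if config_bit == 0 else int(not (a or b))

# Truth table for both configurations of the same physical gate:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, programmable_gate(a, b, 0), programmable_gate(a, b, 1))
```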
A Categorical Framework for Model Classification in the Geosciences
NASA Astrophysics Data System (ADS)
Hauhs, Michael; Trancón y Widemann, Baltasar; Lange, Holger
2016-04-01
Models have a mixed record of success in the geosciences. In meteorology, model development and implementation has been among the first and most successful examples of harnessing computer technology in science. On the other hand, notorious problems such as the 'equifinality issue' in hydrology lead to a rather mixed reputation of models in other areas. The most successful models in the geosciences are applications of dynamic systems theory to non-living systems or phenomena. Thus, we start from the hypothesis that the success of model applications relates to the influence of life on the phenomenon under study. We thus focus on the (formal) representation of life in models. The aim is to investigate whether disappointment in model performance is due to system properties such as the heterogeneity and historicity of ecosystems, or rather reflects an abstraction and formalisation problem at a fundamental level. As a formal framework for this investigation, we use category theory as applied in computer science to specify behaviour at an interface. Its methods have been developed for translating and comparing formal structures among different application areas and seem highly suited for a classification of the current "model zoo" in the geosciences. The approach is rather abstract, with a high degree of generality but a low level of expressibility. Here, category theory will be employed to check the consistency of assumptions about life in different models. It will be shown that it is sufficient to distinguish just four logical cases to check the consistency of model content. All four cases can be formalised as variants of coalgebra-algebra homomorphisms. It can be demonstrated that transitions between the four variants affect the relevant observations (time series or spatial maps), the formalisms used (equations, decision trees) and the test criteria of success (prediction, classification) of the resulting model types. We will present examples from hydrology and ecology in which a transport problem is combined with the strategic behaviour of living agents. The living and the non-living aspects of the model belong to two different model types. If a model is built to combine strategic behaviour with the constraint of mass conservation, some critical assumptions appear inevitable, or the model may become logically inconsistent. The categorical assessment and the examples demonstrate that many models at the ecosystem level, where living and non-living aspects inevitably meet, pose so far unsolved, fundamental problems. Today, these are often pragmatically resolved at the level of software engineering. Some suggestions will be given as to how model documentation and benchmarking may help clarify and resolve some of these issues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Raedt, Hans; Katsnelson, Mikhail I.; Donker, Hylke C.
It is shown that the Pauli equation and the concept of spin naturally emerge from logical inference applied to experiments on a charged particle under the conditions that (i) space is homogeneous (ii) the observed events are logically independent, and (iii) the observed frequency distributions are robust with respect to small changes in the conditions under which the experiment is carried out. The derivation does not take recourse to concepts of quantum theory and is based on the same principles which have already been shown to lead to e.g. the Schrödinger equation and the probability distributions of pairs of particles in the singlet or triplet state. Application to Stern–Gerlach experiments with chargeless, magnetic particles, provides additional support for the thesis that quantum theory follows from logical inference applied to a well-defined class of experiments. - Highlights: • The Pauli equation is obtained through logical inference applied to robust experiments on a charged particle. • The concept of spin appears as an inference resulting from the treatment of two-valued data. • The same reasoning yields the quantum theoretical description of neutral magnetic particles. • Logical inference provides a framework to establish a bridge between objective knowledge gathered through experiments and their description in terms of concepts.
Modeling coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
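The qualitative divergence between coherent and stochastic error accumulation can be seen in a single-qubit toy calculation (ours, not the paper's repetition-code analysis): amplitudes of a systematic rotation add across cycles, while Pauli-model error probabilities add only linearly.

```python
import math

eps, cycles = 0.01, 200
# Coherent: the rotation angle grows linearly with the cycle count,
# so the error probability grows roughly quadratically at first.
coherent = math.sin(cycles * eps) ** 2
# Pauli (stochastic): per-cycle error probabilities simply accumulate.
pauli = min(1.0, cycles * math.sin(eps) ** 2)
print(f"coherent ~ {coherent:.4f}, Pauli ~ {pauli:.4f}")
```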
Schünemann, Holger J; Wiercioch, Wojtek; Brozek, Jan; Etxeandia-Ikobaltzeta, Itziar; Mustafa, Reem A; Manja, Veena; Brignardello-Petersen, Romina; Neumann, Ignacio; Falavigna, Maicon; Alhazzani, Waleed; Santesso, Nancy; Zhang, Yuan; Meerpohl, Jörg J; Morgan, Rebecca L; Rochwerg, Bram; Darzi, Andrea; Rojas, Maria Ximenas; Carrasco-Labra, Alonso; Adi, Yaser; AlRayees, Zulfa; Riva, John; Bollig, Claudia; Moore, Ainsley; Yepes-Nuñez, Juan José; Cuello, Carlos; Waziry, Reem; Akl, Elie A
2017-01-01
Guideline developers can: (1) adopt existing recommendations from others; (2) adapt existing recommendations to their own context; or (3) create recommendations de novo. Monetary and nonmonetary resources, credibility, maximization of uptake, as well as logical arguments should guide the choice of the approach and processes. We describe a potentially efficient model for guideline production based on adoption, adaptation, and/or de novo development of recommendations utilizing the Grading of Recommendations Assessment, Development and Evaluation (GRADE) Evidence to Decision (EtD) frameworks. We applied the model in a new national guideline program producing 22 practice guidelines. We searched for relevant evidence that informs the direction and strength of a recommendation. We then produced GRADE EtDs for guideline panels to develop recommendations. We produced a total of 80 EtD frameworks in approximately 4 months and 146 EtDs in approximately 6 months in two waves. Use of the EtD frameworks allowed panel members to understand the judgments of others about the criteria that bear on guideline recommendations and then to make their own judgments about those criteria in a systematic approach. The "GRADE-ADOLOPMENT" approach to guideline production combines adoption, adaptation, and, as needed, de novo development of recommendations. If developers of guidelines follow EtD criteria more widely and make their work publicly available, this approach should prove even more useful. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.
Ziviani, Jenny; Feeney, Rachel; Schabrun, Siobhan; Copland, David; Hodges, Paul
2014-08-01
The purpose of this study was to present the application of a logic model in depicting the underlying theory of an undergraduate research scheme for occupational therapy, physiotherapy, and speech pathology university students in Queensland, Australia. Data gathered from key written documents on the goals and intended operation of the research incubator scheme were used to create a draft (unverified) logic model. The major components of the logic model were inputs and resources, activities/outputs, and outcomes (immediate/learning, intermediate/action, and longer term/impacts). Although immediate and intermediate outcomes chiefly pertained to students' participation in honours programs, longer-term outcomes (impacts) concerned their subsequent participation in research higher-degree programs and engagement in research careers. Program logic provided an effective means of clarifying program objectives and the mechanisms by which the research incubator scheme was designed to achieve its intended outcomes. This model was developed as the basis for evaluation of the effectiveness of the scheme in achieving its stated goals.
The probability heuristics model of syllogistic reasoning.
Chater, N; Oaksford, M
1999-03-01
A probability heuristic model (PHM) for syllogistic reasoning is proposed. An informational ordering over quantified statements suggests simple probability based heuristics for syllogistic reasoning. The most important is the "min-heuristic": choose the type of the least informative premise as the type of the conclusion. The rationality of this heuristic is confirmed by an analysis of the probabilistic validity of syllogistic reasoning which treats logical inference as a limiting case of probabilistic inference. A meta-analysis of past experiments reveals close fits with PHM. PHM also compares favorably with alternative accounts, including mental logics, mental models, and deduction as verbal reasoning. Crucially, PHM extends naturally to generalized quantifiers, such as Most and Few, which have not been characterized logically and are, consequently, beyond the scope of current mental logic and mental model theories. Two experiments confirm the novel predictions of PHM when generalized quantifiers are used in syllogistic arguments. PHM suggests that syllogistic reasoning performance may be determined by simple but rational informational strategies justified by probability theory rather than by logic. Copyright 1999 Academic Press.
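A minimal sketch of the min-heuristic as we read it from the abstract (the string encoding of quantifiers and the function name are ours):

    # PHM's min-heuristic: the conclusion takes the quantifier of the least
    # informative premise. Informativeness ordering per the PHM framework:
    # All > Most > Few > Some > None > Some-not.
    INFORMATIVENESS = ["All", "Most", "Few", "Some", "None", "Some-not"]

    def min_heuristic(premise1: str, premise2: str) -> str:
        rank = {q: i for i, q in enumerate(INFORMATIVENESS)}
        # higher index = less informative, so take the max-ranked premise type
        return max((premise1, premise2), key=lambda q: rank[q])

    print(min_heuristic("All", "Some"))  # -> "Some"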
NASA Technical Reports Server (NTRS)
Brown, Robert B.
1994-01-01
A software pilot model for Space Shuttle proximity operations is developed, utilizing fuzzy logic. The model is designed to emulate a human pilot during the terminal phase of a Space Shuttle approach to the Space Station. The model uses the same sensory information available to a human pilot and is based upon existing piloting rules and techniques determined from analysis of human pilot performance. Such a model is needed to generate numerous rendezvous simulations to various Space Station assembly stages for analysis of current NASA procedures and plume impingement loads on the Space Station. The advantages of a fuzzy logic pilot model are demonstrated by comparing its performance with NASA's man-in-the-loop simulations and with a similar model based upon traditional Boolean logic. The fuzzy model is shown to respond well from a number of initial conditions, with results typical of an average human. In addition, the ability to model different individual piloting techniques and new piloting rules is demonstrated.
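The flavor of such a rule base can be conveyed with a small Mamdani-style fragment (an illustrative sketch of ours; the membership breakpoints and the rule are invented, not NASA's):

    # One fuzzy rule from a hypothetical proximity-operations controller:
    # IF range is CLOSE AND closing rate is FAST THEN brake.
    def shoulder_up(x, a, b):
        # membership rising linearly from 0 at a to 1 at b
        if x <= a:
            return 0.0
        if x >= b:
            return 1.0
        return (x - a) / (b - a)

    def braking_command(range_m, rate_mps):
        close = 1.0 - shoulder_up(range_m, 30.0, 150.0)  # 'close' when range is small
        fast = shoulder_up(rate_mps, 0.3, 1.5)           # 'fast' above ~0.3 m/s closure
        return min(close, fast)                          # AND via min; output in [0, 1]

    print(round(braking_command(range_m=50.0, rate_mps=1.0), 2))  # graded braking strength, 0.58

Unlike a Boolean rule, the command strength degrades smoothly as the inputs move away from the rule's prototype, which is what lets such a model approximate an average human response.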
Stone, Vathsala I; Lane, Joseph P
2012-05-16
Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aspects of public policies, in addition to expanding the knowledge base. For example, beneficial healthcare services and devices are expected to result from investments in research and development (R&D) programs, which assume a causal link to commercial innovation. Such programs are increasingly held accountable for evidence of impact-that is, innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics about research outputs (published discoveries), with less focus on transfer metrics about development outputs (patented prototypes) and almost none on econometrics related to production outputs (commercial innovations). This disparity is particularly problematic for the expressed intent of such programs, as most measurable socioeconomic benefits result from the last category of outputs. This paper proposes a conceptual framework integrating all three knowledge-generating methods into a logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, the integration of the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor. The resulting logic model framework explicitly traces the progress of knowledge from inputs, following it through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), as it generates the intended socio-beneficial impacts. It is a hybrid model for generating technology-based innovations, where best practices in new product development merge with a widely accepted knowledge-translation approach. Given the emphasis on evidence-based practice in the medical and health fields and "bench to bedside" expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes. High-cost/high-risk industries like healthcare require the market deployment of technology-based innovations to improve domestic society in a global economy. An appropriate balance of relevance and rigor in research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model to effectively allocate public funds and thereby deliberately and systematically accomplish socioeconomic benefits.
2012-01-01
Background Government-sponsored science, technology, and innovation (STI) programs support the socioeconomic aspects of public policies, in addition to expanding the knowledge base. For example, beneficial healthcare services and devices are expected to result from investments in research and development (R&D) programs, which assume a causal link to commercial innovation. Such programs are increasingly held accountable for evidence of impact—that is, innovative goods and services resulting from R&D activity. However, the absence of comprehensive models and metrics skews evidence gathering toward bibliometrics about research outputs (published discoveries), with less focus on transfer metrics about development outputs (patented prototypes) and almost none on econometrics related to production outputs (commercial innovations). This disparity is particularly problematic for the expressed intent of such programs, as most measurable socioeconomic benefits result from the last category of outputs. Methods This paper proposes a conceptual framework integrating all three knowledge-generating methods into a logic model, useful for planning, obtaining, and measuring the intended beneficial impacts through the implementation of knowledge in practice. Additionally, the integration of the Context-Input-Process-Product (CIPP) model of evaluation proactively builds relevance into STI policies and programs while sustaining rigor. Results The resulting logic model framework explicitly traces the progress of knowledge from inputs, following it through the three knowledge-generating processes and their respective knowledge outputs (discovery, invention, innovation), as it generates the intended socio-beneficial impacts. It is a hybrid model for generating technology-based innovations, where best practices in new product development merge with a widely accepted knowledge-translation approach. Given the emphasis on evidence-based practice in the medical and health fields and “bench to bedside” expectations for knowledge transfer, sponsors and grantees alike should find the model useful for planning, implementing, and evaluating innovation processes. Conclusions High-cost/high-risk industries like healthcare require the market deployment of technology-based innovations to improve domestic society in a global economy. An appropriate balance of relevance and rigor in research, development, and production is crucial to optimize the return on public investment in such programs. The technology-innovation process needs a comprehensive operational model to effectively allocate public funds and thereby deliberately and systematically accomplish socioeconomic benefits. PMID:22591638
Topological Properties of Some Integrated Circuits for Very Large Scale Integration Chip Designs
NASA Astrophysics Data System (ADS)
Swanson, S.; Lanzerotti, M.; Vernizzi, G.; Kujawski, J.; Weatherwax, A.
2015-03-01
This talk presents topological properties of integrated circuits for Very Large Scale Integration chip designs. These circuits can be implemented in very large scale integrated circuits, such as those in high performance microprocessors. Prior work considered basic combinational logic functions and produced a mathematical framework based on algebraic topology for integrated circuits composed of logic gates. Prior work also produced an historically-equivalent interpretation of Mr. E. F. Rent's work for today's complex circuitry in modern high performance microprocessors, where a heuristic linear relationship was observed between the number of connections and number of logic gates. This talk will examine topological properties and connectivity of more complex functionally-equivalent integrated circuits. The views expressed in this article are those of the author and do not reflect the official policy or position of the United States Air Force, Department of Defense or the U.S. Government.
Language, procedures, and the non-perceptual origin of number word meanings.
Barner, David
2017-05-01
Perceptual representations of objects and approximate magnitudes are often invoked as building blocks that children combine to acquire the positive integers. Systems of numerical perception are either assumed to contain the logical foundations of arithmetic innately, or to supply the basis for their induction. I propose an alternative to this framework, and argue that the integers are not learned from perceptual systems, but arise to explain perception. Using cross-linguistic and developmental data, I show that small (~1-4) and large (~5+) numbers arise both historically and in individual children via distinct mechanisms, constituting independent learning problems, neither of which begins with perceptual building blocks. Children first learn small numbers using the same logic that supports other linguistic number marking (e.g. singular/plural). Years later, they infer the logic of counting from the relations between large number words and their roles in blind counting procedures, only incidentally associating number words with approximate magnitudes.
Active matter logic for autonomous microfluidics
NASA Astrophysics Data System (ADS)
Woodhouse, Francis G.; Dunkel, Jörn
2017-04-01
Chemically or optically powered active matter plays an increasingly important role in materials design, but its computational potential has yet to be explored systematically. The competition between energy consumption and dissipation imposes stringent physical constraints on the information transport in active flow networks, facilitating global optimization strategies that are not well understood. Here, we combine insights from recent microbial experiments with concepts from lattice-field theory and non-equilibrium statistical mechanics to introduce a generic theoretical framework for active matter logic. Highlighting conceptual differences with classical and quantum computation, we demonstrate how the inherent non-locality of incompressible active flow networks can be utilized to construct universal logical operations, Fredkin gates and memory storage in set-reset latches through the synchronized self-organization of many individual network components. Our work lays the conceptual foundation for developing autonomous microfluidic transport devices driven by bacterial fluids, active liquid crystals or chemically engineered motile colloids.
Competing Logics and Healthcare
Saks, Mike
2018-01-01
This paper offers a short commentary on the editorial by Mannion and Exworthy. The paper highlights the positive insights offered by their analysis into the tensions between the competing institutional logics of standardization and customization in healthcare, in part manifested in the conflict between managers and professionals, and endorses the plea of the authors for further research in this field. However, the editorial is criticized for its lack of a strong societal reference point, the comparative absence of focus on hybridization, and its failure to highlight structural factors impinging on the opposing logics in a broader neo-institutional framework. With reference to the Procrustean metaphor, it is argued that greater stress should be placed on the healthcare user in future health policy. Finally, the case of complementary and alternative medicine is set out which – while not explicitly mentioned in the editorial – most effectively concretizes the tensions at the heart of this analysis of healthcare. PMID:29626406
Land-Atmosphere Coupling in the Multi-Scale Modelling Framework
NASA Astrophysics Data System (ADS)
Kraus, P. M.; Denning, S.
2015-12-01
The Multi-Scale Modeling Framework (MMF), in which cloud-resolving models (CRMs) are embedded within general circulation model (GCM) gridcells to serve as the model's cloud parameterization, has offered a number of benefits to GCM simulations. Coupling these cloud-resolving models directly to land surface model instances, rather than passing averaged atmospheric variables to a single instance of a land surface model, is the logical next step in model development and has recently been accomplished. This new configuration offers conspicuous improvements to estimates of precipitation and canopy through-fall, but overall the model exhibits warm surface temperature biases and low productivity. This work presents modifications to a land-surface model that take advantage of the new multi-scale modeling framework and accommodate the change in spatial scale from a typical GCM range of ~200 km to the CRM grid-scale of 4 km. A parameterization is introduced to apportion modeled surface radiation into direct-beam and diffuse components. The diffuse component is then distributed among the land-surface model instances within each GCM cell domain. This substantially reduces the number of excessively low light values provided to the land-surface model when cloudy conditions are modeled in the CRM, which are associated with its 1-D radiation scheme. The small spatial scale of the CRM, ~4 km, as compared with the typical ~200 km GCM scale, provides much more realistic estimates of precipitation intensity; this permits the elimination of a model parameterization of canopy through-fall. However, runoff at such scales can no longer be considered as an immediate flow to the ocean. Allowing sub-surface water flow between land-surface instances within the GCM domain affords better realism and also reduces temperature and productivity biases. The MMF affords a number of opportunities to land-surface modelers, providing both the advantages of direct simulation at the 4 km scale and a much reduced conceptual gap between model resolution and parameterized processes.
Community science, philosophy of science, and the practice of research.
Tebes, Jacob Kraemer
2005-06-01
Embedded in community science are implicit theories on the nature of reality (ontology), the justification of knowledge claims (epistemology), and how knowledge is constructed (methodology). These implicit theories influence the conceptualization and practice of research, and open up or constrain its possibilities. The purpose of this paper is to make some of these theories explicit, trace their intellectual history, and propose a shift in the way research in the social and behavioral sciences, and community science in particular, is conceptualized and practiced. After describing the influence and decline of logical empiricism, the underlying philosophical framework for science for the past century, I summarize contemporary views in the philosophy of science that are alternatives to logical empiricism. These include contextualism, normative naturalism, and scientific realism, and propose that a modified version of contextualism, known as perspectivism, affords the philosophical framework for an emerging community science. I then discuss the implications of perspectivism for community science in the form of four propositions to guide the practice of research.
Research on teacher education programs: logic model approach.
Newton, Xiaoxia A; Poon, Rebecca C; Nunes, Nicole L; Stone, Elisa M
2013-02-01
Teacher education programs in the United States face increasing pressure to demonstrate their effectiveness through pupils' learning gains in classrooms where program graduates teach. The link between teacher candidates' learning in teacher education programs and pupils' learning in K-12 classrooms implicit in the policy discourse suggests a one-to-one correspondence. However, the logical steps leading from what teacher candidates have learned in their programs to what they are doing in classrooms that may contribute to their pupils' learning are anything but straightforward. In this paper, we argue that the logic model approach from scholarship on evaluation can enhance research on teacher education by making explicit the logical links between program processes and intended outcomes. We demonstrate the usefulness of the logic model approach through our own work on designing a longitudinal study that focuses on examining the process and impact of an undergraduate mathematics and science teacher education program. Copyright © 2012 Elsevier Ltd. All rights reserved.
Compensation for Lithography Induced Process Variations during Physical Design
NASA Astrophysics Data System (ADS)
Chin, Eric Yiow-Bing
This dissertation addresses the challenge of designing robust integrated circuits in the deep sub-micron regime in the presence of lithography process variability. By extending and combining existing process and circuit analysis techniques, flexible software frameworks are developed to provide detailed studies of circuit performance in the presence of lithography variations such as focus and exposure. Applications of these software frameworks to select circuits demonstrate the electrical impact of these variations and provide insight into variability-aware compact models that capture the process-dependent circuit behavior. These variability-aware timing models abstract lithography variability from the process level to the circuit level and are used to estimate path-level circuit performance with high accuracy and very little runtime overhead. The Interconnect Variability Characterization (IVC) framework maps lithography-induced geometrical variations at the interconnect level to electrical delay variations. This framework is applied to one-dimensional repeater circuits patterned with both 90nm single patterning and 32nm double patterning technologies, in the presence of focus, exposure, and overlay variability. Studies indicate that single and double patterning layouts generally exhibit small variations in delay (between 1% and 3%) due to self-compensating RC effects associated with dense layouts and overlay errors for layouts without self-compensating RC effects. The delay response of each double patterned interconnect structure is fit with a second-order polynomial model in focus, exposure, and misalignment parameters, with 12 coefficients and residuals of less than 0.1 ps. The IVC framework is also applied to a repeater circuit with cascaded interconnect structures to emulate more complex layout scenarios, and it is observed that the variations on each segment average out to reduce the overall delay variation. The Standard Cell Variability Characterization (SCVC) framework advances existing layout-level lithography-aware circuit analysis by extending it to cell-level applications, utilizing a physically accurate approach that integrates process simulation, compact transistor models, and circuit simulation to characterize electrical cell behavior. This framework is applied to combinational and sequential cells in the Nangate 45nm Open Cell Library, and the timing response of these cells to lithography focus and exposure variations demonstrates Bossung-like behavior. This behavior permits the process-parameter-dependent response to be captured in a nine-term variability-aware compact model based on Bossung fitting equations. For a two-input NAND gate, the variability-aware compact model captures the simulated response to an accuracy of 0.3%. The SCVC framework is also applied to investigate advanced process effects including misalignment and layout proximity. The abstraction of process variability from the layout level to the cell level opens up an entirely new realm of circuit analysis and optimization and provides a foundation for path-level variability analysis without the computationally expensive costs associated with joint process and circuit simulation. The SCVC framework is used with slight modification to illustrate the speedup and accuracy tradeoffs of using compact models. With variability-aware compact models, the process-dependent performance of a three-stage logic circuit can be estimated to an accuracy of 0.7% with a speedup of over 50,000.
Path-level variability analysis also provides an accurate estimate (within 1%) of ring oscillator period in well under a second. Another significant advantage of variability-aware compact models is that they can be easily incorporated into existing design methodologies for design optimization. This is demonstrated by applying cell swapping on a logic circuit to reduce the overall delay variability along a circuit path. By including these variability-aware compact models in cell characterization libraries, design metrics such as circuit timing, power, area, and delay variability can be quickly assessed to optimize for the correct balance of all design metrics, including delay variability. Deterministic lithography variations can be easily captured using the variability-aware compact models described in this dissertation. However, another prominent source of variability is random dopant fluctuations, which affect transistor threshold voltage and, in turn, circuit performance. The SCVC framework is utilized to investigate the interactions between deterministic lithography variations and random dopant fluctuations. Monte Carlo studies show that the output delay distribution in the presence of random dopant fluctuations is dependent on lithography focus and exposure conditions, with a 3.6 ps change in standard deviation across the focus-exposure process window. This indicates that the electrical impact of random variations is dependent on systematic lithography variations, and this dependency should be included for precise analysis.
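To make the compact-model idea concrete, here is a hedged sketch of fitting a Bossung-like polynomial delay model in focus F and exposure E; the nine basis terms and all numbers are our assumptions, not the dissertation's actual equations or data.

    import numpy as np

    def basis(F, E):
        # nine polynomial terms in focus and exposure (an illustrative choice)
        return np.stack([np.ones_like(F), F, F**2, E, E*F, E*F**2,
                         E**2, E**2*F, E**2*F**2], axis=-1)

    rng = np.random.default_rng(0)
    F = rng.uniform(-0.1, 0.1, 200)   # focus offsets (um), synthetic
    E = rng.uniform(0.9, 1.1, 200)    # relative exposure dose, synthetic
    # fake 'simulated' delays (ps): quadratic in focus, linear in dose, plus noise
    delay = 10.0 + 50.0 * F**2 - 2.0 * (E - 1.0) + rng.normal(0.0, 0.01, 200)

    coeffs, *_ = np.linalg.lstsq(basis(F, E), delay, rcond=None)
    print("max residual (ps):", np.abs(basis(F, E) @ coeffs - delay).max())

Evaluating such a fitted polynomial in place of joint process and circuit simulation is the kind of substitution behind the large speedups the abstract reports.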
Calibration strategies for a groundwater model in a highly dynamic alpine floodplain
Foglia, L.; Burlando, P.; Hill, Mary C.; Mehl, S.
2004-01-01
Most surface flows to the 20-km-long Maggia Valley in Southern Switzerland are impounded, and the valley is being investigated to determine environmental flow requirements. The aim of the investigation is the development of a modelling framework that simulates the dynamics of the groundwater, hydrologic, and ecologic systems. Because of the multi-scale nature of the modelling framework, large-scale models are first developed to provide the boundary conditions for more detailed models of reaches that are of ecological importance. We describe here the initial (large-scale) groundwater/surface-water model and its calibration in relation to initial and boundary conditions. A MODFLOW-2000 model was constructed to simulate the interaction of groundwater and surface water and was developed parsimoniously to avoid modelling artefacts and parameter inconsistencies. Model calibration includes two steady-state conditions, with and without recharge to the aquifer from the adjoining hillslopes. Parameters are defined to represent areal recharge, hydraulic conductivity of the aquifer (up to 5 classes), and streambed hydraulic conductivity. Model performance was investigated following two system representations. The first representation assumed unknown flow input at the northern end of the groundwater domain and unknown lateral inflow. The second representation used simulations of the lateral flow obtained by means of a raster-based, physically oriented and continuous-in-time rainfall-runoff (R-R) model. Results based on these two representations are compared and discussed.
Pedagogy of the logic model: teaching undergraduates to work together to change their communities.
Zimmerman, Lindsey; Kamal, Zohra; Kim, Hannah
2013-01-01
Undergraduate community psychology courses can empower students to address challenging problems in their local communities. Creating a logic model is an experiential way to learn course concepts by "doing." Throughout the semester, students work with peers to define a problem, develop an intervention, and plan an evaluation focused on an issue of concern to them. This report provides an overview of how to organize a community psychology course around the creation of a logic model in order for students to develop this applied skill. Two undergraduate student authors report on their experience with the logic model assignment, describing the community problem they chose to address, what they learned from the assignment, what they found challenging, and what they are doing now in their communities based on what they learned.
Fific, Mario; Little, Daniel R; Nosofsky, Robert M
2010-04-01
We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli along a set of component dimensions. Those independent decisions are then combined via logical rules to determine the overall categorization response. The time course of the independent decisions is modeled via random-walk processes operating along individual dimensions. Alternative mental architectures are used as mechanisms for combining the independent decisions to implement the logical rules. We derive fundamental qualitative contrasts for distinguishing among the predictions of the rule models and major alternative models of classification RT. We also use the models to predict detailed RT-distribution data associated with individual stimuli in tasks of speeded perceptual classification. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Dual Logic and Cerebral Coordinates for Reciprocal Interaction in Eye Contact
Lee, Ray F.
2015-01-01
In order to scientifically study the human brain's response to face-to-face social interaction, the scientific method itself needs to be reconsidered so that both quantitative observation and symbolic reasoning can be adapted to the situation where the observer is also observed. In light of the recent development of dyadic fMRI, which can directly observe two interacting brains in one MRI scanner, this paper aims to establish a new form of logic, dual logic, which provides a theoretical platform for deductive reasoning in a complementary dual system with an emergence mechanism. Applying dual logic to the dfMRI experimental design and data analysis, the exogenous and endogenous dual systems in the BOLD responses can be identified; the non-reciprocal responses in the dual system can be suppressed; and a cerebral coordinate for reciprocal interaction can be generated. Elucidated by dual logic deductions, the cerebral coordinate for reciprocal interaction suggests: the exogenous and endogenous systems consist of the empathy network and the mentalization network, respectively; the default-mode network emerges from the resting state to activation in the endogenous system during reciprocal interaction; the cingulate plays an essential role in the emergence from the exogenous system to the endogenous system. Overall, the dual logic deductions are supported by the dfMRI experimental results and are consistent with current literature. Both the theoretical framework and experimental method set the stage to formally apply the scientific method in studying complex social interaction. PMID:25885446
Winnicott and Derrida: development of logic-of-play.
Bitan, Shachaf
2012-02-01
In this essay I develop the logic of play from the writings of the British psychoanalyst Donald W. Winnicott and the French philosopher Jacques Derrida. The logic of play serves as both a conceptual framework for theoretical clinical thinking and a space of experiencing in which the therapeutic situation is located and to which it aspires. I argue that both Winnicott and Derrida proposed a playful turn in Western thinking by their attitude towards oppositions, viewing them not as complementary or contradictory, but as 'peacefully-coexisting'. Derrida criticizes the dichotomous structure of Western thought, proposing playful movement as an alternative that does not constitute itself as a mastering construction. I will show that Winnicott, too, proposes playful logic through which he thinks and acts in the therapeutic situation. The therapeutic encounter is understood as a playful space in which analyst and analysand continuously coexist, instead of facing each other as exclusionary oppositions. I therefore propose the logic of play as the basis for the therapeutic encounter. The playful turn, then, is crucial for the thought and praxis expressed by the concept of two-person psychology. I suggest the term playful psychoanalysis to characterize the present perspective of psychoanalysis in the light of the playful turn. I will first present Derrida's playful thought, go on to Winnicott's playful revolutionism, and conclude with an analysis of Winnicott's clinical material in the light of the logic of play. Copyright © 2012 Institute of Psychoanalysis.
Grossi, Enzo
2005-09-27
The concept of risk has pervaded medical literature in the last decades and has become a familiar topic, and the concept of probability, linked to a binary logic approach, is commonly applied in epidemiology and clinical medicine. The application of probability theory to groups of individuals is quite straightforward but can pose communication challenges at the individual level. Few articles, however, have tried to focus on the concept of "risk" at the individual subject level rather than at the population level. The author reviews the conceptual framework which has led to the use of probability theory in the medical field, in a time when the principal causes of death were acute diseases, often of infective origin. In the present scenario, in which chronic degenerative diseases dominate and there are smooth transitions between health and disease, the use of fuzzy logic rather than binary logic would be more appropriate. The use of fuzzy logic, in which more than two possible truth-value assignments are allowed, overcomes the trap of probability theory when dealing with uncertain outcomes, thereby making the meaning of a certain prognostic statement easier for the patient to understand. At the individual subject level, recourse to the term plausibility, related to fuzzy logic, would help the physician communicate with the patient more effectively than the term probability, related to binary logic. This would represent an evident advantage for the transfer of medical evidence to individual subjects.
NASA Astrophysics Data System (ADS)
Lev, S. M.; Gallo, J.
2017-12-01
The international Arctic scientific community has identified the need for a sustained and integrated portfolio of pan-Arctic Earth-observing systems. In 2017, an international effort was undertaken to develop the first-ever Value Tree framework for identifying common research and operational objectives that rely on Earth observation data derived from Earth-observing systems, sensors, surveys, networks, models, and databases to deliver societal benefits in the Arctic. A Value Tree Analysis is a common tool used to support decision-making processes and is useful for defining concepts, identifying objectives, and creating a hierarchical framework of objectives. A multi-level societal benefit area value tree establishes the connection from societal benefits to the set of observation inputs that contribute to delivering those benefits. We will present a Value Tree that draws on expert domain knowledge from Arctic and non-Arctic nations, international researchers, Indigenous knowledge holders, and other experts, and that serves as a logical and interdependent decision-support tool. Value tree examples that map the contribution of Earth observations in the Arctic to achieving societal benefits will be presented in the context of the 2017 International Arctic Observations Assessment Framework. These case studies will highlight specific observing products and capability groups where investment is needed to contribute to the development of a sustained portfolio of Arctic observing systems.
Design of synthetic biological logic circuits based on evolutionary algorithm.
Chuang, Chia-Hua; Lin, Chun-Liang; Chang, Yen-Chang; Jennawasin, Tanagorn; Chen, Po-Kuei
2013-08-01
The construction of an artificial biological logic circuit using a systematic strategy is recognised as one of the most important topics for the development of synthetic biology. In this study, a real-structured genetic algorithm (RSGA), which combines the general advantages of the traditional real genetic algorithm with those of the structured genetic algorithm, is proposed to deal with the biological logic circuit design problem. A general model with the cis-regulatory input function and appropriate promoter activity functions is proposed to synthesise a wide variety of fundamental logic gates such as NOT, Buffer, AND, OR, NAND, NOR and XOR. The results obtained can be extended to synthesise advanced combinational and sequential logic circuits by topologically distinct connections. The resulting optimal design of these logic gates and circuits is established via the RSGA. The in silico, computer-based modelling technology has been verified, showing its great advantages for this purpose.
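A much-simplified sketch of the idea (our stand-in, not the authors' RSGA): evolve two Hill-style promoter parameters so a repression model approximates a NAND gate on 0/1 inducer levels.

    import random

    TARGET = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # NAND truth table

    def output(params, a, b):
        k, n = params
        return 1.0 / (1.0 + ((a * b) / k) ** n)  # repressed promoter activity

    def fitness(params):
        return -sum((output(params, a, b) - t) ** 2 for (a, b), t in TARGET.items())

    pop = [(random.uniform(0.05, 2.0), random.uniform(1.0, 4.0)) for _ in range(50)]
    for _ in range(100):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:10]                                       # selection
        pop = elite + [(max(1e-3, k + random.gauss(0, 0.1)),   # mutation
                        max(0.5, n + random.gauss(0, 0.2)))
                       for k, n in random.choices(elite, k=40)]
    best = max(pop, key=fitness)
    print("best (k, n):", best, "fitness:", round(fitness(best), 4))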
NASA Technical Reports Server (NTRS)
Crawford, D. B.; Burcham, F. W., Jr.
1984-01-01
A series of airstarts was conducted in an F-15 airplane with two prototype Pratt and Whitney F100 Engine Model Derivative engines equipped with Digital Electronic Engine Control (DEEC) systems. The airstart envelope and the time required for airstarts were defined. Comparisons were made between the original airstart logic and modified logic designed to improve airstart capability. Spooldown airstarts with the modified logic were more successful at lower altitudes than those with the original logic. Spooldown airstart times ranged from 33 seconds at 250 knots to 83 seconds at 175 knots. The modified logic improved airstart times by 31% to 53%, with the greatest improvement at slower airspeeds. Jet fuel starter (JFS)-assisted airstarts were conducted at 7000 m, and their airstart times were significantly faster than those of unassisted airstarts. The effect of altitude on airstart times was small.
Formal Compiler Implementation in a Logical Framework
2003-04-29
variable set [], we omit the brackets and use the simpler notation v. MetaPRL is a tactic-based prover that uses OCaml [20] as its meta-language. When a ... rewrite is defined in MetaPRL, the framework creates an OCaml expression that can be used to apply the rewrite. Code to guide the application of ... rewrites is written in OCaml, using a rich set of primitives provided by MetaPRL. MetaPRL automates the construction of most guidance code; we describe
DOE Office of Scientific and Technical Information (OSTI.GOV)
A. Alfonsi; C. Rabiti; D. Mandelli
The Reactor Analysis and Virtual control ENvironment (RAVEN) code is a software tool that acts as the control logic driver and post-processing engine for the newly developed Thermal-Hydraulic code RELAP-7. RAVEN is now a multi-purpose Probabilistic Risk Assessment (PRA) software framework that allows dispatching different functionalities: (1) derive and actuate the control logic required to simulate the plant control system and operator actions (guided procedures), allowing on-line monitoring/controlling in the phase space; (2) perform both Monte Carlo sampling of randomly distributed events and Dynamic Event Tree based analysis; and (3) facilitate input/output handling through a Graphical User Interface (GUI) and a post-processing data mining module.
Cross-Cultural Counseling and Cross-Cultural Meanings: An Exploration of Morita Psychotherapy.
ERIC Educational Resources Information Center
Aldous, Jane L.
1994-01-01
Describes theoretical framework and techniques of Morita psychotherapy. Western research indicates that Asian American clients prefer active-directive, logical, rational, and structured approaches. Suggests that ethnocentric counseling approaches may be imposed upon clients of Asian origin because meanings attached to terms describing counseling…
Second Language Acquisition and Universal Grammar.
ERIC Educational Resources Information Center
White, Lydia
1990-01-01
Discusses the motivation for Universal Grammar (UG), as assumed in the principles and parameters framework of generative grammar (Chomsky, 1981), focusing on the logical problem of first-language acquisition and the potential role of UG in second-language acquisition. Recent experimental research regarding the second-language status of the…
NEVESIM: event-driven neural simulation framework with a Python interface.
Pecevski, Dejan; Kappel, David; Jonke, Zeno
2014-01-01
NEVESIM is a software package for event-driven simulation of networks of spiking neurons with a fast simulation core in C++, and a scripting user interface in the Python programming language. It supports simulation of heterogeneous networks with different types of neurons and synapses, and can be easily extended by the user with new neuron and synapse types. To enable heterogeneous networks and extensibility, NEVESIM is designed to decouple the simulation logic of communicating events (spikes) between the neurons at a network level from the implementation of the internal dynamics of individual neurons. In this paper we will present the simulation framework of NEVESIM, its concepts and features, as well as some aspects of the object-oriented design approaches and simulation strategies that were utilized to efficiently implement the concepts and functionalities of the framework. We will also give an overview of the Python user interface, its basic commands and constructs, and also discuss the benefits of integrating NEVESIM with Python. One of the valuable capabilities of the simulator is to simulate exactly and efficiently networks of stochastic spiking neurons from the recently developed theoretical framework of neural sampling. This functionality was implemented as an extension on top of the basic NEVESIM framework. Altogether, the intended purpose of the NEVESIM framework is to provide a basis for further extensions that support simulation of various neural network models incorporating different neuron and synapse types that can potentially also use different simulation strategies.
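The decoupling described above is easy to picture with a toy event-driven core (a generic sketch of ours for illustration; it does not use NEVESIM's actual API): the network layer only routes time-stamped spike events, while all membrane dynamics live inside the neuron class and can be swapped out.

    import heapq

    class LIFNeuron:
        # neuron-internal dynamics, independent of the event routing below
        def __init__(self, threshold=1.0):
            self.v, self.threshold = 0.0, threshold
        def receive(self, weight):
            self.v += weight
            if self.v >= self.threshold:
                self.v = 0.0
                return True   # fired
            return False

    def run(neurons, synapses, events, t_max=10.0, delay=0.1):
        # synapses: dict source_id -> list of (target_id, weight)
        heapq.heapify(events)                  # events are (time, target_id, weight)
        while events:
            t, tgt, w = heapq.heappop(events)
            if t > t_max:
                break
            if neurons[tgt].receive(w):        # delegate to neuron dynamics
                for nxt, w2 in synapses.get(tgt, []):
                    heapq.heappush(events, (t + delay, nxt, w2))

    neurons = {0: LIFNeuron(), 1: LIFNeuron(threshold=0.5)}
    run(neurons, {0: [(1, 0.6)]}, [(0.0, 0, 1.0)])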
Tableau Calculus for the Logic of Comparative Similarity over Arbitrary Distance Spaces
NASA Astrophysics Data System (ADS)
Alenda, Régis; Olivetti, Nicola
The logic CSL (first introduced by Sheremet, Tishkovsky, Wolter and Zakharyaschev in 2005) allows one to reason about distance comparison and similarity comparison within a modal language. The logic can express assertions of the kind "A is closer/more similar to B than to C" and has a natural application to spatial reasoning, as well as to reasoning about concept similarity in ontologies. The semantics of CSL is defined in terms of models based on different classes of distance spaces and it generalizes the logic S4u of topological spaces. In this paper we consider CSL defined over arbitrary distance spaces. The logic comprises a binary modality to represent comparative similarity and a unary modality to express the existence of the minimum of a set of distances. We first show that the semantics of CSL can be equivalently defined in terms of preferential models. As a consequence we obtain the finite model property of the logic with respect to its preferential semantics, a property that does not hold with respect to the original distance-space semantics. Next we present an analytic tableau calculus based on its preferential semantics. The calculus provides a decision procedure for the logic; its termination is obtained by imposing suitable blocking restrictions.
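For readers who want the distance semantics spelled out, the comparative similarity modality (written here as A ≺ B, "A is closer than B"; notation may differ from the paper's) is standardly interpreted via point-to-set distances. This is our reconstruction of the usual presentation:

    \[
      d(w, X) \;=\; \inf\{\, d(w, x) : x \in X \,\},
    \]
    \[
      w \models A \prec B
      \quad\text{iff}\quad
      d(w, \llbracket A \rrbracket) \;<\; d(w, \llbracket B \rrbracket).
    \]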
Conceptual and logical level of database modeling
NASA Astrophysics Data System (ADS)
Hunka, Frantisek; Matula, Jiri
2016-06-01
Conceptual and logical levels form the topmost levels of database modeling. Usually, ORM (Object Role Modeling) and ER diagrams are utilized to capture the corresponding schema. The final aim of business process modeling is to store its results in the form of a database solution. For this reason, value-oriented business process modeling, which utilizes ER diagrams to express the modeled entities and the relationships between them, is used. However, ER diagrams form the logical level of the database schema. To extend the possibilities of different business process modeling methodologies, the conceptual level of database modeling is needed. The paper deals with the REA value modeling approach to business process modeling using ER diagrams, and derives a conceptual model utilizing the ORM modeling approach. The conceptual model extends the possibilities of value modeling to other business modeling approaches.
Fuzzy Logic as a Tool for Assessing Students' Knowledge and Skills
ERIC Educational Resources Information Center
Voskoglou, Michael Gr.
2013-01-01
Fuzzy logic, which is based on fuzzy sets theory introduced by Zadeh in 1965, provides a rich and meaningful addition to standard logic. The applications which may be generated from or adapted to fuzzy logic are wide-ranging and provide the opportunity for modeling under conditions which are imprecisely defined. In this article we develop a fuzzy…
Fuzzy logic of Aristotelian forms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perlovsky, L.I.
1996-12-31
Model-based approaches to pattern recognition and machine vision have been proposed to overcome the exorbitant training requirements of earlier computational paradigms. However, uncertainties in data were found to lead to a combinatorial explosion of the computational complexity. This issue is related here to the roles of a priori knowledge vs. adaptive learning. What is the a priori knowledge representation that supports learning? I introduce Modeling Field Theory (MFT), a model-based neural network whose adaptive learning is based on a priori models. These models combine deterministic, fuzzy, and statistical aspects to account for a priori knowledge, its fuzzy nature, and data uncertainties. In the process of learning, a priori fuzzy concepts converge to crisp or probabilistic concepts. The MFT is a convergent dynamical system of only linear computational complexity. Fuzzy logic turns out to be essential for reducing the combinatorial complexity to a linear one. I will discuss the relationship of the new computational paradigm to two theories due to Aristotle: the theory of Forms and logic. While the theory of Forms argued that the mind cannot be based on ready-made a priori concepts, Aristotelian logic operated with just such concepts. I discuss an interpretation of MFT suggesting that its fuzzy logic, combining apriority and adaptivity, implements the Aristotelian theory of Forms (theory of mind). Thus, 2300 years after Aristotle, a logic is developed suitable for his theory of mind.
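A loose numerical illustration of that fuzzy-to-crisp convergence (a generic mixture-style sketch of ours, not Perlovsky's actual MFT equations): association weights between data and two model centers sharpen as the models adapt and the fuzziness parameter shrinks.

    import math

    data = [0.1, 0.2, 0.9, 1.1]
    centers, sigma = [0.0, 1.5], 1.0
    for _ in range(20):
        # fuzzy association of each datum with each model (normalized similarities)
        assoc = []
        for x in data:
            lik = [math.exp(-((x - c) ** 2) / (2 * sigma ** 2)) for c in centers]
            s = sum(lik)
            assoc.append([l / s for l in lik])
        # adapt each center toward the data it fuzzily 'owns'
        for j in range(len(centers)):
            num = sum(assoc[i][j] * data[i] for i in range(len(data)))
            den = sum(assoc[i][j] for i in range(len(data)))
            centers[j] = num / den
        sigma = max(0.05, sigma * 0.8)  # shrinking fuzziness -> near-crisp assignments
    print([round(c, 2) for c in centers])  # roughly the two cluster centers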
Jefferds, Maria Elena D; Flores-Ayala, Rafael
2015-12-01
Lack of monitoring capacity is a key barrier for nutrition interventions and limits programme management, decision making and programme effectiveness in many low-income and middle-income countries. A 2011 global assessment reported that lack of monitoring capacity was the top barrier for home fortification interventions, such as micronutrient powders or lipid-based nutrient supplements. A Manual for Developing and Implementing Monitoring Systems for Home Fortification Interventions was recently disseminated. It is comprehensive, describes monitoring concepts and frameworks, and includes monitoring tools and worksheets. The monitoring manual describes the steps of developing and implementing a monitoring system for home fortification interventions, including identifying and engaging stakeholders; developing a programme description including a logic model and logical framework; refining the purpose of the monitoring system and identifying users and their monitoring needs; describing the design of the monitoring system; developing indicators; describing the core components of a comprehensive monitoring plan; and considering factors related to stage of programme development, sustainability and scale-up. A fictional home fortification example is used throughout the monitoring manual to illustrate these steps. The monitoring manual is a useful tool to support the development and implementation of home fortification intervention monitoring systems. In the context of systematic capacity gaps to design, implement and monitor nutrition interventions in many low-income and middle-income countries, the dissemination of new tools such as monitoring manuals may have limited impact without additional attention to strengthening other individual, organisational and systems-level capacities. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Evaluating bacterial gene-finding HMM structures as probabilistic logic programs.
Mørk, Søren; Holmes, Ian
2012-03-01
Probabilistic logic programming offers a powerful way to describe and evaluate structured statistical models. To investigate the practicality of probabilistic logic programming for structure learning in bioinformatics, we undertook a simplified bacterial gene-finding benchmark in PRISM, a probabilistic dialect of Prolog. We evaluate Hidden Markov Model structures for bacterial protein-coding gene potential, including a simple null model structure, three structures based on existing bacterial gene finders and two novel model structures. We test standard versions as well as ADPH length modeling and three-state versions of the five model structures. The models are all represented as probabilistic logic programs and evaluated using the PRISM machine learning system in terms of statistical information criteria and gene-finding prediction accuracy, in two bacterial genomes. Neither of our implementations of the two most widely used model structures is best performing in terms of statistical information criteria or prediction performance, suggesting that better-fitting models might be achievable. The source code of all PRISM models, data and additional scripts are freely available for download at: http://github.com/somork/codonhmm. Supplementary data are available at Bioinformatics online.
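The abstract's model comparison rests on standard information criteria; as a reminder of the arithmetic (generic definitions, not PRISM-specific code):

    import math

    def aic(log_lik: float, n_params: int) -> float:
        return 2 * n_params - 2 * log_lik

    def bic(log_lik: float, n_params: int, n_obs: int) -> float:
        return n_params * math.log(n_obs) - 2 * log_lik

    # Illustrative, invented numbers: lower is better for both criteria.
    for name, ll, k in [("null", -9.1e5, 12), ("three-state", -8.7e5, 67)]:
        print(name, round(aic(ll, k)), round(bic(ll, k, n_obs=4_000_000)))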
Theorem Proving in Intel Hardware Design
NASA Technical Reports Server (NTRS)
O'Leary, John
2009-01-01
For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium® 4 and Core™ i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999 - today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, and outline some future directions.
Extended GTST-MLD for aerospace system safety analysis.
Guo, Chiming; Gong, Shiyu; Tan, Lin; Guo, Bo
2012-06-01
The hazards caused by complex interactions in aerospace systems have become a problem that urgently needs to be settled. This article introduces a method for aerospace system hazard interaction identification based on extended GTST-MLD (goal tree-success tree-master logic diagram) during the design stage. GTST-MLD is a functional modeling framework with a simple architecture. Ontology is used to extend the ability of GTST-MLD to describe system interactions by adding system design knowledge and past accident experience. At the levels of functionality and equipment, respectively, this approach can help the technician detect potential hazard interactions. Finally, a case study is used to demonstrate the method. © 2011 Society for Risk Analysis.
Data Model as an Architectural View
2009-10-01
store order-processing system. Logical. The logical data model is an evolution of the conceptual data model towards a data management technology (e.g. ... online store order-processing system at different stages. Perhaps the first draft was elaborated by the architect during discussion of requirements
Semantics-enabled service discovery framework in the SIMDAT pharma grid.
Qu, Cangtao; Zimmermann, Falk; Kumpf, Kai; Kamuzinzi, Richard; Ledent, Valérie; Herzog, Robert
2008-03-01
We present the design and implementation of a semantics-enabled service discovery framework in the Data Grids for Process and Product Development Using Numerical Simulation and Knowledge Discovery (SIMDAT) Pharma Grid, an industry-oriented Grid environment for integrating thousands of Grid-enabled biological data services and analysis services. The framework consists of three major components: the Web Ontology Language (OWL)-description logic (DL)-based biological domain ontology, the OWL Web service ontology (OWL-S)-based service annotation, and the semantic matchmaker based on ontology reasoning. Built upon the framework, workflow technologies are extensively exploited in SIMDAT to assist biologists in (semi)automatically performing in silico experiments. We present a typical usage scenario through the case study of a biological workflow: IXodus.
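Semantic matchmaking of this kind typically ranks candidate services by degrees of match derived from ontology subsumption. A toy sketch (the four-degree scheme is a common OWL-S matchmaking pattern; the mini ontology and class names are invented):

    PARENT = {"ProteinSequence": "Sequence", "DNASequence": "Sequence", "Sequence": "Data"}

    def ancestors(c):
        while c in PARENT:
            c = PARENT[c]
            yield c

    def degree_of_match(requested: str, advertised: str) -> str:
        if requested == advertised:
            return "exact"
        if advertised in ancestors(requested):
            return "plug-in"    # advertised concept is more general than requested
        if requested in ancestors(advertised):
            return "subsumes"   # advertised concept is more specific
        return "fail"

    print(degree_of_match("ProteinSequence", "Sequence"))  # -> "plug-in"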
NASA Technical Reports Server (NTRS)
Horio, Brant M.; Kumar, Vivek; DeCicco, Anthony H.; Hasan, Shahab; Stouffer, Virginia L.; Smith, Jeremy C.; Guerreiro, Nelson M.
2015-01-01
The implementation of the Next Generation Air Transportation System (NextGen) in the United States is an ongoing challenge for policymakers due to the complexity of the air transportation system (ATS), with its broad array of stakeholders and dynamic interdependencies between them. The successful implementation of NextGen has a hard dependency on the active participation of U.S. commercial airlines. To assist policymakers in identifying potential policy designs that facilitate the implementation of NextGen, the National Aeronautics and Space Administration (NASA) and LMI developed a research framework called the Air Transportation System Evolutionary Simulation (ATS-EVOS). This framework integrates large empirical data sets with multiple specialized models to simulate the evolution of the airline response to potential future policies and explore consequential impacts on ATS performance and market dynamics. In the ATS-EVOS configuration presented here, we leverage the Transportation Systems Analysis Model (TSAM), the Airline Evolutionary Simulation (AIRLINE-EVOS), the Airspace Concept Evaluation System (ACES), and the Aviation Environmental Design Tool (AEDT), all of which enable this research to comprehensively represent the complex facets of the ATS and its participants. We validated this baseline configuration of ATS-EVOS against Airline Origin and Destination Survey (DB1B) data and subject matter expert opinion, and we verified the ATS-EVOS framework and agent behavior logic through scenario-based experiments that explored potential implementations of a carbon tax, a congestion pricing policy, and the dynamics of equipage of new technology by airlines. These experiments demonstrated ATS-EVOS's capability to respond to a wide range of potential NextGen-related policies and its utility for decision makers seeking insights for effective policy design.
Five roles for using theory and evidence in the design and testing of behavior change interventions.
Bartholomew, L Kay; Mullen, Patricia Dolan
2011-01-01
The prevailing wisdom in the field of health-related behavior change is that well-designed and effective interventions are guided by theory. Using the framework of intervention mapping, we describe and provide examples of how investigators can effectively select and use theory to design, test, and report interventions. We propose five roles for theory and evidence about theories: a) identification of behavior and determinants of behavior related to a specified health problem (i.e., the logic model of the problem); b) explication of a causal model that includes theoretical constructs for producing change in the behavior of interest (i.e., the logic model of change); c) selection of intervention methods and delivery of practical applications to achieve changes in health behavior; d) evaluation of the resulting intervention, including theoretical mediating variables; and e) reporting of the active ingredients of the intervention together with the evaluation results. In problem-driven applied behavioral or social science, researchers use one or multiple theories, empirical evidence, and new research, both to assess a problem and to solve or prevent a problem. Furthermore, the theories for description of the problem may differ from the theories for its solution. In an applied approach, the main focus is on solving problems regarding health behavior change and improvement of health outcomes, and the criteria for success are formulated in terms of the problem rather than the theory. Resulting contributions to theory development may be quite useful, but they are peripheral to the problem-solving process.
NASA Technical Reports Server (NTRS)
Jafri, Madiha J.; Ely, Jay J.; Vahala, Linda L.
2007-01-01
In this paper, neural network (NN) modeling is combined with fuzzy logic to estimate Interference Path Loss measurements on Airbus 319 and 320 airplanes. Interference patterns inside the aircraft are classified and predicted based on the locations of the doors, windows, aircraft structures and the communication/navigation system-of-concern. Modeled results are compared with measured data. Combining fuzzy logic and NN modeling is shown to improve estimates of measured data over estimates obtained with NN alone. A plan is proposed to enhance the modeling for better prediction of electromagnetic coupling problems inside aircraft.
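Since the abstract above describes layering fuzzy logic on top of a neural-network estimate, a minimal sketch of that general idea may help. Everything here (the distance-to-window feature, the triangular memberships, the rule outputs) is invented for illustration and is not the authors' model:

    # Illustrative sketch (not the authors' model): a Sugeno-style fuzzy
    # correction layered on top of a baseline regressor's estimate of
    # interference path loss (IPL). Feature names and rule outputs are
    # hypothetical.

    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_ipl_correction(dist_to_window_m):
        """Rule base: nearer windows -> stronger coupling -> lower IPL."""
        rules = [
            (tri(dist_to_window_m, -1.0, 0.0, 2.0), -6.0),  # near: subtract 6 dB
            (tri(dist_to_window_m, 1.0, 3.0, 5.0),  0.0),   # mid: no change
            (tri(dist_to_window_m, 4.0, 8.0, 12.0), +4.0),  # far: add 4 dB
        ]
        num = sum(w * out for w, out in rules)
        den = sum(w for w, _ in rules) or 1.0
        return num / den                     # weighted-average defuzzification

    def estimate_ipl(baseline_nn_estimate_db, dist_to_window_m):
        return baseline_nn_estimate_db + fuzzy_ipl_correction(dist_to_window_m)

    print(estimate_ipl(72.0, 0.5))   # seat next to a window -> 66.0 dB
    print(estimate_ipl(72.0, 7.0))   # interior seat -> 76.0 dB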
Raft River Geothermal Area Data Models - Conceptual, Logical and Fact Models
Cuyler, David
2012-07-19
Conceptual and Logical Data Model for Geothermal Data Concerning Wells, Fields, Power Plants and Related Analyses at Raft River: (a) Logical Model for Geothermal Data Concerning Wells, Fields, Power Plants and Related Analyses, David Cuyler, 2010; (b) Fact Model for Geothermal Data Concerning Wells, Fields, Power Plants and Related Analyses, David Cuyler, 2010. Derived from tables, figures, and other content in reports from the Raft River Geothermal Project: "Technical Report on the Raft River Geothermal Resource, Cassia County, Idaho," GeothermEx, Inc., August 2002; "Results from the Short-Term Well Testing Program at the Raft River Geothermal Field, Cassia County, Idaho," GeothermEx, Inc., October 2004.
The application of SSADM to modelling the logical structure of proteins.
Saldanha, J; Eccles, J
1991-10-01
A logical design that describes the overall structure of proteins, together with a more detailed design describing secondary and some supersecondary structures, has been constructed using the computer-aided software engineering (CASE) tool, Auto-mate. Auto-mate embodies the philosophy of the Structured Systems Analysis and Design Method (SSADM) which enables the logical design of computer systems. Our design will facilitate the building of large information systems, such as databases and knowledgebases in the field of protein structure, by the derivation of system requirements from our logical model prior to producing the final physical system. In addition, the study has highlighted the ease of employing SSADM as a formalism in which to conduct the transferral of concepts from an expert into a design for a knowledge-based system that can be implemented on a computer (the knowledge-engineering exercise). It has been demonstrated how SSADM techniques may be extended for the purpose of modelling the constituent Prolog rules. This facilitates the integration of the logical system design model with the derived knowledge-based system.
Framework for risk analysis in Multimedia Environmental Systems (FRAMES)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whelan, G.; Buck, J.W.; Castleton, K.J.
The objectives of this workshop are to (1) provide the NRC staff and the public with an overview of currently available federally sponsored dose models appropriate for decommissioning assessments and (2) discuss NRC staff-developed questions related to model selection criteria associated with the final rule on "Radiological Criteria for License Termination" (62 FR 39058). For over 40 years, medium-specific models have been and will continue to be developed in an effort to understand and predict environmental phenomena, including fluid-flow patterns, contaminant migration and fate, human or wildlife exposures, impacts from specific toxicants to specific species and their organs, cost-benefit analyses, impacts from remediation alternatives, etc. For nearly 40 years, medium-specific models have been combined for either sequential or concurrent assessments. The evolution of multiple-media assessment tools has followed a logical progression. To allow a suite of users the flexibility and versatility to construct, combine, and couple attributes that meet their specific needs without unnecessarily burdening the user with extraneous capabilities, development of a computer-based methodology to implement the Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) was begun in 1994. FRAMES represents a platform which links elements together yet does not represent the models that are linked to or within it; therefore, changes to elements that are linked to or within FRAMES do not change the framework.
Constraint Logic Programming approach to protein structure prediction.
Dal Palù, Alessandro; Dovier, Agostino; Fogolari, Federico
2004-11-30
The protein structure prediction problem is one of the most challenging problems in biological sciences. Many approaches have been proposed using database information and/or simplified protein models. The protein structure prediction problem can be cast in the form of an optimization problem. Notwithstanding its importance, the problem has very seldom been tackled by Constraint Logic Programming, a declarative programming paradigm suitable for solving combinatorial optimization problems. Constraint Logic Programming techniques have been applied to the protein structure prediction problem on the face-centered cube lattice model. Molecular dynamics techniques, endowed with the notion of constraint, have also been exploited. Even using a very simplified model, Constraint Logic Programming on the face-centered cube lattice model allowed us to obtain acceptable results for a few small proteins. In a test implementation, their (known) secondary structure and the presence of disulfide bridges are used as constraints. Simplified structures obtained in this way have been converted to all-atom models with plausible structure. Results have been compared with a similar approach using a well-established technique, molecular dynamics. The results obtained on small proteins show that Constraint Logic Programming techniques can be employed for studying simplified protein models, which can be converted into realistic all-atom models. The advantage of Constraint Logic Programming over other, much more explored methodologies resides in rapid software prototyping, in the easy encoding of heuristics, and in exploiting the advances made in this research area, e.g., in constraint propagation and its use for pruning the huge search space.
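As a rough illustration of the paper's approach (structure prediction as constrained search over a lattice), here is a branch-and-prune sketch. For brevity it uses a simple cubic lattice and an HP contact score rather than the paper's face-centered cube model, and it omits the secondary-structure and disulfide constraints:

    # Sketch only: exhaustive branch-and-prune fold of a short HP sequence on
    # a simple cubic lattice (an assumption; the paper uses an FCC lattice).

    MOVES = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

    def contacts(seq, coords):
        """Count non-consecutive H-H lattice contacts (more = lower energy)."""
        pos = {c: i for i, c in enumerate(coords)}
        n = 0
        for c, i in pos.items():
            if seq[i] != 'H':
                continue
            for dx, dy, dz in MOVES:
                j = pos.get((c[0] + dx, c[1] + dy, c[2] + dz))
                if j is not None and seq[j] == 'H' and j > i + 1:
                    n += 1
        return n

    def fold(seq):
        best = {'score': -1, 'coords': None}

        def extend(coords):
            if len(coords) == len(seq):
                s = contacts(seq, coords)
                if s > best['score']:
                    best.update(score=s, coords=list(coords))
                return
            x, y, z = coords[-1]
            for dx, dy, dz in MOVES:
                nxt = (x + dx, y + dy, z + dz)
                if nxt in coords:        # self-avoidance constraint prunes branch
                    continue
                coords.append(nxt)
                extend(coords)
                coords.pop()

        extend([(0, 0, 0), (1, 0, 0)])   # fix the first bond to break symmetry
        return best

    print(fold("HPHPPHHPH"))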
Ontology-Based Learner Categorization through Case Based Reasoning and Fuzzy Logic
ERIC Educational Resources Information Center
Sarwar, Sohail; García-Castro, Raul; Qayyum, Zia Ul; Safyan, Muhammad; Munir, Rana Faisal
2017-01-01
Learner categorization has a pivotal role in making e-learning systems a success. However, learner characteristics exploited at abstract level of granularity by contemporary techniques cannot categorize the learners effectively. In this paper, an architecture of e-learning framework has been presented that exploits the machine learning based…
Cyber Power Potential of the Army’s Reserve Component
2017-01-01
and could extend logically to include electric power, water, food, railway, gas pipelines, and so forth. One consideration to note is that in cases... [Table-of-contents fragments: Chapter Four, Army Reserve Component Cyber Inventory Analysis; Background and Analytical Framework; Army Reserve Component Cyber Inventory Analysis, 2015]
Consensus Knowledge Acquisition
1989-12-01
explicit the logical structure of their positions. Structured frameworks for analyzing arguments (Toulmin, 1958; Fogelin, 1982)... 358-87, 1987. Stefik, M., et al., "Beyond the chalkboard," CACM, 30:1, Jan 1987, pp. 32-47. Toulmin, S., The Uses of Argument. Cambridge, England: Cambridge University Press, 1958.
Some Thoughts on John Dewey's Ethics and Education
ERIC Educational Resources Information Center
Karafillis, Gregorios
2012-01-01
The philosopher and educator, John Dewey, explores the emergence of the terms "ethics" and "education" from a pragmatist's perspective, i.e., within the linguistic and social components' framework, and society's existing cognitive and cultural level. In the current article, we examine the development, logical control and the relation between…
Rejoinder to Guterman, Martin, and Kopp
ERIC Educational Resources Information Center
Hansen, James T.
2012-01-01
In their reply to the author's keystone article (Hansen, 2012), Guterman, Martin, and Kopp (2012) charge that the author's integrative framework was not sufficiently integrative. They also argue that his proposal results in logical contradictions and the mind-body problem. The author responds by noting that his proposal fully integrates the…
Testing for Factorial Invariance in the Context of Construct Validation
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.
2010-01-01
This article describes the logic and procedures behind testing for factorial invariance across groups in the context of construct validation. The procedures include testing for configural, measurement, and structural invariance in the framework of multiple-group confirmatory factor analysis (CFA). The "forward" (sequential constraint imposition)…
The Seven Silos of Accountability in Higher Education: Systematizing Multiple Logics and Fields
ERIC Educational Resources Information Center
Brown, Joshua Travis
2017-01-01
Higher education accountability is a field characterized by complexity. Prior frameworks grounded in psychometrics, economics, and history fall short in explaining the persistence and composition of its complexity. This article employs organizational theory to identify the multiple conflicting approaches of higher education accountability and…
Modeling the peak of emergence in systems: Design and katachi.
Cardier, Beth; Goranson, H T; Casas, Niccolo; Lundberg, Patric; Erioli, Alessio; Takaki, Ryuji; Nagy, Dénes; Ciavarra, Richard; Sanford, Larry D
2017-12-01
It is difficult to model emergence in biological systems using reductionist paradigms. A requirement for computational modeling is that individual entities can be recorded parametrically and related logically, but their transformation into whole systems cannot be captured this way. The problem stems from an inability to formally represent the implicit influences that inform emergent organization, such as context, shifts in causal agency or scale, and self-reference. This lack hampers biological systems modeling and its computational counterpart, indicating a need for new fundamental abstraction frameworks that support system-level characteristics. We develop an approach that formally captures these characteristics, focusing on the way they come together to enable transformation at the 'peak' of the emergent process. An example from virology is presented, in which two seemingly antagonistic systems - the herpes cold sore virus and its host - are capable of altering their basic biological objectives to achieve a new equilibrium. The usual barriers to modeling this process are overcome by incorporating mechanisms from practices centered on its emergent peak: design and katachi. In the Japanese science of form, katachi refers to the emergence of intrinsic structure from real situations, where an optimal balance between implicit influences is achieved. Design indicates how such optimization is guided by principles of flow. These practices leverage qualities of situated abstraction, which we understand through the intuitive method of physicist Kôdi Husimi. Early results indicate that this approach can capture the functional transformations of biological emergence, whilst being reasonably computable. Due to its geometric foundations and narrative-based extension to logic, the method will also generate speculative predictions. This research forms the foundations of a new biomedical modeling platform, which is discussed. Copyright © 2017. Published by Elsevier Ltd.
Creating healthy work environments: a strategic perspective.
Adamson, Bonnie J
2010-01-01
Although I find Graham Lowe and Ben Chan's logic model and work environment metrics thought provoking, a healthy work environment framework must be more comprehensive and consider the addition of recommended diagnostic tools, vehicles to deliver the necessary change and a sustainability strategy that allows for the tweaking and refinement of ideas. Basic structure is required to frame and initiate an effective process, while allowing creativity and enhancements to be made by organizations as they learn. I support the construction of a suggested Canadian health sector framework for measuring the health of an organization, but I feel that organizations need to have some freedom in that design and the ability to incorporate their own indicators within the established proven drivers. Reflecting on my organization's experience with large-scale transformation efforts, I find that emotional intelligence along with formal leadership development and front-line engagement in Lean process improvement activities are essential for creating healthy work environments that produce the balanced set of outcomes listed in my hospital's Balanced Scorecard.
Causal learning and inference as a rational process: the new synthesis.
Holyoak, Keith J; Cheng, Patricia W
2011-01-01
Over the past decade, an active line of research within the field of human causal learning and inference has converged on a general representational framework: causal models integrated with bayesian probabilistic inference. We describe this new synthesis, which views causal learning and inference as a fundamentally rational process, and review a sample of the empirical findings that support the causal framework over associative alternatives. Causal events, like all events in the distal world as opposed to our proximal perceptual input, are inherently unobservable. A central assumption of the causal approach is that humans (and potentially nonhuman animals) have been designed in such a way as to infer the most invariant causal relations for achieving their goals based on observed events. In contrast, the associative approach assumes that learners only acquire associations among important observed events, omitting the representation of the distal relations. By incorporating bayesian inference over distributions of causal strength and causal structures, along with noisy-logical (i.e., causal) functions for integrating the influences of multiple causes on a single effect, human judgments about causal strength and structure can be predicted accurately for relatively simple causal structures. Dynamic models of learning based on the causal framework can explain patterns of acquisition observed with serial presentation of contingency data and are consistent with available neuroimaging data. The approach has been extended to a diverse range of inductive tasks, including category-based and analogical inferences.
Developing Critical Thinking through Leadership Education
ERIC Educational Resources Information Center
Jenkins, Daniel M.; Andenoro, Anthony C.
2016-01-01
This chapter provides the critical leadership logic model as a tool to help educators develop leadership-learning opportunities. This proactive logic model includes curricular and co-curricular educational experiences to ensure critical thinking through leadership education.
UTP and Temporal Logic Model Checking
NASA Astrophysics Data System (ADS)
Anderson, Hugh; Ciobanu, Gabriel; Freitas, Leo
In this paper we give an additional perspective on the formal verification of programs through temporal logic model checking, using Hoare and He's Unifying Theories of Programming (UTP). Our perspective emphasizes the use of UTP designs, an alphabetised relational calculus expressed as a pre/post condition pair of relations, to verify state or temporal assertions about programs. The temporal model checking relation is derived from a satisfaction relation between the model and its properties. The contribution of this paper is that it shows a UTP perspective on temporal logic model checking. The approach includes the notion of efficiency found in traditional model checkers, which reduce the state-explosion problem through the use of efficient data structures.
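The satisfaction relation the abstract refers to can be illustrated with a toy explicit-state checker for an invariant ("always p") over a small transition system. The traffic-light model is hypothetical, and nothing here reflects the UTP formalization itself:

    # Toy illustration of a satisfaction check: does every reachable state of
    # a transition system satisfy an invariant? Returns a counterexample trace
    # if not. The two-light interlock model below is invented.

    from collections import deque

    def check_invariant(initial, transitions, prop):
        """BFS over reachable states; (True, None) or (False, counterexample)."""
        frontier = deque([(initial, [initial])])
        seen = {initial}
        while frontier:
            state, path = frontier.popleft()
            if not prop(state):
                return False, path               # counterexample trace
            for nxt in transitions(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
        return True, None

    # Model: two traffic lights; a state is (light1, light2) in {'R','G'}.
    def transitions(state):
        a, b = state
        out = []
        if a == 'R' and b == 'R':
            out += [('G', 'R'), ('R', 'G')]      # either light may turn green
        if a == 'G':
            out.append(('R', b))
        if b == 'G':
            out.append((a, 'R'))
        return out

    # Safety property ("always not both green"):
    ok, trace = check_invariant(('R', 'R'), transitions,
                                lambda s: s != ('G', 'G'))
    print(ok, trace)    # True, None -- the interlock holds in this model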
Markov Logic Networks in the Analysis of Genetic Data
Sakhanenko, Nikita A.
2010-01-01
Complex, non-additive genetic interactions are common and can be critical in determining phenotypes. Genome-wide association studies (GWAS) and similar statistical studies of linkage data, however, assume additive models of gene interactions in looking for genotype-phenotype associations. These statistical methods view the compound effects of multiple genes on a phenotype as a sum of influences of each gene and often miss a substantial part of the heritable effect. Such methods do not use any biological knowledge about underlying mechanisms. Modeling approaches from the artificial intelligence (AI) field that incorporate deterministic knowledge into models to perform statistical analysis can be applied to include prior knowledge in genetic analysis. We chose to use the most general such approach, Markov Logic Networks (MLNs), for combining deterministic knowledge with statistical analysis. Using simple, logistic regression-type MLNs we can replicate the results of traditional statistical methods, but we also show that we are able to go beyond finding independent markers linked to a phenotype by using joint inference without an independence assumption. The method is applied to genetic data on yeast sporulation, a complex phenotype with gene interactions. In addition to detecting all of the previously identified loci associated with sporulation, our method identifies four loci with smaller effects. Since their effect on sporulation is small, these four loci were not detected with methods that do not account for dependence between markers due to gene interactions. We show how gene interactions can be detected using more complex models, which can be used as a general framework for incorporating systems biology with genetics. PMID:20958249
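To make "joint inference without an independence assumption" concrete, here is a tiny enumeration-based MLN sketch in which a world's probability is proportional to exp(sum of weights of satisfied ground formulas). The genetics-flavoured atoms and weights are invented:

    # Minimal MLN sketch: weighted ground formulas over three boolean atoms;
    # P(world) ~ exp(total weight of satisfied formulas). Illustrative only.

    from itertools import product
    from math import exp

    atoms = ['locus1', 'locus2', 'sporulates']

    # (weight, formula over a world dict) -- soft rules, not hard constraints.
    formulas = [
        (1.5, lambda w: (not w['locus1']) or w['sporulates']),  # locus1 => sporulates
        (1.5, lambda w: (not w['locus2']) or w['sporulates']),  # locus2 => sporulates
        (0.7, lambda w: not (w['locus1'] and w['locus2'])),     # interaction term
    ]

    def weight(world):
        return exp(sum(wt for wt, f in formulas if f(world)))

    worlds = [dict(zip(atoms, vals))
              for vals in product([False, True], repeat=len(atoms))]

    # Joint inference: P(sporulates | locus1 and locus2), with no
    # marker-independence assumption anywhere.
    num = sum(weight(w) for w in worlds
              if w['sporulates'] and w['locus1'] and w['locus2'])
    den = sum(weight(w) for w in worlds if w['locus1'] and w['locus2'])
    print(num / den)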
de Bruin, Jeroen S; Adlassnig, Klaus-Peter; Leitich, Harald; Rappelsberger, Andrea
2018-01-01
Evidence-based clinical guidelines have a major positive effect on the physician's decision-making process. Computer-executable clinical guidelines allow for automated guideline marshalling during the clinical diagnostic process, thus improving decision-making. Our objective was to implement a digital clinical guideline for the prevention of mother-to-child transmission of hepatitis B as a computerized workflow, thereby separating business logic from medical knowledge and decision-making. We used the Business Process Model and Notation language system Activiti for business logic and workflow modeling. Medical decision-making was performed by an Arden-Syntax-based medical rule engine, which is part of the ARDENSUITE software. We succeeded in creating an electronic clinical workflow for the prevention of mother-to-child transmission of hepatitis B in which institution-specific medical decision-making processes can be adapted without modifying the workflow's business logic. Separating business logic from medical decision-making results in more easily reusable electronic clinical workflows.
Cooperative Learning Model toward a Reading Comprehensions on the Elementary School
ERIC Educational Resources Information Center
Murtono
2015-01-01
The purposes of this research are: (1) to describe the reading skills of students who learn with the CIRC, Jigsaw, and STAD cooperative learning models; (2) to determine the effectiveness of these cooperative learning models on reading comprehension for students with high versus low language logic; and (3) to determine…
Design and verification of distributed logic controllers with application of Petri nets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał
2015-12-31
The paper deals with the designing and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of the temporal logic and model checking technique. After that it is decomposed into separate sequential automata that are working concurrently. Each of them is re-verified and if the validation is successful, the system can be finally implemented.
Design and verification of distributed logic controllers with application of Petri nets
NASA Astrophysics Data System (ADS)
Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał; Wiśniewska, Monika
2015-12-01
The paper deals with the designing and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of the temporal logic and model checking technique. After that it is decomposed into separate sequential automata that are working concurrently. Each of them is re-verified and if the validation is successful, the system can be finally implemented.
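A minimal sketch of the verification step described above: exhaustive reachability analysis of a small Petri net, checking a behavioral property in every reachable marking. The two-controller net sharing one resource is illustrative and unrelated to the paper's case studies:

    # Sketch: Petri net as (consume, produce) multisets over places; verify a
    # mutual-exclusion property over all reachable markings. Invented example.

    from collections import deque

    transitions = {
        't1_acquire': ({'idle1': 1, 'res': 1}, {'busy1': 1}),
        't1_release': ({'busy1': 1},           {'idle1': 1, 'res': 1}),
        't2_acquire': ({'idle2': 1, 'res': 1}, {'busy2': 1}),
        't2_release': ({'busy2': 1},           {'idle2': 1, 'res': 1}),
    }

    def enabled(marking, consume):
        return all(marking.get(p, 0) >= n for p, n in consume.items())

    def fire(marking, consume, produce):
        m = dict(marking)
        for p, n in consume.items():
            m[p] -= n
        for p, n in produce.items():
            m[p] = m.get(p, 0) + n
        return m

    def reachable(initial):
        key = lambda m: tuple(sorted((p, n) for p, n in m.items() if n))
        seen, queue = {key(initial)}, deque([initial])
        while queue:
            m = queue.popleft()
            yield m
            for consume, produce in transitions.values():
                if enabled(m, consume):
                    m2 = fire(m, consume, produce)
                    if key(m2) not in seen:
                        seen.add(key(m2))
                        queue.append(m2)

    initial = {'idle1': 1, 'idle2': 1, 'res': 1}
    # Behavioral property: the two automata never hold the resource at once.
    assert all(not (m.get('busy1') and m.get('busy2'))
               for m in reachable(initial))
    print("mutual exclusion verified over all reachable markings")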
NASA Astrophysics Data System (ADS)
van Aalsvoort, Joke
2004-09-01
Secondary school chemical education has a problem: the seeming irrelevance of chemistry to pupils. Chemical education prepares pupils for participation in society; it must therefore imply a model of society, of chemistry, and of the relation between them. In this article it is hypothesized that logical positivism currently supplies this model. Logical positivism is a philosophy of science that creates a divide between science and society. It is therefore further hypothesized that the adoption of logical positivism causes chemistry's lack of relevance in chemical education. Both hypotheses could be confirmed by an analysis of a grade nine course.
[Health surveillance: foundations, interfaces and tendencies].
Arreaza, Antonio Luis Vicente; de Moraes, José Cássio
2010-07-01
This article first revisits the forms, content, and operational scope of epidemiological surveillance as an indispensable tool for the field of knowledge and the practice of public health. We then show that the health surveillance model broadens this operational concept of surveillance by integrating collective and individual practices across different dimensions of health needs, encompassing not only the control of risks and harms but also eco-social determinants. Next, we consider the distinct levels at which this sanitary practice operates, articulated with interventions of promotion, protection, and recovery under the situated, integrated logic of Brazil's unified health system. Finally, we argue that the whole conceptual-operational framework of public health surveillance constitutes a political and sanitary basis for consolidating the health promotion paradigm within the collective health field.
Metabolic Free Energy and Biological Codes: A 'Data Rate Theorem' Aging Model.
Wallace, Rodrick
2015-06-01
A famous argument by Maturana and Varela (Autopoiesis and cognition. Reidel, Dordrecht, 1980) holds that the living state is cognitive at every scale and level of organization. Since it is possible to associate many cognitive processes with 'dual' information sources, pathologies can sometimes be addressed using statistical models based on the Shannon Coding, the Shannon-McMillan Source Coding, the Rate Distortion, and the Data Rate Theorems, which impose necessary conditions on information transmission and system control. Deterministic-but-for-error biological codes do not directly invoke cognition, but may be essential subcomponents within larger cognitive processes. A formal argument, however, places such codes within a similar framework, with metabolic free energy serving as a 'control signal' stabilizing biochemical code-and-translator dynamics in the presence of noise. Demand beyond available energy supply triggers punctuated destabilization of the coding channel, affecting essential biological functions. Aging, normal or prematurely driven by psychosocial or environmental stressors, must interfere with the routine operation of such mechanisms, initiating the chronic diseases associated with senescence. Amyloid fibril formation, intrinsically disordered protein logic gates, and cell surface glycan/lectin 'kelp bed' logic gates are reviewed from this perspective. The results generalize beyond coding machineries having easily recognizable symmetry modes, and strip a layer of mathematical complication from the study of phase transitions in nonequilibrium biological systems.
Development of Algorithms for Control of Humidity in Plant Growth Chambers
NASA Technical Reports Server (NTRS)
Costello, Thomas A.
2003-01-01
Algorithms were developed to control humidity in plant growth chambers used for research on bioregenerative life support at Kennedy Space Center. The algorithms used the computed water vapor pressure (based on measured air temperature and relative humidity) as the process variable, with time-proportioned outputs to operate the humidifier and de-humidifier. Algorithms were based upon proportional-integral-differential (PID) and fuzzy logic schemes and were implemented using I/O Control software (OPTO-22) to define and download the control logic to an autonomous programmable logic controller (PLC; OPTO-22 Ultimate Ethernet brain and assorted input-output modules), which performed the monitoring and control logic processing, as well as the physical control of the devices that effected the targeted environment in the chamber. During limited testing, the PLCs successfully implemented the intended control schemes and attained a control resolution for humidity of less than 1%. The algorithms have potential to be used not only with autonomous PLCs but could also be implemented within network-based supervisory control programs. This report documents unique control features that were implemented within the OPTO-22 framework and makes recommendations regarding future uses of the hardware and software for biological research by NASA.
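A hedged sketch of the control scheme described (PID on computed vapor pressure with time-proportioned output) follows. The saturation-pressure formula is the standard Magnus approximation, and the gains and cycle period are illustrative; none of these values come from the report:

    # Sketch: PID loop on computed water vapor pressure, output converted to a
    # time-proportioned relay duty cycle. Gains and cycle period are invented.

    from math import exp

    def svp_kpa(temp_c):
        """Saturation vapor pressure, Magnus approximation."""
        return 0.6112 * exp(17.62 * temp_c / (243.12 + temp_c))

    def vapor_pressure_kpa(temp_c, rh_percent):
        return svp_kpa(temp_c) * rh_percent / 100.0

    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_err = None

        def step(self, setpoint, measured):
            err = setpoint - measured
            self.integral += err * self.dt
            deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
            self.prev_err = err
            out = self.kp * err + self.ki * self.integral + self.kd * deriv
            return max(0.0, min(1.0, out))      # clamp to a 0..1 duty cycle

    def humidifier_on_seconds(duty, cycle_s=10.0):
        """Time-proportioned output: fraction of each cycle the relay is on."""
        return duty * cycle_s

    pid = PID(kp=2.0, ki=0.1, kd=0.05, dt=1.0)
    setpoint = vapor_pressure_kpa(23.0, 75.0)    # target: 75% RH at 23 C
    measured = vapor_pressure_kpa(23.0, 62.0)    # current chamber reading
    duty = pid.step(setpoint, measured)
    print(f"duty={duty:.2f}, on-time={humidifier_on_seconds(duty):.1f}s per 10s cycle")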
Quantum structure of negation and conjunction in human thought
Aerts, Diederik; Sozzo, Sandro; Veloz, Tomas
2015-01-01
We analyze in this paper the data collected in a set of experiments investigating how people combine natural concepts. We study the mutual influence of conceptual conjunction and negation by measuring the membership weights of a list of exemplars with respect to two concepts, e.g., Fruits and Vegetables, and their conjunction Fruits And Vegetables, but also their conjunction when one or both concepts are negated, namely, Fruits And Not Vegetables, Not Fruits And Vegetables, and Not Fruits And Not Vegetables. Our findings sharpen and advance existing analysis on conceptual combinations, revealing systematic deviations from classical (fuzzy set) logic and probability theory. And, more important, our results give further considerable evidence to the validity of our quantum-theoretic framework for the combination of two concepts. Indeed, the representation of conceptual negation naturally arises from the general assumptions of our two-sector Fock space model, and this representation faithfully agrees with the collected data. In addition, we find a new significant and a priori unexpected deviation from classicality, which can exactly be explained by assuming that human reasoning is the superposition of an “emergent reasoning” and a “logical reasoning,” and that these two processes are represented in a Fock space algebraic structure. PMID:26483715
Jaegers, Lisa; Dale, Ann Marie; Weaver, Nancy; Buchholz, Bryan; Welch, Laura; Evanoff, Bradley
2014-03-01
Intervention studies in participatory ergonomics (PE) are often difficult to interpret due to limited descriptions of program planning and evaluation. In an ongoing PE program with floor layers, we developed a logic model to describe our program plan, and process and summative evaluations designed to describe the efficacy of the program. The logic model was a useful tool for describing the program elements and subsequent modifications. The process evaluation measured how well the program was delivered as intended, and revealed the need for program modifications. The summative evaluation provided early measures of the efficacy of the program as delivered. Inadequate information on program delivery may lead to erroneous conclusions about intervention efficacy due to Type III error. A logic model guided the delivery and evaluation of our intervention and provides useful information to aid interpretation of results. © 2013 Wiley Periodicals, Inc.
Jaegers, Lisa; Dale, Ann Marie; Weaver, Nancy; Buchholz, Bryan; Welch, Laura; Evanoff, Bradley
2013-01-01
Background Intervention studies in participatory ergonomics (PE) are often difficult to interpret due to limited descriptions of program planning and evaluation. Methods In an ongoing PE program with floor layers, we developed a logic model to describe our program plan, and process and summative evaluations designed to describe the efficacy of the program. Results The logic model was a useful tool for describing the program elements and subsequent modifications. The process evaluation measured how well the program was delivered as intended, and revealed the need for program modifications. The summative evaluation provided early measures of the efficacy of the program as delivered. Conclusions Inadequate information on program delivery may lead to erroneous conclusions about intervention efficacy due to Type III error. A logic model guided the delivery and evaluation of our intervention and provides useful information to aid interpretation of results. PMID:24006097
Don't Fear Optimality: Sampling for Probabilistic-Logic Sequence Models
NASA Astrophysics Data System (ADS)
Thon, Ingo
One of the current challenges in artificial intelligence is modeling dynamic environments that change due to the actions or activities undertaken by people or agents. The task of inferring hidden states, e.g. the activities or intentions of people, based on observations is called filtering. Standard probabilistic models such as Dynamic Bayesian Networks are able to solve this task efficiently using approximative methods such as particle filters. However, these models do not support logical or relational representations. The key contribution of this paper is the upgrade of a particle filter algorithm for use with a probabilistic logical representation through the definition of a proposal distribution. The performance of the algorithm depends largely on how well this distribution fits the target distribution. We adopt the idea of logical compilation into Binary Decision Diagrams for sampling. This allows us to use the optimal proposal distribution which is normally prohibitively slow.
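As background for the filtering task and the role of the proposal distribution, here is a minimal bootstrap particle filter, where the proposal is simply the transition model (the standard naive choice). The paper's contribution, a better proposal compiled via Binary Decision Diagrams, is not reproduced here, and the model probabilities are invented:

    # Bootstrap particle filter sketch over a binary hidden activity.
    # Proposal = transition model; weights = observation likelihood; resample.

    import random

    def transition(state):
        """Hidden activity persists with prob 0.8, otherwise switches."""
        return state if random.random() < 0.8 else 1 - state

    def likelihood(obs, state):
        """Sensor agrees with the hidden state with prob 0.7."""
        return 0.7 if obs == state else 0.3

    def particle_filter(observations, n=1000):
        particles = [random.randint(0, 1) for _ in range(n)]
        for obs in observations:
            # propagate through the proposal (= transition model)
            particles = [transition(p) for p in particles]
            # weight by observation likelihood, then resample
            weights = [likelihood(obs, p) for p in particles]
            particles = random.choices(particles, weights=weights, k=n)
            yield sum(particles) / n     # P(activity = 1 | observations so far)

    random.seed(0)
    for t, belief in enumerate(particle_filter([1, 1, 1, 0, 1])):
        print(f"t={t}: P(active)={belief:.2f}")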
Answer Sets in a Fuzzy Equilibrium Logic
NASA Astrophysics Data System (ADS)
Schockaert, Steven; Janssen, Jeroen; Vermeir, Dirk; de Cock, Martine
Since its introduction, answer set programming has been generalized in many directions, to cater to the needs of real-world applications. As one of the most general “classical” approaches, answer sets of arbitrary propositional theories can be defined as models in the equilibrium logic of Pearce. Fuzzy answer set programming, on the other hand, extends answer set programming with the capability of modeling continuous systems. In this paper, we combine the expressiveness of both approaches, and define answer sets of arbitrary fuzzy propositional theories as models in a fuzzification of equilibrium logic. We show that the resulting notion of answer set is compatible with existing definitions, when the syntactic restrictions of the corresponding approaches are met. We furthermore locate the complexity of the main reasoning tasks at the second level of the polynomial hierarchy. Finally, as an illustration of its modeling power, we show how fuzzy equilibrium logic can be used to find strong Nash equilibria.
Parallel logic gates in synthetic gene networks induced by non-Gaussian noise.
Xu, Yong; Jin, Xiaoqin; Zhang, Huiqing
2013-11-01
The recent idea of logical stochastic resonance is verified in synthetic gene networks induced by non-Gaussian noise. We realize the switching between two kinds of logic gates under optimal moderate noise intensity by varying two different tunable parameters in a single gene network. Furthermore, in order to obtain more logic operations, thus providing additional information processing capacity, we obtain in a two-dimensional toggle switch model two complementary logic gates and realize the transformation between two logic gates via the methods of changing different parameters. These simulated results contribute to improve the computational power and functionality of the networks.
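A toy simulation may clarify the mechanism of logical stochastic resonance: a noisy bistable system driven by two binary inputs acts as a logic gate, and changing a bias parameter switches the gate type. Gaussian noise stands in here for the paper's non-Gaussian noise, and all parameters are invented:

    # Toy logical stochastic resonance: dx/dt = x - x^3 + bias + inputs + noise.
    # The sign of the occupied well is read out as the logic output.

    import random

    def gate_output(in1, in2, bias, noise_sd, steps=20000, dt=0.01):
        x = 0.0
        drive = 0.5 * (in1 + in2)        # inputs summed into the potential tilt
        tail = []
        for _ in range(steps):
            noise = random.gauss(0.0, noise_sd)
            x += (x - x ** 3 + bias + drive) * dt + noise * dt ** 0.5
            tail.append(x)
        mean_tail = sum(tail[-5000:]) / 5000.0
        return 1 if mean_tail > 0 else 0

    random.seed(1)
    for i1, i2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        in1, in2 = (0.3 if i1 else -0.3), (0.3 if i2 else -0.3)
        print(i1, i2, '->', gate_output(in1, in2, bias=0.2, noise_sd=0.5))
    # With a positive bias the deeper well corresponds to OR-type behavior at
    # moderate noise; flipping the bias sign biases the response toward AND,
    # so varying one parameter switches between the two gates.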
Fabac, Robert; Radošević, Danijel; Magdalenić, Ivan
2014-01-01
When strategic games are considered from a conceptual perspective that focuses on the rationality of participants' decision-making, the issues of modelling and simulation are rarely discussed. The well-known Rational Pigs matrix game has been analyzed relatively intensively in terms of reassessing the logic of two players involved in asymmetric situations as gluttons that differ significantly in their attributes. This paper presents a successful attempt to use an autogenerator for creating the framework of the game, including predefined scenarios and corresponding payoffs. The autogenerator offers flexibility in the specification of game parameters, which include variations in the number of simultaneous players and their features, game objects and their attributes, and some general game characteristics. In the proposed approach the autogenerator model was upgraded to enable program specification updates. To treat more complex strategic scenarios, we created the Rational Pigs Game Extended (RPGE), in which the introduction of a third glutton entails significant structural changes. In addition, due to particular attributes of the new player, "the tramp," one equilibrium point from the original game is destabilized, which influences the decision-making of rational players.
Magdalenić, Ivan
2014-01-01
When strategic games are considered from a conceptual perspective that focuses on the rationality of participants' decision-making, the issues of modelling and simulation are rarely discussed. The well-known Rational Pigs matrix game has been analyzed relatively intensively in terms of reassessing the logic of two players involved in asymmetric situations as gluttons that differ significantly in their attributes. This paper presents a successful attempt to use an autogenerator for creating the framework of the game, including predefined scenarios and corresponding payoffs. The autogenerator offers flexibility in the specification of game parameters, which include variations in the number of simultaneous players and their features, game objects and their attributes, and some general game characteristics. In the proposed approach the autogenerator model was upgraded to enable program specification updates. To treat more complex strategic scenarios, we created the Rational Pigs Game Extended (RPGE), in which the introduction of a third glutton entails significant structural changes. In addition, due to particular attributes of the new player, "the tramp," one equilibrium point from the original game is destabilized, which influences the decision-making of rational players. PMID:25254228
Johnson, Victoria A; Ronan, Kevin R; Johnston, David M; Peace, Robin
2016-11-01
A main weakness in the evaluation of disaster education programs for children is evaluators' propensity to judge program effectiveness based on changes in children's knowledge. Few studies have articulated an explicit program theory of how children's education would achieve desired outcomes and impacts related to disaster risk reduction in households and communities. This article describes the advantages of constructing program theory models for the purpose of evaluating disaster education programs for children. Following a review of some potential frameworks for program theory development, including the logic model, the program theory matrix, and the stage step model, the article provides working examples of these frameworks. The first example is the development of a program theory matrix used in an evaluation of ShakeOut, an earthquake drill practiced in two Washington State school districts. The model illustrates a theory of action; specifically, the effectiveness of school earthquake drills in preventing injuries and deaths during disasters. The second example is the development of a stage step model used for a process evaluation of What's the Plan Stan?, a voluntary teaching resource distributed to all New Zealand primary schools for curricular integration of disaster education. The model illustrates a theory of use; specifically, expanding the reach of disaster education for children through increased promotion of the resource. The process of developing the program theory models for the purpose of evaluation planning is discussed, as well as the advantages and shortcomings of the theory-based approaches. © 2015 Society for Risk Analysis.
Mailloux, Shay; Halámek, Jan; Katz, Evgeny
2014-03-07
A new Sense-and-Act system was realized by the integration of a biocomputing system, performing analytical processes, with a signal-responsive electrode. A drug-mimicking release process was triggered by biomolecular signals processed by different logic networks, including three concatenated AND logic gates or a 3-input OR logic gate. Biocatalytically produced NADH, controlled by various combinations of input signals, was used to activate the electrochemical system. A biocatalytic electrode associated with signal-processing "biocomputing" systems was electrically connected to another electrode coated with a polymer film, which was dissolved upon the formation of negative potential releasing entrapped drug-mimicking species, an enzyme-antibody conjugate, operating as a model for targeted immune-delivery and consequent "prodrug" activation. The system offers great versatility for future applications in controlled drug release and personalized medicine.
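The signal-processing layer of such a system reduces to Boolean networks over biomarker inputs; a minimal sketch of that layer only (the NADH and electrode chemistry is, of course, not modelled):

    # Release decision as user-chosen Boolean networks over three biomarker
    # inputs: three concatenated AND gates versus a 3-input OR. Illustrative.

    def concatenated_and(a, b, c):
        return (a and b) and c       # inputs chained through AND gates

    def or3(a, b, c):
        return a or b or c           # 3-input OR network

    for inputs in [(0, 0, 0), (1, 1, 0), (1, 1, 1)]:
        print(inputs,
              '-> AND network:', int(bool(concatenated_and(*inputs))),
              '| OR network:', int(bool(or3(*inputs))))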
Design of Learning Model of Logic and Algorithms Based on APOS Theory
ERIC Educational Resources Information Center
Hartati, Sulis Janu
2014-01-01
This research questions were "how do the characteristics of learning model of logic & algorithm according to APOS theory" and "whether or not these learning model can improve students learning outcomes". This research was conducted by exploration, and quantitative approach. Exploration used in constructing theory about the…
Regression Models and Fuzzy Logic Prediction of TBM Penetration Rate
NASA Astrophysics Data System (ADS)
Minh, Vu Trieu; Katushin, Dmitri; Antonov, Maksim; Veinthal, Renno
2017-03-01
This paper presents statistical analyses of rock engineering properties and the measured penetration rate of tunnel boring machine (TBM) based on the data of an actual project. The aim of this study is to analyze the influence of rock engineering properties including uniaxial compressive strength (UCS), Brazilian tensile strength (BTS), rock brittleness index (BI), the distance between planes of weakness (DPW), and the alpha angle (Alpha) between the tunnel axis and the planes of weakness on the TBM rate of penetration (ROP). Four…
Topological and Orthomodular Modeling of Context in Behavioral Science
NASA Astrophysics Data System (ADS)
Narens, Louis
2017-02-01
Two non-Boolean methods are discussed for modeling context in behavioral data and theory. The first is based on intuitionistic logic, which is similar to classical logic except that not every event has a complement. Its probability theory is also similar to classical probability theory, except that the definition of a probability function needs to be generalized to unions of events instead of applying only to unions of disjoint events. The generalization is needed because intuitionistic event spaces may not contain enough disjoint events for the classical definition to be effective. The second method develops a version of quantum logic for its underlying probability theory. It differs from the Hilbert space logic used in quantum mechanics as a foundation for quantum probability theory in a variety of ways. John von Neumann and others have commented on the lack of a relative frequency approach and a rational foundation for this probability theory. This article argues that its version of quantum probability theory does not have such issues. The method based on intuitionistic logic is useful for modeling cognitive interpretations that vary with context, for example, the mood of the decision maker, the context produced by the influence of other items in a choice experiment, etc. The method based on this article's quantum logic is useful for modeling probabilities across contexts, for example, how probabilities of events from different experiments are related.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Jason D.; Schroeppel, Richard Crabtree; Robertson, Perry J.
With the build-out of large transport networks utilizing optical technologies, more and more capacity is being made available. Innovations in Dense Wave Division Multiplexing (DWDM) and the elimination of optical-electrical-optical conversions have brought on advances in communication speeds as we move into 10 Gigabit Ethernet and above. Of course, there is a need to encrypt data on these optical links as the data traverses public and private network backbones. Unfortunately, as the communications infrastructure becomes increasingly optical, advances in encryption (done electronically) have failed to keep up. This project examines the use of optical logic for implementing encryption in the photonic domain to achieve the requisite encryption rates. This paper documents the innovations and advances of work first detailed in 'Photonic Encryption using All Optical Logic,' [1]. A discussion of underlying concepts can be found in SAND2003-4474. In order to realize photonic encryption designs, technology developed for electrical logic circuits must be translated to the photonic regime. This paper examines S-SEED devices and how discrete logic elements can be interconnected and cascaded to form an optical circuit. Because there is no known software that can model these devices at a circuit level, the functionality of S-SEED devices in an optical circuit was modeled in PSpice. PSpice allows modeling of the macro characteristics of the devices in context of a logic element as opposed to device level computational modeling. By representing light intensity as voltage, 'black box' models are generated that accurately represent the intensity response and logic levels in both technologies. By modeling the behavior at the systems level, one can incorporate systems design tools and a simulation environment to aid in the overall functional design. Each black box model takes certain parameters (reflectance, intensity, input response), and models the optical ripple and time delay characteristics. These 'black box' models are interconnected and cascaded in an encrypting/scrambling algorithm based on a study of candidate encryption algorithms. Demonstration circuits show how these logic elements can be used to form NAND, NOR, and XOR functions. This paper also presents functional analysis of a serial, low gate count demonstration algorithm suitable for scrambling/encryption using S-SEED devices.
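In the same spirit as the report's PSpice "black box" models, here is a loose software analogue: light intensity is a scalar (there, a voltage), a gate is a thresholded transfer function with level restoration, and gates cascade into an XOR. The levels and threshold are invented, and no device physics is modelled:

    # Black-box optical logic sketch: intensity as a scalar, thresholded NAND,
    # and a 4-NAND XOR built by cascading. Values are arbitrary units.

    HIGH, LOW = 1.0, 0.1      # optical logic levels
    THRESHOLD = 1.4           # switching point for the summed input intensities

    def nand(a, b):
        """Output restored to clean logic levels; restoration is what keeps
        'ripple' from accumulating across cascaded stages."""
        return LOW if (a + b) > THRESHOLD else HIGH

    def xor(a, b):
        n1 = nand(a, b)
        return nand(nand(a, n1), nand(b, n1))

    for a in (LOW, HIGH):
        for b in (LOW, HIGH):
            print(f"a={a} b={b}  NAND={nand(a, b)}  XOR={xor(a, b)}")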
Rethinking Social Barriers to Effective Adaptive Management
NASA Astrophysics Data System (ADS)
West, Simon; Schultz, Lisen; Bekessy, Sarah
2016-09-01
Adaptive management is an approach to environmental management based on learning-by-doing, where complexity, uncertainty, and incomplete knowledge are acknowledged and management actions are treated as experiments. However, while adaptive management has received significant uptake in theory, it remains elusively difficult to enact in practice. Proponents have blamed social barriers and have called for social science contributions. We address this gap by adopting a qualitative approach to explore the development of an ecological monitoring program within an adaptive management framework in a public land management organization in Australia. We ask what practices are used to enact the monitoring program and how they shape learning. We elicit a rich narrative through extensive interviews with a key individual, and analyze the narrative using thematic analysis. We discuss our results in relation to the concept of 'knowledge work' and Westley's (2002) framework for interpreting the strategies of adaptive managers ('managing through, in, out and up'). We find that enacting the program is conditioned by distinct and sometimes competing logics: scientific logics prioritizing experimentation and learning, public logics emphasizing accountability and legitimacy, and corporate logics demanding efficiency and effectiveness. In this context, implementing adaptive management entails practices of translation to negotiate tensions between objective and situated knowledge, external experts and organizational staff, and collegiate and hierarchical norms. Our contribution embraces the 'doing' of learning-by-doing and marks a shift from conceptualizing the social as an external barrier to adaptive management to be removed, to an approach that situates adaptive management as social knowledge practice.
Michalowski, Martin; Wilk, Szymon; Tan, Xing; Michalowski, Wojtek
2014-01-01
Clinical practice guidelines (CPGs) implement evidence-based medicine designed to help generate a therapy for a patient suffering from a single disease. When applied to a comorbid patient, the concurrent combination of treatment steps from multiple CPGs is susceptible to adverse interactions in the resulting combined therapy (i.e., a therapy established according to all considered CPGs). This inability to concurrently apply CPGs has been shown to be one of the key shortcomings of CPG uptake in a clinical setting [1]. Several research efforts are underway to address this issue, such as the K4CARE [2] and GuideLine INteraction Detection Assistant (GLINDA) [3] projects and our previous research on applying constraint logic programming to developing a consistent combined therapy for a comorbid patient [4]. However, there is no generalized framework for mitigation that effectively captures general characteristics of the problem while handling nuances such as time and ordering requirements imposed by specific CPGs. In this paper we propose a first-order logic-based (FOL) approach for developing a generalized framework of mitigation. This approach uses a meta-algorithm and entailment properties to mitigate (i.e., identify and address) adverse interactions introduced by concurrently applied CPGs. We use an illustrative case study of a patient suffering from type 2 diabetes being treated for an onset of severe rheumatoid arthritis to show the expressiveness and robustness of our proposed FOL-based approach, and we discuss its appropriateness as the basis for the generalized theory.
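A highly simplified sketch of the mitigation idea (not the paper's FOL meta-algorithm): a combined therapy as a set of action atoms, adverse interactions as rules, and mitigation by substituting a sanctioned alternative. The drugs and the rule below are illustrative only, not clinical guidance:

    # Toy mitigation loop: fire interaction rules against the combined therapy
    # until no conflict set is entailed by the remaining ground atoms.

    combined_therapy = {'nsaid', 'methotrexate', 'metformin'}

    # (conflict set, offending action, sanctioned substitute) -- invented rule.
    interaction_rules = [
        ({'nsaid', 'methotrexate'}, 'nsaid', 'paracetamol'),
    ]

    def mitigate(therapy, rules):
        therapy = set(therapy)
        applied = []
        changed = True
        while changed:                   # iterate until no rule fires
            changed = False
            for conflict, offender, substitute in rules:
                if conflict <= therapy:  # conflict entailed by current atoms
                    therapy.discard(offender)
                    therapy.add(substitute)
                    applied.append((offender, substitute))
                    changed = True
        return therapy, applied

    print(mitigate(combined_therapy, interaction_rules))
    # ({'paracetamol', 'methotrexate', 'metformin'}, [('nsaid', 'paracetamol')])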
Schulz, S; Romacker, M; Hahn, U
1998-01-01
The development of powerful and comprehensive medical ontologies that support formal reasoning on a large scale is one of the key requirements for clinical computing in the next millennium. Taxonomic medical knowledge, a major portion of these ontologies, is mainly characterized by generalization and part-whole relations between concepts. While reasoning in generalization hierarchies is quite well understood, no fully conclusive mechanism as yet exists for part-whole reasoning. The approach we take emulates part-whole reasoning via classification-based reasoning using SEP triplets, a special data structure for encoding part-whole relations that is fully embedded in the formal framework of standard description logics.
Schulz, S.; Romacker, M.; Hahn, U.
1998-01-01
The development of powerful and comprehensive medical ontologies that support formal reasoning on a large scale is one of the key requirements for clinical computing in the next millennium. Taxonomic medical knowledge, a major portion of these ontologies, is mainly characterized by generalization and part-whole relations between concepts. While reasoning in generalization hierarchies is quite well understood, no fully conclusive mechanism as yet exists for part-whole reasoning. The approach we take emulates part-whole reasoning via classification-based reasoning using SEP triplets, a special data structure for encoding part-whole relations that is fully embedded in the formal framework of standard description logics. Images Figure 3 PMID:9929335
ERIC Educational Resources Information Center
Fific, Mario; Little, Daniel R.; Nosofsky, Robert M.
2010-01-01
We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli…
Advanced Fault Diagnosis Methods in Molecular Networks
Habibi, Iman; Emamian, Effat S.; Abdi, Ali
2014-01-01
Analysis of the failure of cell signaling networks is an important topic in systems biology and has applications in target discovery and drug development. In this paper, some advanced methods for fault diagnosis in signaling networks are developed and then applied to a caspase network and an SHP2 network. The goal is to understand how, and to what extent, the dysfunction of molecules in a network contributes to the failure of the entire network. Network dysfunction (failure) is defined as failure to produce the expected outputs in response to the input signals. Vulnerability level of a molecule is defined as the probability of the network failure, when the molecule is dysfunctional. In this study, a method to calculate the vulnerability level of single molecules for different combinations of input signals is developed. Furthermore, a more complex yet biologically meaningful method for calculating the multi-fault vulnerability levels is suggested, in which two or more molecules are simultaneously dysfunctional. Finally, a method is developed for fault diagnosis of networks based on a ternary logic model, which considers three activity levels for a molecule instead of the previously published binary logic model, and provides equations for the vulnerabilities of molecules in a ternary framework. Multi-fault analysis shows that the pairs of molecules with high vulnerability typically include a highly vulnerable molecule identified by the single fault analysis. The ternary fault analysis for the caspase network shows that predictions obtained using the more complex ternary model are about the same as the predictions of the simpler binary approach. This study suggests that by increasing the number of activity levels the complexity of the model grows; however, the predictive power of the ternary model does not appear to be increased proportionally. PMID:25290670
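Single-fault vulnerability analysis can be sketched on a toy Boolean network: a molecule's vulnerability is the probability, over equally likely input patterns, that the output is wrong when that molecule is stuck at a faulty value. The three-node cascade below is invented and far smaller than the caspase or SHP2 networks:

    # Vulnerability sketch: enumerate all inputs, compare faulty vs. fault-free
    # output, average over stuck-at-0 and stuck-at-1 faults on each node.

    from itertools import product

    def network(inputs, stuck=None):
        """Toy cascade: out = (a AND b) OR c, with optional stuck-at fault."""
        val = dict(inputs)
        val['g1'] = val['a'] and val['b']
        if stuck and stuck[0] == 'g1':
            val['g1'] = stuck[1]
        val['out'] = val['g1'] or val['c']
        if stuck and stuck[0] == 'out':
            val['out'] = stuck[1]
        return val['out']

    def vulnerability(node):
        cases = list(product([0, 1], repeat=3))
        faults = 0
        for a, b, c in cases:
            inp = {'a': a, 'b': b, 'c': c}
            good = network(inp)
            faults += sum(network(inp, (node, v)) != good for v in (0, 1))
        return faults / (2 * len(cases))

    for node in ('g1', 'out'):
        print(node, vulnerability(node))   # 'out' dominates: 0.5 vs 0.25 for 'g1'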
People Like Logical Truth: Testing the Intuitive Detection of Logical Value in Basic Propositions.
Nakamura, Hiroko; Kawaguchi, Jun
2016-01-01
Recent studies on logical reasoning have suggested that people are intuitively aware of the logical validity of syllogisms or that they intuitively detect conflict between heuristic responses and logical norms via slight changes in their feelings. According to logical intuition studies, logically valid or heuristic logic no-conflict reasoning is fluently processed and induces positive feelings without conscious awareness. One criticism states that such effects of logicality disappear when confounding factors such as the content of syllogisms are controlled. The present study used abstract propositions and tested whether people intuitively detect logical value. Experiment 1 presented four logical propositions (conjunctive, biconditional, conditional, and material implications) regarding a target case and asked the participants to rate the extent to which they liked the statement. Experiment 2 tested the effects of matching bias, as well as intuitive logic, on the reasoners' feelings by manipulating whether the antecedent or consequent (or both) of the conditional was affirmed or negated. The results showed that both logicality and matching bias affected the reasoners' feelings, and people preferred logically true targets over logically false ones for all forms of propositions. These results suggest that people intuitively detect what is true from what is false during abstract reasoning. Additionally, a Bayesian mixed model meta-analysis of conditionals indicated that people's intuitive interpretation of the conditional "if p then q" fits better with the conditional probability, q given p.
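For reference, the classical truth tables of the proposition forms used in Experiment 1, together with the conditional-probability reading of "if p then q" that the closing meta-analysis favours. The exemplar data at the end are invented:

    # Classical evaluation of conjunction, biconditional, and the material
    # reading of the conditional, plus P(q|p) over a set of (p, q) cases.

    from itertools import product

    print("p q | p&q  p<->q  p->q")
    for p, q in product([1, 0], repeat=2):
        print(f"{p} {q} |  {p & q}      {int(p == q)}      {int((not p) or q)}")

    # Conditional-probability reading of "if p then q": P(q | p).
    cases = [(1, 1), (1, 1), (1, 0), (0, 1), (0, 0)]   # hypothetical exemplars
    p_cases = [c for c in cases if c[0] == 1]
    print("P(q|p) =", sum(q for _, q in p_cases) / len(p_cases))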
NASA Technical Reports Server (NTRS)
Yau, M.; Guarro, S.; Apostolakis, G.
1993-01-01
Dynamic Flowgraph Methodology (DFM) is a new approach developed to integrate the modeling and analysis of the hardware and software components of an embedded system. The objective is to complement the traditional approaches which generally follow the philosophy of separating out the hardware and software portions of the assurance analysis. In this paper, the DFM approach is demonstrated using the Titan 2 Space Launch Vehicle Digital Flight Control System. The hardware and software portions of this embedded system are modeled in an integrated framework. In addition, the time dependent behavior and the switching logic can be captured by this DFM model. In the modeling process, it is found that constructing decision tables for software subroutines is very time consuming. A possible solution is suggested. This approach makes use of a well-known numerical method, the Newton-Raphson method, to solve the equations implemented in the subroutines in reverse. Convergence can be achieved in a few steps.
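The suggested fix can be sketched directly: treat a subroutine y = g(x) as a black box and run the Newton-Raphson method "in reverse" to recover the input that yields a required output, using a numerical derivative. The cubic stand-in below is hypothetical, not flight-control code:

    # Solve g(x) = y* for x via Newton-Raphson with a central-difference
    # derivative, i.e., invert a subroutine to fill a decision-table entry.

    def subroutine(x):
        return x ** 3 + 2.0 * x - 5.0      # hypothetical embedded computation

    def invert(g, y_target, x0=0.0, tol=1e-10, max_iter=50, h=1e-6):
        x = x0
        for _ in range(max_iter):
            f = g(x) - y_target            # root of f(x) = g(x) - y*
            if abs(f) < tol:
                return x
            dfdx = (g(x + h) - g(x - h)) / (2.0 * h)   # central difference
            x -= f / dfdx                  # Newton-Raphson update
        raise RuntimeError("no convergence: decision-table entry undetermined")

    x = invert(subroutine, 10.0)
    print(x, subroutine(x))                # g(x) recovers 10.0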
Effects of Message Design Logic on the Communication of Intention.
ERIC Educational Resources Information Center
O'Keefe, Barbara J.; Lambert, Bruce L.
In producing and comprehending messages, a communicator relies on a "message design logic" embodying an individual's knowledge about how to relate message forms and functions. According to this model, there are three different message design logics: (1) expressive, in which self-expression is the chief function, and affective and…
Engineered modular biomaterial logic gates for environmentally triggered therapeutic delivery
NASA Astrophysics Data System (ADS)
Badeau, Barry A.; Comerford, Michael P.; Arakawa, Christopher K.; Shadish, Jared A.; Deforest, Cole A.
2018-03-01
The successful transport of drug- and cell-based therapeutics to diseased sites represents a major barrier in the development of clinical therapies. Targeted delivery can be mediated through degradable biomaterial vehicles that utilize disease biomarkers to trigger payload release. Here, we report a modular chemical framework for imparting hydrogels with precise degradative responsiveness by using multiple environmental cues to trigger reactions that operate user-programmable Boolean logic. By specifying the molecular architecture and connectivity of orthogonal stimuli-labile moieties within material cross-linkers, we show selective control over gel dissolution and therapeutic delivery. To illustrate the versatility of this methodology, we synthesized 17 distinct stimuli-responsive materials that collectively yielded all possible YES/OR/AND logic outputs from input combinations involving enzyme, reductant and light. Using these hydrogels we demonstrate the first sequential and environmentally stimulated release of multiple cell lines in well-defined combinations from a material. We expect these platforms will find utility in several diverse fields including drug delivery, diagnostics and regenerative medicine.
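The Boolean behavior described here can be sketched as truth tables. The gate wiring below is illustrative (the paper reports 17 distinct material designs, none reproduced here); it only shows how YES/OR/AND outputs arise from combinations of the three reported stimuli.

```python
# Minimal sketch: YES/OR/AND degradation logic as truth tables over the
# three stimuli named in the abstract. Gate wiring is invented, not the
# paper's actual cross-linker designs.
from itertools import product

gates = {
    "YES(enzyme)":            lambda e, r, l: e,
    "OR(enzyme, reductant)":  lambda e, r, l: e or r,
    "AND(reductant, light)":  lambda e, r, l: r and l,
    "AND(enzyme, OR(r, l))":  lambda e, r, l: e and (r or l),
}

print("enzyme reductant light | " + " | ".join(gates))
for e, r, l in product([0, 1], repeat=3):
    row = " | ".join(str(int(fn(e, r, l))) for fn in gates.values())
    print(f"{e:6d} {r:9d} {l:5d} | {row}")
```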
Software Process Assurance for Complex Electronics
NASA Technical Reports Server (NTRS)
Plastow, Richard A.
2007-01-01
Complex Electronics (CE) now perform tasks that were previously handled in software, such as communication protocols. Many methods used to develop software bear a close resemblance to CE development. Field Programmable Gate Arrays (FPGAs) can have over a million logic gates while system-on-chip (SOC) devices can combine a microprocessor, input and output channels, and sometimes an FPGA for programmability. With this increased intricacy, the possibility of software-like bugs such as incorrect design, logic, and unexpected interactions within the logic is great. With CE devices obscuring the hardware/software boundary, we propose that mature software methodologies may be utilized with slight modifications in the development of these devices. Software Process Assurance for Complex Electronics (SPACE) is a research project that used standardized S/W Assurance/Engineering practices to provide an assurance framework for development activities. Tools such as checklists, best practices and techniques were used to detect missing requirements and bugs earlier in the development cycle, creating a development process for CE that was more easily maintained, consistent and configurable based on the device used.
Engineered modular biomaterial logic gates for environmentally triggered therapeutic delivery.
Badeau, Barry A; Comerford, Michael P; Arakawa, Christopher K; Shadish, Jared A; DeForest, Cole A
2018-03-01
The successful transport of drug- and cell-based therapeutics to diseased sites represents a major barrier in the development of clinical therapies. Targeted delivery can be mediated through degradable biomaterial vehicles that utilize disease biomarkers to trigger payload release. Here, we report a modular chemical framework for imparting hydrogels with precise degradative responsiveness by using multiple environmental cues to trigger reactions that operate user-programmable Boolean logic. By specifying the molecular architecture and connectivity of orthogonal stimuli-labile moieties within material cross-linkers, we show selective control over gel dissolution and therapeutic delivery. To illustrate the versatility of this methodology, we synthesized 17 distinct stimuli-responsive materials that collectively yielded all possible YES/OR/AND logic outputs from input combinations involving enzyme, reductant and light. Using these hydrogels we demonstrate the first sequential and environmentally stimulated release of multiple cell lines in well-defined combinations from a material. We expect these platforms will find utility in several diverse fields including drug delivery, diagnostics and regenerative medicine.
The logical foundations of forensic science: towards reliable knowledge
Evett, Ian
2015-01-01
The generation of observations is a technical process and the advances that have been made in forensic science techniques over the last 50 years have been staggering. But science is about reasoning—about making sense of observations. For the forensic scientist, this is the challenge of interpreting a pattern of observations within the context of a legal trial. Here too, there have been major advances over recent years and there is a broad consensus among serious thinkers, both scientific and legal, that the logical framework is furnished by Bayesian inference (Aitken et al. Fundamentals of Probability and Statistical Evidence in Criminal Proceedings). This paper shows how the paradigm has matured, centred on the notion of the balanced scientist. Progress through the courts has not always been smooth and difficulties arising from recent judgments are discussed. Nevertheless, the future holds exciting prospects, in particular the opportunities for managing and calibrating the knowledge of the forensic scientists who assign the probabilities that are at the foundation of logical inference in the courtroom. PMID:26101288
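A minimal sketch of the Bayesian machinery the paper centres on: the scientist assigns the probability of the evidence under each competing proposition, and the resulting likelihood ratio updates the court's prior odds. All numbers below are illustrative, not drawn from any real case.

```python
# Minimal sketch: likelihood-ratio reasoning in forensic inference.
# The scientist assigns P(evidence | Hp) and P(evidence | Hd);
# the court supplies the prior odds. All values are invented.
p_e_given_hp = 0.95   # P(E | Hp): evidence probability if prosecution hypothesis true
p_e_given_hd = 0.001  # P(E | Hd): evidence probability if defence hypothesis true

likelihood_ratio = p_e_given_hp / p_e_given_hd

prior_odds = 1 / 1000                       # the court's prior, not the scientist's
posterior_odds = likelihood_ratio * prior_odds
posterior_prob = posterior_odds / (1 + posterior_odds)

print(f"LR = {likelihood_ratio:.0f}, posterior P(Hp|E) = {posterior_prob:.3f}")
```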
On Childhood and the Logic of Difference: Some Empirical Examples
ERIC Educational Resources Information Center
Dahlbeck, Johan
2012-01-01
This article argues that universal documents on children's rights can provide illustrative examples as to how childhood is identified as a unity using difference as an instrument. Using Gilles Deleuze's theorising on difference and sameness as a framework, the article seeks to relate the children's rights project with a critique of representation.…
Understanding Understanding Mathematics. Artificial Intelligence Memo No. 488.
ERIC Educational Resources Information Center
Michener, Edwina Rissland
This document is concerned with the important extra-logical knowledge that is often outside of traditional discussions in mathematics, and looks at some of the ingredients and processes involved in the understanding of mathematics. The goal is to develop a conceptual framework in which to talk about mathematical knowledge and to understand the…
Assessment: Monitoring & Evaluation in a Stabilisation Context
2010-09-15
http://www.oecd.org/dataoecd/23/27/35281194.pdf
b. SIDA (2004), The Logical Framework Approach. A summary of the theory behind the LFA method...en_21571361_34047972_39774574_1_1_1_1,00.pdf
3. SIDA (2004), Stefan Molund and Göran Schill, Looking Back, Moving Forward, Sida Evaluation Manual. Available at
Visual unit analysis: a descriptive approach to landscape assessment
R. J. Tetlow; S. R. J. Sheppard
1979-01-01
Analysis of the visible attributes of landscapes is an important component of the planning process. When landscapes are at regional scale, economical and effective methodologies are critical. The Visual Unit concept appears to offer a logical and useful framework for description and evaluation. The concept subdivides landscape into coherent, spatially-defined units....
ERIC Educational Resources Information Center
Ormond, Christine A.
2012-01-01
Current Australian teacher accreditation processes are impacting significantly on the expectations of teacher education courses, particularly in relation to graduate resilience, flexibility and capability. This paper uses a logical conceptual format to explain how writers at a Western Australian university prepared a new Secondary Degree course,…
Identifying and Prioritizing Critical Hardwood Resources
Sam C. Doak; Sharon Johnson; Marlyce Myers
1991-01-01
A logical framework is required to provide a focus for the implementation of a variety of landowner incentive techniques in accordance with existing goals to protect and enhance hardwood resources. A system is presented for identifying and prioritizing critical hardwood resources for site specific conservation purposes. Flexibility is built into this system so that...
Sense, Nonsense, and Violence: Levinas and the Internal Logic of School Shootings
ERIC Educational Resources Information Center
Keehn, Gabriel; Boyles, Deron
2015-01-01
Utilizing a broadly Levinasian framework, specifically the interplay among his ideas of possession, violence, and negation, Gabriel Keehn and Deron Boyles illustrate how the relatively recent sharp turn toward the hypercorporatized school and the concomitant transition of the student from simple (potential) customer to a type of hybrid…
Muslim Schools in Britain: Challenging Mobilisations or Logical Developments?
ERIC Educational Resources Information Center
Meer, Nasar
2007-01-01
There are currently over 100 independent and seven state-funded Muslim schools in Britain yet their place within the British education system remains a hotly debated issue. This article argues that Muslim mobilisations for the institutional and financial incorporation of more Muslim schools into the national framework are best understood as an…
From Prosumer to Prodesigner: Participatory News Consumption
ERIC Educational Resources Information Center
Hernández-Serrano, María-José; Renés-Arellano, Paula; Graham, Gary; Greenhill, Anita
2017-01-01
New democratic participation forms and collaborative productions of diverse audiences have emerged as a result of digital innovations in the online access to and consumption of news. The aim of this paper is to propose a conceptual framework based on the possibilities of Web 2.0, outlining the construction of a "social logic," which…
The Challenge and Promise of Complexity Theory for Teacher Education Research
ERIC Educational Resources Information Center
Cochran-Smith, Marilyn; Ell, Fiona; Ludlow, Larry; Grudnoff, Lexie; Aitken, Graeme
2014-01-01
Background/Context: In many countries, there are multiple studies intended to improve initial teacher education. These have generally focused on pieces of teacher education rather than wholes, and have used an underlying linear logic. It may be, however, that what is needed are new research questions and theoretical frameworks that account for…
How Learning Logic Programming Affects Recursion Comprehension
ERIC Educational Resources Information Center
Haberman, Bruria
2004-01-01
Recursion is a central concept in computer science, yet it is difficult for beginners to comprehend. Israeli high-school students learn recursion in the framework of a special modular program in computer science (Gal-Ezer & Harel, 1999). Some of them are introduced to the concept of recursion in two different paradigms: the procedural…
Decision Making. Level Two/Three. Career Guidance.
ERIC Educational Resources Information Center
Cooper, Irene; And Others
Materials contained in this career guidance unit are designed to provide the seven-, eight-, or nine-year-old student with a framework of logical steps for decision making and problem solving. The seventeen activities included in the unit vary in length from thirty to sixty minutes; the entire unit requires approximately ten hours of instructional…
A Variational Framework for Exemplar-Based Image Inpainting
2010-04-01
Physical Review 106(4), 620–30 (1957) 37. Jia, J., Tang, C.K.: Inference of segmented color and texture description by tensor voting. IEEE Trans. on PAMI 26...use of other patch error functions based on the comparison of structure tensors, which could provide a more robust estimation of the morphological
Simulation of Automatic Incidents Detection Algorithm on the Transport Network
ERIC Educational Resources Information Center
Nikolaev, Andrey B.; Sapego, Yuliya S.; Jakubovich, Anatolij N.; Berner, Leonid I.; Ivakhnenko, Andrey M.
2016-01-01
Management of traffic incidents is a functional part of the whole approach to solving traffic problems in the framework of intelligent transport systems. Development of an effective process of traffic incident management is an important part of the transport system. In this research, an algorithm based on fuzzy logic is suggested to detect traffic…
ERIC Educational Resources Information Center
Weeks, Cristal E.; Kanter, Jonathan W.; Bonow, Jordan T.; Landes, Sara J.; Busch, Andrew M.
2012-01-01
Functional analytic psychotherapy (FAP) provides a behavioral analysis of the psychotherapy relationship that directly applies basic research findings to outpatient psychotherapy settings. Specifically, FAP suggests that a therapist's in vivo (i.e., in-session) contingent responding to targeted client behaviors, particularly positive reinforcement…
ERIC Educational Resources Information Center
Watson, Cate; Michael, Maureen K.
2016-01-01
This article concerns policy implementation and examines the processes of translation through which policy may be enacted at local level. In particular, it focuses on education policy and constructions of teacher professionalism, drawing on a framework of critical logics--social, political and fantasmatic--which examine different dimensions of…
Policy Transfer via Markov Logic Networks
NASA Astrophysics Data System (ADS)
Torrey, Lisa; Shavlik, Jude
We propose using a statistical-relational model, the Markov Logic Network, for knowledge transfer in reinforcement learning. Our goal is to extract relational knowledge from a source task and use it to speed up learning in a related target task. We show that Markov Logic Networks are effective models for capturing both source-task Q-functions and source-task policies. We apply them via demonstration, which involves using them for decision making in an initial stage of the target task before continuing to learn. Through experiments in the RoboCup simulated-soccer domain, we show that transfer via Markov Logic Networks can significantly improve early performance in complex tasks, and that transferring policies is more effective than transferring Q-functions.
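A minimal sketch of the Markov Logic Network formalism assumed here, in plain Python rather than an MLN package: a possible world's unnormalized probability is exp of the weighted count of satisfied groundings of each first-order clause. The soccer-flavoured clause and its weight are invented for illustration.

```python
# Minimal sketch of MLN scoring: P(world) is proportional to
# exp(sum of weight * number of satisfied groundings per clause).
# Clause and weight are invented, not from the paper.
import math
from itertools import product

players = ["a1", "a2"]

def score(world, clauses):
    """world: dict of ground atoms -> bool; clauses: (weight, fn) pairs
    where fn counts satisfied groundings in the world."""
    return math.exp(sum(w * fn(world) for w, fn in clauses))

# Clause: "if a teammate is open, passing to it is a good action"
def n_pass_when_open(world):
    return sum(1 for p in players
               if not world[f"open({p})"] or world[f"pass({p})"])

clauses = [(1.5, n_pass_when_open)]

# Enumerate all worlds to normalize (feasible only for tiny domains).
atoms = [f"open({p})" for p in players] + [f"pass({p})" for p in players]
worlds = [dict(zip(atoms, vals))
          for vals in product([False, True], repeat=len(atoms))]
z = sum(score(w, clauses) for w in worlds)
best = max(worlds, key=lambda w: score(w, clauses))
print("P(best world) =", score(best, clauses) / z)
```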
An extensible and lightweight architecture for adaptive server applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gorton, Ian; Liu, Yan; Trivedi, Nihar
2008-07-10
Server applications augmented with behavioral adaptation logic can react to environmental changes, creating self-managing server applications with improved quality of service at runtime. However, developing adaptive server applications is challenging due to the complexity of the underlying server technologies and highly dynamic application environments. This paper presents an architecture framework, the Adaptive Server Framework (ASF), to facilitate the development of adaptive behavior for legacy server applications. ASF provides a clear separation between the implementation of adaptive behavior and the business logic of the server application. This means a server application can be extended with programmable adaptive features through the definition and implementation of control components defined in ASF. Furthermore, ASF is a lightweight architecture in that it incurs low CPU overhead and memory usage. We demonstrate the effectiveness of ASF through a case study, in which a server application dynamically determines the resolution and quality to scale an image based on the load of the server and network connection speed. The experimental evaluation demonstrates the performance gains possible by adaptive behavior and the low overhead introduced by ASF.
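A minimal sketch of the separation of concerns described above, with invented names (ControlComponent, serve_image): the adaptation logic maps monitored load and bandwidth to a quality setting, while the business logic stays unaware that adaptation exists.

```python
# Minimal sketch of the separation ASF advocates: business logic stays
# untouched while a pluggable control component adapts its parameters.
# All names are invented for illustration.
class ControlComponent:
    """Maps monitored environment state to a quality setting."""
    def decide_quality(self, cpu_load, net_kbps):
        if cpu_load > 0.8 or net_kbps < 256:
            return 0.3   # degrade gracefully under pressure
        if cpu_load > 0.5:
            return 0.6
        return 1.0

def serve_image(image, quality):
    """Business logic: scale an image; knows nothing about adaptation."""
    w, h = image
    return (int(w * quality), int(h * quality))

control = ControlComponent()
for cpu, net in [(0.2, 2000), (0.6, 2000), (0.9, 128)]:
    q = control.decide_quality(cpu, net)
    print(cpu, net, "->", serve_image((1024, 768), q))
```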
Involving Users to Improve the Collaborative Logical Framework
2014-01-01
In order to support collaboration in web-based learning, there is a need for intelligent support that facilitates its management during the design, development, and analysis of the collaborative learning experience and supports both students and instructors. At the aDeNu research group we have proposed the Collaborative Logical Framework (CLF) to create effective scenarios that support learning through interaction, exploration, discussion, and collaborative knowledge construction. This approach draws on artificial intelligence techniques to support and foster the effective involvement of students in collaboration. At the same time, the instructors' workload is reduced as some of their tasks—especially those related to the monitoring of the students' behavior—are automated. After introducing the CLF approach, in this paper, we present two formative evaluations with users carried out to improve the design of this collaborative tool and thus enrich the personalized support provided. In the first one, we analyze, following the layered evaluation approach, the results of an observational study with 56 participants. In the second one, we tested the infrastructure to gather emotional data when carrying out another observational study with 17 participants. PMID:24592196
Predit: A temporal predictive framework for scheduling systems
NASA Technical Reports Server (NTRS)
Paolucci, E.; Patriarca, E.; Sem, M.; Gini, G.
1992-01-01
Scheduling can be formalized as a Constraint Satisfaction Problem (CSP). Within this framework activities belonging to a plan are interconnected via temporal constraints that account for slack among them. Temporal representation must include methods for constraints propagation and provide a logic for symbolic and numerical deductions. In this paper we describe a support framework for opportunistic reasoning in constraint directed scheduling. In order to focus the attention of an incremental scheduler on critical problem aspects, some discrete temporal indexes are presented. They are also useful for the prediction of the degree of resources contention. The predictive method expressed through our indexes can be seen as a Knowledge Source for an opportunistic scheduler with a blackboard architecture.
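A minimal sketch of the temporal groundwork such a framework relies on: propagating earliest and latest start times through precedence constraints and reading off each activity's slack. Durations and precedences are invented; the paper's contention indexes are not reproduced here.

```python
# Minimal sketch: earliest/latest start propagation and slack in a
# tiny temporal CSP. Activities, durations and precedences are invented.
activities = {"A": 2, "B": 3, "C": 1}            # durations
precedes = [("A", "B"), ("A", "C"), ("B", "C")]  # A before B, etc.
horizon = 10

# Forward pass: earliest start times (list assumed topologically ordered)
est = {a: 0 for a in activities}
for u, v in precedes:
    est[v] = max(est[v], est[u] + activities[u])

# Backward pass: latest start times
lst = {a: horizon - activities[a] for a in activities}
for u, v in reversed(precedes):
    lst[u] = min(lst[u], lst[v] - activities[u])

for a in activities:
    print(f"{a}: earliest={est[a]}, latest={lst[a]}, slack={lst[a] - est[a]}")
```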
Cullen, Patricia; Clapham, Kathleen; Byrne, Jake; Hunter, Kate; Senserrick, Teresa; Keay, Lisa; Ivers, Rebecca
2016-08-01
Evidence indicates that Aboriginal people are underrepresented among driver licence holders in New South Wales, which has been attributed to licensing barriers for Aboriginal people. The Driving Change program was developed to provide culturally responsive licensing services that engage Aboriginal communities and build local capacity. This paper outlines the formative evaluation of the program, including logic model construction and exploration of contextual factors. Purposive sampling was used to identify key informants (n=12) from a consultative committee of key stakeholders and program staff. Semi-structured interviews were transcribed and thematically analysed. Data from interviews informed development of the logic model. Participants demonstrated high level of support for the program and reported that it filled an important gap. The program context revealed systemic barriers to licensing that were correspondingly targeted by specific program outputs in the logic model. Addressing underlying assumptions of the program involved managing local capacity and support to strengthen implementation. This formative evaluation highlights the importance of exploring program context as a crucial first step in logic model construction. The consultation process assisted in clarifying program goals and ensuring that the program was responding to underlying systemic factors that contribute to inequitable licensing access for Aboriginal people. Copyright © 2016 Elsevier Ltd. All rights reserved.
Zarcone, Alessandra; Padó, Sebastian; Lenci, Alessandro
2014-06-01
Logical metonymy resolution (begin a book → begin reading a book or begin writing a book) has traditionally been explained either through complex lexical entries (qualia structures) or through the integration of the implicit event via post-lexical access to world knowledge. We propose that recent work within the words-as-cues paradigm can provide a more dynamic model of logical metonymy, accounting for early and dynamic integration of complex event information depending on previous contextual cues (agent and patient). We first present a self-paced reading experiment on German subordinate sentences, where metonymic sentences and their paraphrased version differ only in the presence or absence of the clause-final target verb (Der Konditor begann die Glasur → Der Konditor begann, die Glasur aufzutragen/The baker began the icing → The baker began spreading the icing). Longer reading times at the target verb position in a high-typicality condition (baker + icing → spread) compared to a low-typicality (but still plausible) condition (child + icing → spread) suggest that we make use of knowledge activated by lexical cues to build expectations about events. The early and dynamic integration of event knowledge in metonymy interpretation is bolstered by further evidence from a second experiment using the probe recognition paradigm. Presenting covert events as probes following a high-typicality or a low-typicality metonymic sentence (Der Konditor begann die Glasur → AUFTRAGEN/The baker began the icing → SPREAD), we obtain an analogous effect of typicality at 100 ms interstimulus interval. © 2014 Cognitive Science Society, Inc.
Reid, Marianne; Botma, Yvonne
2012-06-01
The study undertook the development of a framework for expanding the public services available to children with biomedical healthcare needs related to HIV in South Africa. The study consisted of various component projects which were depicted as phases. The first phase was a descriptive quantitative analysis of healthcare services for children exposed to or infected by HIV, as rendered by the public health sector in the Free State Province. The second stage was informed by health policy research: a nominal group technique with stakeholders was used to identify strategies for expanding the healthcare services available to these children. The third phase consisted of workshops with stakeholders in order to devise and validate a framework for the expansion. The theory of change logic model served as the theoretical underpinning of the draft framework. Triangulated data from the literature and the preceding two phases of the study provided the empirical foundation. The problem identified was that of fragmented care delivered to children exposed to or infected with HIV, due to the 'over-verticalization' of programmes. A workshop was held during which the desired results, the possible factors that could influence the results, as well as the suggested strategies to expand and integrate the public services available to HIV-affected children were confirmed. Thus the framework was finalised during the validation workshop by the researchers in collaboration with the stakeholders.
UML activity diagram swimlanes in logic controller design
NASA Astrophysics Data System (ADS)
Grobelny, Michał; Grobelna, Iwona
2015-12-01
Logic controller behavior can be specified using various techniques, including UML activity diagrams and control Petri nets. Each technique has its advantages and disadvantages. Applying both specification types in one project allows the benefits of each to be combined. Additional elements of UML models make it possible to divide a specification into parts considered from different points of view (logic controller, user or system). The paper introduces an idea to use UML activity diagrams with swimlanes to increase the understandability of design models.
Guziolowski, Carito; Videla, Santiago; Eduati, Federica; Thiele, Sven; Cokelaer, Thomas; Siegel, Anne; Saez-Rodriguez, Julio
2013-09-15
Logic modeling is a useful tool to study signal transduction across multiple pathways. Logic models can be generated by training a network containing the prior knowledge to phospho-proteomics data. The training can be performed using stochastic optimization procedures, but these are unable to guarantee a global optimum or to report the complete family of feasible models. This, however, is essential to provide precise insight into the mechanisms underlying signal transduction and generate reliable predictions. We propose the use of Answer Set Programming to explore exhaustively the space of feasible logic models. Toward this end, we have developed caspo, an open-source Python package that provides a powerful platform to learn and characterize logic models by leveraging the rich modeling language and solving technologies of Answer Set Programming. We illustrate the usefulness of caspo by revisiting a model of pro-growth and inflammatory pathways in liver cells. We show that, if experimental error is taken into account, there are thousands (11 700) of models compatible with the data. Despite the large number, we can extract structural features from the models, such as links that are always (or never) present or modules that appear in a mutually exclusive fashion. To further characterize this family of models, we investigate the input-output behavior of the models. We find 91 behaviors across the 11 700 models and we suggest new experiments to discriminate among them. Our results underscore the importance of characterizing in a global and exhaustive manner the family of feasible models, with important implications for experimental design. caspo is freely available for download (license GPLv3) and as a web service at http://caspo.genouest.org/. Supplementary materials are available at Bioinformatics online. santiago.videla@irisa.fr.
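A minimal sketch of the exhaustive-enumeration idea behind caspo, in plain Python rather than Answer Set Programming: enumerate every candidate logic model over a tiny prior network and keep all models that fit the data within a tolerance, so the complete feasible family is reported rather than a single optimum. The network and data below are toy examples, not the liver-cell model from the paper.

```python
# Minimal sketch: exhaustive search over candidate Boolean logic models,
# keeping the complete family that fits toy data. Not caspo's API.
from itertools import product

# Candidate hyperedges: each may be absent or present in a model
candidate_edges = ["a->out", "b->out"]

observations = [  # (a, b, measured out)
    (0, 0, 0), (0, 1, 1), (1, 0, 0), (1, 1, 1),
]

def predict(model, a, b):
    inputs = []
    if "a->out" in model:
        inputs.append(a)
    if "b->out" in model:
        inputs.append(b)
    return int(any(inputs))  # OR semantics over incoming edges

feasible = []
for bits in product([0, 1], repeat=len(candidate_edges)):
    model = {e for e, bit in zip(candidate_edges, bits) if bit}
    mse = sum((predict(model, a, b) - out) ** 2 for a, b, out in observations)
    if mse <= 0:        # tolerance: exact fit here; caspo allows error
        feasible.append(model)

print("feasible models:", feasible)  # the complete family, not one optimum
```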
Guziolowski, Carito; Videla, Santiago; Eduati, Federica; Thiele, Sven; Cokelaer, Thomas; Siegel, Anne; Saez-Rodriguez, Julio
2013-01-01
Motivation: Logic modeling is a useful tool to study signal transduction across multiple pathways. Logic models can be generated by training a network containing the prior knowledge to phospho-proteomics data. The training can be performed using stochastic optimization procedures, but these are unable to guarantee a global optimum or to report the complete family of feasible models. This, however, is essential to provide precise insight in the mechanisms underlying signal transduction and generate reliable predictions. Results: We propose the use of Answer Set Programming to explore exhaustively the space of feasible logic models. Toward this end, we have developed caspo, an open-source Python package that provides a powerful platform to learn and characterize logic models by leveraging the rich modeling language and solving technologies of Answer Set Programming. We illustrate the usefulness of caspo by revisiting a model of pro-growth and inflammatory pathways in liver cells. We show that, if experimental error is taken into account, there are thousands (11 700) of models compatible with the data. Despite the large number, we can extract structural features from the models, such as links that are always (or never) present or modules that appear in a mutually exclusive fashion. To further characterize this family of models, we investigate the input–output behavior of the models. We find 91 behaviors across the 11 700 models and we suggest new experiments to discriminate among them. Our results underscore the importance of characterizing in a global and exhaustive manner the family of feasible models, with important implications for experimental design. Availability: caspo is freely available for download (license GPLv3) and as a web service at http://caspo.genouest.org/. Supplementary information: Supplementary materials are available at Bioinformatics online. Contact: santiago.videla@irisa.fr PMID:23853063
Intuitive Logic Revisited: New Data and a Bayesian Mixed Model Meta-Analysis
Singmann, Henrik; Klauer, Karl Christoph; Kellen, David
2014-01-01
Recent research on syllogistic reasoning suggests that the logical status (valid vs. invalid) of even difficult syllogisms can be intuitively detected via differences in conceptual fluency between logically valid and invalid syllogisms when participants are asked to rate how much they like a conclusion following from a syllogism (Morsanyi & Handley, 2012). These claims of an intuitive logic are at odds with most theories on syllogistic reasoning, which posit that detecting the logical status of difficult syllogisms requires effortful and deliberate cognitive processes. We present new data replicating the effects reported by Morsanyi and Handley, but show that this effect is eliminated when controlling for a possible confound in terms of conclusion content. Additionally, we reanalyze three studies without this confound with a Bayesian mixed model meta-analysis (i.e., controlling for participant and item effects), which provides evidence for the null hypothesis and against Morsanyi and Handley's claim. PMID:24755777
A new approach of active compliance control via fuzzy logic control for multifingered robot hand
NASA Astrophysics Data System (ADS)
Jamil, M. F. A.; Jalani, J.; Ahmad, A.
2016-07-01
Safety is a vital issue in Human-Robot Interaction (HRI). In order to guarantee safety in HRI, model reference impedance control can be a very useful approach for introducing compliant control. In particular, this paper establishes a fuzzy logic compliance control (i.e. active compliance control) to reduce impact and forces during physical interaction between humans/objects and robots. Exploiting a virtual mass-spring-damper system allows us to determine a desired compliance level by understanding the behavior of the model reference impedance control. The performance of the fuzzy logic compliance control is tested in simulation for a robotic hand known as the RED Hand. The results show that fuzzy logic is a feasible control approach, particularly to control position and to provide compliant control. In addition, the fuzzy logic control allows us to simplify the controller design process (i.e. avoid complex computation) when dealing with nonlinearities and uncertainties.
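A minimal sketch of the approach as described, assuming a simple virtual mass-spring-damper reference model and one crude fuzzy rule that softens stiffness when the contact force is "large"; the gains, membership function, and rule are invented and are not the RED Hand controller.

```python
# Minimal sketch: a virtual mass-spring-damper reference model gives the
# compliant response; a fuzzy rule softens stiffness under large forces.
# All parameters are illustrative.
def fuzzy_softening(force):
    """Membership of |force| in the fuzzy set 'large' (0..1 ramp)."""
    lo, hi = 2.0, 10.0
    return min(1.0, max(0.0, (abs(force) - lo) / (hi - lo)))

def simulate(f_ext, m=1.0, b=8.0, k=100.0, dt=0.001, steps=3000):
    x, v = 0.0, 0.0
    for _ in range(steps):
        mu = fuzzy_softening(f_ext)
        k_eff = k * (1.0 - 0.8 * mu)      # soften stiffness up to 80%
        a = (f_ext - b * v - k_eff * x) / m
        v += a * dt
        x += v * dt
    return x  # steady-state deflection under the contact force

for f in (1.0, 5.0, 12.0):
    print(f"force={f:5.1f} N -> deflection={simulate(f)*1000:6.1f} mm")
```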
Eco-Logic: Logic-Based Approaches to Ecological Modelling
Daniel L. Schmoldt
1991-01-01
This paper summarizes the simulation research carried out during 1984-1989 at the University of Edinburgh. Two primary objectives of their research are 1) to provide tools for manipulating simulation models (i.e., implementation tools) and 2) to provide advice on conceptualizing real-world phenomena into an idealized representation for simulation (i.e., model design...
Logic-based models in systems biology: a predictive and parameter-free network analysis method†
Wynn, Michelle L.; Consul, Nikita; Merajver, Sofia D.
2012-01-01
Highly complex molecular networks, which play fundamental roles in almost all cellular processes, are known to be dysregulated in a number of diseases, most notably in cancer. As a consequence, there is a critical need to develop practical methodologies for constructing and analysing molecular networks at a systems level. Mathematical models built with continuous differential equations are an ideal methodology because they can provide a detailed picture of a network’s dynamics. To be predictive, however, differential equation models require that numerous parameters be known a priori and this information is almost never available. An alternative dynamical approach is the use of discrete logic-based models that can provide a good approximation of the qualitative behaviour of a biochemical system without the burden of a large parameter space. Despite their advantages, there remains significant resistance to the use of logic-based models in biology. Here, we address some common concerns and provide a brief tutorial on the use of logic-based models, which we motivate with biological examples. PMID:23072820
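A minimal sketch of the parameter-free modeling style the tutorial advocates: a three-node Boolean network updated synchronously until it reaches an attractor, with no rate constants required. The wiring is a generic toy motif, not a network from the paper.

```python
# Minimal sketch of a parameter-free logic-based model: a three-node
# Boolean network updated synchronously until an attractor is reached.
def step(state):
    a, b, c = state
    return (
        int(not c),      # A is inhibited by C
        int(a),          # B is activated by A
        int(a and b),    # C requires both A and B
    )

state, seen = (1, 0, 0), []
while state not in seen:
    seen.append(state)
    state = step(state)
print("trajectory:", seen)
print("attractor re-entered at state:", state)
```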
NASA Technical Reports Server (NTRS)
Howard, Ayanna
2005-01-01
The Fuzzy Logic Engine is a software package that enables users to embed fuzzy-logic modules into their application programs. Fuzzy logic is useful as a means of formulating human expert knowledge and translating it into software to solve problems. Fuzzy logic provides flexibility for modeling relationships between input and output information and is distinguished by its robustness with respect to noise and variations in system parameters. In addition, linguistic fuzzy sets and conditional statements allow systems to make decisions based on imprecise and incomplete information. The user of the Fuzzy Logic Engine need not be an expert in fuzzy logic: it suffices to have a basic understanding of how linguistic rules can be applied to the user's problem. The Fuzzy Logic Engine is divided into two modules: (1) a graphical-interface software tool for creating linguistic fuzzy sets and conditional statements and (2) a fuzzy-logic software library for embedding fuzzy processing capability into current application programs. The graphical- interface tool was developed using the Tcl/Tk programming language. The fuzzy-logic software library was written in the C programming language.
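A minimal sketch of what such an engine does with linguistic rules, assuming invented fuzzy sets and rules (this is not the NASA package's API): fuzzify a crisp input, fire IF-THEN rules, and defuzzify with a weighted average.

```python
# Minimal sketch: linguistic fuzzy sets and conditional statements.
# Sets, rules, and outputs are invented for illustration.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer(temp):
    # Fuzzify: how "cold" / "warm" / "hot" is the temperature?
    cold = tri(temp, -10, 0, 15)
    warm = tri(temp, 5, 20, 35)
    hot  = tri(temp, 25, 40, 55)
    # Rules: IF cold THEN heater=0.9; IF warm THEN 0.3; IF hot THEN 0.0
    rules = [(cold, 0.9), (warm, 0.3), (hot, 0.0)]
    num = sum(mu * out for mu, out in rules)
    den = sum(mu for mu, _ in rules)
    return num / den if den else 0.0   # weighted-average defuzzification

for t in (2, 18, 30):
    print(f"temp={t} -> heater setting {infer(t):.2f}")
```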
Agent Communication for Dynamic Belief Update
NASA Astrophysics Data System (ADS)
Kobayashi, Mikito; Tojo, Satoshi
Thus far, various formalizations of rational/logical agent models have been proposed. In this paper, we include the notions of communication channel and belief modality in update logic, and introduce Belief Update Logic (BUL). First, we discuss how we can reformalize the inform action of FIPA-ACL in terms of communication channels, which represent connections between agents. Thus, our agents can send a message only when they believe there is, and there actually is, a channel between them and the receiver. Then, we present a static belief logic (BL) and show its soundness and completeness. Next, we develop the logic into BUL, which can update a Kripke model by the inform action; we show that in the updated model the belief operator also satisfies K45. Thereafter, we show that every sentence in BUL can be translated into BL; thus, we can contend that BUL is also sound and complete. Furthermore, we discuss the features of BUL, including the case of inconsistent information, as well as channel transmission. Finally, we summarize our contribution and discuss some future issues.
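A minimal sketch of the update mechanism described, with a toy two-world model standing in for a Kripke structure: an inform action succeeds only if a channel exists and the sender believes the content, and it prunes the receiver's accessible worlds to those satisfying the announced fact. Names and syntax are invented, not the paper's formal language.

```python
# Minimal sketch: belief update by an inform action over a channel.
# A toy stand-in for a Kripke model, not the BUL formalism itself.
worlds = {"w1": {"p": True}, "w2": {"p": False}}
access = {"alice": {"w1", "w2"}, "bob": {"w1", "w2"}}  # belief accessibility
channels = {("alice", "bob")}

def believes(agent, prop):
    return all(worlds[w][prop] for w in access[agent])

def inform(sender, receiver, prop):
    # Send only if a channel exists and the sender believes the content
    if (sender, receiver) in channels and believes(sender, prop):
        access[receiver] = {w for w in access[receiver] if worlds[w][prop]}

access["alice"] = {"w1"}                         # alice already believes p
print("bob believes p?", believes("bob", "p"))   # False
inform("alice", "bob", "p")
print("bob believes p?", believes("bob", "p"))   # True after update
```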
DoD Modeling and Simulation (M&S) Glossary
1998-01-01
modeling and simulation. It is the group responsible for establishing the need for the ...logical data grouping (in the logical data model) to which it belongs (DoD Publication 8320.1-M-1 and NBS Pub 500-149, references (q) and (u)) 399...Department of the Navy Modeling and Simulation Technical Support Group Demonstration of Dynamic Object Oriented Requirements System Disk
NASA Astrophysics Data System (ADS)
Marcati, Alberto; Prete, M. Irene; Mileti, Antonio; Cortese, Mario; Zodiatis, George; Karaolia, Andria; Gauci, Adam; Drago, Aldo
2016-11-01
This paper presents a case study on the management of users' engagement in the development of a new technology. Based on the experience of MEDESS-4MS, an integrated operational model for oil spill Decision Support System covering the whole Mediterranean Sea, the case study is aimed at the development of a framework for user engagement and for the management of its dual logic. Indeed, users may play a dual role in the innovation process, contributing to both the design of the innovation and its promotion. Users contribute to shaping the innovation, by aggregating and integrating knowledge, and they facilitate its diffusion, by adopting the innovation and fostering its adoption within the socio-economic system.
When fast logic meets slow belief: Evidence for a parallel-processing model of belief bias.
Trippas, Dries; Thompson, Valerie A; Handley, Simon J
2017-05-01
Two experiments pitted the default-interventionist account of belief bias against a parallel-processing model. According to the former, belief bias occurs because a fast, belief-based evaluation of the conclusion pre-empts a working-memory demanding logical analysis. In contrast, according to the latter both belief-based and logic-based responding occur in parallel. Participants were given deductive reasoning problems of variable complexity and instructed to decide whether the conclusion was valid on half the trials or to decide whether the conclusion was believable on the other half. When belief and logic conflict, the default-interventionist view predicts that it should take less time to respond on the basis of belief than logic, and that the believability of a conclusion should interfere with judgments of validity, but not the reverse. The parallel-processing view predicts that beliefs should interfere with logic judgments only if the processing required to evaluate the logical structure exceeds that required to evaluate the knowledge necessary to make a belief-based judgment, and vice versa otherwise. Consistent with this latter view, for the simplest reasoning problems (modus ponens), judgments of belief resulted in lower accuracy than judgments of validity, and believability interfered more with judgments of validity than the converse. For problems of moderate complexity (modus tollens and single-model syllogisms), the interference was symmetrical, in that validity interfered with belief judgments to the same degree that believability interfered with validity judgments. For the most complex (three-term multiple-model syllogisms), conclusion believability interfered more with judgments of validity than vice versa, in spite of the significant interference from conclusion validity on judgments of belief.
Continuous variables logic via coupled automata using a DNAzyme cascade with feedback.
Lilienthal, S; Klein, M; Orbach, R; Willner, I; Remacle, F; Levine, R D
2017-03-01
The concentration of molecules can be changed by chemical reactions and thereby offers a continuous readout. Yet computer architecture is cast in textbooks in terms of binary-valued Boolean variables. To enable reactive chemical systems to compute, we show how, using the Cox interpretation of probability theory, one can transcribe the equations of chemical kinetics as a sequence of coupled logic gates operating on continuous variables. It is discussed how the distinct chemical identity of a molecule allows us to create a common language for chemical kinetics and Boolean logic. Specifically, the logic AND operation is shown to be equivalent to a bimolecular process. The logic XOR operation represents chemical processes that take place concurrently. The values of the rate constants enter the logic scheme as inputs. By designing a reaction scheme with feedback we endow the logic gates with a built-in memory, because their output then depends on the input and also on the present state of the system. Technically, such a logic machine is an automaton. We report an experimental realization of three such coupled automata using a DNAzyme multilayer signaling cascade. A simple model verifies analytically that our experimental scheme provides an integrator generating a power series that is third order in time. The model identifies two parameters that govern the kinetics and shows how the initial concentrations of the substrates are the coefficients in the power series.
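A minimal numerical sketch of the paper's central mapping: a bimolecular step A + B → P acts as a continuous AND gate, because d[P]/dt = k[A][B] is appreciable only when both inputs are present. The rate constant, concentrations, and Euler integration below are illustrative.

```python
# Minimal sketch: a bimolecular reaction A + B -> P as a continuous AND
# gate, since d[P]/dt = k[A][B] vanishes if either input is absent.
def product_formed(a0, b0, k=1.0, dt=0.001, t_end=1.0):
    a, b, p = a0, b0, 0.0
    t = 0.0
    while t < t_end:
        rate = k * a * b        # AND: zero if either input is absent
        a -= rate * dt
        b -= rate * dt
        p += rate * dt
        t += dt
    return p

for a0, b0 in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
    print(f"[A]0={a0}, [B]0={b0} -> [P](1s) = {product_formed(a0, b0):.3f}")
```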
A framework for quantification of groundwater dynamics - concepts and hydro(geo-)logical metrics
NASA Astrophysics Data System (ADS)
Haaf, Ezra; Heudorfer, Benedikt; Stahl, Kerstin; Barthel, Roland
2017-04-01
Fluctuation patterns in groundwater hydrographs are generally assumed to contain information on aquifer characteristics, climate and environmental controls. However, attempts to disentangle this information and map the dominant controls have been few. This is due to the substantial heterogeneity and complexity of groundwater systems, which is reflected in the abundance of morphologies of groundwater time series. To describe the structure and shape of hydrographs, descriptive terms like "slow"/"fast" or "flashy"/"inert" are frequently used, which are subjective, irreproducible and limited. This lack of objective and refined concepts limits approaches for regionalization of hydrogeological characteristics as well as our understanding of dominant processes controlling groundwater dynamics. Therefore, we propose a novel framework for groundwater hydrograph characterization in an attempt to categorize morphologies explicitly and quantitatively based on perceptual concepts of aspects of the dynamics. This quantitative framework is inspired by the existing and operational eco-hydrological classification frameworks for streamflow. The need for a new framework for groundwater systems is justified by the fundamental differences between the state variable groundwater head and the flow variable streamflow. Conceptually, we extracted exemplars of specific dynamic patterns, attributing descriptive terms as a means of systematisation. Metrics, primarily taken from the streamflow literature, were subsequently adapted to groundwater and assigned to the described patterns as a means of quantification. In this study, we focused on the particularities of groundwater as a state variable. Furthermore, we investigated the descriptive skill of individual metrics as well as their usefulness for groundwater hydrographs. The ensemble of categorized metrics results in a framework which can be used to describe and quantify groundwater dynamics. It is a promising tool for the setup of a successful similarity classification framework for groundwater hydrographs. However, the overabundance of metrics available calls for a systematic redundancy analysis of the metrics, which we describe in a second study (Heudorfer et al., 2017). Heudorfer, B., Haaf, E., Barthel, R., Stahl, K., 2017. A framework for quantification of groundwater dynamics - redundancy and transferability of hydro(geo-)logical metrics. EGU General Assembly 2017, Vienna, Austria.
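As a hedged illustration of the kind of metric such a framework catalogues, the sketch below adapts a streamflow flashiness measure (the Richards-Baker index: summed absolute day-to-day changes divided by summed levels) to a groundwater-head series; the series are synthetic and the metric choice is an assumption, not one prescribed by the paper.

```python
# Minimal sketch: a "flashiness" metric applied to groundwater heads,
# adapted from the Richards-Baker streamflow index. Synthetic series.
def flashiness(series):
    changes = sum(abs(b - a) for a, b in zip(series, series[1:]))
    return changes / sum(series)

inert  = [10.0, 10.1, 10.2, 10.2, 10.1, 10.0]   # slow, damped hydrograph
flashy = [10.0, 12.5, 9.0, 13.0, 8.5, 12.0]     # fast, responsive one
print("inert :", round(flashiness(inert), 4))
print("flashy:", round(flashiness(flashy), 4))
```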
NASA Astrophysics Data System (ADS)
El-Sebakhy, Emad A.
2009-09-01
Pressure-volume-temperature (PVT) properties are very important in reservoir engineering computations. There are many empirical approaches for predicting various PVT properties based on empirical correlations and statistical regression models. In the last decade, researchers have utilized neural networks to develop more accurate PVT correlations. These achievements of neural networks have opened the door for data mining techniques to play a major role in the oil and gas industry. Unfortunately, the developed neural network correlations are often limited, and global correlations are usually less accurate compared to local correlations. Recently, adaptive neuro-fuzzy inference systems have been proposed as a new intelligence framework for both prediction and classification based on a fuzzy clustering optimization criterion and ranking. This paper proposes neuro-fuzzy inference systems for estimating PVT properties of crude oil systems. This new framework is an efficient hybrid intelligence machine learning scheme for modeling the kind of uncertainty associated with vagueness and imprecision. We briefly describe the learning steps and the use of the Takagi-Sugeno-Kang model and the Gustafson-Kessel clustering algorithm with K detected clusters from the given database. The approach has featured in a wide range of medical, power control system, and business journals, often with promising results. A comparative study is carried out to compare the performance of this new framework with the most popular modeling techniques, such as neural networks, nonlinear regression, and empirical correlation algorithms. The results show that neuro-fuzzy systems are accurate and reliable and outperform most of the existing forecasting techniques. Future work could apply neuro-fuzzy systems to clustering 3D seismic data, identification of lithofacies types, and other reservoir characterization tasks.
Improvements to Earthquake Location with a Fuzzy Logic Approach
NASA Astrophysics Data System (ADS)
Gökalp, Hüseyin
2018-01-01
In this study, improvements to earthquake location were investigated using the fuzzy logic approach proposed by Lin and Sanford (Bull Seismol Soc Am 91:82-93, 2001). The method has certain advantages compared to inverse methods in terms of dealing with the uncertainties of arrival times and reading errors. Adopting this approach, epicentral locations were determined from the results in a fuzzy logic space that accounts for uncertainties in the velocity models. To map the uncertainties in arrival times into the fuzzy logic space, a trapezoidal membership function was constructed directly from the travel-time difference between two stations for the P- and S-arrival times, instead of from P- and S-wave velocity models, eliminating the need to obtain information about the velocity structure of the study area. The results showed that this method worked most effectively when earthquakes occurred away from a network or when the arrival time data contained phase reading errors. To resolve the problems related to determining the epicentral locations of the events, a forward modeling method like the grid search technique was used, applying different logical operations (i.e., intersection, union, and their combination) within a fuzzy logic approach. The locations of the events were derived from the fuzzy logic outputs obtained by searching over a gridded region. Determining locations by defuzzifying only the grid points with a membership value of 1, obtained by normalizing the maximum fuzzy output values, resulted in more reliable epicentral locations for the earthquakes than the other approaches. In addition, throughout the process, the center-of-gravity method was used as the defuzzification operation.
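A minimal sketch of the grid-search idea with intersection logic: for each grid point, the predicted inter-station travel-time difference is compared with the observed one through a trapezoidal membership function, and memberships are intersected (taking the minimum) across station pairs. The uniform velocity, station geometry, and membership widths below are invented.

```python
# Minimal sketch: fuzzy grid search for an epicenter. For each grid
# point, compare predicted inter-station travel-time differences with
# observed ones via a trapezoidal membership, then intersect (min).
import math

V = 6.0  # km/s, single uniform velocity for illustration
stations = {"STA1": (0.0, 0.0), "STA2": (40.0, 0.0), "STA3": (0.0, 40.0)}
true_epi = (15.0, 10.0)

def tt(epi, sta):
    return math.dist(epi, stations[sta]) / V

# "Observed" inter-station arrival-time differences from the true source
obs = {pair: tt(true_epi, pair[0]) - tt(true_epi, pair[1])
       for pair in [("STA1", "STA2"), ("STA1", "STA3"), ("STA2", "STA3")]}

def trap(x, a, b, c, d):
    """Trapezoidal membership: 1 on [b,c], linear ramps on [a,b], [c,d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

best, best_mu = None, -1.0
for i in range(41):
    for j in range(41):
        gp = (float(i), float(j))
        # Intersection (min) of memberships over all station pairs
        mu = min(trap(tt(gp, p[0]) - tt(gp, p[1]),
                      obs[p] - 0.5, obs[p] - 0.1, obs[p] + 0.1, obs[p] + 0.5)
                 for p in obs)
        if mu > best_mu:
            best, best_mu = gp, mu

print("best grid point:", best, "membership:", round(best_mu, 2))
```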
People Like Logical Truth: Testing the Intuitive Detection of Logical Value in Basic Propositions
2016-01-01
Recent studies on logical reasoning have suggested that people are intuitively aware of the logical validity of syllogisms or that they intuitively detect conflict between heuristic responses and logical norms via slight changes in their feelings. According to logical intuition studies, logically valid or heuristic logic no-conflict reasoning is fluently processed and induces positive feelings without conscious awareness. One criticism states that such effects of logicality disappear when confounding factors such as the content of syllogisms are controlled. The present study used abstract propositions and tested whether people intuitively detect logical value. Experiment 1 presented four logical propositions (conjunctive, biconditional, conditional, and material implications) regarding a target case and asked the participants to rate the extent to which they liked the statement. Experiment 2 tested the effects of matching bias, as well as intuitive logic, on the reasoners’ feelings by manipulating whether the antecedent or consequent (or both) of the conditional was affirmed or negated. The results showed that both logicality and matching bias affected the reasoners’ feelings, and people preferred logically true targets over logically false ones for all forms of propositions. These results suggest that people intuitively detect what is true from what is false during abstract reasoning. Additionally, a Bayesian mixed model meta-analysis of conditionals indicated that people’s intuitive interpretation of the conditional “if p then q” fits better with the conditional probability, q given p. PMID:28036402
2014-01-01
Background There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. Methods This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. Results The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. Conclusions The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. Trial registration number PROSPERO registration number: CRD42013004037. PMID:24885751
Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth
2014-05-10
There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.
Molecular processors: from qubits to fuzzy logic.
Gentili, Pier Luigi
2011-03-14
Single molecules or their assemblies are information processing devices. Herein it is demonstrated how it is possible to process different types of logic through molecules. As long as decoherent effects are maintained far away from a pure quantum mechanical system, quantum logic can be processed. If the collapse of superimposed or entangled wavefunctions is unavoidable, molecules can still be used to process either crisp (binary or multi-valued) or fuzzy logic. A way of implementing fuzzy inference engines is described and supported by the examples of molecular fuzzy logic systems devised so far. Fuzzy logic is drawing attention in the field of artificial intelligence, because it models human reasoning quite well. This ability may be due to some structural analogies between a fuzzy logic system and the human nervous system. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Logic Models: A Tool for Designing and Monitoring Program Evaluations. REL 2014-007
ERIC Educational Resources Information Center
Lawton, Brian; Brandon, Paul R.; Cicchinelli, Louis; Kekahio, Wendy
2014-01-01
This introduction to logic models as a tool for designing program evaluations defines the major components of education programs--resources, activities, outputs, and short-, mid-, and long-term outcomes--and uses an example to demonstrate the relationships among them. This quick…