Sample records for guided model checking

  1. Program Model Checking: A Practitioner's Guide

    NASA Technical Reports Server (NTRS)

    Pressburger, Thomas T.; Mansouri-Samani, Masoud; Mehlitz, Peter C.; Pasareanu, Corina S.; Markosian, Lawrence Z.; Penix, John J.; Brat, Guillaume P.; Visser, Willem C.

    2008-01-01

    Program model checking is a verification technology that uses state-space exploration to evaluate large numbers of potential program executions. Program model checking provides improved coverage over testing by systematically evaluating all possible test inputs and all possible interleavings of threads in a multithreaded system. Model-checking algorithms use several classes of optimizations to reduce the time and memory requirements for analysis, as well as heuristics for meaningful analysis of partial areas of the state space. Our goal in this guidebook is to assemble, distill, and demonstrate emerging best practices for applying program model checking. We offer it as a starting point and introduction for those who want to apply model checking to software verification and validation. The guidebook will not discuss any specific tool in great detail, but we provide references for specific tools.
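    As an illustration of the state-space exploration these tools perform, here is a minimal explicit-state reachability check in Python. The transition system and the safety property are invented for the example; real model checkers add the optimizations and heuristics mentioned above.

```python
from collections import deque

def model_check(initial, successors, is_error):
    """Explore every reachable state breadth-first; return a
    counterexample path to the first error state found, else None."""
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if is_error(state):
            return path
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

# Toy system: a counter that may step by 1 or 2 while below 5;
# the safety property is "counter never exceeds 4".
succ = lambda n: [n + 1, n + 2] if n < 5 else []
trace = model_check(0, succ, lambda n: n > 4)  # counterexample path
```

    Because the exploration is breadth-first, the returned counterexample is a shortest violating path, which is what most explicit-state checkers report to the user.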

  2. Pre-Alignment Checks. Automotive Mechanics. Steering & Suspension. Instructor's Guide [and] Student Guide.

    ERIC Educational Resources Information Center

    Spignesi, B.

    This instructional package, one in a series of individualized instructional units on automotive steering and suspension, consists of a student guide and an instructor guide dealing with prealignment checks. Covered in the module are the following steps in a prealignment check: checking the ride height of a vehicle, checking the ball joints and the…

  3. 12 CFR Appendix A to Part 229 - Routing Number Guide to Next-Day Availability Checks and Local Checks

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Routing Number Guide to Next-Day Availability Checks and Local Checks A Appendix A to Part 229 Banks and Banking FEDERAL RESERVE SYSTEM (CONTINUED) BOARD OF GOVERNORS OF THE FEDERAL RESERVE SYSTEM AVAILABILITY OF FUNDS AND COLLECTION OF CHECKS...

  4. Guide to Developing an Environmental Management System - Check

    EPA Pesticide Factsheets

    This page takes you through the basic steps (Plan, Do, Check, Act) of building an Environmental Management System (EMS) as they are outlined in the 2001 Second Edition of Environmental Management Systems: An Implementation Guide. Check section.

  5. A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process

    NASA Technical Reports Server (NTRS)

    Wang, Yi; Tamai, Tetsuo

    2009-01-01

    As the complexity of software systems continues to grow, engineers face two serious problems: the state-space explosion problem and the difficulty of debugging systems. In this paper, we propose a game-theoretic approach to full branching time model checking on three-valued semantics. Three-valued models and logics provide successful abstraction that overcomes the state-space explosion problem. Game-style model checking generates counter-examples that can guide refinement or identify validated formulas, which addresses the system debugging problem. Furthermore, the output of our game-style method gives engineers significant information for detecting where errors have occurred and what their causes are.
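    The three-valued semantics mentioned above can be sketched with Kleene logic, where a third value ("unknown", here Python's None) marks facts the abstraction cannot decide; definite True/False verdicts transfer to the concrete system, while unknown triggers refinement. This is an illustrative sketch, not the authors' formulation:

```python
# Kleene three-valued logic over True, False, and None ("unknown").
def k_not(a):
    # Negation of an unknown value stays unknown.
    return None if a is None else (not a)

def k_and(a, b):
    # A definite False dominates, even against an unknown operand.
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    # Defined by De Morgan duality from k_and and k_not.
    return k_not(k_and(k_not(a), k_not(b)))
```

    On an abstract model, a property that evaluates to True or False under these connectives is settled; only a None result forces the abstract-check-refine loop to continue.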

  6. Air Conditioner Charging. Automotive Mechanics. Air Conditioning. Instructor's Guide [and] Student Guide.

    ERIC Educational Resources Information Center

    Spignesi, B.

    This instructional package, one in a series of individualized instructional units on automobile air conditioning, consists of a student guide and an instructor guide dealing with air conditioning charging. Covered in the module are checking the air conditioning system for leaks, checking and adding refrigerant oil as needed, evacuating the system,…

  7. Symbolic Analysis of Concurrent Programs with Polymorphism

    NASA Technical Reports Server (NTRS)

    Rungta, Neha Shyam

    2010-01-01

    The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant to testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus on parts of the behavior space more likely to contain an error.
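    A hedged sketch of the guided-search idea: replace exhaustive depth-first search with a best-first search ordered by a heuristic estimate of distance to the target location. The toy state space and heuristic below are invented for illustration:

```python
import heapq

def guided_search(initial, successors, is_target, h):
    """Best-first search ordered by heuristic h, standing in for the
    abstraction-derived distance estimates described in the abstract."""
    tie = 0  # tie-breaker so heapq never compares states directly
    frontier = [(h(initial), tie, initial, [initial])]
    seen = {initial}
    while frontier:
        _, _, state, path = heapq.heappop(frontier)
        if is_target(state):
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                tie += 1
                heapq.heappush(frontier, (h(nxt), tie, nxt, path + [nxt]))
    return None

# Toy state space: from n you can step to n+1 or 2*n; the "error
# location" is 10, and the heuristic is plain numeric distance.
path = guided_search(1, lambda n: [n + 1, 2 * n] if n <= 10 else [],
                     lambda n: n == 10, lambda n: abs(10 - n))
```

    The heuristic focuses exploration on states that look close to the target, which is the essence of guiding symbolic execution along key program locations rather than searching exhaustively.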

  8. REACH. Teacher's Guide Volume II. Check Points.

    ERIC Educational Resources Information Center

    Georgia Univ., Athens. Div. of Vocational Education.

    Designed for use with individualized instructional units (CE 026 345-347, CE 026 349-351) in the REACH (Refrigeration, Electro-Mechanical, Air-Conditioning, Heating) electromechanical cluster, this second volume of the postsecondary teacher guide contains the check points which the instructor may want to refer to when the unit sheet directs the…

  9. Software Model Checking Without Source Code

    NASA Technical Reports Server (NTRS)

    Chaki, Sagar; Ivers, James

    2009-01-01

    We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable, such as legacy and COTS software, and of programs that use features, such as pointers, structures, and object-orientation, that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.

  10. New element for optimizing the functioning of sediment traps

    NASA Astrophysics Data System (ADS)

    Schwindt, Sebastian; Franca, Mário; Schleiss, Anton

    2017-04-01

    Sediment traps protect urban areas against excessive sediment transport during hazardous floods and consist typically of a retention basin with an open sediment check dam at the downstream end. The design, as well as the morphological processes within the retention basin, have been analyzed by several authors. With regard to open sediment check dams, two types of triggering mechanisms for the initiation of sediment retention can be distinguished: (1) mechanical and (2) hydraulic clogging of the structure. Recent studies have shown that outlet structures combining both clogging principles may be considered to avoid undesired self-flushing. Further elements of check dams are conceivable, e.g. for retaining or conveying driftwood. This study experimentally analyses the working principles and design criteria of standard elements of sediment traps. Furthermore, it introduces a new structural element to the sediment trap design: a guiding channel in the retention reservoir. Taking into account the natural shape of mountain rivers, the guiding channel has a trapezoidal cross-section and a rough but fixed bed. The effect of the guiding channel on sediment deposition patterns and re-mobilization is studied by means of physical model experiments with a standardized hydrograph and variable sediment supply. The results are evaluated by means of zenithal pictures and the bedload transport rate, measured at the downstream end of the model. Major advantages of the combined use of both clogging principles include an improved control of the initiation of sediment deposition, in order to allow for sediment transfer during small floods, and a reduction of hazards related to self-flushing.

  11. Assume-Guarantee Verification of Source Code with Design-Level Assumptions

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.

    2004-01-01

    Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.

  12. Sediment traps with guiding channel and hybrid check dams improve controlled sediment retention

    NASA Astrophysics Data System (ADS)

    Schwindt, Sebastian; Franca, Mário J.; Reffo, Alessandro; Schleiss, Anton J.

    2018-03-01

    Sediment traps with partially open check dams are crucial elements for flood protection in alpine regions. The trapping of sediment is necessary when intense sediment transport occurs during floods that may endanger urban areas at downstream river reaches. In turn, the unwanted permanent trapping of sediment during small, non-hazardous floods can result in the ecological and morphological degradation of downstream reaches. This study experimentally analyses a novel concept for permeable sediment traps. To ensure sediment transfer during small floods, a guiding channel implemented in the deposition area of a sediment trap was systematically studied. The bankfull discharge of the guiding channel corresponds to a dominant morphological discharge. At the downstream end of the guiding channel, a permeable barrier (check dam) triggers sediment retention and deposition. The permeable barrier consists of a bar screen for mechanical deposition control, superposed on a flow constriction for the hydraulic control. The barrier obstructs hazardous sediment transport for discharges that are higher than the bankfull discharge of the guiding channel, without the risk of unwanted sediment flushing (massive self-cleaning).

  13. Airport-Noise Levels and Annoyance Model (ALAMO) user's guide

    NASA Technical Reports Server (NTRS)

    Deloach, R.; Donaldson, J. L.; Johnson, M. J.

    1986-01-01

    A guide for the use of the Airport-Noise Level and Annoyance MOdel (ALAMO) at the Langley Research Center computer complex is provided. This document is divided into five primary sections: the introduction, the purpose of the model, and in-depth descriptions of three subsystems: baseline, noise reduction simulation, and track analysis. For each subsystem, the user is provided with a description of architecture, an explanation of subsystem use, sample results, and a case runner's check list. It is assumed that the user is familiar with the operations at the Langley Research Center (LaRC) computer complex, the Network Operating System (NOS 1.4), and CYBER Control Language. Incorporated within the ALAMO model is a census database system called SITE II.

  14. Housing and Home Furnishings Modules.

    ERIC Educational Resources Information Center

    Clemson Univ., SC. Vocational Education Media Center.

    These sixty-seven modules provide student materials for a home economics course in housing and home furnishings. (A companion instructor's guide is available separately--see note.) Each module contains an objective, student information, learning activities (and activity sheets as needed), student self-checks, student self-check answers, check-out…

  15. Annual Check-up

    MedlinePlus

    ... re under the age of 18. Issues about sexual health, HIV, and STIs will be kept confidential. How ...

  16. Fiscal Year 2005 Solid Waste Pollution Prevention Annual Data Summary, (SW P2ADS) Guide (User’s Guide)

    DTIC Science & Technology

    2005-09-01

    services were procured? 19. IS YOUR INSTALLATION USING GREEN CLEANING PRODUCTS OR SERVICES? Enter “yes” or “no.” Return to Page 1, GPP INFO Tab...X___ If yes, please explain. 19. IS YOUR INSTALLATION USING GREEN CLEANING PRODUCTS OR SERVICES? (Check one) Yes _X__ No...Yes ____ No ____ If yes, please explain 19. IS YOUR INSTALLATION USING GREEN CLEANING PRODUCTS OR SERVICES? (Check

  17. Check It Out. FDIC Money Smart Financial Education Curriculum = Conceptos Basicos sobre Cuentas Corientes. FDIC Money Smart Plan de Educacion para Capacitacion en Finanzas.

    ERIC Educational Resources Information Center

    Federal Deposit Insurance Corp., Washington, DC.

    This module on how to choose and keep a checking account is one of ten in the Money Smart curriculum, and includes an instructor guide and a take-home guide. It was developed to help adults outside the financial mainstream enhance their money skills and create positive banking relationships. It is designed to enable participants to open and keep a…

  18. Using Runtime Analysis to Guide Model Checking of Java Programs

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Norvig, Peter (Technical Monitor)

    2001-01-01

    This paper describes how two runtime analysis algorithms, an existing data race detection algorithm and a new deadlock detection algorithm, have been implemented to analyze Java programs. Runtime analysis is based on the idea of executing the program once and observing the generated run to extract various kinds of information. This information can then be used to predict whether other, different runs may violate some properties of interest, in addition of course to demonstrating whether the generated run itself violates such properties. These runtime analyses can be performed stand-alone to generate a set of warnings. It is furthermore demonstrated how these warnings can be used to guide a model checker, thereby reducing the search space. The described techniques have been implemented in the home-grown Java model checker called PathFinder.
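    The data race side of such runtime analysis is often illustrated with an Eraser-style lockset algorithm: track, per variable, the set of locks consistently held at every access, and warn when that set becomes empty for a variable touched by more than one thread. A minimal sketch follows; the event format is an assumption for illustration, not the paper's implementation:

```python
def lockset_races(trace):
    """Eraser-style lockset analysis over a single execution trace.
    Events: ("acquire", tid, lock), ("release", tid, lock),
            ("access", tid, var).
    Reports variables whose candidate lockset becomes empty after
    accesses from more than one thread."""
    held = {}        # tid -> set of locks currently held
    candidate = {}   # var -> candidate protecting lockset
    accessors = {}   # var -> set of threads that touched it
    races = set()
    for kind, tid, name in trace:
        locks = held.setdefault(tid, set())
        if kind == "acquire":
            locks.add(name)
        elif kind == "release":
            locks.discard(name)
        else:  # access: intersect candidate set with locks held now
            accessors.setdefault(name, set()).add(tid)
            if name not in candidate:
                candidate[name] = set(locks)
            else:
                candidate[name] &= locks
            if not candidate[name] and len(accessors[name]) > 1:
                races.add(name)
    return races

events = [("acquire", 1, "m"), ("access", 1, "x"), ("access", 1, "y"),
          ("release", 1, "m"), ("acquire", 2, "m"), ("access", 2, "y"),
          ("release", 2, "m"), ("access", 2, "x")]
races = lockset_races(events)  # "x" is unprotected in thread 2
```

    As the abstract notes, such a warning from a single observed run is a prediction about other runs; a model checker can then be pointed at the flagged variable to confirm or refute the race.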

  19. EDMS - Microcomputer Pollution Model for civilian Airports and Air Force Bases: (User’s Guide),

    DTIC Science & Technology

    1991-06-01

    exchange. The United States Government assumes no liability for content or use thereof. The United States Government does not endorse products or...overwritten. As new issues of Mobile 4 are released, they will be incorporated into ElIS. The user should check with the model issuer to determine what...Triangle Park, N.C.; June 1982 - May 1983 EPA 1985; Compilation of Air Pollutant Emission Factors - Volume II: Mobile Sources; Environmental Protection

  20. [An illustrated guide to dental screening: a school survey].

    PubMed

    Tenenbaum, Annabelle; Sayada, Mélanie; Azogui-Levy, Sylvie

    2017-12-05

    Marked social inequalities in oral health are observed right from early childhood. A mandatory complete health check-up, including dental screening, is organized at school for 6-year-old children. School healthcare professionals are not well trained in dental health. The aim of this study was to assess the relevance of an illustrated guide as a simple and rapid dental screening training tool in order to ensure effective, standardized and reproducible screening. A cross-sectional study was conducted in the context of the dental examination performed as part of the health check-up. Two examiners (Doctor E1 and Nurse E2) were trained in dental screening by means of the illustrated guide. This reference guide, comprising pictures and legends, presents the main oral pathologies observed in children. 109 consent forms for oral screening were delivered, and 102 children agreed to participate (93.57%). The sensitivity of detection of tooth decay by examiners E1 and E2 was 81.48%, with a specificity of 96%. No correlation was observed between the child's age (+/- 6 years) and correct detection rates. The illustrated guide is an appropriate and rapid tool for dental screening that can improve the quality of dental check-ups and increase the number of children detected.

  1. Special Education/Traffic Safety Education. Curriculum Guide.

    ERIC Educational Resources Information Center

    McBrayer, Clyde; Tidwell, Fred

    The curriculum guide for special education students is intended to serve as a supplement to the 1980 Washington State Traffic Safety Education Curriculum Guide. The guide is also correlated with two popular traffic safety texts. Each of the 21 modules contains a goal statement, a list of vocabulary words that might be difficult, a check sheet…

  2. Systems cost/performance analysis; study 2.3. Volume 3: Programmer's manual and user's guide

    NASA Technical Reports Server (NTRS)

    1975-01-01

    The implementation of the entire systems cost/performance model as a digital computer program was studied. Covered are the operating environment in which the program was written and checked, the program specifications, such as discussions of logic and computational flow, the different subsystem models involved in the design of the spacecraft, and routines in the nondesign area, such as costing and scheduling of the design. Preliminary results for the DSCS-2 design are also included.

  3. Solar Heating Systems: Instructor's Guide.

    ERIC Educational Resources Information Center

    Green, Joanne; And Others

    This Instructor's Guide for a Solar Heating System Curriculum is designed to accompany the Student Manual and the Progress Checks and Test Manual for the course (see note), in order to facilitate the instruction of classes on solar heating systems. The Instructor's Guide contains a variety of materials used in teaching the courses, including…

  4. Contractor Accounting, Reporting and Estimating (CARE).

    DTIC Science & Technology

    Contractor Accounting Reporting and Estimating (CARE) provides check lists that may be used as guides in evaluating the accounting system, financial reporting, and cost estimating capabilities of the contractor. Experience gained from the Management Review Technique was used as a basis for the check lists. (Author)

  5. Symbolic Heuristic Search for Factored Markov Decision Processes

    NASA Technical Reports Server (NTRS)

    Morris, Robert (Technical Monitor); Feng, Zheng-Zhu; Hansen, Eric A.

    2003-01-01

    We describe a planning algorithm that integrates two approaches to solving Markov decision processes with large state spaces. State abstraction is used to avoid evaluating states individually. Forward search from a start state, guided by an admissible heuristic, is used to avoid evaluating all states. We combine these two approaches in a novel way that exploits symbolic model-checking techniques and demonstrates their usefulness for decision-theoretic planning.

  6. Guide to Developing an Environmental Management System - Act

    EPA Pesticide Factsheets

    This page takes you through the basic steps (Plan, Do, Check, Act) of building an Environmental Management System (EMS) as they are outlined in the 2001 Second Edition of Environmental Management Systems: An Implementation Guide. Act section.

  7. Guide to Developing an Environmental Management System - Plan

    EPA Pesticide Factsheets

    This page takes you through the basic steps (Plan, Do, Check, Act) of building an Environmental Management System (EMS) as they are outlined in the 2001 Second Edition of Environmental Management Systems: An Implementation Guide. Plan section.

  8. Model Checker for Java Programs

    NASA Technical Reports Server (NTRS)

    Visser, Willem

    2007-01-01

    Java Pathfinder (JPF) is a verification and testing environment for Java that integrates model checking, program analysis, and testing. JPF consists of a custom-made Java Virtual Machine (JVM) that interprets bytecode, combined with a search interface to allow the complete behavior of a Java program to be analyzed, including interleavings of concurrent programs. JPF is implemented in Java, and its architecture is highly modular to support rapid prototyping of new features. JPF is an explicit-state model checker, because it enumerates all visited states and, therefore, suffers from the state-explosion problem inherent in analyzing large programs. It is suited to analyzing programs less than 10kLOC, but has been successfully applied to finding errors in concurrent programs up to 100kLOC. When an error is found, a trace from the initial state to the error is produced to guide the debugging. JPF works at the bytecode level, meaning that all of Java can be model-checked. By default, the software checks for all runtime errors (uncaught exceptions), assertion violations (supports Java's assert), and deadlocks. JPF uses garbage collection and symmetry reductions of the heap during model checking to reduce state explosion, as well as dynamic partial order reductions to lower the number of interleavings analyzed. JPF is capable of symbolic execution of Java programs, including symbolic execution of complex data such as linked lists and trees. JPF is extensible as it allows for the creation of listeners that can subscribe to events during searches. The creation of dedicated code to be executed in place of regular classes is supported and allows users to easily handle native calls and to improve the efficiency of the analysis.
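    The interleaving enumeration and deadlock detection that JPF performs on bytecode can be sketched in miniature: exhaustively explore all schedules of small lock-acquisition scripts and report a schedule that reaches a state where every unfinished thread is blocked. This toy model is illustrative only; it works on abstract lock scripts rather than Java bytecode:

```python
def find_deadlock(threads):
    """Explicit-state exploration of all interleavings of lock scripts
    (lists of ("acquire", lock) / ("release", lock) steps). Returns a
    schedule (list of thread indices) reaching a deadlock, else None."""
    start = (tuple(0 for _ in threads), ())
    stack = [(start, [])]
    seen = {start}
    while stack:
        (pcs, owned), sched = stack.pop()
        owner = dict(owned)
        moves = []
        for i, script in enumerate(threads):
            if pcs[i] == len(script):
                continue  # thread finished
            op, lock = script[pcs[i]]
            if op == "acquire" and owner.get(lock) is not None:
                continue  # blocked on a held lock
            moves.append(i)
        if not moves:
            if any(pcs[i] < len(s) for i, s in enumerate(threads)):
                return sched  # unfinished threads, none can move
            continue  # all threads terminated normally
        for i in moves:
            op, lock = threads[i][pcs[i]]
            new_owner = dict(owner)
            if op == "acquire":
                new_owner[lock] = i
            else:
                new_owner.pop(lock, None)
            new_pcs = tuple(p + 1 if j == i else p
                            for j, p in enumerate(pcs))
            state = (new_pcs, tuple(sorted(new_owner.items())))
            if state not in seen:  # state matching, as in JPF
                seen.add(state)
                stack.append((state, sched + [i]))
    return None

# Classic lock-order inversion: thread 0 takes A then B, thread 1
# takes B then A, so one interleaving deadlocks.
bad = [[("acquire", "A"), ("acquire", "B"), ("release", "B"), ("release", "A")],
       [("acquire", "B"), ("acquire", "A"), ("release", "A"), ("release", "B")]]
schedule = find_deadlock(bad)
```

    The returned schedule plays the role of JPF's counterexample trace: it tells the user exactly which interleaving to replay when debugging.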

  9. Collapse Mechanisms Of Masonry Structures

    NASA Astrophysics Data System (ADS)

    Zuccaro, G.; Rauci, M.

    2008-07-01

    The paper outlines a possible approach to typology recognition, safety check analyses and/or damage measuring taking advantage of a multimedia tool (MEDEA), tracing a guided procedure useful for seismic safety check evaluation and post-event macroseismic assessment. A list of the possible collapse mechanisms observed in post-event surveys on masonry structures and a complete abacus of the damages are provided in MEDEA. In this tool, a possible combination between a set of damage typologies and each collapse mechanism is supplied in order to improve the homogeneity of damage interpretation. Recent research by one of the authors has selected a number of possible typological vulnerability factors of masonry buildings; these are listed in the paper and combined with potential collapse mechanisms that may be activated under seismic excitation. The procedure builds on simple structural behavior models, derived from the Umbria-Marche earthquake observations, and tested after the San Giuliano di Puglia event; it provides the basis either for safety check analyses of existing buildings or for post-event structural safety assessment and economic damage evaluation. In the paper, taking advantage of the MEDEA mechanisms analysis, mainly developed for training post-event safety check surveyors, a simple logic path is traced in order to approach the evaluation of masonry building safety checks. The procedure starts from the identification of the typological vulnerability factors to derive the potential collapse mechanisms and their collapse multipliers, and finally addresses the simplest and cheapest strengthening techniques to reduce the original vulnerability. The procedure has been introduced in the Guide Lines of the Regione Campania for the professionals in charge of the safety check analyses and the buildings' strengthening, in application of the national mitigation campaign introduced by the Ordinance of the Central Government n. 3362/03. The main cases of out-of-plane mechanisms are analyzed, and a possible innovative theory for masonry building vulnerability assessment, based on limit state analyses, is outlined. The paper reports the first step of a research grant awarded by the Department of the Civil Protection to Reluis within the research program of Line 10.

  10. Collapse Mechanisms Of Masonry Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zuccaro, G.; Rauci, M.

    2008-07-08

    The paper outlines a possible approach to typology recognition, safety check analyses and/or damage measuring taking advantage of a multimedia tool (MEDEA), tracing a guided procedure useful for seismic safety check evaluation and post-event macroseismic assessment. A list of the possible collapse mechanisms observed in post-event surveys on masonry structures and a complete abacus of the damages are provided in MEDEA. In this tool, a possible combination between a set of damage typologies and each collapse mechanism is supplied in order to improve the homogeneity of damage interpretation. Recent research by one of the authors has selected a number of possible typological vulnerability factors of masonry buildings; these are listed in the paper and combined with potential collapse mechanisms that may be activated under seismic excitation. The procedure builds on simple structural behavior models, derived from the Umbria-Marche earthquake observations, and tested after the San Giuliano di Puglia event; it provides the basis either for safety check analyses of existing buildings or for post-event structural safety assessment and economic damage evaluation. In the paper, taking advantage of the MEDEA mechanisms analysis, mainly developed for training post-event safety check surveyors, a simple logic path is traced in order to approach the evaluation of masonry building safety checks. The procedure starts from the identification of the typological vulnerability factors to derive the potential collapse mechanisms and their collapse multipliers, and finally addresses the simplest and cheapest strengthening techniques to reduce the original vulnerability. The procedure has been introduced in the Guide Lines of the Regione Campania for the professionals in charge of the safety check analyses and the buildings' strengthening, in application of the national mitigation campaign introduced by the Ordinance of the Central Government n. 3362/03. The main cases of out-of-plane mechanisms are analyzed, and a possible innovative theory for masonry building vulnerability assessment, based on limit state analyses, is outlined. The paper reports the first step of a research grant awarded by the Department of the Civil Protection to Reluis within the research program of Line 10.

  11. Scalable and Accurate SMT-based Model Checking of Data Flow Systems

    DTIC Science & Technology

    2013-10-30

    guided by the semantics of the description language. In this project we developed instead a complementary and novel approach based on a somewhat brute...believe that our approach could help considerably in expanding the reach of abstract interpretation techniques to a variety of target languages, as...project. We worked on developing a framework for compositional verification that capitalizes on the fact that data-flow languages, such as Lustre, have

  12. Guide to Developing an Environmental Management System - Do

    EPA Pesticide Factsheets

    This page takes you through the basic steps (Plan, Do, Check, Act) of building an Environmental Management System (EMS) as they are outlined in the 2001 Second Edition of Environmental Management Systems: An Implementation Guide. This is the Do section.

  13. 76 FR 30280 - Public Meeting To Discuss the Proposed Rule on Enhanced Weapons, Firearms Background Checks, and...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-05-25

    ... Proposed Rule on Enhanced Weapons, Firearms Background Checks, and Security Event Notifications AGENCY... the proposed enhanced weapons rule, the two draft regulatory guides, and the draft weapons safety.... No formal comments on the proposed enhanced weapons rule or the draft guidance documents will be...

  14. Reliability and utility of citizen science reef monitoring data collected by Reef Check Australia, 2002-2015.

    PubMed

    Done, Terence; Roelfsema, Chris; Harvey, Andrew; Schuller, Laura; Hill, Jocelyn; Schläppy, Marie-Lise; Lea, Alexandra; Bauer-Civiello, Anne; Loder, Jennifer

    2017-04-15

    Reef Check Australia (RCA) has collected data on benthic composition and cover at >70 sites along >1000 km of Australia's Queensland coast from 2002 to 2015. This paper quantifies the accuracy, precision and power of RCA benthic composition data, to guide its application and interpretation. A simulation study established that the inherent accuracy of the Reef Check point sampling protocol is high (<±7% absolute error), in the range of estimates of benthic cover from 1% to 50%. A field study at three reef sites indicated that, despite minor observer- and deployment-related biases, the protocol does reliably document moderate ecological changes in coral communities. The error analyses were then used to guide the interpretation of inter-annual variability and long-term trends at three study sites in RCA's major 2002-2015 data series for the Queensland coast.
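    The quoted accuracy of point sampling can be sanity-checked with the binomial standard error of a cover estimate. The point count used below is an assumed round number for illustration, not Reef Check's exact protocol:

```python
import math

def cover_standard_error(p, n):
    """Standard error (absolute, as a fraction of cover) of a
    point-intercept estimate of benthic cover p from n independent
    sample points, under a simple binomial model."""
    return math.sqrt(p * (1 - p) / n)

# Worst case is p = 0.5; with an assumed n = 160 points per survey,
# the standard error is about 0.04, i.e. roughly +/-4% cover,
# comfortably inside the <7% absolute error the paper reports.
worst_case = cover_standard_error(0.5, 160)
```

    The error shrinks toward the 1% end of the cover range, since p(1-p) is largest at p = 0.5, which is consistent with the paper's claim that accuracy holds across covers of 1% to 50%.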

  15. Guidelines for design and rating of gusset-plate connections for steel truss bridges : [tech brief].

    DOT National Transportation Integrated Search

    2014-08-01

    The FHWA guide provides rating guidance on both load and resistance factor rating (LRFR) and load factor rating philosophies. Discussions in this TechBrief are from the LRFR perspective only. The FHWA guide recommends five resistance checks a...

  16. 10 Quick Ways to Analyze Children's Books for Racism and Sexism.

    ERIC Educational Resources Information Center

    Racism and Sexism Resource Center for Educators, New York, NY.

    The 10 guidelines in this guide to analyzing children's books serve as a starting point for evaluating children's literature from the perspective of racist and sexist attitudes. (1) Check the illustrations. Look for stereotypes, for tokenism, and for who's doing what. (2) Check the story lines. Look for subtle forms of bias in such approaches as…

  17. A Look at Constitutional Checks and Balances: Study Sheets for U.S. History.

    ERIC Educational Resources Information Center

    Scott, Nancy

    This document is intended as a resource guide for teachers to use in helping students to understand how the United States system of government operates. It examines the background, historical application, and current debate concerning the principle of checks and balances. Ten study sheets feature various figures and episodes prominently associated…

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cottam, Joseph A.; Blaha, Leslie M.

    Systems have biases. Their interfaces naturally guide a user toward specific patterns of action. For example, modern word-processors and spreadsheets are both capable of wrapping words, checking spelling, storing tables, and calculating formulas. You could write a paper in a spreadsheet or do simple business modeling in a word-processor. However, their interfaces naturally communicate which function they are designed for. Visual analytic interfaces also have biases. In this paper, we outline why simple Markov models are a plausible tool for investigating that bias and how they might be applied. We also discuss some anticipated difficulties in such modeling and touch briefly on what some Markov model extensions might provide.

  19. Systems cost/performance analysis (study 2.3). Volume 3: Programmer's manual and user's guide. [for unmanned spacecraft

    NASA Technical Reports Server (NTRS)

    Janz, R. F.

    1974-01-01

    The systems cost/performance model was implemented as a digital computer program to perform initial program planning, cost/performance tradeoffs, and sensitivity analyses. The computer program is described, along with the operating environment in which the program was written and checked, the program specifications (including discussions of logic and computational flow), the subsystem models involved in the design of the spacecraft, and the routines involved in nondesign areas such as costing and scheduling of the design. Preliminary results for the DSCS-II design are also included.

  20. Psychology and Counseling Library Research Guide.

    ERIC Educational Resources Information Center

    Sylvia, Margaret

    This document is a guide for library research in psychology or counseling. The first section discusses how to do research in the library, including choosing a topic, beginning with books, updating the information with journals, checking out books, interlibrary loan, visiting other libraries, and writing the paper. The second section provides…

  1. Machine Shop. Student Learning Guide.

    ERIC Educational Resources Information Center

    Palm Beach County Board of Public Instruction, West Palm Beach, FL.

    This student learning guide contains eight modules for completing a course in machine shop. It is designed especially for use in Palm Beach County, Florida. Each module covers one task, and consists of a purpose, performance objective, enabling objectives, learning activities and resources, information sheets, student self-check with answer key,…

  2. Intensive Intervention Practice Guide: Intensifying Check-In Check-Out for Students with or At-Risk for Emotional or Behavioral Disabilities

    ERIC Educational Resources Information Center

    Kunemund, Rachel; Majeika, Caitlyn; De La Cruz, Veronica Mellado; Wilkinson, Sarah

    2016-01-01

    The National Center for Leadership in Intensive Intervention (NCLII), a consortium funded by the Office of Special Education Programs (OSEP), prepares special education leaders to become experts in research on intensive intervention for students with disabilities who have persistent and severe academic (e.g., reading and math) and behavioral…

  3. Normality Tests for Statistical Analysis: A Guide for Non-Statisticians

    PubMed Central

    Ghasemi, Asghar; Zahediasl, Saleh

    2012-01-01

    Statistical errors are common in scientific literature and about 50% of the published articles have at least one error. The assumption of normality needs to be checked for many statistical procedures, namely parametric tests, because their validity depends on it. The aim of this commentary is to overview checking for normality in statistical analysis using SPSS. PMID:23843808

  4. Horticultural Practices. Activity Guides.

    ERIC Educational Resources Information Center

    Bania, Kent; Cummings, John, Ed.

    The 88 activity guides in this document are intended to supplement the initial or organized instruction of the agricultural teacher at the secondary educational level. Some of the activities can be completed by one student; others require two or more students working as a team. Some activities also require follow-up checking within a few days to…

  5. Crisis or Conference! Master List for Conference Planners.

    ERIC Educational Resources Information Center

    Carey, Tony

    This conference organizer's guide contains 42 lists of ideas, reminders, things to check, and questions to ask when a person is planning an event such as a conference, workshop, or training session. Written from a British point of view, the guide is organized into four parts in chronological order: preplanning, planning, onsite, and…

  6. Checking Questionable Entry of Personally Identifiable Information Encrypted by One-Way Hash Transformation

    PubMed Central

    Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David

    2017-01-01

    Background: As one of several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. No PII may be stored on the GUID server; only the GUID and hash codes are allowed. The quality of PII entry is therefore critical to the GUID system. Objective: The goal of our study was to explore a method of checking questionable PII entry in this context without using or sending any portion of the PII while registering a subject. Methods: Following the design of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to detect PII entry errors. We selected 200,000 simulated individuals with randomly planted errors to evaluate the proposed algorithm; the errors were placed in either required or optional PII fields. The performance of the algorithm was also tested in the subject registration system. Results: Of the 127,700 error-planted subjects, 114,464 (89.64%) could still be identified as the previously registered subject, while the remaining 13,236 (10.36%, 13,236/127,700) were treated as new subjects. As expected, 100% of the nonidentified subjects had errors within the required PII fields. The probability that a subject is identified depends on the number and type of incorrect PII fields. For all identified subjects, their errors could be found by the proposed algorithm. The scope of the questionable PII fields is likewise associated with the number and type of incorrect fields: in the best case the algorithm pinpoints the exact incorrect fields, and in the worst case it narrows the questionable scope to a set of 13 PII fields. In application, the algorithm flags questionable PII entry and serves as an effective tool. Conclusions: The GUID system has high error tolerance and may correctly identify and associate a subject even with a few PII field errors. Correct data entry, especially of required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, questionable PII input can be identified by applying set-theory operators to the hash codes. The number and type of incorrect PII fields play an important role in identifying a subject and locating questionable fields. PMID:23843808 PMID:28213343
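    The set-theoretic idea can be sketched as follows. The field names, the separator, and the use of SHA-256 are illustrative assumptions, and the single-field check below is a simplification: the paper's actual algorithm matches hashes of field *combinations* under the GUID system's matching rules.

```python
import hashlib
from itertools import combinations

def pii_hashes(record):
    """Hash every combination of PII fields (sorted for a canonical
    order), as a stand-in for the codes kept on the GUID server."""
    codes = set()
    fields = sorted(record)
    for r in range(1, len(fields) + 1):
        for combo in combinations(fields, r):
            text = "|".join(f"{k}={record[k]}" for k in combo)
            codes.add(hashlib.sha256(text.encode()).hexdigest())
    return codes

def questionable_fields(stored_codes, entry):
    """Flag fields whose hash is absent from the stored codes, without
    the server ever seeing the PII values themselves."""
    flagged = set()
    for k, v in entry.items():
        h = hashlib.sha256(f"{k}={v}".encode()).hexdigest()
        if h not in stored_codes:
            flagged.add(k)
    return flagged

registered = {"first": "ann", "last": "lee", "dob": "1980-01-02"}
stored_codes = pii_hashes(registered)
entry = {"first": "ann", "last": "lee", "dob": "1980-01-03"}  # typo in dob
flagged = questionable_fields(stored_codes, entry)  # -> {"dob"}
```

    Because only hash codes cross the wire, the server can point at the questionable field without ever reconstructing the PII value itself.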

  7. Checking Questionable Entry of Personally Identifiable Information Encrypted by One-Way Hash Transformation.

    PubMed

    Chen, Xianlai; Fann, Yang C; McAuliffe, Matthew; Vismer, David; Yang, Rong

    2017-02-17

    As one of several effective solutions for personal privacy protection, a global unique identifier (GUID) is linked with hash codes generated from combinations of personally identifiable information (PII) by a one-way hash algorithm. No PII may be stored on the GUID server; only the GUID and hash codes are allowed. The quality of PII entry is critical to the GUID system. The goal of our study was to explore a method of checking questionable PII entry in this context without using or sending any portion of the PII while registering a subject. Following the design of the GUID system, all possible combination patterns of PII fields were analyzed and used to generate hash codes, which were stored on the GUID server. Based on the matching rules of the GUID system, an error-checking algorithm was developed using set theory to detect PII entry errors. We selected 200,000 simulated individuals with randomly planted errors to evaluate the proposed algorithm; the errors were placed in either required or optional PII fields. The performance of the algorithm was also tested in the subject registration system. Of the 127,700 error-planted subjects, 114,464 (89.64%) could still be identified as the previously registered subject, while the remaining 13,236 (10.36%, 13,236/127,700) were treated as new subjects. As expected, 100% of the nonidentified subjects had errors within the required PII fields. The probability that a subject is identified depends on the number and type of incorrect PII fields. For all identified subjects, their errors could be found by the proposed algorithm. The scope of the questionable PII fields is likewise associated with the number and type of incorrect fields: in the best case the algorithm pinpoints the exact incorrect fields, and in the worst case it narrows the questionable scope to a set of 13 PII fields. In application, the algorithm flags questionable PII entry and serves as an effective tool. The GUID system has high error tolerance and may correctly identify and associate a subject even with a few PII field errors. Correct data entry, especially of required PII fields, is critical to avoiding false splits. In the context of one-way hash transformation, questionable PII input can be identified by applying set-theory operators to the hash codes. The number and type of incorrect PII fields play an important role in identifying a subject and locating questionable fields. ©Xianlai Chen, Yang C Fann, Matthew McAuliffe, David Vismer, Rong Yang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 17.02.2017.

  8. Research on sub-surface damage and its stress deformation in the process of large aperture and high diameter-to-thickness ratio TMT M3MP

    NASA Astrophysics Data System (ADS)

    Hu, Hai-xiang; Qi, Erhui; Cole, Glen; Hu, Hai-fei; Luo, Xiao; Zhang, Xue-jun

    2016-10-01

    Large flat mirrors play important roles in large aperture telescopes. However, they also introduce unpredictable problems. The surface errors created during manufacturing, testing, and supporting are all combined during measurement, making diagnosis and treatment difficult. Examining a high diameter-to-thickness ratio flat mirror, TMT M3MP, and its unexpected deformation during processing, we proposed a strain model of subsurface damage to explain the observed phenomenon. We designed a set of experiments and checked the validity of our diagnosis. On that basis, we theoretically predicted the trend of this strain and its scale effect on Zerodur®, and checked the validity experimentally on another piece. This work guided the grinding-polishing process of M3MP and will serve as a reference for M3M processing as well.

  9. Assessing Your Company's Training Needs.

    ERIC Educational Resources Information Center

    Food, Drink and Tobacco Industry Training Board, Croydon (England).

    This book was designed to serve as a guide for the small and medium-sized firm establishing a systematic training pattern for the first time, and to the larger company for whom it will act as a useful check list. The guide examines the kind of information required and the ways in which actual companies have arrived at their training requirements…

  10. Guide to alternative fuel vehicle incentives and laws: September 1998

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riley, C.; O'Connor, K.

    1998-12-22

    This guide provides information in support of the National Clean Cities Program, which will assist one in becoming better informed about the choices and options surrounding the use of alternative fuels and the purchase of alternative fuel vehicles. The information printed in this guide is current as of September 15, 1998. For recent additions or more up-to-date information, check the Alternative Fuels Data Center Web site at http://www.afdc.doe.gov

  11. Bed Bugs

    EPA Pesticide Factsheets

    Prevent, identify, and treat bed bug infestations using EPA’s step-by-step guides, based on IPM principles. Find pesticides approved for bed bug control, check out the information clearinghouse, and dispel bed bug myths.

  12. Voice Technology Design Guides for Navy Training Systems.

    DTIC Science & Technology

    1983-03-01

    [Abstract garbled in the source; the recoverable fragments are LSO (Landing Signal Officer) voice callouts for carrier approach: "Paddles Contact" acknowledges the LSO assuming control from CCA, while "Don't climb" and "Don't go high" direct the pilot to check sink rate and the meatball to avoid settling below, or climbing above, the optimum glideslope.]

  13. [1980-1991 evaluation of naturopathy practitioners in the Detmold presidential administration--inventory and analysis].

    PubMed

    Lubbe, R

    1993-05-01

    In Germany, two professional groups may apply medical science to human beings: physicians and "Heilpraktiker" (naturopaths). However, no regulations exist regarding the training, examination and continuing education of Heilpraktiker. They only have to be checked on the basis of the Heilpraktiker law, which is intended to exclude a danger to the health of the public. In North Rhine-Westphalia, each of the 54 public health offices carries out this checking on its own responsibility. The present analysis shows that within the Detmold administrative district the checking differs greatly from one public health office to another. There are offices where most applicants fail and others where nearly all applicants pass. In addition, the passing rate shows considerable regional and temporal differences. The data also show that public health offices with a high passing rate not only handle many checking cases but also attract applicants residing outside the area of responsibility of the checking office ("checking tourism"). Rapid implementation of the "Federal Guide on the Checking of Heilpraktiker Applicants" is recommended for the Land of North Rhine-Westphalia, as this procedure would be fairer for applicants and less expensive for citizens.

  14. Development of the promoting teacher attribution model for promoting science teachers' moral and ethical characteristics

    NASA Astrophysics Data System (ADS)

    Chanprathak, Anusorn; Worakham, Paisan; Suikraduang, Arun

    2018-01-01

    The model for promoting science teachers' moral and ethical characteristics was developed by analyzing and synthesizing guidelines from a scoping study of the concepts, theories, and research related to teachers' moral and ethical characteristics, drawing on research papers, related research articles, and interviews with nine experts. Data were gathered through interviews and document analysis, analyzed by content analysis, and presented in essay form. From this, the model for promoting teacher attributes, its moral principles, the concepts and theories involved, and the guidance of qualified experts were developed. Twelve educational experts applied the Multiple-Attribute Consensus Reaching (MACR) technique to check the suitability and feasibility of the model and its manual: the model for promoting moral and ethical teacher characteristics was evaluated, the model forms were assessed, and the first edition of the manual was analyzed for average suitability and feasibility. The results showed that the data on promoting teachers' moral and ethical attributes fell into two groups, priests and scholars. In both groups, the promotion of attributes focuses on the moral nature of teachers, modifying ideas so as to change attitudes within the individual. Students gained hands-on experience; analysis and synthesis of authentic learning environments fostered the cognitive skills needed for self-realization. The promotion model comprised moral principles together with the importance, objectives, and evaluation methods of the activities. 
    Its core concepts drew on learning theory and social cognitive theory, and the integrated learning experience comprised five stages and four processes: intention, memory storage, action, and motivation. The appropriateness and feasibility of the model and the appropriateness of the form scales were rated at a high level; the guide was likewise assessed as appropriate and feasible at a high level.

  15. CAD/CAM-produced surgical guides: Optimizing the treatment workflow.

    PubMed

    Neugebauer, J; Kistler, F; Kistler, S; Züdorf, G; Freyer, D; Ritter, L; Dreiseidler, T; Kusch, J; Zöller, J E

    2011-01-01

    The increased availability of devices for 3D radiological diagnosis allows the more frequent use of CAD/CAM-produced surgical guides for implant placement. The conventional workflow requires a complex logistic chain which is time-consuming and costly. In a pilot study, the workflow of directly milled surgical guides was evaluated. These surgical guides were designed based on the fusion of an optical impression and the radiological data. The clinical use showed that the surgical guides could be accurately placed on the residual dentition without tipping movements. The conventional surgical guides were used as a control for the manual check of the deviation of the implant axis. The direct transfer of the digital planning data allows the fabrication of surgical guides in an external center without the need of physical transport, which reduces the logistic effort and expense of the central fabrication of surgical guides.

  16. [A simplified occupational health and safety management system designed for small enterprises. Initial validation results].

    PubMed

    Bacchi, Romana; Veneri, L; Ghini, P; Caso, Maria Alessandra; Baldassarri, Giovanna; Renzetti, F; Santarelli, R

    2009-01-01

    Occupational Health and Safety Management Systems (OHSMS) are known to be effective in improving safety at work. Unfortunately they are often too resource-heavy for small businesses. The aim of this project was to develop and test a simplified model of OHSMS suitable for small enterprises. The model consists of 7 procedures and various operating forms and check lists that guide the enterprise in managing safety at work. The model was tested in 15 volunteer enterprises. In most of the enterprises, two audits showed increased awareness and participation of workers; better definition and formalisation of responsibilities in 8 firms; election of Union Safety Representatives in over one quarter of the enterprises; and improvement of safety equipment. The study also helped identify areas where the model could be improved by simplification of unnecessarily complex and redundant procedures.

  17. Nonmelanoma skin cancer in mountain guides: high prevalence and lack of awareness warrant development of evidence-based prevention tools.

    PubMed

    Zink, Alexander; Koch, Elisabeth; Seifert, Florian; Rotter, Markus; Spinner, Christoph D; Biedermann, Tilo

    2016-01-01

    Nonmelanoma skin cancer (NMSC) is the most common cancer in Switzerland and Europe. The main causative factor is exposure to ultraviolet radiation, which puts outdoor workers in general at a higher risk of developing NMSC than indoor workers. However, few studies have clinically examined the NMSC risk in outdoor workers, especially mountain guides. We aimed to investigate the prevalence of NMSC and corresponding precancerous lesions, and the associated risk behaviour of mountain and ski guides, in order to develop future prevention programmes. We conducted a cross-sectional study including mountain and ski guides from southern Germany, who underwent a full-body skin check-up by a dermatologist. We assessed their NMSC awareness and risk behaviour using a paper-based questionnaire. Of the 62 state-certified mountain and ski guides (55 men, 7 women; mean age 52.9 ± 13.4 years) included in this study, 27 (43.5%) were diagnosed with NMSC or its premalignant stages. In addition, 59.7% of the participants expressed the opinion that their protection from ultraviolet radiation exposure needs to be improved; 83.6% requested further information on NMSC, and 48.5% had never undergone a skin check-up or consulted a dermatologist before. Mountain and ski guides are at a high risk for developing NMSC. Their unmet medical needs indicate an underestimation of NMSC prevalence, which is usually based on reports by insurance companies, and offer the chance for developing evidence-based awareness and prevention tools that can be promoted to individuals with other outdoor jobs.

  18. SU-E-J-53: A Phantom Design to Assist Patient Position Verification System in Daily Image-Guided RT and Comprehensive QA Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syh, J; Wu, H

    2015-06-15

    Purpose: To implement a homemade novel device with a surface-locking couch index to check the daily radiograph (DR) function of adaPTInsight™, a stereoscopic image-guided system (SIGS), for proton therapy. Comprehensive daily QA checks of proton pencil beam output, field size, and flatness and symmetry of spots and energy layers follow, using the MatriXX dosimetry device. Methods: The IBA MatriXX device, which is also used to perform the SIGS checks, was used for daily dosimetry. A set of markers was attached to the surface of the MatriXX device in alignment with the DRR of the reconstructed CT images and the daily DR. The novel device allows the MatriXX to be fitted into a cradle locked by couch index bars on the couch surface, keeping the MatriXX in the same XY plane each day with exact coordinates. Couch height Z is adjusted according to imaging to check isocenter-laser coincidence accuracy. Results: adaPTInsight™ provides a robotic couch that moves in a 6-degree-of-freedom coordinate system to align the dosimetry device to within 1.0 mm / 1.0°. The daily constancy was tightened to ± 0.5 mm / 0.3°, compared with 1.0 mm / 1.0° before. For the gantry at 0° and all couch angles at 0° (at the Rt ARM 0 setting), measured offsets of the couch systems were ≤ 0.5° in roll, yaw and pitch. Conclusion: The simplicity of the novel device made daily image-guided QA consistent and accurate. The offset of the MatriXX isocenter-laser coincidence was reproducible. This simple task not only speeds up setup but also increases the confidence level in detailed daily comprehensive measurements. The total SIGS alignment time has been shortened, with less setup error. The device will also support future QA when a cone beam CT imaging modality becomes available at the proton therapy center.

  19. Java PathFinder User Guide

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus

    1999-01-01

    The JAVA PATHFINDER, JPF, is a translator from a subset of JAVA 1.0 to PROMELA, the programming language of the SPIN model checker. The purpose of JPF is to establish a framework for verification and debugging of JAVA programs based on model checking. The main goal is to automate program verification so that a programmer can apply it in daily work without needing a specialist to manually reformulate a program into a different notation for analysis. The system is especially suited for analyzing multi-threaded JAVA applications, where normal testing usually falls short. The system can find deadlocks and violations of boolean assertions stated by the programmer in a special assertion language. This document explains how to use JPF.
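    JPF itself translates JAVA to PROMELA and lets SPIN do the search, but the explicit-state exploration it automates can be sketched in a few lines. The toy checker below enumerates every interleaving of two threads performing a non-atomic counter increment twice each, and returns a counterexample trace for the classic lost-update assertion; the state encoding is invented for this example.

```python
from collections import deque

def model_check(initial, transitions, safe):
    """Explicit-state reachability search, the core loop a model checker
    automates: enumerate every reachable state (here, every thread
    interleaving) and return a counterexample path to the first state
    violating the `safe` predicate, or None if the property holds."""
    seen = {initial}
    queue = deque([(initial, (initial,))])
    while queue:
        state, path = queue.popleft()
        if not safe(state):
            return path  # counterexample trace
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + (nxt,)))
    return None

# Toy program: two threads each do counter++ twice, non-atomically
# (read, then write).  State: (counter, pc1, reg1, pc2, reg2), where a
# pc of 0 or 2 means "about to read" and 1 or 3 means "about to write".
def transitions(s):
    c, pc1, r1, pc2, r2 = s
    out = []
    if pc1 in (0, 2):
        out.append((c, pc1 + 1, c, pc2, r2))        # thread 1 reads
    elif pc1 in (1, 3):
        out.append((r1 + 1, pc1 + 1, r1, pc2, r2))  # thread 1 writes
    if pc2 in (0, 2):
        out.append((c, pc1, r1, pc2 + 1, c))        # thread 2 reads
    elif pc2 in (1, 3):
        out.append((r2 + 1, pc1, r1, pc2 + 1, r2))  # thread 2 writes
    return out

def safe(s):
    c, pc1, _, pc2, _ = s
    return not (pc1 == 4 and pc2 == 4) or c == 4  # assert counter == 4 at exit

cex = model_check((0, 0, 0, 0, 0), transitions, safe)  # a lost-update trace
```

    Testing might run this program thousands of times without hitting the bad schedule; the exhaustive search finds it deterministically, which is exactly the advantage the abstract claims for model checking multi-threaded code.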

  20. The effects of a flexible visual acuity-driven ranibizumab treatment regimen in age-related macular degeneration: outcomes of a drug and disease model.

    PubMed

    Holz, Frank G; Korobelnik, Jean-François; Lanzetta, Paolo; Mitchell, Paul; Schmidt-Erfurth, Ursula; Wolf, Sebastian; Markabi, Sabri; Schmidli, Heinz; Weichselberger, Andreas

    2010-01-01

    Differences in treatment responses to ranibizumab injections observed within trials involving monthly (MARINA and ANCHOR studies) and quarterly (PIER study) treatment suggest that an individualized treatment regimen may be effective in neovascular age-related macular degeneration. In the present study, a drug and disease model was used to evaluate the impact of an individualized, flexible treatment regimen on disease progression. For visual acuity (VA), a model was developed on the 12-month data from ANCHOR, MARINA, and PIER. Data from untreated patients were used to model patient-specific disease progression in terms of VA loss. Data from treated patients from the period after the three initial injections were used to model the effect of predicted ranibizumab vitreous concentration on VA loss. The model was checked by comparing simulations of VA outcomes after monthly and quarterly injections during this period with trial data. A flexible VA-guided regimen (after the three initial injections) in which treatment is initiated by loss of >5 letters from best previously observed VA scores was simulated. Simulated monthly and quarterly VA-guided regimens showed good agreement with trial data. Simulation of VA-driven individualized treatment suggests that this regimen, on average, sustains the initial gains in VA seen in clinical trials at month 3. The model predicted that, on average, to maintain initial VA gains, an estimated 5.1 ranibizumab injections are needed during the 9 months after the three initial monthly injections, which amounts to a total of 8.1 injections during the first year. A flexible, individualized VA-guided regimen after the three initial injections may sustain vision improvement with ranibizumab and could improve cost-effectiveness and convenience and reduce drug administration-associated risks.

  1. Addressing Dynamic Issues of Program Model Checking

    NASA Technical Reports Server (NTRS)

    Lerda, Flavio; Visser, Willem

    2001-01-01

    Model checking real programs has recently become an active research area. Programs however exhibit two characteristics that make model checking difficult: the complexity of their state and the dynamic nature of many programs. Here we address both these issues within the context of the Java PathFinder (JPF) model checker. Firstly, we will show how the state of a Java program can be encoded efficiently and how this encoding can be exploited to improve model checking. Next we show how to use symmetry reductions to alleviate some of the problems introduced by the dynamic nature of Java programs. Lastly, we show how distributed model checking of a dynamic program can be achieved, and furthermore, how dynamic partitions of the state space can improve model checking. We support all our findings with results from applying these techniques within the JPF model checker.
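    The symmetry-reduction idea mentioned above can be sketched independently of JPF: states that differ only in the ordering of interchangeable objects (for example, heap allocation order) represent the same program configuration, so they are mapped to one canonical representative before entering the visited-state set. The state shape here is invented for illustration.

```python
def canonicalize(state):
    """Symmetry reduction sketch: sort the interchangeable heap objects
    so that states differing only in allocation order collapse to one
    canonical form before being stored in the visited-state set."""
    threads, heap_objects = state
    return (threads, tuple(sorted(heap_objects)))

# Two states that differ only in object order collapse to one entry,
# shrinking the state space the checker must store and revisit.
visited = set()
for state in [(("t1@run",), ("Node#2", "Node#1")),
              (("t1@run",), ("Node#1", "Node#2"))]:
    visited.add(canonicalize(state))
```

    The saving compounds: with n interchangeable objects, canonicalization can cut up to a factor of n! from the stored states.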

  2. Take a Ride Along NIF’s Optics Recycle Loop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouthillier, Lauren; Folta, Jim; Welday, Brian

    The National Ignition Facility uses over 40,000 optics to help guide 192 laser beams onto a target the size of a pencil eraser. Check out how the optics recycle loop repairs optics, saving time and money.

  3. Checking Trace Nitrate in Water and Soil Using an Amateur Scientist's Measurement Guide.

    ERIC Educational Resources Information Center

    Baker, Roger C. Jr.

    1995-01-01

    Presents a test that can measure nitrate nitrogen ions at about 0.1 mg/L using concentration. Uses inexpensive accessible materials and can be used by amateur environmentalists for monitoring water nitrate levels. (JRH)

  4. Precision of guided scanning procedures for full-arch digital impressions in vivo.

    PubMed

    Zimmermann, Moritz; Koller, Christina; Rumetsch, Moritz; Ender, Andreas; Mehl, Albert

    2017-11-01

    System-specific scanning strategies have been shown to influence the accuracy of full-arch digital impressions. Special guided scanning procedures have been implemented for specific intraoral scanning systems with special regard to the digital orthodontic workflow. The aim of this study was to evaluate the precision of guided scanning procedures compared to conventional impression techniques in vivo. Two intraoral scanning systems with implemented full-arch guided scanning procedures (Cerec Omnicam Ortho; Ormco Lythos) were included along with one conventional impression technique with irreversible hydrocolloid material (alginate). Full-arch impressions were taken three times each from 5 participants (n = 15). Impressions were then compared within the test groups using a point-to-surface distance method after best-fit model matching (OraCheck). Precision was calculated as half the spread between the 90th and 10th percentiles of the deviations (the (90%-10%)/2 quantile), and statistical analysis with one-way repeated measures ANOVA and post hoc Bonferroni test was performed. The conventional impression technique with alginate showed the lowest precision for full-arch impressions, at 162.2 ± 71.3 µm. Both guided scanning procedures performed statistically significantly better than the conventional impression technique (p < 0.05). Mean values were 74.5 ± 39.2 µm for group Cerec Omnicam Ortho and 91.4 ± 48.8 µm for group Ormco Lythos. The in vivo precision of guided scanning procedures exceeds that of conventional impression techniques with the irreversible hydrocolloid material alginate. Guided scanning procedures may be highly promising for clinical applications, especially for digital orthodontic workflows.
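    The precision statistic used in the study, half the spread between the 10th and 90th percentiles of the point-to-surface deviations, can be sketched as follows; the deviation values are hypothetical.

```python
def precision_90_10(deviations):
    """The (90%-10%)/2 precision statistic named in the abstract: half
    the spread between the 10th and 90th percentiles of the signed
    point-to-surface deviations.  Percentiles are computed by linear
    interpolation between closest ranks."""
    xs = sorted(deviations)
    def quantile(q):
        pos = q * (len(xs) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(xs) - 1)
        return xs[lo] + (pos - lo) * (xs[hi] - xs[lo])
    return (quantile(0.9) - quantile(0.1)) / 2

# Hypothetical signed deviations (micrometres) from repeated scans.
devs = [-120, -80, -40, -10, 0, 15, 35, 60, 95, 140]
p = precision_90_10(devs)  # -> 91.75
```

    Using the 10-90% spread rather than the full range makes the statistic robust to the few extreme outlier points that best-fit surface comparisons typically produce.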

  5. Verifying Multi-Agent Systems via Unbounded Model Checking

    NASA Technical Reports Server (NTRS)

    Kacprzak, M.; Lomuscio, A.; Lasica, T.; Penczek, W.; Szreter, M.

    2004-01-01

    We present an approach to the problem of verification of epistemic properties in multi-agent systems by means of symbolic model checking. In particular, it is shown how to extend the technique of unbounded model checking from a purely temporal setting to a temporal-epistemic one. In order to achieve this, we base our discussion on interpreted systems semantics, a popular semantics used in multi-agent systems literature. We give details of the technique and show how it can be applied to the well known train, gate and controller problem. Keywords: model checking, unbounded model checking, multi-agent systems
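    The fixpoint computation at the heart of such model checking can be sketched with explicit Python sets standing in for the symbolic (BDD/SAT-based) representations that unbounded model checking manipulates. The Kripke structure below is invented, loosely echoing the train, gate and controller example.

```python
def ef(target, states, succ):
    """Least-fixpoint computation of EF(target): the set of states from
    which some execution path can reach `target`.  Repeatedly add any
    state with a successor already in the set until nothing changes."""
    reach = set(target)
    while True:
        new = reach | {s for s in states if succ(s) & reach}
        if new == reach:
            return reach
        reach = new

# Invented transition relation echoing the train-gate-controller idea:
# the train may cross only after the controller grants access.
SUCC = {
    "away":     {"away", "waiting"},
    "waiting":  {"granted"},
    "granted":  {"crossing"},
    "crossing": {"away"},
    "stuck":    {"stuck"},   # a deadlocked state that never recovers
}
can_cross = ef({"crossing"}, set(SUCC), lambda s: SUCC[s])
```

    Epistemic extensions layer accessibility relations for each agent on top of the same fixpoint machinery, which is what moves the technique from a purely temporal to a temporal-epistemic setting.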

  6. User guide for the USGS aerial camera Report of Calibration.

    USGS Publications Warehouse

    Tayman, W.P.

    1984-01-01

    Calibration and testing of aerial mapping cameras includes the measurement of optical constants and the check for proper functioning of a number of complicated mechanical and electrical parts. For this purpose the US Geological Survey performs an operational type photographic calibration. This paper is not strictly a scientific paper but rather a 'user guide' to the USGS Report of Calibration of an aerial mapping camera for compliance with both Federal and State mapping specifications. -Author

  7. Guiding-center equations for electrons in ultraintense laser fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moore, J.E.; Fisch, N.J.

    1994-01-01

    The guiding-center equations are derived for electrons in arbitrarily intense laser fields also subject to external fields and ponderomotive forces. Exhibiting the relativistic mass increase of the oscillating electrons, a simple frame-invariant equation is shown to govern the behavior of the electrons for sufficiently weak background fields and ponderomotive forces. The parameter regime for which such a formulation is valid is made precise, and some predictions of the equation are checked by numerical simulation.

  8. Toward improved design of check dam systems: A case study in the Loess Plateau, China

    NASA Astrophysics Data System (ADS)

    Pal, Debasish; Galelli, Stefano; Tang, Honglei; Ran, Qihua

    2018-04-01

    Check dams are one of the most common strategies for controlling sediment transport in erosion prone areas, along with soil and water conservation measures. However, existing mathematical models that simulate sediment production and delivery are often unable to simulate how the storage capacity of check dams varies with time. To explicitly account for this process-and to support the design of check dam systems-we developed a modelling framework consisting of two components, namely (1) the spatially distributed Soil Erosion and Sediment Delivery Model (WaTEM/SEDEM), and (2) a network-based model of check dam storage dynamics. The two models are run sequentially, with the second model receiving the initial sediment input to check dams from WaTEM/SEDEM. The framework is first applied to Shejiagou catchment, a 4.26 km2 area located in the Loess Plateau, China, where we study the effect of the existing check dam system on sediment dynamics. Results show that the deployment of check dams altered significantly the sediment delivery ratio of the catchment. Furthermore, the network-based model reveals a large variability in the life expectancy of check dams and abrupt changes in their filling rates. The application of the framework to six alternative check dam deployment scenarios is then used to illustrate its usefulness for planning purposes, and to derive some insights on the effect of key decision variables, such as the number, size, and site location of check dams. Simulation results suggest that better performance-in terms of life expectancy and sediment delivery ratio-could have been achieved with an alternative deployment strategy.
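
    The storage-dynamics idea, a check dam's capacity shrinking over time as trapped sediment accumulates, can be sketched in a few lines. This is a hypothetical toy model (the linear decay of trap efficiency with filling is an assumption here, not the paper's network-based formulation):

```python
# Toy check-dam filling model: each year the dam traps part of the incoming
# sediment; trap efficiency is assumed proportional to remaining capacity.
def simulate_check_dam(capacity_m3, annual_sediment_m3, years):
    """Return remaining storage capacity (m3) at the end of each year."""
    remaining = capacity_m3
    history = []
    for _ in range(years):
        efficiency = remaining / capacity_m3           # 1.0 empty, 0.0 full
        trapped = min(annual_sediment_m3 * efficiency, remaining)
        remaining -= trapped
        history.append(remaining)
    return history

caps = simulate_check_dam(capacity_m3=20_000, annual_sediment_m3=5_000, years=5)
print([round(c) for c in caps])
```

Note how the filling rate changes abruptly at first and then flattens, which is qualitatively the kind of life-expectancy variability the abstract describes.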

  9. Roofing Source File.

    ERIC Educational Resources Information Center

    American School & University, 1994

    1994-01-01

    Presents a resource guide for identifying, selecting, and specifying educational roofing systems. Explores the various types of roofing systems considered for most schools and describes how to select a roofing contractor and consultant. A roofing retrofit check list and roofing specification chart are provided. (GR)

  10. Formal methods in computer system design

    NASA Astrophysics Data System (ADS)

    Hoare, C. A. R.

    1989-12-01

    This note expounds a philosophy of engineering design which is stimulated, guided and checked by mathematical calculations and proofs. Its application to software engineering promises the same benefits as those derived from the use of mathematics in all other branches of modern science.

  11. Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8: Users Guide

    NASA Astrophysics Data System (ADS)

    Justus, C. G.; James, B. F.

    1999-05-01

    Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8 is presented and its new features are discussed. Mars-GRAM uses new values of planetary reference ellipsoid radii, gravity term, and rotation rate (consistent with current JPL values) and includes centrifugal effects on gravity. The model now uses NASA Ames Global Circulation Model low resolution topography. Curvature corrections are applied to winds and limits based on speed of sound are applied. Altitude of the F1 ionization peak and density scale height, including effects of change of molecular weight with altitude are computed. A check is performed to disallow temperatures below the CO2 sublimation temperature. This memorandum includes instructions on obtaining Mars-GRAM source code and data files and running the program. Sample input and output are provided. An example of incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code is also given.

  12. Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8: Users Guide

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; James, B. F.

    1999-01-01

    Mars Global Reference Atmospheric Model (Mars-GRAM) Version 3.8 is presented and its new features are discussed. Mars-GRAM uses new values of planetary reference ellipsoid radii, gravity term, and rotation rate (consistent with current JPL values) and includes centrifugal effects on gravity. The model now uses NASA Ames Global Circulation Model low resolution topography. Curvature corrections are applied to winds and limits based on speed of sound are applied. Altitude of the F1 ionization peak and density scale height, including effects of change of molecular weight with altitude are computed. A check is performed to disallow temperatures below the CO2 sublimation temperature. This memorandum includes instructions on obtaining Mars-GRAM source code and data files and running the program. Sample input and output are provided. An example of incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code is also given.
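
    The CO2 sublimation check mentioned in both Mars-GRAM records amounts to clamping the output temperature at the local frost point. A hedged sketch using a Clausius-Clapeyron-style curve (the constants are rough illustrative values for CO2, not Mars-GRAM's actual fit):

```python
import math

# Clamp reported temperature at the CO2 frost point, in the spirit of the
# Mars-GRAM check. Constants 3031/26.89 are a rough Clausius-Clapeyron fit
# for CO2 sublimation chosen for illustration only.
def co2_frost_point(pressure_pa):
    """Approximate CO2 sublimation temperature (K) at pressure (Pa)."""
    return 3031.0 / (26.89 - math.log(pressure_pa))

def clamp_to_frost_point(temperature_k, pressure_pa):
    """Disallow temperatures below the local CO2 sublimation temperature."""
    return max(temperature_k, co2_frost_point(pressure_pa))

print(round(co2_frost_point(610.0), 1))          # roughly 148 K near 6.1 mbar
print(clamp_to_frost_point(140.0, 610.0) > 140.0)
```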

  13. Implementing Model-Check for Employee and Management Satisfaction

    NASA Technical Reports Server (NTRS)

    Jones, Corey; LaPha, Steven

    2013-01-01

    This presentation will discuss methods by which ModelCheck can be implemented to not only improve model quality, but also satisfy both employees and management through different sets of quality checks. This approach allows a standard set of modeling practices to be upheld throughout a company, with minimal interaction required by the end user. The presenter will demonstrate how to create multiple ModelCheck standards, preventing users from evading the system, and how it can improve the quality of drawings and models.

  14. Symbolic LTL Compilation for Model Checking: Extended Abstract

    NASA Technical Reports Server (NTRS)

    Rozier, Kristin Y.; Vardi, Moshe Y.

    2007-01-01

    In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.

  15. Finding Feasible Abstract Counter-Examples

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Dwyer, Matthew B.; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2002-01-01

    A strength of model checking is its ability to automate the detection of subtle system errors and produce traces that exhibit those errors. Given the high computational cost of model checking most researchers advocate the use of aggressive property-preserving abstractions. Unfortunately, the more aggressively a system is abstracted the more infeasible behavior it will have. Thus, while abstraction enables efficient model checking it also threatens the usefulness of model checking as a defect detection tool, since it may be difficult to determine whether a counter-example is feasible and hence worth developer time to analyze. We have explored several strategies for addressing this problem by extending an explicit-state model checker, Java PathFinder (JPF), to search for and analyze counter-examples in the presence of abstractions. We demonstrate that these techniques effectively preserve the defect detection ability of model checking in the presence of aggressive abstraction by applying them to check properties of several abstracted multi-threaded Java programs. These new capabilities are not specific to JPF and can be easily adapted to other model checking frameworks; we describe how this was done for the Bandera toolset.
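
    The core feasibility question above, whether an abstract counter-example is realizable in the concrete program, can be illustrated by replaying the abstract trace over a concrete transition relation. A toy sketch (the counter system and parity abstraction are invented examples, not JPF's mechanism):

```python
# Replay an abstract counter-example over the concrete transition relation:
# keep, step by step, the set of concrete states that map (via `alpha`) to
# the current abstract state. An empty set means the trace is spurious.
def feasible(abstract_trace, initial, successors, alpha):
    frontier = {s for s in initial if alpha(s) == abstract_trace[0]}
    for a in abstract_trace[1:]:
        frontier = {t for s in frontier for t in successors(s) if alpha(t) == a}
        if not frontier:
            return False
    return bool(frontier)

def succ(n):                 # concrete system: a counter that steps 0..3
    return [n + 1] if n < 3 else []

def alpha(n):                # abstraction: parity of the counter
    return "even" if n % 2 == 0 else "odd"

print(feasible(["even", "odd", "even"], {0}, succ, alpha))   # realizable: 0,1,2
print(feasible(["even", "even"], {0}, succ, alpha))          # spurious
```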

  16. 12 CFR Appendix C to Part 229 - Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... processing regions)]. If you make the deposit in person to one of our employees, funds from the following... in different states or check processing regions)]. If you make the deposit in person to one of our...] Substitute Checks and Your Rights What Is a Substitute Check? To make check processing faster, federal law...

  17. Food Guide Pyramid Becomes a Plate

    MedlinePlus

    ... plate also helps keep portion sizes in check. Super-big portions can cause weight gain. What's a Grain Again? You know what fruits and vegetables are. But here's a reminder about what's included in the three other food groups: protein, grains, and dairy: Protein: Beef; poultry; ...

  18. Photovoltaics radiometric issues and needs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, D.R.

    1995-11-01

    This paper presents a summary of issues discussed at the photovoltaic radiometric measurements workshop. Topics included radiometric measurements guides, the need for well-defined goals, documentation, calibration checks, accreditation of testing laboratories and methods, the need for less expensive radiometric instrumentation, data correlations, and quality assurance.

  19. Basic manual lensometry: a guide for measuring distance and near glasses.

    PubMed

    Garber, N

    2000-01-01

    Manual lensometry is a basic component of ophthalmic clinical care. You will find it necessary to check lens prescriptions manually when the written prescription does not match the results of an automated lensometer or when automated lensometry is not available.

  20. Guide to making time-lapse graphics using the facilities of the National Magnetic Fusion Energy Computing Center

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munro, J.K. Jr.

    1980-05-01

    The advent of large, fast computers has opened the way to modeling more complex physical processes and to handling very large quantities of experimental data. The amount of information that can be processed in a short period of time is so great that use of graphical displays assumes greater importance as a means of displaying this information. Information from dynamical processes can be displayed conveniently by use of animated graphics. This guide presents the basic techniques for generating black and white animated graphics, with consideration of aesthetic, mechanical, and computational problems. The guide is intended for use by someone who wants to make movies on the National Magnetic Fusion Energy Computing Center (NMFECC) CDC-7600. Problems encountered by a geographically remote user are given particular attention. Detailed information is given that will allow a remote user to do some file checking and diagnosis before giving graphics files to the system for processing into film in order to spot problems without having to wait for film to be delivered. Source listings of some useful software are given in appendices along with descriptions of how to use it. 3 figures, 5 tables.

  1. Application of conditional moment tests to model checking for generalized linear models.

    PubMed

    Pan, Wei

    2002-06-01

    Generalized linear models (GLMs) are increasingly being used in daily data analysis. However, model checking for GLMs with correlated discrete response data remains difficult. In this paper, through a case study on marginal logistic regression using a real data set, we illustrate the flexibility and effectiveness of using conditional moment tests (CMTs), along with other graphical methods, to do model checking for generalized estimating equation (GEE) analyses. Although CMTs provide an array of powerful diagnostic tests for model checking, they were originally proposed in the econometrics literature and, to our knowledge, have never been applied to GEE analyses. CMTs cover many existing tests, including the (generalized) score test for an omitted covariate, as special cases. In summary, we believe that CMTs provide a class of useful model checking tools.

  2. Take the Reins on Model Quality with ModelCHECK and Gatekeeper

    NASA Technical Reports Server (NTRS)

    Jones, Corey

    2012-01-01

    Model quality and consistency has been an issue for us due to the diverse experience level and imaginative modeling techniques of our users. Fortunately, setting up ModelCHECK and Gatekeeper to enforce our best practices has helped greatly, but it wasn't easy. There were many challenges associated with setting up ModelCHECK and Gatekeeper including: limited documentation, restrictions within ModelCHECK, and resistance from end users. However, we consider ours a success story. In this presentation we will describe how we overcame these obstacles and present some of the details of how we configured them to work for us.

  3. Small Nations in Multinational Operations and Armenian Perspectives

    DTIC Science & Technology

    2014-12-12

    National Security Powers: Are the Checks in Balance?” In U.S. Army War College Guide to National Security Issues, Volume II: National Security Policy and...SMALL NATIONS IN MULTINATIONAL OPERATIONS AND ARMENIAN PERSPECTIVES A thesis presented to the Faculty of the U.S. Army

  4. Model checking for linear temporal logic: An efficient implementation

    NASA Technical Reports Server (NTRS)

    Sherman, Rivi; Pnueli, Amir

    1990-01-01

    This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was done with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
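
    The cross-product construction in the second implementation can be sketched for the simple case of a safety property: compose the program's state graph with a monitor automaton and search the product for a reachable error state. Everything below (the two-process example and the monitor encoding) is illustrative, not the report's implementation:

```python
# BFS over the product of (program state, monitor state). The monitor here
# tracks the safety property "label(s) never holds"; reaching monitor state
# "bad" yields a counterexample path.
from collections import deque

def product_check(init, trans, label, bad="bad"):
    """Return a violating path, or None if the safety property holds."""
    start = (init, "ok")
    seen, queue = {start}, deque([(start, [init])])
    while queue:
        (s, m), path = queue.popleft()
        m2 = "bad" if label(s) else m
        if m2 == bad:
            return path
        for t in trans(s):
            nxt = (t, m2)
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [t]))
    return None

# Two processes each in {"idle", "crit"}; this (buggy) transition relation
# lets both enter the critical section, violating mutual exclusion.
def trans(state):
    p, q = state
    return [("crit", q), (p, "crit"), ("idle", q), (p, "idle")]

violation = product_check(("idle", "idle"), trans,
                          lambda s: s == ("crit", "crit"))
print(violation)
```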

  5. Guiding Principles for Teaching Multicultural Literature

    ERIC Educational Resources Information Center

    Louie, Belinda Y.

    2006-01-01

    When using multicultural literature in the classroom, teachers should: (1) Check the text's authenticity; (2) Help learners understand the characters' world; (3) Encourage children to see the world through the characters' perspectives; (4) Identify values underlying the characters' conflict resolution strategies; (5) Relate self to the text and…

  6. 12 CFR Appendix C to Part 229 - Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy Disclosure and Notices C Appendix C to Part 229 Banks and... OF FUNDS AND COLLECTION OF CHECKS (REGULATION CC) Pt. 229, App. C Appendix C to Part 229—Model...

  7. Design of Software for Design of Finite Element for Structural Analysis. Ph.D. Thesis - Stuttgart Univ., 22 Nov. 1983

    NASA Technical Reports Server (NTRS)

    Helfrich, Reinhard

    1987-01-01

    The concepts of software engineering which allow a user of the finite element method to describe a model, to collect and to check the model data in a data base as well as to form the matrices required for a finite element calculation are examined. Next the components of the model description are conceived including the mesh tree, the topology, the configuration, the kinematic boundary conditions, the data for each element, and the loads. The possibilities for description and review of the data are considered. The concept of the segments for the modularization of the programs follows the components of the model description. The significance of the mesh tree as a global guiding structure will be understood in view of the principle of the unity of the model, the mesh tree, and the data base. The user-friendly aspects of the software system will be summarized: the principle of language communication, the data generators, error processing, and data security.

  8. Reflection and transmission coefficients for guided waves reflected by defects in viscoelastic material plates.

    PubMed

    Hosten, Bernard; Moreau, Ludovic; Castaings, Michel

    2007-06-01

    The paper presents a Fourier transform-based signal processing procedure for quantifying the reflection and transmission coefficients and mode conversion of guided waves diffracted by defects in plates made of viscoelastic materials. The case of the S(0) Lamb wave mode incident on a notch in a Perspex plate is considered. The procedure is applied to numerical data produced by a finite element code that simulates the propagation of attenuated guided modes and their diffraction by the notch, including mode conversion. Its validity and precision are checked by the way of the energy balance computation and by comparison with results obtained using an orthogonality relation-based processing method.

  9. The influence of social anxiety on the body checking behaviors of female college students.

    PubMed

    White, Emily K; Warren, Cortney S

    2014-09-01

    Social anxiety and eating pathology frequently co-occur. However, there is limited research examining the relationship between anxiety and body checking, aside from one study in which social physique anxiety partially mediated the relationship between body checking cognitions and body checking behavior (Haase, Mountford, & Waller, 2007). In an independent sample of 567 college women, we tested the fit of Haase and colleagues' foundational model but did not find evidence of mediation. Thus we tested the fit of an expanded path model that included eating pathology and clinical impairment. In the best-fitting path model (CFI=.991; RMSEA=.083) eating pathology and social physique anxiety positively predicted body checking, and body checking positively predicted clinical impairment. Therefore, women who endorse social physique anxiety may be more likely to engage in body checking behaviors and experience impaired psychosocial functioning. Published by Elsevier Ltd.
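
    The mediation logic tested in this record (X → M → Y, with indirect effect a·b) can be sketched with two regressions on synthetic data; the variable names merely echo the constructs in the abstract, and none of the numbers are from the study:

```python
import numpy as np

# Simple-mediation sketch: X -> M -> Y, indirect effect a*b, direct effect c'.
# All data are synthetic; coefficients 0.5/0.4/0.1 are arbitrary choices.
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                       # e.g. body-checking cognitions
m = 0.5 * x + rng.normal(size=n)             # mediator (social physique anxiety)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # outcome (body checking behavior)

def slopes(target, predictors):
    """OLS slopes (intercept dropped) of target on the given predictors."""
    X = np.column_stack([np.ones(len(target))] + list(predictors))
    return np.linalg.lstsq(X, target, rcond=None)[0][1:]

a = slopes(m, [x])[0]            # path X -> M
b, c_prime = slopes(y, [m, x])   # path M -> Y controlling X, and direct X -> Y
print(round(a * b, 2), "indirect;", round(c_prime, 2), "direct")
```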

  10. USER'S GUIDE TO CLOSURE EVALUATION SYSTEM: CES BETA-TEST VERSION 1.0

    EPA Science Inventory

    The Closure Evaluation System (CES) is a decision support tool, developed by the U.S. EPA's Risk Reduction Engineering Laboratory, to assist reviewers and preparers of Resource Conservation and Recovery Act (RCRA) Part B permit applications. CES is designed to serve as a checklis...

  11. Basic & Survival Consumer Economics for Adult Refugees.

    ERIC Educational Resources Information Center

    Carlston, Peter G.

    Prepared to help teachers address the basic and survival level consumer needs of adult Vietnamese and Laotian refugees, this instructional guide consists of five units of instructional materials. Topics of the individual units are (1) how the monetary system works (cash, checks, postal money orders, banking); (2) the family consumer (personal and…

  12. The Safety of Older Pedestrians at Signal-Controlled Crossings.

    ERIC Educational Resources Information Center

    Harrell, W. Andrew

    1996-01-01

    Observes the extent to which pedestrians checked for oncoming traffic before crossing signal-controlled intersections on busy city streets. Pedestrians over the age of 50 were the most cautious, especially under dangerous traffic conditions. Older pedestrians were least likely to use other pedestrians as "guides" to safety, instead…

  13. Foods and Nutrition. Student Modules and Instructor's Guide.

    ERIC Educational Resources Information Center

    South Carolina State Dept. of Education, Columbia. Office of Vocational Education.

    These 64 performance-based instructional modules are for the home economics content area of food and nutrition. Each module is composed of an introduction for the student, a performance objective, a variety of learning activities (reading assignments, tasks, written assignments), content information, a student self-check, recommended references,…

  14. 77 FR 15813 - Preoperational Testing of Instrument and Control Air Systems

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-16

    ... seismic requirement, ICAS air-dryer testing to meet dew point design requirements, ICAS accumulator check... ensure consideration only for comments received on or before this date. Although a time limit is given... improvements in all published guides are encouraged at any time. ADDRESSES: You may access information and...

  15. Bioenvironmental Engineer’s Guide to Indoor Air Quality Surveys

    DTIC Science & Technology

    2014-09-01

    housekeeping practices •Fresh air intake located near contaminant source •Check for sewer line leak, septic tank leak, fuel tank leaks...make-up air •Dirty coils/filters •Sewer gas, drain traps, sanitary vents •Leaky tanks , spills •Cleaning products, pesticides •Poor

  16. Guidelines for Protection in Evaluation.

    ERIC Educational Resources Information Center

    Mainzer, Richard W.; And Others

    The report examines due process protections and requirements in evaluation as mandated by P.L. 94-142 (The Education for All Handicapped Children Act) and Maryland special education Bylaw. A checklist guide for Admission, Review, and Dismissal (ARD) personnel to ensure protection in evaluation is provided. Self check guidelines touch on 16…

  17. Micronesian Mathematics Program, Level 1, Children's Workbook.

    ERIC Educational Resources Information Center

    Gring, Carolyn

    This workbook for children was prepared especially to accompany the level 1 Micronesian Mathematics Program Teacher's Guide. It is to be used to check whether children have learned concepts taught by activities and activity cards. Work is provided for such concepts as color recognition, categorizing, counting, ordering, numeration, contrasting,…

  18. Design criteria monograph for pressure regulators, relief valves, check valves, burst disks, and explosive valves

    NASA Technical Reports Server (NTRS)

    1974-01-01

    Monograph reviews and assesses current design practices, and from them establishes firm guidance for achieving greater consistency in design, increased reliability in end product, and greater efficiency in design effort. Five devices are treated separately. Guides to aid in configuration selection are outlined.

  19. 12 CFR Appendix C to Part 229 - Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... in different states or check processing regions)]. If you make the deposit in person to one of our... processing regions)]. If you make the deposit in person to one of our employees, funds from the following... Your Rights What Is a Substitute Check? To make check processing faster, federal law permits banks to...

  20. 12 CFR Appendix C to Part 229 - Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... in different states or check processing regions)]. If you make the deposit in person to one of our... processing regions)]. If you make the deposit in person to one of our employees, funds from the following... Your Rights What Is a Substitute Check? To make check processing faster, federal law permits banks to...

  1. 12 CFR Appendix C to Part 229 - Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... in different states or check processing regions)]. If you make the deposit in person to one of our... processing regions)]. If you make the deposit in person to one of our employees, funds from the following... Your Rights What Is a Substitute Check? To make check processing faster, federal law permits banks to...

  2. Determination of MLC model parameters for Monaco using commercial diode arrays.

    PubMed

    Kinsella, Paul; Shields, Laura; McCavana, Patrick; McClean, Brendan; Langan, Brian

    2016-07-08

    Multileaf collimators (MLCs) need to be characterized accurately in treatment planning systems to facilitate accurate intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). The aim of this study was to examine the use of MapCHECK 2 and ArcCHECK diode arrays for optimizing MLC parameters in Monaco X-ray voxel Monte Carlo (XVMC) dose calculation algorithm. A series of radiation test beams designed to evaluate MLC model parameters were delivered to MapCHECK 2, ArcCHECK, and EBT3 Gafchromic film for comparison. Initial comparison of the calculated and ArcCHECK-measured dose distributions revealed it was unclear how to change the MLC parameters to gain agreement. This ambiguity arose due to an insufficient sampling of the test field dose distributions and unexpected discrepancies in the open parts of some test fields. Consequently, the XVMC MLC parameters were optimized based on MapCHECK 2 measurements. Gafchromic EBT3 film was used to verify the accuracy of MapCHECK 2 measured dose distributions. It was found that adjustment of the MLC parameters from their default values resulted in improved global gamma analysis pass rates for MapCHECK 2 measurements versus calculated dose. The lowest pass rate of any MLC-modulated test beam improved from 68.5% to 93.5% with 3% and 2 mm gamma criteria. Given the close agreement of the optimized model to both MapCHECK 2 and film, the optimized model was used as a benchmark to highlight the relatively large discrepancies in some of the test field dose distributions found with ArcCHECK. Comparison between the optimized model-calculated dose and ArcCHECK-measured dose resulted in global gamma pass rates which ranged from 70.0%-97.9% for gamma criteria of 3% and 2 mm. The simple square fields yielded high pass rates. The lower gamma pass rates were attributed to the ArcCHECK overestimating the dose in-field for the rectangular test fields whose long axis was parallel to the long axis of the ArcCHECK. 
Considering ArcCHECK measurement issues and the lower gamma pass rates for the MLC-modulated test beams, it was concluded that MapCHECK 2 was a more suitable detector than ArcCHECK for the optimization process. © 2016 The Authors
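
    The gamma criterion quoted throughout this record (3%, 2 mm, global) combines a dose tolerance with a distance-to-agreement tolerance. A 1-D sketch of the pass-rate computation (clinical tools like those in the study work on 2-D/3-D dose grids; the profiles here are synthetic):

```python
import numpy as np

# Global gamma analysis in 1-D: a measured point passes if some reference
# point is simultaneously close in dose (3% of max) and position (2 mm).
def gamma_pass_rate(ref_dose, eval_dose, positions_mm,
                    dose_crit=0.03, dist_crit_mm=2.0):
    ref = np.asarray(ref_dose, float)
    ev = np.asarray(eval_dose, float)
    x = np.asarray(positions_mm, float)
    norm = ref.max()                              # "global" normalisation
    gammas = []
    for xi, di in zip(x, ev):
        dd = (di - ref) / (norm * dose_crit)      # dose differences
        dx = (xi - x) / dist_crit_mm              # spatial offsets
        gammas.append(np.sqrt(dd**2 + dx**2).min())
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

x = np.arange(0.0, 50.0, 1.0)                     # 1 mm grid
ref = np.exp(-((x - 25.0) / 12.0) ** 2)           # synthetic reference profile
meas = 1.02 * ref                                 # uniform 2% overdose
print(gamma_pass_rate(ref, meas, x))              # within 3%: passes everywhere
```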

  3. Propel: Tools and Methods for Practical Source Code Model Checking

    NASA Technical Reports Server (NTRS)

    Mansouri-Samani, Massoud; Mehlitz, Peter; Markosian, Lawrence; O'Malley, Owen; Martin, Dale; Moore, Lantz; Penix, John; Visser, Willem

    2003-01-01

    The work reported here is an overview and snapshot of a project to develop practical model checking tools for in-the-loop verification of NASA's mission-critical, multithreaded programs in Java and C++. Our strategy is to develop and evaluate both a design concept that enables the application of model checking technology to C++ and Java, and a model checking toolset for C++ and Java. The design concept and the associated model checking toolset are called Propel. It builds upon the Java PathFinder (JPF) tool, an explicit state model checker for Java applications developed by the Automated Software Engineering group at NASA Ames Research Center. The design concept that we are developing is Design for Verification (D4V). This is an adaptation of existing best design practices that has the desired side-effect of enhancing verifiability by improving modularity and decreasing accidental complexity. D4V, we believe, enhances the applicability of a variety of V&V approaches; we are developing the concept in the context of model checking. The model checking toolset, Propel, is based on extending JPF to handle C++. Our principal tasks in developing the toolset are to build a translator from C++ to Java, productize JPF, and evaluate the toolset in the context of D4V. Through all these tasks we are testing Propel capabilities on customer applications.

  4. Program Model Checking as a New Trend

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper introduces a special section of STTT (International Journal on Software Tools for Technology Transfer) containing a selection of papers that were presented at the 7th International SPIN workshop, Stanford, August 30 - September 1, 2000. The workshop was named SPIN Model Checking and Software Verification, with an emphasis on model checking of programs. The paper outlines the motivation for stressing software verification, rather than only design and model verification, by presenting the work done in the Automated Software Engineering group at NASA Ames Research Center within the last 5 years. This includes work in software model checking, testing like technologies and static analysis.

  5. UTP and Temporal Logic Model Checking

    NASA Astrophysics Data System (ADS)

    Anderson, Hugh; Ciobanu, Gabriel; Freitas, Leo

    In this paper we give an additional perspective to the formal verification of programs through temporal logic model checking, which uses Hoare and He's Unifying Theories of Programming (UTP). Our perspective emphasizes the use of UTP designs, an alphabetised relational calculus expressed as a pre/post condition pair of relations, to verify state or temporal assertions about programs. The temporal model checking relation is derived from a satisfaction relation between the model and its properties. The contribution of this paper is that it shows a UTP perspective to temporal logic model checking. The approach includes the notion of efficiency found in traditional model checkers, which reduces the state explosion problem through the use of efficient data structures.
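
    The pre/post view of a UTP design can be rendered as a toy check: represent each design as a (precondition, postcondition) pair of predicates and test refinement by brute force over a small value space. This is purely illustrative of the design calculus, not the paper's formulation:

```python
# A "design" is a pair (pre, post): pre over an initial value, post over an
# (initial, final) pair. `refines` checks the usual condition by enumeration:
# wherever the abstract precondition holds, the concrete design must be
# defined and every behaviour it allows must satisfy the abstract post.
def refines(concrete, abstract, values):
    a_pre, a_post = abstract
    c_pre, c_post = concrete
    for v in values:
        if a_pre(v):
            if not c_pre(v):
                return False
            for v2 in values:
                if c_post(v, v2) and not a_post(v, v2):
                    return False
    return True

# Abstract spec: for x >= 0, return some y with y*y <= x (a loose spec).
abstract = (lambda x: x >= 0, lambda x, y: y * y <= x)
# Concrete design: always return 0 -- trivially within the spec.
concrete = (lambda x: True, lambda x, y: y == 0)

print(refines(concrete, abstract, range(-3, 10)))
```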

  6. 75 FR 28480 - Airworthiness Directives; Airbus Model A300 Series Airplanes; Model A300 B4-600, B4-600R, F4-600R...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-21

    ... pressurise the hydraulic reservoirs, due to leakage of the Crissair reservoir air pressurisation check valves. * * * The leakage of the check valves was caused by an incorrect spring material. The affected Crissair check valves * * * were then replaced with improved check valves P/N [part number] 2S2794-1 * * *. More...

  7. Assessment of check-dam groundwater recharge with water-balance calculations

    NASA Astrophysics Data System (ADS)

    Djuma, Hakan; Bruggeman, Adriana; Camera, Corrado; Eliades, Marinos

    2017-04-01

    Studies on the enhancement of groundwater recharge by check-dams in arid and semi-arid environments mainly focus on deriving water infiltration rates from the check-dam ponding areas. This is usually achieved by applying simple water balance models, more advanced models (e.g., two-dimensional groundwater models) or field tests (e.g., infiltrometer tests or soil pit tests). Recharge behind the check-dam can be affected by the build-up of sediment as a result of erosion in the upstream watershed area. This natural process can increase the uncertainty in the estimates of the recharged water volume, especially for water balance calculations. Few water balance field studies of individual check-dams have been presented in the literature, and none of them presented the associated uncertainties of their estimates. The objectives of this study are i) to assess the effect of a check-dam on groundwater recharge from an ephemeral river; and ii) to assess annual sedimentation at the check-dam during a 4-year period. The study was conducted on a check-dam on the semi-arid island of Cyprus. Field campaigns were carried out to measure water flow, water depth and check-dam topography in order to establish check-dam water height, volume, evaporation, outflow and recharge relations. Topographic surveys were repeated at the end of consecutive hydrological years to estimate the sediment build-up in the reservoir area of the check-dam. Also, sediment samples were collected from the check-dam reservoir area for bulk-density analyses. To quantify the groundwater recharge, a water balance model was applied at two locations: at the check-dam and corresponding reservoir area, and at a 4-km stretch of the river bed without a check-dam. Results showed that a check-dam with a storage capacity of 25,000 m3 was able to recharge the aquifer, in four years, with a total of 12 million m3 out of the 42 million m3 of measured (or modelled) streamflow.
Recharge from the analyzed 4-km long river section without a check-dam was estimated to be 1 million m3. Upper and lower limits of prediction intervals were computed to assess the uncertainties of the results. The model was rerun with these values and resulted in recharge values of 0.4 m3 as the lower and 38 million m3 as the upper limit. The sediment survey in the check-dam reservoir area showed that the reservoir area was filled with 2,000 to 3,000 tons of sediment after one rainfall season. This amount of sediment corresponds to a sediment yield of 0.2 to 2 t ha-1 y-1 at the watershed level and reduces the check-dam storage capacity by approximately 10%. Results indicate that check-dams are valuable structures for increasing groundwater resources, but special attention should be given to soil erosion occurring in the upstream area and the resulting sediment build-up in the check-dam reservoir area. This study has received funding from the EU FP7 RECARE Project (GA 603498).
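
    The core of such a water-balance calculation can be sketched as follows. This is an illustrative toy, not the authors' model: recharge is solved as the residual of the balance over a time step, and all variable names and figures are hypothetical.

```python
# Illustrative sketch (not the authors' model): a daily water balance for a
# check-dam reservoir, solving for recharge as the closing (residual) term.

def daily_recharge(inflow, outflow, evaporation, storage_start, storage_end):
    """Recharge (m3/day) closing the balance:
    S_end = S_start + Qin - Qout - E - R
    =>  R = S_start - S_end + Qin - Qout - E
    """
    return storage_start - storage_end + inflow - outflow - evaporation

# One hypothetical day: 5000 m3 flows in, 1200 m3 spills over the dam,
# 300 m3 evaporates, and storage rises from 8000 to 10000 m3.
r = daily_recharge(inflow=5000, outflow=1200, evaporation=300,
                   storage_start=8000, storage_end=10000)
print(r)  # 1500 m3 recharged to the aquifer
```

    In practice each term carries measurement uncertainty, which is why the study propagates prediction intervals through the balance.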

  8. Efficient model checking of network authentication protocol based on SPIN

    NASA Astrophysics Data System (ADS)

    Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan

    2013-03-01

    Model checking is a very useful technique for verifying network authentication protocols. In order to improve the efficiency of modeling and verifying such protocols with model checking technology, this paper first proposes a universal formal description method for the protocols. Combined with the model checker SPIN, the method can conveniently verify the properties of a protocol. Through several simplifying modeling strategies, we can model protocols efficiently and reduce the state space of the model. Compared with the previous literature, this paper achieves a higher degree of automation and better verification efficiency. Finally, based on the method described in the paper, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective and is applicable to other authentication protocols.
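
    The essence of what an explicit-state checker such as SPIN does can be sketched in a few lines: exhaustively enumerate the reachable states of a model and test a safety property in each. The toy mutual-exclusion "protocol" below is hypothetical, chosen only to keep the state space small; it is not the PKM protocol from the paper.

```python
from collections import deque

def reachable_states(initial, successors):
    """Breadth-first enumeration of all states reachable from `initial`."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Toy model: two processes, each cycling idle -> trying -> critical -> idle,
# where a process may enter "critical" only if the other is not there.
def successors(state):
    for i in (0, 1):
        s = list(state)
        if s[i] == "idle":
            s[i] = "trying"
        elif s[i] == "trying" and state[1 - i] != "critical":
            s[i] = "critical"
        elif s[i] == "critical":
            s[i] = "idle"
        else:
            continue  # blocked: waiting for the critical section
        yield tuple(s)

states = reachable_states(("idle", "idle"), successors)
# Safety property: the two processes are never both in the critical section.
assert all(s != ("critical", "critical") for s in states)
print(len(states), "states explored; mutual exclusion holds")
```

    Real protocol models explode combinatorially with message and key state, which is why the simplification strategies discussed in the paper matter.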

  9. An Integrated Environment for Efficient Formal Design and Verification

    NASA Technical Reports Server (NTRS)

    1998-01-01

    The general goal of this project was to improve the practicality of formal methods by combining techniques from model checking and theorem proving. At the time the project was proposed, the model checking and theorem proving communities were applying different tools to similar problems, but there was not much cross-fertilization. This project involved a group from SRI that had substantial experience in the development and application of theorem-proving technology, and a group at Stanford that specialized in model checking techniques. Now, over five years after the proposal was submitted, there are many research groups working on combining theorem-proving and model checking techniques, and much more communication between the model checking and theorem proving research communities. This project contributed significantly to this research trend. The research work under this project covered a variety of topics: new theory and algorithms; prototype tools; verification methodology; and applications to problems in particular domains.

  10. A voice-actuated wind tunnel model leak checking system

    NASA Technical Reports Server (NTRS)

    Larson, William E.

    1989-01-01

    A computer program has been developed that improves the efficiency of wind tunnel model leak checking. The program uses a voice recognition unit to relay a technician's commands to the computer. The computer, after receiving a command, can respond to the technician via a voice response unit. Information about the model pressure orifice being checked is displayed on a gas-plasma terminal. On command, the program records up to 30 seconds of pressure data. After the recording is complete, the raw data and a straight line fit of the data are plotted on the terminal. This allows the technician to make a decision on the integrity of the orifice being checked. All results of the leak check program are stored in a database file that can be listed on the line printer for record keeping purposes or displayed on the terminal to help the technician find unchecked orifices. This program allows one technician to check a model for leaks instead of the two or three previously required.
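
    The leak decision described above comes down to fitting a straight line to the recorded pressure samples and judging its slope. A minimal sketch, with a hypothetical tolerance and made-up data (the actual program's units and thresholds are not given in the abstract):

```python
# Fit a least-squares line to pressure-vs-time samples; flag the orifice as
# leaking if pressure decays faster than a tolerance. Data are hypothetical.

def slope(times, pressures):
    """Least-squares slope of pressure vs. time."""
    n = len(times)
    mt = sum(times) / n
    mp = sum(pressures) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(times, pressures))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

def is_leaking(times, pressures, tolerance=-0.05):
    """Leak if the fitted slope is below `tolerance` (pressure units/sec)."""
    return slope(times, pressures) < tolerance

t = list(range(10))                     # ten seconds of samples
steady = [100.0 - 0.01 * s for s in t]  # nearly flat trace: tight orifice
leaky = [100.0 - 0.8 * s for s in t]    # clearly decaying trace
print(is_leaking(t, steady), is_leaking(t, leaky))  # False True
```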

  11. Use of posterior predictive checks as an inferential tool for investigating individual heterogeneity in animal population vital rates

    PubMed Central

    Chambert, Thierry; Rotella, Jay J; Higgs, Megan D

    2014-01-01

    The investigation of individual heterogeneity in vital rates has recently received growing attention among population ecologists. Individual heterogeneity in wild animal populations has been accounted for and quantified by including individually varying effects in models for mark–recapture data, but the real need for underlying individual effects to account for observed levels of individual variation has recently been questioned by the work of Tuljapurkar et al. (Ecology Letters, 12, 93, 2009) on dynamic heterogeneity. Model-selection approaches based on information criteria or Bayes factors have been used to address this question. Here, we suggest that, in addition to model-selection, model-checking methods can provide additional important insights to tackle this issue, as they allow one to evaluate a model's misfit in terms of ecologically meaningful measures. Specifically, we propose the use of posterior predictive checks to explicitly assess discrepancies between a model and the data, and we explain how to incorporate model checking into the inferential process used to assess the practical implications of ignoring individual heterogeneity. Posterior predictive checking is a straightforward and flexible approach for performing model checks in a Bayesian framework that is based on comparisons of observed data to model-generated replications of the data, where parameter uncertainty is incorporated through use of the posterior distribution. If discrepancy measures are chosen carefully and are relevant to the scientific context, posterior predictive checks can provide important information allowing for more efficient model refinement. We illustrate this approach using analyses of vital rates with long-term mark–recapture data for Weddell seals and emphasize its utility for identifying shortfalls or successes of a model at representing a biological process or pattern of interest. 
We show how posterior predictive checks can be used to strengthen inferences in ecological studies. We demonstrate the application of this method to analyses dealing with the question of individual reproductive heterogeneity in a population of Antarctic pinnipeds. PMID:24834335
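
    The mechanics of a posterior predictive check can be sketched generically (this is not the seal analysis; the breeding data, sample sizes, and the use of a conjugate Beta posterior as a stand-in for MCMC output are all illustrative assumptions):

```python
import random

# Posterior predictive check sketch: simulate replicated datasets from
# posterior draws and compare a discrepancy measure (here, the count of
# "successes") against the observed data. All numbers are made up.

random.seed(1)
observed = [1] * 37 + [0] * 13      # e.g., 37 breeders out of 50 animals
obs_stat = sum(observed)

# Stand-in for MCMC output: draws from the Beta posterior of the
# success probability under a uniform prior.
posterior_draws = [random.betavariate(1 + obs_stat, 1 + 50 - obs_stat)
                   for _ in range(2000)]

# For each draw, simulate a replicated dataset and record the statistic.
rep_stats = [sum(random.random() < p for _ in range(50))
             for p in posterior_draws]

# Posterior predictive p-value: fraction of replications at least as
# extreme as the observed statistic. Values near 0 or 1 signal misfit.
ppp = sum(s >= obs_stat for s in rep_stats) / len(rep_stats)
print(round(ppp, 2))
```

    As the abstract stresses, the value of the approach lies in choosing discrepancy measures that are ecologically meaningful, not just generic statistics like the count used here.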

  12. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking

    PubMed Central

    Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). 
Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems. PMID:27187178
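
    Since the methodology is defined relative to time-series data, the flavour of such a check can be conveyed with a drastically simplified sketch: verifying a bounded "eventually-globally" temporal property over a single numeric trace. Mule itself handles far richer probabilistic spatio-temporal specifications; the trace and threshold below are hypothetical.

```python
# Check a simple temporal property over a numeric time series: the value
# eventually rises above a threshold and stays there for the rest of the
# trace (an "eventually-globally" pattern).

def eventually_globally(trace, threshold):
    """True if from some time point onward all values exceed `threshold`."""
    return any(all(v > threshold for v in trace[i:])
               for i in range(len(trace)))

concentration = [0.1, 0.3, 0.9, 1.4, 1.2, 1.6, 1.5]
print(eventually_globally(concentration, 1.0))  # True: holds from index 3 on
print(eventually_globally(concentration, 1.5))  # False: 1.5 is not above 1.5
```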

  13. A Novel Method to Verify Multilevel Computational Models of Biological Systems Using Multiscale Spatio-Temporal Meta Model Checking.

    PubMed

    Pârvu, Ovidiu; Gilbert, David

    2016-01-01

    Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). 
Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems.

  14. Millwright Apprenticeship. Related Training Modules. 17.1-17.13 Hydraulics.

    ERIC Educational Resources Information Center

    Lane Community Coll., Eugene, OR.

    This packet of 13 learning modules on hydraulics is 1 of 6 such packets developed for apprenticeship training for millwrights. Introductory materials are a complete listing of all available modules and a supplementary reference list. Each module contains some or all of these components: goal, performance indicators, study guide (a check list of…

  15. Development and Validation of Economics Achievement Test for Secondary Schools

    ERIC Educational Resources Information Center

    Eleje, Lydia Ijeoma; Abanobi, Chidiebere Christopher; Obasi, Emma

    2017-01-01

    Economics achievement test (EAT) for assessing senior secondary two (SS2) achievement in economics was developed and validated in the study. Five research questions guided the study. Twenty and 100 mid-senior secondary (SS2) economics students was used for the pilot testing and reliability check respectively. A sample of 250 students randomly…

  16. Desert Shield Leader’s Safety Guide

    DTIC Science & Technology

    1990-12-01

    banana oil) vapor is toxic and flammable. Checking the seal of the protective mask should be done in a well-ventilated area away from heat and flames...protective clothing to keep fuel off the skin. (Skin is highly susceptible to drying, cracking, and peeling if it comes in contact with fuel in desert

  17. Student Perceptions of Formative Assessment in the Chemistry Classroom

    ERIC Educational Resources Information Center

    Haroldson, Rachelle Ann

    2012-01-01

    Research on formative assessment has focused on the ways teachers implement and use formative assessment to check student understanding in order to guide their instruction. This study shifted emphasis away from teachers to look at how students use and perceive formative assessment in the science classroom. Four key strategies of formative…

  18. Curriculum Development--Post-Secondary Electro-Mechanical Technology. Parts I-IV.

    ERIC Educational Resources Information Center

    Texas State Technical Inst., Sweetwater.

    This curriculum guide consists of materials for use in teaching a four-part course in electromechanical technical technology. The first part contains nine units dealing with hydraulics and nine units on pneumatics. Addressed in the individual units are the following topics: an introduction to hydraulics; control of hydraulic energy; check valves…

  19. Marketing the "Sex Check": Evaluating Recruitment Strategies for a Telephone-Based HIV Prevention Project for Gay and Bisexual Men

    ERIC Educational Resources Information Center

    McKee, Michael B.; Picciano, Joseph F.; Roffman, Roger A.; Swanson, Fred; Kalichman, Seth C.

    2006-01-01

    Designing effective marketing and recruitment strategies for HIV prevention research requires attention to cultural relevance, logistical barriers, and perceived psychosocial barriers to accessing services. McGuire's communication/persuasion matrix (1985) guided our evaluation, with particular attention to success of each marketing "channel"…

  20. Millwright Apprenticeship. Related Training Modules. 12.1-12.3 Feedwater.

    ERIC Educational Resources Information Center

    Lane Community Coll., Eugene, OR.

    This packet of three learning modules on feedwater is one of six such packets developed for apprenticeship training for millwrights. Introductory materials are a complete listing of all available modules and a supplementary reference list. Each module contains some or all of these components: goal, performance indicators, study guide (a check list…

  1. Occupational Exploration at Ontario Junior High School: 9th Grade.

    ERIC Educational Resources Information Center

    Bates, Gene; And Others

    The document contains 56 activities for Grade 9. The contents include the following areas: questions about the future; job seeking activities and guidelines; career games; a personal interest check list; unit guides for courses in World of Work (55 pages), and Career Educational Planning (40 pages) which include objectives, activities, evaluation,…

  2. Resource guide 2004. Blood glucose. Monitors and data management systems.

    PubMed

    2004-01-01

    Before you buy a blood glucose monitor (also known as a blood glucose meter), check with your doctor and diabetes educator. Make sure the one you choose is well suited to your particular needs. You might want to have one at home and one for use at school or the office.

  3. Model Checking Temporal Logic Formulas Using Sticker Automata

    PubMed Central

    Feng, Changwei; Wu, Huanmei

    2017-01-01

    As an important complex problem, the temporal logic model checking problem is still far from being fully resolved under the circumstance of DNA computing, especially Computation Tree Logic (CTL), Interval Temporal Logic (ITL), and Projection Temporal Logic (PTL), because there is still a lack of approaches for DNA model checking. To address this challenge, a model checking method is proposed for checking the basic formulas in the above three temporal logic types with DNA molecules. First, one-type single-stranded DNA molecules are employed to encode the Finite State Automaton (FSA) model of the given basic formula so that a sticker automaton is obtained. On the other hand, other single-stranded DNA molecules are employed to encode the given system model so that the input strings of the sticker automaton are obtained. Next, a series of biochemical reactions are conducted between the above two types of single-stranded DNA molecules. It can then be decided whether the system satisfies the formula or not. As a result, we have developed a DNA-based approach for checking all the basic formulas of CTL, ITL, and PTL. The simulated results demonstrate the effectiveness of the new method. PMID:29119114
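
    A software analogue of the underlying idea (the actual computation runs in DNA, not in code): encode the property as a finite state automaton and run the system's behaviour strings through it. The automaton below is hypothetical, accepting traces in which "grant" never occurs without a preceding "request".

```python
# Run a word through a deterministic FSA given as a transition dict;
# a missing transition means immediate rejection.

def accepts(transitions, start, accepting, word):
    state = start
    for symbol in word:
        state = transitions.get((state, symbol))
        if state is None:          # no such move: reject
            return False
    return state in accepting

fsa = {
    ("idle", "request"): "waiting",
    ("idle", "tick"): "idle",
    ("waiting", "grant"): "idle",
    ("waiting", "tick"): "waiting",
}
ok = accepts(fsa, "idle", {"idle", "waiting"}, ["tick", "request", "grant"])
bad = accepts(fsa, "idle", {"idle", "waiting"}, ["grant"])
print(ok, bad)  # True False
```

    In the paper's setting, both the automaton and the input strings are encoded as single-stranded DNA, and the run is carried out by hybridization reactions rather than a loop.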

  4. Foundations of the Bandera Abstraction Tools

    NASA Technical Reports Server (NTRS)

    Hatcliff, John; Dwyer, Matthew B.; Pasareanu, Corina S.; Robby

    2003-01-01

    Current research is demonstrating that model-checking and other forms of automated finite-state verification can be effective for checking properties of software systems. Due to the exponential costs associated with model-checking, multiple forms of abstraction are often necessary to obtain system models that are tractable for automated checking. The Bandera Tool Set provides multiple forms of automated support for compiling concurrent Java software systems to models that can be supplied to several different model-checking tools. In this paper, we describe the foundations of Bandera's data abstraction mechanism which is used to reduce the cardinality (and the program's state-space) of data domains in software to be model-checked. From a technical standpoint, the form of data abstraction used in Bandera is simple, and it is based on classical presentations of abstract interpretation. We describe the mechanisms that Bandera provides for declaring abstractions, for attaching abstractions to programs, and for generating abstracted programs and properties. The contributions of this work are the design and implementation of various forms of tool support required for effective application of data abstraction to software components written in a programming language like Java which has a rich set of linguistic features.
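
    The flavour of the data-abstraction mechanism can be sketched with the classic "signs" abstraction from abstract interpretation (a textbook example, not Bandera's actual API): a concrete integer domain is collapsed to three abstract values, shrinking the state space the checker must explore.

```python
# Signs abstraction: concrete ints -> {NEG, ZERO, POS}, with an abstract
# addition that may lose precision (returning a *set* of possible results).

def alpha(n):
    """Abstraction function from concrete integers to signs."""
    return "NEG" if n < 0 else "ZERO" if n == 0 else "POS"

def abstract_add(a, b):
    """Abstract addition; returns the set of possible abstract results."""
    table = {
        ("POS", "POS"): {"POS"},
        ("NEG", "NEG"): {"NEG"},
        ("ZERO", "ZERO"): {"ZERO"},
        ("POS", "ZERO"): {"POS"}, ("ZERO", "POS"): {"POS"},
        ("NEG", "ZERO"): {"NEG"}, ("ZERO", "NEG"): {"NEG"},
        # Opposite signs: the result may have any sign (precision lost).
        ("POS", "NEG"): {"NEG", "ZERO", "POS"},
        ("NEG", "POS"): {"NEG", "ZERO", "POS"},
    }
    return table[(a, b)]

# Soundness spot-check: the abstraction of every concrete sum is among the
# abstract results of the abstracted operands.
for x, y in [(3, 4), (-2, -9), (5, -5), (0, 7)]:
    assert alpha(x + y) in abstract_add(alpha(x), alpha(y))
print("signs abstraction is sound on the samples")
```

    Bandera automates the analogous steps for Java programs: declaring such abstractions, attaching them to program variables, and generating the abstracted program and properties.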

  5. Full implementation of a distributed hydrological model based on check dam trapped sediment volumes

    NASA Astrophysics Data System (ADS)

    Bussi, Gianbattista; Francés, Félix

    2014-05-01

    Lack of hydrometeorological data is one of the most compelling limitations to the implementation of distributed environmental models. Mediterranean catchments, in particular, are characterised by high spatial variability of meteorological phenomena and soil characteristics, which may prevent transferring model calibrations from a fully gauged catchment to a totally or partially ungauged one. For this reason, new sources of data are required in order to extend the use of distributed models to non-monitored or low-monitored areas. An important source of information regarding the hydrological and sediment cycle is represented by sediment deposits accumulated at the bottom of reservoirs. Since the 1960s, reservoir sedimentation volumes have been used as proxy data for the estimation of inter-annual total sediment yield rates or, in more recent years, as a reference measure of sediment transport for sediment model calibration and validation. Nevertheless, the possibility of using such data to constrain the calibration of a hydrological model has not been exhaustively investigated so far. In this study, the use of nine check dam reservoir sedimentation volumes for hydrological and sedimentological model calibration and spatio-temporal validation was examined. Check dams are common structures in Mediterranean areas, and are a potential source of spatially distributed information regarding both the hydrological and the sediment cycle. In this case study, the TETIS hydrological and sediment model was implemented in a medium-size Mediterranean catchment (Rambla del Poyo, Spain) by taking advantage of sediment deposits accumulated behind the check dams located in the catchment headwaters. Reservoir trap efficiency was taken into account by coupling the TETIS model with a pond trap efficiency model. The model was calibrated by adjusting some of its parameters in order to reproduce the total sediment volume accumulated behind a check dam.
Then, the model was spatially validated by obtaining the simulated sedimentation volume at the other eight check dams and comparing it to the observed sedimentation volumes. Lastly, the simulated water discharge at the catchment outlet was compared with observed water discharge records in order to check the hydrological sub-model behaviour. Model results provided highly valuable information concerning the spatial distribution of soil erosion and sediment transport. Spatial validation of the sediment sub-model provided very good results at seven check dams out of nine. This study shows that check dams can be a useful tool also for constraining hydrological model calibration, as model results agree with water discharge observations. In fact, the hydrological model validation at a downstream water flow gauge obtained a Nash-Sutcliffe efficiency of 0.8. This technique is applicable to all catchments with presence of check dams, and only requires rainfall and temperature data and soil characteristics maps.
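
    The calibration idea can be sketched in miniature (TETIS itself is a full distributed model; the stand-in sediment function, parameter range, and numbers below are all made up): adjust one erodibility-like parameter until the simulated trapped volume matches the surveyed one.

```python
# Toy calibration sketch: bisection on a single parameter so that a stand-in
# sediment model reproduces the sediment volume surveyed behind a check dam.

def simulated_trapped_volume(erodibility, runoff_series, trap_efficiency=0.8):
    """Stand-in sediment model: trapped volume grows with erodibility
    and (nonlinearly) with event runoff."""
    return trap_efficiency * erodibility * sum(q ** 1.5 for q in runoff_series)

def calibrate(observed_volume, runoff_series, lo=0.0, hi=10.0, iters=60):
    """Bisect the parameter until the model reproduces the surveyed volume
    (valid because the stand-in model is monotone in the parameter)."""
    for _ in range(iters):
        mid = (lo + hi) / 2
        if simulated_trapped_volume(mid, runoff_series) < observed_volume:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

runoff = [4.0, 9.0, 1.0, 16.0]          # hypothetical event runoff series
k = calibrate(observed_volume=500.0, runoff_series=runoff)
print(round(simulated_trapped_volume(k, runoff), 3))  # ~500.0
```

    Spatial validation then amounts to running the calibrated model at the other check dams and comparing simulated against surveyed volumes, as done for the eight remaining dams in the study.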

  6. Towards Symbolic Model Checking for Multi-Agent Systems via OBDDs

    NASA Technical Reports Server (NTRS)

    Raimondi, Franco; Lomuscio, Alessio

    2004-01-01

    We present an algorithm for model checking temporal-epistemic properties of multi-agent systems, expressed in the formalism of interpreted systems. We first introduce a technique for the translation of interpreted systems into boolean formulae, and then present a model-checking algorithm based on this translation. The algorithm is based on OBDDs, as they offer a compact and efficient representation for boolean formulae.
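
    The fixpoint computation at the heart of symbolic model checking can be sketched with Python sets standing in for OBDDs (a real checker represents these state sets as boolean formulae over state variables; the transition relation below is hypothetical):

```python
# Symbolic-style reachability: compute the least fixpoint of
# R' = R ∪ image(R) over sets of states. Sets stand in for OBDDs here.

def reachable(initial, transitions):
    """States reachable from `initial` via the transition relation."""
    current = frozenset(initial)
    while True:
        image = {t for (s, t) in transitions if s in current}
        nxt = current | image
        if nxt == current:          # fixpoint reached: nothing new to add
            return current
        current = frozenset(nxt)

trans = {(0, 1), (1, 2), (2, 0), (3, 4)}   # 3 -> 4 lies outside the cycle
print(sorted(reachable({0}, trans)))  # [0, 1, 2]
```

    The gain of OBDDs over explicit sets is that one compact formula can represent astronomically many states, which is what makes the approach scale to multi-agent models.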

  7. Verifying Architectural Design Rules of the Flight Software Product Line

    NASA Technical Reports Server (NTRS)

    Ganesan, Dharmalingam; Lindvall, Mikael; Ackermann, Chris; McComas, David; Bartholomew, Maureen

    2009-01-01

    This paper presents experiences of verifying architectural design rules of the NASA Core Flight Software (CFS) product line implementation. The goal of the verification is to check whether the implementation is consistent with the CFS architectural rules derived from the developer's guide. The results indicate that consistency checking helps a) identify architecturally significant deviations that eluded code reviews, b) clarify the design rules for the team, and c) assess the overall implementation quality. Furthermore, it helps connect business goals to architectural principles and to the implementation. This paper is the first step in the definition of a method for analyzing and evaluating product line implementations from an architecture-centric perspective.
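
    In miniature, this kind of consistency check compares dependencies extracted from the code against allowed relations between architectural layers. The module names, layers, and rule below are hypothetical illustrations, not the CFS rules:

```python
# Check extracted module dependencies against a layering rule:
# application modules may call into the core layer, never the reverse.

LAYER = {"app_sched": "app", "app_telemetry": "app",
         "core_msg": "core", "core_time": "core"}
ALLOWED = {("app", "core"), ("app", "app"), ("core", "core")}

def violations(dependencies):
    """Return the (from_module, to_module) pairs breaking the rule."""
    return [(a, b) for a, b in dependencies
            if (LAYER[a], LAYER[b]) not in ALLOWED]

deps = [("app_sched", "core_time"),
        ("core_msg", "app_telemetry"),   # core calling up into an app: bad
        ("app_telemetry", "core_msg")]
print(violations(deps))  # [('core_msg', 'app_telemetry')]
```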

  8. Soldering and brazing safety guide: A handbook on space practice for those involved in soldering and brazing

    NASA Astrophysics Data System (ADS)

    This manual provides those involved in soldering and brazing with effective safety procedures for use in the performance of their jobs. Hazards exist in four types of general soldering and brazing processes: (1) cleaning; (2) application of flux; (3) application of heat and filler metal; and (4) residue cleaning. Most hazards during these operations can be avoided by using care, proper ventilation, and protective clothing and equipment. Specific process hazards for various methods of brazing and soldering are treated. Methods to check ventilation are presented, and personal hygiene and good maintenance practices are stressed. Several emergency first aid treatments are described.

  9. Compositional schedulability analysis of real-time actor-based systems.

    PubMed

    Jaghoori, Mohammad Mahdi; de Boer, Frank; Longuet, Delphine; Chothia, Tom; Sirjani, Marjan

    2017-01-01

    We present an extension of the actor model with real-time features, including deadlines associated with messages and explicit application-level scheduling policies, e.g., "earliest deadline first", which can be associated with individual actors. Schedulability analysis in this setting amounts to checking whether, given a scheduling policy for each actor, every task is processed within its designated deadline. To check schedulability, we introduce a compositional automata-theoretic approach, based on maximal use of model checking combined with testing. Behavioral interfaces define what an actor expects from the environment, and the deadlines for messages given these assumptions. We use model checking to verify that actors match their behavioral interfaces. We extend timed automata refinement with the notion of deadlines and use it to define compatibility of actor environments with the behavioral interfaces. Model checking of compatibility is computationally hard, so we propose a special testing process. We show that the analyses are decidable and automate the process using the Uppaal model checker.
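
    The schedulability question itself can be illustrated concretely (this simulation sketch is not the paper's automata-theoretic analysis, and the message parameters are hypothetical): run one actor's message queue under earliest-deadline-first and check that every message finishes by its absolute deadline.

```python
import heapq

# Non-preemptive EDF simulation for a single actor. Each message is
# (release_time, exec_time, absolute_deadline); return whether all
# messages meet their deadlines under earliest-deadline-first.

def edf_schedulable(messages):
    pending, time, i = [], 0, 0
    msgs = sorted(messages)                  # order by release time
    while i < len(msgs) or pending:
        if not pending:                      # idle until the next release
            time = max(time, msgs[i][0])
        while i < len(msgs) and msgs[i][0] <= time:
            rel, dur, dl = msgs[i]
            heapq.heappush(pending, (dl, dur))
            i += 1
        dl, dur = heapq.heappop(pending)     # run earliest deadline first
        time += dur                          # non-preemptive for simplicity
        if time > dl:                        # deadline miss
            return False
    return True

ok = edf_schedulable([(0, 2, 5), (1, 2, 10), (2, 1, 6)])
bad = edf_schedulable([(0, 4, 4), (0, 4, 5)])
print(ok, bad)  # True False
```

    The paper's contribution is doing this compositionally and exhaustively with timed automata in Uppaal, rather than by simulating one concrete arrival pattern as above.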

  10. Ethical Review as a Tool for Enhancing Postgraduate Supervision and Research Outcomes in the Creative Arts

    ERIC Educational Resources Information Center

    Romano, Angela

    2016-01-01

    This article outlines the potential for Research Higher Degree (RHD) supervisors at universities and similar institutions to use ethical review as a constructive, dynamic tool in guiding RHD students in the timely completion of effective, innovative research projects. Ethical review involves a bureaucratized process for checking that researchers…

  11. Consumer Economics. Teacher Guidebook and Student Activity Book. Adult Basic Education Project REAL: Relevant Education for Adult Learners.

    ERIC Educational Resources Information Center

    Edgar, S. Keith

    This packet contains both a teacher's guide and a student activity book designed to help adult students acquire consumer information. Both booklets cover the following topics: bank accounts (checking accounts, savings accounts, other banking services), budgeting money, understanding and using credit, comparative shopping, fraudulent persuasion, and…

  12. Research: Rags to Rags? Riches to Riches?

    ERIC Educational Resources Information Center

    Bracey, Gerald W.

    2004-01-01

    Everyone has read about what might be called the "gold gap"--how the rich in this country are getting richer and controlling an ever-larger share of the nation's wealth. The Century Foundation has started publishing "Reality Check", a series of guides to campaign issues that sometimes finds gaps in these types of cherished delusions. The guides…

  13. Learning to Teach Nothing in Particular: A Uniquely American Educational Dilemma

    ERIC Educational Resources Information Center

    Cohen, David K.

    2011-01-01

    When inspectors visit construction sites to assess the quality of work, they do so against the building code, which typically is written out in detail and used to guide work and teach apprentices. When attending physicians supervise interns as they take patients' histories or check their blood pressure, they compare the interns' work with…

  14. How to Thank a Teacher

    ERIC Educational Resources Information Center

    Davis, Matthew

    2012-01-01

    When it comes to expressing your appreciation to teachers, here's the drill: if the words don't come easily, don't let them get in the way. This guide is full of simple, affordable, straight-from-the-heart actions and gifts that will speak louder than words. And for those who are comfortable putting pen to paper, check out suggestions for written…

  15. ANSYS duplicate finite-element checker routine

    NASA Technical Reports Server (NTRS)

    Ortega, R.

    1995-01-01

    An ANSYS finite-element code routine to check for duplicated elements within the volume of a three-dimensional (3D) finite-element mesh was developed. The routine is used for checking floating elements within a mesh, identically duplicated elements, and intersecting elements with a common face. A check of a Space Shuttle Main Engine alternate turbopump development high-pressure oxidizer turbopump finite-element model using the developed subroutine is discussed. Finally, recommendations are provided for duplicate element checking of 3D finite-element models.
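
    The identically-duplicated-element check reduces to comparing node sets regardless of node ordering. A hypothetical sketch (the actual routine runs inside ANSYS; element numbers and connectivities below are made up):

```python
# Group elements whose connectivity uses the same set of nodes, regardless
# of node ordering; any group with more than one member is a duplicate.

def find_duplicates(elements):
    """elements: dict of element id -> tuple of node numbers.
    Returns groups of element ids sharing the same node set."""
    by_nodes = {}
    for eid, nodes in elements.items():
        by_nodes.setdefault(frozenset(nodes), []).append(eid)
    return [sorted(ids) for ids in by_nodes.values() if len(ids) > 1]

mesh = {
    1: (10, 11, 12, 13),
    2: (12, 11, 10, 13),   # same nodes as element 1, different order
    3: (13, 14, 15, 16),
}
print(find_duplicates(mesh))  # [[1, 2]]
```

    Detecting intersecting elements with a common face, also mentioned above, requires geometric tests beyond this connectivity comparison.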

  16. Pharmacist and Technician Perceptions of Tech-Check-Tech in Community Pharmacy Practice Settings.

    PubMed

    Frost, Timothy P; Adams, Alex J

    2018-04-01

    Tech-check-tech (TCT) is a practice model in which pharmacy technicians with advanced training can perform final verification of prescriptions that have been previously reviewed for appropriateness by a pharmacist. Few states have adopted TCT, in part because of the common view that this model is controversial among members of the profession. This article aims to summarize the existing research on pharmacist and technician perceptions of community pharmacy-based TCT. A literature review was conducted using MEDLINE (January 1990 to August 2016) and Google Scholar (January 1990 to August 2016) using the terms "tech* and check," "tech-check-tech," "checking technician," and "accuracy checking tech*." Of the 7 studies identified, we found general agreement among both pharmacists and technicians that TCT in community pharmacy settings can be safely performed. This agreement persisted in studies of theoretical TCT models and in studies assessing participants in actual community-based TCT models. Pharmacists who had previously worked with a checking technician were generally more favorable toward TCT. Both pharmacists and technicians in community pharmacy settings generally perceived TCT to be safe, in both theoretical surveys and in surveys following actual TCT demonstration projects. These perceptions of safety align well with the actual outcomes achieved from community pharmacy TCT studies.

  17. Visplause: Visual Data Quality Assessment of Many Time Series Using Plausibility Checks.

    PubMed

    Arbesser, Clemens; Spechtenhauser, Florian; Mühlbacher, Thomas; Piringer, Harald

    2017-01-01

    Trends like decentralized energy production lead to an exploding number of time series from sensors and other sources that need to be assessed regarding their data quality (DQ). While the identification of DQ problems for such routinely collected data is typically based on existing automated plausibility checks, an efficient inspection and validation of check results for hundreds or thousands of time series is challenging. The main contribution of this paper is the validated design of Visplause, a system to support an efficient inspection of DQ problems for many time series. The key idea of Visplause is to utilize meta-information concerning the semantics of both the time series and the plausibility checks for structuring and summarizing results of DQ checks in a flexible way. Linked views enable users to inspect anomalies in detail and to generate hypotheses about possible causes. The design of Visplause was guided by goals derived from a comprehensive task analysis with domain experts in the energy sector. We reflect on the design process by discussing design decisions at four stages and we identify lessons learned. We also report feedback from domain experts after using Visplause for a period of one month. This feedback suggests significant efficiency gains for DQ assessment, increased confidence in the DQ, and the applicability of Visplause to summarize indicators also outside the context of DQ.

  18. Model Checking the Remote Agent Planner

    NASA Technical Reports Server (NTRS)

    Khatib, Lina; Muscettola, Nicola; Havelund, Klaus; Norvig, Peter (Technical Monitor)

    2001-01-01

    This work tackles the problem of using Model Checking for the purpose of verifying the HSTS (Scheduling Testbed System) planning system. HSTS is the planner and scheduler of the remote agent autonomous control system deployed in Deep Space One (DS1). Model Checking allows for the verification of domain models as well as planning entries. We have chosen the real-time model checker UPPAAL for this work. We start by motivating our work in the introduction. Then we give a brief description of HSTS and UPPAAL. After that, we give a sketch for the mapping of HSTS models into UPPAAL and we present samples of plan model properties one may want to verify.

  19. Random Testing and Model Checking: Building a Common Framework for Nondeterministic Exploration

    NASA Technical Reports Server (NTRS)

    Groce, Alex; Joshi, Rajeev

    2008-01-01

    Two popular forms of dynamic analysis, random testing and explicit-state software model checking, are perhaps best viewed as search strategies for exploring the state spaces introduced by nondeterminism in program inputs. We present an approach that enables this nondeterminism to be expressed in the SPIN model checker's PROMELA language, and then lets users generate either model checkers or random testers from a single harness for a tested C program. Our approach makes it easy to compare model checking and random testing for models with precisely the same input ranges and probabilities and allows us to mix random testing with model checking's exhaustive exploration of non-determinism. The PROMELA language, as intended in its design, serves as a convenient notation for expressing nondeterminism and mixing random choices with nondeterministic choices. We present and discuss a comparison of random testing and model checking. The results derive from using our framework to test a C program with an effectively infinite state space, a module in JPL's next Mars rover mission. More generally, we show how the ability of the SPIN model checker to call C code can be used to extend SPIN's features, and hope to inspire others to use the same methods to implement dynamic analyses that can make use of efficient state storage, matching, and backtracking.
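
    The core idea of a single harness driven by either search strategy can be sketched briefly. The toy system, choice range, and safety property below are invented for illustration; the actual framework works at the level of PROMELA and C code.

```python
# Sketch of one harness, two exploration strategies (illustrative only,
# not the SPIN/PROMELA framework): exhaustive breadth-first exploration
# plays the role of model checking, random sampling the role of testing.
import random
from collections import deque

def step(state, choice):
    # Toy system under test: a saturating counter that must never reach 4.
    return min(state + choice, 4)

CHOICES = [0, 1, 2]   # the nondeterministic input range
BAD = 4

def model_check(depth):
    """Exhaustively explore all choice sequences up to `depth`."""
    frontier, seen = deque([0]), {0}
    for _ in range(depth):
        for _ in range(len(frontier)):
            s = frontier.popleft()
            for c in CHOICES:
                t = step(s, c)
                if t == BAD:
                    return "counterexample found"
                if t not in seen:
                    seen.add(t)
                    frontier.append(t)
    return "no violation up to depth"

def random_test(runs, depth, seed=0):
    """Sample random choice sequences through the same harness."""
    rng = random.Random(seed)
    for _ in range(runs):
        s = 0
        for _ in range(depth):
            s = step(s, rng.choice(CHOICES))
            if s == BAD:
                return "counterexample found"
    return "no violation observed"

print(model_check(depth=3))   # exhaustive search is guaranteed to find it
print(random_test(5, 3))      # random testing may or may not hit it
```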

  20. CheckMATE 2: From the model to the limit

    NASA Astrophysics Data System (ADS)

    Dercks, Daniel; Desai, Nishita; Kim, Jong Soo; Rolbiecki, Krzysztof; Tattersall, Jamie; Weber, Torsten

    2017-12-01

    We present the latest developments to the CheckMATE program that allows models of new physics to be easily tested against the recent LHC data. To achieve this goal, the core of CheckMATE now contains over 60 LHC analyses of which 12 are from the 13 TeV run. The main new feature is that CheckMATE 2 now integrates the Monte Carlo event generation via MadGraph5_aMC@NLO and Pythia 8. This allows users to go directly from a SLHA file or UFO model to the result of whether a model is allowed or not. In addition, the integration of the event generation leads to a significant increase in the speed of the program. Many other improvements have also been made, including the possibility to now combine signal regions to give a total likelihood for a model.

  1. Stress Analysis of Boom of Special Mobile Crane for Plain Region in Transmission Line

    NASA Astrophysics Data System (ADS)

    Qin, Jian; Shao, Tao; Chen, Jun; Wan, Jiancheng; Li, Zhonghuan; Jiang, Ming

    2017-10-01

    On the basis of a boom force analysis of the special mobile crane for plain regions in transmission-line work, the load type for boom design is confirmed. From the different combinations of boom sections, composite patterns for the different boom lengths are obtained to match the actual conditions of boom overlapping. A large-deformation finite-element model is used to simulate the stress distribution in the boom, and the calculation results are checked. Performance curves of rated load for different arm lengths and working ranges are obtained, which ensures that the lifting capacity of the special mobile crane meets the requirements of tower erection in transmission lines. The proposed finite-element model of the mobile crane boom provides guidance and a reference for boom design.

  2. Coverage Metrics for Model Checking

    NASA Technical Reports Server (NTRS)

    Penix, John; Visser, Willem; Norvig, Peter (Technical Monitor)

    2001-01-01

    When using model checking to verify programs in practice, it is not usually possible to achieve complete coverage of the system. In this position paper we describe ongoing research within the Automated Software Engineering group at NASA Ames on the use of test coverage metrics to measure partial coverage and provide heuristic guidance for program model checking. We are specifically interested in applying and developing coverage metrics for concurrent programs that might be used to support certification of next generation avionics software.
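
    As a hedged sketch of the idea (the toy system and branch labels are hypothetical, not NASA Ames code), partial coverage can be reported by instrumenting the successor function of a bounded search:

```python
# Illustrative branch-coverage metric for a bounded state-space search.
BRANCHES = {"b_inc", "b_dec", "b_wrap"}

def successors(s, hit):
    """Successor states of a toy counter; records exercised branches."""
    out = []
    if s < 3:
        hit.add("b_inc"); out.append(s + 1)
    if s > 0:
        hit.add("b_dec"); out.append(s - 1)
    if s == 3:
        hit.add("b_wrap"); out.append(0)
    return out

def bounded_search(depth):
    """Breadth-first search to `depth`; returns fraction of branches hit."""
    hit, frontier, seen = set(), [0], {0}
    for _ in range(depth):
        nxt = []
        for s in frontier:
            for t in successors(s, hit):
                if t not in seen:
                    seen.add(t); nxt.append(t)
        frontier = nxt
    return len(hit) / len(BRANCHES)

print(bounded_search(1))   # shallow search exercises only one branch
print(bounded_search(4))   # deeper search reaches full branch coverage
```

A search cut off early still yields a quantitative coverage figure, which is the kind of partial-coverage measurement the position paper argues for.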

  3. Liquid rocket pressure regulators, relief valves, check valves, burst disks, and explosive valves. [design techniques and practices

    NASA Technical Reports Server (NTRS)

    1973-01-01

    Design techniques and practices drawn from development and operational programs are presented for liquid rocket pressure regulators, relief valves, check valves, burst disks, and explosive valves. A review of the total design problem is presented, and design elements are identified which are involved in successful design. Current technology pertaining to these elements is also described. Design criteria are presented which state what rule or standard must be imposed on each essential design element to assure successful design. These criteria serve as a checklist of rules for a project manager to use in guiding a design or in assessing its adequacy. Recommended practices are included which state how to satisfy each of the criteria.

  4. Knowledge-based critiquing of graphical user interfaces with CHIMES

    NASA Technical Reports Server (NTRS)

    Jiang, Jianping; Murphy, Elizabeth D.; Carter, Leslie E.; Truszkowski, Walter F.

    1994-01-01

    CHIMES is a critiquing tool that automates the process of checking graphical user interface (GUI) designs for compliance with human factors design guidelines and toolkit style guides. The current prototype identifies instances of non-compliance and presents problem statements, advice, and tips to the GUI designer. Changes requested by the designer are made automatically, and the revised GUI is re-evaluated. A case study conducted at NASA-Goddard showed that CHIMES has the potential for dramatically reducing the time formerly spent in hands-on consistency checking. Capabilities recently added to CHIMES include exception handling and rule building. CHIMES is intended for use prior to usability testing as a means, for example, of catching and correcting syntactic inconsistencies in a larger user interface.

  5. [Goals, possibilities and limits of quality evaluation of guidelines. A background report on the user manual of the "Methodological Quality of Guidelines" check list].

    PubMed

    Helou, A; Ollenschläger, G

    1998-06-01

    Recently, a German appraisal instrument for clinical guidelines was published that can be used by various parties for the formal evaluation of guidelines. A user's guide to the appraisal instrument was designed; it contains a detailed explanation of each question to ensure that the instrument is interpreted consistently. This paper describes the purposes, format, and contents of the user's guide, and reviews the key factors influencing the validity of guidelines. Taking international experience into account, the purposes, opportunities, and methodological limitations of a prospective assessment of clinical practice guidelines are discussed.

  6. Review of fluorescence guided surgery visualization and overlay techniques

    PubMed Central

    Elliott, Jonathan T.; Dsouza, Alisha V.; Davis, Scott C.; Olson, Jonathan D.; Paulsen, Keith D.; Roberts, David W.; Pogue, Brian W.

    2015-01-01

    In fluorescence guided surgery, data visualization represents a critical step between signal capture and the display needed for clinical decisions informed by that signal. The diversity of methods for displaying surgical images is reviewed, and a particular focus is placed on electronically detected and visualized signals, as required for near-infrared or low concentration tracers. Factors driving the choices, such as human perception, the need for rapid decision making in a surgical environment, and biases induced by display choices, are outlined. Five practical suggestions are offered for optimal display orientation, color map, transparency/alpha function, dynamic range compression, and color perception check. PMID:26504628

  7. Low Voltage Alarm Apprenticeship. Related Training Modules. 1.1-1.14 Trade Math.

    ERIC Educational Resources Information Center

    Lane Community Coll., Eugene, OR.

    This packet of 14 learning modules on trade math is 1 of 8 such packets developed for apprenticeship training for low voltage alarm. Introductory materials are a complete listing of all available modules and a supplementary reference list. Each module contains some or all of these components: goal, performance indicators, study guide (a check list…

  8. Steps Toward Effective Production of Speech (STEPS): No. 7--How to Take Care of Glasses.

    ERIC Educational Resources Information Center

    Sheeley, Eugene C.; McQuiddy, Doris

    This guide, one of a series of booklets developed by Project STEPS (Steps Toward Effective Production of Speech), presents guidelines for parents of deaf-blind children regarding the care of eyeglasses. Basic concerns with glasses and contact lenses are noted and parents are advised to perform the following daily tasks: checking the frames,…

  9. Posterior Predictive Model Checking in Bayesian Networks

    ERIC Educational Resources Information Center

    Crawford, Aaron

    2014-01-01

    This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
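
    A minimal PPMC sketch, far simpler than the multidimensional Bayesian-network setting studied here, illustrates the mechanics: draw from the posterior, simulate replicated datasets, and compare a discrepancy measure against the observed value. The coin-flip model and the longest-run discrepancy below are assumptions chosen for illustration.

```python
# Minimal posterior predictive model check (illustrative sketch only).
import random

def longest_run(xs):
    """Discrepancy measure: longest run of 1s in a binary sequence."""
    best = cur = 0
    for x in xs:
        cur = cur + 1 if x == 1 else 0
        best = max(best, cur)
    return best

def ppmc_pvalue(data, draws=2000, seed=1):
    """Posterior predictive p-value under a Beta(1,1) prior on the bias."""
    rng = random.Random(seed)
    n, k = len(data), sum(data)
    obs = longest_run(data)
    extreme = 0
    for _ in range(draws):
        theta = rng.betavariate(1 + k, 1 + n - k)   # conjugate posterior draw
        rep = [1 if rng.random() < theta else 0 for _ in range(n)]
        extreme += longest_run(rep) >= obs
    return extreme / draws

data = [1, 1, 1, 1, 1, 1, 0, 1, 0, 1]   # observed sequence with a long run
print(ppmc_pvalue(data))   # p near 0 or 1 would signal data-model misfit
```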

  10. Analyzing the cost of screening selectee and non-selectee baggage.

    PubMed

    Virta, Julie L; Jacobson, Sheldon H; Kobza, John E

    2003-10-01

    Determining how to effectively operate security devices is as important to overall system performance as developing more sensitive security devices. In light of recent federal mandates for 100% screening of all checked baggage, this research studies the trade-offs between screening only selectee checked baggage and screening both selectee and non-selectee checked baggage for a single baggage screening security device deployed at an airport. This trade-off is represented using a cost model that incorporates the cost of the baggage screening security device, the volume of checked baggage processed through the device, and the outcomes that occur when the device is used. The cost model captures the cost of deploying, maintaining, and operating a single baggage screening security device over a one-year period. The study concludes that as excess baggage screening capacity is used to screen non-selectee checked bags, the expected annual cost increases, the expected annual cost per checked bag screened decreases, and the expected annual cost per expected number of threats detected in the checked bags screened increases. These results indicate that the marginal increase in security per dollar spent is significantly lower when non-selectee checked bags are screened than when only selectee checked bags are screened.
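
    The trade-off can be made concrete with a toy version of such a cost model. All parameter values below are invented for illustration and are not taken from the study.

```python
# Hypothetical back-of-the-envelope screening cost model (illustrative).
def screening_costs(bags_per_year, annual_device_cost, cost_per_bag,
                    threat_rate, detection_prob):
    """Annual cost, cost per bag, and cost per expected detection."""
    total = annual_device_cost + cost_per_bag * bags_per_year
    expected_detections = bags_per_year * threat_rate * detection_prob
    return {
        "annual_cost": total,
        "cost_per_bag": total / bags_per_year,
        "cost_per_detection": total / expected_detections,
    }

# Selectee-only screening vs. screening everyone: cost per bag falls,
# but the blended threat rate drops, so cost per detection rises.
selectee = screening_costs(50_000, 500_000.0, 1.0, 1e-4, 0.9)
everyone = screening_costs(500_000, 500_000.0, 1.0, 1e-5, 0.9)
print(selectee["cost_per_bag"], everyone["cost_per_bag"])              # 11.0 2.0
print(round(selectee["cost_per_detection"]),
      round(everyone["cost_per_detection"]))                           # 122222 222222
```

Even with made-up numbers, the model reproduces the qualitative conclusion of the abstract: expanding screening lowers the cost per bag while raising the cost per expected threat detected.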

  11. 40 CFR 86.327-79 - Quench checks; NOX analyzer.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Quench checks; NOX analyzer. (a) Perform the reaction chamber quench check for each model of high vacuum reaction chamber analyzer prior to initial use. (b) Perform the reaction chamber quench check for each new analyzer that has an ambient pressure or “soft vacuum” reaction chamber prior to initial use. Additionally...

  12. 40 CFR 86.327-79 - Quench checks; NOX analyzer.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Quench checks; NOX analyzer. (a) Perform the reaction chamber quench check for each model of high vacuum reaction chamber analyzer prior to initial use. (b) Perform the reaction chamber quench check for each new analyzer that has an ambient pressure or “soft vacuum” reaction chamber prior to initial use. Additionally...

  13. User's guide for MAGIC-Meteorologic and hydrologic genscn (generate scenarios) input converter

    USGS Publications Warehouse

    Ortel, Terry W.; Martin, Angel

    2010-01-01

    Meteorologic and hydrologic data used in watershed modeling studies are collected by various agencies and organizations, and stored in various formats. Data may be in a raw, un-processed format with little or no quality control, or may be checked for validity before being made available. Flood-simulation systems require data in near real-time so that adequate flood warnings can be made. Additionally, forecasted data are needed to operate flood-control structures to potentially mitigate flood damages. Because real-time data are of a provisional nature, missing data may need to be estimated for use in flood-simulation systems. The Meteorologic and Hydrologic GenScn (Generate Scenarios) Input Converter (MAGIC) can be used to convert data from selected formats into the Hydrologic Simulation System-Fortran hourly-observations format for input to a Watershed Data Management database, for use in hydrologic modeling studies. MAGIC also can reformat the data to the Full Equations model time-series format, for use in hydraulic modeling studies. Examples of the application of MAGIC for use in the flood-simulation system for Salt Creek in northeastern Illinois are presented in this report.

  14. An Illustrative Guide to the Minerva Framework

    NASA Astrophysics Data System (ADS)

    Flom, Erik; Leonard, Patrick; Hoeffel, Udo; Kwak, Sehyun; Pavone, Andrea; Svensson, Jakob; Krychowiak, Maciej; Wendelstein 7-X Team Collaboration

    2017-10-01

    Modern physics experiments require tracking and modelling data and their associated uncertainties on a large scale, as well as the combined implementation of multiple independent data streams for sophisticated modelling and analysis. The Minerva Framework offers a centralized, user-friendly method of large-scale physics modelling and scientific inference. Currently used by teams at multiple large-scale fusion experiments including the Joint European Torus (JET) and Wendelstein 7-X (W7-X), the Minerva framework provides a forward-model friendly architecture for developing and implementing models for large-scale experiments. One aspect of the framework involves so-called data sources, which are nodes in the graphical model. These nodes are supplied with engineering and physics parameters. When end-user level code calls a node, it is checked network-wide against its dependent nodes for changes since its last implementation and returns version-specific data. Here, a filterscope data node is used as an illustrative example of the Minerva Framework's data management structure and its further application to Bayesian modelling of complex systems. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under Grant Agreement No. 633053.

  15. Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking

    NASA Technical Reports Server (NTRS)

    Turgeon, Gregory; Price, Petra

    2010-01-01

    A feasibility study was performed on a representative aerospace system to determine the following: (1) the benefits and limitations to using SCADE , a commercially available tool for model checking, in comparison to using a proprietary tool that was studied previously [1] and (2) metrics for performing the model checking and for assessing the findings. This study was performed independently of the development task by a group unfamiliar with the system, providing a fresh, external perspective free from development bias.

  16. Pitfalls in statistical landslide susceptibility modelling

    NASA Astrophysics Data System (ADS)

    Schröder, Boris; Vorpahl, Peter; Märker, Michael; Elsenbeer, Helmut

    2010-05-01

    The use of statistical methods is a well-established approach to predict landslide occurrence probabilities and to assess landslide susceptibility. This is achieved by applying statistical methods relating historical landslide inventories to topographic indices as predictor variables. In our contribution, we compare several new and powerful methods developed in machine learning and well-established in landscape ecology and macroecology for predicting the distribution of shallow landslides in tropical mountain rainforests in southern Ecuador (among others: boosted regression trees, multivariate adaptive regression splines, maximum entropy). Although these methods are powerful, we think it is necessary to follow a basic set of guidelines to avoid some pitfalls regarding data sampling, predictor selection, and model quality assessment, especially if a comparison of different models is contemplated. We therefore suggest applying a novel toolbox to evaluate approaches to the statistical modelling of landslide susceptibility. Additionally, we propose some methods to open the "black box" as an inherent part of machine learning methods in order to achieve further explanatory insights into preparatory factors that control landslides. Sampling of training data should be guided by hypotheses regarding processes that lead to slope failure, taking into account their respective spatial scales. This approach leads to the selection of a set of candidate predictor variables considered on adequate spatial scales. This set should be checked for multicollinearity in order to facilitate model response curve interpretation. Model quality assessment evaluates how well a model is able to reproduce independent observations of its response variable. This includes criteria to evaluate different aspects of model performance, i.e. model discrimination, model calibration, and model refinement.
In order to assess a possible violation of the assumption of independence in the training samples, or a possible lack of explanatory information in the chosen set of predictor variables, the model residuals need to be checked for spatial autocorrelation. Therefore, we calculate spline correlograms. In addition, we investigate partial dependency plots and bivariate interaction plots, considering possible interactions between predictors, to improve model interpretation. Aiming at presenting this toolbox for model quality assessment, we investigate the influence of strategies for constructing training datasets for statistical models on model quality.
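
    The multicollinearity screen recommended above can be sketched with a simple pairwise-correlation check. The predictor names and data below are hypothetical, and variance inflation factors are a common alternative to a plain correlation threshold.

```python
# Illustrative multicollinearity screen via pairwise Pearson correlation.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def flag_collinear(predictors, threshold=0.7):
    """Flag predictor pairs whose |r| exceeds the threshold."""
    names = sorted(predictors)
    return [(a, b, round(pearson(predictors[a], predictors[b]), 3))
            for i, a in enumerate(names) for b in names[i + 1:]
            if abs(pearson(predictors[a], predictors[b])) > threshold]

# Hypothetical terrain predictors: "steepness" nearly duplicates "slope".
data = {
    "slope":     [10, 20, 30, 40, 50],
    "steepness": [11, 19, 31, 39, 52],
    "curvature": [3, -1, 4, 0, 2],
}
print(flag_collinear(data))   # only the slope/steepness pair is flagged
```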

  17. MPST Software: grl_pef_check

    NASA Technical Reports Server (NTRS)

    Call, Jared A.; Kwok, John H.; Fisher, Forest W.

    2013-01-01

    This innovation is a tool used to verify and validate spacecraft sequences at the predicted events file (PEF) level for the GRAIL (Gravity Recovery and Interior Laboratory, see http://www.nasa.gov/mission_pages/grail/main/index.html) mission as part of the Multi-Mission Planning and Sequencing Team (MPST) operations process to reduce the possibility for errors. This tool is used to catch any sequence-related errors or issues immediately after the seqgen modeling to streamline downstream processes. This script verifies and validates the seqgen modeling for the GRAIL MPST process. A PEF is provided as input, and dozens of checks are performed on it to verify and validate the command products, including command content, command ordering, flight-rule violations, modeling boundary consistency, resource limits, and ground commanding consistency. By performing as many checks as early in the process as possible, grl_pef_check streamlines the MPST task of generating GRAIL command and modeled products on an aggressive schedule. By enumerating each check being performed, and clearly stating the criteria and assumptions made at each step, grl_pef_check can be used as a manual checklist as well as an automated tool. This helper script was written with a focus on providing the user with the information they need to evaluate a sequence quickly and efficiently, while still keeping them informed and active in the overall sequencing process. grl_pef_check verifies and validates the modeling and sequence content prior to investing any more effort into the build. There are dozens of items in the modeling run that need to be checked, which is a time-consuming and error-prone task. Currently, no software exists that provides this functionality. Compared to a manual process, this script reduces human error and saves considerable man-hours by automating and streamlining the mission planning and sequencing task for the GRAIL mission.
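
    The enumerated-checks design can be sketched as follows. The event-record format and check criteria here are invented for illustration; the real tool performs dozens of mission-specific checks against the actual PEF format.

```python
# Illustrative check runner in the style described above: each check
# states its criterion in its docstring, so the runner doubles as a
# human-readable checklist. (Hypothetical PEF records and checks.)
def check_time_ordering(events):
    """Criterion: modeled events must be in non-decreasing time order."""
    return all(a["t"] <= b["t"] for a, b in zip(events, events[1:]))

def check_power_limit(events, limit=100.0):
    """Criterion: modeled power draw never exceeds the resource limit."""
    return all(e.get("power", 0.0) <= limit for e in events)

CHECKS = [check_time_ordering, check_power_limit]

def run_checks(events):
    """Run every enumerated check and report PASS/FAIL per check."""
    return {check.__name__: "PASS" if check(events) else "FAIL"
            for check in CHECKS}

pef = [{"t": 0, "power": 40.0}, {"t": 5, "power": 120.0}, {"t": 9}]
print(run_checks(pef))
# {'check_time_ordering': 'PASS', 'check_power_limit': 'FAIL'}
```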

  18. Practical Formal Verification of Diagnosability of Large Models via Symbolic Model Checking

    NASA Technical Reports Server (NTRS)

    Cavada, Roberto; Pecheur, Charles

    2003-01-01

    This document reports on the activities carried out during a four-week visit by Roberto Cavada to the NASA Ames Research Center. The main goal was to test the practical applicability of the proposed framework, in which a diagnosability problem is reduced to a Symbolic Model Checking problem. Section 2 contains a brief explanation of the major techniques currently used in Symbolic Model Checking, and how these techniques can be tuned to obtain good performance when using Model Checking tools. Diagnosability is performed on large and structured models of real plants. Section 3 describes how these plants are modeled, and how models can be simplified to improve the performance of Symbolic Model Checkers. Section 4 reports scalability results. Three test cases are briefly presented, and several parameters and techniques have been applied to those test cases in order to produce comparison tables. Furthermore, a comparison between several Model Checkers is reported. Section 5 summarizes the application of diagnosability verification to a real application. Several properties have been tested, and the results are highlighted. Finally, Section 6 draws some conclusions and outlines future lines of research.

  19. Non-equilibrium dog-flea model

    NASA Astrophysics Data System (ADS)

    Ackerson, Bruce J.

    2017-11-01

    We develop the open dog-flea model to serve as a check on proposed non-equilibrium theories of statistical mechanics. The model is developed in detail and then applied to four recent models for non-equilibrium statistical mechanics. Comparing the dog-flea solution with these different models allows their claims to be checked and provides a concrete example of the theoretical models.
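
    The closed dog-flea (Ehrenfest) model, of which the paper develops an open variant, is easy to simulate as a sanity check: N fleas sit on two dogs, one uniformly chosen flea jumps to the other dog at each step, and the flea count on one dog relaxes toward N/2. The parameters below are arbitrary.

```python
# Quick simulation of the closed dog-flea (Ehrenfest) model.
import random

def simulate(n_fleas=100, steps=5000, seed=42):
    """Return the history of the flea count on dog A."""
    rng = random.Random(seed)
    on_a = n_fleas            # start with every flea on dog A
    history = []
    for _ in range(steps):
        # The jumping flea is on dog A with probability on_a / n_fleas.
        if rng.random() < on_a / n_fleas:
            on_a -= 1
        else:
            on_a += 1
        history.append(on_a)
    return history

history = simulate()
late_mean = sum(history[-1000:]) / 1000
print(late_mean)   # the long-run average is close to N/2 = 50
```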

  20. Abstraction and Assume-Guarantee Reasoning for Automated Software Verification

    NASA Technical Reports Server (NTRS)

    Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.

    2004-01-01

    Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite-state models of software, and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework, and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.
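
    The shape of the classic assume-guarantee rule can be shown with a deliberately toy sketch: if component M1 satisfies property P whenever its environment satisfies assumption A, and component M2 satisfies A, then the composition M1 || M2 satisfies P without building the full product state space. The components, property, and assumption below are invented; the article's contribution is automating the construction of A via learning.

```python
# Toy instance of the two assume-guarantee premises (illustrative only).
def m1_satisfies_p_under(assumption):
    """Premise 1, <A> M1 <P>: M1 avoids overflow if all inputs stay < 10."""
    return all(x < 10 for x in assumption)

def m2_satisfies(assumption):
    """Premise 2, M2 |= A: every value M2 can emit lies in the assumption."""
    m2_outputs = {0, 3, 7}            # hypothetical output alphabet of M2
    return m2_outputs <= set(assumption)

ASSUMPTION = set(range(10))           # candidate assumption: inputs in [0, 10)

step1 = m1_satisfies_p_under(ASSUMPTION)
step2 = m2_satisfies(ASSUMPTION)
print("composition satisfies P:", step1 and step2)   # True
```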

  1. Results of including geometric nonlinearities in an aeroelastic model of an F/A-18

    NASA Technical Reports Server (NTRS)

    Buttrill, Carey S.

    1989-01-01

    An integrated, nonlinear simulation model suitable for aeroelastic modeling of fixed-wing aircraft has been developed. While the author realizes that the subject of modeling rotating, elastic structures is not closed, it is believed that the equations of motion developed and applied herein are correct to second order and are suitable for use with typical aircraft structures. The equations are not suitable for large elastic deformation. In addition, the modeling framework generalizes both the methods and terminology of non-linear rigid-body airplane simulation and traditional linear aeroelastic modeling. Concerning the importance of angular/elastic inertial coupling in the dynamic analysis of fixed-wing aircraft, the following may be said. The rigorous inclusion of said coupling is not without peril and must be approached with care. In keeping with the same engineering judgment that guided the development of the traditional aeroelastic equations, the effect of non-linear inertial effects for most airplane applications is expected to be small. A parameter does not tell the whole story, however, and modes flagged by the parameter as significant also need to be checked to see if the coupling is not a one-way path, i.e., the inertially affected modes can influence other modes.

  2. Incremental checking of Master Data Management model based on contextual graphs

    NASA Astrophysics Data System (ADS)

    Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan

    2015-10-01

    The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed within the scope of a Model Driven Engineering (MDE) approach, which is used to develop a Master Data Management (MDM) field represented by XML Schema models. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise model errors and to optimise the process of model checking. Therefore, the notion of validation contexts is introduced, allowing the verification of data-model views. Description logics specify the constraints that the models have to check. An experimental evaluation of the approach is presented through an application developed in the ArgoUML IDE.

  3. Effects of random study checks and guided notes study cards on middle school special education students' notetaking accuracy and science vocabulary quiz scores

    NASA Astrophysics Data System (ADS)

    Wood, Charles L.

    Federal legislation mandates that all students with disabilities have meaningful access to the general education curriculum and that students with and without disabilities be held equally accountable to the same academic standards (IDEIA, 2004; NCLB, 2001). Many students with disabilities, however, perform poorly in academic content courses, especially at the middle and secondary school levels. Previous research has reported increased notetaking accuracy and quiz scores over lecture content when students completed guided notes compared to taking their own notes. This study evaluated the effects of a pre-quiz review procedure and specially formatted guided notes on middle school special education students' learning of science vocabulary. This study compared the effects of three experimental conditions: (a) Own Notes (ON), (b) Own Notes+Random Study Checks (ON+RSC), and (c) Guided Notes Study Cards+Random Study Checks (GNSC+RSC) on each student's accuracy of notes, next-day quiz scores, and review quiz scores. Each session, the teacher presented 12 science vocabulary terms and definitions during a lecture and students took notes. The students were given 5 minutes to study their notes at the end of each session and were reminded to study their notes at home and during study hall period. In the ON condition, students took notes on a sheet of paper with numbered lines from 1 to 12. Just before each next-day quiz in the ON+RSC condition, students used write-on response cards to answer two teacher-posed questions over randomly selected vocabulary terms from the previous day's lecture. If the answer on a randomly selected student's response card was correct, that student earned a lottery ticket for inexpensive prizes and a quiz bonus point for herself and each classmate. In the GNSC+RSC condition, students took notes on specially formatted guided notes that, after the lecture, they cut into a set of flashcards that could be used for study.
The students' mean notetaking accuracy was 75% during ON, 89% during ON+RSC, and 99.5% during GNSC+RSC. The class mean scores on next-day quizzes during ON, ON+RSC, and GNSC+RSC were 39%, 68%, and 90%, respectively. The class mean scores on review quizzes following ON, ON+RSC, and GNSC+RSC were 2.1, 5.3, and 7.8 (maximum score: 10), respectively. Results for five of the seven students provide convincing evidence of functional relationships between ON+RSC and higher quiz scores compared to ON, and between GNSC+RSC and higher quiz scores compared to ON+RSC. Students', teachers', and parents' opinions regarding the RSC and GNSC procedures were highly favorable.

  4. Model Checking - My 27-Year Quest to Overcome the State Explosion Problem

    NASA Technical Reports Server (NTRS)

    Clarke, Ed

    2009-01-01

Model Checking is an automatic verification technique for state-transition systems that are finite-state or that have finite-state abstractions. In the early 1980s, in a series of joint papers with my graduate students E.A. Emerson and A.P. Sistla, we proposed that Model Checking could be used for verifying concurrent systems and gave algorithms for this purpose. At roughly the same time, Joseph Sifakis and his student J.P. Queille at the University of Grenoble independently developed a similar technique. Model Checking has been used successfully to reason about computer hardware and communication protocols and is beginning to be used for verifying computer software. Specifications are written in temporal logic, which is particularly valuable for expressing concurrency properties. An intelligent, exhaustive search is used to determine whether the specification is true or not. If the specification is not true, the Model Checker will produce a counterexample execution trace that shows why the specification does not hold. This feature is extremely useful for finding obscure errors in complex systems. The main disadvantage of Model Checking is the state-explosion problem, which can occur if the system under verification has many processes or complex data structures. Although the state-explosion problem is inevitable in the worst case, over the past 27 years considerable progress has been made on the problem for certain classes of state-transition systems that occur often in practice. In this talk, I will describe what Model Checking is, how it works, and the main techniques that have been developed for combating the state explosion problem.
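The exhaustive search with counterexample traces described above can be illustrated with a minimal explicit-state checker. The sketch below is a toy (not any published tool): it breadth-first searches the reachable states of a deliberately flawed two-process "lock" and returns the shortest execution trace violating mutual exclusion.

```python
from collections import deque

def model_check(initial, successors, invariant):
    """Explicit-state BFS: return None if the invariant holds on every
    reachable state, else the shortest trace reaching a bad state."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            # Reconstruct the execution trace that exposes the bug.
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for nxt in successors(state):
            if nxt not in parent:
                parent[nxt] = state
                queue.append(nxt)
    return None  # invariant holds everywhere

# Toy system: states are (pc0, pc1) program counters; each process
# cycles idle -> trying -> critical -> idle with no mutual exclusion
# enforced, so the checker should find a violation.
def successors(state):
    moves = {"idle": "trying", "trying": "critical", "critical": "idle"}
    for i in (0, 1):
        nxt = list(state)
        nxt[i] = moves[state[i]]
        yield tuple(nxt)

mutual_exclusion = lambda s: s != ("critical", "critical")
trace = model_check(("idle", "idle"), successors, mutual_exclusion)
print(trace)  # a shortest execution leading to the violation
```

Because the search is breadth-first, the returned counterexample is a shortest one, which is what makes such traces useful for debugging.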

  5. Union Directions - Army Response.

    DTIC Science & Technology

    1985-12-06

    reflects the long-held belief in the Army that employee participation in decisions that affect their worklife is healthy and desirable. Although some...pluralistic society, checks and balances are as important for the economy as for the government. Business executives who salivate at the thought of vanishing...Unions. Reading, MA: Addison-Wesley, 1976. 37. National Federation of Federal Employees. NFFE’s Guide to Quality of Worklife Programs. No. G-21

  6. An Empirical Methodology for Engineering Human Systems Integration

    DTIC Science & Technology

    2009-12-01

    scanning critical information and selectively skipping what is not important for the immediate task. This learned skill is called a proper “cross check...ability to recover from errors. Efficiency: the level of productivity that can be achieved once learning has occurred. 162 Learnability: the...use, to legacy systems. Knowing what design issues have plagued past operators can guide the generation of requirements to address the identified

  7. A Multidimensional Item Response Model: Constrained Latent Class Analysis Using the Gibbs Sampler and Posterior Predictive Checks.

    ERIC Educational Resources Information Center

    Hoijtink, Herbert; Molenaar, Ivo W.

    1997-01-01

    This paper shows that a certain class of constrained latent class models may be interpreted as a special case of nonparametric multidimensional item response models. Parameters of this latent class model are estimated using an application of the Gibbs sampler, and model fit is investigated using posterior predictive checks. (SLD)
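Posterior predictive checking as used here follows a simple recipe: for each posterior draw of the model parameters, simulate a replicated dataset and compare a test statistic against its observed value. The sketch below uses a hypothetical Beta posterior for a single item's success probability (not the paper's constrained latent class model) purely to illustrate the mechanics.

```python
import random

def posterior_predictive_pvalue(posterior_draws, n, observed_stat, rng):
    """For each posterior draw of a success probability, simulate a
    replicated dataset of size n and compare the replicated statistic
    (number of successes) with the observed one.  The PPC p-value is
    the fraction of replicates at least as extreme as the data."""
    extreme = 0
    for p in posterior_draws:
        rep_stat = sum(rng.random() < p for _ in range(n))
        if rep_stat >= observed_stat:
            extreme += 1
    return extreme / len(posterior_draws)

rng = random.Random(7)
# Hypothetical posterior: Beta(31, 71) after observing 30/100 successes.
draws = [rng.betavariate(31, 71) for _ in range(2000)]
pval = posterior_predictive_pvalue(draws, n=100, observed_stat=30, rng=rng)
print(round(pval, 2))  # p-values near 0 or 1 would signal misfit
```

Here the posterior was fit to the same data being checked, so the p-value sits comfortably mid-range; a badly misspecified model would push it toward 0 or 1.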

  8. Construct validity and reliability of the Single Checking Administration of Medications Scale.

    PubMed

    O'Connell, Beverly; Hawkins, Mary; Ockerby, Cherene

    2013-06-01

Research indicates that single checking of medications is as safe as double checking; however, many nurses are averse to independently checking medications. To assist with the introduction and use of single checking, a measure of nurses' attitudes, the thirteen-item Single Checking Administration of Medications Scale (SCAMS), was developed. We examined the psychometric properties of the SCAMS. Secondary analyses were conducted on data collected from 503 nurses across a large Australian health-care service. Exploratory and confirmatory factor analyses supported by structural equation modelling resulted in a valid twelve-item SCAMS containing two reliable subscales: the nine-item Attitudes towards single checking subscale and the three-item Advantages of single checking subscale. The SCAMS is recommended as a valid and reliable measure for monitoring nurses' attitudes to single checking both before the introduction of single checking of medications and after its implementation. © 2013 Wiley Publishing Asia Pty Ltd.

  9. Simulation-based model checking approach to cell fate specification during Caenorhabditis elegans vulval development by hybrid functional Petri net with extension.

    PubMed

    Li, Chen; Nagasaki, Masao; Ueno, Kazuko; Miyano, Satoru

    2009-04-27

Model checking approaches were first applied to biological pathway validation around 2003. Recently, Fisher et al. demonstrated the value of the model checking approach by inferring new regulation of signaling crosstalk in C. elegans and confirming the regulation with biological experiments. They took a discrete, state-based approach to explore all possible states of the system underlying vulval precursor cell (VPC) fate specification for desired properties. However, since both discrete and continuous features appear to be indispensable parts of biological processes, it is more appropriate to use quantitative models to capture the dynamics of biological systems. The key motivation of this paper is to establish a quantitative methodology to model and analyze in silico models incorporating the model checking approach. A novel method of modeling and simulating biological systems with model checking is proposed, based on hybrid functional Petri net with extension (HFPNe) as a framework dealing with both discrete and continuous events. First, we construct a quantitative VPC fate model with 1761 components using HFPNe. Second, we apply two major biological fate determination rules, Rule I and Rule II, to the VPC fate model. We then conduct 10,000 simulations for each of 48 sets of different genotypes, investigate variations of cell fate patterns under each genotype, and validate the two rules by comparing three simulation targets consisting of fate patterns obtained from in silico and in vivo experiments. In particular, an evaluation was successfully performed using our VPC fate model to investigate one target derived from biological experiments involving hybrid lineage observations. Hybrid lineages are hard to capture with a discrete model, because a hybrid lineage occurs when the system comes close to certain thresholds, as discussed by Sternberg and Horvitz in 1986. 
Our simulation results suggest that Rule I, which cannot be applied with qualitative model checking, is more reasonable than Rule II owing to its high coverage of predicted fate patterns (except for the genotype of lin-15ko; lin-12ko double mutants). Further insights are also suggested. The quantitative simulation-based model checking approach is a useful means of providing valuable biological insights and a better understanding of biological systems and observation data that may be hard to capture with the qualitative approach.

  10. The Priority Inversion Problem and Real-Time Symbolic Model Checking

    DTIC Science & Technology

    1993-04-23

    real time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties

  11. A Formal Methodology to Design and Deploy Dependable Wireless Sensor Networks

    PubMed Central

    Testa, Alessandro; Cinque, Marcello; Coronato, Antonio; Augusto, Juan Carlos

    2016-01-01

    Wireless Sensor Networks (WSNs) are being increasingly adopted in critical applications, where verifying the correct operation of sensor nodes is a major concern. Undesired events may undermine the mission of the WSNs. Hence, their effects need to be properly assessed before deployment, to obtain a good level of expected performance; and during the operation, in order to avoid dangerous unexpected results. In this paper, we propose a methodology that aims at assessing and improving the dependability level of WSNs by means of an event-based formal verification technique. The methodology includes a process to guide designers towards the realization of a dependable WSN and a tool (“ADVISES”) to simplify its adoption. The tool is applicable to homogeneous WSNs with static routing topologies. It allows the automatic generation of formal specifications used to check correctness properties and evaluate dependability metrics at design time and at runtime for WSNs where an acceptable percentage of faults can be defined. During the runtime, we can check the behavior of the WSN accordingly to the results obtained at design time and we can detect sudden and unexpected failures, in order to trigger recovery procedures. The effectiveness of the methodology is shown in the context of two case studies, as proof-of-concept, aiming to illustrate how the tool is helpful to drive design choices and to check the correctness properties of the WSN at runtime. Although the method scales up to very large WSNs, the applicability of the methodology may be compromised by the state space explosion of the reasoning model, which must be faced by partitioning large topologies into sub-topologies. PMID:28025568

  12. Design of Installing Check Dam Using RAMMS Model in Seorak National Park of South Korea

    NASA Astrophysics Data System (ADS)

    Jun, K.; Tak, W.; JUN, B. H.; Lee, H. J.; KIM, S. D.

    2016-12-01

As more than 64% of the land in South Korea is mountainous, many regions are exposed to the danger of landslides and debris flow. It is therefore important to understand the behavior of debris flow in mountainous terrain, and various methods and models based on mathematical concepts have been presented and developed for this purpose. The purpose of this study is to investigate regions that experienced debris flow due to the typhoon Ewiniar and to perform numerical modeling for the design and layout of a check dam to reduce the damage caused by debris flow. For the numerical modeling, on-site measurement of the research area was conducted, including topographic investigation, a survey of the bridges downstream, and precision LiDAR 3D scanning to compose the basic data for the numerical modeling. The numerical simulation in this study was performed using the RAMMS (Rapid Mass Movements Simulation) model for the analysis of the debris flow. The model was applied to conditions in which the check dam was installed upstream, midstream, or downstream. Considering the reduction effect on the debris flow, the expansion of the debris flow, and the influence on the bridges downstream, the proper location of the check dam was designated. 
The numerical results showed that when the check dam was installed in the downstream section, 50 m above the bridge, the reduction effect on the debris flow was higher than when the check dam was installed in other sections. Key words: debris flow, LiDAR, check dam, RAMMS. Acknowledgements: This research was supported by a grant [MPSS-NH-2014-74] through the Disaster and Safety Management Institute funded by the Ministry of Public Safety and Security of the Korean government.

  13. Model building strategy for logistic regression: purposeful selection.

    PubMed

    Zhang, Zhongheng

    2016-03-01

Logistic regression is one of the most commonly used models to account for confounders in the medical literature. This article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to see whether deleting a variable will have a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment of the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness of fit (GOF), in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for the logistic regression model.
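The likelihood ratio test at the heart of purposeful selection is straightforward to compute. The article works in R; the sketch below is an equivalent in Python with hypothetical log-likelihood values, exploiting the fact that for one dropped covariate (1 degree of freedom) the chi-square survival function reduces to erfc(sqrt(x/2)).

```python
import math

def lr_test_1df(llf_full, llf_reduced):
    """Likelihood ratio test for dropping a single covariate from a
    nested model (1 degree of freedom).  Returns (statistic, p-value);
    the 1-df chi-square survival function is erfc(sqrt(x/2))."""
    stat = 2.0 * (llf_full - llf_reduced)
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# Hypothetical log-likelihoods from two nested logistic models.
stat, p = lr_test_1df(llf_full=-210.4, llf_reduced=-214.1)
print(round(stat, 2), round(p, 4))  # retain the covariate if p < 0.05
```

In an R workflow the same test comes from `anova(reduced, full, test = "LRT")`; the point of the sketch is only the arithmetic behind it.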

  14. HiVy automated translation of stateflow designs for model checking verification

    NASA Technical Reports Server (NTRS)

    Pingree, Paula

    2003-01-01

The HiVy tool set enables model checking of finite-state machine designs. This is achieved by translating state-chart specifications into the input language of the Spin model checker. An abstract syntax of hierarchical sequential automata (HSA) is provided as an intermediate format for the tool set.

  15. Petri Net and Probabilistic Model Checking Based Approach for the Modelling, Simulation and Verification of Internet Worm Propagation

    PubMed Central

    Razzaq, Misbah; Ahmad, Jamil

    2015-01-01

Internet worms are analogous to biological viruses, since they can infect a host and have the ability to propagate through a chosen medium. To prevent the spread of a worm, or to grasp how to regulate a prevailing worm, compartmental models are commonly used as a means to examine and understand the patterns and mechanisms of a worm's spread. However, one of the greatest challenges is to produce methods to verify and validate the behavioural properties of a compartmental model. In this study we therefore propose a framework based on Petri Nets and Model Checking through which we can meticulously examine and validate these models. We investigate the Susceptible-Exposed-Infectious-Recovered (SEIR) model and propose a new model, Susceptible-Exposed-Infectious-Recovered-Delayed-Quarantined (Susceptible/Recovered) (SEIDQR(S/I)), along with a hybrid quarantine strategy, which is then constructed and analysed using Stochastic Petri Nets and Continuous Time Markov Chains. The analysis shows that the hybrid quarantine strategy is extremely effective in reducing the risk of propagating the worm. Through model checking, we gained insight into the functionality of compartmental models. The model checking results agree well with the simulation results, fully supporting the proposed framework. PMID:26713449
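The Continuous Time Markov Chain underlying such a compartmental model can be simulated directly with Gillespie's stochastic simulation algorithm. Below is a minimal sketch for the plain SEIR chain with illustrative rate constants; the paper's SEIDQR(S/I) model adds delay and quarantine compartments on top of this scheme.

```python
import random

def seir_ssa(beta, sigma, gamma, s, e, i, r, t_end, rng):
    """Gillespie simulation of the CTMC SEIR model with transitions
    S->E (rate beta*S*I/N), E->I (rate sigma*E), I->R (rate gamma*I)."""
    n = s + e + i + r
    t = 0.0
    while t < t_end:
        rates = [beta * s * i / n, sigma * e, gamma * i]
        total = sum(rates)
        if total == 0.0:  # absorbing: no exposed or infectious hosts left
            break
        t += rng.expovariate(total)       # exponential waiting time
        pick = rng.uniform(0.0, total)    # choose which event fires
        if pick < rates[0]:
            s, e = s - 1, e + 1
        elif pick < rates[0] + rates[1]:
            e, i = e - 1, i + 1
        else:
            i, r = i - 1, r + 1
    return s, e, i, r

# Illustrative parameters: 1000 hosts, 10 initially infectious.
rng = random.Random(1)
final = seir_ssa(beta=0.4, sigma=0.2, gamma=0.1,
                 s=990, e=0, i=10, r=0, t_end=200.0, rng=rng)
print(final)  # (S, E, I, R); the population total is conserved
```

Each run is one sample path of the chain; averaging many runs (as the authors do with 10,000 simulations per configuration) characterises the model's stochastic behaviour.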

  16. Petri Net and Probabilistic Model Checking Based Approach for the Modelling, Simulation and Verification of Internet Worm Propagation.

    PubMed

    Razzaq, Misbah; Ahmad, Jamil

    2015-01-01

Internet worms are analogous to biological viruses, since they can infect a host and have the ability to propagate through a chosen medium. To prevent the spread of a worm, or to grasp how to regulate a prevailing worm, compartmental models are commonly used as a means to examine and understand the patterns and mechanisms of a worm's spread. However, one of the greatest challenges is to produce methods to verify and validate the behavioural properties of a compartmental model. In this study we therefore propose a framework based on Petri Nets and Model Checking through which we can meticulously examine and validate these models. We investigate the Susceptible-Exposed-Infectious-Recovered (SEIR) model and propose a new model, Susceptible-Exposed-Infectious-Recovered-Delayed-Quarantined (Susceptible/Recovered) (SEIDQR(S/I)), along with a hybrid quarantine strategy, which is then constructed and analysed using Stochastic Petri Nets and Continuous Time Markov Chains. The analysis shows that the hybrid quarantine strategy is extremely effective in reducing the risk of propagating the worm. Through model checking, we gained insight into the functionality of compartmental models. The model checking results agree well with the simulation results, fully supporting the proposed framework.

  17. Immediate Effects of Body Checking Behaviour on Negative and Positive Emotions in Women with Eating Disorders: An Ecological Momentary Assessment Approach.

    PubMed

    Kraus, Nicole; Lindenberg, Julia; Zeeck, Almut; Kosfelder, Joachim; Vocks, Silja

    2015-09-01

Cognitive-behavioural models of eating disorders state that body checking arises in response to negative emotions in order to reduce the aversive emotional state and is therefore negatively reinforced. This study empirically tests this assumption. For a seven-day period, women with eating disorders (n = 26) and healthy controls (n = 29) were provided with a handheld computer for assessing occurring body checking strategies as well as negative and positive emotions. Serving as a control condition, randomized, computer-emitted acoustic signals prompted reports on body checking and emotions. There was no difference in the intensity of negative emotions before body checking and in control situations across groups. However, from pre- to post-body checking, an increase in negative emotions was found. This effect was more pronounced in women with eating disorders compared with healthy controls. These results contradict the assumptions of the cognitive-behavioural model, as body checking does not appear to reduce negative emotions. Copyright © 2015 John Wiley & Sons, Ltd and Eating Disorders Association.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belley, M; Schmidt, M; Knutson, N

Purpose: Physics second-checks for external beam radiation therapy are performed, in part, to verify that the machine parameters in the Record-and-Verify (R&V) system that will ultimately be sent to the LINAC exactly match the values initially calculated by the Treatment Planning System (TPS). While performing the second-check, a large portion of the physicists' time is spent navigating and arranging display windows to locate and compare the relevant numerical values (MLC position, collimator rotation, field size, MU, etc.). Here, we describe the development of a software tool that guides the physicist by aggregating and succinctly displaying machine parameter data relevant to the physics second-check process. Methods: A data retrieval software tool was developed using Python to aggregate data and generate a list of machine parameters that are commonly verified during the physics second-check process. This software tool imported values from (i) the TPS RT Plan DICOM file and (ii) the MOSAIQ (R&V) Structured Query Language (SQL) database. The machine parameters aggregated for this study included: MLC positions, X&Y jaw positions, collimator rotation, gantry rotation, MU, dose rate, wedges and accessories, cumulative dose, energy, machine name, couch angle, and more. Results: A GUI was developed to generate a side-by-side display of the aggregated machine parameter values for each field, presented to the physicist for direct visual comparison. This software tool was tested for 3D conformal, static IMRT, sliding window IMRT, and VMAT treatment plans. Conclusion: This software tool facilitated the data collection needed for the physicist to conduct a second-check, yielding an optimized second-check workflow that was both more user friendly and time-efficient. 
Utilizing this software tool, the physicist was able to spend less time searching through the TPS PDF plan document and the R&V system and focus the second-check efforts on assessing the patient-specific plan quality.
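The core of such a field-by-field comparison can be sketched in a few lines. The field names, values, and tolerances below are hypothetical; the abstract does not describe the tool's actual MOSAIQ/DICOM field mapping.

```python
def second_check(tps, rv, tolerances=None):
    """Compare TPS-calculated machine parameters against R&V values
    field by field; return a list of mismatch descriptions.  Numeric
    fields may carry a per-field tolerance (default: exact match)."""
    tolerances = tolerances or {}
    mismatches = []
    for field in sorted(set(tps) | set(rv)):
        a, b = tps.get(field), rv.get(field)
        tol = tolerances.get(field, 0)
        if isinstance(a, (int, float)) and isinstance(b, (int, float)):
            if abs(a - b) > tol:
                mismatches.append(f"{field}: TPS={a} R&V={b}")
        elif a != b:
            mismatches.append(f"{field}: TPS={a!r} R&V={b!r}")
    return mismatches

# Hypothetical parameter sets, not the actual MOSAIQ/DICOM schema.
tps = {"mu": 187, "gantry_deg": 180.0, "energy": "6X", "collimator_deg": 10.0}
rv  = {"mu": 187, "gantry_deg": 180.1, "energy": "6X", "collimator_deg": 15.0}
print(second_check(tps, rv, tolerances={"gantry_deg": 0.2}))
# flags only collimator_deg; gantry_deg is within tolerance
```

A side-by-side GUI display of exactly these aggregated pairs is what lets the physicist scan for discrepancies quickly.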

  19. Turbulent transport in premixed flames

    NASA Technical Reports Server (NTRS)

    Rutland, C. J.; Cant, R. S.

    1994-01-01

    Simulations of planar, premixed turbulent flames with heat release were used to study turbulent transport. Reynolds stress and Reynolds flux budgets were obtained and used to guide the investigation of important physical effects. Essentially all pressure terms in the transport equations were found to be significant. In the Reynolds flux equations, these terms are the major source of counter-gradient transport. Viscous and molecular terms were also found to be significant, with both dilatational and solenoidal terms contributing to the Reynolds stress dissipation. The BML theory of premixed turbulent combustion was critically examined in detail. The BML bimodal pdf was found to agree well with the DNS data. All BML decompositions, through the third moments, show very good agreement with the DNS results. Several BML models for conditional terms were checked using the DNS data and were found to require more extensive development.

  20. Model Diagnostics for Bayesian Networks

    ERIC Educational Resources Information Center

    Sinharay, Sandip

    2006-01-01

    Bayesian networks are frequently used in educational assessments primarily for learning about students' knowledge and skills. There is a lack of works on assessing fit of Bayesian networks. This article employs the posterior predictive model checking method, a popular Bayesian model checking tool, to assess fit of simple Bayesian networks. A…

  1. Dynamic modelling and experimental study of cantilever beam with clearance

    NASA Astrophysics Data System (ADS)

    Li, B.; Jin, W.; Han, L.; He, Z.

    2012-05-01

Clearances occur in almost all mechanical systems, typically such as the clearance between the slide plate of a gun barrel and its guide. Studying the clearances of mechanisms is therefore very important for increasing the working performance and lifetime of mechanisms. In this paper, rigid dynamic modelling of a cantilever with clearance was carried out. In the rigid dynamic model, the clearance is represented by an equivalent spring-dashpot model, and the impact between the beam and the boundary face was also taken into consideration. The dynamic simulation was carried out in the ADAMS software according to the model above, simulating the movement of the cantilever with clearance under external excitation. The research found that a larger clearance produces a larger impact force. To study how the stiffness of the cantilever's supporting part influences the natural frequency of the system, an Euler beam restrained at its end by an extension spring and a torsion spring was proposed. Through numerical calculation, the relationship between natural frequency and stiffness was found, and the boundary condition corresponding to the limiting value of the stiffness is illustrated. An ADAMS experiment was carried out to verify the theory and the simulation.

  2. The dopamine D2/D3 receptor agonist quinpirole increases checking-like behaviour in an operant observing response task with uncertain reinforcement: A novel possible model of OCD?

    PubMed Central

    Eagle, Dawn M.; Noschang, Cristie; d’Angelo, Laure-Sophie Camilla; Noble, Christie A.; Day, Jacob O.; Dongelmans, Marie Louise; Theobald, David E.; Mar, Adam C.; Urcelay, Gonzalo P.; Morein-Zamir, Sharon; Robbins, Trevor W.

    2014-01-01

    Excessive checking is a common, debilitating symptom of obsessive-compulsive disorder (OCD). In an established rodent model of OCD checking behaviour, quinpirole (dopamine D2/3-receptor agonist) increased checking in open-field tests, indicating dopaminergic modulation of checking-like behaviours. We designed a novel operant paradigm for rats (observing response task (ORT)) to further examine cognitive processes underpinning checking behaviour and clarify how and why checking develops. We investigated i) how quinpirole increases checking, ii) dependence of these effects on D2/3 receptor function (following treatment with D2/3 receptor antagonist sulpiride) and iii) effects of reward uncertainty. In the ORT, rats pressed an ‘observing’ lever for information about the location of an ‘active’ lever that provided food reinforcement. High- and low-checkers (defined from baseline observing) received quinpirole (0.5 mg/kg, 10 treatments) or vehicle. Parametric task manipulations assessed observing/checking under increasing task demands relating to reinforcement uncertainty (variable response requirement and active-lever location switching). Treatment with sulpiride further probed the pharmacological basis of long-term behavioural changes. Quinpirole selectively increased checking, both functional observing lever presses (OLPs) and non-functional extra OLPs (EOLPs). The increase in OLPs and EOLPs was long-lasting, without further quinpirole administration. Quinpirole did not affect the immediate ability to use information from checking. Vehicle and quinpirole-treated rats (VEH and QNP respectively) were selectively sensitive to different forms of uncertainty. Sulpiride reduced non-functional EOLPs in QNP rats but had no effect on functional OLPs. These data have implications for treatment of compulsive checking in OCD, particularly for serotonin-reuptake-inhibitor treatment-refractory cases, where supplementation with dopamine receptor antagonists may be beneficial. 
PMID:24406720

  3. Perpetual Model Validation

    DTIC Science & Technology

    2017-03-01

    models of software execution, for example memory access patterns, to check for security intrusions. Additional research was performed to tackle the...considered using indirect models of software execution, for example memory access patterns, to check for security intrusions. Additional research ...deterioration for example , no longer corresponds to the model used during verification time. Finally, the research looked at ways to combine hybrid systems

  4. Dielectric elastomer peristaltic pump module with finite deformation

    NASA Astrophysics Data System (ADS)

    Mao, Guoyong; Huang, Xiaoqiang; Liu, Junjie; Li, Tiefeng; Qu, Shaoxing; Yang, Wei

    2015-07-01

Inspired by various peristaltic structures existing in nature, several bionic peristaltic actuators have been developed. In this study, we propose a novel dielectric elastomer peristaltic pump consisting of short tubular modules, with saline solution as the electrodes. We investigate the performance of this soft pump module under hydraulic pressure and voltage via experiments and an analytical model based on nonlinear field theory. It is observed that the individual pump module undergoes finite deformation and may experience electromechanical instability during operation. The driving pressure and displaced volume of the peristaltic pump module can be modulated by the applied voltage. The efficiency of the pump module is enhanced by alternating-current voltage, which can suppress the electromechanical pull-in instability. An analytical model is developed within the framework of nonlinear field theory, and its predictive capacity is checked against experimental observations. The effects of prestretch, aspect ratio, and voltage on the performance of the pump modules are characterized by the analytical model. This work can guide the design of soft active peristaltic pumps in the fields of artificial organs and industrial conveying systems.

  5. Inspection of aeronautical mechanical parts with a pan-tilt-zoom camera: an approach guided by the computer-aided design model

    NASA Astrophysics Data System (ADS)

    Viana, Ilisio; Orteu, Jean-José; Cornille, Nicolas; Bugarin, Florian

    2015-11-01

    We focus on quality control of mechanical parts in aeronautical context using a single pan-tilt-zoom (PTZ) camera and a computer-aided design (CAD) model of the mechanical part. We use the CAD model to create a theoretical image of the element to be checked, which is further matched with the sensed image of the element to be inspected, using a graph theory-based approach. The matching is carried out in two stages. First, the two images are used to create two attributed graphs representing the primitives (ellipses and line segments) in the images. In the second stage, the graphs are matched using a similarity function built from the primitive parameters. The similarity scores of the matching are injected in the edges of a bipartite graph. A best-match-search procedure in the bipartite graph guarantees the uniqueness of the match solution. The method achieves promising performance in tests with synthetic data including missing elements, displaced elements, size changes, and combinations of these cases. The results open good prospects for using the method with realistic data.
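The best-match search over the bipartite graph of similarity scores can be illustrated with a small sketch. The scores below are hypothetical (the paper's actual similarity function is built from ellipse and line-segment parameters); for the small primitive counts involved, an exhaustive search over assignments finds the optimal one-to-one matching.

```python
from itertools import permutations

def best_match(sim):
    """Given a square similarity matrix sim[i][j] between CAD
    primitives i and image primitives j, return the one-to-one
    assignment maximizing total similarity (brute force, small n)."""
    n = len(sim)
    best_score, best_assign = float("-inf"), None
    for perm in permutations(range(n)):
        score = sum(sim[i][perm[i]] for i in range(n))
        if score > best_score:
            best_score, best_assign = score, list(perm)
    return best_assign, best_score

# Hypothetical similarity scores between 3 CAD primitives (rows)
# and 3 detected image primitives (columns).
sim = [[0.9, 0.2, 0.1],
       [0.3, 0.1, 0.8],
       [0.2, 0.7, 0.3]]
assign, score = best_match(sim)
print(assign, score)  # each primitive is matched exactly once
```

Enforcing a single best assignment, rather than matching each primitive greedily, is what guarantees the uniqueness of the match solution described in the abstract; for larger graphs the brute force would be replaced by a polynomial-time assignment algorithm.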

  6. Development of a new technique for pedicle screw and Magerl screw insertion using a 3-dimensional image guide.

    PubMed

    Kawaguchi, Yoshiharu; Nakano, Masato; Yasuda, Taketoshi; Seki, Shoji; Hori, Takeshi; Kimura, Tomoatsu

    2012-11-01

We developed a new technique for cervical pedicle screw and Magerl screw insertion using a 3-dimensional image guide. In posterior cervical spinal fusion surgery, instrumentation with screws is virtually routine. However, malpositioning of screws is not rare. To avoid complications during cervical pedicle screw and Magerl screw insertion, the authors developed a new technique that uses a mold shaped to fit the lamina. Cervical pedicle screw fixation and Magerl screw fixation provide good correction of cervical alignment, rigid fixation, and a high fusion rate. However, malpositioning of screws is not a rare occurrence, and thus the insertion of screws carries a potential risk of neurovascular injury. It is necessary to determine a safe insertion procedure for these screws. Preoperative computed tomographic (CT) scans of 1-mm slice thickness were obtained of the whole surgical area. The CT data were imported into a computer navigation system. We developed a 3-dimensional full-scale model of the patient's spine using a rapid prototyping technique from the CT data. Molds of the left and right sides at each vertebra were also constructed. One hole (2.0 mm in diameter and 2.0 cm in length) was made in each mold for the insertion of a screw guide. We performed a simulated surgery using the bone model and the mold before the operation in all patients. The mold was firmly attached to the surface of the lamina and the guide wire was inserted using the intraoperative lateral image of the vertebra. The proper insertion point, direction, and length of the guide were also confirmed both with the model bone and with the image intensifier in the operative field. Then, drilling using a cannulated drill and tapping using a cannulated tapping device were carried out. Eleven consecutive patients who underwent posterior spinal fusion surgery using this technique since 2009 are included. 
The screw insertion was done in the same manner as in the simulated surgery. The screw positions in the sagittal and axial planes were evaluated by postoperative CT scan to check for malpositioning. With the aid of this guide, the pedicle screws and Magerl screws could be easily inserted even at levels where the pedicle appeared very thin and sclerotic on the CT scan. Postoperative CT showed no critical breaches of the screws. This method, employing a device based on a 3-dimensional image guide, appears easy and safe to use. The technique may improve the safety of pedicle screw and Magerl screw insertion even in difficult cases with narrow sclerotic pedicles.

  7. Proceedings of the Second NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Munoz, Cesar (Editor)

    2010-01-01

    This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision 
Diagrams; and Data-flow based Model Analysis.

  8. Realization of ActiveX control based on ATL in VC 2008

    NASA Astrophysics Data System (ADS)

    Li, Shuhua; Tie, Yong

    2011-10-01

    ActiveX plays a key role in web development. This paper implements the classic Polygon example program in the Visual C++ 2008 environment and tests each function of the control in the ActiveX Control Test Container. Web code is then generated rapidly via the ActiveX Control Pad and checked in HTML. The development process and key points requiring attention are summarized systematically to guide related developers.

  9. Proceedings of the First NASA Formal Methods Symposium

    NASA Technical Reports Server (NTRS)

    Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)

    2009-01-01

    Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.

  10. Model analysis of check dam impacts on long-term sediment and water budgets in southeast Arizona, USA

    USGS Publications Warehouse

    Norman, Laura M.; Niraula, Rewati

    2016-01-01

    The objective of this study was to evaluate the effect of check dam infrastructure on soil and water conservation at the catchment scale using the Soil and Water Assessment Tool (SWAT). This paired-watershed study includes a watershed treated with over 2000 check dams and a Control watershed with none, in the West Turkey Creek watershed, Southeast Arizona, USA. SWAT was calibrated for streamflow using discharge documented during the summer of 2013 at the Control site. Model results highlight the necessity of eliminating lateral flow from SWAT models of aridland environments, the urgency of standardizing geospatial soils data, and the care with which modelers must document parameter changes when presenting findings. Performance was assessed using the percent bias (PBIAS), with values of ±2.34%. The calibrated model was then used to examine the impacts of check dams in the Treated watershed. Approximately 630 tons of sediment is estimated to be stored behind check dams in the Treated watershed over the 3-year simulation, improving water quality for fish habitat. A minimum precipitation event of 15 mm was necessary to instigate the detachment of soil, sediments, or rock from the study area, which occurred 2% of the time. The resulting watershed model is useful as a predictive framework and decision-support tool to consider long-term impacts of restoration and the potential for future restoration.
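
The PBIAS goodness-of-fit statistic used above has a standard definition; as a minimal illustrative sketch (generic formula with made-up discharge numbers, not data or code from the study):

```python
def pbias(observed, simulated):
    """Percent bias, PBIAS = 100 * sum(obs - sim) / sum(obs).
    Positive values indicate the model underestimates the observations;
    values near zero indicate a good volumetric fit."""
    total = sum(observed)
    if total == 0:
        raise ValueError("sum of observations is zero; PBIAS undefined")
    return 100.0 * sum(o - s for o, s in zip(observed, simulated)) / total

# Hypothetical daily discharge values (m^3/s), observed vs. simulated:
obs = [12.0, 8.5, 15.2, 9.3]
sim = [11.5, 8.9, 14.8, 9.6]
print(round(pbias(obs, sim), 2))  # a small bias, well within +/-2.34%
```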

  11. New Results in Software Model Checking and Analysis

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.

    2010-01-01

    This introductory article surveys new techniques, supported by automated tools, for the analysis of software to ensure reliability and safety. Special focus is on model checking techniques. The article also introduces the five papers that are enclosed in this special journal volume.

  12. Influence of Gestational Age and Body Weight on the Pharmacokinetics of Labetalol in Pregnancy

    PubMed Central

    Fischer, James H.; Sarto, Gloria E.; Hardman, Jennifer; Endres, Loraine; Jenkins, Thomas M.; Kilpatrick, Sarah J.; Jeong, Hyunyoung; Geller, Stacie; Deyo, Kelly; Fischer, Patricia A.; Rodvold, Keith A.

    2015-01-01

    Background and Objectives: Labetalol is frequently prescribed for treatment of hypertension during pregnancy. However, the influence of pregnancy on labetalol pharmacokinetics is uncertain, with inconsistent findings reported by previous studies. This study examined the population pharmacokinetics of oral labetalol during and after pregnancy in women receiving labetalol for hypertension. Methods: Data were collected from 57 women receiving the drug for hypertension from the 12th week of pregnancy through 12 weeks postpartum using a prospective, longitudinal design. A sparse sampling strategy guided collection of plasma samples. Samples were assayed for labetalol by high-performance liquid chromatography. Estimation of population pharmacokinetic parameters and covariate effects was performed by nonlinear mixed-effects modeling using NONMEM. The final population model was validated by bootstrap analysis and visual predictive check. Simulations were performed with the final model to evaluate the appropriate body weight to guide labetalol dosing. Results: Lean body weight (LBW) and gestational age, i.e., weeks of pregnancy, were identified as significantly influencing the oral clearance (CL/F) of labetalol, with CL/F ranging from 1.4-fold greater than postpartum values at 12 weeks of gestational age to 1.6-fold greater at 40 weeks. Doses adjusted for LBW provide more consistent drug exposure than doses adjusted for total body weight. The apparent volumes of distribution for the central compartment and at steady state were 1.9-fold higher during pregnancy. Conclusions: Gestational age and LBW impact the pharmacokinetics of labetalol during pregnancy and have clinical implications for adjusting labetalol doses in these women. PMID:24297680
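
The reported gestational-age effect on clearance can be sketched numerically. The linear interpolation between the two reported fold-changes (1.4-fold at 12 weeks, 1.6-fold at 40 weeks) is an assumption for illustration only, not the published covariate model:

```python
def cl_fold_change(gestational_weeks):
    """Fold-increase in labetalol CL/F versus postpartum, linearly
    interpolated between the reported 1.4-fold at 12 weeks and
    1.6-fold at 40 weeks of gestation (interpolation form assumed)."""
    ga = min(max(gestational_weeks, 12.0), 40.0)  # clamp to the studied range
    return 1.4 + (1.6 - 1.4) * (ga - 12.0) / (40.0 - 12.0)

print(cl_fold_change(12.0))            # 1.4
print(round(cl_fold_change(26.0), 2))  # mid-pregnancy: 1.5
```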

  13. Comparing Macroscale and Microscale Simulations of Porous Battery Electrodes

    DOE PAGES

    Higa, Kenneth; Wu, Shao-Ling; Parkinson, Dilworth Y.; ...

    2017-06-22

    This article describes a vertically-integrated exploration of NMC electrode rate limitations, combining experiments with corresponding macroscale (macro-homogeneous) and microscale models. Parameters common to both models were obtained from experiments or based on published results. Positive electrode tortuosity was the sole fitting parameter used in the macroscale model, while the microscale model used no fitting parameters, instead relying on microstructural domains generated from X-ray microtomography of pristine electrode material held under compression while immersed in electrolyte solution (additionally providing novel observations of electrode wetting). Macroscale simulations showed that the capacity decrease observed at higher rates resulted primarily from solution-phase diffusion resistance. This ability to provide such qualitative insights at low computational costs is a strength of macroscale models, made possible by neglecting electrode spatial details. To explore the consequences of such simplification, the corresponding, computationally-expensive microscale model was constructed. This was found to have limitations preventing quantitatively accurate predictions, for reasons that are discussed in the hope of guiding future work. Nevertheless, the microscale simulation results complement those of the macroscale model by providing a reality-check based on microstructural information; in particular, this novel comparison of the two approaches suggests a reexamination of salt diffusivity measurements.

  14. Query Language for Location-Based Services: A Model Checking Approach

    NASA Astrophysics Data System (ADS)

    Hoareau, Christian; Satoh, Ichiro

    We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach differs from existing research in that it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and discussed.
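
The idea of a tree-structured symbolic location model can be sketched with a toy containment query. The tree contents and the simple recursive check below are illustrative assumptions only, not the paper's hybrid-logic machinery:

```python
class Location:
    """A node in a symbolic location tree (e.g. building > floor > room)."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def contains(node, target):
    """Toy location query: is `target` somewhere in `node`'s subtree?
    Real hybrid-logic queries are far more expressive than containment."""
    return node.name == target or any(contains(c, target) for c in node.children)

# Illustrative space model:
building = Location("building", [
    Location("floor1", [
        Location("room101", [Location("printer")]),
        Location("room102"),
    ]),
])
print(contains(building, "printer"))  # True
print(contains(building, "scanner"))  # False
```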

  15. Midwives' clinical reasoning during second stage labour: Report on an interpretive study.

    PubMed

    Jefford, Elaine; Fahy, Kathleen

    2015-05-01

    Clinical reasoning was once thought to be the exclusive domain of medicine, setting it apart from 'non-scientific' occupations like midwifery. Poor assessment, clinical reasoning, and decision-making skills are well-known contributors to adverse outcomes in maternity care. Midwifery decision-making models share a common deficit: they are insufficiently detailed to guide reasoning processes for midwives in practice. For these reasons we wanted to explore whether midwives actively engage in clinical reasoning processes within their clinical practice and, if so, to what extent. The study was conducted using post-structural, feminist methodology. The research question was: to what extent do midwives engage in clinical reasoning processes when making decisions in second stage labour? Twenty-six practising midwives were interviewed. Feminist interpretive analysis was conducted by two researchers guided by the steps of a model of clinical reasoning process. Six narratives were excluded from analysis because they did not sufficiently address the research question. The midwives' narratives were prepared via data reduction, and a theoretically informed analysis and interpretation was conducted. Using a feminist, interpretive approach we created a model of midwifery clinical reasoning grounded in the literature and consistent with the data. Thirteen of the 20 participant narratives demonstrated analytical clinical reasoning abilities, but only nine completed the process and implemented the decision. Seven midwives used non-analytical decision-making without adequately checking against assessment data. Over half of the participants demonstrated the ability to use clinical reasoning skills; less than half demonstrated clinical reasoning as their way of making decisions. The new model of Midwifery Clinical Reasoning includes 'intuition' as a valued way of knowing. Using intuition, however, should not replace clinical reasoning, through which decision-making can be made transparent and consensually validated. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Novel Treatment Planning of Hemimandibular Hyperplasia by the Use of Three-Dimensional Computer-Aided-Design and Computer-Aided-Manufacturing Technologies.

    PubMed

    Hatamleh, Muhanad M; Yeung, Elizabeth; Osher, Jonas; Huppa, Chrisopher

    2017-05-01

    Hemimandibular hyperplasia is characterized by an obvious overgrowth in the size of the mandible on one side, which can extend up to the midline, causing facial asymmetry. Surgical resection of the overgrowth depends heavily on the skill and experience of the surgeon. This report describes a novel methodology of applying three-dimensional computer-aided-design and computer-aided-manufacturing principles to improve the outcome of surgery in 2 hemimandibular hyperplasia patients. Both patients had a cone beam computed tomography (CBCT) scan performed. CMF Pro Plan software (v. 2.1) was used to process the scan data into virtual 3-dimensional models of the maxilla and mandible. Head tilt was adjusted manually by following a horizontal reference. Facial asymmetry secondary to mandibular hypertrophy was obvious on frontal and lateral views. Simulation functions were applied, including mirror imaging of the unaffected mandibular side onto the hyperplastic side, and the position was optimized by translation and orientation functions. Reconstruction of virtual symmetry was assessed and checked by running 3-dimensional measurements. Then, subtraction functions were used to create a 3-dimensional template defining the outline of the lower mandibular osteotomy needed. Precision of the mandibular teeth was enhanced by amalgamating the CBCT scan with an e-cast scan of the patient's lower teeth. 3-Matic software (v. 10.0) was used to design cutting guide(s) that define the amount of overgrowth to be resected. The top section of the guide rested on the teeth, ensuring stability and accuracy while positioning it. The guide design was exported as an .stl file and printed using an in-house 3-dimensional printer in biocompatible resin. 
The three-dimensional technologies of both software packages (CMF Pro Plan and 3-Matic) are accurate and reliable methods for the diagnosis, treatment planning, and design of cutting guides that optimize surgical correction of hemimandibular hyperplasia in a timely and cost-effective manner.

  17. Abstraction Techniques for Parameterized Verification

    DTIC Science & Technology

    2006-11-01

    approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques. Prop... model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite

  18. Improvement in latent variable indirect response joint modeling of a continuous and a categorical clinical endpoint in rheumatoid arthritis.

    PubMed

    Hu, Chuanpu; Zhou, Honghui

    2016-02-01

    Improving the quality of exposure-response modeling is important in clinical drug development. The general joint modeling of multiple endpoints is made possible in part by recent progress on latent variable indirect response (IDR) modeling for ordered categorical endpoints. This manuscript aims to investigate, when modeling a continuous and a categorical clinical endpoint, the level of improvement achievable by joint modeling in the latent variable IDR modeling framework through the sharing of model parameters for the individual endpoints, guided by an appropriate representation of drug and placebo mechanism. This was illustrated with data from two phase III clinical trials of intravenously administered mAb X for the treatment of rheumatoid arthritis, in which the 28-joint disease activity score (DAS28) and 20%, 50%, and 70% improvement in the American College of Rheumatology disease severity criteria (ACR20, ACR50, and ACR70) were used as efficacy endpoints. The joint modeling framework led to a parsimonious final model with reasonable performance, evaluated by visual predictive check. The results showed that, compared with the more common approach of separately modeling the endpoints, it is possible for the joint model to be more parsimonious and yet better describe the individual endpoints. In particular, the joint model may better describe one endpoint through subject-specific random effects that would not have been estimable from data on this endpoint alone.

  19. The Automation of Nowcast Model Assessment Processes

    DTIC Science & Technology

    2016-09-01

    that will automate real-time WRE-N model simulations, collect and quality-control check weather observations for assimilation and verification, and...domains centered near White Sands Missile Range, New Mexico, where the Meteorological Sensor Array (MSA) will be located. The MSA will provide...observations and performing quality-control checks for the pre-forecast data assimilation period. 2. Run the WRE-N model to generate model forecast data

  20. Temporal Precedence Checking for Switched Models and its Application to a Parallel Landing Protocol

    NASA Technical Reports Server (NTRS)

    Duggirala, Parasara Sridhar; Wang, Le; Mitra, Sayan; Viswanathan, Mahesh; Munoz, Cesar A.

    2014-01-01

    This paper presents an algorithm for checking temporal precedence properties of nonlinear switched systems. This class of properties subsumes bounded safety and captures requirements about visiting a sequence of predicates within given time intervals. The algorithm handles nonlinear predicates that arise from dynamics-based predictions used in alerting protocols for state-of-the-art transportation systems. It is sound and complete for nonlinear switched systems that robustly satisfy the given property. The algorithm is implemented in the Compare Execute Check Engine (C2E2) using validated simulations. As a case study, a simplified model of an alerting system for closely spaced parallel runways is considered. The proposed approach is applied to this model to check safety properties of the alerting logic for different operating conditions such as initial velocities, bank angles, aircraft longitudinal separation, and runway separation.
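
A temporal precedence property of this kind can be checked over sampled executions with a single pass. The sketch below (toy descent trace, made-up predicates and time windows, nothing from C2E2 itself) shows the core idea of visiting a sequence of predicates within given time intervals:

```python
def check_precedence(trace, stages):
    """Check that a sampled (time, state) trace satisfies each predicate
    in order, each at some sample falling inside its [lo, hi] time window.
    `stages` is a list of (predicate, lo, hi) tuples."""
    i = 0  # index of the next stage to satisfy
    for t, state in trace:
        if i == len(stages):
            break
        pred, lo, hi = stages[i]
        if lo <= t <= hi and pred(state):
            i += 1
    return i == len(stages)

# Toy descent trace: altitude (ft) dropping 50 ft per second
trace = [(t, {"alt": 1000 - 50 * t}) for t in range(21)]
stages = [
    (lambda s: s["alt"] <= 800, 0, 10),   # below 800 ft within 10 s ...
    (lambda s: s["alt"] <= 200, 10, 20),  # ... then below 200 ft by 20 s
]
print(check_precedence(trace, stages))  # True
```

A property fails if any predicate in the sequence is never satisfied inside its window, which is how bounded-safety requirements fall out as a special case.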

  1. [Value of ultrasonically-guided liver puncture biopsy in the diagnosis of primary liver cancer. Apropos of 84 cases].

    PubMed

    Peghini, M; Eynard, J P; Vergne, R; Seurat, P; Barabe, P; Aubry, P; Diallo, A; Gueye, P M

    1987-01-01

    Ultrasonographically guided fine-needle aspiration of the liver was performed in 84 patients with confirmed HCC. The technique uses a CHIBA-type fine needle after blood coagulation tests have been checked. Of the 84 fine-needle aspirations performed, 64 were positive (76.2%), 9 negative (10.7%), and 11 (13.1%) were questionable (6) or non-analysable (5). The sensitivity of the technique is thus over 75%; it could be improved by repeating the examination in previously negative patients. The causes of failure are discussed. The technique is well tolerated, atraumatic, and very easy to perform. No accidents or mishaps were noted.

  2. The method of a joint intraday security check system based on cloud computing

    NASA Astrophysics Data System (ADS)

    Dong, Wei; Feng, Changyou; Zhou, Caiqi; Cai, Zhi; Dan, Xu; Dai, Sai; Zhang, Chuancheng

    2017-01-01

    The intraday security check is the core application in the dispatching control system. The existing security check calculation uses only the dispatch center's local model and data as the functional margin. This paper introduces the design of an all-grid intraday joint security check system based on cloud computing and its implementation. To reduce the effect of subarea bad data on the all-grid security check, a new power flow algorithm based on comparison and adjustment with the inter-provincial tie-line plan is presented. A numerical example illustrates the effectiveness and feasibility of the proposed method.

  3. 76 FR 35344 - Airworthiness Directives; Costruzioni Aeronautiche Tecnam srl Model P2006T Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-17

    ... retraction/extension ground checks performed on the P2006T, a loose Seeger ring was found on the nose landing... specified products. The MCAI states: During Landing Gear retraction/extension ground checks performed on the... airworthiness information (MCAI) states: During Landing Gear retraction/extension ground checks performed on the...

  4. Choices and Challenges: A Guide for the Battalion Commander’s Wife

    DTIC Science & Technology

    1991-05-28

    make sure you allow yourself plenty of time between them (about 20 minutes) to replenish food and beverage. Whom do you invite? There are no rules. Some... Fundraising Events: bake sales (donation only), food booth sales, auctions and raffles (days off or passes for soldiers are illegal). Check your local

  5. Practical guides for seeding grass on skid roads, trails, and landings, following logging on east-side forests of Washington and Oregon.

    Treesearch

    J.O. Gjertson

    1949-01-01

    Seeding to perennial grasses is an effective method for stabilizing soil, preventing invasion by undesirable plants, and increasing forage production on ground denuded during logging. A survey in 1948 of 52 areas seeded between 1940 and 1946 found 80 percent of the seedings to be medium or better in success, and 45 percent good or very good in success. A careful check...

  6. Consumer Law Guide

    DTIC Science & Technology

    1994-06-01

    Consumer Finance Act by making short-term advances to customers who write personal checks in return for substantially smaller amounts of on-the-spot cash...practices lawsuit with H&R Block, Inc., forcing the tax-return company to advertise that its "Rapid Refund" program is actually a loan program charging customers...home equity loans/lines of credit/home improvement loans, etc.) 2. A consumer can have only one principal dwelling at a time (includes mobile homes

  7. Sediment trapping efficiency of adjustable check dam in laboratory and field experiment

    NASA Astrophysics Data System (ADS)

    Wang, Chiang; Chen, Su-Chin; Lu, Sheng-Jui

    2014-05-01

    Check dams are constructed in mountain areas to block debris flows, but they fill after several events and lose their trapping function. For this reason, the main facility in our research is an adjustable steel slit check dam, which has the advantages of fast construction and easy removal or adjustment: transverse beams can be removed to drain sediment off and keep the channel continuous. We constructed an adjustable steel slit check dam on the Landow torrent, Huisun Experimental Forest Station, as the prototype for comparison with a laboratory model. In the laboratory experiments, Froude number similarity was used to design the dam model. The main comparisons focused on the types of sediment trapping and removal, sediment discharge, and the trapping rate of the slit check dam. Different ways of removing the transverse beams produced different kinds of sediment removal and differences in removal rate and particle-size distribution. The sediment discharge of the check dam with beams is about 40-80% of that of the check dam without beams; moreover, the spacing of the beams is a considerable factor in sediment discharge. In the field experiment, time-lapse photography was used to record the adjustable steel slit check dam on the Landow torrent. Typhoon Soulik brought rainfall of 600 mm in eight hours and induced a debris flow in the Landow torrent. The time-lapse images showed that after several sediment-transport events the adjustable steel slit check dam was buried by debris flow. Results of the laboratory and field experiments: (1) the adjustable check dam can trap boulders, stop woody debris flows, and flush fine sediment out to supply the needs of the downstream river; (2) the efficiency of sediment trapping with transverse beams installed was significantly improved; (3) without transverse beams, the check dam can release sediment and maintain ecosystem continuity.
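
Froude number similarity, used above to design the laboratory dam model, fixes how hydraulic quantities scale with the geometric ratio. A generic sketch (the 1:25 ratio is illustrative, not the study's actual scale):

```python
import math

def froude_scale(length_ratio):
    """Under Froude similarity, Fr = v / sqrt(g * L) is equal in model and
    prototype, so a prototype:model length ratio Lr implies velocity and
    time ratios of sqrt(Lr) and a discharge ratio of Lr**2.5."""
    return {
        "velocity": math.sqrt(length_ratio),
        "time": math.sqrt(length_ratio),
        "discharge": length_ratio ** 2.5,
    }

# Hypothetical 1:25 scale model:
r = froude_scale(25)
print(r["velocity"])  # prototype velocities are 5x the model's
print(r["discharge"])
```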

  8. The dopamine D2/D3 receptor agonist quinpirole increases checking-like behaviour in an operant observing response task with uncertain reinforcement: a novel possible model of OCD.

    PubMed

    Eagle, Dawn M; Noschang, Cristie; d'Angelo, Laure-Sophie Camilla; Noble, Christie A; Day, Jacob O; Dongelmans, Marie Louise; Theobald, David E; Mar, Adam C; Urcelay, Gonzalo P; Morein-Zamir, Sharon; Robbins, Trevor W

    2014-05-01

    Excessive checking is a common, debilitating symptom of obsessive-compulsive disorder (OCD). In an established rodent model of OCD checking behaviour, quinpirole (dopamine D2/3-receptor agonist) increased checking in open-field tests, indicating dopaminergic modulation of checking-like behaviours. We designed a novel operant paradigm for rats (observing response task (ORT)) to further examine cognitive processes underpinning checking behaviour and clarify how and why checking develops. We investigated i) how quinpirole increases checking, ii) dependence of these effects on D2/3 receptor function (following treatment with D2/3 receptor antagonist sulpiride) and iii) effects of reward uncertainty. In the ORT, rats pressed an 'observing' lever for information about the location of an 'active' lever that provided food reinforcement. High- and low-checkers (defined from baseline observing) received quinpirole (0.5 mg/kg, 10 treatments) or vehicle. Parametric task manipulations assessed observing/checking under increasing task demands relating to reinforcement uncertainty (variable response requirement and active-lever location switching). Treatment with sulpiride further probed the pharmacological basis of long-term behavioural changes. Quinpirole selectively increased checking, both functional observing lever presses (OLPs) and non-functional extra OLPs (EOLPs). The increase in OLPs and EOLPs was long-lasting, without further quinpirole administration. Quinpirole did not affect the immediate ability to use information from checking. Vehicle and quinpirole-treated rats (VEH and QNP respectively) were selectively sensitive to different forms of uncertainty. Sulpiride reduced non-functional EOLPs in QNP rats but had no effect on functional OLPs. These data have implications for treatment of compulsive checking in OCD, particularly for serotonin-reuptake-inhibitor treatment-refractory cases, where supplementation with dopamine receptor antagonists may be beneficial. 
Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Two-step activation of paper batteries for high power generation: design and fabrication of biofluid- and water-activated paper batteries

    NASA Astrophysics Data System (ADS)

    Lee, Ki Bang

    2006-11-01

    Two-step activation of paper batteries has been successfully demonstrated to provide quick activation and to supply high power to credit card-sized biosystems on a plastic chip. A stack of a magnesium layer (an anode), a fluid guide (absorbent paper), a highly doped filter paper with copper chloride (a cathode) and a copper layer as a current collector is laminated between two transparent plastic films into a high power biofluid- and water-activated battery. The battery is activated by two-step activation: (1) after placing a drop of biofluid/water-based solution on the fluid inlet, the surface tension first drives the fluid to soak the fluid guide; (2) the fluid in the fluid guide then penetrates into the heavily doped filter paper with copper chloride to start the battery reaction. The fabricated half credit card-sized battery was activated by saliva, urine and tap water and delivered a maximum voltage of 1.56 V within 10 s after activation and a maximum power of 15.6 mW. When 10 kΩ and 1 kΩ loads are used, the service time with water, urine and saliva is measured as more than 2 h. An in-series battery of 3 V has been successfully tested to power two LEDs (light emitting diodes) and an electric driving circuit. As such, this high power paper battery could be integrated with on-demand credit card-sized biosystems such as healthcare test kits, biochips, lab-on-a-chip, DNA chips, protein chips or even test chips for water quality checking or chemical checking.

  10. Trends in magnetism of free Rh clusters via relativistic ab-initio calculations.

    PubMed

    Šipr, O; Ebert, H; Minár, J

    2015-02-11

    A fully relativistic ab-initio study on free Rh clusters of 13-135 atoms is performed to identify general trends concerning their magnetism and to check whether concepts which proved to be useful in interpreting magnetism of 3d metals are applicable to magnetism of 4d systems. We found that there is no systematic relation between local magnetic moments and coordination numbers. On the other hand, the Stoner model appears well-suited both as a criterion for the onset of magnetism and as a guide for the dependence of local magnetic moments on the site-resolved density of states at the Fermi level. Large orbital magnetic moments antiparallel to spin magnetic moments were found for some sites. The intra-atomic magnetic dipole Tz term can be quite large at certain sites but as a whole it is unlikely to affect the interpretation of x-ray magnetic circular dichroism experiments based on the sum rules.
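
The Stoner criterion invoked above is a simple inequality: spontaneous spin polarization is expected when the product of the Stoner exchange parameter I and the density of states at the Fermi level N(E_F) exceeds 1. A minimal sketch with illustrative numbers (not values from the paper):

```python
def is_stoner_magnetic(stoner_I_eV, dos_ef_per_eV):
    """Stoner criterion: magnetism is favoured when I * N(E_F) > 1,
    with I in eV and N(E_F) in states per eV (per atom, per spin)."""
    return stoner_I_eV * dos_ef_per_eV > 1.0

# Illustrative values only:
print(is_stoner_magnetic(0.7, 1.8))  # 0.7 * 1.8 = 1.26 > 1 -> True
print(is_stoner_magnetic(0.7, 1.2))  # 0.84 < 1 -> False
```

The same inequality also rationalizes the trend the authors report: sites with a larger site-resolved N(E_F) tend to carry larger local moments.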

  11. An experimental method to verify soil conservation by check dams on the Loess Plateau, China.

    PubMed

    Xu, X Z; Zhang, H W; Wang, G Q; Chen, S C; Dang, W Q

    2009-12-01

    A successful experiment with a physical model requires necessary conditions of similarity. This study presents an experimental method with a semi-scale physical model, used to monitor and verify soil conservation by check dams in a small watershed on the Loess Plateau of China. During the experiments, the model-prototype ratio of geomorphic variables was kept constant under each rainfall event. Consequently, the experimental data are available both for verifying soil erosion processes in the field and for predicting soil loss in a model watershed with check dams. The study also establishes four similarity criteria: watershed geometry, grain size and bare land, Froude number (Fr) for the rainfall event, and soil erosion in the downscaled models. The efficacy of the proposed method was confirmed using these criteria in two different downscaled model experiments. The B-Model, a large-scale model, simulates the watershed prototype. The two small-scale models, D(a) and D(b), have different erosion rates but the same size, and simulate the hydraulic processes in the B-Model. Experimental results show that when the soil loss in the small-scale models was converted by multiplying by the soil loss scale number, it was very close to that of the B-Model. Thus, with a semi-scale physical model, experiments can verify and predict soil loss in a small watershed with a check dam system on the Loess Plateau, China.
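Among the four criteria, the Froude-number condition fixes how velocities scale: keeping Fr = v/√(gL) equal in model and prototype forces the velocity scale to be the square root of the length scale. A minimal sketch with hypothetical numbers, not taken from the experiments:

```python
import math

# Sketch of Froude-number similarity, one of the four criteria listed:
# keeping Fr = v / sqrt(g * L) equal in model and prototype forces the
# velocity scale to be the square root of the length scale. The numbers
# below are hypothetical, not taken from the experiments.

G = 9.81  # gravitational acceleration, m/s^2

def froude(v: float, length: float) -> float:
    """Froude number for velocity v (m/s) and characteristic length (m)."""
    return v / math.sqrt(G * length)

length_scale = 1.0 / 100.0          # model is 1:100 of the prototype
v_prototype = 2.0                   # m/s, hypothetical flow velocity
v_model = v_prototype * math.sqrt(length_scale)

# Equal Froude numbers confirm dynamic similarity of the flow
assert abs(froude(v_prototype, 100.0) - froude(v_model, 1.0)) < 1e-12
print(v_model)  # 0.2 m/s
```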

  12. 75 FR 27406 - Airworthiness Directives; Bombardier, Inc. Model BD-100-1A10 (Challenger 300) Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-17

    ... BD- 100 Time Limits/Maintenance Checks. The actions described in this service information are... Challenger 300 BD-100 Time Limits/Maintenance Checks. (1) For the new tasks identified in Bombardier TR 5-2... Requirements,'' in Part 2 of Chapter 5 of Bombardier Challenger 300 BD-100 Time Limits/ Maintenance Checks...

  13. 75 FR 66655 - Airworthiness Directives; PILATUS Aircraft Ltd. Model PC-7 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-29

    ... December 3, 2010 (the effective date of this AD), check the airplane maintenance records to determine if... of the airplane. Do this check following paragraph 3.A. of Pilatus Aircraft Ltd. PC-7 Service... maintenance records check required in paragraph (f)(1) of this AD or it is unclear whether or not the left and...

  14. 77 FR 20520 - Airworthiness Directives; Bombardier, Inc. Model BD-100-1A10 (Challenger 300) Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-04-05

    ... Bombardier Challenger 300 BD-100 Time Limits/Maintenance Checks Manual. For this task, the initial compliance..., of Part 2, of the Bombardier Challenger 300 BD-100 Time Limits/Maintenance Checks Manual, the general.../Maintenance Checks Manual, provided that the relevant information in the general revision is identical to that...

  15. Development of a Multi-Behavioral mHealth App for Women Smokers.

    PubMed

    Armin, Julie; Johnson, Thienne; Hingle, Melanie; Giacobbi, Peter; Gordon, Judith S

    2017-02-01

    This article describes the development of the See Me Smoke-Free™ (SMSF) mobile health application, which uses guided imagery to support women in smoking cessation, eating a healthy diet, and increasing physical activity. Focus group discussions, with member checks, were conducted to refine the intervention content and app user interface. Data related to the context of app deployment were collected via user testing sessions and internal quality control testing, which identified and addressed functionality issues, content problems, and bugs. Interactive app features include playback of guided imagery audio files, notification pop-ups, award-sharing on social media, a tracking calendar, content resources, and direct call to the local tobacco quitline. Focus groups helped design the user interface and identified several themes for incorporation into app content, including positivity, the rewards of smoking cessation, and the integrated benefits of maintaining a healthy lifestyle. User testing improved app functionality and usability on many Android phone models. Changes to the app content and function were made iteratively by the development team as a result of focus group and user testing. Despite extensive internal and user testing, unanticipated data collection and reporting issues emerged during deployment due not only to the variety of Android software and hardware but also to individual phone settings and use.

  16. A procedure for seismic risk reduction in Campania Region

    NASA Astrophysics Data System (ADS)

    Zuccaro, G.; Palmieri, M.; Maggiò, F.; Cicalese, S.; Grassi, V.; Rauci, M.

    2008-07-01

    The Campania Region has set up and carried out a distinctive procedure in the field of seismic risk reduction. Great attention has been paid to strategic public buildings such as town halls, civil protection buildings and schools. Ordinance 3274, promulgated in 2004 by the Italian central authority, obliged the owners of strategic buildings to perform seismic analyses by 2008 in order to check the safety of the structures and their adequacy for use. Under the procedure, the Campania Region, instead of the local authorities, ensures the complete drafting of the seismic checks through financial resources of the Italian Government. A regional scientific technical committee has been constituted, composed of scientific experts and academics in seismic engineering. The committee has drawn up guidelines for the processing of the seismic analyses. At the same time, the Region has issued a public competition to select seismic engineering experts to carry out the seismic analyses in accordance with the guidelines. The scientific committee has the option of requiring additional documents and studies before approving the safety checks. The committee is supported by a technical and administrative secretariat composed of a group of experts in seismic engineering. At the moment several seismic safety checks have been completed; the results will be presented in this paper. Moreover, the policy set by the Campania Region to mitigate seismic risk was to spend most of the available financial resources on structural strengthening of strategic public buildings rather than on safety checks. A first set of buildings, whose response under seismic action was already known from data and vulnerability studies previously carried out, was selected for immediate retrofitting designs. Subsequently, another set of buildings was identified for structural strengthening. These were selected using the criteria specified in the guidelines prepared by the scientific committee and based on data obtained from the first set of safety checks. The strengthening philosophy adopted in the projects will be described in the paper.

  17. The DD Check App for prevention and control of digital dermatitis in dairy herds.

    PubMed

    Tremblay, Marlène; Bennett, Tom; Döpfer, Dörte

    2016-09-15

    Digital dermatitis (DD) is the most important infectious claw disease in the cattle industry, causing outbreaks of lameness. The clinical course of the disease can be classified using 5 clinical M-stages, which represent not only different disease severities but also unique clinical characteristics and outcomes. Monitoring the proportions of cows per M-stage is needed to better understand and address DD and the factors influencing the risk of DD in a herd. Changes in the proportion of cows per M-stage over time or between groups may be attributed to differences in management, environment, or treatment, and can have an impact on the future claw health of the herd. Yet trends in claw health regarding DD are not intuitively noticed without statistical analysis of detailed records. Our specific aim was to develop a mobile application (app) for persons with little statistical training, experience or supporting programs that would standardize M-stage records and automate data analysis, including trends of M-stages over time, the calculation of predictions, and the assignment of Cow Types (i.e., Cow Types I-III are assigned to cows without active lesions, and with single and repeated cases of active DD lesions, respectively). The predictions were the stationary distributions of transitions between DD states (i.e., M-stages or signs of chronicity) in a class-structured multi-state Markov chain population model commonly used to model endemic diseases. We hypothesized that the app can be used at different levels of record detail to discover significant trends in the prevalence of M-stages that help to make informed decisions to prevent and control DD on-farm. Four data sets were used to test the flexibility and value of the DD Check App. The app allows easy recording of M-stages in different environments and is flexible in terms of the users' goals and the level of detail used. Results show that this tool discovers trends in M-stage proportions, predicts potential outbreaks of DD, and makes comparisons among Cow Types, signs of chronicity, scorers or pens. The DD Check App also provides a list of cows that should be treated, augmented by individual Cow Types to help guide treatment and determine prognoses. Producers can be proactive instead of reactive in controlling DD in a herd by using this app. The DD Check App serves as an example of how technology makes the knowledge and advice of veterinary epidemiology widely available to monitor, control and prevent this complex disease. Copyright © 2016 Elsevier B.V. All rights reserved.
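The prediction step described above, the stationary distribution of a multi-state Markov chain over DD states, can be sketched numerically. The transition matrix and state labels below are hypothetical placeholders, not the app's fitted values:

```python
import numpy as np

# Sketch of the prediction step described above: the long-run proportion
# of cows per DD state is the stationary distribution of a Markov chain
# over the states. The transition matrix below is hypothetical, not the
# app's fitted values. States: healthy, active lesion, chronic sign.

P = np.array([
    [0.90, 0.08, 0.02],   # from healthy
    [0.30, 0.50, 0.20],   # from active lesion
    [0.05, 0.15, 0.80],   # from chronic sign
])

# The stationary distribution pi satisfies pi = pi @ P; take the left
# eigenvector of P for eigenvalue 1 and normalize it to sum to 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

print(pi)   # long-run proportion of cows in each state
```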

  18. "I share, therefore I am": personality traits, life satisfaction, and Facebook check-ins.

    PubMed

    Wang, Shaojung Sharon

    2013-12-01

    This study explored whether agreeableness, extraversion, and openness function to influence self-disclosure behavior, which in turn impacts the intensity of checking in on Facebook. A complete path from extraversion to Facebook check-in through self-disclosure and sharing was found. The indirect effect from sharing to check-in intensity through life satisfaction was particularly salient. The central component of check-in is for users to selectively disclose a specific location that has implications for demonstrating their social lives, lifestyles, and tastes, enabling a selective and optimized self-image. Implications for the hyperpersonal model and the warranting principle are discussed.

  19. Efficient Translation of LTL Formulae into Büchi Automata

    NASA Technical Reports Server (NTRS)

    Giannakopoulou, Dimitra; Lerda, Flavio

    2001-01-01

    Model checking is a fully automated technique for checking that a system satisfies a set of required properties. With explicit-state model checkers, properties are typically defined in linear-time temporal logic (LTL), and are translated into Büchi automata in order to be checked. This report presents how we have combined and improved existing techniques to obtain an efficient LTL to Büchi automata translator. In particular, we optimize the core of existing tableau-based approaches to generate significantly smaller automata. Our approach has been implemented and is being released as part of the Java PathFinder software (JPF), an explicit state model checker under development at the NASA Ames Research Center.

  20. Concrete Model Checking with Abstract Matching and Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Pelanek, Radek; Visser, Willem

    2005-01-01

    We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated, but instead executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition, the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by abstraction. The results of these checks are used to decide termination or to refine the abstraction, by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs. We also show how a lightweight variant can be used for efficient software testing.
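The storing of abstract versions of concrete states can be sketched as an explicit-state search whose visited set holds abstract states only. This is a toy illustration under assumed details (a counter system and a hand-picked abstraction function standing in for abstraction predicates), not the paper's implementation:

```python
from collections import deque

# Minimal sketch of the matching scheme described above: the search
# executes concrete transitions, but the visited set stores abstract
# versions of states, as given by an abstraction function (standing in
# for a set of abstraction predicates). The toy system and abstraction
# below are illustrative, not from the paper.

def search(initial, successors, abstract):
    """Explore concrete transitions, pruning on previously seen abstract states."""
    seen = {abstract(initial)}
    frontier = deque([initial])
    explored = []
    while frontier:
        state = frontier.popleft()
        explored.append(state)
        for nxt in successors(state):
            a = abstract(nxt)
            if a not in seen:          # match against abstract states only
                seen.add(a)
                frontier.append(nxt)
    return explored

# Toy concrete system: a bounded counter that can increment or reset.
succ = lambda n: [n + 1, 0] if n < 100 else [0]
# Abstraction: collapse all values >= 3 into one abstract state.
explored = search(0, succ, abstract=lambda n: min(n, 3))
print(explored)   # [0, 1, 2, 3] -- an under-approximation of 0..100
```

The coarse abstraction prunes most of the concrete state space, which is exactly why the paper's refinement step (adding predicates when precision is lost) is needed.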

  1. Logic Model Checking of Unintended Acceleration Claims in Toyota Vehicles

    NASA Technical Reports Server (NTRS)

    Gamble, Ed

    2012-01-01

    Part of the US Department of Transportation investigation of Toyota sudden unintended acceleration (SUA) involved analysis of the throttle control software. The JPL Laboratory for Reliable Software applied several techniques, including static analysis and logic model checking, to the software. A handful of logic models were built. Some weaknesses were identified; however, no cause for SUA was found. The full NASA report includes numerous other analyses.

  2. Logic Model Checking of Unintended Acceleration Claims in the 2005 Toyota Camry Electronic Throttle Control System

    NASA Technical Reports Server (NTRS)

    Gamble, Ed; Holzmann, Gerard

    2011-01-01

    Part of the US DOT investigation of Toyota SUA involved analysis of the throttle control software. JPL LaRS applied several techniques, including static analysis and logic model checking, to the software. A handful of logic models were built. Some weaknesses were identified; however, no cause for SUA was found. The full NASA report includes numerous other analyses

  3. Clinical evaluation of the use of an intracardiac electrocardiogram to guide the tip positioning of peripherally inserted central catheters.

    PubMed

    Zhao, Ruiyi; Chen, Chunfang; Jin, Jingfen; Sharma, Komal; Jiang, Nan; Shentu, Yingqin; Wang, Xingang

    2016-06-01

    The use of peripherally inserted central catheters (PICCs) provides important central venous access for clinical treatments, tests and monitoring. Compared with traditional methods, the intracardiac electrocardiogram (ECG)-guided method has the potential to guide more accurate tip positioning of PICCs. This study aimed to clinically evaluate the effectiveness of an intracardiac ECG to guide tip positioning by monitoring characteristic P-wave changes. Eligible patients enrolled from September 2011 to May 2012 according to the inclusion and exclusion criteria received catheterization monitored by intracardiac ECG; chest radiography was then performed to check the catheter position. Of the 117 eligible patients, all but one, who died, completed the study (n = 116), including 60 males and 56 females aged 51.2 ± 15.1 years. Most (n = 113, > 97%) had characteristic P-wave changes. The intracardiac ECG-guided positioning procedure achieved correct placement for 112 patients (96.56%), demonstrating 99.12% sensitivity and 100% specificity. In conclusion, intracardiac ECG can be a promising technique to guide tip positioning of PICCs. However, since the sample size in this study is limited, more experience and further study during clinical practice are needed to demonstrate achievement of optimal catheterization outcomes. © 2015 John Wiley & Sons Australia, Ltd.

  4. Isolation of bioactive allelochemicals from sunflower (variety Suncross-42) through fractionation-guided bioassays.

    PubMed

    Anjum, Tehmina; Bajwa, Rukhsana

    2010-11-01

    Plants are a rich source of biologically active allelochemicals. However, natural product discovery is not an easy task. Many problems encountered during this laborious practice can be overcome through the modification of preliminary trials. Bioassay-directed isolation of active plant compounds can increase efficiency by eliminating many of the problems encountered. This strategy avoids unnecessary compounds, concentrating on potential components and thus reducing the cost and time required. In this study, a crude aqueous extract of sunflower leaves was fractionated through high performance liquid chromatography. The isolated fractions were checked against Chenopodium album and Rumex dentatus. The fraction found active against the two selected weeds was re-fractionated, and the active components were checked for their composition. Thin layer chromatography isolated a range of phenolics, whereas the presence of bioactive terpenoids was confirmed through mass spectroscopy and nuclear magnetic resonance spectroscopy.

  5. Model selection and assessment for multi-species occupancy models

    USGS Publications Warehouse

    Broms, Kristin M.; Hooten, Mevin B.; Fitzpatrick, Ryan M.

    2016-01-01

    While multi-species occupancy models (MSOMs) are emerging as a popular method for analyzing biodiversity data, formal checking and validation approaches for this class of models have lagged behind. Concurrent with the rise in application of MSOMs among ecologists, a quiet regime shift is occurring in Bayesian statistics where predictive model comparison approaches are experiencing a resurgence. Unlike single-species occupancy models that use integrated likelihoods, MSOMs are usually couched in a Bayesian framework and contain multiple levels. Standard model checking and selection methods are often unreliable in this setting and there is only limited guidance in the ecological literature for this class of models. We examined several different contemporary Bayesian hierarchical approaches for checking and validating MSOMs and applied these methods to a freshwater aquatic study system in Colorado, USA, to better understand the diversity and distributions of plains fishes. Our findings indicated distinct differences among model selection approaches, with cross-validation techniques performing the best in terms of prediction.
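The finding that cross-validation performed best for predictive model comparison can be illustrated generically. The sketch below uses simple polynomial models on synthetic data as stand-ins for MSOMs; everything in it is assumed purely for illustration of the mechanics:

```python
import numpy as np

# Generic sketch of the approach that performed best in this study:
# compare candidate models by out-of-sample predictive error under
# k-fold cross-validation. The models here (polynomial fits to synthetic
# data) are simple stand-ins for MSOMs, purely to show the mechanics.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.1, x.size)   # truly linear data

def cv_mse(degree: int, k: int = 5) -> float:
    """Mean held-out squared error of a degree-`degree` polynomial fit."""
    folds = np.array_split(rng.permutation(x.size), k)
    errs = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(x.size), test_idx)
        coef = np.polyfit(x[train_idx], y[train_idx], degree)
        pred = np.polyval(coef, x[test_idx])
        errs.append(np.mean((pred - y[test_idx]) ** 2))
    return float(np.mean(errs))

# Lower score = better out-of-sample prediction
scores = {degree: cv_mse(degree) for degree in (1, 6)}
print(scores)
```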

  6. Test Guide for ADS-33E-PRF

    DTIC Science & Technology

    2008-07-01

    8501A (Reference 2), and from the V/STOL specification MIL-F-83300 (Reference 3). ADS-33E-PRF contains intermeshed requirements on not only short- and...While final verification will in most cases require flight testing, initial checks can be performed through analysis and on ground-based simulators...they are difficult to test, or for some reason are deficient in one or more areas. In such cases one or more alternate criteria are presented where

  7. Accounting Clerk Guide, Learner Packet--Part I. A Spec Unit for the 10th, 11th, or 12th Grade. A Career Education Unit (An Edited Developmental Draft).

    ERIC Educational Resources Information Center

    Foster, Brian; And Others

    The learner packet is part of an eight volume unit for grades 10, 11, and 12, designed for individualized progression in preparing students for entry into the occupation of accounting clerk. Intended to be used on an individual basis at the student's own speed, the learner packet contains vocabulary, suggested lesson time, self-check keys, and…

  8. 76 FR 53348 - Airworthiness Directives; BAE SYSTEMS (Operations) Limited Model BAe 146 Airplanes and Model Avro...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-26

    ... Maintenance Manual (AMM) includes chapters 05-10 ``Time Limits'', 05-15 ``Critical Design Configuration... 05, ``Time Limits/Maintenance Checks,'' of BAe 146 Series/AVRO 146-RJ Series Aircraft Maintenance... Chapter 05, ``Time Limits/ Maintenance Checks,'' of the BAE SYSTEMS (Operations) Limited BAe 146 Series...

  9. 78 FR 40063 - Airworthiness Directives; Erickson Air-Crane Incorporated Helicopters (Type Certificate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-03

    ... Sikorsky Model S-64E helicopters. The AD requires repetitive checks of the Blade Inspection Method (BIM... and check procedures for BIM blades installed on the Model S-64F helicopters. Several blade spars with a crack emanating from corrosion pits and other damage have been found because of BIM pressure...

  10. Slicing AADL Specifications for Model Checking

    NASA Technical Reports Server (NTRS)

    Odenbrett, Maximilian; Nguyen, Viet Yen; Noll, Thomas

    2010-01-01

    To combat the state-space explosion problem in model checking larger systems, abstraction techniques can be employed. Here, methods that operate on the system specification before constructing its state space are preferable to those that try to minimize the resulting transition system as they generally reduce peak memory requirements. We sketch a slicing algorithm for system specifications written in (a variant of) the Architecture Analysis and Design Language (AADL). Given a specification and a property to be verified, it automatically removes those parts of the specification that are irrelevant for model checking the property, thus reducing the size of the corresponding transition system. The applicability and effectiveness of our approach is demonstrated by analyzing the state-space reduction for an example, employing a translator from AADL to Promela, the input language of the SPIN model checker.

  11. 75 FR 42585 - Airworthiness Directives; Empresa Brasileira de Aeronautica S.A. (EMBRAER) Model ERJ 170 and ERJ...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-22

    ... (Low Stage Bleed Check Valve) specified in Section 1 of the EMBRAER 170 Maintenance Review Board Report...-11-02-002 (Low Stage Bleed Check Valve), specified in Section 1 of the EMBRAER 170 Maintenance Review... Task 36-11-02-002 (Low Stage Bleed Check Valve) specified in Section 1 of the EMBRAER 170 Maintenance...

  12. 75 FR 9816 - Airworthiness Directives; Empresa Brasileira de Aeronautica S.A. (EMBRAER) Model ERJ 170 and ERJ...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-04

    ... maintenance plan to include repetitive functional tests of the low-stage check valve. For certain other... program to include maintenance Task Number 36-11-02- 002 (Low Stage Bleed Check Valve), specified in... Check Valve) in Section 1 of the EMBRAER 170 Maintenance Review Board Report MRB-1621. Issued in Renton...

  13. 75 FR 39811 - Airworthiness Directives; The Boeing Company Model 777 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-13

    ... Service Bulletin 777-57A0064, dated March 26, 2009, it is not necessary to perform the torque check on the... instructions in Boeing Alert Service Bulletin 777-57A0064, dated March 26, 2009, a torque check is redundant... are less than those for the torque check. Boeing notes that it plans to issue a new revision to this...

  14. User's guide for the Solar Backscattered Ultraviolet (SBUV) instrument first year ozone-S data set

    NASA Technical Reports Server (NTRS)

    Fleig, A. J.; Klenk, K. F.; Bhartia, P. K.; Gordon, D.; Schneider, W. H.

    1982-01-01

    Total-ozone and ozone vertical profile results for Solar Backscattered Ultraviolet/Total Ozone Mapping Spectrometer (SBUV/TOMS) Nimbus 7 operation from November 1978 to November 1979 are available. The algorithms used have been thoroughly tested, the instrument performance has been examined in detail, and the ozone results have been compared with Dobson, Umkehr, balloon, and rocket observations. The accuracy and precision of the satellite ozone data are good to at least within the ability of the ground truth to check, and are self-consistent to within the specifications of the instrument. The 'SBUV User's Guide' describes the SBUV experiment and the algorithms used. Detailed information on the data available on computer tape is provided, including how to order tapes from the National Space Science Data Center.

  15. Checking Dimensionality in Item Response Models with Principal Component Analysis on Standardized Residuals

    ERIC Educational Resources Information Center

    Chou, Yeh-Tai; Wang, Wen-Chung

    2010-01-01

    Dimensionality is an important assumption in item response theory (IRT). Principal component analysis on standardized residuals has been used to check dimensionality, especially under the family of Rasch models. It has been suggested that an eigenvalue greater than 1.5 for the first eigenvalue signifies a violation of unidimensionality when there…
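The check described here can be sketched with NumPy: compute the eigenvalues of the item correlation matrix of standardized residuals and compare the first eigenvalue with the 1.5 rule of thumb. The residual matrix below is random noise standing in for real Rasch residuals:

```python
import numpy as np

# Sketch of the dimensionality check described above: run a principal
# component analysis on standardized residuals and flag a possible
# second dimension when the first eigenvalue exceeds 1.5. The residual
# matrix here is random noise, standing in for real Rasch residuals.

rng = np.random.default_rng(1)
residuals = rng.normal(size=(500, 20))        # persons x items, fake data

# Standardize columns, then take eigenvalues of the item correlation matrix
z = (residuals - residuals.mean(0)) / residuals.std(0)
eigenvalues = np.sort(np.linalg.eigvalsh(np.corrcoef(z.T)))[::-1]

first = float(eigenvalues[0])
print(first, "violation" if first > 1.5 else "no violation")
```

For a correlation matrix the eigenvalues sum to the number of items, so a first eigenvalue well above 1 means one residual contrast is absorbing more variance than chance would suggest.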

  16. Stress analysis of 27% scale model of AH-64 main rotor hub

    NASA Technical Reports Server (NTRS)

    Hodges, R. V.

    1985-01-01

    Stress analysis of an AH-64 27% scale model rotor hub was performed. Component loads and stresses were calculated based upon blade root loads and motions. The static and fatigue analysis indicates positive margins of safety in all components checked. Using the format developed here, the hub can be stress-checked for future applications.

  17. Do alcohol compliance checks decrease underage sales at neighboring establishments?

    PubMed

    Erickson, Darin J; Smolenski, Derek J; Toomey, Traci L; Carlin, Bradley P; Wagenaar, Alexander C

    2013-11-01

    Underage alcohol compliance checks conducted by law enforcement agencies can reduce the likelihood of illegal alcohol sales at checked alcohol establishments, and theory suggests that an alcohol establishment that is checked may warn nearby establishments that compliance checks are being conducted in the area. In this study, we examined whether the effects of compliance checks diffuse to neighboring establishments. We used data from the Complying with the Minimum Drinking Age trial, which included more than 2,000 compliance checks conducted at more than 900 alcohol establishments. The primary outcome was the sale of alcohol to a pseudo-underage buyer without the need for age identification. A multilevel logistic regression was used to model the effect of a compliance check at each establishment as well as the effect of compliance checks at neighboring establishments within 500 m (stratified into four equal-radius concentric rings), after buyer, license, establishment, and community-level variables were controlled for. We observed a decrease in the likelihood of establishments selling alcohol to underage youth after they had been checked by law enforcement, but these effects quickly decayed over time. Establishments that had a close neighbor (within 125 m) checked in the past 90 days were also less likely to sell alcohol to young-appearing buyers. The spatial effect of compliance checks on other establishments decayed rapidly with increasing distance. Results confirm the hypothesis that the effects of police compliance checks do spill over to neighboring establishments. These findings have implications for the development of an optimal schedule of police compliance checks.

  18. Observational analysis on inflammatory reaction to talc pleurodesis: Small and large animal model series review

    PubMed Central

    Vannucci, Jacopo; Bellezza, Guido; Matricardi, Alberto; Moretti, Giulia; Bufalari, Antonello; Cagini, Lucio; Puma, Francesco; Daddi, Niccolò

    2018-01-01

    Talc pleurodesis has been associated with pleuropulmonary damage, particularly long-term damage due to its inert nature. The present model series review aimed to assess the safety of this procedure by examining the inflammatory stimulus, biocompatibility and tissue reaction following talc pleurodesis. Talc slurry was performed in rabbits: 200 mg/kg checked at postoperative day 14 (five models), 200 mg/kg checked at postoperative day 28 (five models), 40 mg/kg checked at postoperative day 14 (five models), and 40 mg/kg checked at postoperative day 28 (five models). Talc poudrage was performed in pigs: 55 mg/kg checked at postoperative day 60 (18 models). Tissue inspection and data collection followed the surgical pathology approach currently used in clinical practice. As this was an observational study, no statistical analysis was performed. In the rabbit model (Oryctolagus cuniculus), the extent of adhesions ranged between 0 and 30% and between 0 and 10% following 14 and 28 days, respectively. No intraparenchymal granuloma was observed, whereas pleural granulomas were extensively encountered following both talc dosages, with more evidence of visceral pleura granulomas following 200 mg/kg compared with 40 mg/kg. Severe florid inflammation was observed in 2/10 cases following 40 mg/kg. Parathymic and pericardium granulomas and mediastinal lymphadenopathy were evidenced at 28 days. At 60 days, findings ranging from rare adhesions to extended pleurodesis were observed in the pig model (Sus scrofa domesticus). Pleural granulomas were ubiquitous on the visceral and parietal pleurae. Severe spotted inflammation among the adhesions was recorded in 15/18 pigs. Intraparenchymal granulomas were observed in 9/18 lungs. Talc produced unpredictable pleurodesis in both animal models with enduring pleural inflammation, whether performed via slurry or poudrage. Furthermore, talc appeared to have triggered extended pleural damage, intraparenchymal nodules (porcine poudrage) and mediastinal migration (rabbit slurry). PMID:29403549

  19. Analyzing Phylogenetic Trees with Timed and Probabilistic Model Checking: The Lactose Persistence Case Study.

    PubMed

    Requeno, José Ignacio; Colom, José Manuel

    2014-12-01

    Model checking is a generic verification technique that allows the phylogeneticist to focus on models and specifications instead of on implementation issues. Phylogenetic trees are considered as transition systems over which we interrogate phylogenetic questions written as formulas of temporal logic. Nonetheless, standard logics become insufficient for certain practices of phylogenetic analysis since they do not allow the inclusion of explicit time and probabilities. The aim of this paper is to extend the application of model checking techniques beyond qualitative phylogenetic properties and adapt the existing logical extensions and tools to the field of phylogeny. The introduction of time and probabilities in phylogenetic specifications is motivated by the study of a real example: the analysis of the ratio of lactose intolerance in some populations and the date of appearance of this phenotype.
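A minimal example of the quantitative reachability queries that probabilistic model checking supports: solving a linear system for the probability of eventually reaching an absorbing state in a discrete-time Markov chain. The chain below is invented for illustration and is not the model fitted in the paper:

```python
import numpy as np

# Toy sketch of the kind of quantitative query this extension enables,
# e.g. "with what probability does a lineage eventually acquire the
# persistence trait?". The 4-state discrete-time Markov chain below is
# made up for illustration; it is not the paper's model.

# States: 0 = ancestral, 1 = intermediate,
#         2 = persistent (absorbing goal), 3 = lost (absorbing)
P = np.array([
    [0.6, 0.3, 0.0, 0.1],
    [0.1, 0.5, 0.3, 0.1],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])

# Reachability probabilities x for the transient states satisfy
# x = Q x + b, where Q restricts P to the transient states and b holds
# the one-step probabilities into the goal; hence solve (I - Q) x = b.
transient, goal = [0, 1], 2
Q = P[np.ix_(transient, transient)]
b = P[transient, goal]
x = np.linalg.solve(np.eye(len(transient)) - Q, b)
print(x)   # P(eventually reach goal) from states 0 and 1
```

This is the same computation probabilistic model checkers perform for a PCTL "eventually" property on a finite DTMC.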

  20. Analyzing phylogenetic trees with timed and probabilistic model checking: the lactose persistence case study.

    PubMed

    Requeno, José Ignacio; Colom, José Manuel

    2014-10-23

    Model checking is a generic verification technique that allows the phylogeneticist to focus on models and specifications instead of on implementation issues. Phylogenetic trees are considered as transition systems over which we interrogate phylogenetic questions written as formulas of temporal logic. Nonetheless, standard logics become insufficient for certain practices of phylogenetic analysis since they do not allow the inclusion of explicit time and probabilities. The aim of this paper is to extend the application of model checking techniques beyond qualitative phylogenetic properties and adapt the existing logical extensions and tools to the field of phylogeny. The introduction of time and probabilities in phylogenetic specifications is motivated by the study of a real example: the analysis of the ratio of lactose intolerance in some populations and the date of appearance of this phenotype.

  1. Model Checking for Verification of Interactive Health IT Systems

    PubMed Central

    Butler, Keith A.; Mercer, Eric; Bahrami, Ali; Tao, Cui

    2015-01-01

    Rigorous methods for the design and verification of health IT systems have lagged far behind their proliferation. The inherent technical complexity of healthcare, combined with the added complexity of health information technology, makes their resulting behavior unpredictable and introduces serious risk. We propose to mitigate this risk by formalizing the relationship between HIT and the conceptual work that increasingly typifies modern care. We introduce new techniques for modeling clinical workflows and the conceptual products within them that allow established, powerful model checking technology to be applied to interactive health IT systems. The new capability can evaluate the workflows of a new HIT system performed by clinicians and computers to improve safety and reliability. We demonstrate the method on a patient contact system, showing that model checking is effective for interactive systems and that much of it can be automated. PMID:26958166

  2. High-level neutron coincidence counter maintenance manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swansen, J.; Collinsworth, P.

    1983-05-01

    High-level neutron coincidence counter operational (field) calibration and usage is well known. This manual makes explicit basic (shop) check-out, calibration, and testing of new units and is a guide for repair of failed in-service units. Operational criteria for the major electronic functions are detailed, as are adjustments and calibration procedures, and recurrent mechanical/electromechanical problems are addressed. Some system tests are included for quality assurance. Data on nonstandard large-scale integrated (circuit) components and a schematic set are also included.

  3. Training for the Infantry Fighting Vehicle-M2: An Evaluation of the 11M10 Course

    DTIC Science & Technology

    1981-10-30

    selected partially because of the evaluator’s familiarity with the procedure, but primarily because of the short lead time for the IFV evaluation. Only...right. Hence the majority of comments are negative and may give a distorted picture of a class. In almost all classes, more things were right than wrong...format guide to assure all areas were covered rather than as a traditional rating sheet. Negative checks were used to generate notes and comments which

  4. Rules for Optical Testing

    NASA Technical Reports Server (NTRS)

    Stahl, H. Philip

    2014-01-01

    Based on 30 years of optical testing experience, many mistakes, and much learning, I have defined seven guiding principles for optical testing, regardless of how small or how large the optical testing or metrology task: Fully Understand the Task, Develop an Error Budget, Continuous Metrology Coverage, Know where you are, Test like you fly, Independent Cross-Checks, Understand All Anomalies. These rules have been applied with great success to the in-process optical testing and final specification compliance testing of the JWST mirrors.

  5. Skylight book. Capturing the Sun and the Moon: a guide to creating natural light

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burns, A.

    1976-01-01

    The following topics are covered: planning; essential tools: hand and power; safety hints; curb installation; plexiglas or plate glass skylight; the plexiglas box skylight; tips on working with plexiglas; checking for leaks; framing the shaftway; electric work; shaftwall insulation; covering the shaftway with drywall; other kinds of wall coverings; internal storm windows; plants under your skylight; skylight manufacturers; and places to buy things. There are 38 pages of pictures of the use of skylights. (MHR)

  6. Model criticism based on likelihood-free inference, with an application to protein network evolution.

    PubMed

    Ratmann, Oliver; Andrieu, Christophe; Wiuf, Carsten; Richardson, Sylvia

    2009-06-30

    Mathematical models are an important tool to explain and comprehend complex phenomena, and unparalleled computational advances enable us to explore them easily, even with little or no understanding of their global properties. In fact, the likelihood of the data under complex stochastic models is often analytically or numerically intractable in many areas of science. This makes it even more important to investigate the adequacy of these models in absolute terms (against the data, rather than relative to the performance of other models), but no such procedure has been formally discussed when the likelihood is intractable. We provide a statistical interpretation to current developments in likelihood-free Bayesian inference that explicitly accounts for discrepancies between the model and the data, termed Approximate Bayesian Computation under model uncertainty (ABCμ). We augment the likelihood of the data with unknown error terms that correspond to freely chosen checking functions, and provide Monte Carlo strategies for sampling from the associated joint posterior distribution without the need to evaluate the likelihood. We discuss the benefit of incorporating model diagnostics within an ABC framework, and demonstrate how this method diagnoses model mismatch and guides model refinement by contrasting three qualitative models of protein network evolution to the protein interaction datasets of Helicobacter pylori and Treponema pallidum. Our results make a number of model deficiencies explicit, and suggest that the T. pallidum network topology is inconsistent with evolution dominated by link turnover or lateral gene transfer alone.
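As a rough illustration of the likelihood-free setting, the following sketch implements plain rejection ABC with an explicit discrepancy term. This is a simplified stand-in, not the ABCμ procedure of the paper, and every name and parameter in it is made up for the example; the point is only that each accepted draw carries its discrepancy, which can double as a crude adequacy diagnostic.

```python
import random

# Minimal rejection-ABC sketch (illustrative only). Accepted draws keep the
# discrepancy between simulated and observed summaries.

def abc_rejection(obs_summary, prior, simulate, summary, eps, n_draws=10_000):
    accepted = []  # (parameter, discrepancy) pairs
    for _ in range(n_draws):
        theta = prior()
        err = abs(summary(simulate(theta)) - obs_summary)
        if err < eps:
            accepted.append((theta, err))
    return accepted

# Toy example: infer the mean of a normal distribution with known sd = 1.
random.seed(1)
observed = [random.gauss(2.0, 1.0) for _ in range(50)]
mean = lambda xs: sum(xs) / len(xs)
post = abc_rejection(
    obs_summary=mean(observed),
    prior=lambda: random.uniform(-5, 5),
    simulate=lambda th: [random.gauss(th, 1.0) for _ in range(50)],
    summary=mean,
    eps=0.2,
)
```

Inspecting the retained discrepancies (rather than discarding them) is the step that turns plain ABC toward model criticism: systematically large errors for every parameter value signal model mismatch.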

  7. Verification and Planning Based on Coinductive Logic Programming

    NASA Technical Reports Server (NTRS)

    Bansal, Ajay; Min, Richard; Simon, Luke; Mallya, Ajay; Gupta, Gopal

    2008-01-01

    Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations [6]. Where induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. Recently coinduction has been incorporated into logic programming and an elegant operational semantics developed for it [11, 12]. This operational semantics is the greatest fixed point counterpart of SLD resolution (SLD resolution imparts operational semantics to least fixed point based computations) and is termed co-SLD resolution. In co-SLD resolution, a predicate goal p(t) succeeds if it unifies with one of its ancestor calls. In addition, rational infinite terms are allowed as arguments of predicates. Infinite terms are represented as solutions to unification equations, and the occurs check is omitted during the unification process. Coinductive Logic Programming (Co-LP) and co-SLD resolution can be used to elegantly perform model checking and planning. A combined SLD and co-SLD resolution based LP system forms the common basis for planning, scheduling, verification, model checking, and constraint solving [9, 4]. This is achieved by amalgamating SLD resolution, co-SLD resolution, and constraint logic programming [13] in a single logic programming system. Given that parallelism in logic programs can be implicitly exploited [8], complex, compute-intensive applications (planning, scheduling, model checking, etc.) can be executed in parallel on multi-core machines. Parallel execution can result in speed-ups as well as in larger instances of the problems being solved. In the remainder we elaborate on (i) how planning can be elegantly and efficiently performed under real-time constraints, (ii) how real-time systems can be elegantly and efficiently model-checked, as well as (iii) how hybrid systems can be verified in a combined system with both co-SLD and SLD resolution.
Implementations of co-SLD resolution as well as preliminary implementations of the planning and verification applications have been developed [4]. Co-LP and Model Checking: The vast majority of properties to be verified can be classified into safety properties and liveness properties. It is well known within model checking that safety properties can be verified by reachability analysis, i.e., if a counterexample to the property exists, it can be finitely determined by enumerating all the reachable states of the Kripke structure.
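The reachability analysis described in the last sentence can be sketched directly: a safety property ("bad states are never reached") is checked by enumerating the reachable states of a finite Kripke structure, and any reachable bad state yields a finite counterexample. The structure and names below are illustrative, not from the cited system.

```python
from collections import deque

def check_safety(initial, successors, is_bad):
    """Return None if safe, else a path (counterexample) to a bad state."""
    parent = {initial: None}
    queue = deque([initial])
    while queue:
        s = queue.popleft()
        if is_bad(s):
            path = []
            while s is not None:          # rebuild the path via parents
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for t in successors(s):
            if t not in parent:
                parent[t] = s
                queue.append(t)
    return None  # all reachable states are good: the property holds

# Toy check: states are (pc0, pc1); "bad" = both processes critical at once.
succ = lambda s: [(1 - s[0], s[1]), (s[0], 1 - s[1])]
print(check_safety((0, 0), succ, lambda s: s == (1, 1)))
# prints [(0, 0), (1, 0), (1, 1)]: a shortest counterexample path
```

Because breadth-first search visits states in order of distance, the counterexample it returns is a shortest one, which is what makes safety violations finitely observable.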

  8. A theory-informed approach to developing visually mediated interventions to change behaviour using an asthma and physical activity intervention exemplar.

    PubMed

    Murray, Jennifer; Williams, Brian; Hoskins, Gaylor; Skar, Silje; McGhee, John; Treweek, Shaun; Sniehotta, Falko F; Sheikh, Aziz; Brown, Gordon; Hagen, Suzanne; Cameron, Linda; Jones, Claire; Gauld, Dylan

    2016-01-01

    Visualisation techniques are used in a range of healthcare interventions. However, these frequently lack a coherent rationale or clear theoretical basis. This lack of definition and explicit targeting of the underlying mechanisms may impede both the success and the evaluation of the intervention. We describe the theoretical development, deployment, and pilot evaluation of a complex visually mediated behavioural intervention. The exemplar intervention focused on increasing physical activity among young people with asthma. We employed an explicit five-stage development model, which was actively supported by a consultative user group. The developmental stages involved establishing the theoretical basis, establishing a narrative structure, visual rendering, checking interpretation, and pilot testing. We conducted in-depth interviews and focus groups during early development and checking, followed by an online experiment for pilot testing. A total of 91 individuals, including young people with asthma, parents, teachers, and health professionals, were involved in development and testing. Our final intervention consisted of two components: (1) an interactive 3D computer animation to create intentions and (2) an action plan and volitional help sheet to promote the translation of intentions to behaviour. Theory was mediated throughout by visual and audio forms. The intervention was regarded as highly acceptable, engaging, and meaningful by all stakeholders. The perceived impact on asthma understanding and intentions was reported positively, with most individuals saying that the 3D computer animation had either clarified a range of issues or made them more real. Our five-stage model, underpinned by extensive consultation, worked well and is presented as a framework to support explicit decision-making for others developing theory-informed, visually mediated interventions. We have demonstrated the ability to develop theory-based visually mediated behavioural interventions. 
However, attention needs to be paid to the potential ambiguity associated with images and thus the concept of visual literacy among patients. Our revised model may be helpful as a guide to aid development, acceptability, and ultimately effectiveness.

  9. Bayesian model checking: A comparison of tests

    NASA Astrophysics Data System (ADS)

    Lucy, L. B.

    2018-06-01

    Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
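A posterior predictive p-value of the kind compared above can be estimated by simple Monte Carlo: draw parameters from the posterior, simulate replicate datasets, and count how often a discrepancy statistic on the replicates exceeds the same statistic on the observed data. The model below (normal data with known standard deviation and a flat prior, so the posterior of the mean is available in closed form) is a toy chosen purely for illustration and is unrelated to the Hubble-expansion test problem of the paper.

```python
import random, statistics

random.seed(0)
sd, n = 1.0, 30
data = [random.gauss(0.0, sd) for _ in range(n)]
xbar = statistics.mean(data)

def discrepancy(xs, mu):
    return sum((x - mu) ** 2 for x in xs)  # chi-square-like statistic

hits, draws = 0, 2000
for _ in range(draws):
    mu = random.gauss(xbar, sd / n ** 0.5)          # posterior draw of the mean
    rep = [random.gauss(mu, sd) for _ in range(n)]  # replicate dataset
    hits += discrepancy(rep, mu) >= discrepancy(data, mu)
p_ppp = hits / draws
print(p_ppp)  # typically near 0.5 when the model fits the data
```

Extreme values of `p_ppp` (near 0 or 1) flag a discrepancy the model cannot reproduce; the paper's point is that a global goodness-of-fit criterion can approximate this quantity without the replicate simulations.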

  10. Secure open cloud in data transmission using reference pattern and identity with enhanced remote privacy checking

    NASA Astrophysics Data System (ADS)

    Vijay Singh, Ran; Agilandeeswari, L.

    2017-11-01

    To handle the large amount of client data in the open cloud, many security issues need to be addressed. A client's private data should not be visible to other group members without the data owner's permission. Sometimes clients are also prevented from accessing open cloud servers by various restrictions. To overcome these security issues and restrictions related to storage, data sharing in an inter-domain network, and privacy checking, we propose a model based on identity-based cryptography for data transmission, an intermediate entity that holds the client's reference and identity and controls data transmission in an open cloud environment, and an extended remote privacy-checking technique that operates on the admin side. On behalf of the data owner, the proposed model allows secure cryptographic data transmission and remote privacy checking configured as private, public, or instructed. The hardness of the computational Diffie-Hellman assumption underlying the key exchange makes the proposed model more secure than existing models used in public cloud environments.
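The computational Diffie-Hellman assumption mentioned above underlies the classic key exchange, which can be sketched in a few lines. The parameters below are toys, far too small for real security, and the sketch says nothing about the paper's specific identity-based scheme; it only shows the exchange whose hardness assumption is being invoked.

```python
import secrets

# Toy Diffie-Hellman exchange (illustrative parameters only).
p = 0xFFFFFFFB  # the prime 2**32 - 5; real deployments use far larger groups
g = 5

a = secrets.randbelow(p - 2) + 1  # client's secret exponent
b = secrets.randbelow(p - 2) + 1  # server's secret exponent
A = pow(g, a, p)                  # client sends A over the open channel
B = pow(g, b, p)                  # server sends B over the open channel
k_client = pow(B, a, p)           # client computes g**(ab) mod p
k_server = pow(A, b, p)           # server computes the same value
assert k_client == k_server       # shared key, without transmitting a or b
```

Recovering the shared key from `A` and `B` alone is exactly the computational Diffie-Hellman problem, presumed hard for suitably large groups.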

  11. Spot-checks to measure general hygiene practice.

    PubMed

    Sonego, Ina L; Mosler, Hans-Joachim

    2016-01-01

    A variety of hygiene behaviors are fundamental to the prevention of diarrhea. We used spot-checks in a survey of 761 households in Burundi to examine whether something we could call general hygiene practice is responsible for more specific hygiene behaviors, ranging from handwashing to sweeping the floor. Using structural equation modeling, we showed that clusters of hygiene behavior, such as primary caregivers' cleanliness and household cleanliness, explained the spot-check findings well. Within our model, general hygiene practice as an overall concept explained the more specific clusters of hygiene behavior well. Furthermore, the higher the general hygiene practice, the more likely children were to be categorized healthy (r = 0.46). General hygiene practice was correlated with commitment to hygiene (r = 0.52), indicating a strong association with psychosocial determinants. The results show that different hygiene behaviors co-occur regularly. Using spot-checks, the general hygiene practice of a household can be rated quickly and easily.

  12. A systematic composite service design modeling method using graph-based theory.

    PubMed

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    Composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy, as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using graph-based theory to facilitate design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate its complexity and reusability. This also guides future research towards design quality measurement, such as using the ComSDM method to measure the quality of composite service design in a service-oriented software system.

  13. Design Principles as a Guide for Constraint Based and Dynamic Modeling: Towards an Integrative Workflow.

    PubMed

    Sehr, Christiana; Kremling, Andreas; Marin-Sanguino, Alberto

    2015-10-16

    During the last 10 years, systems biology has matured from a fuzzy concept combining omics, mathematical modeling and computers into a scientific field in its own right. In spite of its incredible potential, the multilevel complexity of its objects of study makes it very difficult to establish a reliable connection between data and models. The great number of degrees of freedom often results in situations where many different models can explain/fit all available datasets. This has resulted in a shift of paradigm from the initially dominant, maybe naive, idea of inferring the system out of a number of datasets to the application of different techniques that reduce the degrees of freedom before any data set is analyzed. There is a wide variety of techniques available, each of which can contribute a piece of the puzzle and include different kinds of experimental information. But the challenge that remains is their meaningful integration. Here we show some theoretical results that enable some of the main modeling approaches to be applied sequentially in a complementary manner, and how this workflow can benefit from evolutionary reasoning to keep the complexity of the problem in check. As a proof of concept, we show how the synergies between these modeling techniques can provide insight into some well-studied problems: ammonia assimilation in bacteria and an unbranched linear pathway with end-product inhibition.

  14. A Systematic Composite Service Design Modeling Method Using Graph-Based Theory

    PubMed Central

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    Composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy, as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using graph-based theory to facilitate design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate its complexity and reusability. This also guides future research towards design quality measurement, such as using the ComSDM method to measure the quality of composite service design in a service-oriented software system. PMID:25928358

  15. Crystallographic Mapping of Guided Nanowires by Second Harmonic Generation Polarimetry

    PubMed Central

    2017-01-01

    The growth of horizontal nanowires (NWs) guided by epitaxial and graphoepitaxial relations with the substrate is becoming increasingly attractive owing to the possibility of controlling their position, direction, and crystallographic orientation. In guided NWs, as opposed to the extensively characterized vertically grown NWs, there is an increasing need for understanding the relation between structure and properties, specifically the role of the epitaxial relation with the substrate. Furthermore, the uniformity of crystallographic orientation along guided NWs and over the substrate has yet to be checked. Here we perform highly sensitive second harmonic generation (SHG) polarimetry of polar and nonpolar guided ZnO NWs grown on R-plane and M-plane sapphire. We optically map large areas on the substrate in a nondestructive way and find that the crystallographic orientations of the guided NWs are highly selective and specific for each growth direction with respect to the substrate lattice. In addition, we perform SHG polarimetry along individual NWs and find that the crystallographic orientation is preserved along the NW in both polar and nonpolar NWs. While polar NWs show highly uniform SHG along their axis, nonpolar NWs show a significant change in the local nonlinear susceptibility along a few micrometers, reflected in a reduction of 40% in the ratio of the SHG along different crystal axes. We suggest that these differences may be related to strain accumulation along the nonpolar wires. We find SHG polarimetry to be a powerful tool to study both selectivity and uniformity of crystallographic orientations of guided NWs with different epitaxial relations. PMID:28094977

  16. Crystallographic Mapping of Guided Nanowires by Second Harmonic Generation Polarimetry.

    PubMed

    Neeman, Lior; Ben-Zvi, Regev; Rechav, Katya; Popovitz-Biro, Ronit; Oron, Dan; Joselevich, Ernesto

    2017-02-08

    The growth of horizontal nanowires (NWs) guided by epitaxial and graphoepitaxial relations with the substrate is becoming increasingly attractive owing to the possibility of controlling their position, direction, and crystallographic orientation. In guided NWs, as opposed to the extensively characterized vertically grown NWs, there is an increasing need for understanding the relation between structure and properties, specifically the role of the epitaxial relation with the substrate. Furthermore, the uniformity of crystallographic orientation along guided NWs and over the substrate has yet to be checked. Here we perform highly sensitive second harmonic generation (SHG) polarimetry of polar and nonpolar guided ZnO NWs grown on R-plane and M-plane sapphire. We optically map large areas on the substrate in a nondestructive way and find that the crystallographic orientations of the guided NWs are highly selective and specific for each growth direction with respect to the substrate lattice. In addition, we perform SHG polarimetry along individual NWs and find that the crystallographic orientation is preserved along the NW in both polar and nonpolar NWs. While polar NWs show highly uniform SHG along their axis, nonpolar NWs show a significant change in the local nonlinear susceptibility along a few micrometers, reflected in a reduction of 40% in the ratio of the SHG along different crystal axes. We suggest that these differences may be related to strain accumulation along the nonpolar wires. We find SHG polarimetry to be a powerful tool to study both selectivity and uniformity of crystallographic orientations of guided NWs with different epitaxial relations.

  17. You are lost without a map: Navigating the sea of protein structures.

    PubMed

    Lamb, Audrey L; Kappock, T Joseph; Silvaggi, Nicholas R

    2015-04-01

    X-ray crystal structures propel biochemistry research like no other experimental method, since they answer many questions directly and inspire new hypotheses. Unfortunately, many users of crystallographic models mistake them for actual experimental data. Crystallographic models are interpretations, several steps removed from the experimental measurements, making it difficult for nonspecialists to assess the quality of the underlying data. Crystallographers mainly rely on "global" measures of data and model quality to build models. Robust validation procedures based on global measures now ensure that structures in the Protein Data Bank (PDB) are largely correct. However, global measures do not allow users of crystallographic models to judge the reliability of "local" features in a region of interest. Refinement of a model to fit an electron density map requires interpretation of the data to produce a single "best" overall model. This process requires inclusion of the most probable conformations in areas of poor density. Users who misunderstand this can be misled, especially in regions of the structure that are mobile, including active sites, surface residues, and especially ligands. This article aims to equip users of macromolecular models with tools to critically assess local model quality. Structure users should always check the agreement of the electron density map and the derived model in all areas of interest, even if the global statistics are good. We provide illustrated examples of interpreted electron density as a guide for those unaccustomed to viewing electron density. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. XMI2USE: A Tool for Transforming XMI to USE Specifications

    NASA Astrophysics Data System (ADS)

    Sun, Wuliang; Song, Eunjee; Grabow, Paul C.; Simmonds, Devon M.

    The UML-based Specification Environment (USE) tool supports syntactic analysis, type checking, consistency checking, and dynamic validation of invariants and pre-/post conditions specified in the Object Constraint Language (OCL). Due to its animation and analysis power, it is useful when checking critical non-functional properties such as security policies. However, the USE tool requires one to specify (i.e., "write") a model using its own textual language and does not allow one to import any model specification files created by other UML modeling tools. Hence, to make the best use of existing UML tools, we often create a model with OCL constraints using a modeling tool such as the IBM Rational Software Architect (RSA) and then use the USE tool for model validation. This approach, however, requires a manual transformation between the specifications of two different tool formats, which is error-prone and diminishes the benefit of automated model-level validations. In this paper, we describe our own implementation of a specification transformation engine that is based on the Model Driven Architecture (MDA) framework and currently supports automatic tool-level transformations from RSA to USE.

  19. Evaluation of properties over phylogenetic trees using stochastic logics.

    PubMed

    Requeno, José Ignacio; Colom, José Manuel

    2016-06-14

    Model checking has recently been introduced as an integrated framework for extracting information from phylogenetic trees using temporal logics as a querying language, an extension of modal logics that imposes restrictions on a boolean formula along a path of events. The phylogenetic tree is considered a transition system modeling evolution as a sequence of genomic mutations (we understand mutation as the different ways that DNA can be changed), while this kind of logic is suitable for traversing it in a strict and exhaustive way. Given a biological property that we wish to inspect over the phylogeny, the verifier returns true if the specification is satisfied, or a counterexample that falsifies it. However, this approach has only been applied to qualitative aspects of the phylogeny. In this paper, we remove the limitations of the previous framework by including and handling quantitative information such as explicit time or probability. To this end, we apply current probabilistic continuous-time extensions of model checking to phylogenetics. We reinterpret a catalog of qualitative properties in a numerical way, and we also present new properties that could not be analyzed before. For instance, we obtain the likelihood of a tree topology according to a mutation model. As a case study, we analyze several phylogenies in order to obtain the maximum likelihood with the model checking tool PRISM. In addition, we have adapted the software to optimize the computation of maximum likelihoods. We have shown that probabilistic model checking is a competitive framework for describing and analyzing quantitative properties over phylogenetic trees. This formalism adds soundness and readability to the definition of models and specifications. Besides, the existence of model checking tools hides the underlying technology, sparing biologists the extension, upgrade, debugging and maintenance of a software tool. 
A set of benchmarks justifies the feasibility of our approach.
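The likelihood of a tree topology under a mutation model, which the paper computes with PRISM, is classically obtained by Felsenstein's pruning recursion. The sketch below uses a two-state symmetric model with a made-up tree and branch lengths, purely to illustrate the recursion a probabilistic model checker encodes.

```python
from math import exp

def p_trans(i, j, t, rate=1.0):
    """Two-state symmetric model: transition probability over branch length t."""
    same = 0.5 + 0.5 * exp(-2 * rate * t)
    return same if i == j else 1 - same

def partial(node, tree, tips):
    """Return [L(state=0), L(state=1)] for the subtree rooted at `node`."""
    if node in tips:  # leaf: likelihood 1 for the observed state, 0 otherwise
        return [1.0 if s == tips[node] else 0.0 for s in (0, 1)]
    like = [1.0, 1.0]
    for child, t in tree[node]:
        cl = partial(child, tree, tips)
        for s in (0, 1):
            like[s] *= sum(p_trans(s, c, t) * cl[c] for c in (0, 1))
    return like

# Illustrative tree: (A:0.1, (B:0.1, C:0.1)x:0.2)root, with binary characters.
tree = {"root": [("A", 0.1), ("x", 0.2)], "x": [("B", 0.1), ("C", 0.1)]}
tips = {"A": 0, "B": 0, "C": 1}
lik = sum(0.5 * l for l in partial("root", tree, tips))  # uniform root prior
```

In the model-checking formulation, the same quantity arises as the probability of reaching the observed tip labeling in a continuous-time Markov model of the tree.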

  20. The inherent catastrophic traps in retrograde CTO PCI.

    PubMed

    Wu, Eugene B; Tsuchikane, Etsuo

    2018-05-01

    When we learn to drive, our driving instructor tells us how to check the side mirror and turn our head to check the blind spot before changing lanes. He tells us how to stop at stop signs, how to drive in slippery conditions, and the safe stopping distances, and these all make our driving safe. Similarly, when we learn PCI, our mentors teach us to seat the guiding catheter co-axially, to wire the vessel safely, to deliver balloons and stents over the wire, and to watch the pressure of the guiding catheter, so that we perform PCI safely and evade complications. In retrograde CTO PCI, there is no such published teaching. Also, many individual mentors have not had wide enough experience to see all the possible complications of retrograde CTO PCI and, therefore, may not be able to warn their apprentices. As the number of retrograde procedures increases worldwide, there is a corresponding increase in catastrophic complications, many of which, we as experts can see, are easily avoidable. To bridge this gap in knowledge, this article describes 12 commonly met inherent traps in retrograde CTO PCI. They are inherent because, by arranging our equipment in the manner required to perform retrograde CTO PCI, these complications are either induced directly or happen easily. We hope this work will enhance the safety of retrograde CTO PCI and help our readers and operators avoid many catastrophic complications. © 2017 Wiley Periodicals, Inc.

  1. A likelihood-based biostatistical model for analyzing consumer movement in simultaneous choice experiments.

    PubMed

    Zeilinger, Adam R; Olson, Dawn M; Andow, David A

    2014-08-01

    Consumer feeding preference among resource choices has critical implications for basic ecological and evolutionary processes, and can be highly relevant to applied problems such as ecological risk assessment and invasion biology. Within consumer choice experiments, also known as feeding preference or cafeteria experiments, measures of relative consumption and measures of consumer movement can provide distinct and complementary insights into the strength, causes, and consequences of preference. Despite the distinct value of inferring preference from measures of consumer movement, rigorous and biologically relevant analytical methods are lacking. We describe a simple, likelihood-based, biostatistical model for analyzing the transient dynamics of consumer movement in a paired-choice experiment. With experimental data consisting of repeated discrete measures of consumer location, the model can be used to estimate constant consumer attraction and leaving rates for two food choices, and differences in choice-specific attraction and leaving rates can be tested using model selection. The model enables calculation of transient and equilibrial probabilities of consumer-resource association, which could be incorporated into larger scale movement models. We explore the effect of experimental design on parameter estimation through stochastic simulation and describe methods to check that data meet model assumptions. Using a dataset of modest sample size, we illustrate the use of the model to draw inferences on consumer preference as well as underlying behavioral mechanisms. Finally, we include a user's guide and computer code scripts in R to facilitate use of the model by other researchers.
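The transient dynamics described above can be illustrated with a toy version of such a movement model: a consumer moves among a neutral state and two food choices with constant attraction rates and leaving rates, and the master equation for the state probabilities is integrated with small Euler steps. All rates, names, and the step size are invented for illustration; the paper's actual model is fit by maximum likelihood to repeated discrete observations of consumer location.

```python
def transient_probs(p1, p2, mu1, mu2, t_end, dt=1e-3):
    """Probabilities of (neutral, choice 1, choice 2) at time t_end."""
    pn, pc1, pc2 = 1.0, 0.0, 0.0  # consumer starts in the neutral state
    for _ in range(int(t_end / dt)):
        dpc1 = p1 * pn - mu1 * pc1  # attraction in, leaving out (choice 1)
        dpc2 = p2 * pn - mu2 * pc2  # attraction in, leaving out (choice 2)
        pc1 += dpc1 * dt
        pc2 += dpc2 * dt
        pn = 1.0 - pc1 - pc2        # probabilities must sum to one
    return pn, pc1, pc2

# Choice 1 is twice as attractive; leaving rates are equal.
pn, pc1, pc2 = transient_probs(p1=2.0, p2=1.0, mu1=0.5, mu2=0.5, t_end=10.0)
# With equal leaving rates, long-run association tracks the attraction ratio:
# here the equilibrium is (1/7, 4/7, 2/7).
```

Evaluating these probabilities at the observation times is what turns repeated spot observations of consumer location into a likelihood for the attraction and leaving rates.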

  2. Model Checking Verification and Validation at JPL and the NASA Fairmont IV and V Facility

    NASA Technical Reports Server (NTRS)

    Schneider, Frank; Easterbrook, Steve; Callahan, Jack; Montgomery, Todd

    1999-01-01

    We show how a technology transfer effort was carried out. The successful use of model checking on a pilot JPL flight project demonstrates the usefulness and the efficacy of the approach. The pilot project was used to model a complex spacecraft controller. Software design and implementation validation were carried out successfully. To suggest future applications we also show how the implementation validation step can be automated. The effort was followed by the formal introduction of the modeling technique as a part of the JPL Quality Assurance process.

  3. Designing and evaluating a persuasive child restraint television commercial.

    PubMed

    Lewis, Ioni; Ho, Bonnie; Lennon, Alexia

    2016-01-01

    Relatively high rates of inappropriate use and misuse of child restraints, and of faults in their installation, suggest a crucial need for public education messages to raise parental awareness of the need to use restraints correctly. This project involved devising and pilot testing message concepts, filming a television advertisement (the TVC), and evaluating the TVC. This article focuses specifically on the evaluation of the TVC. The development and evaluation of the TVC were guided by an extended theory of planned behavior that included the standard constructs of attitudes, subjective norms, and perceived behavioral control as well as the additional constructs of group norms and descriptive norms. The study also explored the extent to which parents with low and high intentions to self-check restraints differed on salient beliefs regarding the behavior. An online survey of parents (N = 384) was conducted in which parents were randomly assigned to either the intervention group (n = 161), which viewed the advertisement within the survey, or the control group (n = 223), which did not. Following a one-off exposure to the TVC, parents in the intervention group reported stronger, though not significantly different, intentions (M = 4.43, SD = 0.74) to self-check restraints than parents in the control group (M = 4.18, SD = 0.86). In addition, parents in the intervention group (M = 4.59, SD = 0.47) reported significantly higher levels of perceived behavioral control than parents in the control group (M = 4.40, SD = 0.73). The regression results revealed that, for parents in the intervention group, attitudes and group norms were significant predictors of parental intentions to self-check their child restraint. Finally, the exploratory analyses of parental beliefs suggested that parents with low intentions to self-check child restraints were significantly more likely than high intenders to agree that they did not have enough time to check restraints or that having a child in a restraint is more important than checking its installation. Overall, the findings provide some support for the persuasiveness of the child restraint TVC and give insight into the factors influencing reported parental intentions, as well as the salient beliefs underpinning self-checking of restraints. Interventions that attempt to increase parental perceptions of the importance of self-checking restraints regularly, and of the brevity of the time involved in doing so, may be effective.

  4. Philosophy and the practice of Bayesian statistics

    PubMed Central

    Gelman, Andrew; Shalizi, Cosma Rohilla

    2015-01-01

    A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science. Clarity about these matters should benefit not just philosophy of science, but also statistical practice. At best, the inductivist view has encouraged researchers to fit and compare models without checking them; at worst, theorists have actively discouraged practitioners from performing model checking because it does not fit into their framework. PMID:22364575

  6. A Model-Driven Approach for Telecommunications Network Services Definition

    NASA Astrophysics Data System (ADS)

    Chiprianov, Vanea; Kermarrec, Yvon; Alff, Patrick D.

    The present-day telecommunications market imposes a short concept-to-market time on service providers. To reduce it, we propose a computer-aided, model-driven, service-specific tool with support for collaborative work and for checking properties on models. We started by defining a prototype of the meta-model (MM) of the service domain. Using this prototype, we defined a simple graphical modeling language specific to service designers. We are currently enlarging the MM of the domain using model transformations from Network Abstraction Layers (NALs). In the future, we will investigate approaches to ensure support for collaborative work and for checking properties on models.

  7. Using State Merging and State Pruning to Address the Path Explosion Problem Faced by Symbolic Execution

    DTIC Science & Technology

    2014-06-19

    urgent and compelling. Recent efforts in this area automate program analysis techniques using model checking and symbolic execution [2, 5–7]. These...bounded model checking tool for x86 binary programs developed at the Air Force Institute of Technology (AFIT). Jiseki creates a bit-vector logic model based...assume there are n different paths through the function foo. The program could potentially call the function foo a bound number of times, resulting in n

  8. The Infobiotics Workbench: an integrated in silico modelling platform for Systems and Synthetic Biology.

    PubMed

    Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio

    2011-12-01

    The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/, released under the GNU General Public License (GPL) version 3. Contact: Natalio.Krasnogor@nottingham.ac.uk.

  9. Discrete Event Simulation-Based Resource Modelling in Health Technology Assessment.

    PubMed

    Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Dixon, Simon

    2017-10-01

    The objective of this article was to conduct a systematic review of published research on the use of discrete event simulation (DES) for resource modelling (RM) in health technology assessment (HTA). RM is broadly defined as incorporating and measuring the effects of constraints on physical resources (e.g. beds, doctors, nurses) in HTA models. Systematic literature searches were conducted in academic databases (JSTOR, SAGE, SPRINGER, SCOPUS, IEEE, Science Direct, PubMed, EMBASE) and grey literature (Google Scholar, NHS journal library), enhanced by manual searches (i.e. reference-list checking, citation searching and hand-searching techniques). The search strategy yielded 4117 potentially relevant citations. Following the screening and manual searches, ten articles were included. Reviewing these articles provided insights into the applications of RM: firstly, different types of economic analyses, model settings, RM and cost-effectiveness analysis (CEA) outcomes were identified. Secondly, variation in the characteristics of the constraints, such as their type and nature and the sources of data for them, was identified. Thirdly, it was found that including the effects of constraints caused the CEA results to change in these articles. The review found that DES proved to be an effective technique for RM, but only a small number of studies had applied it in HTA. These studies nonetheless showed the important consequences of modelling physical constraints and point to the need for a framework to guide future applications of this approach.
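
    The resource-constraint effect described above can be illustrated with a minimal discrete event simulation. The sketch below is a generic FIFO queue served by a fixed pool of beds, written in Python for illustration; the arrival pattern, lengths of stay, and bed counts are invented and do not come from any of the reviewed studies.

```python
import heapq

def mean_wait(arrivals, los, n_beds):
    """Mean waiting time for a FIFO queue served by n_beds identical beds:
    each patient starts at max(arrival time, earliest bed-free time)."""
    beds = [0.0] * n_beds              # time at which each bed next becomes free
    heapq.heapify(beds)
    waits = []
    for t, stay in zip(arrivals, los):
        free_at = heapq.heappop(beds)  # earliest bed to free up
        start = max(t, free_at)
        waits.append(start - t)
        heapq.heappush(beds, start + stay)
    return sum(waits) / len(waits)

arrivals = [float(i) for i in range(100)]     # one arrival per hour
los = [5.0] * 100                             # every stay lasts 5 hours
w_ample = mean_wait(arrivals, los, n_beds=6)  # capacity exceeds demand: 0.0
w_tight = mean_wait(arrivals, los, n_beds=4)  # constrained: queues build up
```

    With ample capacity nobody waits; shrinking the bed pool creates queues, which is exactly the consequence of modelling physical constraints that the review highlights.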

  10. Density Functional Calculations for Prediction of 57Fe Mössbauer Isomer Shifts and Quadrupole Splittings in β-Diketiminate Complexes

    PubMed Central

    2017-01-01

    The relative ease of Mössbauer spectroscopy and of density functional theory (DFT) calculations encourages the use of Mössbauer parameters as a validation method for calculations, and the use of calculations as a double check on crystallographic structures. A number of studies have proposed correlations between the computationally determined electron density at the iron nucleus and the observed isomer shift, but deviations from these correlations in low-valent iron β-diketiminate complexes encouraged us to determine a new correlation for these compounds. The use of B3LYP/def2-TZVP in the ORCA platform provides an excellent balance of accuracy and speed. We provide here not only this new correlation and a clear guide to its use but also a systematic analysis of the limitations of this approach. We also highlight the impact of crystallographic inaccuracies, DFT model truncation, and spin states, with intent to assist experimentalists to use Mössbauer spectroscopy and calculations together. PMID:28691111
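
    At its core, the correlation the authors propose is a linear calibration between a computed quantity and a measured one. A minimal ordinary least-squares sketch in Python, using invented numbers rather than the paper's B3LYP/def2-TZVP calibration data:

```python
def fit_line(x, y):
    """Ordinary least squares for the linear calibration delta = a * rho0 + b
    between computed electron density at the nucleus (rho0) and the
    measured isomer shift (delta)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

# Invented illustrative numbers, NOT the paper's calibration data.
rho0 = [11810.0, 11812.0, 11814.0, 11816.0]   # electron density (hypothetical)
delta = [0.90, 0.75, 0.60, 0.45]              # isomer shift, mm/s (hypothetical)
a, b = fit_line(rho0, delta)
predicted = a * 11813.0 + b                   # shift predicted for a new complex
```

    Once fitted, the line predicts an isomer shift from any computed density, and a large residual flags either a computational problem or a questionable crystal structure, as the abstract suggests.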

  11. Investigation of numerical simulation on all-optical flip-flop stability maps of 1550nm vertical-cavity surface-emitting laser

    NASA Astrophysics Data System (ADS)

    Li, Jun; Xia, Qing; Wang, Xiaofa

    2017-10-01

    Based on the extended spin-flip model, the all-optical flip-flop stability maps of the 1550 nm vertical-cavity surface-emitting laser have been studied. Theoretical results show excellent agreement with reported experimental results for the polarization switching point current, which equals 1.95 times the threshold. Furthermore, the polarization bistable region is wide, extending from 1.05 to 1.95 times the threshold. A new method is presented that uses the power difference between the two linear polarization modes as the criterion of trigger degree, and stability maps of all-optical flip-flop operation under different injection parameters are obtained. By alternately injecting set and reset pulses with appropriate parameters, mutual switching between the two polarization modes is realized, and the feasibility of all-optical flip-flop operation is checked theoretically. The results offer guidance for experimental studies of all-optical buffer technology.

  12. Staying safe while consuming alcohol: a qualitative study of the protective strategies and informational needs of college freshmen.

    PubMed

    Howard, Donna Elise; Griffin, Melinda; Boekeloo, Bradley; Lake, Kristin; Bellows, Denise

    2007-01-01

    In this qualitative study, the authors examined how students attempt to minimize harm to themselves and others when drinking. The authors recruited freshmen at a large, mid-Atlantic US public university during the fall semester of 2005 to participate in 8 focus groups. The moderator's guide was developed through an iterative process that included input from experts and pilot testing. The researchers audiotaped focus group conversations, transcribed them, and subjected them to an interrater reliability check. Analysis was based on the framework of Information-Motivation-Behavioral Skills Model and a phenomenological approach. College students have a repertoire of coping strategies they use in an attempt to safeguard themselves and their friends from harm when drinking. Strategies encompass planning a safe context for drinking, using safety measures to minimize harm when drinking, and taking care of someone who has consumed too much alcohol. A harm-reduction focus that acknowledges and builds on existing protective strategies may be a promising avenue for alcohol interventions.

  13. [Errors in laboratory daily practice].

    PubMed

    Larrose, C; Le Carrer, D

    2007-01-01

    Legislation set by the GBEA (Guide de bonne exécution des analyses) requires that, before performing analyses, laboratory directors check both the nature of the samples and the patients' identity. The data processing of requisition forms, which identifies key errors, was established in 2000 and 2002 by the specialized biochemistry laboratory, with the contribution of the reception centre for biological samples. The laboratory follows strict acceptability criteria at reception when checking requisition forms and biological samples. All errors are logged into the laboratory database, and analysis reports are sent to the care unit specifying the problems and the consequences they have for the analyses. The data are then assessed by the laboratory directors to produce monthly or annual statistical reports. These indicate the number of errors, which are then indexed to patient files to reveal the specific problem areas, allowing the laboratory directors to train the nurses and enable corrective action.

  14. Personalized medicine: reality and reality checks.

    PubMed

    Leeder, J Steven; Spielberg, Stephen P

    2009-05-01

    The evolving era of pharmacogenomics and personalized medicine is greeted with optimism by many, but this sentiment is not universally shared. The existence of diametrically opposed opinions concerning the potential benefits and obstacles facing the widespread implementation of genomic medicine should stimulate discussion and guide the design of studies to establish the value of interventions targeted at the level of individual patients. One of the more controversial aspects of personalized medicine is whether the anticipated benefits will be realized at an acceptable cost. Recently released analyses suggest that the returns on investment depend on the particular scenario and are different for different stakeholders. On the other hand, cost is only one of the challenges regarding implementation of personalized medicine. Among these are the development of universal standards for managing genomic information in electronic medical records, improvement in the collection and interpretation of clinical phenotype data, and new strategies to educate practitioners and patients/consumers. The reality is that personalized medicine is upon us; open discourse and periodic reality checks will be necessary as we confront it.

  15. Experimental and modal verification of an integral equation solution for a thin-walled dichroic plate with cross-shaped holes

    NASA Technical Reports Server (NTRS)

    Epp, L. W.; Stanton, P. H.

    1993-01-01

    In order to add the capability of an X-band uplink onto the 70-m antenna, a new dichroic plate is needed to replace the Pyle-guide-shaped dichroic plate currently in use. The replacement dichroic plate must exhibit an additional passband at the new uplink frequency of 7.165 GHz, while still maintaining a passband at the existing downlink frequency of 8.425 GHz. Because of the wide frequency separation of these two passbands, conventional methods of designing air-filled dichroic plates exhibit grating lobe problems. A new method of solving this problem by using a dichroic plate with cross-shaped holes is presented and verified experimentally. Two checks of the integral equation solution are described. One is the comparison to a modal analysis for the limiting cross shape of a square hole. As a final check, a prototype dichroic plate with cross-shaped holes was built and measured.

  16. [Practical implementation of a quality management system in a radiological department].

    PubMed

    Huber, S; Zech, C J

    2011-10-01

    This article describes the architecture of a project aiming to implement a DIN EN ISO 9001 quality management system in a radiological department. It is intended to be a practical guide demonstrating each step of the project leading to certification of the system. In the planning phase, resources for the implementation of the project have to be identified and a quality management (QM) group has to be formed as the core team. In the first project phase, all available documents have to be checked and compiled in the QM manual. Moreover, all relevant processes of the department have to be described in so-called process descriptions. In a second step, responsibilities for the project are identified. Customer and employee surveys have to be carried out and a nonconformity management system has to be implemented. In this phase, internal audits are also needed to check the new QM system, which is finally tested in the external certification audit with reference to its conformity with the standards.

  17. Parallel Software Model Checking

    DTIC Science & Technology

    2015-01-08

    checker. This project will explore this strategy to parallelize the generalized PDR algorithm for software model checking. It belongs to TF1 due to its... focus on formal verification. Generalized PDR. Generalized Property Driven Reachability (GPDR) is an algorithm for solving HORN-SMT reachability...

  18. Stochastic Game Analysis and Latency Awareness for Self-Adaptation

    DTIC Science & Technology

    2014-01-01

    this paper, we introduce a formal analysis technique based on model checking of stochastic multiplayer games (SMGs) that enables us to quantify the...Additional Key Words and Phrases: Proactive adaptation, Stochastic multiplayer games, Latency 1. INTRODUCTION When planning how to adapt, self-adaptive...contribution of this paper is twofold: (1) A novel analysis technique based on model checking of stochastic multiplayer games (SMGs) that enables us to

  19. Probabilistic Priority Message Checking Modeling Based on Controller Area Networks

    NASA Astrophysics Data System (ADS)

    Lin, Cheng-Min

    Although the probabilistic model checking tool PRISM has been applied to many communication systems, such as wireless local area networks, Bluetooth, and ZigBee, the technique had not been used for the controller area network (CAN). In this paper, we use PRISM to model CAN's priority-message mechanism, which has made CAN the leading serial communication protocol for automobile and industrial control. Modeling CAN makes it easy to analyze its characteristics and thereby further improve the security and efficiency of automobiles. A Markov chain model helps us capture the behaviour of priority messages.
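
    As a toy stand-in for the kind of quantity PRISM computes on the real CAN model (e.g., the probability that a frame has been transmitted within k arbitration slots), the discrete-time chain below assumes a single low-priority frame contending against high-priority traffic that appears independently in each slot with probability p_high. It illustrates the arbitration idea only; it is not the paper's PRISM model.

```python
def low_priority_wait(p_high, n_slots):
    """Toy discrete-time Markov chain for CAN bitwise arbitration: in each
    slot a high-priority frame contends with probability p_high and, by the
    arbitration rule, always wins.  Returns P(low-priority frame still
    blocked after k slots) for k = 0..n_slots."""
    blocked = 1.0
    probs = [blocked]
    for _ in range(n_slots):
        blocked *= p_high          # still blocked only if outranked again
        probs.append(blocked)
    return probs

probs = low_priority_wait(0.5, 4)  # geometric decay of the blocking probability
```

    In PRISM one would instead write the chain in the PRISM language and pose a query of the form P=? [ F<=k "sent" ]; the closed form here (p_high to the power k) is what that query reduces to for this trivial chain.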

  20. Demonstration of the Military Ecological Risk Assessment Framework (MERAF): Apache Longbow - Hellfire Missile Test at Yuma Proving Ground

    DTIC Science & Technology

    2001-11-01

    that there were no target misses. The Hellfire missile does not have a depleted uranium head. 2.2.2.3 Tank movement During the test, the...guide other users through the use of this complicated program. The input data files for NOISEMAP consist of a root file name with several extensions...SOURCES subdirectory. This file will have the root file name followed by an accession number, then the .bps extension. The user must check the *.log

  1. A comparison of Monte-Carlo simulations using RESTRAX and McSTAS with experiment on IN14

    NASA Astrophysics Data System (ADS)

    Wildes, A. R.; S̆aroun, J.; Farhi, E.; Anderson, I.; Høghøj, P.; Brochier, A.

    2000-03-01

    Monte-Carlo simulations of a focusing supermirror guide after the monochromator on the IN14 cold neutron three-axis spectrometer, I.L.L., were carried out using the instrument simulation programs RESTRAX and McSTAS. The simulations were compared to experiment to check their accuracy. The flux ratios over both a 100 and a 1600 mm² area at the sample position compare well, and there is very close agreement between simulation and experiment for the energy spread of the incident beam.

  2. KSC-04PD-1680

    NASA Technical Reports Server (NTRS)

    2004-01-01

    KENNEDY SPACE CENTER, FLA. In the Orbiter Processing Facility, workers help guide the nose cap (right) toward the orbiter Atlantis for installation. The nose cap was removed from the vehicle in May and sent back to the vendor for thorough Non-Destructive Engineering evaluation and recoating. Thermography was also performed to check for internal flaws. This procedure uses high intensity light to heat areas that are immediately scanned with an infrared camera. White Thermal Protection System blankets were reinstalled on the nose cap before installation. Processing continues on Atlantis for its future mission to the International Space Station.

  3. MOM: A meteorological data checking expert system in CLIPS

    NASA Technical Reports Server (NTRS)

    Odonnell, Richard

    1990-01-01

    Meteorologists have long faced the problem of verifying the data they use. Experience shows that there is a sizable number of errors in the data reported by meteorological observers. This is unacceptable for computer forecast models, which depend on accurate data for accurate results. Most errors that occur in meteorological data are obvious to the meteorologist, but time constraints prevent hand-checking. For this reason, it is necessary to have a 'front end' to the computer model to ensure the accuracy of input. Various approaches to automatic data quality control have been developed by several groups. MOM is a rule-based system implemented in CLIPS and utilizing 'consistency checks' and 'range checks'. The system is generic in the sense that it knows some meteorological principles, regardless of specific station characteristics. Specific constraints kept as CLIPS facts in a separate file provide for system flexibility. Preliminary results show that the expert system has detected some inconsistencies not noticed by a local expert.
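
    MOM itself is a CLIPS rule base, but the flavour of its range checks and consistency checks can be sketched in Python. The field names and limits below are hypothetical examples, not MOM's actual rules:

```python
def check_observation(obs, limits):
    """Range and consistency checks in the spirit of MOM's rules.
    `limits` maps each field to a climatological (low, high) range; the
    dew-point rule is one example of a cross-field consistency check."""
    errors = []
    for field, (low, high) in limits.items():
        if not low <= obs[field] <= high:
            errors.append("range: %s=%s outside [%s, %s]"
                          % (field, obs[field], low, high))
    if obs["dewpoint_c"] > obs["temperature_c"]:   # physically impossible
        errors.append("consistency: dew point exceeds temperature")
    return errors

limits = {"temperature_c": (-60.0, 60.0), "dewpoint_c": (-60.0, 60.0)}
ok = check_observation({"temperature_c": 21.0, "dewpoint_c": 14.0}, limits)
bad = check_observation({"temperature_c": 21.0, "dewpoint_c": 85.0}, limits)
```

    Keeping the limits in data rather than in the rules mirrors MOM's design, where station-specific constraints live in a separate CLIPS fact file for flexibility.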

  4. Learning Assumptions for Compositional Verification

    NASA Technical Reports Server (NTRS)

    Cobleigh, Jamieson M.; Giannakopoulou, Dimitra; Pasareanu, Corina; Clancy, Daniel (Technical Monitor)

    2002-01-01

    Compositional verification is a promising approach to addressing the state explosion problem associated with model checking. One compositional technique advocates proving properties of a system by checking properties of its components in an assume-guarantee style. However, the application of this technique is difficult because it involves non-trivial human input. This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion. To check a component against a property, our approach generates assumptions that the environment needs to satisfy for the property to hold. These assumptions are then discharged on the rest of the system. Assumptions are computed by a learning algorithm. They are initially approximate, but become gradually more precise by means of counterexamples obtained by model checking the component and its environment, alternately. This iterative process may at any stage conclude that the property is either true or false in the system. We have implemented our approach in the LTSA tool and applied it to the analysis of a NASA system.

  5. Considering Decision Variable Diversity in Multi-Objective Optimization: Application in Hydrologic Model Calibration

    NASA Astrophysics Data System (ADS)

    Sahraei, S.; Asadzadeh, M.

    2017-12-01

    Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While solution diversity in the objective space has been explored extensively in the literature, little attention has been given to solution diversity in the decision space. Selection metrics such as the hypervolume contribution and crowding distance, calculated in the objective space, guide the search toward solutions that are well-distributed across the objective space. In this study, the diversity of solutions in the decision space is used as the main selection criterion, beside the dominance check, in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space and the ones in less crowded clusters are given more chance to be selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a more sparse set of high-quality solutions increases, and therefore the analyst receives a well-diversified set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.
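
    A minimal sketch of the selection idea, assuming a simple grid-based notion of decision-space crowding (the abstract does not give the authors' exact clustering mechanism): archived solutions are binned by their decision variables and a parent is drawn from the least crowded bin.

```python
import random

def pick_parent(archive, n_bins=4, seed=0):
    """Diversity-guided selection sketch: bin archived solutions by their
    DECISION variables and sample a parent from the least crowded bin, so
    sparse regions of the decision space get more reproductive chances."""
    random.seed(seed)
    dims = range(len(archive[0]))
    lo = [min(x[d] for x in archive) for d in dims]
    hi = [max(x[d] for x in archive) for d in dims]

    def cell(x):               # grid cell of a solution in decision space
        return tuple(min(int((x[d] - lo[d]) / (hi[d] - lo[d] + 1e-12) * n_bins),
                         n_bins - 1) for d in dims)

    bins = {}
    for x in archive:
        bins.setdefault(cell(x), []).append(x)
    return random.choice(min(bins.values(), key=len))

# Three clustered solutions and one isolated one: the isolated one is picked.
archive = [(0.1, 0.1), (0.12, 0.11), (0.13, 0.09), (0.9, 0.9)]
parent = pick_parent(archive)
```

    Objective-space metrics such as crowding distance would treat all four solutions on their objective values alone; binning in the decision space is what steers reproduction toward under-explored parameter regions.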

  6. Model-checking techniques based on cumulative residuals.

    PubMed

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
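
    The simulation idea can be sketched generically: compare the observed supremum of the cumulative-residual process with realizations in which each residual is multiplied by an independent standard normal draw. This wild-multiplier sketch, with invented data, only illustrates the flavour of the method; the paper derives the exact limiting Gaussian process for each model class.

```python
import random

def cusum_check(x, residuals, n_sim=500, seed=2):
    """Cumulative-residual model check, sketched: compare the observed
    sup |cumulative sum of residuals ordered by covariate x| against
    realizations in which each residual is multiplied by an independent
    N(0,1) draw (zero-mean Gaussian behaviour under the model)."""
    random.seed(seed)
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [residuals[i] for i in order]

    def sup_cusum(vals):
        s = peak = 0.0
        for v in vals:
            s += v
            peak = max(peak, abs(s))
        return peak

    observed = sup_cusum(r)
    exceed = sum(sup_cusum([ri * random.gauss(0.0, 1.0) for ri in r]) >= observed
                 for _ in range(n_sim))
    return exceed / n_sim      # small value: trend too strong to be noise

random.seed(0)
x = [i / 50 for i in range(50)]
noise = [random.gauss(0.0, 1.0) for _ in x]
p_noise = cusum_check(x, noise)       # residuals with no trend in x
p_trend = cusum_check(x, [1.0] * 50)  # systematically one-sided residuals
```

    Systematically one-sided residuals produce a cumulative sum far larger than any zero-mean realization, so the check rejects; patternless residuals do not.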

  7. Parameter discovery in stochastic biological models using simulated annealing and statistical model checking.

    PubMed

    Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J

    2014-01-01

    Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
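
    The combination of techniques can be sketched generically: a Monte-Carlo "statistical model check" estimates the probability that a property holds at a given parameter value, and simulated annealing searches for the parameter that matches a target probability. Everything below (the coin-flip model, step sizes, run counts) is an invented illustration, not the glucose-insulin model or the authors' CUDA implementation.

```python
import math, random

def discover_parameter(simulate, holds, target_prob, lo, hi,
                       n_runs=200, n_steps=60, temp0=1.0, seed=3):
    """Parameter discovery sketch: the statistical model check estimates
    P(property) by Monte-Carlo simulation, and simulated annealing
    minimizes the gap between that estimate and the desired probability."""
    random.seed(seed)

    def score(theta):          # statistical model check: |P_hat - target|
        hits = sum(holds(simulate(theta)) for _ in range(n_runs))
        return abs(hits / n_runs - target_prob)

    cur = best = (lo + hi) / 2
    cur_score = best_score = score(cur)
    for k in range(n_steps):
        temp = temp0 * (1 - k / n_steps)
        cand = min(hi, max(lo, cur + random.gauss(0.0, 0.1 * (hi - lo))))
        cand_score = score(cand)
        if (cand_score < cur_score or
                random.random() < math.exp((cur_score - cand_score)
                                           / max(temp, 1e-9))):
            cur, cur_score = cand, cand_score
        if cur_score < best_score:
            best, best_score = cur, cur_score
    return best

# Toy stochastic "model": a biased coin; the property is that a flip is True.
theta_hat = discover_parameter(simulate=lambda th: random.random() < th,
                               holds=lambda outcome: outcome,
                               target_prob=0.7, lo=0.0, hi=1.0)
```

    The paper additionally uses sequential hypothesis testing so that each model check stops early once the verdict is statistically clear; the fixed n_runs here is a simplification.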

  8. Adiabatic particle motion in a nearly drift-free magnetic field: Application to the geomagnetic tail

    NASA Technical Reports Server (NTRS)

    Stern, D. P.

    1977-01-01

    The guiding center motion of particles in a nearly drift-free magnetic field is analyzed in order to investigate the dependence of mean drift velocity on equatorial pitch angle, the variation of local drift velocity along the trajectory, and other properties. The mean drift for adiabatic particles is expressed by means of elliptic integrals. Approximations to the twice-averaged Hamiltonian W near z = 0 are derived, permitting simple representation of drift paths if an electric potential also exists. In addition, the use of W or of expressions for the longitudinal invariant allows the derivation of the twice-averaged Liouville equation and of the corresponding Vlasov equation. Bounce times are calculated (using the drift-free approximation), as are instantaneous guiding center drift velocities, which are then used to provide a numerical check on the formulas for the mean drift.

  9. The Role of Nurses in E-Health: The MobiGuide Project Experience.

    PubMed

    Parimbelli, Enea; Sacchi, Lucia; Budasu, Roxana; Napolitano, Carlo; Peleg, Mor; Quaglini, Silvana

    2016-01-01

    Leveraging the experience of the European project MobiGuide, this paper elaborates on the nurses' role in developing, delivering and evaluating e-health based services. We focus on the home monitoring of atrial fibrillation. Patients enrolled in our study are provided with a smartphone and an ECG sensor, and receive recommendations, reminders and alerts concerning medications and measurements that they should perform, through a mobile decision support system that is constantly updated by a backend system. Patients' data are sent to health care personnel, who may review them and act accordingly. Nurses play a central role in such a setting. After being involved in the design of the caregiver interface, they are responsible for the patients' enrollment phase (which includes patient training), for the daily checking of incoming data, for the triage of patients' complaints, and for the final phase of the study, in which patients are interviewed about their experience with the system.

  10. Quantitative CMMI Assessment for Offshoring through the Analysis of Project Management Repositories

    NASA Astrophysics Data System (ADS)

    Sunetnanta, Thanwadee; Nobprapai, Ni-On; Gotel, Olly

    The nature of distributed teams and the existence of multiple sites in offshore software development projects pose a challenging setting for software process improvement. Often, the improvement and appraisal of software processes is achieved through a turnkey solution where best practices are imposed or transferred from a company's headquarters to its offshore units. In such settings, successful project health checks and quality monitoring of software processes require strong project management skills, well-built onshore-offshore coordination, and often regular onsite visits by software process improvement consultants from the headquarters' team. This paper focuses on software process improvement as guided by the Capability Maturity Model Integration (CMMI) and proposes a model to evaluate the status of such improvement efforts in the context of distributed multi-site projects without some of this overhead. The paper discusses the application of quantitative CMMI assessment through the collection and analysis of project data gathered directly from project repositories, to facilitate CMMI implementation and reduce its cost for offshore-outsourced software development projects. We exemplify this approach to quantitative CMMI assessment through the analysis of project management data and discuss the future directions of this work in progress.

  11. Generalized Symbolic Execution for Model Checking and Testing

    NASA Technical Reports Server (NTRS)

    Khurshid, Sarfraz; Pasareanu, Corina; Visser, Willem; Kofmeyer, David (Technical Monitor)

    2003-01-01

    Modern software systems, which are often concurrent and manipulate complex data structures, must be extremely reliable. We present a novel framework based on symbolic execution for automated checking of such systems. We provide a two-fold generalization of traditional symbolic execution based approaches: one, we define a program instrumentation, which enables standard model checkers to perform symbolic execution; two, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity of lists), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore program heap configurations (using a systematic treatment of aliasing) and manipulate logical formulae on program data values (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure, and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.
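
    The path-condition idea at the heart of symbolic execution can be sketched with a toy example: each program path is a conjunction of branch constraints, and a decision procedure finds one witness input per feasible path. The brute-force solver below is a stand-in for a real decision procedure, and the three-path function is invented; this is not Java PathFinder's mechanism.

```python
def solve(constraints, domain=range(-100, 100)):
    """Stand-in for a decision procedure: brute-force search for a witness
    value of the single symbolic integer x satisfying every constraint."""
    for x in domain:
        if all(c(x) for c in constraints):
            return x
    return None                # path condition unsatisfiable

# Path conditions of:  if x > 10: (return "A" if x < 20 else "B") else: "C"
paths = {
    "A": [lambda x: x > 10, lambda x: x < 20],
    "B": [lambda x: x > 10, lambda x: not x < 20],
    "C": [lambda x: not x > 10],
}
tests = {label: solve(cs) for label, cs in paths.items()}  # one input per path
```

    Executing the program on the three witness inputs covers every path, which is the essence of the test-generation application the abstract describes; the framework's contribution is doing this for heaps, preconditions, and threads rather than a single integer.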

  12. Model Checking Degrees of Belief in a System of Agents

    NASA Technical Reports Server (NTRS)

    Raimondi, Franco; Primero, Giuseppe; Rungta, Neha

    2014-01-01

    Reasoning about degrees of belief has been investigated in the past by a number of authors and has several practical applications in real life. In this paper we present a unified framework to model and verify degrees of belief in a system of agents. In particular, we describe an extension of the temporal-epistemic logic CTLK and we introduce a semantics based on interpreted systems for this extension. In this way, degrees of belief do not need to be provided externally, but can be derived automatically from the possible executions of the system, thereby providing a computationally grounded formalism. We leverage the semantics to (a) construct a model checking algorithm, (b) investigate its complexity, (c) provide a Java implementation of the model checking algorithm, and (d) evaluate our approach using the standard benchmark of the dining cryptographers. Finally, we provide a detailed case study: using our framework and our implementation, we assess and verify the situational awareness of the pilot of Air France 447 flying in off-nominal conditions.
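The computationally grounded idea, deriving a degree of belief from the system's possible executions rather than supplying it externally, can be caricatured as counting indistinguishable global states. The worlds and observations below are invented for illustration; this is not the paper's CTLK semantics.

```python
# Global states as (agent's observation, hidden fact) pairs; the list plays
# the role of the reachable states of an interpreted system.
WORLDS = [
    ("alarm", True), ("alarm", True), ("alarm", False),
    ("quiet", False),
]

def degree_of_belief(observation, fact_value):
    """Belief = fraction of epistemically indistinguishable worlds
    (those sharing the agent's observation) where the fact has this value."""
    compatible = [w for w in WORLDS if w[0] == observation]
    return sum(1 for w in compatible if w[1] == fact_value) / len(compatible)

belief = degree_of_belief("alarm", True)  # 2 of the 3 "alarm" worlds
```

A model checker in this style would compute the set of compatible states symbolically and evaluate temporal-epistemic formulas over it, rather than enumerating a hand-written list.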

  13. Development of an inpatient operational pharmacy productivity model.

    PubMed

    Naseman, Ryan W; Lopez, Ben R; Forrey, Ryan A; Weber, Robert J; Kipp, Kris M

    2015-02-01

    An innovative model for measuring the operational productivity of medication order management in inpatient settings is described. Order verification within a computerized prescriber order-entry system was chosen as the pharmacy workload driver. To account for inherent variability in the tasks involved in processing different types of orders, pharmaceutical products were grouped by class, and each class was assigned a time standard, or "medication complexity weight," reflecting the intensity of pharmacist and technician activities (verification of drug indication, verification of appropriate dosing, adverse-event prevention and monitoring, medication preparation, product checking, product delivery, returns processing, nurse/provider education, and problem-order resolution). The resulting "weighted verifications" (WV) model allows productivity monitoring by job function (pharmacist versus technician) to guide hiring and staffing decisions. A 9-month historical sample of verified medication orders was analyzed using the WV model, and the calculations were compared with values derived from two established models—one based on the Case Mix Index (CMI) and the other based on the proprietary Pharmacy Intensity Score (PIS). Evaluation of Pearson correlation coefficients indicated that values calculated using the WV model were highly correlated with those derived from the CMI- and PIS-based models (r = 0.845 and 0.886, respectively). Relative to the comparator models, the WV model offered the advantage of less period-to-period variability. The WV model yielded productivity data that correlated closely with values calculated using two validated workload management models. The model may be used as an alternative measure of pharmacy operational productivity. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
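The core arithmetic of a weighted-verifications model, a complexity weight per medication class multiplied by verification counts and normalized by worked hours, can be sketched as follows. The classes and weights here are hypothetical placeholders, not the paper's calibrated time standards.

```python
# Hypothetical medication complexity weights (illustrative values only).
WEIGHTS = {"oral_solid": 1.0, "iv_admixture": 2.5, "chemo": 4.0}

def weighted_verifications(orders):
    """orders: iterable of (medication_class, verified_count) pairs."""
    return sum(WEIGHTS[cls] * n for cls, n in orders)

def productivity(orders, worked_hours):
    """Weighted verifications per worked hour for one job function."""
    return weighted_verifications(orders) / worked_hours

period = [("oral_solid", 400), ("iv_admixture", 120), ("chemo", 10)]
wv = weighted_verifications(period)        # 400 + 300 + 40 = 740.0
rate = productivity(period, worked_hours=80)
```

Tracking `rate` separately for pharmacists and technicians over successive periods is what would guide the staffing decisions the abstract describes.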

  14. Test and Evaluation Report of the IVAC (Trademark) Vital Check Monitor Model 4000AEE

    DTIC Science & Technology

    1992-02-01

    USAARL Report No. 92-14: Test and Evaluation Report of the IVAC® Vital Check Monitor Model 4000AEE. ...does not constitute an official Department of the Army endorsement or approval of the use of such commercial items. Reviewed: DENNIS F. SHANAHAN, LTC, MC. ...to 12.4 GHz) was scanned for emissions. The IVAC® Model 4000AEE was operated with both AC and battery power. 2.10.3.2 The radiated susceptibility

  15. Automated Verification of Specifications with Typestates and Access Permissions

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.; Catano, Nestor

    2011-01-01

    We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).

  16. Utilisation of preventative health check-ups in the UK: findings from individual-level repeated cross-sectional data from 1992 to 2008

    PubMed Central

    Labeit, Alexander; Peinemann, Frank; Baker, Richard

    2013-01-01

    Objectives To analyse and compare the determinants of screening uptake for different National Health Service (NHS) health check-ups in the UK. Design Individual-level analysis of repeated cross-sectional surveys with balanced panel data. Setting The UK. Participants Individuals taking part in the British Household Panel Survey (BHPS), 1992–2008. Outcome measure Uptake of NHS health check-ups for cervical cancer screening, breast cancer screening, blood pressure checks, cholesterol tests, dental screening and eyesight tests. Methods Dynamic panel data models (random effects panel probit with initial conditions). Results Having had a health check-up 1 year before, and previously in accordance with the recommended schedule, was associated with higher uptake of health check-ups. Individuals who visited a general practitioner (GP) had a significantly higher uptake in 5 of the 6 health check-ups. Uptake was highest in the recommended age group for breast and cervical cancer screening. For all health check-ups, age had a non-linear relationship with uptake. Lower self-rated health status was associated with increased uptake of blood pressure checks and cholesterol tests; smoking was associated with decreased uptake of 4 health check-ups. The effects of socioeconomic variables differed for the different health check-ups. Ethnicity did not have a significant influence on any health check-up. Permanent household income had an influence only on eyesight tests and dental screening. Conclusions Common determinants for having health check-ups are age, screening history and a GP visit. Policy interventions to increase uptake should consider the central role of the GP in promoting screening examinations and in preserving a high level of uptake. Possible economic barriers to access for prevention exist for dental screening and eyesight tests, and could be a target for policy intervention. Trial registration This observational study was not registered. PMID:24366576

  17. How can machine-learning methods assist in virtual screening for hyperuricemia? A healthcare machine-learning approach.

    PubMed

    Ichikawa, Daisuke; Saito, Toki; Ujita, Waka; Oyama, Hiroshi

    2016-12-01

    Our purpose was to develop a new machine-learning approach (a virtual health check-up) toward identification of those at high risk of hyperuricemia. Applying the system to general health check-ups is expected to reduce medical costs compared with administering an additional test. Data were collected during annual health check-ups performed in Japan between 2011 and 2013 (inclusive). We prepared training and test datasets from the health check-up data to build prediction models; these were composed of 43,524 and 17,789 persons, respectively. Gradient-boosting decision tree (GBDT), random forest (RF), and logistic regression (LR) approaches were trained using the training dataset and were then used to predict hyperuricemia in the test dataset. Undersampling was applied to build the prediction models to deal with the imbalanced class dataset. The results showed that the RF and GBDT approaches afforded the best performances in terms of sensitivity and specificity, respectively. The area under the curve (AUC) values of the models, which reflected the total discriminative ability of the classification, were 0.796 [95% confidence interval (CI): 0.766-0.825] for the GBDT, 0.784 [95% CI: 0.752-0.815] for the RF, and 0.785 [95% CI: 0.752-0.819] for the LR approaches. No significant differences were observed between any pair of approaches. Small changes occurred in the AUCs after applying undersampling to build the models. We developed a virtual health check-up that predicted the development of hyperuricemia using machine-learning methods. The GBDT, RF, and LR methods had similar predictive capability. Undersampling did not remarkably improve predictive power. Copyright © 2016 Elsevier Inc. All rights reserved.
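Two generic ingredients of this kind of study, undersampling the majority class of an imbalanced training set and scoring a classifier by AUC, can be sketched in plain Python. This is a toy illustration, not the paper's GBDT/RF/LR pipeline.

```python
import random

def undersample(rows, labels, seed=0):
    """Drop majority-class rows at random until both classes are equal-sized."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    major, minor = (neg, pos) if len(neg) > len(pos) else (pos, neg)
    keep = sorted(minor + rng.sample(major, len(minor)))
    return [rows[i] for i in keep], [labels[i] for i in keep]

def auc(scores, labels):
    """AUC via the Mann-Whitney formulation: the probability that a random
    positive outscores a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

In practice one would undersample only the training split (as the study does) and compute AUC with confidence intervals on the untouched test split.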

  18. Reopen parameter regions in two-Higgs doublet models

    NASA Astrophysics Data System (ADS)

    Staub, Florian

    2018-01-01

    The stability of the electroweak potential is a very important constraint for models of new physics. At the moment, it is standard practice for two-Higgs doublet models (THDM) and for singlet or triplet extensions of the standard model to perform these checks at tree level. However, these models are often studied in the presence of very large couplings. Therefore, it can be expected that radiative corrections to the potential are important. We study these effects using the example of the type-II THDM and find that loop corrections can revive more than 50% of the phenomenologically viable points which are ruled out by the tree-level vacuum stability checks. Similar effects are expected for other extensions of the standard model.

  19. Verification of Compartmental Epidemiological Models using Metamorphic Testing, Model Checking and Visual Analytics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramanathan, Arvind; Steed, Chad A; Pullum, Laura L

    Compartmental models in epidemiology are widely used as a means to model disease spread mechanisms and understand how one can best control the disease in case an outbreak of a widespread epidemic occurs. However, a significant challenge within the community is in the development of approaches that can be used to rigorously verify and validate these models. In this paper, we present an approach to rigorously examine and verify the behavioral properties of compartmental epidemiological models under several common modeling scenarios including birth/death rates and multi-host/pathogen species. Using metamorphic testing, a novel visualization tool and model checking, we build a workflow that provides insights into the functionality of compartmental epidemiological models. Our initial results indicate that metamorphic testing can be used to verify the implementation of these models and provide insights into special conditions where these mathematical models may fail. The visualization front-end allows the end-user to scan through a variety of parameters commonly used in these models to elucidate the conditions under which an epidemic can occur. Further, specifying these models using a process algebra allows one to automatically construct behavioral properties that can be rigorously verified using model checking. Taken together, our approach allows for detecting implementation errors as well as handling conditions under which compartmental epidemiological models may fail to provide insights into disease spread dynamics.
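A metamorphic test checks a relation that must hold between runs of the model rather than comparing against a known exact output. A minimal sketch for an SIR-type compartmental model follows, using scale invariance of frequency-dependent transmission as the relation; the model and parameters are illustrative, not the paper's workflow.

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One discrete step of a toy SIR model with frequency-dependent
    transmission (new infections proportional to s*i/n)."""
    n = s + i + r
    new_inf = beta * s * i / n
    new_rec = gamma * i
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def simulate(s0, i0, r0, steps=50):
    state = (s0, i0, r0)
    for _ in range(steps):
        state = sir_step(*state)
    return state

# Metamorphic relation: scaling every initial compartment by k must scale
# the whole trajectory by k. No "correct" output needs to be known.
base = simulate(990.0, 10.0, 0.0)
scaled = simulate(9900.0, 100.0, 0.0)
relation_holds = all(abs(10 * b - c) < 1e-6 * (1 + abs(c))
                     for b, c in zip(base, scaled))
```

An implementation bug such as using density-dependent transmission where frequency-dependent was specified would break this relation immediately, which is exactly the kind of defect metamorphic testing surfaces.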

  20. 10 CFR 35.2642 - Records of periodic spot-checks for teletherapy units.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ....2642 Section 35.2642 Energy NUCLEAR REGULATORY COMMISSION MEDICAL USE OF BYPRODUCT MATERIAL Records... must include— (1) The date of the spot-check; (2) The manufacturer's name, model number, and serial... device; (6) The determined accuracy of each distance measuring and localization device; (7) The...

  1. 10 CFR 35.2642 - Records of periodic spot-checks for teletherapy units.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ....2642 Section 35.2642 Energy NUCLEAR REGULATORY COMMISSION MEDICAL USE OF BYPRODUCT MATERIAL Records... must include— (1) The date of the spot-check; (2) The manufacturer's name, model number, and serial... device; (6) The determined accuracy of each distance measuring and localization device; (7) The...

  2. 12 CFR Appendix A to Part 205 - Model Disclosure Clauses and Forms

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... your checking account using information from your check to: (i) Pay for purchases. (ii) Pay bills. (3... disclose information to third parties about your account or the transfers you make: (i) Where it is...) Disclosure by government agencies of information about obtaining account balances and account histories...

  3. Using computer models to design gully erosion control structures for humid northern Ethiopia

    USDA-ARS?s Scientific Manuscript database

    Classic gully erosion control measures such as check dams have been unsuccessful in halting gully formation and growth in the humid northern Ethiopian highlands. Gullies are typically formed in vertisols and flow often bypasses the check dams as elevated groundwater tables make gully banks unstable....

  4. Posterior Predictive Checks for Conditional Independence between Response Time and Accuracy

    ERIC Educational Resources Information Center

    Bolsinova, Maria; Tijmstra, Jesper

    2016-01-01

    Conditional independence (CI) between response time and response accuracy is a fundamental assumption of many joint models for time and accuracy used in educational measurement. In this study, posterior predictive checks (PPCs) are proposed for testing this assumption. These PPCs are based on three discrepancy measures reflecting different…

  5. Building Program Verifiers from Compilers and Theorem Provers

    DTIC Science & Technology

    2015-05-14

    Checking with SMT: UFO • LLVM-based front-end (partially reused in SeaHorn) • Combines Abstract Interpretation with Interpolation-Based Model Checking • ...assertions; counter-examples are long, and it is hard to determine (from main) what is relevant. (Gurfinkel, 2015)
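The fragment above concerns SMT-based software model checking. The underlying idea of bounded model checking, searching for a counterexample trace of bounded length, can be illustrated without an SMT solver by an explicit-state sketch; this toy is not UFO's interpolation-based algorithm.

```python
from collections import deque

def bmc(init, step, bad, bound):
    """Breadth-first search of a transition system: return a counterexample
    trace to a state satisfying `bad` within `bound` steps, else None.
    (SMT-based tools instead unroll the transition relation into a formula.)"""
    frontier = deque((s, [s]) for s in init)
    seen = set(init)
    while frontier:
        state, trace = frontier.popleft()
        if bad(state):
            return trace          # the counterexample trace
        if len(trace) > bound:
            continue              # do not explore beyond the bound
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None

# Toy counter; property: "the counter never exceeds 3".
trace = bmc(init={0}, step=lambda s: {s + 1}, bad=lambda s: s > 3, bound=10)
```

With `bound=10` the property is refuted by the trace 0, 1, 2, 3, 4; with `bound=2` no violation is reachable, which is exactly why bounded counterexamples can be long and hard to interpret, as the fragment notes.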

  6. Exploration of Effective Persuasive Strategies Used in Resisting Product Advertising: A Case Study of Adult Health Check-Ups.

    PubMed

    Tien, Han-Kuang; Chung, Wen

    2018-05-10

    This research addressed adults' health check-ups through the lens of Role Transportation Theory. This theory is applied to narrative advertising that encourages adults to seek health check-ups by causing audiences to empathize with the advertisement's character. This study explored the persuasive mechanism behind narrative advertising and reinforced the Protection Motivation Theory model. We added two key perturbation variables: optimistic bias and truth avoidance. To test the hypotheses, we performed two experiments. In Experiment 1, we recruited 77 respondents online for testing. We used analyses of variance to verify the effectiveness of narrative and informative advertising. Then, in Experiment 2, we recruited 228 respondents to take part in offline physical experiments and conducted a path analysis through structural equation modelling. The findings showed that narrative advertising positively impacted participants' disease prevention intentions. The use of Role Transportation Theory in advertising enables the audience to be emotionally connected with the character, which enhances persuasiveness. In Experiment 2, we found that the degree of role transference can interfere with optimistic bias, improve perceived health risk, and promote behavioral intentions for health check-ups. Furthermore, truth avoidance can interfere with perceived health risks, which, in turn, reduces behavioral intentions for health check-ups.

  7. Rehabilitation of a debris-flow prone mountain stream in southwestern China - Strategies, effects and implications

    NASA Astrophysics Data System (ADS)

    Yu, Guo-an; Huang, He Qing; Wang, Zhaoyin; Brierley, Gary; Zhang, Kang

    2012-01-01

    Summary Rehabilitation of Shengou Creek, a small, steep mountain stream in southwestern China that is prone to debris flows, started more than 30 years ago through an integrated program of engineering applications (check dams and guiding dikes), biological measures (reforestation), and social measures (reducing human disturbance). Small and medium-sized check dams and guiding dikes were constructed on key upper and middle sections of the creek to stabilize hillslopes and channel bed. Meanwhile, Leucaena leucocephala, a drought-tolerant, fast-growing, and highly adaptive plant species, was introduced to promote vegetation recovery in the watershed. The collective community structure of tree, shrub, and herb assemblages in the artificial L. leucocephala forest, which developed after 7 years, enhanced soil structure and drastically reduced soil erosion on hillslopes. Cultivation of steep land was strictly controlled in the basin, and some inhabitants were encouraged to move from upstream areas to downstream towns to reduce disturbance. These integrated measures reduced sediment supply from both hillslopes and upstream channels, preventing sediment-related hazards. The development of natural streambed resistance structures (mainly step-pool systems) and luxuriant riparian vegetation aided channel stability, diversity of stream habitat, and ecological maintenance in the creek. These findings are compared with Jiangjia and Xiaobaini Ravines, two adjacent non-rehabilitated debris-flow streams which have climate and geomorphologic conditions similar to Shengou Creek. Habitat diversity indices, taxa richness, biodiversity, and bio-community indices are much higher in Shengou Creek relative to Jiangjia and Xiaobaini Ravines, attesting to the effectiveness of rehabilitation measures.

  8. Using a Discrete Choice Conjoint Experiment to Engage Stakeholders in the Design of an Outpatient Children's Health Center.

    PubMed

    Cunningham, Charles E; Niccols, Alison; Rimas, Heather; Robicheau, Randi; Anderson, Colleen; DeVries, Bart

    2017-10-01

    To engage users in the design of a regional child and youth health center. The perspective of users should be an integral component of a patient-centered, evidence-based approach to the design of health facilities. We conducted a discrete choice conjoint experiment (DCE), a method from marketing research and health economics, as a component of a strategy to engage users in the preconstruction planning process. A sample of 467 participants (290 staff and 177 clients or community stakeholders) completed the DCE. Latent class analysis identified three segments with different design preferences. A group we termed an enhanced design (57%) segment preferred a fully featured facility with personal contacts at the start of visits (in-person check-in, personal waiting room notification, volunteer-assisted wayfinding, and visible security), a family resource center with a health librarian, and an outdoor playground equipped with covered heated pathways. The self-guided design segment (11%), in contrast, preferred a design allowing a more independent use of the facility (e.g., self-check-in at computer kiosks, color-coded wayfinding, and a self-guided family resource center). Designs affording privacy and personal contact with staff were important to the private design segment (32%). The theme and decor of the building was less important than interactive features and personal contacts. A DCE allowed us to engage users in the planning process by estimating the value of individual design elements, identifying segments with differing views, informing decisions regarding design trade-offs, and simulating user response to design options.

  9. A rule-based approach to model checking of UML state machines

    NASA Astrophysics Data System (ADS)

    Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz

    2016-12-01

    In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.
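The rule-based flavor of such a logical model can be illustrated with a toy state machine: transitions are event/successor rules, and a requirement is checked by exhaustive exploration, which a model checker like nuXmv performs symbolically over temporal-logic properties. The states, events, and requirement below are invented, not the paper's UML example.

```python
# Rule-based logical model of a small state machine (hypothetical).
RULES = {
    "idle":    [("start", "running")],
    "running": [("pause", "paused"), ("stop", "idle")],
    "paused":  [("resume", "running"), ("stop", "idle")],
}

def reachable(initial):
    """Exhaustively explore the rule-based transition relation."""
    seen, stack = {initial}, [initial]
    while stack:
        state = stack.pop()
        for _event, nxt in RULES.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# User-defined requirement (a simple safety property): every reachable
# state is a declared state of the machine.
requirement_met = reachable("idle") <= set(RULES)
```

The same rule table could be emitted both as an SMV model for verification and as a VHDL case statement for prototyping, mirroring the dual use of the logical model described in the abstract.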

  10. Introduction of Virtualization Technology to Multi-Process Model Checking

    NASA Technical Reports Server (NTRS)

    Leungwattanakit, Watcharin; Artho, Cyrille; Hagiya, Masami; Tanabe, Yoshinori; Yamamoto, Mitsuharu

    2009-01-01

    Model checkers find failures in software by exploring every possible execution schedule. Java PathFinder (JPF), a Java model checker, has been extended recently to cover networked applications by caching data transferred in a communication channel. A target process is executed by JPF, whereas its peer process runs on a regular virtual machine outside. However, non-deterministic target programs may produce different output data in each schedule, causing the cache to restart the peer process to handle the different set of data. Virtualization tools could help us restore previous states of peers, eliminating peer restart. This paper proposes the application of virtualization technology to networked model checking, concentrating on JPF.

  11. Body checking is associated with weight- and body-related shame and weight- and body-related guilt among men and women.

    PubMed

    Solomon-Krakus, Shauna; Sabiston, Catherine M

    2017-12-01

    This study examined whether body checking was a correlate of weight- and body-related shame and guilt for men and women. Participants were 537 adults (386 women) between the ages of 17 and 74 (M = 28.29 years, SD = 14.63). Preliminary analyses showed women reported significantly more body checking (p < .001), weight- and body-related shame (p < .001), and weight- and body-related guilt (p < .001) than men. In sex-stratified hierarchical linear regression models, body checking was significantly and positively associated with weight- and body-related shame (R² = .29 and .43, p < .001) and weight- and body-related guilt (R² = .34 and .45, p < .001) for men and women, respectively. Based on these findings, body checking is associated with negative weight- and body-related self-conscious emotions. Intervention and prevention efforts aimed at reducing negative weight- and body-related self-conscious emotions should consider focusing on body checking for adult men and women. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Practical Results from the Application of Model Checking and Test Generation from UML/SysML Models of On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Faria, J. M.; Mahomad, S.; Silva, N.

    2009-05-01

    The deployment of complex safety-critical applications requires rigorous techniques and powerful tools for both the development and V&V stages. Model-based technologies are increasingly being used to develop safety-critical software, and arguably, turning to them can bring significant benefits to such processes, however, along with new challenges. This paper presents the results of a research project in which we extended current V&V methodologies to UML/SysML models, aiming to answer the demands related to validation issues. Two quite different but complementary approaches were investigated: (i) model checking and (ii) the extraction of robustness test cases from the same models. These two approaches do not overlap; combined, they provide a wider-reaching model/design validation capability than either alone, thus offering improved safety assurance. Results are very encouraging, even though the approaches either fell short of the desired outcome (model checking) or do not yet appear fully mature (robustness test case extraction). In the case of model checking, it was verified that the automatic model validation process can become fully operational, and even expanded in scope, once tool vendors help (inevitably) to improve the XMI standard interoperability situation. The robustness test case extraction methodology produced interesting results in its early form, but needs further systematisation and consolidation to produce results more predictably and to reduce reliance on experts' heuristics. Finally, further improvement and innovation projects were immediately apparent for both investigated approaches: circumventing current limitations in XMI interoperability on the one hand, and on the other, bringing test case specification onto the same graphical level as the models themselves and then attempting to automate the generation of executable test cases from standard UML notation.

  13. An experimental manipulation of responsibility in children: a test of the inflated responsibility model of obsessive-compulsive disorder.

    PubMed

    Reeves, J; Reynolds, S; Coker, S; Wilson, C

    2010-09-01

    The objective of this study was to investigate whether Salkovskis' (1985) inflated responsibility model of obsessive-compulsive disorder (OCD) applied to children. In an experimental design, 81 children aged 9-12 years were randomly allocated to three conditions: an inflated responsibility group, a moderate responsibility group, and a reduced responsibility group. In all groups children were asked to sort sweets according to whether or not they contained nuts. At baseline the groups did not differ on children's self-reported anxiety, depression, obsessive-compulsive symptoms or on inflated responsibility beliefs. The experimental manipulation successfully changed children's perceptions of responsibility. During the sorting task, time taken to complete the task, checking behaviours, hesitations, and anxiety were recorded. There was a significant effect of responsibility level on the behavioural variables of time taken, hesitations and checking; as perceived responsibility increased, children took longer to complete the task and checked and hesitated more often. There was no between-group difference in children's self-reported state anxiety. The results offer preliminary support for the link between inflated responsibility and increased checking behaviours in children and add to the small but growing literature suggesting that cognitive models of OCD may apply to children. (c) 2010 Elsevier Ltd. All rights reserved.

  14. 75 FR 52482 - Airworthiness Directives; PILATUS Aircraft Ltd. Model PC-7 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-26

    ..., check the airplane maintenance records to determine if the left and/or right aileron outboard bearing... an entry is found during the airplane maintenance records check required in paragraph (f)(1) of this...-0849; Directorate Identifier 2010-CE-043-AD] RIN 2120-AA64 Airworthiness Directives; PILATUS Aircraft...

  15. 77 FR 50644 - Airworthiness Directives; Cessna Airplane Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ... airplanes that have P/N 1134104-1 or 1134104-5 A/C compressor motor installed; an aircraft logbook check for... following: (1) Inspect the number of hours on the A/C compressor hour meter; and (2) Check the aircraft.... Do the replacement following Cessna Aircraft Company Model 525 Maintenance Manual, Revision 23, dated...

  16. Design of experiments enhanced statistical process control for wind tunnel check standard testing

    NASA Astrophysics Data System (ADS)

    Phillips, Ben D.

    The current wind tunnel check standard testing program at NASA Langley Research Center is focused on increasing data quality, uncertainty quantification and overall control and improvement of wind tunnel measurement processes. The statistical process control (SPC) methodology employed in the check standard testing program allows for the tracking of variations in measurements over time as well as an overall assessment of facility health. While the SPC approach can and does provide researchers with valuable information, it has certain limitations in the areas of process improvement and uncertainty quantification. It is thought that by utilizing design of experiments (DOE) methodology in conjunction with current SPC practices, one can more efficiently and robustly characterize uncertainties and develop enhanced process improvement procedures. In this research, methodologies were developed to generate regression models for wind tunnel calibration coefficients, balance force coefficients and wind tunnel flow angularities. The coefficients of these regression models were then tracked in statistical process control charts, giving a higher level of understanding of the processes. The methodology outlined is sufficiently generic that this research is applicable to any wind tunnel check standard testing program.
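The SPC side of this approach, tracking a fitted regression coefficient across repeated check-standard tests against 3-sigma control limits, can be sketched as follows. The numbers are illustrative only, not wind tunnel data.

```python
def control_limits(history):
    """Individuals-chart style limits: mean ± 3 sample standard deviations
    of a baseline series of coefficient estimates."""
    n = len(history)
    mean = sum(history) / n
    sd = (sum((x - mean) ** 2 for x in history) / (n - 1)) ** 0.5
    return mean - 3 * sd, mean + 3 * sd

def out_of_control(history, new_points):
    """Flag new coefficient estimates falling outside the control limits."""
    lcl, ucl = control_limits(history)
    return [x for x in new_points if not (lcl <= x <= ucl)]

# Baseline coefficient estimates from past check-standard tests (illustrative).
baseline = [1.02, 0.98, 1.01, 0.99, 1.00, 1.03, 0.97, 1.00]
flags = out_of_control(baseline, [1.01, 1.25])  # 1.25 breaches the limits
```

Charting model coefficients rather than raw measurements is the key shift described in the abstract: a flagged coefficient points at a change in the process behavior, not just at one noisy data point.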

  17. Development of a multi-behavioral mHealth app for women smokers

    PubMed Central

    Armin, Julie; Johnson, Thienne; Hingle, Melanie; Giacobbi, Peter; Gordon, Judith S.

    2017-01-01

    Objective This paper describes the development of the See Me Smoke-Free™ (SMSF) mobile health application, which uses guided imagery to support women in smoking cessation, eating a healthy diet, and increasing physical activity. Materials and Methods Focus group discussions, with member checks, were conducted to refine the intervention content and app user interface. Data related to the context of app deployment were collected via user testing sessions and internal quality control testing, which identified and addressed functionality issues, content problems, and bugs. Results Interactive app features include playback of guided imagery audio files, notification pop-ups, award-sharing on social media, a tracking calendar, content resources, and a direct call to the local tobacco quitline. Focus groups helped design the user interface, and identified several themes for incorporation into app content, including positivity, the rewards of smoking cessation, and the integrated benefits of maintaining a healthy lifestyle. User testing improved app functionality and usability on many Android phone models. Discussion Changes to the app content and function were made iteratively by the development team as a result of focus group and user testing. Despite extensive internal and user testing, unanticipated data collection and reporting issues emerged during deployment due to the variety of Android software and hardware, but also due to individual phone settings and use. Conclusion Focus group interviews helped thematically frame the intervention and maximize user engagement. Testing caught many bugs and usability concerns, but missed some of the data transmission problems encountered during deployment. PMID:28121240

  18. A procedure to evaluate environmental rehabilitation in limestone quarries.

    PubMed

    Neri, Ana Claudia; Sánchez, Luis Enrique

    2010-11-01

    A procedure to evaluate mine rehabilitation practices during the operational phase was developed and validated. It is based on a comparison of actually observed or documented practices with internationally recommended best practices (BP). A set of 150 BP statements was derived from international guides in order to establish the benchmark. The statements are arranged in six rehabilitation programs under three categories: (1) planning, (2) operational, and (3) management, corresponding to the adoption of the plan-do-check-act management systems model to mine rehabilitation. The procedure consists of (i) performing technical inspections guided by a series of field forms containing BP statements; (ii) classifying evidence in five categories; and (iii) calculating conformity indexes and levels. For testing and calibration purposes, the procedure was applied to nine limestone quarries and conformity indexes were calculated for the rehabilitation programs in each quarry. Most quarries featured poor planning practices, operational practices reached high conformity levels in 50% of the cases and management practices scored moderate conformity. Despite all quarries being ISO 14001 certified, their management systems pay low attention to issues pertaining to land rehabilitation and biodiversity. The best results were achieved by a quarry whose expansion was recently submitted to the environmental impact assessment process, suggesting that public scrutiny may play a positive role in enhancing rehabilitation practices. Conformity indexes and levels can be used to chart the evolution of rehabilitation practices at regular intervals, to establish corporate goals, and to communicate with stakeholders. Copyright 2010 Elsevier Ltd. All rights reserved.

  18. CCM-C, Collins checks the middeck experiment

    NASA Image and Video Library

    1999-07-24

    S93-E-5016 (23 July 1999) --- Astronaut Eileen M. Collins, mission commander, checks on an experiment on Columbia's middeck during Flight Day 1 activity. The experiment is called the Cell Culture Model, Configuration C. Objectives of it are to validate cell culture models for muscle, bone and endothelial cell biochemical and functional loss induced by microgravity stress; to evaluate cytoskeleton, metabolism, membrane integrity and protease activity in target cells; and to test tissue loss pharmaceuticals for efficacy. The photo was recorded with an electronic still camera (ESC).

  20. Class Model Development Using Business Rules

    NASA Astrophysics Data System (ADS)

    Skersys, Tomas; Gudas, Saulius

    New developments in the area of computer-aided system engineering (CASE) greatly improve processes of the information systems development life cycle (ISDLC). Much effort is put into quality improvement issues, but IS development projects still suffer from the poor quality of models produced during the system analysis and design cycles. To some degree, the quality of models developed using CASE tools can be assured by various automated model comparison and syntax checking procedures. It is also reasonable to check these models against business domain knowledge, but the domain knowledge stored in the repository of a CASE tool (enterprise model) is insufficient (Gudas et al. 2004). Involvement of business domain experts in these processes is complicated, because non-IT people often find it difficult to understand models developed by IT professionals using some specific modeling language.

  1. Prevalence of Tooth Shade and its Correlation with Skin Colour - A Cross-sectional Study.

    PubMed

    Vadavadagi, Suneel V; Kumari, K V Halini; Choudhury, Gopal Krishna; Vilekar, Abhishek Madhukar; Das, Sitansu Sekhar; Jena, Debkant; Kataraki, Bharat; B L, Bhavana

    2016-02-01

    Aesthetics has become an important issue in modern society. Tooth shade is one of the factors determining aesthetics. Studies have revealed that tooth shade is influenced by age, gender, eye colour, skin colour and other factors. The present study aimed to assess the prevalence of tooth shade and its correlation with skin colour. A total of 300 subjects aged 18-20 years were evaluated for tooth shade using the Vitapan 3D shade guide. Anterior teeth were checked under natural light, and facial skin colour was assessed using Lakme liquid foundation make-up as a shade guide. Data were analysed using the chi-square test and Spearman's correlation. Out of 300 students, 114 (38.00%) had A2 tooth shade; the least prevalent tooth shade among the Chitradurga population was C1 (4.00%). There was a positive correlation between tooth shade and skin colour, which was statistically significant (p < 0.05). The most prevalent tooth shade among the Chitradurga population was A2 and the least was C1. There was a significant correlation between tooth shade and skin colour, with lighter-skinned subjects having lighter tooth shades; hence, skin colour can be used as a guide for shade selection.

  2. [Proposal and preliminary validation of a check-list for the assessment of occupational exposure to repetitive movements of the upper limbs].

    PubMed

    Colombini, D; Occhipinti, E; Cairoli, S; Baracco, A

    2000-01-01

    Over the last few years the Authors developed and implemented a specific check-list for a "rapid" assessment of occupational exposure to repetitive movements and exertion of the upper limbs, after verifying the lack of such a tool that was also coherent with the latest data in the specialized literature. The check-list model and the relevant application procedures are presented and discussed. The check-list was applied by trained factory technicians in 46 different working tasks in which the OCRA method previously proposed by the Authors was also applied by independent observers. Since 46 pairs of observation data were available (OCRA index and check-list score), it was possible to verify, via parametric and non-parametric statistical tests, the level of association between the two variables and to find the best simple regression function (exponential in this case) of the OCRA index from the check-list score. By means of this function, which was highly significant (R2 = 0.98, p < 0.0000), the values of the check-list score that best corresponded to the critical values (for exposure assessment) of the OCRA index were identified. Correspondence values between the OCRA index and the check-list score were then established with a view to classifying exposure levels. The check-list "critical" scores were established considering the need to obtain, in borderline cases, a potential overestimation of the exposure level. On the basis of practical application experience and the preliminary validation results, recommendations are made and the caution needed in the use of the check-list is indicated.
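    The exponential regression step above can be illustrated with a short sketch: fitting OCRA = a·exp(b·score) by ordinary least squares on log(OCRA). The data points below are invented for demonstration; the paper's 46 observation pairs are not reproduced here.

```python
import math

# Log-linear least-squares fit of an assumed exponential relation
# OCRA = a * exp(b * score). Data below are illustrative, not the paper's.
def fit_exponential(scores, ocra):
    n = len(scores)
    ys = [math.log(y) for y in ocra]
    mx = sum(scores) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(scores, ys)) / \
        sum((x - mx) ** 2 for x in scores)
    a = math.exp(my - b * mx)
    return a, b

scores = [5, 10, 15, 20]
ocra = [1.5, 3.0, 6.0, 12.0]        # exactly exponential: doubles every 5 points
a, b = fit_exponential(scores, ocra)
predicted = a * math.exp(b * 12)    # interpolate the OCRA index at score 12
```

    Inverting such a fitted curve is what lets "critical" OCRA values be mapped back to the corresponding check-list scores.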

  3. Socioeconomic differences in health check-ups and medically certified sickness absence: a 10-year follow-up among middle-aged municipal employees in Finland.

    PubMed

    Piha, Kustaa; Sumanen, Hilla; Lahelma, Eero; Rahkonen, Ossi

    2017-04-01

    There is contradictory evidence on the association between health check-ups and future morbidity. Among the general population, those with a high socioeconomic position participate more often in health check-ups. The main aims of this study were to analyse whether attendance at health check-ups is socioeconomically patterned and affects sickness absence over a 10-year follow-up. This register-based follow-up study included municipal employees of the City of Helsinki. 13 037 employees were invited to an age-based health check-up during 2000-2002, with a 62% attendance rate. Education, occupational class and individual income were used to measure socioeconomic position. Medically certified sickness absence of 4 days or more was measured and controlled for at baseline and used as an outcome over follow-up. The mean follow-up time was 7.5 years. Poisson regression was used. Men and employees with lower socioeconomic position participated more actively in health check-ups. Among women, non-attendance at the health check-up predicted higher sickness absence during follow-up (relative risk = 1.26, 95% CI 1.17 to 1.37) in the fully adjusted model. Health check-ups were not effective in reducing socioeconomic differences in sickness absence. Age-based health check-ups reduced subsequent sickness absence and should be promoted, and attendance should be as high as possible. Contextual factors need to be taken into account when applying the results in interventions in other settings. Published by the BMJ Publishing Group Limited.
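    The quantity the study's Poisson model estimates is a rate ratio with a confidence interval. As a rough illustration only, a crude (unadjusted) rate ratio and its Wald-type 95% CI can be computed from event counts and person-years; the counts below are invented and the study's adjusted estimate cannot be reproduced this way.

```python
import math

# Crude rate ratio with a Wald 95% CI on the log scale. Inputs are event
# counts and person-years for the exposed and reference groups (illustrative).
def rate_ratio(events_exposed, py_exposed, events_ref, py_ref, z=1.96):
    rr = (events_exposed / py_exposed) / (events_ref / py_ref)
    se = math.sqrt(1 / events_exposed + 1 / events_ref)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi
```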

  4. Comparison of ultrasound-guided supraclavicular block according to the various volumes of local anesthetic

    PubMed Central

    Kim, Seok Kon; Kang, Bong Jin; Kwon, Min A; Song, Jae Gyok; Jeon, Soo Mi

    2013-01-01

    Background Ultrasound guidance in regional nerve blocks has recently been introduced and is gaining popularity. The ultrasound-guided supraclavicular block has many advantages, including a higher success rate, faster onset time, and fewer complications. The aim of this study was to examine the clinical data according to the varied volumes of local anesthetic used in the ultrasound-guided supraclavicular block. Methods One hundred twenty patients were randomized into four groups according to the local anesthetic volume used: Group 35 (n = 30), Group 30 (n = 30), Group 25 (n = 30), and Group 20 (n = 30). Supraclavicular blocks were performed with 1% mepivacaine 35 ml, 30 ml, 25 ml, and 20 ml, respectively. The success rate, onset time, and complications were recorded and evaluated. Results The success rate was lower in Group 20 (66.7%) than in Group 35 (96.7%) (P < 0.05). The average onset times of Group 35, Group 30, Group 25, and Group 20 were 14.3 ± 6.9 min, 13.6 ± 4.5 min, 16.7 ± 4.6 min, and 16.5 ± 3.7 min, respectively, with no significant differences. The incidence of Horner's syndrome was higher in Group 35 (P < 0.05). Conclusions We achieved a 90% success rate with 30 ml of 1% mepivacaine; we therefore suggest a 30 ml local anesthetic volume for the ultrasound-guided supraclavicular block. PMID:23814648

  5. Big Data Analytics for Prostate Radiotherapy.

    PubMed

    Coates, James; Souhami, Luis; El Naqa, Issam

    2016-01-01

    Radiation therapy is a first-line treatment option for localized prostate cancer, and radiation-induced normal tissue damage is often the main limiting factor for modern radiotherapy regimens. Conversely, under-dosing of target volumes in an attempt to spare adjacent healthy tissues limits the likelihood of achieving local, long-term control. Thus, the ability to generate personalized data-driven risk profiles for radiotherapy outcomes would provide valuable prognostic information to help guide both clinicians and patients alike. Big data applied to radiation oncology promises to deliver better understanding of outcomes by harvesting and integrating heterogeneous data types, including patient-specific clinical parameters, treatment-related dose-volume metrics, and biological risk factors. When taken together, such variables make up the basis for a multi-dimensional space (the "RadoncSpace") in which the presented modeling techniques search in order to identify significant predictors. Herein, we review outcome modeling and big data-mining techniques for both tumor control and radiotherapy-induced normal tissue effects. We apply many of the presented modeling approaches to a cohort of hypofractionated prostate cancer patients, taking into account different data types and a large heterogeneous mix of physical and biological parameters. Cross-validation techniques are also reviewed for the refinement of the proposed framework architecture and checking individual model performance. We conclude by considering advanced modeling techniques that borrow concepts from big data analytics, such as machine learning and artificial intelligence, before discussing the potential future impact of systems radiobiology approaches.
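    The cross-validation idea mentioned for checking individual model performance can be sketched minimally. This is a generic k-fold loop, not the review's framework; the "model" below is a trivial mean predictor standing in for a real outcome model.

```python
# Minimal k-fold cross-validation: partition indices into k folds, train on
# k-1 folds, score on the held-out fold, and average the squared errors.
def kfold_indices(n, k):
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

def cv_mse(y, k=5):
    """Cross-validated MSE of a mean predictor (a stand-in for a real model)."""
    errs = []
    for train, test in kfold_indices(len(y), k):
        mean = sum(y[j] for j in train) / len(train)
        errs += [(y[j] - mean) ** 2 for j in test]
    return sum(errs) / len(errs)
```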

  6. Big Data Analytics for Prostate Radiotherapy

    PubMed Central

    Coates, James; Souhami, Luis; El Naqa, Issam

    2016-01-01

    Radiation therapy is a first-line treatment option for localized prostate cancer, and radiation-induced normal tissue damage is often the main limiting factor for modern radiotherapy regimens. Conversely, under-dosing of target volumes in an attempt to spare adjacent healthy tissues limits the likelihood of achieving local, long-term control. Thus, the ability to generate personalized data-driven risk profiles for radiotherapy outcomes would provide valuable prognostic information to help guide both clinicians and patients alike. Big data applied to radiation oncology promises to deliver better understanding of outcomes by harvesting and integrating heterogeneous data types, including patient-specific clinical parameters, treatment-related dose–volume metrics, and biological risk factors. When taken together, such variables make up the basis for a multi-dimensional space (the “RadoncSpace”) in which the presented modeling techniques search in order to identify significant predictors. Herein, we review outcome modeling and big data-mining techniques for both tumor control and radiotherapy-induced normal tissue effects. We apply many of the presented modeling approaches to a cohort of hypofractionated prostate cancer patients, taking into account different data types and a large heterogeneous mix of physical and biological parameters. Cross-validation techniques are also reviewed for the refinement of the proposed framework architecture and checking individual model performance. We conclude by considering advanced modeling techniques that borrow concepts from big data analytics, such as machine learning and artificial intelligence, before discussing the potential future impact of systems radiobiology approaches. PMID:27379211

  7. Litho hotspots fixing using model based algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Meili; Yu, Shirui; Mao, Zhibiao; Shafee, Marwa; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe; Hu, Xinyi; Wan, Qijian; Du, Chunshan

    2017-04-01

    As technology advances, IC designs are getting more sophisticated, thus it becomes more critical and challenging to fix printability issues in the design flow. Running lithography checks before tapeout is now mandatory for designers, which creates a need for more advanced and easy-to-use techniques for fixing hotspots found after lithographic simulation without creating a new design rule checking (DRC) violation or generating a new hotspot. This paper presents a new methodology for fixing hotspots on layouts while using the same engine currently used to detect the hotspots. The fix is achieved by applying minimum movement of edges causing the hotspot, with consideration of DRC constraints. The fix is internally simulated by the lithographic simulation engine to verify that the hotspot is eliminated and that no new hotspot is generated by the new edge locations. Hotspot fix checking is enhanced by adding DRC checks to the litho-friendly design (LFD) rule file to guarantee that any fix options that violate DRC checks are removed from the output hint file. This extra checking eliminates the need to re-run both DRC and LFD checks to ensure the change successfully fixed the hotspot, which saves time and simplifies the designer's workflow. This methodology is demonstrated on industrial designs, where the fixing rate of single and dual layer hotspots is reported.
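    The fixing idea of moving an edge minimally while respecting DRC constraints can be reduced to a toy one-dimensional sketch. Everything here is illustrative: the litho and DRC thresholds, the single-neighbor geometry, and the function name are assumptions, not the paper's engine.

```python
# Toy 1-D edge fix: if the gap to the hotspot-causing neighbor is below an
# assumed litho threshold, shift the edge away by the minimum amount, then
# reject the fix if it violates the DRC minimum spacing to the other neighbor.
def fix_edge(edge, hotspot_neighbor, drc_neighbor, litho_min, drc_min):
    gap = abs(edge - hotspot_neighbor)
    if gap >= litho_min:
        return edge                      # no hotspot, nothing to fix
    needed = litho_min - gap             # minimum movement that clears it
    direction = 1 if edge > hotspot_neighbor else -1
    new_edge = edge + direction * needed # move away from the hotspot
    if abs(new_edge - drc_neighbor) < drc_min:
        return None                      # fix would create a DRC violation
    return new_edge
```

    A real flow would then re-run the lithographic simulation on the moved edge, mirroring the paper's internal re-simulation step, before accepting the fix.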

  8. Model-Checking with Edge-Valued Decision Diagrams

    NASA Technical Reports Server (NTRS)

    Roux, Pierre; Siminiceanu, Radu I.

    2010-01-01

    We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library, along with state-of-the-art algorithms for building the transition relation and the state space of discrete state systems. We provide efficient algorithms for manipulating EVMDDs and give upper bounds on the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention of representing in one formalism the best techniques available at the moment across a spectrum of existing tools: EVMDDs for encoding arithmetic expressions, identity-reduced MDDs for representing the transition relation, and the saturation algorithm for reachability analysis. We compare our new symbolic model checking EVMDD library with the widely used CUDD package and show that, in many cases, our tool is several orders of magnitude faster than CUDD.
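    The core encoding idea can be shown in a few lines: in an (additive) edge-valued decision diagram, a function value is the sum of edge weights along one root-to-terminal path, so structurally identical subfunctions can share nodes. The class layout below is a simplified sketch, not the report's library; it encodes f(x1, x2) = 2·x1 + 3·x2 over the domain {0, 1, 2}.

```python
# Minimal additive EVMDD sketch: each node stores, per domain value of its
# variable, an (edge_value, child) pair; evaluation sums edge values on the
# path from the root to the terminal.
class EVMDDNode:
    def __init__(self, var, children):
        self.var = var
        self.children = children  # list of (edge_value, child), one per value

TERMINAL = None  # single shared terminal

def evaluate(root_value, node, assignment):
    total = root_value
    while node is not TERMINAL:
        edge_value, child = node.children[assignment[node.var]]
        total += edge_value
        node = child
    return total

# f(x1, x2) = 2*x1 + 3*x2: the x2 node is shared by all x1 edges, which is
# where EVMDDs gain compactness over term-by-term representations.
x2 = EVMDDNode("x2", [(0, TERMINAL), (3, TERMINAL), (6, TERMINAL)])
x1 = EVMDDNode("x1", [(0, x2), (2, x2), (4, x2)])
```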

  9. DAB user's guide

    NASA Technical Reports Server (NTRS)

    Trosin, J.

    1985-01-01

    Use of the Display AButments (DAB) program, which plots PAN AIR geometries, is presented. The DAB program creates hidden line displays of PAN AIR geometries and labels specified geometry components, such as abutments, networks, and network edges. It is used to alleviate the very time consuming and error prone abutment list checking phase of developing a valid PAN AIR geometry, and therefore represents a valuable tool for debugging complex PAN AIR geometry definitions. DAB is written in FORTRAN 77 and runs on a Digital Equipment Corporation VAX 11/780 under VMS. It utilizes a special color version of the SKETCH hidden line analysis routine.

  10. DARPA Advanced Cannon Propellant (ACP) Library User’s Guide. Appendix E. Patents Dealing with LP Gun Hardware

    DTIC Science & Technology

    1981-06-15


  11. Social-cognitive determinants of the tick check: a cross-sectional study on self-protective behavior in combatting Lyme disease.

    PubMed

    van der Heijden, Amy; Mulder, Bob C; Poortvliet, P Marijn; van Vliet, Arnold J H

    2017-11-25

    Performing a tick check after visiting nature is considered the most important preventive measure to avoid contracting Lyme disease. Checking the body for ticks after visiting nature is the only measure that can fully guarantee whether one has been bitten by a tick and provides the opportunity to remove the tick as soon as possible, thereby greatly reducing the chance of contracting Lyme disease. However, compliance to performing the tick check is low. In addition, most previous studies on determinants of preventive measures to avoid Lyme disease lack a clear definition and/or operationalization of the term "preventive measures". Those that do distinguish multiple behaviors including the tick check, fail to describe the systematic steps that should be followed in order to perform the tick check effectively. Hence, the purpose of this study was to identify determinants of systematically performing the tick check, based on social cognitive theory. A cross-sectional self-administered survey questionnaire was filled out online by 508 respondents (M age = 51.7, SD = 16.0; 50.2% men; 86.4% daily or weekly nature visitors). Bivariate correlations and multivariate regression analyses were conducted to identify associations between socio-cognitive determinants (i.e. concepts related to humans' intrinsic and extrinsic motivation to perform certain behavior) and the tick check, and between socio-cognitive determinants and the proximal goal to do the tick check. The full regression model explained 28% of the variance in doing the tick check. Results showed that performing the tick check was associated with proximal goal (β = .23, p < 0.01), self-efficacy (β = .22, p < 0.01), self-evaluative outcome expectations (β = .21, p < 0.01), descriptive norm (β = .16, p < 0.01), and experience (β = .13, p < 0.01). Our study is among the first to examine the determinants of systematic performance of the tick check, using an extended version of social cognitive theory to identify determinants. Based on the results, a number of practical recommendations can be made to promote the performance of the tick check.

  12. Prediction Interval Development for Wind-Tunnel Balance Check-Loading

    NASA Technical Reports Server (NTRS)

    Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.

    2014-01-01

    Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides expected upper and lower bounds on the balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval method of calculation and a case study demonstrating its use are provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.
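    The structure of such a prediction interval can be shown for the simple one-regressor case. This is a textbook sketch, not the paper's multivariate balance-calibration method: for a linear fit, the half-width scales with t·s·sqrt(1 + 1/n + (x − x̄)²/Sxx), and a confirmation point is accepted when its measured load falls inside the interval. The t critical value is passed in rather than computed.

```python
import math

# Prediction interval for a new observation from a simple linear calibration
# fit y = a + b*x. t_crit is the two-sided t critical value for n-2 d.o.f.
def prediction_interval(x, y, x_new, t_crit):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s = math.sqrt(sum(r * r for r in resid) / (n - 2))   # residual std. error
    half = t_crit * s * math.sqrt(1 + 1 / n + (x_new - xbar) ** 2 / sxx)
    yhat = a + b * x_new
    return yhat - half, yhat + half
```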

  13. Minimally invasive positioning robot system of femoral neck hollow screw implants based on x-ray error correction

    NASA Astrophysics Data System (ADS)

    Zou, Yunpeng; Xu, Ying; Hu, Lei; Guo, Na; Wang, Lifeng

    2017-01-01

    To address the high failure rate, high radiation dose, and poor positioning accuracy of traditional femoral neck surgery, this article develops a new positioning robot system for femoral neck hollow screw implants based on X-ray error correction, building on the X-ray perspective principle and the motion principles of the 6-DOF (degree of freedom) serial robot UR (Universal Robots). Compared with computer-assisted navigation systems, this system offers better positioning accuracy and simpler operation. In addition, because it requires no extra visual tracking equipment, it substantially reduces cost. During surgery, the doctor plans the operation path and the pose of the mark needle according to anteroposterior and lateral X-ray images of the patient. The pixel ratio is then calculated as the ratio of the actual length of the mark line to its length in the image. From the relative position between the operation path and the guide pin, and the fixed relationship between the guide pin and the UR robot, the required robot motion is computed, and the UR robot drives the positioning guide pin toward the operation path. The doctor then checks whether the positioning guide pin coincides with the planned path; if not, the previous steps are repeated until they coincide, which completes the positioning. To verify positioning accuracy, an error analysis was performed on thirty bone-model experiments. The results show that the motion accuracy of the UR robot is 0.15 mm and the integral error is within 0.8 mm. To verify clinical feasibility, three clinical cases were analysed. Over the whole positioning process, the X-ray exposure time is 2-3 s, the number of fluoroscopic images is 3-5, and the total positioning time is 7-10 min. The results show that the system can accurately complete femoral neck positioning surgery while greatly reducing the X-ray exposure of medical staff and patients, indicating significant value for clinical application.
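    The pixel-ratio step described above amounts to a simple scale calibration, sketched below with invented numbers (a mark needle of known physical length measured in image pixels); the function names are illustrative, not from the paper.

```python
# Scale calibration from a mark of known physical length: the pixel ratio
# converts image-space offsets (pixels) into physical robot motion (mm).
def pixel_ratio(mark_length_mm, mark_length_px):
    return mark_length_mm / mark_length_px

def image_offset_to_mm(offset_px, ratio):
    return offset_px * ratio

ratio = pixel_ratio(50.0, 200.0)        # e.g. a 50 mm needle spans 200 pixels
move = image_offset_to_mm(32.0, ratio)  # a 32 px offset maps to 8.0 mm
```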

  14. DBS Programming: An Evolving Approach for Patients with Parkinson's Disease.

    PubMed

    Wagle Shukla, Aparna; Zeilman, Pam; Fernandez, Hubert; Bajwa, Jawad A; Mehanna, Raja

    2017-01-01

    Deep brain stimulation (DBS) surgery is a well-established therapy for control of motor symptoms in Parkinson's disease. Despite appropriate targeting and accurate placement of the DBS lead, thorough and efficient programming is critical for a successful clinical outcome. DBS programming is a time-consuming and laborious manual process. The current approach involves the use of general guidelines covering determination of the lead type, electrode configuration, impedance check, and battery check; however, there are no validated and well-established programming protocols. In this review, we discuss the current practice and recent advances in DBS programming, including the use of interleaving, fractionated current, directional steering of current, and novel DBS pulses. These technological improvements are focused on achieving more efficient control of clinical symptoms with the fewest possible side effects. Other promising advances include the introduction of computer-guided programming, which will likely improve the efficiency of programming for clinicians, and the possibility of remote Internet-based programming, which will improve patients' access to DBS care.

  15. DBS Programming: An Evolving Approach for Patients with Parkinson's Disease

    PubMed Central

    Zeilman, Pam; Fernandez, Hubert; Bajwa, Jawad A.

    2017-01-01

    Deep brain stimulation (DBS) surgery is a well-established therapy for control of motor symptoms in Parkinson's disease. Despite appropriate targeting and accurate placement of the DBS lead, thorough and efficient programming is critical for a successful clinical outcome. DBS programming is a time-consuming and laborious manual process. The current approach involves the use of general guidelines covering determination of the lead type, electrode configuration, impedance check, and battery check; however, there are no validated and well-established programming protocols. In this review, we discuss the current practice and recent advances in DBS programming, including the use of interleaving, fractionated current, directional steering of current, and novel DBS pulses. These technological improvements are focused on achieving more efficient control of clinical symptoms with the fewest possible side effects. Other promising advances include the introduction of computer-guided programming, which will likely improve the efficiency of programming for clinicians, and the possibility of remote Internet-based programming, which will improve patients' access to DBS care. PMID:29147598

  16. Dealing with complex and ill-structured problems: results of a Plan-Do-Check-Act experiment in a business engineering semester

    NASA Astrophysics Data System (ADS)

    Riis, Jens Ove; Achenbach, Marlies; Israelsen, Poul; Kyvsgaard Hansen, Poul; Johansen, John; Deuse, Jochen

    2017-07-01

    Challenged by increased globalisation and fast technological development, we carried out an experiment in the third semester of a global business engineering programme aimed at identifying conditions for training students in dealing with complex and ill-structured problems of forming a new business. As this includes a fuzzy front end, learning cannot be measured in traditional, quantitative terms; therefore, we have explored the use of reflection to convert tacit knowledge to explicit knowledge. The experiment adopted a Plan-Do-Check-Act approach and concluded with developing a plan for new learning initiatives in the subsequent year's semester. The findings conclude that (1) problem-based learning develops more competencies than are ordinarily measured at the examination; in particular, the social/communication and personal competencies are developed; (2) students are capable of dealing with a complex and ambiguous problem if properly guided, and four conditions for this were identified; (3) most students are not conscious of their learning, but are able to reflect if properly encouraged; and (4) improving engineering education should be considered an organisational learning process.

  17. 75 FR 43801 - Airworthiness Directives; Eurocopter France (ECF) Model EC225LP Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-27

    ... time. Also, we use inspect rather than check when referring to an action required by a mechanic as... the various levels of government. Therefore, I certify this AD: 1. Is not a ``significant regulatory... compliance time. Also, we use inspect rather than check when referring to an action required by a mechanic as...

  18. Mandatory Identification Bar Checks: How Bouncers Are Doing Their Job

    ERIC Educational Resources Information Center

    Monk-Turner, Elizabeth; Allen, John; Casten, John; Cowling, Catherine; Gray, Charles; Guhr, David; Hoofnagle, Kara; Huffman, Jessica; Mina, Moises; Moore, Brian

    2011-01-01

    The behavior of bouncers at on-site establishments that served alcohol was observed. Our aim was to better understand how bouncers went about their job when the bar had a mandatory policy to check the identification of all customers. Utilizing an ethnographic decision model, we found that bouncers were significantly more likely to card customers that…

  19. Enhancing Classroom Management Using the Classroom Check-up Consultation Model with In-Vivo Coaching and Goal Setting Components

    ERIC Educational Resources Information Center

    Kleinert, Whitney L.; Silva, Meghan R.; Codding, Robin S.; Feinberg, Adam B.; St. James, Paula S.

    2017-01-01

    Classroom management is essential to promote learning in schools, and as such it is imperative that teachers receive adequate support to maximize their competence implementing effective classroom management strategies. One way to improve teachers' classroom managerial competence is through consultation. The Classroom Check-Up (CCU) is a structured…

  20. 76 FR 18964 - Airworthiness Directives; Costruzioni Aeronautiche Tecnam srl Model P2006T Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-06

    ... Landing Gear retraction/extension ground checks performed on the P2006T, a loose Seeger ring was found on... condition for the specified products. The MCAI states: During Landing Gear retraction/extension ground... retraction/extension ground checks performed on the P2006T, a loose Seeger ring was found on the nose landing...

  1. 78 FR 69987 - Airworthiness Directives; Erickson Air-Crane Incorporated Helicopters (Type Certificate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-22

    ... to require recurring checks of the Blade Inspection Method (BIM) indicator on each blade to determine whether the BIM indicator is signifying that the blade pressure may have been compromised by a blade crack... check procedures for BIM blades installed on the Model S-64E and S-64F helicopters. Several blade spars...

  2. Motivational Interviewing for Effective Classroom Management: The Classroom Check-Up. Practical Intervention in the Schools Series

    ERIC Educational Resources Information Center

    Reinke, Wendy M.; Herman, Keith C.; Sprick, Randy

    2011-01-01

    Highly accessible and user-friendly, this book focuses on helping K-12 teachers increase their use of classroom management strategies that work. It addresses motivational aspects of teacher consultation that are essential, yet often overlooked. The Classroom Check-Up is a step-by-step model for assessing teachers' organizational, instructional,…

  3. 75 FR 63045 - Airworthiness Directives; BAE SYSTEMS (OPERATIONS) LIMITED Model BAe 146 and Avro 146-RJ Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-14

    ... the fitting and wing structure. Checking the nuts with a suitable torque spanner to the specifications in the torque figures shown in Table 2. of the Accomplishment Instructions of BAE SYSTEMS (OPERATIONS... installed, and Doing either an ultrasonic inspection for damaged bolts or torque check of the tension bolts...

  4. 76 FR 13069 - Airworthiness Directives; BAE Systems (Operations) Limited Model ATP Airplanes; BAE Systems...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-10

    ..., an operator found an aileron trim tab hinge pin that had migrated sufficiently to cause a rubbing.... Recently, during a walk round check, an operator found an aileron trim tab hinge pin that had migrated... walk round check, an operator found an aileron trim tab hinge pin that had migrated sufficiently to...

  5. Model Checking a Byzantine-Fault-Tolerant Self-Stabilizing Protocol for Distributed Clock Synchronization Systems

    NASA Technical Reports Server (NTRS)

    Malekpour, Mahyar R.

    2007-01-01

    This report presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV) [SMV]. The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space. Also, additional innovative state space reduction techniques are introduced that can be used in future verification efforts applied to this and other protocols.
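    The report's verification uses SMV; as a toy illustration of what any explicit-state model checker does underneath, the sketch below runs a breadth-first search over the reachable state space of Peterson's two-process mutual-exclusion protocol and checks an invariant, returning a counterexample state if one is found. The protocol choice and state encoding are illustrative assumptions, unrelated to the clock-synchronization model itself.

```python
from collections import deque

# Generic invariant check by BFS over reachable states.
def check_invariant(initial, successors, invariant):
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return False, s              # counterexample state found
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return True, None                    # invariant holds on all reachable states

# State: (pc0, pc1, flag0, flag1, turn); pc in {0: idle, 1: waiting, 2: critical}.
def successors(state):
    pc = [state[0], state[1]]
    flag = [state[2], state[3]]
    turn = state[4]
    out = []
    for i in (0, 1):
        j = 1 - i
        p, f, t = pc[:], flag[:], turn
        if pc[i] == 0:                   # request entry: raise flag, yield turn
            f[i], t = True, j
            p[i] = 1
        elif pc[i] == 1:                 # wait until other's flag down or our turn
            if not flag[j] or turn == i:
                p[i] = 2
        else:                            # leave the critical section
            f[i] = False
            p[i] = 0
        out.append((p[0], p[1], f[0], f[1], t))
    return out
```

    Here mutual exclusion (never both processes in the critical section) holds on every reachable state, while a deliberately false invariant such as "process 0 never enters the critical section" yields a concrete counterexample, mirroring how model checkers report violations.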

  6. Stochastic Local Search for Core Membership Checking in Hedonic Games

    NASA Astrophysics Data System (ADS)

    Keinänen, Helena

    Hedonic games have emerged as an important tool in economics and show promise as a useful formalism for modeling multi-agent coalition formation in AI as well as group formation in social networks. We consider the coNP-complete problem of core membership checking in hedonic coalition formation games. No previous algorithms to tackle this problem have been presented. In this work, we fill this gap by developing two stochastic local search algorithms for core membership checking in hedonic games. We demonstrate the usefulness of the algorithms by showing experimentally that they find solutions efficiently, particularly for large agent societies.
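
    As a rough illustration of the approach (not the authors' algorithm), the sketch below searches for a blocking coalition in a tiny hedonic game; the agents, the utility function `u`, and the move-acceptance rule are all invented for the example. A coalition structure is in the core exactly when no blocking coalition exists:

```python
import random

# Illustrative hedonic game: u(i, S) is the utility agent i gets in
# coalition S. Agents like larger coalitions, but agent 3 dislikes 0.
AGENTS = (0, 1, 2, 3)

def u(i, coalition):
    if i == 3 and 0 in coalition:
        return -1
    return len(coalition)

def blocks(coalition, partition):
    """True if every member strictly prefers `coalition` to its
    current coalition in the partition."""
    current = {i: c for c in partition for i in c}
    return all(u(i, coalition) > u(i, current[i]) for i in coalition)

def find_blocking_coalition(partition, steps=500, seed=1):
    """Stochastic local search: flip one agent's membership at a time,
    keeping moves that do not decrease the number of members who
    strictly prefer the candidate coalition."""
    rng = random.Random(seed)
    current = {i: c for c in partition for i in c}
    score = lambda S: sum(u(i, S) > u(i, current[i]) for i in S)
    S = {rng.choice(AGENTS)}
    for _ in range(steps):
        if S and blocks(frozenset(S), partition):
            return frozenset(S)     # partition is NOT in the core
        i = rng.choice(AGENTS)
        T = S ^ {i}                 # flip agent i in or out
        if T and score(T) >= score(S):
            S = T
    return None  # no blocker found within the budget
```

    For the singleton partition `[{0}, {1}, {2}, {3}]`, coalition {1, 2} blocks (each member gets utility 2 instead of 1), so the search should report that the partition is not in the core.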

  7. SU-E-T-310: Targeting Safety Improvements Through Analysis of Near-Miss Error Detection Points in An Incident Learning Database

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novak, A; Nyflot, M; Sponseller, P

    2014-06-01

    Purpose: Radiation treatment planning involves a complex workflow that can make safety improvement efforts challenging. This study utilizes an incident reporting system to identify the detection points of near-miss errors, in order to guide our departmental safety improvement efforts. Previous studies have examined where errors arise, but not where they are detected or their patterns. Methods: 1377 incidents were analyzed from a departmental near-miss error reporting system from 3/2012-10/2013. All incidents were prospectively reviewed weekly by a multi-disciplinary team and assigned a near-miss severity score ranging from 0-4 reflecting potential harm (no harm to critical). A 98-step consensus workflow was used to determine the origination and detection points of near-miss errors, categorized into 7 major steps (patient assessment/orders, simulation, contouring/treatment planning, pre-treatment plan checks, therapist/on-treatment review, post-treatment checks, and equipment issues). Categories were compared using ANOVA. Results: In the 7-step workflow, 23% of near-miss errors were detected within the same step in the workflow, an additional 37% were detected by the next step, and 23% were detected two steps downstream. Errors detected further from origination were more severe (p<.001; Figure 1). The most common source of near-miss errors was treatment planning/contouring, with 476 near misses (35%). Of those 476, only 72 (15%) were found before leaving treatment planning, 213 (45%) were found at physics plan checks, and 191 (40%) were caught at the therapist pre-treatment chart review or on portal imaging. Errors that passed through physics plan checks and were detected by therapists were more severe than other errors originating in contouring/treatment planning (1.81 vs 1.33, p<0.001).
    Conclusion: Errors caught by radiation treatment therapists tend to be more severe than errors caught earlier in the workflow, highlighting the importance of safety checks in dosimetry and physics. We are utilizing our findings to improve manual and automated checklists for dosimetry and physics.

  8. Multi-stage 3D-2D registration for correction of anatomical deformation in image-guided spine surgery

    NASA Astrophysics Data System (ADS)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Jacobson, M. W.; Goerres, J.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2017-06-01

    A multi-stage image-based 3D-2D registration method is presented that maps annotations in a 3D image (e.g. point labels annotating individual vertebrae in preoperative CT) to an intraoperative radiograph in which the patient has undergone non-rigid anatomical deformation due to changes in patient positioning or due to the intervention itself. The proposed method (termed msLevelCheck) extends a previous rigid registration solution (LevelCheck) to provide an accurate mapping of vertebral labels in the presence of spinal deformation. The method employs a multi-stage series of rigid 3D-2D registrations performed on sets of automatically determined and increasingly localized sub-images, with the final stage achieving a rigid mapping for each label to yield a locally rigid yet globally deformable solution. The method was evaluated first in a phantom study in which a CT image of the spine was acquired, followed by a series of 7 mobile radiographs with increasing degrees of deformation applied. Second, the method was validated using a clinical data set of patients exhibiting strong spinal deformation during thoracolumbar spine surgery. Registration accuracy was assessed using projection distance error (PDE) and failure rate (PDE > 20 mm, i.e. label registered outside the vertebra). The msLevelCheck method was able to register all vertebrae accurately for all cases of deformation in the phantom study, improving the maximum PDE of the rigid method from 22.4 mm to 3.9 mm. The clinical study demonstrated the feasibility of the approach in real patient data by accurately registering all vertebral labels in each case, eliminating all instances of failure encountered in the conventional rigid method. The multi-stage approach demonstrated accurate mapping of vertebral labels in the presence of strong spinal deformation. The msLevelCheck method maintains other advantageous aspects of the original LevelCheck method (e.g. compatibility with standard clinical workflow, large capture range, and robustness against mismatch in image content) and extends capability to cases exhibiting strong changes in spinal curvature.
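
    The evaluation metrics are straightforward to reproduce. The sketch below computes PDE and the failure rate for hypothetical (projected, ground-truth) label pairs; the coordinates are invented for illustration:

```python
import math

def projection_distance_error(predicted, truth):
    """Euclidean distance (in mm) between a projected 3D label and
    its ground-truth 2D annotation on the radiograph."""
    return math.dist(predicted, truth)

def failure_rate(pairs, threshold_mm=20.0):
    """Fraction of labels whose PDE exceeds the threshold, i.e. the
    label registered outside the vertebra."""
    errors = [projection_distance_error(p, t) for p, t in pairs]
    return sum(e > threshold_mm for e in errors) / len(errors)

# hypothetical registration results: (predicted, ground truth) in mm
pairs = [((10.0, 12.0), (11.0, 12.5)),   # PDE ~ 1.1 mm
         ((40.0, 55.0), (18.0, 50.0))]   # PDE ~ 22.6 mm -> failure
```

    With these two pairs, `failure_rate(pairs)` is 0.5: one of the two labels lands more than 20 mm from its annotation.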

  9. Robust check loss-based variable selection of high-dimensional single-index varying-coefficient model

    NASA Astrophysics Data System (ADS)

    Song, Yunquan; Lin, Lu; Jian, Ling

    2016-07-01

    The single-index varying-coefficient model is an important mathematical tool for modeling nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric and parametric components. Under defined regularity conditions, with an appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, because the check loss function is robust to outliers in finite samples, the proposed variable selection method is more robust than those based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
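
    The robustness argument rests on the check (pinball) loss, which grows linearly rather than quadratically in the residual, so a single outlier contributes proportionally rather than overwhelmingly. A minimal sketch, with the quantile level written as `tau`:

```python
def check_loss(u, tau):
    """Check (pinball) loss rho_tau(u) = u * (tau - I(u < 0)).
    Positive and negative residuals are penalised asymmetrically,
    and the loss grows linearly in |u|, which is what makes it
    robust to outliers compared with the squared-error loss."""
    return u * (tau - (1 if u < 0 else 0))

# tau = 0.5 reduces to half the absolute loss (median regression)
check_loss(2.0, 0.5)    # 1.0
check_loss(-2.0, 0.5)   # 1.0
# tau = 0.9: under-prediction (u > 0) costs 9x over-prediction
check_loss(1.0, 0.9)    # 0.9
check_loss(-1.0, 0.9)   # ~0.1
```

    Minimising this loss over `tau = 0.5` yields median regression; other `tau` values target other conditional quantiles.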

  10. Model Checking Satellite Operational Procedures

    NASA Astrophysics Data System (ADS)

    Cavaliere, Federico; Mari, Federico; Melatti, Igor; Minei, Giovanni; Salvo, Ivano; Tronci, Enrico; Verzino, Giovanni; Yushtein, Yuri

    2011-08-01

    We present a model checking approach for the automatic verification of satellite operational procedures (OPs). Building a model for a system as complex as a satellite is a hard task. We overcome this obstacle by using a suitable simulator (SIMSAT) for the satellite. Our approach aims at improving OP quality assurance by automatic exhaustive exploration of all possible simulation scenarios. Moreover, our solution decreases OP verification costs by using a model checker (CMurphi) to automatically drive the simulator. We model OPs as user-executed programs that observe the simulator telemetries and send telecommands to the simulator. To assess the feasibility of our approach we present experimental results on a simple but meaningful scenario. Our results show that we can save up to 90% of verification time.
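
    The idea of letting a checker drive a simulator can be sketched as follows. The `simulate` stub and its battery rule are invented stand-ins for SIMSAT and a real safety property; an actual setup would exchange telemetry and telecommands with the simulator rather than call a function:

```python
from itertools import product

# Hypothetical simulator stub: state is just a battery level,
# and telecommands alter it.
def simulate(commands):
    battery = 50
    for cmd in commands:
        if cmd == "CHARGE":
            battery = min(100, battery + 30)
        elif cmd == "TRANSMIT":
            battery -= 40
    return battery

def verify_op(alphabet, length, safe):
    """Exhaustively drive the simulator over every telecommand
    sequence up to the given length, the way a model checker would,
    returning a counterexample sequence if the safety check fails."""
    for n in range(length + 1):
        for seq in product(alphabet, repeat=n):
            if not safe(simulate(seq)):
                return seq
    return None

cex = verify_op(("CHARGE", "TRANSMIT"), 3, lambda b: b >= 0)
# cex == ("TRANSMIT", "TRANSMIT"): two transmissions drain the
# battery below zero, so this OP violates the safety property
```

    Exhaustiveness is what distinguishes this from ordinary simulation testing: every scenario up to the bound is explored.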

  11. Ultrasound guided double injection of blood into cisterna magna: a rabbit model for treatment of cerebral vasospasm.

    PubMed

    Chen, Yongchao; Zhu, Youzhi; Zhang, Yu; Zhang, Zixuan; Lian, Juan; Luo, Fucheng; Deng, Xuefei; Wong, Kelvin K L

    2016-02-06

    Double injection of blood into the cisterna magna using a rabbit model results in cerebral vasospasm. An unacceptably high mortality rate tends to limit the application of the model. Ultrasound guided puncture can provide real-time imaging guidance for the operation. The aim of this paper is to establish a safe and effective rabbit model of cerebral vasospasm after subarachnoid hemorrhage with the assistance of ultrasound medical imaging. A total of 160 New Zealand white rabbits were randomly divided into four groups of 40 each: (1) manual control group, (2) manual model group, (3) ultrasound guided control group, and (4) ultrasound guided model group. The subarachnoid hemorrhage was intentionally caused by double injection of blood into the cisterna magna. Basilar artery diameters were then measured using magnetic resonance angiography before modeling and 5 days after modeling. The depth of the needle entering the cisterna magna was determined during ultrasound guided puncture. The mortality rates in the manual control and model groups were 15% and 23%, respectively. No rabbits were lost in the two ultrasound guided groups; the mortality rate in the ultrasound guided groups thus decreased significantly compared to the manual groups. Compared with diameters before modeling, the basilar artery diameters after modeling were significantly lower in the manual and ultrasound guided model groups. The vasospasm was aggravated, and the proportion of severe vasospasms was greater, in the ultrasound guided model group than in the manual group. In the manual model group, no vasospasm was found in 8% of rabbits. Ultrasound guided double injection of blood into the cisterna magna is a safe and effective rabbit model for treatment of cerebral vasospasm.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakaguchi, Yuji, E-mail: nkgc2003@yahoo.co.jp; Ono, Takeshi; Onitsuka, Ryota

    The COMPASS system (IBA Dosimetry, Schwarzenbruck, Germany) and ArcCHECK with 3DVH software (Sun Nuclear Corp., Melbourne, FL) are commercial quasi-3-dimensional (3D) dosimetry arrays. Cross-validation comparing them under the same conditions, such as the same treatment plan, allows a clear evaluation of such measurement devices. In this study, we evaluated the accuracy of reconstructed dose distributions from the COMPASS system and ArcCHECK with 3DVH software using Monte Carlo simulation (MC) for multi-leaf collimator (MLC) test patterns and clinical VMAT plans. In a phantom study, ArcCHECK 3DVH showed clear differences from COMPASS, measurement, and MC due to the detector resolution and the dose reconstruction method. In particular, ArcCHECK 3DVH showed a 7% difference from MC for the heterogeneous phantom. ArcCHECK 3DVH only corrects the 3D dose distribution of the treatment planning system (TPS) using the ArcCHECK measurement, and therefore the accuracy of ArcCHECK 3DVH depends on the TPS. In contrast, COMPASS showed good agreement with MC for all cases. However, the COMPASS system requires many complicated installation procedures, such as beam modeling, and appropriate commissioning is needed. In the clinical cases, there were no large differences between the QA devices. The accuracy of the COMPASS and ArcCHECK 3DVH systems for phantoms and clinical cases was compared. Both systems have advantages and disadvantages for clinical use, and consideration of the operating environment is important. The choice of QA system depends on the purpose and workflow of each hospital.

  13. MoniQA: a general approach to monitor quality assurance

    NASA Astrophysics Data System (ADS)

    Jacobs, J.; Deprez, T.; Marchal, G.; Bosmans, H.

    2006-03-01

    MoniQA ("Monitor Quality Assurance") is a new, non-commercial, independent quality assurance software application developed in our medical physics team. It is a complete Java TM - based modular environment for the evaluation of radiological viewing devices and it thus fits in the global quality assurance network of our (film less) radiology department. The purpose of the software tool is to guide the medical physicist through an acceptance protocol and the radiologist through a constancy check protocol by presentation of the necessary test patterns and by automated data collection. Data are then sent to a central management system for further analysis. At the moment more than 55 patterns have been implemented, which can be grouped in schemes to implement protocols (i.e. AAPMtg18, DIN and EUREF). Some test patterns are dynamically created and 'drawn' on the viewing device with random parameters as is the case in a recently proposed new pattern for constancy testing. The software is installed on 35 diagnostic stations (70 monitors) in a film less radiology department. Learning time was very limited. A constancy check -with the new pattern that assesses luminance decrease, resolution problems and geometric distortion- takes only 2 minutes and 28 seconds per monitor. The modular approach of the software allows the evaluation of new or emerging test patterns. We will report on the software and its usability: practicality of the constancy check tests in our hospital and on the results from acceptance tests of viewing stations for digital mammography.

  14. Model Checking Abstract PLEXIL Programs with SMART

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu I.

    2007-01-01

    We describe a method to automatically generate discrete-state models of abstract Plan Execution Interchange Language (PLEXIL) programs that can be analyzed using model checking tools. Starting from a high-level description of a PLEXIL program or a family of programs with common characteristics, the generator lays the framework that models the principles of program execution. The concrete parts of the program are not automatically generated, but must be introduced by hand by the modeler. As a case study, we generate models to verify properties of the PLEXIL macro constructs that are introduced as shorthand notation. After an exhaustive analysis, we conclude that the macro definitions obey the intended semantics and behave as expected, contingent on a few specific requirements on the timing semantics of micro-steps in the concrete executive implementation.

  15. An approach to checking case-crossover analyses based on equivalence with time-series methods.

    PubMed

    Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L

    2008-03-01

    The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model, be it time-series or case-crossover, fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.

  16. Bounded Parametric Model Checking for Elementary Net Systems

    NASA Astrophysics Data System (ADS)

    Knapik, Michał; Szreter, Maciej; Penczek, Wojciech

    Bounded Model Checking (BMC) is an efficient verification method for reactive systems. So far, BMC has been applied to the verification of properties expressed in (timed) modal logics, but never to their parametric extensions. In this paper we show, for the first time, that BMC can be extended to PRTECTL, a parametric extension of the existential version of CTL. To this aim we define a bounded semantics and a translation from PRTECTL to SAT. An implementation of the algorithm for Elementary Net Systems is presented, together with some experimental results.
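
    The core BMC idea, unrolling the transition relation up to a bound k and searching for a path from an initial state to a bad state, can be sketched in Python. The two-variable toy system and its 'bad' predicate are invented for illustration, and brute-force enumeration stands in for the SAT solver that a real BMC tool would use:

```python
from itertools import product

# Toy system over booleans (x, y): each step toggles x and copies
# the old x into y. Hypothetical bad states: y set while x cleared.
def init(s):
    x, y = s
    return not x and not y

def trans(s, t):
    (x, y), (nx, ny) = s, t
    return nx == (not x) and ny == x

def bad(s):
    x, y = s
    return (not x) and y

def bmc(k):
    """Bounded model checking by brute force: search every path of
    length <= k that starts in an initial state, follows the
    transition relation, and ends in a bad state. A real BMC tool
    hands this unrolled formula to a SAT solver instead."""
    states = list(product((False, True), repeat=2))
    for n in range(k + 1):
        for path in product(states, repeat=n + 1):
            if (init(path[0])
                    and all(trans(path[i], path[i + 1]) for i in range(n))
                    and bad(path[-1])):
                return path  # counterexample trace
    return None
```

    Here `bmc(1)` finds nothing, but `bmc(2)` returns the trace (F,F) -> (T,F) -> (F,T): the bad state is reachable in exactly two steps, which a SAT solver would discover from the 2-step unrolling.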

  17. Development and feasibility study of very brief interventions for physical activity in primary care.

    PubMed

    Pears, Sally; Morton, Katie; Bijker, Maaike; Sutton, Stephen; Hardeman, Wendy

    2015-04-08

    There is increasing interest in brief and very brief behaviour change interventions for physical activity as they are potentially scalable to the population level. However, few very brief interventions (VBIs) have been published, and evidence is lacking about their feasibility, acceptability and which 'active ingredients' (behaviour change techniques) would maximise their effectiveness. The aim of this research was to identify and develop promising VBIs for physical activity and test their feasibility and acceptability in the context of preventive health checks in primary care. The process included two stages, guided by four criteria: effectiveness, feasibility, acceptability, and cost. In Stage 1, we used an iterative approach informed by systematic reviews, a scoping review of BCTs, team discussion, stakeholder consultation, a qualitative study, and cost estimation to guide the development of promising VBIs. In Stage 2, a feasibility study assessed the feasibility and acceptability of the short-listed VBIs, using tape-recordings and interviews with practitioners (n = 4) and patients (n = 68), to decide which VBIs merited further evaluation in a pilot trial. Four VBIs were short-listed: Motivational intervention; Action Planning intervention; Pedometer intervention; and Physical Activity Diary intervention. All were deliverable in around five minutes and were feasible and acceptable to participants and practitioners. Based on the results of interviews with practitioners and patients, techniques from the VBIs were combined into three new VBIs for further evaluation in a pilot trial. Using a two-stage approach, in which we considered the practicability of VBIs (acceptability, feasibility and cost) alongside potential efficacy from the outset, we developed a short-list of four promising VBIs for physical activity and demonstrated that they were acceptable and feasible as part of a preventive health check in primary care. Current Controlled Trials ISRCTN02863077. 
Registered 5 October 2012.

  18. Greater physician involvement improves coding outcomes in endobronchial ultrasound-guided transbronchial needle aspiration procedures.

    PubMed

    Pillai, Anilkumar; Medford, Andrew R L

    2013-01-01

    Correct coding is essential for accurate reimbursement for clinical activity. Published data confirm that significant aberrations in coding occur, leading to considerable financial inaccuracies especially in interventional procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). Previous data reported a 15% coding error for EBUS-TBNA in a U.K. service. We hypothesised that greater physician involvement with coders would reduce EBUS-TBNA coding errors and financial disparity. The study was done as a prospective cohort study in the tertiary EBUS-TBNA service in Bristol. 165 consecutive patients between October 2009 and March 2012 underwent EBUS-TBNA for evaluation of unexplained mediastinal adenopathy on computed tomography. The chief coder was prospectively electronically informed of all procedures and cross-checked on a prospective database and by Trust Informatics. Cost and coding analysis was performed using the 2010-2011 tariffs. All 165 procedures (100%) were coded correctly as verified by Trust Informatics. This compares favourably with the 14.4% coding inaccuracy rate for EBUS-TBNA in a previous U.K. prospective cohort study [odds ratio 201.1 (1.1-357.5), p = 0.006]. Projected income loss was GBP 40,000 per year in the previous study, compared to a GBP 492,195 income here with no coding-attributable loss in revenue. Greater physician engagement with coders prevents coding errors and financial losses which can be significant especially in interventional specialties. The intervention can be as cheap, quick and simple as a prospective email to the coding team with cross-checks by Trust Informatics and against a procedural database. We suggest that all specialties should engage more with their coders using such a simple intervention to prevent revenue losses. Copyright © 2013 S. Karger AG, Basel.

  19. Endocrine check-up in adolescents and indications for referral: A guide for health care providers

    PubMed Central

    De Sanctis, Vincenzo; Soliman, Ashraf T; Fiscina, Bernadette; Elsedfy, Heba; Elalaily, Rania; Yassin, Mohamed; Skordis, Nicos; Di Maio, Salvatore; Piacentini, Giorgio; Kholy, Mohamed El

    2014-01-01

    The American Academy of Pediatrics recommends that young people between the ages of 11 and 21 years should be seen annually by their pediatricians, since annual checkups can be an important opportunity for health evaluation and anticipatory guidance. Parents of infants and young children are accustomed to regularly visiting a pediatrician for their child's checkups. Unfortunately, when children reach the teen years, these annual checkups may decrease in frequency. In routine check-ups and medical office visits, particular attention should be paid to the possibility of a developmental or endocrine disorder. Early diagnosis and treatment may prevent medical complications in adulthood and foster age-appropriate development. Our purpose is to acquaint readers with the concept, based on current scientific understanding, that some endocrine disorders may be associated with a wide range of deleterious health consequences including an increased risk of hypertension and hyperlipidemia, increased risk of coronary artery disease, type 2 diabetes, significant anxiety and lack of self-esteem. Understanding the milestones and developmental stages of adolescence is essential for pediatricians and all other health providers who care for adolescents. Treating adolescents involves knowledge of a variety of medical, social and legal information; in addition, close working relationships must be established within the adolescent's network to create an effective care system. In summary, we underline the importance of a periodic endocrine checkup in adolescents in order to identify endocrine problems early and develop an approach to treatment for those patients who need help during this time. Indications for endocrine referral for professional and other healthcare providers are also included. These lists are clearly not intended to be comprehensive, but will hopefully serve as a guide for specific clinical circumstances. PMID:25538875

  20. Comprehensive processing of high-throughput small RNA sequencing data including quality checking, normalization, and differential expression analysis using the UEA sRNA Workbench

    PubMed Central

    Beckers, Matthew; Mohorianu, Irina; Stocks, Matthew; Applegate, Christopher; Dalmay, Tamas; Moulton, Vincent

    2017-01-01

    Recently, high-throughput sequencing (HTS) has revealed compelling details about the small RNA (sRNA) population in eukaryotes. These 20 to 25 nt noncoding RNAs can influence gene expression by acting as guides for the sequence-specific regulatory mechanism known as RNA silencing. The increase in sequencing depth and number of samples per project enables a better understanding of the role sRNAs play by facilitating the study of expression patterns. However, the intricacy of the biological hypotheses coupled with a lack of appropriate tools often leads to inadequate mining of the available data and thus, an incomplete description of the biological mechanisms involved. To enable a comprehensive study of differential expression in sRNA data sets, we present a new interactive pipeline that guides researchers through the various stages of data preprocessing and analysis. This includes various tools, some of which we specifically developed for sRNA analysis, for quality checking and normalization of sRNA samples as well as tools for the detection of differentially expressed sRNAs and identification of the resulting expression patterns. The pipeline is available within the UEA sRNA Workbench, a user-friendly software package for the processing of sRNA data sets. We demonstrate the use of the pipeline on a H. sapiens data set; additional examples on a B. terrestris data set and on an A. thaliana data set are described in the Supplemental Information. A comparison with existing approaches is also included, which exemplifies some of the issues that need to be addressed for sRNA analysis and how the new pipeline may be used to do this. PMID:28289155

  1. Determining preventability of pediatric readmissions using fault tree analysis.

    PubMed

    Jonas, Jennifer A; Devon, Erin Pete; Ronan, Jeanine C; Ng, Sonia C; Owusu-McKenzie, Jacqueline Y; Strausbaugh, Janet T; Fieldston, Evan S; Hart, Jessica K

    2016-05-01

    Previous studies attempting to distinguish preventable from nonpreventable readmissions reported challenges in completing reviews efficiently and consistently. (1) Examine the efficiency and reliability of a Web-based fault tree tool designed to guide physicians through chart reviews to a determination about preventability. (2) Investigate root causes of general pediatrics readmissions and identify the percent that are preventable. General pediatricians from The Children's Hospital of Philadelphia used a Web-based fault tree tool to classify root causes of all general pediatrics 15-day readmissions in 2014. The tool guided reviewers through a logical progression of questions, which resulted in 1 of 18 root causes of readmission, 8 of which were considered potentially preventable. Twenty percent of cases were cross-checked to measure inter-rater reliability. Of the 7252 discharges, 248 were readmitted, for an all-cause general pediatrics 15-day readmission rate of 3.4%. Of those readmissions, 15 (6.0%) were deemed potentially preventable, corresponding to 0.2% of total discharges. The most common cause of potentially preventable readmissions was premature discharge. For the 50 cross-checked cases, both reviews resulted in the same root cause for 44 (86%) of files (κ = 0.79; 95% confidence interval: 0.60-0.98). Completing 1 review using the tool took approximately 20 minutes. The Web-based fault tree tool helped physicians to identify root causes of hospital readmissions and classify them as either preventable or not preventable in an efficient and consistent way. It also confirmed that only a small percentage of general pediatrics 15-day readmissions are potentially preventable. Journal of Hospital Medicine 2016;11:329-335. © 2016 Society of Hospital Medicine.
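
    The guided review can be sketched as a walk down a binary decision tree; the questions and root-cause labels below are invented for illustration (the real tool has 98 steps and 18 root causes):

```python
# Each node is either a root-cause string (leaf) or a
# (question, yes_branch, no_branch) triple.
TREE = ("was the readmission planned?",
        "planned readmission (not preventable)",
        ("was the patient discharged before medically ready?",
         "premature discharge (potentially preventable)",
         ("is the readmission for a new, unrelated condition?",
          "new unrelated condition (not preventable)",
          "undetermined")))

def classify(node, answers):
    """Walk the fault tree: follow yes/no answers until a leaf
    (a root cause) is reached."""
    while isinstance(node, tuple):
        question, yes_branch, no_branch = node
        node = yes_branch if answers[question] else no_branch
    return node

case = {"was the readmission planned?": False,
        "was the patient discharged before medically ready?": True}
classify(TREE, case)  # -> "premature discharge (potentially preventable)"
```

    Forcing every review down the same question sequence is what makes the classification consistent across reviewers, which is reflected in the reported κ of 0.79.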

  2. The instant sequencing task: Toward constraint-checking a complex spacecraft command sequence interactively

    NASA Technical Reports Server (NTRS)

    Horvath, Joan C.; Alkalaj, Leon J.; Schneider, Karl M.; Amador, Arthur V.; Spitale, Joseph N.

    1993-01-01

    Robotic spacecraft are controlled by sets of commands called 'sequences.' These sequences must be checked against mission constraints. Making our existing constraint checking program faster would enable new capabilities in our uplink process. Therefore, we are rewriting this program to run on a parallel computer. To do so, we had to determine how to run constraint-checking algorithms in parallel and create a new method of specifying spacecraft models and constraints. This new specification gives us a means of representing flight systems and their predicted response to commands which could be used in a variety of applications throughout the command process, particularly during anomaly or high-activity operations. This commonality could reduce operations cost and risk for future complex missions. Lessons learned in applying some parts of this system to the TOPEX/Poseidon mission will be described.
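
    A constraint check over a command sequence can be sketched as a set of rule functions applied to a timed command list; the command names and the minimum-separation rule below are hypothetical, not actual mission constraints:

```python
def check_sequence(sequence, rules):
    """Check a timed command sequence against mission constraints,
    collecting every violation rather than stopping at the first."""
    violations = []
    for rule in rules:
        violations.extend(rule(sequence))
    return violations

def min_separation(cmd_a, cmd_b, gap):
    """Rule factory: cmd_a and cmd_b must be at least `gap` seconds apart."""
    def rule(sequence):
        times_a = [t for t, c in sequence if c == cmd_a]
        times_b = [t for t, c in sequence if c == cmd_b]
        return [f"{cmd_a}@{ta} within {gap}s of {cmd_b}@{tb}"
                for ta in times_a for tb in times_b if abs(ta - tb) < gap]
    return rule

# hypothetical sequence: (seconds from epoch, command)
seq = [(0, "HEATER_ON"), (5, "CAMERA_ON"), (120, "HEATER_OFF")]
rules = [min_separation("HEATER_ON", "CAMERA_ON", 10)]
check_sequence(seq, rules)  # -> ["HEATER_ON@0 within 10s of CAMERA_ON@5"]
```

    Because the rules are independent functions over the sequence, they can be evaluated in parallel, which is the property the parallel rewrite described above exploits.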

  3. Analysis of beam propagation characteristics in gain-guided, index antiguided fibers with the beam propagation method.

    PubMed

    Ai, Fei; Qian, Jianqiang; Shi, Junfeng; Zhang, Machi

    2017-10-10

    The transmission properties of beams in gain fibers are studied with the complex refractive index beam propagation method (CRI-BPM). The method is checked by comparison with an analytic method. The behavior of a gain-guided, index antiguided (GG-IAG) fiber with different gain coefficients is studied. The simulation results show that the signal can propagate through the fiber with almost no loss when the gain coefficient reaches the threshold of the fundamental mode, and that the shape of the output spot shows no major changes when the gain coefficient exceeds the thresholds of the high-order modes, even when mode competition is not obvious. The CRI-BPM can predict the changes in light power and light mode at the same time, and will be very useful in the design of fiber amplifiers and lasers with complex structures. More factors will be considered in this method to provide a reference for practical applications in our further research.

  4. Evaluating and Improving a Learning Trajectory for Linear Measurement in Elementary Grades 2 and 3: A Longitudinal Study

    ERIC Educational Resources Information Center

    Barrett, Jeffrey E.; Sarama, Julie; Clements, Douglas H.; Cullen, Craig; McCool, Jenni; Witkowski-Rumsey, Chepina; Klanderman, David

    2012-01-01

    We examined children's development of strategic and conceptual knowledge for linear measurement. We conducted teaching experiments with eight students in grades 2 and 3, based on our hypothetical learning trajectory for length to check its coherence and to strengthen the domain-specific model for learning and teaching. We checked the hierarchical…

  5. 75 FR 39185 - Airworthiness Directives; The Boeing Company Model 747-100, 747-100B, 747-100B SUD, 747-200B, 747...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-08

    ... requires repetitive inspections and torque checks of the hanger fittings and strut forward bulkhead of the forward engine mount... corrective actions are replacing the fasteners; removing loose fasteners; tightening all Group A...

  6. Population Pharmacokinetics of Combined Intravenous and Local Intrathecal Administration of Meropenem in Aneurysm Patients with Suspected Intracranial Infections After Craniotomy.

    PubMed

    Li, Xingang; Sun, Shusen; Wang, Qiang; Zhao, Zhigang

    2018-02-01

    For patients with intracranial infection, local intrathecal administration of meropenem may be a useful method to obtain a sufficient drug concentration in the cerebrospinal fluid (CSF). However, large inter-individual variability may put treatment efficacy at risk. This study aimed to identify factors affecting drug concentration in the CSF using a population pharmacokinetic method. After craniotomy, aneurysm patients with an indwelling lumbar cistern drainage tube who received combined intravenous and intrathecal administration of meropenem for the treatment of suspected intracranial infection were enrolled. Venous blood and CSF specimens were collected for determining meropenem concentrations. A nonlinear mixed-effects modeling method was used to fit blood and CSF concentrations simultaneously and to develop the population pharmacokinetic model. The proposed model was applied to simulate dosage regimens. A three-compartment model was established to describe meropenem in vivo behavior. Lumbar CSF drainage resulted in a drug loss, and drug clearance in CSF (CL_CSF) was employed to describe this. The covariate analysis found that the drainage volume (mL/day) was strongly associated with CL_CSF, and the effect of creatinine clearance on the clearance of meropenem in blood (CL) was significant. A visual predictive check suggested that the proposed pharmacokinetic model agreed well with the observations. Simulation showed that both intravenous and intrathecal doses should be increased with increases in the minimum inhibitory concentration and the daily CSF drainage volume. The model incorporates the creatinine clearance and the drainage volume as covariates, and a simple-to-use dosage regimen table was created to guide clinicians with meropenem dosing.

  7. Visual Predictive Check in Models with Time-Varying Input Function.

    PubMed

    Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio

    2015-11-01

Nonlinear mixed-effects models are commonly used in pharmaceutical research because they characterize individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools that can evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that helps the user check by visual inspection whether the model is able to reproduce the variability and the main trend of the observed data. However, simulation from the model is not always trivial, for example when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause incorrect profile simulation. We introduce a refinement of the VPC that takes into consideration a correlation term (the Mahalanobis or normalized Euclidean distance) to help associate the correct IF with the individual set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data, and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC performs better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
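The mechanics of a standard VPC (not the refined, distance-matched variant proposed in the paper) can be sketched as: simulate many replicate datasets from the fitted model, summarize each, and compare percentile bands against the observed data. A minimal sketch, assuming a hypothetical one-compartment model and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

def profile(params, t):
    """Hypothetical one-compartment oral-absorption profile (unit dose)."""
    ka, cl, v = params
    ke = cl / v
    return ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def vpc_bands(pop_mean, omega, t, n_rep=500, n_subj=50):
    """Simulate n_rep replicate studies; return the 5th/50th/95th
    percentile bands of the per-replicate median profile."""
    sims = np.empty((n_rep, len(t)))
    for r in range(n_rep):
        # log-normal between-subject variability around the population means
        params = np.exp(rng.normal(np.log(pop_mean), omega, size=(n_subj, 3)))
        sims[r] = np.median([profile(p, t) for p in params], axis=0)
    return np.percentile(sims, [5, 50, 95], axis=0)

t = np.linspace(0.5, 12.0, 24)
lo, med, hi = vpc_bands(np.array([1.0, 0.5, 2.0]), omega=0.3, t=t)
# observed percentiles lying inside the simulated bands support the model
```

The paper's refinement would additionally match each simulated parameter set to a compatible individual input function before simulating, which this sketch omits.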

  8. Combining Static Analysis and Model Checking for Software Analysis

    NASA Technical Reports Server (NTRS)

    Brat, Guillaume; Visser, Willem; Clancy, Daniel (Technical Monitor)

    2003-01-01

We present an iterative technique in which model checking and static analysis are combined to verify large software systems. The role of the static analysis is to compute partial-order information, which the model checker uses to reduce the state space. During exploration, the model checker also computes aliasing information that it gives to the static analyzer, which can then refine its analysis. The result of this refined analysis is then fed back to the model checker, which updates its partial-order reduction. At each step of this iterative process, the static analysis computes optimistic information, which results in an unsafe reduction of the state space. However, we show that the process converges to a fixed point, at which time the partial-order information is safe and the whole state space is explored.

  9. Analytic Guided-Search Model of Human Performance Accuracy in Target- Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require Monte Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time-consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
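Under the Signal Detection Theory account, localization accuracy in an M-location display is the probability that the noisy response at the target location exceeds the responses at all M−1 distractor locations. A Monte Carlo sketch of that quantity (the unit-variance Gaussian responses and the parameter values are illustrative assumptions, not the paper's fitted model):

```python
import numpy as np

rng = np.random.default_rng(1)

def localization_accuracy(d_prime, n_locations, n_trials=200_000):
    """Monte Carlo P(correct) in an M-alternative localization task:
    a trial is correct when the target location's noisy response
    beats every distractor response."""
    target = rng.normal(d_prime, 1.0, n_trials)
    distractors = rng.normal(0.0, 1.0, (n_trials, n_locations - 1))
    return float(np.mean(target > distractors.max(axis=1)))

# accuracy rises with discriminability (d') and falls with display set size
print(localization_accuracy(2.0, 4))
print(localization_accuracy(2.0, 8))
```

The analytic expressions mentioned in the abstract replace this simulation with a closed-form integral over the maximum of the distractor responses.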

  10. Neighborhood social capital is associated with participation in health checks of a general population: a multilevel analysis of a population-based lifestyle intervention- the Inter99 study.

    PubMed

    Bender, Anne Mette; Kawachi, Ichiro; Jørgensen, Torben; Pisinger, Charlotta

    2015-07-22

Participation in population-based preventive health checks has declined over the past decades, and more research is needed to determine the factors that enhance participation. The objective of this study was to examine the association between two measures of neighborhood-level social capital and participation in the health check phase of a population-based lifestyle intervention. The study population comprised 12,568 residents of 73 Danish neighborhoods in the intervention group of a large population-based lifestyle intervention study, the Inter99. Two measures of social capital were applied: informal socializing and voting turnout. In a multilevel analysis adjusting only for age and sex, a higher level of neighborhood social capital was associated with a higher probability of participating in the health check. Including both individual socioeconomic position and neighborhood deprivation in the model attenuated the coefficients for informal socializing, while voting turnout became non-significant. A higher level of neighborhood social capital was associated with a higher probability of participating in the health check phase of a population-based lifestyle intervention. Most of the association between neighborhood social capital and participation in preventive health checks can be explained by differences in individual socioeconomic position and level of neighborhood deprivation. Nonetheless, there seems to be some residual association between social capital and health check participation, suggesting that activating social relations in the community may be an avenue for boosting participation rates in population-based health checks. ClinicalTrials.gov (registration no. NCT00289237).

  11. Microscopic analysis and simulation of check-mark stain on the galvanized steel strip

    NASA Astrophysics Data System (ADS)

    So, Hongyun; Yoon, Hyun Gi; Chung, Myung Kyoon

    2010-11-01

When galvanized steel strip is produced through a continuous hot-dip galvanizing process, the thickness of the adhered zinc film is controlled by a plane impinging air jet referred to as an "air-knife system". In such a gas-jet wiping process, a stain with a check-mark or sag-line shape frequently appears. The check-mark defect is caused by non-uniform zinc coating and appears as oblique patterns such as "W", "V" or "X" on the coated surface. The present paper presents a cause analysis of check-mark formation and a numerical simulation of sag lines using data produced by Large Eddy Simulation (LES) of the three-dimensional compressible turbulent flow field around the air-knife system. It was found that there are alternating plane-wise vortices near the impinging stagnation region, and that these alternating vortices move almost periodically to the right and to the left along the stagnation line due to jet flow instability. Meanwhile, to simulate check-mark formation, a novel perturbation model has been developed to predict the variation of coating thickness along the transverse direction. Finally, the three-dimensional zinc coating surface was obtained with the present perturbation model. It was found that sag-line formation is determined by the combination of the instantaneous coating-thickness distribution along the transverse direction near the stagnation line and the feed speed of the steel strip.

  12. Predictors of Health Service Utilization Among Older Men in Jamaica.

    PubMed

    Willie-Tyndale, Douladel; McKoy Davis, Julian; Holder-Nevins, Desmalee; Mitchell-Fearon, Kathryn; James, Kenneth; Waldron, Norman K; Eldemire-Shearer, Denise

    2018-01-03

To determine the relative influence of sociodemographic, socioeconomic, psychosocial, and health variables on health service utilization in the last 12 months. Data were analyzed for 1,412 men ≥60 years old from a 2012 nationally representative community-based survey in Jamaica. Associations between six health service utilization variables and several explanatory variables were explored. Logistic regression models were used to identify independent predictors of each utilization measure and to determine the strengths of associations. More than 75% reported having health visits and blood pressure checks. Blood sugar (69.6%) and cholesterol (63.1%) checks were less common, and a prostate check (35.1%) was the least utilized service. Adjusted models confirmed that the presence of chronic diseases and health insurance most strongly predicted utilization. Having a daughter or son as the main source of financial support (vs. self) doubled or tripled, respectively, the odds of routine doctors' visits. Compared with primary or lower education, tertiary education doubled the odds of a blood pressure check [2.37 (1.12, 4.95)]. Regular attendance at club/society/religious organizations' meetings increased the odds of having a prostate check by 45%. Although need and financial resources most strongly influenced health service utilization, psychosocial variables may be particularly influential for underutilized services. © The Author(s) 2018. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. Simulation-based MDP verification for leading-edge masks

    NASA Astrophysics Data System (ADS)

    Su, Bo; Syrel, Oleg; Pomerantsev, Michael; Hagiwara, Kazuyuki; Pearman, Ryan; Pang, Leo; Fujimara, Aki

    2017-07-01

For IC design starts below the 20nm technology node, the assist features on photomasks shrink well below 60nm, and the printed patterns of those features on masks written by VSB eBeam writers start to show large deviations from the mask designs. Traditional geometry-based fracturing starts to show large errors for those small features. As a result, other mask data preparation (MDP) methods have become available and been adopted, such as rule-based Mask Process Correction (MPC), model-based MPC and eventually model-based MDP. The new MDP methods may place shot edges slightly differently from the target to compensate for mask process effects, so that the final patterns on a mask are much closer to the design (which can be viewed as the ideal mask), especially for those assist features. Such an alteration generally produces better masks that are closer to the intended mask design. Traditional XOR-based MDP verification cannot detect problems caused by eBeam effects. Much like model-based OPC verification, which became a necessity for OPC a decade ago, we see the same trend in MDP today. A simulation-based MDP verification solution requires a GPU-accelerated computational geometry engine with simulation capabilities. To have a meaningful simulation-based mask check, a good mask process model is needed. The TrueModel® system is a field-tested physical mask model developed by D2S. The GPU-accelerated D2S Computational Design Platform (CDP) is used to run simulation-based mask checks, as well as model-based MDP. In addition to simulation-based checks such as mask EPE or dose margin, geometry-based rules are also available to detect quality issues such as slivers or CD splits. Dose-margin-related hotspots can also be detected by setting a correct detection threshold. In this paper, we demonstrate GPU acceleration for geometry processing and give examples of mask check results and performance data. GPU acceleration is necessary to make simulation-based MDP verification of masks acceptable.

  14. Vehicular traffic noise prediction using soft computing approach.

    PubMed

    Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek

    2016-12-01

A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely Generalized Linear Models, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the city of Patiala, India. The input variables include the traffic volume per hour, the percentage of heavy vehicles and the average speed of vehicles. The performance of the four models is compared on the basis of the coefficient of determination, mean square error and accuracy. 10-fold cross-validation is performed to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model to the field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
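The stability check can be illustrated with a generic k-fold loop. Since the study's Random Forest would require an ML library, this sketch substitutes an ordinary least-squares model, and the synthetic data are merely shaped like the study's inputs (traffic volume, % heavy vehicles, average speed); all coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def kfold_r2(X, y, k=10):
    """k-fold cross-validation of a least-squares model (a stand-in for
    the study's Random Forest): returns the R^2 score on each fold."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
        pred = X[test] @ coef
        ss_res = np.sum((y[test] - pred) ** 2)
        ss_tot = np.sum((y[test] - y[test].mean()) ** 2)
        scores.append(1.0 - ss_res / ss_tot)
    return np.array(scores)

# synthetic Leq ~ intercept, traffic volume, % heavy vehicles, average speed
n = 300
X = np.column_stack([np.ones(n), rng.uniform(100, 2000, n),
                     rng.uniform(5, 40, n), rng.uniform(20, 60, n)])
y = 50 + 0.005 * X[:, 1] + 0.2 * X[:, 2] + 0.1 * X[:, 3] + rng.normal(0, 1, n)
scores = kfold_r2(X, y)
# a small spread across the 10 fold scores indicates a stable model
```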

  15. A generalized statistical model for the size distribution of wealth

    NASA Astrophysics Data System (ADS)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated distribution function for modeling individual incomes, with roots in the framework of κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. To check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and find excellent agreement with the data, superior to any other model known in the literature.
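For reference, the κ-generalized family is built on the Kaniadakis κ-exponential; a sketch of the standard definitions as they appear in the κ-statistics literature (not transcribed from the paper itself):

```latex
% Kaniadakis κ-exponential, the deformed exponential underlying the family
\exp_\kappa(x) = \bigl(\sqrt{1+\kappa^2 x^2} + \kappa x\bigr)^{1/\kappa},
\qquad \lim_{\kappa\to 0}\exp_\kappa(x) = e^{x}

% κ-generalized complementary CDF; as κ → 0 it reduces to a Weibull form
\Pr(X > x) = \exp_\kappa\!\bigl(-\beta x^{\alpha}\bigr),
\qquad x > 0,\; \alpha,\beta > 0
```

The heavier-than-exponential tail of exp_κ for κ > 0 is what lets the distribution capture the power-law behavior observed in the upper tail of income and wealth data.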

  16. Timing analysis by model checking

    NASA Technical Reports Server (NTRS)

    Naydich, Dimitri; Guaspari, David

    2000-01-01

The safety of modern avionics relies on high-integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines), and they cannot be applied to many common programming paradigms. We address both limitations by applying model checking, a technique with successful industrial applications in hardware design. Model-checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed: if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state-space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.
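The core idea, exploring a finite, conservatively bounded state space for property violations, can be sketched as an explicit-state reachability check. The toy periodic-task model below (the period, the work amount, and the deadline predicate) is entirely invented for illustration; bounding the clock by the period is what keeps the state space finite:

```python
from collections import deque

def check_safety(initial, successors, is_bad):
    """Explicit-state reachability: explore all reachable states,
    returning (True, None) if safe, else (False, counterexample)."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if is_bad(s):
            return False, s
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

# toy periodic task: state = (clock tick, remaining work); the task needs
# WORK ticks of execution within each PERIOD-tick frame
PERIOD, WORK = 5, 2

def successors(state):
    t, work_left = state
    if t + 1 == PERIOD:
        yield (0, WORK)                       # new period releases the task
    else:
        yield (t + 1, max(0, work_left - 1))  # one tick of execution

deadline_missed = lambda s: s[0] == PERIOD - 1 and s[1] > 0
safe, cex = check_safety((0, WORK), successors, deadline_missed)
print("schedulable" if safe else f"deadline miss at {cex}")
```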

  17. Location contexts of user check-ins to model urban geo life-style patterns.

    PubMed

    Hasan, Samiul; Ukkusuri, Satish V

    2015-01-01

Geo-location data from social media offers us information, in new ways, to understand people's attitudes and interests through their activity choices. In this paper, we explore the idea of inferring individual life-style patterns from activity-location choices revealed in social media. We present a model to understand life-style patterns using the contextual information (e.g., location categories) of user check-ins. Probabilistic topic models are developed to infer individual geo life-style patterns from two perspectives: i) characterizing the patterns of user interest in different types of places and ii) characterizing the patterns of user visits to different neighborhoods. The method is applied to a dataset of Foursquare check-ins by users from New York City. The co-existence of several location contexts, and their corresponding probabilities, in a given pattern provides useful information about user interests and choices. It is found that geo life-style patterns contain similar items: either nearby neighborhoods or similar location categories. The semantic and geographic proximity of the items in a pattern reflects the hidden regularity in user preferences and location choice behavior.

  18. Constraint-Based Abstract Semantics for Temporal Logic: A Direct Approach to Design and Implementation

    NASA Astrophysics Data System (ADS)

    Banda, Gourinath; Gallagher, John P.

Abstract interpretation provides a practical approach to verifying properties of infinite-state systems. We apply the framework of abstract interpretation to derive an abstract semantic function for the modal μ-calculus, which is the basis for abstract model checking. The abstract semantic function is constructed directly from the standard concrete semantics together with a Galois connection between the concrete state-space and an abstract domain. There is no need for mixed or modal transition systems to abstract arbitrary temporal properties, as in previous work in the area of abstract model checking. Using the modal μ-calculus to implement CTL, the abstract semantics gives an over-approximation of the set of states in which an arbitrary CTL formula holds. We then show that this leads directly to an effective implementation of an abstract model checking algorithm for CTL using abstract domains based on linear constraints. The implementation of the abstract semantic function makes use of an SMT solver. We describe an implemented system for proving properties of linear hybrid automata and give some experimental results.

  19. SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qiu, J; Yang, D

    2015-06-15

Purpose: In the course of radiation therapy, the complex information-processing workflow can result in potential errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, allowing automatic checks of positions and orientations for patients' daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days' worth of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. Detection was correct for 182 of 200 kV portal images, a correct rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It only requires the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce potential errors in the radiation therapy workflow and so improve patient safety. In addition, the auto-detection results, such as the patient treatment site, position and orientation, could be used to guide subsequent image processing procedures, e.g., verification of patient daily setup accuracy. This work was partially supported by a research grant from Varian Medical Systems.

  20. A cost-effectiveness model to personalize antiviral therapy in naive patients with genotype 1 chronic hepatitis C.

    PubMed

    Iannazzo, Sergio; Colombatto, Piero; Ricco, Gabriele; Oliveri, Filippo; Bonino, Ferruccio; Brunetto, Maurizia R

    2015-03-01

Rapid virologic response is the best predictor of sustained virologic response with dual therapy in genotype-1 chronic hepatitis C, and its evaluation has been proposed to tailor triple therapy in F0-F2 patients. Bio-mathematical modelling of viral dynamics during dual therapy has potentially higher accuracy than rapid virologic response in identifying patients who will eventually achieve sustained response. The study's objective was a cost-effectiveness analysis of personalized therapy in naïve F0-F2 patients with chronic hepatitis C based on a bio-mathematical model (model-guided strategy) rather than on rapid virologic response (guideline-guided strategy). A deterministic bio-mathematical model of infected cell dynamics was validated in a cohort of 135 patients treated with dual therapy. A decision-analytic economic model was then developed to compare the model-guided and guideline-guided strategies in the Italian setting. The outcomes of the cost-effectiveness analysis with the model-guided and guideline-guided strategies were 19.1-19.4 and 18.9-19.3 quality-adjusted life-years, respectively. Total per-patient lifetime costs were €25,200-€26,000 with the model-guided strategy and €28,800-€29,900 with the guideline-guided strategy. When comparing the model-guided with the guideline-guided strategy, the former was more effective and less costly. The adoption of the bio-mathematical predictive criterion has the potential to improve the cost-effectiveness of personalized therapy for chronic hepatitis C, reserving triple therapy for those patients who really need it. Copyright © 2014 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  1. 76 FR 477 - Airworthiness Directives; Bombardier, Inc. Model CL-600-2A12 (CL-601) and CL-600-2B16 (CL-601-3A...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-05

    ... to these aircraft if Bombardier Service Bulletin (SB) 601-0590 [Scheduled Maintenance Instructions... information: Challenger 601 Time Limits/Maintenance Checks, PSP 601-5, Revision 38, dated June 19, 2009. Challenger 601 Time Limits/Maintenance Checks, PSP 601A-5, Revision 34, dated June 19, 2009. Challenger 604...

  2. Microprocessor-based cardiopulmonary monitoring system

    NASA Technical Reports Server (NTRS)

    1978-01-01

The system uses a dedicated microprocessor for transducer control and data acquisition and analysis. No data will be stored in this system; instead, the data will be transmitted to the onboard data system. The data system will require approximately 12 inches of rack space and will consume only 100 watts of power. An experiment-specific control panel, through a series of lighted buttons, will guide the operator through the test series, reducing the margin of error. The experimental validity of the system was verified, and the reproducibility of data and reliability of the system were checked. In addition, ease of training, ease of operator interaction, and crew acceptance were evaluated in actual flight conditions.

  3. Model Checking with Edge-Valued Decision Diagrams

    NASA Technical Reports Server (NTRS)

    Roux, Pierre; Siminiceanu, Radu I.

    2010-01-01

We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) to encode arithmetic functions and its implementation in a model checking library. We provide efficient algorithms for manipulating EVMDDs and review the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator on EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention of representing in one formalism the best techniques available at the moment across a spectrum of existing tools. Compared to the CUDD package, our tool is several orders of magnitude faster.

  4. Verification and Validation of Autonomy Software at NASA

    NASA Technical Reports Server (NTRS)

    Pecheur, Charles

    2000-01-01

Autonomous software holds the promise of new operation possibilities, easier design and development, and lower operating costs. However, as those systems close control loops and arbitrate resources on board with specialized reasoning, the range of possible situations becomes very large and uncontrollable from the outside, making conventional scenario-based testing very inefficient. Analytic verification and validation (V&V) techniques, and model checking in particular, can provide significant help for designing autonomous systems in a more efficient and reliable manner, by providing better coverage and allowing early error detection. This article discusses the general issue of V&V of autonomy software, with an emphasis on model-based autonomy, model-checking techniques, and concrete experiments at NASA.

  6. Combining Static Model Checking with Dynamic Enforcement Using the Statecall Policy Language

    NASA Astrophysics Data System (ADS)

    Madhavapeddy, Anil

Internet protocols encapsulate a significant amount of state, making the host software complex to implement. In this paper, we define the Statecall Policy Language (SPL), which provides a usable middle ground between ad hoc coding and formal reasoning. It enables programmers to embed automata in their code which can be statically model-checked using SPIN and dynamically enforced. The performance overheads are minimal, and the automata also provide higher-level debugging capabilities. We also describe some practical uses of SPL by describing the automata used in an SSH server written entirely in OCaml/SPL.
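The dynamic-enforcement half of this design can be sketched as a transition table consulted on every statecall, with illegal events rejected at run time. The class below and the SSH-like event names are illustrative inventions, not the actual SPL runtime or API:

```python
class ProtocolAutomaton:
    """Minimal runtime enforcement in the spirit of SPL (illustrative):
    each statecall must match an allowed transition, else it is rejected."""

    def __init__(self, transitions, start):
        self.transitions = transitions  # {(state, event): next_state}
        self.state = start

    def statecall(self, event):
        key = (self.state, event)
        if key not in self.transitions:
            raise RuntimeError(f"illegal {event!r} in state {self.state!r}")
        self.state = self.transitions[key]

# a toy slice of an SSH-like handshake: version exchange before key exchange
ssh = ProtocolAutomaton({
    ("start", "version_exchange"): "versioned",
    ("versioned", "kex_init"): "keyed",
    ("keyed", "auth_request"): "authed",
}, start="start")

ssh.statecall("version_exchange")
ssh.statecall("kex_init")  # allowed: follows the automaton
# calling ssh.statecall("version_exchange") again would raise RuntimeError
```

Static checking of the same automaton (the SPIN half) would verify global properties, e.g. that no path reaches "authed" without passing through "keyed".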

  7. Diagnosis checking of statistical analysis in RCTs indexed in PubMed.

    PubMed

    Lee, Paul H; Tse, Andy C Y

    2017-11-01

Statistical analysis is essential for reporting the results of randomized controlled trials (RCTs), as well as for evaluating their effectiveness. However, the validity of a statistical analysis also depends on whether the assumptions of that analysis are valid. We reviewed all RCTs published in journals indexed in PubMed during December 2014, identified using the Cochrane highly sensitive search strategy, to provide a complete picture of how RCTs handle the assumptions of statistical analysis. The 2014 impact factors of the journals were used as proxies for their quality. The type of statistical analysis used and whether the assumptions of the analysis were tested were reviewed. In total, 451 papers were included. Of the 278 papers that reported a crude analysis for the primary outcomes, 31 (27·2%) reported whether the outcome was normally distributed. Of the 172 papers that reported an adjusted analysis for the primary outcomes, diagnosis checking was rarely conducted, with only 20%, 8·6% and 7% checked for the generalized linear model, Cox proportional hazards model and multilevel model, respectively. Study characteristics (study type, drug trial, funding sources, journal type and endorsement of the CONSORT guidelines) were not associated with the reporting of diagnosis checking. Diagnosis checking of statistical analyses in RCTs published in PubMed-indexed journals was usually absent. Journals should provide guidelines about the reporting of diagnosis of assumptions. © 2017 Stichting European Society for Clinical Investigation Journal Foundation.

  8. User's guide to the western spruce budworm modeling system

    Treesearch

    Nicholas L. Crookston; J. J. Colbert; Paul W. Thomas; Katharine A. Sheehan; William P. Kemp

    1990-01-01

The Budworm Modeling System is a set of four computer programs: the Budworm Dynamics Model, the Prognosis-Budworm Dynamics Model, the Prognosis-Budworm Damage Model, and the Parallel Processing-Budworm Dynamics Model. Input to the first three programs and the output produced are described in this guide. A guide to the fourth program will be published separately.

  9. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, J.; Tolson, B.

    2017-12-01

    The increasing complexity and runtime of environmental models lead to the current situation that the calibration of all model parameters or the estimation of all of their uncertainty is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify most important parameters. All subsequent model calibrations or uncertainty estimation procedures focus then only on these subsets of parameters and are hence less computational demanding. While the examination of the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If any, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, might as well become computationally expensive in case of large model outputs and a high number of bootstraps. We, therefore, present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards. The latter case enables the checking of already processed sensitivity indexes. To demonstrate the method's independency of the convergence testing method, we applied it to two widely used, global SA methods: the screening method known as Morris method or Elementary Effects (Morris 1991) and the variance-based Sobol' method (Solbol' 1993). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of aforementioned three methods are known. This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. 
The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that no further model evaluations are needed, which enables the checking of already processed sensitivity results. This is one step towards reliable, transferable published sensitivity results.
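The Elementary Effects screening named in this abstract can be illustrated in a few lines. The sketch below is a minimal, assumed implementation on a toy linear function, not the authors' code; the sampling scheme and function names are simplifications.

```python
import numpy as np

def elementary_effects(f, n_params, n_traj=50, delta=0.5, seed=0):
    """Morris screening sketch: mean absolute elementary effect per parameter.

    `f` maps a parameter vector in [0, 1]^n_params to a scalar output.
    Each elementary effect is a one-at-a-time finite difference.
    """
    rng = np.random.default_rng(seed)
    mu_star = np.zeros(n_params)
    for _ in range(n_traj):
        x = rng.uniform(0.0, 0.5, n_params)  # keep x + delta inside [0, 1]
        fx = f(x)
        for i in range(n_params):
            xp = x.copy()
            xp[i] += delta
            mu_star[i] += abs(f(xp) - fx) / delta
    return mu_star / n_traj

# Toy model: parameter 0 dominates, parameter 2 is inert.
g = lambda x: 10.0 * x[0] + 2.0 * x[1] + 0.0 * x[2]
mu = elementary_effects(g, 3)
```

For this linear toy function the mean absolute effects recover the coefficients, which is what makes it useful as a benchmark for convergence checks of the kind the abstract describes.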

  10. Neighborhood deprivation is strongly associated with participation in a population-based health check.

    PubMed

    Bender, Anne Mette; Kawachi, Ichiro; Jørgensen, Torben; Pisinger, Charlotta

    2015-01-01

    We sought to examine whether neighborhood deprivation is associated with participation in a large population-based health check. Such analyses will help answer the question whether health checks, which are designed to meet the needs of residents in deprived neighborhoods, may increase participation and prove to be more effective in preventing disease. In Europe, no study has previously looked at the association between neighborhood deprivation and participation in a population-based health check. The study population comprised 12,768 persons invited for a health check including screening for ischemic heart disease and lifestyle counseling. The study population was randomly drawn from a population of 179,097 persons living in 73 neighborhoods in Denmark. Data on neighborhood deprivation (percentage with basic education, with low income and not in work) and individual socioeconomic position were retrieved from national administrative registers. Multilevel regression analyses with log links and binary distributions were conducted to obtain relative risks, intraclass correlation coefficients and proportional change in variance. Large differences between neighborhoods existed in both deprivation levels and neighborhood health check participation rate (mean 53%; range 35-84%). In multilevel analyses adjusted for age and sex, higher levels of all three indicators of neighborhood deprivation and a deprivation score were associated with lower participation in a dose-response fashion. Persons living in the most deprived neighborhoods had up to 37% decreased probability of participating compared to those living in the least deprived neighborhoods. Inclusion of individual socioeconomic position in the model attenuated the neighborhood deprivation coefficients, but all except for income deprivation remained statistically significant. 
Neighborhood deprivation was associated with participation in a population-based health check in a dose-response manner, in which increasing neighborhood deprivation was associated with decreasing participation. This suggests the need to develop preventive health checks tailored to deprived neighborhoods.
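The relative-risk effect measure reported above can be illustrated with made-up counts. This sketch uses the Katz log-interval approximation for a 95% CI; it is not the paper's multilevel model, and the counts are hypothetical.

```python
import math

def relative_risk(events_a, n_a, events_b, n_b):
    """Relative risk of participation with a 95% CI (Katz log method).

    events/n for group A (e.g. most deprived) vs. group B (least deprived).
    """
    p_a, p_b = events_a / n_a, events_b / n_b
    rr = p_a / p_b
    se = math.sqrt((1 - p_a) / events_a + (1 - p_b) / events_b)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, (lo, hi)

# Hypothetical counts giving roughly the 37% decreased probability
# mentioned in the abstract (RR = 0.35/0.56 = 0.625):
rr, ci = relative_risk(350, 1000, 560, 1000)
```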

  11. Performance Evaluation of 3d Modeling Software for Uav Photogrammetry

    NASA Astrophysics Data System (ADS)

    Yanagi, H.; Chikatsu, H.

    2016-06-01

UAV (Unmanned Aerial Vehicle) photogrammetry, which combines UAVs with freely available internet-based 3D modeling software, is widely used as a low-cost and user-friendly photogrammetry technique in fields such as remote sensing and geosciences. In UAV photogrammetry, only the platform used in conventional aerial photogrammetry is changed; consequently, 3D modeling software contributes significantly to its expansion. However, these software packages are black boxes, and as a result only a few studies have evaluated their accuracy using 3D coordinate check points. With this motivation, Smart3DCapture and Pix4Dmapper were downloaded from the Internet, the commercial software PhotoScan was also employed, and investigations were performed in this paper using check points and images obtained from a UAV.

  12. CheckMyMetal: a macromolecular metal-binding validation tool

    PubMed Central

    Porebski, Przemyslaw J.

    2017-01-01

    Metals are essential in many biological processes, and metal ions are modeled in roughly 40% of the macromolecular structures in the Protein Data Bank (PDB). However, a significant fraction of these structures contain poorly modeled metal-binding sites. CheckMyMetal (CMM) is an easy-to-use metal-binding site validation server for macromolecules that is freely available at http://csgid.org/csgid/metal_sites. The CMM server can detect incorrect metal assignments as well as geometrical and other irregularities in the metal-binding sites. Guidelines for metal-site modeling and validation in macromolecules are illustrated by several practical examples grouped by the type of metal. These examples show CMM users (and crystallographers in general) problems they may encounter during the modeling of a specific metal ion. PMID:28291757

  13. Method and system to perform energy-extraction based active noise control

    NASA Technical Reports Server (NTRS)

    Kelkar, Atul (Inventor); Joshi, Suresh M. (Inventor)

    2009-01-01

    A method to provide active noise control to reduce noise and vibration in reverberant acoustic enclosures such as aircraft, vehicles, appliances, instruments, industrial equipment and the like is presented. A continuous-time multi-input multi-output (MIMO) state space mathematical model of the plant is obtained via analytical modeling and system identification. Compensation is designed to render the mathematical model passive in the sense of mathematical system theory. The compensated system is checked to ensure robustness of the passive property of the plant. The check ensures that the passivity is preserved if the mathematical model parameters are perturbed from nominal values. A passivity-based controller is designed and verified using numerical simulations and then tested. The controller is designed so that the resulting closed-loop response shows the desired noise reduction.
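A grid-based positive-real check is a rough numerical stand-in for the passivity verification described above. A true passivity certificate requires e.g. the KYP/positive-real conditions; the check below only samples frequencies, and the transfer functions are toy examples, not the patent's plant model.

```python
import numpy as np

def is_positive_real_on_grid(num, den, freqs):
    """Sample Re H(jw) >= 0 for a SISO transfer function on a grid.

    num/den are polynomial coefficients (highest power first).
    Passivity of an LTI system corresponds to the positive-real
    property; failing the grid check proves non-passivity.
    """
    s = 1j * np.asarray(freqs)
    H = np.polyval(num, s) / np.polyval(den, s)
    return bool(np.all(H.real >= -1e-12))

w = np.logspace(-2, 2, 500)
passive = is_positive_real_on_grid([1.0, 1.0], [1.0, 2.0], w)  # (s+1)/(s+2)
lossy = is_positive_real_on_grid([1.0, -1.0], [1.0, 2.0], w)   # (s-1)/(s+2)
```

For (s+1)/(s+2) the real part is (2+w^2)/(4+w^2) > 0 everywhere, so the grid check passes; the right-half-plane-zero example fails at low frequency.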

  14. Model Selection Methods for Mixture Dichotomous IRT Models

    ERIC Educational Resources Information Center

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), the pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…
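Two of the indices compared, AIC and BIC, follow directly from the maximized log-likelihood. The sketch below is generic, not mixture-IRT-specific, and the likelihood values are hypothetical.

```python
import math

def aic_bic(log_likelihood, k, n):
    """AIC and BIC from a maximized log-likelihood.

    k = number of free parameters, n = sample size.
    BIC penalizes parameters more heavily once log(n) > 2.
    """
    aic = 2 * k - 2 * log_likelihood
    bic = k * math.log(n) - 2 * log_likelihood
    return aic, bic

# Two hypothetical candidate models fit to the same 500 responses;
# the richer model gains only 5 log-likelihood units for 15 extra parameters:
aic1, bic1 = aic_bic(-1200.0, k=10, n=500)
aic2, bic2 = aic_bic(-1195.0, k=25, n=500)
```

Here both criteria prefer the smaller model, which is the kind of comparison the study carries out across mixture IRT candidates.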

  15. Regression assumptions in clinical psychology research practice-a systematic review of common misconceptions.

    PubMed

    Ernst, Anja F; Albers, Casper J

    2017-01-01

Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper calls for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking.
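The misconception the review documents, checking normality of the variables instead of the errors, can be demonstrated with simulated data. This is an illustrative sketch assuming NumPy, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Skewed predictor, normal errors: the *errors* satisfy the normality
# assumption even though the variables themselves are far from normal.
x = rng.exponential(scale=2.0, size=5000)        # heavily right-skewed
y = 3.0 * x + rng.normal(0.0, 1.0, size=5000)    # normal errors

beta = np.polyfit(x, y, 1)                       # ordinary least squares
residuals = y - np.polyval(beta, x)

def skewness(v):
    """Sample skewness (zero for symmetric data)."""
    v = v - v.mean()
    return (v**3).mean() / (v**2).mean() ** 1.5
```

Testing normality of `x` would wrongly flag this perfectly valid model; the residuals are the quantity the assumption actually concerns.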

  16. Regression assumptions in clinical psychology research practice—a systematic review of common misconceptions

    PubMed Central

    Ernst, Anja F.

    2017-01-01

Misconceptions about the assumptions behind the standard linear regression model are widespread and dangerous. These lead to using linear regression when inappropriate, and to employing alternative procedures with less statistical power when unnecessary. Our systematic literature review investigated the employment and reporting of assumption checks in twelve clinical psychology journals. Findings indicate that normality of the variables themselves, rather than of the errors, was wrongly held to be a necessary assumption in 4% of papers that use regression. Furthermore, 92% of all papers using linear regression were unclear about their assumption checks, violating APA recommendations. This paper calls for heightened awareness of, and increased transparency in, the reporting of statistical assumption checking. PMID:28533971

  17. An effective automatic procedure for testing parameter identifiability of HIV/AIDS models.

    PubMed

    Saccomani, Maria Pia

    2011-08-01

    Realistic HIV models tend to be rather complex and many recent models proposed in the literature could not yet be analyzed by traditional identifiability testing techniques. In this paper, we check a priori global identifiability of some of these nonlinear HIV models taken from the recent literature, by using a differential algebra algorithm based on previous work of the author. The algorithm is implemented in a software tool, called DAISY (Differential Algebra for Identifiability of SYstems), which has been recently released (DAISY is freely available on the web site http://www.dei.unipd.it/~pia/ ). The software can be used to automatically check global identifiability of (linear and) nonlinear models described by polynomial or rational differential equations, thus providing a general and reliable tool to test global identifiability of several HIV models proposed in the literature. It can be used by researchers with a minimum of mathematical background.

  18. 75 FR 36298 - Airworthiness Directives; McDonnell Douglas Corporation Model DC-8-31, DC-8-32, DC-8-33, DC-8-41...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-25

    ... Airworthiness Limitations inspections (ALIs). This proposed AD results from a design review of the fuel tank...,'' and also adds ALI 30-1 for a pneumatic system decay check to minimize the risk of hot air impingement... 5, 2010, adds ALI 28-1, ``DC-8 Alternate and Center Auxiliary Tank Fuel Pump Control Systems Check...

  19. Development of a Model of Soldier Effectiveness: Retranslation Materials and Results

    DTIC Science & Technology

    1987-05-01

covering financial responsibility, particularly the family checking account. Consequently, the bad check rate for the unit dropped from 70 a month...Alcohol, and Aggressive Acts " Showing prudence in financial management and responsibility in personal/family matters; avoiding alcohol and other drugs or...threatening others, etc. versus " Acting irresponsibly in financial or personal/family affairs such that command time is required to counsel or otherwise

  20. Method-independent, Computationally Frugal Convergence Testing for Sensitivity Analysis Techniques

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Tolson, Bryan

    2017-04-01

The increasing complexity and runtime of environmental models lead to the current situation that calibrating all model parameters, or estimating all of their uncertainty, is often computationally infeasible. Hence, techniques to determine the sensitivity of model parameters are used to identify the most important parameters or model processes. All subsequent model calibrations or uncertainty estimation procedures then focus only on these subsets of parameters and are hence less computationally demanding. While examining the convergence of calibration and uncertainty methods is state-of-the-art, the convergence of the sensitivity methods is usually not checked. If it is checked at all, bootstrapping of the sensitivity results is used to determine the reliability of the estimated indexes. Bootstrapping, however, can itself become computationally expensive in case of large model outputs and a high number of bootstraps. We therefore present a Model Variable Augmentation (MVA) approach to check the convergence of sensitivity indexes without performing any additional model run. This technique is method- and model-independent. It can be applied either during the sensitivity analysis (SA) or afterwards; the latter case enables the checking of already processed sensitivity indexes. To demonstrate that the convergence test is method-independent, we applied it to three widely used, global SA methods: the screening method known as the Morris method or Elementary Effects (Morris 1991, Campolongo et al., 2000), the variance-based Sobol' method (Sobol' 1993, Saltelli et al. 2010) and a derivative-based method known as the Parameter Importance index (Goehler et al. 2013). The new convergence testing method is first scrutinized using 12 analytical benchmark functions (Cuntz & Mai et al. 2015) where the true indexes of the aforementioned three methods are known. 
This proof of principle shows that the method reliably determines the uncertainty of the SA results when different budgets are used for the SA. Subsequently, we demonstrate model independence by testing the frugal method on the hydrologic model mHM (www.ufz.de/mhm) with about 50 model parameters. The results show that the new frugal method is able to test the convergence, and therefore the reliability, of SA results in an efficient way. The appealing feature of this new technique is that no further model evaluations are needed, which enables the checking of already processed (and published) sensitivity results. This is one step towards reliable, transferable published sensitivity results.

  1. Sub-pixel analysis to support graphic security after scanning at low resolution

    NASA Astrophysics Data System (ADS)

    Haas, Bertrand; Cordery, Robert; Gou, Hongmei; Decker, Steve

    2006-02-01

Whether in the domain of audio, video or finance, our world tends to become increasingly digital. However, for diverse reasons, the transition from analog to digital is often much extended in time and proceeds by long steps (and sometimes never completes). One such step is the conversion of information on analog media to digital information. We focus in this paper on the conversion (scanning) of printed documents to digital images. Analog media have the advantage over digital channels that they can harbor much imperceptible information that can be used for fraud detection and forensic purposes, but this secondary information usually fails to be retrieved during the conversion step. This is particularly relevant since the Check 21 Act (Check Clearing for the 21st Century Act) became effective in 2004 and allows images of checks to be handled by banks as usual paper checks. We use this situation of check scanning as our primary benchmark for graphic security features after scanning. We first present a quick review of the most common graphic security features currently found on checks, with their specific purposes, qualities and disadvantages, and we demonstrate their poor survivability under the average scanning conditions expected from the Check 21 Act. We then present a novel method for measuring distances between, and rotations of, line elements in a scanned image: based on an appropriate print model, we refine direct measurements to an accuracy beyond the size of a scanning pixel, so we can then determine expected distances, periodicity, sharpness and print quality of known characters, symbols and other graphic elements in a document image. Finally, we apply our method to fraud detection in documents after gray-scale scanning at 300 dpi resolution. 
We show in particular that alterations on legitimate checks or copies of checks can be successfully detected by measuring with sub-pixel accuracy the irregularities inherently introduced by the illegitimate process.

  2. Software Safety Analysis of a Flight Guidance System

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W. (Technical Monitor); Tribble, Alan C.; Miller, Steven P.; Lempia, David L.

    2004-01-01

    This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of the formal methods analysis technique model checking that was used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.

  3. Space shuttle prototype check valve development

    NASA Technical Reports Server (NTRS)

    Tellier, G. F.

    1976-01-01

Contaminant-resistant seal designs and a dynamically stable prototype check valve for the orbital maneuvering and reaction control helium pressurization systems of the space shuttle were developed. Polymer and carbide seal models were designed and tested. Perfluoroelastomers compatible with N2O4 and N2H4 types were evaluated and compared with Teflon in flat and captive seal models. Low load sealing and contamination resistance tests demonstrated cutter seal superiority over polymer seals. Ceramic and carbide materials were evaluated for N2O4 service using exposure to RFNA as a worst case screen; chemically vapor deposited tungsten carbide was shown to be impervious to the acid after 6 months immersion. A unique carbide shell poppet/cutter seat check valve was designed and tested to demonstrate low cracking pressure (<2.0 psid), dynamic stability under all test bench flow conditions, contamination resistance (0.001 inch CRES wires cut with 1.5 pound seat load) and long life of 100,000 cycles (leakage <1.0 scc/hr helium from 0.1 to 400 psig).

  4. Model year 2002 fuel economy guide

    DOT National Transportation Integrated Search

    2001-01-01

    The Fuel Economy Guide is published by the U.S. Department of Energy as an aid to consumers considering the purchase of a new vehicle. The Guide lists estimates of miles per gallon (mpg) for each vehicle available for the new model year. The Guide is...

  5. Birth/death process model

    NASA Technical Reports Server (NTRS)

    Solloway, C. B.; Wakeland, W.

    1976-01-01

    First-order Markov model developed on digital computer for population with specific characteristics. System is user interactive, self-documenting, and does not require user to have complete understanding of underlying model details. Contains thorough error-checking algorithms on input and default capabilities.
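A first-order birth/death population model of the kind described can be sketched with a Gillespie-style simulation. The function names and the input-checking wrapper below are illustrative assumptions, written in the spirit of the model's error-checking algorithms, not a reconstruction of the original program.

```python
import random

def simulate_birth_death(n0, birth_rate, death_rate, t_end, seed=42):
    """Gillespie-style simulation of a linear birth/death process.

    Rates are per individual, so total event rates scale with the
    current population size n; the process is first-order Markov.
    """
    random.seed(seed)
    n, t = n0, 0.0
    while t < t_end and n > 0:
        total = (birth_rate + death_rate) * n
        t += random.expovariate(total)          # time to next event
        if random.random() < birth_rate / (birth_rate + death_rate):
            n += 1                              # birth
        else:
            n -= 1                              # death
    return n

def checked(n0, birth_rate, death_rate, t_end):
    """Thorough input checking before running the model."""
    if n0 < 0 or birth_rate < 0 or death_rate < 0 or t_end < 0:
        raise ValueError("population, rates and horizon must be non-negative")
    return simulate_birth_death(n0, birth_rate, death_rate, t_end)

pop = checked(100, 0.1, 0.1, 1.0)
```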

  6. Symbolic discrete event system specification

    NASA Technical Reports Server (NTRS)

    Zeigler, Bernard P.; Chi, Sungdo

    1992-01-01

    Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.
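The consistency check for linear polynomial constraints can be sketched for the one-symbol case, where each constraint bounds a symbolic event time t >= 0 to a half-line and the constraint set is consistent iff the intersected interval is non-empty. This is a simplification of the linear-programming-based feasibility check described above, not the DEVS extension itself.

```python
def feasible(constraints):
    """Consistency of constraints of the form a*t + b >= 0 over t >= 0.

    `constraints` is a list of (a, b) pairs. Each pair with a != 0
    bounds t from below or above; a == 0 reduces to the test b >= 0.
    """
    lo, hi = 0.0, float("inf")
    for a, b in constraints:
        if a > 0:
            lo = max(lo, -b / a)       # t >= -b/a
        elif a < 0:
            hi = min(hi, -b / a)       # t <= -b/a
        elif b < 0:
            return False               # constant constraint violated
    return lo <= hi

# t - 1 >= 0 and 5 - t >= 0: consistent (1 <= t <= 5).
# t - 6 >= 0 and 5 - t >= 0: inconsistent.
ok = feasible([(1, -1), (-1, 5)])
bad = feasible([(1, -6), (-1, 5)])
```

In the multi-symbol setting of the formalism, the same question is answered with a linear-programming feasibility test, as the abstract notes.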

  7. SU-F-T-300: Impact of Electron Density Modeling of ArcCHECK Cylindricaldiode Array On 3DVH Patient Specific QA Software Tool Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patwe, P; Mhatre, V; Dandekar, P

Purpose: 3DVH software is a patient-specific quality assurance tool which estimates the 3D dose to the patient-specific geometry with the help of the Planned Dose Perturbation algorithm. The purpose of this study is to evaluate the impact of the HU value of the ArcCHECK phantom entered in the Eclipse TPS on 3D dose & DVH QA analysis. Methods: The manufacturer of the ArcCHECK phantom provides a CT data set of the phantom & recommends considering it as a homogeneous phantom with electron density (1.19 gm/cc or 282 HU) close to PMMA. We performed this study on the Eclipse TPS (V13, VMS), a TrueBeam STx (VMS) linac & the ArcCHECK phantom (SNC). Plans were generated for a 6 MV photon beam, 20 cm × 20 cm field size at isocentre & SPD (source to phantom distance) of 86.7 cm to deliver 100 cGy at isocentre. The 3DVH software requires the patient DICOM data generated by the TPS & the plan delivered on the ArcCHECK phantom. Plans were generated in the TPS by assigning different HU values to the phantom. We analyzed the gamma index & the dose profile for all plans along the vertically downward direction of the beam's central axis for entry, exit & isocentre dose. Results: The global gamma passing rate (2% & 2 mm) for the manufacturer-recommended HU value 282 was 96.3%. Detector entry, isocentre & detector exit doses were 1.9048 (1.9270), 1.00 (1.0199) & 0.5078 (0.527) Gy for TPS (measured), respectively. The global gamma passing rate for electron density 1.1302 gm/cc was 98.6%. Detector entry, isocentre & detector exit doses were 1.8714 (1.8873), 1.00 (0.9988) & 0.5211 (0.516) Gy for TPS (measured), respectively. Conclusion: The electron density value assigned by the manufacturer does not hold true for every user. Proper modeling of the electron density of the ArcCHECK in the TPS is essential to avoid systematic error in the dose calculation of patient-specific QA.

  8. 75 FR 5355 - Notice of Extension of Comment Period for NUREG-1934, Nuclear Power Plant Fire Modeling...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-02

    ..., Nuclear Power Plant Fire Modeling Application Guide (NPP FIRE MAG), Draft Report for Comment AGENCY... 1019195), Nuclear Power Plant Fire Modeling Application Guide (NPP FIRE MAG), Draft Report for Comment... Plant Fire Modeling Application Guide (NPP FIRE MAG)'' is available electronically under ADAMS Accession...

  9. Analysis of student’s scientific attitude behaviour change effects blended learning supported by I-spring Suite 8 application

    NASA Astrophysics Data System (ADS)

    Budiharti, Rini; Waras, N. S.

    2018-05-01

This article aims to describe the change in students' scientific attitude and behaviour as an effect of Blended Learning supported by the I-Spring Suite 8 application, on the material of equilibrium and rotational dynamics. The Blended Learning model is a learning strategy that integrates face-to-face and online learning through a combination of various media. Supported by the I-Spring Suite 8 media setting, the Blended Learning model can make learning interactive. Students are guided to actively interact with the media, as well as with other students, to discuss and derive concepts from the phenomena or facts presented. A scientific attitude is a natural attitude of students in the learning process, and in interactive learning a scientific attitude is much needed. The research was conducted using the Lesson Study model, which consists of the Plan-Do-Check-Act (PDCA) stages, applied to students of class XI MIPA 2 of Senior High School 6 Surakarta. The validity of the data was established by triangulation of observation, interviews and document review. Based on the discussion, it can be concluded that the use of Blended Learning supported by the I-Spring Suite 8 media is able to produce changes in student behaviour on all dimensions of scientific attitude: inquisitiveness, respect for data or facts, critical thinking, discovery and creativity, open-mindedness and cooperation, and perseverance. The e-learning media display, supported by student worksheets, keeps students enthusiastic from the beginning, through the core, to the end of learning.

  10. Assembly of a check-patterned CuSx-TiO2 film with an electron-rich pool and its application for the photoreduction of carbon dioxide to methane

    NASA Astrophysics Data System (ADS)

    Lee, Homin; Kwak, Byeong Sub; Park, No-Kuk; Baek, Jeom-In; Ryu, Ho-Jung; Kang, Misook

    2017-01-01

A new check-patterned CuSx-TiO2 film was designed to improve the photoreduction of CO2 to CH4. The check-patterned CuSx-TiO2 film with a 3D-network microstructure was fabricated by a facile squeeze method. The as-synthesized TiO2 and CuSx powders, as well as the patterned film, were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM), X-ray photoelectron spectroscopy (XPS), UV-visible spectroscopy, cyclic voltammetry (CV), and photoluminescence (PL) spectroscopy, as well as photocurrent density and CO2 temperature-programmed desorption (TPD) measurements. Compared to pure CuSx and TiO2, the check-patterned CuSx-TiO2 film exhibited significantly increased adsorption of CO2 on its networked microstructure, attributed to the enlarged interfaces between the microparticles. The check-patterned CuSx-TiO2 film exhibited superior photocatalytic behavior, with 53.2 μmol gcat-1 L-1 of CH4 produced after 8 h of reaction, whereas 18.1 and 7.3 μmol gcat-1 L-1 of CH4 were produced from pure TiO2 and CuSx films under the same reaction conditions, respectively. A model for the enhanced photoactivity of the check-patterned CuSx-TiO2 film was proposed. The results indicate that the check-patterned CuSx-TiO2 material is quite promising as a photocatalyst for the reduction of CO2 to CH4.

  11. Formal Validation of Fault Management Design Solutions

    NASA Technical Reports Server (NTRS)

    Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John

    2013-01-01

    The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.

  12. Using chemical organization theory for model checking

    PubMed Central

    Kaleta, Christoph; Richter, Stephan; Dittrich, Peter

    2009-01-01

Motivation: The increasing number and complexity of biomodels makes automatic procedures for checking the models' properties and quality necessary. Approaches like elementary mode analysis, flux balance analysis, deficiency analysis and chemical organization theory (OT) require only the stoichiometric structure of the reaction network for derivation of valuable information. In formalisms like Systems Biology Markup Language (SBML), however, information about the stoichiometric coefficients required for an analysis of chemical organizations can be hidden in kinetic laws. Results: First, we introduce an algorithm that uncovers stoichiometric information that might be hidden in the kinetic laws of a reaction network. This allows us to apply OT to SBML models using modifiers. Second, using the new algorithm, we performed a large-scale analysis of the 185 models contained in the manually curated BioModels Database. We found that for 41 models (22%) the set of organizations changes when modifiers are considered correctly. We discuss one of these models in detail (BIOMD149, a combined model of the ERK- and Wnt-signaling pathways), whose set of organizations drastically changes when modifiers are considered. Third, we found inconsistencies in 5 models (3%) and identified their characteristics. Compared with flux-based methods, OT is able to identify those species and reactions more accurately [in 26 cases (14%)] that can be present in a long-term simulation of the model. We conclude that our approach is a valuable tool that helps to improve the consistency of biomodels and their repositories. Availability: All data and a Java applet to check SBML models are available from http://www.minet.uni-jena.de/csb/prj/ot/tools Contact: dittrich@minet.uni-jena.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19468053
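The closure half of the organization test can be sketched directly from the stoichiometric structure: a species set is closed if every reaction whose reactants all lie in the set produces only species already in the set. The sketch below omits self-maintenance (the second condition of chemical organization theory), and the species and reaction names are hypothetical.

```python
def is_closed(species, reactions):
    """Closure test from chemical organization theory (sketch).

    `reactions` maps frozenset(reactants) -> set(products). The set
    `species` is closed iff no applicable reaction escapes the set.
    """
    species = set(species)
    for reactants, products in reactions.items():
        if set(reactants) <= species and not set(products) <= species:
            return False
    return True

# Toy network: A + B -> C, C -> A.
rxns = {
    frozenset({"A", "B"}): {"C"},
    frozenset({"C"}): {"A"},
}
```

{A, B, C} is closed under these reactions, while {A, B} is not, since the first reaction produces C outside the set.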

  13. Effects of arthroscopy-guided suprascapular nerve block combined with ultrasound-guided interscalene brachial plexus block for arthroscopic rotator cuff repair: a randomized controlled trial.

    PubMed

    Lee, Jae Jun; Hwang, Jung-Taek; Kim, Do-Young; Lee, Sang-Soo; Hwang, Sung Mi; Lee, Na Rea; Kwak, Byung-Chan

    2017-07-01

    The aim of this study was to compare the pain relieving effect of ultrasound-guided interscalene brachial plexus block (ISB) combined with arthroscopy-guided suprascapular nerve block (SSNB) with that of ultrasound-guided ISB alone within the first 48 h after arthroscopic rotator cuff repair. Forty-eight patients with rotator cuff tears who had undergone arthroscopic rotator cuff repair were enrolled. The 24 patients in group 1 received ultrasound-guided ISB and arthroscopy-guided SSNB; the remaining 24 patients in group 2 underwent ultrasound-guided ISB alone. Visual analogue scale pain score and patient satisfaction score were checked at 1, 3, 6, 12, 18, 24, and 48 h post-operatively. Group 1 had a lower visual analogue scale pain score at 3, 6, 12, 18, 24, and 48 h post-operatively (1.7 < 2.6, 1.6 < 4.0, 3.5 < 5.8, 3.6 < 5.2, 3.2 < 4.2, 1.3 < 2.0), and a higher patient satisfaction score at 6, 12, 18, 24, and 36 h post-operatively than group 2 (7.8 > 6.0, 6.2 > 4.3, 6.4 > 5.1, 6.9 > 5.9, 7.9 > 7.1). Six patients in group 1 developed rebound pain twice, and the others in group 1 developed it once. All of the patients in group 2 had one rebound phenomenon each (p = 0.010). The mean timing of rebound pain in group 1 was later than that in group 2 (15.5 > 9.3 h, p < 0.001), and the mean size of rebound pain was smaller in group 1 than that in group 2 (2.5 > 4.0, p = 0.001). Arthroscopy-guided SSNB combined with ultrasound-guided ISB resulted in lower visual analogue scale pain scores at 3-24 and 48 h post-operatively, and higher patient satisfaction scores at 6-36 h post-operatively with the attenuated rebound pain compared to scores in patients who received ultrasound-guided ISB alone after arthroscopic rotator cuff repair. The combined blocks may relieve post-operative pain more effectively than the single block within 48 h after arthroscopic cuff repair. Randomized controlled trial, Level I. ClinicalTrials.gov Identifier: NCT02424630.

  14. Location Contexts of User Check-Ins to Model Urban Geo Life-Style Patterns

    PubMed Central

    Hasan, Samiul; Ukkusuri, Satish V.

    2015-01-01

    Geo-location data from social media offers us information, in new ways, to understand people's attitudes and interests through their activity choices. In this paper, we explore the idea of inferring individual life-style patterns from activity-location choices revealed in social media. We present a model to understand life-style patterns using the contextual information (e. g. location categories) of user check-ins. Probabilistic topic models are developed to infer individual geo life-style patterns from two perspectives: i) to characterize the patterns of user interests to different types of places and ii) to characterize the patterns of user visits to different neighborhoods. The method is applied to a dataset of Foursquare check-ins of the users from New York City. The co-existence of several location contexts and the corresponding probabilities in a given pattern provide useful information about user interests and choices. It is found that geo life-style patterns have similar items—either nearby neighborhoods or similar location categories. The semantic and geographic proximity of the items in a pattern reflects the hidden regularity in user preferences and location choice behavior. PMID:25970430
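The contextual input to such topic models can be illustrated by building a user's empirical distribution over location categories from check-ins. This stand-in only computes the count-based distribution; the paper's probabilistic topic inference is not reproduced here, and the venue and category names are hypothetical.

```python
from collections import Counter

def category_profile(checkins):
    """Normalized distribution over location categories for one user.

    `checkins` is a list of (venue_id, category) pairs; the result is
    the empirical probability of each category in the user's history.
    """
    counts = Counter(category for _, category in checkins)
    total = sum(counts.values())
    return {cat: n / total for cat, n in counts.items()}

user = [("venue1", "Coffee Shop"), ("venue2", "Office"),
        ("venue3", "Coffee Shop"), ("venue4", "Gym")]
profile = category_profile(user)
```

Collections of such profiles are the kind of data from which latent geo life-style patterns are then inferred.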

  15. Dynamic modeling and simulation of an integral bipropellant propulsion double-valve combined test system

    NASA Astrophysics Data System (ADS)

    Chen, Yang; Wang, Huasheng; Xia, Jixia; Cai, Guobiao; Zhang, Zhenpeng

    2017-04-01

For the pressure-reducing regulator and check valve double-valve combined test system in an integral bipropellant propulsion system, a system model is established from modular models of typical components. The simulation covers the whole working process of a 9 MPa test, from startup to rated working condition and finally to shutdown. Comparison of the simulation results with test data shows that five working conditions (standby, startup, rated pressurization, shutdown, and halt) and nine stages of the combined test system are comprehensively disclosed, and that the valve-spool opening and closing details of the regulator and the two check valves are accurately revealed. The simulation also clarifies two phenomena that the test data cannot: the critical opening state, in which the check valve spools alternately open slightly and close in their fully closed positions, and the pronounced flow-field temperature drop and rise in the pipeline network as helium gas flows. Moreover, simulation results that account for component wall heat transfer are closer to the test data than those obtained under the adiabatic-wall assumption, and better reveal the dynamic characteristics of the system in its various working stages.

  16. Teaching "Instant Experience" with Graphical Model Validation Techniques

    ERIC Educational Resources Information Center

    Ekstrøm, Claus Thorn

    2014-01-01

Graphical model validation techniques for linear normal models are often used to check the assumptions underlying a statistical model. We describe an approach to provide "instant experience" in looking at a graphical model validation plot, so it becomes easier to judge whether any of the underlying assumptions are violated.
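The graphical checks the abstract refers to rest on simple numerical ingredients. A minimal sketch (assuming nothing about the authors' tooling) computes the residual-vs-fitted quantities behind such a validation plot:

```python
import numpy as np

def ols_residual_diagnostics(x, y):
    """Fit y = a + b*x by least squares and return fitted values and residuals,
    the raw ingredients of a residual-vs-fitted validation plot."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ beta
    resid = y - fitted
    return fitted, resid

# Simulated data from a well-specified linear normal model.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 200)
y = 2.0 + 3.0 * x + rng.normal(0, 1.0, 200)
fitted, resid = ols_residual_diagnostics(x, y)
```

In a residual-vs-fitted plot these residuals should scatter evenly around zero; by construction OLS residuals are uncorrelated with the fitted values, so any visible trend or funnel shape signals a violated assumption.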

  17. Bayesian truncation errors in chiral effective field theory: model checking and accounting for correlations

    NASA Astrophysics Data System (ADS)

    Melendez, Jordan; Wesolowski, Sarah; Furnstahl, Dick

    2017-09-01

    Chiral effective field theory (EFT) predictions are necessarily truncated at some order in the EFT expansion, which induces an error that must be quantified for robust statistical comparisons to experiment. A Bayesian model yields posterior probability distribution functions for these errors based on expectations of naturalness encoded in Bayesian priors and the observed order-by-order convergence pattern of the EFT. As a general example of a statistical approach to truncation errors, the model was applied to chiral EFT for neutron-proton scattering using various semi-local potentials of Epelbaum, Krebs, and Meißner (EKM). Here we discuss how our model can learn correlation information from the data and how to perform Bayesian model checking to validate that the EFT is working as advertised. Supported in part by NSF PHY-1614460 and DOE NUCLEI SciDAC DE-SC0008533.

  18. Automated Environment Generation for Software Model Checking

    NASA Technical Reports Server (NTRS)

    Tkachuk, Oksana; Dwyer, Matthew B.; Pasareanu, Corina S.

    2003-01-01

    A key problem in model checking open systems is environment modeling (i.e., representing the behavior of the execution context of the system under analysis). Software systems are fundamentally open since their behavior is dependent on patterns of invocation of system components and values defined outside the system but referenced within the system. Whether reasoning about the behavior of whole programs or about program components, an abstract model of the environment can be essential in enabling sufficiently precise yet tractable verification. In this paper, we describe an approach to generating environments of Java program fragments. This approach integrates formally specified assumptions about environment behavior with sound abstractions of environment implementations to form a model of the environment. The approach is implemented in the Bandera Environment Generator (BEG) which we describe along with our experience using BEG to reason about properties of several non-trivial concurrent Java programs.

  19. Modelling of influential parameters on a continuous evaporation process by Doehlert shells

    PubMed Central

    Porte, Catherine; Havet, Jean-Louis; Daguet, David

    2003-01-01

    The modelling of the parameters that influence the continuous evaporation of an alcoholic extract was considered using Doehlert matrices. The work was performed with a wiped falling film evaporator that allowed us to study the influence of the pressure, temperature, feed flow and dry matter of the feed solution on the dry matter contents of the resulting concentrate, and the productivity of the process. The Doehlert shells were used to model the influential parameters. The pattern obtained from the experimental results was checked allowing for some dysfunction in the unit. The evaporator was modified and a new model applied; the experimental results were then in agreement with the equations. The model was finally determined and successfully checked in order to obtain an 8% dry matter concentrate with the best productivity; the results fit in with the industrial constraints of subsequent processes. PMID:18924887

  20. AQUATOX Setup Guide

    EPA Pesticide Factsheets

The new Guidance in AQUATOX Setup and Application provides a quick-start guide that introduces major model features, and serves as a cookbook for basic model setup, calibration, and validation.

  1. Re-engineering pre-employment check-up systems: a model for improving health services.

    PubMed

    Rateb, Said Abdel Hakim; El Nouman, Azza Abdel Razek; Rateb, Moshira Abdel Hakim; Asar, Mohamed Naguib; El Amin, Ayman Mohammed; Gad, Saad abdel Aziz; Mohamed, Mohamed Salah Eldin

    2011-01-01

The purpose of this paper is to develop a model for improving health services provided by the pre-employment medical fitness check-up system affiliated to Egypt's Health Insurance Organization (HIO). Operations research, notably system re-engineering, is used in six randomly selected centers, and findings before and after re-engineering are compared. The re-engineering model follows a systems approach, focusing on three areas: structure, process and outcome. The model is based on six main components: electronic booking, standardized check-up processes, protected medical documents, advanced archiving through an electronic content management (ECM) system, infrastructure development, and capacity building. The model originates mainly from customer needs and expectations. The centers' monthly customer flow increased significantly after re-engineering. The mean time spent per customer cycle improved from 48.8 +/- 14.5 minutes before re-engineering to 18.3 +/- 5.5 minutes after. Appointment delay was also significantly decreased, from an average of 18 days to 6.2 days. Both beneficiaries and service providers were significantly more satisfied with the services after re-engineering. The model shows that the increased revenue exceeded the re-engineering program's costs. Re-engineering in this study involved multiple structure and process elements. The literature review did not reveal similar re-engineering healthcare packages; therefore, each element was compared separately. This model is highly recommended for improving service effectiveness and efficiency. This research is the first in Egypt to apply the re-engineering approach to public health systems. Developing user-friendly models for service improvement is an added value.

  2. Development of a check sheet for collecting information necessary for occupational safety and health activities and building relevant systems in overseas business places.

    PubMed

    Kajiki, Shigeyuki; Kobayashi, Yuichi; Uehara, Masamichi; Nakanishi, Shigemoto; Mori, Koji

    2016-06-07

    This study aimed to develop an information gathering check sheet to efficiently collect information necessary for Japanese companies to build global occupational safety and health management systems in overseas business places. The study group consisted of 2 researchers with occupational physician careers in a foreign-affiliated company in Japan and 3 supervising occupational physicians who were engaged in occupational safety and health activities in overseas business places. After investigating information and sources of information necessary for implementing occupational safety and health activities and building relevant systems, we conducted information acquisition using an information gathering check sheet in the field, by visiting 10 regions in 5 countries (first phase). The accuracy of the information acquired and the appropriateness of the information sources were then verified in study group meetings to improve the information gathering check sheet. Next, the improved information gathering check sheet was used in another setting (3 regions in 1 country) to confirm its efficacy (second phase), and the information gathering check sheet was thereby completed. The information gathering check sheet was composed of 9 major items (basic information on the local business place, safety and health overview, safety and health systems, safety and health staff, planning/implementation/evaluation/improvement, safety and health activities, laws and administrative organs, local medical care systems and public health, and medical support for resident personnel) and 61 medium items. We relied on the following eight information sources: the internet, company (local business place and head office in Japan), embassy/consulate, ISO certification body, university or other educational institutions, and medical institutions (aimed at Japanese people or at local workers). 
Through multiple study group meetings and a two-phased field survey (13 regions in 6 countries), an information gathering check sheet was completed. We confirmed the possibility that this check sheet would enable the user to obtain necessary information when expanding safety and health activities in a country or region that is new to the user. It is necessary in the future to evaluate safety and health systems and activities using this information gathering check sheet in a local business place in any country in which a Japanese business will be established, and to verify the efficacy of the check sheet by conducting model programs to test specific approaches.

  3. Directed Bak-Sneppen Model for Food Chains

    NASA Astrophysics Data System (ADS)

    Stauffer, D.; Jan, N.

    A modification of the Bak-Sneppen model to include simple elements of Darwinian evolution is used to check the survival of prey and predators in long food chains. Mutations, selection, and starvation resulting from depleted prey are incorporated in this model.
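A minimal version of such a directed Bak-Sneppen dynamic is easy to sketch: the least-fit species and only its downstream neighbour (its predator) receive new random fitness values each step. The function names and the ring topology below are illustrative assumptions, not the authors' model:

```python
import random

def bak_sneppen_step(fitness, rng):
    """One update of a directed Bak-Sneppen chain: replace the fitness of the
    least-fit species and of its right (downstream/predator) neighbour only."""
    n = len(fitness)
    i = min(range(n), key=fitness.__getitem__)
    fitness[i] = rng.random()
    fitness[(i + 1) % n] = rng.random()  # directed: predator side only, not both neighbours

def simulate(n=50, steps=5000, seed=0):
    """Run the directed dynamics on a ring of n species and return final fitnesses."""
    rng = random.Random(seed)
    fitness = [rng.random() for _ in range(n)]
    for _ in range(steps):
        bak_sneppen_step(fitness, rng)
    return fitness

final_fitness = simulate()
```

Over many steps the repeated replacement of the weakest species drives the familiar self-organized critical behaviour, with most surviving fitness values accumulating above a threshold.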

  4. Detecting Inconsistencies in Multi-View Models with Variability

    NASA Astrophysics Data System (ADS)

    Lopez-Herrejon, Roberto Erick; Egyed, Alexander

    Multi-View Modeling (MVM) is a common modeling practice that advocates the use of multiple, different and yet related models to represent the needs of diverse stakeholders. Of crucial importance in MVM is consistency checking - the description and verification of semantic relationships amongst the views. Variability is the capacity of software artifacts to vary, and its effective management is a core tenet of the research in Software Product Lines (SPL). MVM has proven useful for developing one-of-a-kind systems; however, to reap the potential benefits of MVM in SPL it is vital to provide consistency checking mechanisms that cope with variability. In this paper we describe how to address this need by applying Safe Composition - the guarantee that all programs of a product line are type safe. We evaluate our approach with a case study.

  5. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    NASA Astrophysics Data System (ADS)

    Basu, Samik; Ghosh, Arka P.; He, Ru

We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with state-space explosion that makes the exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically, the subset that can only express bounded until properties; or rely on user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate probabilistic characteristics of an unbounded until property by that of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k_0; (b) the second phase computes the probability of satisfying the k_0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as IPv4 zeroconf protocol and dining philosopher protocol modeled as Discrete Time Markov chains.
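The two-phase idea can be sketched as follows: once a truncation bound k_0 is fixed (phase one), the unbounded-until probability is estimated by Monte Carlo simulation of the bounded property (phase two). The toy DTMC and function names below are illustrative assumptions, not the paper's implementation:

```python
import random

def sample_bounded_until(transitions, start, target, k0, rng):
    """Simulate one DTMC path for at most k0 steps; report whether target was reached."""
    state = start
    for _ in range(k0):
        if state == target:
            return True
        moves = transitions[state]
        if len(moves) == 1 and moves[0][0] == state:
            return False  # absorbing non-target state: the path can never succeed
        r, acc = rng.random(), 0.0
        for nxt, p in moves:
            acc += p
            if r < acc:
                state = nxt
                break
    return state == target

def estimate_until(transitions, start, target, k0=200, n=20000, seed=1):
    """Phase two: the fraction of sampled paths satisfying the k0-bounded until
    property estimates the probability of the unbounded until property."""
    rng = random.Random(seed)
    hits = sum(sample_bounded_until(transitions, start, target, k0, rng)
               for _ in range(n))
    return hits / n

# Toy chain: from state 0, reach target 1 w.p. 0.3, sink 2 w.p. 0.2, stay w.p. 0.5.
# The exact probability of eventually reaching 1 is 0.3 / (0.3 + 0.2) = 0.6.
chain = {0: [(1, 0.3), (2, 0.2), (0, 0.5)], 1: [(1, 1.0)], 2: [(2, 1.0)]}
est = estimate_until(chain, 0, 1)
```

For this chain the truncation error at k_0 = 200 is negligible (the probability of still being in state 0 after 200 steps is 0.5^200), so the bounded estimate converges on the unbounded value 0.6.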

  6. Sediment depositions upstream of open check dams: new elements from small scale models

    NASA Astrophysics Data System (ADS)

    Piton, Guillaume; Le Guern, Jules; Carbonari, Costanza; Recking, Alain

    2015-04-01

Torrent hazard mitigation remains a major issue in mountainous regions. In steep-slope streams, and especially on their fans, torrential floods mainly result from abrupt and massive sediment deposits. To curtail such phenomena, soil conservation measures as well as torrent control works have been undertaken for decades. Since the 1950s, open check dams have complemented other structural and non-structural measures in watershed-scale mitigation plans [1]. They are often built to trap sediments near the fan apexes. The development of earthmoving machinery after WWII facilitated the dredging operations of open check dams, and hundreds of these structures have been built over the past 60 years. Their design evolved with the improving comprehension of torrential hydraulics and sediment transport; however, this kind of structure has a general tendency to trap most of the sediments supplied by the headwaters. Secondary effects such as channel incision downstream of the traps have often followed the creation of an open check dam. This sediment-starvation trend tends to propagate to the main valley rivers and to disrupt past geomorphic equilibria. To take this into account, and to reduce useless dredging operations, better selectivity of sediment trapping must be sought in open check dams; i.e., an optimal open check dam would trap sediments during dangerous floods and flush them during normal small floods. An accurate description of the hydraulic and deposition processes that occur in sediment traps is needed to optimize existing structures and to design better-adjusted new structures. A literature review [2] showed that while design criteria exist for the structure itself, little information is available on the dynamics of the sediment depositions upstream of open check dams: What geomorphic patterns occur during deposition? Which friction laws and sediment transport formulas best describe massive depositions in sediment traps? What ranges of Froude and Shields numbers do the flows tend to adopt? New small-scale model experiments have been undertaken focusing on deposition processes and their related hydraulics. Accurate photogrammetric measurements allowed us to better describe the deposition processes [3]. Large-Scale Particle Image Velocimetry (LS-PIV) was performed to determine surface velocity fields in highly active channels with low grain submersion [4]. We will present preliminary results of our experiments showing the new elements we observed in massive deposit dynamics. REFERENCES [1] Armanini, A., Dellagiacoma, F. & Ferrari, L. From the check dam to the development of functional check dams. Fluvial Hydraulics of Mountain Regions 37, 331-344 (1991). [2] Piton, G. & Recking, A. Design of sediment traps with open check dams: a review, part I: hydraulic and deposition processes. (Accepted by the) Journal of Hydraulic Engineering 1-23 (2015). [3] Le Guern, J. MSc thesis: Modélisation physique des plages de dépôt : analyse de la dynamique de remplissage (2014). [4] Carbonari, C. MSc thesis: Small-scale experiments of deposition processes occurring in sediment traps, LS-PIV measurements and geomorphological descriptions (in preparation).

  7. An interactive framework for acquiring vision models of 3-D objects from 2-D images.

    PubMed

    Motai, Yuichi; Kak, Avinash

    2004-02-01

    This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. For another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We will show results on both polygonal objects and object containing curved features.

  8. Experience Report: A Do-It-Yourself High-Assurance Compiler

    NASA Technical Reports Server (NTRS)

    Pike, Lee; Wegmann, Nis; Niller, Sebastian; Goodloe, Alwyn

    2012-01-01

Embedded domain-specific languages (EDSLs) are an approach for quickly building new languages while maintaining the advantages of a rich metalanguage. We argue in this experience report that the "EDSL approach" can surprisingly ease the task of building a high-assurance compiler. We do not strive to build a fully formally-verified tool-chain, but take a "do-it-yourself" approach to increase our confidence in compiler correctness without too much effort. Copilot is an EDSL developed by Galois, Inc. and the National Institute of Aerospace under contract to NASA for the purpose of runtime monitoring of flight-critical avionics. We report our experience in using type-checking, QuickCheck, and model-checking "off-the-shelf" to quickly increase confidence in our EDSL tool-chain.
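The QuickCheck-style confidence boost the report describes can be illustrated at toy scale: generate random programs and check that a "compiler" agrees with a reference interpreter on every one. The expression language and postfix target below are purely illustrative stand-ins, not the Copilot tool-chain:

```python
import random

# Tiny expression language: int leaves, or ('add'|'mul', left, right) nodes.
def interp(e):
    """Reference semantics: directly evaluate the expression tree."""
    if isinstance(e, int):
        return e
    op, l, r = e
    return interp(l) + interp(r) if op == "add" else interp(l) * interp(r)

def compile_to_rpn(e, out=None):
    """'Compile' the tree to a postfix instruction list."""
    if out is None:
        out = []
    if isinstance(e, int):
        out.append(("push", e))
    else:
        op, l, r = e
        compile_to_rpn(l, out)
        compile_to_rpn(r, out)
        out.append((op, None))
    return out

def run(prog):
    """Stack machine executing the compiled program."""
    st = []
    for op, arg in prog:
        if op == "push":
            st.append(arg)
        else:
            r, l = st.pop(), st.pop()
            st.append(l + r if op == "add" else l * r)
    return st[0]

def random_expr(rng, depth=4):
    """QuickCheck-style generator of random expression trees."""
    if depth == 0 or rng.random() < 0.3:
        return rng.randrange(-10, 10)
    return (rng.choice(["add", "mul"]),
            random_expr(rng, depth - 1), random_expr(rng, depth - 1))
```

The property under test is semantic preservation: for every generated expression, interpreting it and running its compiled form must yield the same value.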

  9. The Large Local Hole in the Galaxy Distribution: The 2MASS Galaxy Angular Power Spectrum

    NASA Astrophysics Data System (ADS)

    Frith, W. J.; Outram, P. J.; Shanks, T.

    2005-06-01

We present new evidence for a large deficiency in the local galaxy distribution situated in the ˜4000 deg2 APM survey area. We use models guided by the 2dF Galaxy Redshift Survey (2dFGRS) n(z) as a probe of the underlying large-scale structure. We first check the usefulness of this technique by comparing the 2dFGRS n(z) model prediction with the K-band and B-band number counts extracted from the 2MASS and 2dFGRS parent catalogues over the 2dFGRS Northern and Southern declination strips, before turning to a comparison with the APM counts. We find that the APM counts in both the B and K-bands indicate a deficiency in the local galaxy distribution of ˜30% to z ≈ 0.1 over the entire APM survey area. We examine the implied significance of such a large local hole, considering several possible forms for the real-space correlation function. We find that such a deficiency in the APM survey area indicates an excess of power at large scales over what is expected from the correlation function observed in the 2dFGRS or predicted from ΛCDM Hubble Volume mock catalogues. In order to check further the clustering at large scales in the 2MASS data, we have calculated the angular power spectrum for 2MASS galaxies. Although in the linear regime (l<30), ΛCDM models can give a good fit to the 2MASS angular power spectrum, over a wider range (l<100) the power spectrum from Hubble Volume mock catalogues suggests that scale-dependent bias may be needed for ΛCDM to fit. However, the modest increase in large-scale power observed in the 2MASS angular power spectrum is still not enough to explain the local hole. If the APM survey area really is 25% deficient in galaxies out to z≈0.1, explanations for the disagreement with observed galaxy clustering statistics include the possibilities that the galaxy clustering is non-Gaussian on large scales or that the 2MASS volume is still too small to represent a `fair sample' of the Universe.
Extending the 2dFGRS redshift survey over the whole APM area would resolve many of the remaining questions about the existence and interpretation of this local hole.

  10. TEAMS Model Analyzer

    NASA Technical Reports Server (NTRS)

    Tijidjian, Raffi P.

    2010-01-01

The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time spent in the manual process that each TEAMS modeler must perform in the preparation of reporting for model reviews, a new tool has been developed as an aid to models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view the model in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model, and generate an input/output report pertaining to all of the components. Rules can be automatically validated against the model, with a report generated containing resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.

  11. 2016 Oncology Nursing Society Annual Congress: Podium, E-Poster, and Poster Session Abstracts.

    PubMed

    2016-03-01

    Abstracts appear as they were submitted and have not undergone editing or the Oncology Nursing Forum's review process. Only abstracts that will be presented appear online. Poster numbers are subject to change. For updated poster numbers, visit congress.ons.org or check the Congress guide. Data published in abstracts presented at ONS's Annual Congress are embargoed until the conclusion of the presentation. Coverage and/or distribution of an abstract, poster, or any of its supplemental material to or by the news media, any commercial entity, or individuals, including the authors of said abstract, is strictly prohibited until the embargo is lifted. Promotion of general topics and speakers is encouraged within these guidelines.

  12. National Program for Inspection of Non-Federal Dams. Palmer Brook Dam MA 00205, Connecticut River Basin, Becket, Massachusetts. Phase I Inspection Report.

    DTIC Science & Technology

    1981-03-01

[Abstract text not recoverable: the extracted fragments are OCR residue from drawing title blocks of the Phase I inspection report (location and drainage area map, dike guide to photographs, flood impact area).]

  13. Microfabrication of curcumin-loaded microparticles using coaxial electrohydrodynamic atomization

    NASA Astrophysics Data System (ADS)

    Yuan, Shuai; Si, Ting; Liu, Zhongfa; Xu, Ronald X.

    2014-03-01

Encapsulation of curcumin in PLGA microparticles is performed by a coaxial electrohydrodynamic atomization device. To optimize the process, the effects of different control parameters on morphology and size distribution of the resultant microparticles are studied systematically. Four main flow modes are identified as the applied electric field intensity increases. The stable cone-jet configuration is found to be available for fabricating monodisperse microparticles with core-shell structures. The results are compared with those observed in traditional emulsion methods. The drug-loading efficiency is also checked. The present system is advantageous for the enhancement of particle size distribution and drug-loading efficiency in various applications such as drug delivery, biomedicine and image-guided therapy.

  14. Test of Hadronic Interaction Models with the KASCADE Hadron Calorimeter

    NASA Astrophysics Data System (ADS)

    Milke, J.; KASCADE Collaboration

The interpretation of extensive air shower (EAS) measurements often requires the comparison with EAS simulations based on high-energy hadronic interaction models. These interaction models have to extrapolate into kinematical regions and energy ranges beyond the limit of present accelerators. Therefore, it is necessary to test whether these models are able to describe the EAS development in a consistent way. By measuring simultaneously the hadronic, electromagnetic, and muonic parts of an EAS, the KASCADE experiment is well suited to checking the models. For the EAS simulations the program CORSIKA, with several hadronic event generators implemented, is used. Different hadronic observables, e.g., hadron number, energy spectrum, and lateral distribution, are investigated, as well as their correlations with the electromagnetic and muonic shower size. By comparing measurements and simulations, the consistency of the description of the EAS development is checked. First results with the new interaction model NEXUS and version II.5 of the model DPMJET, both recently included in CORSIKA, are presented and compared with QGSJET simulations.

  15. Prototype of a laser guide star wavefront sensor for the Extremely Large Telescope

    NASA Astrophysics Data System (ADS)

    Patti, M.; Lombini, M.; Schreiber, L.; Bregoli, G.; Arcidiacono, C.; Cosentino, G.; Diolaiti, E.; Foppiani, I.

    2018-06-01

    The new class of large telescopes, like the future Extremely Large Telescope (ELT), are designed to work with a laser guide star (LGS) tuned to a resonance of atmospheric sodium atoms. This wavefront sensing technique presents complex issues when applied to big telescopes for many reasons, mainly linked to the finite distance of the LGS, the launching angle, tip-tilt indetermination and focus anisoplanatism. The implementation of a laboratory prototype for the LGS wavefront sensor (WFS) at the beginning of the phase study of MAORY (Multi-conjugate Adaptive Optics Relay) for ELT first light has been indispensable in investigating specific mitigation strategies for the LGS WFS issues. This paper presents the test results of the LGS WFS prototype under different working conditions. The accuracy within which the LGS images are generated on the Shack-Hartmann WFS has been cross-checked with the MAORY simulation code. The experiments show the effect of noise on centroiding precision, the impact of LGS image truncation on wavefront sensing accuracy as well as the temporal evolution of the sodium density profile and LGS image under-sampling.

  16. Optimization of a Light Collection System for use in the Neutron Lifetime Project

    NASA Astrophysics Data System (ADS)

    Taylor, C.; O'Shaughnessy, C.; Mumm, P.; Thompson, A.; Huffman, P.

    2007-10-01

The Ultracold Neutron (UCN) Lifetime Project is an ongoing experiment with the objective of improving the average measurement of the neutron beta-decay lifetime. A more accurate measurement may increase our understanding of the electroweak interaction and improve astrophysical/cosmological theories on Big Bang nucleosynthesis. The current apparatus uses 0.89 nm cold neutrons to produce UCN through inelastic collisions with superfluid 4He in the superthermal process. The lifetime of the UCN is measured by detection of scintillation light from superfluid 4He created by electrons produced in neutron decay. Competing criteria of high detection efficiency outside of the apparatus and minimum heating of the experimental cell have led to the design of an acrylic light collection system. Initial designs were based on previous generations of the apparatus. ANSYS was used to optimize the cooling system for the light guide by checking simulated end conditions based on width of contact area, number of contact points, and location on the guide itself. SolidWorks and AutoCAD were used for design. The current system is in the production process.

  17. [Intracranial pressure monitoring and CSF dynamics in patients with neurological disorders: indications and practical considerations].

    PubMed

    Poca, M; Sahuquillo, J

    2001-01-01

    The study of cerebrospinal fluid (CSF) dynamics is central to the diagnosis of adult chronic hydrocephalus (ACH). At present, many neurology and neurosurgery departments use one or more tests to guide diagnosis of this syndrome and to predict patient response to shunting. In specialised centres, the study of CSF dynamics is combined with continuous intracranial pressure (ICP) monitoring. Determination of several variables of CSF dynamics and definitions of qualitative and quantitative characteristics of ICP can be used to establish whether the hydrocephalus is active, compensated or arrested. CSF dynamics and ICP monitoring can also be used to check the correct functioning of the shunt and can be of use in the clinical management of patients with pseudotumor cerebri. Moreover, ICP monitoring is used to guide the treatment of several acute neurological processes. The aim of this review is to describe the fundamentals of CSF dynamics studies and the bases of continuous ICP monitoring. The advantages and disadvantages of several hydrodynamic tests that can be performed by lumbar puncture, as well as the normal and abnormal characteristics of an ICP recording, are discussed.

  18. Guide for Oxygen Component Qualification Tests

    NASA Technical Reports Server (NTRS)

    Bamford, Larry J.; Rucker, Michelle A.; Dobbin, Douglas

    1996-01-01

Oxygen is a chemically stable element: it is not shock sensitive, will not decompose, and is not flammable. Nevertheless, oxygen use carries a risk that should never be overlooked, because oxygen is a strong oxidizer that vigorously supports combustion. Safety is of primary concern in oxygen service. To promote safety in oxygen systems, the flammability of materials used in them should be analyzed. At the NASA White Sands Test Facility (WSTF), we have performed configurational tests of components specifically engineered for oxygen service. These tests follow a detailed WSTF oxygen hazards analysis. The stated objective of the tests was to provide performance test data for customer use as part of a qualification plan for a particular component in a particular configuration, and under worst-case conditions. In this document - the 'Guide for Oxygen Component Qualification Tests' - we outline recommended test systems, and cleaning, handling, and test procedures that address worst-case conditions. It should be noted that test results apply specifically to: manual valves, remotely operated valves, check valves, relief valves, filters, regulators, flexible hoses, and intensifiers. Component systems are not covered.

  19. Improving the detection of cocoa bean fermentation-related changes using image fusion

    NASA Astrophysics Data System (ADS)

    Ochoa, Daniel; Criollo, Ronald; Liao, Wenzhi; Cevallos-Cevallos, Juan; Castro, Rodrigo; Bayona, Oswaldo

    2017-05-01

    Complex chemical processes occur during cocoa bean fermentation. To select well-fermented beans, experts take a sample of beans, cut them in half and visually check their color. Farmers often mix high- and low-quality beans; as a result, chocolate properties are difficult to control. In this paper, we explore how close-range hyperspectral (HS) data can be used to characterize the fermentation process of two types of cocoa beans (CCN51 and National). Our aim is to find spectral differences that allow bean classification. The main issue is extracting reliable spectral data, because openings resulting from the loss of water during fermentation can cover up to 40% of the bean surface. We exploit HS pan-sharpening techniques to increase the spatial resolution of HS images and filter out uneven surface regions. In particular, we use the guided filter PCA approach, which has proved suitable for using high-resolution RGB data as the guide image. Our preliminary results show that this pre-processing step improves the separability of classes corresponding to each fermentation stage compared to using the average spectrum of the bean surface.

  20. Tools, strategies and qualitative approach in relation to suicidal attempts and ideation in the elderly.

    PubMed

    Cavalcante, Fátima Gonçalves; Minayo, Maria Cecília de Souza; Gutierrez, Denise Machado Duran; de Sousa, Girliani Silva; da Silva, Raimunda Magalhães; Moura, Rosylaine; Meneghel, Stela Nazareth; Grubits, Sonia; Conte, Marta; Cavalcante, Ana Célia Sousa; Figueiredo, Ana Elisa Bastos; Mangas, Raimunda Matilde do Nascimento; Fachola, María Cristina Heuguerot; Izquierdo, Giovane Mendieta

    2015-06-01

    The article analyses the quality and consistency of a comprehensive interview guide, adapted to study attempted suicide and suicidal ideation among the elderly, and describes the method followed in applying this tool. The objective is to show how the use of a semi-structured interview, together with the organization and data-analysis set-up, was tested and perfected by a network of researchers from twelve universities and research centers in Brazil, Uruguay and Colombia. The method involved application and evaluation of the tool and the joint production of an instruction manual on data collection, systematization and analysis. The methodology was followed in 67 interviews with elderly people aged 60 or older and in 34 interviews with health professionals in thirteen Brazilian municipalities and in Montevideo and Bogotá, allowing the consistency of the tool and the applicability of the method to be checked, both during the process and at the end. The enhanced guide and the instructions for reproducing it are presented herein. The results indicate the suitability and credibility of this methodological approach, tested and certified in interdisciplinary and interinstitutional terms.

  1. A synergetic combination of small and large neighborhood schemes in developing an effective procedure for solving the job shop scheduling problem.

    PubMed

    Amirghasemi, Mehrdad; Zamani, Reza

    2014-01-01

    This paper presents an effective procedure for solving the job shop scheduling problem. Synergistically combining small and large neighborhood schemes, the procedure consists of four components, namely (i) a construction method for generating semi-active schedules by a forward-backward mechanism, (ii) a local search that manipulates a small neighborhood structure guided by a tabu list, (iii) a feedback-based mechanism for perturbing the solutions generated, and (iv) a very-large-neighborhood local search guided by a forward-backward shifting bottleneck method. The combination of the shifting bottleneck mechanism and the tabu list serves as a means of manipulating neighborhood structures, and the perturbation mechanism employed diversifies the search. A feedback mechanism, called repeat-check, detects consecutive repeats and triggers a perturbation when the total number of consecutive repeats of two identical makespan values reaches a given threshold. The results of extensive computational experiments on benchmark instances indicate that the combination of these four components is synergetic, in the sense that they collectively make the procedure fast and robust.
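
    The repeat-check idea described above can be sketched as a small stateful monitor over the stream of makespans produced by local search. The threshold, class name, and sample makespan stream below are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of a repeat-check feedback monitor: signal a perturbation once the
# same makespan value is observed a threshold number of times in a row.
# Threshold and makespan values are illustrative, not from the paper.

class RepeatCheck:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.last = None
        self.count = 0

    def observe(self, makespan):
        """Return True when the search looks stuck and should be perturbed."""
        if makespan == self.last:
            self.count += 1
        else:
            self.last, self.count = makespan, 1
        if self.count >= self.threshold:
            self.count = 0          # reset after firing the perturbation
            return True
        return False

rc = RepeatCheck(threshold=3)
stream = [930, 925, 925, 921, 921, 921, 921, 918]
fired = [m for m in stream if rc.observe(m)]
```

    Here the monitor fires once, at the third consecutive 921, and then resets so that the following repeat does not immediately fire again.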

  2. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections

    PubMed Central

    Bailey, Stephanie L.; Bono, Rose S.; Nash, Denis; Kimmel, April D.

    2018-01-01

    Background Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. Methods We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. Results We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). 
Conclusions Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited. PMID:29570737
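
    The comparison step described in this abstract can be sketched as follows: run the same calculation in independently built versions, compare outputs cell by cell, and flag any output whose relative difference exceeds the +/-5% materiality threshold. The cell names and output values below are hypothetical, not the study's model.

```python
# Sketch of comparing parallel model versions: outputs that differ by more
# than a materiality threshold indicate an unintentional error somewhere.
# Cell names and values are invented for illustration.

def material_errors(version_a, version_b, threshold=0.05):
    """Flag outputs whose relative difference exceeds the threshold."""
    flagged = {}
    for key in version_a:
        a, b = version_a[key], version_b[key]
        rel = abs(a - b) / a if a else float("inf")
        if rel > threshold:
            flagged[key] = rel
    return flagged

# Two independently built versions of the same hypothetical projection.
named_cells = {"in_care": 1200.0, "on_treatment": 860.0, "suppressed": 610.0}
row_refs    = {"in_care": 1200.0, "on_treatment": 980.0, "suppressed": 610.0}

errors = material_errors(named_cells, row_refs)
```

    Concordant outputs drop out of the comparison; only the discordant cell ("on_treatment", about a 14% relative difference here) is flagged for manual inspection.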

  3. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    PubMed

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). 
Standard error-checking techniques may not identify all errors in spreadsheet-based models. Comparing parallel model versions can aid in identifying unintentional errors and promoting reliable model projections, particularly when resources are limited.

  4. Improving the trajectory of transpedicular transdiscal lumbar screw fixation with a computer-assisted 3D-printed custom drill guide

    PubMed Central

    Shao, Zhen-Xuan; Wang, Jian-Shun; Lin, Zhong-Ke; Ni, Wen-Fei; Wang, Xiang-Yang

    2017-01-01

    Transpedicular transdiscal screw fixation is an alternative technique used in lumbar spine fixation; however, it requires an accurate screw trajectory. The aim of this study is to design a novel 3D-printed custom drill guide and investigate its accuracy to guide the trajectory of transpedicular transdiscal (TPTD) lumbar screw fixation. Dicom images of thirty lumbar functional segment units (FSU, two segments) of L1–L4 were acquired from the PACS system in our hospital (patients who underwent a CT scan for other abdomen diseases and had normal spine anatomy) and imported into reverse design software for three-dimensional reconstructions. Images were used to print the 3D lumbar models and were imported into CAD software to design an optimal TPTD screw trajectory and a matched custom drill guide. After both the 3D printed FSU models and 3D-printed custom drill guide were prepared, the TPTD screws will be guided with a 3D-printed custom drill guide and introduced into the 3D printed FSU models. No significant statistical difference in screw trajectory angles was observed between the digital model and the 3D-printed model (P > 0.05). Our present study found that, with the help of CAD software, it is feasible to design a TPTD screw custom drill guide that could guide the accurate TPTD screw trajectory on 3D-printed lumbar models. PMID:28717599

  5. 76 FR 46330 - NUREG-1934, Nuclear Power Plant Fire Modeling Application Guide (NPP FIRE MAG); Second Draft...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-02

    ... NUCLEAR REGULATORY COMMISSION [NRC-2009-0568] NUREG-1934, Nuclear Power Plant Fire Modeling Application Guide (NPP FIRE MAG); Second Draft Report for Comment AGENCY: Nuclear Regulatory Commission... 1023259), ``Nuclear Power Plant Fire Modeling Application Guide (NPP FIRE MAG), Second Draft Report for...

  6. Combining Ratio Estimation for Low Density Parity Check (LDPC) Coding

    NASA Technical Reports Server (NTRS)

    Mahmoud, Saad; Hi, Jianjun

    2012-01-01

    The Low Density Parity Check (LDPC) decoding algorithm makes use of a scaled received signal derived from maximizing the log-likelihood ratio of the received signal. The scaling factor (often called the combining ratio) in an AWGN channel is the ratio between the signal amplitude and the noise variance. Accurately estimating this ratio has yielded as much as 0.6 dB of decoding performance gain. This presentation briefly describes three methods for estimating the combining ratio: a pilot-guided estimation method, a blind estimation method, and a simulation-based look-up table. In the pilot-guided estimation method, the maximum-likelihood estimate of the signal amplitude is the mean inner product of the received sequence and a known sequence, the attached synchronization marker (ASM); the noise variance estimate is the difference between the mean of the squared received sequence and the square of the signal amplitude. This method has the advantage of simplicity at the expense of latency, since several frames' worth of ASMs must be accumulated. In the blind estimation method, the maximum-likelihood estimator is the average of the product of the received signal with the hyperbolic tangent of the product of the combining ratio and the received signal. The root of this equation can be determined by an iterative binary search between 0 and 1 after normalizing the received sequence. This method has the benefit of requiring only one frame of data to estimate the combining ratio, which suits faster-changing channels, but it is computationally expensive. The final method uses a look-up table based on prior simulation results to determine signal amplitude and noise variance. In this method the received mean signal strength is controlled to a constant soft-decision value, and the magnitude of the deviation is averaged over a predetermined number of samples. This value is referenced in a look-up table to determine the combining ratio that prior simulation associated with that average deviation magnitude. This method is more complicated than the pilot-guided method because of the gain-control circuitry, but it avoids the real-time computational complexity of the blind estimation method. Each of these methods can provide an accurate estimate of the combining ratio; the final selection depends on other design constraints.
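
    The pilot-guided estimator is simple enough to sketch directly: the amplitude is the mean inner product of the received samples with the known ASM bits, and the noise variance is the mean squared sample minus the squared amplitude. The BPSK mapping, frame length, and noise level below are illustrative assumptions, not the presentation's setup.

```python
# Sketch of pilot-guided combining-ratio estimation from known ASM symbols.
# Frame layout, BPSK mapping, and parameters are illustrative assumptions.
import random

def pilot_guided_estimate(received, asm_bits):
    """Estimate amplitude, noise variance, and combining ratio from
    received soft samples aligned with known +/-1 ASM bits."""
    n = len(asm_bits)
    # ML amplitude: mean inner product with the known sequence.
    amplitude = sum(r * b for r, b in zip(received, asm_bits)) / n
    # Noise variance: mean of squared samples minus squared amplitude.
    variance = sum(r * r for r in received) / n - amplitude ** 2
    return amplitude, variance, amplitude / variance

# Toy check: BPSK ASM at amplitude 1.0 in Gaussian noise (sigma^2 = 0.25).
random.seed(0)
asm = [random.choice((-1.0, 1.0)) for _ in range(4000)]
rx = [b + random.gauss(0.0, 0.5) for b in asm]
amp, var, ratio = pilot_guided_estimate(rx, asm)
```

    With enough ASM samples the estimates converge to the true amplitude (1.0) and variance (0.25), giving a combining ratio near 4, which matches the latency-for-simplicity trade-off described above.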

  7. Quality assurance of weather data for agricultural system model input

    USDA-ARS?s Scientific Manuscript database

    It is well known that crop production and hydrologic variation on watersheds is weather related. Rarely, however, is meteorological data quality checks reported for agricultural systems model research. We present quality assurance procedures for agricultural system model weather data input. Problems...

  8. Improving Quality and Reducing Waste in Allied Health Workplace Education Programs: A Pragmatic Operational Education Framework Approach.

    PubMed

    Golder, Janet; Farlie, Melanie K; Sevenhuysen, Samantha

    2016-01-01

    Efficient utilisation of education resources is required for the delivery of effective learning opportunities for allied health professionals. This study aimed to develop an education framework to support delivery of high-quality education within existing education resources. This study was conducted in a large metropolitan health service. Homogeneous and purposive sampling methods were utilised in Phase 1 (n=43) and Phase 2 (n=14) consultation stages. Participants included 25 allied health professionals, 22 managers, 1 educator, and 3 executives. Field notes taken during 43 semi-structured interviews and 4 focus groups were member-checked, and semantic thematic analysis methods were utilised. Framework design was informed by existing published framework development guides. The framework model contains governance, planning, delivery, and evaluation and research elements and identifies performance indicators, practice examples, and support tools for a range of stakeholders. Themes integrated into framework content include improving quality of education and training provided and delivery efficiency, greater understanding of education role requirements, and workforce support for education-specific knowledge and skill development. This framework supports efficient delivery of allied health workforce education and training to the highest standard, whilst pragmatically considering current allied health education workforce demands.

  9. Rat Models and Identification of Candidate Early Serum Biomarkers of Battlefield Traumatic Brain Injury

    DTIC Science & Technology

    2007-07-31

    brain injury) All surgeries were performed using aseptic technique. Animals were checked for pain /distress immediately prior to anesthesia/surgery... Pain /distress checks were performed at 3, 6, 12, 24, 36, 48, 60, and 72 hours post-injury. Fluid Percussion Injury (FPI) For animals in the...NIH), and Neurobehavioral Scale (NBS). The criteria used to obtain the scores are detailed in Tables 2 and 3. As an additional endpoint, we also

  10. Employing the Intelligence Cycle Process Model Within the Homeland Security Enterprise

    DTIC Science & Technology

    2013-12-01

    the Iraq anti-war movement, a former U.S. Congresswoman, the U.S. Treasury Department and hip hop bands to spread Sharia law in the U.S. A Virginia...challenges remain with threat notification, access to information, and database management of information that may have contributed the 2013 Boston...The FBI said it took a number of investigative steps to check on the request, including looking at his travel history, checking databases for

  11. Modeling and analysis of cell membrane systems with probabilistic model checking

    PubMed Central

    2011-01-01

    Background Recently there has been growing interest in the application of Probabilistic Model Checking (PMC) for the formal specification of biological systems. PMC is able to exhaustively explore all states of a stochastic model and can provide valuable insights into its behavior that are more difficult to obtain using only traditional methods for system analysis such as deterministic and stochastic simulation. In this work we propose a stochastic model for the description and analysis of the sodium-potassium exchange pump. The sodium-potassium pump is a membrane transport system present in all animal cells and capable of moving sodium and potassium ions against their concentration gradients. Results We present a quantitative formal specification of the pump mechanism in the PRISM language, taking into consideration a discrete-chemistry approach and the Law of Mass Action. We also present an analysis of the system using quantitative properties in order to verify the pump's reversibility and to understand the pump's behavior using trend labels for the transition rates of the pump reactions. Conclusions Probabilistic model checking can be used along with other well-established approaches such as simulation and differential equations to better understand pump behavior. Using PMC we can determine whether specific events happen, for example whether the potassium outside the cell runs out in all model traces. We can also gain a more detailed perspective on the pump's behavior, such as determining its reversibility and why its normal operation becomes slow over time. This knowledge can be used to direct experimental research and make it more efficient, leading to faster and more accurate scientific discoveries. PMID:22369714
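
    The discrete-chemistry view in this abstract can be complemented by a tiny stochastic simulation of the kind the conclusions mention alongside PMC. Below is a minimal Gillespie-style sketch of a single lumped pump reaction (3 Na+ out, 2 K+ in per cycle); the species counts, rate constant, and reaction structure are illustrative assumptions, not the paper's PRISM model.

```python
# Minimal Gillespie (stochastic simulation) sketch of a toy Na+/K+ pump
# cycle under the Law of Mass Action. Counts and rate are illustrative.
import random

def simulate_pump(na_in=300, k_out=200, rate=0.001, t_end=50.0, seed=1):
    """Each firing moves 3 Na+ out of the cell and 2 K+ in (one cycle)."""
    random.seed(seed)
    t = 0.0
    while t < t_end:
        # Mass-action propensity; the cycle needs 3 Na+ inside, 2 K+ outside.
        a = rate * na_in * k_out if na_in >= 3 and k_out >= 2 else 0.0
        if a == 0.0:
            break                      # pump can no longer run
        t += random.expovariate(a)     # waiting time to the next event
        na_in -= 3
        k_out -= 2
    return na_in, k_out

na_in, k_out = simulate_pump()
```

    A single run like this gives one trace; PMC, by contrast, reasons over all traces at once, which is what lets it answer questions such as whether the external potassium is exhausted in every trace.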

  12. Software Model Checking of ARINC-653 Flight Code with MCP

    NASA Technical Reports Server (NTRS)

    Thompson, Sarah J.; Brat, Guillaume; Venet, Arnaud

    2010-01-01

    The ARINC-653 standard defines a common interface for Integrated Modular Avionics (IMA) code. In particular, ARINC-653 Part 1 specifies a process- and partition-management API that is analogous to POSIX threads, but with certain extensions and restrictions intended to support the implementation of high reliability flight code. MCP is a software model checker, developed at NASA Ames, that provides capabilities for model checking C and C++ source code. In this paper, we present recent work aimed at implementing extensions to MCP that support ARINC-653, and we discuss the challenges and opportunities that consequentially arise. Providing support for ARINC-653's time and space partitioning is nontrivial, though there are implicit benefits for partial order reduction possible as a consequence of the API's strict interprocess communication policy.

  13. Swarm Verification

    NASA Technical Reports Server (NTRS)

    Holzmann, Gerard J.; Joshi, Rajeev; Groce, Alex

    2008-01-01

    Reportedly, supercomputer designer Seymour Cray once said that he would sooner use two strong oxen to plow a field than a thousand chickens. Although this is undoubtedly wise when it comes to plowing a field, it is not so clear for other types of tasks. Model checking problems are of the proverbial "needle in a haystack" type. Such problems can often be parallelized easily. Alas, none of the usual divide-and-conquer methods can be used to parallelize the working of a model checker. Given that it has become easier than ever to gain access to large numbers of computers to perform even routine tasks, it is becoming more and more attractive to find alternate ways to use these resources to speed up model checking tasks. This paper describes one such method, called swarm verification.
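
    The swarm idea can be sketched as many cheap, diversified searches whose coverage is combined, instead of one exhaustive run. The toy transition system, search budget, and seeds below are illustrative assumptions, not the paper's actual algorithm.

```python
# Sketch of swarm-style verification: several randomized depth-first
# searches of a state space, each with its own seed, whose visited sets
# are pooled. The toy state space and parameters are illustrative.
import random

def successors(state):
    """Toy state space: counters (a, b) with bounded increments."""
    a, b = state
    return [s for s in ((a + 1, b), (a, b + 1), (a + 2, b))
            if s[0] <= 30 and s[1] <= 30]

def random_dfs(seed, steps=2000):
    """One swarm member: depth-first walk with its own random choices."""
    rng = random.Random(seed)
    visited = {(0, 0)}
    stack = [(0, 0)]
    for _ in range(steps):
        if not stack:
            break
        succ = [s for s in successors(stack[-1]) if s not in visited]
        if succ:
            nxt = rng.choice(succ)   # diversification point: each seed
            visited.add(nxt)         # explores a different path order
            stack.append(nxt)
        else:
            stack.pop()
    return visited

# The swarm's combined coverage is the union over independent members.
swarm = [random_dfs(seed) for seed in range(8)]
combined = set().union(*swarm)
```

    Because each member is independent, the members map naturally onto separate machines, which is the point of the chickens-over-oxen strategy.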

  14. 75 FR 53857 - Airworthiness Directives; Eurocopter France Model SA330J Helicopters

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-02

    ... Airworthiness Directives; Eurocopter France Model SA330J Helicopters AGENCY: Federal Aviation Administration... known U.S. owners and operators of Eurocopter France (Eurocopter) Model SA330J helicopters by individual...'' rather than checking for ``play.'' This helicopter model is manufactured in France and is type...

  15. Development and testing of the VITAMIN-B7/BUGLE-B7 coupled neutron-gamma multigroup cross-section libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Risner, J.M.; Wiarda, D.; Miller, T.M.

    2011-07-01

    The U.S. Nuclear Regulatory Commission's Regulatory Guide 1.190 states that calculational methods used to estimate reactor pressure vessel (RPV) fluence should use the latest version of the evaluated nuclear data file (ENDF). The VITAMIN-B6 fine-group library and BUGLE-96 broad-group library, which are widely used for RPV fluence calculations, were generated using ENDF/B-VI.3 data, which was the most current data when Regulatory Guide 1.190 was issued. We have developed new fine-group (VITAMIN-B7) and broad-group (BUGLE-B7) libraries based on ENDF/B-VII.0. These new libraries, which were processed using the AMPX code system, maintain the same group structures as the VITAMIN-B6 and BUGLE-96 libraries. Verification and validation of the new libraries were accomplished using diagnostic checks in AMPX, 'unit tests' for each element in VITAMIN-B7, and a diverse set of benchmark experiments including critical evaluations for fast and thermal systems, a set of experimental benchmarks that are used for SCALE regression tests, and three RPV fluence benchmarks. The benchmark evaluation results demonstrate that VITAMIN-B7 and BUGLE-B7 are appropriate for use in RPV fluence calculations and meet the calculational uncertainty criterion in Regulatory Guide 1.190. (authors)

  16. Development and Testing of the VITAMIN-B7/BUGLE-B7 Coupled Neutron-Gamma Multigroup Cross-Section Libraries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Risner, Joel M; Wiarda, Dorothea; Miller, Thomas Martin

    2011-01-01

    The U.S. Nuclear Regulatory Commission's Regulatory Guide 1.190 states that calculational methods used to estimate reactor pressure vessel (RPV) fluence should use the latest version of the Evaluated Nuclear Data File (ENDF). The VITAMIN-B6 fine-group library and BUGLE-96 broad-group library, which are widely used for RPV fluence calculations, were generated using ENDF/B-VI data, which was the most current data when Regulatory Guide 1.190 was issued. We have developed new fine-group (VITAMIN-B7) and broad-group (BUGLE-B7) libraries based on ENDF/B-VII. These new libraries, which were processed using the AMPX code system, maintain the same group structures as the VITAMIN-B6 and BUGLE-96 libraries. Verification and validation of the new libraries was accomplished using diagnostic checks in AMPX, unit tests for each element in VITAMIN-B7, and a diverse set of benchmark experiments including critical evaluations for fast and thermal systems, a set of experimental benchmarks that are used for SCALE regression tests, and three RPV fluence benchmarks. The benchmark evaluation results demonstrate that VITAMIN-B7 and BUGLE-B7 are appropriate for use in LWR shielding applications, and meet the calculational uncertainty criterion in Regulatory Guide 1.190.

  17. A Guide to Community Shared Solar: Utility, Private, and Non-Profit Project Development (Book)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coughlin, J.; Grove, J.; Irvine, L.

    2012-05-01

    This guide is organized around three sponsorship models: utility-sponsored projects, projects sponsored by special purpose entities - businesses formed for the purpose of producing community solar power, and non-profit sponsored projects. The guide addresses issues common to all project models, as well as issues unique to each model.

  18. State background checks for gun purchase and firearm deaths: an exploratory study.

    PubMed

    Sen, Bisakha; Panjamapirom, Anantachai

    2012-10-01

    This study examines the relationship between the types of background-information check required by states prior to firearm purchases, and firearm homicide and suicide deaths. Negative binomial models are used to analyze state-level data for homicides and suicides in the U.S. from 1996 to 2005. Data on types of background information are retrieved from the Surveys of State Procedures Related to Firearm Sales, and the violent death data are from the WISQARS. Several other state level factors were controlled for. More background checks are associated with fewer homicide (IRR:0.93, 95% CI:0.91-0.96) and suicide (IRR:0.98, 95% CI:0.96-1.00) deaths. Firearm homicide deaths are lower when states have checks for restraining orders (IRR:0.87, 95% CI:0.79-0.95) and fugitive status (IRR:0.79, 95% CI:0.72-0.88). Firearm suicide deaths are lower when states have background checks for mental illness (IRR:0.96, 95% CI:0.92-0.99), fugitive status (IRR:0.95, 95% CI:0.90-0.99) and misdemeanors (IRR:0.95, 95% CI:0.92-1.00). It does not appear that reductions in firearm deaths are offset by increases in non-firearm violent deaths. More extensive background checks prior to gun purchase are mostly associated with reductions in firearm homicide and suicide deaths. Several study limitations are acknowledged, and further research is called for to ascertain causality. Copyright © 2012 Elsevier Inc. All rights reserved.
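
    For readers unfamiliar with the incidence rate ratios (IRRs) quoted above: in a negative binomial model with a log link, the IRR is exp(beta) and the 95% CI is exp(beta +/- 1.96*SE). The coefficient and standard error below are hypothetical numbers chosen only to illustrate the conversion, not the study's estimates.

```python
# Converting a log-link regression coefficient to an incidence rate ratio
# (IRR) with a 95% confidence interval. Inputs are hypothetical.
import math

def irr_with_ci(beta, se, z=1.96):
    """Return (IRR, CI lower bound, CI upper bound) for a log-link model."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for one additional type of background check.
irr, lo, hi = irr_with_ci(beta=-0.0726, se=0.014)
```

    An IRR below 1 with a CI that excludes 1 corresponds to the kind of protective association the abstract reports, e.g. IRR 0.93 for homicide deaths.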

  19. Towards a Semantically-Enabled Control Strategy for Building Simulations: Integration of Semantic Technologies and Model Predictive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgoshaei, Parastoo; Austin, Mark A.; Pertzborn, Amanda J.

    State-of-the-art building simulation control methods incorporate physical constraints into their mathematical models, but omit implicit constraints associated with policies of operation and dependency relationships among the rules representing those constraints. To overcome these shortcomings, there is a recent trend toward enabling control strategies with inference-based rule-checking capabilities. One solution is to exploit semantic web technologies in building simulation control. Such approaches provide the tools for semantic modeling of domains and the ability to deduce new information from the models through the use of Description Logic (DL). In a step toward enabling this capability, this paper presents a cross-disciplinary, data-driven control strategy for building energy management simulation that integrates semantic modeling and formal rule-checking mechanisms into a Model Predictive Control (MPC) formulation. The results show that MPC provides superior levels of performance when initial conditions and inputs are derived from inference-based rules.
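
    One way to picture rule checking inside an MPC loop is to screen candidate control inputs against a declarative rule before the usual cost minimization. The toy room model, occupancy rule, and horizon below are illustrative assumptions, not the paper's semantic-web formulation.

```python
# Sketch of rule-constrained model predictive control: candidate inputs
# are filtered by a policy-style rule, then the admissible input with the
# lowest predicted tracking cost is chosen. Model and rule are illustrative.

def plant(temp, heat):
    """Toy room model: temperature relaxes toward 15 C, heater adds heat."""
    return temp + 0.1 * (15.0 - temp) + 0.5 * heat

def rule_ok(heat, occupied):
    """Policy-style constraint: full heating power only when occupied."""
    return heat <= (3.0 if occupied else 1.0)

def mpc_step(temp, setpoint, occupied, horizon=3):
    """Pick the admissible heat level minimizing predicted tracking cost."""
    best, best_cost = 0.0, float("inf")
    for heat in (0.0, 1.0, 2.0, 3.0):
        if not rule_ok(heat, occupied):
            continue                      # inference layer rejects input
        t, cost = temp, 0.0
        for _ in range(horizon):          # roll the model forward
            t = plant(t, heat)
            cost += (t - setpoint) ** 2
        if cost < best_cost:
            best, best_cost = heat, cost
    return best

u_occupied = mpc_step(temp=16.0, setpoint=21.0, occupied=True)
u_empty = mpc_step(temp=16.0, setpoint=21.0, occupied=False)
```

    The same plant and cost produce different control actions depending on the rule context, which is the kind of coupling between inference and optimization the paper argues for.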

  20. 'Constraint consistency' at all orders in cosmological perturbation theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in

    2015-08-01

    We study the equivalence of two approaches to cosmological perturbation theory, the order-by-order Einstein's equations and the reduced action, at all orders for different models of inflation. We point out a crucial consistency check, which we refer to as the 'constraint consistency' condition, that needs to be satisfied in order for the two approaches to lead to an identical single-variable equation of motion. The method we propose here is a quick and efficient way to check this consistency for any model, including modified gravity models. Our analysis points out a feature that is crucial for inflationary model building: all 'constraint'-inconsistent models have higher-order Ostrogradsky instabilities, but the reverse is not true. In other words, a model whose Lapse function and Shift vector satisfy the constraints may still have Ostrogradsky instabilities. We also obtain the single-variable equation for a non-canonical scalar field in the limit of power-law inflation for the second-order perturbed variables.

  1. Jobs and Economic Development Impact (JEDI) Model Geothermal User Reference Guide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, C.; Augustine, C.; Goldberg, M.

    2012-09-01

    The Geothermal Jobs and Economic Development Impact (JEDI) model, developed through the National Renewable Energy Laboratory (NREL), is one of a suite of user-friendly, Excel-based JEDI tools covering a range of conventional and renewable energy technologies; it estimates the local economic impacts of constructing and operating hydrothermal and Enhanced Geothermal System (EGS) power generation projects. The JEDI Model Geothermal User Reference Guide was developed to assist users in using and understanding the model. This guide provides information on the model's underlying methodology, as well as the parameters and references used to develop the cost data utilized in the model. This guide also provides basic instruction on model add-in features, operation of the model, and a discussion of how the results should be interpreted.

  2. Model Checking JAVA Programs Using Java Pathfinder

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Pressburger, Thomas

    2000-01-01

    This paper describes a translator called JAVA PATHFINDER from JAVA to PROMELA, the "programming language" of the SPIN model checker. The purpose is to establish a framework for verification and debugging of JAVA programs based on model checking. This work should be seen as part of a broader attempt to make formal methods applicable "in the loop" of programming within NASA areas such as space, aviation, and robotics. Our main goal is to create automated formal methods such that programmers themselves can apply these in their daily work (in the loop) without the need for specialists to manually reformulate a program into a different notation in order to analyze it. This work is a continuation of an effort to formally verify, using SPIN, a multi-threaded operating system programmed in Lisp for the Deep-Space 1 spacecraft, and of previous work in applying existing model checkers and theorem provers to real applications.

  3. Rewriting Modulo SMT

    NASA Technical Reports Server (NTRS)

    Rocha, Camilo; Meseguer, Jose; Munoz, Cesar A.

    2013-01-01

    Combining symbolic techniques such as: (i) SMT solving, (ii) rewriting modulo theories, and (iii) model checking can enable the analysis of infinite-state systems outside the scope of each such technique. This paper proposes rewriting modulo SMT as a new technique combining the powers of (i)-(iii) and ideally suited to model and analyze infinite-state open systems; that is, systems that interact with a non-deterministic environment. Such systems exhibit both internal non-determinism due to the system, and external non-determinism due to the environment. They are not amenable to finite-state model checking analysis because they typically are infinite-state. By being reducible to standard rewriting using reflective techniques, rewriting modulo SMT can both naturally model and analyze open systems without requiring any changes to rewriting-based reachability analysis techniques for closed systems. This is illustrated by the analysis of a real-time system beyond the scope of timed automata methods.

  4. Model Checking Artificial Intelligence Based Planners: Even the Best Laid Plans Must Be Verified

    NASA Technical Reports Server (NTRS)

    Smith, Margaret H.; Holzmann, Gerard J.; Cucullu, Gordon C., III; Smith, Benjamin D.

    2005-01-01

    Automated planning systems (APS) are gaining acceptance for use on NASA missions, as evidenced by APS flown on missions such as Orbiter and Deep Space 1, both of which were commanded by onboard planning systems. The planning system takes high-level goals and expands them onboard into a detailed sequence of actions that the spacecraft executes. The system must be verified to ensure that the automatically generated plans achieve the goals as expected and do not generate actions that would harm the spacecraft or mission. These systems are typically tested using empirical methods. Formal methods, such as model checking, offer exhaustive or measurable test coverage, which leads to much greater confidence in correctness. This paper describes a formal method based on the SPIN model checker. This method guarantees that possible plans meet certain desirable properties. We express the input model in Promela, the language of SPIN, and express the properties of desirable plans formally.

  5. Development of flank wear model of cutting tool by using adaptive feedback linear control system on machining AISI D2 steel and AISI 4340 steel

    NASA Astrophysics Data System (ADS)

    Orra, Kashfull; Choudhury, Sounak K.

    2016-12-01

    The purpose of this paper is to build an adaptive feedback linear control system that checks the variation of the cutting force signal in order to improve tool life. The paper discusses the use of a transfer function approach to improve the mathematical modelling and to adaptively control the process dynamics of the turning operation. The experimental results are in agreement with the simulation model, and the error obtained is less than 3%. The state space model used in this paper checks the adequacy of the control system through the controllability and observability test matrices, and the system can be transferred from one state to another by appropriate input control in finite time. The proposed system can be applied to other machining processes under a varying range of cutting conditions to improve the efficiency and observability of the system.
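    The controllability and observability tests mentioned above are the standard Kalman rank tests; a minimal sketch (with illustrative matrices, not the identified turning-process model) is:

```python
import numpy as np

# Kalman rank tests: a linear system x' = A x + B u, y = C x is
# controllable iff [B, AB, ..., A^(n-1)B] has full rank, and
# observable iff [C; CA; ...; CA^(n-1)] has full rank.

def ctrb(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

print(np.linalg.matrix_rank(ctrb(A, B)))  # 2 -> controllable
print(np.linalg.matrix_rank(obsv(A, C)))  # 2 -> observable
```

Full rank of the controllability matrix is exactly the condition the abstract invokes for transferring the system from one state to another by appropriate input in finite time.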

  6. Models for Change: Harassment and Discrimination Prevention Education for Colleges and Universities. A Resource Guide and Video=Formation pour la prevention du harcelement et de la discrimination a l'intention des decisionnaires des colleges et des universites. Guide de ressources et video.

    ERIC Educational Resources Information Center

    Ruemper, Wendy, Ed.; And Others

    Intended as a reference for preventing harassment and discrimination in Ontario colleges and universities, this resource guide describes a project to develop models of alternative instructional delivery and presents the models. Part 1 provides an introduction to the guide, reviews the goals of the project, and describes a related training video…

  7. Aviation Safety: Modeling and Analyzing Complex Interactions between Humans and Automated Systems

    NASA Technical Reports Server (NTRS)

    Rungta, Neha; Brat, Guillaume; Clancey, William J.; Linde, Charlotte; Raimondi, Franco; Seah, Chin; Shafto, Michael

    2013-01-01

    The on-going transformation from the current US Air Traffic System (ATS) to the Next Generation Air Traffic System (NextGen) will force the introduction of new automated systems and most likely will cause automation to migrate from ground to air. This will yield new function allocations between humans and automation and therefore change the roles and responsibilities in the ATS. Yet, safety in NextGen is required to be at least as good as in the current system. We therefore need techniques to evaluate the safety of the interactions between humans and automation. We think that current human factors studies and simulation-based techniques will fall short in the face of ATS complexity, and that we need to add more automated techniques to simulations, such as model checking, which offers exhaustive coverage of the non-deterministic behaviors in nominal and off-nominal scenarios. In this work, we present a verification approach based both on simulations and on model checking for evaluating the roles and responsibilities of humans and automation. Models are created using Brahms (a multi-agent framework), and we show that the traditional Brahms simulations can be integrated with automated exploration techniques based on model checking, thus offering a complete exploration of the behavioral space of the scenario. Our formal analysis supports the notion of beliefs and probabilities to reason about human behavior. We demonstrate the technique with the Ueberlingen accident, since it exemplifies authority problems when receiving conflicting advice from human and automated systems.

  8. Modelling and Analysis of the Excavation Phase by the Theory of Blocks Method of Tunnel 4 Kherrata Gorge, Algeria

    NASA Astrophysics Data System (ADS)

    Boukarm, Riadh; Houam, Abdelkader; Fredj, Mohammed; Boucif, Rima

    2017-12-01

    The aim of our work is to check stability during tunnel excavation work in the rock mass of Kherrata, connecting the cities of Bejaia and Setif. The characterization methods, through the Q system (Barton method) and RMR (Bieniawski classification), allowed us to conclude that the quality of the rock mass is average in limestone and poor in fractured limestone. Modelling of the excavation phase using the block theory method (UNWEDGE software), with parameters drawn from the recommendations of the classifications, then allowed us to check stability and to conclude that the use of geomechanical classification together with block theory can be considered reliable in preliminary design.

  9. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves performance similar to or better than that of a previous learning-based implementation.

  10. Reliability of implant surgical guides based on soft-tissue models.

    PubMed

    Maney, Pooja; Simmons, David E; Palaiologou, Archontia; Kee, Edwin

    2012-12-01

    The purpose of this study was to determine the accuracy of implant surgical guides fabricated on diagnostic casts. Guides were fabricated with radiopaque rods representing implant positions. Cone beam computerized tomograms were taken with guides in place. Accuracy was evaluated using software to simulate implant placement. Twenty-two sites (47%) were considered accurate (13 of 24 maxillary and 9 of 23 mandibular sites). Soft-tissue models do not always provide sufficient accuracy for fabricating implant surgical guides.

  11. Portable Wireless LAN Device and Two-Way Radio Threat Assessment for Aircraft VHF Communication Radio Band

    NASA Technical Reports Server (NTRS)

    Nguyen, Truong X.; Koppen, Sandra V.; Ely, Jay J.; Williams, Reuben A.; Smith, Laura J.; Salud, Maria Theresa P.

    2004-01-01

    This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of the formal methods analysis technique model checking that was used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.

  12. Ecological Modeling Guide for Ecosystem Restoration and Management

    DTIC Science & Technology

    2012-08-01

    may result from proposed restoration and management actions. This report provides information to guide environmental planners in the selection, development, evaluation, and documentation of ecological models.

  13. Cross-sectional review of the response and treatment uptake from the NHS Health Checks programme in Stoke on Trent.

    PubMed

    Cochrane, Thomas; Gidlow, Christopher J; Kumar, Jagdish; Mawby, Yvonne; Iqbal, Zafar; Chambers, Ruth M

    2013-03-01

    As part of national policy to manage the increasing burden of chronic diseases, the Department of Health in England has launched the NHS Health Checks programme, which aims to reduce the burden of the major vascular diseases on the health service. A cross-sectional review of response, attendance and treatment uptake over the first year of the programme in Stoke on Trent was carried out. Patients aged between 32 and 74 years and estimated to be at ≥20% risk of developing cardiovascular disease were identified from electronic medical records. Multi-level regression modelling was used to evaluate the influence of individual- and practice-level factors on health check outcomes. Overall 63.3% of patients responded, 43.7% attended and 29.8% took up a treatment following their health check invitation. The response was higher for older age and more affluent areas; attendance and treatment uptake were higher for males and older age. Variance between practices was significant (P < 0.001) for response (13.4%), attendance (12.7%) and uptake (23%). The attendance rate of 43.7% following invitation to a health check was considerably lower than the benchmark of 75%. The lack of public interest and the prevalence of significant comorbidity are challenges to this national policy innovation.

  14. Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.

    1992-01-01

    The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.
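    The flavor of such a probabilistic, matrix-based evaluation can be sketched as follows (the matrix layout and numbers are illustrative, not the paper's model, and the checks are assumed independent):

```python
# Entry D[i][j] is the probability that on-line check j detects a
# fault in component i.  A fault in component i escapes detection
# only if every check misses it, so per-component coverage is
# 1 - prod_j (1 - D[i][j]) under the independence assumption.

def detection_coverage(D):
    cover = []
    for row in D:
        miss = 1.0
        for p in row:
            miss *= (1.0 - p)     # probability that this check misses too
        cover.append(1.0 - miss)  # probability >= 1 check detects the fault
    return cover

# two components monitored by three on-line checks
D = [[0.9, 0.0, 0.5],
     [0.0, 0.8, 0.5]]
print(detection_coverage(D))  # ~ [0.95, 0.9]
```

Replacing worst-case (0/1) entries with such probabilities is what lets the evaluation reflect actual rather than pessimistic fault-tolerance capability.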

  15. Daily quality assurance phantom for ultrasound image guided radiation therapy

    PubMed Central

    Drever, Laura

    2007-01-01

    A simple phantom was designed, constructed, tested, and clinically implemented for daily quality assurance (QA) of an ultrasound‐image‐guided radiation therapy (US‐IGRT) system, the Restitu Ultrasound system (Resonant Medical, Montreal, QC). The phantom consists of a high signal echogenic background gel surrounding a low signal hypoechoic egg‐shaped target. Daily QA checks involve ultrasound imaging of the phantom and segmenting of the embedded target using the automated tools available on the US‐IGRT system. This process serves to confirm system hardware and software functions and, in particular, accurate determination of the target position. Experiments were conducted to test the stability of the phantom at room temperature, its tissue‐mimicking properties, the reproducibility of target position measurements, and the usefulness of the phantom as a daily QA device. The phantom proved stable at room temperature, exhibited no evidence of bacterial or fungal invasion in 9 months, and showed limited desiccation (resulting in a monthly reduction in ultrasound‐measured volume of approximately 0.2 cm3). Furthermore, the phantom was shown to be nearly tissue‐mimicking, with speed of sound in the phantom estimated to be 0.8% higher than that assumed by the scanner calibration. The phantom performs well in a clinical setting, owing to its light weight and ease of operation. It provides reproducible measures of target position even with multiple users. At our center, the phantom is being used for daily QA of the US‐IGRT system with clinically acceptable tolerances of ±1 cm3 on target volume and ±2 mm on target position. For routine daily QA, this phantom is a good alternative to the manufacturer‐supplied calibration phantom, and we recommend that the larger manufacturer‐supplied phantom be reserved for less frequent, more detailed QA checks and system calibration. PACS numbers: 87.66.Xa, 87.63.Df

  16. Extension of specification language for soundness and completeness of service workflow

    NASA Astrophysics Data System (ADS)

    Viriyasitavat, Wattana; Xu, Li Da; Bi, Zhuming; Sapsomboon, Assadaporn

    2018-05-01

    A service workflow is an aggregation of distributed services that fulfills specific functionalities. With the ever-increasing number of available services, methodologies for selecting services against given requirements have become a main research subject in multiple disciplines. A few researchers have contributed formal specification languages and methods for model checking; however, existing methods have difficulty tackling the complexity of workflow compositions. In this paper, we propose to formalize the specification language to reduce the complexity of workflow composition. To this end, we extend a specification language with formal logic, so that effective theorems can be derived for the verification of syntax, semantics, and inference rules in workflow composition. The logic-based approach automates compliance checking effectively. The Service Workflow Specification (SWSpec) has been extended and formulated, and the soundness, completeness, and consistency of SWSpec applications have been verified; note that a logic-based SWSpec is mandatory for the development of model checking. The application of the proposed SWSpec has been demonstrated by examples addressing soundness, completeness, and consistency.

  17. From Care to Cure: Demonstrating a Model of Clinical Patient Navigation for Hepatitis C Care and Treatment in High-Need Patients.

    PubMed

    Ford, Mary M; Johnson, Nirah; Desai, Payal; Rude, Eric; Laraque, Fabienne

    2017-03-01

    The NYC Department of Health implemented a patient navigation program, Check Hep C, to address patient and provider barriers to HCV care and potentially lifesaving treatment. Services were delivered at two clinical care sites and two sites that linked patients to off-site care. Working with a multidisciplinary care team, patient navigators provided risk assessment, health education, treatment readiness and medication adherence counseling, and medication coordination. Between March 2014 and January 2015, 388 participants enrolled in Check Hep C, 129 (33%) initiated treatment, and 119 (91% of initiators) had sustained virologic response (SVR). Participants receiving on-site clinical care had higher odds of initiating treatment than those linked to off-site care. Check Hep C successfully supported high-need participants through HCV care and treatment, and SVR rates demonstrate the real-world ability of achieving high cure rates using patient navigation care models.

  18. NASTRAN data generation and management using interactive graphics

    NASA Technical Reports Server (NTRS)

    Smootkatow, M.; Cooper, B. M.

    1972-01-01

    A method of using an interactive graphics device to generate a large portion of the input bulk data, with visual checks of the structure and the card images, is described. The generation starts from GRID and PBAR cards. The visual checks result from a three-dimensional display of the model in any rotated position. The steps are detailed so that the time savings and cost effectiveness of this method may be judged and its potential as a useful tool for the structural analyst established.

  19. Development of Subischial Prosthetic Sockets with Vacuum-Assisted Suspension for Highly Active Persons with Transfemoral Amputations

    DTIC Science & Technology

    2016-12-01

    of the frame from the combined image files and ensure total contact between the frame geometry, ultimately modeled independently as a solid, and...fitting with a rigid PETG check socket to ensure correct volumes and total contact at the distal end has been achieved, a second check socket can be...from dynamically conforming to changes in residual limb shape and volume during gait (Sanders, 2009). The ensuing separation (i.e. loss of contact

  20. Invention and Innovation: A Standards-Based Middle School Model Course Guide. Advancing Technological Literacy: ITEA Professional Series

    ERIC Educational Resources Information Center

    International Technology Education Association (ITEA), 2005

    2005-01-01

    This guide presents a model for a standards-based contemporary technology education course for the middle school. This model course guide features an exploratory curriculum thrust for a cornerstone middle level course. It provides teachers with an overview of the concept, suggestions for planning the course, and ideas for developing…

  1. Extensions to the visual predictive check to facilitate model performance evaluation.

    PubMed

    Post, Teun M; Freijer, Jan I; Ploeger, Bart A; Danhof, Meindert

    2008-04-01

    The Visual Predictive Check (VPC) is a valuable and supportive instrument for evaluating model performance. However, in its most commonly applied form, the method largely depends on a subjective comparison of the distribution of the simulated data with the observed data, without explicitly quantifying and relating the information in both. Recent adaptations to the VPC take this drawback into consideration by presenting the observed and predicted data as percentiles. In addition, some of these adaptations represent the uncertainty in the predictions visually. However, it is not assessed whether the expected random distribution of the observations around the predicted median trend is realised in relation to the number of observations. Moreover, the influence of, and the information residing in, missing data at each time point is not taken into consideration. Therefore, in this investigation the VPC is extended with two methods to support a less subjective and thereby more adequate evaluation of model performance: (i) the Quantified Visual Predictive Check (QVPC) and (ii) the Bootstrap Visual Predictive Check (BVPC). The QVPC presents the distribution of the observations above and below the predicted median at each time point as percentages, regardless of the density of the data, while also visualising the percentage of unavailable data. The BVPC weighs the predicted median against the 5th, 50th and 95th percentiles resulting from a bootstrap of the observed data median at each time point, while accounting for the number and the theoretical position of unavailable data. The proposed extensions to the VPC are illustrated by a pharmacokinetic simulation example and applied to a pharmacodynamic disease progression example.
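    The bootstrap step behind the BVPC can be sketched as follows (an assumed reading of the method, not the authors' code, and the data are invented): at one time point, resample the observed values with replacement, take the median of each resample, and report the 5th/50th/95th percentiles of those medians as the band against which the model-predicted median is weighed.

```python
import random
import statistics

def bootstrap_median_percentiles(obs, n_boot=2000, seed=1):
    # resample the observations, collect the median of each resample,
    # and read off the 5th/50th/95th percentiles of those medians
    rng = random.Random(seed)
    medians = sorted(
        statistics.median(rng.choices(obs, k=len(obs)))
        for _ in range(n_boot)
    )
    pick = lambda q: medians[min(int(q * n_boot), n_boot - 1)]
    return pick(0.05), pick(0.50), pick(0.95)

observed = [4.1, 4.8, 5.0, 5.3, 5.9, 6.2, 6.8]   # one time point
p5, p50, p95 = bootstrap_median_percentiles(observed)
predicted_median = 5.1   # hypothetical model prediction at this time point
print(p5 <= predicted_median <= p95)  # does the prediction fall in the band?
```

The QVPC summary is simpler still: the percentage of `observed` above and below `predicted_median`, reported alongside the percentage of missing observations.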

  2. A Categorization of Dynamic Analyzers

    NASA Technical Reports Server (NTRS)

    Lujan, Michelle R.

    1997-01-01

    Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on its syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) metrics; 2) models; and 3) monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model, and monitor programs provides a framework for understanding a program's dynamic behavior from different perspectives through analysis of the input/output data.
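    A minimal sketch folding the three categories into one dynamic analyzer (illustrative only, not drawn from the paper): a decorator that counts calls (metric), records run-time argument/return snapshots (model), and checks a user-supplied criterion on each call (monitor).

```python
import functools

def analyze(criterion):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args):
            result = fn(*args)
            inner.calls += 1                     # metric: measurement
            inner.trace.append((args, result))   # model: state snapshots
            if not criterion(args, result):      # monitor: check criterion
                inner.violations.append((args, result))
            return result
        inner.calls, inner.trace, inner.violations = 0, [], []
        return inner
    return wrap

@analyze(lambda args, result: result >= 0)   # criterion: non-negative result
def diff(a, b):
    return a - b

diff(5, 3)
diff(2, 9)
print(diff.calls, len(diff.violations))  # 2 1
```

Note that none of this information is available statically: the violation exists only for the particular input `(2, 9)`, which is exactly the dependence on input and execution described above.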

  3. Guide to APA-Based Models

    NASA Technical Reports Server (NTRS)

    Robins, Robert E.; Delisi, Donald P.

    2008-01-01

    In Robins and Delisi (2008), a linear decay model, a new IGE model by Sarpkaya (2006), and a series of APA-Based models were scored using data from three airports. This report is a guide to the APA-based models.

  4. Care guides: an examination of occupational conflict and role relationships in primary care.

    PubMed

    Wholey, Douglas R; White, Katie M; Adair, Richard; Christianson, Jon B; Lee, Suhna; Elumba, Deborah

    2013-01-01

    Improving the efficiency and effectiveness of primary care treatment of patients with chronic illness is an important goal in reforming the U.S. health care system. Reducing occupational conflicts and creating interdependent primary care teams is crucial for the effective functioning of new models being developed to reorganize chronic care. Occupational conflict, role interdependence, and resistance to change in a proof-of-concept pilot test of one such model that uses a new kind of employee in the primary care office, a "care guide," were analyzed. Care guides are lay individuals who help chronic disease patients and their providers achieve standard health goals. The aim of this study was to examine the development of occupational boundaries, interdependence of care guides and primary care team members, and acceptance by clinic employees of this new kind of health worker. A mixed methods, pilot study was conducted using qualitative analysis; clinic, provider, and patient surveys; administrative data; and multivariate analysis. Qualitative analysis examined the emergence of the care guide role. Administrative data and surveys were used to examine patterns of interdependence between care guides, physicians, team members, and clinic staff; obtain physician evaluations of the care guide role; and evaluate the effect of care guides on patient perceptions of care coordination and follow-up. Evaluation of implementation of the care guide model showed that (a) the care guide scope of practice was clearly defined; (b) interdependent relationships between care guides and providers were formed; (c) relational triads consisting of patient, care guide, and physician were created; (d) patients and providers were supported in managing chronic disease; and (e) resistance to this model among traditional employees was minimized. 
This study demonstrated the feasibility of implementing a new care model for chronic disease management in the primary care setting and identified factors associated with a positive organizational experience.

  5. Are mHealth Interventions to Improve Child Restraint System Installation of Value? A Mixed Methods Study of Parents

    PubMed Central

    Fleisher, Linda; Erkoboni, Danielle; Halkyard, Katherine; Sykes, Emily; Norris, Marisol S.; Walker, Lorrie; Winston, Flaura

    2017-01-01

    Childhood death from vehicle crashes and the delivery of information about proper child restraint system (CRS) use continue to be critical public health issues. Safe Seat, a sequential mixed-methods study, identified gaps in parental knowledge about, and perceived challenges in the use of, appropriate CRS, as well as insights into parents' preferences among various technological approaches to delivering CRS education. Focus groups (eight groups with 21 participants) and a quantitative national survey (N = 1251) using MTurk were conducted. Although there were differences in age, racial/ethnic background, and educational level between the focus group participants and the national sample, there was a great deal of consistency in the need for more timely and personalized information about CRS. The majority of parents did not utilize car seat check professionals, although they expressed interest in, and a lack of knowledge about how to access, these resources. Although there was some interest in an app that would be personalized and able to push just-in-time content (e.g., new guidelines, location and times of car seat checks), content with sporadic relevance (e.g., initial installation) seemed more appropriate for a website. Stakeholder input is critical to guide the development and delivery of acceptable and useful child safety education. PMID:28954429

  6. Are mHealth Interventions to Improve Child Restraint System Installation of Value? A Mixed Methods Study of Parents.

    PubMed

    Fleisher, Linda; Erkoboni, Danielle; Halkyard, Katherine; Sykes, Emily; Norris, Marisol S; Walker, Lorrie; Winston, Flaura

    2017-09-26

    Childhood death from vehicle crashes and the delivery of information about proper child restraint system (CRS) use continue to be critical public health issues. Safe Seat, a sequential mixed-methods study, identified gaps in parental knowledge about, and perceived challenges in the use of, appropriate CRS, as well as insights into parents' preferences among various technological approaches to delivering CRS education. Focus groups (eight groups with 21 participants) and a quantitative national survey (N = 1251) using MTurk were conducted. Although there were differences in age, racial/ethnic background, and educational level between the focus group participants and the national sample, there was a great deal of consistency in the need for more timely and personalized information about CRS. The majority of parents did not utilize car seat check professionals, although they expressed interest in, and a lack of knowledge about how to access, these resources. Although there was some interest in an app that would be personalized and able to push just-in-time content (e.g., new guidelines, location and times of car seat checks), content with sporadic relevance (e.g., initial installation) seemed more appropriate for a website. Stakeholder input is critical to guide the development and delivery of acceptable and useful child safety education.

  7. Retrospective checking of compliance with practice guidelines for acute stroke care: a novel experiment using openEHR’s Guideline Definition Language

    PubMed Central

    2014-01-01

    Background Providing scalable clinical decision support (CDS) across institutions that use different electronic health record (EHR) systems has been a challenge for medical informatics researchers. The lack of commonly shared EHR models and terminology bindings has been recognised as a major barrier to sharing CDS content among different organisations. The openEHR Guideline Definition Language (GDL) expresses CDS content based on openEHR archetypes and can support any clinical terminologies or natural languages. Our aim was to explore in an experimental setting the practicability of GDL and its underlying archetype formalism. A further aim was to report on the artefacts produced by this new technological approach in this particular experiment. We modelled and automatically executed compliance checking rules from clinical practice guidelines for acute stroke care. Methods We extracted rules from the European clinical practice guidelines as well as from treatment contraindications for acute stroke care and represented them using GDL. Then we executed the rules retrospectively on 49 mock patient cases to check the cases’ compliance with the guidelines, and manually validated the execution results. We used openEHR archetypes, GDL rules, the openEHR reference information model, reference terminologies and the Data Archetype Definition Language. We utilised the open-sourced GDL Editor for authoring GDL rules, the international archetype repository for reusing archetypes, the open-sourced Ocean Archetype Editor for authoring or modifying archetypes and the CDS Workbench for executing GDL rules on patient data. Results We successfully represented clinical rules about 14 out of 19 contraindications for thrombolysis and other aspects of acute stroke care with 80 GDL rules. These rules are based on 14 reused international archetypes (one of which was modified), 2 newly created archetypes and 51 terminology bindings (to three terminologies). 
Our manual compliance checks for the 49 mock patients matched the automated compliance results completely. Conclusions Shareable guideline knowledge for use in automated retrospective checking of guideline compliance may be achievable using GDL. Whether the same GDL rules can be used for at-the-point-of-care CDS remains unknown. PMID:24886468
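    The retrospective, data-driven rule checking described above can be sketched generically (the rule content and record field names below are invented for illustration and are not drawn from the stroke guideline set, nor is this GDL syntax):

```python
# Each rule is a (name, predicate) pair evaluated against a patient
# record; retrospective compliance checking reduces to collecting the
# rules a record violates.

RULES = [
    ("age_documented", lambda rec: "age" in rec),
    ("no_thrombolysis_over_limit",
     lambda rec: not (rec.get("thrombolysis")
                      and rec.get("onset_hours", 0) > 4.5)),
]

def check_compliance(record):
    # return the names of the rules this record violates
    return [name for name, rule in RULES if not rule(record)]

patients = [
    {"age": 71, "thrombolysis": True, "onset_hours": 2.0},
    {"thrombolysis": True, "onset_hours": 6.0},
]
for rec in patients:
    print(check_compliance(rec))
# first record complies; second violates both rules
```

What GDL adds over such an ad-hoc encoding is that the "record fields" are openEHR archetype elements with terminology bindings, so the same rules execute against data from different EHR systems.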

  8. Algebraic model checking for Boolean gene regulatory networks.

    PubMed

    Tran, Quoc-Nam

    2011-01-01

    We present a computational method in which modular and Groebner basis (GB) computations in Boolean rings are used for solving problems in Boolean gene regulatory networks (BN). In contrast to other known algebraic approaches, the degree of intermediate polynomials during the calculation of Groebner bases using our method will never grow, resulting in a significant improvement in running time and memory space consumption. We also show how calculation in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present our experimental results in finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising. Our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems.
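    For contrast with the algebraic method, the attractor-finding problem itself can be stated by brute force (this sketch enumerates all states of a tiny invented network; the paper's contribution is precisely to avoid such enumeration via Groebner bases):

```python
from itertools import product

# A 3-gene Boolean network with a synchronous update rule.  An
# attractor is a cycle of states that every trajectory eventually
# enters; with 2^n states we can simply follow each trajectory until
# it repeats.

def update(state):
    a, b, c = state
    return (b and c, a, not a)   # example regulatory functions

def attractors(n_genes=3):
    found = set()
    for start in product([False, True], repeat=n_genes):
        seen = []
        s = start
        while s not in seen:     # follow the trajectory to its first repeat
            seen.append(s)
            s = update(s)
        cycle = seen[seen.index(s):]   # states from the repeat onward
        found.add(frozenset(cycle))
    return found

for att in attractors():
    print(sorted(att))   # this network has a single fixed-point attractor
```

Brute force is exponential in the number of genes, which is why a symbolic encoding of the same question (here, as polynomial systems over Boolean rings) matters.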

  9. SimCheck: An Expressive Type System for Simulink

    NASA Technical Reports Server (NTRS)

    Roy, Pritam; Shankar, Natarajan

    2010-01-01

    MATLAB Simulink is a member of a class of visual languages that are used for modeling and simulating physical and cyber-physical systems. A Simulink model consists of blocks with input and output ports connected using links that carry signals. We extend the type system of Simulink with annotations and dimensions/units associated with ports and links. These types can capture invariants on signals as well as relations between signals. We define a type-checker that checks the well-formedness of Simulink blocks with respect to these type annotations. The type checker generates proof obligations that are solved by SRI's Yices solver for satisfiability modulo theories (SMT). This translation can be used to detect type errors, demonstrate counterexamples, generate test cases, or prove the absence of type errors. Our work is an initial step toward the symbolic analysis of MATLAB Simulink models.
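
    The simplest invariant such annotations capture is unit consistency across links: both endpoints of a link must carry the same unit. The sketch below illustrates only that check; the block names and the dictionary representation are assumptions for illustration, not SimCheck's actual interface, and real SimCheck obligations are discharged by an SMT solver.

```python
# Illustrative unit-consistency check over annotated block ports.
# ports maps (block, port) -> unit string; links connect a source
# port to a destination port.

def check_units(ports, links):
    """Return one error tuple per link whose endpoint units disagree."""
    errors = []
    for src, dst in links:
        if ports[src] != ports[dst]:
            errors.append((src, dst, ports[src], ports[dst]))
    return errors

ports = {
    ("gain", "out"):       "m/s",
    ("integrator", "in"):  "m/s",
    ("integrator", "out"): "m",
    ("scope", "in"):       "m/s",  # mismatch: actually receives "m"
}
links = [(("gain", "out"), ("integrator", "in")),
         (("integrator", "out"), ("scope", "in"))]
print(check_units(ports, links))
```

    A type system like SimCheck's goes further, relating signals by arbitrary predicates rather than just equality of unit labels.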

  10. A multicentre randomized controlled trial of an empowerment-inspired intervention for adolescents starting continuous subcutaneous insulin infusion--a study protocol.

    PubMed

    Brorsson, Anna Lena; Leksell, Janeth; Viklund, Gunnel; Lindholm Olinder, Anna

    2013-12-20

    Continuous subcutaneous insulin infusion (CSII) treatment among children with type 1 diabetes is increasing in Sweden. However, studies evaluating glycaemic control in children using CSII show inconsistent results. The distribution of responsibility for diabetes self-management between children and parents is often unclear and needs clarification. There is much published support for continued parental involvement and shared diabetes management during adolescence. Guided Self-Determination (GSD) is an empowerment-based, person-centred, reflection and problem solving method intended to guide the patient to become self-sufficient and develop life skills for managing difficulties in diabetes self-management. This method has been adapted for adolescents and parents as Guided Self-Determination-Young (GSD-Y). This study aims to evaluate the effect of an intervention with GSD-Y in groups of adolescents starting on insulin pumps and their parents on diabetes-related family conflicts, perceived health and quality of life (QoL), and metabolic control. Here, we describe the protocol and plans for study enrollment. This study is designed as a randomized, controlled, prospective, multicentre study. Eighty patients between 12 and 18 years of age who are planning to start CSII will be included. All adolescents and their parents will receive standard insulin pump training. The education intervention will be conducted when CSII is to be started and at four appointments in the first 4 months after starting CSII. The primary outcome is haemoglobin A1c levels. Secondary outcomes are perceived health and QoL, frequency of blood glucose self-monitoring and bolus doses, and usage of carbohydrate counting. The following instruments will be used: Disabkids, 'Check your health', the Diabetes Family Conflict Scale and the Swedish Diabetes Empowerment Scale. Outcomes will be evaluated within and between groups by comparing data at baseline, and at 6 and 12 months after starting treatment.
In this study, we will assess the effect of starting CSII together with the GSD model to determine whether this approach leads to retention of improved glycaemic control, QoL, responsibility distribution and reduced diabetes-related conflicts in the family.

  11. Application of Survival Analysis and Multistate Modeling to Understand Animal Behavior: Examples from Guide Dogs

    PubMed Central

    Asher, Lucy; Harvey, Naomi D.; Green, Martin; England, Gary C. W.

    2017-01-01

    Epidemiology is the study of patterns of health-related states or events in populations. Statistical models developed for epidemiology could be usefully applied to behavioral states or events. The aim of this study is to present the application of epidemiological statistics to understand animal behavior where discrete outcomes are of interest, using data from guide dogs to illustrate. Specifically, survival analysis and multistate modeling are applied to data on guide dogs comparing dogs that completed training and qualified as a guide dog, to those that were withdrawn from the training program. Survival analysis estimates the time to (or between) binary events and the probability of an event occurring at or beyond a specified time point. Survival analysis, using a Cox proportional hazards model, was used to examine the time taken to withdraw a dog from training. Sex, breed, and other factors affected time to withdrawal. Bitches were withdrawn faster than dogs; Labradors were withdrawn faster, and Labrador × Golden Retrievers more slowly, than Golden Retriever × Labradors; and dogs not bred by Guide Dogs were withdrawn faster than those bred by Guide Dogs. Multistate modeling (MSM) can be used as an extension of survival analysis to incorporate more than two discrete events or states. Multistate models were used to investigate transitions between states of training to qualification as a guide dog or behavioral withdrawal, and from qualification as a guide dog to behavioral withdrawal. Sex, breed (with purebred Labradors and Golden Retrievers differing from F1 crosses), and whether or not a dog was bred by Guide Dogs affected movements between states. We postulate that survival analysis and MSM could be applied to a wide range of behavioral data, and key examples are provided. PMID:28804710
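
    The survival-curve idea underlying such analyses can be sketched with a minimal Kaplan-Meier estimator in pure Python, using invented "days until withdrawal from training" data; the study itself fits Cox proportional hazards and multistate models rather than this simple estimator.

```python
# Minimal Kaplan-Meier estimator. times are follow-up times;
# events flag 1 = withdrawn (event observed), 0 = censored
# (e.g. the dog qualified, so withdrawal was never observed).

def kaplan_meier(times, events):
    """Return [(t, S(t))] at each observed event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at t
        n = sum(1 for tt, _ in data if tt == t)   # all leaving risk set at t
        if d:
            s *= 1 - d / at_risk                  # KM product-limit step
            curve.append((t, s))
        at_risk -= n
        i += n
    return curve

times  = [30, 45, 45, 60, 90, 120]   # made-up data
events = [1,  1,  0,  1,  0,  1]
print(kaplan_meier(times, events))
```

    A Cox model extends this by relating the hazard of withdrawal to covariates such as sex and breed.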

  12. Computational technique for stepwise quantitative assessment of equation correctness

    NASA Astrophysics Data System (ADS)

    Othman, Nuru'l Izzah; Bakar, Zainab Abu

    2017-04-01

    Many of the computer-aided mathematics assessment systems that are available today possess the capability to implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking the mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions on solving linear algebraic equations in one variable. 350 working schemes comprising 1385 responses were collected using a marking engine prototype, which has been developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
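
    The tokenize-and-compare step can be sketched with a multiset (bag-of-tokens) similarity between two equation strings. The tokenizer and the Jaccard measure below are simplified assumptions for illustration, not the paper's exact SCCS definitions.

```python
import re
from collections import Counter

def tokenize(expr):
    """Split an equation string into numbers, names and operators."""
    return re.findall(r"\d+|[a-zA-Z]+|[-+*/=()]", expr)

def multiset_similarity(a, b):
    """Jaccard similarity of token multisets, in [0, 1]."""
    ca, cb = Counter(tokenize(a)), Counter(tokenize(b))
    inter = sum((ca & cb).values())   # multiset intersection size
    union = sum((ca | cb).values())   # multiset union size
    return inter / union if union else 1.0

print(multiset_similarity("2*x + 3 = 7", "3 + 2*x = 7"))  # identical token multisets
print(multiset_similarity("2*x = 4", "x = 2"))
```

    A structural score like this rewards responses that contain the expected tokens regardless of order, which is one way to grade a rearranged but correct step.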

  13. Specification and Verification of Web Applications in Rewriting Logic

    NASA Astrophysics Data System (ADS)

    Alpuente, María; Ballis, Demis; Romero, Daniel

    This paper presents a Rewriting Logic framework that formalizes the interactions between Web servers and Web browsers through a communicating protocol abstracting HTTP. The proposed framework includes a scripting language that is powerful enough to model the dynamics of complex Web applications by encompassing the main features of the most popular Web scripting languages (e.g. PHP, ASP, Java Servlets). We also provide a detailed characterization of browser actions (e.g. forward/backward navigation, page refresh, and new window/tab openings) via rewrite rules, and show how our models can be naturally model-checked by using the Linear Temporal Logic of Rewriting (LTLR), which is a Linear Temporal Logic specifically designed for model-checking rewrite theories. Our formalization is particularly suitable for verification purposes, since it allows one to perform in-depth analyses of many subtle aspects related to Web interaction. Finally, the framework has been completely implemented in Maude, and we report on some successful experiments that we conducted by using the Maude LTLR model-checker.
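
    The back/forward navigation semantics characterized by the rewrite rules can be sketched as plain state transitions. A faithful encoding would be Maude rewrite rules as in the paper; the history/future state shape below is a hypothetical simplification in Python.

```python
# Browser navigation as transitions on (history, future) pairs,
# mirroring the usual back/forward stack semantics.

def navigate(state, page):
    history, future = state
    return (history + [page], [])            # new navigation clears "forward"

def back(state):
    history, future = state
    if len(history) > 1:
        return (history[:-1], [history[-1]] + future)
    return state                             # nothing to go back to

def forward(state):
    history, future = state
    if future:
        return (history + [future[0]], future[1:])
    return state

s = (["home"], [])
s = navigate(s, "search")
s = navigate(s, "results")
s = back(s)
print(s)  # (['home', 'search'], ['results'])
```

    Expressed as rewrite rules, each function becomes a labelled rule over browser-state terms, and LTLR properties can then quantify over the navigation traces.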

  14. Experimental Evaluation of a Planning Language Suitable for Formal Verification

    NASA Technical Reports Server (NTRS)

    Butler, Rick W.; Munoz, Cesar A.; Siminiceanu, Radu I.

    2008-01-01

    The marriage of model checking and planning faces two seemingly diverging alternatives: the need for a planning language expressive enough to capture the complexity of real-life applications, as opposed to a language simple, yet robust enough to be amenable to exhaustive verification and validation techniques. In an attempt to reconcile these differences, we have designed an abstract plan description language, ANMLite, inspired from the Action Notation Modeling Language (ANML) [17]. We present the basic concepts of the ANMLite language as well as an automatic translator from ANMLite to the model checker SAL (Symbolic Analysis Laboratory) [7]. We discuss various aspects of specifying a plan in terms of constraints and explore the implications of choosing a robust logic behind the specification of constraints, rather than simply proposing a new planning language. Additionally, we provide an initial assessment of the efficiency of model checking to search for solutions of planning problems. To this end, we design a basic test benchmark and study the scalability of the generated SAL models in terms of plan complexity.

  15. Formal Verification of the Runway Safety Monitor

    NASA Technical Reports Server (NTRS)

    Siminiceanu, Radu; Ciardo, Gianfranco

    2006-01-01

    The Runway Safety Monitor (RSM) designed by Lockheed Martin is part of NASA's effort to reduce runway accidents. We developed a Petri net model of the RSM protocol and used the model checking functions of our tool SMART to investigate a number of safety properties in RSM. To mitigate the impact of state-space explosion, we built a highly discretized model of the system, obtained by partitioning the monitored runway zone into a grid of smaller volumes and by considering scenarios involving only two aircraft. The model also assumes that there are no communication failures, such as bad input from radar or lack of incoming data, thus it relies on a consistent view of reality by all participants. In spite of these simplifications, we were able to expose potential problems in the RSM conceptual design. Our findings were forwarded to the design engineers, who undertook corrective action. Additionally, the results stress the efficiency attained by the new model checking algorithms implemented in SMART, and demonstrate their applicability to real-world systems.
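
    The discretized-grid analysis can be illustrated with a toy explicit-state reachability check: two aircraft move cell by cell on a one-dimensional strip, and the state space is searched exhaustively for a same-cell conflict. The model below is entirely invented for illustration and far simpler than the RSM Petri net analysed with SMART.

```python
from collections import deque

CELLS = 4  # discretized runway strip, positions 0..3

def successors(state):
    """Each aircraft may stay put or move one cell left or right."""
    a, b = state
    for da in (-1, 0, 1):
        for db in (-1, 0, 1):
            na, nb = a + da, b + db
            if 0 <= na < CELLS and 0 <= nb < CELLS:
                yield (na, nb)

def conflict_reachable(start):
    """Breadth-first search for any state where both aircraft share a cell."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        if s[0] == s[1]:
            return True
        for nxt in successors(s):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(conflict_reachable((0, 3)))
```

    Real model checkers fight the same state-space explosion this brute-force search suffers from, which is why the RSM study relied on heavy discretization and symbolic algorithms.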

  16. Ultrasound-Guided Vascular Access Simulator for Medical Training: Proposal of a Simple, Economic and Effective Model.

    PubMed

    Fürst, Rafael Vilhena de Carvalho; Polimanti, Afonso César; Galego, Sidnei José; Bicudo, Maria Claudia; Montagna, Erik; Corrêa, João Antônio

    2017-03-01

    To present a simple and affordable model that properly simulates ultrasound-guided venous access. The simulation was made using a latex balloon tube filled with water and dye solution implanted in a thawed chicken breast with bones. The presented model allows the simulation of all implant stages of a central catheter. The obtained echogenicity is similar to that observed in human tissue, and the ultrasound identification of the tissues, balloon, needle, wire guide and catheter is feasible and reproducible. The proposed model is simple, economical, easy to manufacture and capable of realistically and effectively simulating ultrasound-guided venous access.

  17. Cognitive Style Mapping at Mt. Hood Community College.

    ERIC Educational Resources Information Center

    Keyser, John S.

    1980-01-01

    Describes Mount Hood Community College's experiences using the Modified Hill Model for Cognitive Style Mapping (CSM). Enumerates the nine dimensions of cognitive style assessed by the model. Discusses the value and limitations of CSM, five major checks on the validity of the model, and Mount Hood faculty's involvement with CSM. (AYC)

  18. Scaling in the Donangelo-Sneppen model for evolution of money

    NASA Astrophysics Data System (ADS)

    Stauffer, Dietrich; Radomski, Jan P.

    2001-03-01

    The evolution of money from unsuccessful barter attempts, as modeled by Donangelo and Sneppen, is modified by a deterministic instead of a probabilistic selection of the most desired product as money. We check in particular the characteristic times of the model as a function of system size.

  19. On the Estimation of Standard Errors in Cognitive Diagnosis Models

    ERIC Educational Resources Information Center

    Philipp, Michel; Strobl, Carolin; de la Torre, Jimmy; Zeileis, Achim

    2018-01-01

    Cognitive diagnosis models (CDMs) are an increasingly popular method to assess mastery or nonmastery of a set of fine-grained abilities in educational or psychological assessments. Several inference techniques are available to quantify the uncertainty of model parameter estimates, to compare different versions of CDMs, or to check model…

  20. The Future of Political Communication Research: A Japanese Perspective.

    ERIC Educational Resources Information Center

    Youichi, Ito

    1993-01-01

    Introduces two Japanese models of mass media effects: (1) the "joho kohdo" (information behavior) model which suggests that people use extracted information to check the credibility of mass media information; and (2) the tripolar "kuuki" model which suggests that the mass media have effects as part of the triadic relationship…
