Program Model Checking as a New Trend
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Visser, Willem; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper introduces a special section of STTT (International Journal on Software Tools for Technology Transfer) containing a selection of papers that were presented at the 7th International SPIN workshop, Stanford, August 30 - September 1, 2000. The workshop was named SPIN Model Checking and Software Verification, with an emphasis on model checking of programs. The paper outlines the motivation for stressing software verification, rather than only design and model verification, by presenting the work done in the Automated Software Engineering group at NASA Ames Research Center within the last 5 years. This includes work in software model checking, testing-like technologies, and static analysis.
Program Model Checking: A Practitioner's Guide
NASA Technical Reports Server (NTRS)
Pressburger, Thomas T.; Mansouri-Samani, Masoud; Mehlitz, Peter C.; Pasareanu, Corina S.; Markosian, Lawrence Z.; Penix, John J.; Brat, Guillaume P.; Visser, Willem C.
2008-01-01
Program model checking is a verification technology that uses state-space exploration to evaluate large numbers of potential program executions. Program model checking provides improved coverage over testing by systematically evaluating all possible test inputs and all possible interleavings of threads in a multithreaded system. Model-checking algorithms use several classes of optimizations to reduce the time and memory requirements for analysis, as well as heuristics for meaningful analysis of partial areas of the state space. Our goal in this guidebook is to assemble, distill, and demonstrate emerging best practices for applying program model checking. We offer it as a starting point and introduction for those who want to apply model checking to software verification and validation. The guidebook will not discuss any specific tool in great detail, but we provide references for specific tools.
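To make the idea of exhaustive interleaving exploration concrete, here is a minimal, tool-agnostic sketch in Python (the two-thread counter program and the state encoding are invented for illustration): a breadth-first search over all interleavings of two unsynchronized increments finds the classic lost-update violation that testing can easily miss.

```python
from collections import deque

# Each thread is a list of atomic steps; a step maps a shared state to a new state.
# Thread program: load the counter into a local register, then store register + 1.
def make_thread(tid):
    return [
        lambda s, tid=tid: {**s, f"r{tid}": s["counter"]},        # load
        lambda s, tid=tid: {**s, "counter": s[f"r{tid}"] + 1},     # store
    ]

THREADS = [make_thread(0), make_thread(1)]

def successors(state):
    shared, pcs = state
    for tid, pc in enumerate(pcs):
        if pc < len(THREADS[tid]):
            new_shared = THREADS[tid][pc](dict(shared))
            new_pcs = tuple(p + 1 if i == tid else p for i, p in enumerate(pcs))
            yield (frozenset(new_shared.items()), new_pcs)

def check(initial, prop):
    """BFS over all interleavings; return a violating final state or None."""
    init = (frozenset(initial.items()), (0, 0))
    seen, queue = {init}, deque([init])
    while queue:
        state = queue.popleft()
        shared = dict(state[0])
        if all(pc == 2 for pc in state[1]) and not prop(shared):
            return shared                      # property violated at a final state
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

cex = check({"counter": 0, "r0": 0, "r1": 0}, lambda s: s["counter"] == 2)
print("counterexample:", cex)   # a final state with counter == 1 (lost update)
```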
Abstraction and Assume-Guarantee Reasoning for Automated Software Verification
NASA Technical Reports Server (NTRS)
Chaki, S.; Clarke, E.; Giannakopoulou, D.; Pasareanu, C. S.
2004-01-01
Compositional verification and abstraction are the key techniques to address the state explosion problem associated with model checking of concurrent software. A promising compositional approach is to prove properties of a system by checking properties of its components in an assume-guarantee style. This article proposes a framework for performing abstraction and assume-guarantee reasoning of concurrent C code in an incremental and fully automated fashion. The framework uses predicate abstraction to extract and refine finite state models of software and it uses an automata learning algorithm to incrementally construct assumptions for the compositional verification of the abstract models. The framework can be instantiated with different assume-guarantee rules. We have implemented our approach in the COMFORT reasoning framework and we show how COMFORT outperforms several previous software model checking approaches when checking safety properties of non-trivial concurrent programs.
Proceedings of the Second NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Munoz, Cesar (Editor)
2010-01-01
This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.
Efficient model checking of network authentication protocol based on SPIN
NASA Astrophysics Data System (ADS)
Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan
2013-03-01
Model checking is a very useful technique for verifying network authentication protocols. In order to improve the efficiency of modeling and verifying protocols with model checking technology, this paper first proposes a universal formal description method for protocols. Combined with the model checker SPIN, the method can conveniently verify the properties of a protocol. By applying some model simplification strategies, we can model several protocols efficiently and reduce the state space of the model. Compared with the previous literature, this paper achieves a higher degree of automation and better verification efficiency. Finally, based on the method described in the paper, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective and useful for other authentication protocols.
An Integrated Environment for Efficient Formal Design and Verification
NASA Technical Reports Server (NTRS)
1998-01-01
The general goal of this project was to improve the practicality of formal methods by combining techniques from model checking and theorem proving. At the time the project was proposed, the model checking and theorem proving communities were applying different tools to similar problems, but there was not much cross-fertilization. This project involved a group from SRI that had substantial experience in the development and application of theorem-proving technology, and a group at Stanford that specialized in model checking techniques. Now, over five years after the proposal was submitted, there are many research groups working on combining theorem-proving and model checking techniques, and much more communication between the model checking and theorem proving research communities. This project contributed significantly to this research trend. The research work under this project covered a variety of topics: new theory and algorithms; prototype tools; verification methodology; and applications to problems in particular domains.
Assume-Guarantee Verification of Source Code with Design-Level Assumptions
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Pasareanu, Corina S.; Cobleigh, Jamieson M.
2004-01-01
Model checking is an automated technique that can be used to determine whether a system satisfies certain required properties. To address the 'state explosion' problem associated with this technique, we propose to integrate assume-guarantee verification at different phases of system development. During design, developers build abstract behavioral models of the system components and use them to establish key properties of the system. To increase the scalability of model checking at this level, we have developed techniques that automatically decompose the verification task by generating component assumptions for the properties to hold. The design-level artifacts are subsequently used to guide the implementation of the system, but also to enable more efficient reasoning at the source-code level. In particular, we propose to use design-level assumptions to similarly decompose the verification of the actual system implementation. We demonstrate our approach on a significant NASA application, where design-level models were used to identify and correct a safety property violation, and design-level assumptions allowed us to check successfully that the property was preserved by the implementation.
Temporal Specification and Verification of Real-Time Systems.
1991-08-30
of concrete real-time systems can be modeled adequately. Specification: We present two conservative extensions of temporal logic that allow for the...logic. We present both model-checking algorithms for the automatic verification of finite-state real-time systems and proof methods for the deductive verification of real-time systems.
Model Checking Satellite Operational Procedures
NASA Astrophysics Data System (ADS)
Cavaliere, Federico; Mari, Federico; Melatti, Igor; Minei, Giovanni; Salvo, Ivano; Tronci, Enrico; Verzino, Giovanni; Yushtein, Yuri
2011-08-01
We present a model checking approach for the automatic verification of satellite operational procedures (OPs). Building a model for a complex system such as a satellite is a hard task. We overcome this obstacle by using a suitable simulator (SIMSAT) for the satellite. Our approach aims at improving OP quality assurance by automatic exhaustive exploration of all possible simulation scenarios. Moreover, our solution decreases OP verification costs by using a model checker (CMurphi) to automatically drive the simulator. We model OPs as user-executed programs observing the simulator telemetries and sending telecommands to the simulator. In order to assess the feasibility of our approach, we present experimental results on a simple but meaningful scenario. Our results show that we can save up to 90% of verification time.
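The structure of such a checker-driven simulation loop can be sketched as follows. The Simulator class, its telemetry fields, and the safety bound are invented stand-ins rather than the SIMSAT or CMurphi interfaces, but the replay-and-prune loop mirrors the idea of exhaustively exploring telecommand choices.

```python
# A minimal sketch of driving a (hypothetical) simulator from an explicit-state
# search loop. The Simulator class and its telemetry are stand-ins, not the
# SIMSAT interface; the point is the reset/replay-and-prune structure.

class Simulator:
    """Toy battery model: telecommands switch a heater on/off, telemetry is (heater, temp)."""
    def reset(self):
        self.heater, self.temp = False, 20

    def send(self, telecommand):
        if telecommand == "HEATER_ON":
            self.heater = True
        elif telecommand == "HEATER_OFF":
            self.heater = False
        self.temp += 5 if self.heater else -5

    def telemetry(self):
        return (self.heater, self.temp)

TELECOMMANDS = ["HEATER_ON", "HEATER_OFF", "NOP"]

def explore(sim, safe, depth):
    """Enumerate all telecommand sequences up to `depth`, replaying the simulator
    from reset for each prefix and pruning telemetry states seen before."""
    seen = set()
    stack = [()]                      # stack of telecommand prefixes
    while stack:
        prefix = stack.pop()
        sim.reset()
        for tc in prefix:             # replay the prefix (no checkpointing assumed)
            sim.send(tc)
        state = sim.telemetry()
        if not safe(state):
            return list(prefix)       # counterexample: offending telecommand sequence
        if state in seen or len(prefix) >= depth:
            continue
        seen.add(state)
        for tc in TELECOMMANDS:
            stack.append(prefix + (tc,))
    return None

sim = Simulator()
trace = explore(sim, safe=lambda t: 0 <= t[1] <= 40, depth=8)
print("violating telecommand sequence:", trace)
```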
Software Model Checking Without Source Code
NASA Technical Reports Server (NTRS)
Chaki, Sagar; Ivers, James
2009-01-01
We present a framework, called AIR, for verifying safety properties of assembly language programs via software model checking. AIR extends the applicability of predicate abstraction and counterexample guided abstraction refinement to the automated verification of low-level software. By working at the assembly level, AIR allows verification of programs for which source code is unavailable, such as legacy and COTS software, and programs that use features, such as pointers, structures, and object orientation, that are problematic for source-level software verification tools. In addition, AIR makes no assumptions about the underlying compiler technology. We have implemented a prototype of AIR and present encouraging results on several non-trivial examples.
NASA Technical Reports Server (NTRS)
Windley, P. J.
1991-01-01
In this paper we explore the specification and verification of VLSI designs. The paper focuses on abstract specification and verification of functionality using mathematical logic as opposed to low-level boolean equivalence verification such as that done using BDDs and Model Checking. Specification and verification, sometimes called formal methods, are one tool for increasing computer dependability in the face of an exponentially increasing testing effort.
Automated Verification of Specifications with Typestates and Access Permissions
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.; Catano, Nestor
2011-01-01
We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).
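As a rough illustration of the state-machine view of typestates (this is not the Plural semantics, and access permissions are not modelled), a typestate can be encoded as a transition table and call sequences checked against it:

```python
# A minimal illustration of checking call sequences against a typestate
# state machine, a drastically simplified stand-in for the abstract machines
# generated from Plural specifications; access permissions are not modelled.

TYPESTATE = {
    ("closed", "open"):  "opened",
    ("opened", "read"):  "opened",
    ("opened", "close"): "closed",
}

def check_sequence(calls, start="closed"):
    """Return (ok, state_reached_or_offending_call)."""
    state = start
    for call in calls:
        nxt = TYPESTATE.get((state, call))
        if nxt is None:
            return False, f"'{call}' not allowed in state '{state}'"
        state = nxt
    return True, state

print(check_sequence(["open", "read", "close"]))   # (True, 'closed')
print(check_sequence(["read"]))                    # (False, "'read' not allowed in state 'closed'")
```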
Verification of Java Programs using Symbolic Execution and Invariant Generation
NASA Technical Reports Server (NTRS)
Pasareanu, Corina; Visser, Willem
2004-01-01
Software verification is recognized as an important and difficult problem. We present a novel framework, based on symbolic execution, for the automated verification of software. The framework uses annotations in the form of method specifications and loop invariants. We present a novel iterative technique that uses invariant strengthening and approximation for discovering these loop invariants automatically. The technique handles different types of data (e.g. boolean and numeric constraints, dynamically allocated structures and arrays) and it allows for checking universally quantified formulas. Our framework is built on top of the Java PathFinder model checking toolset and it was used for the verification of several non-trivial Java programs.
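The flavor of the checks involved can be illustrated with a small sketch using the z3 SMT solver's Python bindings rather than the Java PathFinder-based framework of the paper: given a candidate loop invariant, the initiation, consecution, and safety obligations are discharged as validity queries.

```python
# Checking that a candidate invariant for
#   i = 0; while i < n: i = i + 1
# is inductive and implies the postcondition i == n (for n >= 0).
# Uses the z3 SMT solver's Python bindings (an assumption of this sketch).
from z3 import Ints, Implies, And, Not, Solver, unsat

i, ip, n = Ints("i ip n")
inv   = lambda v: And(0 <= v, v <= n)      # candidate invariant
init  = And(n >= 0, i == 0)                # loop entry
trans = And(i < n, ip == i + 1)            # one loop iteration
post  = i == n                             # desired postcondition at exit

def valid(formula):
    """A formula is valid iff its negation is unsatisfiable."""
    s = Solver()
    s.add(Not(formula))
    return s.check() == unsat

checks = {
    "initiation":  Implies(init, inv(i)),
    "consecution": Implies(And(inv(i), trans), inv(ip)),
    "safety":      Implies(And(inv(i), Not(i < n)), post),
}
for name, obligation in checks.items():
    print(name, "holds" if valid(obligation) else "fails")
```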
HiVy automated translation of stateflow designs for model checking verification
NASA Technical Reports Server (NTRS)
Pingree, Paula
2003-01-01
The HiVy tool set enables model checking of finite state machine designs. This is achieved by translating state-chart specifications into the input language of the Spin model checker. An abstract syntax of hierarchical sequential automata (HSA) is provided as an intermediate format for the tool set.
Simulation-based MDP verification for leading-edge masks
NASA Astrophysics Data System (ADS)
Su, Bo; Syrel, Oleg; Pomerantsev, Michael; Hagiwara, Kazuyuki; Pearman, Ryan; Pang, Leo; Fujimara, Aki
2017-07-01
For IC design starts below the 20nm technology node, the assist features on photomasks shrink well below 60nm and the printed patterns of those features on masks written by VSB eBeam writers start to show a large deviation from the mask designs. Traditional geometry-based fracturing starts to show large errors for those small features. As a result, other mask data preparation (MDP) methods have become available and adopted, such as rule-based Mask Process Correction (MPC), model-based MPC and eventually model-based MDP. The new MDP methods may place shot edges slightly differently from target to compensate for mask process effects, so that the final patterns on a mask are much closer to the design (which can be viewed as the ideal mask), especially for those assist features. Such an alteration generally produces better masks that are closer to the intended mask design. Traditional XOR-based MDP verification cannot detect problems caused by eBeam effects. Much like model-based OPC verification, which became a necessity for OPC a decade ago, we see the same trend in MDP today. A simulation-based MDP verification solution requires a GPU-accelerated computational geometry engine with simulation capabilities. To have a meaningful simulation-based mask check, a good mask process model is needed. The TrueModel® system is a field-tested physical mask model developed by D2S. The GPU-accelerated D2S Computational Design Platform (CDP) is used to run simulation-based mask checks, as well as model-based MDP. In addition to simulation-based checks such as mask EPE or dose margin, geometry-based rules are also available to detect quality issues such as slivers or CD splits. Dose margin related hotspots can also be detected by setting a correct detection threshold. In this paper, we will demonstrate GPU-acceleration for geometry processing, and give examples of mask check results and performance data. GPU-acceleration is necessary to make simulation-based mask MDP verification acceptable.
Practical Formal Verification of Diagnosability of Large Models via Symbolic Model Checking
NASA Technical Reports Server (NTRS)
Cavada, Roberto; Pecheur, Charles
2003-01-01
This document reports on the activities carried out during a four-week visit of Roberto Cavada at the NASA Ames Research Center. The main goal was to test the practical applicability of the proposed framework, where a diagnosability problem is reduced to a Symbolic Model Checking problem. Section 2 contains a brief explanation of major techniques currently used in Symbolic Model Checking, and how these techniques can be tuned in order to obtain good performance when using Model Checking tools. Diagnosability analysis is performed on large and structured models of real plants. Section 3 describes how these plants are modeled, and how models can be simplified to improve the performance of Symbolic Model Checkers. Section 4 reports scalability results. Three test cases are briefly presented, and several parameters and techniques have been applied to those test cases in order to produce comparison tables. Furthermore, a comparison between several Model Checkers is reported. Section 5 summarizes the application of diagnosability verification to a real application. Several properties have been tested, and results have been highlighted. Finally, section 6 draws some conclusions, and outlines future lines of research.
Verifying Multi-Agent Systems via Unbounded Model Checking
NASA Technical Reports Server (NTRS)
Kacprzak, M.; Lomuscio, A.; Lasica, T.; Penczek, W.; Szreter, M.
2004-01-01
We present an approach to the problem of verification of epistemic properties in multi-agent systems by means of symbolic model checking. In particular, it is shown how to extend the technique of unbounded model checking from a purely temporal setting to a temporal-epistemic one. In order to achieve this, we base our discussion on interpreted systems semantics, a popular semantics used in multi-agent systems literature. We give details of the technique and show how it can be applied to the well known train, gate and controller problem. Keywords: model checking, unbounded model checking, multi-agent systems
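The interpreted-systems semantics of the knowledge operator can be illustrated directly. The sketch below covers only the semantics of K_i over an explicitly listed set of reachable global states, not the unbounded model checking algorithm of the paper, and the toy train-gate-controller states are invented.

```python
# Interpreted-systems reading of knowledge: agent i "knows" phi in a global
# state g iff phi holds in every reachable global state that i cannot
# distinguish from g (i.e. every state with the same local view for i).

# Global states of a toy train-gate system: (train_position, gate_status, controller_signal)
REACHABLE = [
    ("away",    "open",   "green"),
    ("waiting", "open",   "red"),
    ("waiting", "closed", "red"),
    ("inside",  "closed", "red"),
]

# Local views: which component of the global state each agent observes.
LOCAL_VIEW = {"train": lambda g: g[0], "gate": lambda g: g[1], "controller": lambda g: g[2]}

def knows(agent, phi, g):
    """K_agent(phi) at global state g under the indistinguishability relation."""
    view = LOCAL_VIEW[agent]
    return all(phi(h) for h in REACHABLE if view(h) == view(g))

in_tunnel   = lambda g: g[0] == "inside"
gate_closed = lambda g: g[1] == "closed"

g = ("inside", "closed", "red")
print(knows("train", gate_closed, g))      # True: whenever the train is inside, the gate is closed
print(knows("controller", in_tunnel, g))   # False: a red signal does not imply the train is inside
```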
Propel: Tools and Methods for Practical Source Code Model Checking
NASA Technical Reports Server (NTRS)
Mansouri-Samani, Massoud; Mehlitz, Peter; Markosian, Lawrence; OMalley, Owen; Martin, Dale; Moore, Lantz; Penix, John; Visser, Willem
2003-01-01
The work reported here is an overview and snapshot of a project to develop practical model checking tools for in-the-loop verification of NASA s mission-critical, multithreaded programs in Java and C++. Our strategy is to develop and evaluate both a design concept that enables the application of model checking technology to C++ and Java, and a model checking toolset for C++ and Java. The design concept and the associated model checking toolset is called Propel. It builds upon the Java PathFinder (JPF) tool, an explicit state model checker for Java applications developed by the Automated Software Engineering group at NASA Ames Research Center. The design concept that we are developing is Design for Verification (D4V). This is an adaption of existing best design practices that has the desired side-effect of enhancing verifiability by improving modularity and decreasing accidental complexity. D4V, we believe, enhances the applicability of a variety of V&V approaches; we are developing the concept in the context of model checking. The model checking toolset, Propel, is based on extending JPF to handle C++. Our principal tasks in developing the toolset are to build a translator from C++ to Java, productize JPF, and evaluate the toolset in the context of D4V. Through all these tasks we are testing Propel capabilities on customer applications.
2009-01-01
Background The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for the simulation of the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementational level. Results We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees a transparent access to formal verification technology for modelers of genetic regulatory networks. PMID:20042075
Bounded Parametric Model Checking for Elementary Net Systems
NASA Astrophysics Data System (ADS)
Knapik, Michał; Szreter, Maciej; Penczek, Wojciech
Bounded Model Checking (BMC) is an efficient verification method for reactive systems. BMC has been applied so far to the verification of properties expressed in (timed) modal logics, but never to their parametric extensions. In this paper we show, for the first time, that BMC can be extended to PRTECTL, a parametric extension of the existential version of CTL. To this aim, we define a bounded semantics and a translation from PRTECTL to SAT. The implementation of the algorithm for Elementary Net Systems is presented, together with some experimental results.
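For readers unfamiliar with the underlying bounded model checking step, the following generic sketch (not the PRTECTL-to-SAT translation defined in the paper) unrolls a toy transition system for k steps with the z3 solver and searches for a property violation within the bound.

```python
# A generic bounded-model-checking sketch: unroll a 2-bit counter for k steps
# and ask a solver whether a state violating the property "the counter never
# reaches 3" is reachable within k steps. Uses z3's Python bindings.
from z3 import Ints, Or, Solver, sat

def bmc(k):
    xs = Ints(" ".join(f"x{i}" for i in range(k + 1)))
    s = Solver()
    s.add(xs[0] == 0)                                  # initial state
    for i in range(k):                                 # unrolled transition relation
        s.add(xs[i + 1] == (xs[i] + 1) % 4)
    s.add(Or([x == 3 for x in xs]))                    # negation of the safety property
    if s.check() == sat:
        m = s.model()
        return [m.evaluate(x).as_long() for x in xs]   # counterexample path
    return None

for k in (2, 3):
    print(f"k={k}:", bmc(k))   # k=2: None (no violation yet); k=3: [0, 1, 2, 3]
```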
Model Checking for Verification of Interactive Health IT Systems
Butler, Keith A.; Mercer, Eric; Bahrami, Ali; Tao, Cui
2015-01-01
Rigorous methods for design and verification of health IT systems have lagged far behind their proliferation. The inherent technical complexity of healthcare, combined with the added complexity of health information technology, makes their resulting behavior unpredictable and introduces serious risk. We propose to mitigate this risk by formalizing the relationship between HIT and the conceptual work that increasingly typifies modern care. We introduce new techniques for modeling clinical workflows and the conceptual products within them that allow established, powerful model checking technology to be applied to interactive health IT systems. The new capability can evaluate the workflows of a new HIT system performed by clinicians and computers to improve safety and reliability. We demonstrate the method on a patient contact system, showing that model checking is effective for interactive systems and that much of it can be automated. PMID:26958166
2017-03-01
considered using indirect models of software execution, for example memory access patterns, to check for security intrusions. Additional research was performed to tackle the ... deterioration, for example, no longer corresponds to the model used during verification time. Finally, the research looked at ways to combine hybrid systems
Regression Verification Using Impact Summaries
NASA Technical Reports Server (NTRS)
Backes, John; Person, Suzette J.; Rungta, Neha; Thachuk, Oksana
2013-01-01
Regression verification techniques are used to prove equivalence of syntactically similar programs. Checking equivalence of large programs, however, can be computationally expensive. Existing regression verification techniques rely on abstraction and decomposition techniques to reduce the computational effort of checking equivalence of the entire program. These techniques are sound but not complete. In this work, we propose a novel approach to improve scalability of regression verification by classifying the program behaviors generated during symbolic execution as either impacted or unimpacted. Our technique uses a combination of static analysis and symbolic execution to generate summaries of impacted program behaviors. The impact summaries are then checked for equivalence using an off-the-shelf decision procedure. We prove that our approach is both sound and complete for sequential programs, with respect to the depth bound of symbolic execution. Our evaluation on a set of sequential C artifacts shows that reducing the size of the summaries can help reduce the cost of software equivalence checking. Various reduction, abstraction, and compositional techniques have been developed to help scale software verification techniques to industrial-sized systems. Although such techniques have greatly increased the size and complexity of systems that can be checked, analysis of large software systems remains costly. Regression analysis techniques, e.g., regression testing [16], regression model checking [22], and regression verification [19], restrict the scope of the analysis by leveraging the differences between program versions. These techniques are based on the idea that if code is checked early in development, then subsequent versions can be checked against a prior (checked) version, leveraging the results of the previous analysis to reduce analysis cost of the current version. Regression verification addresses the problem of proving equivalence of closely related program versions [19]. These techniques compare two programs with a large degree of syntactic similarity to prove that portions of one program version are equivalent to the other. Regression verification can be used for guaranteeing backward compatibility, and for showing behavioral equivalence in programs with syntactic differences, e.g., when a program is refactored to improve its performance, maintainability, or readability. Existing regression verification techniques leverage similarities between program versions by using abstraction and decomposition techniques to improve scalability of the analysis [10, 12, 19]. The abstractions and decomposition in these techniques, e.g., summaries of unchanged code [12] or semantically equivalent methods [19], compute an over-approximation of the program behaviors. The equivalence checking results of these techniques are sound but not complete: they may characterize programs as not functionally equivalent when, in fact, they are equivalent. In this work we describe a novel approach that leverages the impact of the differences between two programs for scaling regression verification. We partition program behaviors of each version into (a) behaviors impacted by the changes and (b) behaviors not impacted (unimpacted) by the changes. Only the impacted program behaviors are used during equivalence checking. We then prove that checking equivalence of the impacted program behaviors is equivalent to checking equivalence of all program behaviors for a given depth bound.
In this work we use symbolic execution to generate the program behaviors and leverage control- and data-dependence information to facilitate the partitioning of program behaviors. The impacted program behaviors are termed impact summaries. The dependence analyses that facilitate the generation of the impact summaries, we believe, could be used in conjunction with other abstraction and decomposition based approaches [10, 12], as a complementary reduction technique. An evaluation of our regression verification technique shows that our approach is capable of leveraging similarities between program versions to reduce the size of the queries and the time required to check for logical equivalence. The main contributions of this work are: - A regression verification technique to generate impact summaries that can be checked for functional equivalence using an off-the-shelf decision procedure. - A proof that our approach is sound and complete with respect to the depth bound of symbolic execution. - An implementation of our technique using the LLVM compiler infrastructure, the klee Symbolic Virtual Machine [4], and a variety of Satisfiability Modulo Theories (SMT) solvers, e.g., STP [7] and Z3 [6]. - An empirical evaluation on a set of C artifacts which shows that the use of impact summaries can reduce the cost of regression verification.
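The final step, handing summaries to an off-the-shelf decision procedure, is essentially an equivalence query. A minimal sketch with z3, where the two "summaries" are hand-written formulas rather than ones produced by symbolic execution:

```python
# Checking behavioural equivalence of two program versions with an off-the-shelf
# decision procedure (z3). The two formulas below are hand-written stand-ins for
# the impacted-path summaries the paper derives via symbolic execution.
from z3 import Int, If, Solver, unsat

x = Int("x")
old_version = If(x >= 0, x, -x)        # abs(x), original coding
new_version = If(x > 0, x, -x)         # abs(x), refactored guard

s = Solver()
s.add(old_version != new_version)      # look for an input where the versions differ
print("equivalent" if s.check() == unsat else f"differ, e.g. at x = {s.model()[x]}")
```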
Formal verification of automated teller machine systems using SPIN
NASA Astrophysics Data System (ADS)
Iqbal, Ikhwan Mohammad; Adzkiya, Dieky; Mukhlash, Imam
2017-08-01
Formal verification is a technique for ensuring the correctness of systems. This work focuses on verifying a model of the Automated Teller Machine (ATM) system against some specifications. We construct the model as a state transition diagram that is suitable for verification. The specifications are expressed as Linear Temporal Logic (LTL) formulas. We use the Simple Promela Interpreter (SPIN) model checker to check whether the model satisfies the formulas. This model checker accepts models written in Process Meta Language (PROMELA), with specifications expressed as LTL formulas.
Abstraction Techniques for Parameterized Verification
2006-11-01
approach for applying model checking to unbounded systems is to extract finite state models from them using conservative abstraction techniques. ... model checking to complex pieces of code like device drivers depends on the use of abstraction methods. An abstraction method extracts a small finite ...
NASA Technical Reports Server (NTRS)
Rushby, John
1991-01-01
The formal specification and mechanically checked verification for a model of fault-masking and transient-recovery among the replicated computers of digital flight-control systems are presented. The verification establishes, subject to certain carefully stated assumptions, that faults among the component computers are masked so that commands sent to the actuators are the same as those that would be sent by a single computer that suffers no failures.
Integrated Formal Analysis of Timed-Triggered Ethernet
NASA Technical Reports Server (NTRS)
Dutertre, Bruno; Shankar, Natarajan; Owre, Sam
2012-01-01
We present new results related to the verification of the Timed-Triggered Ethernet (TTE) clock synchronization protocol. This work extends previous verification of TTE based on model checking. We identify a suboptimal design choice in a compression function used in clock synchronization, and propose an improvement. We compare the original design and the improved definition using the SAL model checker.
Verification and Validation of Autonomy Software at NASA
NASA Technical Reports Server (NTRS)
Pecheur, Charles
2000-01-01
Autonomous software holds the promise of new operation possibilities, easier design and development, and lower operating costs. However, as those systems close control loops and arbitrate resources on-board with specialized reasoning, the range of possible situations becomes very large and uncontrollable from the outside, making conventional scenario-based testing very inefficient. Analytic verification and validation (V&V) techniques, and model checking in particular, can provide significant help for designing autonomous systems in a more efficient and reliable manner, by providing a better coverage and allowing early error detection. This article discusses the general issue of V&V of autonomy software, with an emphasis towards model-based autonomy, model-checking techniques, and concrete experiments at NASA.
Formal Verification of Quasi-Synchronous Systems
2015-07-01
The Automation of Nowcast Model Assessment Processes
2016-09-01
that will automate real-time WRE-N model simulations, collect and quality-control check weather observations for assimilation and verification, and ... domains centered near White Sands Missile Range, New Mexico, where the Meteorological Sensor Array (MSA) will be located. The MSA will provide ... observations and performing quality-control checks for the pre-forecast data assimilation period. 2. Run the WRE-N model to generate model forecast data
Authoring and verification of clinical guidelines: a model driven approach.
Pérez, Beatriz; Porres, Ivan
2010-08-01
The goal of this research is to provide a framework to enable authoring and verification of clinical guidelines. The framework is part of a larger research project aimed at improving the representation, quality and application of clinical guidelines in daily clinical practice. The verification process of a guideline is based on (1) model checking techniques to verify guidelines against semantic errors and inconsistencies in their definition, combined with (2) Model Driven Development (MDD) techniques, which enable us to automatically process manually created guideline specifications and temporal-logic statements to be checked and verified regarding these specifications, making the verification process faster and cost-effective. Particularly, we use UML statecharts to represent the dynamics of guidelines and, based on these manually defined guideline specifications, we use an MDD-based tool chain to automatically process them to generate the input model of a model checker. The model checker takes the resulting model together with the specific guideline requirements, and verifies whether the guideline fulfils such properties. The overall framework has been implemented as an Eclipse plug-in named GBDSSGenerator which, starting from the UML statechart representing a guideline, allows the verification of the guideline against specific requirements. Additionally, we have established a pattern-based approach for defining commonly occurring types of requirements in guidelines. We have successfully validated our overall approach by verifying properties in different clinical guidelines resulting in the detection of some inconsistencies in their definition. The proposed framework allows (1) the authoring and (2) the verification of clinical guidelines against specific requirements defined based on a set of property specification patterns, enabling non-experts to easily write formal specifications and thus easing the verification process.
Learning Assumptions for Compositional Verification
NASA Technical Reports Server (NTRS)
Cobleigh, Jamieson M.; Giannakopoulou, Dimitra; Pasareanu, Corina; Clancy, Daniel (Technical Monitor)
2002-01-01
Compositional verification is a promising approach to addressing the state explosion problem associated with model checking. One compositional technique advocates proving properties of a system by checking properties of its components in an assume-guarantee style. However, the application of this technique is difficult because it involves non-trivial human input. This paper presents a novel framework for performing assume-guarantee reasoning in an incremental and fully automated fashion. To check a component against a property, our approach generates assumptions that the environment needs to satisfy for the property to hold. These assumptions are then discharged on the rest of the system. Assumptions are computed by a learning algorithm. They are initially approximate, but become gradually more precise by means of counterexamples obtained by model checking the component and its environment, alternately. This iterative process may at any stage conclude that the property is either true or false in the system. We have implemented our approach in the LTSA tool and applied it to the analysis of a NASA system.
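The assume-guarantee rule itself can be illustrated with a deliberately simplified finite-trace encoding (components as sets of bounded traces over a shared alphabet, composition as intersection); the L*-based learning of the assumption described above is not modelled here.

```python
# A deliberately simplified, finite illustration of the non-circular
# assume-guarantee rule used in this line of work:
#   premise 1:  every behaviour of M1 allowed by assumption A satisfies P
#   premise 2:  every behaviour of M2 satisfies A
#   conclusion: every behaviour of M1 composed with M2 satisfies P
# Components are represented just by finite sets of bounded traces over a shared
# alphabet (composition = intersection); assumption learning is not modelled.

def assume_guarantee(traces_m1, traces_m2, assumption, prop):
    premise1 = all(prop(t) for t in traces_m1 if assumption(t))
    premise2 = all(assumption(t) for t in traces_m2)
    if premise1 and premise2:
        return True                                    # rule applies: M1 || M2 satisfies P
    # fall back to checking the composition directly (what the rule lets us avoid)
    return all(prop(t) for t in traces_m1 & traces_m2)

# Toy example: a client and a server sharing actions "req" and "ack".
traces_m1 = {(), ("req",), ("req", "ack"), ("ack",)}          # client
traces_m2 = {(), ("req",), ("req", "ack")}                    # server never acks first
assumption = lambda t: t[:1] != ("ack",)                      # "no ack before a request"
prop       = lambda t: t.count("ack") <= t.count("req")       # safety property P

print(assume_guarantee(traces_m1, traces_m2, assumption, prop))   # True, via the two premises
```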
Action-based verification of RTCP-nets with CADP
NASA Astrophysics Data System (ADS)
Biernacki, Jerzy; Biernacka, Agnieszka; Szpyrka, Marcin
2015-12-01
The paper presents an algorithm for translating RTCP-net (real-time coloured Petri net) coverability graphs into the Aldebaran format. The approach provides the possibility of automatic RTCP-net verification using model checking techniques provided by the CADP toolbox. An actual fire alarm control panel system has been modelled and several of its crucial properties have been verified to demonstrate the usability of the approach.
UTP and Temporal Logic Model Checking
NASA Astrophysics Data System (ADS)
Anderson, Hugh; Ciobanu, Gabriel; Freitas, Leo
In this paper we give an additional perspective to the formal verification of programs through temporal logic model checking, which uses Hoare and He's Unifying Theories of Programming (UTP). Our perspective emphasizes the use of UTP designs, an alphabetised relational calculus expressed as a pre/post condition pair of relations, to verify state or temporal assertions about programs. The temporal model checking relation is derived from a satisfaction relation between the model and its properties. The contribution of this paper is that it shows a UTP perspective on temporal logic model checking. The approach includes the notion of efficiency found in traditional model checkers, which reduce the state explosion problem through the use of efficient data structures.
Proceedings of the First NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)
2009-01-01
Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.
The Priority Inversion Problem and Real-Time Symbolic Model Checking
1993-04-23
real-time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties.
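The bounded until operator mentioned above has a simple explicit-state reading; the sketch below evaluates it directly on a finite discrete-time trace (the toy scheduler trace is invented, and the BDD-based symbolic algorithm is not reproduced).

```python
# A direct (explicit, non-symbolic) evaluation of the bounded-until operator
# "phi U[a,b] psi" on a finite discrete-time trace: psi must hold at some step
# j with a <= j <= b, and phi must hold at every earlier step.

def bounded_until(trace, phi, psi, a, b):
    for j in range(a, min(b, len(trace) - 1) + 1):
        if psi(trace[j]) and all(phi(trace[i]) for i in range(j)):
            return True
    return False

# Trace of a toy scheduler: each step records which task holds the resource.
trace = ["low", "low", "high", "high", "idle"]
holds_or_waits = lambda s: s in ("low", "high")
high_running   = lambda s: s == "high"

# "The high-priority task gets the resource within 3 steps, and until then
#  the resource is never idle" -- true on this trace.
print(bounded_until(trace, holds_or_waits, high_running, 0, 3))   # True
print(bounded_until(trace, holds_or_waits, high_running, 0, 1))   # False
```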
A study of applications scribe frame data verifications using design rule check
NASA Astrophysics Data System (ADS)
Saito, Shoko; Miyazaki, Masaru; Sakurai, Mitsuo; Itoh, Takahisa; Doi, Kazumasa; Sakurai, Norioko; Okada, Tomoyuki
2013-06-01
In semiconductor manufacturing, scribe frame data generally is generated for each LSI product according to its specific process design. Scribe frame data is designed based on definition tables of scanner alignment, wafer inspection and customer-specified marks. At the end, we check that the scribe frame design conforms to the specifications of the alignment and inspection marks. Recently, in the COT (customer owned tooling) business or in new technology development, there is no effective verification method for the scribe frame data, and we spend a lot of time on verification. Therefore, we tried to establish a new verification method for scribe frame data by applying pattern matching and DRC (Design Rule Check), which are used in device verification. We describe the scheme of scribe frame data verification using DRC that we applied. First, verification rules are created based on specifications of the scanner, inspection and others, and a mark library is also created for pattern matching. Next, DRC verification is performed on the scribe frame data; the DRC verification includes pattern matching using the mark library. As a result, our experiments demonstrated that by use of pattern matching and DRC verification our new method can yield speed improvements of more than 12 percent compared to the conventional mark checks by visual inspection, and the inspection time can be reduced to less than 5 percent if multi-CPU processing is used. Our method delivers both short processing time and excellent accuracy when checking many marks. It is easy to maintain and provides an easy way for COT customers to use original marks. We believe that our new DRC verification method for scribe frame data is indispensable and mutually beneficial.
Pârvu, Ovidiu; Gilbert, David
2016-01-01
Insights gained from multilevel computational models of biological systems can be translated into real-life applications only if the model correctness has been verified first. One of the most frequently employed in silico techniques for computational model verification is model checking. Traditional model checking approaches only consider the evolution of numeric values, such as concentrations, over time and are appropriate for computational models of small scale systems (e.g. intracellular networks). However for gaining a systems level understanding of how biological organisms function it is essential to consider more complex large scale biological systems (e.g. organs). Verifying computational models of such systems requires capturing both how numeric values and properties of (emergent) spatial structures (e.g. area of multicellular population) change over time and across multiple levels of organization, which are not considered by existing model checking approaches. To address this limitation we have developed a novel approximate probabilistic multiscale spatio-temporal meta model checking methodology for verifying multilevel computational models relative to specifications describing the desired/expected system behaviour. The methodology is generic and supports computational models encoded using various high-level modelling formalisms because it is defined relative to time series data and not the models used to generate it. In addition, the methodology can be automatically adapted to case study specific types of spatial structures and properties using the spatio-temporal meta model checking concept. To automate the computational model verification process we have implemented the model checking approach in the software tool Mule (http://mule.modelchecking.org). Its applicability is illustrated against four systems biology computational models previously published in the literature encoding the rat cardiovascular system dynamics, the uterine contractions of labour, the Xenopus laevis cell cycle and the acute inflammation of the gut and lung. Our methodology and software will enable computational biologists to efficiently develop reliable multilevel computational models of biological systems.
Model Checking - My 27-Year Quest to Overcome the State Explosion Problem
NASA Technical Reports Server (NTRS)
Clarke, Ed
2009-01-01
Model Checking is an automatic verification technique for state-transition systems that are finite-state or that have finite-state abstractions. In the early 1980s, in a series of joint papers with my graduate students E.A. Emerson and A.P. Sistla, we proposed that Model Checking could be used for verifying concurrent systems and gave algorithms for this purpose. At roughly the same time, Joseph Sifakis and his student J.P. Queille at the University of Grenoble independently developed a similar technique. Model Checking has been used successfully to reason about computer hardware and communication protocols and is beginning to be used for verifying computer software. Specifications are written in temporal logic, which is particularly valuable for expressing concurrency properties. An intelligent, exhaustive search is used to determine if the specification is true or not. If the specification is not true, the Model Checker will produce a counterexample execution trace that shows why the specification does not hold. This feature is extremely useful for finding obscure errors in complex systems. The main disadvantage of Model Checking is the state-explosion problem, which can occur if the system under verification has many processes or complex data structures. Although the state-explosion problem is inevitable in the worst case, over the past 27 years considerable progress has been made on the problem for certain classes of state-transition systems that occur often in practice. In this talk, I will describe what Model Checking is, how it works, and the main techniques that have been developed for combating the state explosion problem.
NASA Technical Reports Server (NTRS)
Probst, D.; Jensen, L.
1991-01-01
Delay-insensitive VLSI systems have a certain appeal on the ground due to difficulties with clocks; they are even more attractive in space. We answer the question, is it possible to control state explosion arising from various sources during automatic verification (model checking) of delay-insensitive systems? State explosion due to concurrency is handled by introducing a partial-order representation for systems, and defining system correctness as a simple relation between two partial orders on the same set of system events (a graph problem). State explosion due to nondeterminism (chiefly arbitration) is handled when the system to be verified has a clean, finite recurrence structure. Backwards branching is a further optimization. The heart of this approach is the ability, during model checking, to discover a compact finite presentation of the verified system without prior composition of system components. The fully-implemented POM verification system has polynomial space and time performance on traditional asynchronous-circuit benchmarks that are exponential in space and time for other verification systems. We also sketch the generalization of this approach to handle delay-constrained VLSI systems.
Model Checking the Remote Agent Planner
NASA Technical Reports Server (NTRS)
Khatib, Lina; Muscettola, Nicola; Havelund, Klaus; Norvig, Peter (Technical Monitor)
2001-01-01
This work tackles the problem of using Model Checking for the purpose of verifying the HSTS (Scheduling Testbed System) planning system. HSTS is the planner and scheduler of the remote agent autonomous control system deployed in Deep Space One (DS1). Model Checking allows for the verification of domain models as well as planning entries. We have chosen the real-time model checker UPPAAL for this work. We start by motivating our work in the introduction. Then we give a brief description of HSTS and UPPAAL. After that, we give a sketch for the mapping of HSTS models into UPPAAL and we present samples of plan model properties one may want to verify.
NASA Technical Reports Server (NTRS)
Holzmann, Gerard J.; Joshi, Rajeev; Groce, Alex
2008-01-01
Reportedly, supercomputer designer Seymour Cray once said that he would sooner use two strong oxen to plow a field than a thousand chickens. Although this is undoubtedly wise when it comes to plowing a field, it is not so clear for other types of tasks. Model checking problems are of the proverbial "search the needle in a haystack" type. Such problems can often be parallelized easily. Alas, none of the usual divide and conquer methods can be used to parallelize the working of a model checker. Given that it has become easier than ever to gain access to large numbers of computers to perform even routine tasks it is becoming more and more attractive to find alternate ways to use these resources to speed up model checking tasks. This paper describes one such method, called swarm verification.
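A minimal sketch of the swarm strategy, using Python's multiprocessing rather than the SPIN-based tooling of the paper: many independent, differently seeded, budget-bounded random searches over the same (invented) state space, each reporting any error state it happens to reach.

```python
# A small sketch of the swarm idea: independent, differently-seeded, bounded
# random searches run in parallel over the same state space. This illustrates
# the strategy only; the toy state space and the "bug" state are invented.
import random
from multiprocessing import Pool

def successors(state):
    """Toy state space: states are integers, the 'bug' is reaching 999983."""
    x = state
    return [(x * 3 + 1) % 1_000_003, (x * 7 + 5) % 1_000_003, (x + 11) % 1_000_003]

def random_search(seed, steps=200_000):
    rng = random.Random(seed)
    state, visited = 0, set()
    for _ in range(steps):
        if state == 999_983:
            return seed, state                 # error state found by this swarm member
        visited.add(state)
        nxt = [s for s in successors(state) if s not in visited]
        state = rng.choice(nxt or successors(state))  # prefer unvisited successors
    return seed, None

if __name__ == "__main__":
    with Pool(4) as pool:
        for seed, found in pool.map(random_search, range(8)):
            print(f"seed {seed}: {'error state reached' if found else 'no error within budget'}")
```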
Verification and Planning Based on Coinductive Logic Programming
NASA Technical Reports Server (NTRS)
Bansal, Ajay; Min, Richard; Simon, Luke; Mallya, Ajay; Gupta, Gopal
2008-01-01
Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations [6]. Whereas induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. Recently coinduction has been incorporated into logic programming and an elegant operational semantics developed for it [11, 12]. This operational semantics is the greatest fixed point counterpart of SLD resolution (SLD resolution imparts operational semantics to least fixed point based computations) and is termed co-SLD resolution. In co-SLD resolution, a predicate goal p(t) succeeds if it unifies with one of its ancestor calls. In addition, rational infinite terms are allowed as arguments of predicates. Infinite terms are represented as solutions to unification equations and the occurs check is omitted during the unification process. Coinductive Logic Programming (Co-LP) and Co-SLD resolution can be used to elegantly perform model checking and planning. A combined SLD and Co-SLD resolution based LP system forms the common basis for planning, scheduling, verification, model checking, and constraint solving [9, 4]. This is achieved by amalgamating SLD resolution, co-SLD resolution, and constraint logic programming [13] in a single logic programming system. Given that parallelism in logic programs can be implicitly exploited [8], complex, compute-intensive applications (planning, scheduling, model checking, etc.) can be executed in parallel on multi-core machines. Parallel execution can result in speed-ups as well as in larger instances of the problems being solved. In the remainder we elaborate on (i) how planning can be elegantly and efficiently performed under real-time constraints, (ii) how real-time systems can be elegantly and efficiently model-checked, as well as (iii) how hybrid systems can be verified in a combined system with both co-SLD and SLD resolution. Implementations of co-SLD resolution as well as preliminary implementations of the planning and verification applications have been developed [4]. Co-LP and Model Checking: The vast majority of properties that are to be verified can be classified into safety properties and liveness properties. It is well known within model checking that safety properties can be verified by reachability analysis, i.e., if a counter-example to the property exists, it can be finitely determined by enumerating all the reachable states of the Kripke structure.
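The co-SLD rule quoted above, that a call succeeds if it coincides with an ancestor call, can be sketched for the ground case (unification and rational terms are elided) by checking a property of a cyclic stream:

```python
# A sketch of the operational idea behind co-SLD resolution: a call succeeds
# coinductively if it is identical to one of its ancestor calls. A rational
# (cyclic) stream is a list of (element, next_index) nodes, and "every element
# of the stream is even" is established by detecting a repeated call rather
# than by infinite descent. Unification and non-ground terms are not modelled.

STREAM = [(0, 1), (2, 2), (4, 0)]   # cyclic stream 0, 2, 4, 0, 2, 4, ...

def all_even(node, ancestors=frozenset()):
    if node in ancestors:            # coinductive success: call equals an ancestor call
        return True
    value, nxt = STREAM[node]
    if value % 2 != 0:
        return False
    return all_even(nxt, ancestors | {node})

print(all_even(0))   # True: the greatest-fixed-point reading of "all elements even"

STREAM[1] = (3, 2)
print(all_even(0))   # False once an odd element appears on the cycle
```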
Parallel Software Model Checking
2015-01-08
checker. This project will explore this strategy to parallelize the generalized PDR algorithm for software model checking. It belongs to TF1 due to its ... focus on formal verification. Generalized PDR: Generalized Property Driven Reachability (GPDR) is an algorithm for solving HORN-SMT reachability ...
Design and verification of distributed logic controllers with application of Petri nets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał
2015-12-31
The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of temporal logic and the model checking technique. After that, it is decomposed into separate sequential automata that are working concurrently. Each of them is re-verified, and if the validation is successful, the system can be finally implemented.
Design and verification of distributed logic controllers with application of Petri nets
NASA Astrophysics Data System (ADS)
Wiśniewski, Remigiusz; Grobelna, Iwona; Grobelny, Michał; Wiśniewska, Monika
2015-12-01
The paper deals with the design and verification of distributed logic controllers. The control system is initially modelled with Petri nets and formally verified against structural and behavioral properties with the application of temporal logic and model checking techniques. After that, it is decomposed into separate sequential automata that work concurrently. Each of them is re-verified and, if the validation is successful, the system can finally be implemented.
Practical Application of Model Checking in Software Verification
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Skakkebaek, Jens Ulrik
1999-01-01
This paper presents our experiences in applying JAVA PATHFINDER (JPF), a recently developed JAVA-to-SPIN translator, to find synchronization bugs in a Chinese Chess game server application written in JAVA. We give an overview of JPF and the subset of JAVA that it supports, and describe the abstraction and verification of the game server. Finally, we analyze the results of the effort. We argue that abstraction by under-approximation is necessary for obtaining sufficiently small models for verification purposes; that user guidance is crucial for effective abstraction; and that current model checkers do not conveniently support the computational models of software in general and JAVA in particular.
Towards the Formal Verification of a Distributed Real-Time Automotive System
NASA Technical Reports Server (NTRS)
Endres, Erik; Mueller, Christian; Shadrin, Andrey; Tverdyshev, Sergey
2010-01-01
We present the status of a project which aims at building, and formally and pervasively verifying, a distributed automotive system. The target system is a gate-level model which consists of several interconnected electronic control units with independent clocks. This model is verified against the specification as seen by a system programmer. The automotive system is implemented on several FPGA boards. The pervasive verification is carried out using a combination of interactive theorem proving (Isabelle/HOL) and model checking (LTL).
NASA Astrophysics Data System (ADS)
Kim, Youngmi; Choi, Jae-Young; Choi, Kwangseon; Choi, Jung-Hoe; Lee, Sooryong
2011-04-01
As IC design complexity keeps increasing, it is more and more difficult to ensure pattern transfer after optical proximity correction (OPC) due to the continuous reduction of layout dimensions and the lithographic limits imposed by the k1 factor. To guarantee imaging fidelity, resolution enhancement technologies (RET) such as off-axis illumination (OAI), different types of phase shift masks, and OPC techniques have been developed. In the case of model-based OPC, to cross-confirm the contour image against the target layout, post-OPC verification solutions continue to be developed: contour generation and matching to the target structure, and methods for filtering and sorting patterns to eliminate false errors and duplicates. Detecting only real errors while excluding false errors is the most important requirement for an accurate and fast verification process, saving not only review time and engineering resources but also wafer process time. In the general case of post-OPC verification for metal-contact/via coverage (CC) checks, the verification solution outputs a huge number of errors due to borderless design, so it is very difficult to review and correct all of them. This can cause OPC engineers to miss real defects and, at the very least, delay time to market. In this paper, we studied methods for increasing the efficiency of post-OPC verification, especially for the CC check. For metal layers, the final CD after the etch process shows various CD biases depending on the distance to neighboring patterns, so it is more reasonable to consider the final metal shape when confirming contact/via coverage. Through optimization of the biasing rule for different pitches and shapes of metal lines, we obtained more accurate and efficient verification results and decreased the review time needed to find real errors. In this paper, a suggestion for increasing the efficiency of the OPC verification process by applying a simple biasing rule to the metal layout instead of an etch model is presented.
UIVerify: A Web-Based Tool for Verification and Automatic Generation of User Interfaces
NASA Technical Reports Server (NTRS)
Shiffman, Smadar; Degani, Asaf; Heymann, Michael
2004-01-01
In this poster, we describe a web-based tool for verification and automatic generation of user interfaces. The verification component of the tool accepts as input a model of a machine and a model of its interface, and checks that the interface is adequate (correct). The generation component of the tool accepts a model of a given machine and the user's task, and then generates a correct and succinct interface. This write-up will demonstrate the usefulness of the tool by verifying the correctness of a user interface to a flight-control system. The poster will include two more examples of using the tool: verification of the interface to an espresso machine, and automatic generation of a succinct interface to a large hypothetical machine.
The SeaHorn Verification Framework
NASA Technical Reports Server (NTRS)
Gurfinkel, Arie; Kahsai, Temesghen; Komuravelli, Anvesh; Navas, Jorge A.
2015-01-01
In this paper, we present SeaHorn, a software verification framework. The key distinguishing feature of SeaHorn is its modular design that separates the concerns of the syntax of the programming language, its operational semantics, and the verification semantics. SeaHorn encompasses several novelties: it (a) encodes verification conditions using an efficient yet precise inter-procedural technique, (b) provides flexibility in the verification semantics to allow different levels of precision, (c) leverages the state-of-the-art in software model checking and abstract interpretation for verification, and (d) uses Horn-clauses as an intermediate language to represent verification conditions which simplifies interfacing with multiple verification tools based on Horn-clauses. SeaHorn provides users with a powerful verification tool and researchers with an extensible and customizable framework for experimenting with new software verification techniques. The effectiveness and scalability of SeaHorn are demonstrated by an extensive experimental evaluation using benchmarks from SV-COMP 2015 and real avionics code.
Incremental checking of Master Data Management model based on contextual graphs
NASA Astrophysics Data System (ADS)
Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan
2015-10-01
The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, which is used to develop a Master Data Management (MDM) field represented by XML Schema models. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise the model errors and to optimise the process of model checking. Therefore, the notion of validation contexts is introduced, allowing the verification of data model views. Description logics specify constraints that the models have to check. An experimentation of the approach is presented through an application developed in the ArgoUML IDE.
Verification of Autonomous Systems for Space Applications
NASA Technical Reports Server (NTRS)
Brat, G.; Denney, E.; Giannakopoulou, D.; Frank, J.; Jonsson, A.
2006-01-01
Autonomous software, especially if it is model-based, can play an important role in future space applications. For example, it can help streamline ground operations, assist in autonomous rendezvous and docking operations, or even help recover from problems (e.g., planners can be used to explore the space of recovery actions for a power subsystem and implement a solution without, or with minimal, human intervention). In general, the exploration capabilities of model-based systems give them great flexibility. Unfortunately, this also makes them unpredictable to our human eyes, both in terms of their execution and their verification. Traditional verification techniques are inadequate for these systems since they are mostly based on testing, which implies a very limited exploration of their behavioral space. In our work, we explore how advanced V&V techniques, such as static analysis, model checking, and compositional verification, can be used to gain trust in model-based systems. We also describe how synthesis can be used in the context of system reconfiguration and in the context of verification.
Model Checking Verification and Validation at JPL and the NASA Fairmont IV and V Facility
NASA Technical Reports Server (NTRS)
Schneider, Frank; Easterbrook, Steve; Callahan, Jack; Montgomery, Todd
1999-01-01
We show how a technology transfer effort was carried out. The successful use of model checking on a pilot JPL flight project demonstrates the usefulness and the efficacy of the approach. The pilot project was used to model a complex spacecraft controller. Software design and implementation validation were carried out successfully. To suggest future applications we also show how the implementation validation step can be automated. The effort was followed by the formal introduction of the modeling technique as a part of the JPL Quality Assurance process.
Logic Model Checking of Time-Periodic Real-Time Systems
NASA Technical Reports Server (NTRS)
Florian, Mihai; Gamble, Ed; Holzmann, Gerard
2012-01-01
In this paper we report on the work we performed to extend the logic model checker SPIN with built-in support for the verification of periodic, real-time embedded software systems, as commonly used in aircraft, automobiles, and spacecraft. We first extended the SPIN verification algorithms to model priority based scheduling policies. Next, we added a library to support the modeling of periodic tasks. This library was used in a recent application of the SPIN model checker to verify the engine control software of an automobile, to study the feasibility of software triggers for unintended acceleration events.
Time-space modal logic for verification of bit-slice circuits
NASA Astrophysics Data System (ADS)
Hiraishi, Hiromi
1996-03-01
The major goal of this paper is to propose a new modal logic aimed at formal verification of bit-slice circuits. The new logic is called time-space modal logic, and its major feature is that it can handle two transition relations: one for time transitions and the other for space transitions. As a verification algorithm, a symbolic model checking algorithm for the new logic is shown. This could be applicable to the verification of bit-slice microprocessors of infinite bit width and 1D systolic arrays of infinite length. A simple benchmark result shows the effectiveness of the proposed approach.
Specification and Verification of Web Applications in Rewriting Logic
NASA Astrophysics Data System (ADS)
Alpuente, María; Ballis, Demis; Romero, Daniel
This paper presents a Rewriting Logic framework that formalizes the interactions between Web servers and Web browsers through a communicating protocol abstracting HTTP. The proposed framework includes a scripting language that is powerful enough to model the dynamics of complex Web applications by encompassing the main features of the most popular Web scripting languages (e.g. PHP, ASP, Java Servlets). We also provide a detailed characterization of browser actions (e.g. forward/backward navigation, page refresh, and new window/tab openings) via rewrite rules, and show how our models can be naturally model-checked by using the Linear Temporal Logic of Rewriting (LTLR), which is a Linear Temporal Logic specifically designed for model-checking rewrite theories. Our formalization is particularly suitable for verification purposes, since it allows one to perform in-depth analyses of many subtle aspects related to Web interaction. Finally, the framework has been completely implemented in Maude, and we report on some successful experiments that we conducted by using the Maude LTLR model-checker.
Experimental Evaluation of a Planning Language Suitable for Formal Verification
NASA Technical Reports Server (NTRS)
Butler, Rick W.; Munoz, Cesar A.; Siminiceanu, Radu I.
2008-01-01
The marriage of model checking and planning faces two seemingly diverging alternatives: the need for a planning language expressive enough to capture the complexity of real-life applications, as opposed to a language simple, yet robust enough to be amenable to exhaustive verification and validation techniques. In an attempt to reconcile these differences, we have designed an abstract plan description language, ANMLite, inspired from the Action Notation Modeling Language (ANML) [17]. We present the basic concepts of the ANMLite language as well as an automatic translator from ANMLite to the model checker SAL (Symbolic Analysis Laboratory) [7]. We discuss various aspects of specifying a plan in terms of constraints and explore the implications of choosing a robust logic behind the specification of constraints, rather than simply propose a new planning language. Additionally, we provide an initial assessment of the efficiency of model checking to search for solutions of planning problems. To this end, we design a basic test benchmark and study the scalability of the generated SAL models in terms of plan complexity.
2016-07-08
Systems Using Automata Theory and Barrier Certificates. We developed a sound but incomplete method for the computational verification of specifications...method merges ideas from automata-based model checking with those from control theory, including so-called barrier certificates and optimization-based... "Automata theory meets barrier certificates: Temporal logic verification of nonlinear systems," IEEE Transactions on Automatic Control, 2015. [J2] R
Timing analysis by model checking
NASA Technical Reports Server (NTRS)
Naydich, Dimitri; Guaspari, David
2000-01-01
The safety of modern avionics relies on high integrity software that can be verified to meet hard real-time requirements. The limits of verification technology therefore determine acceptable engineering practice. To simplify verification problems, safety-critical systems are commonly implemented under the severe constraints of a cyclic executive, which make design an expensive trial-and-error process highly intolerant of change. Important advances in analysis techniques, such as rate monotonic analysis (RMA), have provided a theoretical and practical basis for easing these onerous restrictions. But RMA and its kindred have two limitations: they apply only to verifying the requirement of schedulability (that tasks meet their deadlines) and they cannot be applied to many common programming paradigms. We address both these limitations by applying model checking, a technique with successful industrial applications in hardware design. Model checking algorithms analyze finite state machines, either by explicit state enumeration or by symbolic manipulation. Since quantitative timing properties involve a potentially unbounded state variable (a clock), our first problem is to construct a finite approximation that is conservative for the properties being analyzed: if the approximation satisfies the properties of interest, so does the infinite model. To reduce the potential for state space explosion we must further optimize this finite model. Experiments with some simple optimizations have yielded a hundred-fold efficiency improvement over published techniques.
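The key point, replacing an unbounded clock with a finite, conservative view of time, can be illustrated with a toy fixed-priority scheduling check in which time only needs to be examined over one hyperperiod. This is a deterministic sketch, not the paper's model-checking approach; the task parameters are made up.

```python
# Two strictly periodic tasks (deadline == period, synchronous release) under
# rate-monotonic priorities; checking a single hyperperiod is exhaustive here,
# so the "clock" becomes a finite quantity.
from math import gcd

tasks = [(5, 2), (8, 3)]          # (period, WCET), highest priority first
hyper = tasks[0][0] * tasks[1][0] // gcd(tasks[0][0], tasks[1][0])

remaining = [0, 0]
ok = True
for t in range(hyper):
    for i, (period, wcet) in enumerate(tasks):
        if t % period == 0:
            if remaining[i] > 0:  # previous job still unfinished at its deadline
                ok = False
            remaining[i] = wcet   # release a new job
    for i in range(len(tasks)):   # run one time unit of the highest-priority pending job
        if remaining[i] > 0:
            remaining[i] -= 1
            break
print("all deadlines met:", ok and all(r == 0 for r in remaining))   # True here
```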
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2007-01-01
This report presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV) [SMV]. The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space. Also, additional innovative state space reduction techniques are introduced that can be used in future verification efforts applied to this and other protocols.
A Methodology for Evaluating Artifacts Produced by a Formal Verification Process
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.; Miner, Paul S.; Person, Suzette
2011-01-01
The goal of this study is to produce a methodology for evaluating the claims and arguments employed in, and the evidence produced by, formal verification activities. To illustrate the process, we conduct a full assessment of a representative case study for the Enabling Technology Development and Demonstration (ETDD) program. We assess the model checking and satisfiability solving techniques as applied to a suite of abstract models of fault tolerant algorithms which were selected to be deployed in Orion, namely the TTEthernet startup services specified and verified in the Symbolic Analysis Laboratory (SAL) by TTTech. To this end, we introduce the Modeling and Verification Evaluation Score (MVES), a metric that is intended to estimate the amount of trust that can be placed on the evidence that is obtained. The results of the evaluation process and the MVES can then be used by non-experts and evaluators in assessing the credibility of the verification results.
Consistent model driven architecture
NASA Astrophysics Data System (ADS)
Niepostyn, Stanisław J.
2015-09-01
The goal of the MDA is to produce software systems from abstract models in a way where human interaction is restricted to a minimum. These abstract models are based on the UML language. However, the semantics of UML models is defined in a natural language. Consequently, verification of the consistency of these diagrams is needed in order to identify errors in requirements at an early stage of the development process. The verification of consistency is difficult due to the semi-formal nature of UML diagrams. We propose automatic verification of the consistency of a series of UML diagrams originating from abstract models, implemented with our consistency rules. This Consistent Model Driven Architecture approach enables us to automatically generate complete workflow applications from consistent and complete models developed from abstract models (e.g. a Business Context Diagram). Therefore, our method can be used to check the practicability (feasibility) of software architecture models.
Enabling model checking for collaborative process analysis: from BPMN to `Network of Timed Automata'
NASA Astrophysics Data System (ADS)
Mallek, Sihem; Daclin, Nicolas; Chapurlat, Vincent; Vallespir, Bruno
2015-04-01
Interoperability is a prerequisite for partners involved in performing collaboration. As a consequence, the lack of interoperability is now considered a major obstacle. The research work presented in this paper aims to develop an approach that allows specifying and verifying a set of interoperability requirements to be satisfied by each partner in the collaborative process prior to process implementation. To enable the verification of these interoperability requirements, it is necessary first and foremost to generate a model of the targeted collaborative process; for this research effort, the standardised language BPMN 2.0 is used. Afterwards, a verification technique must be introduced, and model checking is the preferred option herein. This paper focuses on application of the model checker UPPAAL in order to verify interoperability requirements for the given collaborative process model. At first, this step entails translating the collaborative process model from BPMN into a UPPAAL modelling language called 'Network of Timed Automata'. Second, it becomes necessary to formalise interoperability requirements into properties with the dedicated UPPAAL language, i.e. the temporal logic TCTL.
Symbolic LTL Compilation for Model Checking: Extended Abstract
NASA Technical Reports Server (NTRS)
Rozier, Kristin Y.; Vardi, Moshe Y.
2007-01-01
In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also tests that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
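The sequential-verification idea, comparing results between consecutive code versions and flagging any drift beyond a tight tolerance, can be sketched as below. The data layout, field names, and tolerances are hypothetical, not RELAP5-3D's actual verification harness.

```python
# Compare a new code version's results against archived results from the
# previous version; any quantity that drifts is reported for investigation.
import numpy as np

def sequential_check(prev_results, new_results, rtol=1e-12, atol=1e-14):
    """Return the quantities whose values drifted between code versions."""
    return [key for key, prev in prev_results.items()
            if not np.allclose(prev, new_results[key], rtol=rtol, atol=atol)]

prev = {"pressure": [15.2e6, 15.1e6], "void_fraction": [0.010, 0.020]}
new  = {"pressure": [15.2e6, 15.1e6], "void_fraction": [0.010, 0.020]}
print(sequential_check(prev, new))   # [] -> no unintended changes introduced
```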
Mechanical verification of a schematic Byzantine clock synchronization algorithm
NASA Technical Reports Server (NTRS)
Shankar, Natarajan
1991-01-01
Schneider generalizes a number of protocols for Byzantine fault tolerant clock synchronization and presents a uniform proof for their correctness. The authors present a machine checked proof of this schematic protocol that revises some of the details in Schneider's original analysis. The verification was carried out with the EHDM system developed at the SRI Computer Science Laboratory. The mechanically checked proofs include the verification that the egocentric mean function used in Lamport and Melliar-Smith's Interactive Convergence Algorithm satisfies the requirements of Schneider's protocol.
Decision Engines for Software Analysis Using Satisfiability Modulo Theories Solvers
NASA Technical Reports Server (NTRS)
Bjorner, Nikolaj
2010-01-01
The area of software analysis, testing and verification is now undergoing a revolution thanks to the use of automated and scalable support for logical methods. A well-recognized premise is that at the core of software analysis engines is invariably a component using logical formulas for describing states and transformations between system states. The process of using this information for discovering and checking program properties (including such important properties as safety and security) amounts to automatic theorem proving. In particular, theorem provers that directly support common software constructs offer a compelling basis. Such provers are commonly called satisfiability modulo theories (SMT) solvers. Z3 is a state-of-the-art SMT solver. It is developed at Microsoft Research. It can be used to check the satisfiability of logical formulas over one or more theories such as arithmetic, bit-vectors, lists, records and arrays. The talk describes some of the technology behind modern SMT solvers, including the solver Z3. Z3 is currently mainly targeted at solving problems that arise in software analysis and verification. It has been applied in various contexts, such as systems for dynamic symbolic simulation (Pex, SAGE, Vigilante), for program verification and extended static checking (Spec#/Boogie, VCC, HAVOC), for software model checking (Yogi, SLAM), model-based design (FORMULA), security protocol code (F7), and program run-time analysis and invariant generation (VS3). We will describe how it integrates support for a variety of theories that arise naturally in the context of the applications. There are several new promising avenues and the talk will touch on some of these and the challenges related to SMT solvers.
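As a concrete illustration of discharging a verification condition with an SMT solver, the following uses Z3's public Python API (pip package z3-solver). The formula is a generic example chosen for this write-up, not one from the talk.

```python
# A verification condition is valid exactly when its negation is unsatisfiable,
# which is how verification tools typically query an SMT solver.
from z3 import Ints, Solver, Implies, And, Not, unsat

x, y = Ints('x y')
vc = Implies(And(x > 0, y > 0), x + y > 0)   # a (trivial) verification condition

s = Solver()
s.add(Not(vc))                 # check the negation
print(s.check() == unsat)      # True: the property holds for all integers
```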
Model Checking, Abstraction, and Compositional Verification
1993-07-01
the Galois connections used by Bensalem et al. [6], and also has some relation to Kurshan's automata homomorphisms [62]. (Actually, we can impose a...
Foundations of the Bandera Abstraction Tools
NASA Technical Reports Server (NTRS)
Hatcliff, John; Dwyer, Matthew B.; Pasareanu, Corina S.; Robby
2003-01-01
Current research is demonstrating that model-checking and other forms of automated finite-state verification can be effective for checking properties of software systems. Due to the exponential costs associated with model-checking, multiple forms of abstraction are often necessary to obtain system models that are tractable for automated checking. The Bandera Tool Set provides multiple forms of automated support for compiling concurrent Java software systems to models that can be supplied to several different model-checking tools. In this paper, we describe the foundations of Bandera's data abstraction mechanism which is used to reduce the cardinality (and the program's state-space) of data domains in software to be model-checked. From a technical standpoint, the form of data abstraction used in Bandera is simple, and it is based on classical presentations of abstract interpretation. We describe the mechanisms that Bandera provides for declaring abstractions, for attaching abstractions to programs, and for generating abstracted programs and properties. The contributions of this work are the design and implementation of various forms of tool support required for effective application of data abstraction to software components written in a programming language like Java which has a rich set of linguistic features.
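The flavor of the data abstraction described, collapsing a large concrete data domain into a few abstract tokens, can be illustrated with a classic signs domain. The encoding below is only a sketch of the idea, not Bandera's actual abstraction library or declaration syntax.

```python
# A "signs" abstract domain for integers and an abstract counterpart of '+'.
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"

def alpha(n):                      # abstraction function: concrete int -> abstract value
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_add(a, b):                 # abstract addition, sound w.r.t. concrete '+'
    if a == ZERO:
        return b
    if b == ZERO:
        return a
    if a == b and a in (POS, NEG): # pos+pos is pos, neg+neg is neg
        return a
    return TOP                     # sign unknown (pos+neg, or anything involving TOP)

# A model checker explores abstract states, so the unbounded int domain
# contributes only four values to the program's state space.
print(abs_add(alpha(3), alpha(7)), abs_add(alpha(3), alpha(-7)))   # pos top
```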
NASA Technical Reports Server (NTRS)
Feng, C.; Sun, X.; Shen, Y. N.; Lombardi, Fabrizio
1992-01-01
This paper covers verification and protocol validation for distributed computer and communication systems using a computer-aided testing approach. Validation and verification make up the so-called process of conformance testing. Protocol applications which pass conformance testing are then checked to see whether they can operate together; this is referred to as interoperability testing. A new comprehensive approach to protocol testing is presented which addresses: (1) modeling for inter-layer representation for compatibility between conformance and interoperability testing; (2) computational improvement to current testing methods by using the proposed model, including formulation of new qualitative and quantitative measures and time-dependent behavior; (3) analysis and evaluation of protocol behavior for interactive testing without extensive simulation.
Check-Cases for Verification of 6-Degree-of-Freedom Flight Vehicle Simulations
NASA Technical Reports Server (NTRS)
Murri, Daniel G.; Jackson, E. Bruce; Shelton, Robert O.
2015-01-01
The rise of innovative unmanned aeronautical systems and the emergence of commercial space activities have resulted in a number of relatively new aerospace organizations that are designing innovative systems and solutions. These organizations use a variety of commercial off-the-shelf and in-house-developed simulation and analysis tools, including 6-degree-of-freedom (6-DOF) flight simulation tools. The increased affordability of computing capability has made high-fidelity flight simulation practical for all participants. Verification of the tools' equations-of-motion and environment models (e.g., atmosphere, gravitation, and geodesy) is desirable to assure accuracy of results. However, aside from simple textbook examples, minimal verification data exists in open literature for 6-DOF flight simulation problems. This assessment compared multiple solution trajectories to a set of verification check-cases that covered atmospheric and exo-atmospheric (i.e., orbital) flight. Each scenario consisted of predefined flight vehicles, initial conditions, and maneuvers. These scenarios were implemented and executed in a variety of analytical and real-time simulation tools. This tool set included simulation tools in a variety of programming languages based on modified flat-Earth, round-Earth, and rotating oblate spheroidal Earth geodesy and gravitation models, and independently derived equations-of-motion and propagation techniques. The resulting simulated parameter trajectories were compared by over-plotting and difference-plotting to yield a family of solutions. In total, seven simulation tools were exercised.
Full-chip level MEEF analysis using model based lithography verification
NASA Astrophysics Data System (ADS)
Kim, Juhwan; Wang, Lantian; Zhang, Daniel; Tang, Zongwu
2005-11-01
MEEF (Mask Error Enhancement Factor) has become a critical factor in CD uniformity control since the optical lithography process moved into the sub-resolution era. Many studies have quantified the impact of mask CD (Critical Dimension) errors on wafer CD errors [1-2]. However, the benefits from those studies were restricted to small pattern areas of the full-chip data due to long simulation times. As fast turnaround time can be achieved for complicated verifications on very large data sets by linearly scalable distributed processing technology, model-based lithography verification becomes feasible for various types of applications, such as post-mask-synthesis data sign-off for mask tape-out in production and lithography process development with full-chip data [3, 4, 5]. In this study, we introduce two useful methodologies for full-chip level verification of mask error impact on the wafer lithography patterning process. One methodology is to check the MEEF distribution in addition to the CD distribution through the process window, which can be used for RET/OPC optimization at the R&D stage. The other is to check mask error sensitivity at potential pinch and bridge hotspots through lithography process variation, where the outputs can be passed on to mask CD metrology to add CD measurements at those hotspot locations. Two different OPC data sets were compared using the two methodologies in this study.
A rule-based approach to model checking of UML state machines
NASA Astrophysics Data System (ADS)
Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz
2016-12-01
In the paper, a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases assurance that the implemented system meets the user-defined requirements.
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Madden, Michael M.; Shelton, Robert; Jackson, A. A.; Castro, Manuel P.; Noble, Deleena M.; Zimmerman, Curtis J.; Shidner, Jeremy D.; White, Joseph P.; Dutta, Doumyo;
2015-01-01
This follow-on paper describes the principal methods of implementing, and documents the results of exercising, a set of six-degree-of-freedom rigid-body equations of motion and planetary geodetic, gravitation and atmospheric models for simple vehicles in a variety of endo- and exo-atmospheric conditions with various NASA, and one popular open-source, engineering simulation tools. This effort is intended to provide an additional means of verification of flight simulations. The models used in this comparison, as well as the resulting time-history trajectory data, are available electronically for persons and organizations wishing to compare their flight simulation implementations of the same models.
Safety Verification of a Fault Tolerant Reconfigurable Autonomous Goal-Based Robotic Control System
NASA Technical Reports Server (NTRS)
Braman, Julia M. B.; Murray, Richard M; Wagner, David A.
2007-01-01
Fault tolerance and safety verification of control systems are essential for the success of autonomous robotic systems. A control architecture called Mission Data System (MDS), developed at the Jet Propulsion Laboratory, takes a goal-based control approach. In this paper, a method for converting goal network control programs into linear hybrid systems is developed. The linear hybrid system can then be verified for safety in the presence of failures using existing symbolic model checkers. An example task is simulated in MDS and successfully verified using HyTech, a symbolic model checking software for linear hybrid systems.
Requeno, José Ignacio; Colom, José Manuel
2014-12-01
Model checking is a generic verification technique that allows the phylogeneticist to focus on models and specifications instead of on implementation issues. Phylogenetic trees are considered as transition systems over which we interrogate phylogenetic questions written as formulas of temporal logic. Nonetheless, standard logics become insufficient for certain practices of phylogenetic analysis since they do not allow the inclusion of explicit time and probabilities. The aim of this paper is to extend the application of model checking techniques beyond qualitative phylogenetic properties and adapt the existing logical extensions and tools to the field of phylogeny. The introduction of time and probabilities in phylogenetic specifications is motivated by the study of a real example: the analysis of the ratio of lactose intolerance in some populations and the date of appearance of this phenotype.
NASA Astrophysics Data System (ADS)
Zamani, K.; Bombardelli, F. A.
2014-12-01
Verification of geophysics codes is imperative to avoid serious academic as well as practical consequences. In cases where access to a given source code is not possible, the Method of Manufactured Solutions (MMS) cannot be employed in code verification; in contrast, employing the Method of Exact Solutions (MES) has several practical advantages. In this research, we first provide four new one-dimensional analytical solutions designed for code verification; these solutions are able to uncover particular imperfections in solvers for the advection-diffusion-reaction (ADR) equation, such as nonlinear advection, diffusion, or source terms, as well as non-constant-coefficient equations. After that, we provide a solution of Burgers' equation in a novel setup. The proposed solutions satisfy continuity of mass for the ambient flow, which is a crucial factor for coupled hydrodynamics-transport solvers. Then, we use the derived analytical solutions for code verification. To clarify gray-literature issues in the verification of transport codes, we designed a comprehensive test suite to uncover any imperfection in transport solvers via a hierarchical increase in the level of test complexity. The test suite includes hundreds of unit tests and system tests to check the individual portions of the code. The tests start with a simple case of unidirectional advection, then bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity, and reactions. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh-convergence study and appropriate remedies are also discussed. For cases in which appropriate benchmarks for a mesh convergence study are not available, we utilize symmetry. Auxiliary subroutines for automation of the test suite and report generation were designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the capabilities of ADR solvers. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport. We also convey our experiences in finding several errors that were not detectable with routine verification techniques.
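A central step in MES/MMS-based code verification is the mesh-convergence (observed order of accuracy) check. The sketch below shows the standard calculation for a fixed grid refinement ratio; the error values are made up for illustration, not taken from the study.

```python
# Estimate the observed order of accuracy p in e ~ C*h^p from errors measured
# against an exact (analytical or manufactured) solution on successively
# refined grids, and compare it to the scheme's formal order.
import math

def observed_order(e_coarse, e_fine, refinement_ratio=2.0):
    return math.log(e_coarse / e_fine) / math.log(refinement_ratio)

errors = [4.0e-2, 1.02e-2, 2.6e-3]         # L2 errors as h is halved twice
print([round(observed_order(errors[i], errors[i + 1]), 2)
       for i in range(len(errors) - 1)])   # ~[1.97, 1.97]: close to formal order 2
```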
Poster - Thurs Eve-43: Verification of dose calculation with tissue inhomogeneity using MapCHECK.
Korol, R; Chen, J; Mosalaei, H; Karnas, S
2008-07-01
MapCHECK (Sun Nuclear, Melbourne, FL), with 445 diode detectors, has been used widely for routine IMRT quality assurance (QA) [1]. However, routine IMRT QA has not included verification of inhomogeneity effects. The objective of this study is to use MapCHECK and a phantom to verify dose calculation and IMRT delivery with tissue inhomogeneity. A phantom with tissue inhomogeneities was placed on top of MapCHECK to measure the planar dose for an anterior beam with photon energy of 6 MV or 18 MV. The phantom was composed of a 3.5 cm thick block of lung-equivalent material and solid water arranged side by side, with a 0.5 cm slab of solid water on top. The phantom setup, including MapCHECK, was CT scanned and imported into Pinnacle 8.0d for dose calculation. Absolute dose distributions were compared using gamma criteria of 3% for dose difference and 3 mm for distance-to-agreement. The measured and calculated planar doses are in good agreement, with an 88% pass rate based on the gamma analysis. The major dose difference was at the lung-water interface. Further investigation will be performed on a custom-designed inhomogeneity phantom with inserts of varying densities and effective depths to create various dose gradients at the interface for dose calculation and delivery verification. In conclusion, a phantom with tissue inhomogeneities can be used with MapCHECK for verification of dose calculation and delivery with tissue inhomogeneity. © 2008 American Association of Physicists in Medicine.
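For reference, the 3%/3 mm gamma criterion used above reduces, in a simplified one-dimensional form, to the calculation sketched below. Actual MapCHECK/Pinnacle comparisons are two- or three-dimensional with global or local normalization options, so this is only an illustration of the pass criterion with made-up profile values.

```python
import math

def gamma_1d(x_eval, d_eval, xs_ref, ds_ref, dd=0.03, dta=3.0):
    """gamma <= 1 means the evaluated (position, dose) point passes 3%/3 mm."""
    d_norm = max(ds_ref)                              # global normalization dose
    return min(math.sqrt(((x_eval - xr) / dta) ** 2 +
                         ((d_eval - dr) / (dd * d_norm)) ** 2)
               for xr, dr in zip(xs_ref, ds_ref))

xs = [0.0, 1.0, 2.0, 3.0]                  # calculated profile positions (mm)
ds = [1.00, 0.98, 0.60, 0.20]              # calculated doses (normalized)
print(gamma_1d(2.0, 0.62, xs, ds) <= 1.0)  # measured point passes (gamma ~ 0.67)
```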
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2006-01-01
This paper discusses our initial experience with introducing automated assume-guarantee verification based on learning in the SPIN tool. We believe that compositional verification techniques such as assume-guarantee reasoning could complement the state-reduction techniques that SPIN already supports, thus increasing the size of systems that SPIN can handle. We present a "light-weight" approach to evaluating the benefits of learning-based assume-guarantee reasoning in the context of SPIN: we turn our previous implementation of learning for the LTSA tool into a main program that externally invokes SPIN to provide the model checking-related answers. Despite its performance overheads (which mandate a future implementation within SPIN itself), this approach provides accurate information about the savings in memory. We have experimented with several versions of learning-based assume guarantee reasoning, including a novel heuristic introduced here for generating component assumptions when their environment is unavailable. We illustrate the benefits of learning-based assume-guarantee reasoning in SPIN through the example of a resource arbiter for a spacecraft. Keywords: assume-guarantee reasoning, model checking, learning.
NASA Technical Reports Server (NTRS)
Koga, Dennis (Technical Monitor); Penix, John; Markosian, Lawrence Z.; OMalley, Owen; Brew, William A.
2003-01-01
Attempts to achieve widespread use of software verification tools have been notably unsuccessful. Even 'straightforward', classic, and potentially effective verification tools such as lint-like tools face limits on their acceptance. These limits are imposed by the expertise required to apply the tools and interpret the results, the high false positive rate of many verification tools, and the need to integrate the tools into development environments. The barriers are even greater for more complex advanced technologies such as model checking. Web-hosted services for advanced verification technologies may mitigate these problems by centralizing tool expertise. The possible benefits of this approach include eliminating the need for software developers to be experts in tool application and results filtering, and improving integration with other development tools.
Pharmacist and Technician Perceptions of Tech-Check-Tech in Community Pharmacy Practice Settings.
Frost, Timothy P; Adams, Alex J
2018-04-01
Tech-check-tech (TCT) is a practice model in which pharmacy technicians with advanced training can perform final verification of prescriptions that have been previously reviewed for appropriateness by a pharmacist. Few states have adopted TCT in part because of the common view that this model is controversial among members of the profession. This article aims to summarize the existing research on pharmacist and technician perceptions of community pharmacy-based TCT. A literature review was conducted using MEDLINE (January 1990 to August 2016) and Google Scholar (January 1990 to August 2016) using the terms "tech* and check," "tech-check-tech," "checking technician," and "accuracy checking tech*." Of the 7 studies identified we found general agreement among both pharmacists and technicians that TCT in community pharmacy settings can be safely performed. This agreement persisted in studies of theoretical TCT models and in studies assessing participants in actual community-based TCT models. Pharmacists who had previously worked with a checking technician were generally more favorable toward TCT. Both pharmacists and technicians in community pharmacy settings generally perceived TCT to be safe, in both theoretical surveys and in surveys following actual TCT demonstration projects. These perceptions of safety align well with the actual outcomes achieved from community pharmacy TCT studies.
Formal verification of a set of memory management units
NASA Technical Reports Server (NTRS)
Schubert, E. Thomas; Levitt, K.; Cohen, Gerald C.
1992-01-01
This document describes the verification of a set of memory management units (MMU). The verification effort demonstrates the use of hierarchical decomposition and abstract theories. The MMUs can be organized into a complexity hierarchy. Each new level in the hierarchy adds a few significant features or modifications to the lower level MMU. The units described include: (1) a page check translation look-aside module (TLM); (2) a page check TLM with supervisor line; (3) a base bounds MMU; (4) a virtual address translation MMU; and (5) a virtual address translation MMU with memory resident segment table.
Research on registration algorithm for check seal verification
NASA Astrophysics Data System (ADS)
Wang, Shuang; Liu, Tiegen
2008-03-01
Nowadays seals play an important role in China. With the development of the social economy, the traditional method of manual check seal identification can no longer meet the needs of banking transactions. This paper focuses on pre-processing and registration algorithms for check seal verification using the theory of image processing and pattern recognition. First of all, the complex characteristics of check seals are analyzed. To eliminate differences in production conditions and the disturbance caused by background and writing in the check image, many methods are used in the pre-processing for check seal verification, such as color component transformation, linear transformation to a gray-scale image, median filtering, Otsu thresholding, and closing and labeling operations from mathematical morphology. After these processes, a clean binary seal image can be obtained. On the basis of traditional registration algorithms, a double-level registration method comprising rough and precise registration stages is proposed. The precise registration stage determines the deflection angle to within 0.1°. This paper introduces the concepts of difference inside and difference outside and uses the percentages of difference inside and difference outside to judge whether a seal is real or fake. The experimental results on a large number of check seals are satisfactory. They show that the methods and algorithms presented are robust to noisy sealing conditions and have satisfactory tolerance of within-class differences.
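A rough sketch of the described pre-processing chain, written with OpenCV, is given below. The red-seal isolation step, filter size, and structuring element are assumptions chosen for illustration, not the authors' parameters.

```python
import cv2

def preprocess_seal(bgr_image):
    b, g, r = cv2.split(bgr_image)
    seal = cv2.subtract(r, cv2.max(b, g))       # emphasize the red seal, suppress writing
    seal = cv2.medianBlur(seal, 3)              # median filtering
    _, binary = cv2.threshold(seal, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # Otsu thresholding
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)       # morphological closing
    n_labels, labels = cv2.connectedComponents(closed)               # labeling
    return closed, n_labels

# usage: binary, n = preprocess_seal(cv2.imread("check.png"))
```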
Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data
NASA Astrophysics Data System (ADS)
Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai
2017-11-01
Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of the model V&V problem have been addressed by existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of a visual check, quantifies the load model's accuracy and provides model users with a confidence level for the developed load model. The analysis results can also be used to calibrate load models. The proposed framework can be used as guidance for utility engineers and researchers to systematically examine load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.
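A minimal sketch of the kind of quantitative check the framework advocates, scoring a load model's simulated response against field measurements rather than relying on a visual comparison, is shown below. The metrics, threshold, and data are illustrative assumptions, not the paper's actual framework.

```python
import numpy as np

def validate(measured, simulated, fit_threshold=0.9):
    measured, simulated = np.asarray(measured, float), np.asarray(simulated, float)
    rmse = float(np.sqrt(np.mean((measured - simulated) ** 2)))
    fit = 1.0 - (np.linalg.norm(measured - simulated) /
                 np.linalg.norm(measured - measured.mean()))   # 1 = perfect match
    return rmse, fit, fit >= fit_threshold

p_meas = [1.00, 0.95, 0.70, 0.72, 0.90, 0.99]   # measured active power (p.u.)
p_sim  = [1.00, 0.94, 0.71, 0.71, 0.90, 0.99]   # load model response to the same event
print(validate(p_meas, p_sim))                  # e.g. (0.007, 0.94, True)
```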
Code of Federal Regulations, 2012 CFR
2012-01-01
... check includes, at a minimum, a Federal Bureau of Investigation (FBI) criminal history records check (including verification of identity based on fingerprinting), employment history, education, and personal... fingerprinting and criminal history records checks before granting access to Safeguards Information. A background...
NASA Astrophysics Data System (ADS)
Zamani, K.; Bombardelli, F.
2011-12-01
Almost all natural phenomena on Earth are highly nonlinear. Even simplifications to the equations describing nature usually end up being nonlinear partial differential equations. The transport (advection-diffusion-reaction, ADR) equation is a pivotal equation in atmospheric sciences and water quality. This nonlinear equation needs to be solved numerically for practical purposes, so academics and engineers rely heavily on numerical codes. Thus, numerical codes require verification before they are utilized for multiple applications in science and engineering. Model verification is a mathematical procedure whereby a numerical code is checked to assure that the governing equation is properly solved as described in the design document. CFD verification is not a straightforward, well-defined process: only a complete test suite can uncover all the limitations and bugs, and results need to be assessed to distinguish between bug-induced defects and the innate limitations of a numerical scheme. As Roache (2009) said, numerical verification is a state-of-the-art procedure, and sometimes novel tricks work out. This study conveys a synopsis of the experience we gained during a comprehensive verification process carried out for a transport solver. A test suite was designed including unit tests and algorithmic tests, layered in complexity in several dimensions from simple to complex. Acceptance criteria were defined for the desirable capabilities of the transport code, such as order of accuracy, mass conservation, handling of stiff source terms, spurious oscillation, and initial shape preservation. At the beginning, a mesh convergence study, which is the main craft of verification, was performed. To that end, analytical solutions of the ADR equation were gathered, and a new solution was derived. In more general cases, the lack of an analytical solution can be overcome through Richardson extrapolation and manufactured solutions. Then, two bugs that were concealed during the mesh convergence study were uncovered with the method of false injection and visualization of the results. Symmetry played a dual role: one bug was hidden by the symmetric nature of a test (it was detected afterward utilizing artificial false injection); on the other hand, self-symmetry was used to design a new test in a case where the analytical solution of the ADR equation was unknown. Assisting subroutines were designed to check and post-process conservation of mass and oscillatory behavior. Finally, the capability of the solver was also checked for stiff reaction source terms. The above test suite was not only a decent tool for error detection but also provided thorough feedback on the ADR solver's limitations. Such information is the crux of any rigorous numerical modeling of surface/subsurface pollution transport.
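One of the auxiliary checks mentioned, discrete mass conservation, can be sketched for a simple explicit upwind advection step on a periodic domain as follows. The scheme, grid, and tolerance are assumptions for illustration, not the authors' solver.

```python
import numpy as np

def upwind_step(c, u, dx, dt):
    """One explicit donor-cell (upwind, u > 0) step with periodic boundaries."""
    flux = u * c
    return c - dt / dx * (flux - np.roll(flux, 1))

x = np.linspace(0.0, 10.0, 200, endpoint=False)
c = np.exp(-2.0 * (x - 5.0) ** 2)              # initial concentration plume
dx, dt, u = x[1] - x[0], 0.01, 1.0             # CFL = u*dt/dx = 0.2
m0 = c.sum() * dx
for _ in range(500):
    c = upwind_step(c, u, dx, dt)
print(abs(c.sum() * dx - m0) / m0 < 1e-10)     # mass conserved to round-off: True
```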
Foo Kune, Denis [Saint Paul, MN; Mahadevan, Karthikeyan [Mountain View, CA
2011-01-25
A recursive verification protocol to reduce the time variance due to delays in the network by putting the subject node at most one hop from the verifier node provides for an efficient manner to test wireless sensor nodes. Since the software signatures are time based, recursive testing will give a much cleaner signal for positive verification of the software running on any one node in the sensor network. In this protocol, the main verifier checks its neighbor, who in turn checks its neighbor, and continuing this process until all nodes have been verified. This ensures minimum time delays for the software verification. Should a node fail the test, the software verification downstream is halted until an alternative path (one not including the failed node) is found. Utilizing techniques well known in the art, having a node tested twice, or not at all, can be avoided.
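The hop-by-hop recursion described can be sketched as follows: each verified node attests a one-hop neighbor, a failure halts the chain behind the failed node, and other paths are still explored. The graph representation and attestation callback are placeholders, not the patented protocol's actual messages or time-based signature scheme.

```python
def verify_network(graph, attest, main_verifier):
    """graph: node -> neighbors; attest(verifier, subject) -> bool."""
    verified, failed = {main_verifier}, set()
    frontier = [main_verifier]
    while frontier:
        verifier = frontier.pop()
        for subject in graph[verifier]:
            if subject in verified or subject in failed:
                continue                      # never test a node twice
            if attest(verifier, subject):     # subject is at most one hop away
                verified.add(subject)
                frontier.append(subject)      # subject becomes the next verifier
            else:
                failed.add(subject)           # halt downstream through this node
    return verified, failed

net = {"A": ["B"], "B": ["A", "C", "D"], "C": ["B", "D"], "D": ["B", "C"]}
print(verify_network(net, lambda v, s: s != "C", "A"))   # C fails; D verified via B
```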
Razzaq, Misbah; Ahmad, Jamil
2015-01-01
Internet worms are analogous to biological viruses since they can infect a host and have the ability to propagate through a chosen medium. To prevent the spread of a worm or to grasp how to regulate a prevailing worm, compartmental models are commonly used as a means to examine and understand the patterns and mechanisms of a worm spread. However, one of the greatest challenges is to produce methods to verify and validate the behavioural properties of a compartmental model. This is why, in this study, we suggest a framework based on Petri Nets and Model Checking through which we can meticulously examine and validate these models. We investigate the Susceptible-Exposed-Infectious-Recovered (SEIR) model and propose a new model, Susceptible-Exposed-Infectious-Recovered-Delayed-Quarantined (Susceptible/Recovered) (SEIDQR(S/I)), along with a hybrid quarantine strategy, which is then constructed and analysed using Stochastic Petri Nets and Continuous Time Markov Chains. The analysis shows that the hybrid quarantine strategy is extremely effective in reducing the risk of propagating the worm. Through Model Checking, we gained insight into the functionality of compartmental models. Model Checking results validate simulation ones well, which fully supports the proposed framework. PMID:26713449
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCune, W.; Shumsky, O.
2000-02-04
IVY is a verified theorem prover for first-order logic with equality. It is coded in ACL2, and it makes calls to the theorem prover Otter to search for proofs and to the program MACE to search for countermodels. Verifications of Otter and MACE are not practical because they are coded in C. Instead, Otter and MACE give detailed proofs and models that are checked by verified ACL2 programs. In addition, the initial conversion to clause form is done by verified ACL2 code. The verification is done with respect to finite interpretations.
Verification of a Byzantine-Fault-Tolerant Self-stabilizing Protocol for Clock Synchronization
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2008-01-01
This paper presents the mechanical verification of a simplified model of a rapid Byzantine-fault-tolerant self-stabilizing protocol for distributed clock synchronization systems. This protocol does not rely on any assumptions about the initial state of the system except for the presence of sufficient good nodes, thus making the weakest possible assumptions and producing the strongest results. This protocol tolerates bursts of transient failures, and deterministically converges within a time bound that is a linear function of the self-stabilization period. A simplified model of the protocol is verified using the Symbolic Model Verifier (SMV). The system under study consists of 4 nodes, where at most one of the nodes is assumed to be Byzantine faulty. The model checking effort is focused on verifying correctness of the simplified model of the protocol in the presence of a permanent Byzantine fault as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period. Although model checking results of the simplified model of the protocol confirm the theoretical predictions, these results do not necessarily confirm that the protocol solves the general case of this problem. Modeling challenges of the protocol and the system are addressed. A number of abstractions are utilized in order to reduce the state space.
SU-E-T-762: Toward Volume-Based Independent Dose Verification as Secondary Check
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tachibana, H; Tachibana, R
2015-06-15
Purpose: Lung SBRT planning has shifted to a volume prescription technique. However, point dose agreement is still verified using independent dose verification as the secondary check. Volume dose verification is more affected by inhomogeneity correction than the point dose verification currently used as the check. A feasibility study of volume dose verification was conducted for lung SBRT plans. Methods: Six SBRT plans were collected in our institute. Two dose distributions, with and without inhomogeneity correction, were generated using Adaptive Convolve (AC) in Pinnacle3. Simple MU Analysis (SMU, Triangle Product, Ishikawa, JP) was used as the independent dose verification software program, in which a modified Clarkson-based algorithm was implemented and the radiological path length was computed from CT images independently of the treatment planning system. The agreement in point dose and mean dose between the AC with and without the correction and the SMU was assessed. Results: In the point dose evaluation at the center of the GTV, the comparison against the AC with inhomogeneity correction showed a systematic shift (4.5% ± 1.9%), while there was good agreement of 0.2 ± 0.9% between the SMU and the AC without the correction. In the volume evaluation, there were significant differences in mean dose not only for the PTV (14.2 ± 5.1%) but also for the GTV (8.0 ± 5.1%) compared to the AC with the correction. Without the correction, the SMU showed good agreement for the GTV (1.5 ± 0.9%) as well as the PTV (0.9% ± 1.0%). Conclusion: Volume evaluation as a secondary check may be possible in homogeneous regions. However, volumes including inhomogeneous media produce larger discrepancies. The dose calculation algorithm for independent verification needs to be modified to take the inhomogeneity correction into account.
Security Protocol Verification and Optimization by Epistemic Model Checking
2010-11-05
Three cryptographers are sitting down to dinner at their favourite restaurant. Their waiter informs them that arrangements have been made with the...Unfortunately, the protocol cannot be expected to satisfy this: suppose that all agents manage to broadcast their message and all messages have the
78 FR 22522 - Privacy Act of 1974: New System of Records
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-16
... Privacy Act of 1974 (5 U.S.C. 552a), as amended, titled ``Biometric Verification System (CSOSA-20).'' This... Biometric Verification System allows individuals under supervision to electronically check-in for office... determination. ADDRESSES: You may submit written comments, identified by ``Biometric Verification System, CSOSA...
SU-F-T-494: A Multi-Institutional Study of Independent Dose Verification Using Golden Beam Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Itano, M; Yamazaki, T; Tachibana, R
Purpose: In general, beam data of an individual linac is measured for the independent dose verification software program, and the verification is performed as a secondary check. In this study, independent dose verification using golden beam data was compared to that using the individual linac's beam data. Methods: Six institutions participated and three different sets of beam data were prepared. The first was individually measured data (Original Beam Data, OBD). The others were generated from all measurements of the same linac model (Model-GBD) and of all linac models (All-GBD). The three beam data sets were registered to the independent verification software program at each institute. Subsequently, patients' plans in eight sites (brain, head and neck, lung, esophagus, breast, abdomen, pelvis and bone) were analyzed using the verification program to compare doses calculated using the three different beam data sets. Results: 1116 plans were collected from the six institutes. Compared to the OBD, the variation using the Model-GBD-based calculation and the All-GBD was 0.0 ± 0.3% and 0.0 ± 0.6%, respectively. The maximum variations were 1.2% and 2.3%, respectively. The plans with variation over 1% had reference points located away from the central axis, with or without a physical wedge. Conclusion: The confidence limit (2SD) using the Model-GBD and the All-GBD was within 0.6% and 1.2%, respectively. Thus, the use of golden beam data may be feasible for independent verification. In addition, verification using golden beam data provides quality assurance of planning from an audit point of view. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical capability gap in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on the balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval method of calculation and a case study demonstrating its use are provided. Validation of the method is demonstrated for the case study based on the probability of capture of confirmation points.
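A minimal sketch of a prediction-interval calculation for a single check-load confirmation point follows. It uses a normal quantile as a large-sample approximation and a simplified half-width formula; the residual values, confidence level, and load units are hypothetical, and the published method accounts for calibration and check-load variability in more detail than shown here.

```python
from statistics import NormalDist, mean, stdev

def prediction_interval(calibration_residuals, predicted_load, confidence=0.95):
    """Two-sided prediction interval for one new check-load confirmation point.
    A t-quantile with the calibration degrees of freedom would normally be
    used for small calibration data sets; a normal quantile is used here
    as an approximation."""
    s = stdev(calibration_residuals)          # residual standard deviation
    bias = mean(calibration_residuals)        # should be near zero
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
    half_width = z * s * (1.0 + 1.0 / len(calibration_residuals)) ** 0.5
    return (predicted_load + bias - half_width,
            predicted_load + bias + half_width)

# Illustrative residuals (lbf) from a hypothetical balance calibration.
residuals = [0.12, -0.08, 0.05, -0.11, 0.02, 0.07, -0.04, 0.09, -0.06, 0.01]
print(prediction_interval(residuals, predicted_load=250.0))
```

A confirmation point is then judged "captured" if the measured check load falls inside the interval returned for its predicted load.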
RELAP5-3D Resolution of Known Restart/Backup Issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mesina, George L.; Anderson, Nolan A.
2014-12-01
The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code against its specifications only when the code is originally written, and then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a “verification file” that records double precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
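A minimal sketch of the "verification file" idea described above, assuming hypothetical variable names and a simple dictionary-per-time-step record format; the actual RELAP5-3D verification file format, variable set, and tolerances are not reproduced here.

```python
import math

def variable_sums(time_history):
    """Double-precision sums of key variables over a run, the quantity a
    verification file would record for later comparison."""
    sums = {}
    for record in time_history:               # one dict of variables per time step
        for name, value in record.items():
            sums[name] = sums.get(name, 0.0) + value
    return sums

def regression_check(current, baseline, rel_tol=1e-12):
    """Sequential-verification style check: flag any key variable whose
    summed value drifts between consecutive code versions."""
    return {name: (current[name], baseline[name])
            for name in baseline
            if not math.isclose(current[name], baseline[name], rel_tol=rel_tol)}

baseline = {"pressure": 1.0e7, "void_fraction": 12.5}
current  = {"pressure": 1.0e7, "void_fraction": 12.5000001}
print(regression_check(current, baseline))    # non-empty dict means a regression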
Cost-Effective CNC Part Program Verification Development for Laboratory Instruction.
ERIC Educational Resources Information Center
Chen, Joseph C.; Chang, Ted C.
2000-01-01
Describes a computer numerical control program verification system that checks a part program before its execution. The system includes character recognition, word recognition, a fuzzy-nets system, and a tool path viewer. (SK)
25 CFR 224.63 - What provisions must a TERA contain?
Code of Federal Regulations, 2010 CFR
2010-04-01
... RESOURCE AGREEMENTS UNDER THE INDIAN TRIBAL ENERGY DEVELOPMENT AND SELF DETERMINATION ACT Procedures for... cancelled checks; cash receipt vouchers; copies of money orders or cashiers checks; or verification of...
Real-Time System Verification by Kappa-Induction
NASA Technical Reports Server (NTRS)
Pike, Lee S.
2005-01-01
We report the first formal verification of a reintegration protocol for a safety-critical, fault-tolerant, real-time distributed embedded system. A reintegration protocol increases system survivability by allowing a node that has suffered a fault to regain state consistent with the operational nodes. The protocol is verified in the Symbolic Analysis Laboratory (SAL), where bounded model checking and decision procedures are used to verify infinite-state systems by k-induction. The protocol and its environment are modeled as synchronizing timeout automata. Because k-induction is exponential with respect to k, we optimize the formal model to reduce the size of k. Also, the reintegrator's event-triggered behavior is conservatively modeled as time-triggered behavior to further reduce the size of k and to make it invariant to the number of nodes modeled. A corollary is that a clique avoidance property is satisfied.
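An explicit-state toy illustration of k-induction follows, to make the "base case plus inductive step" structure concrete; the verification reported above is performed symbolically in SAL over timeout automata, so the transition system, property, and brute-force path enumeration here are purely illustrative.

```python
def k_induction(states, initial, step, prop, k):
    """Explicit-state k-induction over a finite transition relation.
    Base case: prop holds on every path of length <= k from an initial state.
    Step case: on every path of k+1 consecutive states where prop holds on
    the first k states, prop also holds on the last state."""
    # Base case: bounded exploration from the initial states.
    frontier = set(initial)
    for _ in range(k + 1):
        if any(not prop(s) for s in frontier):
            return False
        frontier = {t for s in frontier for t in step(s)}
    # Step case: enumerate all (k+1)-state paths through the transition relation.
    def paths(length):
        if length == 1:
            for s in states:
                yield (s,)
        else:
            for prefix in paths(length - 1):
                for t in step(prefix[-1]):
                    yield prefix + (t,)
    for path in paths(k + 1):
        if all(prop(s) for s in path[:-1]) and not prop(path[-1]):
            return False
    return True

# Toy example: a modulo-4 counter; the property "state != 5" is trivially
# invariant and is proved here with k = 1.
states = range(4)
print(k_induction(states, {0}, lambda s: {(s + 1) % 4}, lambda s: s != 5, k=1))
```

The abstract's point about minimizing k shows up directly here: the step case enumerates paths of length k+1, so its cost grows quickly with k.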
State-based verification of RTCP-nets with nuXmv
NASA Astrophysics Data System (ADS)
Biernacka, Agnieszka; Biernacki, Jerzy; Szpyrka, Marcin
2015-12-01
The paper deals with an algorithm for translating coverability graphs of RTCP-nets (real-time coloured Petri nets) into nuXmv state machines. The approach enables users to verify RTCP-nets with the model checking techniques provided by the nuXmv tool. Full details of the algorithm are presented and an illustrative example of the approach's usefulness is provided.
Code of Federal Regulations, 2010 CFR
2010-07-01
... accountholder or must verify the individual's identity. Verification may be either through a signature card or... the purchaser. If the deposit accountholder's identity has not been verified previously, the financial institution shall verify the deposit accountholder's identity by examination of a document which is normally...
NASA Technical Reports Server (NTRS)
Havelund, Klaus
1999-01-01
The JAVA PATHFINDER, JPF, is a translator from a subset of JAVA 1.0 to PROMELA, the programming language of the SPIN model checker. The purpose of JPF is to establish a framework for verification and debugging of JAVA programs based on model checking. The main goal is to automate program verification such that a programmer can apply it in their daily work without the need for a specialist to manually reformulate a program into a different notation in order to analyze it. The system is especially suited for analyzing multi-threaded JAVA applications, where normal testing usually falls short. The system can find deadlocks and violations of boolean assertions stated by the programmer in a special assertion language. This document explains how to use JPF.
Hiraishi, Kunihiko
2014-01-01
One of the significant topics in systems biology is to develop control theory for gene regulatory networks (GRNs). In typical control of GRNs, expression of some genes is inhibited (activated) by manipulating external stimuli and the expression of other genes. Control theory of GRNs is expected to be applied to gene therapy technologies in the future. In this paper, a control method using a Boolean network (BN) is studied. A BN is widely used as a model of GRNs, and gene expression is expressed by a binary value (ON or OFF). In particular, a context-sensitive probabilistic Boolean network (CS-PBN), which is one of the extended models of BNs, is used. For CS-PBNs, the verification problem and the optimal control problem are considered. For the verification problem, a solution method using the probabilistic model checker PRISM is proposed. For the optimal control problem, a solution method using polynomial optimization is proposed. Finally, a numerical example on the WNT5A network, which is related to melanoma, is presented. The proposed methods provide useful tools for control theory of GRNs. PMID:24587766
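A minimal sketch of a Boolean network with a probabilistic context switch follows, to illustrate what a CS-PBN-style model looks like operationally; the three genes, update rules, and switch probability are invented for the example, and the paper's actual analysis of the WNT5A network is carried out in PRISM, not by simulation.

```python
import random

def step(state, context):
    """One synchronous update of a three-gene Boolean network.
    `state` maps gene name -> 0/1; `context` selects one of two
    update-rule sets, mimicking a context-sensitive PBN."""
    a, b, c = state["A"], state["B"], state["C"]
    if context == 0:
        return {"A": b, "B": a & ~c & 1, "C": a | b}
    return {"A": a, "B": b | c, "C": ~a & 1}

def simulate(state, p_switch=0.1, steps=20, seed=1):
    """Random context switches with probability p_switch per step."""
    rng = random.Random(seed)
    context = 0
    trace = [dict(state)]
    for _ in range(steps):
        if rng.random() < p_switch:
            context = 1 - context
        state = step(state, context)
        trace.append(dict(state))
    return trace

print(simulate({"A": 1, "B": 0, "C": 0})[-1])
```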
Static Verification for Code Contracts
NASA Astrophysics Data System (ADS)
Fähndrich, Manuel
The Code Contracts project [3] at Microsoft Research enables programmers on the .NET platform to author specifications in existing languages such as C# and VisualBasic. To take advantage of these specifications, we provide tools for documentation generation, runtime contract checking, and static contract verification.
Detecting Inconsistencies in Multi-View Models with Variability
NASA Astrophysics Data System (ADS)
Lopez-Herrejon, Roberto Erick; Egyed, Alexander
Multi-View Modeling (MVM) is a common modeling practice that advocates the use of multiple, different and yet related models to represent the needs of diverse stakeholders. Of crucial importance in MVM is consistency checking - the description and verification of semantic relationships amongst the views. Variability is the capacity of software artifacts to vary, and its effective management is a core tenet of the research in Software Product Lines (SPL). MVM has proven useful for developing one-of-a-kind systems; however, to reap the potential benefits of MVM in SPL it is vital to provide consistency checking mechanisms that cope with variability. In this paper we describe how to address this need by applying Safe Composition - the guarantee that all programs of a product line are type safe. We evaluate our approach with a case study.
On the concept of virtual current as a means to enhance verification of electromagnetic flowmeters
NASA Astrophysics Data System (ADS)
Baker, Roger C.
2011-10-01
Electromagnetic flowmeters are becoming increasingly widely used in the water industry and other industries which handle electrically conducting liquids. When installed they are often difficult to remove for calibration without disturbing the liquid flow. Interest has therefore increased in the possibility of in situ calibration. The result has been the development of verification which attempts to approach calibration. However, while it checks on magnetic field and amplification circuits, it does not check adequately on the internals of the flowmeter pipe. This paper considers the use of the virtual voltage, a key element of the weight function theory of the flowmeter, to identify changes which have occurred in the flow tube and its liner. These could include a deformed insulating liner to the flow tube, or a deposit in the tube resulting from solids in the flow. The equation for virtual voltage is solved using a finite difference approach and the results are checked using a tank to simulate the flow tube, and tests on a flow rig. The concept is shown to be promising as a means of approaching verification of calibration.
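The virtual-voltage calculation mentioned above reduces, in the simplest setting, to solving a Laplace-type boundary value problem over the flow-tube cross-section. A crude finite-difference sketch on a square grid is given below; the real geometry is a circular tube with point electrodes, and the paper's analysis includes liner deformation and deposits, none of which are modelled here.

```python
def solve_laplace(nx=41, ny=41, iterations=2000):
    """Jacobi iteration for Laplace's equation on a square cross-section,
    with the two electrodes held at +1 and -1 on opposite walls.  The
    resulting potential plays the role of the virtual voltage whose
    gradient (the virtual current) weights the flow signal."""
    grid = [[0.0] * nx for _ in range(ny)]
    for j in range(ny):
        grid[j][0] = 1.0       # electrode held at +1 on the left wall
        grid[j][-1] = -1.0     # electrode held at -1 on the right wall
    for _ in range(iterations):
        new = [row[:] for row in grid]
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                new[j][i] = 0.25 * (grid[j][i - 1] + grid[j][i + 1]
                                    + grid[j - 1][i] + grid[j + 1][i])
        grid = new
    return grid

potential = solve_laplace()
print(round(potential[20][20], 4))   # mid-pipe virtual potential (zero by symmetry)
```

Changes to the liner or deposits in the tube would enter such a calculation as altered boundary conditions, which is what lets the computed virtual voltage flag internal changes.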
DOT National Transportation Integrated Search
2005-09-01
This document describes a procedure for verifying a dynamic testing system (closed-loop servohydraulic). The procedure is divided into three general phases: (1) electronic system performance verification, (2) calibration check and overall system perf...
Doing Our Homework: A Pre-Employment Screening Checklist for Better Head Start Hiring.
ERIC Educational Resources Information Center
McSherry, Jim; Caruso, Karen; Cagnetta, Mary
1999-01-01
Provides a checklist for screening potential Head Start employees. Components include checks for criminal record, credit history, motor vehicle records, social security number tracing, previous address research, education verification, work and personal reference interview, and professional license verification. (KB)
Building validation tools for knowledge-based systems
NASA Technical Reports Server (NTRS)
Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.
1987-01-01
The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (a higher-order language) is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.
A formally verified algorithm for interactive consistency under a hybrid fault model
NASA Technical Reports Server (NTRS)
Lincoln, Patrick; Rushby, John
1993-01-01
Consistent distribution of single-source data to replicated computing channels is a fundamental problem in fault-tolerant system design. The 'Oral Messages' (OM) algorithm solves this problem of Interactive Consistency (Byzantine Agreement) assuming that all faults are worst-case. Thambidurai and Park introduced a 'hybrid' fault model that distinguishes three fault modes: asymmetric (Byzantine), symmetric, and benign; they also exhibited, along with an informal 'proof of correctness', a modified version of OM. Unfortunately, their algorithm is flawed. The discipline of mechanically checked formal verification eventually enabled us to develop a correct algorithm for Interactive Consistency under the hybrid fault model. This algorithm withstands $a$ asymmetric, $s$ symmetric, and $b$ benign faults simultaneously, using $m+1$ rounds, provided $n > 2a + 2s + b + m$ and $m \geq a$. We present this algorithm, discuss its subtle points, and describe its formal specification and verification in PVS. We argue that formal verification systems such as PVS are now sufficiently effective that their application to fault-tolerance algorithms should be considered routine.
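The resilience condition stated in the abstract is simple enough to encode directly; the following sketch just evaluates that bound for candidate configurations (the example node counts are illustrative).

```python
def tolerates(n, a, s, b, m):
    """Resilience condition for the corrected hybrid-fault Oral Messages
    algorithm: n nodes running m+1 rounds withstand a asymmetric,
    s symmetric, and b benign faults iff n > 2a + 2s + b + m and m >= a."""
    return n > 2 * a + 2 * s + b + m and m >= a

# For example, 6 nodes with 2 rounds (m = 1) tolerate one asymmetric fault
# plus one benign fault, but not two asymmetric faults.
print(tolerates(n=6, a=1, s=0, b=1, m=1))   # True
print(tolerates(n=6, a=2, s=0, b=0, m=1))   # False: m >= a fails
```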
Quantitative reactive modeling and verification.
Henzinger, Thomas A
Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness, which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.
Automated Environment Generation for Software Model Checking
NASA Technical Reports Server (NTRS)
Tkachuk, Oksana; Dwyer, Matthew B.; Pasareanu, Corina S.
2003-01-01
A key problem in model checking open systems is environment modeling (i.e., representing the behavior of the execution context of the system under analysis). Software systems are fundamentally open since their behavior is dependent on patterns of invocation of system components and values defined outside the system but referenced within the system. Whether reasoning about the behavior of whole programs or about program components, an abstract model of the environment can be essential in enabling sufficiently precise yet tractable verification. In this paper, we describe an approach to generating environments of Java program fragments. This approach integrates formally specified assumptions about environment behavior with sound abstractions of environment implementations to form a model of the environment. The approach is implemented in the Bandera Environment Generator (BEG) which we describe along with our experience using BEG to reason about properties of several non-trivial concurrent Java programs.
An experimental method to verify soil conservation by check dams on the Loess Plateau, China.
Xu, X Z; Zhang, H W; Wang, G Q; Chen, S C; Dang, W Q
2009-12-01
A successful experiment with a physical model requires necessary conditions of similarity. This study presents an experimental method with a semi-scale physical model. The model is used to monitor and verify soil conservation by check dams in a small watershed on the Loess Plateau of China. During experiments, the model-prototype ratio of geomorphic variables was kept constant under each rainfall event. Consequently, experimental data are available for verifying soil erosion processes in the field and for predicting soil loss in a model watershed with check dams, so the method can predict the amount of soil loss in a catchment. This study also presents four criteria: similarities of watershed geometry, grain size and bare land, Froude number (Fr) for rainfall events, and soil erosion in downscaled models. The efficacy of the proposed method was confirmed using these criteria in two different downscaled model experiments. The B-Model, a large-scale model, simulates the watershed prototype. The two small-scale models, D(a) and D(b), have different erosion rates but are the same size. These two models simulate hydraulic processes in the B-Model. Experimental results show that when soil loss in the small-scale models was converted by multiplying by the soil loss scale number, it was very close to that of the B-Model. Thus, with a semi-scale physical model, experiments can verify and predict soil loss in a small watershed with a check dam system on the Loess Plateau, China.
Tornero-López, Ana M; Torres Del Río, Julia; Ruiz, Carmen; Perez-Calatayud, Jose; Guirado, Damián; Lallena, Antonio M
2015-12-01
In brachytherapy using (125)I seed implants, verification of the air kerma strength of the sources used is required. Typically, between 40 and 100 seeds are implanted. Checking all of them is unaffordable, especially when the seeds are supplied in sterile cartridges. Recently, a new procedure that allows compliance with the international recommendations has been proposed for the seedSelectron system of Elekta Brachytherapy. In this procedure, the SourceCheck ionization chamber is used with a special lodgment (Valencia lodgment) that allows up to 10 seeds to be measured simultaneously. In this work we analyze this procedure, showing the feasibility of the approximations required for its application, as well as the effect of the additional dependence on air density exhibited by the chamber model used. Uncertainty calculations and the verification of the approximation needed to obtain a calibration factor for the Valencia lodgment are carried out. The results of the present work show that the chamber's dependence on air density is the same whether or not the Valencia lodgment is used. In contrast, the chamber response profile is influenced by the presence of the lodgment. The determination of this profile requires several measurements because of the non-negligible variability found between different experiments. If this is taken into account, the uncertainty in the determination of the air-kerma strength increases from 0.5% to 1%; otherwise, a systematic additional uncertainty of 1% would occur. This could be relevant for the comparison between user and manufacturer measurements that is mandatory in the case studied here. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Development of an inpatient operational pharmacy productivity model.
Naseman, Ryan W; Lopez, Ben R; Forrey, Ryan A; Weber, Robert J; Kipp, Kris M
2015-02-01
An innovative model for measuring the operational productivity of medication order management in inpatient settings is described. Order verification within a computerized prescriber order-entry system was chosen as the pharmacy workload driver. To account for inherent variability in the tasks involved in processing different types of orders, pharmaceutical products were grouped by class, and each class was assigned a time standard, or "medication complexity weight," reflecting the intensity of pharmacist and technician activities (verification of drug indication, verification of appropriate dosing, adverse-event prevention and monitoring, medication preparation, product checking, product delivery, returns processing, nurse/provider education, and problem-order resolution). The resulting "weighted verifications" (WV) model allows productivity monitoring by job function (pharmacist versus technician) to guide hiring and staffing decisions. A 9-month historical sample of verified medication orders was analyzed using the WV model, and the calculations were compared with values derived from two established models: one based on the Case Mix Index (CMI) and the other based on the proprietary Pharmacy Intensity Score (PIS). Evaluation of Pearson correlation coefficients indicated that values calculated using the WV model were highly correlated with those derived from the CMI- and PIS-based models (r = 0.845 and 0.886, respectively). Relative to the comparator models, the WV model offered the advantage of less period-to-period variability. The WV model yielded productivity data that correlated closely with values calculated using two validated workload management models. The model may be used as an alternative measure of pharmacy operational productivity. Copyright © 2015 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
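A minimal sketch of how a weighted-verifications metric could be computed follows; the drug classes, weights, order counts, and worked hours are hypothetical and not taken from the published model.

```python
def weighted_verifications(orders, weights):
    """Sum of medication-complexity weights over the verified orders;
    `orders` is a list of (drug_class, count) pairs and `weights` maps a
    drug class to its assumed time standard (minutes per order)."""
    return sum(weights[drug_class] * count for drug_class, count in orders)

def productivity(orders, weights, worked_hours):
    """Weighted workload produced per paid minute, a WV-style ratio."""
    return weighted_verifications(orders, weights) / (worked_hours * 60.0)

# Hypothetical weights and a one-shift example; the published model derives
# its weights from task analysis, not from these illustrative numbers.
weights = {"antibiotic": 2.0, "chemotherapy": 6.5, "maintenance": 1.0}
orders = [("antibiotic", 40), ("chemotherapy", 5), ("maintenance", 120)]
print(round(productivity(orders, weights, worked_hours=24), 3))
```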
NASA Technical Reports Server (NTRS)
Fisher, Forest; Gladden, Roy; Khanampornpan, Teerapat
2008-01-01
The MRO Sequence Checking Tool program, mro_check, automates significant portions of the MRO (Mars Reconnaissance Orbiter) sequence checking procedure. Though MRO has checks similar to those of ODY's (Mars Odyssey) Mega Check tool, the checks needed for MRO are unique to the MRO spacecraft. The MRO sequence checking tool automates the majority of the sequence validation procedure and the checklists that are used to validate the sequences generated by the MRO MPST (mission planning and sequencing team). The tool performs more than 50 different checks on the sequence. The automation varies from summarizing data about the sequence needed for visual verification of the sequence, to performing automated checks on the sequence and providing a report for each step. To allow for the addition of new checks as needed, this tool is built in a modular fashion.
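A minimal sketch of the modular check-and-report structure described above follows. The two checks, the command names, and the (time_tag, command) sequence format are invented for illustration; the real mro_check performs more than 50 mission-specific checks on actual MRO sequence products.

```python
def check_no_duplicate_commands(sequence):
    """Flags commands issued more than once at the same time tag."""
    seen, issues = set(), []
    for index, (time_tag, command) in enumerate(sequence):
        if (time_tag, command) in seen:
            issues.append(f"duplicate {command} at time {time_tag} (index {index})")
        seen.add((time_tag, command))
    return issues

def check_time_ordering(sequence):
    """Flags entries whose time tags are not monotonically non-decreasing."""
    return [f"out-of-order entry at index {i}"
            for i in range(1, len(sequence))
            if sequence[i][0] < sequence[i - 1][0]]

def run_checks(sequence, checks):
    """Runs every registered check and collects its report, so new checks
    can be added without touching the driver (the 'modular fashion' above)."""
    return {check.__name__: check(sequence) for check in checks}

sequence = [(0, "HEATER_ON"), (0, "HEATER_ON"), (10, "CAMERA_ON"), (5, "ARM_DRILL")]
print(run_checks(sequence, [check_no_duplicate_commands, check_time_ordering]))
```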
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, J; Hu, W; Xing, Y
Purpose: All plan verification systems for particle therapy are designed to perform plan verification before treatment. However, the actual dose distributions during patient treatment are not known. This study develops an online 2D dose verification tool to check daily dose delivery accuracy. Methods: A Siemens particle treatment system with a modulated scanning spot beam is used in our center. In order to perform online dose verification, we wrote a program to reconstruct the delivered 2D dose distributions based on the daily treatment log files and depth dose distributions. From the log files we can obtain the focus size, position and particle number for each spot. A gamma analysis is used to compare the reconstructed dose distributions with the dose distributions from the TPS to assess the daily dose delivery accuracy. To verify the dose reconstruction algorithm, we compared the reconstructed dose distributions to dose distributions measured using a PTW 729XDR ion chamber matrix for 13 real patient plans. We then analyzed 100 treatment beams (58 carbon and 42 proton) for prostate, lung, ACC, NPC and chordoma patients. Results: For algorithm verification, the gamma passing rate was 97.95% for the 3%/3mm and 92.36% for the 2%/2mm criteria. For patient treatment analysis, the results were 97.7%±1.1% and 91.7%±2.5% for carbon and 89.9%±4.8% and 79.7%±7.7% for proton using the 3%/3mm and 2%/2mm criteria, respectively. The reason for the lower passing rate for the proton beam is that the focus size deviations were larger than for the carbon beam. The average focus size deviations were −14.27% and −6.73% for proton and −5.26% and −0.93% for carbon in the x and y directions, respectively. Conclusion: The verification software meets our requirements to check for daily dose delivery discrepancies. Such tools can enhance the current treatment plan and delivery verification processes and improve the safety of clinical treatments.
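For readers unfamiliar with the gamma criterion quoted above (e.g., 3%/3 mm), a simplified 1-D sketch is given below; the dose profiles and grid spacing are hypothetical, and the study itself applies the analysis to 2-D distributions reconstructed from treatment log files.

```python
def gamma_pass_rate(measured, reference, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Simplified 1-D global gamma analysis: for each evaluated point, search
    all reference points for the minimum combined dose/distance deviation and
    count the fraction of points with gamma <= 1."""
    d_max = max(reference)                       # global normalisation
    passed = 0
    for i, dose in enumerate(measured):
        best = float("inf")
        for j, ref in enumerate(reference):
            dist = (i - j) * spacing_mm
            gamma_sq = (dist / dist_tol_mm) ** 2 + ((dose - ref) / (dose_tol * d_max)) ** 2
            best = min(best, gamma_sq)
        passed += best <= 1.0
    return 100.0 * passed / len(measured)

# Hypothetical 1-D dose profiles (Gy) sampled every 2 mm.
reference = [0.2, 0.8, 1.9, 2.00, 1.90, 0.8, 0.2]
measured  = [0.2, 0.8, 1.9, 2.04, 1.88, 0.8, 0.2]
print(round(gamma_pass_rate(measured, reference, spacing_mm=2.0), 1))   # 100.0
```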
Model Checking JAVA Programs Using Java Pathfinder
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Pressburger, Thomas
2000-01-01
This paper describes a translator called JAVA PATHFINDER from JAVA to PROMELA, the "programming language" of the SPIN model checker. The purpose is to establish a framework for verification and debugging of JAVA programs based on model checking. This work should be seen as part of a broader attempt to make formal methods applicable "in the loop" of programming within NASA's areas such as space, aviation, and robotics. Our main goal is to create automated formal methods such that programmers themselves can apply these in their daily work (in the loop) without the need for specialists to manually reformulate a program into a different notation in order to analyze it. This work is a continuation of an effort to formally verify, using SPIN, a multi-threaded operating system programmed in Lisp for the Deep Space 1 spacecraft, and of previous work in applying existing model checkers and theorem provers to real applications.
Interface Generation and Compositional Verification in JavaPathfinder
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Pasareanu, Corina
2009-01-01
We present a novel algorithm for interface generation of software components. Given a component, our algorithm uses learning techniques to compute a permissive interface representing legal usage of the component. Unlike our previous work, this algorithm does not require knowledge about the component's environment. Furthermore, in contrast to other related approaches, our algorithm computes permissive interfaces even in the presence of non-determinism in the component. Our algorithm is implemented in the JavaPathfinder model checking framework for UML statechart components. We have also added support for automated assume-guarantee style compositional verification in JavaPathfinder, using component interfaces. We report on the application of the presented approach to the generation of interfaces for flight software components.
A verification library for multibody simulation software
NASA Technical Reports Server (NTRS)
Kim, Sung-Soo; Haug, Edward J.; Frisch, Harold P.
1989-01-01
A multibody dynamics verification library that maintains and manages test and validation data is proposed, based on RRC robot arm and CASE backhoe validation and a comparative study of DADS, DISCOS, and CONTOPS, which are existing public-domain and commercial multibody dynamics simulation programs. Using simple representative problems, simulation results from each program are cross-checked, and the validation results are presented. Functionalities of the verification library are defined in order to automate the validation procedure.
Software Safety Analysis of a Flight Guidance System
NASA Technical Reports Server (NTRS)
Butler, Ricky W. (Technical Monitor); Tribble, Alan C.; Miller, Steven P.; Lempia, David L.
2004-01-01
This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of the formal methods analysis technique model checking that was used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.
Sierra/SolidMechanics 4.48 Verification Tests Manual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plews, Julia A.; Crane, Nathan K; de Frias, Gabriel Jose
2018-03-01
Presented in this document is a small portion of the tests that exist in the Sierra / SolidMechanics (Sierra / SM) verification test suite. Most of these tests are run nightly with the Sierra / SM code suite, and the results of each test are checked against the correct analytical result. For each of the tests presented in this document, the test setup, a description of the analytic solution, and a comparison of the Sierra / SM code results to the analytic solution are provided. Mesh convergence is also checked on a nightly basis for several of these tests. This document can be used to confirm that a given code capability is verified, or it can be referenced as a compilation of example problems. Additional example problems are provided in the Sierra / SM Example Problems Manual. Note that many other verification tests exist in the Sierra / SM test suite but have not yet been included in this manual.
The Mars Science Laboratory Organic Check Material
NASA Astrophysics Data System (ADS)
Conrad, Pamela G.; Eigenbrode, Jennifer L.; Von der Heydt, Max O.; Mogensen, Claus T.; Canham, John; Harpold, Dan N.; Johnson, Joel; Errigo, Therese; Glavin, Daniel P.; Mahaffy, Paul R.
2012-09-01
Mars Science Laboratory's Curiosity rover carries a set of five external verification standards in hermetically sealed containers that can be sampled as would be a Martian rock, by drilling and then portioning into the solid sample inlet of the Sample Analysis at Mars (SAM) suite. Each organic check material (OCM) canister contains a porous ceramic solid, which has been doped with a fluorinated hydrocarbon marker that can be detected by SAM. The purpose of the OCM is to serve as a verification tool for the organic cleanliness of those parts of the sample chain that cannot be cleaned other than by dilution, i.e., repeated sampling of Martian rock. SAM possesses internal calibrants for verification of both its performance and its internal cleanliness, and the OCM is not used for that purpose. Each OCM unit is designed for one use only, and the choice to do so will be made by the project science group (PSG).
Verification and Trust: Background Investigations Preceding Faculty Appointment
ERIC Educational Resources Information Center
Finkin, Matthew W.; Post, Robert C.; Thomson, Judith J.
2004-01-01
Many employers in the United States have responded to the terrorist attacks of September 11, 2001, by initiating or expanding policies requiring background checks of prospective employees. Their ability to perform such checks has been abetted by the growth of computerized databases and of commercial enterprises that facilitate access to personal…
Verification and Trust: Background Investigations Preceding Faculty Appointment
ERIC Educational Resources Information Center
Academe, 2004
2004-01-01
Many employers in the United States have been initiating or expanding policies requiring background checks of prospective employees. The ability to perform such checks has been abetted by the growth of computerized databases and of commercial enterprises that facilitate access to personal information. Employers now have ready access to public…
Tien, Han-Kuang; Chung, Wen
2018-05-10
This research addressed adults' health check-ups through the lens of Role Transportation Theory. This theory is applied to narrative advertising that lures adults into seeking health check-ups by causing audiences to empathize with the advertisement's character. This study explored the persuasive mechanism behind narrative advertising and reinforced the Protection Motivation Theory model. We added two key perturbation variables: optimistic bias and truth avoidance. To complete the verification hypothesis, we performed two experiments. In Experiment 1, we recruited 77 respondents online for testing. We used analyses of variance to verify the effectiveness of narrative and informative advertising. Then, in Experiment 2, we recruited 228 respondents to perform offline physical experiments and conducted a path analysis through structural equation modelling. The findings showed that narrative advertising positively impacted participants' disease prevention intentions. The use of Role Transportation Theory in advertising enables the audience to be emotionally connected with the character, which enhances persuasiveness. In Experiment 2, we found that the degree of role transference can interfere with optimistic bias, improve perceived health risk, and promote behavioral intentions for health check-ups. Furthermore, truth avoidance can interfere with perceived health risks, which, in turn, reduce behavioral intentions for health check-ups.
Post-OPC verification using a full-chip pattern-based simulation verification method
NASA Astrophysics Data System (ADS)
Hung, Chi-Yuan; Wang, Ching-Heng; Ma, Cliff; Zhang, Gary
2005-11-01
In this paper, we evaluated and investigated techniques for performing fast full-chip post-OPC verification using a commercial product platform. A number of databases from several technology nodes, i.e. 0.13um, 0.11um and 90nm, are used in the investigation. Although our OPC technology has proven robust in most cases, the variety of tape-outs with complicated design styles and technologies makes it difficult to develop a "complete" or "bullet-proof" OPC algorithm that would cover every possible layout pattern. In the evaluation, errors were found in some of the dozens of OPC databases by model-based post-OPC checking; such errors could be costly in manufacturing - reticle, wafer process, and, more importantly, production delay. From such full-chip OPC database verification, we have learned that optimizing OPC models and recipes on a limited set of test chip designs may not provide sufficient coverage across the range of designs to be produced in the process, and fatal errors (such as pinch or bridge), poor CD distribution, and process-sensitive patterns may still occur. As a result, more than one reticle tape-out cycle is not uncommon to prove models and recipes that approach the center of process for a range of designs. We therefore describe a full-chip pattern-based simulation verification flow that serves both OPC model and recipe development and post-OPC verification after production release of the OPC. Lastly, we discuss the differences between the new pattern-based and conventional edge-based verification tools and summarize the advantages of our new tool and methodology: (1) Accuracy: superior inspection algorithms, down to 1nm accuracy, with the new pattern-based approach; (2) High-speed performance: pattern-centric algorithms give the best full-chip inspection efficiency; (3) Powerful analysis capability: flexible error distribution, grouping, interactive viewing and hierarchical pattern extraction to narrow down to unique patterns/cells.
NASA Technical Reports Server (NTRS)
Cleveland, Paul E.; Parrish, Keith A.
2005-01-01
A thorough and unique thermal verification and model validation plan has been developed for NASA's James Webb Space Telescope. The JWST observatory consists of a large deployed-aperture optical telescope passively cooled to below 50 Kelvin, along with a suite of instruments passively and actively cooled to below 37 Kelvin and 7 Kelvin, respectively. Passive cooling to these extremely low temperatures is made feasible by the use of a large deployed high-efficiency sunshield and an orbit location at the L2 Lagrange point. Another enabling feature is the scale or size of the observatory, which allows for large radiator sizes that are compatible with the expected power dissipation of the instruments and large-format Mercury Cadmium Telluride (HgCdTe) detector arrays. This passive cooling concept is simple, reliable, and mission enabling when compared to the alternatives of mechanical coolers and stored cryogens. However, these same large-scale observatory features, which make passive cooling viable, also prevent the typical flight-configuration, fully-deployed thermal balance test that is the keystone of most space missions' thermal verification plans. JWST is simply too large in its deployed configuration to be properly thermal balance tested in the facilities that currently exist. This reality, combined with a mission thermal concept with little to no flight heritage, has necessitated a unique and alternative approach to thermal system verification and model validation. This paper describes the thermal verification and model validation plan that has been developed for JWST. The plan relies on judicious use of cryogenic and thermal design margin, a completely independent thermal modeling cross-check utilizing different analysis teams and software packages, and a comprehensive set of thermal tests that occur at different levels of JWST assembly. After a brief description of the JWST mission and thermal architecture, a detailed description of the three aspects of the thermal verification and model validation plan is presented.
76 FR 14038 - TWIC/MTSA Policy Advisory Council; Voluntary Use of TWIC Readers
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-15
... and would like to know that they reached the Facility, please enclose a stamped, self-addressed... regulatory requirements for effective (1) identity verification, (2) card validity, and (3) card... access is granted. 33 CFR 101.514. At each entry, the TWIC must be checked for (1) identity verification...
Check-Cases for Verification of 6-Degree-of-Freedom Flight Vehicle Simulations. Volume 2; Appendices
NASA Technical Reports Server (NTRS)
Murri, Daniel G.; Jackson, E. Bruce; Shelton, Robert O.
2015-01-01
This NASA Engineering and Safety Center (NESC) assessment was established to develop a set of time histories for the flight behavior of increasingly complex example aerospacecraft that could be used to partially validate various simulation frameworks. The assessment was conducted by representatives from several NASA Centers and an open-source simulation project. This document contains details on models, implementation, and results.
Model Checking of a Diabetes-Cancer Model
NASA Astrophysics Data System (ADS)
Gong, Haijun; Zuliani, Paolo; Clarke, Edmund M.
2011-06-01
Accumulating evidence suggests that cancer incidence might be associated with diabetes mellitus, especially Type II diabetes which is characterized by hyperinsulinaemia, hyperglycaemia, obesity, and overexpression of multiple WNT pathway components. These diabetes risk factors can activate a number of signaling pathways that are important in the development of different cancers. To systematically understand the signaling components that link diabetes and cancer risk, we have constructed a single-cell, Boolean network model by integrating the signaling pathways that are influenced by these risk factors to study insulin resistance, cancer cell proliferation and apoptosis. Then, we introduce and apply the Symbolic Model Verifier (SMV), a formal verification tool, to qualitatively study some temporal logic properties of our diabetes-cancer model. The verification results show that the diabetes risk factors might not increase cancer risk in normal cells, but they will promote cell proliferation if the cell is in a precancerous or cancerous stage characterized by losses of the tumor-suppressor proteins ARF and INK4a.
NASA Astrophysics Data System (ADS)
Martin, L.; Schatalov, M.; Hagner, M.; Goltz, U.; Maibaum, O.
Today's software for aerospace systems is typically very complex, due to the increasing number of features as well as the high demands for safety, reliability, and quality. This complexity also leads to significantly higher software development costs. To handle the software complexity, a structured development process is necessary. Additionally, compliance with relevant standards for quality assurance is a mandatory concern. To assure high software quality, techniques for verification are necessary. Besides traditional techniques like testing, automated verification techniques like model checking are becoming more popular. The latter examine the whole state space and, consequently, result in full coverage. Nevertheless, despite the obvious advantages, this technique is as yet rarely used for the development of aerospace systems. In this paper, we propose a tool-supported methodology for the development and formal verification of safety-critical software in the aerospace domain. The methodology relies on the V-Model and defines a comprehensive work flow for model-based software development as well as automated verification in compliance with the European standard series ECSS-E-ST-40C. Furthermore, our methodology supports the generation and deployment of code. For tool support we use the SCADE Suite (Esterel Technologies), an integrated design environment that covers all the requirements of our methodology. The SCADE Suite is well established in the avionics and defense, rail transportation, energy, and heavy equipment industries. For evaluation purposes, we apply our approach to an up-to-date case study of the TET-1 satellite bus. In particular, the attitude and orbit control software is considered. The behavioral models for the subsystem are developed, formally verified, and optimized.
SU-F-T-268: A Feasibility Study of Independent Dose Verification for Vero4DRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamashita, M; Kokubo, M; Institute of Biomedical Research and Innovation, Kobe, Hyogo
2016-06-15
Purpose: Vero4DRT (Mitsubishi Heavy Industries Ltd.) has been released for a few years. The treatment planning system (TPS) of Vero4DRT is dedicated, so measurement has been the only method of dose verification. There have been no reports of independent dose verification using a Clarkson-based algorithm for Vero4DRT. An independent dose verification software program for general-purpose linacs, using a modified Clarkson-based algorithm, was adapted for Vero4DRT. In this study, we evaluated the accuracy of the independent dose verification program and the feasibility of the secondary check for Vero4DRT. Methods: iPlan (Brainlab AG) was used as the TPS. PencilBeam Convolution was used as the dose calculation algorithm for IMRT, and X-ray Voxel Monte Carlo was used for the others. Simple MU Analysis (SMU, Triangle Products, Japan) was used as the independent dose verification software program, in which CT-based dose calculation was performed using a modified Clarkson-based algorithm. In this study, 120 patients' treatment plans were collected in our institute. The treatments were performed using conventional irradiation for lung and prostate, SBRT for lung, and step-and-shoot IMRT for prostate. Doses from the TPS and the SMU were compared, and confidence limits (CLs, mean ± 2SD %) were compared to those of the general-purpose linac. Results: The CLs for conventional irradiation (lung, prostate), SBRT (lung) and IMRT (prostate) were 2.2 ± 3.5% (CL of the general-purpose linac: 2.4 ± 5.3%), 1.1 ± 1.7% (−0.3 ± 2.0%), 4.8 ± 3.7% (5.4 ± 5.3%) and −0.5 ± 2.5% (−0.1 ± 3.6%), respectively. The CLs for Vero4DRT show results similar to those for the general-purpose linac. Conclusion: Independent dose verification for the new linac is clinically available as a secondary check, and we performed the check with a tolerance level similar to that of the general-purpose linac. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
Modeling and Analysis of Asynchronous Systems Using SAL and Hybrid SAL
NASA Technical Reports Server (NTRS)
Tiwari, Ashish; Dutertre, Bruno
2013-01-01
We present formal models and results of formal analysis of two different asynchronous systems. We first examine a mid-value select module that merges the signals coming from three different sensors that are each asynchronously sampling the same input signal. We then consider the phase locking protocol proposed by Daly, Hopkins, and McKenna. This protocol is designed to keep a set of non-faulty (asynchronous) clocks phase locked even in the presence of Byzantine-faulty clocks on the network. All models and verifications have been developed using the SAL model checking tools and the Hybrid SAL abstractor.
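The mid-value selection at the heart of the first model is simply a median over three redundant samples; a tiny sketch follows (the sample values are illustrative, and the SAL models additionally capture the asynchronous sampling timing, which is not shown here).

```python
def mid_value_select(a, b, c):
    """Mid-value selection over three redundant sensor samples: the output is
    the median, so a single erroneous channel cannot drive the result."""
    return sorted((a, b, c))[1]

# One faulty channel is masked by the other two.
print(mid_value_select(10.1, 10.2, 99.9))   # 10.2
```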
Xu, Haiyang; Wang, Ping
2016-01-01
In order to verify the real-time reliability of an unmanned aerial vehicle (UAV) flight control system and comply with the airworthiness certification standard, we proposed a model-based integration framework for modeling and verification of time properties. Combining the advantages of MARTE, this framework uses class diagrams to create the static model of the software system and state charts to create the dynamic model. Under the defined transformation rules, the MARTE model can be transformed into a formal integrated model, and the different parts of the model can be verified using existing formal tools. For the real-time specifications of the software system, we also proposed a generating algorithm for temporal logic formulas, which automatically extracts real-time properties from time-sensitive live sequence charts (TLSC). Finally, we modeled the simplified flight control system of a UAV to check its real-time properties. The results showed that the framework can be used to create the system model, as well as precisely analyze and verify the real-time reliability of the UAV flight control system. PMID:27918594
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramanathan, Arvind; Steed, Chad A; Pullum, Laura L
Compartmental models in epidemiology are widely used as a means to model disease spread mechanisms and to understand how one can best control the disease in case an outbreak of a widespread epidemic occurs. However, a significant challenge within the community is the development of approaches that can be used to rigorously verify and validate these models. In this paper, we present an approach to rigorously examine and verify the behavioral properties of compartmental epidemiological models under several common modeling scenarios, including birth/death rates and multi-host/pathogen species. Using metamorphic testing, a novel visualization tool, and model checking, we build a workflow that provides insights into the functionality of compartmental epidemiological models. Our initial results indicate that metamorphic testing can be used to verify the implementation of these models and provide insights into special conditions where these mathematical models may fail. The visualization front-end allows the end-user to scan through a variety of parameters commonly used in these models to elucidate the conditions under which an epidemic can occur. Further, specifying these models using a process algebra allows one to automatically construct behavioral properties that can be rigorously verified using model checking. Taken together, our approach allows for detecting implementation errors as well as handling conditions under which compartmental epidemiological models may fail to provide insights into disease spread dynamics.
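A minimal sketch of a metamorphic test for a compartmental model follows, using a population-scaling relation over a toy SIR implementation; the model, parameters, and relation are invented for illustration and are not the metamorphic relations, visualization, or process-algebra workflow used in the paper.

```python
def simulate_sir(beta, gamma, s0, i0, r0, dt=0.05, steps=4000):
    """Forward-Euler SIR integration used as the 'program under test'."""
    s, i, r = s0, i0, r0
    n = s0 + i0 + r0
    for _ in range(steps):
        new_inf, new_rec = beta * s * i / n, gamma * i
        s, i, r = s - dt * new_inf, i + dt * (new_inf - new_rec), r + dt * new_rec
    return s, i, r

def metamorphic_scaling_check(scale=10.0, tol=1e-6):
    """Metamorphic relation: scaling every initial compartment by a constant
    should scale the final compartments by the same constant, since the SIR
    equations depend only on population fractions."""
    base = simulate_sir(0.4, 0.1, 990.0, 10.0, 0.0)
    scaled = simulate_sir(0.4, 0.1, 990.0 * scale, 10.0 * scale, 0.0)
    return all(abs(scaled[k] - scale * base[k]) < tol * scale for k in range(3))

print(metamorphic_scaling_check())   # True if the implementation honours the relation
```

The value of such relations is that they catch implementation errors without requiring a known "correct" output for any single input, which matches the verification goal stated in the abstract.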
32 CFR Appendix A to Part 154 - Investigative Scope
Code of Federal Regulations, 2010 CFR
2010-07-01
... verification of citizenship. g. Central Intelligence Agency (CIA). The files of CIA contain information on.... Investigation Criteria for CIA Checks NAC, DNACI or ENTNAC Residence anywhere outside of the U.S. for a year or... files shall also be checked if subject has been an employee of CIA or when other sources indicate that...
32 CFR Appendix A to Part 154 - Investigative Scope
Code of Federal Regulations, 2011 CFR
2011-07-01
... verification of citizenship. g. Central Intelligence Agency (CIA). The files of CIA contain information on.... Investigation Criteria for CIA Checks NAC, DNACI or ENTNAC Residence anywhere outside of the U.S. for a year or... files shall also be checked if subject has been an employee of CIA or when other sources indicate that...
32 CFR Appendix A to Part 154 - Investigative Scope
Code of Federal Regulations, 2012 CFR
2012-07-01
... verification of citizenship. g. Central Intelligence Agency (CIA). The files of CIA contain information on.... Investigation Criteria for CIA Checks NAC, DNACI or ENTNAC Residence anywhere outside of the U.S. for a year or... files shall also be checked if subject has been an employee of CIA or when other sources indicate that...
32 CFR Appendix A to Part 154 - Investigative Scope
Code of Federal Regulations, 2014 CFR
2014-07-01
... verification of citizenship. g. Central Intelligence Agency (CIA). The files of CIA contain information on.... Investigation Criteria for CIA Checks NAC, DNACI or ENTNAC Residence anywhere outside of the U.S. for a year or... files shall also be checked if subject has been an employee of CIA or when other sources indicate that...
32 CFR Appendix A to Part 154 - Investigative Scope
Code of Federal Regulations, 2013 CFR
2013-07-01
... verification of citizenship. g. Central Intelligence Agency (CIA). The files of CIA contain information on.... Investigation Criteria for CIA Checks NAC, DNACI or ENTNAC Residence anywhere outside of the U.S. for a year or... files shall also be checked if subject has been an employee of CIA or when other sources indicate that...
Regulatory Conformance Checking: Logic and Logical Form
ERIC Educational Resources Information Center
Dinesh, Nikhil
2010-01-01
We consider the problem of checking whether an organization conforms to a body of regulation. Conformance is studied in a runtime verification setting. The regulation is translated to a logic, from which we synthesize monitors. The monitors are evaluated as the state of an organization evolves over time, raising an alarm if a violation is…
ERIC Educational Resources Information Center
Washington Consulting Group, Inc., Washington, DC.
Module 13 of the 17-module self-instructional course on student financial aid administration (designed for novice financial aid administrators and other institutional personnel) focuses on the verification procedure for checking the accuracy of applicant data used in making financial aid awards. The full course provides an introduction to the…
40 CFR 1065.362 - Non-stoichiometric raw exhaust FID O2 interference verification.
Code of Federal Regulations, 2013 CFR
2013-07-01
... air source during testing, use zero air as the FID burner's air source for this verification. (4) Zero the FID analyzer using the zero gas used during emission testing. (5) Span the FID analyzer using a span gas that you use during emission testing. (6) Check the zero response of the FID analyzer using...
A New "Moodle" Module Supporting Automatic Verification of VHDL-Based Assignments
ERIC Educational Resources Information Center
Gutierrez, Eladio; Trenas, Maria A.; Ramos, Julian; Corbera, Francisco; Romero, Sergio
2010-01-01
This work describes a new "Moodle" module developed to give support to the practical content of a basic computer organization course. This module goes beyond the mere hosting of resources and assignments. It makes use of an automatic checking and verification engine that works on the VHDL designs submitted by the students. The module automatically…
Experimental verification of layout physical verification of silicon photonics
NASA Astrophysics Data System (ADS)
El Shamy, Raghi S.; Swillam, Mohamed A.
2018-02-01
Silicon photonics has proved to be one of the best platforms for dense integration of photonic integrated circuits (PICs) due to the high refractive index contrast among its materials. Silicon on insulator (SOI) is a widespread photonics technology that supports a variety of devices for many applications. As the photonics market grows, the number of components in PICs increases, which increases the need for an automated physical verification (PV) process. This PV process will assure reliable fabrication of the PICs, as it checks both the manufacturability and the reliability of the circuit. However, the PV process is challenging in the case of PICs, as it requires running exhaustive electromagnetic (EM) simulations. Our group has recently proposed empirical closed-form models for the directional coupler and the waveguide bends based on the SOI technology. The models have shown very good agreement with both finite element method (FEM) and finite difference time domain (FDTD) solvers. These models save the huge time of 3D EM simulations and can easily be included in any electronic design automation (EDA) flow, as the equation parameters can be extracted directly from the layout. In this paper we present experimental verification of our previously proposed models. SOI directional couplers with different dimensions have been fabricated using electron beam lithography and measured. The measurement results for the fabricated devices have been compared to the derived models and show very good agreement. Moreover, the matching can reach 100% by calibrating certain parameters in the model.
Formal Verification of the Runway Safety Monitor
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu; Ciardo, Gianfranco
2006-01-01
The Runway Safety Monitor (RSM) designed by Lockheed Martin is part of NASA's effort to reduce runway accidents. We developed a Petri net model of the RSM protocol and used the model checking functions of our tool SMART to investigate a number of safety properties in RSM. To mitigate the impact of state-space explosion, we built a highly discretized model of the system, obtained by partitioning the monitored runway zone into a grid of smaller volumes and by considering scenarios involving only two aircraft. The model also assumes that there are no communication failures, such as bad input from radar or lack of incoming data, thus it relies on a consistent view of reality by all participants. In spite of these simplifications, we were able to expose potential problems in the RSM conceptual design. Our findings were forwarded to the design engineers, who undertook corrective action. Additionally, the results stress the efficiency attained by the new model checking algorithms implemented in SMART, and demonstrate their applicability to real-world systems.
NASA Technical Reports Server (NTRS)
Tijidjian, Raffi P.
2010-01-01
The TEAMS model analyzer is a supporting tool developed to work with models created with TEAMS (Testability, Engineering, and Maintenance System), which was developed by QSI. In an effort to reduce the time each TEAMS modeler must spend manually preparing reports for model reviews, a new tool has been developed as an aid for models developed in TEAMS. The software allows for the viewing, reporting, and checking of TEAMS models that are checked into the TEAMS model database. The software allows the user to selectively view models in a hierarchical tree outline that displays the components, failure modes, and ports. The reporting features allow the user to quickly gather statistics about the model and generate an input/output report covering all of the components. Rules can be automatically validated against the model, with a report generated containing any resulting inconsistencies. In addition to reducing manual effort, this software also provides an automated process framework for the Verification and Validation (V&V) effort that will follow development of these models. The aid of such an automated tool would have a significant impact on the V&V process.
Scalable and Accurate SMT-based Model Checking of Data Flow Systems
2013-10-30
guided by the semantics of the description language. In this project we developed instead a complementary and novel approach based on a somewhat brute...believe that our approach could help considerably in expanding the reach of abstract interpretation techniques to a variety of target languages, as...project. We worked on developing a framework for compositional verification that capitalizes on the fact that data-flow languages, such as Lustre, have
Garnerin, P; Arès, M; Huchet, A; Clergue, F
2008-12-01
The potential severity of wrong patient/procedure/site of surgery and the view that these events are avoidable, make the prevention of such errors a priority. An intervention was set up to develop a verification protocol for checking patient identity and the site of surgery with periodic audits to measure compliance while providing feedback. A nurse auditor performed the compliance audits in inpatients and outpatients during three consecutive 3-month periods and three 1-month follow-up periods; 11 audit criteria were recorded, as well as reasons for not performing a check. The nurse auditor provided feedback to the health professionals, including discussion of inadequate checks. 1,000 interactions between patients and their anaesthetist or nurse anaesthetist were observed. Between the first and second audit periods compliance with all audit criteria except "surgical site marked" noticeably improved, such as the proportion of patients whose identities were checked (62.6% to 81.4%); full compliance with protocol in patient identity checks (9.7% to 38.1%); proportion of site of surgery checks carried out (77.1% to 92.6%); and full compliance with protocol in site of surgery checks (32.2% to 52.0%). Thereafter, compliance was stable for most criteria. The reason for failure to perform checks of patient identity or site of surgery was mostly that the anaesthetist in charge had seen the patient at the preanaesthetic consultation. By combining the implementation of a verification protocol with periodic audits with feedback, the intervention changed practice and increased compliance with patient identity and site of surgery checks. The impact of the intervention was limited by communication problems between patients and professionals, and lack of collaboration with surgical services.
2015-09-01
Figure 20. Excel VBA Codes for Checker ... checks each of these assertions for detectability by Daikon. The checker is an Excel Visual Basic for Applications (VBA) script that checks the
NASA Technical Reports Server (NTRS)
Nguyen, Truong X.; Koppen, Sandra V.; Ely, Jay J.; Williams, Reuben A.; Smith, Laura J.; Salud, Maria Theresa P.
2004-01-01
This document summarizes the safety analysis performed on a Flight Guidance System (FGS) requirements model. In particular, the safety properties desired of the FGS model are identified and the presence of the safety properties in the model is formally verified. Chapter 1 provides an introduction to the entire project, while Chapter 2 gives a brief overview of the problem domain, the nature of accidents, model based development, and the four-variable model. Chapter 3 outlines the approach. Chapter 4 presents the results of the traditional safety analysis techniques and illustrates how the hazardous conditions associated with the system trace into specific safety properties. Chapter 5 presents the results of the formal methods analysis technique model checking that was used to verify the presence of the safety properties in the requirements model. Finally, Chapter 6 summarizes the main conclusions of the study, first and foremost that model checking is a very effective verification technique to use on discrete models with reasonable state spaces. Additional supporting details are provided in the appendices.
Applying Formal Verification Techniques to Ambient Assisted Living Systems
NASA Astrophysics Data System (ADS)
Benghazi, Kawtar; Visitación Hurtado, María; Rodríguez, María Luisa; Noguera, Manuel
This paper presents a verification approach based on timed traces semantics and MEDISTAM-RT [1] to check the fulfillment of non-functional requirements, such as timeliness and safety, and assure the correct functioning of the Ambient Assisted Living (AAL) systems. We validate this approach by its application to an Emergency Assistance System for monitoring people suffering from cardiac alteration with syncope.
Code of Federal Regulations, 2013 CFR
2013-07-01
... discrete-mode testing. For this check we consider water vapor a gaseous constituent. This verification does... for water removed from the sample done in post-processing according to § 1065.659 and it does not... humidification vessel that contains water. You must humidify NO2 span gas with another moist gas stream. We...
Code of Federal Regulations, 2014 CFR
2014-07-01
... discrete-mode testing. For this check we consider water vapor a gaseous constituent. This verification does... for water removed from the sample done in post-processing according to § 1065.659 (40 CFR 1066.620 for... contains water. You must humidify NO2 span gas with another moist gas stream. We recommend humidifying your...
Extension of specification language for soundness and completeness of service workflow
NASA Astrophysics Data System (ADS)
Viriyasitavat, Wattana; Xu, Li Da; Bi, Zhuming; Sapsomboon, Assadaporn
2018-05-01
A Service Workflow is an aggregation of distributed services to fulfill specific functionalities. With ever more services available, methodologies for selecting services against given requirements have become a main research subject in multiple disciplines. A number of researchers have contributed formal specification languages and model checking methods; however, existing methods have difficulty tackling the complexity of workflow composition. In this paper, we propose to formalize the specification language to reduce the complexity of workflow composition. To this end, we extend a specification language with formal logic, so that effective theorems can be derived for the verification of syntax, semantics, and inference rules in workflow composition. The logic-based approach automates compliance checking effectively. The Service Workflow Specification (SWSpec) has been extended and formulated, and the soundness, completeness, and consistency of SWSpec applications have been verified; note that a logic-based SWSpec is mandatory for the development of model checking. The application of the proposed SWSpec is demonstrated by examples addressing soundness, completeness, and consistency.
Towards Verification of Operational Procedures Using Auto-Generated Diagnostic Trees
NASA Technical Reports Server (NTRS)
Kurtoglu, Tolga; Lutz, Robyn; Patterson-Hine, Ann
2009-01-01
The design, development, and operation of complex space, lunar and planetary exploration systems require the development of general procedures that describe a detailed set of instructions capturing how mission tasks are performed. For both crewed and uncrewed NASA systems, mission safety and the accomplishment of the scientific mission objectives are highly dependent on the correctness of procedures. In this paper, we describe how to use the auto-generated diagnostic trees from existing diagnostic models to improve the verification of standard operating procedures. Specifically, we introduce a systematic method, namely the Diagnostic Tree for Verification (DTV), developed with the goal of leveraging the information contained within auto-generated diagnostic trees in order to check the correctness of procedures, to streamline the procedures by reducing the number of steps or the resources they use, and to propose alternative procedural steps adaptive to changing operational conditions. The application of the DTV method to a spacecraft electrical power system shows the feasibility of the approach and its range of capabilities.
NDEC: A NEA platform for nuclear data testing, verification and benchmarking
NASA Astrophysics Data System (ADS)
Díez, C. J.; Michel-Sendis, F.; Cabellos, O.; Bossant, M.; Soppera, N.
2017-09-01
The selection, testing, verification and benchmarking of evaluated nuclear data consists, in practice, of putting an evaluated file through a number of checking steps in which different computational codes verify that the file and the data it contains comply with different requirements. These requirements range from format compliance to good performance in application cases, while at the same time physical constraints and the agreement with experimental data are verified. At NEA, the NDEC (Nuclear Data Evaluation Cycle) platform aims at providing, in a user-friendly interface, a thorough diagnosis of the quality of a submitted evaluated nuclear data file. Such a diagnosis is based on the results of the different computational codes and routines which carry out the mentioned verifications, tests and checks. NDEC also seeks synergies with other existing NEA tools and databases, such as JANIS, DICE or NDaST, including them in its working scheme. Hence, this paper presents NDEC, its current development status and its usage in the JEFF nuclear data project.
Stern, Robin L; Heaton, Robert; Fraser, Martin W; Goddu, S Murty; Kirby, Thomas H; Lam, Kwok Leung; Molineu, Andrea; Zhu, Timothy C
2011-01-01
The requirement of an independent verification of the monitor units (MU) or time calculated to deliver the prescribed dose to a patient has been a mainstay of radiation oncology quality assurance. The need for and value of such a verification was obvious when calculations were performed by hand using look-up tables, and the verification was achieved by a second person independently repeating the calculation. However, in a modern clinic using CT/MR/PET simulation, computerized 3D treatment planning, heterogeneity corrections, and complex calculation algorithms such as convolution/superposition and Monte Carlo, the purpose of and methodology for the MU verification have come into question. In addition, since the verification is often performed using a simpler geometrical model and calculation algorithm than the primary calculation, exact or almost exact agreement between the two can no longer be expected. Guidelines are needed to help the physicist set clinically reasonable action levels for agreement. This report addresses the following charges of the task group: (1) To re-evaluate the purpose and methods of the "independent second check" for monitor unit calculations for non-IMRT radiation treatment in light of the complexities of modern-day treatment planning. (2) To present recommendations on how to perform verification of monitor unit calculations in a modern clinic. (3) To provide recommendations on establishing action levels for agreement between primary calculations and verification, and to provide guidance in addressing discrepancies outside the action levels. These recommendations are to be used as guidelines only and shall not be interpreted as requirements.
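As a schematic illustration of the kind of independent second check discussed in this report, the sketch below recomputes a point dose with a deliberately simple model and compares it to the planning-system value against an action level. The dosimetric factors and the 5% action level are illustrative assumptions for the sketch, not task-group recommendations.

```python
def independent_point_dose(mu, output_cgy_per_mu, tmr, scp, inv_sq):
    """Simplified point-dose estimate from monitor units (illustrative only)."""
    return mu * output_cgy_per_mu * tmr * scp * inv_sq

def second_check(primary_dose_cgy, verification_dose_cgy, action_level=0.05):
    """Flag the calculation if primary and verification disagree beyond the action level."""
    diff = abs(primary_dose_cgy - verification_dose_cgy) / primary_dose_cgy
    return diff, diff <= action_level

# Example: the TPS reports 200.0 cGy at the verification point;
# the simple model gives roughly 195 cGy with made-up factors.
verif = independent_point_dose(mu=230, output_cgy_per_mu=1.0, tmr=0.85,
                               scp=0.998, inv_sq=1.0)
diff, ok = second_check(200.0, verif)
print(f"difference = {diff:.1%}, within action level: {ok}")
```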
Compliance and Verification of Standards and Labelling Programs in China: Lessons Learned
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saheb, Yamina; Zhou, Nan; Fridley, David
2010-06-11
After implementing several energy efficiency standards and labels (30 products covered by MEPS, 50 products covered by voluntary labels and 19 products by mandatory labels), the China National Institute of Standardization (CNIS) is now implementing a verification and compliance mechanism to ensure that the energy information of labeled products complies with the requirements of their labels. CNIS is doing so by organizing check testing on a random basis for room air-conditioners, refrigerators, motors, heaters, computer displays, ovens, and self-ballasted lamps. The purpose of the check testing is to understand the implementation of the Chinese labeling scheme and help local authorities establish effective compliance mechanisms. In addition, to ensure robustness and consistency of testing results, CNIS has coordinated a round robin testing for room air conditioners. Eight laboratories (six Chinese, one Australian and one Japanese) have been involved in the round robin testing, and tests were performed on four sets of samples selected from the manufacturer's production line. This paper describes the methodology used in undertaking both check and round robin testing, provides analysis of testing results and reports on the findings. The analysis of both check and round robin testing demonstrated the benefits of a regularized verification and monitoring system for both laboratories and products, such as (i) identifying possible deviations between laboratories in order to correct them, (ii) improving the quality of testing facilities, and (iii) ensuring the accuracy and reliability of energy label information in order to strengthen the social credibility of the labeling program and the enforcement mechanism in place.
Compliance and Verification of Standards and Labeling Programs in China: Lessons Learned
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saheb, Yamina; Zhou, Nan; Fridley, David
2010-08-01
After implementing several energy efficiency standards and labels (30 products covered by MEPS, 50 products covered by voluntary labels and 19 products by mandatory labels), the China National Institute of Standardization (CNIS) is now implementing a verification and compliance mechanism to ensure that the energy information of labeled products complies with the requirements of their labels. CNIS is doing so by organizing check testing on a random basis for room air-conditioners, refrigerators, motors, heaters, computer displays, ovens, and self-ballasted lamps. The purpose of the check testing is to understand the implementation of the Chinese labeling scheme and help local authorities establish effective compliance mechanisms. In addition, to ensure robustness and consistency of testing results, CNIS has coordinated a round robin testing for room air conditioners. Eight laboratories (six Chinese, one Australian and one Japanese) have been involved in the round robin testing, and tests were performed on four sets of samples selected from the manufacturer's production line. This paper describes the methodology used in undertaking both check and round robin testing, provides analysis of testing results and reports on the findings. The analysis of both check and round robin testing demonstrated the benefits of a regularized verification and monitoring system for both laboratories and products, such as (i) identifying possible deviations between laboratories in order to correct them, (ii) improving the quality of testing facilities, and (iii) ensuring the accuracy and reliability of energy label information in order to strengthen the social credibility of the labeling program and the enforcement mechanism in place.
Assessing Requirements Quality through Requirements Coverage
NASA Technical Reports Server (NTRS)
Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt
2008-01-01
In model-based development, the development effort is centered around a formal description of the proposed software system, the model. This model is derived from some high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements and existing model coverage metrics such as the Modified Condition and Decision Coverage (MC/DC) used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.
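To make the MC/DC notion referenced above concrete, here is a small, self-contained sketch (our illustration, not the authors' tooling) that searches a test suite for independence pairs: for each condition, a pair of test vectors that differ only in that condition and produce different decision outcomes.

```python
from itertools import combinations

def mcdc_independence_pairs(tests, decision):
    """Return, per condition index, a pair of tests showing that the condition
    independently affects the decision outcome (MC/DC-style evidence)."""
    n = len(tests[0])
    pairs = {}
    for i in range(n):
        for a, b in combinations(tests, 2):
            only_i_differs = a[i] != b[i] and all(a[j] == b[j] for j in range(n) if j != i)
            if only_i_differs and decision(a) != decision(b):
                pairs[i] = (a, b)
                break
    return pairs

# Decision with three conditions: (c0 and c1) or c2
decision = lambda t: (t[0] and t[1]) or t[2]
suite = [(True, True, False), (False, True, False),
         (True, False, False), (True, False, True)]
pairs = mcdc_independence_pairs(suite, decision)
print(pairs, "MC/DC achieved:", len(pairs) == 3)
```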
C code generation from Petri-net-based logic controller specification
NASA Astrophysics Data System (ADS)
Grobelny, Michał; Grobelna, Iwona; Karatkevich, Andrei
2017-08-01
The article focuses on the programming of logic controllers. It is important that the program code of a logic controller executes flawlessly according to the primary specification. In the presented approach we generate C code for an AVR microcontroller from a rule-based logical model of a control process derived from a control interpreted Petri net. The same logical model is also used for formal verification of the specification by means of the model checking technique. The proposed rule-based logical model and formal rules of transformation ensure that the obtained implementation is consistent with the already verified specification. The approach is validated by practical experiments.
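The generation step described above can be pictured with a small sketch: a hypothetical rule-based model (a guard expression plus output assignments per rule) is turned into a C scan-cycle function for an AVR-style target. The rule format, signal names, and emitted code shape are our own illustrative assumptions, not the authors' transformation rules.

```python
# Each rule: (guard as a C expression over input bits, dict of output bit -> value)
RULES = [
    ("(x1 && !x2)", {"y1": 1}),
    ("(x2)",        {"y1": 0, "y2": 1}),
]

def emit_c(rules):
    """Emit a C function that evaluates all rules once per scan cycle."""
    lines = ["#include <stdint.h>",
             "#include <avr/io.h>",
             "",
             "void control_step(uint8_t x1, uint8_t x2,",
             "                  uint8_t *y1, uint8_t *y2) {"]
    for guard, actions in rules:
        body = " ".join(f"*{out} = {val};" for out, val in actions.items())
        lines.append(f"    if {guard} {{ {body} }}")
    lines.append("}")
    return "\n".join(lines)

print(emit_c(RULES))
```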
A Practical Approach to Implementing Real-Time Semantics
NASA Technical Reports Server (NTRS)
Luettgen, Gerald; Bhat, Girish; Cleaveland, Rance
1999-01-01
This paper investigates implementations of process algebras which are suitable for modeling concurrent real-time systems. It suggests an approach for efficiently implementing real-time semantics using dynamic priorities. For this purpose a process algebra with dynamic priority is defined, whose semantics corresponds one-to-one to traditional real-time semantics. The advantage of the dynamic-priority approach is that it drastically reduces the state-space sizes of the systems in question while preserving all properties of their functional and real-time behavior. The utility of the technique is demonstrated by a case study which deals with the formal modeling and verification of the SCSI-2 bus-protocol. The case study is carried out in the Concurrency Workbench of North Carolina, an automated verification tool in which the process algebra with dynamic priority is implemented. It turns out that the state space of the bus-protocol model is about an order of magnitude smaller than the one resulting from real-time semantics. The accuracy of the model is proved by applying model checking for verifying several mandatory properties of the bus protocol.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amadio, G.; et al.
An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting in parallel particles in complex geometries exploiting instruction level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both the multi-core and the many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics model effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
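One simple form such an automated statistical check can take is a two-sample test comparing quantities sampled by the vectorized model with those from the reference model. The sketch below applies a Kolmogorov-Smirnov test to two sets of sampled values; the stand-in data and the significance threshold are illustrative assumptions, not the GeantV test suite.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)

# Stand-ins for, e.g., secondary-particle energies sampled by the two implementations.
reference_samples  = rng.exponential(scale=1.0, size=100_000)
vectorized_samples = rng.exponential(scale=1.0, size=100_000)

stat, p_value = ks_2samp(reference_samples, vectorized_samples)
print(f"KS statistic = {stat:.4f}, p-value = {p_value:.3f}")
if p_value < 0.01:          # illustrative significance threshold
    print("distributions differ: flag the vectorized model for review")
else:
    print("no significant difference detected")
```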
Universal Verification Methodology Based Register Test Automation Flow.
Woo, Jae Hun; Cho, Yong Kwan; Park, Sun Kyu
2016-05-01
In today's SoC design, the number of registers has increased along with the complexity of hardware blocks. Register validation is a time-consuming and error-prone task. Therefore, we need an efficient way to perform verification with less effort in shorter time. In this work, we suggest a register test automation flow based on UVM (Universal Verification Methodology). UVM provides a standard methodology, called a register model, to facilitate stimulus generation and functional checking of registers. However, it is not easy for designers to create register models for their functional blocks or integrate models in the test-bench environment because it requires knowledge of SystemVerilog and UVM libraries. For the creation of register models, many commercial tools support register model generation from a register specification described in IP-XACT, but it is time-consuming to describe the register specification in IP-XACT format. For easy creation of register models, we propose a spreadsheet-based register template which is translated to an IP-XACT description, from which register models can be easily generated using commercial tools. On the other hand, we also automate all the steps involved in integrating the test-bench and generating test-cases, so that designers may use the register model without detailed knowledge of UVM or SystemVerilog. This automation flow involves generating and connecting test-bench components (e.g., driver, checker, bus adaptor, etc.) and writing a test sequence for each type of register test-case. With the proposed flow, designers can save a considerable amount of time to verify the functionality of registers.
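A minimal sketch of the spreadsheet-to-IP-XACT step might look as follows: rows with name, offset, width, and access are turned into a deliberately simplified, not schema-complete, IP-XACT-style register description. The CSV layout and column names are assumptions for illustration only.

```python
import csv, io
from xml.sax.saxutils import escape

CSV_TEXT = """name,offset,size,access,reset
CTRL,0x00,32,read-write,0x00000000
STATUS,0x04,32,read-only,0x00000001
"""

def rows_to_ipxact(csv_text):
    """Translate spreadsheet rows into simplified IP-XACT register elements."""
    out = ["<spirit:addressBlock>"]
    for row in csv.DictReader(io.StringIO(csv_text)):
        out += [
            "  <spirit:register>",
            f"    <spirit:name>{escape(row['name'])}</spirit:name>",
            f"    <spirit:addressOffset>{row['offset']}</spirit:addressOffset>",
            f"    <spirit:size>{row['size']}</spirit:size>",
            f"    <spirit:access>{row['access']}</spirit:access>",
            f"    <spirit:reset><spirit:value>{row['reset']}</spirit:value></spirit:reset>",
            "  </spirit:register>",
        ]
    out.append("</spirit:addressBlock>")
    return "\n".join(out)

print(rows_to_ipxact(CSV_TEXT))
```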
Multi-canister overpack project -- verification and validation, MCNP 4A
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldmann, L.H.
This supporting document contains the software verification and validation (V and V) package used for Phase 2 design of the Spent Nuclear Fuel Multi-Canister Overpack. V and V packages for both ANSYS and MCNP are included. Description of Verification Run(s): This software requires that it be compiled specifically for the machine it is to be used on. Therefore, to facilitate ease in the verification process, the software automatically runs 25 sample problems to ensure proper installation and compilation. Once the runs are completed, the software checks for verification by performing a file comparison on the new output file and the old output file. Any differences between any of the files will cause a verification error. Due to the manner in which the verification is completed, a verification error does not necessarily indicate a problem. This indicates that a closer look at the output files is needed to determine the cause of the error.
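The installation check described above boils down to comparing fresh sample-problem output against stored reference output and reporting any difference for manual review. A generic sketch of that comparison is shown below; the file names are placeholders, and a real MCNP output comparison would need run-specific lines such as timestamps filtered out.

```python
import difflib
from pathlib import Path

def compare_outputs(new_path, ref_path, skip_prefixes=("  date:",)):
    """Return the unified diff between a new output file and its reference,
    ignoring lines that legitimately differ from run to run (e.g. timestamps)."""
    def load(p):
        return [ln for ln in Path(p).read_text().splitlines()
                if not ln.startswith(skip_prefixes)]
    return list(difflib.unified_diff(load(ref_path), load(new_path),
                                     fromfile=str(ref_path), tofile=str(new_path),
                                     lineterm=""))

# diff = compare_outputs("sample01.out", "sample01.ref")   # placeholder file names
# print("verification error: outputs differ" if diff else "outputs identical")
# print("\n".join(diff))
```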
Formally verifying human–automation interaction as part of a system model: limitations and tradeoffs
Bass, Ellen J.
2011-01-01
Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human–automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human–automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two phased process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation; and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE. PMID:21572930
Simple method to verify OPC data based on exposure condition
NASA Astrophysics Data System (ADS)
Moon, James; Ahn, Young-Bae; Oh, Sey-Young; Nam, Byung-Ho; Yim, Dong Gyu
2006-03-01
In a world where sub-100 nm lithography tools are everyday household items for device makers, devices are shrinking at a rate that no one ever imagined. With devices shrinking at such a high rate, the demand placed on Optical Proximity Correction (OPC) is like never before. To meet this demand with respect to the shrinkage rate of the device, more aggressive OPC tactics are involved. Aggressive OPC tactics are a must for sub-100 nm lithography technology, but they eventually result in greater room for OPC error and greater complexity of the OPC data. Until now, Optical Rule Check (ORC) or Design Rule Check (DRC) was used to verify this complex OPC error. But each of these methods has its pros and cons. ORC verification of OPC data is rather accurate process-wise, but inspection of a full-chip device requires a lot of money (computers, software, ...) and patience (run time). DRC, however, has no such disadvantage, but the accuracy of the verification is a total downfall process-wise. In this study, we were able to create a new method for OPC data verification that combines the best of both the ORC and DRC verification methods. We created a method that inspects the biasing of the OPC data with respect to the illumination condition of the process involved. This new method for verification was applied to the 80 nm tech ISOLATION and GATE layers of a 512M DRAM device and showed accuracy equivalent to ORC inspection with run time comparable to that of DRC verification.
Enhancement and Verification of the Navy CASEE Model (Calendar Year 1982 Task).
1982-12-15
Trust Based Evaluation of Wikipedia's Contributors
NASA Astrophysics Data System (ADS)
Krupa, Yann; Vercouter, Laurent; Hübner, Jomi Fred; Herzig, Andreas
Wikipedia is an encyclopedia whose content anybody can change. Some users, self-proclaimed "patrollers", regularly check recent changes in order to delete or correct those that damage article integrity. The huge quantity of updates leads some articles to remain polluted for a certain time before being corrected. In this work, we show how a multiagent trust model can help patrollers in their task of controlling Wikipedia. To direct the patrollers' verification towards suspicious contributors, our work relies on a formalisation of Castelfranchi & Falcone's social trust theory to assist them by representing their trust model in a cognitive way.
Jabbari, Keyvan; Pashaei, Fakhereh; Ay, Mohammad R.; Amouheidari, Alireza; Tavakoli, Mohammad B.
2018-01-01
Background: MapCHECK2 is a two-dimensional diode-array planar dosimetry verification system. Dosimetric results are evaluated with the gamma index. This study aims to provide comprehensive information on the impact of various factors on the gamma index values of MapCHECK2, which is mostly used for IMRT dose verification. Methods: Seven fields were planned for 6 and 18 MV photons. The azimuthal angle is defined as any rotation of the collimators or the MapCHECK2 around the central axis, and was varied from 5 to −5°. The gantry angle was changed from −8 to 8°. Isodose sampling resolution was studied in the range of 0.5 to 4 mm. The effects of additional buildup on the gamma index were also assessed in three cases. Gamma test acceptance criteria were 3%/3 mm. Results: A change of azimuthal angle over a 5° interval reduced the gamma index value by about 9%. The results of putting buildups of various thicknesses on the MapCHECK2 surface showed that the gamma index generally improved with thicker buildup, especially for 18 MV. Changing the sampling resolution from 4 to 2 mm resulted in an increase in the gamma index by about 3.7%. A deviation of the gantry over 8° intervals in either direction changed the gamma index only by about 1.6% for 6 MV and 2.1% for 18 MV. Conclusion: Among the studied parameters, the azimuthal angle is one of the most effective factors on the gamma index value. The gantry angle deviation and sampling resolution have less effect on gamma index value reduction. PMID:29535922
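For readers unfamiliar with the gamma index used throughout, a minimal one-dimensional sketch of the standard computation (3%/3 mm criteria, global dose normalization) is given below. It is a textbook-style illustration on toy profiles, not the MapCHECK2 vendor algorithm.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_crit=0.03, dta_mm=3.0):
    """Global-normalization 1D gamma index for each reference point."""
    d_max = d_ref.max()
    gammas = []
    for xr, dr in zip(x_ref, d_ref):
        dist2 = ((x_eval - xr) / dta_mm) ** 2
        dose2 = ((d_eval - dr) / (dose_crit * d_max)) ** 2
        gammas.append(np.sqrt(dist2 + dose2).min())
    return np.array(gammas)

# Toy profiles: evaluated dose shifted by 1 mm and scaled by 1%.
x = np.linspace(-50, 50, 201)                      # positions in mm
ref = np.exp(-(x / 20.0) ** 2) * 100.0             # reference dose (arbitrary units)
ev  = 1.01 * np.exp(-((x - 1.0) / 20.0) ** 2) * 100.0
g = gamma_1d(x, ref, x, ev)
print(f"gamma pass rate (gamma <= 1): {np.mean(g <= 1.0):.1%}")
```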
Verification Test for Ultra-Light Deployment Mechanism for Sectioned Deployable Antenna Reflectors
NASA Astrophysics Data System (ADS)
Zajac, Kai; Schmidt, Tilo; Schiller, Marko; Seifart, Klaus; Schmalbach, Matthias; Scolamiero, Lucio
2013-09-01
The ultra-light deployment mechanism (UDM) is based on three carbon fibre reinforced plastics (CFRP) curved tape springs made of carbon fibre / cyanate ester prepregs. In the frame of this activity, its suitability for space application in the deployment of solid reflector antenna sections was investigated. A projected full-reflector diameter of 4 m to 7 m and a specific mass on the order of 2.6 kg/m2 were the focus for requirement derivation. Extensive verification tests, including health checks as well as environmental and functional tests, were carried out with an engineering model to enable representative characterization of the UDM unit. This paper presents the design and a technical description of the UDM as well as a summary of the achieved development status with respect to test results and possible design improvements.
Symbolic Analysis of Concurrent Programs with Polymorphism
NASA Technical Reports Server (NTRS)
Rungta, Neha Shyam
2010-01-01
The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant in testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus in parts of the behavior space more likely to contain an error.
On the verification of intransitive noninterference in multilevel security.
Ben Hadj-Alouane, Nejib; Lafrance, Stéphane; Lin, Feng; Mullins, John; Yeddes, Mohamed Moez
2005-10-01
We propose an algorithmic approach to the problem of verification of the property of intransitive noninterference (INI), using tools and concepts of discrete event systems (DES). INI can be used to characterize and solve several important security problems in multilevel security systems. In a previous work, we have established the notion of iP-observability, which precisely captures the property of INI. We have also developed an algorithm for checking iP-observability by indirectly checking P-observability for systems with at most three security levels. In this paper, we generalize the results for systems with any finite number of security levels by developing a direct method for checking iP-observability, based on an insightful observation that the iP function is a left congruence in terms of relations on formal languages. To demonstrate the applicability of our approach, we propose a formal method to detect denial of service vulnerabilities in security protocols based on INI. This method is illustrated using the TCP/IP protocol. The work extends the theory of supervisory control of DES to a new application domain.
Low-cost and high-speed optical mark reader based on an intelligent line camera
NASA Astrophysics Data System (ADS)
Hussmann, Stephan; Chan, Leona; Fung, Celine; Albrecht, Martin
2003-08-01
Optical Mark Recognition (OMR) is thoroughly reliable and highly efficient provided that high standards are maintained at both the planning and implementation stages. It is necessary to ensure that OMR forms are designed with due attention to data integrity checks, that the best use is made of features built into the OMR, that data integrity is checked before the data is processed, and that data is validated before it is processed. This paper describes the design and implementation of an OMR prototype system for marking multiple-choice tests automatically. Parameter testing was carried out before the platform and the multiple-choice answer sheet were designed. Position recognition and position verification methods have been developed and implemented in an intelligent line scan camera. The position recognition process is implemented in a Field Programmable Gate Array (FPGA), whereas the verification process is implemented in a micro-controller. The verified results are then sent to the Graphical User Interface (GUI) for answer checking and statistical analysis. At the end of the paper the proposed OMR system is compared with commercially available systems on the market.
Electroacoustic verification of frequency modulation systems in cochlear implant users.
Fidêncio, Vanessa Luisa Destro; Jacob, Regina Tangerino de Souza; Tanamati, Liége Franzini; Bucuvic, Érika Cristina; Moret, Adriane Lima Mortari
2017-12-26
The frequency modulation system is a device that helps to improve speech perception in noise and is considered the most beneficial approach to improve speech recognition in noise in cochlear implant users. According to guidelines, there is a need to perform a check before fitting the frequency modulation system. Although there are recommendations regarding the behavioral tests that should be performed at the fitting of the frequency modulation system to cochlear implant users, there are no published recommendations regarding the electroacoustic test that should be performed. The objective was to perform and determine the validity of an electroacoustic verification test for frequency modulation systems coupled to different cochlear implant speech processors. The sample included 40 participants between 5 and 18 years of age, users of four different models of speech processors. For the electroacoustic evaluation, we used the Audioscan Verifit device with the HA-1 coupler and the listening check devices corresponding to each speech processor model. In cases where transparency was not achieved, a modification was made to the frequency modulation gain adjustment, and we used the Brazilian version of the "Phrases in Noise Test" to evaluate speech perception in competitive noise. Transparency between the frequency modulation system and the cochlear implant was observed in 85% of the participants evaluated. After adjusting the gain of the frequency modulation receiver in the other participants, the devices showed transparency when the electroacoustic verification test was repeated. It was also observed that patients demonstrated better performance in speech perception in noise after a new adjustment; that is, in these cases, electroacoustic transparency produced behavioral transparency. The suggested electroacoustic evaluation protocol was effective in evaluating transparency between the frequency modulation system and the cochlear implant. Adjusting the speech processor and the frequency modulation system gain is essential when fitting this device. Copyright © 2017 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.
CMM Interim Check Design of Experiments (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montano, Joshua Daniel
2015-07-29
Coordinate Measuring Machines (CMM) are widely used in industry, throughout the Nuclear Weapons Complex and at Los Alamos National Laboratory (LANL) to verify part conformance to design definition. Calibration cycles for CMMs at LANL are predominantly one year in length and include a weekly interim check to reduce risk. The CMM interim check makes use of Renishaw's Machine Checking Gauge, an off-the-shelf product that simulates a large sphere within a CMM's measurement volume and allows for error estimation. As verification of the interim check process, a design of experiments investigation was proposed to test a couple of key factors (location and inspector). The results from the two-factor factorial experiment showed that location influenced results more than the inspector or the interaction.
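As a sketch of how a two-factor factorial analysis like the one described can be summarized, the snippet below estimates the main effects and the interaction effect from a replicated 2x2 design. The numbers are made up, standing in for sphere-size errors measured at two locations by two inspectors; this is an illustration of the analysis style, not the report's data.

```python
import numpy as np

# Hypothetical measurement errors (micrometres), 3 replicates per cell:
# first key = location (front, back), second key = inspector (A, B)
data = {
    ("front", "A"): [1.2, 1.1, 1.3], ("front", "B"): [1.2, 1.4, 1.1],
    ("back",  "A"): [2.0, 2.1, 1.9], ("back",  "B"): [2.2, 2.0, 2.1],
}
means = {k: np.mean(v) for k, v in data.items()}

loc_effect = np.mean([means[("back", i)] for i in "AB"]) - \
             np.mean([means[("front", i)] for i in "AB"])
insp_effect = np.mean([means[(l, "B")] for l in ("front", "back")]) - \
              np.mean([means[(l, "A")] for l in ("front", "back")])
interaction = (means[("back", "B")] - means[("back", "A")]) - \
              (means[("front", "B")] - means[("front", "A")])

print(f"location main effect:  {loc_effect:+.2f} um")
print(f"inspector main effect: {insp_effect:+.2f} um")
print(f"interaction effect:    {interaction:+.2f} um")
```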
75 FR 12811 - Petition for Waiver of Compliance
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-17
... verifying accuracy of the check sum and CRC values of all programmable elements used in the solid-state... software being used. This verification is done by comparing the parameters found on all programmable...
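The verification described in the waiver petition, comparing checksum/CRC values of programmable elements against expected values, can be pictured with a short sketch; the file name and expected CRC below are placeholders, not values from the petition.

```python
import zlib
from pathlib import Path

def crc32_of(path):
    """CRC-32 of a programmable element's binary image."""
    return zlib.crc32(Path(path).read_bytes()) & 0xFFFFFFFF

def verify(path, expected_crc):
    """Compare the computed CRC against the value recorded at release time."""
    actual = crc32_of(path)
    ok = actual == expected_crc
    print(f"{path}: computed 0x{actual:08X}, expected 0x{expected_crc:08X} -> "
          f"{'OK' if ok else 'MISMATCH'}")
    return ok

# verify("vital_logic.hex", 0xDEADBEEF)   # placeholder image name and expected value
```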
12 CFR Appendix A to Part 749 - Record Retention Guidelines
Code of Federal Regulations, 2010 CFR
2010-01-01
... account verification. (e) Applications for membership and joint share account agreements. (f) Journal and... required by law. (d) Cash received vouchers. (e) Journal vouchers. (f) Canceled checks. (g) Bank statements...
Projected Impact of Compositional Verification on Current and Future Aviation Safety Risk
NASA Technical Reports Server (NTRS)
Reveley, Mary S.; Withrow, Colleen A.; Leone, Karen M.; Jones, Sharon M.
2014-01-01
The projected impact of compositional verification research conducted by the National Aeronautic and Space Administration System-Wide Safety and Assurance Technologies on aviation safety risk was assessed. Software and compositional verification was described. Traditional verification techniques have two major problems: testing at the prototype stage where error discovery can be quite costly and the inability to test for all potential interactions leaving some errors undetected until used by the end user. Increasingly complex and nondeterministic aviation systems are becoming too large for these tools to check and verify. Compositional verification is a "divide and conquer" solution to addressing increasingly larger and more complex systems. A review of compositional verification research being conducted by academia, industry, and Government agencies is provided. Forty-four aviation safety risks in the Biennial NextGen Safety Issues Survey were identified that could be impacted by compositional verification and grouped into five categories: automation design; system complexity; software, flight control, or equipment failure or malfunction; new technology or operations; and verification and validation. One capability, 1 research action, 5 operational improvements, and 13 enablers within the Federal Aviation Administration Joint Planning and Development Office Integrated Work Plan that could be addressed by compositional verification were identified.
Environment Modeling Using Runtime Values for JPF-Android
NASA Technical Reports Server (NTRS)
van der Merwe, Heila; Tkachuk, Oksana; Nel, Seal; van der Merwe, Brink; Visser, Willem
2015-01-01
Software applications are developed to be executed in a specific environment. This environment includes external native libraries that add functionality to the application and drivers that fire the application execution. For testing and verification, the environment of an application is simplified and abstracted using models or stubs. Empty stubs, returning default values, are simple to generate automatically, but they do not perform well when the application expects specific return values. Symbolic execution is used to find input parameters for drivers and return values for library stubs, but it struggles to detect the values of complex objects. In this work-in-progress paper, we explore an approach to generate drivers and stubs based on values collected during runtime instead of using default values. Entry points and methods that need to be modeled are instrumented to log their parameters and return values. The instrumented applications are then executed using a driver and instrumented libraries. The values collected during runtime are used to generate driver and stub values on-the-fly that improve coverage during verification by enabling the execution of code that previously crashed or was missed. We are implementing this approach to improve the environment model of JPF-Android, our model checking and analysis tool for Android applications.
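The idea of logging concrete runtime values and replaying them from stubs can be sketched in a few lines. The decorator-based recorder below is our own illustration of the concept, not the JPF-Android implementation (which instruments Java bytecode); the library call it wraps is hypothetical.

```python
import functools

RECORDED = {}   # (function name, args) -> return value observed at runtime

def record(fn):
    """Log parameters and return values of an environment method during a real run."""
    @functools.wraps(fn)
    def wrapper(*args):
        result = fn(*args)
        RECORDED[(fn.__name__, args)] = result
        return result
    return wrapper

def make_stub(name, default=None):
    """Stub that replays recorded values and falls back to a default otherwise."""
    def stub(*args):
        return RECORDED.get((name, args), default)
    return stub

@record
def get_network_state(interface):      # hypothetical environment/library call
    return "CONNECTED" if interface == "wifi" else "DISCONNECTED"

get_network_state("wifi")              # real run populates the log
stub = make_stub("get_network_state")
print(stub("wifi"))                    # replays "CONNECTED" during verification
print(stub("bluetooth"))               # unseen input -> default (None)
```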
HDM/PASCAL Verification System User's Manual
NASA Technical Reports Server (NTRS)
Hare, D.
1983-01-01
The HDM/Pascal verification system is a tool for proving the correctness of programs written in PASCAL and specified in the Hierarchical Development Methodology (HDM). This document assumes an understanding of PASCAL, HDM, program verification, and the STP system. The steps toward verification which this tool provides are parsing programs and specifications, checking the static semantics, and generating verification conditions. Some support functions are provided such as maintaining a data base, status management, and editing. The system runs under the TOPS-20 and TENEX operating systems and is written in INTERLISP. However, no knowledge is assumed of these operating systems or of INTERLISP. The system requires three executable files, HDMVCG, PARSE, and STP. Optionally, the editor EMACS should be on the system in order for the editor to work. The file HDMVCG is invoked to run the system. The files PARSE and STP are used as lower forks to perform the functions of parsing and proving.
Verification for measurement-only blind quantum computing
NASA Astrophysics Data System (ADS)
Morimae, Tomoyuki
2014-06-01
Blind quantum computing is a new secure quantum computing protocol where a client who does not have any sophisticated quantum technology can delegate her quantum computing to a server without leaking any privacy. It is known that a client who has only a measurement device can perform blind quantum computing [T. Morimae and K. Fujii, Phys. Rev. A 87, 050301(R) (2013), 10.1103/PhysRevA.87.050301]. It has been an open problem whether the protocol can enjoy the verification, i.e., the ability of the client to check the correctness of the computing. In this paper, we propose a protocol of verification for the measurement-only blind quantum computing.
NASA Astrophysics Data System (ADS)
Zamani, K.; Bombardelli, F. A.
2013-12-01
The ADR equation describes many physical phenomena of interest in the field of water quality in natural streams and groundwater. In many cases, such as density-driven flow, multiphase reactive transport, and sediment transport, one or more terms in the ADR equation may become nonlinear. For that reason, numerical tools are the only practical choice to solve these PDEs. All numerical solvers developed for the transport equation need to undergo a code verification procedure before they are put into practice. Code verification is a mathematical activity to uncover failures and check for rigorous discretization of PDEs and implementation of initial/boundary conditions. In the context of computational PDEs, verification is not a well-defined procedure with a clear path. Thus, verification tests should be designed and implemented with in-depth knowledge of numerical algorithms and the physics of the phenomena, as well as the mathematical behavior of the solution. Even test results need to be mathematically analyzed to distinguish between an inherent limitation of an algorithm and a coding error. Therefore, it is well known that code verification is something of an art, in which innovative methods and case-based tricks are very common. This study presents full verification of a general transport code. To that end, a complete test suite is designed to probe the ADR solver comprehensively and discover all possible imperfections. In this study we convey our experiences in finding several errors which were not detectable with routine verification techniques. We developed a test suite including hundreds of unit tests and system tests. The test package increases gradually in complexity, such that tests start simple and build up to the most sophisticated level. Appropriate verification metrics are defined for the required capabilities of the solver as follows: mass conservation, convergence order, capability in handling stiff problems, nonnegative concentration, shape preservation, and spurious wiggles. Thereby, we provide objective, quantitative values as opposed to subjective qualitative descriptions such as 'weak' or 'satisfactory' agreement with those metrics. We start testing from a simple case of unidirectional advection, then bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity, and reactions. For all of the mentioned cases we conduct mesh convergence tests. These tests compare the results' order of accuracy versus the formal order of accuracy of the discretization. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh convergence study and appropriate remedies are also discussed. For the cases in which appropriate benchmarks for a mesh convergence study are not available, we utilize Symmetry, Complete Richardson Extrapolation and the Method of False Injection to uncover bugs. Detailed discussions of the capabilities of the mentioned code verification techniques are given. Auxiliary subroutines for automation of the test suite and report generation are designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the ADR solver's capabilities. Such information is essential for any rigorous computational modeling of ADR equations for surface/subsurface pollution transport.
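One of the metrics listed, convergence order, is typically obtained by comparing errors on successively refined grids. A generic sketch of that check follows, using a toy discretization with a known exact answer (a central difference standing in for an ADR benchmark); it illustrates the metric itself, not the authors' solver or test suite.

```python
import numpy as np

def observed_order(errors, h):
    """Observed order of accuracy from errors on successively refined grids:
    p = log(e_coarse / e_fine) / log(h_coarse / h_fine)."""
    return [np.log(errors[i] / errors[i + 1]) / np.log(h[i] / h[i + 1])
            for i in range(len(errors) - 1)]

# Toy discretization with a known exact answer: central difference of sin(x) at x = 1.
exact = np.cos(1.0)
hs = [0.1 / 2 ** k for k in range(5)]
errs = [abs((np.sin(1.0 + h) - np.sin(1.0 - h)) / (2 * h) - exact) for h in hs]

print("observed orders:", [f"{p:.2f}" for p in observed_order(errs, hs)])
# Values near 2.0 confirm the formal second-order accuracy of the scheme.
```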
Verification using Satisfiability Checking, Predicate Abstraction, and Craig Interpolation
2008-09-01
MESA: Message-Based System Analysis Using Runtime Verification
NASA Technical Reports Server (NTRS)
Shafiei, Nastaran; Tkachuk, Oksana; Mehlitz, Peter
2017-01-01
In this paper, we present a novel approach and framework for runtime verification of large, safety-critical messaging systems. This work was motivated by verifying the System Wide Information Management (SWIM) project of the Federal Aviation Administration (FAA). SWIM provides live air traffic, site, and weather data streams for the whole National Airspace System (NAS), which can easily amount to several hundred messages per second. Such safety-critical systems cannot be instrumented; therefore, verification and monitoring have to happen using a nonintrusive approach, by connecting to a variety of network interfaces. Due to the large number of potential properties to check, the verification framework needs to support efficient formulation of properties with a suitable Domain Specific Language (DSL). Our approach is to utilize a distributed system that is geared towards connectivity and scalability and to interface it at the message queue level to a powerful verification engine. We implemented our approach in the tool called MESA: Message-Based System Analysis, which leverages the open source projects RACE (Runtime for Airspace Concept Evaluation) and TraceContract. RACE is a platform for instantiating and running highly concurrent and distributed systems and enables connectivity to SWIM and scalability. TraceContract is a runtime verification tool that allows for checking traces against properties specified in a powerful DSL. We applied our approach to verify a SWIM service against several requirements. We found errors such as duplicate and out-of-order messages.
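As a rough illustration of the kind of message-ordering property mentioned above (a hedged Python sketch with hypothetical sequence-number events, not the TraceContract DSL or MESA code), a trace monitor for duplicate and out-of-order messages might look like this:

```python
class MessageOrderMonitor:
    """Flags duplicate and out-of-order messages in a stream keyed by sequence number."""

    def __init__(self):
        self.seen = set()
        self.last_seq = None
        self.violations = []

    def on_message(self, seq, payload=None):
        if seq in self.seen:
            self.violations.append(("duplicate", seq))
        elif self.last_seq is not None and seq < self.last_seq:
            self.violations.append(("out-of-order", seq))
        self.seen.add(seq)
        self.last_seq = seq if self.last_seq is None else max(self.last_seq, seq)

monitor = MessageOrderMonitor()
for seq in [1, 2, 2, 5, 4]:          # hypothetical message sequence numbers
    monitor.on_message(seq)
print(monitor.violations)            # [('duplicate', 2), ('out-of-order', 4)]
```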
Resource Public Key Infrastructure Extension
2012-01-01
tests for checking compliance with the RFC 3779 extensions that are used in the RPKI. These tests also were used to identify an error in the OpenSSL ...rsync, OpenSSL, Cryptlib, and MySQL/ODBC. We assume that the adversaries can exploit any publicly known vulnerability in this software. • Server...NULL, set FLAG_NOCHAIN in Ctemp, defer verification. T = P Use OpenSSL to verify certificate chain S using trust anchor T, checking signature and
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, X; Yin, Y; Lin, X
Purpose: To assess the preliminary feasibility of an automated treatment planning verification system in cervical cancer IMRT pre-treatment dose verification. Methods: The study randomly selected clinical IMRT treatment planning data for twenty patients with cervical cancer; all IMRT plans were divided into 7 fields to meet the dosimetric goals using a commercial treatment planning system (Pinnacle Version 9.2 and Eclipse Version 13.5). The plans were exported to the Mobius 3D (M3D) server; percentage differences in the volume of regions of interest (ROI) and in the calculated dose to the target and organs at risk were evaluated, in order to validate the accuracy of the automated treatment planning verification system. Results: The volume differences for Pinnacle to M3D were smaller than those for Eclipse to M3D in the ROIs; the largest differences were 0.22 ± 0.69% and 3.5 ± 1.89% for Pinnacle and Eclipse, respectively. M3D showed slightly better agreement in the dose to the target and organs at risk compared with the TPS. After recalculating the plans with M3D, the dose differences for Pinnacle were smaller than for Eclipse on average, and results were within 3%. Conclusion: The method of utilizing the automated treatment planning verification system to validate the accuracy of plans is convenient, but more clinical patient cases are needed to determine the acceptable scope of differences. At present, it should be used as a secondary check tool to improve safety in clinical treatment planning.
Model Checker for Java Programs
NASA Technical Reports Server (NTRS)
Visser, Willem
2007-01-01
Java Pathfinder (JPF) is a verification and testing environment for Java that integrates model checking, program analysis, and testing. JPF consists of a custom-made Java Virtual Machine (JVM) that interprets bytecode, combined with a search interface to allow the complete behavior of a Java program to be analyzed, including interleavings of concurrent programs. JPF is implemented in Java, and its architecture is highly modular to support rapid prototyping of new features. JPF is an explicit-state model checker, because it enumerates all visited states and, therefore, suffers from the state-explosion problem inherent in analyzing large programs. It is suited to analyzing programs of less than 10 kLOC, but has been successfully applied to finding errors in concurrent programs of up to 100 kLOC. When an error is found, a trace from the initial state to the error is produced to guide the debugging. JPF works at the bytecode level, meaning that all of Java can be model-checked. By default, the software checks for all runtime errors (uncaught exceptions), assertion violations (it supports Java's assert), and deadlocks. JPF uses garbage collection and symmetry reductions of the heap during model checking to reduce state explosion, as well as dynamic partial-order reductions to lower the number of interleavings analyzed. JPF is capable of symbolic execution of Java programs, including symbolic execution of complex data such as linked lists and trees. JPF is extensible as it allows for the creation of listeners that can subscribe to events during searches. The creation of dedicated code to be executed in place of regular classes is supported and allows users to easily handle native calls and to improve the efficiency of the analysis.
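To illustrate the general idea of explicit-state model checking described above (a hedged sketch only, not JPF's actual architecture or bytecode-level search; the toy counter system is hypothetical), the following fragment enumerates states breadth-first, stores visited states to curb re-exploration, and reconstructs a trace from the initial state to the first error found:

```python
from collections import deque

def explicit_state_check(initial_state, successors, is_error):
    """Breadth-first explicit-state search; returns a counterexample trace or None.

    successors(state) -> iterable of next states; is_error(state) -> bool.
    States must be hashable so each visited state is stored only once.
    """
    parent = {initial_state: None}
    frontier = deque([initial_state])
    while frontier:
        state = frontier.popleft()
        if is_error(state):
            trace = []
            while state is not None:          # rebuild path from init to error
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for nxt in successors(state):
            if nxt not in parent:             # state matching avoids re-exploration
                parent[nxt] = state
                frontier.append(nxt)
    return None

# Toy example: two counters that must never both reach 2 (the "error" state)
succ = lambda s: [(min(s[0] + 1, 2), s[1]), (s[0], min(s[1] + 1, 2))]
print(explicit_state_check((0, 0), succ, lambda s: s == (2, 2)))
```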
NASA Technical Reports Server (NTRS)
Mehlitz, Peter
2005-01-01
JPF is an explicit-state software model checker for Java bytecode. Today, JPF is a Swiss army knife for all sorts of runtime-based verification purposes. This basically means JPF is a Java virtual machine that executes your program not just once (like a normal VM), but theoretically in all possible ways, checking for property violations like deadlocks or unhandled exceptions along all potential execution paths. If it finds an error, JPF reports the whole execution that leads to it. Unlike a normal debugger, JPF keeps track of every step of how it got to the defect.
Formal Verification Toolkit for Requirements and Early Design Stages
NASA Technical Reports Server (NTRS)
Badger, Julia M.; Miller, Sheena Judson
2011-01-01
Efficient flight software development from natural language requirements needs an effective way to test designs earlier in the software design cycle. A method to automatically derive logical safety constraints and the design state space from natural language requirements is described. The constraints can then be checked using a logical consistency checker and also be used in a symbolic model checker to verify the early design of the system. This method was used to verify a hybrid control design for the suit ports on NASA Johnson Space Center's Space Exploration Vehicle against safety requirements.
NASA Technical Reports Server (NTRS)
Bolton, Matthew L.; Bass, Ellen J.
2009-01-01
Both the human factors engineering (HFE) and formal methods communities are concerned with finding and eliminating problems with safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to use model checking with HFE practices to perform formal verification of a human-interactive system. Despite the use of a seemingly simple target system, a patient controlled analgesia pump, the initial model proved to be difficult for the model checker to verify in a reasonable amount of time. This resulted in a number of model revisions that affected the HFE architectural, representativeness, and understandability goals of the effort. If formal methods are to meet the needs of the HFE community, additional modeling tools and technological developments are necessary.
Aviation Safety: Modeling and Analyzing Complex Interactions between Humans and Automated Systems
NASA Technical Reports Server (NTRS)
Rungta, Neha; Brat, Guillaume; Clancey, William J.; Linde, Charlotte; Raimondi, Franco; Seah, Chin; Shafto, Michael
2013-01-01
The on-going transformation from the current US Air Traffic System (ATS) to the Next Generation Air Traffic System (NextGen) will force the introduction of new automated systems and will most likely cause automation to migrate from ground to air. This will yield new function allocations between humans and automation and therefore change the roles and responsibilities in the ATS. Yet, safety in NextGen is required to be at least as good as in the current system. We therefore need techniques to evaluate the safety of the interactions between humans and automation. We think that current human factors studies and simulation-based techniques will fall short given the complexity of the ATS, and that we need to add more automated techniques to simulations, such as model checking, which offers exhaustive coverage of the non-deterministic behaviors in nominal and off-nominal scenarios. In this work, we present a verification approach based both on simulations and on model checking for evaluating the roles and responsibilities of humans and automation. Models are created using Brahms (a multi-agent framework), and we show that the traditional Brahms simulations can be integrated with automated exploration techniques based on model checking, thus offering a complete exploration of the behavioral space of the scenario. Our formal analysis supports the notion of beliefs and probabilities to reason about human behavior. We demonstrate the technique with the Ueberlingen accident, since it exemplifies authority problems when receiving conflicting advice from human and automated systems.
Kate's Model Verification Tools
NASA Technical Reports Server (NTRS)
Morgan, Steve
1991-01-01
Kennedy Space Center's Knowledge-based Autonomous Test Engineer (KATE) is capable of monitoring electromechanical systems, diagnosing their errors, and even repairing them when they crash. A survey of KATE's developer/modelers revealed that they were already using a sophisticated set of productivity enhancing tools. They did request five more, however, and those make up the body of the information presented here: (1) a transfer function code fitter; (2) a FORTRAN-Lisp translator; (3) three existing structural consistency checkers to aid in syntax checking their modeled device frames; (4) an automated procedure for calibrating knowledge base admittances to protect KATE's hardware mockups from inadvertent hand valve twiddling; and (5) three alternatives for the 'pseudo object', a programming patch that currently apprises KATE's modeling devices of their operational environments.
Parallel State Space Construction for a Model Checking Based on Maximality Semantics
NASA Astrophysics Data System (ADS)
El Abidine Bouneb, Zine; Saīdouni, Djamel Eddine
2009-03-01
The main limiting factor of the model checker integrated in the concurrency verification environment FOCOVE [1, 2], which uses the maximality-based labeled transition system (noted MLTS) as a true concurrency model [3, 4], is currently the amount of available physical memory. Many techniques have been developed to reduce the size of a state space; an interesting technique among them is alpha equivalence reduction. A distributed-memory execution environment offers yet another choice. The main contribution of the paper is to show that the parallel state space construction algorithm proposed in [5], which is based on interleaving semantics using LTSs as the semantic model, can easily be adapted to a distributed implementation of the alpha equivalence reduction for maximality-based labeled transition systems.
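As a hedged sketch of how hash-based partitioning of a state space among workers typically operates (an illustrative approximation only, not the algorithm of [5] or any FOCOVE code), each worker can own the states that hash to it, so the full reachable set is split across local memories:

```python
def owner(state, num_workers):
    """Assign each state to a worker by hashing; a worker only stores its own states."""
    return hash(state) % num_workers

def partitioned_exploration(initial, successors, num_workers=4):
    """Sequential sketch of hash-partitioned state space construction.

    Each worker keeps a local visited set; successors owned by another worker
    would be sent over the network in a real distributed setting.
    """
    visited = [set() for _ in range(num_workers)]
    inbox = [[] for _ in range(num_workers)]
    inbox[owner(initial, num_workers)].append(initial)
    pending = 1
    while pending:
        pending = 0
        for w in range(num_workers):
            queue, inbox[w] = inbox[w], []
            for state in queue:
                if state in visited[w]:
                    continue
                visited[w].add(state)
                for nxt in successors(state):
                    inbox[owner(nxt, num_workers)].append(nxt)
                    pending += 1
    return sum(len(v) for v in visited)

# Toy system: a pair of counters, each cycling modulo 3
succ = lambda s: [((s[0] + 1) % 3, s[1]), (s[0], (s[1] + 1) % 3)]
print(partitioned_exploration((0, 0), succ))   # 9 reachable states
```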
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco
2004-01-01
The Runway Safety Monitor (RSM) designed by Lockheed Martin is part of NASA's effort to reduce aviation accidents. We developed a Petri net model of the RSM protocol and used the model checking functions of our tool SMART to investigate a number of safety properties in RSM. To mitigate the impact of state-space explosion, we built a highly discretized model of the system, obtained by partitioning the monitored runway zone into a grid of smaller volumes and by considering scenarios involving only two aircraft. The model also assumes that there are no communication failures, such as bad input from radar or lack of incoming data, thus it relies on a consistent view of reality by all participants. In spite of these simplifications, we were able to expose potential problems in the RSM conceptual design. Our findings were forwarded to the design engineers, who undertook corrective action. Additionally, the results stress the efficiency attained by the new model checking algorithms implemented in SMART, and demonstrate their applicability to real-world systems. Attempts to verify RSM with NuSMV and SPIN have failed due to excessive memory consumption.
40 CFR 1065.340 - Diluted exhaust flow (CVS) calibration.
Code of Federal Regulations, 2010 CFR
2010-07-01
... action does not resolve a failure to meet the diluted exhaust flow verification (i.e., propane check) in... subsonic venturi flow meter, a long-radius ASME/NIST flow nozzle, a smooth approach orifice, a laminar flow...
Spacecraft command verification: The AI solution
NASA Technical Reports Server (NTRS)
Fesq, Lorraine M.; Stephan, Amy; Smith, Brian K.
1990-01-01
Recently, a knowledge-based approach was used to develop a system called the Command Constraint Checker (CCC) for TRW. CCC was created to automate the process of verifying spacecraft command sequences. To check command files by hand for timing and sequencing errors is a time-consuming and error-prone task. Conventional software solutions were rejected when it was estimated that it would require 36 man-months to build an automated tool to check constraints by conventional methods. Using rule-based representation to model the various timing and sequencing constraints of the spacecraft, CCC was developed and tested in only three months. By applying artificial intelligence techniques, CCC designers were able to demonstrate the viability of AI as a tool to transform difficult problems into easily managed tasks. The design considerations used in developing CCC are discussed and the potential impact of this system on future satellite programs is examined.
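A hedged Python sketch of the rule-based style of constraint checking described above; the command names, minimum separation, and ordering rule are hypothetical illustrations, not CCC's actual spacecraft constraints:

```python
# Each rule inspects the whole timed command sequence and reports violations.
def min_separation_rule(cmd_a, cmd_b, min_gap_s):
    """Commands cmd_a and cmd_b must be separated by at least min_gap_s seconds."""
    def rule(sequence):
        times_a = [t for t, name in sequence if name == cmd_a]
        times_b = [t for t, name in sequence if name == cmd_b]
        return [f"{cmd_a}@{ta} too close to {cmd_b}@{tb}"
                for ta in times_a for tb in times_b
                if abs(ta - tb) < min_gap_s]
    return rule

def ordering_rule(first, second):
    """Command `first` must appear before any occurrence of `second`."""
    def rule(sequence):
        names = [name for _, name in sequence]
        if second in names and (first not in names or names.index(first) > names.index(second)):
            return [f"{second} issued before {first}"]
        return []
    return rule

rules = [min_separation_rule("HEATER_ON", "THRUSTER_FIRE", 10.0),   # hypothetical constraints
         ordering_rule("ARM_PYRO", "FIRE_PYRO")]

command_file = [(0.0, "ARM_PYRO"), (2.0, "HEATER_ON"), (5.0, "THRUSTER_FIRE"), (8.0, "FIRE_PYRO")]
for rule in rules:
    for violation in rule(command_file):
        print("VIOLATION:", violation)
```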
DOE Office of Scientific and Technical Information (OSTI.GOV)
Otani, Y; Sumida, I; Yagi, M
Purpose: Brachytherapy involves multiple manual procedures that are prone to human error, especially during the process of connecting the treatment device to the applicator, when considerable attention is required. In this study, we propose a new connection verification device concept. Methods: The system is composed of a ring magnet (anisotropic ferrite; magfine Inc), a Hall-effect sensor (A1324LUA-T; Allegro MicroSystems Phil Inc), and an in-house check cable made from magnetic material (Figure 1). The magnetic field distribution is affected by the check cable position, and any magnetic field variation is detected by the Hall sensor. The system sampling frequency is 20 Hz, and the average of 4 signals was used as the Hall sensor value to reduce noise. Results: The value of the Hall sensor changes depending on the location of the check cable. The resolution of the check cable position is 5 mm within a region about 10 mm from the sensor and 10 mm beyond that. In our test, the sensitivity of the Hall sensor decreased with increasing distance between the sensor and the check cable. Conclusion: We demonstrated a new concept of connection verification in brachytherapy. This system has the possibility of detecting an incorrect connection. Moreover, the system is capable of self-optimization, such as determining the number of Hall sensors and the magnet strength. Acknowledgement: This work was supported by JSPS Core-to-Core program Number 23003 and KAKENHI Grant Number 26860401.
Kuppusamy, Vijayalakshmi; Nagarajan, Vivekanandan; Jeevanandam, Prakash; Murugan, Lavanya
2016-02-01
The study aimed to compare two different monitor unit (MU) or dose verification software packages in volumetric modulated arc therapy (VMAT), using a modified Clarkson's integration technique for 6 MV photon beams. An in-house Excel spreadsheet-based monitor unit verification calculation (MUVC) program and PTW's DIAMOND secondary check software (SCS), version 6, were used as a secondary check to verify the monitor units (MU) or dose calculated by the treatment planning system (TPS). In this study 180 patients were grouped into 61 head and neck, 39 thorax and 80 pelvic sites. Verification plans were created using the PTW OCTAVIUS 4D phantom and also measured using a 729-ion-chamber detector array, with the isocentre as the point of measurement for each field. In the analysis of 154 clinically approved VMAT plans with the isocentre in a region above -350 HU, using heterogeneity corrections, the in-house spreadsheet-based MUVC program and DIAMOND SCS showed good agreement with the TPS. The overall average percentage deviations for all sites were (-0.93 ± 1.59)% and (1.37 ± 2.72)% for the in-house Excel spreadsheet-based MUVC program and DIAMOND SCS, respectively. The 26 clinically approved VMAT plans with the isocentre in a region below -350 HU showed higher variations for both the in-house spreadsheet-based MUVC program and DIAMOND SCS. It can be concluded that, for patient-specific quality assurance (QA), the in-house Excel spreadsheet-based MUVC program and DIAMOND SCS can be used as a simple and fast complement to measurement-based verification for plans with the isocentre in a region above -350 HU. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
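As a minimal, hedged illustration of a secondary MU/dose check comparison of the kind described above (the tolerance, plan names, and dose values are illustrative assumptions, not the study's data or the MUVC program's algorithm):

```python
def percent_deviation(tps_dose, independent_dose):
    """Percentage deviation of the independent check relative to the TPS dose."""
    return 100.0 * (independent_dose - tps_dose) / tps_dose

def review_plans(plans, tolerance=3.0):
    """Flag plans whose secondary-check dose deviates from the TPS by more than tolerance (%)."""
    report = []
    for name, tps, check in plans:
        dev = percent_deviation(tps, check)
        report.append((name, round(dev, 2), "PASS" if abs(dev) <= tolerance else "REVIEW"))
    return report

# Illustrative isocentre doses in cGy (hypothetical plans, not data from the study)
plans = [("HN-01", 200.0, 198.4), ("PELVIS-07", 200.0, 207.2)]
print(review_plans(plans))   # [('HN-01', -0.8, 'PASS'), ('PELVIS-07', 3.6, 'REVIEW')]
```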
SMAP Verification and Validation Project - Final Report
NASA Technical Reports Server (NTRS)
Murry, Michael
2012-01-01
In 2007, the National Research Council (NRC) released the Decadal Survey of Earth science, which identified 15 new space missions of significant scientific and application value for the National Aeronautics and Space Administration (NASA) to undertake in the coming decade. One of these missions was the Soil Moisture Active Passive (SMAP) mission, which NASA assigned to the Jet Propulsion Laboratory (JPL) in 2008. The goal of SMAP is to provide global, high-resolution mapping of soil moisture and its freeze/thaw state. The SMAP project recently passed its Critical Design Review and is proceeding with its fabrication and testing phase. Verification and Validation (V&V) is widely recognized as a critical component of systems engineering and is vital to the success of any space mission. V&V is a process used to check that a system meets its design requirements and specifications in order to fulfill its intended purpose. Verification often refers to the question "Have we built the system right?" whereas Validation asks "Have we built the right system?" Currently the SMAP V&V team is verifying design requirements through inspection, demonstration, analysis, or testing. An example of the SMAP V&V process is the verification of antenna pointing accuracy with mathematical models, since it is not possible to provide the appropriate micro-gravity environment for testing the antenna on Earth before launch.
Failure analysis of bolts for diesel engine intercooler
NASA Astrophysics Data System (ADS)
Ren, Ping; Li, Zongquan; Wu, Jiangfei; Guo, Yibin; Li, Wanyou
2017-05-01
In diesel generating sets, an abominable working condition will result if the fault cannot be remedied when a bolt of the intercooler cracks. This paper addresses the failure of the bolts of a diesel generator intercooler and presents an analysis of their static strength and fatigue strength. The static strength is checked considering bolt preload and thermal stress. To obtain the thermal stress in the bolt, the thermodynamics of the intercooler is calculated according to the measured temperature. Based on the measured vibration response and the finite element model, the equivalent excitation force of the unit was solved using a dynamic load identification technique. To obtain the force on the bolt, the excitation force is applied to the finite element model. By treating the thermal stress and preload as the mean stress and the mechanical stress as the alternating stress, the fatigue strength analysis was completed. A diagnosis procedure is proposed in this paper. Finally, the fatigue failure is confirmed according to the results of the strength verification. Further studies are necessary to verify the results of the strength analysis and to put forward improvement suggestions.
SSC Geopositional Assessment of the Advanced Wide Field Sensor
NASA Technical Reports Server (NTRS)
Ross, Kenton
2007-01-01
The objective is to provide independent verification of IRS geopositional accuracy claims and of the internal geopositional characterization provided by Lutes (2005). Six sub-scenes (quads) were assessed, three from each AWiFS camera. Check points were manually matched to a digital orthophoto quarter quadrangle (DOQQ) reference (assumed accuracy approximately 5 m RMSE). Check points were selected to meet or exceed the Federal Geographic Data Committee's guidelines. ESRI ArcGIS was used for data collection and SSC-written MATLAB scripts for data analysis.
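A hedged sketch of the horizontal RMSE computation typically used in such geopositional assessments (the coordinates below are illustrative, not AWiFS measurements):

```python
import math

def horizontal_rmse(check_points):
    """Root-mean-square error of horizontal offsets between image and reference check points.

    check_points: iterable of ((x_img, y_img), (x_ref, y_ref)) pairs in metres.
    """
    sq = [(xi - xr) ** 2 + (yi - yr) ** 2 for (xi, yi), (xr, yr) in check_points]
    return math.sqrt(sum(sq) / len(sq))

# Illustrative offsets against a DOQQ-style reference, not real assessment data
pts = [((100.0, 200.0), (104.0, 203.0)), ((500.0, 800.0), (497.0, 805.0))]
print(round(horizontal_rmse(pts), 2))
```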
Simulation environment based on the Universal Verification Methodology
NASA Astrophysics Data System (ADS)
Fiergolski, A.
2017-01-01
Universal Verification Methodology (UVM) is a standardized approach to verifying integrated circuit designs, targeting Coverage-Driven Verification (CDV). It combines automatic test generation, self-checking testbenches, and coverage metrics to indicate progress in the design verification. The flow of CDV differs from the traditional directed-testing approach. With CDV, a testbench developer starts with a structured plan by setting the verification goals. Those goals are then targeted by the developed testbench, which generates legal stimuli and sends them to a device under test (DUT). Progress is measured by coverage monitors added to the simulation environment; in this way, non-exercised functionality can be identified. Moreover, additional scoreboards indicate undesired DUT behaviour. Such verification environments were developed for three recent ASIC and FPGA projects which have successfully implemented the new work-flow: (1) the CLICpix2 65 nm CMOS hybrid pixel readout ASIC design; (2) the C3PD 180 nm HV-CMOS active sensor ASIC design; (3) the FPGA-based DAQ system of the CLICpix chip. This paper, based on the experience from the above projects, briefly introduces UVM and presents a set of tips and advice applicable at different stages of the verification process cycle.
Semi-Immersive Virtual Turbine Engine Simulation System
NASA Astrophysics Data System (ADS)
Abidi, Mustufa H.; Al-Ahmari, Abdulrahman M.; Ahmad, Ali; Darmoul, Saber; Ameen, Wadea
2018-05-01
The design and verification of assembly operations is essential for planning product production operations. Recently, virtual prototyping has witnessed tremendous progress, and has reached a stage where current environments enable rich and multi-modal interaction between designers and models through stereoscopic visuals, surround sound, and haptic feedback. The benefits of building and using Virtual Reality (VR) models in assembly process verification are discussed in this paper. In this paper, we present the virtual assembly (VA) of an aircraft turbine engine. The assembly parts and sequences are explained using a virtual reality design system. The system enables stereoscopic visuals, surround sounds, and ample and intuitive interaction with developed models. A special software architecture is suggested to describe the assembly parts and assembly sequence in VR. A collision detection mechanism is employed that provides visual feedback to check the interference between components. The system is tested for virtual prototype and assembly sequencing of a turbine engine. We show that the developed system is comprehensive in terms of VR feedback mechanisms, which include visual, auditory, tactile, as well as force feedback. The system is shown to be effective and efficient for validating the design of assembly, part design, and operations planning.
The ANMLite Language and Logic for Specifying Planning Problems
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Siminiceanu, Radu I.; Munoz, Cesar A.
2007-01-01
We present the basic concepts of the ANMLite planning language. We discuss various aspects of specifying a plan in terms of constraints and checking the existence of a solution with the help of a model checker. The constructs of the ANMLite language have been kept as simple as possible in order to reduce complexity and simplify the verification problem. We illustrate the language with a specification of the space shuttle crew activity model that was constructed under the Spacecraft Autonomy for Vehicles and Habitats (SAVH) project. The main purpose of this study was to explore the implications of choosing a robust logic behind the specification of constraints, rather than simply proposing a new planning language.
Proof Rules for Automated Compositional Verification through Learning
NASA Technical Reports Server (NTRS)
Barringer, Howard; Giannakopoulou, Dimitra; Pasareanu, Corina S.
2003-01-01
Compositional proof systems not only enable the stepwise development of concurrent processes but also provide a basis to alleviate the state explosion problem associated with model checking. An assume-guarantee style of specification and reasoning has long been advocated to achieve compositionality. However, this style of reasoning is often non-trivial, typically requiring human input to determine appropriate assumptions. In this paper, we present novel assume-guarantee rules in the setting of finite labelled transition systems with blocking communication. We show how these rules can be applied in an iterative and fully automated fashion within a framework based on learning.
Results from an Independent View on The Validation of Safety-Critical Space Systems
NASA Astrophysics Data System (ADS)
Silva, N.; Lopes, R.; Esper, A.; Barbosa, R.
2013-08-01
Independent verification and validation (IV&V) has been a key process for decades and is considered in several international standards. One of the activities described in the “ESA ISVV Guide” is independent test verification (stated as Integration/Unit Test Procedures and Test Data Verification). This activity is commonly overlooked, since customers do not really see the added value of thoroughly checking the validation team's work (it could be seen as testing the tester's work). This article presents the consolidated results of a large set of independent test verification activities, including the main difficulties, the results obtained, and the advantages/disadvantages of these activities for industry. This study will support customers in opting in or out of this task in future IV&V contracts, since we provide concrete results from real case studies in the space embedded systems domain.
Model Checking A Self-Stabilizing Synchronization Protocol for Arbitrary Digraphs
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2012-01-01
This report presents the mechanical verification of a self-stabilizing distributed clock synchronization protocol for arbitrary digraphs in the absence of faults. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or a centrally generated signal, pulse, or message is used. The system under study is an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. This protocol deterministically converges within a time bound that is a linear function of the self-stabilization period. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV) for a subset of digraphs. Modeling challenges of the protocol and the system are addressed. The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirmation of claims of determinism and linear convergence with respect to the self-stabilization period.
Automatic Review of Abstract State Machines by Meta Property Verification
NASA Technical Reports Server (NTRS)
Arcaini, Paolo; Gargantini, Angelo; Riccobene, Elvinia
2010-01-01
A model review is a validation technique aimed at determining if a model is of sufficient quality and allows defects to be identified early in the system development, reducing the cost of fixing them. In this paper we propose a technique to perform automatic review of Abstract State Machine (ASM) formal specifications. We first detect a family of typical vulnerabilities and defects a developer can introduce during the modeling activity using the ASMs and we express such faults as the violation of meta-properties that guarantee certain quality attributes of the specification. These meta-properties are then mapped to temporal logic formulas and model checked for their violation. As a proof of concept, we also report the result of applying this ASM review process to several specifications.
[Model for unplanned self extubation of ICU patients using system dynamics approach].
Song, Yu Gil; Yun, Eun Kyoung
2015-04-01
In this study a system dynamics methodology was used to identify correlations and nonlinear feedback structure among factors affecting unplanned extubation (UE) of ICU patients and to construct and verify a simulation model. Factors affecting UE were identified through a theoretical background established by reviewing the literature and preceding studies and by referencing various statistical data. Related variables were decided through verification of content validity by an expert group. A causal loop diagram (CLD) was made based on the variables. Stock & Flow modeling using Vensim PLE Plus Version 6.0b was performed to establish a model for UE. Based on the literature review and expert verification, 18 variables associated with UE were identified and a CLD was prepared. From the prepared CLD, a model was developed by converting to the Stock & Flow diagram. Results of the simulation showed that patient stress, patient agitation, restraint application, patient movability, and individual intensive nursing were the variables with the greatest effect on UE probability. To verify agreement of the UE model with real situations, simulation with 5 cases was performed. An equation check and sensitivity analysis on TIME STEP were executed to validate model integrity. Results show that identification of a proper model enables prediction of UE probability. This prediction allows for adjustment of related factors and provides basic data to develop nursing interventions to decrease UE.
Defining the IEEE-854 floating-point standard in PVS
NASA Technical Reports Server (NTRS)
Miner, Paul S.
1995-01-01
A significant portion of the ANSI/IEEE-854 Standard for Radix-Independent Floating-Point Arithmetic is defined in PVS (Prototype Verification System). Since IEEE-854 is a generalization of the ANSI/IEEE-754 Standard for Binary Floating-Point Arithmetic, the definition of IEEE-854 in PVS also formally defines much of IEEE-754. This collection of PVS theories provides a basis for machine-checked verification of floating-point systems. This formal definition illustrates that formal specification techniques are sufficiently advanced that it is reasonable to consider their use in the development of future standards.
An Overview of the Runtime Verification Tool Java PathExplorer
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Rosu, Grigore; Clancy, Daniel (Technical Monitor)
2002-01-01
We present an overview of the Java PathExplorer runtime verification tool, in short referred to as JPAX. JPAX can monitor the execution of a Java program and check that it conforms with a set of user provided properties formulated in temporal logic. JPAX can in addition analyze the program for concurrency errors such as deadlocks and data races. The concurrency analysis requires no user provided specification. The tool facilitates automated instrumentation of a program's bytecode, which when executed will emit an event stream, the execution trace, to an observer. The observer dispatches the incoming event stream to a set of observer processes, each performing a specialized analysis, such as the temporal logic verification, the deadlock analysis and the data race analysis. Temporal logic specifications can be formulated by the user in the Maude rewriting logic, where Maude is a high-speed rewriting system for equational logic, but here extended with executable temporal logic. The Maude rewriting engine is then activated as an event driven monitoring process. Alternatively, temporal specifications can be translated into efficient automata, which check the event stream. JPAX can be used during program testing to gain increased information about program executions, and can potentially furthermore be applied during operation to survey safety critical systems.
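As a hedged illustration of trace-based temporal monitoring of the kind JPAX performs (not JPAX or Maude code; the 'open'/'close' events are hypothetical), a finite-trace monitor for a simple response property can be sketched as follows, with the eventuality only declared violated at the end of the trace:

```python
class ResponseMonitor:
    """Checks the property: every 'request' event is eventually followed by a 'response'."""

    def __init__(self, request, response):
        self.request, self.response = request, response
        self.pending = 0

    def step(self, event):
        if event == self.request:
            self.pending += 1
        elif event == self.response and self.pending > 0:
            self.pending -= 1

    def end_of_trace(self):
        # Liveness over a finite trace: unanswered requests count as violations at trace end.
        if self.pending == 0:
            return "satisfied"
        return f"violated ({self.pending} unanswered {self.request})"

m = ResponseMonitor("open", "close")
for event in ["open", "write", "close", "open"]:   # hypothetical event stream
    m.step(event)
print(m.end_of_trace())   # violated (1 unanswered open)
```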
A Categorization of Dynamic Analyzers
NASA Technical Reports Server (NTRS)
Lujan, Michelle R.
1997-01-01
Program analysis techniques and tools are essential to the development process because of the support they provide in detecting errors and deficiencies at different phases of development. The types of information rendered through analysis include the following: statistical measurements of code, type checks, dataflow analysis, consistency checks, test data, verification of code, and debugging information. Analyzers can be broken into two major categories: dynamic and static. Static analyzers examine programs with respect to syntax errors and structural properties. This includes gathering statistical information on program content, such as the number of lines of executable code, source lines, and cyclomatic complexity. In addition, static analyzers provide the ability to check for the consistency of programs with respect to variables. Dynamic analyzers, in contrast, are dependent on input and the execution of a program, providing the ability to find errors that cannot be detected through the use of static analysis alone. Dynamic analysis provides information on the behavior of a program rather than on the syntax. Both types of analysis detect errors in a program, but dynamic analyzers accomplish this through run-time behavior. This paper focuses on the following broad classification of dynamic analyzers: 1) Metrics; 2) Models; and 3) Monitors. Metrics are those analyzers that provide measurement. The next category, models, captures those analyzers that present the state of the program to the user at specified points in time. The last category, monitors, checks specified code based on some criteria. The paper discusses each classification and the techniques that are included under them. In addition, the role of each technique in the software life cycle is discussed. Familiarization with the tools that measure, model, and monitor programs provides a framework for understanding the program's dynamic behavior from different perspectives through analysis of the input/output data.
Design and analysis of DNA strand displacement devices using probabilistic model checking
Lakin, Matthew R.; Parker, David; Cardelli, Luca; Kwiatkowska, Marta; Phillips, Andrew
2012-01-01
Designing correct, robust DNA devices is difficult because of the many possibilities for unwanted interference between molecules in the system. DNA strand displacement has been proposed as a design paradigm for DNA devices, and the DNA strand displacement (DSD) programming language has been developed as a means of formally programming and analysing these devices to check for unwanted interference. We demonstrate, for the first time, the use of probabilistic verification techniques to analyse the correctness, reliability and performance of DNA devices during the design phase. We use the probabilistic model checker prism, in combination with the DSD language, to design and debug DNA strand displacement components and to investigate their kinetics. We show how our techniques can be used to identify design flaws and to evaluate the merits of contrasting design decisions, even on devices comprising relatively few inputs. We then demonstrate the use of these components to construct a DNA strand displacement device for approximate majority voting. Finally, we discuss some of the challenges and possible directions for applying these methods to more complex designs. PMID:22219398
Current status of verification practices in clinical biochemistry in Spain.
Gómez-Rioja, Rubén; Alvarez, Virtudes; Ventura, Montserrat; Alsina, M Jesús; Barba, Núria; Cortés, Mariano; Llopis, María Antonia; Martínez, Cecilia; Ibarz, Mercè
2013-09-01
Verification uses logical algorithms to detect potential errors before laboratory results are released to the clinician. Even though verification is one of the main processes in all laboratories, there is a lack of standardization, mainly in the algorithms used and the criteria and verification limits applied. A survey of clinical laboratories in Spain was conducted in order to assess the verification process, particularly the use of autoverification. Questionnaires were sent to the laboratories involved in the External Quality Assurance Program organized by the Spanish Society of Clinical Biochemistry and Molecular Pathology. Seven common biochemical parameters were included (glucose, cholesterol, triglycerides, creatinine, potassium, calcium, and alanine aminotransferase). Completed questionnaires were received from 85 laboratories. Nearly all the laboratories reported using the following seven verification criteria: internal quality control, instrument warnings, sample deterioration, reference limits, clinical data, concordance between parameters, and verification of results. The use of all verification criteria varied according to the type of verification (automatic, technical, or medical). Verification limits for these parameters are similar to biological reference ranges. Delta check was used in 24% of laboratories. Most laboratories (64%) reported using autoverification systems. Autoverification use was related to laboratory size, ownership, and type of laboratory information system, but the amount of use (percentage of tests autoverified) was not related to laboratory size. A total of 36% of Spanish laboratories do not use autoverification, despite the general implementation of laboratory information systems, most of which have autoverification capability. Criteria and rules for seven routine biochemical tests were obtained.
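A hedged sketch of a delta check rule of the kind referred to above (the limits and analyte values are illustrative assumptions, not those reported by the surveyed laboratories):

```python
def delta_check(current, previous, absolute_limit=None, percent_limit=None):
    """Return False when the change from the patient's previous value exceeds a limit.

    Either an absolute limit (same units as the analyte) or a percentage limit,
    or both, may be supplied; exceeding any supplied limit would block autoverification.
    """
    delta = current - previous
    if absolute_limit is not None and abs(delta) > absolute_limit:
        return False
    if percent_limit is not None and previous != 0 and abs(delta) / abs(previous) * 100 > percent_limit:
        return False
    return True

# Illustrative limits only; real limits are analyte- and laboratory-specific
print(delta_check(current=5.9, previous=4.1, absolute_limit=1.0))      # False: jump of 1.8 units
print(delta_check(current=102.0, previous=95.0, percent_limit=20.0))   # True: change within 20%
```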
Cross-checking of Large Evaluated and Experimental Nuclear Reaction Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeydina, O.; Koning, A.J.; Soppera, N.
2014-06-15
Automated methods are presented for the verification of large experimental and evaluated nuclear reaction databases (e.g. EXFOR, JEFF, TENDL). These methods allow an assessment of the overall consistency of the data and detect aberrant values in both evaluated and experimental databases.
Ghasroddashti, E; Sawchuk, S
2008-07-01
To assess a diode detector array (MapCheck) for commissioning, quality assurance (QA), and patient-specific QA for electrons. 2D dose information was captured at various depths for several square fields ranging from 2×2 to 25×25 cm², and for 9 patient-customized cutouts, using both MapCheck and a scanning water phantom. Beam energies of 6, 9, 12, 16 and 20 MeV produced by Varian linacs were used. The water tank, beam energies and fields were also modeled in the Pinnacle planning system to obtain dose information. MapCheck, water phantom and Pinnacle results were compared. Relative output factors (ROF) acquired with MapCheck were compared to an in-house algorithm (JeffIrreg). Inter- and intra-observer variability was also investigated. Results: Profiles and %DD data for MapCheck, water tank, and Pinnacle agree well. High-dose, low-dose-gradient comparisons agree to within 1% between MapCheck and the water phantom. Field size comparisons showed mostly sub-millimeter agreement. ROFs for MapCheck and JeffIrreg agreed within 2.0% (mean = 0.9% ± 0.6%). The current standard for electron commissioning and QA is the scanning water tank, which may be inefficient. Our results demonstrate that MapCheck can potentially be an alternative. The dose distributions for patient-specific electron treatments also require verification. This procedure is particularly challenging when the minimum dimension across the central axis of the cutout is smaller than the range of the electrons in question. MapCheck offers an easy and efficient way of determining patient dose distributions, especially compared to the alternatives, namely ion chamber and film. © 2008 American Association of Physicists in Medicine.
A-7 Aloft Demonstration Flight Test Plan
1975-09-01
...checks will also be performed for each of the following: 3.1.2.1.1 ASCU Codes. Verification will be made that all legal ASCU codes are recognized and...invalid codes inhibit attack mode. A check will also be made to verify that the ASCU codes for pilot-option weapons enable the retarded weapons
NASA Astrophysics Data System (ADS)
Boyarnikov, A. V.; Boyarnikova, L. V.; Kozhushko, A. A.; Sekachev, A. F.
2017-08-01
In this article the process of verification (calibration) of the secondary equipment of oil metering units is considered. The purpose of the work is to increase the reliability and reduce the complexity of this process by developing a hardware and software system that provides automated verification and calibration. The hardware part of this complex switches the measuring channels of the verified controller and the reference channels of the calibrator in accordance with the programmed algorithm. The developed software controls the switching of channels, sets values on the calibrator, reads the measured data from the controller, calculates errors and compiles protocols. This system can be used for checking the controllers of the secondary equipment of oil metering units in automatic verification mode (with an open communication protocol) or in semi-automatic verification mode (without it). A distinctive feature of the approach is the development of a universal signal switch operating under software control, which can be configured for various verification (calibration) methods, allowing the entire range of controllers of metering unit secondary equipment to be covered. The use of automatic verification with this hardware and software system shortens verification time by a factor of 5-10 and increases the reliability of measurements by excluding the influence of the human factor.
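A hedged sketch of the error calculation and protocol step described above (the channel name, span, check points, and error limit are hypothetical assumptions, not the actual system's parameters):

```python
def channel_errors(reference_values, measured_values, span):
    """Reduced (percent-of-span) error for each point of a verified measuring channel.

    reference_values: signals set on the calibrator; measured_values: readings
    taken from the controller under test; span: channel measurement range.
    """
    return [100.0 * (meas - ref) / span
            for ref, meas in zip(reference_values, measured_values)]

def verification_protocol(channel, reference_values, measured_values, span, limit_percent=0.25):
    """Compile a simple pass/fail protocol entry for one channel."""
    errors = channel_errors(reference_values, measured_values, span)
    verdict = "PASS" if all(abs(e) <= limit_percent for e in errors) else "FAIL"
    return {"channel": channel, "errors_percent": [round(e, 3) for e in errors], "verdict": verdict}

# Illustrative 4-20 mA channel checked at five points of its range (limit is hypothetical)
print(verification_protocol("FT-101", [4.0, 8.0, 12.0, 16.0, 20.0],
                            [4.01, 8.02, 11.99, 16.03, 19.98], span=16.0))
```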
Optical/digital identification/verification system based on digital watermarking technology
NASA Astrophysics Data System (ADS)
Herrigel, Alexander; Voloshynovskiy, Sviatoslav V.; Hrytskiv, Zenon D.
2000-06-01
This paper presents a new approach for the secure integrity verification of driver licenses, passports or other analogue identification documents. The system embeds (detects) the reference number of the identification document with the DCT watermark technology in (from) the owner photo of the identification document holder. During verification the reference number is extracted and compared with the reference number printed in the identification document. The approach combines optical and digital image processing techniques. The detection system must be able to scan an analogue driver license or passport, convert the image of this document into a digital representation and then apply the watermark verification algorithm to check the payload of the embedded watermark. If the payload of the watermark is identical with the printed visual reference number of the issuer, the verification was successful and the passport or driver license has not been modified. This approach constitutes a new class of application for the watermark technology, which was originally targeted for the copyright protection of digital multimedia data. The presented approach substantially increases the security of the analogue identification documents applied in many European countries.
Quantified Event Automata: Towards Expressive and Efficient Runtime Monitors
NASA Technical Reports Server (NTRS)
Barringer, Howard; Falcone, Ylies; Havelund, Klaus; Reger, Giles; Rydeheard, David
2012-01-01
Runtime verification is the process of checking a property on a trace of events produced by the execution of a computational system. Runtime verification techniques have recently focused on parametric specifications where events take data values as parameters. These techniques exist on a spectrum inhabited by both efficient and expressive techniques. These characteristics are usually shown to be conflicting - in state-of-the-art solutions, efficiency is obtained at the cost of loss of expressiveness and vice versa. To seek a solution to this conflict we explore a new point on the spectrum by defining an alternative runtime verification approach. We introduce a new formalism for concisely capturing expressive specifications with parameters. Our technique is more expressive than the currently most efficient techniques while at the same time allowing for optimizations.
CTEPP STANDARD OPERATING PROCEDURE FOR PROCESSING COMPLETED DATA FORMS (SOP-4.10)
This SOP describes the methods for processing completed data forms. Key components of the SOP include (1) field editing, (2) data form Chain-of-Custody, (3) data processing verification, (4) coding, (5) data entry, (6) programming checks, (7) preparation of data dictionaries, cod...
46 CFR 45.191 - Pre-departure requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
..., verification of mooring/docking space availability, and weather forecast checks were performed, and record the... voyage, the towing vessel master must conduct the following: (a) Weather forecast. Determine the marine weather forecast along the planned route, and contact the dock operator at the destination port to get an...
46 CFR 45.191 - Pre-departure requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
..., verification of mooring/docking space availability, and weather forecast checks were performed, and record the... voyage, the towing vessel master must conduct the following: (a) Weather forecast. Determine the marine weather forecast along the planned route, and contact the dock operator at the destination port to get an...
46 CFR 45.191 - Pre-departure requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
..., verification of mooring/docking space availability, and weather forecast checks were performed, and record the... voyage, the towing vessel master must conduct the following: (a) Weather forecast. Determine the marine weather forecast along the planned route, and contact the dock operator at the destination port to get an...
46 CFR 45.191 - Pre-departure requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., verification of mooring/docking space availability, and weather forecast checks were performed, and record the... voyage, the towing vessel master must conduct the following: (a) Weather forecast. Determine the marine weather forecast along the planned route, and contact the dock operator at the destination port to get an...
Communication Satellite Payload Special Check out Equipment (SCOE) for Satellite Testing
NASA Astrophysics Data System (ADS)
Subhani, Noman
2016-07-01
This paper presents Payload Special Check out Equipment (SCOE) for the test and measurement of a communication satellite payload at subsystem and system level. The main emphasis of this paper is to present the principal test equipment and instruments and the payload test matrix for automatic test control. Electrical Ground Support Equipment (EGSE) / Special Check out Equipment (SCOE) requirements, functions, and architecture for C-band and Ku-band payloads are presented in detail, along with their interfaces with the satellite during different phases of satellite testing. It provides a test setup in a single rack cabinet that can easily be moved from the payload assembly and integration environment to the thermal vacuum chamber and all the way to the launch site (for pre-launch test and verification).
Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems
NASA Technical Reports Server (NTRS)
Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris
2010-01-01
Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.
Formal Methods for Automated Diagnosis of Autosub 6000
NASA Technical Reports Server (NTRS)
Ernits, Juhan; Dearden, Richard; Pebody, Miles
2009-01-01
This is a progress report on applying formal methods in the context of building an automated diagnosis and recovery system for Autosub 6000, an Autonomous Underwater Vehicle (AUV). The diagnosis task involves building abstract models of the control system of the AUV. The diagnosis engine is based on Livingstone 2, a model-based diagnoser originally built for aerospace applications. Large parts of the diagnosis model can be built without concrete knowledge about each mission, but actual mission scripts and configuration parameters that carry important information for diagnosis are changed for every mission. Thus we use formal methods for generating the mission control part of the diagnosis model automatically from the mission script and perform a number of invariant checks to validate the configuration. After the diagnosis model is augmented with the generated mission control component model, it needs to be validated using verification techniques.
Sierra/Aria 4.48 Verification Manual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal Fluid Development Team
Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite, and the results of each test are checked under mesh refinement against the correct analytic result. For each of the tests presented in this document, the test setup, derivation of the analytic solution, and comparison of the code results to the analytic solution are provided. This document can be used to confirm that a given code capability is verified, or it can be referenced as a compilation of example problems.
Expert system verification and validation survey, delivery 4
NASA Technical Reports Server (NTRS)
1990-01-01
The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and Industry applications. This is the first task of a series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.
A risk analysis approach applied to field surveillance in utility meters in legal metrology
NASA Astrophysics Data System (ADS)
Rodrigues Filho, B. A.; Nonato, N. S.; Carvalho, A. D.
2018-03-01
Field surveillance represents the level of control in metrological supervision responsible for checking the conformity of measuring instruments in-service. Utility meters represent the majority of measuring instruments produced by notified bodies due to self-verification in Brazil. They play a major role in the economy, since electricity, gas and water are the main inputs to industries in their production processes. Thus, to optimize the resources allocated to controlling these devices, the present study applied a risk analysis in order to identify, among the 11 manufacturers notified for self-verification, the instruments that demand field surveillance.
User-friendly design approach for analog layout design
NASA Astrophysics Data System (ADS)
Li, Yongfu; Lee, Zhao Chuan; Tripathi, Vikas; Perez, Valerio; Ong, Yoong Seang; Hui, Chiu Wing
2017-03-01
Analog circuits are sensitive to changes in layout environment conditions, manufacturing processes, and variations. This paper presents an analog verification flow with five types of analog-focused layout constraint checks to assist engineers in identifying potential device mismatches and layout drawing mistakes. Compared to several other solutions, our approach only requires the layout design, which is sufficient to recognize all the matched devices. Our approach simplifies data preparation and allows seamless integration into the layout environment with minimum disruption to the custom layout flow. Our user-friendly analog verification flow gives engineers more confidence in the quality of their layouts.
Expert system verification and validation survey. Delivery 2: Survey results
NASA Technical Reports Server (NTRS)
1990-01-01
The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and industry applications. This is the first task of the series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.
Expert system verification and validation survey. Delivery 5: Revised
NASA Technical Reports Server (NTRS)
1990-01-01
The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and Industry applications. This is the first task of a series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.
Expert system verification and validation survey. Delivery 3: Recommendations
NASA Technical Reports Server (NTRS)
1990-01-01
The purpose is to determine the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and Industry applications. This is the first task of a series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues are being addressed and to what extent they have impacted the development of ESs.
Specification and verification of gate-level VHDL models of synchronous and asynchronous circuits
NASA Technical Reports Server (NTRS)
Russinoff, David M.
1995-01-01
We present a mathematical definition of hardware description language (HDL) that admits a semantics-preserving translation to a subset of VHDL. Our HDL includes the basic VHDL propagation delay mechanisms and gate-level circuit descriptions. We also develop formal procedures for deriving and verifying concise behavioral specifications of combinational and sequential devices. The HDL and the specification procedures have been formally encoded in the computational logic of Boyer and Moore, which provides a LISP implementation as well as a facility for mechanical proof-checking. As an application, we design, specify, and verify a circuit that achieves asynchronous communication by means of the biphase mark protocol.
"Tech-check-tech": a review of the evidence on its safety and benefits.
Adams, Alex J; Martin, Steven J; Stolpe, Samuel F
2011-10-01
The published evidence on state-authorized programs permitting final verification of medication orders by pharmacy technicians, including the programs' impact on pharmacist work hours and clinical activities, is reviewed. Some form of "tech-check-tech" (TCT)--the checking of a technician's order-filling accuracy by another technician rather than a pharmacist--is authorized for use by pharmacies in at least nine states. The results of 11 studies published since 1978 indicate that technicians' accuracy in performing final dispensing checks is very comparable to pharmacists' accuracy (mean ± S.D., 99.6% ± 0.55% versus 99.3% ± 0.68%, respectively). In 6 of those studies, significant differences in accuracy or error detection rates favoring TCT were reported (p < 0.05), although published TCT studies to date have had important limitations. In states with active or pilot TCT programs, pharmacists surveyed have reported that the practice has yielded time savings (estimates range from 10 hours per month to 1 hour per day), enabling them to spend more time providing clinical services. States permitting TCT programs require technicians to complete special training before assuming TCT duties, which are generally limited to restocking automated dispensing machines and filling unit dose batches of refills in hospitals and other institutional settings. The published evidence demonstrates that pharmacy technicians can perform as accurately as pharmacists, perhaps more accurately, in the final verification of unit dose orders in institutional settings. Current TCT programs have fairly consistent elements, including the limitation of TCT to institutional settings, advanced education and training requirements for pharmacy technicians, and ongoing quality assurance.
Digital conversion of INEL archeological data using ARC/INFO and Oracle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, R.D.; Brizzee, J.; White, L.
1993-11-04
This report documents the procedures used to convert archaeological data for the INEL to digital format, lists the equipment used, and explains the verification and validation steps taken to check data entry. It also details the production of an engineered interface between ARC/INFO and Oracle.
48 CFR 245.7201 - Performing inventory verification and determination of allocability.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... Use the following guidance for verifying inventory schedules— (a) Allocability. (1) Review contract... Government use. (2) Review the contractor's— (i) Recent purchases of similar material; (ii) Plans for current.... While a complete physical count of each item is not required, perform sufficient checks to ensure...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-12
...) also requires Eligible Telecommunications Carriers (ETCs) to submit to the Universal Service.... Prior to 2009, USAC provided sample certification and verification letters on its website to assist ETCs... check box to accommodate wireless ETCs serving non-federal default states that do not assert...
40 CFR 92.124 - Test sequence; general requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... (e) Pre-test engine measurements (e.g., idle and throttle notch speeds, fuel flows, etc.), pre-test engine performance checks (e.g., verification of engine power, etc.) and pre-test system calibrations (e... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Test sequence; general requirements...
40 CFR 92.124 - Test sequence; general requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... (e) Pre-test engine measurements (e.g., idle and throttle notch speeds, fuel flows, etc.), pre-test engine performance checks (e.g., verification of engine power, etc.) and pre-test system calibrations (e... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Test sequence; general requirements. 92...
40 CFR 92.124 - Test sequence; general requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... (e) Pre-test engine measurements (e.g., idle and throttle notch speeds, fuel flows, etc.), pre-test engine performance checks (e.g., verification of engine power, etc.) and pre-test system calibrations (e... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Test sequence; general requirements...
40 CFR 92.124 - Test sequence; general requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... (e) Pre-test engine measurements (e.g., idle and throttle notch speeds, fuel flows, etc.), pre-test engine performance checks (e.g., verification of engine power, etc.) and pre-test system calibrations (e... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Test sequence; general requirements...
40 CFR 92.124 - Test sequence; general requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... (e) Pre-test engine measurements (e.g., idle and throttle notch speeds, fuel flows, etc.), pre-test engine performance checks (e.g., verification of engine power, etc.) and pre-test system calibrations (e... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Test sequence; general requirements...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayler, E; Harrison, A; Eldredge-Hindy, H
Purpose: Valencia and Leipzig applicators (VLAs) are single-channel brachytherapy surface applicators used to treat skin lesions up to 2 cm in diameter. Source dwell times can be calculated and entered manually after clinical set-up or ultrasound. This procedure differs dramatically from CT-based planning; the novelty and unfamiliarity could lead to severe errors. To build layers of safety and ensure quality, a multidisciplinary team created a protocol and applied Failure Modes and Effects Analysis (FMEA) to the clinical procedure for HDR VLA skin treatments. Methods: A team including physicists, physicians, nurses, therapists, residents, and administration developed a clinical procedure for VLA treatment. The procedure was evaluated using FMEA. Failure modes were identified and scored by severity, occurrence, and detection. The clinical procedure was revised to address high-scoring process nodes. Results: Several key components were added to the clinical procedure to minimize risk probability numbers (RPN): -Treatments are reviewed at weekly QA rounds, where physicians discuss diagnosis, prescription, applicator selection, and set-up. Peer review reduces the likelihood of an inappropriate treatment regime. -A template for HDR skin treatments was established in the clinical EMR system to standardize treatment instructions. This reduces the chances of miscommunication between the physician and planning physicist, and increases the detectability of an error during the physics second check. -A screen check was implemented during the second check to increase detectability of an error. -To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display. This facilitates data entry and verification. -VLAs are color-coded and labeled to match the EMR prescriptions, which simplifies in-room selection and verification. Conclusion: Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures.
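As a concrete illustration of the scoring step described above, the sketch below computes product-form risk numbers from severity, occurrence, and detection scores. The 1-10 scales and the listed failure modes and scores are illustrative assumptions, not values from the study.

```python
# Hedged sketch: FMEA scoring for an HDR skin-applicator workflow.
# The 1-10 scales and product-form risk number are the conventional choice;
# the failure modes and scores below are illustrative placeholders only.
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str
    severity: int    # 1 (negligible) .. 10 (catastrophic)
    occurrence: int  # 1 (rare)       .. 10 (frequent)
    detection: int   # 1 (always caught) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("wrong applicator selected", severity=8, occurrence=3, detection=4),
    FailureMode("dwell time entered incorrectly", severity=9, occurrence=2, detection=5),
    FailureMode("prescription miscommunicated", severity=7, occurrence=3, detection=3),
]

# Revisit the highest-risk process nodes first.
for fm in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{fm.step:35s} RPN = {fm.rpn}")
```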
TH-AB-201-01: A Feasibility Study of Independent Dose Verification for CyberKnife
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sato, A; Noda, T; Keduka, Y
2016-06-15
Purpose: CyberKnife irradiation is delivered with many small, intensity-modulated beams compared with conventional linacs. Few publications on independent dose calculation verification for CyberKnife have been reported. In this study, we evaluated the feasibility of independent dose verification for CyberKnife treatment as a secondary check. Methods: The following were measured: test plans using static, single beams, and clinical plans both in a phantom and using patient CT data. 75 patient plans were collected from several treatment sites of brain, lung, liver and bone. In the test plans and the phantom plans, a pinpoint ion-chamber measurement was performed to assess dose deviation for a treatment planning system (TPS) and an independent verification program, Simple MU Analysis (SMU). In the clinical plans, the dose deviation between the SMU and the TPS was assessed. Results: In the test plans, the dose deviations were 3.3±4.5% and 4.1±4.4% for the TPS and the SMU, respectively. In the phantom measurements for the clinical plans, the dose deviations were −0.2±3.6% for the TPS and −2.3±4.8% for the SMU. In the clinical plans using the patient's CT, the dose deviations were −3.0±2.1% (Mean±1SD). The systematic difference was partially derived from the inverse square law and penumbra calculation. Conclusion: The independent dose calculation for CyberKnife shows −3.0±4.2% (Mean±2SD) and, in our study, the confidence limit was achieved within the 5% tolerance level from AAPM Task Group 114 for non-IMRT treatment. Thus, it may be feasible to use independent dose calculation verification for CyberKnife treatment as the secondary check. This research is partially supported by Japan Agency for Medical Research and Development (AMED)
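The summary statistics quoted above lend themselves to a simple acceptance test for a secondary check. The sketch below computes the mean, standard deviation, and a confidence limit of the common |mean| + 1.96·SD form and compares it with a 5% tolerance; the deviation values are illustrative placeholders, and the confidence-limit form is an assumption in the spirit of TG-114 rather than the study's exact procedure.

```python
# Hedged sketch: summarizing secondary-check dose deviations and testing
# them against a tolerance. The confidence-limit form |mean| + 1.96*SD
# follows a common convention (TG-114 style); the values are illustrative.
import statistics

deviations_pct = [-3.1, -2.4, -4.0, -1.8, -2.9, -3.5]  # (SMU - TPS) / TPS * 100, per plan

mean = statistics.mean(deviations_pct)
sd = statistics.stdev(deviations_pct)
confidence_limit = abs(mean) + 1.96 * sd

tolerance_pct = 5.0
print(f"mean = {mean:.1f}%, SD = {sd:.1f}%, CL = {confidence_limit:.1f}%")
print("within tolerance" if confidence_limit <= tolerance_pct else "exceeds tolerance")
```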
Enhanced dynamic wedge and independent monitor unit verification.
Howlett, S J
2005-03-01
Some serious radiation accidents have occurred around the world during the delivery of radiotherapy treatment. The regrettable incident in Panama clearly indicated the need for independent monitor unit (MU) verification. Indeed the International Atomic Energy Agency (IAEA), after investigating the incident, made specific recommendations for radiotherapy centres which included an independent monitor unit check for all treatments. Independent monitor unit verification is practiced in many radiotherapy centres in developed countries around the world. It is mandatory in the USA but not yet in Australia. This paper describes the development of an independent MU program, concentrating on the implementation of the Enhanced Dynamic Wedge (EDW) component. The difficult case of non centre of field (COF) calculation points under the EDW was studied in some detail. Results of a survey of Australasian centres regarding the use of independent MU check systems are also presented. The system was developed with reference to MU calculations made by the Pinnacle 3D Radiotherapy Treatment Planning (RTP) system (ADAC - Philips) for 4 MV, 6 MV and 18 MV X-ray beams used at the Newcastle Mater Misericordiae Hospital (NMMH) in the clinical environment. A small systematic error was detected in the equation used for the EDW calculations. Results indicate that COF equations may be used in the non COF situation with similar accuracy to that achieved with profile corrected methods. Further collaborative work with other centres is planned to extend these findings.
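A hedged sketch of what an independent point-dose MU check of this kind can look like is given below. The factors used (output factors, TPR, wedge factor, inverse square) are generic textbook quantities rather than the paper's formulation, and the EDW off-axis handling discussed above is collapsed into a single effective wedge factor; all numbers are placeholders.

```python
# Hedged sketch: a generic independent monitor-unit check for a point dose.
# The factor names are standard textbook quantities, not the specific
# formulation used in the paper; the EDW handling is reduced here to a
# single effective wedge factor for illustration.
def independent_mu(dose_cgy, dose_rate_ref, sc, sp, tpr, wedge_factor, ssd, depth, scd_ref=100.0):
    inverse_square = (scd_ref / (ssd + depth)) ** 2
    return dose_cgy / (dose_rate_ref * sc * sp * tpr * wedge_factor * inverse_square)

mu_check = independent_mu(dose_cgy=200.0, dose_rate_ref=1.0, sc=1.01, sp=0.99,
                          tpr=0.85, wedge_factor=0.72, ssd=95.0, depth=5.0)
mu_plan = 335.0  # value reported by the planning system (illustrative)
print(f"MU check = {mu_check:.1f}, deviation = {100 * (mu_check - mu_plan) / mu_plan:.1f}%")
```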
Model-Driven Safety Analysis of Closed-Loop Medical Systems
Pajic, Miroslav; Mangharam, Rahul; Sokolsky, Oleg; Arney, David; Goldman, Julian; Lee, Insup
2013-01-01
In modern hospitals, patients are treated using a wide array of medical devices that are increasingly interacting with each other over the network, thus offering a perfect example of a cyber-physical system. We study the safety of a medical device system for the physiologic closed-loop control of drug infusion. The main contribution of the paper is the verification approach for the safety properties of closed-loop medical device systems. We demonstrate, using a case study, that the approach can be applied to a system of clinical importance. Our method combines simulation-based analysis of a detailed model of the system that contains continuous patient dynamics with model checking of a more abstract timed automata model. We show that the relationship between the two models preserves the crucial aspect of the timing behavior that ensures the conservativeness of the safety analysis. We also describe system design that can provide open-loop safety under network failure. PMID:24177176
Symbolically Modeling Concurrent MCAPI Executions
NASA Technical Reports Server (NTRS)
Fischer, Topher; Mercer, Eric; Rungta, Neha
2011-01-01
Improper use of Inter-Process Communication (IPC) within concurrent systems often creates data races which can lead to bugs that are challenging to discover. Techniques that use Satisfiability Modulo Theories (SMT) problems to symbolically model possible executions of concurrent software have recently been proposed for use in the formal verification of software. In this work we describe a new technique for modeling executions of concurrent software that use a message passing API called MCAPI. Our technique uses an execution trace to create an SMT problem that symbolically models all possible concurrent executions and follows the same sequence of conditional branch outcomes as the provided execution trace. We check if there exists a satisfying assignment to the SMT problem with respect to specific safety properties. If such an assignment exists, it provides the conditions that lead to the violation of the property. We show how our method models behaviors of MCAPI applications that are ignored in previously published techniques.
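To illustrate the general idea of encoding interleavings as an SMT problem, the toy sketch below uses the z3-solver Python bindings to ask whether any ordering of two racing sends lets a receiver observe a particular value. The encoding and variable names are hypothetical and far simpler than the MCAPI trace encoding described in the abstract.

```python
# Hedged sketch: a tiny interleaving question posed as an SMT problem with
# the z3-solver Python bindings. Two "sends" race to set a mailbox; we ask
# whether some ordering lets the receiver observe value 2.
from z3 import Int, Solver, Or, And, sat

t_send1, t_send2, t_recv = Int("t_send1"), Int("t_send2"), Int("t_recv")
mailbox = Int("mailbox")

s = Solver()
s.add(t_send1 != t_send2, t_recv > t_send1, t_recv > t_send2)   # both sends happen before the receive
# The receiver sees the value written by whichever send is ordered last.
s.add(Or(And(t_send1 > t_send2, mailbox == 1),
         And(t_send2 > t_send1, mailbox == 2)))
s.add(mailbox == 2)                                             # property of interest: can we observe 2?

if s.check() == sat:
    print("witness interleaving exists:", s.model())
else:
    print("no interleaving produces value 2")
```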
Model-Driven Safety Analysis of Closed-Loop Medical Systems.
Pajic, Miroslav; Mangharam, Rahul; Sokolsky, Oleg; Arney, David; Goldman, Julian; Lee, Insup
2012-10-26
In modern hospitals, patients are treated using a wide array of medical devices that are increasingly interacting with each other over the network, thus offering a perfect example of a cyber-physical system. We study the safety of a medical device system for the physiologic closed-loop control of drug infusion. The main contribution of the paper is the verification approach for the safety properties of closed-loop medical device systems. We demonstrate, using a case study, that the approach can be applied to a system of clinical importance. Our method combines simulation-based analysis of a detailed model of the system that contains continuous patient dynamics with model checking of a more abstract timed automata model. We show that the relationship between the two models preserves the crucial aspect of the timing behavior that ensures the conservativeness of the safety analysis. We also describe system design that can provide open-loop safety under network failure.
FAME, a microprocessor based front-end analysis and modeling environment
NASA Technical Reports Server (NTRS)
Rosenbaum, J. D.; Kutin, E. B.
1980-01-01
Higher order software (HOS) is a methodology for the specification and verification of large scale, complex, real time systems. The HOS methodology was implemented as FAME (front end analysis and modeling environment), a microprocessor based system for interactively developing, analyzing, and displaying system models in a low cost user-friendly environment. The nature of the model is such that when completed it can be the basis for projection to a variety of forms such as structured design diagrams, Petri-nets, data flow diagrams, and PSL/PSA source code. The user's interface with the analyzer is easily recognized by any current user of a structured modeling approach; therefore extensive training is unnecessary. Furthermore, when all the system capabilities are used one can check on proper usage of data types, functions, and control structures thereby adding a new dimension to the design process that will lead to better and more easily verified software designs.
3D Modeling with Photogrammetry by UAVs and Model Quality Verification
NASA Astrophysics Data System (ADS)
Barrile, V.; Bilotta, G.; Nunnari, A.
2017-11-01
This paper deals with a test led by the Geomatics laboratory (DICEAM, Mediterranea University of Reggio Calabria) concerning the application of UAV photogrammetry for survey, monitoring and checking. The case study concerns the surroundings of the Department of Agriculture Sciences. In recent years, this area was affected by landslides, and survey activities were carried out to keep the phenomenon under control. For this purpose, a set of digital images was acquired through a UAV equipped with a digital camera and GPS. Subsequently, the processing for the production of a 3D georeferenced model was performed using the commercial software Agisoft PhotoScan. Similarly, the use of a terrestrial laser scanning technique allowed the production of dense point clouds and 3D models of the same area. To assess the accuracy of the UAV-derived 3D models, a comparison between image- and range-based methods was performed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harpool, K; De La Fuente Herman, T; Ahmad, S
Purpose: To evaluate the performance of a two-dimensional (2D) array-diode detector for geometric and dosimetric quality assurance (QA) tests of high-dose-rate (HDR) brachytherapy with an Ir-192 source. Methods: A phantom setup was designed that encapsulated a two-dimensional (2D) array-diode detector (MapCheck2) and a catheter for the HDR brachytherapy Ir-192 source. This setup was used to perform both geometric and dosimetric quality assurance for the HDR Ir-192 source. The geometric tests included: (a) measurement of the position of the source and (b) spacing between different dwell positions. The dosimetric tests included: (a) linearity of output with time, (b) end effect and (c) relative dose verification. The 2D dose distribution measured with MapCheck2 was used to perform these tests. The results of MapCheck2 were compared with the corresponding quality assurance tests performed with Gafchromic film and a well ionization chamber. Results: The position of the source and the spacing between different dwell positions were reproducible within 1 mm accuracy by measuring the position of maximal dose using MapCheck2, in contrast to the film, which showed a blurred image of the dwell positions due to limited film sensitivity to irradiation. The linearity of the dose with dwell times measured with MapCheck2 was superior to the linearity measured with the ionization chamber due to the higher signal-to-noise ratio of the diode readings. MapCheck2 provided a more accurate measurement of the end effect, with uncertainty < 1.5% in comparison with the ionization chamber uncertainty of 3%. Although MapCheck2 did not provide an absolute calibration dosimeter for the activity of the source, it provided an accurate tool for relative dose verification in HDR brachytherapy. Conclusion: The 2D array-diode detector provides a practical, compact and accurate tool to perform quality assurance for HDR brachytherapy with an Ir-192 source. The diodes in MapCheck2 have high radiation sensitivity and linearity that is superior to the Gafchromic films and ionization chamber used for geometric and dosimetric QA in HDR brachytherapy, respectively.
Automata-Based Verification of Temporal Properties on Running Programs
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Havelund, Klaus; Lan, Sonie (Technical Monitor)
2001-01-01
This paper presents an approach to checking a running program against its Linear Temporal Logic (LTL) specifications. LTL is a widely used logic for expressing properties of programs viewed as sets of executions. Our approach consists of translating LTL formulae to finite-state automata, which are used as observers of the program behavior. The translation algorithm we propose modifies standard LTL to Buchi automata conversion techniques to generate automata that check finite program traces. The algorithm has been implemented in a tool, which has been integrated with the generic JPaX framework for runtime analysis of Java programs.
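A minimal, hand-written example of the finite-trace monitoring idea is sketched below for the LTL property G(request -> F ack). The paper's contribution is generating such observers automatically from LTL formulae; this two-state monitor is only an illustration and is not produced by that translation.

```python
# Hedged sketch: a hand-written finite-trace monitor for the LTL property
# G(request -> F ack), i.e. every request is eventually acknowledged.
# This two-state automaton only illustrates the monitoring idea; the paper
# generates such observers automatically from LTL formulae.
def monitor(trace):
    pending = False                     # state: an un-acknowledged request is outstanding
    for event in trace:
        if event == "request":
            pending = True
        elif event == "ack":
            pending = False
    # On a finite trace, the property fails if a request is still pending at the end.
    return not pending

print(monitor(["request", "work", "ack", "request", "ack"]))  # True  (satisfied)
print(monitor(["request", "work", "work"]))                   # False (violated)
```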
Model Checking Failed Conjectures in Theorem Proving: A Case Study
NASA Technical Reports Server (NTRS)
Pike, Lee; Miner, Paul; Torres-Pomales, Wilfredo
2004-01-01
Interactive mechanical theorem proving can provide high assurance of correct design, but it can also be a slow iterative process. Much time is spent determining why a proof of a conjecture is not forthcoming. In some cases, the conjecture is false and in others, the attempted proof is insufficient. In this case study, we use the SAL family of model checkers to generate a concrete counterexample to an unproven conjecture specified in the mechanical theorem prover, PVS. The focus of our case study is the ROBUS Interactive Consistency Protocol. We combine the use of a mechanical theorem prover and a model checker to expose a subtle flaw in the protocol that occurs under a particular scenario of faults and processor states. Uncovering the flaw allows us to mend the protocol and complete its general verification in PVS.
Flight Guidance System Validation Using SPIN
NASA Technical Reports Server (NTRS)
Naydich, Dimitri; Nowakowski, John
1998-01-01
To verify the requirements for the mode control logic of a Flight Guidance System (FGS) we applied SPIN, a widely used software package that supports the formal verification of distributed systems. These requirements, collectively called the FGS specification, were developed at Rockwell Avionics & Communications and expressed in terms of the Consortium Requirements Engineering (CoRE) method. The properties to be verified are the invariants formulated in the FGS specification, along with the standard properties of consistency and completeness. The project had two stages. First, the FGS specification and the properties to be verified were reformulated in PROMELA, the input language of SPIN. This involved a semantics issue, as some constructs of the FGS specification do not have well-defined semantics in CoRE. Then we attempted to verify the requirements' properties using the automatic model checking facilities of SPIN. Due to the large size of the state space of the FGS specification an exhaustive state space analysis with SPIN turned out to be impossible. So we used the supertrace model checking procedure of SPIN that provides for a partial analysis of the state space. During this process, we found some subtle errors in the FGS specification.
40 CFR 86.1330-90 - Test sequence; general requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... fuel and the point at which it is measured shall be specified by the manufacturer. (g) Pre-test engine...-fueled or methanol-fueled diesel engine fuel flows, etc.), pre-test engine performance checks (e.g., verification of actual rated rpm, etc.) and pre-test system calibrations (e.g., inlet and exhaust restrictions...
40 CFR 86.1330-90 - Test sequence; general requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... fuel and the point at which it is measured shall be specified by the manufacturer. (g) Pre-test engine...-fueled or methanol-fueled diesel engine fuel flows, etc.), pre-test engine performance checks (e.g., verification of actual rated rpm, etc.) and pre-test system calibrations (e.g., inlet and exhaust restrictions...
40 CFR 86.1330-90 - Test sequence; general requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... fuel and the point at which it is measured shall be specified by the manufacturer. (g) Pre-test engine...-fueled or methanol-fueled diesel engine fuel flows, etc.), pre-test engine performance checks (e.g., verification of actual rated rpm, etc.) and pre-test system calibrations (e.g., inlet and exhaust restrictions...
40 CFR 86.1330-90 - Test sequence; general requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... fuel and the point at which it is measured shall be specified by the manufacturer. (g) Pre-test engine...-fueled or methanol-fueled diesel engine fuel flows, etc.), pre-test engine performance checks (e.g., verification of actual rated rpm, etc.) and pre-test system calibrations (e.g., inlet and exhaust restrictions...
Intellectual Functioning and Aging: A Selected Bibliography. Technical Bibliographies on Aging.
ERIC Educational Resources Information Center
Schaie, K. Warner; Zelinski, Elizabeth M.
The selected bibliography contains about 400 references taken from a keysort file of more than 45,000 references, compiled from commercially available data bases and published sources, relevant to gerontology. Those of questionable accuracy were checked or deleted during the verification process. Most references are in English and were selected…
A system for EPID-based real-time treatment delivery verification during dynamic IMRT treatment.
Fuangrod, Todsaporn; Woodruff, Henry C; van Uytven, Eric; McCurdy, Boyd M C; Kuncic, Zdenka; O'Connor, Daryl J; Greer, Peter B
2013-09-01
To design and develop a real-time electronic portal imaging device (EPID)-based delivery verification system for dynamic intensity modulated radiation therapy (IMRT) which enables detection of gross treatment delivery errors before delivery of substantial radiation to the patient. The system utilizes a comprehensive physics-based model to generate a series of predicted transit EPID image frames as a reference dataset and compares these to measured EPID frames acquired during treatment. The two datasets are compared using MLC aperture comparison and cumulative signal checking techniques. The system operation in real-time was simulated offline using previously acquired images for 19 IMRT patient deliveries with both frame-by-frame comparison and cumulative frame comparison. Simulated error case studies were used to demonstrate the system sensitivity and performance. The accuracy of the synchronization method was shown to agree within two control points, which corresponds to approximately 1% of the total MU to be delivered for dynamic IMRT. The system achieved mean real-time gamma results for frame-by-frame analysis of 86.6% and 89.0% for 3%, 3 mm and 4%, 4 mm criteria, respectively, and 97.9% and 98.6% for cumulative gamma analysis. The system can detect a 10% MU error using 3%, 3 mm criteria within approximately 10 s. The EPID-based real-time delivery verification system successfully detected simulated gross errors introduced into patient plan deliveries in near real-time (within 0.1 s). A real-time radiation delivery verification system for dynamic IMRT has been demonstrated that is designed to prevent major mistreatments in modern radiation therapy.
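The cumulative-signal idea can be illustrated with a toy check that accumulates predicted and measured per-frame signals and flags a gross deviation. The values and the 10% threshold below are assumptions for illustration; the actual system uses MLC aperture comparison and gamma-based criteria as described above.

```python
# Hedged sketch: the cumulative-signal idea reduced to a toy check.
# Predicted and measured per-frame integral signals are accumulated and
# compared against a relative tolerance; the values and the 10% threshold
# are illustrative, not the study's gamma-based criteria.
def cumulative_check(predicted_frames, measured_frames, tolerance=0.10):
    cum_pred = cum_meas = 0.0
    for i, (p, m) in enumerate(zip(predicted_frames, measured_frames)):
        cum_pred += p
        cum_meas += m
        if cum_pred > 0 and abs(cum_meas - cum_pred) / cum_pred > tolerance:
            return f"gross error flagged at frame {i}"
    return "delivery consistent with prediction"

predicted = [10.0, 12.0, 11.5, 13.0, 12.5]
measured  = [10.1, 11.8, 11.6,  5.0, 12.4]   # frame 3 grossly deviates
print(cumulative_check(predicted, measured))
```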
A new verification film system for routine quality control of radiation fields: Kodak EC-L.
Hermann, A; Bratengeier, K; Priske, A; Flentje, M
2000-06-01
The use of modern irradiation techniques requires better verification films for determining set-up deviations and patient movements during the course of radiation treatment. This is an investigation of the image quality and time requirement of a new verification film system compared to a conventional portal film system. For conventional verifications we used Agfa Curix HT 1000 films, which were compared to the new Kodak EC-L film system. 344 Agfa Curix HT 1000 and 381 Kodak EC-L portal films of different tumor sites (prostate, rectum, head and neck) were visually judged on a light box by 2 experienced physicians. Subjective judgement of image quality, masking of films and time requirement were checked. In this investigation, 68% of 175 Kodak EC-L ap/pa films were judged "good", 18% were classified as "moderate" and 14% as "poor", but only 22% of 173 conventional ap/pa verification films (Agfa Curix HT 1000) were judged to be "good". The image quality, detail perception and time required for film inspection of the new Kodak EC-L film system were significantly improved when compared with standard portal films. They could be read more accurately and the detection of set-up deviations was facilitated.
NASA Technical Reports Server (NTRS)
1990-01-01
The purpose is to report the state-of-the-practice in Verification and Validation (V and V) of Expert Systems (ESs) on current NASA and Industry applications. This is the first task of a series which has the ultimate purpose of ensuring that adequate ES V and V tools and techniques are available for Space Station Knowledge Based Systems development. The strategy for determining the state-of-the-practice is to check how well each of the known ES V and V issues is being addressed and to what extent they have impacted the development of Expert Systems.
Classical verification of quantum circuits containing few basis changes
NASA Astrophysics Data System (ADS)
Demarie, Tommaso F.; Ouyang, Yingkai; Fitzsimons, Joseph F.
2018-04-01
We consider the task of verifying the correctness of quantum computation for a restricted class of circuits which contain at most two basis changes. This contains circuits giving rise to the second level of the Fourier hierarchy, the lowest level for which there is an established quantum advantage. We show that when the circuit has an outcome with probability at least the inverse of some polynomial in the circuit size, the outcome can be checked in polynomial time with bounded error by a completely classical verifier. This verification procedure is based on random sampling of computational paths and is only possible given knowledge of the likely outcome.
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiCostanzo, D; Ayan, A; Woollard, J
Purpose: To automate the daily verification of each patient's treatment by utilizing the trajectory log files (TLs) written by the Varian TrueBeam linear accelerator, while reducing the number of false positives, including jaw and gantry positioning errors, that are displayed in the Treatment History tab of Varian's Chart QA module. Methods: Small deviations in treatment parameters are difficult to detect in weekly chart checks but may be significant contributors to delivery errors, so detecting them daily would be valuable. Software was developed in house to read TLs. Multiple functions were implemented within the software that allow it to operate via a GUI to analyze TLs, or as a script run on a regular basis. In order to determine tolerance levels for the scripted analysis, 15,241 TLs from seven TrueBeams were analyzed. The maximum error of each axis for each TL was written to a CSV file and statistically analyzed to determine the tolerance for each axis accessible in the TLs to flag for manual review. The software/scripts developed were tested by varying the tolerance values to verify correct behavior. After tolerances were determined, multiple weeks of manual chart checks were performed simultaneously with the automated analysis to ensure validity. Results: The tolerance values for the major axes were determined to be 0.025 degrees for the collimator, 1.0 degree for the gantry, 0.002 cm for the y-jaws, 0.01 cm for the x-jaws, and 0.5 MU for the MU. The automated verification of treatment parameters has been in clinical use for 4 months. During that time, no errors in machine delivery of the patient treatments were found. Conclusion: The process detailed here is a viable and effective alternative to manually checking treatment parameters during weekly chart checks.
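A sketch of the scripted tolerance comparison is shown below, using the per-axis tolerance values quoted in the abstract. Parsing the binary trajectory log is out of scope here, so the input is assumed to be a dictionary of maximum absolute deviations per axis (a hypothetical structure, not Varian's log format).

```python
# Hedged sketch: flagging per-axis deviations from a trajectory log against
# the tolerances quoted above. The input dict of maximum |expected - actual|
# deviations per axis is a hypothetical stand-in for the parsed log.
TOLERANCES = {
    "collimator_deg": 0.025,
    "gantry_deg": 1.0,
    "y_jaw_cm": 0.002,
    "x_jaw_cm": 0.01,
    "mu": 0.5,
}

def flag_for_review(max_deviations):
    """Return the axes whose maximum deviation exceeds tolerance."""
    return [axis for axis, dev in max_deviations.items()
            if dev > TOLERANCES.get(axis, float("inf"))]

log_max_dev = {"collimator_deg": 0.010, "gantry_deg": 1.3,
               "y_jaw_cm": 0.001, "x_jaw_cm": 0.004, "mu": 0.2}
print(flag_for_review(log_max_dev))   # ['gantry_deg'] -> needs manual review
```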
Cycle time reduction by Html report in mask checking flow
NASA Astrophysics Data System (ADS)
Chen, Jian-Cheng; Lu, Min-Ying; Fang, Xiang; Shen, Ming-Feng; Ma, Shou-Yuan; Yang, Chuen-Huei; Tsai, Joe; Lee, Rachel; Deng, Erwin; Lin, Ling-Chieh; Liao, Hung-Yueh; Tsai, Jenny; Bowhill, Amanda; Vu, Hien; Russell, Gordon
2017-07-01
The Mask Data Correctness Check (MDCC) is a reticle-level, multi-layer DRC-like check evolved from mask rule check (MRC). The MDCC uses the extended job deck (EJB) to achieve mask composition and to perform a detailed check of the positioning and integrity of each component of the reticle. Different design patterns on the mask are mapped to different layers, so users can review the whole reticle and check the interactions between different designs before the final mask pattern file is available. However, many types of MDCC check results, such as errors from overlapping patterns, usually have very large and complex-shaped highlighted areas covering the boundary of the design. Users have to load the result OASIS file, overlay it on the original database that was assembled in the MDCC process in a layout viewer, and then search for the details of the check results. We introduce a quick result-reviewing method based on an HTML-format report generated by Calibre® RVE. In the report generation process, we analyze and extract the essential part of the result OASIS file into a result database (RDB) file using standard verification rule format (SVRF) commands. Calibre® RVE automatically loads the assembled reticle pattern and generates screen shots of these check results. All the processes are triggered automatically just after the MDCC process finishes. Users simply open the HTML report to get the information they need: for example, the check summary, captured images of results and their coordinates.
Interaction model between capsule robot and intestine based on nonlinear viscoelasticity.
Zhang, Cheng; Liu, Hao; Tan, Renjia; Li, Hongyi
2014-03-01
The active capsule endoscope, which can also be called a capsule robot, has developed from laboratory research to clinical application. However, the system still has defects, such as poor controllability and failure to realize automatic checks. The imperfection of the interaction model between the capsule robot and the intestine is one of the dominant reasons for the above problems. This article aims to establish a model to support the control method of the capsule robot. It is established based on nonlinear viscoelasticity. The interaction force of the model consists of environmental resistance, viscous resistance and Coulomb friction. The parameters of the model are identified by experimental investigation. Different methods are used in the experiments to obtain different values of the same parameter at different velocities. The model is shown to be valid by experimental verification. The achievement of this article is an improved interaction model. It is hoped that the model can optimize the control method of the capsule robot in the future.
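The force decomposition named above (environmental resistance, viscous resistance, Coulomb friction) can be illustrated with a toy additive model. In the paper the viscoelastic terms are nonlinear and identified from experiments; in the sketch below the viscous term is linear in velocity and all coefficients are placeholder values.

```python
# Hedged sketch: the force decomposition (environmental resistance + viscous
# resistance + Coulomb friction) as a toy additive model. The paper's
# viscoelastic terms are nonlinear and experimentally identified; here the
# viscous term is linear and all coefficients are placeholders.
import math

def interaction_force(velocity_mm_s, f_env=0.05, c_viscous=0.02, f_coulomb=0.03):
    """Resistive force (N) opposing capsule motion at a given velocity."""
    direction = math.copysign(1.0, velocity_mm_s) if velocity_mm_s != 0 else 0.0
    return direction * (f_env + f_coulomb) + c_viscous * velocity_mm_s

for v in (0.0, 1.0, 5.0, 10.0):
    print(f"v = {v:4.1f} mm/s -> F = {interaction_force(v):.3f} N")
```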
UML activity diagrams in requirements specification of logic controllers
NASA Astrophysics Data System (ADS)
Grobelna, Iwona; Grobelny, Michał
2015-12-01
Logic controller specifications can be prepared using various techniques. One of them is the widely understood and user-friendly UML language and its activity diagrams. Using formal methods during the design phase increases the assurance that the implemented system meets the project requirements. In our approach we use the model checking technique to formally verify a specification against user-defined behavioral requirements. The properties are usually defined as temporal logic formulas. In the paper we propose to use UML activity diagrams for requirements definition and then to formalize them as temporal logic formulas. As a result, UML activity diagrams can be used both for logic controller specification and for requirements definition, which simplifies the specification and verification process.
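As an illustration of the kind of requirement that gets formalized, two temporal logic formulas of the sort derivable from activity diagrams are shown below; the proposition names are hypothetical.

```latex
% Illustrative LTL requirements of the kind derived from activity diagrams;
% the proposition names (start, valve_open, alarm) are hypothetical.
\begin{align*}
  &\mathbf{G}\,(\mathit{start} \rightarrow \mathbf{F}\,\mathit{valve\_open})
    && \text{every start is eventually followed by the valve opening} \\
  &\mathbf{G}\,\neg(\mathit{valve\_open} \wedge \mathit{alarm})
    && \text{the valve is never open while the alarm is active}
\end{align*}
```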
Theorem Proving in Intel Hardware Design
NASA Technical Reports Server (NTRS)
O'Leary, John
2009-01-01
For the past decade, a framework combining model checking (symbolic trajectory evaluation) and higher-order logic theorem proving has been in production use at Intel. Our tools and methodology have been used to formally verify execution cluster functionality (including floating-point operations) for a number of Intel products, including the Pentium® 4 and Core™ i7 processors. Hardware verification in 2009 is much more challenging than it was in 1999 - today's CPU chip designs contain many processor cores and significant firmware content. This talk will attempt to distill the lessons learned over the past ten years, discuss how they apply to today's problems, outline some future directions.
Highly efficient simulation environment for HDTV video decoder in VLSI design
NASA Astrophysics Data System (ADS)
Mao, Xun; Wang, Wei; Gong, Huimin; He, Yan L.; Lou, Jian; Yu, Lu; Yao, Qingdong; Pirsch, Peter
2002-01-01
With the increasing complexity of VLSI designs, especially SoC (System on Chip) MPEG-2 video decoders with HDTV scalability, simulation and verification of the full design, even at the behavioral level in HDL, often prove very slow and costly, and full verification is difficult to perform until late in the design process. They therefore become the bottleneck of the HDTV video decoder design procedure and strongly influence its time-to-market. In this paper, the architecture of the hardware/software interface of an HDTV video decoder is studied, and a Hardware-Software Mixed Simulation (HSMS) platform is proposed to check and correct errors in the early design stages, based on the MPEG-2 video decoding algorithm. The application of HSMS to the target system is achieved through several introduced approaches, which speed up the simulation and verification task without decreasing performance.
Gender verification testing in sport.
Ferris, E A
1992-07-01
Gender verification testing in sport, first introduced in 1966 by the International Amateur Athletic Federation (IAAF) in response to fears that males with a physical advantage in terms of muscle mass and strength were cheating by masquerading as females in women's competition, has led to unfair disqualifications of women athletes and untold psychological harm. The discredited sex chromatin test, which identifies only the sex chromosome component of gender and is therefore misleading, was abandoned in 1991 by the IAAF in favour of medical checks for all athletes, women and men, which preclude the need for gender testing. But, women athletes will still be tested at the Olympic Games at Albertville and Barcelona using polymerase chain reaction (PCR) to amplify DNA sequences on the Y chromosome which identifies genetic sex only. Gender verification testing may in time be abolished when the sporting community are fully cognizant of its scientific and ethical implications.
External tank aerothermal design criteria verification, volume 2
NASA Technical Reports Server (NTRS)
Crain, William K.; Frost, Cynthia; Warmbrod, John
1990-01-01
The objective of the study was to produce an independent set of ascent environments which would serve as a check on the Rockwell International (RI) IVBC-3 environments and provide an independent reevaluation of the thermal design criteria for the External Tank (ET). Given here are the plotted timewise environments comparing REMTECH results to the RI IVBC results.
DownscaleConcept 2.3 User Manual. Downscaled, Spatially Distributed Soil Moisture Calculator
2011-01-01
be first presented with the dataset 28 results to your query. From this page, check the box next to the ASTER GDEM dataset and press the "List...information for verification. No charge will be associated with GDEM data archives. 14. Select "Submit Order Now!" to process your order. 15. Wait for
An Event Driven Hybrid Identity Management Approach to Privacy Enhanced e-Health
Sánchez-Guerrero, Rosa; Almenárez, Florina; Díaz-Sánchez, Daniel; Marín, Andrés; Arias, Patricia; Sanvido, Fabio
2012-01-01
Credential-based authorization offers interesting advantages for ubiquitous scenarios involving limited devices such as sensors and personal mobile equipment: the verification can be done locally; it offers a lower computational cost than its competitors for issuing, storing, and verification; and it naturally supports rights delegation. The main drawback is the revocation of rights. Revocation requires handling potentially large revocation lists, or using protocols to check the revocation status, bringing extra communication costs not acceptable for sensors and other limited devices. Moreover, the effective revocation consent—considered as a privacy rule in sensitive scenarios—has not been fully addressed. This paper proposes an event-based mechanism empowering a new concept, the sleepyhead credentials, which allows time constraints and explicit revocation to be replaced by activating and deactivating authorization rights according to events. Our approach is to integrate this concept in IdM systems in a hybrid model supporting delegation, which can be an interesting alternative for scenarios where revocation of consent and user privacy are critical. The delegation includes a SAML compliant protocol, which we have validated through a proof-of-concept implementation. This article also explains the mathematical model describing the event-based model and offers estimations of the overhead introduced by the system. The paper focuses on health care scenarios, where we show the flexibility of the proposed event-based user consent revocation mechanism. PMID:22778634
An event driven hybrid identity management approach to privacy enhanced e-health.
Sánchez-Guerrero, Rosa; Almenárez, Florina; Díaz-Sánchez, Daniel; Marín, Andrés; Arias, Patricia; Sanvido, Fabio
2012-01-01
Credential-based authorization offers interesting advantages for ubiquitous scenarios involving limited devices such as sensors and personal mobile equipment: the verification can be done locally; it offers a lower computational cost than its competitors for issuing, storing, and verification; and it naturally supports rights delegation. The main drawback is the revocation of rights. Revocation requires handling potentially large revocation lists, or using protocols to check the revocation status, bringing extra communication costs not acceptable for sensors and other limited devices. Moreover, the effective revocation consent--considered as a privacy rule in sensitive scenarios--has not been fully addressed. This paper proposes an event-based mechanism empowering a new concept, the sleepyhead credentials, which allows time constraints and explicit revocation to be replaced by activating and deactivating authorization rights according to events. Our approach is to integrate this concept in IdM systems in a hybrid model supporting delegation, which can be an interesting alternative for scenarios where revocation of consent and user privacy are critical. The delegation includes a SAML compliant protocol, which we have validated through a proof-of-concept implementation. This article also explains the mathematical model describing the event-based model and offers estimations of the overhead introduced by the system. The paper focuses on health care scenarios, where we show the flexibility of the proposed event-based user consent revocation mechanism.
Optical proximity correction for anamorphic extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas
2017-10-01
The change from isomorphic to anamorphic optics in high numerical aperture extreme ultraviolet scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking. OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs that are more tolerant to mask errors.
Updraft Model for Development of Autonomous Soaring Uninhabited Air Vehicles
NASA Technical Reports Server (NTRS)
Allen, Michael J.
2006-01-01
Large birds and glider pilots commonly use updrafts caused by convection in the lower atmosphere to extend flight duration, increase cross-country speed, improve range, or simply to conserve energy. Uninhabited air vehicles may also have the ability to exploit updrafts to improve performance. An updraft model was developed at NASA Dryden Flight Research Center (Edwards, California) to investigate the use of convective lift for uninhabited air vehicles in desert regions. Balloon and surface measurements obtained at the National Oceanic and Atmospheric Administration Surface Radiation station (Desert Rock, Nevada) enabled the model development. The data were used to create a statistical representation of the convective velocity scale, w*, and the convective mixing-layer thickness, zi. These parameters were then used to determine updraft size, vertical velocity profile, spacing, and maximum height. This paper gives a complete description of the updraft model and its derivation. Computer code for running the model is also given in conjunction with a check case for model verification.
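A toy illustration of how the two scales enter an updraft parameterization is sketched below: the convective velocity scale w* sets the magnitude of the vertical velocity and the mixing-layer thickness zi sets the vertical extent and a nominal radius. The bell-shaped profile and scaling constants are assumptions for illustration, not the parameterization developed in the report.

```python
# Hedged sketch: how the convective velocity scale w* and mixing-layer depth
# zi can parameterize a toy updraft. The bell-shaped radial profile and the
# scaling constants are illustrative only, not the report's parameterization.
import math

def updraft_vertical_velocity(r_m, z_m, w_star=2.5, zi_m=1500.0):
    """Toy vertical velocity (m/s) at radius r and altitude z inside an updraft."""
    if z_m <= 0 or z_m >= zi_m:
        return 0.0
    radius_m = 0.1 * zi_m                               # updraft radius ~ a fraction of zi (assumed)
    shape_z = 4.0 * (z_m / zi_m) * (1.0 - z_m / zi_m)   # vanishes at the surface and at zi
    shape_r = math.exp(-(r_m / radius_m) ** 2)          # Gaussian decay with radius
    return w_star * shape_z * shape_r

print(f"{updraft_vertical_velocity(r_m=0.0, z_m=750.0):.2f} m/s at the core, mid-layer")
print(f"{updraft_vertical_velocity(r_m=300.0, z_m=750.0):.2f} m/s well outside the core")
```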
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takahashi, R; Kamima, T; Tachibana, H
2016-06-15
Purpose: To investigate the effect of using trajectory files from the linear accelerator for Clarkson-based independent dose verification of IMRT and VMAT plans. Methods: A CT-based independent dose verification software (Simple MU Analysis: SMU, Triangle Products, Japan) with a Clarkson-based algorithm was modified to calculate dose using the trajectory log files. Eclipse, with the three techniques of step and shoot (SS), sliding window (SW) and RapidArc (RA), was used as the treatment planning system (TPS). In this study, clinically approved IMRT and VMAT plans for prostate and head and neck (HN) at two institutions were retrospectively analyzed to assess the dose deviation between the DICOM-RT plan (PL) and the trajectory log file (TJ). An additional analysis was performed to evaluate the MLC error detection capability of SMU when the trajectory log files were modified by adding systematic errors (0.2, 0.5, 1.0 mm) and random errors (5, 10, 30 mm) to the actual MLC positions. Results: The dose deviations for prostate and HN at the two sites were 0.0% and 0.0% in SS, 0.1±0.0% and 0.1±0.1% in SW, and 0.6±0.5% and 0.7±0.9% in RA, respectively. The MLC error detection analysis shows that the plans for HN IMRT were the most sensitive: a 0.2 mm systematic error produced a 0.7% dose deviation on average. The MLC random errors did not affect the dose error. Conclusion: The use of trajectory log files, which include the actual MLC positions, gantry angles, etc., should be more effective for independent verification. The tolerance level for the secondary check using the trajectory file may be similar to that of the verification using the DICOM-RT plan file. In terms of the resolution of MLC positional error detection, the secondary check could detect MLC position errors at a level corresponding to the treatment sites and techniques. This research is partially supported by Japan Agency for Medical Research and Development (AMED)
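The MLC error-injection step described above can be sketched as adding a systematic offset and a zero-mean random perturbation to logged leaf positions before re-running the secondary check. The flat list-of-lists layout below is a hypothetical stand-in for the trajectory-log format.

```python
# Hedged sketch: the MLC error-injection step. A systematic offset and a
# zero-mean random perturbation are added to logged leaf positions before
# the modified log is fed back to the secondary dose check. The list-of-lists
# layout is a hypothetical stand-in for the trajectory-log format.
import random

def inject_mlc_errors(leaf_positions_mm, systematic_mm=0.2, random_amplitude_mm=5.0, seed=0):
    rng = random.Random(seed)
    perturbed = []
    for control_point in leaf_positions_mm:   # one list of leaf positions per control point
        perturbed.append([pos + systematic_mm + rng.uniform(-random_amplitude_mm, random_amplitude_mm)
                          for pos in control_point])
    return perturbed

original = [[-10.0, -5.0, 0.0, 5.0], [-8.0, -4.0, 1.0, 6.0]]
print(inject_mlc_errors(original, systematic_mm=0.2, random_amplitude_mm=0.0))  # systematic error only
```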
NASA Technical Reports Server (NTRS)
Brat, Guillaume P.; Martinie, Celia; Palanque, Philippe
2013-01-01
During early phases of the development of an interactive system, future system properties are identified (through interaction with end users in the brainstorming and prototyping phase of the application, or by other stakeholders), imposing requirements on the final system. They can be specific to the application under development or generic to all applications, such as usability principles. Instances of specific properties include visibility of the aircraft altitude, speed… in the cockpit and the continuous possibility of disengaging the autopilot in whatever state the aircraft is. Instances of generic properties include availability of undo (for undoable functions) and availability of a progression bar for functions lasting more than four seconds. While behavioral models of interactive systems using formal description techniques provide complete and unambiguous descriptions of states and state changes, they do not provide an explicit representation of the absence or presence of properties. Assessing that the system that has been built is the right system remains a challenge, usually met through extensive use and acceptance tests. With an explicit representation of properties and tools to support checking these properties, it becomes possible to provide developers with means for systematic exploration of the behavioral models and assessment of the presence or absence of these properties. This paper proposes the synergistic use of two tools for checking both generic and specific properties of interactive applications: Petshop and Java PathFinder. Petshop is dedicated to the description of interactive system behavior. Java PathFinder is dedicated to the runtime verification of Java applications and, through an extension, to user interfaces. This approach is exemplified on a safety-critical application in the area of interactive cockpits for large civil aircraft.
A Design of a Surgical Site Verification System.
Shen, Biyu; He, Yan; Chen, Haoyang
2017-01-01
Patient security is a significant issue in medical research and clinical practice at present. The Surgical Verification System (Patent Number: ZL 201420079273.5) is designed to recognize and check the surgical sites of patients so as to ensure operation security and decrease the risk for practitioners. Composition: (1) Operating Room Server, (2) Label Reader, (3) E-Label, (4) Surgical Site Display, (5) Ward Client, (6) Label Reader-Writer, and (7) Acousto-Optic Alarm. If the surgical identification, the surgical site, and so on are incorrect, a flashing label control appears and the alarm rings. You can specify a sound to play for the alarm, a picture to draw, and a message to send. It is a user-friendly system.
Formal development of a clock synchronization circuit
NASA Technical Reports Server (NTRS)
Miner, Paul S.
1995-01-01
This talk presents the latest stage in formal development of a fault-tolerant clock synchronization circuit. The development spans from a high level specification of the required properties to a circuit realizing the core function of the system. An abstract description of an algorithm has been verified to satisfy the high-level properties using the mechanical verification system EHDM. This abstract description is recast as a behavioral specification input to the Digital Design Derivation system (DDD) developed at Indiana University. DDD provides a formal design algebra for developing correct digital hardware. Using DDD as the principal design environment, a core circuit implementing the clock synchronization algorithm was developed. The design process consisted of standard DDD transformations augmented with an ad hoc refinement justified using the Prototype Verification System (PVS) from SRI International. Subsequent to the above development, Wilfredo Torres-Pomales discovered an area-efficient realization of the same function. Establishing correctness of this optimization requires reasoning in arithmetic, so a general verification is outside the domain of both DDD transformations and model-checking techniques. DDD represents digital hardware by systems of mutually recursive stream equations. A collection of PVS theories was developed to aid in reasoning about DDD-style streams. These theories include a combinator for defining streams that satisfy stream equations, and a means for proving stream equivalence by exhibiting a stream bisimulation. DDD was used to isolate the sub-system involved in Torres-Pomales' optimization. The equivalence between the original design and the optimized version was verified in PVS by exhibiting a suitable bisimulation. The verification depended upon type constraints on the input streams and made extensive use of the PVS type system. The dependent types in PVS provided a useful mechanism for defining an appropriate bisimulation.
Sayler, Elaine; Eldredge-Hindy, Harriet; Dinome, Jessie; Lockamy, Virginia; Harrison, Amy S
2015-01-01
The planning procedure for Valencia and Leipzig surface applicators (VLSAs) (Nucletron, Veenendaal, The Netherlands) differs substantially from CT-based planning; the unfamiliarity could lead to significant errors. This study applies failure modes and effects analysis (FMEA) to high-dose-rate (HDR) skin brachytherapy using VLSAs to ensure safety and quality. A multidisciplinary team created a protocol for HDR VLSA skin treatments and applied FMEA. Failure modes were identified and scored by severity, occurrence, and detectability. The clinical procedure was then revised to address high-scoring process nodes. Several key components were added to the protocol to minimize risk probability numbers. (1) Diagnosis, prescription, applicator selection, and setup are reviewed at weekly quality assurance rounds. Peer review reduces the likelihood of an inappropriate treatment regime. (2) A template for HDR skin treatments was established in the clinic's electronic medical record system to standardize treatment instructions. This reduces the chances of miscommunication between the physician and planner as well as increases the detectability of an error. (3) A screen check was implemented during the second check to increase detectability of an error. (4) To reduce error probability, the treatment plan worksheet was designed to display plan parameters in a format visually similar to the treatment console display, facilitating data entry and verification. (5) VLSAs are color coded and labeled to match the electronic medical record prescriptions, simplifying in-room selection and verification. Multidisciplinary planning and FMEA increased detectability and reduced error probability during VLSA HDR brachytherapy. This clinical model may be useful to institutions implementing similar procedures. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.
Two-year experience with the commercial Gamma Knife Check software.
Xu, Andy Yuanguang; Bhatnagar, Jagdish; Bednarz, Greg; Novotny, Josef; Flickinger, John; Lunsford, L Dade; Huq, M Saiful
2016-07-08
The Gamma Knife Check software is an FDA approved second check system for dose calculations in Gamma Knife radiosurgery. The purpose of this study was to evaluate the accuracy and the stability of the commercial software package as a tool for independent dose verification. The Gamma Knife Check software version 8.4 was commissioned for a Leksell Gamma Knife Perfexion and a 4C unit at the University of Pittsburgh Medical Center in May 2012. Independent dose verifications were performed using this software for 319 radiosurgery cases on the Perfexion and 283 radiosurgery cases on the 4C units. The cases on each machine were divided into groups according to their diagnoses, and an averaged absolute percent dose difference for each group was calculated. The percentage dose difference for each treatment target was obtained as the relative difference between the Gamma Knife Check dose and the dose from the tissue maximum ratio algorithm (TMR 10) from the GammaPlan software version 10 at the reference point. For treatment plans with imaging skull definition, results obtained from the Gamma Knife Check software using the measurement-based skull definition method are used for comparison. The collected dose difference data were also analyzed in terms of the distance from the treatment target to the skull, the number of treatment shots used for the target, and the gamma angles of the treatment shots. The averaged percent dose differences between the Gamma Knife Check software and the GammaPlan treatment planning system are 0.3%, 0.89%, 1.24%, 1.09%, 0.83%, 0.55%, 0.33%, and 1.49% for the trigeminal neuralgia, acoustic neuroma, arteriovenous malformation (AVM), meningioma, pituitary adenoma, glioma, functional disorders, and metastasis cases on the Perfexion unit. The corresponding averaged percent dose differences for the 4C unit are 0.33%, 1.2%, 2.78% 1.99%, 1.4%, 1.92%, 0.62%, and 1.51%, respectively. The dose difference is, in general, larger for treatment targets in the peripheral regions of the skull owing to the difference in the numerical methods used for skull shape simulation in the GammaPlan and the Gamma Knife Check software. Larger than 5% dose differences were observed on both machines for certain targets close to patient skull surface and for certain targets in the lower half of the brain on the Perfexion, especially when shots with 70 and/or 110 gamma angles are used. Out of the 1065 treatment targets studied, a 5% cutoff criterion cannot always be met for the dose differences between the studied versions of the Gamma Knife Check software and the planning system for 40 treatment targets. © 2016 The Authors.
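The per-target percentage dose difference used throughout this comparison is simply the relative difference at the reference point between the secondary-check dose and the planning-system dose; a short sketch with illustrative numbers follows.

```python
# Hedged sketch: the per-target percentage dose difference, i.e. the relative
# difference at the reference point between the secondary-check dose and the
# planning-system (TMR 10) dose. The doses and the 5% cutoff are illustrative.
def percent_dose_difference(d_check_gy, d_tps_gy):
    return 100.0 * (d_check_gy - d_tps_gy) / d_tps_gy

targets = {"target_A": (12.1, 12.0), "target_B": (18.9, 20.0)}
for name, (d_check, d_tps) in targets.items():
    diff = percent_dose_difference(d_check, d_tps)
    status = "review" if abs(diff) > 5.0 else "ok"
    print(f"{name}: {diff:+.1f}% ({status})")
```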
Signature-based store checking buffer
Sridharan, Vilas; Gurumurthi, Sudhanva
2015-06-02
A system and method for optimizing redundant output verification are provided. A hardware-based store fingerprint buffer receives multiple instances of output from multiple instances of computation. The store fingerprint buffer generates a signature from the content included in the multiple instances of output. When a barrier is reached, the store fingerprint buffer uses the signature to verify the content is error-free.
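A software analogue of the mechanism described in this abstract (the patent covers a hardware buffer, which is not reproduced here): each redundant instance folds its stores into a running fingerprint, and only the compact signatures are compared when the barrier is reached.

    # Software analogue of signature-based output checking.
    import hashlib

    class StoreFingerprint:
        def __init__(self):
            self._h = hashlib.sha256()

        def record(self, address, value):
            # Fold each store (address, value) into the running signature.
            self._h.update(f"{address}:{value};".encode())

        def signature(self):
            return self._h.hexdigest()

    a, b = StoreFingerprint(), StoreFingerprint()
    for addr, val in [(0x10, 42), (0x14, 7)]:
        a.record(addr, val)
        b.record(addr, val)                    # second redundant instance

    assert a.signature() == b.signature()      # barrier check: outputs agree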
31 CFR 1010.330 - Reports relating to currency in excess of $10,000 received in a trade or business.
Code of Federal Regulations, 2013 CFR
2013-07-01
... business use), but a $20,000 dump truck or a $20,000 factory machine is not. (8) Collectible. The term... identification card, or other official document evidencing nationality or residence. Verification of the identity... identification when cashing or accepting checks (for example, a driver's license or a credit card). In addition...
31 CFR 1010.330 - Reports relating to currency in excess of $10,000 received in a trade or business.
Code of Federal Regulations, 2014 CFR
2014-07-01
... business use), but a $20,000 dump truck or a $20,000 factory machine is not. (8) Collectible. The term... identification card, or other official document evidencing nationality or residence. Verification of the identity... identification when cashing or accepting checks (for example, a driver's license or a credit card). In addition...
40 CFR 1065.341 - CVS and batch sampler verification (propane check).
Code of Federal Regulations, 2010 CFR
2010-07-01
... contamination. Otherwise, zero, span, and verify contamination of the HC sampling system, as follows: (1) Select... flow rates. (2) Zero the HC analyzer using zero air introduced at the analyzer port. (3) Span the HC analyzer using C3H8 span gas introduced at the analyzer port. (4) Overflow zero air at the HC probe inlet...
40 CFR 1065.341 - CVS and batch sampler verification (propane check).
Code of Federal Regulations, 2011 CFR
2011-07-01
... contamination. Otherwise, zero, span, and verify contamination of the HC sampling system, as follows: (1) Select... flow rates. (2) Zero the HC analyzer using zero air introduced at the analyzer port. (3) Span the HC analyzer using C3H8 span gas introduced at the analyzer port. (4) Overflow zero air at the HC probe inlet...
Adherence to balance tolerance limits at the Upper Mississippi Science Center, La Crosse, Wisconsin.
Myers, C.T.; Kennedy, D.M.
1998-01-01
Verification of balance accuracy entails applying a series of standard masses to a balance prior to use and recording the measured values. The recorded values for each standard should have lower and upper weight limits or tolerances that are accepted as verification of balance accuracy under normal operating conditions. Balance logbooks for seven analytical balances at the Upper Mississippi Science Center were checked over a 3.5-year period to determine if the recorded weights were within the established tolerance limits. A total of 9435 measurements were checked. There were 14 instances in which the balance malfunctioned and operators recorded a rationale in the balance logbook. Sixty-three recording errors were found. Twenty-eight operators were responsible for two types of recording errors: Measurements of weights were recorded outside of the tolerance limit but not acknowledged as an error by the operator (n = 40); and measurements were recorded with the wrong number of decimal places (n = 23). The adherence rate for following tolerance limits was 99.3%. To ensure the continued adherence to tolerance limits, the quality-assurance unit revised standard operating procedures to require more frequent review of balance logbooks.
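A minimal sketch of the kind of logbook audit described above, assuming each record pairs a standard mass with its measured weight and each standard has lower and upper tolerance limits; the limits and entries are illustrative, not the Science Center's data.

    # Flag recorded weights outside the tolerance limits and compute the adherence rate.
    tolerances = {"100g": (99.9990, 100.0010)}   # grams, per standard mass

    records = [("100g", 100.0004), ("100g", 100.0013)]

    out_of_tolerance = [
        (std, w) for std, w in records
        if not (tolerances[std][0] <= w <= tolerances[std][1])
    ]
    adherence = 1.0 - len(out_of_tolerance) / len(records)
    print(out_of_tolerance, f"adherence = {adherence:.1%}")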
DYNA3D/ParaDyn Regression Test Suite Inventory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Jerry I.
2016-09-01
The following table constitutes an initial assessment of feature coverage across the regression test suite used for DYNA3D and ParaDyn. It documents the regression test suite at the time of preliminary release 16.1 in September 2016. The columns of the table represent groupings of functionalities, e.g., material models. Each problem in the test suite is represented by a row in the table. All features exercised by the problem are denoted by a check mark (√) in the corresponding column. The definition of “feature” has not been subdivided to its smallest unit of user input, e.g., algorithmic parameters specific to a particular type of contact surface. This represents a judgment to provide code developers and users a reasonable impression of feature coverage without expanding the width of the table by several multiples. All regression testing is run in parallel, typically with eight processors, except problems involving features only available in serial mode. Many are strictly regression tests acting as a check that the codes continue to produce adequately repeatable results as development unfolds, compilers change, and platforms are replaced. A subset of the tests represents true verification problems that have been checked against analytical or other benchmark solutions. Users are welcome to submit documented problems for inclusion in the test suite, especially if they are heavily exercising, and dependent upon, features that are currently underrepresented.
Simulation and Verification of Synchronous Set Relations in Rewriting Logic
NASA Technical Reports Server (NTRS)
Rocha, Camilo; Munoz, Cesar A.
2011-01-01
This paper presents a mathematical foundation and a rewriting logic infrastructure for the execution and property verification of synchronous set relations. The mathematical foundation is given in the language of abstract set relations. The infrastructure consists of an order-sorted rewrite theory in Maude, a rewriting logic system, that enables the synchronous execution of a set relation provided by the user. By using the infrastructure, existing algorithm verification techniques already available in Maude for traditional asynchronous rewriting, such as reachability analysis and model checking, are automatically available to synchronous set rewriting. The use of the infrastructure is illustrated with an executable operational semantics of a simple synchronous language and the verification of temporal properties of a synchronous system.
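For readers unfamiliar with the terminology, the following plain-Python sketch illustrates one reading of a synchronous step of a set relation, in which every element of the current set is rewritten in the same step; it is only an illustration, not the Maude infrastructure described in the abstract.

    # One synchronous step: all elements of the current set are rewritten together.
    def synchronous_step(state, relation):
        """state: a set; relation: element -> set of successors."""
        next_state = set()
        for element in state:
            next_state |= relation(element)
        return frozenset(next_state)

    def relation(n):
        # Example relation on integers, chosen only for illustration.
        return {n + 1} if n < 3 else {n}

    state = frozenset({0, 1})
    for _ in range(4):
        state = synchronous_step(state, relation)
    print(sorted(state))   # all elements advanced together each step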
Verifying Architectural Design Rules of the Flight Software Product Line
NASA Technical Reports Server (NTRS)
Ganesan, Dharmalingam; Lindvall, Mikael; Ackermann, Chris; McComas, David; Bartholomew, Maureen
2009-01-01
This paper presents experiences of verifying architectural design rules of the NASA Core Flight Software (CFS) product line implementation. The goal of the verification is to check whether the implementation is consistent with the CFS architectural rules derived from the developer's guide. The results indicate that consistency checking helps to a) identify architecturally significant deviations that eluded code reviews, b) clarify the design rules to the team, and c) assess the overall implementation quality. Furthermore, it helps connect business goals to architectural principles and to the implementation. This paper is the first step in the definition of a method for analyzing and evaluating product line implementations from an architecture-centric perspective.
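As a generic illustration of this kind of consistency checking (the actual CFS rules and extraction tooling are not reproduced here), the sketch below compares observed module dependencies against documented allowed dependencies; the module names are hypothetical.

    # Documented design rules: which modules may depend on which.
    allowed = {
        "app": {"library", "core_services"},
        "library": {"core_services"},
        "core_services": set(),
    }

    observed = [("app", "library"), ("library", "app")]   # dependencies mined from the code

    violations = [(src, dst) for src, dst in observed
                  if dst not in allowed.get(src, set())]
    print(violations)   # architecturally significant deviations to review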
A web-based system for supporting global land cover data production
NASA Astrophysics Data System (ADS)
Han, Gang; Chen, Jun; He, Chaoying; Li, Songnian; Wu, Hao; Liao, Anping; Peng, Shu
2015-05-01
The global land cover (GLC) data production and verification process is very complicated, time consuming, and labor intensive, requiring huge amounts of imagery and ancillary data and involving many people, often from different geographic locations. The efficient integration of various kinds of ancillary data and effective collaborative classification in large-area land cover mapping require advanced supporting tools. This paper presents the design and development of a web-based system for supporting 30-m resolution GLC data production by combining geospatial web services and Computer Supported Collaborative Work (CSCW) technology. Based on the analysis of the functional and non-functional requirements of GLC mapping, a three-tier system model is proposed with four major parts, i.e., multisource data resources, data and function services, interactive mapping, and production management. The prototyping and implementation of the system have been realised by a combination of Open Source Software (OSS) and commercially available off-the-shelf systems. This web-based system not only facilitates the integration of heterogeneous data and services required by GLC data production, but also provides online access, visualization, and analysis of the images, ancillary data, and interim 30-m global land-cover maps. The system further supports online collaborative quality check and verification workflows. It has been successfully applied to China's 30-m resolution GLC mapping project, and has significantly improved the efficiency of GLC data production and verification.
NASA Astrophysics Data System (ADS)
Krejsa, M.; Brozovsky, J.; Mikolasek, D.; Parenica, P.; Koubova, L.
2018-04-01
The paper is focused on the numerical modeling of welded steel bearing elements using the commercial software system ANSYS, which is based on the finite element method (FEM). It is important to check and compare the results of the FEM analysis with the results of a physical verification test, in which the real behavior of the bearing element can be observed. The results of the comparison can be used for calibration of the computational model. The article deals with the physical test of steel supporting elements, whose main purpose is to obtain the material, geometry, and strength characteristics of the fillet and butt welds, including the heat-affected zone in the base material of the welded steel bearing element. The pressure test was performed during the experiment, wherein the total load value and the corresponding deformation of the specimens under the load were monitored. The obtained data were used for the calibration of numerical models of the test samples and are necessary for further stress and strain analysis of steel supporting elements.
NASA Technical Reports Server (NTRS)
Lee, Jeh Won
1990-01-01
The objective is the theoretical analysis and the experimental verification of dynamics and control of a two-link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. The resulting equations of motion have a structure that is useful for reducing the number of terms calculated, checking correctness, or extending the model to higher order. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using singular value decomposition of the constraint Jacobian matrix. Elastic motion is expressed by the assumed mode method. Mode shape functions of each link are chosen using the load-interfaced component mode synthesis. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model.
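As an aside on the constraint-handling step mentioned above, the sketch below uses NumPy's SVD of a placeholder constraint Jacobian to obtain a null-space basis, so that generalized velocities built from it satisfy the constraints by construction; it illustrates the technique only and is not the manipulator model from the report.

    # SVD of the constraint Jacobian J yields an orthonormal basis for its null
    # space; admissible velocities can then be parameterized without numerically
    # sensitive matrix inversions. The Jacobian below is a placeholder.
    import numpy as np

    J = np.array([[1.0, 0.0, -1.0, 0.0],
                  [0.0, 1.0,  0.0, -1.0]])        # constraints: J @ qdot = 0

    U, s, Vt = np.linalg.svd(J)
    null_basis = Vt[len(s):].T                    # columns span the null space of J

    # Any qdot = null_basis @ z automatically satisfies the constraints.
    z = np.array([0.3, -0.2])
    qdot = null_basis @ z
    print(np.allclose(J @ qdot, 0.0))             # True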
NASA Technical Reports Server (NTRS)
Miner, Paul S.
1993-01-01
A critical function in a fault-tolerant computer architecture is the synchronization of the redundant computing elements. The synchronization algorithm must include safeguards to ensure that failed components do not corrupt the behavior of good clocks. Reasoning about fault-tolerant clock synchronization is difficult because of the possibility of subtle interactions involving failed components. Therefore, mechanical proof systems are used to ensure that the verification of the synchronization system is correct. In 1987, Schneider presented a general proof of correctness for several fault-tolerant clock synchronization algorithms. Subsequently, Shankar verified Schneider's proof by using the mechanical proof system EHDM. This proof ensures that any system satisfying its underlying assumptions will provide Byzantine fault-tolerant clock synchronization. The utility of Shankar's mechanization of Schneider's theory for the verification of clock synchronization systems is explored. Some limitations of Shankar's mechanically verified theory were encountered. With minor modifications to the theory, a mechanically checked proof is provided that removes these limitations. The revised theory also allows for proven recovery from transient faults. Use of the revised theory is illustrated with the verification of an abstract design of a clock synchronization system.
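For intuition about the kind of convergence functions Schneider's schema covers, here is a hedged sketch of a fault-tolerant averaging function (discard the f largest and f smallest readings, then average the rest); it is illustrative only and is not the EHDM-verified theory or any specific verified algorithm.

    # With at most f faulty clocks out of n (n > 2f), trimming the extremes keeps
    # values injected by failed components from dragging the correction arbitrarily.
    def fault_tolerant_average(readings, f):
        assert len(readings) > 2 * f, "need n > 2f readings"
        trimmed = sorted(readings)[f:len(readings) - f]
        return sum(trimmed) / len(trimmed)

    # Example: one Byzantine clock reports a wild value; it is discarded.
    print(fault_tolerant_average([10.01, 9.99, 10.02, 55.0], f=1))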
A Scala DSL for RETE-Based Runtime Verification
NASA Technical Reports Server (NTRS)
Havelund, Klaus
2013-01-01
Runtime verification (RV) consists in part of checking execution traces against formalized specifications. Several systems have emerged, most of which support specification notations based on state machines, regular expressions, temporal logic, or grammars. The field of Artificial Intelligence (AI) has for an even longer period of time studied rule-based production systems, which at a closer look appear to be relevant for RV, although seemingly focused on slightly different application domains, such as for example business processes and expert systems. The core algorithm in many of these systems is the Rete algorithm. We have implemented a Rete-based runtime verification system, named LogFire (originally intended for offline log analysis but also applicable to online analysis), as an internal DSL in the Scala programming language, using Scala's support for defining DSLs. This combination appears attractive from a practical point of view. Our contribution is in part conceptual in arguing that such rule-based frameworks originating from AI may be suited for RV.
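To make the rule-based idea concrete, the following is a deliberately tiny Python illustration of checking a trace against a rule that remembers facts from earlier events; it is neither LogFire's Scala DSL nor a real Rete network, only the flavor of rule-based runtime verification.

    # Rules react to incoming events and to facts remembered from earlier events.
    facts = set()
    errors = []

    def on_event(event):
        name, data = event
        if name == "acquire":
            facts.add(("locked", data))
        elif name == "release":
            if ("locked", data) not in facts:
                errors.append(f"release of {data} without acquire")
            facts.discard(("locked", data))

    trace = [("acquire", "r1"), ("release", "r1"), ("release", "r2")]
    for e in trace:
        on_event(e)
    print(errors)   # ['release of r2 without acquire']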
Double patterning from design enablement to verification
NASA Astrophysics Data System (ADS)
Abercrombie, David; Lacour, Pat; El-Sewefy, Omar; Volkov, Alex; Levine, Evgueni; Arb, Kellen; Reid, Chris; Li, Qiao; Ghosh, Pradiptya
2011-11-01
Litho-etch-litho-etch (LELE) is the double patterning (DP) technology of choice for 20 nm contact, via, and lower metal layers. We discuss the unique design and process characteristics of LELE DP, the challenges they present, and various solutions. ∘ We examine DP design methodologies, current DP conflict feedback mechanisms, and how they can help designers identify and resolve conflicts. ∘ In place and route (P&R), the placement engine must now be aware of the assumptions made during IP cell design, and use placement directives provided by the library designer. We examine the new effects DP introduces in detail routing, discuss how multiple choices of LELE and the cut allowances can lead to different solutions, and describe new capabilities required by detail routers and P&R engines. ∘ We discuss why LELE DP cuts and overlaps are critical to optical process correction (OPC), and how a hybrid mechanism of rule and model-based overlap generation can provide a fast and effective solution. ∘ With two litho-etch steps, mask misalignment and image rounding are now verification considerations. We present enhancements to the OPCVerify engine that check for pinching and bridging in the presence of DP overlay errors and acute angles.
Development tests for the 2.5 megawatt Mod-2 wind turbine generator
NASA Technical Reports Server (NTRS)
Andrews, J. S.; Baskin, J. M.
1982-01-01
The 2.5 megawatt MOD-2 wind turbine generator test program is discussed. The development of the 2.5 megawatt MOD-2 wind turbine generator included an extensive program of testing which encompassed verification of analytical procedures, component development, and integrated system verification. The test program was to assure achievement of the thirty-year design operational life of the wind turbine system as well as to minimize costly design modifications which would otherwise have been required during on-site system testing. Computer codes were modified, the fatigue life of structural and dynamic components was verified, mechanical and electrical components and subsystems were functionally checked and modified where necessary to meet system specifications, and measured dynamic responses of coupled systems confirmed analytical predictions.
31 CFR 1010.330 - Reports relating to currency in excess of $10,000 received in a trade or business.
Code of Federal Regulations, 2011 CFR
2011-07-01
... automobile is a consumer durable (whether or not it is sold for business use), but a $20,000 dump truck or a... identification card, or other official document evidencing nationality or residence. Verification of the identity... identification when cashing or accepting checks (for example, a driver's license or a credit card). In addition...
31 CFR 1010.330 - Reports relating to currency in excess of $10,000 received in a trade or business.
Code of Federal Regulations, 2012 CFR
2012-07-01
... automobile is a consumer durable (whether or not it is sold for business use), but a $20,000 dump truck or a... identification card, or other official document evidencing nationality or residence. Verification of the identity... identification when cashing or accepting checks (for example, a driver's license or a credit card). In addition...
Evaluation of Healthy Beginnings; Fort Carson’s Pregnancy Physical Training Program
1999-04-20
effects that medication may have on nursing infants, exercise as a non-pharmacological intervention for post-partum depression and anxiety may be...record. Verification of this information was made by cross-checking a community nursing form titled “Pregnancy Surveillance Record”, which is...Standard Form (SF) 539, “Abbreviated Medical Record”, which documents complications during pregnancy. Listed under the Diagnosis and Procedures
40 CFR 1065.341 - CVS, PFD, and batch sampler verification (propane check).
Code of Federal Regulations, 2014 CFR
2014-07-01
... may use the HC contamination procedure in § 1065.520(f) to verify HC contamination. Otherwise, zero... range that can measure the C3H8 concentration expected for the CVS and C3H8 flow rates. (2) Zero the HC analyzer using zero air introduced at the analyzer port. (3) Span the HC analyzer using C3H8 span gas...
40 CFR 1065.341 - CVS and batch sampler verification (propane check).
Code of Federal Regulations, 2012 CFR
2012-07-01
... may use the HC contamination procedure in § 1065.520(g) to verify HC contamination. Otherwise, zero... range that can measure the C3H8 concentration expected for the CVS and C3H8 flow rates. (2) Zero the HC analyzer using zero air introduced at the analyzer port. (3) Span the HC analyzer using C3H8 span gas...
40 CFR 1065.341 - CVS and batch sampler verification (propane check).
Code of Federal Regulations, 2013 CFR
2013-07-01
... may use the HC contamination procedure in § 1065.520(g) to verify HC contamination. Otherwise, zero... range that can measure the C3H8 concentration expected for the CVS and C3H8 flow rates. (2) Zero the HC analyzer using zero air introduced at the analyzer port. (3) Span the HC analyzer using C3H8 span gas...
AMFESYS: Modelling and diagnosis functions for operations support
NASA Technical Reports Server (NTRS)
Wheadon, J.
1993-01-01
Packetized telemetry, combined with low station coverage for close-earth satellites, may introduce new problems in presenting to the operator a clear picture of what the spacecraft is doing. A recent ESOC study has gone some way to show, by means of a practical demonstration, how the use of subsystem models combined with artificial intelligence techniques, within a real-time spacecraft control system (SCS), can help to overcome these problems. A spin-off from using these techniques can be an improvement in the reliability of the telemetry (TM) limit-checking function, as well as the telecommand verification function, of the SCS. The problem and how it was addressed are described, including an overview of the 'AMF Expert System' prototype, and further work needed to prove the concept is proposed. The Automatic Mirror Furnace (AMF) is part of the payload of the European Retrievable Carrier (EURECA) spacecraft, which was launched in July 1992.
Strategy optimization for mask rule check in wafer fab
NASA Astrophysics Data System (ADS)
Yang, Chuen Huei; Lin, Shaina; Lin, Roger; Wang, Alice; Lee, Rachel; Deng, Erwin
2015-07-01
Photolithography processes for wafer production are becoming more and more sophisticated as Moore's law advances. Therefore, for a wafer fab, consolidated and close cooperation with the mask house is key to silicon wafer success. However, generally speaking, it is not easy to maintain such a partnership, because substantial engineering effort and frequent communication are indispensable. This loose coupling is most evident in mask rule check (MRC). Mask houses perform their own MRC at the job deck stage, but that checking only identifies mask process limitations, including writing, etching, inspection, metrology, etc. No further checking for wafer-process-related mask data errors is performed after the data files of the whole mask are composed in the mask house. Many potential data errors remain even after post-OPC verification has been done for the main circuits. The errors of interest here are those that only occur when the main circuits are combined with frame and dummy patterns to form the whole reticle. Therefore, strategy optimization is ongoing at UMC to evaluate MRC, especially for wafer-fab-concerned errors. The prerequisite is that mask delivery cycle time is not impacted even with this extra checking. A full-mask check based on the job deck in GDS or OASIS format is necessary in order to secure an acceptable run time. The format of the summarized error report generated by this check is also crucial, because a user-friendly interface shortens engineers' judgment time to release the mask for writing. This paper surveys the key factors of MRC in the wafer fab.
Pella, A; Riboldi, M; Tagaste, B; Bianculli, D; Desplanques, M; Fontana, G; Cerveri, P; Seregni, M; Fattori, G; Orecchia, R; Baroni, G
2014-08-01
In an increasing number of clinical indications, radiotherapy with accelerated particles shows relevant advantages when compared with high energy X-ray irradiation. However, due to the finite range of ions, particle therapy can be severely compromised by setup errors and geometric uncertainties. The purpose of this work is to describe the commissioning and the design of the quality assurance procedures for patient positioning and setup verification systems at the Italian National Center for Oncological Hadrontherapy (CNAO). The accuracy of the systems installed in CNAO and devoted to patient positioning and setup verification has been assessed using a laser tracking device. The accuracy of calibration and image-based setup verification relying on the in-room X-ray imaging system was also quantified. Quality assurance tests to check the integration among all patient setup systems were designed, and records of daily QA tests since the start of clinical operation (2011) are presented. The overall accuracy of the patient positioning system and the patient verification system motion was proved to be below 0.5 mm under all the examined conditions, with median values below the 0.3 mm threshold. Image-based registration in phantom studies exhibited sub-millimetric accuracy in setup verification at both cranial and extra-cranial sites. The calibration residuals of the OTS were found consistent with the expectations, with peak values below 0.3 mm. Quality assurance tests, performed daily before clinical operation, confirm adequate integration and sub-millimetric setup accuracy. Robotic patient positioning was successfully integrated with optical tracking and stereoscopic X-ray verification for patient setup in particle therapy. Sub-millimetric setup accuracy was achieved and consistently verified in daily clinical operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Y; Yuan, J; Geis, P
2016-06-15
Purpose: To verify the similarity of the dosimetric characteristics between two Elekta linear accelerators (linacs) in order to treat patients interchangeably on these two machines without re-planning. Methods: To investigate the viability of matching the 6 MV flattened beam on an existing linac (Elekta Synergy with Agility head) with a recently installed new linac (Elekta Versa HD), percent depth doses (PDDs), flatness, symmetry, and output factors were compared for both machines. To validate the beam matching among machines, we carried out two approaches to cross-check the dosimetric equivalence: 1) the prior treatment plans were re-computed based on the newly built Versa HD treatment planning system (TPS) model without changing the beam control points; 2) the same plans were delivered on both machines and the radiation dose measurements on a MapCheck2 were compared with TPS calculations. Three VMAT plans (head and neck, lung, and prostate) were used in the study. Results: The difference between the PDDs for the 10×10 cm² field at all depths was less than 0.8%. The difference of flatness and symmetry for the 30×30 cm² field was less than 0.8%, and the measured output factors varied by less than 1% for each field size ranging from 2×2 cm² to 40×40 cm². For the same plans, the maximum difference of the two calculated dose distributions is 2% of the prescription. For the QA measurements, the gamma index passing rates were above 99% for 3%/3 mm criteria with a 10% threshold for all three clinical plans. Conclusion: A beam modality matching between two Elekta linacs is demonstrated with a cross-checking approach.
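A minimal sketch of the point-by-point beam-matching comparison described above, assuming tabulated percent depth dose values from the two linacs and a 1% tolerance; the depths and values are placeholders, not measured data.

    # Compare tabulated values from the two linacs as percent differences
    # against a tolerance. All numbers below are placeholders.
    tolerance_pct = 1.0

    pdd_synergy = {50: 86.9, 100: 66.8, 200: 38.2}    # depth (mm): PDD (%)
    pdd_versa   = {50: 87.1, 100: 66.5, 200: 38.4}

    for depth in pdd_synergy:
        diff = 100.0 * (pdd_versa[depth] - pdd_synergy[depth]) / pdd_synergy[depth]
        status = "OK" if abs(diff) <= tolerance_pct else "FAIL"
        print(depth, f"{diff:+.2f}%", status)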
The Mars Science Laboratory Organic Check Material
NASA Technical Reports Server (NTRS)
Conrad, Pamela G.; Eigenbrode, J. E.; Mogensen, C. T.; VonderHeydt, M. O.; Glavin, D. P.; Mahaffy, P. M.; Johnson, J. A.
2011-01-01
The Organic Check Material (OCM) has been developed for use on the Mars Science Laboratory mission to serve as a sample standard for verification of organic cleanliness and characterization of potential sample alteration as a function of the sample acquisition and portioning process on the Curiosity rover. OCM samples will be acquired using the same procedures for drilling, portioning and delivery as are used to study martian samples with the Sample Analysis at Mars (SAM) instrument suite during MSL surface operations. Because the SAM suite is highly sensitive to organic molecules, the mission can better verify the cleanliness of Curiosity's sample acquisition hardware if a known material can be processed through SAM and compared with the results obtained from martian samples.
Jinno, Shunta; Tachibana, Hidenobu; Moriya, Shunsuke; Mizuno, Norifumi; Takahashi, Ryo; Kamima, Tatsuya; Ishibashi, Satoru; Sato, Masanori
2018-05-21
In inhomogeneous media, there is often a large systematic difference in the dose between the conventional Clarkson algorithm (C-Clarkson) for independent calculation verification and the superposition-based algorithms of treatment planning systems (TPSs). These treatment site-dependent differences increase the complexity of the radiotherapy planning secondary check. We developed a simple and effective method of heterogeneity correction integrated with the Clarkson algorithm (L-Clarkson) to account for the effects of heterogeneity in the lateral dimension, and performed a multi-institutional study to evaluate the effectiveness of the method. In the method, a 2D image reconstructed from computed tomography (CT) images is divided according to lines extending from the reference point to the edge of the multileaf collimator (MLC) or jaw collimator for each pie sector, and the radiological path length (RPL) of each line is calculated on the 2D image to obtain a tissue maximum ratio and phantom scatter factor, allowing the dose to be calculated. A total of 261 plans (1237 beams) for conventional breast and lung treatments and lung stereotactic body radiotherapy were collected from four institutions. Disagreements in dose between the on-site TPSs and a verification program using the C-Clarkson and L-Clarkson algorithms were compared. Systematic differences with the L-Clarkson method were within 1% for all sites, while the C-Clarkson method resulted in systematic differences of 1-5%. The L-Clarkson method showed smaller variations. This heterogeneity correction integrated with the Clarkson algorithm would provide a simple evaluation within the range of -5% to +5% for a radiotherapy plan secondary check.
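A minimal sketch of the lateral radiological path length idea described above, assuming a line of relative electron density samples taken from the CT image between the reference point and the collimator edge; the samples and step size are placeholders, not the authors' implementation.

    # Along each line from the reference point toward the field edge, the
    # radiological path length is the geometric step length weighted by the
    # relative electron density, and it replaces the geometric depth in the
    # TMR / phantom scatter factor lookup.
    def radiological_path_length(densities, step_mm):
        return sum(d * step_mm for d in densities)

    densities_along_line = [1.0, 1.0, 0.3, 0.3, 0.3, 1.0]   # tissue, lung, tissue
    print(radiological_path_length(densities_along_line, step_mm=5.0))  # 19.5 mm water-equivalent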
Compendium of Arms Control Verification Proposals.
1982-03-01
proposals. This appears to be the case for virtually all kinds of prospective arms control topics from general and complete disarmament to control of specific... virtually anywhere. Unrestricted access would be particularly necessary in the case of states which previously had nuclear weapons. Practically speaking...been transferred out of the area, it would be harder to check on remaining stocks without some intrusive inspection and it would be virtually impossible
Zero-point angular momentum of supersymmetric Penning trap
NASA Astrophysics Data System (ADS)
Zhang, Jian-zu; Xu, Qiang
2000-10-01
The quantum behavior of the supersymmetric Penning trap, especially the superpartner of its angular momentum, is investigated in the formulation of the multi-dimensional semiunitary transformation of supersymmetric quantum mechanics. In the limiting case of vanishing kinetic energy it is found that the lowest angular momentum is 3ℏ/2, which provides a possibility of directly checking the idea of supersymmetric quantum mechanics and thus suggests a possible experimental verification of this prediction.
A hydrochemical data base for the Hanford Site, Washington
DOE Office of Scientific and Technical Information (OSTI.GOV)
Early, T.O.; Mitchell, M.D.; Spice, G.D.
1986-05-01
This data package contains a revision of the Site Hydrochemical Data Base for water samples associated with the Basalt Waste Isolation Project (BWIP). In addition to the detailed chemical analyses, a summary description of the data base format, detailed descriptions of verification procedures used to check data entries, and detailed descriptions of validation procedures used to evaluate data quality are included. 32 refs., 21 figs., 3 tabs.
Optical proximity correction for anamorphic extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas
2017-10-01
The change from isomorphic to anamorphic optics in high numerical aperture (NA) extreme ultraviolet (EUV) scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated, and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking (MRC). OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs which are more tolerant to mask errors.
Combining Task Execution and Background Knowledge for the Verification of Medical Guidelines
NASA Astrophysics Data System (ADS)
Hommersom, Arjen; Groot, Perry; Lucas, Peter; Balser, Michael; Schmitt, Jonathan
The use of a medical guideline can be seen as the execution of computational tasks, sequentially or in parallel, in the face of patient data. It has been shown that many such guidelines can be represented as a 'network of tasks', i.e., as a number of steps that have a specific function or goal. To investigate the quality of such guidelines we propose a formalization of criteria for good-practice medicine with which a guideline should comply. We use this theory in conjunction with medical background knowledge to verify the quality of a guideline dealing with diabetes mellitus type 2 using the interactive theorem prover KIV. Verification using task execution and background knowledge is a novel approach to quality checking of medical guidelines.
Experimental verification of multipartite entanglement in quantum networks
McCutcheon, W.; Pappa, A.; Bell, B. A.; McMillan, A.; Chailloux, A.; Lawson, T.; Mafu, M.; Markham, D.; Diamanti, E.; Kerenidis, I.; Rarity, J. G.; Tame, M. S.
2016-01-01
Multipartite entangled states are a fundamental resource for a wide range of quantum information processing tasks. In particular, in quantum networks, it is essential for the parties involved to be able to verify if entanglement is present before they carry out a given distributed task. Here we design and experimentally demonstrate a protocol that allows any party in a network to check if a source is distributing a genuinely multipartite entangled state, even in the presence of untrusted parties. The protocol remains secure against dishonest behaviour of the source and other parties, including the use of system imperfections to their advantage. We demonstrate the verification protocol in a three- and four-party setting using polarization-entangled photons, highlighting its potential for realistic photonic quantum communication and networking applications. PMID:27827361
Evaluation of the efficiency and fault density of software generated by code generators
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1993-01-01
Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. The software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking. Some check only the finished product, while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, in-house manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is warranted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, S; Gulam, M; Song, K
2014-06-01
Purpose: The Varian EDGE machine is a new stereotactic platform, combining Calypso and VisionRT localization systems with a stereotactic linac. The system includes TrueBeam DeveloperMode, making possible the use of XML scripting for automation of linac-related tasks. This study details the use of DeveloperMode to automate commissioning tasks for Varian EDGE, thereby improving efficiency and measurement consistency. Methods: XML scripting was used for various commissioning tasks, including couch model verification, beam scanning, and isocenter verification. For couch measurements, point measurements were acquired for several field sizes (2×2, 4×4, 10×10 cm²) at 42 gantry angles for two couch models. Measurements were acquired with variations in couch position (rails in/out, couch shifted in each of the motion axes) compared to treatment planning system (TPS)-calculated values, which were logged automatically through advanced planning interface (API) scripting functionality. For beam scanning, XML scripts were used to create custom MLC apertures. For isocenter verification, XML scripts were used to automate various Winston-Lutz-type tests. Results: For couch measurements, the time required for each set of angles was approximately 9 minutes. Without scripting, each set required approximately 12 minutes. Automated measurements required only one physicist, while manual measurements required at least two physicists to handle linac positions/beams and data recording. MLC apertures were generated outside of the TPS, and with the .xml file format, double-checking without use of the TPS/operator console was possible. Similar time efficiency gains were found for isocenter verification measurements. Conclusion: The use of XML scripting in TrueBeam DeveloperMode allows for efficient and accurate data acquisition during commissioning. The efficiency improvement is most pronounced for iterative measurements, exemplified by the time savings for couch modeling measurements (approximately 10 hours). The scripting also allowed for creation of the files in advance without requiring access to the TPS. The API scripting functionality enabled efficient creation/mining of TPS data. Finally, automation reduces the potential for human error in entering linac values at the machine console, and the script provides a log of measurements acquired for each session. This research was supported in part by a grant from Varian Medical Systems, Palo Alto, CA.
Antonelli, Giorgia; Padoan, Andrea; Aita, Ada; Sciacovelli, Laura; Plebani, Mario
2017-08-28
Background: The International Standard ISO 15189 is recognized as a valuable guide in ensuring high-quality clinical laboratory services and promoting the harmonization of accreditation programmes in laboratory medicine. Examination procedures must be verified in order to guarantee that their performance characteristics are congruent with the intended scope of the test. The aim of the present study was to propose a practice model for implementing procedures employed for the verification of validated examination procedures already used for at least 2 years in our laboratory, in agreement with the ISO 15189 requirement in Section 5.5.1.2. Methods: In order to identify the operative procedure to be used, approved documents were identified, together with the definition of the performance characteristics to be evaluated for the different methods; the examination procedures used in the laboratory were analyzed and checked against the performance specifications reported by manufacturers. Then, operative flow charts were identified to compare the laboratory performance characteristics with those declared by manufacturers. Results: The choice of performance characteristics for verification was based on approved documents used as guidance and on the specific purpose of the tests undertaken, considering: imprecision and trueness for quantitative methods; diagnostic accuracy for qualitative methods; and imprecision together with diagnostic accuracy for semi-quantitative methods. Conclusions: The described approach, balancing technological possibilities, risks, and costs and assuring compliance with the fundamental component of result accuracy, appears promising as an easily applicable and flexible procedure helping laboratories to comply with the ISO 15189 requirements.
The Multiple Doppler Radar Workshop, November 1979.
NASA Astrophysics Data System (ADS)
Carbone, R. E.; Harris, F. I.; Hildebrand, P. H.; Kropfli, R. A.; Miller, L. J.; Moninger, W.; Strauch, R. G.; Doviak, R. J.; Johnson, K. W.; Nelson, S. P.; Ray, P. S.; Gilet, M.
1980-10-01
The findings of the Multiple Doppler Radar Workshop are summarized by a series of six papers. Part I of this series briefly reviews the history of multiple Doppler experimentation, fundamental concepts of Doppler signal theory, and organization and objectives of the Workshop. Invited presentations by dynamicists and cloud physicists are also summarized.Experimental design and procedures (Part II) are shown to be of critical importance. Well-defined and limited experimental objectives are necessary in view of technological limitations. Specified radar scanning procedures that balance temporal and spatial resolution considerations are discussed in detail. Improved siting for suppression of ground clutter as well as scanning procedures to minimize errors at echo boundaries are discussed. The need for accelerated research using numerically simulated proxy data sets is emphasized.New technology to eliminate various sampling limitations is cited as an eventual solution to many current problems in Part III. Ground clutter contamination may be curtailed by means of full spectral processing, digital filters in real time, and/or variable pulse repetition frequency. Range and velocity ambiguities also may be minimized by various pulsing options as well as random phase transmission. Sidelobe contamination can be reduced through improvements in radomes, illumination patterns, and antenna feed types. Radar volume-scan time can be sharply reduced by means of wideband transmission, phased array antennas, multiple beam antennas, and frequency agility.Part IV deals with synthesis of data from several radars in the context of scientific requirements in cumulus clouds, widespread precipitation, and severe convective storms. The important temporal and spatial scales are examined together with the accuracy required for vertical air motion in each phenomenon. Factors that introduce errors in the vertical velocity field are identified and synthesis techniques are discussed separately for the dual Doppler and multiple Doppler cases. Various filters and techniques, including statistical and variational approaches, are mentioned. Emphasis is placed on the importance of experiment design and procedures, technological improvements, incorporation of all information from supporting sensors, and analysis priority for physically simple cases. Integrated reliability is proposed as an objective tool for radar siting.Verification of multiple Doppler-derived vertical velocity is discussed in Part V. Three categories of verification are defined as direct, deductive, and theoretical/numerical. Direct verification consists of zenith-pointing radar measurements (from either airborne or ground-based systems), air motion sensing aircraft, instrumented towers, and tracking of radar chaff. Deductive sources include mesonetworks, aircraft (thermodynamic and microphysical) measurements, satellite observations, radar reflectivity, multiple Doppler consistency, and atmospheric soundings. Theoretical/numerical sources of verification include proxy data simulation, momentum checking, and numerical cloud models. New technology, principally in the form of wide bandwidth radars, is seen as a development that may reduce the need for extensive verification of multiple Doppler-derived vertical air motions. Airborne Doppler radar is perceived as the single most important source of verification within the bounds of existing technology.Nine stages of data processing and display are identified in Part VI. 
The stages are identified as field checks, archival, selection, editing, coordinate transformation, synthesis of Cartesian fields, filtering, display, and physical analysis. Display of data is considered to be a problem critical to assimilation of data at all stages. Interactive computing systems and software are concluded to be very important, particularly for the editing stage. Three- and 4-dimensional displays are considered essential for data assimilation, particularly at the physical analysis stage. The concept of common data tape formats is approved both for data in radar spherical space as well as for synthesized Cartesian output.
Multi-centre audit of VMAT planning and pre-treatment verification.
Jurado-Bruggeman, Diego; Hernández, Victor; Sáez, Jordi; Navarro, David; Pino, Francisco; Martínez, Tatiana; Alayrach, Maria-Elena; Ailleres, Norbert; Melero, Alejandro; Jornet, Núria
2017-08-01
We performed a multi-centre intercomparison of VMAT dose planning and pre-treatment verification. The aims were to analyse the dose plans in terms of dosimetric quality and deliverability, and to validate whether in-house pre-treatment verification results agreed with those of an external audit. The nine participating centres encompassed different machines, equipment, and methodologies. Two mock cases (prostate and head and neck) were planned using one and two arcs. A plan quality index was defined to compare the plans, and different complexity indices were calculated to check their deliverability. We compared gamma index pass rates using the centre's equipment and methodology to those of an external audit (global 3D gamma, absolute dose differences, 10% of maximum dose threshold). Log-file analysis was performed to look for delivery errors. All centres fulfilled the dosimetric goals but plan quality and delivery complexity were heterogeneous and uncorrelated, depending on the manufacturer and the planner's methodology. Pre-treatment verification results were within tolerance in all cases for the gamma 3%-3mm evaluation. Nevertheless, differences between the external audit and in-house measurements arose due to different equipment or methodology, especially for 2%-2mm criteria with differences up to 20%. No correlation was found between complexity indices and verification results amongst centres. All plans fulfilled dosimetric constraints, but plan quality and complexity did not correlate and were strongly dependent on the planner and the vendor. In-house measurements cannot completely replace external audits for credentialing. Copyright © 2017 Elsevier B.V. All rights reserved.
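The pass rates above come from gamma-index comparisons of measured and calculated dose. As a point of reference, the sketch below is a much-simplified 1D global gamma evaluation with 3%/3 mm criteria and a 10%-of-maximum dose threshold; clinical systems operate on 2D/3D dose grids, and the profiles here are placeholders.

    # Simplified 1D global gamma pass rate: for each evaluated point, take the
    # minimum over the reference profile of sqrt((dist/DTA)^2 + (dose_diff/tol)^2)
    # and count the point as passing if that minimum is <= 1.
    import numpy as np

    def gamma_pass_rate(measured, calculated, spacing_mm,
                        dose_pct=3.0, dta_mm=3.0, low_dose_cutoff=0.10):
        dose_tol = dose_pct / 100.0 * calculated.max()        # global dose criterion
        x = np.arange(len(calculated)) * spacing_mm
        passes = []
        for i, d_m in enumerate(measured):
            if calculated[i] < low_dose_cutoff * calculated.max():
                continue                                      # low-dose threshold
            dist = (x - x[i]) / dta_mm
            dose = (calculated - d_m) / dose_tol
            gamma_i = np.sqrt(dist**2 + dose**2).min()
            passes.append(gamma_i <= 1.0)
        return 100.0 * np.mean(passes)

    measured   = np.array([1.00, 1.98, 3.05, 2.00, 1.02])
    calculated = np.array([1.00, 2.00, 3.00, 2.00, 1.00])
    print(gamma_pass_rate(measured, calculated, spacing_mm=2.5))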
Leveraging pattern matching to solve SRAM verification challenges at advanced nodes
NASA Astrophysics Data System (ADS)
Kan, Huan; Huang, Lucas; Yang, Legender; Zou, Elaine; Wan, Qijian; Du, Chunshan; Hu, Xinyi; Liu, Zhengfang; Zhu, Yu; Zhang, Recoo; Huang, Elven; Muirhead, Jonathan
2018-03-01
Memory is a critical component in today's system-on-chip (SoC) designs. Static random-access memory (SRAM) blocks are assembled by combining intellectual property (IP) blocks that come from SRAM libraries developed and certified by the foundries for both functionality and a specific process node. Customers place these SRAM IP in their designs, adjusting as necessary to achieve DRC-clean results. However, any changes a customer makes to these SRAM IP during implementation, whether intentionally or in error, can impact yield and functionality. Physical verification of SRAM has always been a challenge, because these blocks usually contain smaller feature sizes and spacing constraints compared to traditional logic or other layout structures. At advanced nodes, critical dimension becomes smaller and smaller, until there is almost no opportunity to use optical proximity correction (OPC) and lithography to adjust the manufacturing process to mitigate the effects of any changes. The smaller process geometries, reduced supply voltages, increasing process variation, and manufacturing uncertainty mean accurate SRAM physical verification results are not only reaching new levels of difficulty, but also new levels of criticality for design success. In this paper, we explore the use of pattern matching to create an SRAM verification flow that provides both accurate, comprehensive coverage of the required checks and visual output to enable faster, more accurate error debugging. Our results indicate that pattern matching can enable foundries to improve SRAM manufacturing yield, while allowing designers to benefit from SRAM verification kits that can shorten the time to market.
NASA Astrophysics Data System (ADS)
Vielhauer, Claus; Croce Ferri, Lucilla
2003-06-01
Our paper addresses two issues of a biometric authentication algorithm for ID cardholders previously presented, namely the security of the embedded reference data and the aging process of the biometric data. We describe a protocol that allows two levels of verification, combining a biometric hash technique based on handwritten signature and hologram watermarks with cryptographic signatures in a verification infrastructure. This infrastructure consists of a Trusted Central Public Authority (TCPA), which serves numerous Enrollment Stations (ES) in a secure environment. Each individual performs an enrollment at an ES, which provides the TCPA with the full biometric reference data and a document hash. The TCPA then calculates the authentication record (AR) with the biometric hash, a validity timestamp, and a document hash provided by the ES. The AR is then signed with a cryptographic signature function, initialized with the TCPA's private key, and embedded in the ID card as a watermark. Authentication is performed at Verification Stations (VS), where the ID card is scanned and the signed AR is retrieved from the watermark. Due to the timestamp mechanism and a two-level biometric verification technique based on offline and online features, the AR can deal with the aging process of the biometric feature by forcing a re-enrollment of the user after expiry, making use of the ES infrastructure. We describe some attack scenarios and we illustrate the watermark embedding, retrieval, and dispute protocols, analyzing their requisites, advantages, and disadvantages in relation to security requirements.
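A hedged sketch of assembling and checking the authentication record (AR): the biometric hash, a validity timestamp, and the document hash are bound together and signed. The protocol in the abstract uses an asymmetric signature with the TCPA's private key; an HMAC over a shared demo key is used here only as a stand-in, and the field layout is an assumption.

    # Build and verify a signed authentication record; expiry forces re-enrollment.
    import hashlib, hmac, json, time

    TCPA_KEY = b"demo-key"                      # stand-in for the TCPA private key

    def make_ar(biometric_hash, document_hash, valid_until):
        record = json.dumps({"bio": biometric_hash, "doc": document_hash,
                             "valid_until": valid_until}, sort_keys=True).encode()
        tag = hmac.new(TCPA_KEY, record, hashlib.sha256).hexdigest()
        return record, tag

    def verify_ar(record, tag, now):
        ok = hmac.compare_digest(tag, hmac.new(TCPA_KEY, record, hashlib.sha256).hexdigest())
        return ok and json.loads(record)["valid_until"] > now   # expired AR forces re-enrollment

    record, tag = make_ar("b1a9...", "d0c5...", valid_until=time.time() + 3600)
    print(verify_ar(record, tag, now=time.time()))              # True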
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gaede, S; Jordan, K; Western University, London, ON
Purpose: To present a customized programmable moving insert for the ArcCHECK™ phantom that can, in a single delivery, check entrance dosimetry while simultaneously verifying the delivery of respiratory-gated VMAT. Methods: The cylindrical motion phantom uses a computer-controlled stepping motor to move an insert inside a stationary sleeve. Insert motion is programmable and can include rotational motion in addition to linear motion along the axis of the cylinder. The sleeve fits securely in the bore of the ArcCHECK™. Interchangeable inserts, including an A1SL chamber, optically stimulated luminescence dosimeters, radiochromic film, or 3D gels, allow this combination to be used for commissioning, routine quality assurance, and patient-specific dosimetric verification of respiratory-gated VMAT. Before clinical implementation, the effect of a moving insert on the ArcCHECK™ measurements was considered. First, the measured dose to the ArcCHECK™ containing multiple inserts in the static position was compared to the calculated dose during multiple VMAT treatment deliveries. Then, dose was measured under both sinusoidal and real-patient motion conditions to determine any effect of the moving inserts on the ArcCHECK™ measurements. Finally, dose was measured during gated VMAT delivery to the same inserts under the same motion conditions to examine any effect of beam “on-and-off” transitions and dose rate ramp “up-and-down”. Multiple comparisons between measured and calculated dose to different inserts were also considered. Results: The pass rate for the static delivery exceeded 98% for all measurements (3%/3mm), suggesting a valid setup for entrance dosimetry. The pass rate was not altered for any measurement delivered under motion conditions. A similar result was observed under gated VMAT conditions, including agreement of measured and calculated dose to the various inserts. Conclusion: Incorporating a programmable moving insert within the ArcCHECK™ phantom provides an efficient verification of respiratory-gated VMAT delivery that is useful during commissioning, routine quality assurance, and patient-specific dose verification. Prototype phantom development and testing was performed in collaboration with Modus Medical Devices Inc. (London, ON). No financial support was granted.
An Empirical Evaluation of Automated Theorem Provers in Software Certification
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd; Schumann, Johann
2004-01-01
We describe a system for the automated certification of safety properties of NASA software. The system uses Hoare-style program verification technology to generate proof obligations which are then processed by an automated first-order theorem prover (ATP). We discuss the unique requirements this application places on the ATPs, focusing on automation, proof checking, and usability. For full automation, however, the obligations must be aggressively preprocessed and simplified, and we demonstrate how the individual simplification stages, which are implemented by rewriting, influence the ability of the ATPs to solve the proof tasks. Our results are based on 13 certification experiments that led to more than 25,000 proof tasks, each of which has been attempted by Vampire, Spass, e-setheo, and Otter. The proofs found by Otter have been proof-checked by IVY.
NASA Technical Reports Server (NTRS)
Miller, Steven P.; Whalen, Mike W.; O'Brien, Dan; Heimdahl, Mats P.; Joshi, Anjali
2005-01-01
Recent advances in model-checking have made it practical to formally verify the correctness of many complex synchronous systems (i.e., systems driven by a single clock). However, many computer systems are implemented by asynchronously composing several synchronous components, where each component has its own clock and these clocks are not synchronized. Formal verification of such Globally Asynchronous/Locally Synchronous (GA/LS) architectures is a much more difficult task. In this report, we describe a methodology for developing and reasoning about such systems. This approach allows a developer to start from an ideal system specification and refine it along two axes. Along one axis, the system can be refined one component at a time towards an implementation. Along the other axis, the behavior of the system can be relaxed to produce a more cost effective but still acceptable solution. We illustrate this process by applying it to the synchronization logic of a Dual Flight Guidance System, evolving the system from an ideal case in which the components do not fail and communicate synchronously to one in which the components can fail and communicate asynchronously. For each step, we show how the system requirements have to change if the system is to be implemented and prove that each implementation meets the revised system requirements through model checking.
Galaxy-galaxy lensing in the Dark Energy Survey Science Verification data
NASA Astrophysics Data System (ADS)
Clampitt, J.; Sánchez, C.; Kwan, J.; Krause, E.; MacCrann, N.; Park, Y.; Troxel, M. A.; Jain, B.; Rozo, E.; Rykoff, E. S.; Wechsler, R. H.; Blazek, J.; Bonnett, C.; Crocce, M.; Fang, Y.; Gaztanaga, E.; Gruen, D.; Jarvis, M.; Miquel, R.; Prat, J.; Ross, A. J.; Sheldon, E.; Zuntz, J.; Abbott, T. M. C.; Abdalla, F. B.; Armstrong, R.; Becker, M. R.; Benoit-Lévy, A.; Bernstein, G. M.; Bertin, E.; Brooks, D.; Burke, D. L.; Carnero Rosell, A.; Carrasco Kind, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Dietrich, J. P.; Doel, P.; Estrada, J.; Evrard, A. E.; Fausti Neto, A.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gruendl, R. A.; Honscheid, K.; James, D. J.; Kuehn, K.; Kuropatkin, N.; Lahav, O.; Lima, M.; March, M.; Marshall, J. L.; Martini, P.; Melchior, P.; Mohr, J. J.; Nichol, R. C.; Nord, B.; Plazas, A. A.; Romer, A. K.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Swanson, M. E. C.; Tarle, G.; Thomas, D.; Vikram, V.; Walker, A. R.
2017-03-01
We present galaxy-galaxy lensing results from 139 deg^2 of Dark Energy Survey (DES) Science Verification (SV) data. Our lens sample consists of red galaxies, known as redMaGiC, which are specifically selected to have a low photometric redshift error and outlier rate. The lensing measurement has a total signal-to-noise ratio of 29 over scales 0.09 < R < 15 Mpc h^-1, including all lenses over a wide redshift range 0.2 < z < 0.8. Dividing the lenses into three redshift bins for this constant moving number density sample, we find no evidence for evolution in the halo mass with redshift. We obtain consistent results for the lensing measurement with two independent shear pipelines, NGMIX and IM3SHAPE. We perform a number of null tests on the shear and photometric redshift catalogues and quantify resulting systematic uncertainties. Covariances from jackknife subsamples of the data are validated with a suite of 50 mock surveys. The result and systematic checks in this work provide a critical input for future cosmological and galaxy evolution studies with the DES data and redMaGiC galaxy samples. We fit a halo occupation distribution (HOD) model, and demonstrate that our data constrain the mean halo mass of the lens galaxies, despite strong degeneracies between individual HOD parameters.
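The jackknife covariance construction cited in this abstract is a standard resampling estimate; a minimal sketch with invented array names and sizes (not the DES pipeline) follows.

```python
import numpy as np

def jackknife_covariance(measurements):
    """Covariance of a stacked signal from leave-one-out (jackknife) subsamples.

    measurements: array of shape (n_subsamples, n_radial_bins), where row k is
    the signal measured with spatial patch k removed.
    """
    n = measurements.shape[0]
    diff = measurements - measurements.mean(axis=0)
    # The (n-1)/n prefactor accounts for the strong overlap of jackknife subsamples.
    return (n - 1.0) / n * diff.T @ diff

# Example: 50 leave-one-out estimates of a signal in 10 radial bins (fake data).
rng = np.random.default_rng(0)
fake = rng.normal(loc=1.0, scale=0.1, size=(50, 10))
cov = jackknife_covariance(fake)
print(cov.shape)  # (10, 10)
```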
Experimental Evaluation of Verification and Validation Tools on Martian Rover Software
NASA Technical Reports Server (NTRS)
Brat, Guillaume; Giannakopoulou, Dimitra; Goldberg, Allen; Havelund, Klaus; Lowry, Mike; Pasareani, Corina; Venet, Arnaud; Visser, Willem; Washington, Rich
2003-01-01
We report on a study to determine the maturity of different verification and validation (V&V) technologies on a representative example of NASA flight software. The study consisted of a controlled experiment where three technologies (static analysis, runtime analysis and model checking) were compared to traditional testing with respect to their ability to find seeded errors in a prototype Mars Rover. What makes this study unique is that it is the first (to the best of our knowledge) to do a controlled experiment to compare formal methods based tools to testing on a realistic industrial-size example where the emphasis was on collecting as much data on the performance of the tools and the participants as possible. The paper includes a description of the Rover code that was analyzed, the tools used, as well as a detailed description of the experimental setup and the results. Due to the complexity of setting up the experiment, our results cannot be generalized, but we believe the study can still serve as a valuable point of reference for future studies of this kind. It did confirm the belief we had that advanced tools can outperform testing when trying to locate concurrency errors. Furthermore, the results of the experiment inspired a novel framework for testing the next generation of the Rover.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bojechko, Casey; Phillps, Mark; Kalet, Alan
Purpose: Complex treatments in radiation therapy require robust verification in order to prevent errors that can adversely affect the patient. For this purpose, the authors estimate the effectiveness of detecting errors with a “defense in depth” system composed of electronic portal imaging device (EPID) based dosimetry and a software-based system composed of rules-based and Bayesian network verifications. Methods: The authors analyzed incidents with a high potential severity score, scored as a 3 or 4 on a 4 point scale, recorded in an in-house voluntary incident reporting system, collected from February 2012 to August 2014. The incidents were categorized into different failure modes. The detectability, defined as the number of incidents that are detectable divided by the total number of incidents, was calculated for each failure mode. Results: In total, 343 incidents were used in this study. Of the incidents, 67% were related to photon external beam therapy (EBRT). The majority of the EBRT incidents were related to patient positioning and only a small number of these could be detected by EPID dosimetry when performed prior to treatment (6%). A large fraction could be detected by in vivo dosimetry performed during the first fraction (74%). Rules-based and Bayesian network verifications were found to be complementary to EPID dosimetry, able to detect errors related to patient prescriptions and documentation, and errors unrelated to photon EBRT. Combining all of the verification steps together, 91% of all EBRT incidents could be detected. Conclusions: This study shows that the defense in depth system is potentially able to detect a large majority of incidents. The most effective EPID-based dosimetry verification is in vivo measurement during the first fraction, complemented by rules-based and Bayesian network plan checking.
Developing Formal Correctness Properties from Natural Language Requirements
NASA Technical Reports Server (NTRS)
Nikora, Allen P.
2006-01-01
This viewgraph presentation reviews the rationale of the program to transform natural language specifications into formal notation, specifically to automate the generation of Linear Temporal Logic (LTL) correctness properties from natural language temporal specifications. There are several reasons for this approach: (1) model-based techniques are becoming more widely accepted; (2) analytical verification techniques (e.g., model checking, theorem proving) are significantly more effective at detecting certain types of specification design errors (e.g., race conditions, deadlock) than manual inspection; (3) many requirements are still written in natural language, which results in a high learning curve for specification languages and associated tools, while schedule and budget pressure on projects reduces training opportunities for engineers; and (4) formulation of correctness properties for system models can be a difficult problem. This is relevant to NASA in that it would simplify the development of formal correctness properties, lead to more widespread use of model-based specification and design techniques, assist in earlier identification of defects, and reduce residual defect content for space mission software systems. The presentation also discusses potential applications, accomplishments and/or technology transfer potential, and the next steps.
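As a purely illustrative aside (not the rule set of the program described in the presentation), the kind of natural-language-to-LTL translation involved can be sketched as a small template lookup; the template strings and proposition names below are invented.

```python
# Illustrative only: a toy mapping from natural-language requirement templates
# to LTL correctness properties. A real generator would parse free-form text.
TEMPLATES = {
    "globally, {p} never occurs":            "G !{p}",
    "{p} always eventually leads to {q}":    "G ({p} -> F {q})",
    "{p} holds until {q}":                   "{p} U {q}",
    "after {p}, {q} holds in the next step": "G ({p} -> X {q})",
}

def to_ltl(template: str, **props: str) -> str:
    """Instantiate a template with proposition names, e.g. p='request'."""
    return TEMPLATES[template].format(**props)

print(to_ltl("{p} always eventually leads to {q}", p="request", q="grant"))
# G (request -> F grant)
```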
Verification of Functional Fault Models and the Use of Resource Efficient Verification Tools
NASA Technical Reports Server (NTRS)
Bis, Rachael; Maul, William A.
2015-01-01
Functional fault models (FFMs) are a directed graph representation of the failure effect propagation paths within a system's physical architecture and are used to support development and real-time diagnostics of complex systems. Verification of these models is required to confirm that the FFMs are correctly built and accurately represent the underlying physical system. However, a manual, comprehensive verification process applied to the FFMs was found to be error prone due to the intensive and customized process necessary to verify each individual component model and to require a burdensome level of resources. To address this problem, automated verification tools have been developed and utilized to mitigate these key pitfalls. This paper discusses the verification of the FFMs and presents the tools that were developed to make the verification process more efficient and effective.
NASA Astrophysics Data System (ADS)
Pack, Robert C.; Standiford, Keith; Lukanc, Todd; Ning, Guo Xiang; Verma, Piyush; Batarseh, Fadi; Chua, Gek Soon; Fujimura, Akira; Pang, Linyong
2014-10-01
A methodology is described wherein a calibrated model-based `Virtual' Variable Shaped Beam (VSB) mask writer process simulator is used to accurately verify complex Optical Proximity Correction (OPC) and Inverse Lithography Technology (ILT) mask designs prior to Mask Data Preparation (MDP) and mask fabrication. This type of verification addresses physical effects which occur in mask writing that may impact lithographic printing fidelity and variability. The work described here is motivated by requirements for extreme accuracy and control of variations for today's most demanding IC products. These extreme demands necessitate careful and detailed analysis of all potential sources of uncompensated error or variation and extreme control of these at each stage of the integrated OPC/ MDP/ Mask/ silicon lithography flow. The important potential sources of variation we focus on here originate on the basis of VSB mask writer physics and other errors inherent in the mask writing process. The deposited electron beam dose distribution may be examined in a manner similar to optical lithography aerial image analysis and image edge log-slope analysis. This approach enables one to catch, grade, and mitigate problems early and thus reduce the likelihood for costly long-loop iterations between OPC, MDP, and wafer fabrication flows. It moreover describes how to detect regions of a layout or mask where hotspots may occur or where the robustness to intrinsic variations may be improved by modification to the OPC, choice of mask technology, or by judicious design of VSB shots and dose assignment.
Darling, M.E.; Hubbard, L.E.
1994-01-01
Computerized Geographic Information Systems (GIS) have become viable and valuable tools for managing, analyzing, creating, and displaying data for three-dimensional finite-difference ground-water flow models. Three GIS applications demonstrated in this study are: (1) regridding of data arrays from an existing large-area, low resolution ground-water model to a smaller, high resolution grid; (2) use of GIS techniques for assembly of data-input arrays for a ground-water model; and (3) use of GIS for rapid display of data for verification, for checking of ground-water model output, and for the creation of customized maps for use in reports. The Walla Walla River Basin was selected as the location for the demonstration because (1) data from a low resolution ground-water model (Columbia Plateau Regional Aquifer System Analysis [RASA]) were available and (2) there is concern about the long-term use of water resources for irrigation in the basin. The principal advantage of regridding is that it may provide the ability to more precisely calibrate a model, assuming that a more detailed coverage of data is available, and to evaluate the numerical errors associated with a particular grid design. Regridding gave about an 8-fold increase in grid-node density. Several FORTRAN programs were developed to load the regridded ground-water data into a finite-difference modular model as model-compatible input files for use in a steady-state model run. To facilitate the checking and validating of the GIS regridding process, maps and tabular reports were produced for each of eight ground-water parameters by model layer. Also, an automated subroutine that was developed to view the model-calculated water levels in cross-section will aid in the synthesis and interpretation of model results.
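The regridding step described in this abstract, coarse regional-model arrays resampled onto a finer local grid, can be sketched generically; the bilinear-resampling example below uses a made-up parameter array and is not the GIS/FORTRAN workflow actually used in the study.

```python
import numpy as np
from scipy.ndimage import zoom

# Hypothetical coarse array of a ground-water parameter on a low-resolution
# regional grid (values are arbitrary).
coarse = np.array([[10.0, 12.0, 15.0],
                   [11.0, 13.0, 16.0],
                   [12.0, 14.0, 18.0]])

# Resample onto a grid with roughly 2.8x the node spacing resolution per axis
# (order=1 gives bilinear interpolation), i.e. close to an 8-fold increase in
# total grid-node density.
fine = zoom(coarse, zoom=2.8, order=1)
print(coarse.shape, "->", fine.shape)
```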
Towards a supported common NEAMS software stack
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cormac Garvey
2012-04-01
The NEAMS IPSCs are developing multidimensional, multiphysics, multiscale simulation codes based on first principles that will be capable of predicting all aspects of current and future nuclear reactor systems. This new breed of simulation codes will include rigorous verification, validation and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of nuclear reactors and will also contribute to a speedier process in the acquisition of licenses from the NRC for new reactor designs. Due to the high resolution of the models, the complexity of the physics and the added computational resources to quantify the accuracy/quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out the production simulation runs.
NASA Technical Reports Server (NTRS)
Bensalem, Saddek; Ganesh, Vijay; Lakhnech, Yassine; Munoz, Cesar; Owre, Sam; Ruess, Harald; Rushby, John; Rusu, Vlad; Saiedi, Hassen; Shankar, N.
2000-01-01
To become practical for assurance, automated formal methods must be made more scalable, automatic, and cost-effective. Such an increase in scope, scale, automation, and utility can be derived from an emphasis on a systematic separation of concerns during verification. SAL (Symbolic Analysis Laboratory) attempts to address these issues. It is a framework for combining different tools to calculate properties of concurrent systems. The heart of SAL is a language, developed in collaboration with Stanford, Berkeley, and Verimag, for specifying concurrent systems in a compositional way. Our instantiation of the SAL framework augments PVS with tools for abstraction, invariant generation, program analysis (such as slicing), theorem proving, and model checking to separate concerns as well as calculate properties (i.e., perform symbolic analysis) of concurrent systems. We describe the motivation, the language, the tools, their integration in SAL/PVS, and some preliminary experience of their use.
Model Based Verification of Cyber Range Event Environments
2015-12-10
Model Based Verification of Cyber Range Event Environments Suresh K. Damodaran MIT Lincoln Laboratory 244 Wood St., Lexington, MA, USA...apply model based verification to cyber range event environment configurations, allowing for the early detection of errors in event environment...Environment Representation (CCER) ontology. We also provide an overview of a methodology to specify verification rules and the corresponding error
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, H; Tan, J; Kavanaugh, J
Purpose: Radiotherapy (RT) contours delineated either manually or semiautomatically require verification before clinical usage. Manual evaluation is very time consuming. A new integrated software tool using supervised pattern contour recognition was thus developed to facilitate this process. Methods: The contouring tool was developed using an object-oriented programming language, C#, and application programming interfaces, e.g. the visualization toolkit (VTK). The C# language served as the tool design basis. The Accord.Net scientific computing libraries were utilized for the required statistical data processing and pattern recognition, while the VTK was used to build and render 3-D mesh models of critical RT structures in real time with 360° visualization. Principal component analysis (PCA) was used for system self-updating of the geometry variations of normal structures, based on physician-approved RT contours as a training dataset. The in-house supervised PCA-based contour recognition method was used for automatically evaluating contour normality/abnormality. The function for reporting the contour evaluation results was implemented using C# and the Windows Form Designer. Results: The software input was RT simulation images and RT structures from commercial clinical treatment planning systems. Several abilities were demonstrated: automatic assessment of RT contours, file loading/saving of various modality medical images and RT contours, and generation/visualization of 3-D images and anatomical models. Moreover, it supported the 360° rendering of the RT structures in a multi-slice view, which allows physicians to visually check and edit abnormally contoured structures. Conclusion: This new software integrates the supervised learning framework with image processing and graphical visualization modules for RT contour verification. This tool has great potential for facilitating treatment planning with the assistance of an automatic contour evaluation module, avoiding unnecessary manual verification for physicians/dosimetrists. In addition, its nature as a compact and stand-alone tool allows for future extensibility to include additional functions for physicians' clinical needs.
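The supervised PCA-based normality check described above can be illustrated with a short sketch that fits PCA on feature vectors from approved contours and flags contours with an unusually large reconstruction error. The sketch uses scikit-learn rather than Accord.Net, random stand-in feature vectors, and an assumed percentile threshold; it is not the in-house implementation.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical training set: one feature vector per physician-approved contour
# (e.g., volume, centroid coordinates, sphericity, slice extent, ...).
rng = np.random.default_rng(1)
approved_features = rng.normal(size=(200, 6))

pca = PCA(n_components=3).fit(approved_features)

def reconstruction_error(x):
    """Distance between a contour's features and their PCA reconstruction."""
    z = pca.inverse_transform(pca.transform(x.reshape(1, -1)))
    return float(np.linalg.norm(x - z))

# Threshold chosen here as the 95th percentile of training errors (an assumption).
train_errors = [reconstruction_error(x) for x in approved_features]
threshold = np.percentile(train_errors, 95)

new_contour = rng.normal(size=6)
print("abnormal" if reconstruction_error(new_contour) > threshold else "normal")
```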
CD volume design and verification
NASA Technical Reports Server (NTRS)
Li, Y. P.; Hughes, J. S.
1993-01-01
In this paper, we describe a prototype for CD-ROM volume design and verification. This prototype allows users to create their own model of CD volumes by modifying a prototypical model. Rule-based verification of the test volumes can then be performed against the volume definition. This working prototype has proven the concept of model-driven, rule-based design and verification for large quantities of data. The model defined for the CD-ROM volumes becomes a data model as well as an executable specification.
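A toy illustration of the model-driven, rule-based verification idea in this abstract (with a hypothetical volume model and rules, not the prototype's actual data model) might look like the following.

```python
# Illustrative only: check a candidate volume description against a simple
# volume-definition model made up for this sketch.
volume_model = {
    "max_file_name_len": 31,
    "required_dirs": {"INDEX", "DOCUMENT", "LABEL"},
}

test_volume = {
    "dirs": {"INDEX", "DOCUMENT"},
    "files": ["INDEX/INDEX.TAB", "DOCUMENT/A_VERY_LONG_FILE_NAME_EXCEEDING_THE_LIMIT.TXT"],
}

def verify(volume, model):
    """Return a list of rule violations for this volume against the model."""
    errors = []
    missing = model["required_dirs"] - volume["dirs"]
    if missing:
        errors.append(f"missing directories: {sorted(missing)}")
    for path in volume["files"]:
        name = path.rsplit("/", 1)[-1]
        if len(name) > model["max_file_name_len"]:
            errors.append(f"file name too long: {name}")
    return errors

for e in verify(test_volume, model=volume_model):
    print("RULE VIOLATION:", e)
```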
2008-07-01
8501A (Reference 2), and from the V/STOL specification MIL-F-83300 (Reference 3). ADS-33E-PRF contains intermeshed requirements on not only short- and...While final verification will in most cases require flight testing, initial checks can be performed through analysis and on ground-based simulators...they are difficult to test, or for some reason are deficient in one or more areas. In such cases one or more alternate criteria are presented where
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vazquez Quino, L; Huerta Hernandez, C; Morrow, A
2016-06-15
Purpose: To evaluate the use of MobiusFX as a pre-treatment verification IMRT QA tool and compare it with a commercial 4D detector array for VMAT plan QA. Methods: 15 VMAT plan QAs for different treatment sites were delivered and measured by traditional means with the 4D detector array ArcCheck (Sun Nuclear Corporation), and at the same time linac treatment logs (Varian Dynalog files) from the same deliveries were analyzed with the MobiusFX software (Mobius Medical Systems). VMAT plan QAs created in the Eclipse treatment planning system (Varian) for a TrueBeam linac machine (Varian) were delivered and analyzed with the gamma analysis routine of the SNPA software (Sun Nuclear Corporation). Results: Comparable results in terms of the gamma analysis, with a 99.06% average gamma passing rate at the 3%/3 mm criterion, are observed in the comparison among MobiusFX, the ArcCheck measurements, and the dose calculated by the treatment planning system. When going to a stricter criterion (1%/1 mm), larger discrepancies are observed in different regions of the measurements, with an average gamma passing rate of 66.24% between MobiusFX and ArcCheck. Conclusion: This work indicates the potential for using MobiusFX as a routine pre-treatment patient-specific IMRT quality assurance method and its advantages as a phantom-less method that reduces the time for IMRT QA measurement. MobiusFX is capable of producing results similar to those of the traditional methods used for patient-specific pre-treatment verification VMAT QA. Even though the gamma results compared to the TPS are similar, the analysis of both methods shows that the errors identified by each method are found in different regions. Traditional methods like ArcCheck are sensitive to setup errors and dose difference errors coming from the linac output. On the other hand, linac log file analysis records different errors in the VMAT QA associated with the MLCs and gantry motion that cannot be detected by traditional methods.
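The gamma analysis cited throughout these QA abstracts combines a dose-difference criterion with a distance-to-agreement criterion. A minimal 1-D sketch of a globally normalized gamma index at 3%/3 mm, not tied to any vendor implementation and using synthetic profiles, is:

```python
import numpy as np

def gamma_1d(x, dose_eval, dose_ref, dd=0.03, dta=3.0):
    """Global-normalization gamma index on a common 1-D grid.

    x: positions in mm; dose_eval, dose_ref: dose arrays on those positions.
    dd: dose-difference criterion as a fraction of the max reference dose.
    dta: distance-to-agreement criterion in mm.
    """
    d_norm = dd * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2
        dose2 = ((dose_eval - di) / d_norm) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())  # minimum over evaluated points
    return gammas

x = np.linspace(0, 100, 201)                 # mm
ref = np.exp(-((x - 50) / 20) ** 2)          # synthetic reference profile
ev = np.exp(-((x - 51) / 20) ** 2) * 1.01    # slightly shifted/scaled "measurement"
g = gamma_1d(x, ev, ref)
print(f"pass rate: {100 * np.mean(g <= 1):.1f}%")
```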
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, S; Wu, Y; Chang, X
Purpose: A novel computer software system, namely APDV (Automatic Pre-Delivery Verification), has been developed for verifying patient treatment plan parameters right before treatment delivery in order to automatically detect and prevent catastrophic errors. Methods: APDV is designed to continuously monitor new DICOM plan files on the TMS computer at the treatment console. When new plans to be delivered are detected, APDV checks the consistency of plan parameters and high-level plan statistics using underlying rules and statistical properties based on the given treatment site, technique and modality. These rules were quantitatively derived by retrospectively analyzing all the EBRT treatment plans of the past 8 years at the authors' institution. Therapists and physicists will be notified with a warning message displayed on the TMS computer if any critical errors are detected, and check results, confirmations, and dismissal actions will be saved into a database for further review. Results: APDV was implemented as a stand-alone program using C# to ensure the required real-time performance. Mean values and standard deviations were quantitatively derived for various plan parameters including MLC usage, MU/cGy ratio, beam SSD, beam weighting, and the beam gantry angles (only for lateral targets) per treatment site, technique and modality. 2D-based rules on the combined MU/cGy ratio and averaged SSD values were also derived using joint probabilities of confidence error ellipses. The statistics of these major treatment plan parameters quantitatively evaluate the consistency of any treatment plan, which facilitates the automatic APDV checking procedures. Conclusion: APDV could be useful in detecting and preventing catastrophic errors immediately before treatment delivery. Future plans include automatic patient identity and patient setup checks after patient daily images are acquired by the machine and become available on the TMS computer. This project is supported by the Agency for Healthcare Research and Quality (AHRQ) under award 1R01HS0222888. The senior author received research grants from ViewRay Inc. and Varian Medical System.
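The per-site statistics and joint confidence ellipse described for APDV can be illustrated with a Mahalanobis-distance check; the parameter values, random historical data, and the 99% threshold below are illustrative assumptions, not the clinically derived rules.

```python
import numpy as np

# Hypothetical site-specific statistics derived from historical plans:
# columns are (MU/cGy ratio, averaged SSD in cm).
rng = np.random.default_rng(2)
historical = np.column_stack([rng.normal(3.0, 0.4, 500),
                              rng.normal(92.0, 2.0, 500)])
mean = historical.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(historical, rowvar=False))

def mahalanobis2(x):
    """Squared Mahalanobis distance of a plan's (MU/cGy, SSD) pair."""
    d = x - mean
    return float(d @ cov_inv @ d)

# For 2 degrees of freedom, a squared Mahalanobis distance of ~9.21
# corresponds to the 99% confidence ellipse (chi-square quantile).
new_plan = np.array([4.6, 97.5])
if mahalanobis2(new_plan) > 9.21:
    print("WARNING: plan parameters fall outside the 99% confidence ellipse")
```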
A calibration method for patient specific IMRT QA using a single therapy verification film
Shukla, Arvind Kumar; Oinam, Arun S.; Kumar, Sanjeev; Sandhu, I.S.; Sharma, S.C.
2013-01-01
Aim The aim of the present study is to develop and verify the single film calibration procedure used in intensity-modulated radiation therapy (IMRT) quality assurance. Background Radiographic films have been regularly used in routine commissioning of treatment modalities and verification of treatment planning systems (TPS). Radiation dosimetry based on radiographic films has the ability to give an absolute two-dimensional dose distribution and is preferred for IMRT quality assurance. However, the single therapy verification film gives a quick and reliable method for IMRT verification. Materials and methods A single extended dose range (EDR 2) film was used to generate the sensitometric curve of film optical density versus radiation dose. The EDR 2 film was exposed with nine 6 cm × 6 cm fields of a 6 MV photon beam obtained from a medical linear accelerator at 5-cm depth in a solid water phantom. The nine regions of the single film were exposed with radiation doses ranging from 10 to 362 cGy. The actual dose measurements inside the field regions were performed using a 0.6 cm3 ionization chamber. The exposed film was processed after irradiation and scanned using a VIDAR film scanner, and the value of optical density was noted for each region. Ten IMRT plans of head and neck carcinoma were verified using a dynamic IMRT technique and evaluated using the gamma index method against the TPS-calculated dose distribution. Results A sensitometric curve was generated using a single film exposed at nine field regions to allow quantitative dose verification of IMRT treatments. The radiation scatter factor was observed to decrease exponentially with the increase in the distance from the centre of each field region. The IMRT plans based on the calibration curve were verified using the gamma index method and found to be within acceptable criteria. Conclusion The single film method proved to be superior to the traditional calibration method and provides fast daily film calibration for highly accurate IMRT verification. PMID:24416558
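The single-film sensitometric calibration described above amounts to fitting a smooth optical-density-versus-dose curve through the nine measured regions and inverting it. A hedged sketch follows, with made-up readings and a simple polynomial model standing in for whatever functional form the authors actually used.

```python
import numpy as np

# Hypothetical ion-chamber doses (cGy) and net optical densities for the
# nine exposed regions of a single film (values invented for illustration).
dose = np.array([10, 40, 80, 120, 170, 220, 270, 320, 362], dtype=float)
od = np.array([0.05, 0.18, 0.35, 0.50, 0.68, 0.84, 0.99, 1.12, 1.22])

# A low-order polynomial is one simple choice of sensitometric model.
coeffs = np.polyfit(od, dose, deg=3)
od_to_dose = np.poly1d(coeffs)

# Convert a scanned optical density back to dose.
print(f"OD 0.75 -> {od_to_dose(0.75):.1f} cGy")
```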
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sethuraman, TKR; Sherif, M; Subramanian, N
Purpose: The complexity of IMRT delivery requires pre-treatment quality assurance and plan verification. KCCC has implemented IMRT clinically in a few sites and will extend it to all sites. Recently, our Varian linear accelerator and Eclipse planning system were upgraded from a Millennium 80 to a Millennium 120 Multileaf Collimator (MLC) and from v8.6 to v11.0, respectively. Our preliminary experience with the pre-treatment quality assurance verification is discussed. Methods: Eight breast, three prostate and one hypopharynx cancer patients were planned with step-and-shoot IMRT. All breast cases were planned before the upgrade, with 60% of the cases treated. The ICRU 83 recommendations were followed for the dose prescription and constraints to OARs for all cases. Point dose measurement was done with a CIRS cylindrical phantom and a PTW 0.125 cc ionization chamber. The measured dose was compared with the calculated dose at the point of measurement. A MapCHECK diode array phantom was used for the plan verification. Planned and measured doses were compared by applying a gamma index of 3% (dose difference) / 3 mm DTA (distance to agreement). For all cases, a plan is considered to be successful if more than 95% of the tested diodes pass the gamma test. A prostate case was chosen to compare the plan verification before and after the upgrade. Results: Point dose measurement results were in agreement with the calculated doses. The maximum deviation observed was 2.3%. The average gamma index passing rate was higher than 97% for the plan verification of all cases. A similar result was observed for plan verification of the chosen prostate case before and after the upgrade. Conclusion: Our preliminary experience with the obtained results validates the accuracy of our QA process and provides confidence to extend IMRT to all sites in Kuwait.
Systems, methods and apparatus for pattern matching in procedure development and verification
NASA Technical Reports Server (NTRS)
Hinchey, Michael G. (Inventor); Rouff, Christopher A. (Inventor); Rash, James L. (Inventor)
2011-01-01
Systems, methods and apparatus are provided through which, in some embodiments, a formal specification is pattern-matched from scenarios, the formal specification is analyzed, and flaws in the formal specification are corrected. The systems, methods and apparatus may include pattern-matching an equivalent formal model from an informal specification. Such a model can be analyzed for contradictions, conflicts, use of resources before the resources are available, competition for resources, and so forth. From such a formal model, an implementation can be automatically generated in a variety of notations. The approach can improve the resulting implementation, which, in some embodiments, is provably equivalent to the procedures described at the outset, which in turn can improve confidence that the system reflects the requirements, and in turn reduces system development time and reduces the amount of testing required of a new system. Moreover, in some embodiments, two or more implementations can be "reversed" to appropriate formal models, the models can be combined, and the resulting combination checked for conflicts. Then, the combined, error-free model can be used to generate a new (single) implementation that combines the functionality of the original separate implementations, and may be more likely to be correct.
47 CFR 73.151 - Field strength measurements to establish performance of directional antennas.
Code of Federal Regulations, 2010 CFR
2010-10-01
... verified either by field strength measurement or by computer modeling and sampling system verification. (a... specifically identified by the Commission. (c) Computer modeling and sample system verification of modeled... performance verified by computer modeling and sample system verification. (1) A matrix of impedance...
Jornet, Núria; Carrasco, Pablo; Beltrán, Mercè; Calvo, Juan Francisco; Escudé, Lluís; Hernández, Victor; Quera, Jaume; Sáez, Jordi
2014-09-01
We performed a multicentre intercomparison of IMRT optimisation and dose planning and of IMRT pre-treatment verification methods and results. The aims were to check consistency between dose plans and to validate whether in-house pre-treatment verification results agreed with those of an external audit. Participating centres used two mock cases (prostate and head and neck) for the intercomparison and audit. Compliance with dosimetric goals and the total number of MU per plan were collected. A simple quality index to compare the different plans was proposed. We compared gamma index pass rates obtained using each centre's equipment and methodology to those of an external audit. While all centres fulfilled the dosimetric goals for the prostate case and plan quality was homogeneous, that was not so for the head and neck case. The number of MU did not correlate with the plan quality index. Pre-treatment verification results of the external audit did not agree with those of the in-house measurements for two centres: results were within tolerance for the in-house measurements and unacceptable for the audit, or the other way round. Although all plans fulfilled dosimetric constraints, plan quality is highly dependent on the planner's expertise. External audits are an excellent tool to detect errors in IMRT implementation and cannot be replaced by intercomparisons of results obtained by the centres themselves. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Factors associated with tobacco sales to minors: lessons learned from the FDA compliance checks.
Clark, P I; Natanblut, S L; Schmitt, C L; Wolters, C; Iachan, R
2000-08-09
Tobacco products continue to be widely accessible to minors. Between 1997 and 1999, the US Food and Drug Administration (FDA) conducted more than 150,000 tobacco sales age-restriction compliance checks. Data obtained from these checks provide important guidance for curbing illegal sales. To determine which elements of the compliance checks were most highly associated with illegal sales and thereby inform best practices for conducting efficient compliance check programs. Cross-sectional analysis of FDA compliance checks in 110,062 unique establishments in 36 US states and the District of Columbia. Illegal sales of tobacco to minors at compliance checks; association of illegal sales with variables such as age and sex of the minor. The rate of illegal sales for all first compliance checks in unique stores was 26.6%. Clerk failure to request proof of age was strongly associated with illegal sales (uncorrected sales rate, 10.5% compared with 89.5% sales when proof was not requested; multivariate-adjusted odds ratio [OR], 0.03; 95% confidence interval [CI], 0.03-0.04). Other factors associated with increased illegal sales were employment of older minors to make the purchase attempt (adjusted ORs for 16- and 17-year-old minors compared with 15-year-olds were 1.52 [95% CI, 1.46-1.63] and 2.43 [95% CI, 2.31-2.59], respectively), attempt to purchase smokeless tobacco (adjusted OR, 2.16 [95% CI, 1.90-2.45] vs cigarette purchase attempts), and performing checks at or after 5 PM (adjusted OR, 1.28 [95% CI, 1.21-1.35] vs before 5 PM). Female sex of clerk and minor, Saturday checks, type of store (convenience store selling gas, gas station, drugstore, supermarket and general merchandise), and rural store locations also were associated with increased illegal sales. This analysis found that a request for age verification strongly predicted compliance with the law. The results suggest several ways in which the process of compliance checks might be optimized. JAMA. 2000;284:729-734
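The odds ratios with 95% confidence intervals reported above come from standard 2x2-table and logistic-regression calculations; a minimal unadjusted example with invented counts (not the FDA data) is:

```python
import math

# Hypothetical 2x2 table: rows = proof of age requested / not requested,
# columns = illegal sale / no sale. Counts are illustrative only.
a, b = 120, 8000    # proof requested: sales, non-sales
c, d = 900, 1100    # proof not requested: sales, non-sales

odds_ratio = (a * d) / (b * c)
# Woolf's method for the standard error of log(OR).
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

An OR well below 1, as in this made-up table, corresponds to the paper's finding that a request for proof of age greatly reduces the odds of an illegal sale; the published estimates additionally adjust for covariates.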
Catuzzo, P; Zenone, F; Aimonetto, S; Peruzzo, A; Casanova Borca, V; Pasquino, M; Franco, P; La Porta, M R; Ricardi, U; Tofani, S
2012-07-01
To investigate the feasibility of implementing a novel approach for patient-specific QA of TomoDirect(TM) whole breast treatment. The most commonly used TomoTherapy DQA method, consisting of the verification of the 2D dose distribution in a coronal or sagittal plane of the Cheese Phantom by means of gafchromic films, was compared with an alternative approach based on the use of two commercially available diode arrays, MapCHECK2(TM) and ArcCHECK(TM). The TomoDirect(TM) plans of twenty patients with a primary unilateral breast cancer were applied to a CT scan of the Cheese Phantom and a MVCT dataset of the diode arrays. Then measurements of the 2D dose distribution were performed and compared with the calculated ones using the gamma analysis method with different sets of DTA and DD criteria (3%-3 mm, 3%-2 mm). The sensitivity of the diode arrays to detect delivery and setup errors was also investigated. The measured dose distributions showed excellent agreement with the TPS calculations for each detector, with averaged fractions of passed Γ values greater than 95%. The percentage of points satisfying the constraint Γ < 1 was significantly higher for MapCHECK2(TM) than for ArcCHECK(TM) and gafchromic films using both the 3%-3 mm and 3%-2 mm gamma criteria. Both diode arrays show a good sensitivity to delivery and setup errors using a 3%-2 mm gamma criterion. MapCHECK2(TM) and ArcCHECK(TM) may fulfill the demands of an adequate system for TomoDirect(TM) patient-specific QA.
A Multi-Encoding Approach for LTL Symbolic Satisfiability Checking
NASA Technical Reports Server (NTRS)
Rozier, Kristin Y.; Vardi, Moshe Y.
2011-01-01
Formal behavioral specifications written early in the system-design process and communicated across all design phases have been shown to increase the efficiency, consistency, and quality of the system under development. To prevent introducing design or verification errors, it is crucial to test specifications for satisfiability. Our focus here is on specifications expressed in linear temporal logic (LTL). We introduce a novel encoding of symbolic transition-based Buchi automata and a novel, "sloppy," transition encoding, both of which result in improved scalability. We also define novel BDD variable orders based on tree decomposition of formula parse trees. We describe and extensively test a new multi-encoding approach utilizing these novel encoding techniques to create 30 encoding variations. We show that our novel encodings translate to significant, sometimes exponential, improvement over the current standard encoding for symbolic LTL satisfiability checking.
Aerobraking Maneuver (ABM) Report Generator
NASA Technical Reports Server (NTRS)
Fisher, Forrest; Gladden, Roy; Khanampornpan, Teerapat
2008-01-01
abmREPORT Version 3.1 is a Perl script that extracts vital summarization information from the Mars Reconnaissance Orbiter (MRO) aerobraking ABM build process. This information facilitates sequence reviews, and provides a high-level summarization of the sequence for mission management. The script extracts information from the ENV, SSF, FRF, SCMFmax, and OPTG files and burn magnitude configuration files and presents them in a single, easy-to-check report that provides the majority of the parameters necessary for cross check and verification during the sequence review process. This means that needed information, formerly spread across a number of different files and each in a different format, is all available in this one application. This program is built on the capabilities developed in dragReport and then the scripts evolved as the two tools continued to be developed in parallel.
A Reliability Study for Robust Planar GaAs Varactors: Design and Operational Considerations
NASA Technical Reports Server (NTRS)
Maiwald, Frank; Schlecht, Erich; Ward, John; Lin, Robert; Leon, Rosa; Pearson, John; Mehdi, Imran
2003-01-01
Preliminary conclusions include: Limits for reverse currents cannot be set. Based on current data we want to avoid any reverse bias current; we know 1 micro-A is too high. Leakage current is suppressed when operated at 120 K. Migration and verification: a) the reverse bias voltage will be limited; b) health check with I/V curve: 1) the minimal reverse voltage shall be 0.75 of the calculated breakdown voltage Vbr; 2) degradation of the reverse bias voltage at a given current will be used as an indication of ESD incidents or other damage (high RF power, heat); 3) calculation of diode parameters to verify the initial health check results in the forward direction. RF output power starts to degrade only when the diode I/V curve is very strongly degraded; this was experienced on 400 GHz and 200 GHz doublers.
Validation of SCIAMACHY and TOMS UV Radiances Using Ground and Space Observations
NASA Technical Reports Server (NTRS)
Hilsenrath, E.; Bhartia, P. K.; Bojkov, B. R.; Kowalewski, M.; Labow, G.; Ahmad, Z.
2004-01-01
Verification of stratospheric ozone recovery remains a high priority for environmental research and policy definition. Models predict an ozone recovery at a much lower rate than the measured depletion rate observed to date. Therefore, improved precision of the satellite and ground ozone observing systems is required over the long term to verify the recovery. We show that validation of satellite radiances from space and from the ground can be a very effective means for correcting long-term drifts of backscatter-type satellite measurements and can be used to cross-calibrate all BUV instruments in orbit (TOMS, SBUV/2, GOME, SCIAMACHY, OMI, GOME-2, OMPS). This method bypasses the retrieval algorithms for both satellite and ground-based measurements that are normally used to validate and correct the satellite data. Radiance comparisons employ forward models and are inherently more accurate than inverse (retrieval) algorithms. This approach, however, requires well calibrated instruments and an accurate radiative transfer model that accounts for aerosols. TOMS and SCIAMACHY calibrations are checked to demonstrate this method and to demonstrate applicability for long-term trends.
Validation of mesoscale models
NASA Technical Reports Server (NTRS)
Kuo, Bill; Warner, Tom; Benjamin, Stan; Koch, Steve; Staniforth, Andrew
1993-01-01
The topics discussed include the following: verification of cloud prediction from the PSU/NCAR mesoscale model; results from MAPS/NGM verification comparisons and MAPS observation sensitivity tests to ACARS and profiler data; systematic errors and mesoscale verification for a mesoscale model; and the COMPARE Project and the CME.
CFD Aerothermodynamic Characterization Of The IXV Hypersonic Vehicle
NASA Astrophysics Data System (ADS)
Roncioni, P.; Ranuzzi, G.; Marini, M.; Battista, F.; Rufolo, G. C.
2011-05-01
In this paper, in the framework of the ESA technical assistance activities for the IXV project, the numerical activities carried out by ASI/CIRA to support the development of the Aerodynamic and Aerothermodynamic databases, independent from the ones developed by the IXV industrial consortium, are reported. A general characterization of the IXV aerothermodynamic environment has also been provided for cross-checking and verification purposes. The work deals with the first-year activities of the Technical Assistance Contract agreed between the Italian Space Agency/CIRA and ESA.
NASA Technical Reports Server (NTRS)
Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.
1975-01-01
Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real time acquisition and formatting of data from an all up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystems modules, module integration, special test requirements, and reference data formats are also described.
VAVUQ, Python and Matlab freeware for Verification and Validation, Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Courtney, J. E.; Zamani, K.; Bombardelli, F. A.; Fleenor, W. E.
2015-12-01
A package of scripts is presented for automated Verification and Validation (V&V) and Uncertainty Quantification (UQ) for engineering codes that approximate Partial Differential Equations (PDEs). The code post-processes model results to produce V&V and UQ information. This information can be used to assess model performance. Automated information on code performance can allow for a systematic methodology to assess the quality of model approximations. The software implements common and accepted code verification schemes. The software uses the Method of Manufactured Solutions (MMS), the Method of Exact Solution (MES), Cross-Code Verification, and Richardson Extrapolation (RE) for solution (calculation) verification. It also includes common statistical measures that can be used for model skill assessment. Complete RE can be conducted for complex geometries by implementing high-order non-oscillating numerical interpolation schemes within the software. Model approximation uncertainty is quantified by calculating lower and upper bounds of numerical error from the RE results. The software is also able to calculate the Grid Convergence Index (GCI), and to handle adaptive meshes and models that implement mixed order schemes. Four examples are provided to demonstrate the use of the software for code and solution verification, model validation and uncertainty quantification. The software is used for code verification of a mixed-order compact difference heat transport solver; the solution verification of a 2D shallow-water-wave solver for tidal flow modeling in estuaries; the model validation of a two-phase flow computation in a hydraulic jump compared to experimental data; and numerical uncertainty quantification for 3D CFD modeling of the flow patterns in a Gust erosion chamber.
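Richardson extrapolation and the Grid Convergence Index mentioned above follow standard formulas; a minimal sketch for three systematically refined grids, with invented solution values rather than VAVUQ's own routines, is:

```python
import math

def observed_order_and_gci(f_fine, f_mid, f_coarse, r, Fs=1.25):
    """Observed order of accuracy, fine-grid GCI, and Richardson estimate for a
    scalar quantity computed on three grids with constant refinement ratio r."""
    p = math.log(abs(f_coarse - f_mid) / abs(f_mid - f_fine)) / math.log(r)
    rel_err = abs((f_mid - f_fine) / f_fine)
    gci_fine = Fs * rel_err / (r ** p - 1.0)
    f_exact = f_fine + (f_fine - f_mid) / (r ** p - 1.0)  # Richardson extrapolation
    return p, gci_fine, f_exact

# Hypothetical peak values from coarse/medium/fine solutions (r = 2).
p, gci, f_rx = observed_order_and_gci(f_fine=1.9712, f_mid=1.9685,
                                      f_coarse=1.9602, r=2.0)
print(f"observed order p = {p:.2f}, GCI = {100 * gci:.3f}%, extrapolated = {f_rx:.4f}")
```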
Efficient Verification of Holograms Using Mobile Augmented Reality.
Hartl, Andreas Daniel; Arth, Clemens; Grubert, Jens; Schmalstieg, Dieter
2016-07-01
Paper documents such as passports, visas and banknotes are frequently checked by inspection of security elements. In particular, optically variable devices such as holograms are important, but difficult to inspect. Augmented Reality can provide all relevant information on standard mobile devices. However, hologram verification on mobiles still takes a long time and provides lower accuracy than inspection by human individuals using appropriate reference information. We aim to address these drawbacks by automatic matching combined with a special parametrization of an efficient goal-oriented user interface which supports constrained navigation. We first evaluate a series of similarity measures for matching hologram patches to provide a sound basis for automatic decisions. Then a re-parametrized user interface is proposed based on observations of typical user behavior during document capture. These measures help to reduce capture time to approximately 15 s with better decisions regarding the evaluated samples than can be achieved by untrained users.
Research on the injectors remanufacturing
NASA Astrophysics Data System (ADS)
Daraba, D.; Alexandrescu, I. M.; Daraba, C.
2017-05-01
During the remanufacturing process, the injector body, after the disassembly and cleaning processes, should be subjected to strict control processes, both visually and with an electron microscope, to reveal any defects that may occur on the sealing surface of the injector body and the atomizer. In this paper we present the path followed by an injector body in the remanufacturing process, exemplifying the method used to verify the roughness and hardness of the sealing surfaces, as well as the microscopic analysis of the sealing surface areas around the inlet. These checks can indicate which path the injector body has to follow during remanufacturing. The control methodology for the injector body established on the basis of this research helps prevent defective injector bodies from entering the remanufacturing process, thus reducing to a minimum the number of remanufactured injectors declared non-conforming after the final verification process.
NASA Technical Reports Server (NTRS)
Fey, M. G.
1981-01-01
The experimental verification system for the production of silicon via the arc heater-sodium reduction of SiCl4 was designed, fabricated, installed, and operated. Each of the attendant subsystems was checked out and operated to ensure performance requirements were met. These subsystems included: the arc heaters/reactor, cooling water system, gas system, power system, Control & Instrumentation system, Na injection system, SiCl4 injection system, effluent disposal system and gas burnoff system. Prior to introducing the reactants (Na and SiCl4) to the arc heater/reactor, a series of gas-only power tests was conducted to establish the operating parameters of the three arc heaters of the system. Following the successful completion of the gas-only power tests and the readiness tests of the sodium and SiCl4 injection systems, a shakedown test of the complete experimental verification system was conducted.
NASA Astrophysics Data System (ADS)
Maroto, Oscar; Diez-Merino, Laura; Carbonell, Jordi; Tomàs, Albert; Reyes, Marcos; Joven-Alvarez, Enrique; Martín, Yolanda; Morales de los Ríos, J. A.; del Peral, Luis; Rodríguez-Frías, M. D.
2014-07-01
The Japanese Experiment Module (JEM) Extreme Universe Space Observatory (EUSO) will be launched and attached to the Japanese module of the International Space Station (ISS). Its aim is to observe UV photon tracks produced by ultra-high energy cosmic rays developing in the atmosphere and producing extensive air showers. The key element of the instrument is a very wide-field, very fast, large-lens telescope that can detect extreme energy particles with energy above 10^19 eV. The Atmospheric Monitoring System (AMS), comprising, among others, the Infrared Camera (IRCAM), which is the Spanish contribution, plays a fundamental role in the understanding of the atmospheric conditions in the Field of View (FoV) of the telescope. It is used to detect the temperature of clouds and to obtain the cloud coverage and cloud top altitude during the observation period of the JEM-EUSO main instrument. SENER is responsible for the preliminary design of the Front End Electronics (FEE) of the Infrared Camera, based on an uncooled microbolometer, and the manufacturing and verification of the prototype model. This paper describes the flight design drivers and key factors to achieve the target features, namely, detector biasing with electrical noise better than 100 μV from 1 Hz to 10 MHz, temperature control of the microbolometer from 10°C to 40°C with stability better than 10 mK over 4.8 hours, low noise high bandwidth amplifier adaptation of the microbolometer output to differential input before analog to digital conversion, housekeeping generation, microbolometer control, and image accumulation for noise reduction. It also shows the modifications implemented in the FEE prototype design to perform a trade-off of different technologies, such as the convenience of using linear or switched regulation for the temperature control, the possibility of checking the camera performance when both the microbolometer and the analog electronics are moved further away from the power and digital electronics, and the addition of switching regulators to demonstrate the design is immune to the electrical noise the switching converters introduce. Finally, the results obtained during the verification phase are presented: FEE limitations, verification results, including FEE noise for each channel and its equivalent NETD and the microbolometer temperature stability achieved, technology trade-offs, lessons learnt, and design improvements to implement in future project phases.
Knowledge-based verification of clinical guidelines by detection of anomalies.
Duftschmid, G; Miksch, S
2001-04-01
As shown in numerous studies, a significant part of published clinical guidelines is tainted with different types of semantic errors that interfere with their practical application. The adaptation of generic guidelines, necessitated by circumstances such as resource limitations within the applying organization or unexpected events arising in the course of patient care, further promotes the introduction of defects. Still, most current approaches for the automation of clinical guidelines lack mechanisms that check the overall correctness of their output. In the domain of software engineering in general, and in the domain of knowledge-based systems (KBS) in particular, a common strategy to examine a system for potential defects consists in its verification. The focus of this work is to present an approach which helps to ensure the semantic correctness of clinical guidelines in a three-step process. We use a particular guideline specification language called Asbru to demonstrate our verification mechanism. A scenario-based evaluation of our method is provided based on a guideline for the artificial ventilation of newborn infants. The described approach is kept sufficiently general to allow its application to several other guideline representation formats.
Systematic Model-in-the-Loop Test of Embedded Control Systems
NASA Astrophysics Data System (ADS)
Krupp, Alexander; Müller, Wolfgang
Current model-based development processes offer new opportunities for verification automation, e.g., in automotive development. The duty of functional verification is the detection of design flaws. Current functional verification approaches exhibit a major gap between requirement definition and formal property definition, especially when analog signals are involved. Besides the lack of methodical support for natural language formalization, there does not exist a standardized and accepted means for formal property definition as a target for verification planning. This article addresses several shortcomings of embedded system verification. An Enhanced Classification Tree Method is developed based on the established Classification Tree Method for Embedded Systems (CTM/ES), which applies a hardware verification language to define a verification environment.
Standardized verification of fuel cycle modeling
Feng, B.; Dixon, B.; Sunny, E.; ...
2016-04-05
A nuclear fuel cycle systems modeling and code-to-code comparison effort was coordinated across multiple national laboratories to verify the tools needed to perform fuel cycle analyses of the transition from a once-through nuclear fuel cycle to a sustainable potential future fuel cycle. For this verification study, a simplified example transition scenario was developed to serve as a test case for the four systems codes involved (DYMOND, VISION, ORION, and MARKAL), each used by a different laboratory participant. In addition, all participants produced spreadsheet solutions for the test case to check all the mass flows and reactor/facility profiles on a year-by-year basis throughout the simulation period. The test case specifications describe a transition from the current US fleet of light water reactors to a future fleet of sodium-cooled fast reactors that continuously recycle transuranic elements as fuel. After several initial coordinated modeling and calculation attempts, it was revealed that most of the differences in code results were not due to different code algorithms or calculation approaches, but due to different interpretations of the input specifications among the analysts. Therefore, the specifications for the test case itself were iteratively updated to remove ambiguity and to help calibrate interpretations. In addition, a few corrections and modifications were made to the codes as well, which led to excellent agreement between all codes and spreadsheets for this test case. Although no fuel cycle transition analysis codes matched the spreadsheet results exactly, all remaining differences in the results were due to fundamental differences in code structure and/or were thoroughly explained. As a result, the specifications and example results are provided so that they can be used to verify additional codes in the future for such fuel cycle transition scenarios.
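The year-by-year spreadsheet cross-checks described above reduce to comparing mass-flow time series from different codes within a tolerance; a hedged sketch with invented arrays and a hypothetical 1% relative tolerance is:

```python
import numpy as np

def compare_mass_flows(years, code_a, code_b, rel_tol=0.01):
    """Report years where two codes' mass flows (e.g., material sent to
    recycle, in tonnes/year) differ by more than rel_tol of the larger value."""
    mismatches = []
    for y, a, b in zip(years, code_a, code_b):
        denom = max(abs(a), abs(b), 1e-12)
        if abs(a - b) / denom > rel_tol:
            mismatches.append((y, a, b))
    return mismatches

# Invented example series for two codes over five simulation years.
years = np.arange(2020, 2025)
code_a = np.array([10.0, 11.2, 12.5, 13.9, 15.1])
code_b = np.array([10.0, 11.2, 12.9, 13.9, 15.1])
for y, a, b in compare_mass_flows(years, code_a, code_b):
    print(f"{y}: {a} vs {b} (outside 1% tolerance)")
```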
Review and verification of CARE 3 mathematical model and code
NASA Technical Reports Server (NTRS)
Rose, D. M.; Altschul, R. E.; Manke, J. W.; Nelson, D. L.
1983-01-01
The CARE-III mathematical model and code verification performed by Boeing Computer Services were documented. The mathematical model was verified for permanent and intermittent faults. The transient fault model was not addressed. The code verification was performed on CARE-III, Version 3. A CARE III Version 4, which corrects deficiencies identified in Version 3, is being developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colletti, Lisa Michelle
2016-02-19
In December of 2015, the old qualified Prim UV-Vis instrument used in the ANC122 and NFANC122 procedures failed its monthly calibration check. Attempts to get the system to pass by cleaning flow cells, replacing the light bulb and realigning the light path failed to get sustained reproducible results from the system. As it could no longer pass QA/QC requirements the decision to take it out of service was made. To replace the system, a previously purchased Thermo Fisher Scientific UV-VIS spectrometer (model Genesys 10S; Serial #: 2L5R059137) was retrieved from storage. This report shows that the system is performing as required and meeting all QC requirements listed in ANC122 and NF-ANC122.
Model Checking a Self-Stabilizing Distributed Clock Synchronization Protocol for Arbitrary Digraphs
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2011-01-01
This report presents the mechanical verification of a self-stabilizing distributed clock synchronization protocol for arbitrary digraphs in the absence of faults. This protocol does not rely on assumptions about the initial state of the system, other than the presence of at least one node, and no central clock or a centrally generated signal, pulse, or message is used. The system under study is an arbitrary, non-partitioned digraph ranging from fully connected to 1-connected networks of nodes while allowing for differences in the network elements. Nodes are anonymous, i.e., they do not have unique identities. There is no theoretical limit on the maximum number of participating nodes. The only constraint on the behavior of the node is that the interactions with other nodes are restricted to defined links and interfaces. This protocol deterministically converges within a time bound that is a linear function of the self-stabilization period.
Galileo spacecraft modal test and evaluation of testing techniques
NASA Technical Reports Server (NTRS)
Chen, J.-C.
1984-01-01
The structural configuration, modal test requirements, and pre-test activities involved in modeling the expected dynamic environment and responses of the Galileo spacecraft are discussed. The probe will be Shuttle-launched in 1986 and will gather data on the Jupiter system. Loads analyses for the 5300 lb spacecraft were performed with the NASTRAN code and covered 10,000 static degrees of freedom and 1600 mass degrees of freedom. A modal analysis will be used to verify the predictions for natural frequencies, mode shapes, orthogonality checks, residual mass, modal damping and forces, and generalized forces. The validity of considering only 70 natural modes in the numerical simulation is being verified by examining the forcing functions of the analysis. The analysis led to requirements that 162 channels of accelerometer data and 118 channels of strain gage data be recorded during shaker tests to reveal areas where design changes will be needed to eliminate vibration peaks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-03-01
The certification and verification of the Northrup Model NSC-01-0732 Fresnel lens tracking solar collector are presented. A certification statement is included with signatures, along with a separate report on the structural analysis of the collector system. System verification against the Interim Performance Criteria is indicated by matrices with verification discussion, analysis, and enclosed test results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fakir, H.; Gaede, S.; Mulligan, M.
Purpose: To design a versatile, nonhomogeneous insert for the dose verification phantom ArcCHECK (Sun Nuclear Corp., FL) and to demonstrate its usefulness for the verification of dose distributions in inhomogeneous media. As an example, we demonstrate it can be used clinically for routine quality assurance of two volumetric modulated arc therapy (VMAT) systems for lung stereotactic body radiation therapy (SBRT): SmartArc (Pinnacle³, Philips Radiation Oncology Systems, Fitchburg, WI) and RapidArc (Eclipse, Varian Medical Systems, Palo Alto, CA). Methods: The cylindrical detector array ArcCHECK has a retractable homogeneous acrylic insert. In this work, we designed and manufactured a customized heterogeneous insert with densities that simulate soft tissue, lung, bone, and air. The insert offers several possible heterogeneity configurations and multiple locations for point dose measurements. SmartArc and RapidArc plans for lung SBRT were generated and copied to ArcCHECK for each inhomogeneity configuration. Dose delivery was done on a Varian 2100 ix linac. The evaluation of dose distributions was based on gamma analysis of the diode measurements and point dose measurements at different positions near the inhomogeneities. Results: The insert was successfully manufactured and tested with different measurements of VMAT plans. Dose distributions measured with the homogeneous insert showed gamma passing rates similar to our clinical results (~99%) for both treatment-planning systems. Using nonhomogeneous inserts decreased the passing rates by up to 3.6% in the examples studied. Overall, SmartArc plans showed better gamma passing rates for nonhomogeneous measurements. The discrepancy between calculated and measured point doses increased up to 6.5% for the nonhomogeneous insert, depending on the inhomogeneity configuration and measurement location. SmartArc and RapidArc plans had similar plan quality, but RapidArc plans had significantly higher monitor units (up to 70%). Conclusions: A versatile, nonhomogeneous insert was developed for ArcCHECK for an easy and quick evaluation of dose calculations with nonhomogeneous media and for comparison of different treatment planning systems. The device was tested for SmartArc and RapidArc plans for lung SBRT, showing the uncertainties of dose calculations with inhomogeneities. The new insert combines the convenience of the ArcCHECK and the possibility of assessing dose distributions in inhomogeneous media.
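For reference, a brute-force sketch of the global gamma analysis used to score such measurements; the grid spacing, 3%/3 mm criteria, and low-dose threshold are illustrative, and this is not the vendor's ArcCHECK implementation.

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, spacing_mm, dose_tol=0.03, dta_mm=3.0, threshold=0.1):
    """Simplified global 2D gamma passing rate (default 3%/3 mm).
    dose_eval, dose_ref: 2D arrays on the same grid; spacing_mm: pixel spacing."""
    norm = dose_ref.max()
    search = int(np.ceil(dta_mm / spacing_mm)) + 1
    ny, nx = dose_ref.shape
    passed, evaluated = 0, 0
    for j in range(ny):
        for i in range(nx):
            if dose_ref[j, i] < threshold * norm:
                continue  # ignore low-dose region
            evaluated += 1
            best = np.inf
            for dj in range(-search, search + 1):
                for di in range(-search, search + 1):
                    jj, ii = j + dj, i + di
                    if not (0 <= jj < ny and 0 <= ii < nx):
                        continue
                    r2 = (dj**2 + di**2) * spacing_mm**2 / dta_mm**2
                    d2 = ((dose_eval[jj, ii] - dose_ref[j, i]) / (dose_tol * norm))**2
                    best = min(best, r2 + d2)
            passed += best <= 1.0
    return 100.0 * passed / max(evaluated, 1)
```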
Wright, Kevin B; King, Shawn; Rosenberg, Jenny
2014-01-01
This study investigated the influence of social support and self-verification on loneliness, depression, and stress among 477 college students. The authors propose and test a theoretical model using structural equation modeling. The results indicated empirical support for the model, with self-verification mediating the relation between social support and health outcomes. The results have implications for social support and self-verification research, which are discussed along with directions for future research and limitations of the study.
Galaxy-galaxy lensing in the Dark Energy Survey Science Verification data
Clampitt, J.; Sánchez, C.; Kwan, J.; ...
2016-11-22
We present galaxy-galaxy lensing results from 139 square degrees of Dark Energy Survey (DES) Science Verification (SV) data. Our lens sample consists of red galaxies, known as redMaGiC, which are specifically selected to have a low photometric redshift error and outlier rate. The lensing measurement has a total signal-to-noise of 29 over scales $0.09 < R < 15$ Mpc/$h$, including all lenses over a wide redshift range $0.2 < z < 0.8$. Dividing the lenses into three redshift bins for this constant moving number density sample, we find no evidence for evolution in the halo mass with redshift. We obtain consistent results for the lensing measurement with two independent shear pipelines, ngmix and im3shape. We perform a number of null tests on the shear and photometric redshift catalogs and quantify resulting systematic uncertainties. Covariances from jackknife subsamples of the data are validated with a suite of 50 mock surveys. The results and systematics checks in this work provide a critical input for future cosmological and galaxy evolution studies with the DES data and redMaGiC galaxy samples. We fit a Halo Occupation Distribution (HOD) model, and demonstrate that our data constrains the mean halo mass of the lens galaxies, despite strong degeneracies between individual HOD parameters.
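A minimal sketch of the delete-one jackknife covariance estimate referred to above, assuming the binned lensing signal has already been measured once per jackknife resampling; whether the DES pipeline applies exactly this normalization is an assumption.

```python
import numpy as np

def jackknife_covariance(delete_one_signals):
    """Delete-one jackknife covariance of a binned signal.
    delete_one_signals: (N_sub, N_bins) array, one row per leave-one-region-out estimate.
    Returns the (N_bins, N_bins) covariance with the standard (N-1)/N prefactor."""
    s = np.asarray(delete_one_signals, dtype=float)
    n = s.shape[0]
    diff = s - s.mean(axis=0)
    return (n - 1) / n * diff.T @ diff
```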
Credit Card Fraud Detection: A Realistic Modeling and a Novel Learning Strategy.
Dal Pozzolo, Andrea; Boracchi, Giacomo; Caelen, Olivier; Alippi, Cesare; Bontempi, Gianluca
2017-09-14
Detecting frauds in credit card transactions is perhaps one of the best testbeds for computational intelligence algorithms. In fact, this problem involves a number of relevant challenges, namely: concept drift (customers' habits evolve and fraudsters change their strategies over time), class imbalance (genuine transactions far outnumber frauds), and verification latency (only a small set of transactions are timely checked by investigators). However, the vast majority of learning algorithms that have been proposed for fraud detection rely on assumptions that hardly hold in a real-world fraud-detection system (FDS). This lack of realism concerns two main aspects: 1) the way and timing with which supervised information is provided and 2) the measures used to assess fraud-detection performance. This paper has three major contributions. First, we propose, with the help of our industrial partner, a formalization of the fraud-detection problem that realistically describes the operating conditions of FDSs that analyze massive streams of credit card transactions every day. We also illustrate the most appropriate performance measures to be used for fraud-detection purposes. Second, we design and assess a novel learning strategy that effectively addresses class imbalance, concept drift, and verification latency. Third, in our experiments, we demonstrate the impact of class imbalance and concept drift in a real-world data stream containing more than 75 million transactions authorized over a time window of three years.
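As an illustration of one common way to handle the class imbalance described above (not the paper's exact learning strategy), a window-wise classifier trained after undersampling the genuine transactions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_on_window(X_window, y_window, genuine_to_fraud_ratio=5, seed=0):
    """Illustrative handling of class imbalance within one time window:
    keep all frauds (label 1), undersample genuine transactions (label 0), then fit.
    X_window, y_window are numpy arrays; the ratio, model, and window policy are assumptions."""
    rng = np.random.default_rng(seed)
    fraud_idx = np.flatnonzero(y_window == 1)
    genuine_idx = np.flatnonzero(y_window == 0)
    n_keep = min(len(genuine_idx), genuine_to_fraud_ratio * max(len(fraud_idx), 1))
    keep = rng.choice(genuine_idx, size=n_keep, replace=False)
    idx = np.concatenate([fraud_idx, keep])
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    clf.fit(X_window[idx], y_window[idx])
    return clf
```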
NASA Astrophysics Data System (ADS)
Capiński, Maciej J.; Gidea, Marian; de la Llave, Rafael
2017-01-01
We present a diffusion mechanism for time-dependent perturbations of autonomous Hamiltonian systems introduced in Gidea (2014 arXiv:1405.0866). This mechanism is based on shadowing of pseudo-orbits generated by two dynamics: an ‘outer dynamics’, given by homoclinic trajectories to a normally hyperbolic invariant manifold, and an ‘inner dynamics’, given by the restriction to that manifold. The only assumption on the inner dynamics is that it preserves area. Unlike other approaches, Gidea (2014 arXiv:1405.0866) does not rely on KAM theory and/or Aubry-Mather theory to establish the existence of diffusion. Moreover, it does not require checking twist conditions or non-degeneracy conditions near resonances. The conditions are explicit and can be checked by finite-precision calculations in concrete systems (roughly, they amount to checking that Melnikov-type integrals do not vanish and that some manifolds are transversal). As an application, we study the planar elliptic restricted three-body problem. We present a rigorous theorem showing that if some concrete calculations yield a nonzero value, then for any sufficiently small, positive value of the eccentricity of the orbits of the main bodies, there are orbits of the infinitesimal body that exhibit a change of energy bigger than some fixed number, which is independent of the eccentricity. We verify these calculations numerically for values of the masses close to those of the Jupiter/Sun system. The numerical calculations are not completely rigorous, because we ignore issues of round-off error and do not estimate the truncations, but they are not delicate at all by the standards of numerical analysis. (Standard tests indicate that we get 7 or 8 figures of accuracy where 1 would be enough.) The code of these verifications is available. We hope that full computer-assisted proofs will be obtained in the near future, since there are packages (CAPD) designed for problems of this type.
Saotome, Naoya; Furukawa, Takuji; Hara, Yousuke; Mizushima, Kota; Tansho, Ryohei; Saraya, Yuichi; Shirai, Toshiyuki; Noda, Koji
2016-04-01
Three-dimensional irradiation with a scanned carbon-ion beam has been performed since 2011 at the authors' facility. The authors have developed a rotating gantry equipped with the scanning irradiation system. The number of combinations of beam properties to measure for commissioning is more than 7200, i.e., 201 energy steps, 3 intensities, and 12 gantry angles. To compress the commissioning time, a quick and simple range verification system is required. In this work, the authors develop a quick range verification system using a scintillator and a charge-coupled device (CCD) camera and estimate the accuracy of the range verification. A cylindrical plastic scintillator block and a CCD camera were installed on the black box. The optical spatial resolution of the system is 0.2 mm/pixel. The camera control system was connected to and communicates with the measurement system that is part of the scanning system. The range was determined by image processing. The reference range for each beam energy was determined by a difference of Gaussian (DOG) method and by the 80% distal dose of the depth-dose distribution measured by a large parallel-plate ionization chamber. The authors compared a threshold method and the DOG method and found that the edge-detection method (i.e., the DOG method) is best for range detection. The accuracy of range detection using this system is within 0.2 mm, and the reproducibility of repeated measurements at the same energy is within 0.1 mm without setup error. The results of this study demonstrate that the authors' range check system is capable of quick and easy range verification with sufficient accuracy.
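A simplified sketch of difference-of-Gaussians edge detection on a depth-light-output profile, in the spirit of the DOG method above; the filter widths and the argmax-based edge pick are illustrative and omit the authors' calibration and sub-pixel details.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_range_dog(depth_mm, signal, sigma_narrow=1.0, sigma_wide=3.0):
    """Difference-of-Gaussians edge detection on a 1D depth profile of scintillator
    light output. Returns the depth of the strongest edge (taken here as the range).
    Sigmas are in samples and are illustrative values."""
    dog = gaussian_filter1d(signal, sigma_narrow) - gaussian_filter1d(signal, sigma_wide)
    edge_index = int(np.argmax(np.abs(dog)))  # strongest contrast change ~ distal falloff
    return depth_mm[edge_index]
```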
NASA Astrophysics Data System (ADS)
Augustine, Kurt E.; Walsh, Timothy J.; Beltran, Chris J.; Stoker, Joshua B.; Mundy, Daniel W.; Parry, Mark D.; Bues, Martin; Fatyga, Mirek
2016-04-01
The use of radiation therapy for the treatment of cancer has been carried out clinically since the late 1800s. Early on, however, it was discovered that a radiation dose sufficient to destroy cancer cells can also cause severe injury to surrounding healthy tissue. Radiation oncologists continually strive to find the perfect balance between a dose high enough to destroy the cancer and one that avoids damage to healthy organs. Spot scanning or "pencil beam" proton radiotherapy offers another option to improve on this. Unlike traditional photon therapy, proton beams stop in the target tissue, thus better sparing all organs beyond the targeted tumor. In addition, the beams are far narrower and thus can be more precisely "painted" onto the tumor, avoiding exposure to surrounding healthy tissue. To safely treat patients with proton beam radiotherapy, dose verification should be carried out for each plan prior to treatment. Proton dose verification systems are not currently commercially available, so the Department of Radiation Oncology at the Mayo Clinic developed its own, called DOSeCHECK, which offers two distinct dose simulation methods: GPU-based Monte Carlo and CPU-based analytical. The three major components of the system include the web-based user interface, the Linux-based dose verification simulation engines, and the supporting services and components. The architecture integrates multiple applications, libraries, platforms, programming languages, and communication protocols, and was successfully deployed in time for Mayo Clinic's first proton beam therapy patient. Having a simple, efficient application for dose verification greatly reduces staff workload and provides additional quality assurance, ultimately improving patient safety.
A Machine-Checked Proof of A State-Space Construction Algorithm
NASA Technical Reports Server (NTRS)
Catano, Nestor; Siminiceanu, Radu I.
2010-01-01
This paper presents the correctness proof of Saturation, an algorithm for generating state spaces of concurrent systems, implemented in the SMART tool. Unlike the Breadth First Search exploration algorithm, which is easy to understand and formalise, Saturation is a complex algorithm, employing a mutually-recursive pair of procedures that compute a series of non-trivial, nested local fixed points, corresponding to a chaotic fixed point strategy. A pencil-and-paper proof of Saturation exists, but a machine-checked proof had never been attempted. The key element of the proof is the characterisation theorem of saturated nodes in decision diagrams, stating that a saturated node represents a set of states encoding a local fixed point with respect to firing all events affecting only the node's level and levels below. For our purpose, we have employed the Prototype Verification System (PVS) for formalising the Saturation algorithm and its data structures, and for conducting the proofs.
Statistical analysis of NWP rainfall data from Poland
NASA Astrophysics Data System (ADS)
Starosta, Katarzyna; Linkowska, Joanna
2010-05-01
The goal of this work is to summarize the latest results of precipitation verification in Poland. At IMGW, COSMO_PL version 4.0 has been running with the following configuration: 14 km horizontal grid spacing, initial times at 00 UTC and 12 UTC, and a forecast range of 72 h. The model fields were verified against Polish SYNOP stations using a new verification tool. For accumulated precipitation, the indices FBI, POD, FAR, and ETS are calculated from the contingency table. This paper presents a comparison of monthly and seasonal verification of 6 h, 12 h, and 24 h accumulated precipitation in 2009. Starting in February 2010, a version of the model with 7 km grid spacing is being run at IMGW, and precipitation verification results for the two model resolutions will be shown.
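For reference, the standard contingency-table scores named above can be computed as follows; hits, false alarms, misses, and correct negatives are counts for precipitation exceeding a chosen threshold.

```python
def categorical_scores(hits, false_alarms, misses, correct_negatives):
    """Standard 2x2 contingency-table scores: frequency bias (FBI), probability of
    detection (POD), false alarm ratio (FAR), and equitable threat score (ETS)."""
    n = hits + false_alarms + misses + correct_negatives
    fbi = (hits + false_alarms) / (hits + misses)
    pod = hits / (hits + misses)
    far = false_alarms / (hits + false_alarms)
    hits_random = (hits + false_alarms) * (hits + misses) / n
    ets = (hits - hits_random) / (hits + false_alarms + misses - hits_random)
    return {"FBI": fbi, "POD": pod, "FAR": far, "ETS": ets}
```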
DOE Office of Scientific and Technical Information (OSTI.GOV)
Damato, A; Devlin, P; Bhagwat, M
Purpose: To investigate the sensitivity and specificity of a novel verification methodology for image-guided skin HDR brachytherapy plans using a TRAK-based reasonableness test, compared to a typical manual verification methodology. Methods: Two methodologies were used to flag treatment plans necessitating additional review due to a potential discrepancy of 3 mm between planned dose and clinical target in the skin. Manual verification was used to calculate the discrepancy between the average dose to points positioned at time of planning, representative of the prescribed depth, and the expected prescription dose. Automatic verification was used to calculate the discrepancy between the TRAK of the clinical plan and its expected value, which was calculated using standard plans with varying curvatures, ranging from flat to cylindrically circumferential. A plan was flagged if a discrepancy >10% was observed. Sensitivity and specificity were calculated using as a criterion for true positive that >10% of plan dwells had a distance to prescription dose >1 mm different from the prescription depth (3 mm + size of applicator). All HDR image-based skin brachytherapy plans treated at our institution in 2013 were analyzed. Results: 108 surface applicator plans to treat skin of the face, scalp, limbs, feet, hands, or abdomen were analyzed. The median number of catheters was 19 (range, 4 to 71) and the median number of dwells was 257 (range, 20 to 1100). Sensitivity/specificity were 57%/78% for manual and 70%/89% for automatic verification. Conclusion: A check based on the expected TRAK value is feasible for irregularly shaped, image-guided skin HDR brachytherapy. This test yielded higher sensitivity and specificity than a test based on the identification of representative points, and can be implemented with a dedicated calculation code or with pre-calculated lookup tables of ideally shaped, uniform surface applicators.
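A minimal sketch of the automatic flag and the sensitivity/specificity bookkeeping described above; how the expected TRAK is derived from the curvature-dependent standard plans is not reproduced here.

```python
def flag_plan(measured_trak, expected_trak, tolerance=0.10):
    """Automatic reasonableness test: flag a plan for additional review when the
    total reference air kerma (TRAK) deviates from its expected value by >10%."""
    return abs(measured_trak - expected_trak) / expected_trak > tolerance

def sensitivity_specificity(flags, truths):
    """flags/truths: parallel lists of booleans (flagged by the check / true discrepancy)."""
    tp = sum(f and t for f, t in zip(flags, truths))
    tn = sum((not f) and (not t) for f, t in zip(flags, truths))
    fp = sum(f and (not t) for f, t in zip(flags, truths))
    fn = sum((not f) and t for f, t in zip(flags, truths))
    return tp / (tp + fn), tn / (tn + fp)
```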
A Formal Methodology to Design and Deploy Dependable Wireless Sensor Networks
Testa, Alessandro; Cinque, Marcello; Coronato, Antonio; Augusto, Juan Carlos
2016-01-01
Wireless Sensor Networks (WSNs) are being increasingly adopted in critical applications, where verifying the correct operation of sensor nodes is a major concern. Undesired events may undermine the mission of the WSNs. Hence, their effects need to be properly assessed before deployment, to obtain a good level of expected performance, and during operation, in order to avoid dangerous unexpected results. In this paper, we propose a methodology that aims at assessing and improving the dependability level of WSNs by means of an event-based formal verification technique. The methodology includes a process to guide designers towards the realization of a dependable WSN and a tool (“ADVISES”) to simplify its adoption. The tool is applicable to homogeneous WSNs with static routing topologies. It allows the automatic generation of formal specifications used to check correctness properties and evaluate dependability metrics at design time and at runtime for WSNs where an acceptable percentage of faults can be defined. At runtime, we can check the behavior of the WSN against the results obtained at design time and detect sudden and unexpected failures, in order to trigger recovery procedures. The effectiveness of the methodology is shown in the context of two case studies, as proof-of-concept, illustrating how the tool is helpful to drive design choices and to check the correctness properties of the WSN at runtime. Although the method scales up to very large WSNs, the applicability of the methodology may be compromised by the state-space explosion of the reasoning model, which must be faced by partitioning large topologies into sub-topologies. PMID:28025568
Andreski, Michael; Myers, Megan; Gainer, Kate; Pudlo, Anthony
To determine the effects of an 18-month pilot project using tech-check-tech in 7 community pharmacies on 1) the rate of dispensing errors not identified during refill prescription final product verification; 2) pharmacist workday task composition; and 3) the amount of patient care services provided and the reimbursement status of those services. Pretest-posttest quasi-experimental study in which baseline and study periods were compared. Pharmacists and pharmacy technicians in 7 community pharmacies in Iowa. The outcome measures were 1) the percentage of technician-verified refill prescriptions where dispensing errors were not identified on final product verification; 2) the percentage of time spent by pharmacists in dispensing, management, patient care, practice development, and other activities; 3) the number of pharmacist patient care services provided per pharmacist hour worked; and 4) the percentage of time that technician product verification was used. There was no significant difference in overall errors (0.2729% vs. 0.5124%, P = 0.513), patient safety errors (0.0525% vs. 0.0651%, P = 0.837), or administrative errors (0.2204% vs. 0.4784%, P = 0.411). Pharmacists' time in dispensing significantly decreased (67.3% vs. 49.06%, P = 0.005), and time in direct patient care (19.96% vs. 34.72%, P = 0.003) increased significantly. Time in other activities did not significantly change. Reimbursable services per pharmacist hour (0.11 vs. 0.30, P = 0.129) did not significantly change. Non-reimbursable services increased significantly (2.77 vs. 4.80, P = 0.042). Total services increased significantly (2.88 vs. 5.16, P = 0.044). Pharmacy technician product verification of refill prescriptions preserved dispensing safety while significantly increasing the time spent in delivery of pharmacist-provided patient care services. The total number of pharmacist services provided per hour also increased significantly, driven primarily by a significant increase in the number of non-reimbursed services, most likely due to the increased time available to provide patient care. Reimbursed services per hour did not increase significantly, most likely due to a lack of payers.
Analyzing Tabular and State-Transition Requirements Specifications in PVS
NASA Technical Reports Server (NTRS)
Owre, Sam; Rushby, John; Shankar, Natarajan
1997-01-01
We describe PVS's capabilities for representing tabular specifications of the kind advocated by Parnas and others, and show how PVS's Type Correctness Conditions (TCCs) are used to ensure certain well-formedness properties. We then show how these and other capabilities of PVS can be used to represent the AND/OR tables of Leveson and the Decision Tables of Sherry, and we demonstrate how PVS's TCCs can expose and help isolate errors in the latter. We extend this approach to represent the mode transition tables of the Software Cost Reduction (SCR) method in an attractive manner. We show how PVS can check these tables for well-formedness, and how PVS's model checking capabilities can be used to verify invariants and reachability properties of SCR requirements specifications, and inclusion relations between the behaviors of different specifications. These examples demonstrate how several capabilities of the PVS language and verification system can be used in combination to provide customized support for specific methodologies for documenting and analyzing requirements. Because they use only the standard capabilities of PVS, users can adapt and extend these customizations to suit their own needs. Those developing dedicated tools for individual methodologies may find these constructions in PVS helpful for prototyping purposes, or as a useful adjunct to a dedicated tool when the capabilities of a full theorem prover are required. The examples also illustrate the power and utility of an integrated general-purpose system such as PVS. For example, there was no need to adapt or extend the PVS model checker to make it work with SCR specifications described using the PVS TABLE construct: the model checker is applicable to any transition relation, independently of the PVS language constructs used in its definition.
The Role of Integrated Modeling in the Design and Verification of the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Mosier, Gary E.; Howard, Joseph M.; Johnston, John D.; Parrish, Keith A.; Hyde, T. Tupper; McGinnis, Mark A.; Bluth, Marcel; Kim, Kevin; Ha, Kong Q.
2004-01-01
The James Webb Space Telescope (JWST) is a large, infrared-optimized space telescope scheduled for launch in 2011. System-level verification of critical optical performance requirements will rely on integrated modeling to a considerable degree. In turn, requirements for accuracy of the models are significant. The size of the lightweight observatory structure, coupled with the need to test at cryogenic temperatures, effectively precludes validation of the models and verification of optical performance with a single test in 1-g. Rather, a complex series of steps is planned by which the components of the end-to-end models are validated at various levels of subassembly, and the ultimate verification of optical performance is by analysis using the assembled models. This paper describes the critical optical performance requirements driving the integrated modeling activity, shows how the error budget is used to allocate and track contributions to total performance, and presents examples of integrated modeling methods and results that support the preliminary observatory design. Finally, the concepts for model validation and the role of integrated modeling in the ultimate verification of the observatory are described.
Sensitivity in error detection of patient specific QA tools for IMRT plans
NASA Astrophysics Data System (ADS)
Lat, S. Z.; Suriyapee, S.; Sanghangthum, T.
2016-03-01
The high complexity of dose calculation in treatment planning and the need for accurate delivery of IMRT plans demand a high-precision verification method. The purpose of this study is to investigate the error detection capability of patient-specific QA tools for IMRT plans. Two head-and-neck (H&N) and two prostate IMRT plans were studied with the MapCHECK2 and portal dosimetry QA tools. Measurements were undertaken for the original plans and for modified plans with errors introduced. The intentional errors consisted of prescribed dose changes (±2% to ±6%) and position shifts along the X-axis and Y-axis (±1 to ±5 mm). After measurement, gamma passing rates of the original and modified plans were compared. The average gamma passing rates for the original H&N and prostate plans were 98.3% and 100% for MapCHECK2 and 95.9% and 99.8% for portal dosimetry, respectively. For the H&N plans, MapCHECK2 can detect position shift errors starting from 3 mm, while portal dosimetry can detect errors starting from 2 mm. Both devices showed similar sensitivity in the detection of position shift errors in the prostate plans. For the H&N plans, MapCHECK2 can detect dose errors starting at ±4%, whereas portal dosimetry can detect them from ±2%. For the prostate plans, both devices can identify dose errors starting from ±4%. Sensitivity of error detection depends on the type of error and plan complexity.
Using SysML for verification and validation planning on the Large Synoptic Survey Telescope (LSST)
NASA Astrophysics Data System (ADS)
Selvy, Brian M.; Claver, Charles; Angeli, George
2014-08-01
This paper provides an overview of the tool, language, and methodology used for Verification and Validation Planning on the Large Synoptic Survey Telescope (LSST) Project. LSST has implemented a Model Based Systems Engineering (MBSE) approach as a means of defining all systems engineering planning and definition activities that have historically been captured in paper documents. Specifically, LSST has adopted the Systems Modeling Language (SysML) standard and is utilizing a software tool called Enterprise Architect, developed by Sparx Systems. Much of the historical use of SysML has focused on the early phases of the project life cycle. Our approach is to extend the advantages of MBSE into later stages of the construction project. This paper details the methodology employed to use the tool to document the verification planning phases, including the extension of the language to accommodate the project's needs. The process includes defining the Verification Plan for each requirement, which in turn consists of a Verification Requirement, Success Criteria, Verification Method(s), Verification Level, and Verification Owner. Each Verification Method for each Requirement is defined as a Verification Activity and mapped into Verification Events, which are collections of activities that can be executed concurrently in an efficient and complementary way. Verification Event dependency and sequences are modeled using Activity Diagrams. The methodology employed also ties in to the Project Management Control System (PMCS), which utilizes Primavera P6 software, mapping each Verification Activity as a step in a planned activity. This approach leads to full traceability from initial Requirement to scheduled, costed, and resource loaded PMCS task-based activities, ensuring all requirements will be verified.
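A hypothetical, minimal data model mirroring the verification-plan elements listed above; the class and field names follow the paper's terminology, but the code itself is illustrative rather than the LSST SysML profile.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VerificationActivity:
    requirement_id: str
    method: str           # e.g. "Inspection", "Analysis", "Demonstration", "Test"
    level: str            # e.g. "Component", "Subsystem", "System"
    success_criteria: str
    owner: str

@dataclass
class VerificationEvent:
    name: str
    activities: List[VerificationActivity] = field(default_factory=list)

    def add(self, activity: VerificationActivity) -> None:
        # Activities grouped into an event can be executed concurrently.
        self.activities.append(activity)
```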
2011-01-01
Purpose: To verify the dose distribution and number of monitor units (MU) for dynamic treatment techniques like volumetric modulated single arc radiation therapy (RapidArc), each patient treatment plan has to be verified prior to the first treatment. The purpose of this study was to develop a patient-related treatment plan verification protocol using a two-dimensional ionization chamber array (MatriXX, IBA, Schwarzenbruck, Germany). Method: Measurements were done to determine the dependence between the response of the 2D ionization chamber array, beam direction, and field size. The reproducibility of the measurements was also checked. For the patient-related verifications, the original patient RapidArc treatment plan was projected onto a CT dataset of the MatriXX and the dose distribution was calculated. After irradiation of the RapidArc verification plans, measured and calculated 2D dose distributions were compared using the gamma evaluation method implemented in the measuring software OmniPro (version 1.5, IBA, Schwarzenbruck, Germany). Results: The response of the 2D ionization chamber array as a function of field size and beam direction showed a passing rate of 99% for field sizes between 7 cm × 7 cm and 24 cm × 24 cm for single-arc measurements. For field sizes smaller than 7 cm × 7 cm or larger than 24 cm × 24 cm, the passing rate was below 99%. The reproducibility was within a passing rate of 99% and 100%. The accuracy of the whole process, including the uncertainty of the measuring system, treatment planning system, linear accelerator, and isocentric laser system in the treatment room, was acceptable for treatment plan verification using gamma criteria of 3% and 3 mm (2D global gamma index). Conclusion: It was possible to verify the 2D dose distribution and MU of RapidArc treatment plans using the MatriXX, and its use for RapidArc treatment plan verification in clinical routine is reasonable. If the passing-rate criterion is set at 99%, the verification protocol is able to detect clinically significant errors. PMID:21342509
NASA Astrophysics Data System (ADS)
Sarkar, Biplab; Ray, Jyotirmoy; Ganesh, Tharmarnadar; Manikandan, Arjunan; Munshi, Anusheel; Rathinamuthu, Sasikumar; Kaur, Harpreet; Anbazhagan, Satheeshkumar; Giri, Upendra K.; Roy, Soumya; Jassal, Kanan; Mohanti, Bidhu Kalyan
2018-04-01
The aim of this article is to derive and verify a mathematical formulation for the reduction of the six-dimensional (6D) positional inaccuracies of patients (lateral, longitudinal, vertical, pitch, roll and yaw) to three-dimensional (3D) linear shifts. The formulation was mathematically and experimentally tested and verified for 169 stereotactic radiotherapy patients. The mathematical verification involves the comparison of any (one) of the calculated rotational coordinates with the corresponding value from the 6D shifts obtained by cone beam computed tomography (CBCT). The experimental verification involves three sets of measurements using an ArcCHECK phantom, when (i) the phantom was not moved (neutral position: 0MES), (ii) the position of the phantom shifted by 6D shifts obtained from CBCT (6DMES) from neutral position and (iii) the phantom shifted from its neutral position by 3D shifts reduced from 6D shifts (3DMES). Dose volume histogram and statistical comparisons were made between ⟨TPSCAL-0MES⟩ and ⟨3DMES-6DMES⟩. The mathematical verification was performed by a comparison of the calculated and measured yaw (γ°) rotation values, which gave a straight line, Y = 1X, with a goodness of fit of R² = 0.9982. The verification, based on measurements, gave a planning target volume receiving 100% of the dose (V100%) as 99.1 ± 1.9%, 96.3 ± 1.8%, 74.3 ± 1.9% and 72.6 ± 2.8% for the calculated treatment planning system values TPSCAL, 0MES, 3DMES and 6DMES, respectively. The statistical significance (p-values: paired sample t-test) of V100% was found to be 0.03 for the paired sample ⟨3DMES-6DMES⟩ and 0.01 for ⟨0MES-TPSCAL⟩. In this paper, a mathematical method to reduce 6D shifts to 3D shifts is presented. The mathematical method is verified by using well-matched values between the measured and calculated γ°. Measurements done on the ArcCHECK phantom also proved that the proposed methodology is correct. The post-correction of the table position condition introduces a minimal spatial dose delivery error in the frameless stereotactic system, using a 6D motion enabled robotic couch. This formulation enables the reduction of 6D positional inaccuracies to 3D linear shifts, and hence allows the treatment of patients with frameless stereotactic radiosurgery by using only a 3D linear motion enabled couch.
NASA Astrophysics Data System (ADS)
Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard
2006-05-01
A GMM-based audio-visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing are performed on DCT-based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM-based classifier. Fusion of both audio and video modalities for audio-visual speaker verification is compared with the face verification and speaker verification systems alone. To improve the robustness of the multimodal biometric identity verification system, an audio-visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM-based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with a prospect of experimenting on the newly developed PDAtabase created within the scope of the SecurePhone project.
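A minimal sketch of GMM-based verification scoring by log-likelihood ratio against a background ("world") model, using scikit-learn in place of the BECARS classifier; the component count and decision threshold are illustrative.

```python
from sklearn.mixture import GaussianMixture

def train_models(client_features, world_features, n_components=64, seed=0):
    """Fit a client GMM and a background GMM on feature vectors
    (rows = frames, columns = e.g. DCT or cepstral coefficients)."""
    client = GaussianMixture(n_components, covariance_type="diag", random_state=seed).fit(client_features)
    world = GaussianMixture(n_components, covariance_type="diag", random_state=seed).fit(world_features)
    return client, world

def verify(test_features, client, world, threshold=0.0):
    """Accept the claimed identity when the average log-likelihood ratio exceeds a threshold."""
    llr = client.score(test_features) - world.score(test_features)  # mean log-likelihood per frame
    return llr > threshold, llr
```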
COVERT: A Framework for Finding Buffer Overflows in C Programs via Software Verification
2010-08-01
A buffer overflow occurs when an index into a buffer B is greater than the allocated size of B. In the case of a type-safe language or a language with runtime bounds checking (such as Java), an overflow leads either to a (compile-time) type error or a (runtime) exception; in such languages, a buffer overflow can at worst lead to a denial-of-service attack. However, much current and legacy software is written in unsafe languages (such as C or C++) that allow buffers to be overflowed with impunity.
Control Chart on Semi Analytical Weighting
NASA Astrophysics Data System (ADS)
Miranda, G. S.; Oliveira, C. C.; Silva, T. B. S. C.; Stellato, T. B.; Monteiro, L. R.; Marques, J. R.; Faustino, M. G.; Soares, S. M. V.; Ulrich, J. C.; Pires, M. A. F.; Cotrim, M. E. B.
2018-03-01
Semi-analytical balance verification assesses balance performance using graphs that illustrate measurement dispersion through time and demonstrate that measurements were performed in a reliable manner. This study presents the internal quality control of a semi-analytical balance (GEHAKA BG400) using control charts. From 2013 to 2016, two weight standards were monitored before any balance operation. This work evaluated whether any significant difference or bias appeared in the weighing procedure over time, in order to check the reliability of the generated data. It also exemplifies how control intervals are established.
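A minimal sketch of the individuals control chart underlying such a check, assuming repeated readings of one weight standard; using the overall sample standard deviation for the limits is a simplification of moving-range practice.

```python
import numpy as np

def shewhart_limits(check_weights_mg):
    """Individuals control chart for repeated checks of a weight standard:
    centre line at the mean, warning limits at ±2s, action limits at ±3s."""
    x = np.asarray(check_weights_mg, dtype=float)
    mean, s = x.mean(), x.std(ddof=1)
    out_of_control = np.flatnonzero(np.abs(x - mean) > 3 * s)
    return {"mean": mean,
            "warning": (mean - 2 * s, mean + 2 * s),
            "action": (mean - 3 * s, mean + 3 * s),
            "out_of_control_points": out_of_control}
```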
FORMED: Bringing Formal Methods to the Engineering Desktop
2016-02-01
FORMED integrates formal verification into software design and development by precisely defining semantics for a restricted subset of the Unified Modeling Language (UML). Verified properties include input-output contract satisfaction and absence of null pointer dereferences. Domain-specific languages (DSLs) drive both implementation and formal verification.
78 FR 58492 - Generator Verification Reliability Standards
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-24
This rule addresses Generator Verification Reliability Standards, including MOD-027-1 (Verification of Models and Data for Turbine/Governor and Load Control or Active Power/Frequency Control Functions). It also addresses Category B and C contingencies, as required of wind generators in Order No. 661.
Transmutation Fuel Performance Code Thermal Model Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregory K. Miller; Pavel G. Medvedev
2007-09-01
The FRAPCON fuel performance code is being modified to model the performance of the nuclear fuels of interest to the Global Nuclear Energy Partnership (GNEP). The present report documents the effort to verify the FRAPCON thermal model. It was found that, with minor modifications, the FRAPCON thermal model temperature calculation agrees with that of the commercial software ABAQUS (Version 6.4-4). This report outlines the methodology of the verification, the code input, and the calculation results.
PFLOTRAN Verification: Development of a Testing Suite to Ensure Software Quality
NASA Astrophysics Data System (ADS)
Hammond, G. E.; Frederick, J. M.
2016-12-01
In scientific computing, code verification ensures the reliability and numerical accuracy of a model simulation by comparing the simulation results to experimental data or known analytical solutions. The model is typically defined by a set of partial differential equations with initial and boundary conditions, and verification ensures whether the mathematical model is solved correctly by the software. Code verification is especially important if the software is used to model high-consequence systems which cannot be physically tested in a fully representative environment [Oberkampf and Trucano (2007)]. Justified confidence in a particular computational tool requires clarity in the exercised physics and transparency in its verification process with proper documentation. We present a quality assurance (QA) testing suite developed by Sandia National Laboratories that performs code verification for PFLOTRAN, an open source, massively-parallel subsurface simulator. PFLOTRAN solves systems of generally nonlinear partial differential equations describing multiphase, multicomponent and multiscale reactive flow and transport processes in porous media. PFLOTRAN's QA test suite compares the numerical solutions of benchmark problems in heat and mass transport against known, closed-form, analytical solutions, including documentation of the exercised physical process models implemented in each PFLOTRAN benchmark simulation. The QA test suite development strives to follow the recommendations given by Oberkampf and Trucano (2007), which describes four essential elements in high-quality verification benchmark construction: (1) conceptual description, (2) mathematical description, (3) accuracy assessment, and (4) additional documentation and user information. Several QA tests within the suite will be presented, including details of the benchmark problems and their closed-form analytical solutions, implementation of benchmark problems in PFLOTRAN simulations, and the criteria used to assess PFLOTRAN's performance in the code verification procedure. References Oberkampf, W. L., and T. G. Trucano (2007), Verification and Validation Benchmarks, SAND2007-0853, 67 pgs., Sandia National Laboratories, Albuquerque, NM.
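A minimal sketch of the comparison at the core of such a QA test: the simulated field is checked against a closed-form solution with a relative error tolerance. The tolerance and the 1D steady-diffusion example are illustrative, not PFLOTRAN's actual benchmarks.

```python
import numpy as np

def verify_against_analytical(numerical, analytical, rel_tol=1e-3):
    """Code-verification style check: compare a simulated field against a closed-form
    solution on the same grid and require the relative L2 error to stay under rel_tol."""
    num = np.asarray(numerical, dtype=float)
    ana = np.asarray(analytical, dtype=float)
    rel_l2 = np.linalg.norm(num - ana) / np.linalg.norm(ana)
    return rel_l2 <= rel_tol, rel_l2

# Example: 1D steady diffusion with fixed ends has a linear analytical profile.
x = np.linspace(0.0, 1.0, 11)
analytical = 1.0 - x                                  # T(0)=1, T(1)=0
numerical = analytical + 1e-4 * np.sin(np.pi * x)     # stand-in for a simulator result
print(verify_against_analytical(numerical, analytical))
```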
The experimental verification on the shear bearing capacity of exposed steel column foot
NASA Astrophysics Data System (ADS)
Xijin, LIU
2017-04-01
Many studies, both domestic and international, have addressed the shear bearing capacity of exposed steel column bases, but most are limited to theoretical analysis and few include experiments. Based on the prototype of an industrial plant in Beijing, this paper designs an experimental model composed of six steel specimens in two groups: three without a shear key and three with a shear key. The shear bearing capacity of the two groups is checked under different axial forces. The experiments show that the anchor bolts of an exposed steel column base provide a shear bearing capacity that is too large to be neglected, and that the capacities derived from the calculation methods proposed in this paper for the two configurations match the experimental results. Suggestions are also given for revising the Code for Design of Steel Structures with respect to the provision of shear keys in steel column bases.
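A generic design-style estimate of base shear resistance as baseplate friction plus anchor-bolt shear, in the spirit of the discussion above; the friction coefficient and bolt strength are illustrative values, not the calculation method proposed in the paper.

```python
def column_base_shear_capacity(axial_force_kN, n_bolts, bolt_area_mm2,
                               bolt_shear_strength_MPa, friction_coeff=0.4):
    """Generic estimate: shear resisted by baseplate friction (friction_coeff * N)
    plus the combined shear resistance of the anchor bolts. Illustrative only."""
    friction_kN = friction_coeff * axial_force_kN
    bolts_kN = n_bolts * bolt_area_mm2 * bolt_shear_strength_MPa / 1000.0  # N -> kN
    return friction_kN + bolts_kN

# e.g. 4 bolts of 353 mm^2 each, assumed shear strength 140 MPa, axial load 500 kN
print(column_base_shear_capacity(500.0, 4, 353.0, 140.0))
```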
NASA Technical Reports Server (NTRS)
Moser, Louise; Melliar-Smith, Michael; Schwartz, Richard
1987-01-01
A SIFT reliable aircraft control computer system, designed to meet the ultrahigh reliability required for safety critical flight control applications by use of processor replications and voting, was constructed for SRI, and delivered to NASA Langley for evaluation in the AIRLAB. To increase confidence in the reliability projections for SIFT, produced by a Markov reliability model, SRI constructed a formal specification, defining the meaning of reliability in the context of flight control. A further series of specifications defined, in increasing detail, the design of SIFT down to pre- and post-conditions on Pascal code procedures. Mechanically checked mathematical proofs were constructed to demonstrate that the more detailed design specifications for SIFT do indeed imply the formal reliability requirement. An additional specification defined some of the assumptions made about SIFT by the Markov model, and further proofs were constructed to show that these assumptions, as expressed by that specification, did indeed follow from the more detailed design specifications for SIFT. This report provides an outline of the methodology used for this hierarchical specification and proof, and describes the various specifications and proofs performed.
Formally verifying Ada programs which use real number types
NASA Technical Reports Server (NTRS)
Sutherland, David
1986-01-01
Formal verification is applied to programs which use real number arithmetic operations (mathematical programs). Formal verification of a program P consists of creating a mathematical model of P, stating the desired properties of P in a formal logical language, and proving that the mathematical model has the desired properties using a formal proof calculus. The development and verification of the mathematical model are discussed.
Dynamic testing for shuttle design verification
NASA Technical Reports Server (NTRS)
Green, C. E.; Leadbetter, S. A.; Rheinfurth, M. H.
1972-01-01
Space shuttle design verification requires dynamic data from full scale structural component and assembly tests. Wind tunnel and other scaled model tests are also required early in the development program to support the analytical models used in design verification. Presented is a design philosophy based on mathematical modeling of the structural system strongly supported by a comprehensive test program; some of the types of required tests are outlined.
Property-Based Monitoring of Analog and Mixed-Signal Systems
NASA Astrophysics Data System (ADS)
Havlicek, John; Little, Scott; Maler, Oded; Nickovic, Dejan
In the recent past, there has been a steady growth of the market for consumer embedded devices such as cell phones, GPS and portable multimedia systems. In embedded systems, digital, analog and software components are combined on a single chip, resulting in increasingly complex designs that introduce richer functionality on smaller devices. As a consequence, the potential for inserting errors into a design becomes higher, yielding an increasing need for automated analog and mixed-signal validation tools. In the purely digital setting, formal verification based on properties expressed in industrial specification languages such as PSL and SVA is nowadays successfully integrated into the design flow. On the other hand, the validation of analog and mixed-signal systems still largely depends on simulation-based, ad hoc methods. In this tutorial, we consider some ingredients of the standard verification methodology that can be successfully exported from the digital to the analog and mixed-signal setting, in particular property-based monitoring techniques. Property-based monitoring is a lighter approach to formal verification, where the system is seen as a "black box" that generates sets of traces whose correctness is checked against a property, that is, its high-level specification. Although incomplete, monitoring is effectively used to catch faults in systems, without guaranteeing their full correctness.
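A toy offline monitor in the spirit of property-based monitoring: a sampled trace is checked against a simple bounded-settling property. The property, sampling, and thresholds are illustrative and are not expressed in PSL or SVA.

```python
def check_bounded_settling(trace, threshold, settle_bound, max_delay):
    """Offline monitor: whenever the signal rises above 'threshold', it must return
    below 'settle_bound' within 'max_delay' samples. trace: list of (time, value) pairs.
    Returns the times at which the property is violated."""
    samples = list(trace)
    violations = []
    for i, (t, v) in enumerate(samples):
        if v > threshold:
            window = samples[i + 1:i + 1 + max_delay]
            if not any(val < settle_bound for _, val in window):
                violations.append(t)
    return violations

# Usage: monitor a synthetic trace with short spikes (illustrative data).
trace = [(n, 1.5 if n % 50 == 0 else 0.2) for n in range(200)]
print(check_bounded_settling(trace, threshold=1.0, settle_bound=0.5, max_delay=10))
```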
Formalization of the Integral Calculus in the PVS Theorem Prover
NASA Technical Reports Server (NTRS)
Butler, Ricky W.
2004-01-01
The PVS theorem prover is a widely used formal verification tool for the analysis of safety-critical systems. Though fully equipped to support deduction in a very general logic framework, namely higher-order logic, the PVS prover must nevertheless be augmented with the definitions and associated theorems for every branch of mathematics and computer science that is used in a verification. This is a formidable task, ultimately requiring the contributions of researchers and developers all over the world. This paper reports on the formalization of the integral calculus in the PVS theorem prover. All of the basic definitions and theorems covered in a first course on integral calculus have been completed. The theory and proofs were based on Rosenlicht's classic text on real analysis and follow the traditional epsilon-delta method. The goal of this work was to provide a practical set of PVS theories that could be used for the verification of hybrid systems that arise in air traffic management systems and other aerospace applications. All of the basic linearity, integrability, boundedness, and continuity properties of the integral calculus were proved. The work culminated in the proof of the Fundamental Theorem of Calculus. There is a brief discussion of why mechanically checked proofs are so much longer than standard mathematics textbook proofs.
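The culminating result, stated here in conventional notation; the exact PVS formulation in the library differs in detail.

```latex
% Fundamental Theorem of Calculus (conventional statement)
\[
  \text{If } f:[a,b]\to\mathbb{R} \text{ is continuous and } F(x) = \int_a^x f(t)\,dt,
  \quad\text{then } F \text{ is differentiable on } [a,b] \text{ and } F'(x) = f(x).
\]
```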
HDL to verification logic translator
NASA Technical Reports Server (NTRS)
Gambles, J. W.; Windley, P. J.
1992-01-01
The increasingly higher number of transistors possible in VLSI circuits compounds the difficulty of ensuring correct designs. As the number of possible test cases required to exhaustively simulate a circuit design explodes, a better method is required to confirm the absence of design faults. Formal verification methods provide a way to prove, using logic, that a circuit structure correctly implements its specification. Before verification is accepted by VLSI design engineers, the stand-alone verification tools that are in use in the research community must be integrated with the CAD tools used by the designers. One problem facing the acceptance of formal verification into circuit design methodology is that the structural circuit descriptions used by the designers are not appropriate for verification work, and those required for verification lack some of the features needed for design. We offer a solution to this dilemma: an automatic translation from the designers' HDL models into definitions for the higher-order logic (HOL) verification system. The translated definitions become the low-level basis of circuit verification, which in turn increases the designer's confidence in the correctness of higher-level behavioral models.
Evaluation of the 29-km Eta Model. Part 1; Objective Verification at Three Selected Stations
NASA Technical Reports Server (NTRS)
Nutter, Paul A.; Manobianco, John; Merceret, Francis J. (Technical Monitor)
1998-01-01
This paper describes an objective verification of the National Centers for Environmental Prediction (NCEP) 29-km eta model from May 1996 through January 1998. The evaluation was designed to assess the model's surface and upper-air point forecast accuracy at three selected locations during separate warm (May - August) and cool (October - January) season periods. In order to enhance sample sizes available for statistical calculations, the objective verification includes two consecutive warm and cool season periods. Systematic model deficiencies comprise the larger portion of the total error in most of the surface forecast variables that were evaluated. The error characteristics for both surface and upper-air forecasts vary widely by parameter, season, and station location. At upper levels, a few characteristic biases are identified. Overall however, the upper-level errors are more nonsystematic in nature and could be explained partly by observational measurement uncertainty. With a few exceptions, the upper-air results also indicate that 24-h model error growth is not statistically significant. In February and August 1997, NCEP implemented upgrades to the eta model's physical parameterizations that were designed to change some of the model's error characteristics near the surface. The results shown in this paper indicate that these upgrades led to identifiable and statistically significant changes in forecast accuracy for selected surface parameters. While some of the changes were expected, others were not consistent with the intent of the model updates and further emphasize the need for ongoing sensitivity studies and localized statistical verification efforts. Objective verification of point forecasts is a stringent measure of model performance, but when used alone, is not enough to quantify the overall value that model guidance may add to the forecast process. Therefore, results from a subjective verification of the meso-eta model over the Florida peninsula are discussed in the companion paper by Manobianco and Nutter. Overall verification results presented here and in part two should establish a reasonable benchmark from which model users and developers may pursue the ongoing eta model verification strategies in the future.
The DaveMLTranslator: An Interface for DAVE-ML Aerodynamic Models
NASA Technical Reports Server (NTRS)
Hill, Melissa A.; Jackson, E. Bruce
2007-01-01
It can take weeks or months to incorporate a new aerodynamic model into a vehicle simulation and validate the performance of the model. The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) has been proposed as a means to reduce the time required to accomplish this task by defining a standard format for typical components of a flight dynamic model. The purpose of this paper is to describe an object-oriented C++ implementation of a class that interfaces a vehicle subsystem model specified in DAVE-ML and a vehicle simulation. Using the DaveMLTranslator class, aerodynamic or other subsystem models can be automatically imported and verified at run-time, significantly reducing the elapsed time between receipt of a DAVE-ML model and its integration into a simulation environment. The translator performs variable initializations, data table lookups, and mathematical calculations for the aerodynamic build-up, and executes any embedded static check-cases for verification. The implementation is efficient, enabling real-time execution. Simple interface code for the model inputs and outputs is the only requirement to integrate the DaveMLTranslator as a vehicle aerodynamic model. The translator makes use of existing table-lookup utilities from the Langley Standard Real-Time Simulation in C++ (LaSRS++). The design and operation of the translator class is described and comparisons with existing, conventional, C++ aerodynamic models of the same vehicle are given.
NASA Technical Reports Server (NTRS)
Lee, S. S.; Sengupta, S.; Nwadike, E. V.; Sinha, S. K.
1980-01-01
The rigid lid model was developed to predict three-dimensional temperature and velocity distributions in lakes. This model was verified at various sites (Lake Belews, Biscayne Bay, etc.), and the verification at Lake Keowee was the last in this series of verification runs. The verification at Lake Keowee included the following: (1) selecting the domain of interest and grid systems, and comparing the preliminary results with archival data; (2) obtaining actual ground truth and infrared scanner data for both summer and winter; and (3) using the model to predict the measured data for the above periods and comparing the predicted results with the actual data. The model results compared well with the measured data. Thus, the model can be used as an effective predictive tool for future sites.
Verification of component mode techniques for flexible multibody systems
NASA Technical Reports Server (NTRS)
Wiens, Gloria J.
1990-01-01
Investigations were conducted into the modeling aspects of flexible multibodies undergoing large angular displacements. Models were to be generated and analyzed through application of computer simulation packages employing 'component mode synthesis' techniques. The Multibody Modeling, Verification and Control Laboratory (MMVC) plan was implemented, which includes running experimental tests on flexible multibody test articles. From these tests, data were to be collected for later correlation and verification of the theoretical results predicted by the modeling and simulation process.
Known-component 3D-2D registration for quality assurance of spine surgery pedicle screw placement
NASA Astrophysics Data System (ADS)
Uneri, A.; De Silva, T.; Stayman, J. W.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gokaslan, Z. L.; Wolinsky, J.-P.; Siewerdsen, J. H.
2015-10-01
A 3D-2D image registration method is presented that exploits knowledge of interventional devices (e.g. K-wires or spine screws, referred to as 'known components') to extend the functionality of intraoperative radiography/fluoroscopy by providing quantitative measurement and quality assurance (QA) of the surgical product. The known-component registration (KC-Reg) algorithm uses robust 3D-2D registration combined with 3D component models of surgical devices known to be present in intraoperative 2D radiographs. Component models were investigated that vary in fidelity from simple parametric models (e.g. approximation of a screw as a simple cylinder, referred to as 'parametrically-known' component [pKC] registration) to precise models based on device-specific CAD drawings (referred to as 'exactly-known' component [eKC] registration). 3D-2D registration from three intraoperative radiographs was solved using the covariance matrix adaptation evolution strategy (CMA-ES) to maximize image-gradient similarity, relating device placement to the 3D preoperative CT of the patient. Spine phantom and cadaver studies were conducted to evaluate registration accuracy and demonstrate QA of the surgical product by verification of the type of devices delivered and conformance within the 'acceptance window' of the spinal pedicle. Pedicle screws were successfully registered to radiographs acquired from a mobile C-arm, providing a TRE of 1-4 mm and <5° using simple parametric (pKC) models, further improved to <1 mm and <1° using eKC registration. Using advanced pKC models, screws that did not match the device models specified in the surgical plan were detected with an accuracy of >99%. Visualization of registered devices relative to the surgical plan and the pedicle acceptance window provided potentially valuable QA of the surgical product and reliable detection of pedicle screw breach. 3D-2D registration combined with 3D models of known surgical devices offers a novel method for intraoperative QA. The method provides a near-real-time independent check against pedicle breach, facilitating revision within the same procedure if necessary and providing more rigorous verification of the surgical product.
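The core of the approach described above is an iterative search over a 6-DOF device pose that maximizes a gradient-similarity score. The sketch below illustrates that loop using the open-source Python 'cma' package (ask/tell interface); the gradient_similarity function, the pose parameterization, and all numeric values are hypothetical placeholders, not the authors' implementation.

import numpy as np
import cma

def gradient_similarity(pose):
    """Placeholder objective (higher is better). A real implementation would project
    the 3D component model at 'pose' (tx, ty, tz, rx, ry, rz) into each radiograph
    and score image-gradient agreement. Here: a toy peaked function for illustration."""
    target = np.array([5.0, -2.0, 30.0, 0.1, 0.0, -0.05])  # hypothetical "true" pose
    return -np.sum((np.asarray(pose) - target) ** 2)

x0 = np.zeros(6)      # initial pose guess (mm, mm, mm, rad, rad, rad)
sigma0 = 5.0          # initial search step size
es = cma.CMAEvolutionStrategy(x0, sigma0, {'maxiter': 200, 'verbose': -9})
while not es.stop():
    candidates = es.ask()                                              # sample candidate poses
    es.tell(candidates, [-gradient_similarity(c) for c in candidates]) # CMA-ES minimizes
print("estimated pose:", np.round(es.result.xbest, 3))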
A Practitioners Perspective on Verification
NASA Astrophysics Data System (ADS)
Steenburgh, R. A.
2017-12-01
NOAA's Space Weather Prediction Center offers a wide range of products and services to meet the needs of an equally wide range of customers. A robust verification program is essential to the informed use of model guidance and other tools by forecasters and end users alike. In this talk, we present current SWPC practices and results, and examine emerging requirements and potential approaches to satisfy them. We explore the varying verification needs of forecasters and end users, as well as the role of subjective and objective verification. Finally, we describe a vehicle used in the meteorological community to unify approaches to model verification and facilitate intercomparison.
NASA Technical Reports Server (NTRS)
Mukhopadhyay, A. K.
1979-01-01
Design adequacy of the lead-lag compensator of the frequency loop, accuracy checking of the analytical expression for the electrical motor transfer function, and performance evaluation of the speed control servo of the digital tape recorder used on-board the 1976 Viking Mars Orbiters and Voyager 1977 Jupiter-Saturn flyby spacecraft are analyzed. The transfer functions of the most important parts of a simplified frequency loop used for test simulation are described and ten simulation cases are reported. The first four of these cases illustrate the method of selecting the most suitable transfer function for the hysteresis synchronous motor, while the rest verify and determine the servo performance parameters and alternative servo compensation schemes. It is concluded that the linear methods provide a starting point for the final verification/refinement of servo design by nonlinear time response simulation and that the variation of the parameters of the static/dynamic Coulomb friction is as expected in a long-life space mission environment.
Jeagle: a JAVA Runtime Verification Tool
NASA Technical Reports Server (NTRS)
D'Amorim, Marcelo; Havelund, Klaus
2005-01-01
We introduce the temporal logic Jeagle and its supporting tool for runtime verification of Java programs. A monitor for a Jeagle formula checks whether a finite trace of program events satisfies the formula. Jeagle is a programming-oriented extension of the powerful rule-based Eagle logic, which has been shown to be capable of defining and implementing a range of finite-trace monitoring logics, including future and past time temporal logic, real-time and metric temporal logics, interval logics, forms of quantified temporal logics, and so on. Monitoring is achieved on a state-by-state basis, avoiding any need to store the input trace. Jeagle extends Eagle with constructs for capturing parameterized program events such as method calls and method returns. Parameters can be the objects that methods are called upon, arguments to methods, and return values. Jeagle allows one to refer to these in formulas. The tool performs automated program instrumentation using AspectJ. We present the transformational semantics of Jeagle.
Computer Modeling and Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pronskikh, V. S.
2014-05-09
Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to a model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.
Mathematical Modeling of Ni/H2 and Li-Ion Batteries
NASA Technical Reports Server (NTRS)
Weidner, John W.; White, Ralph E.; Dougal, Roger A.
2001-01-01
The modelling effort outlined in this viewgraph presentation encompasses the following topics: 1) Electrochemical Deposition of Nickel Hydroxide; 2) Deposition rates of thin films; 3) Impregnation of porous electrodes; 4) Experimental Characterization of Nickel Hydroxide; 5) Diffusion coefficients of protons; 6) Self-discharge rates (i.e., oxygen-evolution kinetics); 7) Hysteresis between charge and discharge; 8) Capacity loss on cycling; 9) Experimental Verification of the Ni/H2 Battery Model; 10) Mathematical Modeling of Li-Ion Batteries; 11) Experimental Verification of the Li-Ion Battery Model; 12) Integrated Power System Models for Satellites; and 13) Experimental Verification of the Integrated-Systems Model.
Verification of ICESat-2/ATLAS Science Receiver Algorithm Onboard Databases
NASA Astrophysics Data System (ADS)
Carabajal, C. C.; Saba, J. L.; Leigh, H. W.; Magruder, L. A.; Urban, T. J.; Mcgarry, J.; Schutz, B. E.
2013-12-01
NASA's ICESat-2 mission will fly the Advanced Topographic Laser Altimetry System (ATLAS) instrument on a 3-year mission scheduled to launch in 2016. ATLAS is a single-photon detection system transmitting at 532 nm with a laser repetition rate of 10 kHz and a 6-spot pattern on the Earth's surface. A set of onboard Receiver Algorithms will perform signal processing to reduce the data rate and data volume to acceptable levels. These algorithms distinguish surface echoes from the background noise, limit the daily data volume, and allow the instrument to telemeter only a small vertical region about the signal. For this purpose, three onboard databases are used: a Surface Reference Map (SRM), a Digital Elevation Model (DEM), and Digital Relief Maps (DRMs). The DEM provides minimum and maximum heights that limit the signal search region of the onboard algorithms, including a margin for errors in the source databases and onboard geolocation. Since the surface echoes will be correlated while noise will be randomly distributed, the signal location is found by histogramming the received event times and identifying the histogram bins with statistically significant counts. Once the signal location has been established, the onboard DRMs will be used to determine the vertical width of the telemetry band about the signal. The University of Texas-Center for Space Research (UT-CSR) is developing the ICESat-2 onboard databases, which are currently being tested using preliminary versions and equivalent representations of elevation ranges and relief more recently developed at Goddard Space Flight Center (GSFC). Global and regional elevation models have been assessed in terms of their accuracy using ICESat geodetic control, and have been used to develop equivalent representations of the onboard databases for testing against the UT-CSR databases, with special emphasis on the ice sheet regions. A series of verification checks have been implemented, including comparisons against ICESat altimetry for selected regions with tall vegetation and high relief. The extensive verification effort by the Receiver Algorithm team at GSFC is aimed at assuring that the onboard databases are sufficiently accurate. We will present the results of those assessments and verification tests, along with measures taken to implement modifications to the databases to optimize their use by the receiver algorithms. Companion presentations by McGarry et al. and Leigh et al. describe the details of the ATLAS Onboard Receiver Algorithms and databases development, respectively.
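The histogram-based signal-finding idea mentioned above can be illustrated with a minimal sketch: bin photon event times, estimate the background rate, and flag bins whose counts are statistically improbable under pure noise. The bin width, the 5-sigma threshold, and the synthetic data below are illustrative choices, not the flight algorithm's actual parameters.

import numpy as np

rng = np.random.default_rng(0)
# Synthetic data: uniform background photons plus a narrow cluster of surface returns.
background = rng.uniform(0.0, 100.0, size=2000)   # event times in microseconds
surface    = rng.normal(42.0, 0.3, size=300)
events = np.concatenate([background, surface])

bin_width = 0.5                                    # microseconds (illustrative)
edges = np.arange(0.0, 100.0 + bin_width, bin_width)
counts, _ = np.histogram(events, bins=edges)

lam = np.median(counts)                            # robust background-rate estimate
threshold = lam + 5.0 * np.sqrt(lam)               # ~5-sigma Poisson threshold
signal_bins = np.where(counts > threshold)[0]

if signal_bins.size:
    lo, hi = edges[signal_bins.min()], edges[signal_bins.max() + 1]
    print(f"signal window: {lo:.1f}-{hi:.1f} us")  # telemetry band would be set around this
else:
    print("no statistically significant signal found")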
Trojanowicz, Karol; Wójcik, Włodzimierz
2011-01-01
The article presents a case study on the calibration and verification of mathematical models of organic carbon removal kinetics in biofilm. The chosen Harremöes and Wanner & Reichert models were calibrated with a set of model parameters obtained both during dedicated studies conducted at pilot and lab scales under petrochemical wastewater conditions and from the literature. Next, the models were successfully verified through studies carried out utilizing a pilot ASFBBR-type bioreactor installed in an oil-refinery wastewater treatment plant. During verification the pilot biofilm reactor worked under varying surface organic loading rates (SOL), dissolved oxygen concentrations, and temperatures. The verification proved that the models can be applied in practice in petrochemical wastewater treatment engineering, e.g., for biofilm bioreactor dimensioning.
Verification testing of the Brome Agri Sales Ltd. Maximizer Separator, Model MAX 1016 (Maximizer) was conducted at the Lake Wheeler Road Field Laboratory Swine Educational Unit in Raleigh, North Carolina. The Maximizer is an inclined screen solids separator that can be used to s...
NASA Astrophysics Data System (ADS)
Kim, Cheol-kyun; Kim, Jungchan; Choi, Jaeseung; Yang, Hyunjo; Yim, Donggyu; Kim, Jinwoong
2007-03-01
As the minimum transistor length shrinks, variation and uniformity of transistor length seriously affect device performance, so the importance of optical proximity correction (OPC) and resolution enhancement technology (RET) cannot be overemphasized. However, the OPC process is regarded by some as a necessary evil. In fact, every group, including process and design, is interested in the whole-chip CD variation trend and CD uniformity, which represent the real wafer. Recently, design-based metrology systems have become capable of detecting differences between the design database and wafer SEM images, and are able to extract whole-chip CD variation information. From these results, OPC abnormalities were identified and design feedback items were also disclosed. Other approaches are pursued by EDA companies, such as model-based OPC verification. Model-based verification is performed over the full chip area using a well-calibrated model. The objective of model-based verification is the prediction of potential weak points on the wafer and fast feedback to OPC and design before reticle fabrication. In order to achieve a robust design and sufficient device margin, an appropriate combination of a design-based metrology system and model-based verification tools is very important. Therefore, we evaluated a design-based metrology system and a matched model-based verification system to find the optimum combination of the two. In our study, a huge amount of wafer data was classified and analyzed by statistical methods and sorted into OPC feedback and design feedback items. Additionally, a novel DFM flow is proposed using the combination of design-based metrology and model-based verification tools.
Exploration of Uncertainty in Glacier Modelling
NASA Technical Reports Server (NTRS)
Thompson, David E.
1999-01-01
There are procedures and methods for verification of coding algebra and for validations of models and calculations that are in use in the aerospace computational fluid dynamics (CFD) community. These methods would be efficacious if used by the glacier dynamics modelling community. This paper is a presentation of some of those methods, and how they might be applied to uncertainty management supporting code verification and model validation for glacier dynamics. The similarities and differences between their use in CFD analysis and the proposed application of these methods to glacier modelling are discussed. After establishing sources of uncertainty and methods for code verification, the paper looks at a representative sampling of verification and validation efforts that are underway in the glacier modelling community, and establishes a context for these within overall solution quality assessment. Finally, an information architecture and interactive interface is introduced and advocated. This Integrated Cryospheric Exploration (ICE) Environment is proposed for exploring and managing sources of uncertainty in glacier modelling codes and methods, and for supporting scientific numerical exploration and verification. The details and functionality of this Environment are described based on modifications of a system already developed for CFD modelling and analysis.
RF model of the distribution system as a communication channel, phase 2. Volume 2: Task reports
NASA Technical Reports Server (NTRS)
Rustay, R. C.; Gajjar, J. T.; Rankin, R. W.; Wentz, R. C.; Wooding, R.
1982-01-01
Based on the established feasibility of predicting, via a model, the propagation of Power Line Frequency on radial type distribution feeders, verification studies comparing model predictions against measurements were undertaken using more complicated feeder circuits and situations. Detailed accounts of the major tasks are presented. These include: (1) verification of model; (2) extension, implementation, and verification of perturbation theory; (3) parameter sensitivity; (4) transformer modeling; and (5) compensation of power distribution systems for enhancement of power line carrier communication reliability.
Addressing Dynamic Issues of Program Model Checking
NASA Technical Reports Server (NTRS)
Lerda, Flavio; Visser, Willem
2001-01-01
Model checking real programs has recently become an active research area. Programs however exhibit two characteristics that make model checking difficult: the complexity of their state and the dynamic nature of many programs. Here we address both these issues within the context of the Java PathFinder (JPF) model checker. Firstly, we will show how the state of a Java program can be encoded efficiently and how this encoding can be exploited to improve model checking. Next we show how to use symmetry reductions to alleviate some of the problems introduced by the dynamic nature of Java programs. Lastly, we show how distributed model checking of a dynamic program can be achieved, and furthermore, how dynamic partitions of the state space can improve model checking. We support all our findings with results from applying these techniques within the JPF model checker.
SU-E-T-32: A Feasibility Study of Independent Dose Verification for IMAT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamima, T; Takahashi, R; Sato, Y
2015-06-15
Purpose: To assess the feasibility of independent dose verification (Indp) for intensity modulated arc therapy (IMAT). Methods: An independent dose calculation software program (Simple MU Analysis, Triangle Products, JP) was used in this study, which can compute the radiological path length from the surface to the reference point for each control point using the patient's CT image dataset; the MLC aperture shape was simultaneously modeled from the MLC information in the DICOM-RT plan. Dose calculation was performed using a modified Clarkson method considering MLC transmission and the dosimetric leaf gap. In this study, a retrospective analysis was conducted in which IMAT plans from 120 patients for two sites (prostate / head and neck) from four institutes were analyzed to compare the Indp to the TPS using patient CT images. In addition, an ion-chamber measurement was performed to verify the accuracy of the TPS and the Indp in a water-equivalent phantom. Results: The agreements between the Indp and the TPS (mean±1SD) were −0.8±2.4% and −1.3±3.8% for the prostate and the head and neck regions, respectively. The measurement comparison showed similar results (−0.8±1.6% and 0.1±4.6% for prostate and head and neck). The variation was larger in the head and neck because the number of segments in which the reference point was under the MLC increased, and the modified Clarkson method cannot consider the smooth falloff of the leaf penumbra. Conclusion: The independent verification program would be practical and effective as a secondary check for IMAT, with sufficient accuracy in the measurement and CT-based calculation. The accuracy would be improved by considering the falloff of the leaf penumbra.
NASA Technical Reports Server (NTRS)
Powell, John D.
2003-01-01
This document discusses the verification of the Secure Socket Layer (SSL) communication protocol as a demonstration of the Model Based Verification (MBV) portion of the verification instrument set being developed under the Reducing Software Security Risk Through an Integrated Approach research initiative. Code Q of the National Aeronautics and Space Administration (NASA) funds this project. The NASA Goddard Independent Verification and Validation (IV&V) facility manages this research program at the NASA agency level, and the Assurance Technology Program Office (ATPO) manages the research locally at the Jet Propulsion Laboratory (California Institute of Technology), where the research is being carried out.
He, Hua; McDermott, Michael P.
2012-01-01
Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
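The abstract above describes correcting verification bias by stratifying verified subjects on the propensity of verification, estimated separately for positive and negative test results. The sketch below is a hedged illustration of that idea under the MAR assumption; the use of sklearn logistic regression, quintile strata, and the specific estimator details are assumptions for illustration, not the authors' exact method.

import numpy as np
from sklearn.linear_model import LogisticRegression

def corrected_se_sp(T, X, V, D, n_strata=5):
    """T: test result (0/1); X: covariates (n, p); V: verified (0/1);
    D: true disease status (only meaningful where V == 1)."""
    T, V, D = map(np.asarray, (T, V, D))
    X = np.asarray(X, dtype=float)
    joint = {}                                    # joint[(d, t)] approximates P(D=d, T=t)
    n_total = len(T)
    for t in (0, 1):
        idx = np.where(T == t)[0]
        # Propensity of verification given covariates, fit within this test result.
        ps = LogisticRegression(max_iter=1000).fit(X[idx], V[idx]).predict_proba(X[idx])[:, 1]
        edges = np.quantile(ps, np.linspace(0, 1, n_strata + 1))
        strata = np.clip(np.searchsorted(edges, ps, side="right") - 1, 0, n_strata - 1)
        p_d1_t = 0.0
        for s in range(n_strata):
            in_s = idx[strata == s]
            verified = in_s[V[in_s] == 1]
            if verified.size == 0:
                continue                          # empty stratum: skipped in this toy sketch
            p_d1_t += (in_s.size / n_total) * D[verified].mean()
        joint[(1, t)] = p_d1_t
        joint[(0, t)] = idx.size / n_total - p_d1_t
    se = joint[(1, 1)] / (joint[(1, 1)] + joint[(1, 0)])
    sp = joint[(0, 0)] / (joint[(0, 0)] + joint[(0, 1)])
    return se, sp

The key design point is that disease prevalence among verified subjects within a propensity stratum stands in for the prevalence among all subjects in that stratum, which is valid only under the MAR assumption stated in the abstract.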
Chaswal, Vibha; Weldon, Michael; Gupta, Nilendu; Chakravarti, Arnab; Rong, Yi
2014-07-08
We present commissioning and a comprehensive evaluation of ArcCHECK as QA equipment for volumetric-modulated arc therapy (VMAT), using the 6 MV photon beam with and without the flattening filter, and the SNC Patient software (version 6.2). In addition to commissioning involving absolute dose calibration, array calibration, and PMMA density verification, ArcCHECK was evaluated for its response dependency on linac dose rate, instantaneous dose rate, radiation field size, beam angle, and couch insertion. Scatter dose characterization, consistency and symmetry of response, and dosimetry accuracy evaluation for fixed-aperture arcs and clinical VMAT patient plans were also investigated. All the evaluation tests were performed with the central plug inserted and the homogeneous PMMA density value. Results of gamma analysis demonstrated an overall agreement between ArcCHECK-measured and TPS-calculated reference doses. The diode-based field size dependency was found to be within 0.5% of the reference. The dose-rate-based dependency was well within 1% of the TPS reference, and the angular dependency was found to be ±3% of the reference, as tested for BEV angles, for both beams. Dosimetry of fixed arcs, using both narrow and wide field widths, resulted in clinically acceptable global gamma passing rates at the 3%/3 mm level and 10% threshold. Dosimetry of narrow arcs showed an improvement over published literature. The clinical VMAT cases demonstrated a high level of dosimetry accuracy in gamma passing rates.
A methodology for the rigorous verification of plasma simulation codes
NASA Astrophysics Data System (ADS)
Riva, Fabio
2016-10-01
The methodology used to assess the reliability of numerical simulation codes constitutes the Verification and Validation (V&V) procedure. V&V is composed of two separate tasks: verification, which is a mathematical issue aimed at assessing that the physical model is correctly solved, and validation, which determines the consistency of the code results, and therefore of the physical model, with experimental data. In the present talk we focus our attention on verification, which in turn is composed of code verification, aimed at assessing that a physical model is correctly implemented in a simulation code, and solution verification, which quantifies the numerical error affecting a simulation. Bridging the gap between plasma physics and other scientific domains, we introduced for the first time in our domain a rigorous methodology for code verification, based on the method of manufactured solutions, as well as a solution verification based on the Richardson extrapolation. This methodology was applied to GBS, a three-dimensional fluid code based on a finite difference scheme, used to investigate plasma turbulence in basic plasma physics experiments and in the tokamak scrape-off layer. Overcoming the difficulty of dealing with a numerical method intrinsically affected by statistical noise, we have now generalized the rigorous verification methodology to simulation codes based on the particle-in-cell algorithm, which are employed to solve the Vlasov equation in the investigation of a number of plasma physics phenomena.
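For readers unfamiliar with solution verification, the standard Richardson-extrapolation relations (generic, not specific to GBS) are shown below, assuming three solutions f_1 (finest), f_2, f_3 obtained with a constant grid refinement ratio r in the asymptotic convergence range; F_s is a safety factor, commonly taken near 1.25 for three-grid studies.

\[
  p \;=\; \frac{\ln\!\bigl((f_3 - f_2)/(f_2 - f_1)\bigr)}{\ln r},
  \qquad
  f_{\mathrm{exact}} \;\approx\; f_1 + \frac{f_1 - f_2}{r^{\,p} - 1},
  \qquad
  \mathrm{GCI}_{\mathrm{fine}} \;=\; F_s\,\frac{\bigl|\,(f_2 - f_1)/f_1\,\bigr|}{r^{\,p} - 1}.
\]

Here p is the observed order of convergence, f_exact the extrapolated grid-independent value, and GCI the grid convergence index used to report the numerical uncertainty of the fine-grid solution.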
Verification of a VRF Heat Pump Computer Model in EnergyPlus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nigusse, Bereket; Raustad, Richard
2013-06-15
This paper provides verification results for the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides a quantitative comparison of full- and part-load performance to manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual-range bi-quadratic performance curves to represent capacity and Energy Input Ratio (EIR) as a function of indoor and outdoor air temperatures, and dual-range quadratic performance curves as a function of part-load ratio for modeling part-load performance. These performance curves are generated directly from manufacturer's published performance data. The verification compared the simulation output directly to manufacturer's performance data, and found that the dual-range equation-fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to those of equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
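The equation-fit structure described above can be illustrated with a minimal sketch: capacity and EIR modifiers evaluated as bi-quadratic functions of indoor wet-bulb and outdoor dry-bulb temperature, plus a quadratic part-load modifier. All coefficient values, the simplified power combination, and the function names below are hypothetical placeholders, not EnergyPlus defaults or manufacturer data.

def biquadratic(c, x, y):
    a, b, c2, d, e, f = c
    return a + b*x + c2*x*x + d*y + e*y*y + f*x*y

def quadratic(c, plr):
    a, b, c2 = c
    return a + b*plr + c2*plr*plr

CAP_FT  = (0.9, 0.01, -1e-4, 0.005, -5e-5, -2e-5)  # capacity modifier f(Twb_in, Tdb_out) -- placeholder
EIR_FT  = (0.8, 0.005, 1e-4, 0.01, 1e-4, -1e-5)    # EIR modifier f(Twb_in, Tdb_out) -- placeholder
EIR_PLR = (0.1, 0.6, 0.3)                          # EIR modifier f(part-load ratio) -- placeholder

def cooling_performance(rated_cap_w, rated_cop, twb_in, tdb_out, plr):
    cap = rated_cap_w * biquadratic(CAP_FT, twb_in, tdb_out)      # available capacity, W
    eir = (1.0 / rated_cop) * biquadratic(EIR_FT, twb_in, tdb_out)
    power = cap * eir * quadratic(EIR_PLR, plr)                   # electric power, W (simplified combination)
    return cap * plr, power                                       # delivered cooling, input power

print(cooling_performance(10000.0, 3.5, twb_in=19.0, tdb_out=35.0, plr=0.7))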
Verification testing of the Triton Systems, LLC Solid Bowl Centrifuge Model TS-5000 (TS-5000) was conducted at the Lake Wheeler Road Field Laboratory Swine Educational Unit in Raleigh, North Carolina. The TS-5000 was 48" in diameter and 30" deep, with a bowl capacity of 16 ft3. ...
The BOEING 777 - concurrent engineering and digital pre-assembly
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abarbanel, B.
The processes created on the 777 for checking designs were called "digital pre-assembly". Using FlyThru(tm), a spin-off of a Boeing advanced computing research project, engineers were able to view up to 1500 models (15000 solids) in 3D, traversing that data at high speed. FlyThru(tm) was rapidly deployed in 1991 to meet the needs of the 777 for large-scale product visualization and verification. The digital pre-assembly process has had fantastic results. The 777 has had far fewer assembly and systems problems compared to previous airplane programs. Today, FlyThru(tm) is installed on hundreds of workstations on almost every airplane program, and is being used on Space Station, F22, AWACS, and other defense projects. Its applications have gone far beyond just design review. In many ways, FlyThru is a data warehouse supported by advanced tools for analysis. It is today being integrated with Knowledge Based Engineering geometry generation tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rathbun, R.
Review of NMP-NCS-930087, "Nuclear Criticality Safety Evaluation 93-04 Enriched Uranium Receipt (U), July 30, 1993," was requested of the SRTC (Savannah River Technology Center) Applied Physics Group. The NCSE is a criticality assessment to determine the mass limit for Engineered Low Level Trench (ELLT) waste uranium burial. The intent is to bury uranium in pits that would be separated by a specified amount of undisturbed soil. The scope of the technical review, documented in this report, consisted of (1) an independent check of the methods and models employed, (2) independent HRXN/KENO-V.a calculations of alternate configurations, (3) application of ANSI/ANS 8.1, and (4) verification of WSRC Nuclear Criticality Safety Manual procedures. The NCSE under review concludes that a 500 gram limit per burial position is acceptable to ensure the burial site remains in a critically safe configuration for all normal and single credible abnormal conditions. This reviewer agrees with that conclusion.
Formal Methods Case Studies for DO-333
NASA Technical Reports Server (NTRS)
Cofer, Darren; Miller, Steven P.
2014-01-01
RTCA DO-333, Formal Methods Supplement to DO-178C and DO-278A provides guidance for software developers wishing to use formal methods in the certification of airborne systems and air traffic management systems. The supplement identifies the modifications and additions to DO-178C and DO-278A objectives, activities, and software life cycle data that should be addressed when formal methods are used as part of the software development process. This report presents three case studies describing the use of different classes of formal methods to satisfy certification objectives for a common avionics example - a dual-channel Flight Guidance System. The three case studies illustrate the use of theorem proving, model checking, and abstract interpretation. The material presented is not intended to represent a complete certification effort. Rather, the purpose is to illustrate how formal methods can be used in a realistic avionics software development project, with a focus on the evidence produced that could be used to satisfy the verification objectives found in Section 6 of DO-178C.
A Self-Stabilizing Hybrid-Fault Tolerant Synchronization Protocol
NASA Technical Reports Server (NTRS)
Malekpour, Mahyar R.
2014-01-01
In this report we present a strategy for solving the Byzantine generals problem for self-stabilizing a fully connected network from an arbitrary state and in the presence of any number of faults with various severities, including any number of arbitrary (Byzantine) faulty nodes. Our solution applies to realizable systems, while allowing for differences in the network elements, provided that the number of arbitrary faults is not more than a third of the network size. The only constraint on the behavior of a node is that its interactions with other nodes are restricted to defined links and interfaces. Our solution does not rely on assumptions about the initial state of the system, and no central clock or centrally generated signal, pulse, or message is used. Nodes are anonymous, i.e., they do not have unique identities. We also present a mechanical verification of a proposed protocol. A bounded model of the protocol is verified using the Symbolic Model Verifier (SMV). The model checking effort is focused on verifying correctness of the bounded model of the protocol as well as confirming claims of determinism and linear convergence with respect to the self-stabilization period. We believe that our proposed solution solves the general case of the clock synchronization problem.
NASA Astrophysics Data System (ADS)
Sawicki, Jean-Paul; Saint-Eve, Frédéric; Petit, Pierre; Aillerie, Michel
2017-02-01
This paper presents the results of experiments aimed at verifying a formula for computing the duty cycle of a pulse-width-modulation-controlled DC-DC converter designed and built in the laboratory. This converter, called the Magnetically Coupled Boost (MCB), is sized to step up the voltage of a single photovoltaic module so as to supply grid inverters directly. The duty cycle formula is checked first by identifying an internal parameter, the auto-transformer ratio, and second by checking the stability of the operating point on the photovoltaic module side. Consideration of the nature of the generator source and the load connected to the converter leads to additional experiments to decide whether the auto-transformer ratio parameter can be used with a fixed value or, on the contrary, must be adapted. The effects of load variations on converter behavior and the impact of possible shading on the photovoltaic module are also discussed, with the aim of designing robust control laws, in the case of parallel association, to compensate for unwanted effects due to output voltage coupling.
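For context, the duty-cycle relation of an ideal, non-coupled boost converter in continuous conduction mode is the standard starting point; the MCB formula studied in the paper additionally involves the auto-transformer ratio and is not reproduced here.

\[
  \frac{V_{\mathrm{out}}}{V_{\mathrm{in}}} \;=\; \frac{1}{1 - D}
  \quad\Longrightarrow\quad
  D \;=\; 1 - \frac{V_{\mathrm{in}}}{V_{\mathrm{out}}},
\]

where D is the duty cycle, V_in the photovoltaic module voltage, and V_out the DC bus voltage supplied to the inverter.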
NASA Technical Reports Server (NTRS)
Epp, L. W.; Stanton, P. H.
1993-01-01
In order to add the capability of an X-band uplink onto the 70-m antenna, a new dichroic plate is needed to replace the Pyle-guide-shaped dichroic plate currently in use. The replacement dichroic plate must exhibit an additional passband at the new uplink frequency of 7.165 GHz, while still maintaining a passband at the existing downlink frequency of 8.425 GHz. Because of the wide frequency separation of these two passbands, conventional methods of designing air-filled dichroic plates exhibit grating lobe problems. A new method of solving this problem by using a dichroic plate with cross-shaped holes is presented and verified experimentally. Two checks of the integral equation solution are described. One is the comparison to a modal analysis for the limiting cross shape of a square hole. As a final check, a prototype dichroic plate with cross-shaped holes was built and measured.
Toward improved design of check dam systems: A case study in the Loess Plateau, China
NASA Astrophysics Data System (ADS)
Pal, Debasish; Galelli, Stefano; Tang, Honglei; Ran, Qihua
2018-04-01
Check dams are one of the most common strategies for controlling sediment transport in erosion-prone areas, along with soil and water conservation measures. However, existing mathematical models that simulate sediment production and delivery are often unable to simulate how the storage capacity of check dams varies with time. To explicitly account for this process, and to support the design of check dam systems, we developed a modelling framework consisting of two components, namely (1) the spatially distributed Soil Erosion and Sediment Delivery Model (WaTEM/SEDEM), and (2) a network-based model of check dam storage dynamics. The two models are run sequentially, with the second model receiving the initial sediment input to check dams from WaTEM/SEDEM. The framework is first applied to the Shejiagou catchment, a 4.26 km2 area located in the Loess Plateau, China, where we study the effect of the existing check dam system on sediment dynamics. Results show that the deployment of check dams significantly altered the sediment delivery ratio of the catchment. Furthermore, the network-based model reveals a large variability in the life expectancy of check dams and abrupt changes in their filling rates. The application of the framework to six alternative check dam deployment scenarios is then used to illustrate its usefulness for planning purposes, and to derive some insights on the effect of key decision variables, such as the number, size, and site location of check dams. Simulation results suggest that better performance, in terms of life expectancy and sediment delivery ratio, could have been achieved with an alternative deployment strategy.
Verification of RRA and CMC in OpenSim
NASA Astrophysics Data System (ADS)
Ieshiro, Yuma; Itoh, Toshiaki
2013-10-01
OpenSim is free software for analyzing and simulating musculoskeletal dynamics on a PC. This study examined the RRA and CMC tools in OpenSim. Notably, these tools allow human motion to be simulated with respect to the neural excitation of muscles. However, they still appear to be at a developmental stage. To verify the applicability of these tools, we analyzed bending and stretching motion data obtained from a motion capture device. In this study, we checked the consistency between real muscle behavior and the numerical results from these tools.
Research of improved banker algorithm
NASA Astrophysics Data System (ADS)
Yuan, Xingde; Xu, Hong; Qiao, Shijiao
2013-03-01
In a multi-process operating system, the system's resource management strategy is a critical global issue, especially when many processes compete for limited resources, since unreasonable scheduling can cause deadlock. The classical solution to the deadlock avoidance problem is the banker's algorithm; however, it has its own deficiencies and can only avoid deadlock to a certain extent. This article aims at reducing unnecessary safety checks and then uses a new allocation strategy to improve the banker's algorithm. Through full analysis and example verification of the new allocation strategy, the results show that the improved banker's algorithm achieves a substantial increase in performance.
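The baseline the article improves on is the classical banker's safety check: a state is safe if some ordering exists in which every process can finish with the currently available resources. The sketch below shows that standard check with a textbook-style example; it illustrates the baseline algorithm only, not the article's improved allocation strategy.

def is_safe(available, allocation, maximum):
    """Classical banker's algorithm safety check for n processes and m resource types."""
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    work, finished = list(available), [False] * n
    progress = True
    while progress:
        progress = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):              # process i can run to completion; reclaim its resources
                    work[j] += allocation[i][j]
                finished[i] = True
                progress = True
    return all(finished)

# Textbook-style example: 5 processes, 3 resource types (prints True, i.e. a safe state).
print(is_safe(available=[3, 3, 2],
              allocation=[[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
              maximum=[[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]))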
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, J; Held, M; Morin, O
2015-06-15
Purpose: To investigate the sensitivity of traditional gamma-index-based fluence measurements for patient-specific verification of VMAT-delivered spine SBRT. Methods: The ten most recent spine SBRT cases were selected. All cases were planned with Eclipse RapidArc for a TrueBeam STx. The delivery was verified using a point dose measurement with a PinPoint 3D micro-ion chamber in a Standard Imaging Stereotactic Dose Verification Phantom. Two points were selected for each case, one within the target in a low dose-gradient region and one in the spinal cord. Measurements were localized using on-board CBCT. Cumulative and separate arc measurements were acquired with the ArcCheck and assessed using the SNC Patient software with 3%/3mm and 2%/2mm gamma analyses with global normalization and a 10% dose threshold. Correlations between data were determined using the Pearson product-moment correlation. Results: For our cohort of patients, the measured doses were higher than calculated, ranging from 2.2%–9.7% for the target and 1.0%–8.2% for the spinal cord. There was strong correlation between the 3%/3mm and 2%/2mm passing rates (r=0.91). Moderate correlation was found between target and cord dose with a weak fit (r=0.67, R-square=0.45). The cumulative ArcCheck measurements showed poor correlation with the measured point doses for both the target and cord (r=0.20, r=0.35). If the arcs are assessed separately with an acceptance criterion applied to the minimum passing rate among all arcs, a moderate negative correlation was found for the target and cord (r=−0.48, r=−0.71). The case with the highest dose difference (9.7%) received a passing rate of 97.2% for the cumulative arcs and 87.8% for the minimum with separate arcs. Conclusion: Our data suggest that traditional passing criteria using ArcCheck with cumulative measurements do not correlate well with dose errors. Separate arc analysis shows better correlation but may still miss large dose errors. Point dose verification is recommended.
Implementing Model-Check for Employee and Management Satisfaction
NASA Technical Reports Server (NTRS)
Jones, Corey; LaPha, Steven
2013-01-01
This presentation will discuss methods by which ModelCheck can be implemented to not only improve model quality but also satisfy both employees and management through different sets of quality checks. This approach allows a standard set of modeling practices to be upheld throughout a company, with minimal interaction required from the end user. The presenter will demonstrate how to create multiple ModelCheck standards, preventing users from evading the system, and how doing so can improve the quality of drawings and models.
Monte Carlo simulations to replace film dosimetry in IMRT verification.
Goetzfried, Thomas; Rickhey, Mark; Treutwein, Marius; Koelbl, Oliver; Bogner, Ludwig
2011-01-01
Patient-specific verification of intensity-modulated radiation therapy (IMRT) plans can be done by dosimetric measurements or by independent dose or monitor unit calculations. The aim of this study was the clinical evaluation of IMRT verification based on a fast Monte Carlo (MC) program with regard to possible benefits compared to commonly used film dosimetry. 25 head-and-neck IMRT plans were recalculated by a pencil beam based treatment planning system (TPS) using an appropriate quality assurance (QA) phantom. All plans were verified both by film and diode dosimetry and compared to MC simulations. The irradiated films, the results of diode measurements and the computed dose distributions were evaluated, and the data were compared on the basis of gamma maps and dose-difference histograms. Average deviations in the high-dose region between diode measurements and point dose calculations performed with the TPS and MC program were 0.7 ± 2.7% and 1.2 ± 3.1%, respectively. For film measurements, the mean gamma values with 3% dose difference and 3mm distance-to-agreement were 0.74 ± 0.28 (TPS as reference) with dose deviations up to 10%. Corresponding values were significantly reduced to 0.34 ± 0.09 for MC dose calculation. The total time needed for both verification procedures is comparable, however, by far less labor intensive in the case of MC simulations. The presented study showed that independent dose calculation verification of IMRT plans with a fast MC program has the potential to eclipse film dosimetry more and more in the near future. Thus, the linac-specific QA part will necessarily become more important. In combination with MC simulations and due to the simple set-up, point-dose measurements for dosimetric plausibility checks are recommended at least in the IMRT introduction phase. Copyright © 2010. Published by Elsevier GmbH.
SU-F-T-267: A Clarkson-Based Independent Dose Verification for the Helical Tomotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nagata, H; Juntendo University, Hongo, Tokyo; Hongo, H
2016-06-15
Purpose: There have been few reports on independent dose verification for Tomotherapy. We evaluated the accuracy and the effectiveness of an independent dose verification system for Tomotherapy. Methods: Simple MU Analysis (SMU, Triangle Products, Ishikawa, Japan) was used as the independent verification system; it implements a Clarkson-based dose calculation algorithm using the CT image dataset. For dose calculation in the SMU, the Tomotherapy machine-specific dosimetric parameters (TMR, Scp, OAR and MLC transmission factor) were registered as the machine beam data. Dose calculation was performed after the Tomotherapy sinogram from the DICOM-RT plan information was converted into MU and MLC leaf position information at more finely segmented control points. The performance of the SMU was assessed by a point dose measurement in non-IMRT and IMRT plans (simple target and mock prostate plans). Subsequently, 30 patients' treatment plans for prostate were compared. Results: From the comparison, dose differences between the SMU and the measurement were within 3% for all cases in non-IMRT plans. In the IMRT plan for the simple target, the differences (Average±1SD) were −0.70±1.10% (SMU vs. TPS), −0.40±0.10% (measurement vs. TPS) and −1.20±1.00% (measurement vs. SMU), respectively. For the mock prostate, the differences were −0.40±0.60% (SMU vs. TPS), −0.50±0.90% (measurement vs. TPS) and −0.90±0.60% (measurement vs. SMU), respectively. For the patients' plans, the difference was −0.50±2.10% (SMU vs. TPS). Conclusion: A Clarkson-based independent dose verification for Tomotherapy is clinically usable as a secondary check, with a tolerance level similar to that of AAPM Task Group 114. This research is partially supported by the Japan Agency for Medical Research and Development (AMED).
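A highly simplified sketch of the kind of independent point-dose check described above is given below: for each control point, the contribution to the reference point is approximated as MU times an output factor times TMR at the radiological depth, with the MLC either open (full contribution) or closed (transmission only). The table values, plan structure, and parameter names are hypothetical placeholders, and the Clarkson sector integration of scatter used by the actual algorithm is omitted.

import numpy as np

TMR_DEPTHS = np.array([1.5, 5.0, 10.0, 15.0, 20.0])       # cm (hypothetical table)
TMR_VALUES = np.array([1.00, 0.92, 0.79, 0.68, 0.58])     # for a reference field size (hypothetical)

def tmr(depth_cm):
    return float(np.interp(depth_cm, TMR_DEPTHS, TMR_VALUES))

def point_dose(control_points, scp=1.0, dose_per_mu=1.0, mlc_transmission=0.005):
    """control_points: list of dicts with 'mu', 'radiological_depth_cm',
    and 'point_open' (True if the reference point sits under an open leaf pair)."""
    dose = 0.0
    for cp in control_points:
        weight = 1.0 if cp["point_open"] else mlc_transmission
        dose += cp["mu"] * dose_per_mu * scp * tmr(cp["radiological_depth_cm"]) * weight
    return dose

plan = [{"mu": 2.0, "radiological_depth_cm": 8.0, "point_open": True},
        {"mu": 2.0, "radiological_depth_cm": 12.0, "point_open": False}]
print(f"independent point-dose estimate: {point_dose(plan):.3f} (arbitrary dose units)")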
Miften, Moyed; Olch, Arthur; Mihailidis, Dimitris; Moran, Jean; Pawlicki, Todd; Molineu, Andrea; Li, Harold; Wijesooriya, Krishni; Shi, Jie; Xia, Ping; Papanikolaou, Nikos; Low, Daniel A
2018-04-01
Patient-specific IMRT QA measurements are important components of processes designed to identify discrepancies between calculated and delivered radiation doses. Discrepancy tolerance limits are neither well defined nor consistently applied across centers. The AAPM TG-218 report provides a comprehensive review aimed at improving the understanding and consistency of these processes as well as recommendations for methodologies and tolerance limits in patient-specific IMRT QA. The performance of the dose difference/distance-to-agreement (DTA) and γ dose distribution comparison metrics are investigated. Measurement methods are reviewed and followed by a discussion of the pros and cons of each. Methodologies for absolute dose verification are discussed and new IMRT QA verification tools are presented. Literature on the expected or achievable agreement between measurements and calculations for different types of planning and delivery systems are reviewed and analyzed. Tests of vendor implementations of the γ verification algorithm employing benchmark cases are presented. Operational shortcomings that can reduce the γ tool accuracy and subsequent effectiveness for IMRT QA are described. Practical considerations including spatial resolution, normalization, dose threshold, and data interpretation are discussed. Published data on IMRT QA and the clinical experience of the group members are used to develop guidelines and recommendations on tolerance and action limits for IMRT QA. Steps to check failed IMRT QA plans are outlined. Recommendations on delivery methods, data interpretation, dose normalization, the use of γ analysis routines and choice of tolerance limits for IMRT QA are made with focus on detecting differences between calculated and measured doses via the use of robust analysis methods and an in-depth understanding of IMRT verification metrics. The recommendations are intended to improve the IMRT QA process and establish consistent, and comparable IMRT QA criteria among institutions. © 2018 American Association of Physicists in Medicine.
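The gamma comparison at the center of the recommendations above can be illustrated with a minimal one-dimensional sketch: for each measured point above a low-dose threshold, search the reference points for the minimum combined dose-difference/DTA metric, and count gamma <= 1 as passing. The profiles, 3%/3 mm criteria, global normalization, and 10% threshold below are illustrative choices for a toy example, not a vendor implementation.

import numpy as np

def gamma_pass_rate(ref_dose, meas_dose, positions, dd=0.03, dta=3.0, threshold=0.10):
    ref_dose, meas_dose, positions = map(np.asarray, (ref_dose, meas_dose, positions))
    norm = ref_dose.max()                               # global normalization
    evaluate = meas_dose >= threshold * norm            # ignore low-dose points
    gammas = []
    for i in np.where(evaluate)[0]:
        dose_term = (meas_dose[i] - ref_dose) / (dd * norm)
        dist_term = (positions[i] - positions) / dta
        gammas.append(np.sqrt(dose_term**2 + dist_term**2).min())
    gammas = np.array(gammas)
    return 100.0 * np.mean(gammas <= 1.0), gammas

# Toy example: a small spatial shift between "measured" and "reference" profiles.
x = np.linspace(-30, 30, 121)                           # positions in mm
ref = np.exp(-(x / 15.0) ** 2)
meas = np.exp(-((x - 1.0) / 15.0) ** 2)
rate, _ = gamma_pass_rate(ref, meas, x)
print(f"gamma passing rate (3%/3 mm, 10% threshold): {rate:.1f}%")

The exhaustive search over reference points shown here is the simplest correct formulation; as the report notes, practical implementations must also handle spatial resolution and interpolation, which this sketch does not address.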
Verification testing of the Waterloo Biofilter Systems (WBS), Inc. Waterloo Biofilter® Model 4-Bedroom system was conducted over a thirteen month period at the Massachusetts Alternative Septic System Test Center (MASSTC) located at Otis Air National Guard Base in Bourne, Mas...
Verification testing of the SeptiTech Model 400 System was conducted over a twelve month period at the Massachusetts Alternative Septic System Test Center (MASSTC) located at the Otis Air National Guard Base in Borne, MA. Sanitary Sewerage from the base residential housing was u...
Verification testing of the Aquapoint, Inc. (AQP) BioclereTM Model 16/12 was conducted over a thirteen month period at the Massachusetts Alternative Septic System Test Center (MASSTC), located at Otis Air National Guard Base in Bourne, Massachusetts. Sanitary sewerage from the ba...
Use of metaknowledge in the verification of knowledge-based systems
NASA Technical Reports Server (NTRS)
Morell, Larry J.
1989-01-01
Knowledge-based systems are modeled as deductive systems. The model indicates that the two primary areas of concern in verification are demonstrating consistency and completeness. A system is inconsistent if it asserts something that is not true of the modeled domain. A system is incomplete if it lacks deductive capability. Two forms of consistency are discussed along with appropriate verification methods. Three forms of incompleteness are discussed. The use of metaknowledge, knowledge about knowledge, is explored in connection to each form of incompleteness.
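As a loose illustration of the two concerns named above (and not the verification methods of the paper itself), the sketch below forward-chains a tiny propositional rule base: inconsistency is flagged when both a literal and its negation are derived, and one simple form of incompleteness is flagged when a rule premise can never be established by any fact or rule conclusion. The rules and facts are invented for the example.

RULES = [(["bird", "not penguin"], "flies"),
         (["penguin"], "not flies"),
         (["has_wheels"], "vehicle")]          # 'has_wheels' is supported by nothing below
FACTS = {"bird", "penguin", "not penguin"}     # deliberately contradictory input

def forward_chain(facts, rules):
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

derived = forward_chain(FACTS, RULES)
contradictions = {a for a in derived if ("not " + a) in derived}
supported = set(FACTS) | {c for _, c in RULES}
unsupported_premises = {p for premises, _ in RULES for p in premises if p not in supported}

print("inconsistency (literal and its negation derived):", contradictions)
print("premises no fact or rule can ever establish:", unsupported_premises)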
NASA Astrophysics Data System (ADS)
Shin, C.
2017-12-01
Permeability estimation has been extensively researched in diverse fields; however, methods that suitably consider varying geometries and changes within the flow region, for example the closing of hydraulic fractures over several years, are yet to be developed. Therefore, in the present study a new permeability estimation method is presented based on the generalized Darcy friction flow relation, in particular by examining frictional flow parameters and the characteristics of their variations. For this examination, computational fluid dynamics (CFD) simulations of simple hydraulic fractures filled with five layers of structured microbeads, and accompanied by geometry changes and flow transitions, were performed. It was checked whether the main structures and shapes of each flow path are preserved even for geometry variations within the porous media; however, the scarcity and discontinuity of streamlines increase dramatically in the transient- and turbulent-flow regions. Quantitative and analytical examinations of the frictional flow features were also performed. Accordingly, the modified frictional flow parameters were successfully presented as similarity parameters of porous flows. In conclusion, the generalized Darcy friction flow relation and the friction equivalent permeability (FEP) equation were both modified using the similarity parameters. For verification, the FEP values of the other aperture models were estimated and checked for agreement with the original permeability values. Ultimately, the proposed and verified method is expected to efficiently estimate permeability variations in porous media with changing geometric factors and flow regions, including such instances as hydraulic fracture closings.
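For reference, the two standard relations that the abstract builds on, Darcy's law for permeability and the Darcy-Weisbach friction factor, are shown below in magnitude form; the paper's modified FEP relation itself is not reproduced here.

\[
  q \;=\; \frac{k}{\mu}\,\frac{\Delta P}{L}
  \;\;\Longrightarrow\;\;
  k \;=\; \frac{q\,\mu\,L}{\Delta P},
  \qquad
  f_D \;=\; \frac{\Delta P}{L}\,\frac{D_h}{\tfrac{1}{2}\,\rho\,v^{2}},
\]

where q is the Darcy flux, k the permeability, \mu the dynamic viscosity, \Delta P / L the pressure gradient, D_h a hydraulic diameter, \rho the fluid density, and v the mean velocity.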
Quality Assurance in the Presence of Variability
NASA Astrophysics Data System (ADS)
Lauenroth, Kim; Metzger, Andreas; Pohl, Klaus
Software Product Line Engineering (SPLE) is a reuse-driven development paradigm that has been applied successfully in information system engineering and other domains. Quality assurance of the reusable artifacts of the product line (e.g. requirements, design, and code artifacts) is essential for successful product line engineering. As those artifacts are reused in several products, a defect in a reusable artifact can affect several products of the product line. A central challenge for quality assurance in product line engineering is how to consider product line variability. Since the reusable artifacts contain variability, quality assurance techniques from single-system engineering cannot directly be applied to those artifacts. Therefore, different strategies and techniques have been developed for quality assurance in the presence of variability. In this chapter, we describe those strategies and discuss in more detail one of those strategies, the so called comprehensive strategy. The comprehensive strategy aims at checking the quality of all possible products of the product line and thus offers the highest benefits, since it is able to uncover defects in all possible products of the product line. However, the central challenge for applying the comprehensive strategy is the complexity that results from the product line variability and the large number of potential products of a product line. In this chapter, we present one concrete technique that we have developed to implement the comprehensive strategy that addresses this challenge. The technique is based on model checking technology and allows for a comprehensive verification of domain artifacts against temporal logic properties.
Finding Feasible Abstract Counter-Examples
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Dwyer, Matthew B.; Visser, Willem; Clancy, Daniel (Technical Monitor)
2002-01-01
A strength of model checking is its ability to automate the detection of subtle system errors and produce traces that exhibit those errors. Given the high computational cost of model checking most researchers advocate the use of aggressive property-preserving abstractions. Unfortunately, the more aggressively a system is abstracted the more infeasible behavior it will have. Thus, while abstraction enables efficient model checking it also threatens the usefulness of model checking as a defect detection tool, since it may be difficult to determine whether a counter-example is feasible and hence worth developer time to analyze. We have explored several strategies for addressing this problem by extending an explicit-state model checker, Java PathFinder (JPF), to search for and analyze counter-examples in the presence of abstractions. We demonstrate that these techniques effectively preserve the defect detection ability of model checking in the presence of aggressive abstraction by applying them to check properties of several abstracted multi-threaded Java programs. These new capabilities are not specific to JPF and can be easily adapted to other model checking frameworks; we describe how this was done for the Bandera toolset.
Koller, Marianne; Becker, Christian; Thiermann, Horst; Worek, Franz
2010-05-15
The purpose of this study was to check the applicability of different analytical methods for the identification of unknown nerve agents in human body fluids. Plasma and urine samples were spiked with nerve agents (plasma) or with their metabolites (urine) or were left blank. Seven random samples (35% of all samples) were selected for the verification test. Plasma was worked up for unchanged nerve agents and for regenerated nerve agents after fluoride-induced reactivation of nerve agent-inhibited butyrylcholinesterase. Both extracts were analysed by GC-MS. Metabolites were extracted from plasma and urine, respectively, and were analysed by LC-MS. The urinary metabolites and two blank samples could be identified without further measurements, plasma metabolites and blanks were identified in six of seven samples. The analysis of unchanged nerve agent provided five agents/blanks and the sixth agent after further investigation. The determination of the regenerated agents also provided only five clear findings during the first screening because of a rather noisy baseline. Therefore, the sample preparation was extended by a size exclusion step performed before addition of fluoride which visibly reduced baseline noise and thus improved identification of the two missing agents. The test clearly showed that verification should be performed by analysing more than one biomarker to ensure identification of the agent(s). Copyright (c) 2010 Elsevier B.V. All rights reserved.
Measuring Data Quality Through a Source Data Verification Audit in a Clinical Research Setting.
Houston, Lauren; Probst, Yasmine; Humphries, Allison
2015-01-01
Health data has long been scrutinised in relation to data quality and integrity problems. Currently, no internationally accepted or "gold standard" method exists for measuring data quality and error rates within datasets. We conducted a source data verification (SDV) audit on a prospective clinical trial dataset. An audit plan was applied to conduct 100% manual verification checks on a 10% random sample of participant files. A quality assurance rule was developed whereby, if >5% of data variables were incorrect, a second 10% random sample would be extracted from the trial dataset. Errors were coded as: correct; incorrect (valid or invalid); not recorded; or not entered. Audit-1 had a total error of 33% and audit-2 36%. The physiological section was the only audit section to have <5% error. Data not recorded on case report forms had the greatest impact on error calculations. A significant association (p=0.00) was found between audit-1 and audit-2 and whether or not data was deemed correct or incorrect. Our study developed a straightforward method to perform an SDV audit. An audit rule was identified and error coding was implemented. Findings demonstrate that monitoring data quality by an SDV audit can identify data quality and integrity issues within clinical research settings, allowing quality improvements to be made. The authors suggest this approach be implemented for future research.
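The audit rule described above can be sketched as follows; the 10% sampling fraction, 5% error threshold, and error codes are taken from the abstract, while the data structures and helper names are hypothetical.

```python
import random

ERROR_CODES = {"correct", "incorrect_valid", "incorrect_invalid",
               "not_recorded", "not_entered"}

def sample_files(all_file_ids, fraction=0.10, seed=None):
    """Draw a random sample of participant files for 100% manual verification."""
    rng = random.Random(seed)
    n = max(1, round(fraction * len(all_file_ids)))
    return rng.sample(list(all_file_ids), n)

def error_rate(coded_variables):
    """Proportion of audited data variables not coded as 'correct'."""
    assert all(code in ERROR_CODES for code in coded_variables)
    errors = sum(1 for code in coded_variables if code != "correct")
    return errors / len(coded_variables)

def needs_second_sample(coded_variables, threshold=0.05):
    """Quality assurance rule: >5% incorrect variables triggers a second 10% sample."""
    return error_rate(coded_variables) > threshold

if __name__ == "__main__":
    # Hypothetical audit-1 coding roughly matching the reported 33% total error.
    audit_1 = ["correct"] * 67 + ["not_recorded"] * 20 + ["incorrect_valid"] * 13
    print(f"audit-1 error rate: {error_rate(audit_1):.0%}")
    print("second sample required:", needs_second_sample(audit_1))
```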
Development and validation of MCNPX-based Monte Carlo treatment plan verification system
Jabbari, Iraj; Monadi, Shahram
2015-01-01
A Monte Carlo treatment plan verification (MCTPV) system was developed for clinical treatment plan verification (TPV), especially for conformal and intensity-modulated radiotherapy (IMRT) plans. In MCTPV, the MCNPX code was used for particle transport through the accelerator head and the patient body. MCTPV has an interface with the TiGRT planning system and reads the information needed for the Monte Carlo calculation, transferred in Digital Imaging and Communications in Medicine-Radiation Therapy (DICOM-RT) format. In MCTPV, several methods were applied in order to reduce the simulation time. The relative dose distribution of a clinical prostate conformal plan calculated by MCTPV was compared with that of the TiGRT planning system. The results showed that the beam configurations and patient information were correctly implemented in this system. For quantitative evaluation of MCTPV, a two-dimensional (2D) diode array (MapCHECK2) and gamma index analysis were used. The gamma passing rate (3%/3 mm) of an IMRT plan was found to be 98.5% for all beams combined. Also, comparison of the measured and Monte Carlo calculated doses at several points inside an inhomogeneous phantom for 6- and 18-MV photon beams showed good agreement (within 1.5%). The accuracy and timing results showed that MCTPV could be used very efficiently for additional assessment of complicated plans such as IMRT plans. PMID:26170554
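For reference, a brute-force sketch of a 2D gamma-index evaluation with the 3%/3 mm criterion is given below. The grid spacing, global normalisation, and array contents are assumptions, and clinical tools use much faster search strategies than this exhaustive loop; it only illustrates the metric behind the reported passing rate.

```python
import numpy as np

def gamma_passing_rate(dose_ref, dose_eval, spacing_mm=1.0,
                       dose_tol=0.03, dist_tol_mm=3.0):
    """Fraction of reference points with gamma <= 1 (global dose normalisation)."""
    ny, nx = dose_ref.shape
    dose_norm = dose_tol * dose_ref.max()
    yy, xx = np.mgrid[0:ny, 0:nx]
    passed = 0
    for iy in range(ny):
        for ix in range(nx):
            dist2 = ((yy - iy) ** 2 + (xx - ix) ** 2) * spacing_mm ** 2
            ddose2 = (dose_eval - dose_ref[iy, ix]) ** 2
            gamma2 = dist2 / dist_tol_mm ** 2 + ddose2 / dose_norm ** 2
            if gamma2.min() <= 1.0:
                passed += 1
    return passed / (ny * nx)

if __name__ == "__main__":
    ref = np.random.default_rng(0).uniform(0.5, 1.0, (40, 40))
    evl = ref * 1.01                      # evaluated dose within 1% everywhere
    print(f"gamma passing rate: {gamma_passing_rate(ref, evl):.1%}")
```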
Space Shuttle Day-of-Launch Trajectory Design and Verification
NASA Technical Reports Server (NTRS)
Harrington, Brian E.
2010-01-01
A top priority of any launch vehicle is to insert as much mass into the desired orbit as possible. This requirement must be traded against vehicle capability in terms of dynamic control, thermal constraints, and structural margins. The vehicle is certified to a specific structural envelope which will yield certain performance characteristics of mass to orbit. Some envelopes cannot be certified generically and must be checked with each mission design. The most sensitive envelopes require an assessment on the day-of-launch. To further minimize vehicle loads while maximizing vehicle performance, a day-of-launch trajectory can be designed. This design is optimized according to that day's wind and atmospheric conditions, which will increase the probability of launch. The day-of-launch trajectory verification is critical to the vehicle's safety. The Day-Of-Launch I-Load Uplink (DOLILU) is the process by which the Space Shuttle Program redesigns the vehicle steering commands to fit that day's environmental conditions and then rigorously verifies the integrated vehicle trajectory's loads, controls, and performance. The Shuttle methodology is very similar to other United States unmanned launch vehicles. By extension, this method would be similar to the methods employed for any future NASA launch vehicles. This presentation will provide an overview of the Shuttle's day-of-launch trajectory optimization and verification as an example of a more generic application of day-of-launch design and validation.
Verification, Validation and Sensitivity Studies in Computational Biomechanics
Anderson, Andrew E.; Ellis, Benjamin J.; Weiss, Jeffrey A.
2012-01-01
Computational techniques and software for the analysis of problems in mechanics have naturally moved from their origins in the traditional engineering disciplines to the study of cell, tissue and organ biomechanics. Increasingly complex models have been developed to describe and predict the mechanical behavior of such biological systems. While the availability of advanced computational tools has led to exciting research advances in the field, the utility of these models is often the subject of criticism due to inadequate model verification and validation. The objective of this review is to present the concepts of verification, validation and sensitivity studies with regard to the construction, analysis and interpretation of models in computational biomechanics. Specific examples from the field are discussed. It is hoped that this review will serve as a guide to the use of verification and validation principles in the field of computational biomechanics, thereby improving the peer acceptance of studies that use computational modeling techniques. PMID:17558646
NASA Astrophysics Data System (ADS)
Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; Price, Stephen; Hoffman, Matthew; Lipscomb, William H.; Fyke, Jeremy; Vargo, Lauren; Boghozian, Adrianna; Norman, Matthew; Worley, Patrick H.
2017-06-01
To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Ultimately, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
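The bit-for-bit comparison at the heart of such regression testing can be sketched as below. The file layout and variable names are hypothetical and this is not LIVVkit's actual API, which additionally handles configuration and parameter variations, performance data, and report generation.

```python
import numpy as np

def compare_variables(test_vars, ref_vars, rtol=0.0, atol=0.0):
    """Compare model output variables against reference data.

    With rtol = atol = 0 this is a bit-for-bit check; non-zero tolerances
    turn it into an approximate (round-off-aware) comparison.
    """
    report = {}
    for name, ref in ref_vars.items():
        if name not in test_vars:
            report[name] = "missing in test output"
            continue
        test = test_vars[name]
        if test.shape != ref.shape:
            report[name] = f"shape mismatch {test.shape} vs {ref.shape}"
        elif np.allclose(test, ref, rtol=rtol, atol=atol, equal_nan=True):
            report[name] = "PASS"
        else:
            report[name] = f"FAIL (max abs diff {np.max(np.abs(test - ref)):.3e})"
    return report

if __name__ == "__main__":
    # Hypothetical ice-sheet output variables compared against a reference run.
    ref = {"thickness": np.ones((4, 4)), "velocity": np.zeros((4, 4))}
    test = {"thickness": np.ones((4, 4)), "velocity": np.full((4, 4), 1e-12)}
    for var, result in compare_variables(test, ref).items():
        print(var, "->", result)
```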
NASA Technical Reports Server (NTRS)
1979-01-01
Structural analysis and certification of the collector system are presented. System verification against the interim performance criteria is presented and indicated by matrices. The verification discussion, analysis, and test results are also given.
Code of Federal Regulations, 2014 CFR
2014-01-01
... If you make the deposit in person to one of our employees, funds from the following ... in different states or check processing regions ... Substitute Checks and Your Rights: What Is a Substitute Check? To make check processing faster, federal law...
Evaluation of the 29-km Eta Model for Weather Support to the United States Space Program
NASA Technical Reports Server (NTRS)
Manobianco, John; Nutter, Paul
1997-01-01
The Applied Meteorology Unit (AMU) conducted a year-long evaluation of NCEP's 29-km mesoscale Eta (meso-eta) weather prediction model in order to identify added value to forecast operations in support of the United States space program. The evaluation was stratified over warm and cool seasons and considered both objective and subjective verification methodologies. Objective verification results generally indicate that meso-eta model point forecasts at selected stations exhibit minimal error growth in terms of RMS errors and are reasonably unbiased. Conversely, results from the subjective verification demonstrate that model forecasts of developing weather events, such as thunderstorms, sea breezes, and cold fronts, are not always as accurate as implied by the seasonal error statistics. Sea-breeze case studies reveal that the model generates a dynamically consistent, thermally direct circulation over the Florida peninsula, although at a larger scale than observed. Thunderstorm verification reveals that the meso-eta model is capable of predicting areas of organized convection, particularly during the late afternoon hours, but is not capable of forecasting individual thunderstorms. Verification of cold fronts during the cool season reveals that the model is capable of forecasting a majority of cold frontal passages through east central Florida to within ±1 h of observed frontal passage.
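The objective point-forecast statistics referred to here (bias and RMS error at selected stations) can be computed with a short routine like the hedged sketch below; station data handling and stratification by season are omitted, and the sample values are invented.

```python
import numpy as np

def bias_and_rmse(forecasts, observations):
    """Return (bias, RMSE) for paired forecast/observation series."""
    f = np.asarray(forecasts, dtype=float)
    o = np.asarray(observations, dtype=float)
    errors = f - o
    return errors.mean(), np.sqrt(np.mean(errors ** 2))

if __name__ == "__main__":
    # Hypothetical 2-m temperature forecasts and observations (deg C).
    fcst = [28.1, 29.4, 30.2, 27.8, 26.5]
    obs = [27.6, 29.9, 30.8, 27.1, 26.9]
    bias, rmse = bias_and_rmse(fcst, obs)
    print(f"bias = {bias:+.2f} degC, RMSE = {rmse:.2f} degC")
```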
Power System Test and Verification at Satellite Level
NASA Astrophysics Data System (ADS)
Simonelli, Giulio; Mourra, Olivier; Tonicello, Ferdinando
2008-09-01
Most of the articles on Power Systems deal with the architecture and technical solutions related to the functionalities of the power system and their performances. Very few articles, if any, address integration and verification aspects of the Power System at satellite level and the related issues with the Power EGSE (Electrical Ground Support Equipment), which also has to support the AIT/AIV (Assembly Integration Test and Verification) program of the satellite and, eventually, the launch campaign. In recent years, a more complex development and testing concept based on MDVE (Model Based Development and Verification Environment) has been introduced. In the MDVE approach, the simulation software is used to simulate the satellite environment and, in the early stages, the satellite's units. This approach changed the Power EGSE requirements significantly. Power EGSEs or, better, Power SCOEs (Special Check Out Equipment) are now requested to provide the instantaneous power generated by the solar array throughout the orbit. To achieve that, the Power SCOE interfaces to the RTS (Real Time Simulator) of the MDVE. The RTS provides the instantaneous settings corresponding to that point along the orbit to the Power SCOE, so that the Power SCOE generates the instantaneous {I,V} curve of the SA (Solar Array). This amounts to a real-time test of the power system, which is even more valuable for EO (Earth Observation) satellites, where the Solar Array aspect angle to the sun is rarely fixed and the power load profile can be particularly complex (for example, in radar applications). In this article, the major issues related to integration and testing of Power Systems will be discussed, taking into account different power system topologies (i.e. regulated bus, unregulated bus, battery bus, based on MPPT or S3R…). Aspects of Power System AIT I/Fs (interfaces), Umbilical I/Fs with the launcher, and the Power SCOE I/Fs will also be addressed. Last but not least, the protection strategy of the Power System during the AIT/AIV program will also be discussed. A further objective of this discussion is to provide the Power System Engineer with a checklist of key aspects linked to the satellite AIT/AIV program that have to be considered in the early phases of a new power system development.
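To make the idea of generating an instantaneous {I,V} curve at each orbital point concrete, here is a hedged sketch of a simplified single-diode solar-array model that a Power SCOE-like simulator could evaluate from instantaneous orbital settings (illumination and temperature). All parameter values and the array sizing are illustrative only and are not drawn from any particular EGSE or mission.

```python
import numpy as np

K_B, Q = 1.380649e-23, 1.602176634e-19   # Boltzmann constant, elementary charge

def array_current(v, g_rel=1.0, temp_c=28.0,
                  i_ph_ref=2.0, i_0=1e-9, n_ideal=1.3,
                  cells_series=80, strings_parallel=30):
    """Simplified single-diode I-V model of a solar array (no series/shunt R).

    g_rel  : illumination relative to the reference condition (orbit-dependent)
    temp_c : cell temperature in deg C (orbit-dependent)
    """
    v_t = n_ideal * K_B * (temp_c + 273.15) / Q          # thermal voltage per cell
    i_string = g_rel * i_ph_ref - i_0 * np.expm1(v / (cells_series * v_t))
    return np.clip(strings_parallel * i_string, 0.0, None)

if __name__ == "__main__":
    v = np.linspace(0.0, 70.0, 8)
    # Two hypothetical orbital points: full sun vs. 60% illumination, warmer array.
    for label, g, t in [("full sun", 1.0, 28.0), ("partial", 0.6, 45.0)]:
        print(label, np.round(array_current(v, g_rel=g, temp_c=t), 2))
```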
Williams, Rebecca S; Derrick, Jason; Phillips, K Jean
2017-07-01
To assess how easily minors can purchase cigarettes online and online cigarette vendors' compliance with federal age/ID verification and shipping regulations, North Carolina's 2013 tobacco age verification law, and federal prohibitions on the sale of non-menthol flavoured cigarettes or those labelled or advertised as 'light'. In early 2014, 10 minors aged 14-17 attempted to purchase cigarettes by credit card and electronic check from 68 popular internet vendors. Minors received cigarettes from 32.4% of purchase attempts, all delivered by the US Postal Service (USPS) from overseas sellers. None failed due to age/ID verification. All failures were due to payment processing problems. USPS left 63.6% of delivered orders at the door with the remainder handed to minors with no age verification. 70.6% of vendors advertised light cigarettes and 60.3% flavoured, with 23.5% and 11.8%, respectively, delivered to the teens. Study credit cards were exposed to an estimated $7000 of fraudulent charges. Despite years of regulations restricting internet cigarette sales, poor vendor compliance and lack of shipper and federal enforcement leaves minors still able to obtain cigarettes (including 'light' and flavoured) online. The internet cigarette marketplace has shifted overseas, exposing buyers to widespread credit card fraud. Federal agencies should rigorously enforce existing internet cigarette sales laws to prevent illegal shipments from reaching US consumers, shut down non-compliant and fraudulent websites, and stop the theft and fraudulent use of credit card information provided online. Future studies should assess whether these agencies begin adequately enforcing the existing laws. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.; Thieberger, P.; Wegner, H. E.
1985-01-01
Single-Event Upset (SEU) response of a bipolar low-power Schottky-diode-clamped TTL static RAM has been observed using Br ions in the 100-240 MeV energy range and O ions in the 20-100 MeV range. These data complete the experimental verification of circuit-simulation SEU modeling for this device. The threshold for onset of SEU has been observed by the variation of energy, ion species and angle of incidence. The results obtained from the computer circuit-simulation modeling and experimental model verification demonstrate a viable methodology for modeling SEU in bipolar integrated circuits.
Multibody modeling and verification
NASA Technical Reports Server (NTRS)
Wiens, Gloria J.
1989-01-01
A summary of a ten week project on flexible multibody modeling, verification and control is presented. Emphasis was on the need for experimental verification. A literature survey was conducted for gathering information on the existence of experimental work related to flexible multibody systems. The first portion of the assigned task encompassed the modeling aspects of flexible multibodies that can undergo large angular displacements. Research in the area of modeling aspects were also surveyed, with special attention given to the component mode approach. Resulting from this is a research plan on various modeling aspects to be investigated over the next year. The relationship between the large angular displacements, boundary conditions, mode selection, and system modes is of particular interest. The other portion of the assigned task was the generation of a test plan for experimental verification of analytical and/or computer analysis techniques used for flexible multibody systems. Based on current and expected frequency ranges of flexible multibody systems to be used in space applications, an initial test article was selected and designed. A preliminary TREETOPS computer analysis was run to ensure frequency content in the low frequency range, 0.1 to 50 Hz. The initial specifications of experimental measurement and instrumentation components were also generated. Resulting from this effort is the initial multi-phase plan for a Ground Test Facility of Flexible Multibody Systems for Modeling Verification and Control. The plan focusses on the Multibody Modeling and Verification (MMV) Laboratory. General requirements of the Unobtrusive Sensor and Effector (USE) and the Robot Enhancement (RE) laboratories were considered during the laboratory development.
Rafanoharana, Serge; Boissière, Manuel; Wijaya, Arief; Wardhana, Wahyu
2016-01-01
Remote sensing has been widely used for mapping land cover and is considered key to monitoring changes in forest areas in the REDD+ Measurement, Reporting and Verification (MRV) system. But Remote Sensing as a desk study cannot capture the whole picture; it also requires ground checking. Therefore, complementing remote sensing analysis using participatory mapping can help provide information for an initial forest cover assessment, gain better understanding of how local land use might affect changes, and provide a way to engage local communities in REDD+. Our study looked at the potential of participatory mapping in providing complementary information for remotely sensed maps. The research sites were located in different ecological and socio-economic contexts in the provinces of Papua, West Kalimantan and Central Java, Indonesia. Twenty-one maps of land cover and land use were drawn with local community participation during focus group discussions in seven villages. These maps, covering a total of 270,000ha, were used to add information to maps developed using remote sensing, adding 39 land covers to the eight from our initial desk assessment. They also provided additional information on drivers of land use and land cover change, resource areas, territory claims and land status, which we were able to correlate to understand changes in forest cover. Incorporating participatory mapping in the REDD+ MRV protocol would help with initial remotely sensed land classifications, stratify an area for ground checks and measurement plots, and add other valuable social data not visible at the RS scale. Ultimately, it would provide a forum for local communities to discuss REDD+ activities and develop a better understanding of REDD+. PMID:27977685
Prevalence of plagiarism in recent submissions to the Croatian Medical Journal.
Baždarić, Ksenija; Bilić-Zulle, Lidija; Brumini, Gordana; Petrovečki, Mladen
2012-06-01
To assess the prevalence of plagiarism in manuscripts submitted for publication in the Croatian Medical Journal (CMJ). All manuscripts submitted in 2009-2010 were analyzed using plagiarism detection software: eTBLAST, CrossCheck, and WCopyfind. Plagiarism was suspected in manuscripts with more than 10% of the text derived from other sources. These manuscripts were checked against the Déjà vu database and manually verified by investigators. Of 754 submitted manuscripts, 105 (14%) were identified by the software as suspicious of plagiarism. Manual verification confirmed that 85 (11%) manuscripts were plagiarized: 63 (8%) were true plagiarism and 22 (3%) were self-plagiarism. Plagiarized manuscripts were mostly submitted from China (21%), Croatia (14%), and Turkey (19%). There was no significant difference in the text similarity rate between plagiarized and self-plagiarized manuscripts (25% [95% CI 22-27%] vs. 28% [95% CI 20-33%]; U = 645.50; P = 0.634). Differences in text similarity rate were found between various sections of self-plagiarized manuscripts (H = 12.65, P = 0.013). The plagiarism rate in the Materials and Methods (61% [95% CI 41-68%]) was higher than in the Results (23% [95% CI 17-36%]; U = 33.50; P = 0.009) or Discussion (25.5% [95% CI 15-35%]; U = 57.50; P < 0.001) sections. Three authors were identified in the Déjà vu database. Plagiarism detection software combined with manual verification may be used to detect plagiarized manuscripts and prevent their publication. The prevalence of plagiarized manuscripts submitted to the CMJ, a journal dedicated to promoting research integrity, was 11% in the 2-year period 2009-2010.
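As a rough illustration of the kind of text-similarity screening involved (not the actual algorithms of eTBLAST, CrossCheck, or WCopyfind), the sketch below estimates the proportion of a manuscript covered by shared word 6-grams with a source document and flags it when the similarity exceeds the 10% threshold used in the study.

```python
import re

def word_ngrams(text, n=6):
    """Set of consecutive word n-grams in lower-cased text."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_rate(manuscript, source, n=6):
    """Fraction of the manuscript's word n-grams also present in the source."""
    m_grams = word_ngrams(manuscript, n)
    if not m_grams:
        return 0.0
    return len(m_grams & word_ngrams(source, n)) / len(m_grams)

def flag_for_manual_review(manuscript, source, threshold=0.10):
    """Suspect plagiarism if more than 10% of the text matches another source."""
    return similarity_rate(manuscript, source) > threshold

if __name__ == "__main__":
    src = "the study protocol was approved by the institutional ethics committee " * 5
    ms = src[:200] + " we recruited forty two participants from three outpatient clinics"
    rate = similarity_rate(ms, src)
    print(f"similarity: {rate:.0%}",
          "-> manual review" if flag_for_manual_review(ms, src) else "-> ok")
```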
Autoverification process improvement by Six Sigma approach: Clinical chemistry & immunoassay.
Randell, Edward W; Short, Garry; Lee, Natasha; Beresford, Allison; Spencer, Margaret; Kennell, Marina; Moores, Zoë; Parry, David
2018-05-01
This study examines the effectiveness of a project to enhance an autoverification (AV) system through application of Six Sigma (DMAIC) process improvement strategies. Similar AV systems set up at three sites underwent examination and modification to produce improved systems, while monitoring the proportions of samples autoverified, the time required for manual review and verification, sample processing time, and the characteristics of tests not autoverified. This information was used to identify areas for improvement and monitor the impact of changes. Use of reference-range-based criteria had the greatest impact on the proportion of tests autoverified. To improve the AV process, reference-range-based criteria were replaced with extreme value limits based on a 99.5% test result interval, delta check criteria were broadened, and new specimen consistency rules were implemented. Decision guidance tools were also developed to assist staff using the AV system. The mean proportion of tests and samples autoverified improved from <62% for samples and <80% for tests to >90% for samples and >95% for tests across all three sites. The new AV system significantly decreased turn-around time and total sample review time (to about a third); however, time spent on manual review of held samples almost tripled. There was no evidence of compromise to the quality of the testing process, and <1% of samples held for exceeding delta check or extreme limits required corrective action. The Six Sigma (DMAIC) process improvement methodology was successfully applied to AV systems, resulting in an increase in overall test and sample AV to >90%, improved turn-around time, and reduced time for manual verification, with no obvious compromise to quality or error detection. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
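The revised rule set described above (extreme-value limits derived from a 99.5% result interval plus a widened delta check) might look roughly like the hedged sketch below. The limits, delta threshold, and analyte are invented for illustration and are not the study's actual configuration.

```python
import numpy as np

def extreme_limits(historical_results, central_fraction=0.995):
    """Derive extreme-value limits spanning a 99.5% interval of past results."""
    tail = (1.0 - central_fraction) / 2.0 * 100.0
    low, high = np.percentile(historical_results, [tail, 100.0 - tail])
    return low, high

def autoverify(result, previous_result, limits, delta_limit):
    """Release the result automatically unless a rule holds it for manual review."""
    low, high = limits
    if not (low <= result <= high):
        return False, "held: outside extreme-value limits"
    if previous_result is not None and abs(result - previous_result) > delta_limit:
        return False, "held: delta check failed"
    return True, "autoverified"

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    history = rng.normal(140.0, 3.0, 5000)        # hypothetical sodium results, mmol/L
    limits = extreme_limits(history)
    for result, previous in [(141.0, 139.0), (170.0, 139.0), (141.0, 120.0)]:
        ok, reason = autoverify(result, previous, limits, delta_limit=10.0)
        print(result, previous, "->", reason)
```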
Application of conditional moment tests to model checking for generalized linear models.
Pan, Wei
2002-06-01
Generalized linear models (GLMs) are increasingly being used in daily data analysis. However, model checking for GLMs with correlated discrete response data remains difficult. In this paper, through a case study on marginal logistic regression using a real data set, we illustrate the flexibility and effectiveness of using conditional moment tests (CMTs), along with other graphical methods, to do model checking for generalized estimating equation (GEE) analyses. Although CMTs provide an array of powerful diagnostic tests for model checking, they were originally proposed in the econometrics literature and, to our knowledge, have never been applied to GEE analyses. CMTs cover many existing tests, including the (generalized) score test for an omitted covariate, as special cases. In summary, we believe that CMTs provide a class of useful model checking tools.
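For readers unfamiliar with the construction, the generic form of a conditional moment test can be written as follows; this is the standard textbook formulation rather than the paper's specific implementation for GEE analyses.

```latex
% Fitted-model residuals under a GLM/GEE mean specification \mu(x_i;\hat\beta):
\hat e_i \;=\; y_i - \mu(x_i;\hat\beta)
% Correct mean specification implies the conditional moment restriction
H_0:\; \mathrm{E}\!\left[\,e_i \mid x_i\,\right] = 0
\;\;\Longrightarrow\;\;
\mathrm{E}\!\left[\, w(x_i)\, e_i \,\right] = 0 \quad \text{for any weight function } w(\cdot)
% A CMT evaluates the sample analogue and compares it with its estimated variance:
\hat m \;=\; \frac{1}{n}\sum_{i=1}^{n} w(x_i)\,\hat e_i,
\qquad
\mathrm{CM} \;=\; n\,\hat m^{\top}\widehat{V}^{-1}\hat m
\;\xrightarrow{d}\; \chi^2_{q} \;\;\text{under } H_0
```

Here q is the dimension of the weight function w and the variance estimate V must account for the estimation of the regression parameters; different choices of w recover familiar diagnostics such as the score test for an omitted covariate.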
Take the Reins on Model Quality with ModelCHECK and Gatekeeper
NASA Technical Reports Server (NTRS)
Jones, Corey
2012-01-01
Model quality and consistency has been an issue for us due to the diverse experience level and imaginative modeling techniques of our users. Fortunately, setting up ModelCHECK and Gatekeeper to enforce our best practices has helped greatly, but it wasn't easy. There were many challenges associated with setting up ModelCHECK and Gatekeeper including: limited documentation, restrictions within ModelCHECK, and resistance from end users. However, we consider ours a success story. In this presentation we will describe how we overcame these obstacles and present some of the details of how we configured them to work for us.
ENVIRONMENTAL TECHNOLOGY VERIFICATION OF URBAN RUNOFF MODELS
This paper will present the verification process and available results of the XP-SWMM modeling system produced by XP-Software conducted under the USEPA's ETV Program. Wet weather flow (WWF) models are used throughout the US for the evaluation of storm and combined sewer systems. M...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William
The purposes of the verification project are to: establish, through rigorous convergence analysis, that each ASC computational physics code correctly implements a set of physics models and algorithms (code verification); evaluate and analyze the uncertainties of code outputs associated with the choice of temporal and spatial discretization (solution or calculation verification); and develop and maintain the capability to expand and update these analyses on demand. This presentation describes project milestones.
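A standard ingredient of the solution-verification piece is the observed order of accuracy obtained from solutions on three systematically refined grids. The sketch below gives the usual Richardson-extrapolation relations for a constant refinement ratio r; it is representative of, but not necessarily identical to, the project's specific procedure.

```latex
% Solutions f_1, f_2, f_3 on fine, medium, and coarse grids with spacing h, rh, r^2 h:
p \;=\; \frac{\ln\!\left(\dfrac{f_3 - f_2}{f_2 - f_1}\right)}{\ln r}
% Richardson-extrapolated estimate of the exact solution and the fine-grid error:
f_{\text{exact}} \;\approx\; f_1 + \frac{f_1 - f_2}{r^{p} - 1},
\qquad
E_1 \;\approx\; \frac{f_2 - f_1}{r^{p} - 1}
```

Agreement between the observed order p and the formal order of the discretization is the usual evidence that the code correctly implements its models and algorithms.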
The U.S. Environmental Protection Agency has created the Environmental Technology Verification Program to facilitate the deployment of innovative or improved environmental technologies through performance verification and dissemination of information. The goal of the ETV Program...
WRAP-RIB antenna technology development
NASA Technical Reports Server (NTRS)
Freeland, R. E.; Garcia, N. F.; Iwamoto, H.
1985-01-01
The wrap-rib deployable antenna concept development is based on a combination of hardware development and testing along with extensive supporting analysis. The proof-of-concept hardware models are large in size so that they address the same basic problems associated with the design, fabrication, assembly, and test as the full-scale systems, which were selected to be 100 meters at the beginning of the program. The hardware evaluation program consists of functional performance tests, design verification tests, and analytical model verification tests. Functional testing consists of kinematic deployment, mesh management, and verification of mechanical packaging efficiencies. Design verification consists of rib contour precision measurement, rib cross-section variation evaluation, rib materials characterization, and manufacturing imperfections assessment. Analytical model verification and refinement include mesh stiffness measurement, rib static and dynamic testing, mass measurement, and rib cross-section characterization. This concept was considered for a number of potential applications that include mobile communications, VLBI, and aircraft surveillance. In fact, baseline system configurations were developed by JPL, using the appropriate wrap-rib antenna, for all three classes of applications.
Gaia challenging performances verification: combination of spacecraft models and test results
NASA Astrophysics Data System (ADS)
Ecale, Eric; Faye, Frédéric; Chassat, François
2016-08-01
To achieve the ambitious scientific objectives of the Gaia mission, extremely stringent performance requirements have been given to the spacecraft contractor (Airbus Defence and Space). For a set of those key performance requirements (e.g. end-of-mission parallax, maximum detectable magnitude, maximum sky density or attitude control system stability), this paper describes how they are engineered during the whole spacecraft development process, with a focus on the end-to-end performance verification. As far as possible, performances are usually verified by end-to-end tests on ground (i.e. before launch). However, the challenging Gaia requirements are not verifiable by such a strategy, principally because no test facility exists to reproduce the expected flight conditions. The Gaia performance verification strategy is therefore based on a mix between analyses (based on spacecraft models) and tests (used to directly feed the models or to correlate them). Emphasis is placed on how to maximize the test contribution to performance verification while keeping the tests feasible within an affordable effort. In particular, the paper highlights the contribution of the Gaia Payload Module Thermal Vacuum test to the performance verification before launch. Finally, an overview of the in-flight payload calibration and in-flight performance verification is provided.
Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico
Knutilla, R.L.; Veenhuis, J.E.
1994-01-01
Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
Development of eddy current probe for fiber orientation assessment in carbon fiber composites
NASA Astrophysics Data System (ADS)
Wincheski, Russell A.; Zhao, Selina
2018-04-01
Measurement of the fiber orientation in a carbon fiber composite material is crucial in understanding the load carrying capability of the structure. As manufacturing conditions including resin flow and molding pressures can alter fiber orientation, verification of the as-designed fiber layup is necessary to ensure optimal performance of the structure. In this work, the development of an eddy current probe and data processing technique for analysis of fiber orientation in carbon fiber composites is presented. A proposed directional eddy current probe is modeled and its response to an anisotropic multi-layer conductor simulated. The modeling results are then used to finalize specifications of the eddy current probe. Experimental testing of the fabricated probe is presented for several samples including a truncated pyramid part with complex fiber orientation draped to the geometry for resin transfer molding. The inductively coupled single sided measurement enables fiber orientation characterization through the thickness of the part. The fast and cost-effective technique can be applied as a spot check or as a surface map of the fiber orientations across the structure. This paper will detail the results of the probe design, computer simulations, and experimental results.
Statement Verification: A Stochastic Model of Judgment and Response.
ERIC Educational Resources Information Center
Wallsten, Thomas S.; Gonzalez-Vallejo, Claudia
1994-01-01
A stochastic judgment model (SJM) is presented as a framework for addressing issues in statement verification and probability judgment. Results of 5 experiments with 264 undergraduates support the validity of the model and provide new information that is interpreted in terms of the SJM. (SLD)
Land Ice Verification and Validation Kit
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-07-15
To address a pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice-sheet models is underway. The associated verification and validation process of these models is being coordinated through a new, robust, python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVV). This release provides robust and automated verification and a performance evaluation on LCF platforms. The performance V&V involves a comprehensive comparison of model performance relative to expected behavior on a given computing platform. LIVV operates on a set of benchmark and test data, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of tests where differences occur.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.
To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
Kennedy, Joseph H.; Bennett, Andrew R.; Evans, Katherine J.; ...
2017-03-23
To address the pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice sheet models is underway. Concurrent to the development of the Community Ice Sheet Model (CISM), the corresponding verification and validation (V&V) process is being coordinated through a new, robust, Python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVVkit). Incorporated into the typical ice sheet model development cycle, it provides robust and automated numerical verification, software verification, performance validation, and physical validation analyses on a variety of platforms, from personal laptops to the largest supercomputers. LIVVkit operates on sets of regression test and reference data sets, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-for-bit evaluation, and plots of model variables to indicate where differences occur. LIVVkit also provides an easily extensible framework to incorporate and analyze results of new intercomparison projects, new observation data, and new computing platforms. LIVVkit is designed for quick adaptation to additional ice sheet models via abstraction of model specific code, functions, and configurations into an ice sheet model description bundle outside the main LIVVkit structure. Furthermore, through shareable and accessible analysis output, LIVVkit is intended to help developers build confidence in their models and enhance the credibility of ice sheet models overall.
Verification of the ages of supercentenarians in the United States: results of a matching study.
Rosenwaike, Ira; Stone, Leslie F
2003-11-01
Unprecedented declines in mortality among the very old have led to the emergence of "true" supercentenarians (persons aged 110 and over). The ages of these individuals have been well-documented in European countries with a history of birth registration, but have not been systematically studied in the United States, which lacks similar documentation and where the inaccuracy of age reporting has been an issue. To verify age, we linked records from the Social Security Administration for close to 700 individuals who died from 1980 to 1999 purportedly at ages 110 and older to records of the U.S. censuses of 1880 and 1900, conducted when these individuals were children. This group was a residual group from an earlier file that was reduced by the SSA after data checks that eliminated incorrect records. The results of the matched records for the residual file indicate that over 90% of the whites were accurately reported as supercentenarians, but only half of the blacks appeared to have attained age 110. The verification of age shows that the United States has more "true" supercentenarians than do other nations.
Model checking for linear temporal logic: An efficient implementation
NASA Technical Reports Server (NTRS)
Sherman, Rivi; Pnueli, Amir
1990-01-01
This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models for the property. An experiment was conducted with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
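A minimal sketch of the second approach, checking via a product construction, is given below. The program is a deliberately buggy two-process mutual-exclusion toy, the property structure is reduced to a two-state safety monitor for "never both processes in the critical section", and all names are hypothetical rather than taken from the paper's implementation; liveness and fairness checking require the full automata-theoretic machinery and are not shown.

```python
from collections import deque

def program_successors(state):
    """Interleaved transitions of a (deliberately buggy) two-process protocol.

    Each process cycles idle -> trying -> critical -> idle with no locking,
    so mutual exclusion can be violated.
    """
    succs = []
    for i in (0, 1):
        loc = list(state)
        if loc[i] == "idle":
            loc[i] = "trying"
        elif loc[i] == "trying":
            loc[i] = "critical"
        else:
            loc[i] = "idle"
        succs.append(tuple(loc))
    return succs

def violates(state):
    """Atomic 'bad' proposition tracked by the safety monitor."""
    return state == ("critical", "critical")

def check_mutual_exclusion(initial=("idle", "idle")):
    """Explore the product of program states with a 2-state safety monitor."""
    frontier, seen = deque([(initial, "ok")]), {(initial, "ok")}
    while frontier:
        state, monitor = frontier.popleft()
        if monitor == "violated":
            return False, state                      # reachable bad product state
        for succ in program_successors(state):
            nxt = (succ, "violated" if violates(succ) else "ok")
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, None

if __name__ == "__main__":
    holds, witness = check_mutual_exclusion()
    print("mutual exclusion holds:", holds, "| counter-example state:", witness)
```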
Penn, Alexandra S.; Knight, Christopher J. K.; Lloyd, David J. B.; Avitabile, Daniele; Kok, Kasper; Schiller, Frank; Woodward, Amy; Druckman, Angela; Basson, Lauren
2013-01-01
Fuzzy Cognitive Mapping (FCM) is a widely used participatory modelling methodology in which stakeholders collaboratively develop a ‘cognitive map’ (a weighted, directed graph), representing the perceived causal structure of their system. This can be directly transformed by a workshop facilitator into simple mathematical models to be interrogated by participants by the end of the session. Such simple models provide thinking tools which can be used for discussion and exploration of complex issues, as well as sense-checking the implications of suggested causal links. They increase stakeholder motivation and understanding of whole systems approaches, but cannot be separated from an intersubjective participatory context. Standard FCM methodologies make simplifying assumptions, which may strongly influence results, presenting particular challenges and opportunities. We report on a participatory process, involving local companies and organisations, focussing on the development of a bio-based economy in the Humber region. The initial cognitive map generated consisted of factors considered key for the development of the regional bio-based economy and their directional, weighted, causal interconnections. A verification and scenario generation procedure, to check the structure of the map and suggest modifications, was carried out in a second session. Participants agreed on updates to the original map and described two alternative potential causal structures. In a novel analysis, all map structures were tested using two standard methodologies usually used independently: linear and sigmoidal FCMs, demonstrating some significantly different results alongside some broad similarities. We suggest a development of FCM methodology involving a sensitivity analysis with different mappings and discuss the use of this technique in the context of our case study. Using the results and analysis of our process, we discuss the limitations and benefits of the FCM methodology in this case and in general. We conclude by proposing an extended FCM methodology, including multiple functional mappings within one participant-constructed graph. PMID:24244303
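The two standard mappings compared in the analysis, linear and sigmoidal FCM state updates, can be reproduced with a short sketch like the one below; the weight matrix and activation values are invented for illustration and are not taken from the Humber case-study map.

```python
import numpy as np

def step(state, weights, mapping="sigmoid", lam=1.0):
    """One FCM update: each concept aggregates the weighted influences on it."""
    raw = weights.T @ state
    if mapping == "linear":
        return np.clip(raw, -1.0, 1.0)               # linear update, clipped to [-1, 1]
    return 1.0 / (1.0 + np.exp(-lam * raw))          # sigmoidal squashing to (0, 1)

def run_fcm(state, weights, mapping, steps=50):
    """Iterate the map to a (near) steady state under the chosen mapping."""
    for _ in range(steps):
        state = step(state, weights, mapping)
    return state

if __name__ == "__main__":
    # Hypothetical 3-concept map: w[i, j] is the signed influence of concept i on j.
    w = np.array([[0.0, 0.6, -0.4],
                  [0.3, 0.0, 0.5],
                  [0.0, -0.2, 0.0]])
    x0 = np.array([0.5, 0.5, 0.5])
    print("linear  :", np.round(run_fcm(x0, w, "linear"), 3))
    print("sigmoid :", np.round(run_fcm(x0, w, "sigmoid"), 3))
```

Running both mappings from the same initial activations on the same weight matrix makes the kind of sensitivity analysis proposed in the abstract straightforward: the steady states can differ markedly, which is exactly the motivation for testing multiple functional mappings within one participant-constructed graph.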